Method of estimating flood-frequency parameters for streams in Idaho
Kjelstrom, L.C.; Moffatt, R.L.
1981-01-01
Skew coefficients for the log-Pearson type III distribution are generalized on the basis of some similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak-flow record can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are dominant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual maximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different from the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson type III distribution. The mean and standard deviation of logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent. (USGS)
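A minimal sketch of the final step described above, assuming the three log-Pearson type III statistics (mean, standard deviation, and generalized skew of the log discharges) have already been estimated; the numeric values here are hypothetical, and scipy.stats.pearson3 supplies the frequency factor:

```python
# Sketch: compute discharges at selected exceedance probabilities from three
# estimated log-Pearson type III statistics (hypothetical example values).
from scipy.stats import pearson3

mean_log, std_log, gen_skew = 3.2, 0.25, -0.1   # stats of log10(annual peaks); hypothetical

for p_exceed in (0.50, 0.10, 0.02, 0.01):
    # Quantile of the standardized Pearson type III (zero mean, unit variance)
    # at non-exceedance probability 1 - p_exceed.
    k = pearson3.ppf(1.0 - p_exceed, gen_skew)
    q = 10 ** (mean_log + k * std_log)          # back-transform from log10 space
    print(f"{p_exceed:.0%} exceedance: {q:,.0f} cfs")
```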
NASA Astrophysics Data System (ADS)
Alahmadi, F.; Rahman, N. A.; Abdulrazzak, M.
2014-09-01
Rainfall frequency analysis is an essential tool for the design of water-related infrastructure. It can be used to predict the magnitude of future floods associated with extreme rainfall events of a given frequency. This study analyses the application of a rainfall partial duration series (PDS) in the fast-growing city of Madinah, located in the western part of Saudi Arabia. Several statistical distributions were applied (Normal, Log Normal, Extreme Value type I, Generalized Extreme Value, Pearson Type III, and Log Pearson Type III), and their parameters were estimated using the method of L-moments. Several model selection criteria were also applied: the Akaike Information Criterion (AIC), the corrected Akaike Information Criterion (AICc), the Bayesian Information Criterion (BIC), and the Anderson-Darling Criterion (ADC). The analysis identified the Generalized Extreme Value distribution as the best-fit statistical distribution for the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.
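A sketch of the model-selection step, with hypothetical rainfall values. Note one substitution: the study estimated parameters by L-moments, whereas scipy's fit() uses maximum likelihood, so this illustrates only the AIC ranking mechanics:

```python
# Sketch: rank candidate distributions for an extreme-rainfall series by AIC.
# Parameter estimation here is MLE (scipy default), not the paper's L-moments.
import numpy as np
from scipy import stats

rain = np.array([22.0, 35.5, 18.2, 60.1, 41.0, 28.7, 55.3, 33.9, 47.6, 25.4])  # hypothetical mm

candidates = {
    "Normal": stats.norm,
    "Log Normal": stats.lognorm,
    "Extreme Value I (Gumbel)": stats.gumbel_r,
    "Generalized Extreme Value": stats.genextreme,
    "Pearson Type III": stats.pearson3,
}

for name, dist in candidates.items():
    params = dist.fit(rain)
    loglik = np.sum(dist.logpdf(rain, *params))
    aic = 2 * len(params) - 2 * loglik          # lower AIC = better fit
    print(f"{name:28s} AIC = {aic:7.2f}")
```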
Flood frequency analysis using optimization techniques : final report.
DOT National Transportation Integrated Search
1992-10-01
This study consists of three parts. In the first part, a comprehensive investigation was made to find an improved estimation method for the log-Pearson type 3 (LP3) distribution by using optimization techniques. Ninety sets of observed Louisiana floo...
Peak-flow frequency estimates through 1994 for gaged streams in South Dakota
Burr, M.J.; Korkow, K.L.
1996-01-01
Annual peak-flow data are listed for 250 continuous-record and crest-stage gaging stations in South Dakota. Peak-flow frequency estimates for selected recurrence intervals ranging from 2 to 500 years are given for 234 of these 250 stations. The log-Pearson Type III procedure was used to compute the frequency relations for the 234 stations, which in 1994 included 105 active and 129 inactive stations. The log-Pearson Type III procedure is recommended by the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data, 1982, "Guidelines for Determining Flood Flow Frequency." No peak-flow frequency estimates are given for 16 of the 250 stations for one of the following reasons: (1) extreme variability in the data set; (2) more than 20 percent of years had no flow; (3) annual peak flows represent large outflow from a spring; (4) insufficient peak-flow record subsequent to reservoir regulation; or (5) peak-flow records were combined with records from nearby stations.
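A minimal sketch of the basic log-Pearson Type III computation for a single station: a method-of-moments fit to the base-10 logs of the annual peaks, without the Bulletin 17B refinements (skew weighting, outlier tests, historic-record adjustment). Peak values are hypothetical:

```python
import numpy as np
from scipy.stats import pearson3, skew

peaks = np.array([1200., 860., 2300., 540., 1750., 990., 3100., 1430., 760.,
                  2050., 1580., 680., 2650., 1150., 890.])  # hypothetical annual peaks, cfs
logs = np.log10(peaks)
m, s = logs.mean(), logs.std(ddof=1)
g = skew(logs, bias=False)                   # station (sample) skew of the logs

for T in (2, 10, 50, 100, 500):
    k = pearson3.ppf(1 - 1 / T, g)           # frequency factor
    print(f"{T:4d}-year peak: {10 ** (m + k * s):,.0f} cfs")
```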
Analyses of flood-flow frequency for selected gaging stations in South Dakota
Benson, R.D.; Hoffman, E.B.; Wipf, V.J.
1985-01-01
Analyses of flood flow frequency were made for 111 continuous-record gaging stations in South Dakota with 10 or more years of record. The analyses were developed using the log-Pearson Type III procedure recommended by the U.S. Water Resources Council. The procedure characterizes flood occurrence at a single site as a sequence of annual peak flows. The magnitudes of the annual peak flows are assumed to be independent random variables following a log-Pearson Type III probability distribution, which defines the probability that any single annual peak flow will exceed a specified discharge. By considering only annual peak flows, the flood-frequency analysis becomes the estimation of the log-Pearson annual-probability curve from the record of annual peak flows at the site. The recorded data are divided into two classes: systematic and historic. The systematic record includes all annual peak flows determined in the process of conducting a systematic gaging program at a site, in which the annual peak flow is determined for every year of the program. The systematic record is intended to constitute an unbiased and representative sample of the population of all possible annual peak flows at the site. In contrast, the historic record consists of annual peak flows that would not have been determined except for evidence indicating their unusual magnitude. Flood information acquired from historical sources almost invariably refers to floods of noteworthy, and hence extraordinary, size. Although historic records form a biased and unrepresentative sample, they can be used to supplement the systematic record. (Author's abstract)
Nishimura, Takeshi; Tanaka, Masami; Sekioka, Risa; Itoh, Hiroshi
2015-01-01
Although relationships of serum bilirubin concentration with estimated glomerular filtration rate (eGFR) and urinary albumin excretion (UAE) in patients with type 2 diabetes have been reported, whether such relationships exist in patients with type 1 diabetes is unknown. A total of 123 patients with type 1 diabetes were investigated in this cross-sectional study. The relationship between bilirubin (total and indirect) concentrations and log(UAE) as well as eGFR was examined by Pearson's correlation analyses. Multivariate regression analyses were used to assess the association of bilirubin (total and indirect) with eGFR as well as log(UAE). A positive correlation was found between serum bilirubin concentration and eGFR: total bilirubin (r=0.223, p=0.013); indirect bilirubin (r=0.244, p=0.007). A negative correlation was found between serum bilirubin concentration and log(UAE): total bilirubin (r=-0.258, p=0.005); indirect bilirubin (r=-0.271, p=0.003). Multivariate regression analyses showed that indirect bilirubin concentration was an independent determinant of eGFR and log(UAE). Bilirubin concentration is associated with both eGFR and log(UAE) in patients with type 1 diabetes. Bilirubin might have a protective role in the progression of type 1 diabetic nephropathy.
A modified weighted function method for parameter estimation of Pearson type three distribution
NASA Astrophysics Data System (ADS)
Liang, Zhongmin; Hu, Yiming; Li, Binquan; Yu, Zhongbo
2014-04-01
In this paper, an unconventional method called the Modified Weighted Function (MWF) is presented for conventional moment estimation of a probability distribution function. The aim of MWF is to shift estimation of the coefficient of variation (CV) and coefficient of skewness (CS) from higher-order moment computations to first-order moment calculations. The estimators for CV and CS of the Pearson type three distribution function (PE3) were derived by weighting the moments of the distribution with two weight functions, constructed by combining two negative exponential-type functions. The selection of these weight functions was based on two considerations: (1) to relate the weight functions to sample size, in order to reflect the relationship between the quantity of sample information and the role of the weight function; and (2) to allocate more weight to data close to medium-tail positions in a sample series ranked in ascending order. A Monte Carlo experiment was conducted to simulate a large number of samples upon which the statistical properties of MWF were investigated. For the PE3 parent distribution, results of MWF were compared to those of the original Weighted Function (WF) and linear moments (L-M). The results indicate that MWF was superior to WF and slightly better than L-M, in terms of statistical unbiasedness and effectiveness. In addition, the robustness of MWF, WF, and L-M was compared in a Monte Carlo experiment in which samples were drawn from the Log-Pearson type three distribution (LPE3), the three-parameter Log-Normal distribution (LN3), and the Generalized Extreme Value distribution (GEV), respectively, but all were treated as samples from the PE3 distribution. The results show that, in terms of statistical unbiasedness, no single method possesses an overwhelming advantage among MWF, WF, and L-M, while in terms of statistical effectiveness, MWF is superior to WF and L-M.
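A sketch of the kind of Monte Carlo experiment described: draw many samples from a Pearson type III parent and compare skewness estimators by bias and RMSE. The MWF estimator itself is not reproduced; two ordinary product-moment variants stand in to show the evaluation harness:

```python
import numpy as np
from scipy.stats import pearson3, skew

rng = np.random.default_rng(42)
true_skew, n, reps = 1.0, 30, 5000
est_plain = np.empty(reps)
est_corrected = np.empty(reps)

for i in range(reps):
    x = pearson3.rvs(true_skew, size=n, random_state=rng)
    est_plain[i] = skew(x)                    # plain moment estimator
    est_corrected[i] = skew(x, bias=False)    # small-sample bias correction

for name, est in [("plain", est_plain), ("corrected", est_corrected)]:
    bias = est.mean() - true_skew
    rmse = np.sqrt(np.mean((est - true_skew) ** 2))
    print(f"{name:9s} bias = {bias:+.3f}, RMSE = {rmse:.3f}")
```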
A comparison of moment-based methods of estimation for the log Pearson type 3 distribution
NASA Astrophysics Data System (ADS)
Koutrouvelis, I. A.; Canavos, G. C.
2000-06-01
The log Pearson type 3 distribution is a very important model in statistical hydrology, especially for modeling annual flood series. In this paper we compare the various methods based on moments for estimating quantiles of this distribution. Besides the methods of direct and mixed moments which were found most successful in previous studies and the well-known indirect method of moments, we develop generalized direct moments and generalized mixed moments methods and a new method of adaptive mixed moments. The last method chooses the orders of two moments for the original observations by utilizing information contained in the sample itself. The results of Monte Carlo experiments demonstrated the superiority of this method in estimating flood events of high return periods when a large sample is available and in estimating flood events of low return periods regardless of the sample size. In addition, a comparison of simulation and asymptotic results shows that the adaptive method may be used for the construction of meaningful confidence intervals for design events based on the asymptotic theory even with small samples. The simulation results also point to the specific members of the class of generalized moments estimates which maintain small values for bias and/or mean square error.
Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments
NASA Astrophysics Data System (ADS)
Griffis, V. W.; Stedinger, J. R.; Cohn, T. A.
2004-07-01
The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] does as well as maximum likelihood estimators at estimating log-Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample, with and without regional skew, with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low-outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.
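For reference, a one-function sketch of the inverse-MSE weighting Bulletin 17B uses to combine a station skew with a regional (generalized) skew; the input values below are hypothetical, and 0.302 is the mean square error commonly associated with Bulletin 17B's national generalized-skew map:

```python
# Sketch: Bulletin 17B-style weighted skew. Each skew is weighted inversely
# to its mean square error, so the less reliable estimate counts for less.
def weighted_skew(g_station, mse_station, g_regional, mse_regional):
    """Inverse-MSE weighted skew coefficient."""
    return ((mse_regional * g_station + mse_station * g_regional)
            / (mse_regional + mse_station))

print(weighted_skew(g_station=-0.45, mse_station=0.30,
                    g_regional=-0.10, mse_regional=0.302))
```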
Techniques for estimating flood-peak discharges of rural, unregulated streams in Ohio
Koltun, G.F.; Roberts, J.W.
1990-01-01
Multiple-regression equations are presented for estimating flood-peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years at ungaged sites on rural, unregulated streams in Ohio. The average standard errors of prediction for the equations range from 33.4% to 41.4%. Peak discharge estimates determined by log-Pearson Type III analysis using data collected through the 1987 water year are reported for 275 streamflow-gaging stations. Ordinary least-squares multiple-regression techniques were used to divide the State into three regions and to identify a set of basin characteristics that help explain station-to-station variation in the log-Pearson estimates. Contributing drainage area, main-channel slope, and storage area were identified as suitable explanatory variables. Generalized least-squares procedures, which include historical flow data and account for differences in the variance of flows at different gaging stations, spatial correlation among gaging station records, and variable lengths of station record, were used to estimate the regression parameters. Weighted peak-discharge estimates computed as a function of the log-Pearson Type III and regression estimates are reported for each station. A method is provided to adjust regression estimates for ungaged sites by use of weighted and regression estimates for a gaged site located on the same stream. Limitations and shortcomings cited in an earlier report on the magnitude and frequency of floods in Ohio are addressed in this study. Geographic bias is no longer evident for the Maumee River basin of northwestern Ohio. No bias is found to be associated with the forested-area characteristic for the range used in the regression analysis (0.0 to 99.0%), nor is this characteristic significant in explaining peak discharges. Surface-mined area likewise is not significant in explaining peak discharges, and the regression equations are not biased when applied to basins having approximately 30% or less surface-mined area. Analyses of residuals indicate that the equations tend to overestimate flood-peak discharges for basins having approximately 30% or more surface-mined area. (USGS)
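The report's published coefficients are not reproduced here; the following sketch only illustrates how a regional regression equation of this form is applied at an ungaged site, with hypothetical constants and basin characteristics:

```python
# Sketch: applying a log-linear regional regression equation of the form
# log10(Qt) = a + b*log10(A) + c*log10(S) + d*log10(St + 1).
# All constants below are hypothetical, not the published Ohio values.
import math

a, b, c, d = 2.50, 0.85, 0.30, -0.15   # hypothetical regression constants
A, S, St = 120.0, 12.5, 2.0            # drainage area (mi2), slope (ft/mi), storage (%)

log_q = a + b * math.log10(A) + c * math.log10(S) + d * math.log10(St + 1)
print(f"Estimated peak discharge: {10 ** log_q:,.0f} cfs")
```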
Wang, Jihan; Yang, Kai
2014-07-01
An efficient operating room needs little underutilised or overutilised time to achieve optimal cost efficiency. The probabilities of underrun and overrun of lists of cases can be estimated from a well defined duration distribution of the lists. To propose a method of predicting the probabilities of underrun and overrun of lists of cases using the Type IV Pearson distribution to support case scheduling. Six years of data were collected. The first 5 years of data were used to fit distributions and estimate parameters. The data from the last year were used as testing data to validate the proposed methods. The percentiles of the duration distribution of lists of cases were calculated by the Type IV Pearson distribution and the t-distribution. Monte Carlo simulation was conducted to verify the accuracy of percentiles defined by the proposed methods. Operating rooms in the John D. Dingell VA Medical Center, United States, from January 2005 to December 2011. Differences between the proportion of lists of cases that were completed within the percentiles of the proposed duration distribution of the lists and the corresponding percentiles. Compared with the t-distribution, the proposed new distribution is 8.31% (0.38) more accurate on average and 14.16% (0.19) more accurate in calculating the probabilities at the 10th and 90th percentiles of the distribution, which is a major concern of operating room schedulers. The absolute deviations between the percentiles defined by the Type IV Pearson distribution and those from Monte Carlo simulation varied from 0.20 min (0.01) to 0.43 min (0.03). Values are mean (SEM). Operating room schedulers can rely on the most recent 10 cases with the same combination of surgeon and procedure(s) for distribution parameter estimation to plan lists of cases. The proposed Type IV Pearson distribution is more accurate than the t-distribution for estimating the probabilities of underrun and overrun of lists of cases. However, as not all individual case durations followed log-normal distributions, there was some deviation from the true duration distribution of the lists.
Tracking reflective practice-based learning by medical students during an ambulatory clerkship.
Thomas, Patricia A; Goldberg, Harry
2007-11-01
To explore the use of web- and personal digital assistant (PDA)-based patient logs to facilitate reflective learning in an ambulatory medicine clerkship. Thematic analysis of a convenience sample of three successive rotations of medical students' patient log entries. Johns Hopkins University School of Medicine. MS3 and MS4 students rotating through a required block ambulatory medicine clerkship. Students are required to enter patient encounters into a web-based log system during the clerkship. Patient-linked entries included an open text field entitled "Learning Need." Students were encouraged to use this field to enter goals for future study or teaching points related to the encounter. The logs of 59 students were examined. These students entered 3,051 patient encounters, and 51 students entered 1,347 learning need entries (44.1% of encounters). The use of the "Learning Need" field was not correlated with MS year, gender, or end-of-clerkship knowledge test performance. There were strong correlations between the use of diagnostic thinking comments and observations of therapeutic relationships (Pearson's r=.42, p<0.001), and between diagnostic thinking and primary interpretation skills (Pearson's r=.60, p<0.001), but not between diagnostic thinking and factual knowledge (Pearson's r=.10, p=.46). We found that when clerkship students were cued to reflect on each patient encounter with the electronic log system, student entries grouped into categories that suggested different levels of reflective thinking. Future efforts should explore the use of such entries to encourage and track habits of reflective practice in the clinical curriculum.
Cohn, T.A.; Lane, W.L.; Baier, W.G.
1997-01-01
This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.
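A stripped-down sketch of the iteration just described, simplified to a normal distribution of log peaks so that scipy's truncated normal supplies the expected below-threshold moments; the actual algorithm works with the three-parameter log-Pearson type III, and all data below are hypothetical:

```python
import numpy as np
from scipy.stats import truncnorm

# Systematic record (logs of gaged peaks), plus a historical period of h years
# in which only floods above the log-threshold t were large enough to be noted.
sys_logs = np.log10([1200., 860., 2300., 1750., 990., 3100., 1430.])  # hypothetical
hist_logs = np.log10([5400., 4800.])     # measured historical peaks
h, t = 80, np.log10(3500.)               # historical years and perception threshold
n_below = h - len(hist_logs)             # years known only to be below the threshold

mu, sigma = sys_logs.mean(), sys_logs.std(ddof=0)   # initial estimates
for _ in range(100):
    # Expected first two moments of below-threshold years, given current parameters.
    below = truncnorm(-np.inf, (t - mu) / sigma, loc=mu, scale=sigma)
    e1, e2 = below.mean(), below.var() + below.mean() ** 2
    n = len(sys_logs) + len(hist_logs) + n_below
    m1 = (sys_logs.sum() + hist_logs.sum() + n_below * e1) / n
    m2 = (np.sum(sys_logs**2) + np.sum(hist_logs**2) + n_below * e2) / n
    mu_new, sigma_new = m1, np.sqrt(m2 - m1**2)
    if abs(mu_new - mu) < 1e-10 and abs(sigma_new - sigma) < 1e-10:
        break                            # converged
    mu, sigma = mu_new, sigma_new
print(f"mu = {mu:.4f}, sigma = {sigma:.4f}")
```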
A Bayesian Surrogate for Regional Skew in Flood Frequency Analysis
NASA Astrophysics Data System (ADS)
Kuczera, George
1983-06-01
The problem of how to best utilize site and regional flood data to infer the shape parameter of a flood distribution is considered. One approach to this problem is given in Bulletin 17B of the U.S. Water Resources Council (1981) for the log-Pearson distribution. Here a lesser known distribution is considered, namely, the power normal which fits flood data as well as the log-Pearson and has a shape parameter denoted by λ derived from a Box-Cox power transformation. The problem of regionalizing λ is considered from an empirical Bayes perspective where site and regional flood data are used to infer λ. The distortive effects of spatial correlation and heterogeneity of site sampling variance of λ are explicitly studied with spatial correlation being found to be of secondary importance. The end product of this analysis is the posterior distribution of the power normal parameters expressing, in probabilistic terms, what is known about the parameters given site flood data and regional information on λ. This distribution can be used to provide the designer with several types of information. The posterior distribution of the T-year flood is derived. The effect of nonlinearity in λ on inference is illustrated. Because uncertainty in λ is explicitly allowed for, the understatement in confidence limits due to fixing λ (analogous to fixing log skew) is avoided. Finally, it is shown how to obtain the marginal flood distribution which can be used to select a design flood with specified exceedance probability.
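A minimal sketch of the power-normal model itself: fit a normal distribution to Box-Cox-transformed peaks and back-transform a normal quantile to get the T-year flood. Here lambda is the single MLE value from scipy; the paper's point is precisely that fixing lambda this way understates uncertainty, which its Bayesian treatment avoids. Peak values are hypothetical:

```python
import numpy as np
from scipy import stats, special

peaks = np.array([1200., 860., 2300., 540., 1750., 990., 3100., 1430., 760., 2050.])
y, lam = stats.boxcox(peaks)                 # transformed data and MLE of lambda
mu, sd = y.mean(), y.std(ddof=1)

T = 100
z = stats.norm.ppf(1 - 1 / T)
q100 = special.inv_boxcox(mu + z * sd, lam)  # back-transform the normal quantile
print(f"lambda = {lam:.3f}, 100-year flood ~ {q100:,.0f} cfs")
```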
England, John F.; Salas, José D.; Jarrett, Robert D.
2003-01-01
The expected moments algorithm (EMA) [Cohn et al., 1997] and the Bulletin 17B [Interagency Committee on Water Data, 1982] historical weighting procedure (B17H) for the log Pearson type III distribution are compared by Monte Carlo computer simulation for cases in which historical and/or paleoflood data are available. The relative performance of the estimators was explored for three cases: fixed‐threshold exceedances, a fixed number of large floods, and floods generated from a different parent distribution. EMA can effectively incorporate four types of historical and paleoflood data: floods where the discharge is explicitly known, unknown discharges below a single threshold, floods with unknown discharge that exceed some level, and floods with discharges described in a range. The B17H estimator can utilize only the first two types of historical information. Including historical/paleoflood data in the simulation experiments significantly improved the quantile estimates in terms of mean square error and bias relative to using gage data alone. EMA performed significantly better than B17H in nearly all cases considered. B17H performed as well as EMA for estimating X100 in some limited fixed‐threshold exceedance cases. EMA performed comparatively much better in other fixed‐threshold situations, for the single large flood case, and in cases when estimating extreme floods equal to or greater than X500. B17H did not fully utilize historical information when the historical period exceeded 200 years. Robustness studies using GEV‐simulated data confirmed that EMA performed better than B17H. Overall, EMA is preferred to B17H when historical and paleoflood data are available for flood frequency analysis.
Measuring physical activity during US Army Basic Combat Training: a comparison of 3 methods.
Redmond, Jan E; Cohen, Bruce S; Simpson, Kathleen; Spiering, Barry A; Sharp, Marilyn A
2013-01-01
An understanding of the demands of physical activity (PA) during US Army Basic Combat Training (BCT) is necessary to support Soldier readiness and resilience. The purpose of this study was to determine the agreement among 3 different PA measurement instruments in the BCT environment. Twenty-four recruits from each of 11 companies wore an ActiGraph accelerometer (Actigraph, LLC, Pensacola, FL) and completed a daily PA log during 8 weeks of BCT at 2 different training sites. The PA of one recruit from each company was recorded using PAtracker, an Army-developed direct observation tool. Information obtained from the accelerometer, PA log, and PAtracker included time spent in various types of PA, body positions, PA intensities, and external loads carried. Pearson product moment correlations were run to determine the strength of association between the ActiGraph and PAtracker for measures of PA intensity and between the PAtracker and daily PA log for measures of body position and PA type. The Bland-Altman method was used to assess the limits of agreement (LoA) between the measurement instruments. Weak correlations (r=-0.052 to r=0.302) were found between the ActiGraph and PAtracker for PA intensity. Weak but positive correlations (r=0.033 to r=0.268) were found between the PAtracker and daily PA log for body position and type of PA. The 95% LoA for the ActiGraph and PAtracker for PA intensity were in disagreement. The 95% LoA for the PAtracker and daily PA log for standing and running and all PA types were in disagreement; sitting and walking were in agreement. The ActiGraph accelerometer provided the best measure of the recruits' PA intensity while the PAtracker and daily PA log were best for capturing body position and type of PA in the BCT environment. The use of multiple PA measurement instruments in this study was necessary to best characterize the physical demands of BCT.
Kessler, Erich W.; Lorenz, David L.; Sanocki, Christopher A.
2013-01-01
Peak-flow frequency analyses were completed for 409 streamgages in and bordering Minnesota having at least 10 systematic peak flows through water year 2011. Selected annual exceedance probabilities were determined by fitting a log-Pearson type III probability distribution to the recorded annual peak flows. A detailed explanation of the methods that were used to determine the annual exceedance probabilities, the historical period, acceptable low outliers, and analysis method for each streamgage are presented. The final results of the analyses are presented.
Comparison of methods for estimating flood magnitudes on small streams in Georgia
Hess, Glen W.; Price, McGlone
1989-01-01
The U.S. Geological Survey has collected flood data for small, natural streams at many sites throughout Georgia during the past 20 years. Flood-frequency relations were developed for these data using four methods: (1) observed (log-Pearson Type III analysis) data, (2) rainfall-runoff model, (3) regional regression equations, and (4) map-model combination. The results of the latter three methods were compared to the analyses of the observed data in order to quantify the differences in the methods and determine if the differences are statistically significant.
Flood-frequency characteristics of Wisconsin streams
Walker, John F.; Peppler, Marie C.; Danz, Mari E.; Hubbard, Laura E.
2017-05-22
Flood-frequency characteristics for 360 gaged sites on unregulated rural streams in Wisconsin are presented for annual exceedance probabilities ranging from 0.2 to 50 percent, using a statewide skewness map developed for this report. Equations of the relations between flood frequency and drainage-basin characteristics were developed by multiple-regression analyses. Flood-frequency characteristics for ungaged sites on unregulated, rural streams can be estimated by use of the equations presented in this report. The State was divided into eight areas of similar physiographic characteristics. The most significant basin characteristics are drainage area, soil saturated hydraulic conductivity, main-channel slope, and several land-use variables. The standard error of prediction for the equation for the 1-percent annual exceedance probability flood ranges from 56 to 70 percent for Wisconsin streams; these values are larger than results presented in previous reports. The increase in the standard error of prediction is likely due to increased variability of the annual-peak discharges, resulting in increased variability in the magnitude of flood peaks at higher frequencies. For each of the unregulated rural streamflow-gaging stations, a weighted estimate based on the at-site log Pearson type III analysis and the multiple-regression results was determined. The weighted estimate generally has a lower uncertainty than either the log Pearson type III or multiple-regression estimates. For regulated streams, a graphical method for estimating flood-frequency characteristics was developed from the relations of discharge and drainage area for selected annual exceedance probabilities. Graphs for the major regulated streams in Wisconsin are presented in the report.
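A sketch of the weighting idea behind that combined estimate, assuming the at-site and regression estimates are independent and that their variances in log space are known; all numbers are hypothetical:

```python
# Sketch: inverse-variance weighting of an at-site log Pearson type III estimate
# and a regional-regression estimate, done in log10 space. The combined estimate
# has lower variance than either input.
import math

def weighted_estimate(q_site, var_site, q_reg, var_reg):
    """Inverse-variance weighted average of two log-space discharge estimates."""
    x1, x2 = math.log10(q_site), math.log10(q_reg)
    xw = (x1 / var_site + x2 / var_reg) / (1 / var_site + 1 / var_reg)
    return 10 ** xw

print(f"{weighted_estimate(q_site=14200, var_site=0.012, q_reg=11800, var_reg=0.025):,.0f} cfs")
```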
Winterstein, Thomas A.; Arntson, Allan D.; Mitton, Gregory B.
2007-01-01
The 1-, 7-, and 30-day low-flow series were determined for 120 continuous-record streamflow stations in Minnesota having at least 20 years of continuous record. The 2-, 5-, 10-, 50-, and 100-year statistics were determined for each series by fitting a log Pearson type III distribution to the data. The methods used to determine the low-flow statistics and to construct the plots of the low-flow frequency curves are described. The low-flow series and the low-flow statistics are presented in tables and graphs.
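A sketch of the low-flow computation: build the annual 7-day-minimum series from a daily record, then fit a log Pearson type III distribution to estimate the 7Q10 (the 7-day low flow with a 10-year recurrence). The daily data are synthetic, and calendar years stand in for the climatic years a real analysis would use:

```python
import numpy as np
import pandas as pd
from scipy.stats import pearson3, skew

rng = np.random.default_rng(7)
idx = pd.date_range("1984-10-01", "2003-09-30", freq="D")
flow = pd.Series(np.exp(rng.normal(3.0, 0.6, len(idx))), index=idx)  # synthetic daily cfs

seven_day = flow.rolling(7).mean()                                    # 7-day moving average
annual_min = seven_day.groupby(seven_day.index.year).min().dropna()   # 7-day low-flow series

logs = np.log10(annual_min.values)
m, s, g = logs.mean(), logs.std(ddof=1), skew(logs, bias=False)
# 7Q10: the low flow not exceeded on average once in 10 years -> 0.10 quantile.
q7_10 = 10 ** (m + pearson3.ppf(0.10, g) * s)
print(f"7Q10 ~ {q7_10:.1f} cfs")
```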
Swathirajan, Chinnambedu Ravichandran; Vignesh, Ramachandran; Boobalan, Jayaseelan; Solomon, Sunil Suhas; Saravanan, Shanmugam; Balakrishnan, Pachamuthu
2017-10-01
Sustainable suppression of HIV replication forms the basis of anti-retroviral therapy (ART) medication. Thus, reliable quantification of HIV viral load has become an essential factor in monitoring the effectiveness of ART. Longer turnaround time (TAT), batch testing and technical skills are major drawbacks of standard real-time PCR assays. The performance of the point-of-care Xpert HIV-1 viral load assay was evaluated against the Abbott RealTime PCR m2000rt system. A total of 96 plasma specimens ranging from 2.5 log10 copies/ml to 4.99 log10 copies/ml and proficiency testing panel specimens were used. Precision and accuracy were checked using the Pearson correlation coefficient test and Bland-Altman analysis. Compared to the Abbott RealTime PCR, the Xpert HIV-1 viral load assay showed a good correlation (Pearson r=0.81; P<0.0001) with a mean difference of 0.27 log10 copies/ml (95% CI, -0.41 to 0.96 log10 copies/ml; SD, 0.35 log10 copies/ml). Reliability and ease of testing individual specimens could make the Xpert HIV-1 viral load assay an efficient alternative method for ART monitoring in clinical management of HIV disease in resource-limited settings. The rapid test results (less than 2 h) could help in making an immediate clinical decision, which further strengthens patient care.
Best Statistical Distribution of flood variables for Johor River in Malaysia
NASA Astrophysics Data System (ADS)
Salarpour Goodarzi, M.; Yusop, Z.; Yusof, F.
2012-12-01
A complex flood event is always characterized by a few characteristics, such as flood peak, flood volume, and flood duration, which may be mutually correlated. This study explored the statistical distribution of peakflow, flood duration and flood volume at the Rantau Panjang gauging station on the Johor River in Malaysia. Hourly data were recorded for 45 years. The data were analysed based on the water year (July-June). Five distributions, namely Log Normal, Generalized Pareto, Log Pearson, Normal and Generalized Extreme Value (GEV), were used to model the distribution of all three variables. Anderson-Darling and Kolmogorov-Smirnov goodness-of-fit tests were used to evaluate the best fit. Goodness-of-fit tests at the 5% level of significance indicate that all the models can be used to model the distribution of peakflow, flood duration and flood volume. However, the Generalized Pareto distribution is found to be the most suitable model when tested with the Anderson-Darling test, while the Kolmogorov-Smirnov test suggested that GEV is the best for peakflow. The results of this research can be used to improve flood frequency analysis. (Figure: comparison of the Generalized Extreme Value, Generalized Pareto and Log Pearson distributions in the cumulative distribution function of peakflow.)
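A sketch of the goodness-of-fit step with the Kolmogorov-Smirnov test on hypothetical peakflows. One caveat: using fitted parameters in the K-S test makes the reported p-values optimistic, and the study's Anderson-Darling comparison is not reproduced here:

```python
import numpy as np
from scipy import stats

peakflow = np.array([310., 145., 520., 98., 410., 260., 700., 180., 330., 560.,
                     220., 470., 130., 640., 290.])   # hypothetical m3/s

for name, dist in [("Generalized Pareto", stats.genpareto),
                   ("GEV", stats.genextreme),
                   ("Log Normal", stats.lognorm)]:
    params = dist.fit(peakflow)                       # MLE fit
    d, p = stats.kstest(peakflow, dist.cdf, args=params)
    print(f"{name:18s} K-S D = {d:.3f}, p = {p:.3f}")
```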
A product Pearson-type VII density distribution
NASA Astrophysics Data System (ADS)
Nadarajah, Saralees; Kotz, Samuel
2008-01-01
The Pearson-type VII distributions (containing the Student's t distributions) are becoming increasingly prominent and are being considered as competitors to the normal distribution. Motivated by real examples in decision sciences, Bayesian statistics, probability theory and physics, a new Pearson-type VII distribution is introduced by taking the product of two Pearson-type VII pdfs. Various structural properties of this distribution are derived, including its cdf, moments, mean deviation about the mean, mean deviation about the median, entropy, asymptotic distribution of the extreme order statistics, maximum likelihood estimates and the Fisher information matrix. Finally, an application to a Bayesian testing problem is illustrated.
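For reference, the standard Pearson-type VII pdf can be written as below (Student's t is the special case m = (nu+1)/2, a = sqrt(nu)); the product density studied in the paper is proportional to two such factors with a normalizing constant C, whose closed form is derived there and not reproduced here:

```latex
% Pearson type VII pdf with shape m > 1/2 and scale a > 0
f(x; m, a) = \frac{\Gamma(m)}{a\sqrt{\pi}\,\Gamma\!\left(m - \tfrac{1}{2}\right)}
             \left[1 + \left(\frac{x}{a}\right)^{2}\right]^{-m}

% Product form considered in the paper (C is the normalizing constant)
g(x) = C \left[1 + \left(\frac{x}{a}\right)^{2}\right]^{-m_{1}}
         \left[1 + \left(\frac{x}{b}\right)^{2}\right]^{-m_{2}}
```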
Techniques for estimating flood-peak discharges of rural, unregulated streams in Ohio
Koltun, G.F.
2003-01-01
Regional equations for estimating 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood-peak discharges at ungaged sites on rural, unregulated streams in Ohio were developed by means of ordinary and generalized least-squares (GLS) regression techniques. One-variable, simple equations and three-variable, full-model equations were developed on the basis of selected basin characteristics and flood-frequency estimates determined for 305 streamflow-gaging stations in Ohio and adjacent states. The average standard errors of prediction ranged from about 39 to 49 percent for the simple equations, and from about 34 to 41 percent for the full-model equations. Flood-frequency estimates determined by means of log-Pearson Type III analyses are reported along with weighted flood-frequency estimates, computed as a function of the log-Pearson Type III estimates and the regression estimates. Values of explanatory variables used in the regression models were determined from digital spatial data sets by means of a geographic information system (GIS), with the exception of drainage area, which was determined by digitizing the area within basin boundaries manually delineated on topographic maps. Use of GIS-based explanatory variables represents a major departure in methodology from that described in previous reports on estimating flood-frequency characteristics of Ohio streams. Examples are presented illustrating application of the regression equations to ungaged sites on ungaged and gaged streams. A method is provided to adjust regression estimates for ungaged sites by use of weighted and regression estimates for a gaged site on the same stream. A region-of-influence method, which employs a computer program to estimate flood-frequency characteristics for ungaged sites based on data from gaged sites with similar characteristics, was also tested and compared to the GLS full-model equations. For all recurrence intervals, the GLS full-model equations had superior prediction accuracy relative to the simple equations and therefore are recommended for use.
Hydrological and hydroclimatic regimes in the Ouergha watershed
NASA Astrophysics Data System (ADS)
Msatef, Karim; Benaabidate, Lahcen; Bouignane, Aziz
2018-05-01
This work studies the hydrological and hydroclimatic regime of the Ouergha watershed, together with frequency analysis of extreme flows and extreme rainfall for peak estimation and return periods, in order to support prevention of and forecasting for risks such as floods. Analysis of the hydrological regime showed a rain-driven regime, characterized by abundant rainfall, very high winter flows, and hence strong floods. The annual module and the various coefficients show hydroclimatic fluctuations consistent with a semi-humid climate. The water balance highlighted that larger volumes of water are conveyed upstream than downstream, confirming the morphometric parameters of the watershed and its lithological nature. The frequency study of extreme flows and rainfall showed that they are governed by asymmetrical distributions, fitted using the Gumbel, GEV, Gamma, and Log Pearson III methods.
Bishara, Anthony J; Hittner, James B
2012-09-01
It is well known that when data are nonnormally distributed, a test of the significance of Pearson's r may inflate Type I error rates and reduce power. Statistics textbooks and the simulation literature provide several alternatives to Pearson's correlation. However, the relative performance of these alternatives has been unclear. Two simulation studies were conducted to compare 12 methods, including Pearson, Spearman's rank-order, transformation, and resampling approaches. With most sample sizes (n ≥ 20), Type I and Type II error rates were minimized by transforming the data to a normal shape prior to assessing the Pearson correlation. Among transformation approaches, a general purpose rank-based inverse normal transformation (i.e., transformation to rankit scores) was most beneficial. However, when samples were both small (n ≤ 10) and extremely nonnormal, the permutation test often outperformed other alternatives, including various bootstrap tests.
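A sketch of the rank-based inverse normal (rankit) approach the simulations favored: transform each variable to normal scores, then compute Pearson's r. Blom's constant (3/8) is one common choice for the rankit formula, and the data below are synthetic:

```python
import numpy as np
from scipy import stats

def rankit(x):
    """Rank-based inverse normal transformation (Blom scores)."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf((ranks - 0.375) / (len(x) + 0.25))

rng = np.random.default_rng(1)
x = rng.lognormal(0.0, 1.0, 50)            # skewed, nonnormal sample
y = 0.5 * x + rng.lognormal(0.0, 1.0, 50)

r_raw, p_raw = stats.pearsonr(x, y)
r_rin, p_rin = stats.pearsonr(rankit(x), rankit(y))
print(f"raw r = {r_raw:.3f} (p = {p_raw:.4f}); RIN r = {r_rin:.3f} (p = {p_rin:.4f})")
```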
The Spontaneous Ray Log: A New Aid for Constructing Pseudo-Synthetic Seismograms
NASA Astrophysics Data System (ADS)
Quadir, Adnan; Lewis, Charles; Rau, Ruey-Juin
2018-02-01
Conventional synthetic seismograms for hydrocarbon exploration combine the sonic and density logs, whereas pseudo-synthetic seismograms are constructed with a density log plus a resistivity, neutron, gamma ray, or rarely a spontaneous potential log. Herein, we introduce a new technique for constructing a pseudo-synthetic seismogram by combining the gamma ray (GR) and self-potential (SP) logs to produce the spontaneous ray (SR) log. Three wells, each of which consisted of more than 1000 m of carbonates, sandstones, and shales, were investigated; each well was divided into 12 Groups based on formation tops, and the Pearson product-moment correlation coefficient (PCC) was calculated for each "Group" from each of the GR, SP, and SR logs. The highest PCC-valued log curves for each Group were then combined to produce a single log whose values were cross-plotted against the reference well's sonic ITT values to determine a linear transform for producing a pseudo-sonic (PS) log and, ultimately, a pseudo-synthetic seismogram. The range for the Nash-Sutcliffe efficiency (NSE) acceptable value for the pseudo-sonic logs of three wells was 78-83%. This technique was tested on three wells, one of which was used as a blind test well, with satisfactory results. The PCC value between the composite PS (SR) log with low-density correction and the conventional sonic (CS) log was 86%. Because of the common occurrence of spontaneous potential and gamma ray logs in many of the hydrocarbon basins of the world, this inexpensive and straightforward technique could hold significant promise in areas that are in need of alternate ways to create pseudo-synthetic seismograms for seismic reflection interpretation.
Probability distribution functions for unit hydrographs with optimization using genetic algorithm
NASA Astrophysics Data System (ADS)
Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh
2017-05-01
A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
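A sketch of the gamma-pdf case using nonlinear least squares (scipy's curve_fit rather than the paper's Mathematica or genetic-algorithm routes); the UH ordinates are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma

t = np.arange(1, 13, dtype=float)                       # hours
uh = np.array([0.02, 0.10, 0.19, 0.22, 0.18, 0.12,      # observed UH ordinates
               0.08, 0.05, 0.02, 0.01, 0.005, 0.003])

def gamma_uh(t, shape, scale):
    """Two-parameter gamma pdf used as a unit hydrograph shape."""
    return gamma.pdf(t, shape, scale=scale)

(shape, scale), _ = curve_fit(gamma_uh, t, uh, p0=(2.0, 2.0))
print(f"shape = {shape:.2f}, scale = {scale:.2f}, "
      f"peak at t ~ {(shape - 1) * scale:.1f} h")       # gamma mode for shape > 1
```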
Mor, Orna; Gozlan, Yael; Wax, Marina; Mileguir, Fernando; Rakovsky, Avia; Noy, Bina; Mendelson, Ella; Levy, Itzchak
2015-11-01
HIV-1 RNA monitoring, both before and during antiretroviral therapy, is an integral part of HIV management worldwide. Measurements of HIV-1 viral loads are expected to assess the copy numbers of all common HIV-1 subtypes accurately and to be equally sensitive at different viral loads. In this study, we compared for the first time the performance of the NucliSens v2.0, RealTime HIV-1, Aptima HIV-1 Quant Dx, and Xpert HIV-1 viral load assays. Plasma samples (n = 404) were selected on the basis of their NucliSens v2.0 viral load results and HIV-1 subtypes. Concordance, linear regression, and Bland-Altman plots were assessed, and mixed-model analysis was utilized to compare the analytical performance of the assays for different HIV-1 subtypes and for low and high HIV-1 copy numbers. Overall, high concordance (>83.89%), high correlation values (Pearson r values of >0.89), and good agreement were observed among all assays, although the Xpert and Aptima assays, which provided the most similar outputs (estimated mean viral loads of 2.67 log copies/ml [95% confidence interval [CI], 2.50 to 2.84 log copies/ml] and 2.68 log copies/ml [95% CI, 2.49 to 2.86 log copies/ml], respectively), correlated best with the RealTime assay (89.8% concordance, with Pearson r values of 0.97 to 0.98). These three assays exhibited greater precision than the NucliSens v2.0 assay. All assays were equally sensitive for subtype B and AG/G samples and for samples with viral loads of 1.60 to 3.00 log copies/ml. The NucliSens v2.0 assay underestimated A1 samples and those with viral loads of >3.00 log copies/ml. The RealTime assay tended to underquantify subtype C (compared to the Xpert and Aptima assays) and subtype A1 samples. The Xpert and Aptima assays were equally efficient for detection of all subtypes and viral loads, which renders these new assays most suitable for clinical HIV laboratories.
Using Empirical Data to Estimate Potential Functions in Commodity Markets: Some Initial Results
NASA Astrophysics Data System (ADS)
Shen, C.; Haven, E.
2017-12-01
This paper focuses on estimating real and quantum potentials from financial commodities. The log returns of six common commodities are considered. We find that some phenomena, such as vertical potential walls and the time-scale dependence of the variation of returns, also exist in commodity markets. By comparing the quantum and classical potentials, we attempt to demonstrate that the information within these two types of potentials is different. We believe this empirical result is consistent with the theoretical assumption that quantum potentials (when embedded into social science contexts) may contain some social cognitive or market psychological information, while classical potentials mainly reflect 'hard' market conditions. We also compare the two potential forces and explore their relationship by estimating the Pearson correlation between them. The medium or weak correlation may indicate that the cognitive system among traders is affected by those 'hard' market conditions.
Parsimonious nonstationary flood frequency analysis
NASA Astrophysics Data System (ADS)
Serago, Jake M.; Vogel, Richard M.
2018-02-01
There is now widespread awareness of the impact of anthropogenic influences on extreme floods (and droughts) and thus an increasing need for methods to account for such influences when estimating a frequency distribution. We introduce a parsimonious approach to nonstationary flood frequency analysis (NFFA) based on a bivariate regression equation which describes the relationship between annual maximum floods, x, and an exogenous variable which may explain the nonstationary behavior of x. The conditional mean, variance and skewness of both x and y = ln (x) are derived, and combined with numerous common probability distributions including the lognormal, generalized extreme value and log Pearson type III models, resulting in a very simple and general approach to NFFA. Our approach offers several advantages over existing approaches including: parsimony, ease of use, graphical display, prediction intervals, and opportunities for uncertainty analysis. We introduce nonstationary probability plots and document how such plots can be used to assess the improved goodness of fit associated with a NFFA.
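A sketch of the core idea under the simplest (lognormal) case: regress y = ln(x) on an exogenous covariate, then build year-specific frequency estimates from the conditional mean and the residual variance. The paper also develops the GEV and log Pearson type III cases; data here are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
years = np.arange(1960, 2020)
z = 0.01 * (years - 1960) + rng.normal(0, 0.1, len(years))   # exogenous covariate
x = np.exp(4.0 + 1.5 * z + rng.normal(0, 0.3, len(years)))   # annual maximum floods

res = stats.linregress(z, np.log(x))                          # bivariate regression
resid_sd = np.std(np.log(x) - (res.intercept + res.slope * z), ddof=2)

for z0 in (z.min(), z.max()):
    mu_cond = res.intercept + res.slope * z0                  # conditional mean of ln(x)
    q100 = np.exp(mu_cond + stats.norm.ppf(0.99) * resid_sd)  # conditional 1% flood
    print(f"covariate = {z0:+.2f}: conditional 100-yr flood ~ {q100:,.0f}")
```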
Williams-Sether, Tara
2015-08-06
Annual peak-flow frequency data from 231 U.S. Geological Survey streamflow-gaging stations in North Dakota and parts of Montana, South Dakota, and Minnesota, with 10 or more years of unregulated peak-flow record, were used to develop regional regression equations for exceedance probabilities of 0.5, 0.20, 0.10, 0.04, 0.02, 0.01, and 0.002 using generalized least-squares techniques. Updated peak-flow frequency estimates for 262 streamflow-gaging stations were developed using data through 2009 and log-Pearson Type III procedures outlined by the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data. An average generalized skew coefficient was determined for three hydrologic zones in North Dakota. A StreamStats web application was developed to estimate basin characteristics for the regional regression equation analysis. Methods for estimating a weighted peak-flow frequency for gaged sites and ungaged sites are presented.
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
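For a present-day reader, most of the catalogued routines have scipy equivalents; the following sketch evaluates several of them at arbitrary sample points (these are modern library calls, not the report's Fortran interfaces):

```python
from scipy import stats, special

print(stats.beta.cdf(0.4, 2, 5))          # beta distribution
print(stats.chi2.sf(10.0, 4))             # chi-square exceedance probability
print(stats.pearson3.ppf(0.99, 0.5))      # Pearson Type III quantile
print(stats.weibull_min.cdf(1.2, 1.5))    # Weibull
print(stats.t.sf(2.1, 15))                # Student's t
print(special.i0(1.0))                    # Bessel function I0
print(special.gammaln(10.0))              # log-gamma
print(special.erf(1.0))                   # error function
print(special.exp1(1.0))                  # exponential integral E1
```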
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F tests. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
Barwais, Faisal Awad; Cuddihy, Thomas F; Washington, Tracy; Tomson, L Michaud; Brymer, Eric
2014-08-01
Low levels of physical activity and high levels of sedentary behavior (SB) are major public health concerns. This study was designed to develop and validate the 7-day Sedentary (S) and Light Intensity Physical Activity (LIPA) Log (7-day SLIPA Log), a self-report measure of specific daily behaviors. To develop the log, 62 specific SB and LIPA behaviors were chosen from the Compendium of Physical Activities. Face-to-face interviews were conducted with 32 sedentary volunteers to identify domains and behaviors of SB and LIPA. To validate the log, a further 22 sedentary adults were recruited to wear the GT3x for 7 consecutive days and nights. Pearson correlations (r) between the 7-day SLIPA Log and GT3x were significant for SB (r = .86, P < .001) and for LIPA (r = .80, P < .001). Lying and sitting postures were positively correlated with GT3x output (r = .60 and r = .64, P < .001, respectively). No significant correlation was found for standing posture (r = .14, P = .53). The kappa values between the 7-day SLIPA Log and GT3x variables ranged from 0.09 to 0.61, indicating poor to good agreement. The 7-day SLIPA Log is a valid self-report measure of SB and LIPA in specific behavioral domains.
Spatial trends in Pearson Type III statistical parameters
Lichty, R.W.; Karlinger, M.R.
1995-01-01
Spatial trends in the statistical parameters (mean, standard deviation, and skewness coefficient) of a Pearson Type III distribution of the logarithms of annual flood peaks for small rural basins (less than 90 km²) are delineated using a climate factor CT (T = 2-, 25-, and 100-yr recurrence intervals), which quantifies the effects of long-term climatic data (rainfall and pan evaporation) on observed T-yr floods. Maps showing trends in average parameter values demonstrate the geographically varying influence of climate on the magnitude of Pearson Type III statistical parameters. The spatial trends in variability of the parameter values characterize the sensitivity of the statistical parameters to the interaction of basin-runoff characteristics (hydrology) and climate. (from Authors)
Comparison of Methods for Estimating Low Flow Characteristics of Streams
Tasker, Gary D.
1987-01-01
Four methods for estimating the 7-day, 10-year and 7-day, 20-year low flows for streams are compared by the bootstrap method. The bootstrap method is a Monte Carlo technique in which random samples are drawn from an unspecified sampling distribution defined from observed data. The nonparametric nature of the bootstrap makes it suitable for comparing methods based on a flow series for which the true distribution is unknown. Results show that the two methods based on hypothetical distributions (log-Pearson Type III and Weibull) had lower mean square errors than did the Box-Cox transformation method or the log-Boughton method, which is based on a fit of plotting positions.
Sauer, Vernon B.
1974-01-01
The 2-, 5-, 10-, 25-, 50-, and 100-year recurrence interval floods are related to basin and climatic parameters for natural streams in Oklahoma by multiple regression techniques through the mathematical model Q_x = aA^b S^c P^d, where Q_x is peak discharge for recurrence interval x, A is contributing drainage area, S is main channel slope, P is mean annual precipitation, and a, b, c, and d are regression constants and coefficients. One equation for each recurrence interval applies statewide for all natural streams of less than 2,500 mi² (6,500 km²), except where manmade works, such as dams, flood-detention structures, levees, channelization, and urban development, appreciably affect flood runoff. The equations can be used to estimate flood frequency of a stream at an ungaged site if drainage area size, main channel slope, and mean annual precipitation are known. At or near gaged sites, a weighted average of the regression results and the gaging station data is recommended. Individual relations of flood magnitude to contributing drainage area are given for all or parts of the main stems of the Arkansas, Salt Fork Arkansas, Cimarron, North Canadian, Canadian, Washita, North Fork Red, and Red Rivers. Parts of some of these streams, and all of the Neosho and Verdigris Rivers, are not included because the effects of major regulation from large reservoirs cannot be evaluated within the scope of the report. Graphical relations of maximum floods of record for eastern and western Oklahoma provide a guide to maximum probable floods. A random sampling of the seasonal occurrence of floods indicated about two-thirds of all annual floods in Oklahoma occur during April through July. Less than one-half of one percent of annual floods occur in December. A compilation of flood records at all gaging sites in Oklahoma and some selected sites in adjacent States is given in an appendix. Basin and climatic parameters and log-Pearson Type III frequency data and statistics are given for most station records. A second appendix gives a reprint of the U.S. Water Resources Council Bulletin 15, which describes procedures for fitting a log-Pearson Type III distribution to gaging station data.
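As an illustration of how the statewide model is applied, here is a small Python sketch; the coefficient values are placeholders, not the report's fitted constants.

```python
# Hypothetical illustration of the model Q_x = a * A^b * S^c * P^d; the
# values of a, b, c, d below are made up, not taken from the report.
def peak_discharge(A, S, P, a, b, c, d):
    """Peak discharge for one recurrence interval from contributing drainage
    area A (mi^2), main channel slope S, and mean annual precipitation P (in)."""
    return a * A**b * S**c * P**d

# e.g. an (illustrative) 100-year flood estimate for an ungaged site
print(peak_discharge(A=150.0, S=12.0, P=34.0, a=20.0, b=0.6, c=0.3, d=0.8))
```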
The 1993 Mississippi river flood: A one hundred or a one thousand year event?
Malamud, B.D.; Turcotte, D.L.; Barton, C.C.
1996-01-01
Power-law (fractal) extreme-value statistics are applicable to many natural phenomena under a wide variety of circumstances. Data from a hydrologic station in Keokuk, Iowa, show that the great flood of the Mississippi River in 1993 has a recurrence interval on the order of 100 years using power-law statistics applied to a partial-duration flood series, and on the order of 1,000 years using a log-Pearson type 3 (LP3) distribution applied to an annual series. The LP3 analysis is the federally adopted probability distribution for flood-frequency estimation of extreme events. We suggest that power-law statistics are preferable to LP3 analysis. As a further test of the power-law approach we consider paleoflood data from the Colorado River. We compare power-law and LP3 extrapolations of historical data with these paleofloods. The results are remarkably similar to those obtained for the Mississippi River: recurrence intervals from power-law statistics applied to Lees Ferry discharge data are generally consistent with inferred 100- and 1,000-year paleofloods, whereas LP3 analysis gives recurrence intervals that are orders of magnitude longer. For both the Keokuk and Lees Ferry gauges, the use of an annual series introduces an artificial curvature in log-log space that leads to an underestimate of severe floods. Power-law statistics predict much shorter recurrence intervals than the federally adopted LP3 statistics. We suggest that if power-law behavior is applicable, then the likelihood of severe floods is much higher. More conservative dam designs and land-use restrictions may be required.
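The power-law approach amounts to fitting a straight line in log-log space between discharge and empirical recurrence interval for a partial-duration series, then extrapolating. A sketch with synthetic data (not the Keokuk record) might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative partial-duration peaks (m^3/s) over a 50-year record
peaks = np.sort(3000.0 * rng.pareto(2.0, size=200) + 2000.0)[::-1]
record_years = 50.0

ranks = np.arange(1, peaks.size + 1)      # rank 1 = largest peak
T = record_years / ranks                  # empirical recurrence interval (yr)

# Power-law fit: log T = alpha * log Q + beta (a line in log-log space)
alpha, beta = np.polyfit(np.log(peaks), np.log(T), 1)

def recurrence_interval(q):
    return np.exp(alpha * np.log(q) + beta)

print(recurrence_interval(1.5 * peaks.max()))  # extrapolation to a larger flood
```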
NASA Astrophysics Data System (ADS)
Nobert, Joel; Mugo, Margaret; Gadain, Hussein
Reliable estimation of flood magnitudes corresponding to required return periods, vital for structural design purposes, is hampered by the lack of hydrological data in the study area of Lake Victoria Basin in Kenya. Use of regional information, derived from data at gauged sites and regionalized for use at any location within a homogeneous region, would improve the reliability of the design flood estimation. Therefore, the regional index flood method has been applied. Based on data from 14 gauged sites, a delineation of the basin into two homogeneous regions was achieved using elevation variation (90-m DEM), spatial annual rainfall pattern, and Principal Component Analysis of seasonal rainfall patterns (from 94 rainfall stations). At-site annual maximum series were modelled using the three-parameter Log Normal (LN), Log Logistic (LLG), Generalized Extreme Value (GEV), and Log Pearson Type 3 (LP3) distributions. The parameters of the distributions were estimated using the method of probability weighted moments. Goodness-of-fit tests were applied and the GEV was identified as the most appropriate model for each site. Based on the GEV model, flood quantiles were estimated and regional frequency curves derived from the averaged at-site growth curves. Using the least squares regression method, relationships were developed between the index flood, which is defined as the Mean Annual Flood (MAF), and catchment characteristics. The relationships indicated that area, mean annual rainfall, and altitude were the three significant variables that greatly influence the index flood. Thereafter, flood magnitudes in ungauged catchments within a homogeneous region were estimated from the derived equations for the index flood and the quantiles from the regional curves. These estimates will improve flood risk estimation and support water management and engineering decisions and actions.
Albert, Dara V; Brorson, James R; Amidei, Christina; Lukas, Rimas V
2014-04-22
Using outpatient neurology clinic case logs completed by medical students on neurology clerkships, we examined the impact of outpatient clinical encounter volume per student on outcomes of knowledge assessed by the National Board of Medical Examiners (NBME) Clinical Neurology Subject Examination and clinical skills assessed by the Objective Structured Clinical Examination (OSCE). Data from 394 medical students from July 2008 to June 2012, representing 9,791 patient encounters, were analyzed retrospectively. Pearson correlations were calculated examining the relationship between numbers of cases logged per student and performance on the NBME examination. Similarly, correlations between cases logged and performance on the OSCE, as well as on components of the OSCE (history, physical examination, clinical formulation), were evaluated. There was a correlation between the total number of cases logged per student and NBME examination scores (r = 0.142; p = 0.005) and OSCE scores (r = 0.136; p = 0.007). Total number of cases correlated with the clinical formulation component of the OSCE (r = 0.172; p = 0.001) but not the performance on history or physical examination components. The volume of cases logged by individual students in the outpatient clinic correlates with performance on measures of knowledge and clinical skill. In measurement of clinical skill, seeing a greater volume of patients in the outpatient clinic is related to improved clinical formulation on the OSCE. These findings may affect methods employed in assessment of medical students, residents, and fellows.
NASA Astrophysics Data System (ADS)
Sabarish, R. Mani; Narasimhan, R.; Chandhru, A. R.; Suribabu, C. R.; Sudharsan, J.; Nithiyanantham, S.
2017-05-01
In the design of irrigation and other hydraulic structures, evaluating the magnitude of extreme rainfall for a specific probability of occurrence is of much importance. The capacity of such structures is usually designed to cater to the probability of occurrence of extreme rainfall during their lifetime. In this study, an extreme value analysis of rainfall for Tiruchirapalli City in Tamil Nadu was carried out using 100 years of rainfall data. Statistical methods were used in the analysis. The best-fit probability distribution was evaluated for 1, 2, 3, 4 and 5 days of continuous maximum rainfall. The goodness of fit was evaluated using the chi-square test. The results of the goodness-of-fit tests indicate that the log-Pearson type III method is the overall best-fit probability distribution for the 1-day maximum rainfall and the consecutive 2-, 3-, 4- and 5-day maximum rainfall series of Tiruchirapalli. For reliability, the forecast maximum rainfall for the selected return periods was evaluated against the results of the plotting-position method.
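The selection step, fitting several candidate distributions and ranking them by a chi-square statistic, can be sketched as follows; the series is synthetic, not the Tiruchirapalli data, and the candidate set is abbreviated.

```python
# Sketch of chi-square goodness-of-fit ranking over candidate distributions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
annual_max = rng.gumbel(loc=100.0, scale=20.0, size=100)  # synthetic maxima (mm)

candidates = {
    "Gumbel": stats.gumbel_r,
    "log-Pearson III": stats.pearson3,   # fitted to log-transformed data below
    "Normal": stats.norm,
}

edges = np.quantile(annual_max, np.linspace(0.0, 1.0, 11))  # 10 equiprobable bins
observed, _ = np.histogram(annual_max, bins=edges)

for name, dist in candidates.items():
    use_log = name.startswith("log")
    data = np.log(annual_max) if use_log else annual_max
    bins = np.log(edges) if use_log else edges      # log is monotonic: same bins
    params = dist.fit(data)
    expected = np.diff(dist.cdf(bins, *params)) * data.size
    chi2 = np.sum((observed - expected) ** 2 / expected)
    print(f"{name:16s} chi-square = {chi2:.2f}")    # smaller = better fit
```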
Analysis of water-level fluctuations of the US Highway 90 retention pond, Madison, Florida
Bridges, W.C.
1985-01-01
A closed basin stormwater retention pond, located 1 mile west of Madison, Florida, has a maximum storage capacity of 134.1 acre-feet at the overtopping altitude of 100.2 feet. The maximum observed altitude (July 1982 to March 1984) was 99.52 feet (126.7 acre-feet) on March 28, 1984. This report provides a technique for simulating net monthly change in altitude in response to rainfall and evaporation. A regression equation was developed which relates net monthly change in altitude (dependent variable) to rainfall and evaporation (independent variables). Rainfall frequency curves were developed using a log-Pearson Type III distribution of the annual, January through April, June through August, and July monthly rainfall totals for the years 1908-72, 1974, and 1976-82. The altitude of the retention pond increased almost 7 feet during the 4-month period January through April 1983. The rainfall total was 35.1 inches, which exceeded the 100-year recurrence interval for January-April rainfall. (USGS)
Use of regionalisation approach to develop fire frequency curves for Victoria, Australia
NASA Astrophysics Data System (ADS)
Khastagir, Anirban; Jayasuriya, Niranjali; Bhuyian, Muhammed A.
2017-11-01
It is important to perform fire frequency analysis to obtain fire frequency curves (FFCs) based on fire intensity at different parts of Victoria. In this paper FFCs were derived based on the forest fire danger index (FFDI), a measure related to fire initiation, spreading speed, and containment difficulty. The mean temperature (T), relative humidity (RH), and areal extent of open water (LC2) during the summer months (Dec-Feb) were identified as the most important parameters for assessing the risk of occurrence of bushfire. Based on these parameters, the Andrews' curve equation was applied to 40 selected meteorological stations to identify homogeneous stations to form unique clusters. A methodology using peak FFDI from cluster-averaged FFDIs was developed by applying the Log Pearson Type III (LPIII) distribution to generate FFCs. A total of nine homogeneous clusters across Victoria were identified, and subsequently their FFCs were developed in order to estimate the regionalised fire occurrence characteristics.
Predicting commuter flows in spatial networks using a radiation model based on temporal ranges
NASA Astrophysics Data System (ADS)
Ren, Yihui; Ercsey-Ravasz, Mária; Wang, Pu; González, Marta C.; Toroczkai, Zoltán
2014-11-01
Understanding network flows such as commuter traffic in large transportation networks is an ongoing challenge due to the complex nature of the transportation infrastructure and human mobility. Here we show a first-principles based method for traffic prediction using a cost-based generalization of the radiation model for human mobility, coupled with a cost-minimizing algorithm for efficient distribution of the mobility fluxes through the network. Using US census and highway traffic data, we show that traffic can efficiently and accurately be computed from a range-limited, network betweenness type calculation. The model based on travel time costs captures the log-normal distribution of the traffic and attains a high Pearson correlation coefficient (0.75) when compared with real traffic. Because of its principled nature, this method can inform many applications related to human mobility driven flows in spatial networks, ranging from transportation, through urban planning to mitigation of the effects of catastrophic events.
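For reference, the flux formula of the underlying radiation model (Simini et al.) is easy to state in code; the paper's generalization replaces the radius-based intervening population s_ij with a travel-time-cost-based one, which can simply be passed in here. The values shown are illustrative.

```python
# Minimal sketch of the radiation-model commuter flux between two locations.
def radiation_flux(T_i, m_i, n_j, s_ij):
    """Expected flux from i to j.

    T_i  -- total trips leaving location i
    m_i  -- population of i
    n_j  -- population of j
    s_ij -- population within the travel cost (or radius) of the i-to-j trip,
            excluding i and j themselves
    """
    return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))

print(radiation_flux(T_i=1000.0, m_i=5e4, n_j=2e4, s_ij=1e5))
```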
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
NASA Astrophysics Data System (ADS)
Pelle, A.; Allen, M.; Fu, J. S.
2013-12-01
With rising population and increasing urban density, it is of pivotal importance for urban planners to plan for increasing extreme precipitation events. Climate models indicate that an increase in global mean temperature will lead to increased frequency and intensity of storms of a variety of types. Analysis of results from the Coupled Model Intercomparison Project, Phase 5 (CMIP5) has demonstrated, however, that global climate models severely underestimate precipitation. Preliminary results from dynamical downscaling indicate that Philadelphia, Pennsylvania is expected to experience the greatest increase in precipitation due to an increase in annual extreme events in the US. New York City, New York and Chicago, Illinois are anticipated to have similarly large increases in annual extreme precipitation events. In order to produce more accurate results, we downscale Philadelphia, Chicago, and New York City using the Weather Research and Forecasting model (WRF). We analyze historical precipitation data and WRF output utilizing a Log Pearson Type III (LP3) distribution for the frequency of extreme precipitation events. This study aims to determine the likelihood of extreme precipitation in future years and its effect on the cost of stormwater management for these three cities.
Sando, Steven K.; McCarthy, Peter M.
2018-05-10
This report documents the methods for peak-flow frequency (hereinafter “frequency”) analysis and reporting for streamgages in and near Montana following implementation of the Bulletin 17C guidelines. The methods are used to provide estimates of peak-flow quantiles for 50-, 42.9-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for selected streamgages operated by the U.S. Geological Survey Wyoming-Montana Water Science Center (WY–MT WSC). These annual exceedance probabilities correspond to 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. Standard procedures specific to the WY–MT WSC for implementing the Bulletin 17C guidelines include (1) the use of the Expected Moments Algorithm analysis for fitting the log-Pearson Type III distribution, incorporating historical information where applicable; (2) the use of weighted skew coefficients (based on weighting at-site station skew coefficients with generalized skew coefficients from the Bulletin 17B national skew map); and (3) the use of the Multiple Grubbs-Beck Test for identifying potentially influential low flows. For some streamgages, the peak-flow records are not well represented by the standard procedures and require user-specified adjustments informed by hydrologic judgment. The specific characteristics of peak-flow records addressed by the informed-user adjustments include (1) regulated peak-flow records, (2) atypical upper-tail peak-flow records, and (3) atypical lower-tail peak-flow records. In all cases, the informed-user adjustments use the Expected Moments Algorithm fit of the log-Pearson Type III distribution using the at-site station skew coefficient, a manual potentially influential low flow threshold, or both. Appropriate methods can be applied to at-site frequency estimates to provide improved representation of long-term hydroclimatic conditions. The methods for improving at-site frequency estimates by weighting with regional regression equations and by Maintenance of Variance Extension Type III record extension are described. Frequency analyses were conducted for 99 example streamgages to indicate various aspects of the frequency-analysis methods described in this report. The frequency analyses and results for the example streamgages are presented in a separate data release associated with this report consisting of tables and graphical plots that are structured to include information concerning the interpretive decisions involved in the frequency analyses. Further, the separate data release includes the input files to the PeakFQ program, version 7.1, including the peak-flow data file and the analysis specification file that were used in the peak-flow frequency analyses. Peak-flow frequencies are also reported in separate data releases for selected streamgages in the Beaverhead River and Clark Fork Basins and also for selected streamgages in the Ruby, Jefferson, and Madison River Basins.
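The skew-weighting step in item (2) follows the familiar inverse-MSE weighting of Bulletin 17B/17C. A small sketch with illustrative numbers (the 0.302 MSE often quoted for the Bulletin 17B national map is used here as an assumption, not a value from this report):

```python
# Sketch: weight the at-site skew and the generalized (regional) skew
# inversely to their mean square errors, as in Bulletin 17B/17C.
def weighted_skew(g_site, mse_site, g_regional, mse_regional):
    w = mse_regional / (mse_site + mse_regional)   # weight on the at-site skew
    return w * g_site + (1.0 - w) * g_regional

# Illustrative values only
print(weighted_skew(g_site=-0.35, mse_site=0.30,
                    g_regional=-0.10, mse_regional=0.302))
```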
Clinical evaluation of flowable resins in non-carious cervical lesions: two-year results.
Celik, Cigdem; Ozgünaltay, Gül; Attar, Nuray
2007-01-01
This study evaluated the two-year clinical performance of one microhybrid composite and three different types of flowable resin materials in non-carious cervical lesions. A total of 252 noncarious cervical lesions were restored in 37 patients (12 male, 25 female) with Admira Flow, Dyract Flow, Filtek Flow and Filtek Z250, according to manufacturers' instructions. All the restorations were placed by one operator, and two other examiners evaluated the restorations clinically within one week after placement and after 6, 12, 18 and 24 months, using modified USPHS criteria. At the end of 24 months, 172 restorations were evaluated in 26 patients, with a recall rate of 68%. Statistical analysis was completed using the Pearson Chi-square and Fisher-Freeman-Halton tests (p < 0.05). Additionally, survival rates were analyzed with the Kaplan-Meier estimator and the Log-Rank test (p < 0.05). The Log-Rank test indicated statistically significant differences between the survival rates of Dyract Flow/Admira Flow and Dyract Flow/Filtek Z250 (p < 0.05). While there was a statistically significant difference between Dyract Flow and the other materials for color match at 12 and 18 months, no significant difference was observed among all of the materials tested at 24 months. Significant differences were revealed between Filtek Z250 and the other materials for marginal adaptation at 18 and 24 months (p < 0.05). With respect to marginal discoloration, secondary caries, surface texture and anatomic form, no significant differences were found between the resin materials (p > 0.05). It was concluded that different types of resin materials demonstrated acceptable clinical performance in non-carious cervical lesions, except for the retention rates of the Dyract Flow restorations.
Experience may not be the best teacher: patient logs do not correlate with clerkship performance.
Poisson, Sharon N; Gelb, Douglas J; Oh, Mary F S; Gruppen, Larry D
2009-02-24
With the recent emphasis on core competencies, medical schools and residency programs have attempted to monitor and regulate trainees' patient encounters. The educational validity of this practice is unknown. Our objective was to determine whether patient encounter logs correlate with educational outcomes. We reviewed patient logs of all 212 neurology clerkship students from the 2005-2006 academic year and determined the number of patients each student saw in five diagnostic categories (seizure, headache, stroke, acute mental status change, and dementia). We compared these numbers with the students' written examination scores (total and category-specific) and clinical evaluation scores using Pearson product-moment correlations. The more patients in a given diagnostic category that students saw, the lower the students' examination subscores in that disease category (r = -0.066, p = 0.03). The total number of patients each student saw did not correlate with the student's total examination score (r = -0.021, p = 0.77) or the student's overall clinical performance rating (r = 0.089, p = 0.23). Higher numbers of logged patients did not correlate with better clerkship performance, whether the outcome measures were written tests or faculty ratings, and whether the analysis involved total or disease-specific patient counts. Thus, patient census may not be a meaningful index of educational experience or outcome. Considerable time, money, and effort are required to maintain accurate logs of trainees' encounters with patients; based on the current study, this may be an inefficient use of resources.
NASA Astrophysics Data System (ADS)
Shirmohamadi, Mohamad; Kadkhodaie, Ali; Rahimpour-Bonab, Hossain; Faraji, Mohammad Ali
2017-04-01
Velocity deviation log (VDL) is a synthetic log used to determine pore types in reservoir rocks based on a combination of the sonic log with neutron-density logs. The current study proposes a two-step approach to create a map of porosity and pore types by integrating the results of petrographic studies, well logs, and seismic data. In the first step, the velocity deviation log was created from the combination of the sonic log with the neutron-density log. The results allowed identifying negative, zero, and positive deviations based on the created synthetic velocity log. Negative velocity deviations (below -500 m/s) indicate connected or interconnected pores and fractures, while positive deviations (above +500 m/s) are related to isolated pores. Zero deviations in the range of [-500 m/s, +500 m/s] are in good agreement with intercrystalline and microporosities. The results of petrographic studies were used to validate the main pore type derived from the velocity deviation log. In the next step, the velocity deviation log was estimated from seismic data by using a probabilistic neural network model. For this purpose, the inverted acoustic impedance along with the amplitude-based seismic attributes were formulated to VDL. The methodology is illustrated by a case study from the Hendijan oilfield, northwestern Persian Gulf. The results of this study show that the integration of petrographic studies, well logs, and seismic attributes is an effective way of understanding the spatial distribution of the main reservoir pore types.
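A minimal sketch of the first step, assuming the usual velocity-deviation construction (after Anselmetti and Eberli) with the Wyllie time-average equation; the matrix and fluid velocities are hypothetical limestone/brine values, not parameters from this study.

```python
# Sketch: VDL = measured sonic velocity minus the synthetic velocity
# predicted from neutron-density porosity via the Wyllie time-average equation.
def velocity_deviation(dt_sonic_us_per_m, phi_nd,
                       v_matrix=6400.0, v_fluid=1600.0):
    """Return the velocity deviation (m/s) for one depth sample."""
    v_sonic = 1e6 / dt_sonic_us_per_m                          # us/m -> m/s
    v_synth = 1.0 / (phi_nd / v_fluid + (1.0 - phi_nd) / v_matrix)
    return v_sonic - v_synth

vdl = velocity_deviation(dt_sonic_us_per_m=220.0, phi_nd=0.18)
print(f"{vdl:+.0f} m/s")   # > +500: isolated pores; < -500: connected/fractured
```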
Low-flow frequency analyses for streams in west-central Florida
Hammett, K.M.
1985-01-01
The log-Pearson type III distribution was used for defining low-flow frequency at 116 continuous-record streamflow stations in west-central Florida. Frequency distributions were calculated for 1, 3, 7, 14, 30, 60, 90, 120, and 183 consecutive-day periods for recurrence intervals of 2, 5, 10, and 20 years. Discharge measurements at more than 100 low-flow partial-record stations and miscellaneous discharge-measurement stations were correlated with concurrent daily mean discharge at continuous-record stations. Estimates of the 7-day, 2-year; 7-day, 10-year; 30-day, 2-year; and 30-day, 10-year discharges were made for most of the low-flow partial-record and miscellaneous discharge-measurement stations based on those correlations. Multiple linear-regression analysis was used in an attempt to mathematically relate low-flow frequency data to basin characteristics. The resulting equations showed an apparent bias and were considered unsatisfactory for use in estimating low-flow characteristics. Maps of the 7-day, 10-year and 30-day, 10-year low flows are presented. Techniques that can be used to estimate low-flow characteristics at an ungaged site are also provided. (USGS)
Neyman Pearson detection of K-distributed random variables
NASA Astrophysics Data System (ADS)
Tucker, J. Derek; Azimi-Sadjadi, Mahmood R.
2010-04-01
In this paper a new detection method for sonar imagery is developed for K-distributed background clutter. The equation for the log-likelihood is derived and compared to the corresponding counterparts derived under the Gaussian and Rayleigh assumptions. Test results of the proposed method on a data set of synthetic underwater sonar images are also presented. This database contains images with targets of different shapes inserted into backgrounds generated using a correlated K-distributed model. Results illustrating the effectiveness of the K-distributed detector are presented in terms of probability of detection, false alarm, and correct classification rates for various bottom clutter scenarios.
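A sketch of the K-distribution log-likelihood, assuming the common two-parameter amplitude form of the density; this is an illustration of the ingredient, not the paper's derivation for correlated clutter.

```python
# Sketch: log-likelihood under the K-distribution amplitude density
#   f(x) = (2b / Gamma(nu)) * (b*x/2)**nu * K_{nu-1}(b*x),  x > 0,
# with shape nu and scale b; kv is the modified Bessel function K.
import numpy as np
from scipy.special import gammaln, kv
from scipy.stats import rayleigh

def k_loglik(x, nu, b):
    x = np.asarray(x, dtype=float)
    return np.sum(np.log(2.0 * b) - gammaln(nu)
                  + nu * np.log(b * x / 2.0)
                  + np.log(kv(nu - 1.0, b * x)))

# Crude likelihood comparison against the Rayleigh hypothesis
x = rayleigh.rvs(scale=1.0, size=500, random_state=0)
print(k_loglik(x, nu=2.0, b=2.0))            # K-distribution hypothesis
print(rayleigh.logpdf(x, scale=1.0).sum())   # Rayleigh hypothesis
```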
The Value of Web Log Data in Use-based Design and Testing.
ERIC Educational Resources Information Center
Burton, Mary C.; Walther, Joseph B.
2001-01-01
Suggests Web-based logs contain useful empirical data with which World Wide Web designers and design theorists can assess usability and effectiveness of design choices. Enumerates types of Web server logs and client logs, types and uses of log data, and issues associated with the validity of these data. Presents an approach to…
Joseph L. Ganey; Scott C. Vojta
2012-01-01
Down logs provide important ecosystem services in forests and affect surface fuel loads and fire behavior. Amounts and kinds of logs are influenced by factors such as forest type, disturbance regime, forest management, and climate. To quantify potential short-term changes in log populations during a recent global-climate-change-type drought, we sampled logs in mixed-...
Tripathi, Avnish; Benjamin, Emelia J; Musani, Solomon K; Hamburg, Naomi M; Tsao, Connie W; Saraswat, Arti; Vasan, Ramachandran S; Mitchell, Gary F; Fox, Ervin R
2017-05-01
Peripheral vascular endothelial dysfunction assessed by digital peripheral arterial tonometry (PAT) has been associated with risk for adverse cardiovascular events. We examined the relations of peripheral microvascular dysfunction and left ventricular mass in a community-based cohort of African Americans. We examined participants of the Jackson Heart Study who had PAT and cardiac magnetic resonance imaging evaluations between 2007 and 2013. Consistent with the pertinent literature, left ventricular mass index (LVMI) was adjusted for body size by indexing to height^2.7. Pearson's correlation and general linear regression analyses were used to relate reactive hyperemia index, baseline pulse amplitude (BPA), and augmentation index (markers of microvascular vasodilator function, baseline vascular pulsatility, and relative wave reflection, respectively) to LVMI after adjusting for traditional cardiovascular risk factors. A total of 440 participants (mean age 59 ± 10 years, 60% women) were included. Age- and sex-adjusted Pearson's correlation analysis suggested that natural-log-transformed LVMI was negatively correlated with reactive hyperemia index (coefficient: -0.114; P = .02) and positively correlated with BPA (coefficient: 0.272; P < .001). In multivariable analyses, higher log_e LVMI was associated with higher BPA (β: 0.210; P = .03) after accounting for age, sex, body mass index, diabetes, hypertension, ratio of total cholesterol and high-density lipoprotein cholesterol, smoking, and history of cardiovascular disease. In a community-based sample of African Americans, higher baseline pulsatility measured by PAT was associated with higher LVMI by cardiac magnetic resonance imaging after adjusting for traditional risk factors. Copyright © 2017 American Society of Hypertension. Published by Elsevier Inc. All rights reserved.
Boat-Wave-Induced Bank Erosion on the Kenai River, Alaska
2008-03-01
Various types of streambank stabilization on the Kenai River are described; common stabilization techniques consist of root wads, spruce tree revetments, coir logs, and riprap. (Figures 50 and 51 show a Type 1 bank with coir log habitat restoration and a Type 1 bank with willow plantings/ladder access habitat restoration.)
Clinical significance of serum complement factor 3 in patients with type 2 diabetes mellitus.
Nishimura, Takeshi; Itoh, Yoshihisa; Yamashita, Shigeo; Koide, Keiko; Harada, Noriaki; Yano, Yasuo; Ikeda, Nobuko; Azuma, Koichiro; Atsumi, Yoshihito
2017-05-01
Although serum complement factor 3 (C3) is an acute phase reactant mainly synthesized in the liver, several recent studies have shown high C3 gene expression in adipose tissue (AT). However, the relationship between C3 and AT levels has not been fully clarified in type 2 diabetes mellitus (T2DM) patients. A total of 164 T2DM patients (109 men and 55 women) participated in this cross-sectional study. A computed tomography scan was performed to measure visceral, subcutaneous, and total AT. The correlation between these factors and C3 levels was examined using Pearson's correlation analysis. A multivariate regression model was used to assess an independent determinant associated with C3 levels after adjusting the explanatory variables (i.e., all ATs [visceral, subcutaneous, and total], and clinical features [sex, age, body mass index, waist circumference, glycated hemoglobin, duration of diabetes, systolic blood pressure, diastolic blood pressure, aspartate aminotransferase levels, alanine aminotransferase levels, low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, log(triglyceride levels), estimated glomerular filtration rate, and log(high-sensitivity C-reactive protein levels)]). Serum C3 levels were correlated with visceral, subcutaneous, and total AT among both men (r=0.505, p<0.001; r=0.545, p<0.001; r=0.617, p<0.001, respectively) and women (r=0.396, p=0.003; r=0.517, p<0.001; r=0.548, p<0.001, respectively). In the multivariate regression model, the association between total AT and C3 levels remained significantly positive (β=0.490, p<0.001). Serum C3 levels are associated with visceral, subcutaneous, and total AT in T2DM patients. Furthermore, C3 levels seem to be a marker for overall adiposity rather than regional adiposity. Copyright © 2017 Elsevier B.V. All rights reserved.
Decision Support System for hydrological extremes
NASA Astrophysics Data System (ADS)
Bobée, Bernard; El Adlouni, Salaheddine
2014-05-01
The study of the tail behaviour of extreme event distributions is important in several applied statistical fields such as hydrology, finance, and telecommunications. For example, in hydrology it is important to estimate extreme quantiles adequately in order to build and manage safe and effective hydraulic structures (dams, for example). Two main classes of distributions are used in hydrological frequency analysis: the class D of sub-exponential distributions (Gamma (G2), Gumbel, Halphen type A (HA), Halphen type B (HB)…) and the class C of regularly varying distributions (Fréchet, Log-Pearson, Halphen type IB …) with a heavier tail. A Decision Support System (DSS) based on the characterization of the right tail, corresponding to a low probability of exceedance p (high return period T = 1/p in hydrology), has been developed. The DSS allows discriminating between classes C and D, and in its latest version a new prior step is added in order to test Lognormality. Indeed, the right tail of the Lognormal distribution (LN) is between the tails of distributions of the classes C and D; studies indicated difficulty with the discrimination between LN and distributions of the classes C and D. Other tools are useful to discriminate between distributions of the same class D (HA, HB and G2; see other communication). Numerical illustrations show that the DSS allows discriminating between Lognormal, regularly varying, and sub-exponential distributions, and leads to coherent conclusions. Key words: Regularly varying distributions, subexponential distributions, Decision Support System, Heavy tailed distribution, Extreme value theory
Theodossiadis, George P; Grigoropoulos, Vlassis G; Liarakos, Vasilis S; Rouvas, Alexandros; Emfietzoglou, Ioannis; Theodossiadis, Panagiotis G
2012-07-01
To investigate by optical coherence tomography (OCT) the evolution of the photoreceptor layer and its association with best-corrected visual acuity (BCVA) in optic disc pit (ODP) maculopathy after successful surgical treatment. Fourteen eyes of 14 patients were included in this study, and followed up from 36 to 95 months (mean 57.36 ± 18.32 months). The follow-up period started at the time of complete subretinal fluid absorption. Examination was performed by time-domain OCT before and after treatment. Spectral-domain OCT was used after treatment. Parameters assessed were type of elevation, central foveal thickness, time elapsed from onset to treatment, type of treatment, BCVA, and inner segment outer segment (IS/OS) junction line. The IS/OS junction was characterized after treatment as intact, interrupted, or absent (not distinguishable). Significant restoration of the IS/OS junction line was first noticed between 6 and 12 months after fluid absorption (p = 0.02; Wilcoxon signed rank test). Restoration was continuous up to the 24th month of postoperative examination after fluid absorption (p = 0.14; Wilcoxon signed rank test). BCVA was 0.99 ± 0.38 logMAR before treatment, 0.81 ± 0.26 logMAR (p = 0.011; paired t-test) immediately after fluid absorption and 0.61 ± 0.33 logMAR (p = 0.026; one-way ANOVA) 24 months after fluid resolution. BCVA was significantly positively correlated with the integrity of the IS/OS junction line during follow-up (Pearson r = 0.775; p < 0.001). The IS/OS junction restoration cannot be detected immediately after fluid resolution in the majority of cases. It became evident 6-12 months later and was completed 24 months after fluid absorption. Improvement in BCVA was noticed only during the first 2 years of follow-up. No significant changes were noticed in BCVA or the IS/OS line after 2 years. Among the studied variables, the final photoreceptor layer condition and BCVA immediately after fluid absorption are the main factors predicting final BCVA after successful surgical treatment of ODP maculopathy.
NASA Astrophysics Data System (ADS)
Kierkels, R. G. J.; den Otter, L. A.; Korevaar, E. W.; Langendijk, J. A.; van der Schaaf, A.; Knopf, A. C.; Sijtsema, N. M.
2018-02-01
A prerequisite for adaptive dose-tracking in radiotherapy is the assessment of deformable image registration (DIR) quality. In this work, various metrics that quantify DIR uncertainties are investigated using realistic deformation fields of 26 head and neck and 12 lung cancer patients. Metrics related to the physiological feasibility of the deformation (the Jacobian determinant, harmonic energy (HE), and octahedral shear strain (OSS)) and to its numerical robustness (the inverse consistency error (ICE), transitivity error (TE), and distance discordance metric (DDM)) were investigated. The deformable registrations were performed using a B-spline transformation model. The DIR error metrics were log-transformed and correlated (Pearson) against the log-transformed ground-truth error on a voxel level. Correlations of r ⩾ 0.5 were found for the DDM and HE. Given a DIR tolerance threshold of 2.0 mm and a negative predictive value of 0.90, the DDM and HE thresholds were 0.49 mm and 0.014, respectively. In conclusion, the log-transformed DDM and HE can be used to identify voxels at risk for large DIR errors with a large negative predictive value. The HE and/or DDM can therefore be used to perform automated quality assurance of each CT-based DIR for head and neck and lung cancer patients.
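Of the feasibility metrics, the Jacobian determinant is the simplest to illustrate. A 2-D finite-difference sketch follows (the paper works with 3-D CT deformation fields; the random field here is only a stand-in):

```python
# Sketch: Jacobian determinant of a 2-D deformation x -> x + u(x).
# Values near 1 indicate volume preservation; values <= 0 flag folding.
import numpy as np

def jacobian_determinant(ux, uy, spacing=1.0):
    """ux, uy: displacement components on a regular grid (grid units)."""
    dux_dy, dux_dx = np.gradient(ux, spacing)   # axis 0 = rows (y), axis 1 = x
    duy_dy, duy_dx = np.gradient(uy, spacing)
    # The mapping is identity plus displacement, hence the added 1s.
    return (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx

rng = np.random.default_rng(0)
ux = rng.normal(0, 0.05, (64, 64)).cumsum(axis=1)   # smooth-ish random field
uy = rng.normal(0, 0.05, (64, 64)).cumsum(axis=0)
jd = jacobian_determinant(ux, uy)
print((jd <= 0).mean())   # fraction of "folded" voxels
```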
A shift from significance test to hypothesis test through power analysis in medical research.
Singh, G
2006-01-01
Until recently, the medical research literature exhibited substantial dominance of Fisher's significance-test approach to statistical inference, which concentrates on the probability of type I error, over the Neyman-Pearson hypothesis test, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or non-significant with a P value. The Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Based on the same theory, these two approaches address the same objective and reach conclusions in their own ways. The advancement in computing techniques and the availability of statistical software have resulted in the increasing application of power calculations in medical research, and thereby in reporting the results of significance tests in the light of the power of the test as well. The significance-test approach, when it incorporates power analysis, contains the essence of the hypothesis-test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis-test procedure.
ERIC Educational Resources Information Center
Nicassio, Frank J.
SwampLog is a type of journal keeping that records the facts of daily activities as experienced and perceived by practitioners. The label, "SwampLog," was inspired by Donald Schon's metaphor used to distinguish the "swamplands of practice" from the "high, hard ground of research." Keeping a SwampLog consists of recording four general types of…
Barth, Nancy A.; Veilleux, Andrea G.
2012-01-01
The U.S. Geological Survey (USGS) is currently updating at-site flood frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows/low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the LP3 distribution. A regional skew value of zero from a previously published report was used with a new estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation and a mean model based on annual peak-discharge data for 33 USGS stations throughout California’s desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method for fitting the LP3 distribution to the logarithms of annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used for detecting multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability of standard deviation. Consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with an MSE of 0.03 log units. Yet drainage area was found to be statistically significant at explaining the site-to-site variability in mean. The linear WLS regional mean model based on drainage area had a pseudo-R² of 51 percent and an MSE of 0.32 log units. The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.
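The WLS regional-mean model amounts to a weighted regression of at-site means on log drainage area. A sketch follows; the data are synthetic stand-ins for the 33 stations, and using record length as the weight is an assumption for illustration.

```python
# Sketch: weighted least squares fit of at-site mean log peak discharge
# against log10 drainage area, with record length as the weight.
import numpy as np

rng = np.random.default_rng(3)
n_sites = 33
log_area = rng.uniform(0.0, 3.0, n_sites)                 # log10 of area (mi^2)
record_len = rng.integers(10, 40, n_sites).astype(float)  # years of record
site_mean = 1.2 + 0.55 * log_area + rng.normal(0, 0.3, n_sites)

X = np.column_stack([np.ones(n_sites), log_area])
W = np.diag(record_len)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ site_mean)  # weighted normal eqs.
print(f"mean model: {beta[0]:.2f} + {beta[1]:.2f} * log10(area)")
```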
Processing mill scale study data on a type 650 electronic machine.
Floyd A. Johnson
1956-01-01
During April 1956, about 20,000 boards from 210 white fir and 290 western hemlock logs were measured at a lumber mill in western Washington. A magnetic-drum data-processing machine (type 650) was then used to calculate board-foot volumes by lumber grade for each log, and average board-foot volumes by lumber grade for each log diameter-class within log grades and...
Magnitude and frequency of floods in Washington
Cummans, J.E.; Collings, Michael R.; Nasser, Edmund George
1975-01-01
Relations are provided to estimate the magnitude and frequency of floods on Washington streams. Annual-peak-flow data from stream gaging stations on unregulated streams having 10 years or more of record were used to determine a log-Pearson Type III frequency curve for each station. Flood magnitudes having recurrence intervals of 2, 5, 10, 25, 50, and 100 years were then related to physical and climatic indices of the drainage basins by multiple-regression analysis using the Biomedical Computer Program BMDO2R. These regression relations are useful for estimating flood magnitudes of the specified recurrence intervals at ungaged or short-record sites. Separate sets of regression equations were defined for western and eastern parts of the State, and the State was further subdivided into 12 regions in which the annual floods exhibit similar flood characteristics. Peak flows are related most significantly in western Washington to drainage-area size and mean annual precipitation. In eastern Washington, they are related most significantly to drainage-area size, mean annual precipitation, and percentage of forest cover. Standard errors of estimate of the estimating relations range from 25 to 129 percent, and the smallest errors are generally associated with the more humid regions.
Dietsch, Benjamin J.; Wilson, Richard C.; Strauch, Kellan R.
2008-01-01
Repeated flooding of Omaha Creek has caused damage in the Village of Homer. Long-term degradation and bridge scouring have substantially changed the channel characteristics of Omaha Creek. Flood-plain managers, planners, homeowners, and others rely on maps to identify areas at risk of being inundated. To identify areas at risk of inundation by a flood having a 1-percent annual probability, maps were created using topographic data and water-surface elevations resulting from hydrologic and hydraulic analyses. The hydrologic analysis for the Omaha Creek study area was performed using historical peak flows obtained from the U.S. Geological Survey streamflow gage (station number 06601000). Flood frequency and magnitude were estimated using the PEAKFQ Log-Pearson Type III analysis software. The U.S. Army Corps of Engineers' Hydrologic Engineering Center River Analysis System, version 3.1.3, software was used to simulate the water-surface elevation for flood events. The calibrated model was used to compute streamflow-gage stages and inundation elevations for the discharges corresponding to floods of selected probabilities. Results of the hydrologic and hydraulic analyses indicated that flood inundation elevations are substantially lower than those from a previous study.
Estimation of magnitude and frequency of floods for streams in Puerto Rico : new empirical models
Ramos-Gines, Orlando
1999-01-01
Flood-peak discharges and frequencies are presented for 57 gaged sites in Puerto Rico for recurrence intervals ranging from 2 to 500 years. The log-Pearson Type III distribution, the methodology recommended by the United States Interagency Committee on Water Data, was used to determine the magnitude and frequency of floods at the gaged sites having 10 to 43 years of record. A technique is presented for estimating flood-peak discharges at recurrence intervals ranging from 2 to 500 years for unregulated streams in Puerto Rico with contributing drainage areas ranging from 0.83 to 208 square miles. Log-linear multiple regression analyses, using climatic and basin characteristics and peak-discharge data from the 57 gaged sites, were used to construct regression equations to transfer the magnitude and frequency information from gaged to ungaged sites. The equations use contributing drainage area, depth-to-rock, and mean annual rainfall as the basin and climatic characteristics for estimating flood-peak discharges. Examples are given to show a step-by-step procedure for calculating a 100-year flood at a gaged site, an ungaged site, a site near a gaged location, and a site between two gaged sites.
Oberg, Kevin A.; Mades, Dean M.
1987-01-01
Four techniques for estimating generalized skew in Illinois were evaluated: (1) a generalized skew map of the US; (2) an isoline map; (3) a prediction equation; and (4) a regional-mean skew. Peak-flow records at 730 gaging stations having 10 or more annual peaks were selected for computing station skews. Station skew values ranged from -3.55 to 2.95, with a mean of -0.11. Frequency curves computed for 30 gaging stations in Illinois using the variations of the regional-mean skew technique are similar to frequency curves computed using a skew map developed by the US Water Resources Council (WRC). Estimates of the 50-, 100-, and 500-yr floods computed for 29 of these gaging stations using the regional-mean skew techniques are within the 50% confidence limits of frequency curves computed using the WRC skew map. Although the three variations of the regional-mean skew technique were slightly more accurate than the WRC map, there is no appreciable difference between flood estimates computed using the variations of the regional-mean technique and flood estimates computed using the WRC skew map. (Peters-PTT)
Confidence intervals for expected moments algorithm flood quantile estimates
Cohn, Timothy A.; Lane, William L.; Stedinger, Jery R.
2001-01-01
Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient “weighting” procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed‐form method has been available for quantifying the uncertainty of EMA‐based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood‐quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25‐ to 100‐year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.
Circulating betatrophin is elevated in patients with type 1 and type 2 diabetes.
Yamada, Hodaka; Saito, Tomoyuki; Aoki, Atsushi; Asano, Tomoko; Yoshida, Masashi; Ikoma, Aki; Kusaka, Ikuyo; Toyoshima, Hideo; Kakei, Masafumi; Ishikawa, San-E
2015-01-01
There is evidence that betatrophin, a hormone derived from adipose tissue and liver, affects the proliferation of pancreatic beta cells in mice. The aim of this study was to examine circulating betatrophin concentrations in Japanese healthy controls and patients with type 1 and type 2 diabetes. A total of 76 subjects (12 healthy controls, 34 type 1 diabetes, 30 type 2 diabetes) were enrolled in the study. Circulating betatrophin was measured with an ELISA kit and clinical parameters related to betatrophin were analyzed statistically. Circulating betatrophin (log-transformed) was significantly increased in patients with diabetes compared with healthy subjects (healthy controls, 2.29 ± 0.51; type 1 diabetes, 2.94 ± 0.44; type 2 diabetes, 3.17 ± 0.18; p < 0.001; 4.1 to 5.4 times higher in pg/mL). Age, HbA1c, fasting plasma glucose, and log triglyceride were strongly associated with log betatrophin in all subjects (n = 76) in correlation analysis. In type 1 diabetes, there was a correlation between log betatrophin and log CPR. These results provide the first evidence that circulating betatrophin is significantly elevated in Japanese patients with diabetes. The findings of this pilot study also suggest a possible association between the level of betatrophin and the levels of glucose and triglycerides.
NASA Astrophysics Data System (ADS)
Wagenbrenner, J. W.; Robichaud, P. R.; Brown, R. E.
2016-10-01
Following wildfires, forest managers often consider salvage logging burned trees to recover monetary value of timber, reduce fuel loads, or to meet other objectives. Relatively little is known about the cumulative hydrologic effects of wildfire and subsequent timber harvest using logging equipment. We used controlled rill experiments in logged and unlogged (control) forests burned at high severity in northern Montana, eastern Washington, and southern British Columbia to quantify rill overland flow and sediment production rates (fluxes) after ground-based salvage logging. We tested different types of logging equipment (feller-bunchers, tracked and wheeled skidders, and wheeled forwarders) as well as traffic levels and the addition of slash to skid trails as a best management practice. Rill experiments were done at each location in the first year after the fire and repeated in subsequent years. Logging was completed in the first or second post-fire year. We found that ground-based logging using heavy equipment compacted soil, reduced soil water repellency, and reduced vegetation cover. Vegetation recovery rates were slower in most logged areas than the controls. Runoff rates were higher in the skidder and forwarder plots than their respective controls in the Montana and Washington sites in the year that logging occurred, and the difference in runoff between the skidder and control plots at the British Columbia site was nearly significant (p = 0.089). Most of the significant increases in runoff in the logged plots persisted for subsequent years. The type of skidder, the addition of slash, and the amount of forwarder traffic did not significantly affect the runoff rates. Across the three sites, rill sediment fluxes were 5-1900% greater in logged plots than the controls in the year of logging, and the increases were significant for all logging treatments except the low-use forwarder trails. There was no difference in the first-year sediment fluxes between the feller-buncher and tracked skidder plots, but the feller-buncher fluxes were lower than the values from the wheeled skidder plots. Manually adding slash after logging did not affect sediment flux rates. There were no significant changes in the control sediment fluxes over time, and none of the logging-equipment-impacted plots produced greater sediment fluxes than the controls in the second or third year after logging. Our results indicate that salvage logging increases the risk of sedimentation regardless of equipment type and amount of traffic, and that specific best management practices are needed to mitigate the hydrologic impacts of post-fire salvage logging.
Vision of low astigmats through thick and thin lathe-cut soft contact lenses.
Cho, P; Woo, G C
2001-01-01
Distance and near visual acuity of 13 low astigmats were determined in a double-masked experiment through thick and thin (centre thickness 0.12 mm and 0.06 mm, respectively) spherical lathe-cut soft lenses. For each lens type, distance and near logMAR VA and over-refraction were assessed with different logMAR VA charts. For 70% of the subjects, the residual astigmatism was significantly lower than the refractive astigmatism with the thicker lenses. No statistically significant differences in distance and near logMAR VA were found between the two lens types using any of the charts, though, in general, logMAR VA obtained through the thicker lens was better than through the thinner lens. The variabilities in distance and near logMAR VA between the two lens types increased with decreased contrast. The variabilities in distance logMAR VA were greater with Chinese charts than with English charts, and logMAR VA with Chinese charts was significantly worse for both lens types. Based on the results of this study, we concluded that thicker spherical lathe-cut soft lenses provide better vision in low astigmats. The Snellen acuity test is inadequate for vision assessment of soft contact lens wearers. When a patient wearing thin soft contact lenses complains of poor vision in spite of 6/6 or 6/5 Snellen acuity, changing to thicker lenses may be considered.
Inequalities of extended beta and extended hypergeometric functions.
Mondal, Saiful R
2017-01-01
We study the log-convexity of the extended beta functions. As a consequence, we establish Turán-type inequalities. The monotonicity, log-convexity, and log-concavity of extended hypergeometric functions are deduced by using the inequalities on extended beta functions. The particular cases of those results also give the Turán-type inequalities for extended confluent and extended Gaussian hypergeometric functions. Some reverses of Turán-type inequalities are also derived.
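As a pointer to the flavor of these results, and assuming the standard extension of the beta function with an exponential factor, log-convexity in the parameter immediately yields a Turán-type inequality:

```latex
% Assuming the standard extension
%   B(x,y;p) = \int_0^1 t^{x-1} (1-t)^{y-1} e^{-p/(t(1-t))} \, dt , \quad p \ge 0,
% the map p \mapsto e^{-p/(t(1-t))} is log-convex for each fixed t, and
% integrals of log-convex functions are log-convex, so B is log-convex in p.
% Hence the Turán-type inequality
\[
  B(x,y;p)^{2} \;\le\; B(x,y;\,p-1)\, B(x,y;\,p+1), \qquad p \ge 1 .
\]
```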
Time-scale effects on the gain-loss asymmetry in stock indices
NASA Astrophysics Data System (ADS)
Sándor, Bulcsú; Simonsen, Ingve; Nagy, Bálint Zsolt; Néda, Zoltán
2016-08-01
The gain-loss asymmetry observed in the inverse statistics of stock indices is present for logarithmic return levels over 2%, and it is the result of the non-Pearson-type autocorrelations in the index. These non-Pearson-type correlations can also be viewed as functionally dependent daily volatilities extending over a finite time interval. A generalized time-window shuffling method is used to show the existence of such autocorrelations. Their characteristic time scale proves to be smaller (less than 25 trading days) than was previously believed. It is also found that this characteristic time scale has decreased with the appearance of program trading in stock market transactions. Connections with the leverage effect are also established.
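The inverse statistics underlying the asymmetry are first-passage times of cumulative log-returns to a fixed level. A sketch on synthetic returns follows (real index data would replace them, and the search horizon is an assumption):

```python
# Sketch: waiting times until the cumulative log-return first reaches
# +rho (gain) or -rho (loss), for every starting day.
import numpy as np

def waiting_times(log_returns, rho, horizon=250):
    gains, losses = [], []
    for start in range(log_returns.size - 1):
        cum = np.cumsum(log_returns[start:start + horizon])
        up = np.flatnonzero(cum >= rho)
        down = np.flatnonzero(cum <= -rho)
        t_up = up[0] if up.size else None
        t_down = down[0] if down.size else None
        if t_up is not None and (t_down is None or t_up < t_down):
            gains.append(t_up + 1)
        elif t_down is not None:
            losses.append(t_down + 1)
    return np.array(gains), np.array(losses)

rng = np.random.default_rng(7)
r = rng.normal(3e-4, 0.01, 5000)       # synthetic daily log-returns
g, l = waiting_times(r, rho=0.05)      # rho = 5% log-return level
print(g.mean(), l.mean())              # asymmetry appears as differing means
```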
Defect detection on hardwood logs using high resolution three-dimensional laser scan data
Liya Thomas; Lamine Mili; Clifford A. Shaffer; Ed Thomas; Ed Thomas
2004-01-01
The location, type, and severity of external defects on hardwood logs and stems are the primary indicators of overall log quality and value. External defects provide hints about the internal log characteristics. Defect data would improve the sawyer's ability to process logs such that a higher-valued product (lumber) is generated. Using a high-resolution laser log...
Automated Grading System for Evaluation of Superficial Punctate Keratitis Associated With Dry Eye.
Rodriguez, John D; Lane, Keith J; Ousler, George W; Angjeli, Endri; Smith, Lisa M; Abelson, Mark B
2015-04-01
To develop an automated method of grading fluorescein staining that accurately reproduces the clinical grading system currently in use. From the slit lamp photograph of the fluorescein-stained cornea, the region of interest was selected and the punctate dot number calculated using software developed with the OpenCV computer vision library. Images (n = 229) were then divided into six incremental severity categories based on computed scores. The final selection of 54 photographs represented the full range of scores: nine images from each of the six categories. These were then evaluated by three investigators using a clinical 0 to 4 corneal staining scale. Pearson correlations were calculated to compare investigator scores, and mean investigator and automated scores. Lin's Concordance Correlation Coefficients (CCC) and Bland-Altman plots were used to assess agreement between methods and between investigators. Pearson's correlation between investigators was 0.914; the mean CCC between investigators was 0.882. Bland-Altman analysis indicated that scores assessed by investigator 3 were significantly higher than those of investigators 1 and 2 (paired t-test). The predicted grade was calculated to be G_pred = 1.48 log(N_dots) - 0.206. The two-point Pearson's correlation coefficient between the methods was 0.927 (P < 0.0001). The CCC between the predicted automated score G_pred and the mean investigator score was 0.929, 95% confidence interval (0.884-0.957). Bland-Altman analysis did not indicate bias. The difference in SD between the clinical and automated methods was 0.398. An objective, automated analysis of corneal staining provides a quality assurance tool that can be used to substantiate clinical grading of key corneal staining endpoints in multicentered clinical trials of dry eye.
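The reported relation is straightforward to apply; the abstract does not state the logarithm's base, so base 10 is assumed in this sketch, and the clamp to the clinical range is an added convenience.

```python
# Sketch of the reported grading relation G_pred = 1.48 * log(N_dots) - 0.206;
# log base 10 is an assumption (the abstract does not specify it).
import math

def predicted_grade(n_dots):
    """Map an automated punctate-dot count to the 0-4 clinical scale."""
    g = 1.48 * math.log10(n_dots) - 0.206
    return min(max(g, 0.0), 4.0)   # clamp to the clinical range

print(predicted_grade(50))   # ~2.3 on the 0-4 scale
```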
NASA Astrophysics Data System (ADS)
Podladchikova, O.; Lefebvre, B.; Krasnoselskikh, V.; Podladchikov, V.
An important task for the problem of coronal heating is to produce reliable evaluations of the statistical properties of energy release and eruptive events such as micro- and nanoflares in the solar corona. Different types of distributions for the peak flux, peak count rate measurements, pixel intensities, total energy flux or emission measure increases, or waiting times have appeared in the literature. This raises the question of a precise evaluation and classification of such distributions. For this purpose, we use the method proposed by K. Pearson at the beginning of the last century, based on the relationship between the first four moments of the distribution. Pearson's technique encompasses and classifies a broad range of distributions, including some of those which have appeared in the literature on coronal heating. This technique is successfully applied to simulated data from the model of Krasnoselskikh et al. (2002). It provides successful fits to the empirical distributions of the dissipated energy, and classifies them as a function of model parameters such as dissipation mechanisms and threshold.
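For readers unfamiliar with Pearson's moment-based classification, the sketch below computes the classical criterion kappa from the sample skewness and kurtosis; kappa's sign and magnitude index the Pearson family (for example, kappa < 0 indicates Type I, and 0 < kappa < 1 indicates Type IV). This is a generic textbook illustration, not the authors' code.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def pearson_kappa(x):
    """Pearson's criterion computed from the first four sample moments:
    beta1 is the squared skewness, beta2 the (non-excess) kurtosis."""
    b1 = skew(x) ** 2
    b2 = kurtosis(x, fisher=False)
    denom = 4.0 * (4.0 * b2 - 3.0 * b1) * (2.0 * b2 - 3.0 * b1 - 6.0)
    if np.isclose(denom, 0.0):
        return np.inf  # boundary case, e.g. Pearson Type III
    return b1 * (b2 + 3.0) ** 2 / denom
```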
Study on fracture identification of shale reservoir based on electrical imaging logging
NASA Astrophysics Data System (ADS)
Yu, Zhou; Lai, Fuqiang; Xu, Lei; Liu, Lin; Yu, Tong; Chen, Junyu; Zhu, Yuantong
2017-05-01
In recent years, shale gas exploration has developed rapidly and achieved major breakthroughs, making the study of mud-shale fractures extremely important: fracture development plays a key role in the development of gas reservoirs. Based on core observation and on the analysis of thin sections and laboratory data, this paper classifies the lithology of the shale reservoirs of well XX in the Zhanhua Depression. Building on the logging-curve response of mudstone fractures, the relationship between fracture development and log response is used, through combined conventional and electrical imaging logging, to identify fractures, determine the fracture types present in the area, and calculate fracture parameters quantitatively. The analysis of well XX in the Zhanhua Depression indicates that the fractures in the study area are mainly high-angle fractures and that microfractures are well developed. The geometry of the fractures can be clearly seen with imaging logging technology, allowing their type to be determined.
Querques, Lea; Querques, Giuseppe; Forte, Raimondo; Souied, Eric H
2012-06-01
To investigate the microperimetric correlations of autofluorescence imaging and optical coherence tomography (OCT) in dry age-related macular degeneration (AMD). Retrospective, observational, cross-sectional study. Consecutive patients with dry AMD underwent a complete ophthalmologic examination, including best-corrected visual acuity (BCVA), blue fundus autofluorescence (FAF), near-infrared autofluorescence, and spectral-domain (SD)-OCT with integrated microperimetry. A total of 58 eyes of 29 patients (21 women; mean age 73 ± 9 years) were included. Mean BCVA was 0.28 ± 0.3 logarithm of the minimal angle of resolution (logMAR). Overall, 2842 points were analyzed as regards FAF and near-infrared autofluorescence patterns, the status of inner segment/outer segment (IS/OS) interface, and retinal sensitivity. We observed a good correlation between the FAF and near-infrared autofluorescence patterns for all the points graded (increased FAF/near-infrared autofluorescence, Pearson rho = 0.6, P = .02; decreased FAF/near-infrared autofluorescence, Pearson rho = 0.7, P = .01; normal FAF/near-infrared autofluorescence, Pearson rho = 0.7, P = .01). Mean retinal sensitivity was significantly reduced in cases of decreased FAF (4.73 ± 2.23 dB) or increased FAF (4.75 ± 2.39 dB) compared with normal FAF (7.44 ± 2.34 dB) (P = .001). Mean retinal sensitivity was significantly reduced in case of decreased near-infrared autofluorescence (3.87 ± 2.28 dB), compared with increased near-infrared autofluorescence (5.76 ± 2.44 dB) (P = .02); mean retinal sensitivity in case of increased near-infrared autofluorescence was significantly reduced compared with normal near-infrared autofluorescence (7.15 ± 2.38 dB) (P = .002). On SD-OCT, there was a high inverse correlation between retinal sensitivity and rate of disruptions in IS/OS interface (Pearson rho = -0.72, P = .001). A reduced retinal sensitivity consistently correlates with decreased FAF/near-infrared autofluorescence and a disrupted IS/OS interface. Increased near-infrared autofluorescence may represent a useful method for detection of retinal abnormalities early in dry AMD development. Copyright © 2012 Elsevier Inc. All rights reserved.
Ravyts, Frédéric; Barbuti, Silvana; Frustoli, Maria Angela; Parolari, Giovanni; Saccani, Giovanna; De Vuyst, Luc; Leroy, Frédéric
2008-09-01
Application of bacteriocin-producing starter cultures of lactic acid bacteria in fermented sausage production contributes to food safety. This is sometimes hampered by limited efficacy in situ and by uncertainty about strain dependency and universal applicability for different sausage types. In the present study, a promising antilisterial-bacteriocin producer, Lactobacillus sakei CTC 494, was applied as a coculture in addition to commercial fermentative starters in different types of dry-fermented sausages. The strain was successful in both Belgian-type sausage and Italian salami that were artificially contaminated with about 3.5 log CFU/g of Listeria monocytogenes. After completion of the production process, this led to listerial reductions of up to 1.4 and 0.6 log CFU/g, respectively. In a control sausage, containing only the commercial fermentative starter, the reduction was limited to 0.8 log CFU/g for the Belgian-type recipe, where pH decreased from 5.9 to 4.9, whereas an increase of 0.2 log CFU/g was observed for Italian salami, in which the pH rose from 5.7 to 5.9 after an initial decrease to pH 5.3. In a Cacciatore recipe inoculated with 5.5 log CFU/g of L. monocytogenes and in the presence of L. sakei CTC 494, there was a listerial reduction of 1.8 log CFU/g at the end of the production process. This was superior to the effect obtained with the control sausage (0.8 log CFU/g). Two commercial antilisterial cultures yielded reductions of 1.2 and 1.5 log CFU/g. Moreover, repetitive DNA sequence-based PCR fingerprinting demonstrated the competitive superiority of L. sakei CTC 494.
Bielská, Lucie; Hovorková, Ivana; Kuta, Jan; Machát, Jiří; Hofman, Jakub
2017-01-01
Artificial soil (AS) is used in soil ecotoxicology as a test medium or reference matrix. AS is prepared according to standard OECD/ISO protocols, and components from local sources are usually used by laboratories. This may result in significant inter-laboratory variations in AS properties and, consequently, in the fate and bioavailability of tested chemicals. In order to reveal the extent and sources of variations, the batch equilibrium method was applied to measure the sorption of 2 model compounds (phenanthrene and cadmium) to 21 artificial soils from different laboratories. The distribution coefficients (Kd) of phenanthrene and cadmium varied over one order of magnitude: from 5.3 to 61.5 L/kg for phenanthrene and from 17.9 to 190 L/kg for cadmium. Variations in phenanthrene sorption could not be reliably explained by measured soil properties; not even by the total organic carbon (TOC) content, which was expected. Cadmium log Kd values significantly correlated with cation exchange capacity (CEC), pH(H2O), and pH(KCl), with Pearson correlation coefficients of 0.62, 0.80, and 0.79, respectively. CEC and pH(H2O) together were able to explain 72% of cadmium log Kd variability in the following model: log Kd = 0.29 pH(H2O) + 0.0032 CEC - 0.53. Similarly, 66% of cadmium log Kd variability could be explained by CEC and pH(KCl) in the model: log Kd = 0.27 pH(KCl) + 0.0028 CEC - 0.23. Variable cadmium sorption in differing ASs could be partially treated with these models. However, considering the unpredictable variability of phenanthrene sorption, a more reliable solution for reducing the variability of ASs from different laboratories would be better harmonization of AS preparation and composition. Copyright © 2016 Elsevier Inc. All rights reserved.
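The two reported regressions are straightforward to apply; a minimal sketch (the CEC units follow the study's tables and are not restated in the abstract, so they are an assumption here):

```python
def log_kd_cadmium(cec, ph_h2o=None, ph_kcl=None):
    """Cadmium log Kd from the regression models reported above."""
    if ph_h2o is not None:
        return 0.29 * ph_h2o + 0.0032 * cec - 0.53
    if ph_kcl is not None:
        return 0.27 * ph_kcl + 0.0028 * cec - 0.23
    raise ValueError("provide ph_h2o or ph_kcl")

# Example: pH(H2O) = 6.5, CEC = 120 gives log Kd ~ 1.74, i.e. Kd ~ 55 L/kg,
# inside the reported 17.9-190 L/kg range for cadmium.
print(10 ** log_kd_cadmium(120, ph_h2o=6.5))
```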
Koltun, G.F.
2009-01-01
This report describes the results of a study to determine frequency characteristics of postregulation annual peak flows at streamflow-gaging stations at or near the Lockington, Taylorsville, Englewood, Huffman, and Germantown dry dams in the Miami Conservancy District flood-protection system (southwestern Ohio) and five other streamflow-gaging stations in the Great Miami River Basin further downstream from one or more of the dams. In addition, this report describes frequency characteristics of annual peak elevations of the dry-dam pools. In most cases, log-Pearson Type III distributions were fit to postregulation annual peak-flow values through 2007 (the most recent year of published peak-flow values at the time of this analysis) and annual peak dam-pool storage values for the period 1922-2008 to determine peaks with recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years. For one streamflow-gaging station (03272100) with a short period of record, frequency characteristics were estimated by means of a process involving interpolation of peak-flow yields determined for an upstream and downstream gage. Once storages had been estimated for the various recurrence intervals, corresponding dam-pool elevations were determined from elevation-storage ratings provided by the Miami Conservancy District.
Archer, Roger J.
1978-01-01
Minimum average 7-day, 10-year flows at 67 gaging stations and 173 partial-record stations in the Hudson River basin are given in tabular form. Variation of the 7-day, 10-year low flow from point to point in selected reaches, and the corresponding times of travel, are shown graphically for Wawayanda Creek, Wallkill River, Woodbury-Moodna Creek, and the Fishkill Creek basins. The 7-day, 10-year low flow for the Saw Kill basin, and estimates of the 7-day, 10-year low flow of the Roeliff Jansen Kill at Ancram and of Birch Creek at Pine Hill, are given. Summaries of discharge from Rondout and Ashokan Reservoirs, in Ulster County, are also included. Minimum average 7-day, 10-year flows for gaging stations with 10 years or more of record were determined by log-Pearson Type III computation; those for partial-record stations were developed by correlation of discharge measurements made at the partial-record stations with discharge data from appropriate long-term gaging stations. The variation in low flows from point to point within the selected subbasins was estimated from available data and regional regression formulas. Time of travel at these flows in the four subbasins was estimated from available data and Boning's equations.
Accounting for Atmospheric Rivers in the Flood Frequency Estimation in the Western United States
NASA Astrophysics Data System (ADS)
Barth, N. A.; Villarini, G.; White, K. D.
2016-12-01
The Bulletin 17B framework assumes that the observed annual peak flow data included in a flood frequency analysis are a "representative time sample of random homogeneous events." However, flood frequency analysis over the western United States is complicated by annual peak flow records that frequently contain flows generated by distinctly different flood-generating mechanisms. Among these mechanisms, atmospheric rivers (ARs) are responsible for large, regional-scale floods. USGS streamgaging stations in the central Columbia River Basin in the Pacific Northwest, the Sierra Nevada, the central and southern California coast, and central Arizona show a mixture of 30-70% AR-generated flood peaks over the complete period of record. It is relatively common for the annual peaks fitted to the log-Pearson Type III distribution in these regions to show sharp breaks in slope or a curve that reverses direction, pointing to the presence of different flood-generating mechanisms. Following the recommendation of Bulletin 17B to develop separate frequency curves when different flood agents can be identified, we will perform flood frequency analyses accounting for the role played by ARs. We will compare and contrast the results obtained by treating all annual maximum discharge values as generated from a single population against those from a mixed-population analysis.
The Sensitivity of Flood Frequency Analysis to Record Length in the Contiguous United States
NASA Astrophysics Data System (ADS)
Hu, L.; Nikolopoulos, E. I.; Anagnostou, E. N.
2017-12-01
In flood frequency analysis (FFA), sufficiently long data series are important for obtaining reliable results: relative to the return periods of interest, at-site FFA usually requires large data sets, and the precision of at-site estimators and the associated time-sampling errors depend on the length of the gaged record. In this work, we quantify these differences across record lengths. We apply the generalized extreme value (GEV) and log-Pearson Type III (LP3) distributions, two traditional methods, to annual maximum streamflows, and propose quantitative measures, the relative differences in the median and the interquartile range (IQR), to compare flood-frequency performance across record lengths at 350 selected USGS gauges with more than 70 years of record in the contiguous United States. We also group the gauges into regions based on the hydrologic unit map and discuss geographic effects. The results indicate that long records avoid imposing an upper limit on the degree of sophistication of the analysis, and that working with longer records may yield more accurate results than working with shorter records. The influence of the hydrologic units of the watershed boundary dataset on these gauges is also presented. The California region is the most sensitive to record length, while gauges in the East perform steadily.
Peak-flow characteristics of Virginia streams
Austin, Samuel H.; Krstolic, Jennifer L.; Wiegand, Ute
2011-01-01
Peak-flow annual exceedance probabilities, also called probability-percent chance flow estimates, and regional regression equations are provided describing the peak-flow characteristics of Virginia streams. Statistical methods are used to evaluate peak-flow data. Analysis of Virginia peak-flow data collected from 1895 through 2007 is summarized. Methods are provided for estimating unregulated peak flow of gaged and ungaged streams. Station peak-flow characteristics identified by fitting the logarithms of annual peak flows to a Log Pearson Type III frequency distribution yield annual exceedance probabilities of 0.5, 0.4292, 0.2, 0.1, 0.04, 0.02, 0.01, 0.005, and 0.002 for 476 streamgaging stations. Stream basin characteristics computed using spatial data and a geographic information system are used as explanatory variables in regional regression model equations for six physiographic regions to estimate regional annual exceedance probabilities at gaged and ungaged sites. Weighted peak-flow values that combine annual exceedance probabilities computed from gaging station data and from regional regression equations provide improved peak-flow estimates. Text, figures, and lists are provided summarizing selected peak-flow sites, delineated physiographic regions, peak-flow estimates, basin characteristics, regional regression model equations, error estimates, definitions, data sources, and candidate regression model equations. This study supersedes previous studies of peak flows in Virginia.
Lopez, M.A.; Woodham, W.M.
1983-01-01
Hydrologic data collected on nine small urban watersheds in the Tampa Bay area of west-central Florida and a method for estimating peak discharges in the study area are described. The watersheds have mixed land use and range in size from 0.34 to 3.45 square miles. Watershed soils, land use, and storm-drainage system data are described. Urban development ranged from a sparsely populated area with open-ditch storm sewers and 19% impervious area to a completely sewered watershed with 61% impervious cover. The U.S. Geological Survey natural-basin and urban-watershed models were calibrated for the nine watersheds using 5-minute interval rainfall data from the Tampa, Florida, National Weather Service rain gage to simulate annual peak discharge for the period 1906-52. A log-Pearson Type III frequency analysis of the simulated annual maximum discharge was used to determine the 2-, 5-, 10-, 25-, 50-, and 100-year flood discharges for each watershed. Flood discharges were related in a multiple-linear regression to drainage area, channel slope, detention storage area, and an urban-development factor determined by the extent of curb and gutter street drainage and storm-sewer system. The average standard error for the regional relations ranged from + or - 32 to + or - 42%. (USGS)
Kodama, M; Kodama, T; Murakami, M
2000-01-01
The purpose of the present investigation is to elucidate the relation between the distribution pattern of age-adjusted incidence rate (AAIR) changes in time and space for 15 tumors of both sexes and the locations of the centers of centripetal (oncogene-type) and centrifugal (tumor-suppressor-gene-type) forces. The fitness of the observed log AAIR data sets to the oncogene-type and tumor-suppressor-gene-type equilibrium models, and the locations of the 2 force centers, were calculated by applying the least-squares method of Gauss to log AAIR pair data series with and without topological data manipulations, which are so designed as to let the log AAIR pair data series fit 2 variant (x, y) frameworks, the Rect-coordinates and the Para-coordinates. The 2 variant (x, y) coordinates are defined each as an (x, y) framework with its X axis crossed at a right angle to the regression line of the original log AAIR data (the Rect-coordinates) and as another framework with its X axis running in parallel with the regression line of the original log AAIR pair data series (the Para-coordinates). The fitness test of log AAIR data series to either the oncogene-activation-type equilibrium model (r = -1.000) or the tumor-suppressor-gene-inactivation type (r = 1.000) was conducted for each of the male-female-type pair data and the female-male-type data, for each of log AAIR changes in space and log AAIR changes in time, and for each of the 3 (x, y) frameworks in a given neoplasia of both sexes. The results obtained are as follows: 1) The positivity rates of the fitness test to the oncogene-type equilibrium model and the tumor-suppressor-gene-type model were 63.3% and 56.7%, respectively, for log AAIR changes in space, and 73.3% and 73.3% for log AAIR changes in time, as tested in 15 human neoplasias of both sexes. 2) Evidence was presented to indicate that the clearance of oncogene activation and tumor suppressor gene inactivation is the sine qua non premise of carcinogenesis. 3) The r profile, in which the correlation coefficient r, a measure of fitness to the 2 equilibrium models, is converted to either + (r > 0) or - (0 > r) for each of the original, Rect-, and Para-coordinates, was found to be informative in identifying a group of tumors with sex discrimination of cancer risk (log AAIR changes in space) or another group of environmental-hormone-linked tumors (log AAIR changes in time and space), a finding indicating that the r profile of a given tumor, when compared with other neoplasias, may provide a clue to investigating the biological behavior of the tumor. 4) The recent risk increase of skin cancer in both sexes, classified as an example of environmental-hormone-linked neoplasias, was found to commit its ascension of cancer risk along the direction of the centrifugal forces of time- and space-linked tumor suppressor gene inactivation plotted in the 2-dimension diagram. In conclusion, the centripetal force of oncogene activation and the centrifugal force of tumor suppressor gene inactivation find their sites of expression in the distribution pattern of a cancer risk parameter, log AAIR, of a given neoplasia of both sexes on the 2-dimension diagram. The application of the least-squares method of Gauss to the log AAIR changes in time and space, with and without topological modulations of the original sets, when presented in terms of the r profile, was found to be informative in understanding the behavioral characteristics of human neoplasias.
WILSON-BAPPU EFFECT: EXTENDED TO SURFACE GRAVITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Sunkyung; Kang, Wonseok; Lee, Jeong-Eun
2013-10-01
In 1957, Wilson and Bappu found a tight correlation between the stellar absolute visual magnitude (M_V) and the width of the Ca II K emission line for late-type stars. Here, we revisit the Wilson-Bappu relationship (WBR) to claim that the WBR can be an excellent indicator of the stellar surface gravity of late-type stars, as well as a distance indicator. We have measured the width (W) of the Ca II K emission line in high-resolution spectra of 125 late-type stars obtained with the Bohyunsan Optical Echelle Spectrograph and adopted from the Ultraviolet and Visual Echelle Spectrograph archive. Based on our measurement of the emission-line width (W), we have obtained a WBR of M_V = 33.76 - 18.08 log W. In order to extend the WBR to being a surface-gravity indicator, stellar atmospheric parameters such as effective temperature (T_eff), surface gravity (log g), metallicity ([Fe/H]), and micro-turbulence (xi_tur) have been derived from a self-consistent detailed analysis using the Kurucz stellar atmospheric model and the abundance analysis code MOOG. Using these stellar parameters and log W, we found that log g = -5.85 log W + 9.97 log T_eff - 23.48 for late-type stars.
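Both fitted relations can be evaluated directly; a minimal sketch, assuming W is measured in km/s (units are not restated in the abstract):

```python
import math

def wbr_absolute_magnitude(w):
    """M_V = 33.76 - 18.08 log W (the fitted Wilson-Bappu relation)."""
    return 33.76 - 18.08 * math.log10(w)

def wbr_surface_gravity(w, t_eff):
    """log g = -5.85 log W + 9.97 log T_eff - 23.48 (late-type stars)."""
    return -5.85 * math.log10(w) + 9.97 * math.log10(t_eff) - 23.48

# Illustrative solar-like values: W ~ 37 km/s, T_eff ~ 5777 K
print(wbr_absolute_magnitude(37.0))        # ~5.4 mag
print(wbr_surface_gravity(37.0, 5777.0))   # ~4.9 dex
```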
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darone, Gregory M.; Hmiel, Benjamin; Zhang, Jiliang
Fifteen ternary rare-earth metal gallium silicides have been synthesized using molten Ga as a flux. They have been structurally characterized by single-crystal and powder X-ray diffraction and form three different structures: the early to mid-late rare-earth metals RE = La–Nd, Sm, Gd–Ho, Yb, and Y form compounds with empirical formulae RE(GaxSi1−x)₂ (0.38 ≤ x ≤ 0.63), which crystallize with the tetragonal α-ThSi₂ structure type (space group I4₁/amd, No. 141; Pearson symbol tI12). The compounds of the late rare earths crystallize with the orthorhombic α-GdSi₂ structure type (space group Imma, No. 74; Pearson symbol oI12), with refined empirical formula REGaxSi2−x−y (RE = Ho, Er, Tm; 0.33 ≤ x ≤ 0.40, 0.10 ≤ y ≤ 0.18). LuGa₀.₃₂₍₁₎Si₁.₄₃₍₁₎ crystallizes with the orthorhombic YbMn₀.₁₇Si₁.₈₃ structure type (space group Cmcm, No. 63; Pearson symbol oC24). Structural trends are reviewed and analyzed, and the magnetic susceptibilities of the grown single crystals are presented. Graphical abstract: This article details the exploration of the RE–Ga–Si ternary system with the aim of systematically investigating the structural "boundaries" between the α-ThSi₂- and α-GdSi₂-type structures, and presents studies of the magnetic properties of the newly synthesized single-crystalline materials. Highlights: Light rare-earth gallium silicides crystallize in the α-ThSi₂ structure type. Heavy rare-earth gallium silicides crystallize in the α-GdSi₂ structure type. LuGaSi crystallizes in a defect variant of the YbMn₀.₁₇Si₁.₈₃ structure type.
J. W. Wagenbrenner; P. R. Robichaud; R. E. Brown
2016-01-01
Following wildfires, forest managers often consider salvage logging burned trees to recover monetary value of timber, reduce fuel loads, or to meet other objectives. Relatively little is known about the cumulative hydrologic effects of wildfire and subsequent timber harvest using logging equipment. We used controlled rill experiments in logged and unlogged (control)...
Waltemeyer, Scott D.
2006-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for reliable flood-hazard mapping in the Navajo Nation in Arizona, Utah, Colorado, and New Mexico. The Bureau of Indian Affairs, U.S. Army Corps of Engineers, and Navajo Nation requested that the U.S. Geological Survey update estimates of peak-discharge magnitude for gaging stations in the region and update regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites using data collected through 1999 at 146 gaging stations, an additional 13 years of peak-discharge data since a 1997 investigation, which used gaging-station data through 1986. The equations for estimation of peak discharges at ungaged sites were developed for flood regions 8, 11, high elevation, and 6, delineated on the basis of the hydrologic codes from the 1997 investigation. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 82 of the 146 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each region, logarithms of the peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, were then applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction for the 100-year peak discharge in region 8 was 53 percent. Across regions, the average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 45 to 83 percent for the 100-year flood. The estimated standard error of prediction for a hybrid method for region 11 was large in the 1997 investigation. No distinction of floods produced from a high-elevation region was presented in the 1997 investigation. Overall, the equations based on generalized least-squares regression techniques are considered to be more reliable than those in the 1997 report because of the increased length of record and improved GIS methods. Flood-frequency relations can be transferred to ungaged sites: peak discharge can be estimated at an ungaged site by direct application of the regional regression equation, or, at an ungaged site on a stream that has a gaging station upstream or downstream, by using the drainage-area ratio and the drainage-area exponent from the regional regression equation of the respective region.
Uses and Benefits of Journal Writing.
ERIC Educational Resources Information Center
Hiemstra, Roger
2001-01-01
Describes various types of journals: learning journals, diaries, dream logs, autobiographies, spiritual journals, professional journals, interactive reading logs, theory logs, and electronic journals. Lists benefits of journal writing and ways to overcome writing blocks. (Contains 19 references.) (SK)
ERIC Educational Resources Information Center
Harvey, Patricia Lee
2009-01-01
This study, based on Bandura's social cognitive theory, explored the two dimensions of teacher efficacy among reading program types (Harcourt; Houghton Mifflin; MacMillan McGraw Hill; Pearson Scott Foresman; and, Other) and selected demographic factors (school enrollment size; student ethnicity; school district of urban, rural, and suburban;…
The Robustness of LISREL Estimates in Structural Equation Models with Categorical Variables.
ERIC Educational Resources Information Center
Ethington, Corinna A.
This study examined the effect of type of correlation matrix on the robustness of LISREL maximum likelihood and unweighted least squares structural parameter estimates for models with categorical manifest variables. Two types of correlation matrices were analyzed; one containing Pearson product-moment correlations and one containing tetrachoric,…
Mastin, Mark C.; Konrad, Christopher P.; Veilleux, Andrea G.; Tecca, Alison E.
2016-09-20
An investigation into the magnitude and frequency of floods in Washington State computed the annual exceedance probability (AEP) statistics for 648 U.S. Geological Survey unregulated streamgages in and near the borders of Washington using the recorded annual peak flows through water year 2014. This report updates a previous report, published in 1998, that used annual peak flows through water year 1996. New in this report, a regional skew coefficient was developed for the Pacific Northwest region that includes areas in Oregon, Washington, Idaho, and western Montana within the Columbia River drainage basin south of the United States-Canada border, the coastal areas of Oregon and western Washington, and watersheds draining into Puget Sound, Washington. The skew coefficient is an important term in the log-Pearson Type III equation used to define the distribution of the log-transformed annual peaks. The Expected Moments Algorithm was used to fit historical and censored peak-flow data to the log-Pearson Type III distribution. A multiple Grubbs-Beck test was employed to censor low outliers of annual peak flows to improve the frequency distribution. This investigation also includes a section on observed trends in annual peak flows, which showed significant trends (p-value < 0.05) at 21 of 83 long-term sites, but with small-magnitude Kendall tau values suggesting a limited monotonic trend in the time series of annual peaks. Most of the sites with a significant trend in western Washington were positive, and all the sites with significant trends (three sites) in eastern Washington were negative. Multivariate regression analysis with measured basin characteristics and the AEP statistics at long-term, unregulated, and un-urbanized (defined for this investigation as drainage basins with less than 5 percent impervious land cover) streamgages within Washington, and some in Idaho and Oregon near the Washington border, was used to develop equations to estimate AEP statistics at ungaged basins. Washington was divided into four regions to improve the accuracy of the regression equations; a set of equations for eight selected AEPs was constructed for each region. Selected AEP statistics included the annual peak flows that are equaled or exceeded 50, 20, 10, 4, 2, 1, 0.5, and 0.2 percent of the time, equivalent to peak flows with 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. Annual precipitation and drainage area were the significant basin characteristics in the regression equations for all four regression regions in Washington, and forest cover was significant for the two regression regions in eastern Washington. Average standard error of prediction for the regional regression equations ranged from 70.19 to 125.72 percent for Regression Regions 1 and 2 on the eastern side of the Cascade Mountains and from 43.22 to 58.04 percent for Regression Regions 3 and 4 on the western side of the Cascade Mountains. The pseudo coefficient of determination (where a value of 100 signifies a perfect regression model) ranged from 68.39 to 90.68 for Regression Regions 1 and 2, and from 92.35 to 95.44 for Regions 3 and 4. The calculated AEP statistics for the streamgages and the regional regression equations are expected to be incorporated into StreamStats after the publication of this report. StreamStats is the interactive Web-based map tool created by the U.S.
Geological Survey to allow the user to choose a streamgage and obtain published statistics or choose ungaged locations where the program automatically applies the regional regression equations and computes the estimates of the AEP statistics.
Incidence of Russian log export tax: A vertical log-lumber model
Ying Lin; Daowei Zhang
2017-01-01
In 2007, Russia imposed an ad valorem tax on its log exports that lasted until 2012. In this paper, we use a Muth-type equilibrium displacement model to investigate the market and welfare impacts of this tax, utilizing a vertical linkage between log and lumber markets and considering factor substitution. Our theoretical analysis indicates...
NASA Astrophysics Data System (ADS)
Matthews, L.; Gurrola, H.
2015-12-01
Typical petrophysical well log correlation is accomplished by manual pattern recognition, leading to subjective correlations. The change in character in a well log depends on the change in the response of the tool to lithology. The petrophysical interpreter looks for a change in one log type that corresponds to the way a different tool responds to the same lithology. To develop an objective way to pick changes in well log characteristics, we adapt a method of first-arrival picking used on seismic data to analyze changes in the character of well logs. We chose the fractal method developed by Boschetti et al. (1996) [1]. This method worked better than we expected, and we found similar changes in the fractal dimension across very different tool types (sonic vs. density vs. gamma ray). We reason that the fractal response of the log is not dependent on the physics of the tool response but rather on the change in the complexity of the log data. When a formation changes physical character in time or space, the recorded magnitude in tool data changes complexity at the same time, even if the original tool responses are very different. The relative complexity of the data, regardless of the tool used, is dependent upon the complexity of the medium relative to the tool measurement. The relative complexity of the recorded magnitude data changes as a tool transitions from one character type to another. The character we are measuring is the roughness, or complexity, of the petrophysical curve. Our method provides a way to directly compare different log types based on a quantitative change in signal complexity. For example, changes in data complexity allow us to correlate gamma ray suites with sonic logs within a well and then across to an adjacent well with similar signatures. Our method allows reliable, automatic correlations to be made in data sets beyond the reasonable cognitive limits of geoscientists, in both speed and consistency of pattern recognition. [1] Fabio Boschetti, Mike D. Dentith, and Ron D. List (1996). A fractal-based algorithm for detecting first arrivals on seismic traces. Geophysics, Vol. 61, No. 4, p. 1095-1102.
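The following sketch illustrates the general idea with the Higuchi fractal-dimension estimator as a stand-in for the divider-based algorithm of [1]: a sliding-window fractal dimension is computed along a log curve, and abrupt jumps in the resulting profile flag changes in curve character that can be matched across tool types. The estimator choice and window parameters here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi estimate of the fractal dimension of a 1-D curve."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lk = []
    for k in ks:
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if idx.size < 2:
                continue
            d = np.abs(np.diff(x[idx])).sum()
            # normalize to the full record length, then by the scale k
            lengths.append(d * (n - 1) / ((idx.size - 1) * k) / k)
        lk.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lk), 1)
    return slope  # ~1 for smooth curves, approaching 2 for very rough ones

def fd_profile(curve, window=50, step=10):
    """Sliding-window fractal dimension; abrupt jumps flag changes in
    log character, independent of the physics of the logging tool."""
    return [higuchi_fd(curve[i:i + window])
            for i in range(0, len(curve) - window + 1, step)]
```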
Albert, David M; Schoen, John W
2013-08-01
The forests of southeastern Alaska remain largely intact and contain a substantial proportion of Earth's remaining old-growth temperate rainforest. Nonetheless, industrial-scale logging has occurred since the 1950s within a relatively narrow range of forest types that has never been quantified at a regional scale. We analyzed historical patterns of logging from 1954 through 2004 and compared the relative rates of change among forest types, landform associations, and biogeographic provinces. We found a consistent pattern of disproportionate logging at multiple scales, including large-tree stands and landscapes with contiguous productive old-growth forests. The highest rates of change were among landform associations and biogeographic provinces that originally contained the largest concentrations of productive old growth (i.e., timber volume >46.6 m³/ha). Although only 11.9% of productive old-growth forests have been logged region wide, large-tree stands have been reduced by at least 28.1%, karst forests by 37%, and landscapes with the highest volume of contiguous old growth by 66.5%. Within some island biogeographic provinces, loss of rare forest types may place local viability of species dependent on old growth at risk of extirpation. Examination of historical patterns of change among ecological forest types can facilitate planning for conservation of biodiversity and sustainable use of forest resources. © 2013 Society for Conservation Biology.
Functional response of ungulate browsers in disturbed eastern hemlock forests
DeStefano, Stephen
2015-01-01
Ungulate browsing in predator depleted North American landscapes is believed to be causing widespread tree recruitment failures. However, canopy disturbances and variations in ungulate densities are sources of heterogeneity that can buffer ecosystems against herbivory. Relatively little is known about the functional response (the rate of consumption in relation to food availability) of ungulates in eastern temperate forests, and therefore how “top down” control of vegetation may vary with disturbance type, intensity, and timing. This knowledge gap is relevant in the Northeastern United States today with the recent arrival of hemlock woolly adelgid (HWA; Adelges tsugae) that is killing eastern hemlocks (Tsuga canadensis) and initiating salvage logging as a management response. We used an existing experiment in central New England begun in 2005, which simulated severe adelgid infestation and intensive logging of intact hemlock forest, to examine the functional response of combined moose (Alces americanus) and white-tailed deer (Odocoileus virginianus) foraging in two different time periods after disturbance (3 and 7 years). We predicted that browsing impacts would be linear or accelerating (Type I or Type III response) in year 3 when regenerating stem densities were relatively low and decelerating (Type II response) in year 7 when stem densities increased. We sampled and compared woody regeneration and browsing among logged and simulated insect attack treatments and two intact controls (hemlock and hardwood forest) in 2008 and again in 2012. We then used AIC model selection to compare the three major functional response models (Types I, II, and III) of ungulate browsing in relation to forage density. We also examined relative use of the different stand types by comparing pellet group density and remote camera images. In 2008, total and proportional browse consumption increased with stem density, and peaked in logged plots, revealing a Type I response. In 2012, stem densities were greatest in girdled plots, but proportional browse consumption was highest at intermediate stem densities in logged plots, exhibiting a Type III (rather than a Type II) functional response. Our results revealed shifting top–down control by herbivores at different stages of stand recovery after disturbance and in different understory conditions resulting from logging vs. simulated adelgid attack. If forest managers wish to promote tree regeneration in hemlock stands that is more resistant to ungulate browsers, leaving HWA-infested stands unmanaged may be a better option than preemptively logging them.
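For readers who want to reproduce the model-selection step on their own data, here is a minimal sketch of fitting the three classical functional-response forms by least squares and ranking them with a Gaussian-error AIC; the study's exact likelihoods and covariates are not specified in this abstract, so this is an illustration of the approach, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate functional-response forms: consumption C vs. stem density N.
def type1(N, a):          # linear
    return a * N

def type2(N, a, h):       # decelerating (Holling disc equation)
    return a * N / (1.0 + a * h * N)

def type3(N, a, h):       # accelerating, then saturating (sigmoid)
    return a * N**2 / (1.0 + a * h * N**2)

def aic_gaussian(y, yhat, n_params):
    """AIC under the usual Gaussian-error assumption for least squares."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * n_params

def rank_models(N, C):
    scores = {}
    for name, f, k in [("I", type1, 1), ("II", type2, 2), ("III", type3, 2)]:
        p, _ = curve_fit(f, N, C, p0=[0.1] * k, maxfev=10000)
        scores[name] = aic_gaussian(C, f(N, *p), k)
    return scores  # smallest AIC = best-supported response type
```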
Van Emon, Jeanette M.; Chuang, Jane C.; Lordo, Robert A.; Schrock, Mary E.; Nichkova, Mikaela; Gee, Shirley J.; Hammock, Bruce D.
2010-01-01
A 96-microwell enzyme-linked immunosorbent assay (ELISA) method was evaluated to determine PCDDs/PCDFs in sediment and soil samples from an EPA Superfund site. Samples were prepared and analyzed by both the ELISA and a gas chromatography/high-resolution mass spectrometry (GC/HRMS) method. Comparable method precision, accuracy, and detection level (8 ng kg⁻¹) were achieved by the ELISA method with respect to GC/HRMS. However, the extraction and cleanup method developed for the ELISA requires refinement for the soil type that yielded a waxy residue after sample processing. Four types of statistical analyses (Pearson correlation coefficient, paired t-test, nonparametric tests, and McNemar's test of association) were performed to determine whether the two methods produced statistically different results. The log-transformed ELISA-derived 2,3,7,8-tetrachlorodibenzo-p-dioxin values and log-transformed GC/HRMS-derived TEQ values were significantly correlated (r = 0.79) at the 0.05 level. The median difference in values between ELISA and GC/HRMS was not significant at the 0.05 level. Low false negative and false positive rates (<10%) were observed for the ELISA when compared to the GC/HRMS at 1000 ng TEQ kg⁻¹. The findings suggest that immunochemical technology could be a complementary monitoring tool for determining concentrations at the 1000 ng TEQ kg⁻¹ action level for contaminated sediment and soil. The ELISA could also be used in an analytical triage approach to screen and rank samples prior to instrumental analysis.
Omer, M K; Alvseike, O; Holck, A; Axelsson, L; Prieto, M; Skjerve, E; Heir, E
2010-12-01
The effect of high pressure processing (HPP) on the survival of verotoxigenic Escherichia coli (VTEC) in two types of Norwegian dry-fermented sausages was studied. Two different recipes for each sausage type were produced. The sausage batter was inoculated with 6.8 log10 CFU/g of VTEC O103:H25. After fermentation, drying, and maturation, slices of finished sausages were vacuum packed and subjected to two treatment regimes of HPP. One group was treated at 600 MPa for 10 min and another at three cycles of 600 MPa for 200 s per cycle. A generalized linear model split by recipe type showed that these two HPP treatments on standard-recipe sausages reduced E. coli by 2.9 log10 CFU/g and 3.3 log10 CFU/g, respectively. In the recipe with higher levels of dextrose, sodium chloride, and sodium nitrite, the E. coli reduction was 2.7 log10 CFU/g for both treatments. The data show that HPP has the potential to make the sausages safer and also that the effect depends somewhat on the recipe. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
Species-area curves indicate the importance of habitats' contributions to regional biodiversity
Chong, G.W.; Stohlgren, T.J.
2007-01-01
We examined species-area curves, species composition and similarity (Jaccard's coefficients), and species richness in 17 vegetation types to develop a composite index of a vegetation type's contribution to regional species richness. We collected data at scales from 1 to 1000 m² in 147 nested plots in Rocky Mountain National Park, Colorado, USA to compare three species-area curve models' abilities to estimate the number of species observed in each vegetation type. The log(species)-log(area) curve had the largest adjusted coefficients of determination (r² values) in 12 of the 17 types, followed by the species-log(area) curve with five of the highest values. When the slopes of the curves were corrected for species overlap among plots with Jaccard's coefficients, the species-log(area) curves estimated values closest to those observed. We combined information from species-area curves and measures of heterogeneity with information on the area covered by each vegetation type and found that the types making the greatest contributions to regional biodiversity covered the smallest areas. This approach may provide an accurate and relatively rapid way to rank hotspots of plant diversity within regions of interest.
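A minimal sketch of fitting the best-performing model here, the log(species)-log(area) curve S = c * A^z, by least squares with an adjusted r²; the example data are illustrative values at the study's 1-1000 m² plot scales, not the study's measurements.

```python
import numpy as np

def fit_log_log_sar(area, species):
    """Fit log(S) = log(c) + z*log(A) and return c, z, adjusted r^2."""
    x = np.log10(np.asarray(area, dtype=float))
    y = np.log10(np.asarray(species, dtype=float))
    z, log_c = np.polyfit(x, y, 1)
    resid = y - (z * x + log_c)
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    n, p = len(y), 1
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    return 10 ** log_c, z, r2_adj

# Hypothetical richness counts at nested-plot scales of 1 to 1000 m^2
print(fit_log_log_sar([1, 10, 100, 1000], [4, 9, 18, 33]))
```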
Gotvald, Anthony J.; Barth, Nancy A.; Veilleux, Andrea G.; Parrett, Charles
2012-01-01
Methods for estimating the magnitude and frequency of floods in California that are not substantially affected by regulation or diversions have been updated. Annual peak-flow data through water year 2006 were analyzed for 771 streamflow-gaging stations (streamgages) in California having 10 or more years of data. Flood-frequency estimates were computed for the streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to logarithms of annual peak flows for each streamgage. Low-outlier and historic information were incorporated into the flood-frequency analysis, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low outliers. Special methods for fitting the distribution were developed for streamgages in the desert region in southeastern California. Additionally, basin characteristics for the streamgages were computed by using a geographical information system. Regional regression analysis, using generalized least squares regression, was used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins in California that are outside of the southeastern desert region. Flood-frequency estimates and basin characteristics for 630 streamgages were combined to form the final database used in the regional regression analysis. Five hydrologic regions were developed for the area of California outside of the desert region. The final regional regression equations are functions of drainage area and mean annual precipitation for four of the five regions. In one region, the Sierra Nevada region, the final equations are functions of drainage area, mean basin elevation, and mean annual precipitation. Average standard errors of prediction for the regression equations in all five regions range from 42.7 to 161.9 percent. For the desert region of California, an analysis of 33 streamgages was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the log-Pearson Type III distribution. The regional estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final regional regression equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent. Annual peak-flow data through water year 2006 were analyzed for eight streamgages in California having 10 or more years of data considered to be affected by urbanization. Flood-frequency estimates were computed for the urban streamgages by fitting a Pearson Type III distribution to logarithms of annual peak flows for each streamgage. Regression analysis could not be used to develop flood-frequency estimation equations for urban streams because of the limited number of sites. Flood-frequency estimates for the eight urban sites were graphically compared to flood-frequency estimates for 630 non-urban sites. The regression equations developed from this study will be incorporated into the U.S. Geological Survey (USGS) StreamStats program. The StreamStats program is a Web-based application that provides streamflow statistics and basin characteristics for USGS streamgages and ungaged sites of interest. 
StreamStats can also compute basin characteristics and provide estimates of streamflow statistics for ungaged sites when users select the location of a site along any stream in California.
Veilleux, Andrea G.; Stedinger, Jery R.; Eash, David A.
2012-01-01
This paper summarizes methodological advances in regional log-space skewness analyses that support flood-frequency analysis with the log Pearson Type III (LP3) distribution. A Bayesian Weighted Least Squares/Generalized Least Squares (B-WLS/B-GLS) methodology that relates observed skewness coefficient estimators to basin characteristics in conjunction with diagnostic statistics represents an extension of the previously developed B-GLS methodology. B-WLS/B-GLS has been shown to be effective in two California studies. B-WLS/B-GLS uses B-WLS to generate stable estimators of model parameters and B-GLS to estimate the precision of those B-WLS regression parameters, as well as the precision of the model. The study described here employs this methodology to develop a regional skewness model for the State of Iowa. To provide cost-effective peak-flow data for smaller drainage basins in Iowa, the U.S. Geological Survey operates a large network of crest-stage gages (CSGs) that only record flow values above an identified recording threshold (thus producing a censored data record). CSGs are different from continuous-record gages, which record almost all flow values and have been used in previous B-GLS and B-WLS/B-GLS regional skewness studies. The complexity of analyzing a large CSG network is addressed by using the B-WLS/B-GLS framework along with the Expected Moments Algorithm (EMA). Because EMA allows for the censoring of low outliers, as well as the use of estimated interval discharges for missing, censored, and historic data, it complicates the calculations of effective record length (and effective concurrent record length) used to describe the precision of sample estimators because the peak discharges are no longer solely represented by single values. Thus new record length calculations were developed. The regional skewness analysis for the State of Iowa illustrates the value of the new B-WLS/B-GLS methodology with these new extensions.
Selective Logging, Fire, and Biomass in Amazonia
NASA Technical Reports Server (NTRS)
Houghton, R. A.
1999-01-01
Biomass and rates of disturbance are major factors in determining the net flux of carbon between terrestrial ecosystems and the atmosphere, and neither of them is well known for most of the earth's surface. Satellite data over large areas are beginning to be used systematically to measure rates of two of the most important types of disturbance, deforestation and reforestation, but these are not the only types of disturbance that affect carbon storage. Other examples include selective logging and fire. In northern mid-latitude forests, logging and subsequent regrowth of forests have, in recent decades, contributed more to the net flux of carbon between terrestrial ecosystems and the atmosphere than any other type of land use. In the tropics, logging is also becoming increasingly important. According to the FAO/UNEP assessment of tropical forests, about 25% of the total area of productive forests had been logged one or more times in the 60-80 years before 1980. The fraction must be considerably greater at present. Thus, deforestation by itself accounts for only a portion of the emissions of carbon from land. Furthermore, as rates of deforestation become more accurately measured with satellites, uncertainty in biomass will become the major factor accounting for the remaining uncertainty in estimates of carbon flux. An approach is needed for determining the biomass of terrestrial ecosystems. Selective logging is increasingly important in Amazonia, yet it has not been included in region-wide, satellite-based assessments of land-cover change, in part because it is not as striking as deforestation. Nevertheless, logging affects terrestrial carbon storage both directly and indirectly. Besides the losses of carbon directly associated with selective logging, logging also increases the likelihood of fire.
Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.
Yin, Guosheng; Ma, Yanyuan
2013-01-01
The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adjusts exactly the right amount of randomness to the test statistic, and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructs the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
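A minimal sketch of one draw of this bootstrap-modified statistic, under a normal working model with equiprobable bins: the MLE is refit on a bootstrap resample, and the observed counts are then formed from the original data in bins that are equiprobable under that fitted model. The normal model and the chi-squared reference with bins − 1 degrees of freedom are illustrative assumptions; consult the paper for the exact reference distribution.

```python
import numpy as np
from scipy import stats

def bootstrap_pearson_statistic(data, n_bins=10, dist=stats.norm, rng=None):
    """One realization of the bootstrap-modified Pearson statistic."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data, dtype=float)
    n = data.size
    boot = rng.choice(data, size=n, replace=True)
    params = dist.fit(boot)                      # bootstrap-sample MLE
    u = dist.cdf(data, *params)                  # PIT of the ORIGINAL data
    observed, _ = np.histogram(u, bins=np.linspace(0.0, 1.0, n_bins + 1))
    expected = n / n_bins                        # equiprobable bins
    return np.sum((observed - expected) ** 2 / expected)

x = stats.norm.rvs(loc=2.0, scale=1.5, size=200, random_state=0)
t = bootstrap_pearson_statistic(x, rng=0)
print(t, stats.chi2.sf(t, df=10 - 1))            # statistic and p-value
```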
Cost of skid roads for arch logging in West Virginia
George R., Jr. Trimble; Carl R. Barr
1960-01-01
In the mountain hardwood country of the northern Appalachians, tree-length skidding with tractor and arch has proved to be economical logging. One essential part of this type of logging is that tree-length logs are winched to the skid roads: tractor and arch do not run around through the woods. Winching distance is commonly 200 to 300 feet; and occasionally an extra...
Comparison of logging residue from lump sum and log scale timber sales.
James O Howard; Donald J. DeMars
1985-01-01
Data from 1973 and 1980 logging residues studies were used to compare the volume of residue from lump sum and log scale timber sales. Covariance analysis was used to adjust the mean volume for each data set for potential variation resulting from differences in stand conditions. Mean residue volumes from the two sale types were significantly different at the 5-percent...
Wacker, Michael A.
2010-01-01
Borehole geophysical logs were obtained from selected exploratory coreholes in the vicinity of the Florida Power and Light Company Turkey Point Power Plant. The geophysical logging tools used and logging sequences performed during this project are summarized herein to include borehole logging methods, descriptions of the properties measured, types of data obtained, and calibration information.
Standardized Pearson type 3 density function area tables
NASA Technical Reports Server (NTRS)
Cohen, A. C.; Helm, F. R.; Sugg, M.
1971-01-01
Tables constituting an extension of similar tables published in 1936 are presented in report form. Single- and triple-parameter gamma functions are discussed. The report tables should interest persons concerned with the development and use of numerical analysis and evaluation methods.
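The tabulated quantities can now be reproduced numerically; a short sketch using scipy's standardized Pearson Type III distribution (skew = 0 reduces to the standard normal):

```python
from scipy import stats

# Area (cumulative probability) of the standardized Pearson Type III
# density at deviate z for a given skew coefficient -- the quantity
# the report tabulates.
for skew in (0.0, 0.5, 1.0):
    for z in (-1.0, 0.0, 1.0, 2.0):
        area = stats.pearson3.cdf(z, skew)
        print(f"skew={skew:3.1f}  z={z:4.1f}  area={area:.4f}")
```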
Sando, Steven K.; Driscoll, Daniel G.; Parrett, Charles
2008-01-01
Numerous users, including the South Dakota Department of Transportation, have continuing needs for peak-flow information for the design of highway infrastructure and many other purposes. This report documents results from a cooperative study between the South Dakota Department of Transportation and the U.S. Geological Survey to provide an update of peak-flow frequency estimates for South Dakota. Estimates of peak-flow magnitudes for 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals are reported for 272 streamflow-gaging stations, which include most gaging stations in South Dakota with 10 or more years of systematic peak-flow records through water year 2001. Recommended procedures described in Bulletin 17B were used as primary guidelines for developing peak-flow frequency estimates. The computer program PEAKFQ developed by the U.S. Geological Survey was used to run the frequency analyses. Flood frequencies for all stations were initially analyzed by using standard Bulletin 17B default procedures for fitting the log-Pearson III distribution. The resulting preliminary frequency curves were then plotted on a log-probability scale, and fits of the curves with systematic data were evaluated. In many cases, results of the default Bulletin 17B analyses were determined to be satisfactory. In other cases, however, the results could be improved by using various alternative procedures for frequency analysis. Alternative procedures for some stations included adjustments to skew coefficients or use of user-defined low-outlier criteria. Peak-flow records for many gaging stations are strongly influenced by low- or zero-flow values. This situation often results in a frequency curve that plots substantially above the systematic record data points at the upper end of the frequency curve. Adjustments to low-outlier criteria reduced the influence of very small peak flows and generally focused the analyses on the upper parts of the frequency curves (10- to 500-year recurrence intervals). The most common alternative procedures involved several different methods to extend systematic records, which was done primarily to address biases resulting from nonrepresentative climatic conditions during several specific periods of record and to reduce inconsistencies among multiple gaging stations along common stream channels with different periods of record. In some cases, records for proximal stations could be combined directly. In other cases, the two-station comparison procedure recommended in Bulletin 17B was used to adjust the mean and standard deviation of the logs of the systematic data for a target station on the basis of correlation with concurrent records from a nearby long-term index station. In some other cases, a 'mixed-station procedure' was used to adjust the log-distributional parameters for a target station, on the basis of correlation with one or more index stations, for the purpose of fitting the log-Pearson III distribution. Historical adjustment procedures were applied to peak-flow frequency analyses for 17 South Dakota gaging stations. A historical adjustment period extending back to 1881 (121 years) was used for 12 gaging stations in the James and Big Sioux River Basins, and various other adjustment periods were used for additional stations. Large peak flows that occurred in 1969 and 1997 accounted for 13 of the 17 historical adjustments. Other years for which historical peak flows were used include 1957, 1962, 1992, and 2001. 
A regional mixed-population analysis was developed to address complications associated with many high outliers for the Black Hills region. This analysis included definition of two populations of flood events. The population of flood events that composes the main body of peak flows for a given station is considered the 'ordinary-peaks population,' and the population of unusually large peak flows that plot substantially above the main body of peak flows on a log-probability scale is co...
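As a rough illustration of the default fitting step described above (a minimal sketch, not the PEAKFQ implementation): a method-of-moments log-Pearson Type III fit on the base-10 logs of annual peaks, with a simple user-defined low-outlier cutoff of the kind the report applies. The station skew here is unweighted, and the peak-flow values are invented for illustration.

    import numpy as np
    from scipy import stats

    def lp3_quantiles(peaks_cfs, recurrence_yrs, low_outlier=None):
        """Fit LP3 by moments on log10 peaks; return T-year discharges."""
        q = np.asarray(peaks_cfs, dtype=float)
        if low_outlier is not None:
            q = q[q >= low_outlier]          # focus the fit on the upper curve
        logs = np.log10(q)
        mu, sigma = logs.mean(), logs.std(ddof=1)
        skew = stats.skew(logs, bias=False)  # station skew; 17B would weight this
        aep = 1.0 / np.asarray(recurrence_yrs, dtype=float)
        z = stats.pearson3.ppf(1.0 - aep, skew, loc=mu, scale=sigma)
        return 10.0 ** z

    # hypothetical annual peaks (ft3/s); the 55 is screened out as a low outlier
    peaks = [820, 1440, 310, 2600, 950, 55, 1800, 700, 4100, 1250, 640, 2100]
    print(lp3_quantiles(peaks, [2, 10, 100, 500], low_outlier=100.0))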
Weight and volume variation in truckloads of logs hauled in the central Appalachians
Floyd G. Timson
1974-01-01
Variation in volume and weight was found among loaded log trucks even when such factors as truck type, logging job, and driver influence were eliminated. A load range of 10,000 pounds or 1,000 board feet was commonplace for the same truck, driver, and cutting site. Differences in log size, shape, weight, and species caused a major share of this variation. Yet,...
Tests of Independence in Contingency Tables with Small Samples: A Comparison of Statistical Power.
ERIC Educational Resources Information Center
Parshall, Cynthia G.; Kromrey, Jeffrey D.
1996-01-01
Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
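For intuition, the comparison the study describes can be reproduced in miniature with a Monte Carlo sketch (a rough illustration with made-up marginals and effect size, not the authors' simulation design): estimate the power of each of the four tests on small 2x2 tables.

    import numpy as np
    from scipy.stats import chi2_contingency, fisher_exact

    rng = np.random.default_rng(1)

    def power(pvalue_fn, n=20, p1=(0.7, 0.3), p2=(0.3, 0.7), reps=2000, alpha=0.05):
        """Monte Carlo power: fraction of simulated 2x2 tables with p < alpha."""
        hits = 0
        for _ in range(reps):
            table = np.array([rng.multinomial(n // 2, p1),
                              rng.multinomial(n - n // 2, p2)])
            if table.sum(axis=0).min() == 0:   # empty column: test undefined,
                continue                       # count as a non-rejection
            hits += pvalue_fn(table) < alpha
        return hits / reps

    tests = {
        "Pearson chi-square": lambda t: chi2_contingency(t, correction=False)[1],
        "Yates-corrected":    lambda t: chi2_contingency(t, correction=True)[1],
        "likelihood ratio":   lambda t: chi2_contingency(t, correction=False,
                                                         lambda_="log-likelihood")[1],
        "Fisher exact":       lambda t: fisher_exact(t)[1],
    }
    for name, fn in tests.items():
        print(f"{name:18s} power ~ {power(fn):.2f}")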
Revisiting Pearson's climate and forest type studies on the Fort Valley Experimental Forest
Joseph E. Crouse; Margaret M. Moore; Peter Fule
2008-01-01
Five weather station sites were established in 1916 by Fort Valley personnel along an elevational gradient from the Experimental Station to near the top of the San Francisco Peaks to investigate the factors that controlled and limited forest types. The stations were located in the ponderosa pine, Douglas-fir, limber pine, Engelmann spruce, and Engelmann spruce/...
NASA Astrophysics Data System (ADS)
Tsai, Meng-Jung; Hsu, Chung-Yuan; Tsai, Chin-Chung
2012-04-01
Due to a growing trend of exploring scientific knowledge on the Web, a number of studies have examined students' online searching strategies. Investigations of online searching generally employ surveys, interviews, screen capturing, or transaction logs. The present study first used a survey, the Online Information Searching Strategies Inventory (OISSI), to examine users' searching strategies in terms of control, orientation, trial and error, problem solving, purposeful thinking, selecting main ideas, and evaluation, defined here as implicit strategies. Second, it used screen capturing to investigate the students' searching behaviors regarding the number of keywords, the quantity and depth of Web page exploration, and time attributes, defined here as explicit strategies. Ultimately, the study explored the role these two types of strategies played in predicting the students' online science information searching outcomes. A total of 103 Grade 10 students were recruited from a high school in northern Taiwan. Pearson correlation and multiple regression analyses showed that the students' explicit strategies, particularly the time attributes proposed in the present study, were more successful than their implicit strategies in predicting their science information searching outcomes. Participants who spent more time on detailed reading (explicit strategies) and had better skills in evaluating Web information (implicit strategies) tended to have superior searching performance.
Performance of the Xpert HIV-1 Viral Load Assay: a Systematic Review and Meta-analysis.
Nash, Madlen; Huddart, Sophie; Badar, Sayema; Baliga, Shrikala; Saravu, Kavitha; Pai, Madhukar
2018-04-01
Viral load (VL) is the preferred treatment-monitoring approach for HIV-positive patients. However, more rapid, near-patient, and low-complexity assays are needed to scale up VL testing. The Xpert HIV-1 VL assay (Cepheid, Sunnyvale, CA) is a new, automated molecular test, and it can leverage the GeneXpert systems that are being used widely for tuberculosis diagnosis. We systematically reviewed the evidence on the performance of this new tool in comparison to established reference standards. A total of 12 articles (13 studies) in which HIV patient VLs were compared between the Xpert HIV VL assay and a reference standard VL assay were identified. Study quality was generally high, but substantial variability was observed in the number and type of agreement measures reported. Correlation coefficients between Xpert and reference assays were high, with a pooled Pearson correlation (n = 8) of 0.94 (95% confidence interval [CI], 0.89, 0.97) and Spearman correlation (n = 3) of 0.96 (95% CI, 0.86, 0.99). Bland-Altman metrics (n = 11) all were within 0.35 log copies/ml of perfect agreement. Overall, Xpert HIV-1 VL performed well compared to current reference tests. The minimal training and infrastructure requirements for the Xpert HIV-1 VL assay make it attractive for use in resource-constrained settings, where point-of-care VL testing is most needed.
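The pooled correlations reported above can be illustrated with a standard fixed-effect pooling on Fisher's z scale (a generic sketch with hypothetical per-study values; the abstract does not state the authors' exact meta-analytic model):

    import numpy as np

    def pooled_pearson(rs, ns):
        """Inverse-variance pooled correlation on Fisher's z scale."""
        rs, ns = np.asarray(rs, float), np.asarray(ns, float)
        z = np.arctanh(rs)               # Fisher z transform
        w = ns - 3.0                     # 1/var(z) = n - 3
        z_bar = (w * z).sum() / w.sum()
        se = np.sqrt(1.0 / w.sum())
        lo, hi = z_bar - 1.96 * se, z_bar + 1.96 * se
        return np.tanh([z_bar, lo, hi])  # back to the correlation scale

    # hypothetical per-study correlations and sample sizes
    print(pooled_pearson([0.96, 0.93, 0.95, 0.91], [120, 85, 210, 64]))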
Alonso, Roberto; Pérez-García, Felipe; López-Roa, Paula; Alcalá, Luis; Rodeño, Pilar; Bouza, Emilio
2018-03-01
Detection of hepatitis C virus (HCV) RNA and the HCV core antigen assay (HCV-Ag) are reliable techniques for the diagnosis of active and chronic HCV infection. Our aim was to evaluate the HCV-Ag assay as an alternative to quantification of HCV RNA. A comparison was made of the sensitivity and specificity of an HCV-Ag assay (204 serum samples) with those of a PCR assay, and the correlation between the two techniques was determined. The sensitivity and specificity of HCV-Ag were 76.6% and 100%, respectively. Both assays were extremely well correlated (Pearson coefficient = 0.951). The formula log(VL) = 1.15 x log(Ag) + 2.26 was obtained to calculate the viral load by PCR from HCV-Ag values. HCV-Ag was unable to detect viral loads below 5,000 IU/mL. Although the HCV-Ag assay was less sensitive than the PCR assay, the correlation between the two assays was excellent. HCV-Ag can be useful as a first step in the diagnosis of acute or chronic HCV infection and in emergency situations.
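A direct transcription of the study's conversion formula, reading log(VL) as the base-10 log of the PCR viral load and log(Ag) as the base-10 log of the antigen value (the units of the antigen input are an assumption here):

    import math

    def viral_load_from_ag(ag):
        """Study formula: log10(VL) = 1.15 * log10(HCV-Ag) + 2.26."""
        return 10.0 ** (1.15 * math.log10(ag) + 2.26)

    # the assay floor reported above: viral loads below 5,000 IU/mL went undetected
    print(round(viral_load_from_ag(50.0)))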
MP estimation applied to platykurtic sets of geodetic observations
NASA Astrophysics Data System (ADS)
Wiśniewski, Zbigniew
2017-06-01
MP estimation is a method for estimating location parameters when the probabilistic models of observations differ from the normal distribution in kurtosis or asymmetry. The system of Pearson's distributions is the probabilistic basis for the method. So far, the method has been applied and analyzed mostly for leptokurtic or mesokurtic distributions (Pearson's distributions of types IV or VII), which predominate in practical cases. Analyses of geodetic or astronomical observations show that we may also deal with sets that have moderate asymmetry or small negative excess kurtosis. Asymmetry might result from the influence of many small systematic errors that were not eliminated during preprocessing of the data. The excess kurtosis can be related to a higher or lower (in relation to the Hagen hypothesis) frequency of occurrence of elementary errors that are close to zero. Considering this, the paper focuses on estimation with application of the Pearson platykurtic distributions of types I or II. The paper presents the solution of the corresponding optimization problem and its basic properties. Although platykurtic distributions are rare in practice, it is of interest to find out what results MP estimation can provide in the case of such observation distributions. The numerical tests presented in the paper are rather limited; however, they allow us to draw some general conclusions.
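As a coarse numerical companion (not the MP estimator itself): sample skewness and excess kurtosis are the quantities that steer the choice between the platykurtic types I/II and the leptokurtic types IV/VII, and they are easy to compute. The uniform errors below are a made-up platykurtic example.

    import numpy as np
    from scipy import stats

    def shape_summary(x):
        """Sample asymmetry and excess kurtosis, with a coarse Pearson-family hint."""
        g1 = stats.skew(x, bias=False)
        g2 = stats.kurtosis(x, fisher=True, bias=False)   # excess kurtosis
        if g2 < 0:
            family = "platykurtic (Pearson type I/II territory)"
        elif g2 > 0:
            family = "leptokurtic (Pearson type IV/VII territory)"
        else:
            family = "mesokurtic (normal)"
        return g1, g2, family

    rng = np.random.default_rng(7)
    obs = rng.uniform(-1.0, 1.0, 500)   # uniform errors: strongly platykurtic
    print(shape_summary(obs))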
Macinga, David R.; Sattar, Syed A.; Jaykus, Lee-Ann; Arbogast, James W.
2008-01-01
Norovirus is the leading cause of food-related illness in the United States, and contamination of ready-to-eat items by food handlers poses a high risk for disease. This study reports the in vitro (suspension test) and in vivo (fingerpad protocol) assessments of a new ethanol-based hand sanitizer containing a synergistic blend of polyquaternium polymer and organic acid, which is active against viruses of public health importance, including norovirus. When tested in suspension, the test product reduced the infectivity of the nonenveloped viruses human rotavirus (HRV), poliovirus type 1 (PV-1), and the human norovirus (HNV) surrogates feline calicivirus (FCV) F-9 and murine norovirus type 1 (MNV-1) by greater than 3 log10 after a 30-s exposure. In contrast, a benchmark alcohol-based hand sanitizer reduced only HRV by greater than 3 log10 and none of the additional viruses by greater than 1.2 log10 after the same exposure. In fingerpad experiments, the test product produced a 2.48 log10 reduction of MNV-1 after a 30-s exposure, whereas a 75% ethanol control produced a 0.91 log10 reduction. Additionally, the test product reduced the infectivity titers of adenovirus type 5 (ADV-5) and HRV by ≥3.16 log10 and ≥4.32 log10, respectively, by the fingerpad assay within 15 s; and PV-1 was reduced by 2.98 log10 in 30 s by the same method. Based on these results, we conclude that this new ethanol-based hand sanitizer is a promising option for reducing the transmission of enteric viruses, including norovirus, by food handlers and care providers. PMID:18586970
NASA Astrophysics Data System (ADS)
Owen, D. Des. R.; Pawlowsky-Glahn, V.; Egozcue, J. J.; Buccianti, A.; Bradd, J. M.
2016-08-01
Isometric log ratios of proportions of major ions, derived from intuitive sequential binary partitions, are used to characterize hydrochemical variability within and between coal seam gas (CSG) and surrounding aquifers in a number of sedimentary basins in the USA and Australia. These isometric log ratios are the coordinates corresponding to an orthonormal basis in the sample space (the simplex). The characteristic proportions of ions, as described by linear models of isometric log ratios, can be used for a mathematical-descriptive classification of water types. This is a more informative and robust method of describing water types than simply classifying a water type based on the dominant ions. The approach allows (a) compositional distinctions between very similar water types to be made and (b) large data sets with a high degree of variability to be rapidly assessed with respect to particular relationships/compositions that are of interest. A major advantage of these techniques is that major and minor ion components can be comprehensively assessed and subtle processes—which may be masked by conventional techniques such as Stiff diagrams, Piper plots, and classic ion ratios—can be highlighted. Results show that while all CSG groundwaters are dominated by Na, HCO3, and Cl ions, the proportions of other ions indicate they can evolve via different means and the particular proportions of ions within total or subcompositions can be unique to particular basins. Using isometric log ratios, subtle differences in the behavior of Na, K, and Cl between CSG water types and very similar Na-HCO3 water types in adjacent aquifers are also described. A complementary pair of isometric log ratios, derived from a geochemically-intuitive sequential binary partition that is designed to reflect compositional variability within and between CSG groundwater, is proposed. These isometric log ratios can be used to model a hydrochemical pathway associated with methanogenesis and/or to delineate groundwater associated with high gas concentrations.
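For readers unfamiliar with the machinery, a single ilr coordinate (a "balance") from one step of a sequential binary partition can be computed directly from its standard definition; the ion composition and the chosen partition below are hypothetical, not the partition proposed in the paper.

    import numpy as np

    def ilr_balance(x, numerator_idx, denominator_idx):
        """One ilr coordinate from a binary partition of the parts:

        z = sqrt(r*s/(r+s)) * ln( gmean(x[num]) / gmean(x[den]) )
        """
        x = np.asarray(x, dtype=float)
        num, den = x[list(numerator_idx)], x[list(denominator_idx)]
        r, s = len(num), len(den)
        gm = lambda v: np.exp(np.log(v).mean())   # geometric mean
        return np.sqrt(r * s / (r + s)) * np.log(gm(num) / gm(den))

    # hypothetical major-ion composition: [Na, K, Ca, Mg, Cl, HCO3, SO4] in meq/L
    ions = [8.1, 0.2, 0.4, 0.3, 2.5, 6.0, 0.1]
    # balance contrasting (Na, Cl, HCO3) against the remaining ions
    print(ilr_balance(ions, [0, 4, 5], [1, 2, 3, 6]))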
Heart-rot hazard is low in Abies amabilis reproduction injured by logging.
Paul E. Aho
1960-01-01
Clear-cut units in upper-slope forest types in western Washington and Oregon often have an understory of Pacific silver fir (Abies amabilis) at the time of logging. Foresters sometimes hesitate to preserve this advance regeneration, partly because of the possibility that heart rots infecting through logging wounds might considerably reduce the...
Green Lumber Grade Yields for Subfactory Class Hardwood Logs
Leland F. Hanks; Leland F. Hanks
1973-01-01
Data on lumber grade yields for subfactory class logs are presented for ten species of hardwoods. Logs of this type are expected to assume greater importance in the market. The yields, when coupled with lumber prices, will be useful to sawmill operators for developing log prices in terms of standard factory lumber.
Hardwood log defect photographic database, software and user's guide
R. Edward Thomas
2009-01-01
Computer software and user's guide for Hardwood Log Defect Photographic Database. The database contains photographs and information on external hardwood log defects and the corresponding internal characteristics. This database allows users to search for specific defect types, sizes, and locations by tree species. For every defect, the database contains photos of...
On comparison of net survival curves.
Pavlič, Klemen; Perme, Maja Pohar
2017-05-02
Relative survival analysis is a subfield of survival analysis where competing risks data are observed but the causes of death are unknown. A first step in the analysis of such data is usually the estimation of a net survival curve, possibly followed by regression modelling. Recently, a log-rank type test for comparison of net survival curves has been introduced, and the goal of this paper is to explore its properties and put this methodological advance into the context of the field. We build on the association between the log-rank test and the univariate or stratified Cox model and show the analogy in the relative survival setting. We study the properties of the methods using both theoretical arguments and simulations. We provide an R function to enable practical usage of the log-rank type test. Both the log-rank type test and its model alternatives perform satisfactorily under the null, even if the correlation between their p-values is rather low, implying that both approaches cannot be used simultaneously. The stratified version has higher power in the case of non-homogeneous hazards but also carries a different interpretation. The log-rank type test and its stratified version can be interpreted in the same way as the results of an analogous semi-parametric additive regression model, despite the fact that no direct theoretical link can be established between the test statistics.
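The paper provides an R function for its net-survival test; for orientation only, the classical log-rank building block it generalizes looks like this in Python (a sketch on simulated data using the lifelines package, which implements the standard test, not the paper's net-survival variant):

    import numpy as np
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(3)
    t_a = rng.exponential(10.0, 200)     # latent survival times, group A
    t_b = rng.exponential(12.0, 200)     # group B, slightly better survival
    c_a = rng.exponential(15.0, 200)     # independent censoring times
    c_b = rng.exponential(15.0, 200)
    res = logrank_test(np.minimum(t_a, c_a), np.minimum(t_b, c_b),
                       event_observed_A=t_a < c_a, event_observed_B=t_b < c_b)
    print(res.test_statistic, res.p_value)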
Waltemeyer, Scott D.
2008-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for the reliable design of bridges and culverts, for open-channel hydraulic analysis, and for flood-hazard mapping in New Mexico and surrounding areas. The U.S. Geological Survey, in cooperation with the New Mexico Department of Transportation, updated estimates of peak-discharge magnitude for gaging stations in the region and updated regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites by use of data collected through 2004 for 293 gaging stations on unregulated streams that have 10 or more years of record. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to frequency analysis of 140 of the 293 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each of the nine regions, logarithms of the maximum peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics by using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 38 to 93 percent (mean value is 62, and median value is 59) for the 100-year flood. The standard error of prediction in the 1996 investigation ranged from 41 to 96 percent (mean value is 67, and median value is 68) for the 100-year flood analyzed by using generalized least-squares regression analysis. Overall, the equations based on generalized least-squares regression techniques are more reliable than those in the 1996 report because of the increased length of record and an improved geographic information system (GIS) method to determine basin and climatic characteristics. Flood-frequency estimates can be made for ungaged sites upstream or downstream from gaging stations by using a method that transfers flood-frequency data at the gaging station to the ungaged site with a drainage-area ratio adjustment equation. The peak discharge for a given recurrence interval at the gaging station, the drainage-area ratio, and the drainage-area exponent from the regional regression equation of the respective region are used to transfer the peak discharge for the recurrence interval to the ungaged site. Maximum observed peak discharge as related to drainage area was determined for New Mexico. Extreme events are commonly used in the design and appraisal of bridge crossings and other structures. Bridge-scour evaluations are commonly made by using the 500-year peak discharge for these appraisals.
Peak-discharge data collected at 293 gaging stations and 367 miscellaneous sites were used to develop a maximum peak-discharge relation as an alternative method of estimating peak discharge of an extreme event such as a maximum probable flood.
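The drainage-area ratio transfer described above reduces to one line; the discharges and exponent below are hypothetical, with the exponent standing in for the drainage-area exponent of the relevant regional regression equation.

    def transfer_peak(q_gage, area_gage, area_ungaged, exponent):
        """Drainage-area ratio transfer of a T-year peak to an ungaged site:

        Q_u = Q_g * (A_u / A_g)**b, with b from the regional regression equation.
        """
        return q_gage * (area_ungaged / area_gage) ** exponent

    # e.g., a 100-year peak of 5,000 ft3/s at a gage draining 120 mi2,
    # transferred to a site 90 mi2 upstream on the same stream
    print(transfer_peak(5000.0, 120.0, 90.0, exponent=0.55))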
Feaster, Toby D.; Tasker, Gary D.
2002-01-01
Data from 167 streamflow-gaging stations in or near South Carolina with 10 or more years of record through September 30, 1999, were used to develop two methods for estimating the magnitude and frequency of floods in South Carolina for rural ungaged basins that are not significantly affected by regulation. Flood-frequency estimates for 54 gaged sites in South Carolina were computed by fitting the water-year peak flows for each site to a log-Pearson Type III distribution. As part of the computation of flood-frequency estimates for gaged sites, new values for generalized skew coefficients were developed. Flood-frequency analyses also were made for gaging stations that drain basins from more than one physiographic province. The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, updated these data from previous flood-frequency reports to aid officials who are active in floodplain management as well as those who design bridges, culverts, levees, or other structures near streams where flooding is likely to occur. Regional regression analysis, using generalized least-squares regression, was used to develop a set of predictive equations that can be used to estimate the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows for rural ungaged basins in the Blue Ridge, Piedmont, upper Coastal Plain, and lower Coastal Plain physiographic provinces of South Carolina. The predictive equations are all functions of drainage area. Average errors of prediction for these regression equations ranged from -16 to 19 percent for the 2-year recurrence-interval flow in the upper Coastal Plain to -34 to 52 percent for the 500-year recurrence-interval flow in the lower Coastal Plain. A region-of-influence method also was developed that interactively estimates recurrence-interval flows for rural ungaged basins in the Blue Ridge of South Carolina. The region-of-influence method uses regression techniques to develop a unique relation between flow and basin characteristics for an individual watershed. This relation can then be used to estimate flows at ungaged sites. Because the computations required for this method are somewhat complex, a computer application was developed that performs the computations and compares the predictive errors for this method. The computer application includes the option of using the region-of-influence method or the generalized least-squares regression equations from this report to compute estimated flows and errors of prediction specific to each ungaged site. From a comparison of predictive errors, the region-of-influence method performed systematically better than the regional regression method only in the Blue Ridge and is, therefore, not recommended for use in the other physiographic provinces. Peak-flow data for the South Carolina stations used in the regionalization study are provided in appendix A, which contains gaging station information, log-Pearson Type III statistics, information on stage-flow relations, and water-year peak stages and flows. For informational purposes, water-year peak-flow data for stations on regulated streams in South Carolina also are provided in appendix D. Other information pertaining to the regulated streams is provided in the text of the report.
Ahearn, Elizabeth A.
2004-01-01
Multiple linear-regression equations were developed to estimate the magnitudes of floods in Connecticut for recurrence intervals ranging from 2 to 500 years. The equations can be used for nonurban, unregulated stream sites in Connecticut with drainage areas ranging from about 2 to 715 square miles. Flood-frequency data and hydrologic characteristics from 70 streamflow-gaging stations and the upstream drainage basins were used to develop the equations. The hydrologic characteristics (drainage area, mean basin elevation, and 24-hour rainfall) are used in the equations to estimate the magnitude of floods. Average standard errors of prediction for the equations are 31.8, 32.7, 34.4, 35.9, 37.6, and 45.0 percent for the 2-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals, respectively. Simplified equations using only one hydrologic characteristic (drainage area) also were developed. The regression analysis is based on generalized least-squares regression techniques. Observed flows (log-Pearson Type III analysis of the annual maximum flows) from five streamflow-gaging stations in urban basins in Connecticut were compared to flows estimated from national three-parameter and seven-parameter urban regression equations. The comparison shows that the three- and seven-parameter equations used in conjunction with the new statewide equations generally provide reasonable estimates of flood flows for urban sites in Connecticut, although a national urban flood-frequency study indicated that the three-parameter equations significantly underestimated flood flows in many regions of the country. Verification of the accuracy of the three-parameter or seven-parameter national regression equations using new data from Connecticut stations was beyond the scope of this study. A technique for calculating flood flows at streamflow-gaging stations using a weighted average also is described. Two estimates of flood flows (one based on the log-Pearson Type III analysis of the annual maximum flows at the gaging station, and the other from the regression equation) are weighted together based on the years of record at the gaging station and the equivalent years of record value determined from the regression. Weighted averages of flood flows for the 2-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals are tabulated for the 70 streamflow-gaging stations used in the regression analysis. Generally, weighted averages give the most accurate estimate of flood flows at gaging stations. An evaluation of Connecticut's streamflow-gaging network was performed to determine whether the spatial coverage and range of geographic and hydrologic conditions are adequately represented for transferring flood characteristics from gaged to ungaged sites. Fifty-one of 54 stations in the current (2004) network support one or more flood needs of federal, state, and local agencies. Twenty-five of 54 stations in the current network are considered high-priority stations by the U.S. Geological Survey because of their contribution to the long-term understanding of floods and their application for regional flood analysis. Enhancements to the network to improve overall effectiveness for regionalization can be made by increasing the spatial coverage of gaging stations, establishing stations in regions of the state that are not well represented, and adding stations in basins with drainage area sizes not represented.
Additionally, the usefulness of the network for characterizing floods can be maintained and improved by continuing operation at the current stations because flood flows can be more accurately estimated at stations with continuous, long-term record.
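The weighting technique described above can be sketched directly: the two log-space estimates are combined with weights proportional to the station's years of record and the regression's equivalent years of record (the numbers below are invented, and the report's exact weighting formula may differ in detail).

    import math

    def weighted_peak(q_station, q_regression, years_record, equiv_years):
        """Log-space weighting of the station (LP3) and regression estimates."""
        lw = (years_record * math.log10(q_station)
              + equiv_years * math.log10(q_regression)) / (years_record + equiv_years)
        return 10.0 ** lw

    # hypothetical 100-year peaks: 34 years of record vs. 12 equivalent years
    print(round(weighted_peak(8200.0, 9600.0, 34, 12)))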
Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.
2017-07-17
The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term “very low AEP” implies exceptionally rare events defined as those having AEPs less than about 0.001 (or 1 × 10–3 in scientific notation or for brevity 10–3). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted for an “Extended Output” user option to provide estimates at selected AEPs from 10–3 to 10–6. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland). The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized log-normal, generalized Pareto, and Weibull. Uncertainties in streamflow estimates for corresponding AEP are depicted and quantified as two primary forms: quantile (aleatoric [random sampling] uncertainty) and distribution-choice (epistemic [model] uncertainty). Sampling uncertainties of a given distribution are relatively straightforward to compute from analytical or Monte Carlo-based approaches. Distribution-choice uncertainty stems from choices of potentially applicable probability distributions for which divergence among the choices increases as AEP decreases. Conventional goodness-of-fit statistics, such as Cramér-von Mises, and L-moment ratio diagrams are demonstrated in order to hone distribution choice. The results generally show that distribution choice uncertainty is larger than sampling uncertainty for very low AEP values.
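The distribution-choice (epistemic) uncertainty the study quantifies is easy to demonstrate: fit several candidate distributions to the same log10 peaks and watch the quantiles diverge as AEP decreases. This is a sketch on synthetic data with maximum likelihood fits only; the study also uses moments, EMA, L-moments, and maximum product of spacings.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    log_q = np.log10(rng.lognormal(mean=7.5, sigma=0.6, size=80))  # synthetic record

    aeps = np.array([1e-2, 1e-3, 1e-4, 1e-5, 1e-6])
    for name, dist in [("Pearson III", stats.pearson3),
                       ("GEV", stats.genextreme),
                       ("normal", stats.norm)]:
        params = dist.fit(log_q)                 # maximum likelihood fit to log10 Q
        q = 10 ** dist.ppf(1.0 - aeps, *params)  # discharge at each AEP
        print(f"{name:12s}", np.round(q).astype(np.int64))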
NASA Technical Reports Server (NTRS)
Scott, David W.; Underwood, Debrah (Technical Monitor)
2002-01-01
At the Marshall Space Flight Center's (MSFC) Payload Operations Integration Center (POIC) for International Space Station (ISS), each flight controller maintains detailed logs of activities and communications at their console position. These logs are critical for accurately controlling flight in real-time as well as providing a historical record and troubleshooting tool. This paper describes logging methods and electronic formats used at the POIC and provides food for thought on their strengths and limitations, plus proposes some innovative extensions. It also describes an inexpensive PC-based scheme for capturing and/or transcribing audio clips from communications consoles. Flight control activity (e.g. interpreting computer displays, entering data/issuing electronic commands, and communicating with others) can become extremely intense. It's essential to document it well, but the effort to do so may conflict with actual activity. This can be more than just annoying, as what's in the logs (or just as importantly not in them) often feeds back directly into the quality of future operations, whether short-term or long-term. In earlier programs, such as Spacelab, log keeping was done on paper, often using position-specific shorthand, and the reader was at the mercy of the writer's penmanship. Today, user-friendly software solves the legibility problem and can automate date/time entry, but some content may take longer to finish due to individual typing speed and less use of symbols. File layout can be used to great advantage in making types of information easy to find, and creating searchable master logs for a given position is very easy and a real lifesaver in reconstructing events or researching a given topic. We'll examine log formats from several console positions and the types of information that are included and (just as importantly) excluded. We'll also look at when a summary or synopsis is effective, and when extensive detail is needed.
NASA Astrophysics Data System (ADS)
Yegireddi, Satyanarayana; Uday Bhaskar, G.
2009-01-01
Different parameters obtained through well-logging geophysical sensors such as SP, resistivity, gamma-gamma, neutron, natural gamma and acoustic logs help in identification of strata and estimation of the physical, electrical and acoustical properties of the subsurface lithology. Strong and conspicuous changes in some of the log parameters associated with a particular stratigraphic formation are a function of its composition and physical properties and help in classification. However, some substrata show moderate values in the respective log parameters, making it difficult to identify or assess the type of strata from the standard variability ranges of the log parameters and visual inspection alone. The complexity increases further with the number of sensors involved. An attempt is made to identify the type of stratigraphy from borehole geophysical log data using a combined approach of neural networks and fuzzy logic, known as the Adaptive Neuro-Fuzzy Inference System. A model is built from a few data sets (geophysical logs) of known stratigraphy in the coal areas of Kothagudem, Godavari basin, and the network model is then used to infer the lithology of a borehole from its geophysical logs, not used in the simulation. The results are very encouraging, and the model is able to decipher even thin coal seams and other strata from borehole geophysical logs. The model can be further modified to assess the physical properties of the strata if the corresponding ground truth is made available for simulation.
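As a stand-in illustration of the classification idea (the paper uses ANFIS, a neuro-fuzzy system; the sketch below substitutes a plain feed-forward network, and the log responses and class prototypes are entirely synthetic):

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # synthetic log responses: [gamma, resistivity, density, neutron porosity]
    coal  = rng.normal([30.0, 300.0, 1.4, 0.45], [8, 80, 0.1, 0.05], (60, 4))
    shale = rng.normal([120.0, 10.0, 2.5, 0.30], [15, 4, 0.1, 0.05], (60, 4))
    sand  = rng.normal([50.0, 60.0, 2.3, 0.20], [10, 20, 0.1, 0.04], (60, 4))
    X = np.vstack([coal, shale, sand])
    y = np.array(["coal"] * 60 + ["shale"] * 60 + ["sandstone"] * 60)

    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                      random_state=0))
    clf.fit(X, y)
    print(clf.predict([[28.0, 260.0, 1.45, 0.42]]))   # a coal-like response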
Comparison of urine analysis using manual and sedimentation methods.
Kurup, R; Leich, M
2012-06-01
Microscopic examination of urine sediment is an essential part of the evaluation of renal and urinary tract diseases. Traditionally, urine sediments are assessed by microscopic examination of centrifuged urine; however, the current method used by the Georgetown Public Hospital Corporation Medical Laboratory involves uncentrifuged urine. To support a high standard of care, the results provided to the physician must be accurate and reliable for proper diagnosis. The aim of this study was to determine whether the centrifuged method is more clinically informative than the uncentrifuged method. A comparison of the results obtained from the centrifuged and uncentrifuged methods was performed. A total of 167 urine samples were randomly collected and analysed during April-May 2010 at the Medical Laboratory, Georgetown Public Hospital Corporation. The urine samples were first analysed microscopically by the uncentrifuged method and then by the centrifuged method. The results from both methods were recorded in a log book, entered into a Microsoft Excel database, and analysed for differences and similarities. Further analysis was done in SPSS to compare the results using Pearson's correlation. Both methods showed good correlation for urinary sediments, with the exception of white blood cells. The centrifuged method had a slightly higher identification rate for all of the parameters. There is substantial agreement between the centrifuged and uncentrifuged methods; however, the uncentrifuged method provides a more rapid turnaround time.
Harvesting impacts on steep slopes in Virginia
W.B. Stuart; S.L. Carr
1991-01-01
Ten tracts in the mountains of western Virginia were intensively sampled to determine the type and extent of soil disturbance from ground-based logging and the attendant erosion risk. Average slopes for the tracts ranged from 21 to 43 percent. Logged slopes exceeded 50 percent. All tracts surveyed were logged prior to the push for voluntary Best Management Practices...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-08
..., focused and limited in scope, and with a clear path to compliance. A waiver request must specify the.... OMB Control Number: 3060-0998. Title: Section 87.109, Station Logs. Form Number: N/A. Type of Review... aeronautical mobile service (IAMS) must maintain a log (written or automatic log) in accordance with the Annex...
Natural regeneration response to initial treatments
G. E. Gruell; W. C. Schmidt; S. F. Arno; W. J. Reich; James Menakis
1999-01-01
During the 1907 to 1911 harvest, logs were transported to landings by means of log chutes, horse skidding, and steam donkey yarding. Slash was disposed of by piling and burning, which the purchaser considered to be an unnecessary practice (Koch 1998). Usually this type of logging and postlogging treatment results in relatively light site disturbance, and the photo...
Primary detection of hardwood log defects using laser surface scanning
Ed Thomas; Liya Thomas; Lamine Mili; Roger Ehrich; A. Lynn Abbott; Clifford Shaffer; Clifford Shaffer
2003-01-01
The use of laser technology to scan hardwood log surfaces for defects holds great promise for improving processing efficiency and the value and volume of lumber produced. External and internal defect detection to optimize hardwood log and lumber processing is one of the top four technological needs in the nation's hardwood industry. The location, type, and...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-29
... the respondents, including the use of automated collection techniques or other forms of information...: OMB Control Number: 3060-0360. Title: Section 80.409, Station Logs. Form No.: N/A. Type of Review... for filing suits upon such claims. Section 80.409(d), Ship Radiotelegraph Logs: Logs of ship stations...
NASA Astrophysics Data System (ADS)
Dubiel, Stanisław; Zubrzycki, Adam; Rybicki, Czesław; Maruta, Michał
2012-11-01
In the southern part of the Carpathian Foredeep basement, between Bochnia and Ropczyce, the Upper Jurassic (Oxfordian, Kimmeridgian and Tithonian) carbonate complex plays an important role as a hydrocarbon-bearing formation. It consists of shallow-marine carbonates deposited in outer carbonate ramp environments as reef limestones (dolomites), microbial-sponge or coral biostromes, and marly or micritic limestones. The inner pore space of these rocks was affected by different diagenetic processes such as calcite cementation, dissolution, dolomitization, and most probably tectonic fracturing as well. These phenomena have modified pore space systems within the limestone/dolomite series, forming more or less developed reservoir zones (horizons). According to the interpretation of DST results (analysis of pressure build-up curves by the log-log method) for 11 intervals (previously marked out by well logging on the basis of increased porosity readings) within the Upper Jurassic formation, three types of pore/fracture space systems were distinguished: type I, a fracture-vuggy porosity system in which fractures connecting voids and vugs within organogenic carbonates are of great importance for flow of the reservoir medium; type II, a vuggy-fracture porosity system in which the pore space consists of weakly connected voids and intergranular/intercrystalline pores, with a minor influence of fractures; and type III, a cavern porosity system in which secondary porosity is developed due to dolomitization and cement/grain dissolution processes.
NASA Astrophysics Data System (ADS)
Kistler, Magdalena; Schmidl, Christoph; Padouvas, Emmanuel; Giebl, Heinrich; Lohninger, Johann; Ellinger, Reinhard; Bauer, Heidi; Puxbaum, Hans
2012-05-01
In this study, we investigated the emissions, including odor, from log wood stoves, burning wood types indigenous to mid-European countries such as Austria, Czech Republic, Hungary, Slovak Republic, Slovenia, Switzerland, as well as Baden-Württemberg and Bavaria (Germany) and South Tyrol (Italy). The investigations were performed with a modern, certified, 8 kW, manually fired log wood stove, and the results were compared to emissions from a modern 9 kW pellet stove. The examined wood types were deciduous species: black locust, black poplar, European hornbeam, European beech, pedunculate oak (also known as “common oak”), sessile oak, turkey oak and conifers: Austrian black pine, European larch, Norway spruce, Scots pine, silver fir, as well as hardwood briquettes. In addition, “garden biomass” such as pine cones, pine needles and dry leaves was burnt in the log wood stove. The pellet stove was fired with softwood pellets. The composite average emission rates for log wood and briquettes were 2030 mg MJ-1 for CO, 89 mg MJ-1 for NOx, 311 mg MJ-1 for CxHy, and 67 mg MJ-1 for particulate matter PM10, and the average odor concentration was 2430 OU m-3. CO, CxHy and PM10 emissions from pellet combustion were lower by factors of 10, 13 and 3, respectively, while NOx emissions were comparable to those from log wood. Odor from pellet combustion was not detectable. CxHy and PM10 emissions from burning garden biomass (needles and leaves) were 10 times higher than for log wood, while CO and NOx rose only slightly. Odor levels ranged from not detectable (pellets) to around 19,000 OU m-3 (dry leaves). The odor concentration correlated with CO, CxHy and PM10. For log wood combustion, average odor ranged from 536 OU m-3 for hornbeam to 5217 OU m-3 for fir, indicating a considerable influence of the wood type on odor concentration.
Vermeulen, Roel; Coble, Joseph B; Yereb, Daniel; Lubin, Jay H; Blair, Aaron; Portengen, Lützen; Stewart, Patricia A; Attfield, Michael; Silverman, Debra T
2010-10-01
Diesel exhaust (DE) has been implicated as a potential lung carcinogen. However, the exact components of DE that might be involved have not been clearly identified. In the past, nitrogen oxides (NO(x)) and carbon oxides (CO(x)) were measured most frequently to estimate DE, but since the 1990s, the most commonly accepted surrogate for DE has been elemental carbon (EC). We developed quantitative estimates of historical exposure levels of respirable elemental carbon (REC) for an epidemiologic study of mortality, particularly lung cancer, among diesel-exposed miners by back-extrapolating 1998-2001 REC exposure levels using historical measurements of carbon monoxide (CO). The choice of CO was based on the availability of historical measurement data. Here, we evaluated the relationship of REC with CO and other current and historical components of DE from side-by-side area measurements taken in underground operations of seven non-metal mining facilities. The Pearson correlation coefficient of the natural log-transformed (Ln)REC measurements with the Ln(CO) measurements was 0.4. The correlation of REC with the other gaseous, organic carbon (OC), and particulate measurements ranged from 0.3 to 0.8. Factor analyses indicated that the gaseous components, including CO, together with REC, loaded most strongly on a presumed 'Diesel exhaust' factor, while the OC and particulate agents loaded predominantly on other factors. In addition, the relationship between Ln(REC) and Ln(CO) was approximately linear over a wide range of REC concentrations. The fact that CO correlated with REC, loaded on the same factor, and increased linearly in log-log space supported the use of CO in estimating historical exposure levels to DE.
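The back-extrapolation step rests on an approximately linear relation between ln(REC) and ln(CO); a minimal sketch of fitting and applying such a relation, on made-up side-by-side measurements:

    import numpy as np

    # hypothetical side-by-side measurements (CO in ppm, REC in ug/m3)
    co  = np.array([1.2, 2.5, 4.0, 6.3, 9.1, 14.0, 22.0])
    rec = np.array([40., 75., 120., 160., 230., 330., 520.])

    b, a = np.polyfit(np.log(co), np.log(rec), 1)   # ln(REC) = a + b*ln(CO)
    print(f"slope={b:.2f}, intercept={a:.2f}")

    def rec_from_co(co_ppm):
        """Back-extrapolate REC from a historical CO measurement."""
        return np.exp(a + b * np.log(co_ppm))

    print(rec_from_co(3.0))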
Comparison of two different physical activity monitors.
Paul, David R; Kramer, Matthew; Moshfegh, Alanna J; Baer, David J; Rumpler, William V
2007-06-25
Understanding the relationships between physical activity (PA) and disease has become a major area of research interest. Activity monitors, devices that quantify free-living PA for prolonged periods of time (days or weeks), are increasingly being used to estimate PA. A range of activity monitor brands is available for investigators to use, but little is known about how they respond to different levels of PA in the field, nor whether data conversion between brands is possible. Fifty-six women and men were fitted with two different activity monitors, the Actigraph (Actigraph LLC; AGR) and the Actical (Mini-Mitter Co.; MM), for 15 days. Both activity monitors were fixed to an elasticized belt worn over the hip, with the anterior and posterior position of the activity monitors randomized. Differences between activity monitors and the validity of brand inter-conversion were measured by t-tests, Pearson correlations, Bland-Altman plots, and coefficients of variation (CV). The AGR detected a significantly greater amount of daily PA (216.2 +/- 106.2 vs. 188.0 +/- 101.1 counts/min, P < 0.0001). The average differences between activity monitors, expressed as CVs, were 3.1 and 15.5% for log-transformed and raw data, respectively. When a conversion equation was applied to convert datasets from one brand to the other, the differences were no longer significant, with CVs of 2.2 and 11.7% for log-transformed and raw data, respectively. Although both activity monitors report PA on the same scale (counts/min), the results from these two brands are not directly comparable. However, the data are comparable if a conversion equation is applied, with better results for log-transformed data.
Behavioral responses of cotton mice (Peromyscus gossypinus) to large amounts of coarse woody debris.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinkleman, Travis M.
Hinkleman, Travis M. 2004. MS Thesis. Clemson University, Clemson, South Carolina. 62 pp. Coarse woody debris (CWD) is any log, snag, or downed branch >10 cm in diameter. As a major structural feature of forest ecosystems, CWD serves as an important habitat component for a variety of organisms. Rodents frequently use CWD for travel routes and daytime refugia. Although rodents are known to use CWD extensively and selectively, the use and selection of CWD by rodents may vary according to the abundance of CWD. The purpose of this project was to determine the effect of CWD abundance on the habitat use patterns of a common terrestrial rodent, the cotton mouse (Peromyscus gossypinus). I tracked cotton mice with fluorescent pigments and radiotelemetry in 6 plots, situated in loblolly pine (Pinus taeda) stands, with manipulated levels of woody debris. Treatment plots had 6x the amount of woody debris as control plots. I determined log use and movement patterns from the paths produced by powder-tracking, and I identified daytime refugia by radio-tracking. Travel along logs was almost exclusively associated with the surface of logs (91%). The proportion of a movement path associated with logs was not the best predictor of path complexity; rather, the sex of the individual was the only significant indicator of relative displacement (i.e., males moved farther from the point of release than females) and vegetation cover was the only significant predictor of mean turning angle (i.e., increasing vegetation cover yielded more convoluted paths). Mice used logs to a greater extent on treatment plots (23.7%) than mice on control plots (4.8%). Mice on treatment plots used logs with less decay, less ground contact, and more bark than logs used by mice on control plots. Differences in log use patterns were largely a result of the attributes of available logs, but mice used logs selectively on treatment plots. Refuges were highly associated with woody debris, including refuges in rotting stumps (65%), root boles (13%), brush piles (8%), and logs (7%). Mice used different frequencies of refuge types between treatments; root bole and brush pile refuges were used more on treatment plots whereas stump and log refuges were used more on control plots. Refuge type, log volume, and tree basal area were significant predictors of refuge selection on control plots whereas refuge type and size were significant predictors of refuge selection on treatment plots. Refuges were significantly more dispersed on treatment plots. Mice used refuges more intensely and switched refuges less in the winter than the summer, regardless of woody debris abundance. The extensive and selective use of logs by cotton mice suggests that logs may be an important resource. However, logs are not a critical habitat component. Over half of the paths on control plots were not associated with logs, and logs were used infrequently as refuges. Nonetheless, refuges were highly associated with woody debris (e.g., stumps, root boles), which suggests that woody debris may be a critical habitat component.
Choosing methods and equipment for logging
Fred C. Simmons
1948-01-01
A logging job is one of the most difficult types of business to manage efficiently. In practically everything the logger does he is compelled to make a choice between several methods of operation and types of equipment. The conditions under which he works are constantly changing, particularly when he is forced to move fairly often from one timber tract to another. But...
Diversity of Medicinal Plants among Different Forest-use Types of the Pakistani Himalaya.
Adnan, Muhammad; Hölscher, Dirk
2012-12-01
Medicinal plants collected in Himalayan forests play a vital role in the livelihoods of regional rural societies and are also increasingly recognized at the international level. However, these forests are being heavily transformed by logging. Here we ask how forest transformation influences the diversity and composition of medicinal plants in northwestern Pakistan, where we studied old-growth forests, forests degraded by logging, and regrowth forests. First, an approximate map indicating these forest types was established and then 15 study plots per forest type were randomly selected. We found a total of 59 medicinal plant species consisting of herbs and ferns, most of which occurred in the old-growth forest. Species number was lowest in forest degraded by logging and intermediate in regrowth forest. The most valuable economic species, including six Himalayan endemics, occurred almost exclusively in old-growth forest. Species composition and abundance of forest degraded by logging differed markedly from that of old-growth forest, while regrowth forest was more similar to old-growth forest. The density of medicinal plants positively correlated with tree canopy cover in old-growth forest and negatively in degraded forest, which indicates that species adapted to open conditions dominate in logged forest. Thus, old-growth forests are important as refuge for vulnerable endemics. Forest degraded by logging has the lowest diversity of relatively common medicinal plants. Forest regrowth may foster the reappearance of certain medicinal species valuable to local livelihoods and as such promote acceptance of forest expansion and medicinal plants conservation in the region.
Nikolić, Slobodan; Djonić, Danijela; Zivković, Vladimir; Babić, Dragan; Juković, Fehim; Djurić, Marija
2010-09-01
The aim of our study was to determine the rate of occurrence and appearance of hyperostosis frontalis interna (HFI) in females and the correlation of this phenomenon with ageing. The sample included 248 deceased females: 45 with different types of HFI and 203 without HFI, with average ages of 68.3 +/- 15.4 years (range, 19-93) and 58.2 +/- 20.2 years (range, 10-101), respectively. According to our results, the rate of HFI was 18.14%. The older the woman was, the higher the possibility of HFI occurring (Pearson correlation 0.211, N=248, P=0.001), but the type of HFI did not correlate with age (Pearson correlation 0.229, N=45, P=0.131). The frontal and temporal bones were significantly thicker in women with than in women without HFI (t= -10.490, DF=246, P=0.000, and t= -5.658, DF=246, P=0.000, respectively). These bones became thicker with ageing (Pearson correlation 0.178, N=248, P=0.005, and 0.303, N=248, P=0.000, respectively). The best predictors of HFI occurrence were, respectively, frontal bone thickness, temporal bone thickness, and age (Wald coeff. = 35.487, P = 0.000; Wald coeff. = 3.288, P = 0.070; and Wald coeff. = 2.727, P = 0.099). Diagnosis of HFI depends not only on frontal bone thickness but also on the waviness of the internal plate of the frontal bone, as well as the involvement of the inner bone surface.
1978-12-01
female crew. The crewmembers were about evenly split as to type of crew pairing. The author recommended using an all-female crew pairing plan when...obtained so that the respondents could be assigned to various subpopulations during the analysis. Data obtained provided information about: Type of...respondent is made. There are many types of correlations that can be calculated but the particular one employed by SPSS is Pearson's correlation. A
Probability Density Functions of Observed Rainfall in Montana
NASA Technical Reports Server (NTRS)
Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.
1995-01-01
The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis readily allow development of radar reflectivity factor (and, by extension, rain rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, one PDF would exist for all cases, or many PDFs would have the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases will validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.
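Elderton's selection procedure works from the first four moments; a compact version uses Pearson's kappa criterion computed from sample skewness and kurtosis, with kappa < 0 indicating a Type 1 (beta-like) curve. This is a sketch on synthetic rain rates, not the paper's radar data.

    import numpy as np
    from scipy import stats

    def pearson_kappa(x):
        """Pearson's selection criterion from sample moments:

        beta1 = skewness^2, beta2 = (non-excess) kurtosis;
        kappa = beta1*(beta2+3)^2 / (4*(4*beta2-3*beta1)*(2*beta2-3*beta1-6)).
        kappa < 0 points to a Type 1 (beta-like) curve.
        """
        b1 = stats.skew(x, bias=False) ** 2
        b2 = stats.kurtosis(x, fisher=False, bias=False)
        return b1 * (b2 + 3) ** 2 / (4 * (4 * b2 - 3 * b1) * (2 * b2 - 3 * b1 - 6))

    rng = np.random.default_rng(5)
    rain = rng.beta(1.2, 4.0, 2000) * 60.0   # synthetic, Type 1-like rain rates
    print(pearson_kappa(rain))               # expect a negative value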
Code of Federal Regulations, 2010 CFR
2010-07-01
... Finishing Operations, Part 63, Subpt. TTTT, Fig. 1. Figure 1 to Subpart TTTT of Part 63—Example Logs for Recording Leather Finish Use and HAP Content. The Finish Inventory Log records, for each month and year: finish type, finish usage (pounds), HAP content (mass fraction), date and time, operator's name, and product process operation. Monthly...
Timber harvest and logging plan for the South Fork of the Caspar Creek watershed
Anonymous
1970-01-01
The Caspar Creek Watershed Study was initiated in 1960 to study differences in streamflow conditions, sedimentation, fish life, and fish habitat between paired watersheds, one of which will be carefully logged while the other is left undisturbed as a control. This study will not compare differences in types of logging practices.
Haeckel, Rainer; Wosniok, Werner
2010-10-01
The distributions of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of the type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., the log-normal and χ² distributions). The degree of skewness depends on the magnitude of the empirical biological variation (CVe), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CVe (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CVe. In contrast, a relatively large CVe (e.g., plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed in such cases. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.
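The contrast between small and large biological variation can be checked numerically. A minimal sketch, assuming illustrative CV values of 1% (sodium-like) and 30% (enzyme-like); it compares a Gaussian and a log-normal matched in mean and CV by their largest relative pdf gap:

```python
import numpy as np
from scipy import stats

def max_pdf_gap(mean, cv):
    """Largest relative pdf difference between a Gaussian and a
    log-normal sharing the same mean and coefficient of variation."""
    sd = cv * mean
    # Log-normal parameters matched to the same mean and variance.
    sigma2 = np.log(1 + cv**2)
    mu = np.log(mean) - sigma2 / 2
    x = np.linspace(mean - 4 * sd, mean + 4 * sd, 2001)
    x = x[x > 0]
    g = stats.norm.pdf(x, mean, sd)
    ln = stats.lognorm.pdf(x, np.sqrt(sigma2), scale=np.exp(mu))
    return np.max(np.abs(g - ln)) / np.max(g)

for cv in (0.01, 0.30):  # ~plasma sodium vs. ~plasma AST, illustratively
    print(f"CV={cv:.0%}: relative pdf gap ~ {max_pdf_gap(100.0, cv):.3f}")
```

At CV = 1% the two curves are practically indistinguishable, while at CV = 30% the gap is substantial, which is the observation the abstract builds on.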
Aría Guerra, Eva; Cortés-Salgado, Alfonso; Mateo-Lobo, Raquel; Nattero, Lía; Riveiro, Javier; Vega-Piñero, Belén; Valbuena, Beatriz; Carabaña, Fátima; Carrero, Carmen; Grande, Enrique; Carrato, Alfredo; Botella-Carretero, José Ignacio
2015-09-01
The precise role of parenteral nutrition in the management of oncologic patients with intestinal occlusion is not yet well defined. We aimed to identify the effects of parenteral nutrition on prognosis in these patients. 55 patients with intestinal occlusion and peritoneal carcinomatosis were included. Parenteral nutrition aimed at 20-35 kcal/kg/day, and 1.0 g/kg/day of amino acids. Weight, body mass index, type of tumor, type of chemotherapy, and ECOG, among others, were recorded and analyzed. 69.1% of the patients had gastrointestinal tumors, 18.2% gynecologic and 12.7% others. Age was 60 ± 13 y, baseline ECOG 1.5 ± 0.5 and body mass index 21.6 ± 4.3. Malnutrition was present in 85%. Survival from the start of parenteral nutrition did not differ significantly when considering baseline ECOG (log rank = 0.593, p = 0.743), previous lines of chemotherapy (log rank = 2.117, p = 0.548), baseline BMI (log rank = 2.686, p = 0.261), or type of tumor (log rank = 2.066, p = 0.356). Survival in patients who received home parenteral nutrition after hospital discharge was higher than in those who stayed in-hospital (log rank = 7.090, p = 0.008). Survival in patients who started chemotherapy during or after parenteral nutrition was higher than in those who did not (log rank = 17.316, p < 0.001). A total of 3.6% of patients presented catheter-related infection, without affecting survival (log rank = 0.061, p = 0.804). Parenteral nutrition in patients with advanced cancer and intestinal occlusion is safe, and in those who respond to chemotherapy, further administration of home parenteral nutrition together with chemotherapy may enhance prolonged survival. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Young People and Suicide—the College Scene | NIH MedlinePlus the Magazine
... why NIMH has been funding research on preventing suicide, depression, and other disorders among college students." Dr. Pearson ... should be of help: MedlinePlus: medlineplus.gov (Type "suicide" in the Search ... of Michigan's Depression Center (research funded, in part, by NIMH): www. ...
A simulation for the gated weir opening of Wonokromo River, Rungkut District, Surabaya
NASA Astrophysics Data System (ADS)
Handajani, N.; Wahjudijanto, I.; Mu'afi, M.
2018-01-01
A gated weir is a weir whose crest elevation can be adjusted according to the flow through the river; the upstream water level can be controlled by opening or closing the gate. This study applied a simulation with the HEC-RAS 4.0 program in order to determine the hydraulic condition of the river after the gated weir is built. From the rainfall intensity of each sub-watershed, the Log Pearson Type III distribution with a 50-year return period (Q50) was used to calculate the design flood discharge. Using the Rational Method, the design flood discharge is 470 m3/s. The results show that the capacity of the river is able to accommodate Q50 (470 m3/s) and that the gate should be fully opened during flood. This condition can pass the normal discharge at +5.00 m elevation.
Evaluation of statistical distributions to analyze the pollution of Cd and Pb in urban runoff.
Toranjian, Amin; Marofi, Safar
2017-05-01
Heavy metal pollution in urban runoff causes severe environmental damage. Identification of these pollutants and their statistical analysis is necessary to provide management guidelines. In this study, 45 continuous probability distribution functions were selected to fit the Cd and Pb data in the runoff events of an urban area during October 2014-May 2015. The sampling was conducted from the outlet of the city basin during seven precipitation events. For evaluation and ranking of the functions, we used the Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests. The results of the Cd analysis showed that Hyperbolic Secant, Wakeby and Log-Pearson 3 are suitable for frequency analysis of the event mean concentration (EMC), the instantaneous concentration series (ICS) and the instantaneous concentration of each event (ICEE), respectively. In addition, the LP3, Wakeby and Generalized Extreme Value functions were chosen for the EMC, ICS and ICEE related to Pb contamination.
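The selection step in a study like this amounts to fitting each candidate distribution and ranking the fits by a goodness-of-fit statistic. A minimal sketch of that workflow with scipy, assuming a hypothetical concentration array conc and only four candidate families (the study used 45, plus the Anderson-Darling test):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
conc = rng.lognormal(mean=-2.0, sigma=0.8, size=40)  # stand-in for Cd or Pb data

candidates = {
    "lognorm": stats.lognorm,
    "gumbel_r": stats.gumbel_r,
    "genextreme": stats.genextreme,
    "pearson3": stats.pearson3,
}

results = []
for name, dist in candidates.items():
    params = dist.fit(conc)                      # maximum-likelihood fit
    ks = stats.kstest(conc, dist.cdf, args=params)
    results.append((ks.statistic, name))

# Smaller KS statistic = better fit; the best-ranked family comes first.
for stat, name in sorted(results):
    print(f"{name:12s} KS = {stat:.3f}")
```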
The Use of OPAC in a Large Academic Library: A Transactional Log Analysis Study of Subject Searching
ERIC Educational Resources Information Center
Villen-Rueda, Luis; Senso, Jose A.; de Moya-Anegon, Felix
2007-01-01
The analysis of user searches in catalogs has been the topic of research for over four decades, involving numerous studies and diverse methodologies. The present study looks at how different types of users effect queries in the catalog of a university library. For this purpose, we analyzed log files to determine which was the most frequent type of…
Ding, Yi; Zang, Runguo; Lu, Xinghui; Huang, Jihong
2017-02-01
Historically, clear-cutting and selective logging have been the commercial logging practices. However, the effect of these pervasive timber extraction methods on biodiversity in tropical forests is still poorly understood. In this study, we compared abiotic factors, species diversity, community composition, and structure between a ca. 40-year-old clear-cut forest (MCC), a ca. 40-year-old selectively logged forest (MSL), and a tropical old growth montane rain forest (MOG) on Hainan Island, China. Results showed that there were a large number of trees with a diameter at breast height (DBH) < 30 cm in the two logged forests. Additionally, the two logged forests had only 40% of the basal area of the large trees (DBH ≥ 30 cm) found in the old growth forest. The species richness and Shannon-Wiener diversity indices generally showed no difference among the three forest types. MCC had 70% of the species richness of the large trees in the MOG, whereas MSL and MOG had similar species richness. High-value timber species had similar species richness among the three forest types, but a lower abundance and basal area of large trees in MCC. The species composition was distinct between the three forests. Large trees belonging to the family Fagaceae dominated in the logged forests and played a more important role in the old growth forest. Huge trees (DBH ≥ 70 cm) were rare in MCC, but were frequently found in MSL. Most abiotic factors varied inconsistently among the three forest types and few variables related to species diversity, community structure and composition. Our study indicated that MSL had a relatively faster recovery rate than MCC in a tropical montane rain forest after 40 years, but both logged forests had a high recovery potential over the long term. Copyright © 2016 Elsevier B.V. All rights reserved.
Paseiro-Cerrato, Rafael; Tongchat, Chinawat; Franz, Roland
2016-05-01
This study evaluated the influence of parameters such as temperature and type of low-density polyethylene (LDPE) film on the log Kp/f values of seven model migrants in food simulants. Two different types of LDPE films contaminated by extrusion and immersion were placed in contact with three food simulants including 20% ethanol, 50% ethanol and olive oil under several time-temperature conditions. Results suggest that most log Kp/f values are little affected by these parameters in this study. In addition, the relation between log Kp/f and log Po/w was established for each food simulant and regression lines, as well as correlation coefficients, were calculated. Correlations were compared with data from real foodstuffs. Data presented in this study could be valuable in assigning certain foods to particular food simulants as well as predicting the mass transfer of potential migrants into different types of food or food simulants, avoiding tedious and expensive laboratory analysis. The results could be especially useful for regulatory agencies as well as for the food industry.
The human foramen magnum--normal anatomy of the cisterna magna in adults.
Whitney, Nathaniel; Sun, Hai; Pollock, Jeffrey M; Ross, Donald A
2013-11-01
The goal of this study was to radiologically describe the anatomical characteristics of the cisterna magna (CM) with regard to presence, dimension, and configuration. In this retrospective study, 523 records were reviewed. We defined five CM types, the range of which covered all normal variants found in the study population. Characteristics of the CM were recorded and correlations between various posterior fossa dimensions and CM volume determined. There were 268 female (mean age 50.9 ± 16.9 years) and 255 male (mean age 54.1 ± 15.8 years) patients. CM volume was smaller in females than in males and correlated with age (Pearson correlation, r = 0.1494, p = 0.0006) and gender (unpaired t test, r² = 0.0608, p < 0.0001). Clivus length correlated with CM volume (Pearson correlation, r = 0.211, p < 0.0001) and gender (unpaired t test, r² = 0.2428, p < 0.0001). Tentorial angle did not correlate with CM volume (Pearson correlation, r = -0.0609, p = 0.1642) but did correlate with gender (unpaired t test, r² = 0.0163, p = 0.0035). The anterior-posterior dimension of cerebrospinal fluid anterior to the brainstem correlated with CM volume (Pearson correlation, r = 0.181, p < 0.0001) and gender (unpaired t test, r² = 0.0205, p = 0.001). The anatomical description and simple classification system we define allows for a more precise description of posterior fossa anatomy and could potentially contribute to the understanding of Chiari malformation anatomy and management.
Factors affecting the merchandising of hardwood logs in the southern tier of New York
John E. Wagner; Bryan Smalley; William Luppold
2004-01-01
In many areas of the eastern United States, hardwood boles are sawn into logs and then separated by product before proceeding to further processing. This type of product merchandising is facilitated by large differences in the relative value of hardwood logs of different species and grades. The objective of this study was to analyze the factors influencing the...
R.B. Gardner
1966-01-01
Describes a typical logging system used in the Lake and Northeastern States, discusses each step in the operation, and presents a simple method for designing an efficient logging system for such an operation. Points out that a system should always be built around the key piece of equipment, which is usually the skidder. Specific equipment types and their production...
Bundling Logging Residues with a Modified John Deere B-380 Slash Bundler
Dana Mitchell
2011-01-01
A basic problem with processing biomass in the woods is that the machinery must be matched to the final product. If a logging business owner invests in a machine to produce a specific type of biomass product for a limited market, the opportunity for that logging business owner to diversify products to take advantage of market opportunities may also be limited. When...
C.B. LeDoux; J.E. Baumgras
1991-01-01
The impact of selected site and stand attributes on stand management is demonstrated using actual forest model plot data and a complete systems simulation model called MANAGE. The influence of terrain on the type of logging technology required to log a stand and the resulting impact on stand management is also illustrated. The results can be used by managers and...
NASA Astrophysics Data System (ADS)
Pérez-Sánchez, Julio; Senent-Aparicio, Javier
2017-08-01
Dry spells are an essential concept of drought climatology that clearly defines the semiarid Mediterranean environment and whose consequences are a defining feature for an ecosystem so vulnerable with regard to water. The present study was conducted to characterize rainfall drought in the Segura River basin in eastern Spain, an area marked by the strongly seasonal nature of these latitudes. A daily precipitation set was utilized for 29 weather stations over a period of 20 years (1993-2013). Four sets of dry spell length (complete series, monthly maximum, seasonal maximum, and annual maximum) were fitted for all the weather stations with the following probability distribution functions: Burr, Dagum, error, generalized extreme value, generalized logistic, generalized Pareto, Gumbel Max, inverse Gaussian, Johnson SB, Log-Logistic, Log-Pearson 3, Triangular, Weibull, and Wakeby. Only the series of annual maximum spells offered a good adjustment for all the weather stations, with Wakeby emerging as the best fit (mean p value of 0.9424 for the Kolmogorov-Smirnov test at the 0.2 significance level). Maps of dry spell duration for return periods of 2, 5, 10, and 25 years reveal a northeast-southeast gradient, with longer periods of rainfall below 0.1 mm in the eastern third of the basin, in the proximity of the Mediterranean slope.
Tey, Yao Hsien; Jong, Koa-Jen; Fen, Shin-Yuan; Wong, Hin-Chung
2015-05-01
The occurrence of Vibrio parahaemolyticus, Vibrio vulnificus, and Vibrio cholerae in a total of 72 samples from six aquaculture ponds for groupers, milk fish, and tilapia in southern Taiwan was examined by the membrane filtration and colony hybridization method. The halophilic V. parahaemolyticus was only recovered in seawater ponds, with a high isolation frequency of 86.1% and a mean density of 2.6 log CFU/g. V. cholerae was found in both the seawater and freshwater ponds but preferentially in freshwater ponds, with a frequency of 72.2% and a mean density of 1.65 log CFU/g. V. vulnificus was identified mainly in seawater ponds, with an isolation frequency of 27.8%. The density of V. parahaemolyticus in seawater ponds was positively related to water temperature (Pearson correlation coefficient, r = 0.555) and negatively related to salinity (r = -0.333). The density of V. cholerae in all six ponds was positively related to water temperature (r = 0.342) and negatively related to salinity (r = -0.432). Two putatively pathogenic tdh(+) V. parahaemolyticus isolates (1.4% of the samples) and no ctx(+) V. cholerae isolates were identified. The experimental results may facilitate assessments of the risk posed by these pathogenic Vibrio species in Taiwan, where aquaculture provides a large part of the seafood supply.
Highbarger, Helene C.; Alvord, W. Gregory; Jiang, Min Kang; Shah, Akram S.; Metcalf, Julia A.; Lane, H. Clifford; Dewar, Robin L.
1999-01-01
This study evaluated correlation and agreement between version 3 of the Quantiplex human immunodeficiency virus type 1 (HIV-1) RNA assay (v3 branched DNA [bDNA]) and a sensitized Amplicor HIV-1 Monitor assay (reverse transcription [RT]-PCR) for the measurement of HIV RNA. Three hundred eighteen samples from 59 randomly selected, HIV-1-seropositive persons on various drug protocols from the National Institute of Allergy and Infectious Diseases HIV outpatient clinic were studied. The results indicate that v3 bDNA and RT-PCR are highly correlated (r = 0.98) and are in good agreement (mean difference in log10 copies/ml ± 2 standard deviations = 0.072 ± 0.371). The relationship between values obtained by both assays is given by the following equation: log10(v3 bDNA) = −0.0915 + 1.0052 · log10(RT-PCR). This represents a 1.026-fold difference between log10(RT-PCR) values and log10(v3 bDNA) values. PMID:10523562
Chromatographic removal combined with heat, acid and chaotropic inactivation of four model viruses.
Valdés, R; Ibarra, Neysi; Ruibal, I; Beldarraín, A; Noa, E; Herrera, N; Alemán, R; Padilla, S; Garcia, J; Pérez, M; Morales, R; Chong, E; Reyes, B; Quiñones, Y; Agraz, A; Herrera, L
2002-07-03
The virus removal capacity of protein A affinity chromatography and the inactivation capacities of acid pH and of a combination of high temperature with a chaotropic agent were determined in this work. The model viruses studied were Sendai virus, human immunodeficiency virus (HIV-IIIb), human poliovirus type-II, human herpesvirus I and canine parvovirus. The protein A affinity chromatography showed a maximum reduction factor of 8 logs in the case of viruses larger than 120 nm in size, while for small viruses (18-30 nm) the maximum reduction factor was about 5 logs. No viral inactivation was observed during the monoclonal antibody elution step. Low pH treatment showed a maximum inactivation factor of 7.1 logs for enveloped viruses. However, a weak inactivation factor (3.4 logs) was obtained for DNA nonenveloped viruses. The combination of high temperature with 3 M KSCN showed a high inactivation factor for all of the viruses studied. The total clearance factor was 23.1, 15.1, 13.6, 20.0 and 16.0 logs for Sendai virus, HIV-IIIb, human poliovirus type-II, human herpesvirus I and canine parvovirus, respectively.
NASA Astrophysics Data System (ADS)
Zhang, Y.; Hu, C.; Wang, M.
2017-12-01
The evaluation of total organic carbon (TOC) in shale using logging data is one of the most crucial steps in shale gas exploration. However, the 'ΔlogR' method did not achieve the desired results when applied to the Longmaxi Formation shale of the Sichuan Basin; the reason may be organic matter carbonization in the Longmaxi Formation. An improved evaluation method was proposed, using a classification by lithology and sedimentary structure: 1) silty mudstone (wellsite logging data show silty); 2) calcareous mudstone (calcareous content > 25%); 3) laminated mudstone (laminations recognized by core and imaging logging technology); 4) massive mudstone (massive textures recognized by core and imaging logging technology). This study compares two logging evaluation methods for measuring TOC in shale: the ΔlogR method and the newly proposed method. The results showed that the correlation coefficient between the calculated TOC and the tested TOC, based on the ΔlogR method, was only 0.17, whereas the correlation coefficient obtained with the new method reached 0.80. Because of the good correlation between lithology and sedimentary structure zones and the TOC of different types of shale, shale reservoirs could be graded according to the four shale types. The newly proposed method is more efficient, faster, and has higher vertical resolution than the ΔlogR method. In addition, new software has been completed. The method was found to be especially effective under conditions of insufficient data during the early stages of shale gas exploration in the Silurian Longmaxi Formation, Muai Syncline Belt, south of the Sichuan Basin.
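The ΔlogR technique referred to here is conventionally computed from a resistivity curve overlaid on a sonic porosity curve, in the form published by Passey and co-workers. A minimal sketch under that assumption; the baseline picks and the LOM maturity value are hypothetical inputs an analyst would choose per well:

```python
import numpy as np

def delta_log_r_toc(rt, dt, rt_base, dt_base, lom):
    """TOC from a Passey-style deltaLogR overlay.

    rt               : deep resistivity (ohm-m)
    dt               : sonic transit time (us/ft)
    rt_base, dt_base : baseline values picked in a non-source shale
    lom              : level of organic metamorphism (maturity indicator)
    """
    dlogr = np.log10(rt / rt_base) + 0.02 * (dt - dt_base)
    return dlogr * 10 ** (2.297 - 0.1688 * lom)

# Hypothetical log samples for one depth interval.
rt = np.array([20.0, 80.0, 150.0])   # ohm-m
dt = np.array([75.0, 90.0, 100.0])   # us/ft
print(delta_log_r_toc(rt, dt, rt_base=10.0, dt_base=70.0, lom=10.5))
```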
Estimating magnitude and frequency of floods using the PeakFQ 7.0 program
Veilleux, Andrea G.; Cohn, Timothy A.; Flynn, Kathleen M.; Mason, Jr., Robert R.; Hummel, Paul R.
2014-01-01
Flood-frequency analysis provides information about the magnitude and frequency of flood discharges based on records of annual maximum instantaneous peak discharges collected at streamgages. The information is essential for defining flood-hazard areas, for managing floodplains, and for designing bridges, culverts, dams, levees, and other flood-control structures. Bulletin 17B (B17B) of the Interagency Advisory Committee on Water Data (IACWD; 1982) codifies the standard methodology for conducting flood-frequency studies in the United States. B17B specifies that annual peak-flow data are to be fit to a log-Pearson Type III distribution. Specific methods are also prescribed for improving skew estimates using regional skew information, tests for high and low outliers, adjustments for low outliers and zero flows, and procedures for incorporating historical flood information. The authors of B17B identified various needs for methodological improvement and recommended additional study. In response to these needs, the Advisory Committee on Water Information (ACWI, successor to IACWD; http://acwi.gov/), through its Subcommittee on Hydrology (SOH) and Hydrologic Frequency Analysis Work Group (HFAWG), has recommended modest changes to B17B. These changes include adoption of a generalized method-of-moments estimator denoted the Expected Moments Algorithm (EMA) (Cohn and others, 1997) and a generalized version of the Grubbs-Beck test for low outliers (Cohn and others, 2013). The SOH requested that the USGS implement these changes in a user-friendly, publicly accessible program.
Estimation of Magnitude and Frequency of Floods for Streams on the Island of Oahu, Hawaii
Wong, Michael F.
1994-01-01
This report describes techniques for estimating the magnitude and frequency of floods for the island of Oahu. The log-Pearson Type III distribution and methodology recommended by the Interagency Committee on Water Data were used to determine the magnitude and frequency of floods at 79 gaging stations that had 11 to 72 years of record. Multiple regression analysis was used to construct regression equations to transfer the magnitude and frequency information from gaged sites to ungaged sites. Oahu was divided into three hydrologic regions to define relations between peak discharge and drainage-basin and climatic characteristics. Regression equations are provided to estimate the 2-, 5-, 10-, 25-, 50-, and 100-year peak discharges at ungaged sites. Significant basin and climatic characteristics included in the regression equations are drainage area, median annual rainfall, and the 2-year, 24-hour rainfall intensity. Drainage areas for sites used in this study ranged from 0.03 to 45.7 square miles. Standard error of prediction for the regression equations ranged from 34 to 62 percent. Peak-discharge data collected through water year 1988, geographic information system (GIS) technology, and generalized least-squares regression were used in the analyses. The use of GIS seems to be a more flexible and consistent means of defining and calculating basin and climatic characteristics than using manual methods. Standard errors of estimate for the regression equations in this report are an average of 8 percent less than those published in previous studies.
Kistler, Magdalena; Schmidl, Christoph; Padouvas, Emmanuel; Giebl, Heinrich; Lohninger, Johann; Ellinger, Reinhard; Bauer, Heidi; Puxbaum, Hans
2012-01-01
In this study, we investigated the emissions, including odor, from log wood stoves, burning wood types indigenous to mid-European countries such as Austria, Czech Republic, Hungary, Slovak Republic, Slovenia, Switzerland, as well as Baden-Württemberg and Bavaria (Germany) and South Tyrol (Italy). The investigations were performed with a modern, certified, 8 kW, manually fired log wood stove, and the results were compared to emissions from a modern 9 kW pellet stove. The examined wood types were deciduous species: black locust, black poplar, European hornbeam, European beech, pedunculate oak (also known as “common oak”), sessile oak, turkey oak and conifers: Austrian black pine, European larch, Norway spruce, Scots pine, silver fir, as well as hardwood briquettes. In addition, “garden biomass” such as pine cones, pine needles and dry leaves were burnt in the log wood stove. The pellet stove was fired with softwood pellets. The composite average emission rates for log wood and briquettes were 2030 mg MJ−1 for CO; 89 mg MJ−1 for NOx, 311 mg MJ−1 for CxHy, 67 mg MJ−1 for particulate matter PM10 and average odor concentration was at 2430 OU m−3. CO, CxHy and PM10 emissions from pellets combustion were lower by factors of 10, 13 and 3, while considering NOx – comparable to the log wood emissions. Odor from pellets combustion was not detectable. CxHy and PM10 emissions from garden biomass (needles and leaves) burning were 10 times higher than for log wood, while CO and NOx rise only slightly. Odor levels ranged from not detectable (pellets) to around 19,000 OU m−3 (dry leaves). The odor concentration correlated with CO, CxHy and PM10. For log wood combustion average odor ranged from 536 OU m−3 for hornbeam to 5217 OU m−3 for fir, indicating a considerable influence of the wood type on odor concentration. PMID:23471123
Neyman-Pearson classification algorithms and NP receiver operating characteristics
Tong, Xin; Feng, Yang; Li, Jingyi Jessica
2018-01-01
In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (that is, the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, α, on the type I error. Despite its century-long history in hypothesis testing, the NP paradigm has not been well recognized and implemented in classification schemes. Common practices that directly limit the empirical type I error to no more than α do not satisfy the type I error control objective because the resulting classifiers are likely to have type I errors much larger than α, and the NP paradigm has not been properly implemented in practice. We develop the first umbrella algorithm that implements the NP paradigm for all scoring-type classification methods, such as logistic regression, support vector machines, and random forests. Powered by this algorithm, we propose a novel graphical tool for NP classification methods: NP receiver operating characteristic (NP-ROC) bands motivated by the popular ROC curves. NP-ROC bands will help choose α in a data-adaptive way and compare different NP classifiers. We demonstrate the use and properties of the NP umbrella algorithm and NP-ROC bands, available in the R package nproc, through simulation and real data studies. PMID:29423442
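The heart of the umbrella algorithm is a thresholding rule: pick an order statistic of held-out class-0 scores so that, by a binomial tail bound, the type I error exceeds α with probability at most δ. A minimal sketch of that rule as I read it from the paper; the authors' full implementation is the R package nproc:

```python
import numpy as np
from scipy.stats import binom

def np_threshold(scores0, alpha=0.05, delta=0.05):
    """Pick a classification threshold from held-out class-0 scores.

    Classify as class 1 when score > threshold. The k-th smallest
    order statistic is chosen so that P(type I error > alpha) <= delta.
    """
    s = np.sort(np.asarray(scores0))
    n = s.size
    for k in range(1, n + 1):
        # P(X_(k) below the (1-alpha)-quantile) = P(Bin(n, 1-alpha) >= k)
        if binom.sf(k - 1, n, 1 - alpha) <= delta:
            return s[k - 1]
    raise ValueError("not enough class-0 points for this alpha and delta")

rng = np.random.default_rng(2)
scores0 = rng.normal(0, 1, 500)   # scores of held-out class-0 data
t = np_threshold(scores0, alpha=0.05, delta=0.05)
print(f"threshold={t:.3f}, empirical type I error={np.mean(scores0 > t):.3f}")
```

Note that the empirical type I error at the chosen threshold is typically below α; that conservatism is exactly the high-probability guarantee the paper argues naive thresholding lacks.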
NASA Astrophysics Data System (ADS)
Saghafian, B.; Mohammadi, A.
2003-04-01
Most studies involving water resources allocation, water quality, hydropower generation, and allowable water withdrawal and transfer require estimation of low flows. Normally, frequency analysis of at-station D-day low flow data is performed to derive various T-yr return period values. However, this analysis is restricted to the locations of hydrometric stations where the flow discharge is measured. Regional analysis is therefore conducted to relate the at-station low flow quantiles to watershed characteristics. This enables the transposition of low flow quantiles to ungauged sites. Nevertheless, a procedure to map the regional regression relations for the entire stream network, within the bounds of the relations, is particularly helpful when one studies and weighs alternative sites for a water resources project. In this study, we used a GIS-aided procedure for low flow mapping in Gilan province, part of the northern region of Iran. Gilan enjoys a humid climate with an average of 1100 mm annual precipitation. Although rich in water resources, the highly populated area is quite dependent on a minimum amount of water to sustain the vast rice farming and to maintain the flow discharge required for water quality purposes. To carry out the low flow analysis, a total of 36 hydrometric stations with sufficient and reliable discharge data were identified in the region. The average area of the watersheds was 250 sq. km. Log Pearson type 3 was found to be the best distribution for flow durations over 60 days, while the log normal fitted the shorter duration series well. Low flows with return periods of 2, 5, 10, 25, 50, and 100 years were then computed. Cluster analysis identified two homogeneous areas. Although various watershed parameters were examined in factor analysis, the results showed that watershed area, length of the main stream, and annual precipitation were the most effective low flow parameters. The regression equations were then mapped with the aid of GIS, based on flow accumulation maps and the corresponding spatially averaged values of other parameters over the upslope area of all stream pixels exceeding a certain threshold area. Such a map clearly shows the spatial variation of low flow quantiles along the stream network and enables the study of low flow profiles along any stream.
Pelle, Aline J; van den Broek, Krista C; Szabó, Balázs; Kupper, Nina
2010-11-05
Psychological factors, like Type D personality (i.e., the tendency to experience negative emotions and to inhibit emotional distress) have been linked to impaired health outcomes. Criticism on the role of psychological factors in cardiac disease has postulated that such constructs may be confounded by disease severity. Hence, we examined whether Type D personality is associated with brain natriuretic peptide (BNP), a sensitive marker of disease severity in chronic heart failure (CHF), in 202 consecutive CHF outpatients. No differences in logBNP levels were found between Type D and non-Type D patients (t(200) = -1.03, p = .30). After adjusting for demographic and clinical confounders, Type D personality remained unassociated with logBNP levels (β=.04, p = .55), whereas older age (β = .27, p<.001), being prescribed beta-blockers (β = .15, p = .02), lower left ventricular ejection fraction (β = -.38, p<.0001), and kidney dysfunction (β = .17, p = .01) were associated with higher logBNP. To conclude, Type D personality was not associated with BNP in CHF outpatients, whereas clinical variables were associated with BNP. These findings oppose the suggestion that Type D personality is confounded by indicators of disease severity. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owen, R. K.
2007-04-04
A perl module designed to read and parse the voluminous set of event or accounting log files produced by a Portable Batch System (PBS) server. This module can filter on date-time and/or record type. The data can be returned in a variety of formats.
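PBS accounting logs are conventionally semicolon-delimited lines of the form datetime;record-type;id;key=value .... A minimal Python sketch of the filtering this module describes, assuming that format and a hypothetical file name; it is an illustration, not the OSTI module itself:

```python
import datetime as dt

def parse_pbs_accounting(path, record_types=None, since=None):
    """Yield (timestamp, record_type, job_id, attrs) tuples from a
    PBS-style accounting log, optionally filtering on type and date."""
    with open(path) as fh:
        for line in fh:
            try:
                stamp, rtype, job_id, rest = line.rstrip("\n").split(";", 3)
                when = dt.datetime.strptime(stamp, "%m/%d/%Y %H:%M:%S")
            except ValueError:
                continue  # skip malformed lines
            if record_types and rtype not in record_types:
                continue
            if since and when < since:
                continue
            attrs = dict(kv.split("=", 1) for kv in rest.split() if "=" in kv)
            yield when, rtype, job_id, attrs

# Example: count job-end ("E") records since a given date (hypothetical file).
n = sum(1 for _ in parse_pbs_accounting(
    "pbs_accounting.log", record_types={"E"},
    since=dt.datetime(2007, 1, 1)))
print(n, "job-end records")
```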
Kumagai, S; Kai, Y; Hanada, H; Uezono, K; Sasaki, H
2002-10-01
The purpose of the present study was to investigate the relationships among the resting systolic (SBP) and diastolic blood pressure (DBP) or SBP response during exercise with insulin resistance evaluated by a homeostasis model (HOMA-IR), abdominal fat accumulation (visceral fat area [VFA], subcutaneous fat area [SFA]) by computed tomography (CT), and an estimation of the maximal oxygen uptake (VO2max) in 63 Japanese middle-aged male patients with type 2 diabetes mellitus (type 2 DM). Body mass index (BMI) and waist-to-hip ratio (WHR) in type 2 DM subjects were significantly higher than in age-matched healthy male control subjects (n = 135) with normal glucose tolerance. Resting SBP (127.7 +/- 16.2 mm Hg v 119.4 +/- 13.0 mm Hg) and DBP (82.2 +/- 11.9 mm Hg v 76.8 +/- 9.4 mm Hg) levels, and the percentage of hypertension (20.6% v 1.5%) in type 2 DM subjects were significantly higher than in the control subjects (P <.05). According to a multiple regression analysis for resting blood pressure in type 2 DM, VFA was found to be an independent predictor of SBP, while VO2max and HOMA-IR were independent predictors of DBP. In the controls, however, HOMA-IR was not found to be a significantly independent predictor for either resting SBP or resting DBP. Measurement of the SBP response during graded exercise using a ramp test was performed with an electrically braked cycle ergometer in 54 patients with type 2 DM only. The SBP was measured at 15-second intervals during exercise. The exercise intensity at the double product breaking point (DPBP), which strongly correlated with the exercise intensity at the lactate threshold, was used as an index for the SBP response to standardized exercise intensity. The SBP corresponding to the exercise intensity at DPBP (SBP@DPBP) was evaluated as an index of the SBP response to standardized exercise intensity. The change in SBP (ΔSBP = SBP@DPBP - resting SBP) was significantly and positively associated with the log area under the curve for glucose (log AUCPG) during a 75-g oral glucose tolerance test (OGTT). In addition, ΔSBP significantly and negatively correlated with the log area under the curve for insulin (log AUCIRI) and log AUCIRI/log AUCPG. Based on these results, insulin resistance was suggested to be independently associated with the resting DBP and the SBP response to standardized exercise intensity in type 2 DM patients. Copyright 2002, Elsevier Science (USA). All rights reserved.
Heintze, S D; Zellweger, G; Cavalleri, A; Ferracane, J
2006-02-01
The aim of the study was to evaluate two ceramic materials as possible substitutes for enamel using two wear simulation methods, and to compare both methods with regard to the wear results for different materials. Flat specimens (OHSU n=6, Ivoclar n=8) of one compomer and three composite materials (Dyract AP, Tetric Ceram, Z250, experimental composite) were fabricated and subjected to wear using two different wear testing methods and two pressable ceramic materials as stylus (Empress, experimental ceramic). For the OHSU method, enamel styli of the same dimensions as the ceramic stylus were fabricated additionally. Both wear testing methods differ with regard to loading force, lateral movement of stylus, stylus dimension, number of cycles, thermocycling and abrasive medium. In the OHSU method, the wear facets (mean vertical loss) were measured using a contact profilometer, while in the Ivoclar method (maximal vertical loss) a laser scanner was used for this purpose. Additionally, the vertical loss of the ceramic stylus was quantified for the Ivoclar method. The results obtained from each method were compared by ANOVA and Tukey's test (p<0.05). To compare both wear methods, the log-transformed data were used to establish relative ranks between material/stylus combinations and assessed by applying the Pearson correlation coefficient. The experimental ceramic material generated significantly less wear in Tetric Ceram and Z250 specimens compared to the Empress stylus in the Ivoclar method, whereas with the OHSU method, no difference between the two ceramic antagonists was found with regard to abrasion or attrition. The wear generated by the enamel stylus was not statistically different from that generated by the other two ceramic materials in the OHSU method. With the Ivoclar method, wear of the ceramic stylus was only statistically different when in contact with Tetric Ceram. There was a close correlation between the attrition wear of the OHSU and the wear of the Ivoclar method (Pearson coefficient 0.83, p=0.01). Pressable ceramic materials can be used as a substitute for enamel in wear testing machines. However, material ranking may be affected by the type of ceramic material chosen. The attrition wear of the OHSU method was comparable with the wear generated with the Ivoclar method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardwell, R.K.; Norris, J.W.
1996-12-31
Three different types of turbidite plays have been drilled in the Yinggehai and Qiongdongnan basins of the South China Sea: slope fan turbidites, bottomset turbidites, and channel fill turbidites. Each play type has a distinctive well log signature, lithology, seismic reflector geometry, and reservoir character. Slope fan turbidites are encountered in the YA 21-1-3 well. Well logs are characterized by a ratty SP curve, and mud logs indicate that the turbidites are composed of up to 80 m of sands and silts. Seismic profiles show that these turbidites are found in a distributary channel and levee system on the shelf. Bottomset turbidites are encountered in the LD 15-1-1 well. Well logs are characterized by an upward coarsening SP curve, and mud logs indicate that the turbidites are composed of up to 10 m of silty sand. Seismic profiles show these turbidites are deposited by the slumping of shelf sands during a continuous lowstand progradation. Channel fill turbidites are encountered in the LD 30-1-1 well. Well logs are characterized by a blocky SP curve, and mud logs indicate that the turbidites are composed of up to 100 m of massive sand. Seismic profiles show that these turbidites are associated with channel systems that trend parallel to the local basin axis. Distinct cut and fill geometries indicate that the turbidite sands were deposited in a preexisting channel cut.
User's Manual for Program PeakFQ, Annual Flood-Frequency Analysis Using Bulletin 17B Guidelines
Flynn, Kathleen M.; Kirby, William H.; Hummel, Paul R.
2006-01-01
Estimates of flood flows having given recurrence intervals or probabilities of exceedance are needed for design of hydraulic structures and floodplain management. Program PeakFQ provides estimates of instantaneous annual-maximum peak flows having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (annual-exceedance probabilities of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002, respectively). As implemented in program PeakFQ, the Pearson Type III frequency distribution is fit to the logarithms of instantaneous annual peak flows following Bulletin 17B guidelines of the Interagency Advisory Committee on Water Data. The parameters of the Pearson Type III frequency curve are estimated by the logarithmic sample moments (mean, standard deviation, and coefficient of skewness), with adjustments for low outliers, high outliers, historic peaks, and generalized skew. This documentation provides an overview of the computational procedures in program PeakFQ, provides a description of the program menus, and provides an example of the output from the program.
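Before B17B's refinements (outlier tests, historic-peak adjustment, weighted skew), the core computation is a method-of-moments Pearson Type III fit to the log10 peaks. A minimal sketch of that baseline step, with an illustrative peak-flow record; it is not the PeakFQ code:

```python
import numpy as np
from scipy import stats

def lp3_quantiles(peaks_cfs, aep=(0.5, 0.2, 0.1, 0.04, 0.02, 0.01, 0.005, 0.002)):
    """Log-Pearson Type III flood quantiles via sample moments of log10 peaks.

    Skips B17B refinements (low/high outliers, historic peaks, weighted skew).
    Returns {annual exceedance probability: discharge}.
    """
    x = np.log10(np.asarray(peaks_cfs, dtype=float))
    m, s = x.mean(), x.std(ddof=1)
    g = stats.skew(x, bias=False)            # station skew of the logs
    out = {}
    for p in aep:
        k = stats.pearson3.ppf(1 - p, g)     # frequency factor K(g, p)
        out[p] = 10 ** (m + k * s)
    return out

peaks = [4100, 6800, 3100, 9800, 5200, 7600, 2800, 12400,
         6100, 4500, 8800, 3900, 5700, 10200, 7100]   # illustrative record
for p, q in lp3_quantiles(peaks).items():
    print(f"AEP {p:>6}: {q:,.0f} cfs")
```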
Rusch, Gordon K.
1976-01-06
An improved log N amplifier type nuclear reactor period meter with reduced probability for noise-induced scrams is provided. With the reactor at low power levels a sampling circuit is provided to determine the reactor period by measuring the finite change in the amplitude of the log N amplifier output signal for a predetermined time period, while at high power levels, differentiation of the log N amplifier output signal provides an additional measure of the reactor period.
The application of PGNAA borehole logging for copper grade estimation at Chuquicamata mine.
Charbucinski, J; Duran, O; Freraut, R; Heresi, N; Pineyro, I
2004-05-01
The field trials of a prompt gamma neutron activation (PGNAA) spectrometric logging method and instrumentation (SIROLOG) for copper grade estimation in production holes of a porphyry type copper ore mine, Chuquicamata in Chile, are described. Examples of data analysis, calibration procedures and copper grade profiles are provided. The field tests have proved the suitability of the PGNAA logging system for in situ quality control of copper ore.
Andersson, Jon; Hjältén, Joakim; Dynesius, Mats
2015-01-01
The increasing demand for biofuels from logging residues requires serious attention to the importance of dead wood substrates on clear-cuts for the many forestry-intolerant saproxylic (wood-inhabiting) species. In particular, the emerging harvest of low stumps motivates further study of these substrates. On ten clear-cuts we compared the species richness, abundance and species composition of saproxylic beetles hatching from four- to nine-year-old low stumps, high stumps and logs of Norway spruce. Using emergence traps, we collected a total of 2,670 saproxylic beetles among 195 species during the summers of 2006, 2007 and 2009. We found that the species assemblages differed significantly between high stumps and logs in all three years. The species assemblages of low stumps, on the other hand, were intermediate to those found in logs and high stumps. There were also significant differences in species richness between the three examined years, and we found a significant effect of substrate type on the richness of predators and fungivores. As shown in previous studies, low stumps on clear-cuts can sustain large numbers of different saproxylic beetles, including red-listed species. In addition, our study highlights a possible problem in creating just one type of substrate as a tool for conservation in forestry. Species assemblages in high stumps did not differ significantly from those found in low stumps. Instead logs, which constitute a scarcer substrate type on clear-cuts, provided habitat for a more distinct assemblage of saproxylic species than high stumps. It can therefore be questioned whether high stumps are an optimal tool for nature conservation in clear-cutting forestry. Our results also indicate that low stumps constitute an equally important substrate as high stumps and logs, and we therefore suggest that stump harvesting is done after carefully evaluating measures to provide habitat for saproxylic organisms.
FRACTIONAL PEARSON DIFFUSIONS.
Leonenko, Nikolai N; Meerschaert, Mark M; Sikorskii, Alla
2013-07-15
Pearson diffusions are governed by diffusion equations with polynomial coefficients. Fractional Pearson diffusions are governed by the corresponding time-fractional diffusion equation. They are useful for modeling sub-diffusive phenomena, caused by particle sticking and trapping. This paper provides explicit strong solutions for fractional Pearson diffusions, using spectral methods. It also presents stochastic solutions, using a non-Markovian inverse stable time change.
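For readers who want the equations behind this summary, a standard way to write the governing equations in this literature is the following; the symbols are generic, not copied from the paper:

```latex
% Pearson diffusion: stochastic solution of a Kolmogorov equation whose
% drift is linear and whose squared diffusion coefficient is quadratic.
\[
  \frac{\partial p(x,t)}{\partial t}
    = -\frac{\partial}{\partial x}\bigl[\mu(x)\,p(x,t)\bigr]
      + \frac{1}{2}\frac{\partial^2}{\partial x^2}\bigl[\sigma^2(x)\,p(x,t)\bigr],
  \qquad
  \mu(x)=a_0+a_1 x,\quad \sigma^2(x)=b_0+b_1 x+b_2 x^2 .
\]
% Fractional Pearson diffusion: replace the first time derivative by a
% Caputo fractional derivative of order 0 < beta < 1.
\[
  \frac{\partial^{\beta} p(x,t)}{\partial t^{\beta}}
    = -\frac{\partial}{\partial x}\bigl[\mu(x)\,p(x,t)\bigr]
      + \frac{1}{2}\frac{\partial^2}{\partial x^2}\bigl[\sigma^2(x)\,p(x,t)\bigr].
\]
```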
Correlation Structure of Fractional Pearson Diffusions.
Leonenko, Nikolai N; Meerschaert, Mark M; Sikorskii, Alla
2013-09-01
The stochastic solution to a diffusion equation with polynomial coefficients is called a Pearson diffusion. If the first time derivative is replaced by a Caputo fractional derivative of order less than one, the stochastic solution is called a fractional Pearson diffusion. This paper develops an explicit formula for the covariance function of a fractional Pearson diffusion in steady state, in terms of Mittag-Leffler functions. That formula shows that fractional Pearson diffusions are long range dependent, with a correlation that falls off like a power law, whose exponent equals the order of the fractional derivative.
Zero Pearson coefficient for strongly correlated growing trees.
Dorogovtsev, S N; Ferreira, A L; Goltsev, A V; Mendes, J F F
2010-03-01
We obtained Pearson's coefficient of strongly correlated recursive networks growing by preferential attachment of every new vertex by m edges. We found that the Pearson coefficient is exactly zero in the infinite network limit for the recursive trees (m=1). If the number of connections of new vertices exceeds one (m>1), then the Pearson coefficient in the infinite networks equals zero only when the degree distribution exponent γ does not exceed 4. We calculated the Pearson coefficient for finite networks and observed a slow power-law-like approach to the infinite network limit. Our findings indicate that Pearson's coefficient strongly depends on the size and details of networks, which makes this characteristic virtually useless for quantitative comparison of different networks.
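The size dependence noted here is easy to probe numerically, since networkx exposes the degree-based Pearson (assortativity) coefficient directly. A minimal sketch using Barabási-Albert preferential attachment as a stand-in for the growing networks studied:

```python
import networkx as nx

# Degree-degree Pearson coefficient of preferential-attachment networks
# of increasing size; for m = 1 (trees) it should drift toward zero.
for n in (100, 1000, 10000):
    for m in (1, 2):
        g = nx.barabasi_albert_graph(n, m, seed=42)
        r = nx.degree_pearson_correlation_coefficient(g)
        print(f"n={n:>6} m={m}: Pearson r = {r:+.3f}")
```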
A comparison of the survival of F+RNA and F+DNA coliphages in lake water microcosms.
Long, Sharon C; Sobsey, Mark D
2004-03-01
The survival of seven F+RNA phages (MS2 Group I ATCC type strain, two Group I environmental isolates, a Group II environmental isolate, a Group III environmental isolate, and two Group IV environmental isolates) and six F+DNA phages (M13, fd, f1, and ZJ/2 ATCC type strains, and two environmental isolates) were examined in microcosms using a surface drinking water source. Phages were spiked into replicate aliquots of a source water at about 20,000 pfu/ml. Replicate spikes were incubated at 4 and 20 degrees C and monitored for 110 days. At 4 degrees C, Groups I and II F+ RNA phages were detectable through 110 days, with reductions of about 1 and 3 log10, respectively. The Group III F+RNA phage demonstrated 5 log10 reduction after 3 weeks, and the Group IV F+RNA phages were reduced to detection limits (5 log10 reduction) within 10 days. Of the F+DNA phages, all four type strains were detectable with about 2.5 log10 reduction after 110 days at 4 degrees C. The F+DNA environmental isolates were detectable with about a 4 log10 reduction after 110 days at 4 degrees C. All phages demonstrated faster decay at 20 degrees C. These results suggest that differences in F+ phage survival may influence their prevalence in environmental waters and the ability to attribute their prevalence to specific human and animal sources of faecal contamination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szabo, G.; Bulman, R.A.
The determination of soil adsorption coefficients (Koc) via HPLC capacity factors (k′) has been studied, including the effect of column type and mobile phase composition on the correlation between log Koc and log k′. Koc values obtained by procedures other than HPLC correlate well with HPLC capacity factors determined on a chemically immobilized humic acid stationary phase, and it is suggested that this phase is a better model for the sorption onto soil or sediment than the octadecyl-, phenyl- and ethylsilica phases. A theoretical capacity factor, log k′w, has been obtained by extrapolation of the retention data in a binary solvent system to pure aqueous eluent. There is a better correlation between log Koc and log k′w than between log Koc and log k′.
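The k′w extrapolation described here is typically a straight-line fit of log k′ against the organic-modifier fraction, read off at zero modifier, followed by a regression of log Koc on log k′w. A minimal numerical sketch; all retention and Koc values below are hypothetical:

```python
import numpy as np

# Hypothetical log k' measured at several methanol fractions for one solute.
phi = np.array([0.70, 0.60, 0.50, 0.40])      # organic modifier volume fraction
log_k = np.array([-0.35, -0.05, 0.28, 0.60])  # measured log k'

# Linear extrapolation to pure aqueous eluent (phi = 0) gives log k'_w.
slope, intercept = np.polyfit(phi, log_k, 1)
log_kw = intercept
print(f"log k'_w = {log_kw:.2f}")

# With log k'_w for a series of solutes, log Koc is estimated by regression:
log_kw_series = np.array([1.1, 1.6, 2.2, 2.9])     # hypothetical
log_koc_series = np.array([1.8, 2.3, 2.8, 3.5])    # hypothetical reference Koc
a, b = np.polyfit(log_kw_series, log_koc_series, 1)
print(f"log Koc = {a:.2f} * log k'_w + {b:.2f}")
```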
Kirstein, L M; Mellors, J W; Rinaldo, C R; Margolick, J B; Giorgi, J V; Phair, J P; Dietz, E; Gupta, P; Sherlock, C H; Hogg, R; Montaner, J S; Muñoz, A
1999-08-01
We conducted two studies to determine the potential influence of delays in blood processing, type of anticoagulant, and assay method on human immunodeficiency virus type 1 (HIV-1) RNA levels in plasma. The first was an experimental study in which heparin- and EDTA-anticoagulated blood samples were collected from 101 HIV-positive individuals and processed to plasma after delays of 2, 6, and 18 h. HIV-1 RNA levels in each sample were then measured by both branched-DNA (bDNA) and reverse transcriptase PCR (RT-PCR) assays. Compared to samples processed within 2 h, the loss (decay) of HIV-1 RNA in heparinized blood was significant (P < 0.05) but small after 6 h (bDNA assay, -0.12 log10 copies/ml; RT-PCR, -0.05 log10 copies/ml) and after 18 h (bDNA assay, -0.27 log10 copies/ml; RT-PCR, -0.15 log10 copies/ml). Decay in EDTA-anticoagulated blood was not significant after 6 h (bDNA assay, -0.002 log10 copies/ml; RT-PCR, -0.02 log10 copies/ml), but it was after 18 h (bDNA assay, -0.09 log10 copies/ml; RT-PCR, -0.09 log10 copies/ml). Only 4% of samples processed after 6 h lost more than 50% (≥0.3 log10 copies/ml) of the HIV-1 RNA, regardless of the anticoagulant or the assay that was used. The second study compared HIV-1 RNA levels in samples from the Multicenter AIDS Cohort Study (MACS; samples were collected in heparin-containing tubes in 1985, had a 6-h average processing delay, and were assayed by bDNA assay) and the British Columbia Drug Treatment Program (BCDTP) (collected in EDTA- or acid citrate dextrose-containing tubes in 1996 and 1997, had a 2-h maximum processing delay, and were assayed by RT-PCR). HIV-1 RNA levels in samples from the two cohorts were not significantly different after adjusting for CD4+ cell count and converting bDNA assay values to those corresponding to the RT-PCR results. In summary, the decay of HIV-1 RNA measured in heparinized blood after 6 h was small (-0.05 to -0.12 log10 copies/ml), and the minor impact of this decay on HIV-1 RNA concentrations in archived plasma samples of the MACS was confirmed by the similarity of CD4+ cell counts and assay-adjusted HIV-1 RNA concentrations in the MACS and BCDTP.
Keto-Timonen, Riikka; Lindström, Miia; Puolanne, Eero; Niemistö, Markku; Korkeala, Hannu
2012-07-01
The effect of three different concentrations of sodium nitrite (0, 75, and 120 mg/kg) on growth and toxigenesis of group II (nonproteolytic) Clostridium botulinum type B was studied in Finnish wiener-type sausage, bologna-type sausage, and cooked ham. A low level of inoculum (2.0 log CFU/g) was used for wiener-type sausage and bologna-type sausage, and both low (2.0 log CFU/g) and high (4.0 log CFU/g) levels were used for cooked ham. The products were formulated and processed under simulated commercial conditions and stored at 8°C for 5 weeks. C. botulinum counts were determined in five replicate samples of each nitrite concentration at 1, 3, and 5 weeks after thermal processing. All samples were positive for C. botulinum type B. The highest C. botulinum counts were detected in nitrite-free products. Toxigenesis was observed in nitrite-free products during storage, but products containing either 75 or 120 mg/kg nitrite remained nontoxic during the 5-week study period, suggesting that spores surviving the heat treatment were unable to germinate and develop into a toxic culture in the presence of nitrite. The results suggest that the safety of processed meat products with respect to group II C. botulinum type B can be maintained even with a reduced concentration (75 mg/kg) of sodium nitrite.
NASA Astrophysics Data System (ADS)
Hoshor, Cory; Young, Stephan; Rogers, Brent; Currie, James; Oakes, Thomas; Scott, Paul; Miller, William; Caruso, Anthony
2014-03-01
A novel application of the Pearson Cross-Correlation to neutron spectral discernment in a moderating type neutron spectrometer is introduced. This cross-correlation analysis will be applied to spectral response data collected through both MCNP simulation and empirical measurement by the volumetrically sensitive spectrometer for comparison in 1, 2, and 3 spatial dimensions. The spectroscopic analysis methods discussed will be demonstrated to discern various common spectral and monoenergetic neutron sources.
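The discernment step described here reduces to scoring a measured multi-element response against a library of simulated template responses with the Pearson coefficient. A minimal sketch with hypothetical response vectors; np.corrcoef supplies the correlation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical template responses (counts per detector element), e.g. from
# MCNP simulations of candidate sources, flattened to 1-D vectors.
templates = {
    "Cf-252":  rng.gamma(5.0, 1.0, 64),
    "AmBe":    rng.gamma(3.0, 1.5, 64),
    "2.5 MeV": rng.gamma(8.0, 0.6, 64),
}

# A measured response: here, a noisy copy of the AmBe template.
measured = templates["AmBe"] + rng.normal(0, 0.5, 64)

# Pearson cross-correlation of the measurement against each template;
# the highest coefficient identifies the most likely source spectrum.
scores = {name: np.corrcoef(measured, t)[0, 1] for name, t in templates.items()}
for name, r in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:8s} r = {r:+.3f}")
```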
Application of borehole geophysics to water-resources investigations
Keys, W.S.; MacCary, L.M.
1971-01-01
This manual is intended to be a guide for hydrologists using borehole geophysics in ground-water studies. The emphasis is on the application and interpretation of geophysical well logs, and not on the operation of a logger. It describes in detail those logging techniques that have been utilized within the Water Resources Division of the U.S. Geological Survey, and those used in petroleum investigations that have potential application to hydrologic problems. Most of the logs described can be made by commercial logging service companies, and many can be made with small water-well loggers. The general principles of each technique and the rules of log interpretation are the same, regardless of differences in instrumentation. Geophysical well logs can be interpreted to determine the lithology, geometry, resistivity, formation factor, bulk density, porosity, permeability, moisture content, and specific yield of water-bearing rocks, and to define the source, movement, and chemical and physical characteristics of ground water. Numerous examples of logs are used to illustrate applications and interpretation in various ground-water environments. The interrelations between various types of logs are emphasized, and the following aspects are described for each of the important logging techniques: Principles and applications, instrumentation, calibration and standardization, radius of investigation, and extraneous effects.
Log-Log Convexity of Type-Token Growth in Zipf's Systems
NASA Astrophysics Data System (ADS)
Font-Clos, Francesc; Corral, Álvaro
2015-06-01
It is traditionally assumed that Zipf's law implies the power-law growth of the number of different elements with the total number of elements in a system—the so-called Heaps' law. We show that a careful definition of Zipf's law leads to the violation of Heaps' law in random systems, with growth curves that have a convex shape in log-log scale. These curves fulfill universal data collapse that only depends on the value of Zipf's exponent. We observe that real books behave very much in the same way as random systems, despite the presence of burstiness in word occurrence. We advance an explanation for this unexpected correspondence.
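A minimal simulation of the effect described above, with arbitrary exponent and sizes chosen purely for illustration: draw tokens independently from a Zipf-like rank distribution and record the type-token (vocabulary) growth curve, whose shape can then be inspected in log-log scale.

    import numpy as np

    rng = np.random.default_rng(1)
    gamma, vocab_size, n_tokens = 2.0, 10**6, 10**5
    ranks = np.arange(1, vocab_size + 1)
    p = ranks ** (-gamma)
    p /= p.sum()                      # Zipf-like rank probabilities

    tokens = rng.choice(vocab_size, size=n_tokens, p=p)
    seen, growth = set(), []
    for i, tok in enumerate(tokens, 1):
        seen.add(tok)
        if i % 1000 == 0:
            growth.append((i, len(seen)))   # (token count, type count)

    # Plotting log(types) against log(tokens) reveals the curvature
    # (convexity) discussed above rather than a straight Heaps line.
    for n, v in growth[::20]:
        print(n, v)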
ERIC Educational Resources Information Center
Wei, Wei; Zheng, Ying
2017-01-01
This research provided a comprehensive evaluation and validation of the listening section of a newly introduced computerised test, Pearson Test of English Academic (PTE Academic). PTE Academic contains 11 item types assessing academic listening skills either alone or in combination with other skills. First, task analysis helped identify skills…
Font, María; Ardaiz, Elena; Cordeu, Lucia; Cubedo, Elena; García-Foncillas, Jesús; Sanmartin, Carmen; Palop, Juan-Antonio
2006-03-15
In an attempt to discover the essential features that would allow us to explain the differences in cytotoxic activity shown by a series of symmetrical diaryl derivatives with nitrogenated functions, we have studied by molecular modelling techniques the variation in Log P and conformational behaviour, in terms of structural modifications. The Log P data--although they provide few clues concerning the observed variability in activity--suggest that an initial separation of active and inactive compounds is possible based on this parameter. The subsequent study of the conformational behaviour of the compounds, selected according to their Log P values, showed that the active compounds preferentially display an extended conformation and inactive ones are associated with a certain type of folding, with a triangular-type conformation adopted in these cases.
Patient educational technologies and their use by patients diagnosed with localized prostate cancer.
Baverstock, Richard J; Crump, R Trafford; Carlson, Kevin V
2015-09-29
Two urology practices in Calgary, Canada use patient educational technology (PET) as a core component of their clinical practice. The purpose of this study was to determine how patients interact with PET designed to inform them about their treatment options for clinically localized prostate cancer. A PET library was developed with 15 unique prostate-related educational modules relating to diagnosis, treatment options, and potential side effects. The PET collected data regarding its use, and those data were used to conduct a retrospective analysis. Descriptive analyses were conducted and comparisons made between patients' utilization of the PET library during first and subsequent access; Pearson's Chi-Square was used to test for statistical significance, where appropriate. Every patient (n = 394) diagnosed with localized prostate cancer was given access to the PET library using a unique identifier. Of those, 123 logged into the library and viewed at least one module and 94 patients logged into the library more than once. The average patient initially viewed modules pertaining to their diagnosis. Viewing behavior significantly changed in subsequent logins, moving towards modules pertaining to treatment options, decision making, and post-surgical information. As observed through the longitudinal utilization of the PET library, information technology offers clinicians an opportunity to provide an interactive platform to meet patients' dynamic educational needs. Understanding these needs will help inform the development of more useful PETs. The informational needs of patients diagnosed with clinically localized prostate cancer changed throughout the course of their diagnosis and treatment.
Lin, Keh-chung; Chen, Hui-fang; Chen, Chia-ling; Wang, Tien-ni; Wu, Ching-yi; Hsieh, Yu-wei; Wu, Li-ling
2012-01-01
This study examined criterion-related validity and clinimetric properties of the Pediatric Motor Activity Log (PMAL) in children with cerebral palsy. Study participants were 41 children (age range: 28-113 months) and their parents. Criterion-related validity was evaluated by the associations between the PMAL and criterion measures at baseline and posttreatment, including the self-care, mobility, and cognition subscales and total performance of the Functional Independence Measure for children (WeeFIM), and the grasping and visual-motor integration subtests of the Peabody Developmental Motor Scales. Pearson correlation coefficients were calculated. Responsiveness was examined using the paired t test and the standardized response mean, the minimal detectable change was captured at the 90% confidence level, and the minimal clinically important change was estimated using anchor-based and distribution-based approaches. The PMAL quality of movement scale (PMAL-QOM) showed fair concurrent validity at pretreatment and posttreatment and predictive validity, whereas the amount of use scale (PMAL-AOU) had fair concurrent validity at posttreatment only. The PMAL-AOU and PMAL-QOM were both markedly responsive to change after treatment. Improvement of at least 0.67 points on the PMAL-AOU and 0.66 points on the PMAL-QOM can be considered a true change, not measurement error. A mean change has to exceed the range of 0.39-0.94 on the PMAL-AOU and the range of 0.38-0.74 on the PMAL-QOM to be regarded as clinically important change. Copyright © 2011 Elsevier Ltd. All rights reserved.
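The minimal detectable change at the 90% confidence level reported above is conventionally derived from the standard error of measurement; the sketch below uses the common formula (an assumption here, since the paper's exact computation is not reproduced in the abstract) with placeholder inputs.

    import math

    def mdc90(sd_baseline: float, reliability_icc: float) -> float:
        # Common form: MDC90 = 1.645 * SEM * sqrt(2),
        # with SEM = SD * sqrt(1 - ICC).
        sem = sd_baseline * math.sqrt(1.0 - reliability_icc)
        return 1.645 * sem * math.sqrt(2.0)

    # Placeholder values for illustration only (not from the study):
    print(round(mdc90(sd_baseline=1.1, reliability_icc=0.85), 2))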
Genotyping and drug resistance patterns of M. tuberculosis strains in Pakistan.
Tanveer, Mahnaz; Hasan, Zahra; Siddiqui, Amna R; Ali, Asho; Kanji, Akbar; Ghebremicheal, Solomon; Hasan, Rumina
2008-12-24
The incidence of tuberculosis in Pakistan is 181/100,000 population. However, information about transmission and geographical prevalence of Mycobacterium tuberculosis strains and their evolutionary genetics as well as drug resistance remains limited. Our objective was to determine the clonal composition, evolutionary genetics and drug resistance of M. tuberculosis isolates from different regions of the country. M. tuberculosis strains isolated (2003-2005) from specimens submitted to the laboratory through collection units nationwide were included. Drug susceptibility testing was performed and strains were spoligotyped. Of 926 M. tuberculosis strains studied, 721 (78%) were grouped into 59 "shared types", while 205 (22%) were identified as "orphan" spoligotypes. Amongst the predominant genotypes, 61% were Central Asian strains (CAS; including CAS1, CAS sub-families and orphan Pak clusters), 4% East African-Indian (EAI), 3% Beijing, 2% poorly defined TB strains (T), 2% Haarlem, and 0.2% LAM. TbD1 analysis (M. tuberculosis specific deletion 1) confirmed that CAS1 was of "modern" origin while EAI isolates belonged to "ancestral" strain types. Prevalence of the CAS1 clade was significantly higher in Punjab (P < 0.01, Pearson's chi-square test) than in Sindh, North West Frontier Province and Balochistan provinces. Forty-six percent of isolates were sensitive to the five first-line antibiotics tested, 45% were rifampicin resistant, and 50% were isoniazid resistant. MDR was significantly associated with Beijing strains (P = 0.01, Pearson's chi-square test) and EAI (P = 0.001, Pearson's chi-square test), but not with the CAS family. Our results show variation in the prevalent M. tuberculosis strains, with a greater association of CAS1 with the Punjab province. The fact that the prevalent CAS genotype was not associated with drug resistance is encouraging. It further suggests that a more effective treatment and control programme should be successful in reducing the tuberculosis burden in Pakistan.
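For readers unfamiliar with the test used throughout this abstract, a Pearson chi-square test of association on a contingency table can be run as in the sketch below; the counts are invented for illustration and are not the study's data.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: CAS1 strain (yes/no); columns: Punjab / other provinces.
    table = np.array([[180, 120],
                      [140, 200]])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"Pearson chi-square = {chi2:.2f}, dof = {dof}, P = {p:.4g}")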
Rule-Based Motion Coordination for the Adaptive Suspension Vehicle on Ternary-Type Terrain
1990-12-01
Virus inactivation under the photodynamic effect of phthalocyanine zinc(II) complexes.
Remichkova, Mimi; Mukova, Luchia; Nikolaeva-Glomb, Lubomira; Nikolova, Nadya; Doumanova, Lubka; Mantareva, Vanya; Angelov, Ivan; Kussovski, Veselin; Galabov, Angel S
2017-03-01
Various metal phthalocyanines have been studied for their capacity for photodynamic effects on viruses. Two newly synthesized water-soluble phthalocyanine Zn(II) complexes with different charges, a cationic methylpyridyloxy-substituted Zn(II)-phthalocyanine (ZnPcMe) and an anionic sulfophenoxy-substituted Zn(II)-phthalocyanine (ZnPcS), were used for photoinactivation of two DNA-containing enveloped viruses (herpes simplex virus type 1 and vaccinia virus), two RNA-containing enveloped viruses (bovine viral diarrhea virus and Newcastle disease virus) and two naked (non-enveloped) viruses (the enterovirus coxsackievirus B1, an RNA-containing virus, and human adenovirus 5, a DNA virus). The two differently charged phthalocyanine complexes showed an identical, marked virucidal effect against herpes simplex virus type 1, the same for both complexes at irradiation times of 5 and 20 min (Δlog=3.0 and 4.0, respectively). Against vaccinia virus this effect was lower: Δlog=1.8 for ZnPcMe and 2.0 for ZnPcS. Bovine viral diarrhea virus showed a moderate sensitivity to ZnPcMe (Δlog=1.8) and a pronounced one to ZnPcS at 5- and 20-min irradiation (Δlog=5.8 and 5.3, respectively). The complexes were unable to inactivate Newcastle disease virus, coxsackievirus B1 and human adenovirus type 5.
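The Δlog values quoted above are log10 reductions in infectious titer; the arithmetic, with illustrative titers of the same order as the HSV-1 result, is:

    import math

    def delta_log(titer_before: float, titer_after: float) -> float:
        # Log10 reduction: delta_log = log10(V0) - log10(V).
        return math.log10(titer_before) - math.log10(titer_after)

    # A drop from 10^6.8 to 10^3.8 TCID50/ml corresponds to delta_log = 3.0.
    print(round(delta_log(10**6.8, 10**3.8), 2))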
Pharmacokinetic profiles of repaglinide in elderly subjects with type 2 diabetes.
Hatorp, V; Huang, W C; Strange, P
1999-04-01
Pharmacokinetic profiles of single- and multiple-dose regimens of repaglinide were evaluated in 12 elderly subjects with type 2 diabetes. On day 1, following a 10-hour fast, subjects received a single 2-mg dose of repaglinide. Starting on day 2 and continuing for 7 days, each subject received a 2-mg dose of repaglinide 15 minutes before each of the three main meals. On day 9, subjects received a single 2-mg dose of repaglinide. Pharmacokinetic profiles, including area under the curve (AUC), log(AUC), maximal concentration (Cmax), log(Cmax), time to maximal concentration (Tmax), and half-life (T(1/2)), were determined at completion of the single- and multiple-dose regimens (days 1 and 9, respectively). Trough repaglinide values were collected on days 2 through 7. The mean log(AUC) values after multiple dosing were significantly higher than the values obtained after a single dose. The mean values for log(Cmax) and Tmax were comparable after each dosing regimen. The T(1/2) of repaglinide after multiple dosing was 1.7 hours. The trough values for repaglinide were low. No hypoglycemic events were reported. The pharmacokinetic profiles of repaglinide after single- and multiple-dose regimens were similar, and repaglinide was well tolerated by elderly subjects with type 2 diabetes.
Kwee, Sandi A.; Lim, John; Watanabe, Alex; Kromer-Baker, Kathleen; Coel, Marc N.
2015-01-01
This study investigates the prognostic significance of metabolically active tumor volume (MATV) measurements applied to fluorine-18 fluorocholine (FC) PET/CT in castrate-resistant prostate cancer (CRPC). Methods: FC PET/CT imaging was performed in 30 patients with CRPC. Metastatic disease was quantified on the basis of maximum standardized uptake value (SUVmax), MATV, and total lesion activity (TLA = MATV × mean SUV). Tumor burden indices derived from whole-body summation of PET tumor volume measurements (i.e., net MATV and net TLA) were evaluated as variables in Cox regression and Kaplan-Meier survival analyses. Results: Net MATV ranged from 0.12 cm3 to 1543.9 cm3 (median 52.6 cm3). Net TLA ranged from 0.40 g to 6688.7 g (median 225.1 g). PSA level at the time of PET correlated significantly with net MATV (Pearson r = 0.65, p = 0.0001) and net TLA (r = 0.60, p = 0.0005) but not with the highest lesional SUVmax of each scan. Survivors were followed for a median of 23 months (range, 6-38 months). On Cox regression analyses, overall survival was significantly associated with net MATV (p = 0.0068), net TLA (p = 0.0072), and highest lesion SUVmax (p = 0.0173), and borderline associated with PSA level (p = 0.0458). Only net MATV and net TLA remained significant in univariate-adjusted survival analyses. Kaplan-Meier analysis demonstrated significant differences in survival between groups stratified by median net MATV (log-rank P = 0.0371), net TLA (log-rank P = 0.0371), and highest lesion SUVmax (log-rank P = 0.0223). Conclusions: Metastatic prostate cancer detected by FC PET/CT can be quantified based on volumetric measurements of tumor metabolic activity. The prognostic value of FC PET/CT may stem from this capacity to assess whole-body tumor burden. With further clinical validation, FC PET-based indices of global disease activity and mortality risk could prove useful in patient-individualized treatment of CRPC. PMID:24676753
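The whole-body indices are simple sums over segmented lesions; a sketch of the bookkeeping, with made-up lesion values, follows.

    from dataclasses import dataclass

    @dataclass
    class Lesion:
        matv_cm3: float   # metabolically active tumor volume of one lesion
        suv_mean: float   # mean standardized uptake value within that volume

    def net_matv(lesions):
        # Whole-body MATV: sum of per-lesion volumes.
        return sum(l.matv_cm3 for l in lesions)

    def net_tla(lesions):
        # Whole-body total lesion activity: sum of MATV x mean SUV.
        return sum(l.matv_cm3 * l.suv_mean for l in lesions)

    scan = [Lesion(12.4, 5.1), Lesion(3.7, 7.8), Lesion(40.2, 4.3)]
    print(net_matv(scan), net_tla(scan))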
Marinakis, Harry A; Zwemer, Frank L
2003-02-01
Little is known about how the availability of laboratory data affects emergency physicians' practice habits and satisfaction. We modified our clinical information system to display laboratory test status with continuous updates, similar to an airport arrival display. The objective of this study was to determine whether the laboratory test status display altered emergency physicians' work habits and increased satisfaction compared with the time period before implementation of laboratory test status. A retrospective analysis was performed of emergency physicians' actual use of the clinical information system before and after implementation of the laboratory test status display. Emergency physicians were retrospectively surveyed regarding the effect of laboratory test status display on their practice habits and clinical information system use. Survey responses were matched with actual use of the clinical information system. Data were analyzed by using dependent t tests and Pearson correlation coefficients. The study was conducted at a university hospital. Clinical information system use by 46 emergency physicians was analyzed. Twenty-five surveys were returned (71.4% of available emergency physicians). All emergency physicians perceived fewer clinical information system log ons per day after laboratory test status display. The actual average decrease was 19%. Emergency physicians who reported the greatest decrease in log ons per day tended to have the greatest actual decrease (r =-0.36). There was no significant correlation between actual and perceived total time logged on (r =0.08). In regard to effect on emergency physicians' practice habits, 95% reported increased efficiency, 80% reported improved satisfaction with data access, and 65% reported improved communication with patients. An inexpensive computer modification, laboratory test status display, significantly increased subjective efficiency, changed work habits, and improved satisfaction regarding data access and patient communication among emergency physicians. Knowledge of the test queue changed emergency physician behavior and improved satisfaction.
NASA Astrophysics Data System (ADS)
Gillham, Nicholas W.
2015-01-01
Francis Galton, Charles Darwin's cousin, had wide and varied interests. They ranged from exploration and travel writing to fingerprinting and the weather. After reading Darwin's On the Origin of Species, Galton reached the conclusion that it should be possible to improve the human stock through selective breeding, as was the case for domestic animals and cultivated plants. Much of the latter half of Galton's career was devoted to trying to devise methods to distinguish men of good stock and then to show that these qualities were inherited. But along the way he invented two important statistical methods: regression and correlation. He also discovered regression to the mean. This led Galton to believe that evolution could not proceed by the small steps envisioned by Darwin, but must proceed by discontinuous changes. Galton's book Natural Inheritance (1889) served as the inspiration for Karl Pearson, W.F.R. Weldon and William Bateson. Pearson and Weldon were interested in continuously varying characters and the application of statistical techniques to their study. Bateson was fascinated by discontinuities and the role they might play in evolution. Galton proposed his Law of Ancestral Heredity in the last decade of the nineteenth century. At first this seemed to work well as an explanation for continuously varying traits of the type that interested Pearson and Weldon. In contrast, Bateson had published a book on discontinuously varying traits so he was in a position to understand and embrace Mendel's principles of inheritance when they were rediscovered in 1900. The subsequent battle between Weldon and Pearson, the biometricians, and Bateson, the Mendelian, went on acrimoniously for several years at the beginning of the twentieth century before Mendelian theory finally won out.
Skoglund, Per H; Arpegård, Johannes; Ostergren, Jan; Svensson, Per
2014-03-01
Patients with peripheral arterial disease (PAD) are at high risk for cardiovascular (CV) events. We have previously shown that ambulatory pulse pressure (APP) predicts CV events in PAD patients. The biomarkers amino-terminal pro-B-type natriuretic peptide (NT-proBNP), high-sensitivity C-reactive protein (hs-CRP), and cystatin C are related to a worse outcome in patients with CV disease, but their predictive values have not been studied in relation to APP. Blood samples and 24-hour measurements of ambulatory blood pressure were examined in 98 men referred for PAD evaluation during 1998-2001. Patients were followed for a median of 71 months. The outcome variable was CV events defined as either CV mortality or any hospitalization for myocardial infarction, stroke, or coronary revascularization. The predictive values of log(NT-proBNP), log(hs-CRP), and log(cystatin C) alone and together with APP were assessed by multivariable Cox regression. Area under the curve (AUC) and net reclassification improvement (NRI) were calculated compared with a model containing other significant risk factors. During follow-up, 36 patients had at least 1 CV event. APP, log(NT-proBNP), and log(hs-CRP) all predicted CV events in univariable analysis, whereas log(cystatin C) did not. In multivariable analysis log(NT-proBNP) (hazard ratio (HR) = 1.62; 95% confidence interval (CI) = 1.05-2.51) and log(hs-CRP) (HR = 1.63; 95% CI = 1.19-2.24) predicted events independently of 24-hour PP. The combination of log(NT-proBNP), log(hs-CRP), and average day PP improved risk discrimination (AUC = 0.833 vs. 0.736; P < 0.05) and NRI (37%; P < 0.01) when added to other significant risk factors. NT-proBNP and hs-CRP predict CV events independently of APP and the combination of hs-CRP, NT-proBNP, and day PP improves risk discrimination in PAD patients.
Neyman-Pearson biometric score fusion as an extension of the sum rule
NASA Astrophysics Data System (ADS)
Hube, Jens Peter
2007-04-01
We define the biometric performance invariance under strictly monotonic functions on match scores as normalization symmetry. We use this symmetry to clarify the essential difference between the standard score-level fusion approaches of sum rule and Neyman-Pearson. We then express Neyman-Pearson fusion assuming match scores defined using false acceptance rates on a logarithmic scale. We show that by stating Neyman-Pearson in this form, it reduces to sum rule fusion for ROC curves with logarithmic slope. We also introduce a one parameter model of biometric performance and use it to express Neyman-Pearson fusion as a weighted sum rule.
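A minimal sketch consistent with the construction described above: each matcher's raw score is mapped to -log10 of its empirical false-acceptance rate, and the calibrated scores are then simply summed, which is the sum rule in these coordinates. The impostor score distributions below are synthetic stand-ins.

    import numpy as np

    def far_calibrated(score, impostor_scores):
        # Map a raw score to -log10(FAR): the (smoothed) fraction of
        # impostor scores at or above it.
        far = (np.sum(impostor_scores >= score) + 1) / (len(impostor_scores) + 1)
        return -np.log10(far)

    def fuse(scores, impostor_sets):
        # Neyman-Pearson-style fusion on the log-FAR scale: a plain sum.
        return sum(far_calibrated(s, imp)
                   for s, imp in zip(scores, impostor_sets))

    rng = np.random.default_rng(2)
    impostors_a = rng.normal(0.0, 1.0, 10_000)   # matcher A impostors
    impostors_b = rng.normal(0.0, 2.0, 10_000)   # matcher B impostors
    print(fuse([2.5, 4.0], [impostors_a, impostors_b]))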
Kahan, Tracey L; Claudatos, Stephanie
2016-04-01
Self-ratings of dream experiences were obtained from 144 college women for 788 dreams, using the Subjective Experiences Rating Scale (SERS). Consistent with past studies, dreams were characterized by a greater prevalence of vision, audition, and movement than smell, touch, or taste, by both positive and negative emotion, and by a range of cognitive processes. A Principal Components Analysis of SERS ratings revealed ten subscales: four sensory, three affective, one cognitive, and two structural (events/actions, locations). Correlations (Pearson r) among subscale means showed a stronger relationship among the process-oriented features (sensory, cognitive, affective) than between the process-oriented and content-centered (structural) features--a pattern predicted from past research (e.g., Bulkeley & Kahan, 2008). Notably, cognition and positive emotion were associated with a greater number of other phenomenal features than was negative emotion; these findings are consistent with studies of the qualitative features of waking autobiographical memory (e.g., Fredrickson, 2001). Copyright © 2016 Elsevier Inc. All rights reserved.
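As a rough sketch of the subscale derivation (the paper's exact extraction and rotation settings are not given in the abstract, so this is an assumption), a principal components analysis of a ratings matrix can be run as follows, with random data standing in for the SERS responses.

    import numpy as np
    from sklearn.decomposition import PCA

    # Synthetic stand-in: rows = dreams, columns = rated features.
    rng = np.random.default_rng(3)
    ratings = rng.integers(1, 8, size=(788, 30)).astype(float)

    pca = PCA(n_components=10)           # ten components, as in the abstract
    component_scores = pca.fit_transform(ratings)
    print(pca.explained_variance_ratio_.round(3))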
Nonlocal Total-Variation-Based Speckle Filtering for Ultrasound Images.
Wen, Tiexiang; Gu, Jia; Li, Ling; Qin, Wenjian; Wang, Lei; Xie, Yaoqin
2016-07-01
Ultrasound is one of the most important medical imaging modalities owing to its real-time and portable imaging advantages. However, contrast resolution and important details are degraded by speckle in ultrasound images. Many speckle filtering methods have been developed, but they suffer from several limitations and find it difficult to reach a balance between speckle reduction and edge preservation. In this paper, an adaptation of the nonlocal total variation (NLTV) filter is proposed for speckle reduction in ultrasound images. The speckle is modeled via a signal-dependent noise distribution for log-compressed ultrasound images. Instead of the Euclidean distance, the statistical Pearson distance is introduced in this study for the similarity calculation between image patches via a Bayesian framework. The split-Bregman fast algorithm is then used to solve the adapted NLTV despeckling functional. Experimental results on synthetic and clinical ultrasound images, and comparisons with some classical and recent algorithms, demonstrate its improvements in both speckle noise reduction and tissue boundary preservation for ultrasound images. © The Author(s) 2015.
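The key substitution described above, Pearson distance in place of Euclidean distance for patch comparison, can be sketched as follows; in an NLTV filter the distance d would then feed a nonlocal weight such as exp(-d/h). The patches below are random stand-ins.

    import numpy as np

    def pearson_distance(patch_a, patch_b):
        # Pearson distance 1 - r between two flattened patches;
        # small values indicate statistically similar patches.
        a = patch_a.ravel() - patch_a.mean()
        b = patch_b.ravel() - patch_b.mean()
        r = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return 1.0 - r

    rng = np.random.default_rng(4)
    p = rng.random((7, 7))
    q = p + 0.05 * rng.standard_normal((7, 7))   # similar patch
    s = rng.random((7, 7))                       # unrelated patch
    print(pearson_distance(p, q), pearson_distance(p, s))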
Bacardí-Gascón, Montserrat; Ley y de Góngora, Silvia; Castro-Vázquez, Brenda Yuniba; Jiménez-Cruz, Arturo
2003-01-01
The purpose of the study was to estimate dietary intake of folate in two groups of women from different economic backgrounds and to evaluate the validity of the 5-day weighed food registry (5-d-WFR) and Food Frequency Questionnaire (FFQ) using biological markers. A cross-sectional study was conducted in two samples of urban Mexican women: one represented middle socioeconomic status (middle SES) and the other, low socioeconomic status (low SES). Middle SES included 34 women recruited from 1998 to 1999. Participants were between the ages of 18 and 32 years and were employed in the banking industry in the US-Mexican border city of Tijuana, Baja California. Low SES included 70 women between the ages of 18 and 35 years recruited during the year 2000. These women were receiving care at a primary health care center in Ensenada, Baja California Norte State, Mexico. Pearson correlations were calculated between folate intakes from the 5-day diet registry, the FFQ, and biochemical indices. FFQ reproducibility was assessed by Spearman correlation of each food item's daily and weekly intake. Average folate intake in the middle SES group from the 5-d-WFR was 210 +/- 171 microg/day. Fifty-four percent of participants had intakes <200 microg/day. Average folate intake from the FFQ was 223 +/- 78 microg/day. The Pearson correlation between log-transformed and within-individual-adjusted 5-d-WFR folate intakes and serum folate was 0.40 (p=0.02). Mexican women of reproductive age living in the US-Mexican border State of Baja California are at very high risk of NTDs (neural tube defects) as a result of low folate intake and low serum folate and RBC folate concentrations.
The Effects of Increasing Ocular Surface Stimulation on Blinking and Sensation
Wu, Ziwei; Begley, Carolyn G.; Situ, Ping; Simpson, Trefford
2014-01-01
Purpose. The purpose of this study was to determine how increasing ocular surface stimulation affected blinking and sensation, while controlling task concentration. Methods. Ten healthy subjects concentrated on a task while a custom pneumatic device generated air flow toward the central cornea. Six flow rates (FRs) were randomly presented three times each and subjects used visual analog scales to record their sensory responses. The interblink interval (IBI) and the FR were recorded simultaneously and the IBI, sensory response, and corresponding FR were determined for each trial. The FR associated with a statistically significant decrease in IBI, the blink increase threshold (BIT), was calculated for each subject. Results. Both the mean and SD of IBI were decreased with increasing stimulation, from 5.69 ± 3.96 seconds at baseline to 1.02 ± 0.37 seconds at maximum stimulation. The average BIT was 129 ± 20 mL/min flow rate with an IBI of 2.33 ± 1.10 seconds (permutation test, P < 0.001). After log transformation, there was a significant linear function between increasing FR and decreasing IBI within each subject (Pearson's r ≤ −0.859, P < 0.05). The IBI was highly correlated with wateriness, discomfort, and cooling ratings (Pearson's r ≤ −0.606, P < 0.001). Conclusions. There was a dose-response–like relationship between increased surface stimulation and blinking in healthy subjects, presumably for protection of the ocular surface. The blink response was highly correlated with ocular surface sensation, which is not surprising given their common origins. The BIT, a novel metric, may provide an additional end point for studies on dry eye or other conditions. PMID:24557346
Projections of Flood Risk using Credible Climate Signals in the Ohio River Basin
NASA Astrophysics Data System (ADS)
Schlef, K.; Robertson, A. W.; Brown, C.
2017-12-01
Estimating future hydrologic flood risk under non-stationary climate is a key challenge to the design of long-term water resources infrastructure and flood management strategies. In this work, we demonstrate how projections of large-scale climate patterns can be credibly used to create projections of long-term flood risk. Our study area is the northwest region of the Ohio River Basin in the United States Midwest. In the region, three major teleconnections have been previously demonstrated to affect synoptic patterns that influence extreme precipitation and streamflow: the El Niño-Southern Oscillation, the Pacific North American pattern, and the Pacific Decadal Oscillation. These teleconnections are strongest during the winter season (January-March), which also experiences the greatest number of peak flow events. For this reason, flood events are defined as the maximum daily streamflow to occur in the winter season. For each gage in the region, the location parameter of a log-Pearson Type III distribution is conditioned on the first principal component of the three teleconnections to create a statistical model of flood events. Future projections of flood risk are created by forcing the statistical model with projections of the teleconnections from general circulation models selected for skill. We compare the results of our method to the results of two other methods: the traditional model chain (i.e., general circulation model projections to downscaling method to hydrologic model to flood frequency analysis) and that of using the historic trend. We also discuss the potential for developing credible projections of flood events for the continental United States.
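One simple way to realize the covariate-conditioned location parameter described above (a sketch under stated assumptions, not the authors' code) is to regress log flows on the climate index and fit a Pearson Type III distribution to the residuals; scipy's pearson3 is parameterized by skew, location, and scale. All data below are synthetic.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    pc1 = rng.standard_normal(60)                 # stand-in teleconnection PC1
    logq = 3.0 + 0.15 * pc1 + 0.25 * rng.standard_normal(60)

    slope, intercept, r, p, se = stats.linregress(pc1, logq)
    resid = logq - (intercept + slope * pc1)
    skew, loc, scale = stats.pearson3.fit(resid)

    def q100(pc1_value):
        # Conditional 1%-exceedance (100-year) winter flood quantile.
        z = stats.pearson3.ppf(0.99, skew, loc=loc, scale=scale)
        return 10 ** (intercept + slope * pc1_value + z)

    print(q100(-1.0), q100(0.0), q100(1.0))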
Haryanto, Haryanto; Arisandi, Defa; Suriadi, Suriadi; Imran, Imran; Ogai, Kazuhiro; Sanada, Hiromi; Okuwa, Mayumi; Sugama, Junko
2017-06-01
The aim of this study was to clarify the relationship between maceration and wound healing. A prospective longitudinal design was used in this study. The wound condition determined the type of dressings used and the dressing change frequency. A total of 62 participants with diabetic foot ulcers (70 wounds) were divided into two groups: non-macerated (n = 52) and macerated wounds (n = 18). Each group was evaluated weekly using the Bates-Jensen Wound Assessment Tool, with follow-ups until week 4. The Mann-Whitney U test showed that the changes in the wound area in week 1 were faster in the non-macerated group than in the macerated group (P = 0.02). The Pearson correlation analysis showed a moderate correlation between maceration and wound healing from enrolment until week 4 (P = 0.002). After week 4, the Kaplan-Meier analysis showed that the non-macerated wounds healed significantly faster than the macerated wounds (log-rank test = 19.378, P = 0.000). The Cox regression analysis confirmed that maceration was a significant and independent predictor of wound healing in this study (adjusted hazard ratio, 0.324; 95% CI, 0.131-0.799; P = 0.014). The results of this study demonstrated that there is a relationship between maceration and wound healing. Changes in the wound area can help predict the healing of wounds with maceration in clinical settings. © 2016 Medicalhelplines.com Inc and John Wiley & Sons Ltd.
Curran, Janet H.; Meyer, David F.; Tasker, Gary D.
2003-01-01
Estimates of the magnitude and frequency of peak streamflow are needed across Alaska for floodplain management, cost-effective design of floodway structures such as bridges and culverts, and other water-resource management issues. Peak-streamflow magnitudes for the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were computed for 301 streamflow-gaging and partial-record stations in Alaska and 60 stations in conterminous basins of Canada. Flows were analyzed from data through the 1999 water year using a log-Pearson Type III analysis. The State was divided into seven hydrologically distinct streamflow analysis regions for this analysis, in conjunction with a concurrent study of low and high flows. New generalized skew coefficients were developed for each region using station skew coefficients for stations with at least 25 years of systematic peak-streamflow data. Equations for estimating peak streamflows at ungaged locations were developed for Alaska and conterminous basins in Canada using a generalized least-squares regression model. A set of predictive equations for estimating the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year peak streamflows was developed for each streamflow analysis region from peak-streamflow magnitudes and physical and climatic basin characteristics. These equations may be used for unregulated streams without flow diversions, dams, periodically releasing glacial impoundments, or other streamflow conditions not correlated to basin characteristics. Basin characteristics should be obtained using methods similar to those used in this report to preserve the statistical integrity of the equations.
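For reference, the core at-site computation such regional studies build on, a log-Pearson Type III quantile by the method of moments with an optional blend of station and generalized skew, can be sketched as below; the equal weighting and the synthetic record are illustrative assumptions (operational Bulletin 17-style skew weighting is more involved).

    import numpy as np
    from scipy import stats

    def lp3_quantile(peaks, aep, generalized_skew=None, skew_weight=0.5):
        # Method of moments in log10 space; optionally blend the station
        # skew with a generalized (regional) skew.
        x = np.log10(np.asarray(peaks, dtype=float))
        mean, sd = x.mean(), x.std(ddof=1)
        g = stats.skew(x, bias=False)
        if generalized_skew is not None:
            g = skew_weight * g + (1 - skew_weight) * generalized_skew
        k = stats.pearson3.ppf(1 - aep, g)   # frequency factor for skew g
        return 10 ** (mean + k * sd)

    rng = np.random.default_rng(6)
    record = 10 ** rng.normal(3.2, 0.3, 40)      # synthetic 40-year record
    print(lp3_quantile(record, aep=0.01, generalized_skew=-0.2))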
NASA Astrophysics Data System (ADS)
Luke, Adam; Vrugt, Jasper A.; AghaKouchak, Amir; Matthew, Richard; Sanders, Brett F.
2017-07-01
Nonstationary extreme value analysis (NEVA) can improve the statistical representation of observed flood peak distributions compared to stationary (ST) analysis, but management of flood risk relies on predictions of out-of-sample distributions for which NEVA has not been comprehensively evaluated. In this study, we apply split-sample testing to 1250 annual maximum discharge records in the United States and compare the predictive capabilities of NEVA relative to ST extreme value analysis using a log-Pearson Type III (LPIII) distribution. The parameters of the LPIII distribution in the ST and nonstationary (NS) models are estimated from the first half of each record using Bayesian inference. The second half of each record is reserved to evaluate the predictions under the ST and NS models. The NS model is applied for prediction by (1) extrapolating the trend of the NS model parameters throughout the evaluation period and (2) using the NS model parameter values at the end of the fitting period to predict with an updated ST model (uST). Our analysis shows that the ST predictions are preferred, overall. NS model parameter extrapolation is rarely preferred. However, if fitting period discharges are influenced by physical changes in the watershed, for example from anthropogenic activity, the uST model is strongly preferred relative to ST and NS predictions. The uST model is therefore recommended for evaluation of current flood risk in watersheds that have undergone physical changes. Supporting information includes a MATLAB® program that estimates the (ST/NS/uST) LPIII parameters from annual peak discharge data through Bayesian inference.
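The paper's evaluation is Bayesian; as a simplified stand-in for the split-sample idea, one can fit a stationary Pearson III to the log flows of the first half of a record and score the held-out second half by predictive log-likelihood, as sketched below with synthetic peaks.

    import numpy as np
    from scipy import stats

    def heldout_loglik(train, test):
        # Fit LPIII (Pearson III on log10 flows) to the fitting period,
        # then score the evaluation period.
        x, y = np.log10(train), np.log10(test)
        skew, loc, scale = stats.pearson3.fit(x)
        return stats.pearson3.logpdf(y, skew, loc=loc, scale=scale).sum()

    rng = np.random.default_rng(7)
    record = 10 ** rng.normal(2.8, 0.25, 80)     # synthetic annual peaks
    half = len(record) // 2
    print(heldout_loglik(record[:half], record[half:]))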
Tortorelli, Robert L.
1997-01-01
Statewide regression equations for Oklahoma were determined for estimating peak discharge and flood frequency for selected recurrence intervals from 2 to 500 years for ungaged sites on natural unregulated streams. The most significant independent variables required to estimate peak-streamflow frequency for natural unregulated streams in Oklahoma are contributing drainage area, main-channel slope, and mean-annual precipitation. The regression equations are applicable for watersheds with drainage areas less than 2,510 square miles that are not affected by regulation from manmade works. Limitations on the use of the regression relations and the reliability of regression estimates for natural unregulated streams are discussed. Log-Pearson Type III analysis information, basin and climatic characteristics, and the peak-stream-flow frequency estimates for 251 gaging stations in Oklahoma and adjacent states are listed. Techniques are presented to make a peak-streamflow frequency estimate for gaged sites on natural unregulated streams and to use this result to estimate a nearby ungaged site on the same stream. For ungaged sites on urban streams, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow frequency. For ungaged sites on streams regulated by small floodwater retarding structures, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow frequency. The statewide regression equations are adjusted by substituting the drainage area below the floodwater retarding structures, or drainage area that represents the percentage of the unregulated basin, in the contributing drainage area parameter to obtain peak-streamflow frequency estimates.
Dobkin, Bruce H; Xu, Xiaoyu; Batalin, Maxim; Thomas, Seth; Kaiser, William
2011-08-01
Outcome measures of mobility for large stroke trials are limited to timed walks for short distances in a laboratory, step counters and ordinal scales of disability and quality of life. Continuous monitoring and outcome measurements of the type and quantity of activity in the community would provide direct data about daily performance, including compliance with exercise and skills practice during routine care and clinical trials. Twelve adults with impaired ambulation from hemiparetic stroke and 6 healthy controls wore triaxial accelerometers on their ankles. Walking speed for repeated outdoor walks was determined by machine-learning algorithms and compared to a stopwatch calculation of speed for distances not known to the algorithm. The reliability of recognizing walking, exercise, and cycling by the algorithms was compared to activity logs. A high correlation was found between stopwatch-measured outdoor walking speed and algorithm-calculated speed (Pearson coefficient, 0.98; P=0.001) and for repeated measures of algorithm-derived walking speed (P=0.01). Bouts of walking >5 steps, variations in walking speed, cycling, stair climbing, and leg exercises were correctly identified during a day in the community. Compared to healthy subjects, those with stroke were, as expected, more sedentary and slower, and their gait revealed high paretic-to-unaffected leg swing ratios. Test-retest reliability and concurrent and construct validity are high for activity pattern-recognition Bayesian algorithms developed from inertial sensors. This ratio scale data can provide real-world monitoring and outcome measurements of lower extremity activities and walking speed for stroke and rehabilitation studies.
Measuring firm size distribution with semi-nonparametric densities
NASA Astrophysics Data System (ADS)
Cortés, Lina M.; Mora-Valencia, Andrés; Perote, Javier
2017-11-01
In this article, we propose a new methodology based on a (log) semi-nonparametric (log-SNP) distribution that nests the lognormal and enables better fits in the upper tail of the distribution through the introduction of new parameters. We test the performance of the lognormal and log-SNP distributions capturing firm size, measured through a sample of US firms in 2004-2015. Taking different levels of aggregation by type of economic activity, our study shows that the log-SNP provides a better fit of the firm size distribution. We also formally introduce the multivariate log-SNP distribution, which encompasses the multivariate lognormal, to analyze the estimation of the joint distribution of the value of the firm's assets and sales. The results suggest that sales are a better firm size measure, as indicated by other studies in the literature.
Lysaniuk, Benjamin; Ladsous, Roman; Tabeaud, Martine; Cottrell, Gilles; Pennetier, Cédric; Garcia, André
2015-01-01
Anthropogenic factors, as well as environmental factors, can explain fine-scale spatial differences in vector densities and seasonal variations in malaria. In this pilot study, numbers of Anopheles gambiae were quantified in concessions in a rural area of southern Benin, West Africa, in order to establish whether vector number and human factors, such as habitat and living practices, are related. The courtyard homes of 64 concessions (houses and private yards) were systematically and similarly photographed. Predefined features in the photographed items were extracted by applying an analysis grid that listed vector resting sites or potential breeding sites and also more general information about the building materials used. These data were analysed with respect to entomological data (number of mosquitoes caught per night) using the Kruskal-Wallis test, Pearson correlation coefficients, and analysis of covariance (ANCOVA). Three recurrent habitat/household types and living practices were identified that corresponded to different standards of living. These were related to the average number of mosquitoes captured per night: type I=0.88 anopheles/night; type II=0.85; and type III 0.55, but this was not statistically significant (Kruskal-Wallis test; p=0.41). There were no significant relationships between the number of potential breeding sites and number of mosquitoes caught (Pearson's correlation coefficient=-0.09, p=0.53). ANCOVA analysis of building materials and numbers of openings did not explain variation in the number of mosquitoes caught. Three dwelling types were identified by using predetermined socio-environmental characteristics but there was no association found in this study between vector number and habitat characteristics as was suspected.
A New Family of Solvable Pearson-Dirichlet Random Walks
NASA Astrophysics Data System (ADS)
Le Caër, Gérard
2011-07-01
An n-step Pearson-Gamma random walk in ℝ^d starts at the origin and consists of n independent steps with gamma-distributed lengths and uniform orientations. The gamma distribution of each step length has a shape parameter q>0. Constrained random walks of n steps in ℝ^d are obtained from the latter walks by imposing that the sum of the step lengths is equal to a fixed value. Simple closed-form expressions were obtained in particular for the distribution of the endpoint of such constrained walks for any d ≥ d_0 and any n ≥ 2 when q is either q = d/2 - 1 (d_0 = 3) or q = d - 1 (d_0 = 2) (Le Caër in J. Stat. Phys. 140:728-751, 2010). When the total walk length is chosen, without loss of generality, to be equal to 1, the constrained step lengths have a Dirichlet distribution whose parameters are all equal to q, and the associated walk is thus named a Pearson-Dirichlet random walk. The density of the endpoint position of an n-step planar walk of this type (n ≥ 2), with q = d = 2, was shown recently to be a weighted mixture of 1 + floor(n/2) endpoint densities of planar Pearson-Dirichlet walks with q = 1 (Beghin and Orsingher in Stochastics 82:201-229, 2010). The previous result is generalized to any walk space dimension and any number of steps n ≥ 2 when the parameter of the Pearson-Dirichlet random walk is q = d > 1. We rely on the connection between an unconstrained random walk and a constrained one, which both have the same n and the same q = d, to obtain a closed-form expression for the endpoint density. The latter is a weighted mixture of 1 + floor(n/2) densities with simple forms, equivalently expressed as a product of a power and a Gauss hypergeometric function. The weights are products of factors that depend on both d and n, and of Bessel numbers independent of d.
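The walk itself is straightforward to simulate, which is useful for checking closed-form endpoint densities such as those above; the sketch below draws Dirichlet step lengths and uniform directions (normalized Gaussians) and returns Monte Carlo endpoints.

    import numpy as np

    def pearson_dirichlet_endpoints(n_steps, d, q, n_walks=100_000, seed=8):
        # Step lengths ~ Dirichlet(q, ..., q) (total length 1); directions
        # independent and uniform on the unit sphere in R^d.
        rng = np.random.default_rng(seed)
        lengths = rng.dirichlet([q] * n_steps, size=n_walks)     # (W, n)
        dirs = rng.standard_normal((n_walks, n_steps, d))
        dirs /= np.linalg.norm(dirs, axis=2, keepdims=True)      # unit vectors
        return np.einsum("wn,wnd->wd", lengths, dirs)

    end = pearson_dirichlet_endpoints(n_steps=3, d=3, q=3.0)     # q = d case
    print(np.linalg.norm(end, axis=1).mean())    # mean endpoint distance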
Prognostic value of alcohol dehydrogenase mRNA expression in gastric cancer.
Guo, Erna; Wei, Haotang; Liao, Xiwen; Xu, Yang; Li, Shu; Zeng, Xiaoyun
2018-04-01
Previous studies have reported that alcohol dehydrogenase (ADH) isoenzymes possess diagnostic value in gastric cancer (GC). However, the prognostic value of ADH isoenzymes in GC remains unclear. The aim of the present study was to identify the prognostic value of ADH genes in patients with GC. The prognostic value of ADH genes was investigated in patients with GC using the Kaplan-Meier plotter tool. Kaplan-Meier plots were used to assess the difference between groups of patients with GC with different prognoses. Hazard ratios (HR) and 95% confidence intervals (CI) were used to assess the relative risk of GC survival. Overall, 593 patients with GC and 7 ADH genes were included in the survival analysis. High expression of ADH 1A (class 1), α polypeptide (ADH1A; log-rank P=0.043; HR=0.79; 95% CI: 0.64-0.99), ADH 1B (class 1), β polypeptide (ADH1B; log-rank P=1.9×10^-5; HR=0.65; 95% CI: 0.53-0.79) and ADH 5 (class III), χ polypeptide (ADH5; log-rank P=0.0011; HR=0.73; 95% CI: 0.6-0.88) resulted in a significantly decreased risk of mortality in all patients with GC compared with patients with low expression of those genes. Furthermore, protective effects may additionally be observed in patients with intestinal-type GC with high expression of ADH1B (log-rank P=0.031; HR=0.64; 95% CI: 0.43-0.96) and patients with diffuse-type GC with high expression of ADH1A (log-rank P=0.014; HR=0.51; 95% CI: 0.3-0.88), ADH1B (log-rank P=0.04; HR=0.53; 95% CI: 0.29-0.98), ADH 4 (class II), π polypeptide (log-rank P=0.033; HR=0.58; 95% CI: 0.35-0.96) and ADH 6 (class V) (log-rank P=0.037; HR=0.59; 95% CI: 0.35-0.97) resulting in a significantly decreased risk of mortality compared with patients with low expression of those genes. In contrast, patients with diffuse-type GC with high expression of ADH5 (log-rank P=0.044; HR=1.66; 95% CI: 1.01-2.74) were significantly correlated with a poor prognosis. The results of the present study suggest that ADH1A and ADH1B may be potential prognostic biomarkers of GC, whereas the prognostic value of other ADH genes requires further investigation.
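As an illustration of the comparison behind the log-rank P values quoted above (synthetic survival data, not the study's, and using the third-party lifelines package), two expression groups can be compared like this:

    import numpy as np
    from lifelines.statistics import logrank_test

    # Synthetic survival times (months) and event flags for patients
    # split by median expression of a hypothetical ADH gene.
    rng = np.random.default_rng(9)
    t_high = rng.exponential(60, 100); e_high = rng.random(100) < 0.7
    t_low = rng.exponential(40, 100);  e_low = rng.random(100) < 0.7

    res = logrank_test(t_high, t_low,
                       event_observed_A=e_high, event_observed_B=e_low)
    print(f"log-rank P = {res.p_value:.4f}")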
The impact of logging on biodiversity and carbon sequestration in tropical forests
NASA Astrophysics Data System (ADS)
Cazzolla Gatti, R.
2012-04-01
Tropical deforestation is one of the most relevant environmental issues at the planetary scale. Forest clearcutting has dramatic effects on local biodiversity, on the terrestrial carbon sink and on the atmospheric GHG balance. In terms of protection of tropical forests, selective logging is instead often regarded as a minor or even positive management practice for the ecosystem, and it is supported by international certifications. However, few studies are available on changes in the structure, biodiversity and ecosystem services due to the selective logging of African forests. This paper presents the results of a survey of tropical forests of West and Central Africa, with a comparison of long-term dynamics, structure, biodiversity and ecosystem services (such as carbon sequestration) across different types of forests, from virgin primary to selectively logged and secondary forest. Our study suggests that there is a persistent effect of selective logging on biodiversity and carbon stock losses in the long term (up to 30 years after logging) and after repeated logging. These effects, in terms of species richness and biomass, are greater than the losses expected from commercial harvesting alone, implying that selective logging in West and Central Africa is impairing ecosystem structure and services in the long term (at least 30 years). A longer selective-logging cycle (>30 years) should be considered by logging companies, although there is not yet enough information to consider this practice sustainable.
Improving Attachments of Non-Invasive (Type III) Electronic Data Loggers to Cetaceans
2013-09-30
The SSSCup instrumentation includes a load cell and five pressure sensors (four to measure internal cup pressure and one for atmospheric pressure); sensor data are logged using a netbook and a USB analog-to-digital converter. Initial testing of the SSSCup was conducted on a common dolphin (Delphinus delphis) cadaver.
Missouri timber industry - an assessment of timber product output and use, 1997.
Ronald J. Piva; Shelby G. Jones; Lynn W. Barnickol; Thomas B. Treiman
2000-01-01
Discusses recent Missouri forest industry trends; production and receipts of industrial roundwood; and production of saw logs, veneer logs, cooperage, charcoal, and other products in 1997, and compares findings with earlier surveys. Reports on quantity, type, and disposition of wood and bark residues generated by the primary wood-using industry.
Responses of experimental river corridors to engineered log jams
USDA-ARS?s Scientific Manuscript database
Physical models of the Big Sioux River, SD, were constructed to assess the impact on flow, drag, and bed erosion and deposition in response to the installation of two different types of engineered log jams (ELJs). A fixed-bed model focused on flow velocity and forces acting on an instrumented ELJ, a...
An Analytic Comparison of Effect Sizes for Differential Item Functioning
ERIC Educational Resources Information Center
Demars, Christine E.
2011-01-01
Three types of effects sizes for DIF are described in this exposition: log of the odds-ratio (differences in log-odds), differences in probability-correct, and proportion of variance accounted for. Using these indices involves conceptualizing the degree of DIF in different ways. This integrative review discusses how these measures are impacted in…
Liu, An-Nuo; Wang, Lu-Lu; Li, Hui-Ping; Gong, Juan; Liu, Xiao-Hong
2017-05-01
The literature on posttraumatic growth (PTG) is burgeoning, with the inconsistencies in the literature of the relationship between PTG and posttraumatic stress disorder (PTSD) symptoms becoming a focal point of attention. Thus, this meta-analysis aims to explore the relationship between PTG and PTSD symptoms through the Pearson correlation coefficient. A systematic search of the literature from January 1996 to November 2015 was completed. We retrieved reports on 63 studies that involved 26,951 patients. The weighted correlation coefficient revealed an effect size of 0.22 with a 95% confidence interval of 0.18 to 0.25. Meta-analysis provides evidence that PTG may be positively correlated with PTSD symptoms and that this correlation may be modified by age, trauma type, and time since trauma. Accordingly, people with high levels of PTG should not be ignored, but rather, they should continue to receive help to alleviate their PTSD symptoms.
Disturbance-mediated accelerated succession in two Michigan forest types
Abrams, Marc D.; Scott, Michael L.
1989-01-01
In northern lower Michigan, logging accelerated sugar maple (Acer saccharum) dominance in a northern white cedar (Thuja occidentalis) community, and clear-cutting and burning quickly converted certain sites dominated by mature jack pine (Pinus banksiana) to early-successional hardwoods, including Prunus, Populus, and Quercus. In both forest types the succeeding hardwoods should continue to increase in the future at the expense of the pioneer conifer species. In the cedar example, sugar maple was also increasing in an undisturbed, old-growth stand, but at a much lower rate than in the logged stand. Traditionally, disturbance was thought to set back succession to some earlier stage. However, our study sites and at least several other North American forest communities exhibited accelerated succession following a wide range of disturbances, including logging, fire, ice storms, wind-throw, disease, insect attack, and herbicide spraying.
Kirstein, Lynn M.; Mellors, John W.; Rinaldo, Charles R.; Margolick, Joseph B.; Giorgi, Janis V.; Phair, John P.; Dietz, Edith; Gupta, Phalguni; Sherlock, Christopher H.; Hogg, Robert; Montaner, J. S. G.; Muñoz, Alvaro
1999-01-01
We conducted two studies to determine the potential influence of delays in blood processing, type of anticoagulant, and assay method on human immunodeficiency virus type 1 (HIV-1) RNA levels in plasma. The first was an experimental study in which heparin- and EDTA-anticoagulated blood samples were collected from 101 HIV-positive individuals and processed to plasma after delays of 2, 6, and 18 h. HIV-1 RNA levels in each sample were then measured by both branched-DNA (bDNA) and reverse transcriptase PCR (RT-PCR) assays. Compared to samples processed within 2 h, the loss (decay) of HIV-1 RNA in heparinized blood was significant (P < 0.05) but small after 6 h (bDNA assay, −0.12 log10 copies/ml; RT-PCR, −0.05 log10 copies/ml) and after 18 h (bDNA assay, −0.27 log10 copies/ml; RT-PCR, −0.15 log10 copies/ml). Decay in EDTA-anticoagulated blood was not significant after 6 h (bDNA assay, −0.002 log10 copies/ml; RT-PCR, −0.02 log10 copies/ml), but it was after 18 h (bDNA assay, −0.09 log10 copies/ml; RT-PCR, −0.09 log10 copies/ml). Only 4% of samples processed after 6 h lost more than 50% (≥0.3 log10 copies/ml) of the HIV-1 RNA, regardless of the anticoagulant or the assay that was used. The second study compared HIV-1 RNA levels in samples from the Multicenter AIDS Cohort Study (MACS; samples were collected in heparin-containing tubes in 1985, had a 6-h average processing delay, and were assayed by bDNA assay) and the British Columbia Drug Treatment Program (BCDTP) (collected in EDTA- or acid citrate dextrose-containing tubes in 1996 and 1997, had a 2-h maximum processing delay, and were assayed by RT-PCR). HIV-1 RNA levels in samples from the two cohorts were not significantly different after adjusting for CD4+-cell count and converting bDNA assay values to those corresponding to the RT-PCR results. In summary, the decay of HIV-1 RNA measured in heparinized blood after 6 h was small (−0.05 to −0.12 log10 copies/ml), and the minor impact of this decay on HIV-1 RNA concentrations in archived plasma samples of the MACS was confirmed by the similarity of CD4+-cell counts and assay-adjusted HIV-1 RNA concentrations in the MACS and BCDTP. PMID:10405379
On the Rapid Computation of Various Polylogarithmic Constants
NASA Technical Reports Server (NTRS)
Bailey, David H.; Borwein, Peter; Plouffe, Simon
1996-01-01
We give algorithms for the computation of the d-th digit of certain transcendental numbers in various bases. These algorithms can be easily implemented (multiple precision arithmetic is not needed), require virtually no memory, and feature run times that scale nearly linearly with the order of the digit desired. They make it feasible to compute, for example, the billionth binary digit of log(2) or pi on a modest workstation in a few hours run time. We demonstrate this technique by computing the ten billionth hexadecimal digit of pi, the billionth hexadecimal digits of pi-squared, log(2) and log-squared(2), and the ten billionth decimal digit of log(9/10). These calculations rest on the observation that very special types of identities exist for certain numbers like pi, pi-squared, log(2) and log-squared(2). These are essentially polylogarithmic ladders in an integer base. A number of these identities that we derive in this work appear to be new, for example a critical identity for pi.
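A minimal sketch of the individual-digit technique for one of the constants mentioned, using the series log(2) = sum_{k>=1} 1/(k*2^k): modular exponentiation discards the integer part of each left-sum term, so binary digits near position d are obtained without arbitrary-precision arithmetic.

    def log2_binary_digits(d: int) -> str:
        # Binary digits of log(2) starting at position d+1: compute the
        # fractional part of 2^d * log(2) using pow(2, d-k, k) for the
        # terms with k <= d, plus a rapidly converging tail.
        s = 0.0
        for k in range(1, d + 1):
            s = (s + pow(2, d - k, k) / k) % 1.0
        k = d + 1
        while 2.0 ** (d - k) / k > 1e-17:
            s = (s + 2.0 ** (d - k) / k) % 1.0
            k += 1
        return format(int(s * 2**20), "020b")    # next ~20 binary digits

    print(log2_binary_digits(0)[:10])   # 1011000101 = leading bits of log(2)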
Dornacher, Daniel; Reichel, Heiko; Lippacher, Sabine
2014-10-01
Excessive tibial tuberosity-trochlear groove distance (TT-TG) is considered one of the major risk factors in patellofemoral instability (PFI). TT-TG characterises the lateralisation of the tibial tuberosity and the medialisation of the trochlear groove in the case of trochlear dysplasia. The aim of this study was to assess the inter- and intraobserver reliability of the measurement of TT-TG as a function of the grade of trochlear dysplasia. Magnetic resonance imaging (MRI) scans of 99 consecutive knee joints were analysed retrospectively. Of these, 61 knee joints presented with a history of PFI and 38 had no symptoms of PFI. After synopsis of the axial MRI scans with true lateral radiographs of the knee, the 61 knees presenting with PFI were assessed for trochlear dysplasia. The knees were classified according to the four-type system described by Dejour. For interobserver correlation of the TT-TG measurements in trochlear dysplasia, we found Pearson's correlation coefficients of r=0.89 (type A), r=0.90 (type B), r=0.74 (type C) and r=0.62 (type D). For intraobserver correlation, we calculated r=0.89 (type A), r=0.91 (type B), r=0.77 (type C) and r=0.71 (type D). Pearson's correlation coefficient for the measurement of TT-TG in normal knees was r=0.87 for interobserver correlation and r=0.90 for intraobserver correlation. Decreasing inter- and intraobserver correlation for the measurement of TT-TG with increasing severity of trochlear dysplasia was detected. In our opinion, the measurement of TT-TG is meaningful in low-grade trochlear dysplasia, whereas the decision to perform a distal realignment procedure based on a pathological TT-TG in the presence of high-grade trochlear dysplasia should be critically reassessed. Retrospective study, Level II.
Fregnani, José H T G; Soares, Fernando A; Novik, Pablo R; Lopes, Ademar; Latorre, Maria R D O
2008-02-01
(1) To compare the anatomopathological variables and recurrence rates in patients with early-stage adenocarcinoma (AC) and squamous cell carcinoma (SCC) of the uterine cervix; (2) to identify the independent risk factors for recurrence. This historical cohort study assessed 238 patients with carcinoma of the uterine cervix (IB and IIA), who underwent radical hysterectomy with pelvic lymph node dissection between 1980 and 1999. Comparison of categorical variables between the two histological types was carried out using Pearson's chi-square test or Fisher's exact test. Disease-free survival rates for AC and SCC were calculated using the Kaplan-Meier method and the curves were compared using the log-rank test. The Cox proportional hazards model was used to identify the independent risk factors for recurrence. There were 35 cases of AC (14.7%) and 203 of SCC (85.3%). AC presented lower histological grade than did SCC (grade 1: 68.6% versus 9.4%; p<0.001), lower rate of lymphovascular space involvement (25.7% versus 53.7%; p=0.002), lower rate of invasion into the middle or deep thirds of the uterine cervix (40.0% versus 80.8%; p<0.001) and lower rate of lymph node metastasis (2.9% versus 16.3%; p=0.036). Although the recurrence rate was lower for AC than for SCC (11.4% versus 15.8%), this difference was not statistically significant (p=0.509). Multivariate analysis identified three independent risk factors for recurrence: presence of metastases in the pelvic lymph nodes, invasion of the deep third of the uterine cervix and absence of or slight inflammatory reaction in the cervix. When these variables were adjusted for the histological type and radiotherapy status, they remained in the model as independent risk factors. The AC group showed less aggressive histological behavior than did the SCC group, but no difference in the disease-free survival rates was noted.
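A sketch of the contingency-table comparison described above, using scipy; the 2x2 counts are hypothetical, chosen only to roughly match the reported metastasis proportions (2.9% versus 16.3%).

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = histological type (AC, SCC),
# columns = lymph node metastasis (yes, no)
table = np.array([[1, 34],     # AC:  ~2.9% of 35
                  [33, 170]])  # SCC: ~16.3% of 203

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Fisher's exact test is preferred when expected cell counts are small
_, p_exact = fisher_exact(table)
print(f"Fisher's exact p = {p_exact:.3f}")
```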
Flood Frequency Curves - Use of information on the likelihood of extreme floods
NASA Astrophysics Data System (ADS)
Faber, B.
2011-12-01
Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure makes the assumptions that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But is there a wet/dry cycle that makes precipitation less than fully independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How does the watershed or climate changing over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions vary from estimating trend and shift and removing them from early data (and so forming a homogeneous data set), to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically-based analysis produces "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
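The method-of-moments fit described above can be sketched in a few lines. This is a simplified illustration with hypothetical peaks, not the full Bulletin 17B/17C procedure (no regional skew weighting, outlier screening, or historical-record adjustments):

```python
import numpy as np
from scipy import stats

def lp3_flows(annual_peaks, exceed_probs=(0.5, 0.1, 0.02, 0.01)):
    """Fit log-Pearson Type III by the method of moments in log10 space and
    return flows at the given annual exceedance probabilities."""
    logq = np.log10(np.asarray(annual_peaks, dtype=float))
    mean, std = logq.mean(), logq.std(ddof=1)
    skew = stats.skew(logq, bias=False)
    # scipy's pearson3 takes the skew as its shape parameter
    dist = stats.pearson3(skew, loc=mean, scale=std)
    return {p: 10 ** dist.ppf(1 - p) for p in exceed_probs}

# Hypothetical annual maximum flows (m3/s)
peaks = [820, 1130, 640, 1560, 990, 2100, 760, 1340, 880, 1790,
         700, 1480, 950, 1220, 1650, 590, 1010, 1900, 840, 1100]
print(lp3_flows(peaks))
```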
Prado-Silva, Leonardo; Cadavez, Vasco; Gonzales-Barron, Ursula; Rezende, Ana Carolina B.
2015-01-01
The aim of this study was to perform a meta-analysis of the effects of sanitizing treatments of fresh produce on Salmonella spp., Escherichia coli O157:H7, and Listeria monocytogenes. From 55 primary studies found to report on such effects, 40 were selected based on specific criteria, leading to more than 1,000 data points on mean log reductions of these three bacterial pathogens that compromise the safety of fresh produce. Data were partitioned to build three meta-analytical models that could allow the assessment of differences in mean log reductions among pathogens, fresh produce, and sanitizers. Moderating variables assessed in the meta-analytical models included type of fresh produce, type of sanitizer, concentration, and treatment time and temperature. Further, a proposal was made to classify the sanitizers according to bactericidal efficacy by means of a meta-analytical dendrogram. The results indicated that both time and temperature significantly affected the mean log reductions of the sanitizing treatment (P < 0.0001). In general, sanitizer treatments led to lower mean log reductions when applied to leafy greens (for example, 0.68 log reductions [0.00 to 1.37] achieved in lettuce) compared to other, nonleafy vegetables (for example, 3.04 mean log reductions [2.32 to 3.76] obtained for carrots). Among the pathogens, E. coli O157:H7 was more resistant to ozone (1.6 mean log reductions), while L. monocytogenes and Salmonella presented high resistance to organic acids, such as citric acid, acetic acid, and lactic acid (∼3.0 mean log reductions). With regard to the sanitizers, it was found that slightly acidic electrolyzed water, acidified sodium chlorite, and gaseous chlorine dioxide clustered together, indicating that they possessed the strongest bactericidal effect. The results reported are an important step toward advancing the global understanding of the effectiveness of sanitizers for the microbial safety of fresh produce. PMID:26362982
Analysis of the high water wave volume for the Sava River near Zagreb
NASA Astrophysics Data System (ADS)
Trninic, Dusan
2010-05-01
The paper analyses the volumes of Sava River high water waves near Zagreb during the period 1926-2008 (N = 83 years), information needed for more efficient control of high and flood waters. The primary Sava flood control structures in the City of Zagreb are dikes built on both riverbanks, and the Odra Relief Canal with a lateral spillway upstream from the City of Zagreb. Intensive morphological changes in the greater Sava area near Zagreb, and anthropogenic and climate variations and changes in the Sava catchment upstream of the Zagreb area, require detailed analysis of the water wave characteristics. In one analysis, maximum annual volumes are calculated for high water waves with constant durations of 10, 20, 30, 40, 50 and 60 days. Such calculations encompass the total quantity of water (base and surface runoff). The log-Pearson III distribution is fitted to this series of maximum annual volumes. Based on the results obtained, interrelations are established between the wave volume as a function of duration and occurrence probability. In addition to the analysis of maximum volumes of constant duration, it is worthwhile to analyse the maximum volume in excess of a reference discharge, since this is very important for flood control. To determine the reference discharges, a discharge of specific duration is taken from an average discharge duration curve. The adopted reference discharges have durations of 50, 40, 30, 20 and 10%. As in the previous case, the log-Pearson III distribution is fitted to the maximum wave data series. For the reference discharge Q = 604 m3/s (duration 10%), a linear trend is calculated for maximum annual volumes exceeding the reference discharge for the Sava near Zagreb during the analyzed period. The analysis results show a significant decreasing trend. A similar analysis is carried out for the following three reference discharges: regular flood control measures at the Sava near Zagreb, which are proclaimed when the water level is 350 cm (Q = 2114 m3/s); extraordinary flood control measures, taken when the water level is 450 cm (Q = 2648 m3/s); and the discharge at the deterministic inlet into the Odra Canal of approximately Q = 2300 m3/s. The results of these analyses show that water wave volumes exceeding the reference discharges occurred in a comparatively small number of years, and that their duration was one to two days.
Efficacy of on-farm use of ultraviolet light for inactivation of bacteria in milk for calves.
Gelsinger, S L; Heinrichs, A J; Jones, C M; Van Saun, R J; Wolfgang, D R; Burns, C M; Lysczek, H R
2014-05-01
Ultraviolet light is being employed for bacterial inactivation in milk for calves; however, limited evidence is available to support the claim that UV light effectively inactivates bacteria found in milk. Thus, the objective of this observational study was to investigate the efficacy of on-farm UV light treatment in reducing bacteria populations in waste milk used for feeding calves. Samples of nonsaleable milk were collected from 9 Pennsylvania herds, twice daily for 15 d, both before and after UV light treatment (n=60 samples per farm), and analyzed for standard plate count, coliforms, noncoliform gram-negative bacteria, environmental and contagious streptococci, coagulase-negative staphylococci, Streptococcus agalactiae, Staphylococcus aureus count, and total solids percentage, and log reduction and percentage log reduction were calculated. Data were analyzed using the mixed procedure in SAS. For all bacteria types, samples collected after UV treatment contained significantly fewer bacteria than samples collected before UV treatment. Weighted least squares means for log reduction (percentage log reduction) were 1.34 (29%), 1.27 (58%), 1.48 (53%), 1.85 (55%), 1.37 (72%), 1.92 (63%), 1.07 (33%), and 1.67 (82%) for standard plate count, coliforms, noncoliform gram-negative bacteria, environmental and contagious streptococci, Strep. agalactiae, coagulase-negative staphylococci, and Staph. aureus, respectively. A percentage log reduction greater than 50% was achieved in 6 of 8 bacteria types, and 43 and 94% of samples collected after UV treatment met recommended bacterial standards for milk for feeding calves. Based on these results, UV light treatment may be effective for some, but not all, bacteria types found in nonsaleable waste milk. Thus, farmers should take into account the bacteria types that may need to be reduced when considering the purchase of a UV-treatment system. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
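A minimal sketch of the two reported metrics; "percentage log reduction" is interpreted here as the drop in the log10 count relative to the initial log10 count, an assumption that matches the pattern of the values quoted above:

```python
import math

def reductions(cfu_before, cfu_after):
    """Log reduction, plus 'percentage log reduction' taken (by assumption)
    as the log10 drop relative to the initial log10 count."""
    log_red = math.log10(cfu_before) - math.log10(cfu_after)
    pct_log_red = 100.0 * log_red / math.log10(cfu_before)
    return log_red, pct_log_red

# Hypothetical plate counts (cfu/ml) before and after UV treatment
print(reductions(2.0e4, 9.0e2))  # ~1.35 log reduction, ~31% percentage log reduction
```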
LaManna, Joseph A.; Martin, Thomas E.
2017-01-01
Understanding the causes underlying changes in species diversity is a fundamental pursuit of ecology. Animal species richness and composition often change with decreased forest structural complexity associated with logging. Yet differences in latitude and forest type may strongly influence how species diversity responds to logging. We performed a meta-analysis of logging effects on local species richness and composition of birds across the world and assessed responses by different guilds (nesting strata, foraging strata, diet, and body size). This approach allowed identification of species attributes that might underlie responses to this anthropogenic disturbance. We only examined studies that allowed forests to regrow naturally following logging, and accounted for logging intensity, spatial extent, successional regrowth after logging, and the change in species composition expected due to random assembly from regional species pools. Selective logging in the tropics and clearcut logging in temperate latitudes caused loss of species from nearly all forest strata (ground to canopy), leading to substantial declines in species richness (up to 27% of species). Few species were lost or gained following any intensity of logging in lower-latitude temperate forests, but the relative abundances of these species changed substantially. Selective logging at higher-temperate latitudes generally replaced late-successional specialists with early-successional specialists, leading to no net changes in species richness but large changes in species composition. Removing less basal area during logging mitigated the loss of avian species from all forests and, in some cases, increased diversity in temperate forests. This meta-analysis provides insights into the important role of habitat specialization in determining differential responses of animal communities to logging across tropical and temperate latitudes.
Kim, Taehyeung; Park, Ah Yeon; Baek, Younghwa; Cha, Seongwon
2017-01-01
Circulating lipid ratios are considered predictors of cardiovascular risks and metabolic syndrome, which cause coronary heart diseases. One constitutional type of Korean medicine prone to weight accumulation, the Tae-Eum type, predisposes individuals to metabolic syndrome, hypertension, diabetes mellitus, etc. Here, we aimed to identify genetic variants for lipid ratios using a genome-wide association study (GWAS) followed by replication analysis in Koreans and constitutional subgroups. GWASs in 5,292 individuals of the Korean Genome and Epidemiology Study and replication analyses in 2,567 subjects of the Korea medicine Data Center were performed to identify genetic variants associated with the ratios of triglyceride (TG) to HDL cholesterol (HDLC), LDL cholesterol (LDLC) to HDLC, and non-HDLC to HDLC. For subgroup analysis, a computer-based constitution analysis tool was used to categorize the constitutional types of the subjects. In the discovery stage, seven variants in four loci, three variants in three loci, and two variants in one locus were associated with the ratios of log-transformed TG:HDLC (log[TG]:HDLC), LDLC:HDLC, and non-HDLC:HDLC, respectively. The associations of the GWAS variants with lipid ratios were replicated in the validation stage: for the log[TG]:HDLC ratio, rs6589566 near APOA5 and rs4244457 and rs6586891 near LPL; for the LDLC:HDLC ratio, rs4420638 near APOC1 and rs17445774 near C2orf47; and for the non-HDLC:HDLC ratio, rs6589566 near APOA5. Five of these six variants are known to be associated with TG, LDLC, and/or HDLC, but rs17445774 was newly identified in this study to be involved in lipid level changes. Constitutional subgroup analysis revealed effects of the variants associated with the log[TG]:HDLC and non-HDLC:HDLC ratios in both the Tae-Eum and non-Tae-Eum types, whereas the effect of the LDLC:HDLC ratio-associated variants remained only in the Tae-Eum type. In conclusion, we identified three log[TG]:HDLC ratio-associated variants, two LDLC:HDLC ratio-associated variants, and one non-HDLC:HDLC-associated variant in Koreans and the constitutional subgroups.
The X-Ray and Mid-infrared Luminosities in Luminous Type 1 Quasars
NASA Astrophysics Data System (ADS)
Chen, Chien-Ting J.; Hickox, Ryan C.; Goulding, Andrew D.; Stern, Daniel; Assef, Roberto; Kochanek, Christopher S.; Brown, Michael J. I.; Harrison, Chris M.; Hainline, Kevin N.; Alberts, Stacey; Alexander, David M.; Brodwin, Mark; Del Moro, Agnese; Forman, William R.; Gorjian, Varoujan; Jones, Christine; Murray, Stephen S.; Pope, Alexandra; Rovilos, Emmanouel
2017-03-01
Several recent studies have reported different intrinsic correlations between the active galactic nucleus (AGN) mid-IR luminosity (L_MIR) and the rest-frame 2-10 keV luminosity (L_X) for luminous quasars. To understand the origin of the difference in the observed L_X-L_MIR relations, we study a sample of 3247 spectroscopically confirmed type 1 AGNs collected from Boötes, XMM-COSMOS, XMM-XXL-North, and the Sloan Digital Sky Survey quasars in the Swift/XRT footprint, spanning over four orders of magnitude in luminosity. We carefully examine how different observational constraints impact the observed L_X-L_MIR relations, including the inclusion of X-ray-nondetected objects, possible X-ray absorption in type 1 AGNs, X-ray flux limits, and star formation contamination. We find that the primary factor driving the different L_X-L_MIR relations reported in the literature is the X-ray flux limits of the different studies. When taking these effects into account, we find that the X-ray luminosity and mid-IR luminosity (measured at rest-frame 6 μm, or L_6μm) of our sample of type 1 AGNs follow a bilinear relation in the log-log plane: log L_X = (0.84 ± 0.03) × log(L_6μm/10^45 erg s^-1) + (44.60 ± 0.01) for L_6μm < 10^44.79 erg s^-1, and log L_X = (0.40 ± 0.03) × log(L_6μm/10^45 erg s^-1) + (44.51 ± 0.01) for L_6μm ≥ 10^44.79 erg s^-1. This suggests that the luminous type 1 quasars have a shallower L_X-L_6μm correlation than the approximately linear relations found in local Seyfert galaxies. This result is consistent with previous studies reporting a luminosity-dependent L_X-L_MIR relation and implies that assuming a linear L_X-L_6μm relation to infer the neutral gas column density for X-ray absorption might overestimate the column densities in luminous quasars.
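For convenience, the quoted bilinear relation can be coded directly; this sketch uses only the best-fit slopes and intercepts (uncertainties ignored) and is an illustration, not the authors' code. Note that the two branches nearly meet at the break point, a useful consistency check on the reconstruction:

```python
import numpy as np

def log_lx(log_l6um):
    """Predicted log10 L_X (erg/s) from log10 L_6um, using only the best-fit
    slopes and intercepts quoted above."""
    log_l6um = np.asarray(log_l6um, dtype=float)
    x = log_l6um - 45.0  # log(L_6um / 10^45 erg/s)
    return np.where(log_l6um < 44.79, 0.84 * x + 44.60, 0.40 * x + 44.51)

print(log_lx([44.0, 44.79, 46.0]))  # branches give ~44.42 on both sides of 44.79
```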
Weighted Bergman Kernels and Quantization
NASA Astrophysics Data System (ADS)
Engliš, Miroslav
Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and z ∈ Ω a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion
Logging residue in southeast Alaska.
James O. Howard; Theodore S. Setzer
1989-01-01
Detailed information on logging residues in southeast Alaska is provided as input to economic and technical assessments of its use for products or site amenities. Two types of information are presented. Ratios are presented that can be used to generate an estimate, based on volume or acres harvested, of the cubic-foot volume of residue for any particular area of...
Slash and litter weight after clearcut logging in two young-growth timber stands
William E. Sundahl
1966-01-01
Ninety-year-old stands of the Pacific ponderosa pine and Pacific ponderosa pine-Douglas-fir types yielded 53 to 110 tons of slash to the acre after logging on the Challenge Experimental Forest, Yuba County, Calif. Fine slash (under 4 inches d.i.b.) contributed 61 to 64 percent of this weight.
Missouri timber industry--an assessment of timber product output and use, 1994.
Ronald J. Piva; Shelby G. Jones
1997-01-01
Discusses recent Missouri forest industry trends; production and receipts of industrial roundwood; and production of saw logs, veneer logs, cooperage bolts, charcoal, and other products in 1994. Compares findings with those from earlier surveys. Reports on the quantity, type, and disposition of wood and bark residues generated by the primary wood-using industry.
A Spreadsheet for a 2 x 3 x 2 Log-Linear Analysis. AIR 1991 Annual Forum Paper.
ERIC Educational Resources Information Center
Saupe, Joe L.
This paper describes a personal computer spreadsheet set up to carry out hierarchical log-linear analyses, a type of analysis useful for institutional research into multidimensional frequency tables formed from categorical variables such as faculty rank, student class level, gender, or retention status. The spreadsheet provides a concrete vehicle…
A logging residue "yield" table for Appalachian hardwoods
A. Jeff Martin
1976-01-01
An equation for predicting logging-residue volume per acre for Appalachian hardwoods was developed from data collected on 20 timber sales in national forests in West Virginia and Virginia. The independent variables of type-of-cut, products removed, basal area per acre, and stand age explained 95 percent of the variation in residue volume per acre. A "yield"...
Validation of an internal hardwood log defect prediction model
R. Edward Thomas
2011-01-01
The type, size, and location of internal defects dictate the grade and value of lumber sawn from hardwood logs. However, acquiring internal defect knowledge with x-ray/computed-tomography or magnetic-resonance imaging technology can be expensive both in time and cost. An alternative approach uses prediction models based on correlations among external defect indicators...
Williams, Lester J.; Raines, Jessica E.; Lanning, Amanda E.
2013-04-04
A database of borehole geophysical logs and other types of data files was compiled as part of ongoing studies of water availability and assessment of brackish- and saline-water resources. The database contains 4,883 logs from 1,248 wells in Florida, Georgia, Alabama, South Carolina, and from a limited number of offshore wells of the eastern Gulf of Mexico and the Atlantic Ocean. The logs can be accessed through a download directory organized by state and county for onshore wells and in a single directory for the offshore wells. A flat file database is provided that lists the wells, their coordinates, and the file listings.
Farsa, Oldřich
2013-01-01
The log BB parameter is the logarithm of the ratio of a compound's equilibrium concentrations in brain tissue versus blood plasma. This parameter is a useful descriptor in assessing the ability of a compound to permeate the blood-brain barrier. The aim of this study was to develop a Hansch-type linear regression QSAR model that correlates the parameter log BB with the retention time of drugs and other organic compounds on a reversed-phase HPLC column containing an embedded amide moiety. The retention time was expressed by the capacity factor log k'. The second aim was to estimate the brain absorption of 2-(azacycloalkyl)acetamidophenoxyacetic acids, which are analogues of piracetam, nefiracetam, and meclofenoxate. Notably, these acids may be novel nootropics. Two simple regression models relating log BB and log k' were developed from an assay performed using a reversed-phase HPLC column containing an embedded amide moiety. Both the quadratic and linear models yielded statistical parameters comparable to previously published models of log BB dependence on various structural characteristics. The models predict that four members of the substituted phenoxyacetic acid series have a strong chance of permeating the barrier and being absorbed in the brain. The results of this study show that a reversed-phase HPLC system containing an embedded amide moiety is a functional in vitro surrogate of the blood-brain barrier. These results suggest that racetam-type nootropic drugs containing a carboxylic moiety could be more poorly absorbed than analogues devoid of the carboxyl group, especially if the compounds penetrate the barrier by a simple diffusion mechanism.
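A minimal sketch of the kind of linear log BB versus log k' calibration described above; the data points are hypothetical and the fitted coefficients are illustrative only:

```python
import numpy as np

# Hypothetical calibration set: HPLC capacity factors (log k') paired with
# literature log BB values; numbers are illustrative only.
log_k = np.array([-0.52, -0.20, 0.05, 0.31, 0.48, 0.77])
log_bb = np.array([-1.10, -0.55, -0.21, 0.08, 0.25, 0.61])

# Ordinary least squares for the linear model log BB = a * log k' + b
a, b = np.polyfit(log_k, log_bb, 1)
pred = a * log_k + b
r2 = 1 - ((log_bb - pred) ** 2).sum() / ((log_bb - log_bb.mean()) ** 2).sum()
print(f"log BB = {a:.2f} * log k' + {b:.2f}  (R^2 = {r2:.2f})")
```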
NASA Astrophysics Data System (ADS)
Matsubara, Yoshitsugu; Musashi, Yasuo
2017-12-01
The purpose of this study is to explain fluctuations in email size. We have previously investigated the long-term correlations between email send requests and data flow in the system log of the primary staff email server at a university campus, finding that email size frequency follows a power-law distribution with two inflection points, and that the power-law property weakens the correlation of the data flow. However, the mechanism underlying this fluctuation is not completely understood. We collected new log data from both staff and students over six academic years and analyzed the frequency distribution thereof, focusing on the type of content contained in the emails. Furthermore, we obtained permission to collect "Content-Type" log data from the email headers. We therefore collected the staff log data from May 1, 2015 to July 31, 2015, creating two subdistributions. In this paper, we propose a model to explain these subdistributions, which follow log-normal-like distributions. In the log-normal-like model, email senders, consciously or unconsciously, regulate the size of new email sentences according to a normal distribution. The fitting of the model is acceptable for these subdistributions, and the model demonstrates power-law properties for large email sizes. An analysis of the length of new email sentences would be required for further discussion of our model; however, to protect user privacy at the participating organization, we left this analysis for future work. This study provides new knowledge on the properties of email sizes, and our model is expected to contribute to the decision on whether to establish upper size limits in the design of email services.
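A toy rendering of the proposed log-normal-like mechanism (all parameters hypothetical); it illustrates only how a normal distribution on a log scale yields the heavy-tailed size frequencies discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the proposed model: each sender draws the size of the new
# part of an email from a normal distribution on a log scale, so sizes come
# out log-normal; mu and sigma are hypothetical (bytes).
mu, sigma = np.log(8e3), 1.2
sizes = rng.lognormal(mu, sigma, 100_000)

# Frequency distribution on logarithmic bins, to inspect the heavy tail
counts, edges = np.histogram(sizes, bins=np.logspace(2, 8, 25))
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    if n:
        print(f"{lo:12.0f}-{hi:12.0f} bytes: {n}")
```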
Solubilization of polycyclic aromatic hydrocarbons in micellar nonionic surfactant solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, D.A.; Luthy, R.G.; Liu, Zhongbao
1991-01-01
Experimental data are presented on the enhanced apparent solubilities of naphthalene, phenanthrene, and pyrene resulting from solubilization in aqueous solutions of four commercial, nonionic surfactants: an alkyl polyoxyethylene (POE) type, two octylphenol POE types, and a nonylphenol POE type. Apparent solubilities of the polycyclic aromatic hydrocarbon (PAH) compounds in surfactant solutions were determined by radiolabeled techniques. Solubilization of each PAH compound commenced at the surfactant critical micelle concentration and was proportional to the concentration of surfactant in micelle form. The partitioning of organic compounds between surfactant micelles and aqueous solution is characterized by a mole fraction micelle-phase/aqueous-phase partition coefficient, K_m. Values of log K_m for PAH compounds in surfactant solutions of this study range from 4.57 to 6.53. Log K_m appears to be a linear function of log K_ow for a given surfactant solution. A knowledge of partitioning in aqueous surfactant systems is a prerequisite to understanding mechanisms affecting the behavior of hydrophobic organic compounds in soil-water systems in which surfactants play a role in contaminant remediation or facilitated transport.
Surface Soil Changes Following Selective Logging in an Eastern Amazon Forest
NASA Technical Reports Server (NTRS)
Olander, Lydia P.; Bustamante, Mercedes M.; Asner, Gregory P.; Telles, Everaldo; Prado, Zayra; Camargo, Plinio B.
2005-01-01
In the Brazilian Amazon, selective logging is second only to forest conversion in its extent. Conversion to pasture or agriculture tends to reduce soil nutrients and site productivity over time unless fertilizers are added. Logging removes nutrients in bole wood, enough that repeated logging could deplete essential nutrients over time. After a single logging event, nutrient losses are likely to be too small to observe in the large soil nutrient pools, but disturbances associated with logging also alter soil properties. Selective logging, particularly reduced-impact logging, results in consistent patterns of disturbance that may be associated with particular changes in soil properties. Soil bulk density, pH, carbon (C), nitrogen (N), phosphorus (P), calcium (Ca), magnesium (Mg), potassium (K), iron (Fe), aluminum (Al), δ13C, δ15N, and P fractionations were measured on the soils of four different types of logging-related disturbances: roads, decks, skids, and treefall gaps. Litter biomass and percent bare ground were also determined in these areas. To evaluate the importance of fresh foliage inputs from downed tree crowns in treefall gaps, foliar nutrients for mature forest trees were also determined and compared to those of fresh litterfall. The immediate impacts of logging on soil properties, and how these might link to the longer-term estimated nutrient losses and the observed changes in soils, were studied.
NASA Astrophysics Data System (ADS)
Ivanova, G. A.; Conard, S. G.; McRae, D. J.; Kukavskaya, E. A.; Bogorodskaya, A. V.; Kovaleva, N. M.
2010-12-01
Wildfire and large-scale forest harvesting are the two major disturbances in the Russian boreal forests. Non-recovered logged sites total about a million hectares in Siberia. Logged sites are characterized by higher fire hazard than forest sites due to the presence of generally untreated logging slash (i.e., available fuel), which dries out much more rapidly than understory fuels. Moreover, most logging sites can be easily accessed by the local population, which increases the risk of fire ignition. Fire impacts on the overstory trees, subcanopy woody layer, and ground vegetation biomass were estimated on 14 logged and unlogged comparison sites in the Lower Angara Region in 2009-2010 as part of the NASA-funded NEESPI project, The Influence of Changing Forestry Practices on the Effects of Wildfire and on Interactions Between Fire and Changing Climate in Central Siberia. Based on calculated fuel consumption, we estimated carbon emissions from fires on both logged and unlogged burned sites. Carbon emission from fires on logged sites appeared to be twice that on unlogged sites. Soil respiration decreased on both site types after fires. This reduction may partially offset fire-produced carbon emissions. Carbon emissions from fire and post-fire ecosystem damage on logged sites are expected to increase under changing climate conditions and as a result of anticipated increases in future forest harvesting in Siberia.
Gonzalo, C; Carriedo, J A; Beneitez, E; Juárez, M T; De La Fuente, L F; San Primitivo, F
2006-02-01
A total of 9,353 records for bulk tank total bacterial count (TBC) were obtained over 1 yr from 315 dairy ewe flocks belonging to the Sheep Improvement Consortium (CPO) in Castilla-León (Spain). Analysis of variance showed significant effects of flock, breed, month within flock, dry therapy, milking type and installation, and logSCC on logTBC. Flock and month within flock were important variation factors, accounting for 22.0 and 22.1% of the variance, respectively. Considerable repeatability values were obtained for both random factors. Hand milking and bucket-milking machines elicited the highest logTBC (5.31), whereas parlor systems with a looped milkline elicited the lowest (5.01). Flocks implementing dry therapy (5.12) showed significantly lower logTBC than those not using it (5.25). Variability in logTBC among breeds ranged from 5.24 (Awassi) to 5.07 (Churra). However, clinical outbreaks of contagious agalactia did not increase TBC significantly. A statistically significant relationship was found between logTBC and logSCC, the correlation coefficient between the variables being r = 0.23. Programs for improving milk hygiene should address both total bacterial count and somatic cell count at the same time.
Practical scheme for optimal measurement in quantum interferometric devices
NASA Astrophysics Data System (ADS)
Takeoka, Masahiro; Ban, Masashi; Sasaki, Masahide
2003-06-01
We apply a Kennedy-type detection scheme, which was originally proposed for a binary communications system, to interferometric sensing devices. We show that the minimum detectable perturbation of the proposed system reaches the ultimate precision bound which is predicted by quantum Neyman-Pearson hypothesis testing. To provide concrete examples, we apply our interferometric scheme to phase shift detection by using coherent and squeezed probe fields.
NASA Astrophysics Data System (ADS)
Cao, Xiangyu; Le Doussal, Pierre; Rosso, Alberto; Santachiara, Raoul
2018-04-01
We study transitions in log-correlated random energy models (logREMs) that are related to the violation of a Seiberg bound in Liouville field theory (LFT): the binding transition and the termination point transition (a.k.a., pre-freezing). By means of LFT-logREM mapping, replica symmetry breaking and traveling-wave equation techniques, we unify both transitions in a two-parameter diagram, which describes the free-energy large deviations of logREMs with a deterministic background log potential, or equivalently, the joint moments of the free energy and Gibbs measure in logREMs without background potential. Under the LFT-logREM mapping, the transitions correspond to the competition of discrete and continuous terms in a four-point correlation function. Our results provide a statistical interpretation of a peculiar nonlocality of the operator product expansion in LFT. The results are rederived by a traveling-wave equation calculation, which shows that the features of LFT responsible for the transitions are reproduced in a simple model of diffusion with absorption. We examine also the problem by a replica symmetry breaking analysis. It complements the previous methods and reveals a rich large deviation structure of the free energy of logREMs with a deterministic background log potential. Many results are verified in the integrable circular logREM, by a replica-Coulomb gas integral approach. The related problem of common length (overlap) distribution is also considered. We provide a traveling-wave equation derivation of the LFT predictions announced in a precedent work.
Role of T-type calcium channels in myogenic tone of skeletal muscle resistance arteries.
VanBavel, Ed; Sorop, Oana; Andreasen, Ditte; Pfaffendorf, Martin; Jensen, Boye L
2002-12-01
T-type calcium channels may be involved in the maintenance of myogenic tone. We tested their role in isolated rat cremaster arterioles obtained after CO2 anesthesia and decapitation. Total RNA was analyzed by RT-PCR and Southern blotting for calcium channel expression. We observed expression of voltage-operated calcium (CaV) channels CaV3.1 (T-type), CaV3.2 (T-type), and CaV1.2 (L-type) in cremaster arterioles (n = 3 rats). Amplification products were observed only in the presence of reverse transcriptase and cDNA. Concentration-response curves of the relatively specific L-type blocker verapamil and the relatively specific T-type blockers mibefradil and nickel were made on cannulated vessels with either myogenic tone (75 mmHg) or a similar level of constriction induced by 30 mM K+ at 35 mmHg. Mibefradil and nickel were, respectively, 162-fold and 300-fold more potent in inhibiting myogenic tone compared with K+-induced constriction [log(IC50, M): mibefradil, basal -7.3 ± 0.2 (n = 9) and K+ -5.1 ± 0.1 (n = 5); nickel, basal -4.1 ± 0.2 (n = 5) and K+ -1.6 ± 0.5 (n = 5); means ± SE]. Verapamil had a 17-fold more potent effect [log(IC50, M): basal -6.6 ± 0.1 (n = 5); K+ -5.4 ± 0.3 (n = 4); all log(IC50) P < 0.05, basal vs. K+]. These data suggest that T-type calcium channels are expressed and involved in maintenance of myogenic tone in rat cremaster muscle arterioles.
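The fold-potency figures follow directly from differences in the log(IC50) values; the small discrepancies below come from rounding in the published estimates:

```python
# 10 raised to the difference in log(IC50) gives the fold shift in potency.
print(10 ** (-5.1 - (-7.3)))  # mibefradil: ~158, vs. the reported 162-fold
print(10 ** (-1.6 - (-4.1)))  # nickel:     ~316, vs. the reported 300-fold
print(10 ** (-5.4 - (-6.6)))  # verapamil:  ~16,  vs. the reported 17-fold
```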
Jelden, Katelyn C; Gibbs, Shawn G; Smith, Philip W; Hewlett, Angela L; Iwen, Peter C; Schmid, Kendra K; Lowe, John J
2017-06-01
An ultraviolet germicidal irradiation (UVGI) generator (the TORCH, ClorDiSys Solutions, Inc.) was used to compare the disinfection of surface coupons (plastic from a bedrail, stainless steel, and chrome-plated light switch cover) in a hospital room with walls coated with ultraviolet (UV)-reflective paint (Lumacept) or standard paint. Each surface coupon was inoculated with methicillin-resistant Staphylococcus aureus (MRSA) or vancomycin-resistant Enterococcus faecalis (VRE), placed at 6 different sites within a hospital room coated with UV-reflective paint or standard paint, and treated by 10 min UVC exposure (UVC dose of 0-688 mJ/cm² between sites with standard paint and 0-553 mJ/cm² with UV-reflective paint) in 8 total trials. Aggregated MRSA concentrations on plastic bedrail surface coupons were reduced on average by 3.0 log10 (1.8 log10 geometric standard deviation [GSD]) with standard paint and 4.3 log10 (1.3 log10 GSD) with UV-reflective paint (p = 0.0005), with no significant reduction differences between paints on stainless steel and chrome. Average VRE concentrations were reduced by ≥4.9 log10 (<1.2 log10 GSD) on all surface types with UV-reflective paint and ≤4.1 log10 (<1.7 log10 GSD) with standard paint (p < 0.05). At 5 aggregated sites directly exposed to UVC light, MRSA concentrations on average were reduced by 5.2 log10 (1.4 log10 GSD) with standard paint and 5.1 log10 (1.2 log10 GSD) with UV-reflective paint (p = 0.017) and VRE by 4.4 log10 (1.4 log10 GSD) with standard paint and 5.3 log10 (1.1 log10 GSD) with UV-reflective paint (p < 0.0001). At one indirectly exposed site on the opposite side of the hospital bed from the UVGI generator, MRSA concentrations on average were reduced by 1.3 log10 (1.7 log10 GSD) with standard paint and 4.7 log10 (1.3 log10 GSD) with UV-reflective paint (p < 0.0001) and VRE by 1.2 log10 (1.5 log10 GSD) with standard paint and 4.6 log10 (1.1 log10 GSD) with UV-reflective paint (p < 0.0001). Coating hospital room walls with UV-reflective paint enhanced UVGI disinfection of nosocomial bacteria on various surfaces compared to standard paint, particularly at a surface placement site indirectly exposed to UVC light.
Generating porosity spectrum of carbonate reservoirs using ultrasonic imaging log
NASA Astrophysics Data System (ADS)
Zhang, Jie; Nie, Xin; Xiao, Suyun; Zhang, Chong; Zhang, Chaomo; Zhang, Zhansong
2018-03-01
Imaging logging tools provide an image of the borehole wall. Micro-resistivity imaging logs have been used to obtain borehole porosity spectra, but resistivity imaging cannot cover the whole borehole wall. In this paper, we propose a method to calculate the porosity spectrum from ultrasonic imaging logging data. Based on the amplitude attenuation equation, we analyze the factors affecting wave propagation in the drilling fluid and formation, and based on the bulk-volume rock model, the Wyllie equation, and the Raymer equation, we establish conversion models between the reflection coefficient β and porosity ϕ. We then use ultrasonic imaging logging and conventional wireline logging data to calculate the near-borehole formation porosity distribution spectrum. The porosity spectrum obtained from ultrasonic imaging data is compared with that from micro-resistivity imaging data; the two are similar, with discrepancies caused by differences in borehole coverage and input data. We separate porosity types by threshold-value segmentation and generate porosity-depth distribution curves by counting with equal depth spacing on the porosity image. Field application results are good and demonstrate the effectiveness of our method.
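The paper's conversion from reflection coefficient to porosity is not reproduced here; the sketch below illustrates only the final counting step described above (a histogram spectrum per equal-depth window plus threshold segmentation) on a synthetic porosity image, with all parameters hypothetical:

```python
import numpy as np

# Synthetic porosity image: rows = depth samples, columns = azimuthal bins.
rng = np.random.default_rng(1)
porosity = np.clip(rng.normal(0.08, 0.04, size=(500, 180)), 0.0, 0.35)

window = 50    # depth samples per spectrum (equal depth spacing)
cutoff = 0.15  # hypothetical threshold separating matrix from vug/fracture porosity

for top in range(0, porosity.shape[0], window):
    block = porosity[top:top + window]
    spectrum, edges = np.histogram(block, bins=np.linspace(0.0, 0.35, 36))
    secondary = (block > cutoff).mean()
    print(f"rows {top}-{top + window - 1}: modal porosity "
          f"{edges[spectrum.argmax()]:.3f}, secondary fraction {secondary:.2%}")
```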
[Calculating Pearson residual in logistic regressions: a comparison between SPSS and SAS].
Xu, Hao; Zhang, Tao; Li, Xiao-song; Liu, Yuan-yuan
2015-01-01
To compare the results of Pearson residual calculations in logistic regression models using SPSS and SAS. We reviewed Pearson residual calculation methods and used two sets of data to test logistic models constructed in SPSS and SAS. One model contained a small number of covariates compared to the number of observations. The other contained a similar number of covariates as the number of observations. The two software packages produced similar Pearson residual estimates when the models contained a similar number of covariates as the number of observations, but the results differed when the number of observations was much greater than the number of covariates. The two software packages produce different results for Pearson residuals, especially when the models contain a small number of covariates. Further studies are warranted.
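One plausible source of cross-package differences, consistent with the covariate-number effect reported above, is whether residuals are computed per observation or per aggregated covariate pattern; the sketch below computes per-observation Pearson residuals on synthetic data (an illustration, not either package's procedure):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x = rng.normal(size=(200, 2))
p_true = 1 / (1 + np.exp(-(0.5 + x @ np.array([1.0, -0.8]))))
y = rng.binomial(1, p_true)

X = sm.add_constant(x)
fit = sm.Logit(y, X).fit(disp=0)
p_hat = fit.predict(X)

# Per-observation Pearson residual: (y_i - p_i) / sqrt(p_i * (1 - p_i))
pearson_resid = (y - p_hat) / np.sqrt(p_hat * (1 - p_hat))
print(pearson_resid[:5])
```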
Suen, Nian-Tzu; Guo, Sheng-Ping; Hoos, James; Bobev, Svilen
2018-05-07
Reported are the syntheses, crystal structures, and electronic structures of six rare-earth metal-lithium stannides with the general formulas RE3Li4-xSn4+x (RE = La-Nd, Sm) and Eu7Li8-xSn10+x. These new ternary compounds have been synthesized by high-temperature reactions of the corresponding elements. Their crystal structures have been established using single-crystal X-ray diffraction methods. The RE3Li4-xSn4+x phases crystallize in the orthorhombic body-centered space group Immm (No. 71) with the Zr3Cu4Si4 structure type (Pearson code oI22), and the Eu7Li8-xSn10+x phase crystallizes in the orthorhombic base-centered space group Cmmm (No. 65) with the Ce7Li8Ge10 structure type (Pearson code oC50). Both structures can be considered part of the [RESn2]n[RELi2Sn]m homologous series, wherein the structures are intergrowths of imaginary RESn2 (AlB2-like structure type) and RELi2Sn (MgAl2Cu-like structure type) fragments. Close examination of the structures indicates complex occupational Li-Sn disorder, apparently governed by the drive of the structure to achieve an optimal number of valence electrons. This conclusion, based on experimental results, is supported by detailed electronic structure calculations carried out using the tight-binding linear muffin-tin orbital method.
Degnan, James; Barker, Gregory; Olson, Neil; Wilder, Leland
2012-01-01
Maximum groundwater temperatures at the bottom of the logs were between 11.7 and 17.3 degrees Celsius. Geothermal gradients were generally higher than typically reported for other water wells in the United States. Some of the high gradients were associated with high natural gamma emissions. Groundwater flow was discernible in 5 of the 10 wells studied but only obscured the portion of the geothermal gradient signal where groundwater actually flowed through the well. Temperature gradients varied by mapped bedrock type but can also vary by differences in mineralogy or rock type within the wells.
Tavares, Óscar M; Valente-Dos-Santos, João; Duarte, João P; Póvoas, Susana C; Gobbo, Luís A; Fernandes, Rômulo A; Marinho, Daniel A; Casanova, José M; Sherar, Lauren B; Courteix, Daniel; Coelho-E-Silva, Manuel J
2016-11-24
A variety of performance outputs are strongly determined by lower limb volume and composition in children and adolescents. The current study aimed to examine the validity of thigh volume (TV) estimated by anthropometry in late adolescent female volleyball players. Dual-energy X-ray absorptiometry (DXA) measures were used as the reference method. Total and regional body composition was assessed with a Lunar DPX NT/Pro/MD+/Duo/Bravo scanner in a cross-sectional sample of 42 Portuguese female volleyball players aged 14-18 years (165.2 ± 0.9 cm; 61.1 ± 1.4 kg). TV was estimated with the reference method (TV-DXA) and with the anthropometric method (TV-ANTH). Agreement between procedures was assessed with Deming regression. The analysis also considered a calibration of the anthropometric approach. The equation that best predicted TV-DXA was: -0.899 + 0.876 × log10(body mass) + 0.113 × log10(TV-ANTH). This new model (NM) was validated using the predicted residual sum of squares (PRESS) method (R²PRESS = 0.838). Correlation between the reference method and the NM was 0.934 (95%CI: 0.880-0.964, Sy·x = 0.325 L). A new and accurate anthropometric method to estimate TV in adolescent female volleyball players was obtained from the equation of Jones and Pearson together with adjustments for body mass.
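The quoted equation can be applied directly, under two assumptions the abstract does not state explicitly: that the fitted response is the log10 of thigh volume and that units are kilograms and litres.

```python
import math

def predict_tv_dxa(body_mass_kg, tv_anth_l):
    """New model (NM) quoted above; assumes the response is log10 of DXA thigh
    volume and that units are kg and litres (assumptions, not stated)."""
    log_tv = -0.899 + 0.876 * math.log10(body_mass_kg) + 0.113 * math.log10(tv_anth_l)
    return 10 ** log_tv

# Sample-mean body mass with a hypothetical anthropometric estimate of 5.0 L
print(f"{predict_tv_dxa(61.1, 5.0):.2f} L")
```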
NASA Astrophysics Data System (ADS)
Lam, Hing-Lan
2017-01-01
A statistical study of relativistic electron (>2 MeV) fluence derived from geosynchronous satellites and Pc5 ultralow frequency (ULF) wave power computed from data of a ground magnetic observatory located in Canada's auroral zone has been carried out. The ground observations were made near the foot points of field lines passing through the GOES satellites from 1987 to 2009 (cycles 22 and 23). We determine statistical relationships between the two quantities for different phases of a solar cycle and validate these relationships in two different cycles. There is a positive linear relationship between log fluence and log Pc5 power for all solar phases; however, the power-law indices vary for different phases of the cycle. High index values existed during the descending phase. The Pearson cross-correlation between electron fluence and Pc5 power indicates fluence enhancement 2-3 days after strong Pc5 wave activity for all solar phases. The lag between the two quantities is shorter for extremely high fluence (due to high Pc5 power), which tends to occur during the declining phases of both cycles. Most occurrences of extremely low fluence were observed during the extended solar minimum of cycle 23. The precursory attribute of Pc5 power with respect to fluence and the enhancement of fluence due to rising Pc5 power both support the notion of an electron acceleration mechanism driven by Pc5 ULF waves. This precursor behavior establishes the potential of using Pc5 power to predict relativistic electron fluence.
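A minimal sketch of the lagged-correlation analysis described above, on synthetic daily series with a built-in 2-day lag; the construction is illustrative, not the authors' processing chain:

```python
import numpy as np

def lagged_pearson(log_power, log_fluence, max_lag=5):
    """Pearson r between log Pc5 power and log fluence shifted by 0..max_lag days."""
    n = len(log_power)
    return {lag: np.corrcoef(log_power[:n - lag], log_fluence[lag:])[0, 1]
            for lag in range(max_lag + 1)}

rng = np.random.default_rng(7)
power = rng.normal(size=400)
fluence = np.roll(power, 2) + 0.5 * rng.normal(size=400)  # built-in 2-day lag
print(lagged_pearson(power, fluence))  # r should peak at lag = 2
```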
Using volcano plots and regularized-chi statistics in genetic association studies.
Li, Wentian; Freudenberg, Jan; Suh, Young Ju; Yang, Yaning
2014-02-01
Labor-intensive experiments are typically required to identify the causal disease variants from a list of disease-associated variants in the genome. For designing such experiments, candidate variants are ranked by their strength of genetic association with the disease. However, the two commonly used measures of genetic association, the odds-ratio (OR) and p-value, may rank variants in different orders. To integrate these two measures into a single analysis, here we transfer the volcano plot methodology from gene expression analysis to genetic association studies. In its original setting, volcano plots are scatter plots of fold-change and t-test statistic (or -log of the p-value), with the latter being more sensitive to sample size. In genetic association studies, the OR and Pearson's chi-square statistic (or equivalently its square root, chi; or the standardized log(OR)) can be analogously used in a volcano plot, allowing for their visual inspection. Moreover, the geometric interpretation of these plots leads to an intuitive method for filtering results by a combination of both OR and chi-square statistic, which we term "regularized-chi". This method selects associated markers by a smooth curve in the volcano plot instead of the right-angled lines that correspond to independent cutoffs for OR and chi-square statistic. The regularized-chi incorporates relatively more signal from variants with lower minor-allele frequencies than the chi-square statistic does. As rare variants tend to have stronger functional effects, regularized-chi is better suited to the task of prioritization of candidate genes. Copyright © 2013 Elsevier Ltd. All rights reserved.
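A sketch of how one volcano-plot point could be computed per variant; the regularized-chi curve itself is not reproduced here (its formula is not given in the abstract), and the allele counts are hypothetical:

```python
import numpy as np
from scipy.stats import chi2_contingency

def volcano_point(table):
    """Coordinates for one variant: log odds-ratio (x) and chi = sqrt of
    Pearson chi-square (y), from a 2x2 allele-count table
    [[case_minor, case_major], [ctrl_minor, ctrl_major]]."""
    (a, b), (c, d) = table
    log_or = np.log((a * d) / (b * c))
    chi2, _, _, _ = chi2_contingency(np.asarray(table), correction=False)
    return log_or, np.sqrt(chi2)

# Hypothetical allele counts; repeat across variants to fill the volcano plot
print(volcano_point([[30, 170], [15, 185]]))
```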
Salazar, Dane; Schiff, Adam; Mitchell, Erika; Hopkinson, William
2014-02-05
The Accreditation Council for Graduate Medical Education (ACGME) Resident Case Log System is designed to be a reflection of residents' operative volume and an objective measure of their surgical experience. All operative procedures and manipulations in the operating room, Emergency Department, and outpatient clinic are to be logged into the Resident Case Log System. Discrepancies in the log volumes between residents and residency programs often prompt scrutiny. However, it remains unclear if such disparities truly represent differences in operative experiences or if they are reflections of inconsistent logging practices. The purpose of this study was to investigate individual recording practices among orthopaedic surgery residents prior to August 1, 2011. Orthopaedic surgery residents received a questionnaire on case log practices that was distributed through the Council of Orthopaedic Residency Directors list server. Respondents were asked to respond anonymously about recording practices in different clinical settings as well as types of cases routinely logged. Hypothetical scenarios of common orthopaedic procedures were presented to investigate the differences in the Current Procedural Terminology codes utilized. Two hundred and ninety-eight orthopaedic surgery residents completed the questionnaire; 37% were fifth-year residents, 22% were fourth-year residents, 18% were third-year residents, 15% were second-year residents, and 8% were first-year residents. Fifty-six percent of respondents reported routinely logging procedures performed in the Emergency Department or urgent care setting. Twenty-two percent of participants routinely logged procedures in the clinic or outpatient setting, 20% logged joint injections, and only 13% logged casts or splints applied in the office setting. There was substantial variability in the Current Procedural Terminology codes selected for the seven clinical scenarios. There has been a lack of standardization in case-logging practices among orthopaedic surgery residents prior to August 1, 2011. ACGME case log data prior to this date may not be a reliable measure of residents' procedural experience.
Fang, Wei; Li, Jiu-Ke; Jin, Xiao-Hong; Dai, Yuan-Min; Li, Yu-Min
2016-01-01
To evaluate predictive factors for postoperative visual function in primary chronic rhegmatogenous retinal detachment (RRD) after scleral buckling (SB). A total of 48 patients (51 eyes) with primary chronic RRD were included in this prospective interventional clinical case study; all underwent SB alone from June 2008 to December 2014. Age, sex, symptom duration, detached extension, retinal hole position, size, type, fovea on/off, proliferative vitreoretinopathy (PVR), posterior vitreous detachment (PVD), baseline best corrected visual acuity (BCVA), operative duration, follow-up duration, and final BCVA were measured. Pearson correlation analysis, Spearman correlation analysis and multivariate linear stepwise regression were used to identify predictive factors for better final visual acuity. Student's t-test, Wilcoxon two-sample test, chi-square test and logistic stepwise regression were used to identify predictive factors for better vision improvement. Baseline BCVA was 0.8313±0.6911 logMAR and final BCVA was 0.4761±0.4956 logMAR. The primary surgical success rate was 92.16% (47/51). Correlation analyses revealed that shorter symptom duration (r=0.3850, P=0.0053), smaller detached area (r=0.5489, P<0.0001), fovea (r=0.4605, P=0.0007), no PVR (r=0.3138, P=0.0250), better baseline BCVA (r=0.7291, P<0.0001), shorter operative duration (r=0.3233, P=0.0207) and longer follow-up (r=-0.3358, P=0.0160) were associated with better final BCVA, while the independent predictive factors were better baseline BCVA [partial R-square (PR²)=0.5316, P<0.0001], shorter symptom duration (PR²=0.0609, P=0.0101), longer follow-up duration (PR²=0.0278, P=0.0477) and shorter operative duration (PR²=0.0338, P=0.0350). Patients with vision improvement accounted for 49.02% (25/51). Univariate and multivariate analyses both revealed that the predictive factors for better vision improvement were better baseline vision [odds ratio (OR)=50.369, P=0.0041] and longer follow-up duration (OR=1.144, P=0.0067). Independent predictive factors for a better visual outcome of primary chronic RRD after SB are better baseline BCVA, shorter symptom duration, shorter operative duration and longer follow-up duration, while independent predictive factors for better vision improvement after operation are better baseline vision and longer follow-up duration.
Pan-European comparison of candidate distributions for climatological drought indices, SPI and SPEI
NASA Astrophysics Data System (ADS)
Stagge, James; Tallaksen, Lena; Gudmundsson, Lukas; Van Loon, Anne; Stahl, Kerstin
2013-04-01
Drought indices are vital to objectively quantify and compare drought severity, duration, and extent across regions with varied climatic and hydrologic regimes. The Standardized Precipitation Index (SPI), a well-reviewed meteorological drought index recommended by the WMO, and its more recent water balance variant, the Standardized Precipitation-Evapotranspiration Index (SPEI), both rely on selection of univariate probability distributions to normalize the index, allowing for comparisons across climates. The SPI, considered a universal meteorological drought index, measures anomalies in precipitation, whereas the SPEI measures anomalies in climatic water balance (precipitation minus potential evapotranspiration), a more comprehensive measure of water availability that incorporates temperature. Many reviewers recommend use of the gamma (Pearson Type III) distribution for SPI normalization, while developers of the SPEI recommend use of the three-parameter log-logistic distribution, based on point observation validation. Before the SPEI can be implemented at the pan-European scale, it is necessary to further validate the index using a range of candidate distributions to determine sensitivity to distribution selection, identify recommended distributions, and highlight those instances where a given distribution may not be valid. This study rigorously compares a suite of candidate probability distributions using WATCH Forcing Data, a global, historical (1958-2001) climate dataset based on ERA40 reanalysis with 0.5 x 0.5 degree resolution and bias-correction based on CRU-TS2.1 observations. Using maximum likelihood estimation, alternative candidate distributions are fit for the SPI and SPEI across the range of European climate zones. When evaluated at this scale, the gamma distribution for the SPI results in negatively skewed values, exaggerating the index severity of extreme dry conditions, while decreasing the index severity of extreme high precipitation. This bias is particularly notable for shorter aggregation periods (1-6 months) during the summer months in southern Europe (below 45° latitude), and can partially be attributed to distribution fitting difficulties in semi-arid regions where monthly precipitation totals cluster near zero. By contrast, the SPEI has potential for avoiding this fitting difficulty because it is not bounded by zero. However, the recommended log-logistic distribution produces index values with less variation than the standard normal distribution. Among the alternative candidate distributions, the best-fit distribution and the distribution parameters vary in space and time, suggesting regional commonalities within hydroclimatic regimes, as discussed further in the presentation.
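A minimal sketch of the gamma-based SPI transform described above: fit a gamma distribution to one calendar month's totals across years and map cumulative probabilities to standard-normal z-scores. The mixed-distribution handling needed for months with zero precipitation is omitted, and the data are hypothetical.

```python
import numpy as np
from scipy import stats

def spi(precip):
    """SPI for one calendar month across years: fit a two-parameter gamma
    (floc=0) and convert cumulative probabilities to z-scores.
    Assumes all totals are positive; zero months need a mixed distribution."""
    precip = np.asarray(precip, dtype=float)
    shape, loc, scale = stats.gamma.fit(precip, floc=0)
    return stats.norm.ppf(stats.gamma.cdf(precip, shape, loc=loc, scale=scale))

monthly_totals = [34.0, 51.2, 18.9, 72.5, 40.1, 25.3, 61.0, 47.7, 12.4, 55.8]  # mm
print(spi(monthly_totals))
```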
The Flux of Carbon from Selective Logging, Fire, and Regrowth in Amazonia
NASA Technical Reports Server (NTRS)
Houghton, R. A.
2004-01-01
The major goal of this work was to develop a spatial, process-based model (CARLUC) that would calculate sources and sinks of carbon from changes in land use, including logging and fire. The work also used Landsat data, together with fieldwork, to investigate fire and logging in three different forest types within Brazilian Amazonia. Results from these three activities (modeling, fieldwork, and remote sensing) are described individually below. The work and some of the personnel overlapped with research carried out by Dr. Daniel Nepstad's LBA team, and thus some of the findings are also reported in his summaries.
Grading options for western hemlock "pulpwood" logs from southeastern Alaska.
David W. Green; Kent A. McDonald; John Dramm; Kenneth Kilborn
Properties and grade yield are estimated for structural lumber produced from No. 3, No. 4, and low-end No. 2 grade western hemlock logs of the type previously used primarily for the production of pulp chips. Estimates are given for production in the Structural Framing, Machine Stress Rating, and Laminating Stock grading systems. The information shows that significant...
Joseph B. Fontaine; Daniel C. Donato; John L. Campbell; Jonathan G. Martin; Beverley E. Law
2010-01-01
Following stand-replacing wildfire, post-fire (salvage) logging of fire-killed trees is a widely implemented management practice in many forest types. A common hypothesis is that removal of fire-killed trees increases surface temperatures due to loss of shade and increased solar radiation, thereby influencing vegetation establishment and possibly stand development. Six...
The effect on vegetation and soil temperature of logging flood-plain white spruce.
C.T. Dyrness; L.A. Viereck; M.J. Foote; J.C. Zasada
1988-01-01
During winter 1982-83, five silvicultural treatments were applied on Willow Island (near Fairbanks, Alaska): two types of shelterwood cuttings, a clearcutting, a clearcutting with broadcast slash burning, and a thinning. The effects of these treatments on vegetation, soil temperature, and frost depth were followed from 1983 through 1985. In 1984 and 1985, logged plots...
Low-Speed Wind Tunnel Flow Quality Determination
2011-09-01
Traverse Motor: The traverse motor for the BiSlide is a NEMA Type 34D, Slo-Syn® stepper motor, allowing the operator to position items in the test... [MATLAB fragment: plots the power spectrum on a log-log scale, labeled RMS Power/Frequency (V^2) versus Frequency (Hz).]
Regeneration patterns of northern white cedar, an old-growth forest dominant
Scott, Michael L.; Murphy, Peter G.
1987-01-01
Regeneration of Thuja occidentalis L. was examined in an old-growth dune forest on South Manitou Island, Michigan. To estimate the current status of cedar regeneration, we determined size structure of seedlings and stems and analyzed present patterns of establishment and persistence relative to substrate type. There has been a shift in the pattern of cedar establishment from soil to log substrates. While 97% of all stems ≥15 cm dbh are associated with a soil substrate, 81% of stems ≥2.5 cm dbh are associated with log substrates. There was no significant relationship between the state of log decay and the density of seedlings >25 cm in height, indicating that long-term survival is not dependent on the degree of log decomposition. However, survival on logs is associated with canopy openings. Seedlings >25 cm tall were associated with gaps, and 78% of cedar stems (≥2.5 cm dbh) on logs were associated with a single windthrow gap. Thus, current cedar regeneration in this old-growth forest depends on logs and the canopy openings associated with them.
Estimating the footprint of pollution on coral reefs with models of species turnover.
Brown, Christopher J; Hamilton, Richard J
2018-01-15
Ecological communities typically change along gradients of human impact, although it is difficult to estimate the footprint of impacts for diffuse threats such as pollution. We developed a joint model (i.e., one that includes multiple species and their interactions with each other and environmental covariates) of benthic habitats on lagoonal coral reefs and used it to infer change in benthic composition along a gradient of distance from logging operations. The model estimated both changes in abundances of benthic groups and their compositional turnover, a type of beta diversity. We used the model to predict the footprint of turbidity impacts from past and recent logging. Benthic communities far from logging were dominated by branching corals, whereas communities close to logging had higher cover of dead coral, massive corals, and soft sediment. Recent impacts were predicted to be small relative to the extensive impacts of past logging because recent logging has occurred far from lagoonal reefs. Our model can be used more generally to estimate the footprint of human impacts on ecosystems and evaluate the benefits of conservation actions for ecosystems. © 2018 Society for Conservation Biology.
Karl Pearson and eugenics: personal opinions and scientific rigor.
Delzell, Darcie A P; Poliak, Cathy D
2013-09-01
The influence of personal opinions and biases on scientific conclusions is a threat to the advancement of knowledge. Expertise and experience do not render one immune to this temptation. In this work, one of the founding fathers of statistics, Karl Pearson, is used as an illustration of how even the most talented among us can produce misleading results when inferences are made without caution or reference to potential bias and other analysis limitations. A study performed by Pearson on British Jewish schoolchildren is examined in light of ethical and professional statistical practice. The methodology used and inferences made by Pearson and his coauthor are sometimes questionable and offer insight into how Pearson's support of eugenics and his own British nationalism could have potentially influenced his often careless and far-fetched inferences. A short background on Pearson's work and beliefs is provided, along with an in-depth examination of the authors' overall experimental design and statistical practices. In addition, portions of the study regarding intelligence and tuberculosis are discussed in more detail, along with historical reactions to their work.
Estimating pore-space gas hydrate saturations from well log acoustic data
NASA Astrophysics Data System (ADS)
Lee, Myung W.; Waite, William F.
2008-07-01
Relating pore-space gas hydrate saturation to sonic velocity data is important for remotely estimating gas hydrate concentration in sediment. In the present study, sonic velocities of gas hydrate-bearing sands are modeled using a three-phase Biot-type theory in which sand, gas hydrate, and pore fluid form three homogeneous, interwoven frameworks. This theory is developed using well log compressional and shear wave velocity data from the Mallik 5L-38 permafrost gas hydrate research well in Canada and applied to well log data from hydrate-bearing sands in the Alaskan permafrost, Gulf of Mexico, and northern Cascadia margin. Velocity-based gas hydrate saturation estimates are in good agreement with nuclear magnetic resonance (NMR) and resistivity log estimates over the complete range of observed gas hydrate saturations.
[Neonatal Pearson syndrome: two case studies].
Collin-Ducasse, H; Maillotte, A-M; Monpoux, F; Boutté, P; Ferrero-Vacher, C; Paquis, V
2010-01-01
Among the etiologies of anemia in the newborn, those related to mitochondrial cytopathies are rare. Pearson syndrome is mostly diagnosed during infancy and characterized by refractory sideroblastic anemia with vacuolization of marrow progenitor cells and exocrine pancreatic dysfunction. We describe two cases of Pearson syndrome diagnosed in the early neonatal period, presenting with severe macrocytic aregenerative anemia. Bone marrow aspiration revealed sideroblastic anemia and vacuolization of erythroblastic precursors. The diagnosis was confirmed by genetic analysis revealing a deletion in the mitochondrial DNA. These two newborns received monthly transfusions. Five other newborns suffering from Pearson syndrome with various clinical symptoms were found in the literature. Pearson syndrome, rarely diagnosed in newborns, should be suspected in the presence of macrocytic aregenerative anemia and requires a bone marrow aspirate followed by genetic analysis from a blood sample. Copyright 2009 Elsevier Masson SAS. All rights reserved.
Corneal endothelial dysfunction in Pearson syndrome.
Kasbekar, Shivani A; Gonzalez-Martin, Jose A; Shafiq, Ayad E; Chandna, Arvind; Willoughby, Colin E
2013-01-01
Mitochondrial disorders are associated with well recognized ocular manifestations. Pearson syndrome is an often fatal, multisystem, mitochondrial disorder that causes variable bone marrow, hepatic, renal and pancreatic exocrine dysfunction. Phenotypic progression of ocular disease in a 12-year-old male with Pearson syndrome is described. This case illustrates phenotypic drift from Pearson syndrome to Kearns-Sayre syndrome given the patient's longevity. Persistent corneal endothelial failure was noted in addition to ptosis, chronic external ophthalmoplegia and mid-peripheral pigmentary retinopathy. We propose that corneal edema resulting from corneal endothelial metabolic pump failure occurs within a spectrum of mitochondrial disorders.
Analyzing Decision Logs to Understand Decision Making in Serious Crime Investigations.
Dando, Coral J; Ormerod, Thomas C
2017-12-01
Objective To study decision making by detectives when investigating serious crime through the examination of decision logs to explore hypothesis generation and evidence selection. Background Decision logs are used to record and justify decisions made during serious crime investigations. The complexity of investigative decision making is well documented, as are the errors associated with miscarriages of justice and inquests. The use of decision logs has not been the subject of an empirical investigation, yet they offer an important window into the nature of investigative decision making in dynamic, time-critical environments. Method A sample of decision logs from British police forces was analyzed qualitatively and quantitatively to explore hypothesis generation and evidence selection by police detectives. Results Analyses revealed diversity in documentation of decisions that did not correlate with case type and identified significant limitations of the decision log approach to supporting investigative decision making. Differences emerged between experienced and less experienced officers' decision log records in exploration of alternative hypotheses, generation of hypotheses, and sources of evidential inquiry opened over phase of investigation. Conclusion The practical use of decision logs is highly constrained by their format and context of use. Despite this, decision log records suggest that experienced detectives display strategic decision making to avoid confirmation and satisficing, which affect less experienced detectives. Application Potential applications of this research include both training in case documentation and the development of new decision log media that encourage detectives, irrespective of experience, to generate multiple hypotheses and optimize the timely selection of evidence to test them.
Mapping soil particle-size fractions: A comparison of compositional kriging and log-ratio kriging
NASA Astrophysics Data System (ADS)
Wang, Zong; Shi, Wenjiao
2017-03-01
Soil particle-size fractions (psf), as basic physical variables, frequently need to be accurately predicted for regional hydrological, ecological, geological, agricultural and environmental studies. Several methods have been proposed to interpolate the spatial distributions of soil psf, but the relative performance of compositional kriging and different log-ratio kriging methods is still unclear. Four log-ratio transformations, including additive log-ratio (alr), centered log-ratio (clr), isometric log-ratio (ilr), and symmetry log-ratio (slr), combined with ordinary kriging (log-ratio kriging: alr_OK, clr_OK, ilr_OK and slr_OK), were compared with compositional kriging (CK) for the spatial prediction of soil psf in Tianlaochi of Heihe River Basin, China. Root mean squared error (RMSE), Aitchison's distance (AD), standardized residual sum of squares (STRESS) and the proportion of correctly predicted soil texture types (RR) were chosen to evaluate the accuracy of the different interpolators. The results showed that CK had better accuracy than the four log-ratio kriging methods. The RMSE (sand, 9.27%; silt, 7.67%; clay, 4.17%), AD (0.45) and STRESS (0.60) of CK were the lowest, and its RR (58.65%) was the highest among the five interpolators. The clr_OK achieved relatively better performance than the other log-ratio kriging methods. In addition, CK produced reasonable, smooth transitions in the mapped soil psf consistent with the environmental factors. The study gives insights into mapping soil psf accurately by comparing different methods for compositional data interpolation. Further research combining these methods with ancillary variables is needed to improve interpolation performance.
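Two of the log-ratio transformations named in this abstract, alr and clr, are simple to state. A minimal sketch, assuming rows of sand/silt/clay fractions that sum to one; the kriging step itself and the ilr/slr variants are omitted:

```python
# Sketch of the additive (alr) and centered (clr) log-ratio transforms
# used by the log-ratio kriging variants described above.
import numpy as np

def alr(x):
    """Additive log-ratio: log of each part over the last part."""
    return np.log(x[:, :-1] / x[:, -1:])

def clr(x):
    """Centered log-ratio: log of each part over the geometric mean."""
    gmean = np.exp(np.mean(np.log(x), axis=1, keepdims=True))
    return np.log(x / gmean)

psf = np.array([[0.60, 0.30, 0.10],   # sand, silt, clay fractions
                [0.25, 0.50, 0.25]])
print(alr(psf))
print(clr(psf))
```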
Bioaccumulation and enantioselectivity of type I and type II pyrethroid pesticides in earthworm.
Chang, Jing; Wang, Yinghuan; Wang, Huili; Li, Jianzhong; Xu, Peng
2016-02-01
In this study, the bioavailability and enantioselectivity differences between bifenthrin (BF, type I pyrethroid) and lambda-cyhalothrin (LCT, type II pyrethroid) in earthworm (Eisenia fetida) were investigated. The bio-soil accumulation factor (BSAF) of BF was about 4 times greater than that of LCT. LCT was degraded faster than BF in soil but eliminated more slowly in earthworm samples. Compound sorption plays an important role in bioavailability to earthworms; the soil-adsorption coefficients (K(oc)) of BF and LCT were 22,442 and 42,578, respectively. The metabolic capacity of earthworms for LCT was studied further because no significant difference in the accumulation of LCT between the high- and low-dose experiments was found. 3-phenoxybenzoic acid (PBCOOH), a metabolite of LCT produced by earthworm, was detected in soil. The concentration of PBCOOH at the high-dose exposure was about 4.7 times greater than that at the low-dose level on the fifth day. The bioaccumulation of BF and LCT was enantioselective in earthworm in both cases. The enantiomer factors of BF and LCT in earthworm were approximately 0.12 and 0.65, respectively. The more toxic enantiomers ((+)-BF and (-)-LCT) were preferentially degraded in earthworm, leading to lower toxicity to earthworms under racemate exposure. In combination with other studies, a linear relationship between Log BSAF(S) and Log K(ow) was observed, with Log BSAF(S) decreasing as Log K(ow) increased. Copyright © 2015 Elsevier Ltd. All rights reserved.
Magrach, Ainhoa; Senior, Rebecca A; Rogers, Andrew; Nurdin, Deddy; Benedick, Suzan; Laurance, William F; Santamaria, Luis; Edwards, David P
2016-03-16
Selective logging is one of the major drivers of tropical forest degradation, causing important shifts in species composition. Whether such changes modify interactions between species and the networks in which they are embedded remain fundamental questions to assess the 'health' and ecosystem functionality of logged forests. We focus on interactions between lianas and their tree hosts within primary and selectively logged forests in the biodiversity hotspot of Malaysian Borneo. We found that lianas were more abundant, had higher species richness, and different species compositions in logged than in primary forests. Logged forests showed heavier liana loads disparately affecting slow-growing tree species, which could exacerbate the loss of timber value and carbon storage already associated with logging. Moreover, simulation scenarios of host tree local species loss indicated that logging might decrease the robustness of liana-tree interaction networks if heavily infested trees (i.e. the most connected ones) were more likely to disappear. This effect is partially mitigated in the short term by the colonization of host trees by a greater diversity of liana species within logged forests, yet this might not compensate for the loss of preferred tree hosts in the long term. As a consequence, species interaction networks may show a lagged response to disturbance, which may trigger sudden collapses in species richness and ecosystem function in response to additional disturbances, representing a new type of 'extinction debt'. © 2016 The Author(s).
Sweep visually evoked potentials and visual findings in children with West syndrome.
de Freitas Dotto, Patrícia; Cavascan, Nívea Nunes; Berezovsky, Adriana; Sacai, Paula Yuri; Rocha, Daniel Martins; Pereira, Josenilson Martins; Salomão, Solange Rios
2014-03-01
West syndrome (WS) is a type of early childhood epilepsy characterized by progressive deterioration of neurological development, including vision. To demonstrate the clinical importance of grating visual acuity (GVA) threshold measurement by the sweep visually evoked potentials technique (sweep-VEP) as a reliable tool for evaluation of visual cortex status in WS children. This is a retrospective study of the best-corrected binocular GVA and ophthalmological features of WS children referred to the Laboratory of Clinical Electrophysiology of Vision of UNIFESP from 1998 to 2012 (Committee on Ethics in Research of UNIFESP n° 0349/08). The GVA deficit was calculated by subtracting the binocular GVA score (logMAR units) of each patient from the median values of age norms from our own lab and classified as mild (0.1-0.39 logMAR), moderate (0.40-0.80 logMAR) or severe (>0.81 logMAR). Associated ophthalmological features were also described. Data from 30 WS children (age 6 to 108 months, median = 14.5 months, mean ± SD = 22.0 ± 22.1 months; 19 male) were analyzed. The majority presented a severe GVA deficit (0.15-1.44 logMAR; mean ± SD = 0.82 ± 0.32 logMAR; median = 0.82 logMAR), poor visual behavior, a high prevalence of strabismus and great variability in ocular positioning. The GVA deficit did not vary according to gender (P = .8022), WS type (P = .908), birth age (P = .2881), perinatal oxygenation (P = .7692), visual behavior (P = .8789), ocular motility (P = .1821), nystagmus (P = .2868), risk of drug-induced retinopathy (P = .4632) or participation in early visual stimulation therapy (P = .9010). The sweep-VEP technique is a reliable tool to classify visual system impairment in WS children, in agreement with the poor visual behavior these children exhibit. Copyright © 2013 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.
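The deficit calculation and severity bands described above amount to a subtraction and a lookup. A minimal sketch, with the sign convention chosen so that worse-than-norm acuity yields a positive deficit (the abstract words the subtraction the other way round), a hypothetical age-norm value, and the reported 0.80/0.81 band boundary kept as published:

```python
# Sketch of the GVA-deficit classification described in the abstract.
# Sign convention: positive deficit = patient acuity worse than the norm.
def gva_deficit(patient_logmar, age_norm_median_logmar):
    return patient_logmar - age_norm_median_logmar

def classify_deficit(deficit):
    if 0.10 <= deficit <= 0.39:
        return "mild"
    if 0.40 <= deficit <= 0.80:
        return "moderate"
    if deficit >= 0.81:
        return "severe"
    return "within normal limits"

# Hypothetical values: patient 1.10 logMAR, age-norm median 0.20 logMAR.
print(classify_deficit(gva_deficit(1.10, 0.20)))  # -> "severe"
```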
Walford, Geoffrey A; Ma, Yong; Christophi, Costas A; Goldberg, Ronald B; Jarolim, Petr; Horton, Edward; Mather, Kieren J; Barrett-Connor, Elizabeth; Davis, Jaclyn; Florez, Jose C; Wang, Thomas J
2014-05-01
We aimed to study the relationship between measures of adiposity, insulin sensitivity and N-terminal pro-B-type natriuretic peptide (NT-proBNP) in the Diabetes Prevention Program (DPP). The DPP is a completed clinical trial. Using stored samples from this resource, we measured BMI, waist circumference (WC), an insulin sensitivity index (ISI; [1/HOMA-IR]) and NT-proBNP at baseline and at 2 years of follow-up in participants randomised to placebo (n = 692), intensive lifestyle intervention (n = 832) or metformin (n = 887). At baseline, log NT-proBNP did not differ between treatment arms and was correlated with baseline log ISI (p < 0.0001) and WC (p = 0.0003) but not with BMI (p = 0.39). After 2 years of treatment, BMI decreased in the lifestyle and metformin groups (both p < 0.0001); WC decreased in all three groups (p < 0.05 for all); and log ISI increased in the lifestyle and metformin groups (both p < 0.001). The change in log NT-proBNP did not differ in the lifestyle or metformin group vs the placebo group (p > 0.05 for both). In regression models, the change in log NT-proBNP was positively associated with the change in log ISI (p < 0.005) in all three study groups after adjusting for changes in BMI and WC, but was not associated with the change in BMI or WC after adjusting for changes in log ISI. Circulating NT-proBNP was associated with a measure of insulin sensitivity before and during preventive interventions for type 2 diabetes in the DPP. This relationship persisted after adjustment for measures of adiposity and was consistent regardless of whether a participant was treated with placebo, intensive lifestyle intervention or metformin.
Land use not litter quality is a stronger driver of decomposition in hyperdiverse tropical forest.
Both, Sabine; Elias, Dafydd M O; Kritzler, Ully H; Ostle, Nick J; Johnson, David
2017-11-01
In hyperdiverse tropical forests, the key drivers of litter decomposition are poorly understood despite its crucial role in facilitating nutrient availability for plants and microbes. Selective logging is a pressing land use with potential for considerable impacts on plant-soil interactions, litter decomposition, and nutrient cycling. Here, in Borneo's tropical rainforests, we test the hypothesis that decomposition is driven by litter quality and that there is a significant "home-field advantage," that is, a positive interaction between local litter quality and land use. We determined mass loss of leaf litter, collected from selectively logged and old-growth forest, in a fully factorial experimental design, using meshes that either allowed or precluded access by mesofauna. We measured leaf litter chemical composition before and after the experiment. Key soil chemical and biological properties and microclimatic conditions were measured as land-use descriptors. We found that despite substantial differences in litter quality, the main driver of decomposition was land-use type. Whilst inclusion of mesofauna accelerated decomposition, its effect was independent of land use and litter quality. Decomposition of all litters was slower in selectively logged forest than in old-growth forest. However, there was significantly greater loss of nutrients from litter, especially phosphorus, in selectively logged forest. The analyses of several covariates detected minor microclimatic differences between land-use types but no alterations in soil chemical properties or free-living microbial composition. These results demonstrate that selective logging can significantly reduce litter decomposition in tropical rainforest, with no evidence of a home-field advantage. We show that loss of key limiting nutrients from litter (P and N) is greater in selectively logged forest. Overall, the findings hint at subtle differences in microclimate overriding litter quality, resulting in reduced decomposition rates in selectively logged forests and potentially affecting biogeochemical nutrient cycling in the long term.
Workie, Demeke Lakew; Zike, Dereje Tesfaye; Fenta, Haile Mekonnen; Mekonnen, Mulusew Admasu
2018-05-10
Ethiopia is among the countries with a low prevalence of contraceptive use, resulting in a high total fertility rate and unwanted pregnancies, which in turn affect maternal and child health status. This study aimed to investigate the major factors that affect the number of modern contraceptive users at service delivery points in Ethiopia. The Performance Monitoring and Accountability 2020/Ethiopia data, collected between March and April 2016 at round 4 from 461 eligible service delivery points, were used in this study. A weighted log-linear negative binomial model was applied to analyze the service delivery point data. The median number of modern contraceptive users served per service delivery point in Ethiopia was 61, with an interquartile range of 0.62. The expected log number of modern contraceptive users at rural service delivery points was 1.05 (95% Wald CI: -1.42 to -0.68) lower than that at urban ones. In addition, the expected log count of modern contraceptive users at other facility types was 0.58 lower than that at health centers. The number of nurses/midwives also affected the number of modern contraceptive users: the incidence rate of modern contraceptive users increased with each additional nurse at the delivery point. Among the factors considered in this study, residence, region, facility type, the number of days per week family planning was offered, the number of nurses/midwives and the number of medical assistants were found to be associated with the number of modern contraceptive users. Thus, the Government of Ethiopia should take immediate steps to address the determinants of modern contraceptive use in Ethiopia.
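The weighted log-linear negative binomial model named above can be approximated with a standard GLM. A minimal sketch using statsmodels on synthetic data; the variable names and the fixed dispersion are illustrative assumptions, and the survey weighting used by the authors is omitted:

```python
# Hedged sketch of a log-linear negative binomial count model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "rural": rng.integers(0, 2, n),   # 1 = rural delivery point (synthetic)
    "nurses": rng.poisson(3, n),      # nurses/midwives on site (synthetic)
})
mu = np.exp(3.5 - 1.05 * df["rural"] + 0.1 * df["nurses"])
df["users"] = rng.negative_binomial(5, 5 / (5 + mu))  # synthetic counts

X = sm.add_constant(df[["rural", "nurses"]])
model = sm.GLM(df["users"], X, family=sm.families.NegativeBinomial(alpha=0.2))
print(model.fit().summary())
```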
Farsa, Oldřich
2013-01-01
The log BB parameter is the logarithm of the ratio of a compound's equilibrium concentrations in the brain tissue versus the blood plasma. This parameter is a useful descriptor in assessing the ability of a compound to permeate the blood-brain barrier. The aim of this study was to develop a Hansch-type linear regression QSAR model that correlates the parameter log BB with the retention time of drugs and other organic compounds on a reversed-phase HPLC column containing an embedded amide moiety. The retention time was expressed by the capacity factor log k′. The second aim was to estimate the brain's absorption of 2-(azacycloalkyl)acetamidophenoxyacetic acids, which are analogues of piracetam, nefiracetam, and meclofenoxate. Notably, these acids may be novel nootropics. Two simple regression models that relate log BB and log k′ were developed from an assay performed using a reversed-phase HPLC column that contained an embedded amide moiety. Both the quadratic and linear models yielded statistical parameters comparable to previously published models of log BB dependence on various structural characteristics. The models predict that four members of the substituted phenoxyacetic acid series have a strong chance of permeating the barrier and being absorbed in the brain. The results of this study show that a reversed-phase HPLC system containing an embedded amide moiety is a functional in vitro surrogate of the blood-brain barrier. These results suggest that racetam-type nootropic drugs containing a carboxylic moiety could be more poorly absorbed than analogues devoid of the carboxyl group, especially if the compounds penetrate the barrier by a simple diffusion mechanism. PMID:23641330
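The correlation at the heart of this model is an ordinary least-squares line relating log BB to the capacity factor log k′. A minimal sketch with invented placeholder data, not values from the study:

```python
# OLS fit of log BB against the HPLC capacity factor log k'.
import numpy as np

log_k = np.array([-0.42, -0.15, 0.08, 0.21, 0.44, 0.63])     # hypothetical
log_bb = np.array([-1.10, -0.62, -0.35, -0.20, 0.12, 0.35])  # hypothetical

slope, intercept = np.polyfit(log_k, log_bb, 1)
pred = slope * log_k + intercept
r2 = 1 - np.sum((log_bb - pred) ** 2) / np.sum((log_bb - log_bb.mean()) ** 2)
print(f"log BB = {slope:.2f} * log k' + {intercept:.2f}  (R^2 = {r2:.2f})")
```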
A new triclinic modification of the pyrochlore-type KOs₂O₆ superconductor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katrych, S.; Gu, Q.F.; Bukowski, Z.
2009-03-15
A new modification of KOs₂O₆, the representative of a new structural type (Pearson symbol aP18, a=5.5668(1) Å, b=6.4519(2) Å, c=7.2356(2) Å, α=65.377(3)°, β=70.572(3)°, γ=75.613(2)°, space group P-1, no. 2), was synthesized employing a high-pressure technique. Its structure was determined by single-crystal X-ray diffraction. The structure can be described as two OsO₆ octahedral chains related to each other through inversion and forming large voids with K atoms inside. Quantum chemical calculations were performed on the novel compound and the structurally related cubic compound. A high-pressure X-ray study showed that the cubic KOs₂O₆ phase was stable up to 32.5(2) GPa at room temperature.
Medhurst, R. Bruce; Wipfli, Mark S.; Binckley, Chris; Polivka, Karl; Hessburg, Paul F.; Salter, R. Brion
2010-01-01
Effects of forest management on stream communities have been widely documented, but the role that climate plays in the disturbance outcomes is not understood. In order to determine whether the effect of disturbance from forest management on headwater stream communities varies by climate, we evaluated benthic macroinvertebrate communities in 24 headwater streams that differed in forest management (logged-roaded vs. unlogged-unroaded, hereafter logged and unlogged) within two ecological sub-regions (wet versus dry) within the eastern Cascade Range, Washington, USA. In both ecoregions, total macroinvertebrate density was highest at logged sites (P = 0.001) with gathering-collectors and shredders dominating. Total taxonomic richness and diversity did not differ between ecoregions or forest management types. Shredder densities were positively correlated with total deciduous and Sitka alder (Alnus sinuata) riparian cover. Further, differences in shredder density between logged and unlogged sites were greater in the wet ecoregion (logging × ecoregion interaction; P = 0.006) suggesting that differences in post-logging forest succession between ecoregions were responsible for differences in shredder abundance. Headwater stream benthic community structure was influenced by logging and regional differences in climate. Future development of ecoregional classification models at the subbasin scale, and use of functional metrics in addition to structural metrics, may allow for more accurate assessments of anthropogenic disturbances in mountainous regions where mosaics of localized differences in climate are common.
Forest regeneration research at Fort Valley
L. J. (Pat) Heidmann
2008-01-01
When G. A. Pearson arrived at Fort Valley to establish the first Forest Service Experiment Station he found many open park-like stands similar to those in Figure 1. Within two years, Pearson had outlined the major factors detrimental to the establishment of ponderosa pine seedlings (Pearson 1910). During the next almost 40 years, he wrote many articles on methods of...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-01
... County Commission supports a request by Lee Pearson for a modified-competitive sale of the 26.39 acre parcels. Mr. Pearson presently resides and conducts a cattle ranching operation on the private land that... dislocation of existing users, the BLM authorized officer has determined Lee Pearson as the designated bidder...
Fractured-aquifer hydrogeology from geophysical logs: the Passaic Formation, New Jersey
Morin, R.H.; Carleton, G.B.; Poirier, S.
1997-01-01
The Passaic Formation consists of gradational sequences of mudstone, siltstone, and sandstone, and is a principal aquifer in central New Jersey. Ground-water flow is primarily controlled by fractures interspersed throughout these sedimentary rocks, and characterizing these fractures in terms of type, orientation, spatial distribution, frequency, and transmissivity is fundamental to understanding local fluid-transport processes. To obtain this information, a comprehensive suite of geophysical logs was collected in 10 wells roughly 46 m in depth and located within a 0.05 km2 area in Hopewell Township, New Jersey. A seemingly complex, heterogeneous network of fractures identified with an acoustic televiewer was statistically reduced to two principal subsets corresponding to two distinct fracture types: (1) bedding-plane partings and (2) high-angle fractures. Bedding-plane partings are the most numerous and have an average strike of N84°W and dip of 20°N. The high-angle fractures are oriented subparallel to these features, with an average strike of N79°E and dip of 71°S, making the two fracture types roughly orthogonal. Their intersections form linear features that also retain this approximately east-west strike. Inspection of fluid temperature and conductance logs in conjunction with flowmeter measurements obtained during pumping allows the transmissive fractures to be distinguished from the general fracture population. These results show that, within the resolution capabilities of the logging tools, approximately 51 (or 18 percent) of the 280 total fractures are water producing. The bedding-plane partings exhibit transmissivities that average roughly 5 m2/day and that generally diminish in magnitude and frequency with depth. The high-angle fractures have average transmissivities that are about half those of the bedding-plane partings and show no apparent dependence upon depth. The geophysical logging results allow us to infer a distinct hydrogeologic structure within this aquifer that is defined by fracture type and orientation. Fluid flow near the surface is controlled primarily by the highly transmissive, subhorizontal bedding-plane partings. As depth increases, the high-angle fractures apparently become more dominant hydrologically.
Genotyping and drug resistance patterns of M. tuberculosis strains in Pakistan
Tanveer, Mahnaz; Hasan, Zahra; Siddiqui, Amna R; Ali, Asho; Kanji, Akbar; Ghebremicheal, Solomon; Hasan, Rumina
2008-01-01
Background The incidence of tuberculosis in Pakistan is 181/100,000 population. However, information about transmission and geographical prevalence of Mycobacterium tuberculosis strains and their evolutionary genetics as well as drug resistance remains limited. Our objective was to determine the clonal composition, evolutionary genetics and drug resistance of M. tuberculosis isolates from different regions of the country. Methods M. tuberculosis strains isolated (2003-2005) from specimens submitted to the laboratory through collection units nationwide were included. Drug susceptibility testing was performed and strains were spoligotyped. Results Of 926 M. tuberculosis strains studied, 721 (78%) were grouped into 59 "shared types", while 205 (22%) were identified as "orphan" spoligotypes. Amongst the predominant genotypes, 61% were Central Asian strains (CAS; including CAS1, CAS sub-families and orphan Pak clusters), 4% East African-Indian (EAI), 3% Beijing, 2% poorly defined TB strains (T), 2% Haarlem and 0.2% LAM. TbD1 analysis (M. tuberculosis specific deletion 1) confirmed that CAS1 was of "modern" origin while EAI isolates belonged to "ancestral" strain types. Prevalence of the CAS1 clade was significantly higher in Punjab (P < 0.01, Pearson's chi-square test) than in Sindh, North West Frontier Province and Balochistan provinces. Forty-six percent of isolates were sensitive to the five first-line antibiotics tested, 45% were rifampicin resistant, and 50% were isoniazid resistant. MDR was significantly associated with Beijing strains (P = 0.01, Pearson's chi-square test) and EAI (P = 0.001, Pearson's chi-square test), but not with the CAS family. Conclusion Our results show variation in prevalent M. tuberculosis strains, with a greater association of CAS1 with the Punjab province. The fact that the prevalent CAS genotype was not associated with drug resistance is encouraging. It further suggests that a more effective treatment and control programme should be successful in reducing the tuberculosis burden in Pakistan. PMID:19108722
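The association tests reported above are Pearson chi-square tests on contingency tables. A minimal sketch for a hypothetical MDR-by-strain-family table; the counts are invented for illustration, not taken from the study:

```python
# Pearson chi-square test of association on a 2x2 contingency table.
import numpy as np
from scipy.stats import chi2_contingency

#                 MDR   non-MDR
table = np.array([[12,  16],     # Beijing (hypothetical counts)
                  [90, 470]])    # CAS (hypothetical counts)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```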
Adjustments to forest inventory and analysis estimates of 2001 saw-log volumes for Kentucky
Stanley J. Zarnoch; Jeffery A. Turner
2005-01-01
The 2001 Kentucky Forest Inventory and Analysis survey overestimated hardwood saw-log volume in tree grade 1. This occurred because 2001 field crews classified too many trees as grade 1 trees. Data collected by quality assurance crews were used to generate two types of adjustments, one based on the proportion of trees misclassified and the other on the proportion of...
Classifying features in CT imagery: accuracy for some single- and multiple-species classifiers
Daniel L. Schmoldt; Jing He; A. Lynn Abbott
1998-01-01
Our current approach to automatically label features in CT images of hardwood logs classifies each pixel of an image individually. These feature classifiers use a back-propagation artificial neural network (ANN) and feature vectors that include a small, local neighborhood of pixels and the distance of the target pixel to the center of the log. Initially, this type of...
Ultraviolet Light Disinfection in the Use of Individual Water Purification Devices
2006-03-01
adenovirus, Giardia lamblia, Giardia muris, and Cryptosporidium parvum. Adenovirus was evaluated because it is considered the most resistant to... reproduce cannot infect and are thereby inactivated. Subsequently, when evaluating UV disinfection capability, Giardia cyst and... 0.25 to 20 NTU resulted in a 0.8-log and 0.5-log decrease in inactivation of Cryptosporidium and Giardia, respectively (reference 3). The type of...
Nondestructive rule-based defect detection and identification system in CT images of hardwood logs
Erol Sarigul; A. Lynn Abbott; Daniel L. Schmoldt
2001-01-01
This paper is concerned with the detection of internal defects in hardwood logs. Because the commercial value of hardwood lumber is directly related to the quantity, type, and location of defects in the wood, sawing strategies are typically chosen in an attempt to minimize the defects in the resulting boards. Traditionally, the sawyer makes sawing decisions by visually...
Tortorelli, R.L.; Bergman, D.L.
1985-01-01
Statewide regression relations for Oklahoma were determined for estimating peak discharge of floods for selected recurrence intervals from 2 to 500 years. The independent variables required for estimating flood discharge for rural streams are contributing drainage area and mean annual precipitation. Main-channel slope, a variable used in previous reports, was found to contribute very little to the accuracy of the relations and was not used. The regression equations are applicable for watersheds with drainage areas less than 2,500 square miles that are not significantly affected by regulation from manmade works. These relations are presented in graphical form for easy application. Limitations on the use of the regression relations and the reliability of regression estimates for rural unregulated streams are discussed. Basin and climatic characteristics, log-Pearson Type III statistics and the flood-frequency relations for 226 gaging stations in Oklahoma and adjacent states are presented. Regression relations are investigated for estimating flood magnitude and frequency for watersheds affected by regulation from small FRS (floodwater retarding structures) built by the U.S. Soil Conservation Service in their watershed protection and flood prevention program. Gaging-station data from nine FRS regulated sites in Oklahoma and one FRS regulated site in Kansas are used. For sites regulated by FRS, an adjustment of the statewide rural regression relations can be used to estimate flood magnitude and frequency. The statewide regression equations are used by substituting the drainage area below the FRS, or drainage area that represents the percent of the basin unregulated, in the contributing drainage area parameter to obtain flood-frequency estimates. Flood-frequency curves and flow-duration curves are presented for five gaged sites to illustrate the effects of FRS regulation on peak discharge.
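The log-Pearson Type III computation that underlies these flood-frequency relations takes three statistics of the log10 annual peaks and returns a discharge for a chosen exceedance probability. A minimal sketch using scipy's pearson3 with illustrative statistics, not values from the report:

```python
# Log-Pearson Type III flood quantiles from mean, standard deviation,
# and skew of log10 annual peak discharges.
from scipy import stats

mean_log_q, std_log_q, skew = 3.2, 0.25, -0.1   # hypothetical log10 stats

# Discharge exceeded with probability p: quantile of the fitted
# Pearson III distribution of log10(Q), then back-transform.
for p in (0.50, 0.10, 0.02, 0.01):
    log_q = stats.pearson3.ppf(1 - p, skew, loc=mean_log_q, scale=std_log_q)
    print(f"{p:>5.0%} AEP flood: {10 ** log_q:,.0f} cfs")
```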
Kennedy, Jeffrey R.; Paretti, Nicholas V.
2014-01-01
Flooding in urban areas routinely causes severe damage to property and often results in loss of life. To investigate the effect of urbanization on the magnitude and frequency of flood peaks, a flood frequency analysis was carried out using data from urbanized streamgaging stations in Phoenix and Tucson, Arizona. Flood peaks at each station were predicted using the log-Pearson Type III distribution, fitted using the expected moments algorithm and the multiple Grubbs-Beck low outlier test. The station estimates were then compared to flood peaks estimated by rural-regression equations for Arizona, and to flood peaks adjusted for urbanization using a previously developed procedure for adjusting U.S. Geological Survey rural regression peak discharges in an urban setting. Only smaller, more common flood peaks at the 50-, 20-, 10-, and 4-percent annual exceedance probabilities (AEPs) demonstrate any increase in magnitude as a result of urbanization; the 1-, 0.5-, and 0.2-percent AEP flood estimates are predicted without bias by the rural-regression equations. Percent imperviousness was determined not to account for the difference in estimated flood peaks between stations, either when adjusting the rural-regression equations or when deriving urban-regression equations to predict flood peaks directly from basin characteristics. Comparison with urban adjustment equations indicates that flood peaks are systematically overestimated if the rural-regression-estimated flood peaks are adjusted upward to account for urbanization. At nearly every streamgaging station in the analysis, adjusted rural-regression estimates were greater than the estimates derived using station data. One likely reason for the lack of increase in flood peaks with urbanization is the presence of significant stormwater retention and detention structures within the watershed used in the study.
Lewis, Jason M.
2010-01-01
Peak-streamflow regression equations were determined for estimating flows with exceedance probabilities from 50 to 0.2 percent for the state of Oklahoma. These regression equations incorporate basin characteristics to estimate peak-streamflow magnitude and frequency throughout the state by use of a generalized least squares regression analysis. The most statistically significant independent variables required to estimate peak-streamflow magnitude and frequency for unregulated streams in Oklahoma are contributing drainage area, mean-annual precipitation, and main-channel slope. The regression equations are applicable for watershed basins with drainage areas less than 2,510 square miles that are not affected by regulation. The resulting regression equations had a standard model error ranging from 31 to 46 percent. Annual-maximum peak flows observed at 231 streamflow-gaging stations through water year 2008 were used for the regression analysis. Gage peak-streamflow estimates were used from previous work unless 2008 gaging-station data were available, in which case new peak-streamflow estimates were calculated. The U.S. Geological Survey StreamStats web application was used to obtain the independent variables required for the peak-streamflow regression equations. Limitations on the use of the regression equations and the reliability of regression estimates for natural unregulated streams are described. Log-Pearson Type III analysis information, basin and climate characteristics, and the peak-streamflow frequency estimates for the 231 gaging stations in and near Oklahoma are listed. Methodologies are presented to estimate peak streamflows at ungaged sites by using estimates from gaging stations on unregulated streams. For ungaged sites on urban streams and streams regulated by small floodwater retarding structures, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow magnitude and frequency.
Kellogg, James A.; Atria, Peter V.; Sanders, Jeffrey C.; Eyster, M. Elaine
2001-01-01
Normal assay variation associated with bDNA tests for human immunodeficiency virus type 1 (HIV-1) RNA performed at two laboratories with different levels of test experience was investigated. Two 5-ml aliquots of blood in EDTA tubes were collected from each patient for whom the HIV-1 bDNA test was ordered. Blood was stored for no more than 4 h at room temperature prior to plasma separation. Plasma was stored at −70°C until transported to the Central Pennsylvania Alliance Laboratory (CPAL; York, Pa.) and to the Hershey Medical Center (Hershey, Pa.) on dry ice. Samples were stored at ≤−70°C at both laboratories prior to testing. Pools of negative (donor), low-HIV-1-RNA-positive, and high-HIV-1-RNA-positive plasma samples were also repeatedly tested at CPAL to determine both intra- and interrun variation. From 11 August 1999 until 14 September 2000, 448 patient specimens were analyzed in parallel at CPAL and Hershey. From 206 samples with results of ≥1,000 copies/ml at CPAL, 148 (72%) of the results varied by ≤0.20 log10 when tested at Hershey and none varied by >0.50 log10. However, of 242 specimens with results of <1,000 copies/ml at CPAL, 11 (5%) of the results varied by >0.50 log10 when tested at Hershey. Of 38 aliquots of HIV-1 RNA pool negative samples included in 13 CPAL bDNA runs, 37 (97%) gave results of <50 copies/ml and 1 (3%) gave a result of 114 copies/ml. Low-positive HIV-1 RNA pool intrarun variation ranged from 0.06 to 0.26 log10 while the maximum interrun variation was 0.52 log10. High-positive HIV-1 RNA pool intrarun variation ranged from 0.04 to 0.32 log10, while the maximum interrun variation was 0.55 log10. In our patient population, a change in bDNA HIV-1 RNA results of ≤0.50 log10 over time most likely represents normal laboratory test variation. However, a change of >0.50 log10, especially if the results are >1,000 copies/ml, is likely to be significant. PMID:11329458
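The interpretation rule suggested by this study reduces to a log10 comparison between two results. A minimal sketch, treating the "both results above 1,000 copies/ml" condition as a strict requirement, which is a simplifying assumption over the abstract's "especially if" wording:

```python
# Flag a change in bDNA HIV-1 RNA results as likely significant when it
# exceeds 0.50 log10 and both results are above 1,000 copies/ml.
import math

def significant_change(copies_ml_1, copies_ml_2, threshold_log10=0.50):
    delta = abs(math.log10(copies_ml_2) - math.log10(copies_ml_1))
    both_high = min(copies_ml_1, copies_ml_2) >= 1000
    return delta > threshold_log10 and both_high

print(significant_change(2_000, 9_000))   # ~0.65 log10 change -> True
print(significant_change(2_000, 4_500))   # ~0.35 log10 change -> False
```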
Galaxy evolution by color-log(n) type since redshift unity in the Hubble Ultra Deep Field
NASA Astrophysics Data System (ADS)
Cameron, E.; Driver, S. P.
2009-01-01
Aims: We explore the use of the color-log(n) (where n is the global Sérsic index) plane as a tool for subdividing the galaxy population in a physically-motivated manner out to redshift unity. We thereby aim to quantify surface brightness evolution by color-log(n) type, accounting separately for the specific selection and measurement biases against each. Methods: We construct (u-r) color-log(n) diagrams for distant galaxies in the Hubble Ultra Deep Field (UDF) within a series of volume-limited samples to z=1.5. The color-log(n) distributions of these high redshift galaxies are compared against that measured for nearby galaxies in the Millennium Galaxy Catalogue (MGC), as well as to the results of visual morphological classification. Based on this analysis we divide our sample into three color-structure classes. Namely, “red, compact”, “blue, diffuse” and “blue, compact”. Luminosity-size diagrams are constructed for members of the two largest classes (“red, compact” and “blue, diffuse”), both in the UDF and the MGC. Artificial galaxy simulations (for systems with exponential and de Vaucouleurs profile shapes alternately) are used to identify “bias-free” regions of the luminosity-size plane in which galaxies are detected with high completeness, and their fluxes and sizes recovered with minimal surface brightness-dependent biases. Galaxy evolution is quantified via comparison of the low and high redshift luminosity-size relations within these “bias-free” regions. Results: We confirm the correlation between color-log(n) plane position and visual morphological type observed locally and in other high redshift studies in the color and/or structure domain. The combined effects of observational uncertainties, the morphological K-correction and cosmic variance preclude a robust statistical comparison of the shape of the MGC and UDF color-log(n) distributions. However, in the interval 0.75 < z < 1.0 where the UDF i-band samples close to rest-frame B-band light (i.e., the morphological K-correction between our samples is negligible) we are able to present tentative evidence of bimodality, albeit for a very small sample size (17 galaxies). Our unique approach to quantifying selection and measurement biases in the luminosity-size plane highlights the need to consider errors in the recovery of both magnitudes and sizes, and their dependence on profile shape. Motivated by these results we divide our sample into the three color-structure classes mentioned above and quantify luminosity-size evolution by galaxy type. Specifically, we detect decreases in B-band surface brightness of 1.57 ± 0.22 mag arcsec-2 and 1.65 ± 0.22 mag arcsec-2 for our “blue, diffuse” and “red, compact” classes respectively between redshift unity and the present day.
NASA Astrophysics Data System (ADS)
Kozhushko, A. A.; Pahotin, A. N.; Mal'tsev, V. N.; Bojarnikova, L. V.; Stepanova, E. P.
2018-04-01
The technology of carrying out various geophysical studies of oil wells using a carrying well-logging cable for the delivery of geophysical instruments and equipment is considered in the article. The relevance of the topic results from the need to evaluate the effect of well-logging cable stretch in the well, under the influence of its own mass and the mass of geophysical instruments and equipment, on the accuracy and adequacy of geophysical studies. Calculation formulas are presented for determining the well-logging cable stretch under the mass of the geophysical tools and equipment, both with and without taking into account the mass of the carrying well-logging cable itself. For three types of carrying well-logging cables, their stretch in an oil well under the mass of geophysical instruments and equipment and their own mass was calculated as a function of the depth of the investigated well. Analysis of the obtained results made it possible to evaluate numerically the extension of the carrying well-logging cable as a function of the depth of the investigated well; correcting the measured depths accordingly provides the required depth accuracy and reliability of the interpreted results of the geophysical studies.
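The structure of the stretch formulas described above is consistent with linear elasticity: a tool-weight term proportional to depth plus a self-weight term proportional to depth squared. A minimal sketch under that assumption; the cable stiffness and masses below are invented placeholders, not the article's values:

```python
# Hooke's-law estimate of well-logging cable stretch:
#   stretch = F*L/(E*A) + w*L^2 / (2*E*A)
# where F is the tool weight, w the cable weight per metre, and E*A the
# axial stiffness of the cable.
G = 9.81                     # gravitational acceleration, m/s^2

def cable_stretch(depth_m, tool_mass_kg, cable_mass_per_m, ea_newton):
    """Stretch in metres of a cable of stiffness E*A at a given depth."""
    tool_term = tool_mass_kg * G * depth_m / ea_newton
    self_weight_term = cable_mass_per_m * G * depth_m ** 2 / (2 * ea_newton)
    return tool_term + self_weight_term

# Hypothetical cable: EA = 4 MN, 0.35 kg/m, 100 kg tool string, 3 km well.
print(f"{cable_stretch(3000, 100, 0.35, 4e6):.2f} m of stretch")
```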
Information-theoretic approach to lead-lag effect on financial markets
NASA Astrophysics Data System (ADS)
Fiedor, Paweł
2014-08-01
Recently the interest of researchers has shifted from the analysis of synchronous relationships of financial instruments to the analysis of more meaningful asynchronous relationships. Both types of analysis are concentrated mostly on Pearson's correlation coefficient and consequently intraday lead-lag relationships (where one of the variables in a pair is time-lagged) are also associated with them. Under the Efficient-Market Hypothesis such relationships are not possible as all information is embedded in the prices, but in real markets we find such dependencies. In this paper we analyse lead-lag relationships of financial instruments and extend known methodology by using mutual information instead of Pearson's correlation coefficient. Mutual information is not only a more general measure, sensitive to non-linear dependencies, but also can lead to a simpler procedure of statistical validation of links between financial instruments. We analyse lagged relationships using New York Stock Exchange 100 data not only on an intraday level, but also for daily stock returns, which have usually been ignored.
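A lagged mutual information estimate of the kind described here can be computed from a joint histogram of discretized returns. A minimal sketch on synthetic data; the bin count, lag, and plug-in estimator are illustrative choices, not the paper's exact settings:

```python
# Lead-lag dependence via mutual information between one return series
# and a time-lagged copy of another.
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in mutual information (nats) from a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                       # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(7)
a = rng.standard_normal(1000)                               # instrument A
b = 0.6 * np.roll(a, 1) + 0.8 * rng.standard_normal(1000)  # B lags A by 1
lag = 1
print(mutual_information(a[:-lag], b[lag:]))   # lagged dependence (high)
print(mutual_information(a, b))                # synchronous dependence (low)
```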
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, D.; Cole, H.A.G.
1958-11-01
A curious effect was observed during the operation of Borehole Logging Equipment Type 1417A in Rhodesia and Swaziland. With the probe submerged in water at depths greater than 150 ft, vertical motion or light jerking of the cable produced spurious high count rates on the ratemeter. It has been suggested that the voltage pulses that would be necessary to produce this effect may be caused by changes in the length, and hence capacitance, of the charged concentric cable conductor produced under load conditions. This memorandum analyzes possible effects of cable stretch on instrument performance. (auth)
Geophysical logging of bedrock wells for geothermal gradient characterization in New Hampshire, 2013
Degnan, James R.; Barker, Gregory; Olson, Neil; Wilder, Leland
2014-01-01
Maximum groundwater temperatures at the bottom of the logs ranged from 11.2 to 15.4 degrees Celsius. Geothermal gradients were generally higher than those typically reported for other water wells in the United States. Some of the high gradients were associated with high natural gamma emissions. Groundwater flow was discernible in 4 of the 10 wells studied, but it obscured the geothermal gradient signal only where groundwater actually flowed into, out of, or through the well. Temperature gradients varied by mapped bedrock type but can also vary with localized differences in mineralogy or rock type within the wells.
A photometric analysis of the neglected EW-type binary V336 TrA
NASA Astrophysics Data System (ADS)
Kriwattanawong, W.; Sarotsakulchai, T.; Maungkorn, S.; Reichart, D. E.; Haislip, J. B.; Kouprianov, V. V.; LaCluyze, A. P.; Moore, J. P.
2018-05-01
This study presents an analysis of photometric light curves and absolute parameters for the EW-type binary V336 TrA. VRI imaging observations were taken in 2013 using the robotic telescopes PROMPT 4 and PROMPT 5 at Cerro Tololo Inter-American Observatory (CTIO), Chile. The observed light curves were fitted using the Wilson-Devinney method. The results showed that V336 TrA is a W-type contact binary with a mass ratio of q = 1.396. The binary is a weak contact system with a fill-out factor of f = 15.69%. The system contains components with masses of 0.653 M⊙ and 0.912 M⊙ for the hotter and the cooler component, respectively. The location of the secondary (less massive) component on the log M - log L diagram was found to be near the TAMS; the component has evolved to be oversized and overluminous. The orbital angular momentum of the binary was found to be log Jo = 51.61 (cgs), lower than that of detached systems of the same mass. The system has undergone angular momentum and/or mass loss during its evolution from a detached to a contact system.
Borghese, Michael M; Janssen, Ian
2018-03-22
Children participate in four main types of physical activity: organized sport, active travel, outdoor active play, and curriculum-based physical activity. The objective of this study was to develop a valid approach that can be used to concurrently measure time spent in each of these types of physical activity. Two samples (sample 1: n = 50; sample 2: n = 83) of children aged 10-13 wore an accelerometer and a GPS watch continuously over 7 days. They also completed a log where they recorded the start and end times of organized sport sessions. Sample 1 also completed an outdoor time log where they recorded the times they went outdoors and a description of the outdoor activity. Sample 2 also completed a curriculum log where they recorded times they participated in physical activity (e.g., physical education) during class time. We describe the development of a measurement approach that can be used to concurrently assess the time children spend participating in specific types of physical activity. The approach uses a combination of data from accelerometers, GPS, and activity logs and relies on merging and then processing these data using several manual (e.g., data checks and cleaning) and automated (e.g., algorithms) procedures. In the new measurement approach time spent in organized sport is estimated using the activity log. Time spent in active travel is estimated using an existing algorithm that uses GPS data. Time spent in outdoor active play is estimated using an algorithm (with a sensitivity and specificity of 85%) that was developed using data collected in sample 1 and which uses all of the data sources. Time spent in curriculum-based physical activity is estimated using an algorithm (with a sensitivity of 78% and specificity of 92%) that was developed using data collected in sample 2 and which uses accelerometer data collected during class time. There was evidence of excellent intra- and inter-rater reliability of the estimates for all of these types of physical activity when the manual steps were duplicated. This novel measurement approach can be used to estimate the time that children participate in different types of physical activity.
The stellar populations of nearby early-type galaxies
NASA Astrophysics Data System (ADS)
Concannon, Kristi Dendy
The recent completion of comprehensive photometric and spectroscopic galaxy surveys has revealed that early-type galaxies form a more heterogeneous family than previously thought. To better understand the star formation histories of early-type galaxies, we have obtained a set of high resolution, high signal-to-noise ratio spectra for a sample of 180 nearby early-type galaxies with the FAST spectrograph and the 1.5m telescope at F. L. Whipple Observatory. The spectra cover the wavelength range 3500-5500 Å, which allows the comparison of various Balmer lines, most importantly the higher-order lines in the blue, and have a S/N ratio higher than that of previous samples, which makes it easier to investigate the intrinsic spread in the observed parameters. The data set contains galaxies in both the local field and Virgo cluster environment and spans the velocity dispersion range 50 < σ < 250 km s-1. In conjunction with recent improvements in population synthesis modeling, our data set enables us to investigate the star formation history of E/S0 galaxies as a function of mass (σ), environment, and to some extent morphology. We are able to probe the effects of age and metallicity on fundamental observable relations such as the Mg-σ relation, and show that there is a significant spread in age in such diagrams, at all log σ, such that their “uniformity” cannot be interpreted as a homogeneous history for early-type galaxies. Analyzing the age and [Fe/H] distribution as a function of the galaxy mass, we find that an age-σ relation exists among galaxies in both the local field and the Virgo cluster, such that the lower log σ galaxies have younger luminosity-weighted mean ages. The age spread of the low σ galaxies suggests that essentially all of the low-mass galaxies contain young to intermediate age populations, whereas the spread in age of the high log σ galaxies (log σ >˜ 2.0) is much larger, with galaxies spanning the age range of 4-19 Gyr. Thus, rather than pointing to all Es and S0s being old, the data show that even the most massive galaxies in our sample span a range of intermediate to old ages.
"It was a young man's life": G. A. Pearson
Susan D. Olberding
2008-01-01
The nation's initial USFS research site commenced in a rustic cabin in the midst of northern Arizona's expansive ponderosa pine forest. Gustaf A. Pearson was the first in a distinguished line of USFS scientists to live and study there. A visitor to Fort Valley today often wishes he could have stood in Pearson's large boots (he was said to have enormous...
Memories of Fort Valley from 1938 to 1942
Frank H. Wadsworth
2008-01-01
This delightful essay records Frank Wadsworth's early forestry career at FVEF in the late 1930s. Frank married Margaret Pearson, G.A. and May Pearson's daughter, in 1941. Pearson believed Frank could not continue to work for him because of nepotism rules, so Frank and Margaret moved to San Juan, Puerto Rico in 1942 where Frank continued his forestry career....
Natarajan, R; Nirdosh, I; Venuvanalingam, P; Ramalingam, M
2002-07-01
The QPPR approach has been used to model cupferrons as mineral collectors. Separation efficiencies (Es) of these chelating agents have been correlated with property parameters, namely log P, log Koc, the substituent constant sigma, and Mulliken and ESP-derived charges, using multiple regression analysis. Es of substituted cupferrons in the flotation of a uranium ore could be predicted within experimental error from either log P or log Koc together with an electronic parameter. However, when a halo, methoxy, or phenyl substituent was para to the chelating group, the experimental Es was greater than the predicted values. Inclusion of a Boolean-type indicator parameter significantly improved the predictive power. This approach has been extended to 2-aminothiophenols that were used to float a zinc ore, and the correlations were found to be reasonably good.
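A minimal sketch of this style of multiple regression: separation efficiency regressed on log P, a Hammett-type sigma, and a Boolean indicator for a para substituent. All numbers are synthetic placeholders, not the study's measurements.

```python
# QPPR-style multiple regression with an indicator variable (synthetic data).
import numpy as np

# columns: log P, sigma, para-substituent indicator (0/1) -- all invented
X = np.array([[1.2, 0.23, 0], [1.8, 0.06, 0], [2.1, 0.50, 0],
              [2.6, 0.23, 1], [3.0, -0.27, 1], [3.4, 0.06, 1]])
es = np.array([62.0, 70.0, 74.0, 83.0, 88.0, 92.0])  # synthetic efficiencies

A = np.column_stack([np.ones(len(X)), X])            # add intercept column
coef, *_ = np.linalg.lstsq(A, es, rcond=None)
print("intercept, b(logP), b(sigma), b(para):", np.round(coef, 2))
print("residuals:", np.round(es - A @ coef, 2))
```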
Syndromic surveillance models using Web data: the case of scarlet fever in the UK.
Samaras, Loukas; García-Barriocanal, Elena; Sicilia, Miguel-Angel
2012-03-01
Recent research has shown the potential of Web queries as a source for syndromic surveillance, and existing studies show that these queries can be used as a basis for estimation and prediction of the development of a syndromic disease, such as influenza, using log-linear (logit) statistical models. Two alternative models of the relationship between cases and Web queries are applied in this paper. We examine the applicability of statistical methods that relate search engine queries to scarlet fever cases in the UK, taking advantage of tools to acquire the appropriate data from Google, and using an alternative statistical method based on gamma distributions. The results show that with logit models, the Pearson correlation factor between Web queries and the data obtained from the official agencies must be over 0.90; otherwise the prediction of the peak and the spread of the distributions shows significant deviations. In this paper, we describe the gamma distribution model and show that we can obtain better results in all cases using gamma transformations, especially in those with a smaller correlation factor.
Determinants of the Transmission Variation of Hand, Foot and Mouth Disease in China.
Zhao, Jijun; Li, Xinmin
2016-01-01
Severe outbreaks of hand, foot and mouth disease (HFMD) have occurred in China for decades. Our understanding of the HFMD transmission process and its determinants is still limited. In this paper, factors that affect the local variation of HFMD transmission process were studied. Three classes of factors, including meteorological, demographic and public health intervention factors, were carefully selected and their effects on HFMD transmission were investigated with Pearson's correlation coefficient and multiple linear regression models. The determining factors for the variation of HFMD transmission were different for the southeastern and the northwestern regions of China. In the northwest, fadeouts occurred yearly, and the average age at infection and the fadeout were negatively correlated with the population density. In the southeast, HFMD transmission was governed by the combined effects of the birth rate, the relative humidity and the interaction of the Health System Performance and the log of the population density. When the Health System Performance was low, HFMD transmission increased with the population density, but when the Health System Performance was high, the better health performance counteracted the transmission increase due to the higher population density.
An analysis of high-performing science students' preparation for collegiate science courses
NASA Astrophysics Data System (ADS)
Walter, Karen
This mixed-method study surveyed first-year high-performing science students who participated in high-level courses such as International Baccalaureate (IB), Advanced Placement (AP), and honors science courses in high school to determine their perception of preparation for academic success at the collegiate level. The study used 52 students from an honors college campus and surveyed the students and their professors. The students reported that they felt better prepared for academic success at the collegiate level by taking these courses in high school (p<.001). There was a significant negative correlation between perception of preparation and student GPA for honors science courses (n=55, Pearson's r=-0.336), while AP courses (n=47, Pearson's r=0.0016) and IB courses (n=17, Pearson's r=-0.2716) demonstrated no correlation between perception of preparation and GPA. Students reported various themes that helped or hindered their perception of academic success once at the collegiate level. The themes that reportedly helped students were preparedness, different types of learning, and teacher qualities. In post-hoc responses, students reported that more lab time, rigorous coursework, better teachers, and better study techniques helped prepare them for academic success at the collegiate level. Students further reported on qualities of teachers and teaching that helped foster their academic abilities at the collegiate level, including teacher knowledge, caring, teaching style, and expectations. Some reasons for taking high-level science courses in high school include boosting GPA, college credit, challenge, and getting into better colleges.
Buhr, R J; Berrang, M E; Cason, J A
2003-10-01
Genetically feathered and featherless sibling broilers selected for matched BW were killed, scalded, and defeathered to determine the consequences of feathers and empty feather follicles on the recovery of bacteria from carcass breast skin. In trial 1, the vents of all carcasses were plugged and sutured before scalding to prevent the expulsion of cloacal contents during picking. In trial 2, half of the carcasses had their vents plugged and sutured. Immediately after defeathering, breast skin was aseptically removed, and bacteria associated with it were enumerated. In trial 1, the levels of bacteria recovered did not differ between feathered and featherless carcasses: Campylobacter log10 1.4 cfu/mL of rinse, coliform log10 1.8, Escherichia coli log10 1.6, and total aerobic bacteria log10 3.1. In trial 2, the carcasses that had vents plugged and sutured had lower levels of all four types of bacteria (differences of Campylobacter log10 0.7 cfu/mL, coliform log10 1.8, E. coli log10 1.7, and total aerobic bacteria log10 0.5) than those carcasses with open vents. The lower levels of bacteria recovered from carcasses with the vents plugged and sutured during picking enabled detection of small but significant differences between feathered and featherless carcasses. The level of coliform and E. coli recovered was slightly higher by log10 0.7 cfu for feathered carcasses, but featherless carcasses had marginally higher levels of total aerobic bacteria by log10 0.4 cfu. Feathered and featherless carcasses with open vents during picking did not differ in the levels of recovery of coliform, E. coli, and total aerobic bacteria from breast skin.
Influence of logging on the effects of wildfire in Siberia
NASA Astrophysics Data System (ADS)
Kukavskaya, E. A.; Buryak, L. V.; Ivanova, G. A.; Conard, S. G.; Kalenskaya, O. P.; Zhila, S. V.; McRae, D. J.
2013-12-01
The Russian boreal zone supports a huge terrestrial carbon pool. Moreover, it is a tremendous reservoir of wood products concentrated mainly in Siberia. The main natural disturbance in these forests is wildfire, which modifies the carbon budget and has potentially important climate feedbacks. In addition, both legal and illegal logging increase landscape complexity and affect burning conditions and fuel consumption. We investigated 100 individual sites with different histories of logging and fire on a total of 23 study areas in three different regions of Siberia to evaluate the impacts of fire and logging on fuel loads, carbon emissions, and tree regeneration in pine and larch forests. We found large variations of fire and logging effects among regions depending on growing conditions and type of logging activity. Logged areas in the Angara region had the highest surface and ground fuel loads (up to 135 t ha-1), mainly due to logging debris. This resulted in high carbon emissions where fires occurred on logged sites (up to 41 tC ha-1). The Shushenskoe/Minusinsk and Zabaikal regions are characterized by better slash removal and a smaller amount of carbon emitted to the atmosphere during fires. Illegal logging, which is widespread in the Zabaikal region, resulted in an increase in fire hazard and higher carbon emissions than legal logging. The highest fuel loads (on average 108 t ha-1) and carbon emissions (18-28 tC ha-1) in the Zabaikal region are on repeatedly burned unlogged sites where trees fell on the ground following the first fire event. Partial logging in the Shushenskoe/Minusinsk region has insufficient impact on stand density, tree mortality, and other forest conditions to substantially increase fire hazard or affect carbon stocks. Repeated fires on logged sites resulted in insufficient tree regeneration and transformation of forest to grasslands. We conclude that negative impacts of fire and logging on air quality, the carbon cycle, and ecosystem sustainability could be decreased by better slash removal in the Angara region, removal of trees killed by fire in the Zabaikal region, and tree planting after fires in drier conditions where natural regeneration is hampered by soil overheating and grass proliferation.
NASA Astrophysics Data System (ADS)
Hwang, Seho; Shin, Jehyun
2010-05-01
Jeju, located at the southern extremity of Korea, is a volcanic island and one of the best-known tourist attractions in the country. Jeju Province operates monitoring boreholes for the evaluation of groundwater resources in coastal areas. Major rock types identified from drill cores are trachybasalt, acicular basalt, scoria, hyaloclastite, tuff, the unconsolidated U formation, and the Seoguipo formation. Various conventional geophysical well logs, including radioactive logs (natural gamma log, dual neutron log, and gamma-gamma log), an electrical log (or electromagnetic induction log), a caliper log, fluid temperature/conductivity logs, and televiewer logs, have been run to identify basalt sequences and permeable zones and to verify seawater intrusion in the monitoring boreholes. The conductivity logs clearly show the freshwater-saline water boundaries, but we find it hard to identify the permeable zones because of the mixed groundwater within the boreholes. Temperature gradient logs are mostly related to lithologic boundaries and to permeable zones intersected by boreholes on the eastern coast. The wide range of the periodic electrical conductivity logs at greater depths in the monitoring boreholes indicates the possibility of submarine groundwater discharge. However, the origin of the seawater intrusion on the eastern coast has not been clearly understood until now, so we analyzed the electrical conductivity profiles, the record of sea-level change, and 40Ar/39Ar absolute ages of volcanic rock cores from twenty boreholes on the east coast. Comparing the absolute ages of the volcanic rock cores with the sea level at those ages, we find that the depths showing high-salinity groundwater correspond mostly to ages of about 100 ka and from about 130 ka to 180 ka; the former postdates the last interglacial period and the latter corresponds to the Illinoian. These results indicate that the abrupt rise of sea level after the Illinoian formed the regional coast, and the zone of present seawater intrusion lies above the depths corresponding to the Illinoian period. We therefore conclude that the seawater intrusion on the eastern coast is caused mainly by sea-level change.
Evaluation of various molecular parameters as predictors of bioconcentration in fish
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connell, D.W.; Schueuermann, G.
1988-06-01
A reliable set of data on the bioconcentration factors (KB) of a diverse range of compounds in fish was selected from the literature. Using the structures of these compounds, the following molecular parameters were calculated: molecular weight (MW), solvent-accessible molecular surface area (SASA), solvent-accessible molecular volume (SAV), molar refraction (MR), largest principal moment of inertia (LPMI), and several molecular connectivity indices of the Randic type (1 chi, 2 chi, 3 chi, 1 chi vr, 3 chi c). The relationships between these parameters and log KB were evaluated for all compounds and for the following subgroups: chlorinated hydrocarbons (CHC), polyaromatic hydrocarbons (PAH), and CHC and PAH combined. These relationships indicated that SASA, SAV, and MR were good predictors of log KB for the CHC and PAH combined or alone, and the other parameters were less satisfactory with these groups. In addition, with the CHC, the log of these parameters displayed an improved correlation with log KB due to apparent nonlinearity in the untransformed (linear) relationship. Thus, with these groups of compounds, calculated values of SASA, SAV, and MR provide a satisfactory means of estimating log KB without measured data.
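As a toy version of the single-parameter fits evaluated above, the sketch below regresses log KB on SASA and reports the Pearson correlation; the six data points are invented, not the paper's compiled data set.

```python
# Simple linear fit of log KB against solvent-accessible surface area.
import numpy as np

sasa = np.array([180.0, 210.0, 250.0, 300.0, 340.0, 390.0])  # A^2, synthetic
log_kb = np.array([2.1, 2.6, 3.2, 3.9, 4.4, 5.1])            # synthetic

slope, intercept = np.polyfit(sasa, log_kb, 1)
r = np.corrcoef(sasa, log_kb)[0, 1]
print(f"log KB = {slope:.4f} * SASA + {intercept:.2f}  (Pearson r = {r:.3f})")
```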
NASA Astrophysics Data System (ADS)
Adams, Elizabeth L.; Carrier, Sarah J.; Minogue, James; Porter, Stephen R.; McEachin, Andrew; Walkowiak, Temple A.; Zulli, Rebecca A.
2017-02-01
The Instructional Practices Log in Science (IPL-S) is a daily teacher log developed for K-5 teachers to self-report their science instruction. The items on the IPL-S are grouped into scales measuring five dimensions of science instruction: Low-level Sense-making, High-level Sense-making, Communication, Integrated Practices, and Basic Practices. As part of the current validation study, 206 elementary teachers completed 4137 daily log entries. The purpose of this paper is to provide evidence of validity for the IPL-S's scales, including (a) support for the theoretical framework; (b) cognitive interviews with logging teachers; (c) item descriptive statistics; (d) comparisons of 28 pairs of teacher and rater logs; and (e) an examination of the internal structure of the IPL-S. We present evidence to describe the extent to which the items and the scales are completed accurately by teachers and differentiate various types of science instructional strategies employed by teachers. Finally, we point to several practical implications of our work and potential uses for the IPL-S. Overall, results provide neutral to positive support for the validity of the groupings of items or scales.
ERIC Educational Resources Information Center
Hidalgo, Mª Dolores; Gómez-Benito, Juana; Zumbo, Bruno D.
2014-01-01
The authors analyze the effectiveness of the R² and delta log odds ratio effect size measures when using logistic regression analysis to detect differential item functioning (DIF) in dichotomous items. A simulation study was carried out, and the Type I error rate and power estimates under conditions in which only statistical testing…
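A sketch of the delta log odds ratio idea under the usual two-step logistic-regression DIF setup (item response regressed on ability, then on ability plus group membership); the simulated data and the use of statsmodels are illustrative assumptions, not the authors' simulation design.

```python
# Uniform-DIF detection via logistic regression (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n).astype(float)   # 0 = reference, 1 = focal
ability = rng.standard_normal(n)              # matching variable
logit = 1.2 * ability - 0.5 * group           # item is 0.5 logits harder
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

m1 = sm.Logit(y, sm.add_constant(ability)).fit(disp=0)
X2 = sm.add_constant(np.column_stack([ability, group]))
m2 = sm.Logit(y, X2).fit(disp=0)

print(f"delta log odds ratio (group coefficient): {m2.params[-1]:+.3f}")
print(f"likelihood-ratio chi-square (1 df): {2 * (m2.llf - m1.llf):.2f}")
```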
Victoria A. Saab; Jonathan G. Dudley
1998-01-01
From 1994 to 1996, researchers monitored 695 nests of nine cavity-nesting bird species and measured vegetation at nest sites and at 90 randomly located sites in burned ponderosa pine forests of southwestern Idaho. Site treatments included two types of salvage logging, and unlogged controls. All bird species selected nest sites with higher tree densities, larger...
Ganapathy, Dhanraj Muthuveera; Joseph, Sajeesh; Ariga, Padma; Selvaraj, Anand
2013-01-01
Candidal colonization in complete denture wearers is a commonly encountered condition that worsens in the presence of untreated Diabetes Mellitus. The aim of this study was to evaluate the correlation between oral candidiasis in denture-bearing mucosa and elevated blood glucose levels in complete denture wearers, and to evaluate the effect of oral hypoglycemic drug therapy in controlling oral candidal colonization in the denture-bearing mucosa of complete denture wearers with Type II Diabetes Mellitus. This prospective observational study involved the participation of 15 complete denture wearers with Type II Diabetes Mellitus. Samples were collected prior to and after the oral hypoglycaemic drug intervention by swabbing the rugal surfaces of the palatal mucosa; the swabs were cultured, and the density of the candidal colonies formed was analyzed and interpreted as colony-forming units (CFU) per mL. The candidal CFU and the corresponding pre- and post-prandial blood glucose levels were estimated, analyzed, and compared using Karl Pearson correlation analysis and the paired t-test (α = 0.05). The Karl Pearson correlation analysis showed that there was a positive correlation between the blood glucose levels (PPS and FBS) and the candidal colonization (CFU) (P < 0.05). The mean values of all the variables were analyzed using the paired t-test. There was a significant reduction in the mean values of blood glucose levels (P < 0.001) and the mean values of the CFU (P < 0.001) following oral hypoglycemic drug therapy. A positive correlation was observed between oral candidiasis in complete denture-bearing mucosa and elevated blood glucose levels, and oral hypoglycemic drug therapy had a positive effect in controlling oral candidal colonization in complete denture wearers with Type II Diabetes Mellitus.
Abbas, Mohsin
2015-09-01
The present study aimed to analyze the index value trends of injured employed persons (IEPs) covered in Pakistan Labour Force Surveys from 2001-02 to 2012-13. The index value method, based on reference years and reference groups, was used to analyze the IEP trends in terms of different criteria such as gender, area, employment status, industry type, occupational group, type of injury, injured body part, and treatment received. Pearson correlation coefficient analysis was also performed to investigate the inter-relationships of different occupational variables. The values of IEP had increased by the end of the study period in industry divisions such as agriculture, forestry, hunting, and fishing, followed by the manufacturing and construction divisions. People in major occupational groups (such as skilled agricultural and fishery workers) and in elementary (unskilled) occupations were found to be at increasing risk of occupational injuries/diseases, with a rising IEP trend. Occupational injuries such as sprains or strains, superficial injuries, and dislocations increased during the studied years. Among injured body parts, the upper and lower limbs showed an increasing trend. Treatment received, including hospitalization and no treatment, was found to decrease. The Pearson correlation coefficient analysis of IEP by gender, area, treatment received, occupational group, and employment status suggests that the increased IEP can be attributed to inadequate health care facilities, especially in rural areas. The increasing trend in IEP as a percentage of total employed persons in agrarian activities shows that there is a need to improve health care setups in rural areas of Pakistan.
Abbas, Mohsin
2015-01-01
Background The present study aimed to analyze the index value trends of injured employed persons (IEPs) covered in Pakistan Labour Force Surveys from 2001–02 to 2012–13. Methods The index value method, based on reference years and reference groups, was used to analyze the IEP trends in terms of different criteria such as gender, area, employment status, industry type, occupational group, type of injury, injured body part, and treatment received. Pearson correlation coefficient analysis was also performed to investigate the inter-relationships of different occupational variables. Results The values of IEP had increased by the end of the study period in industry divisions such as agriculture, forestry, hunting, and fishing, followed by the manufacturing and construction divisions. People in major occupational groups (such as skilled agricultural and fishery workers) and in elementary (unskilled) occupations were found to be at increasing risk of occupational injuries/diseases, with a rising IEP trend. Occupational injuries such as sprains or strains, superficial injuries, and dislocations increased during the studied years. Among injured body parts, the upper and lower limbs showed an increasing trend. Treatment received, including hospitalization and no treatment, was found to decrease. The Pearson correlation coefficient analysis of IEP by gender, area, treatment received, occupational group, and employment status suggests that the increased IEP can be attributed to inadequate health care facilities, especially in rural areas. Conclusion The increasing trend in IEP as a percentage of total employed persons in agrarian activities shows that there is a need to improve health care setups in rural areas of Pakistan. PMID:26929831
Kwee, Sandi A; Lim, John; Watanabe, Alex; Kromer-Baker, Kathleen; Coel, Marc N
2014-06-01
This study investigated the prognostic significance of metabolically active tumor volume (MATV) measurements applied to (18)F-fluorocholine PET/CT in castration-resistant prostate cancer (CRPC). (18)F-fluorocholine PET/CT imaging was performed on 30 patients with CRPC. Metastatic disease was quantified on the basis of maximum standardized uptake value (SUV(max)), MATV, and total lesion activity (TLA = MATV × mean standardized uptake value). Tumor burden indices derived from whole-body summation of PET tumor volume measurements (i.e., net MATV and net TLA) were evaluated as variables in Cox regression and Kaplan-Meier survival analyses. Net MATV ranged from 0.12 cm(3) to 1,543.9 cm(3) (median, 52.6 cm(3)). Net TLA ranged from 0.40 to 6,688.7 g (median, 225.1 g). Prostate-specific antigen level at the time of PET correlated significantly with net MATV (Pearson r = 0.65, P = 0.0001) and net TLA (r = 0.60, P = 0.0005) but not highest lesional SUV(max) of each scan. Survivors were followed for a median 23 mo (range, 6-38 mo). On Cox regression analyses, overall survival had a significant association with net MATV (P = 0.0068), net TLA (P = 0.0072), and highest lesion SUV(max) (P = 0.0173) and a borderline association with prostate-specific antigen level (P = 0.0458). Only net MATV and net TLA remained significant in univariate-adjusted survival analyses. Kaplan-Meier analysis demonstrated significant differences in survival between groups stratified by median net MATV (log-rank P = 0.0371), net TLA (log-rank P = 0.0371), and highest lesion SUV(max) (log-rank P = 0.0223). Metastatic prostate cancer detected by (18)F-fluorocholine PET/CT can be quantified on the basis of volumetric measurements of tumor metabolic activity. The prognostic value of (18)F-fluorocholine PET/CT may stem from this capacity to assess whole-body tumor burden. With further clinical validation, (18)F-fluorocholine PET-based indices of global disease activity and mortality risk could prove useful in patient-individualized treatment of CRPC. © 2014 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
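A minimal sketch of how the whole-body indices defined above aggregate per-lesion PET measurements; the lesion values are invented for illustration.

```python
# Net MATV and net TLA from per-lesion measurements (synthetic values).
lesions = [      # (MATV in cm^3, SUVmean) for each detected lesion
    (12.4, 4.1),
    (3.7, 6.0),
    (25.9, 3.2),
]

net_matv = sum(v for v, _ in lesions)         # whole-body tumor volume
net_tla = sum(v * suv for v, suv in lesions)  # TLA = MATV x SUVmean
print(f"net MATV = {net_matv:.1f} cm^3, net TLA = {net_tla:.1f} g")
```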
Sowan, Azizeh Khaled; Reed, Charles Calhoun; Staggers, Nancy
2016-09-30
Large datasets of the audit log of modern physiologic monitoring devices have rarely been used for predictive modeling, capturing unsafe practices, or guiding initiatives on alarm systems safety. This paper (1) describes a large clinical dataset using the audit log of the physiologic monitors, (2) discusses benefits and challenges of using the audit log in identifying the most important alarm signals and improving the safety of clinical alarm systems, and (3) provides suggestions for presenting alarm data and improving the audit log of the physiologic monitors. At a 20-bed transplant cardiac intensive care unit, alarm data recorded via the audit log of bedside monitors were retrieved from the server of the central station monitor. Benefits of the audit log are many. They include easily retrievable data at no cost, complete alarm records, easy capture of inconsistent and unsafe practices, and easy identification of bedside monitors missed from a unit change of alarm settings adjustments. Challenges in analyzing the audit log are related to the time-consuming processes of data cleaning and analysis, and limited storage and retrieval capabilities of the monitors. The audit log is a function of current capabilities of the physiologic monitoring systems, monitor's configuration, and alarm management practices by clinicians. Despite current challenges in data retrieval and analysis, large digitalized clinical datasets hold great promise in performance, safety, and quality improvement. Vendors, clinicians, researchers, and professional organizations should work closely to identify the most useful format and type of clinical data to expand medical devices' log capacity.
Wintrob, Zachary A P; Hammel, Jeffrey P; Nimako, George K; Gaile, Dan P; Forrest, Alan; Ceacareanu, Alice C
2017-04-01
Oral drugs stimulating insulin production may impact growth factor levels. The data presented shows the relationship between pre-existing insulin secretagogues use, growth factor profiles at the time of breast cancer diagnosis and subsequent cancer outcomes in women diagnosed with breast cancer and type 2 diabetes mellitus. A Pearson correlation analysis evaluating the relationship between growth factors stratified by diabetes pharmacotherapy and controls is also provided.
A uniform technique for flood frequency analysis.
Thomas, W.O.
1985-01-01
This uniform technique consisted of fitting the logarithms of annual peak discharges to a Pearson Type III distribution using the method of moments. The objective was to adopt a consistent approach for the estimation of floodflow frequencies that could be used in computing average annual flood losses for project evaluation. In addition, a consistent approach was needed for defining equitable flood-hazard zones as part of the National Flood Insurance Program. -from ASCE Publications Information
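A brief sketch of the method-of-moments fit described above, using scipy's Pearson Type III implementation on base-10 logarithms of annual peaks. The discharge values are invented, and the station skew computed here stands in for the weighted station/generalized skew an actual analysis would use.

```python
# Log-Pearson Type III flood-frequency fit by the method of moments.
import numpy as np
from scipy import stats

peaks_cfs = np.array([4200., 6100., 3500., 8800., 5200., 7400.,
                      2900., 9600., 4700., 6800., 5600., 12400.])
logq = np.log10(peaks_cfs)

mean, std = logq.mean(), logq.std(ddof=1)
skew = stats.skew(logq, bias=False)   # station skew (a real analysis would
                                      # weight this with a generalized skew)
for p in (0.5, 0.1, 0.02, 0.01):      # annual exceedance probabilities
    q = 10 ** stats.pearson3.ppf(1 - p, skew, loc=mean, scale=std)
    print(f"P = {p:4.2f}  ->  Q = {q:8.0f} cfs")
```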
Cernecka, Hana; Veizerova, Lucia; Mensikova, Lucia; Svetlik, Jan; Krenek, Peter
2012-05-01
Dihydropyridine calcium channel blockers have some disadvantages, such as light sensitivity and relatively short plasma half-lives. The stability of dihydropyrimidine analogues could be an advantage, yet they remain less well characterized. We aimed to test four newly synthesized Biginelli-type dihydropyrimidines for their calcium channel blocking activity on rat isolated aorta. Dihydropyrimidines (compounds A-D) were prepared by the Biginelli-like three-component condensation of benzaldehydes with urea/thiourea and dimethyl or diethyl acetone-1,3-dicarboxylate, and their physicochemical properties and effects on depolarization-induced and noradrenaline-induced contractions of rat isolated aorta were evaluated. Dihydropyrimidines A and C blocked KCl-induced contraction only weakly (-log(IC50) = 5.03 and 3.73, respectively), while dihydropyrimidine D (-log(IC50) = 7.03) was almost as potent as nifedipine (-log(IC50) = 8.14). Washout experiments revealed that dihydropyrimidine D may bind strongly to the L-type calcium channel or remain bound to the membrane. All tested dihydropyrimidines only marginally inhibited noradrenaline-induced contractions of rat isolated aorta (20% reduction of noradrenaline E(max)), indicating a more selective action on the L-type calcium channel than nifedipine (which inhibited noradrenaline E(max) by 75% at 10(-4) M). Compounds A and, particularly, D are potent calcium channel blockers in vitro, with better selectivity in inhibiting depolarization-induced arterial smooth muscle contraction than nifedipine. © 2012 The Authors. JPP © 2012 Royal Pharmaceutical Society.
Effect of text type on near work-induced contrast adaptation in myopic and emmetropic young adults.
Yeo, Anna C H; Atchison, David A; Schmid, Katrina L
2013-02-27
Contrast adaptation has been speculated to be an error signal for emmetropization. Myopic children exhibit higher contrast adaptation than emmetropic children. This study aimed to determine whether contrast adaptation varies with the type of text viewed by emmetropic and myopic young adults. Baseline contrast sensitivity was determined in 25 emmetropic and 25 spectacle-corrected myopic young adults for 0.5, 1.2, 2.7, 4.4, and 6.2 cycles per degree (cpd) horizontal sine wave gratings. The adults spent periods looking at a 6.2 cpd high-contrast horizontal grating and reading lines of English and Chinese text (these texts comprised 1.2 cpd row and 6 cpd stroke frequencies). The effects of these near tasks on contrast sensitivity were determined, with decreases in sensitivity indicating contrast adaptation. Contrast adaptation was affected by the near task (F2,672 = 43.0; P < 0.001). Adaptation was greater for the grating task (0.13 ± 0.17 log unit, averaged across all frequencies) than reading tasks, but there was no significant difference between the two reading tasks (English 0.05 ± 0.13 log unit versus Chinese 0.04 ± 0.13 log unit). The myopic group showed significantly greater adaptation (by 0.04, 0.04, and 0.05 log units for English, Chinese, and grating tasks, respectively) than the emmetropic group (F1,48 = 5.0; P = 0.03). In young adults, reading Chinese text induced similar contrast adaptation as reading English text. Myopes exhibited greater contrast adaptation than emmetropes. Contrast adaptation, independent of text type, might be associated with myopia development.
NASA Astrophysics Data System (ADS)
Kim, Taeyoun; Hwang, Seho; Jang, Seonghyung
2017-01-01
When searching for the "sweet spot" of a shale gas reservoir, it is essential to estimate the brittleness index (BI) and total organic carbon (TOC) of the formation. In particular, the BI is one of the key factors determining crack propagation and crushing efficiency during hydraulic fracturing. There are several methods for estimating the BI of a formation, but most are empirical equations specific to particular rock types. We estimated the mineralogical BI from elemental capture spectroscopy (ECS) logs and the elastic BI from well log data, and we propose a new method for predicting S-wave velocity (VS) using the mineralogical and elastic BI. The TOC is related to the gas content of shale gas reservoirs. Since it is difficult to perform core analysis for all intervals of a shale gas reservoir, we derive empirical equations for the Horn River Basin, Canada, and construct a TOC log using a linear relation between core-tested TOC and well log data. In addition, two empirical equations are suggested for VS prediction based on the density and gamma ray logs used for the TOC analysis. By applying the proposed BI- and TOC-based empirical equations to well log data from another well and comparing the predicted VS log with the measured VS log, the validity of the empirical equations suggested in this paper is tested.
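As an illustration of the final step, the sketch below fits a two-predictor linear model for VS from bulk density and gamma ray, standing in for the paper's (unstated) empirical equations; all log samples are synthetic.

```python
# Multiple linear regression of S-wave velocity on density and gamma ray.
import numpy as np

rhob = np.array([2.45, 2.50, 2.55, 2.60, 2.62, 2.66])  # g/cm^3, synthetic
gr = np.array([145., 130., 120., 95., 88., 70.])       # API units, synthetic
vs = np.array([1.55, 1.68, 1.80, 2.05, 2.12, 2.30])    # km/s, synthetic

A = np.column_stack([np.ones_like(rhob), rhob, gr])
coef, *_ = np.linalg.lstsq(A, vs, rcond=None)
print("Vs ~ %.2f + %.2f * RHOB + %.4f * GR" % tuple(coef))
```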
Pearson's Functions to Describe FSW Weld Geometry
NASA Astrophysics Data System (ADS)
Lacombe, D.; Gutierrez-Orrantia, M. E.; Coupard, D.; Tcherniaeff, S.; Girot, F.
2011-01-01
Friction stir welding (FSW) is a relatively new joining technique, particularly for aluminium alloys that are difficult to fusion weld. In this study, the geometry of the weld has been investigated and modelled using Pearson's functions. It has been demonstrated that the Pearson parameters (mean, standard deviation, skewness, kurtosis, and a geometric constant) can be used to characterize the weld geometry and the tensile strength of the weld assembly. The Pearson parameters and the process parameters are strongly correlated, allowing a process-control procedure to be defined for FSW assemblies that makes radiographic or ultrasonic inspection unnecessary. Finally, an optimisation using a Generalized Gradient Method allows determination of the weld geometry that maximises the assembly tensile strength.
How accurate is the Pearson r-from-Z approximation? A Monte Carlo simulation study.
Hittner, James B; May, Kim
2012-01-01
The Pearson r-from-Z approximation estimates the sample correlation (as an effect size measure) from the ratio of two quantities: the standard normal deviate equivalent (Z-score) corresponding to a one-tailed p-value divided by the square root of the total (pooled) sample size. The formula has utility in meta-analytic work when reports of research contain minimal statistical information. Although simple to implement, the accuracy of the Pearson r-from-Z approximation has not been empirically evaluated. To address this omission, we performed a series of Monte Carlo simulations. Results indicated that in some cases the formula did accurately estimate the sample correlation. However, when sample size was very small (N = 10) and effect sizes were small to small-moderate (ds of 0.1 and 0.3), the Pearson r-from-Z approximation was very inaccurate. Detailed figures that provide guidance as to when the Pearson r-from-Z formula will likely yield valid inferences are presented.
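The approximation itself is one line; the sketch below implements it and runs a small Monte Carlo check at a single (n, rho) setting. The simulation settings are illustrative, not the authors' full design.

```python
# Pearson r-from-Z approximation: r ~= Z / sqrt(N), with a Monte Carlo check.
import numpy as np
from scipy import stats

def r_from_z(p_one_tailed, n_total):
    """Estimate r from a one-tailed p-value and the pooled sample size."""
    z = stats.norm.isf(p_one_tailed)   # standard normal deviate for the tail
    return z / np.sqrt(n_total)

rng = np.random.default_rng(7)
n, rho, reps = 40, 0.4, 2000
errors = []
for _ in range(reps):
    x = rng.standard_normal(n)
    y = rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
    r, p_two = stats.pearsonr(x, y)
    errors.append(r_from_z(p_two / 2.0, n) - r)   # assumes r > 0 here
print(f"mean approximation error at n={n}, rho={rho}: {np.mean(errors):+.4f}")
```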
The Large Space Structures Technology Program
1992-04-01
Organization and Plan--The LSSTP was initiated in July 1985. It was conceived by Jerome Pearson, who, as leader of the Vibration Group, was responsible... Jerome Pearson was named project manager and Terry Hertz from the Analysis and Optimization Branch was his deputy. The technical disciplines and the...continued until the end of 1990. The LSSTP was originally managed by Jerome Pearson, in addition to his responsibilities as Vibration Group leader. Terry
"It was a young man's life": G.A. Pearson (P-53)
Susan D. Olberding
2008-01-01
The nation's initial USFS research site commenced in a rustic cabin in the midst of northern Arizona's expansive ponderosa pine forest. Gustaf A. Pearson was the first in a distinguished line of USFS scientists to live and study there. A visitor to Fort Valley today often wishes he could have stood in Pearson's large boots (he was said to have enormous feet)...
Analytic posteriors for Pearson's correlation coefficient.
Ly, Alexander; Marsman, Maarten; Wagenmakers, Eric-Jan
2018-02-01
Pearson's correlation is one of the most common measures of linear dependence. Recently, Bernardo (11th International Workshop on Objective Bayes Methodology, 2015) introduced a flexible class of priors to study this measure in a Bayesian setting. For this large class of priors, we show that the (marginal) posterior for Pearson's correlation coefficient and all of the posterior moments are analytic. Our results are available in the open-source software package JASP.
Memories of Fort Valley From 1938 to 1942 (P-53)
Frank H. Wadsworth
2008-01-01
This delightful essay records Frank Wadsworth's early forestry career at FVEF in the late 1930s. Frank married Margaret Pearson, G.A. and May Pearson's daughter, in 1941. Pearson believed Frank could not continue to work for him because of nepotism rules, so Frank and Margaret moved to San Juan, Puerto Rico in 1942 where Frank continued his forestry career. His...
Comparison of various techniques for calibration of AIS data
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Yamaguchi, Y.; Lyon, R. J. P.
1986-01-01
The Airborne Imaging Spectrometer (AIS) samples a region which is strongly influenced by decreasing solar irradiance at longer wavelengths and strong atmospheric absorptions. Four techniques, the Log Residual, the Least Upper Bound Residual, the Flat Field Correction, and calibration using field reflectance measurements, were investigated as a means of removing these two features. Of the four techniques, field reflectance calibration proved to be superior in terms of noise and normalization. Of the other three techniques, the Log Residual was superior when applied to areas which did not contain one dominant cover type; in heavily vegetated areas, the Log Residual proved to be ineffective. After removing anomalously bright data values, the Least Upper Bound Residual proved to be almost as effective as the Log Residual in sparsely vegetated areas and much more effective in heavily vegetated areas. Of all the techniques, the Flat Field Correction was the noisiest.
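A minimal sketch of the Log Residual normalization, assuming the usual row/column formulation on a (pixel, band) radiance array: subtract each pixel's mean log radiance and each band's mean log radiance, then restore the grand mean. The synthetic cube is illustrative only.

```python
# Log Residual normalization for a hyperspectral data cube.
import numpy as np

def log_residual(cube):
    """cube: 2-D array (n_pixels, n_bands) of positive radiances."""
    logd = np.log(cube)
    pixel_mean = logd.mean(axis=1, keepdims=True)  # spectral mean per pixel
    band_mean = logd.mean(axis=0, keepdims=True)   # spatial mean per band
    return logd - pixel_mean - band_mean + logd.mean()

rng = np.random.default_rng(3)
cube = rng.uniform(50.0, 500.0, size=(100, 32))    # synthetic radiances
res = log_residual(cube)
print(res.shape, f"grand mean after correction: {res.mean():.2e}")
```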
Blood harmane concentrations and dietary protein consumption in essential tremor.
Louis, E D; Zheng, W; Applegate, L; Shi, L; Factor-Litvak, P
2005-08-09
Beta-carboline alkaloids (e.g., harmane) are highly tremorogenic chemicals. Animal protein (meat) is the major dietary source of these alkaloids. The authors previously demonstrated that blood harmane concentrations were elevated in patients with essential tremor (ET) vs controls. Whether this difference is due to greater animal protein consumption by patients or their failure to metabolize harmane is unknown. The aim of this study was to determine whether patients with ET and controls differ with regard to 1) daily animal protein consumption and 2) the correlation between animal protein consumption and blood harmane concentration. Data on current diet were collected with a semiquantitative food frequency questionnaire, and daily calories and consumption of animal protein and other food types were calculated. Blood harmane concentrations were log-transformed (logHA). The mean logHA was higher in 106 patients than in 161 controls (0.61 +/- 0.67 vs 0.43 +/- 0.72 x 10(-10) g/mL, p = 0.035). Patients and controls consumed similar amounts of animal protein (50.2 +/- 19.6 vs 49.4 +/- 19.1 g/day, p = 0.74) and other food types (animal fat, carbohydrates, vegetable fat) and had similar caloric intakes. In controls, logHA was correlated with daily consumption of animal protein (r = 0.24, p = 0.003); in patients, there was no such correlation (r = -0.003, p = 0.98). The similarity between patients and controls in daily animal protein consumption, and the absence in patients of the normal correlation between daily animal protein consumption and logHA, suggest that another factor (e.g., a metabolic defect) may be increasing blood harmane concentration in patients.
Chang, Chia-Lin; Munin, Michael C.; Skidmore, Elizabeth R.; Niyonkuru, Christian; Huber, Lynne M.; Weber, Douglas J.
2015-01-01
Objective To determine whether baseline hand spastic hemiparesis assessed by the Chedoke-McMaster Assessment influences functional improvement after botulinum toxin type A (BTX-A) injections and postinjection therapy. Design Prospective cohort study. Setting Outpatient spasticity clinic. Participants Participants (N = 14) with spastic hemiparesis were divided into 2 groups: Chedoke-McMaster Assessment Hand-Higher Function (stage ≥ 4, n = 5) and Chedoke-McMaster Assessment Hand-Lower Function (stage = 2 or 3, n = 9). Interventions Upper-limb BTX-A injections followed by 6 weeks of postinjection therapy. Main Outcome Measures Primary outcomes were the Motor Activity Log-28 and Motor Activity Log items. Secondary outcomes were the Action Research Arm Test (ARAT), Motor Activity Log-Self-Report, and Modified Ashworth Scale (MAS). Measures were assessed at baseline (preinjection) and at 6, 9, and 12 weeks postinjection. Results Primary and secondary outcomes improved significantly over time in both groups. Although no significant differences in ARAT or MAS change scores were noted between groups, the Hand-Higher Function group demonstrated greater change on the Motor Activity Log-28 (P = .013) from baseline to 6 weeks and on Motor Activity Log items (P = .006) from baseline to 12 weeks compared with the Hand-Lower Function group. Conclusions BTX-A injections and postinjection therapy improved hand function and reduced spasticity in both the Chedoke-McMaster Assessment Hand-Higher Function and Hand-Lower Function groups. Clinicians should expect to see larger gains for persons with less baseline impairment. PMID:19735772
Kalkanis, Alexandros; Kalkanis, Dimitrios; Drougas, Dimitrios; Vavougios, George D; Datseris, Ioannis; Judson, Marc A; Georgiou, Evangelos
2016-03-01
The objective of our study was to assess the possible relationship between splenic F-18-fluorodeoxyglucose (18F-FDG) uptake and other established biochemical markers of sarcoidosis activity. Thirty treatment-naive sarcoidosis patients were prospectively enrolled in this study. They underwent biochemical laboratory tests, including serum interleukin-2 receptor (sIL-2R), serum C-reactive protein, serum angiotensin-I converting enzyme, and 24-h urine calcium levels, and a whole-body combined 18F-FDG PET/computed tomography (PET/CT) scan as a part of an ongoing study at our institute. These biomarkers were statistically compared in these patients. A statistically significant linear dependence was detected between sIL-2R and log-transformed spleen-average standard uptake value (SUV avg) (R2=0.488, P<0.0001) and log-transformed spleen-maximum standard uptake value (SUV max) (R2=0.490, P<0.0001). sIL-2R levels and splenic size correlated linearly (Pearson's r=0.373, P=0.042). Multivariate linear regression analysis revealed that this correlation remained significant after age and sex adjustment (β=0.001, SE=0.001, P=0.024). No statistically significant associations were detected between (a) any two serum biomarkers or (b) between spleen-SUV measurements and any serum biomarker other than sIL-2R. Our analysis revealed an association between sIL-2R levels and spleen 18F-FDG uptake and size, whereas all other serum biomarkers were not significantly associated with each other or with PET 18F-FDG uptake. Our results suggest that splenic inflammation may be related to the systemic inflammatory response in sarcoidosis that may be associated with elevated sIL-2R levels.
Yu, Yan; Xie, Zhilan; Wang, Jihan; Chen, Chu; Du, Shuli; Chen, Peng; Li, Bin; Jin, Tianbo; Zhao, Heping
2016-12-01
The proportion of alcohol-induced osteonecrosis of the femoral head (ONFH) in all ONFH patients was 30.7%, with males prevailing among the ONFH patients in mainland China (70.1%). Matrix metalloproteinase 2 (MMP2), a member of the MMP gene family, encodes the enzyme MMP2, which can promote osteoclast migration, attachment, and bone matrix degradation. In this case-control study, we aimed to investigate the association between MMP2 and alcohol-induced ONFH in Chinese males. In total, 299 patients with alcohol-induced ONFH and 396 healthy controls were recruited for a case-control association study. Five single-nucleotide polymorphisms within the MMP2 locus were genotyped and examined for their correlation with the risk of alcohol-induced ONFH and treatment response using the Pearson χ2 test and unconditional logistic regression analysis. We identified 3 risk alleles for carriers: the allele "T" of rs243849 increased the risk of alcohol-induced ONFH in the allele model, the log-additive model without adjustment, and the log-additive model with adjustment for age. Conversely, the genotypes "CC" in rs7201 and "CC" in rs243832 decreased the risk of alcohol-induced ONFH, as revealed by the recessive model. After the Bonferroni multiple adjustment, no significant association was found. Furthermore, the haplotype analysis showed that the "TT" haplotype of MMP2 was more frequent among patients with alcohol-induced ONFH, by unconditional logistic regression analysis adjusted for age. In conclusion, there may be an association between MMP2 and the risk of alcohol-induced ONFH in North-Chinese males. However, studies on larger populations are needed to confirm this hypothesis; these data may provide a theoretical foundation for future studies.
Vermeulen, Roel; Coble, Joseph B.; Yereb, Daniel; Lubin, Jay H.; Blair, Aaron; Portengen, Lützen; Stewart, Patricia A.; Attfield, Michael; Silverman, Debra T.
2010-01-01
Diesel exhaust (DE) has been implicated as a potential lung carcinogen. However, the exact components of DE that might be involved have not been clearly identified. In the past, nitrogen oxides (NOx) and carbon oxides (COx) were measured most frequently to estimate DE, but since the 1990s, the most commonly accepted surrogate for DE has been elemental carbon (EC). We developed quantitative estimates of historical exposure levels of respirable elemental carbon (REC) for an epidemiologic study of mortality, particularly lung cancer, among diesel-exposed miners by back-extrapolating 1998–2001 REC exposure levels using historical measurements of carbon monoxide (CO). The choice of CO was based on the availability of historical measurement data. Here, we evaluated the relationship of REC with CO and other current and historical components of DE from side-by-side area measurements taken in underground operations of seven non-metal mining facilities. The Pearson correlation coefficient of the natural log-transformed (Ln)REC measurements with the Ln(CO) measurements was 0.4. The correlation of REC with the other gaseous, organic carbon (OC), and particulate measurements ranged from 0.3 to 0.8. Factor analyses indicated that the gaseous components, including CO, together with REC, loaded most strongly on a presumed ‘Diesel exhaust’ factor, while the OC and particulate agents loaded predominantly on other factors. In addition, the relationship between Ln(REC) and Ln(CO) was approximately linear over a wide range of REC concentrations. The fact that CO correlated with REC, loaded on the same factor, and increased linearly in log–log space supported the use of CO in estimating historical exposure levels to DE. PMID:20876234
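A toy version of the back-extrapolation idea: regress ln(REC) on ln(CO) from paired measurements, then predict historical REC from archived CO levels. The measurement pairs below are invented, not the study's survey data.

```python
# ln(REC) vs ln(CO) regression and historical back-extrapolation.
import numpy as np

co = np.array([2.0, 3.5, 5.0, 8.0, 12.0, 20.0])          # ppm, synthetic
rec = np.array([40.0, 75.0, 95.0, 160.0, 230.0, 380.0])  # ug/m^3, synthetic

b, a = np.polyfit(np.log(co), np.log(rec), 1)            # ln(REC) = a + b*ln(CO)
print(f"ln(REC) = {a:.2f} + {b:.2f} * ln(CO)")

historical_co = np.array([15.0, 25.0])                   # archived CO levels
rec_hat = np.exp(a + b * np.log(historical_co))
print("back-extrapolated REC:", np.round(rec_hat, 1))
```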
Olvera, Adib; Signorini, Marcelo; Tarabla, Héctor
2010-06-01
The objective was to quantify contamination by verotoxin-producing Escherichia coli associated with hemolytic uremic syndrome (VTEC-HUS) on cattle carcasses and to generate estimates of exposure in three likely scenarios. A model was constructed of the frequency and magnitude of VTEC-HUS contamination from primary production to the removal of the carcasses from cold storage, based on published scientific information, epidemiological data, and information from local experts. The probability distributions that best described each step in the process and the scenarios were input to the @Risk program, with multiple simulations using Monte Carlo analysis. Pearson's correlation test was used for the sensitivity analysis. The estimated frequency of carcasses with VTEC-HUS was 0.37 (95% CI: 0.26 to 0.58), and the final load of VTEC-HUS was 0.47 log CFU/carcass (95% CI: -2.46 to 3.62). The most closely related variables were the fattening system (r = -0.681) and the theoretical concentration of VTEC-HUS on the cattle's skin (r = 0.702). Vaccinating the animals reduced the frequency of VTEC-HUS on the carcasses by 54.1%, although there were no significant changes in the final VTEC-HUS load. Washing the carcasses reduced the final load by 0.42 log CFU/carcass compared with the baseline model, without any change in the frequency. A 50%-60% increase in the percentage of animals fattened in pens would increase the frequency of carcasses contaminated with VTEC-HUS by 15%-23%. Vaccinating the animals was the most effective scenario for reducing introduction of the bacteria into the beef production chain. Intensifying livestock production will increase the public health risk due to greater exposure to VTEC-HUS.
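A minimal Monte Carlo sketch in the spirit of the @Risk model: propagate assumed input distributions to a final log load and rank inputs by Pearson correlation, as in the sensitivity analysis above. The distributions and coefficients are invented, not the authors' calibrated model.

```python
# Monte Carlo propagation with Pearson-correlation sensitivity ranking.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 100_000
skin_load = rng.normal(1.0, 0.8, n)                  # log CFU on hide, assumed
pen = (rng.random(n) < 0.55).astype(float)           # fattened in pens?
transfer = rng.uniform(-2.5, -0.5, n)                # hide-to-carcass, log10
wash = rng.uniform(-0.6, -0.2, n)                    # washing reduction, log10

final_load = skin_load + 0.4 * pen + transfer + wash # log CFU/carcass
print(f"median final load: {np.median(final_load):.2f} log CFU/carcass")
for name, x in [("skin load", skin_load), ("pen fattening", pen),
                ("transfer", transfer), ("washing", wash)]:
    r, _ = stats.pearsonr(x, final_load)
    print(f"  sensitivity (Pearson r), {name}: {r:+.3f}")
```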
Reliability and validity of the instrument used in BRFSS to assess physical activity.
Yore, Michelle M; Ham, Sandra A; Ainsworth, Barbara E; Kruger, Judy; Reis, Jared P; Kohl, Harold W; Macera, Caroline A
2007-08-01
State-level statistics of adherence to the physical activity objectives in Healthy People 2010 are derived from the Behavioral Risk Factor Surveillance System (BRFSS) data. BRFSS physical activity questions were updated in 2001 to include domains of leisure time, household, and transportation-related activity of moderate- and vigorous intensity, and walking questions. This article reports the reliability and validity of these questions. The BRFSS Physical Activity Study (BPAS) was conducted from September 2000 to May 2001 in Columbia, SC. Sixty participants were followed for 22 d; they answered the physical activity questions three times via telephone, wore a pedometer and accelerometer, and completed a daily physical activity log for 1 wk. Measures for moderate, vigorous, recommended (i.e., met the criteria for moderate or vigorous), and strengthening activities were created according to Healthy People 2010 operational definitions. Reliability and validity were assessed using Cohen's kappa (kappa) and Pearson correlation coefficients. Seventy-three percent of participants met the recommended activity criteria compared with 45% in the total U.S. population. Test-retest reliability (kappa) was 0.35-0.53 for moderate activity, 0.80-0.86 for vigorous activity, 0.67-0.84 for recommended activity, and 0.85-0.92 for strengthening. Validity (kappa) of the survey (using the accelerometer as the standard) was 0.17-0.22 for recommended activity. Validity (kappa) of the survey (using the physical activity log as the standard) was 0.40-0.52 for recommended activity. The validity and reliability of the BRFSS physical activity questions suggests that this instrument can classify groups of adults into the levels of recommended and vigorous activity as defined by Healthy People 2010. Repeated administration of these questions over time will help to identify trends in physical activity.
Knot probabilities in random diagrams
NASA Astrophysics Data System (ADS)
Cantarella, Jason; Chapman, Harrison; Mastin, Matt
2016-10-01
We consider a natural model of random knotting—choose a knot diagram at random from the finite set of diagrams with n crossings. We tabulate diagrams with 10 and fewer crossings and classify the diagrams by knot type, allowing us to compute exact probabilities for knots in this model. As expected, most diagrams with 10 and fewer crossings are unknots (about 78% of the roughly 1.6 billion 10 crossing diagrams). For these crossing numbers, the unknot fraction is mostly explained by the prevalence of ‘tree-like’ diagrams which are unknots for any assignment of over/under information at crossings. The data shows a roughly linear relationship between the log of knot type probability and the log of the frequency rank of the knot type, analogous to Zipf’s law for word frequency. The complete tabulation and all knot frequencies are included as supplementary data.
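A Zipf-style relationship of this kind reduces to a linear fit in log-log space. The counts below are hypothetical, not the paper's tabulation; the sketch only shows how the exponent is estimated.

```python
import numpy as np

# Hypothetical knot-type counts, sorted by frequency rank.
counts = np.array([1000, 450, 280, 150, 90, 60, 40, 25])
ranks = np.arange(1, len(counts) + 1)
probs = counts / counts.sum()

# Fit log(probability) ~ slope * log(rank) + intercept.
slope, intercept = np.polyfit(np.log(ranks), np.log(probs), 1)
print(f"Zipf-like exponent: {slope:.2f}")
```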
USDA-ARS?s Scientific Manuscript database
Human histo-blood group antigens (HBGA) have been identified previously as candidate receptors for human norovirus (NOR). Type A, type H1, and Lewis HBGAs have been identified as the major HBGAs for NOR binding. We have identified that pig stomach mucin (PGM) contains group A, type H1, and Lewis b type HBGAs...
Stone tool production and utilization by bonobo-chimpanzees (Pan paniscus).
Roffman, Itai; Savage-Rumbaugh, Sue; Rubert-Pugh, Elizabeth; Ronen, Avraham; Nevo, Eviatar
2012-09-04
Using direct percussion, language-competent bonobo-chimpanzees Kanzi and Pan-Banisha produced a significantly wider variety of flint tool types than hitherto reported, and used them task-specifically to break wooden logs or to dig underground for food retrieval. For log breaking, small flakes were rotated drill-like or used as scrapers, whereas thick cortical flakes were used as axes or wedges, leaving consistent wear patterns along the glued slits, the weakest areas of the log. For digging underground, a variety of modified stone tools, as well as unmodified flint nodules, were used as shovels. Such tool production and utilization competencies reported here in Pan indicate that present-day Pan exhibits Homo-like technological competencies.
Air Launch Instrumented Vehicles Evaluation (ALIVE).
1977-02-01
The study addressed aging of two 12-inch-diameter, SRBDM-type motors cast with moderate-burning-rate propellant. [The remainder of this entry is garbled figure-list residue; recoverable figure titles include: Stress Intensity Factor vs. Half Crack Length; Stress Intensity Factor/Load vs. Half Crack Length; Log Stress Intensity Factor vs. Log Crack Tip Velocity for Strip Biaxial Specimen; Log Stress Intensity Factor Adjusted for Strain.]
A log-Weibull spatial scan statistic for time to event data.
Usman, Iram; Rosychuk, Rhonda J
2018-06-13
Spatial scan statistics have been used for the identification of geographic clusters of elevated numbers of cases of a condition, such as disease outbreaks. These statistics, accompanied by the appropriate distribution, can also identify geographic areas with either longer or shorter times to events. Other authors have proposed spatial scan statistics based on the exponential and Weibull distributions. We propose the log-Weibull as an alternative distribution for the spatial scan statistic for time-to-event data and compare and contrast the log-Weibull and Weibull distributions through simulation studies. The effects of type I differential censoring and power have been investigated through simulated data. Methods are also illustrated on time-to-specialist-visit data for discharged patients presenting to emergency departments for atrial fibrillation and flutter in Alberta during 2010-2011. We found northern regions of Alberta had longer times to specialist visit than other areas. We proposed the spatial scan statistic for the log-Weibull distribution as a new approach for detecting spatial clusters for time-to-event data. The simulation studies suggest that the test performs well for log-Weibull data.
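As a rough illustration of the distributional comparison (not the authors' scan procedure), the sketch below fits a Weibull and a Gumbel law, the distribution commonly called log-Weibull, to simulated times-to-event and compares them by AIC; the simulated data and the AIC criterion are assumptions made here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical times to specialist visit (days).
t = stats.weibull_min.rvs(1.5, scale=30, size=500, random_state=rng)

candidates = {
    "Weibull": stats.weibull_min,
    "log-Weibull (Gumbel)": stats.gumbel_r,
}
for name, dist in candidates.items():
    params = dist.fit(t)                # maximum-likelihood fit
    ll = dist.logpdf(t, *params).sum()
    aic = 2 * len(params) - 2 * ll
    print(f"{name}: AIC = {aic:.1f}")
```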
Confronting the Gaia and NLTE spectroscopic parallaxes for the FGK stars
NASA Astrophysics Data System (ADS)
Sitnova, Tatyana; Mashonkina, Lyudmila; Pakhomov, Yury
2018-04-01
The understanding of the chemical evolution of the Galaxy relies on the stellar chemical composition. Accurate atmospheric parameters are a prerequisite for the determination of accurate chemical abundances. For late-type stars with known distance, the surface gravity (log g) can be calculated from the well-known relation between stellar mass, Teff, and absolute bolometric magnitude. This method depends only weakly on model atmospheres and provides reliable log g. However, accurate distances are available for a limited number of stars. Another way to determine log g for cool stars is based on ionisation equilibrium, i.e., consistent abundances from lines of neutral and ionised species. In this study we determine atmospheric parameters moving step-by-step from well-studied nearby dwarfs to ultra-metal-poor (UMP) giants. In each sample, we select stars with the most reliable Teff based on photometry and the distance-based log g, and compare with the spectroscopic gravity calculated taking into account deviations from local thermodynamic equilibrium (LTE). After that, we apply the spectroscopic method of log g determination to other stars of the sample with unknown distances.
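The distance-based gravity mentioned above follows from the standard relation log g = log g⊙ + log(M/M⊙) + 4 log(Teff/Teff,⊙) + 0.4 (Mbol − Mbol,⊙). A minimal sketch, with the usual solar calibration (log g⊙ = 4.44, Teff,⊙ = 5777 K, Mbol,⊙ = 4.74) assumed:

```python
import math

def log_g(mass_msun: float, teff_k: float, mbol: float) -> float:
    """log g = log g_sun + log(M/M_sun) + 4 log(Teff/Teff_sun) + 0.4 (Mbol - Mbol_sun)."""
    return (4.44 + math.log10(mass_msun)
            + 4.0 * math.log10(teff_k / 5777.0)
            + 0.4 * (mbol - 4.74))

# Hypothetical star: 0.8 M_sun, Teff = 5200 K, Mbol = 3.0 (a subgiant-like case).
print(f"log g = {log_g(0.8, 5200.0, 3.0):.2f}")
```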
First Test of Stochastic Growth Theory for Langmuir Waves in Earth's Foreshock
NASA Technical Reports Server (NTRS)
Cairns, Iver H.; Robinson, P. A.
1997-01-01
This paper presents the first test of whether stochastic growth theory (SGT) can explain the detailed characteristics of Langmuir-like waves in Earth's foreshock. A period with unusually constant solar wind magnetic field is analyzed. The observed distributions P(logE) of wave fields E for two intervals with relatively constant spacecraft location (DIFF) are shown to agree well with the fundamental prediction of SGT, that P(logE) is Gaussian in log E. This stochastic growth can be accounted for semi-quantitatively in terms of standard foreshock beam parameters and a model developed for interplanetary type III bursts. Averaged over the entire period with large variations in DIFF, the P(logE) distribution is a power-law with index approximately -1; this is interpreted in terms of convolution of intrinsic, spatially varying P(logE) distributions with a probability function describing ISEE's residence time at a given DIFF. Wave data from this interval thus provide good observational evidence that SGT can sometimes explain the clumping, burstiness, persistence, and highly variable fields of the foreshock Langmuir-like waves.
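The fundamental SGT prediction, that P(logE) is Gaussian, says the field amplitudes E are lognormally distributed. A hedged sketch of such a check on synthetic amplitudes (a real analysis would use the measured fields), using the D'Agostino-Pearson normality test on log E:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic wave-field amplitudes drawn lognormal, i.e., Gaussian in log E.
E = rng.lognormal(mean=-3.0, sigma=0.8, size=2000)

logE = np.log10(E)
stat, p = stats.normaltest(logE)  # D'Agostino-Pearson test of normality
print(f"normality of log E: p = {p:.3f}")
```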
Li, Wenli; Turner, Amy; Aggarwal, Praful; Matter, Andrea; Storvick, Erin; Arnett, Donna K; Broeckel, Ulrich
2015-12-16
Whole transcriptome sequencing (RNA-seq) represents a powerful approach for whole transcriptome gene expression analysis. However, RNA-seq carries a few limitations, e.g., the requirement of a significant amount of input RNA and complications caused by non-specific mapping of short reads. The Ion AmpliSeq Transcriptome Human Gene Expression Kit (AmpliSeq) was recently introduced by Life Technologies as a whole-transcriptome, targeted gene quantification kit to overcome these limitations of RNA-seq. To assess the performance of this new methodology, we performed a comprehensive comparison of AmpliSeq with RNA-seq using two well-established next-generation sequencing platforms (Illumina HiSeq and Ion Torrent Proton). We analyzed standard reference RNA samples and RNA samples obtained from human induced pluripotent stem cell derived cardiomyocytes (hiPSC-CMs). Using published data from two standard RNA reference samples, we observed a strong concordance of log2 fold change for all genes when comparing AmpliSeq to Illumina HiSeq (Pearson's r = 0.92) and Ion Torrent Proton (Pearson's r = 0.92). We used ROC, Matthew's correlation coefficient and RMSD to determine the overall performance characteristics. All three statistical methods demonstrate that AmpliSeq is a highly accurate method for differential gene expression analysis. Additionally, for genes with high abundance, AmpliSeq outperforms the two RNA-seq methods. When analyzing four closely related hiPSC-CM lines, we show that both AmpliSeq and RNA-seq capture similar global gene expression patterns consistent with known sources of variation. Our study indicates that AmpliSeq excels in the limiting areas of RNA-seq for gene expression quantification analysis. Thus, AmpliSeq stands as a very sensitive and cost-effective approach for very-large-scale gene expression analysis and mRNA marker screening with high accuracy.
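Cross-platform concordance of this kind is computed as a Pearson correlation of per-gene log2 fold changes. The sketch below uses synthetic counts; the gene number, noise levels, and +1 pseudocount are assumptions for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n_genes = 1000
# Hypothetical normalized counts for conditions A and B on platform 1 ...
a1, b1 = rng.lognormal(5, 1, n_genes), rng.lognormal(5, 1, n_genes)
# ... and the same genes re-measured on platform 2 with multiplicative noise.
a2 = a1 * rng.lognormal(0, 0.2, n_genes)
b2 = b1 * rng.lognormal(0, 0.2, n_genes)

lfc1 = np.log2((a1 + 1) / (b1 + 1))  # log2 fold change, platform 1
lfc2 = np.log2((a2 + 1) / (b2 + 1))  # log2 fold change, platform 2

r, _ = pearsonr(lfc1, lfc2)
print(f"cross-platform concordance: Pearson r = {r:.2f}")
```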
Macedo-Ojeda, Gabriela; Márquez-Sandoval, Fabiola; Fernández-Ballart, Joan; Vizmanos, Barbara
2016-01-01
The study of diet quality in a population provides information for the development of programs to improve nutritional status through better directed actions. The aim of this study was to assess the reproducibility and relative validity of a Mexican Diet Quality Index (ICDMx) for the assessment of the habitual diet of adults. The ICDMx was designed to assess the characteristics of a healthy diet using a validated semi-quantitative food frequency questionnaire (FFQ-Mx). Reproducibility was determined by comparing 2 ICDMx based on FFQs (one-year interval). Relative validity was assessed by comparing the ICDMx (2nd FFQ) with that estimated based on the intake averages from dietary records (nine days). The questionnaires were answered by 97 adults (mean age in years = 27.5, SD = 12.6). Pearson (r) and intraclass correlations (ICC) were calculated; Bland-Altman plots, Cohen's κ coefficients and blood lipid determinations complemented the analysis. Additional analysis compared ICDMx scores with nutrients derived from dietary records, using a Pearson correlation. These nutrient intakes were transformed logarithmically to improve normality (log10) and adjusted for energy prior to analyses. ICC reproducibility values for the ICDMx ranged from 0.33 to 0.87 (23/24 items with significant correlations; mean = 0.63), while relative validity values ranged from 0.26 to 0.79 (mean = 0.45). Bland-Altman plots showed a high level of agreement between methods. ICDMx scores were inversely correlated (p < 0.05) with total blood cholesterol (r = −0.33) and triglycerides (r = −0.22). The ICDMx (as calculated from FFQs and DRs) obtained positive correlations with fiber, magnesium, potassium, retinol, thiamin, riboflavin, pyridoxine, and folate. The ICDMx obtained acceptable levels of reproducibility and relative validity in this population. It can be useful for population nutritional surveillance and to assess the changes resulting from the implementation of nutritional interventions. PMID:27563921
Tropical forests are thermally buffered despite intensive selective logging.
Senior, Rebecca A; Hill, Jane K; Benedick, Suzan; Edwards, David P
2018-03-01
Tropical rainforests are subject to extensive degradation by commercial selective logging. Despite pervasive changes to forest structure, selectively logged forests represent vital refugia for global biodiversity. The ability of these forests to buffer temperature-sensitive species from climate warming will be an important determinant of their future conservation value, although this topic remains largely unexplored. Thermal buffering potential is broadly determined by: (i) the difference between the "macroclimate" (climate at a local scale, m to ha) and the "microclimate" (climate at a fine-scale, mm to m, that is distinct from the macroclimate); (ii) thermal stability of microclimates (e.g. variation in daily temperatures); and (iii) the availability of microclimates to organisms. We compared these metrics in undisturbed primary forest and intensively logged forest on Borneo, using thermal images to capture cool microclimates on the surface of the forest floor, and information from dataloggers placed inside deadwood, tree holes and leaf litter. Although major differences in forest structure remained 9-12 years after repeated selective logging, we found that logging activity had very little effect on thermal buffering, in terms of macroclimate and microclimate temperatures, and the overall availability of microclimates. For 1°C warming in the macroclimate, temperature inside deadwood, tree holes and leaf litter warmed slightly more in primary forest than in logged forest, but the effect amounted to <0.1°C difference between forest types. We therefore conclude that selectively logged forests are similar to primary forests in their potential for thermal buffering, and subsequent ability to retain temperature-sensitive species under climate change. Selectively logged forests can play a crucial role in the long-term maintenance of global biodiversity. © 2017 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
Zero Pearson coefficient for strongly correlated growing trees
NASA Astrophysics Data System (ADS)
Dorogovtsev, S. N.; Ferreira, A. L.; Goltsev, A. V.; Mendes, J. F. F.
2010-03-01
We obtained Pearson's coefficient of strongly correlated recursive networks growing by preferential attachment, with every new vertex attached by m edges. We found that the Pearson coefficient is exactly zero in the infinite network limit for the recursive trees (m = 1). If the number of connections of new vertices exceeds one (m > 1), then the Pearson coefficient in infinite networks equals zero only when the degree distribution exponent γ does not exceed 4. We calculated the Pearson coefficient for finite networks and observed a slow power-law-like approach to the infinite network limit. Our findings indicate that Pearson's coefficient strongly depends on the size and details of networks, which makes this characteristic virtually useless for quantitative comparison of different networks.
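The m = 1 regime is easy to probe numerically: a Barabási-Albert construction with one edge per new vertex grows a preferential-attachment tree (a close cousin of, though not identical to, the paper's recursive model), and the Pearson degree-degree coefficient can be read off directly. A sketch with networkx:

```python
import networkx as nx

# Preferential attachment with m = 1 grows a tree; the paper predicts the
# Pearson (degree-degree) coefficient vanishes slowly as n grows.
for n in (1_000, 10_000, 100_000):
    g = nx.barabasi_albert_graph(n, 1, seed=0)
    r = nx.degree_assortativity_coefficient(g)
    print(f"n = {n}: Pearson coefficient = {r:.3f}")
```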
Porto-Fett, Anna C S; Juneja, Vijay K; Tamplin, Mark L; Luchansky, John B
2009-03-01
Irradiated ground beef samples (ca. 3-g portions with ca. 25% fat) inoculated with Yersinia pestis strain KIM5 (ca. 6.7 log CFU/g) were heated in a circulating water bath stabilized at 48.9, 50, 52.5, 55, 57.5, or 60 degrees C (120, 122, 126.5, 131, 135.5, and 140 degrees F, respectively). Average D-values were 192.17, 34.38, 17.11, 3.87, 1.32, and 0.56 min, respectively, with a corresponding z-value of 4.67 degrees C (8.41 degrees F). In related experiments, irradiated ground beef patties (ca. 95 g per patty with ca. 25% fat) were inoculated with Y. pestis strains KIM5 or CDC-A1122 (ca. 6.0 log CFU/g) and cooked on an open-flame gas grill or on a clam-shell type electric grill to internal target temperatures of 48.9, 60, and 71.1 degrees C (120, 140, and 160 degrees F, respectively). For patties cooked on the gas grill, strain KIM5 populations decreased from ca. 6.24 to 4.32, 3.51, and < or = 0.7 log CFU/g at 48.9, 60, and 71.1 degrees C, respectively, and strain CDC-A1122 populations decreased to 3.46 log CFU/g at 48.9 degrees C and to < or = 0.7 log CFU/g at both 60 and 71.1 degrees C. For patties cooked on the clam-shell grill, strain KIM5 populations decreased from ca. 5.96 to 2.53 log CFU/g at 48.9 degrees C and to < or = 0.7 log CFU/g at 60 or 71.1 degrees C, and strain CDC-A1122 populations decreased from ca. 5.98 to < or = 0.7 log CFU/g at all three cooking temperatures. These data confirm that cooking ground beef on an open-flame gas grill or on a clam-shell type electric grill to the temperatures and times recommended by the U.S. Department of Agriculture and the U.S. Food and Drug Administration Food Code appreciably lessens the likelihood, severity, and/or magnitude of consumer illness if the ground beef were purposefully contaminated even with relatively high levels of Y. pestis.
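From the reported D-values, the z-value follows from a linear fit of log10 D against temperature: z is the temperature increase that lowers D tenfold, i.e., the negative reciprocal of the slope. A sketch using the averages quoted above:

```python
import numpy as np

temps = np.array([48.9, 50.0, 52.5, 55.0, 57.5, 60.0])        # deg C
d_vals = np.array([192.17, 34.38, 17.11, 3.87, 1.32, 0.56])   # min

slope, _ = np.polyfit(temps, np.log10(d_vals), 1)
z = -1.0 / slope
print(f"z-value ~ {z:.2f} deg C")  # the paper's fitted value is 4.67 deg C
```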
Elliott, M A; Stauber, C E; Koksal, F; DiGiano, F A; Sobsey, M D
2008-05-01
Point-of-use (POU) drinking water treatment technology enables those without access to safe water sources to improve the quality of their water by treating it in the home. One of the most promising emerging POU technologies is the biosand filter (BSF), a household-scale, intermittently operated slow sand filter. Over 500,000 people in developing countries currently use the filters to treat their drinking water. However, despite this successful implementation, there has been almost no systematic, process engineering research to substantiate the effectiveness of the BSF or to optimize its design and operation. The major objectives of this research were to: (1) gain an understanding of the hydraulic flow condition within the filter; (2) characterize the ability of the BSF to reduce the concentration of enteric bacteria and viruses in water; and (3) gain insight into the key parameters of filter operation and their effects on filter performance. Three 6-8 week microbial challenge experiments are reported herein in which local surface water was seeded with E. coli, echovirus type 12 and bacteriophages (MS2 and PRD-1) and charged to the filter daily. Tracer tests indicate that the BSF operated at hydraulic conditions closely resembling plug flow. The performance of the filter in reducing microbial concentrations was highly dependent upon (1) filter ripening over weeks of operation and (2) the daily volume charged to the filter. BSF performance was best when less than one pore volume (18.3 L in the filter design studied) was charged to the filter per day and this has important implications for filter design and operation. Enhanced filter performance due to ripening was generally observed after roughly 30 days. Reductions of E. coli B ranged from 0.3 log10 (50%) to 4 log10, with geometric mean reductions after at least 30 days of operation of 1.9 log10. Echovirus 12 reductions were comparable to those for E. coli B with a range of 1 log10 to >3 log10 and mean reductions after 30 days of 2.1 log10. Bacteriophage reductions were much lower, ranging from zero to 1.3 log10 (95%) with mean reductions of only 0.5 log10 (70%). These data indicate that virus reduction by BSF may differ substantially depending upon the specific viral agent.
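Log reductions of the kind reported here are computed as LRV = log10(influent / effluent). A one-function sketch with hypothetical counts:

```python
import math

def log10_reduction(influent_cfu: float, effluent_cfu: float) -> float:
    """Log10 reduction value (LRV) across the filter."""
    return math.log10(influent_cfu / effluent_cfu)

# Hypothetical: E. coli at 1e5 CFU per sample in, 1.26e3 out.
print(f"LRV = {log10_reduction(1e5, 1.26e3):.1f} log10")  # ~1.9, like the reported mean
```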
Stoyko, Stanislav; Voss, Leonard; He, Hua; ...
2015-09-24
New ternary arsenides AE3TrAs3 (AE = Sr, Ba; Tr = Al, Ga) and their phosphide analogs Sr3GaP3 and Ba3AlP3 have been prepared by reactions of the respective elements at high temperatures. Single-crystal X-ray diffraction studies reveal that Sr3AlAs3 and Ba3AlAs3 adopt the Ba3AlSb3-type structure (Pearson symbol oC56, space group Cmce, Z = 8). This structure is also realized for Sr3GaP3 and Ba3AlP3. Likewise, the compounds Sr3GaAs3 and Ba3GaAs3 crystallize with the Ba3GaSb3-type structure (Pearson symbol oP56, space group Pnma, Z = 8). Both structures are made up of isolated pairs of edge-shared AlPn4 and GaPn4 tetrahedra (Pn = pnictogen, i.e., P or As), separated by the alkaline-earth Sr2+ and Ba2+ cations. In both cases, there are no homoatomic bonds; hence, regardless of the slightly different atomic arrangements, both structures can be rationalized as valence-precise [AE2+]3[Tr3+][Pn3-]3, or rather [AE2+]6[Tr2Pn6]12-, i.e., as Zintl phases.
Metocean design parameter estimation for fixed platform based on copula functions
NASA Astrophysics Data System (ADS)
Zhai, Jinjin; Yin, Qilin; Dong, Sheng
2017-08-01
Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of wave height, wind speed, and current velocity data in the Bohai Sea are hindcast and sampled for a case study. Four kinds of distributions, namely, the Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for the marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely, the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models can maximize marginal information and the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those by univariate probability. Considering the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
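For the marginal-fitting step, scipy exposes the Pearson Type III law directly as scipy.stats.pearson3. The sketch fits it to synthetic annual maxima and reads off a 100-year return value; the data and return period are illustrative assumptions, and the copula construction itself is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical annual maximum wave heights (m); real input would be hindcast data.
h = stats.pearson3.rvs(skew=0.8, loc=4.0, scale=1.2, size=50, random_state=rng)

params = stats.pearson3.fit(h)                   # (skew, loc, scale)
h100 = stats.pearson3.ppf(1 - 1 / 100, *params)  # 100-year return value
print(f"100-year wave height ~ {h100:.2f} m")
```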
Svendstrup, Mathilde; Christiansen, Merete Skovdal; Magid, Erik; Hommel, Eva; Feldt-Rasmussen, Bo
2013-01-01
To evaluate whether increased urinary orosomucoid excretion rate (UOER) is an independent predictor of cardiovascular and all-cause mortality in type 2 diabetes (T2DM) and type 1 diabetes (T1DM) at 10 years of follow-up. We followed 430 patients with T2DM and 148 patients with T1DM until emigration, death or November 2011. We measured UOER levels in overnight urine samples. Descriptive data are given in the article. In patients with T2DM and T1DM, all-cause mortality (log-rank test, p<0.01 for both types) and cardiovascular mortality (log-rank test, p<0.01 for T2DM and p=0.04 for T1DM) were significantly higher in patients with increased UOER. Normoalbuminuric patients with T2DM and increased UOER levels had higher all-cause and cardiovascular mortality (log-rank test, p<0.01 for both types). UOER was independently predictive of all-cause (HR 1.52; 95% CI 1.10-2.09; p=0.01) and cardiovascular (HR 2.31; 95% CI 1.46-3.66; p<0.01) mortality in patients with T2DM, but not in patients with T1DM. UOER is an independent predictor of all-cause and cardiovascular mortality, even in normoalbuminuric patients with T2DM, at 10 years of follow-up. Further studies are needed in order to evaluate the prognostic and clinical relevance. Copyright © 2013 Elsevier Inc. All rights reserved.
Accurate computation of survival statistics in genome-wide studies.
Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J; Upfal, Eli
2015-05-01
A key challenge in genomics is to identify genetic variants that distinguish patients with different survival time following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications. This is because the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data, where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for populations of any size. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known association to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false-positive associations as more significant than these known associations.
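For contrast, here is a minimal sketch of the standard asymptotic log-rank test the paper critiques, using the lifelines package on hypothetical, deliberately unbalanced groups; a handful of mutated patients against a large wild-type population is exactly the regime where the asymptotic p-value becomes unreliable.

```python
from lifelines.statistics import logrank_test

# Hypothetical survival times (months) and event indicators (1 = death observed).
t_mut = [4, 7, 9, 12]                # 4 patients carrying the variant
e_mut = [1, 1, 1, 0]
t_wt = list(range(5, 105))           # 100 wild-type patients
e_wt = [1] * 80 + [0] * 20

res = logrank_test(t_mut, t_wt, event_observed_A=e_mut, event_observed_B=e_wt)
print(f"asymptotic log-rank p = {res.p_value:.4f}")
```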
Two in-vivo protocols for testing virucidal efficacy of handwashing and hand disinfection.
Steinmann, J; Nehrkorn, R; Meyer, A; Becker, K
1995-01-01
Whole-hands and fingerpads of seven volunteers were contaminated with poliovirus type 1 Sabin strain in order to evaluate virucidal efficacy of different forms of handwashing and handrub with alcohols and alcohol-based disinfectants. In the whole-hand protocol, handwashing with unmedicated soap for 5 min and handrubbing with 80% ethanol yielded a log reduction factor (RF) of > 2, whereas the log RF by 96.8% ethanol exceeded 3.2. With the fingerpad model ethanol produced a greater log RF than iso- or n-propanol. Comparing five commercial hand disinfectants and a chlorine solution (1.0% chloramine T-solution) for handrub, Desderman and Promanum, both composed of ethanol, yielded log RFs of 2.47 and 2.26 respectively after an application time of 60 s, similar to 1.0% chloramine T-solution (log RF of 2.28). Autosept, Mucasept, and Sterillium, based on n-propanol and/or isopropanol, were found to be significantly less effective (log RFs of 1.16, 1.06 and 1.52 respectively). A comparison of a modified whole-hand and the fingerpad protocol with Promanum showed similar results with the two systems suggesting both models are suitable for testing the in-vivo efficacy of handwashing agents and hand disinfectants which are used without any water.
Luchansky, John B; Porto-Fett, Anna C S; Shoyer, Bradley A; Phillips, John; Chen, Vivian; Eblen, Denise R; Cook, L Victor; Mohr, Tim B; Esteban, Emilio; Bauer, Nathan
2013-09-01
Both high-fat and low-fat ground beef (percent lean:fat = ca. 70:30 and 93:7, respectively) were inoculated with a 6-strain cocktail of non-O157:H7 Shiga toxin-producing Escherichia coli (STEC) or a five-strain cocktail of E. coli O157:H7 (ca. 7.0 log CFU/g). Patties were pressed (ca. 2.54 cm thick, ca. 300 g each) and then refrigerated (4°C, 18 to 24 h), or frozen (-18°C, 3 weeks), or frozen (-18°C, 3 weeks) and then thawed (4°C for 18 h or 21°C for 10 h) before being cooked on commercial gas or electric grills to internal temperatures of 60 to 76.6°C. For E. coli O157:H7, regardless of grill type or fat level, cooking refrigerated patties to 71.1 or 76.6°C decreased E. coli O157:H7 numbers from an initial level of ca. 7.0 log CFU/g to a final level of ≤1.0 log CFU/g, whereas decreases to ca. 1.1 to 3.1 log CFU/g were observed when refrigerated patties were cooked to 60.0 or 65.5°C. For patties that were frozen or freeze-thawed and cooked to 71.1 or 76.6°C, E. coli O157:H7 numbers decreased to ca. 1.7 or ≤0.7 log CFU/g. Likewise, pathogen numbers decreased to ca. 0.7 to 3.7 log CFU/g in patties that were frozen or freeze-thawed and cooked to 60.0 or 65.5°C. For STEC, regardless of grill type or fat level, cooking refrigerated patties to 71.1 or 76.6°C decreased pathogen numbers from ca. 7.0 to ≤0.7 log CFU/g, whereas decreases to ca. 0.7 to 3.6 log CFU/g were observed when refrigerated patties were cooked to 60.0 or 65.5°C. For patties that were frozen or freeze-thawed and cooked to 71.1 or 76.6°C, STEC numbers decreased to a final level of ca. 1.5 to ≤0.7 log CFU/g. Likewise, pathogen numbers decreased from ca. 7.0 to ca. 0.8 to 4.3 log CFU/g in patties that were frozen or freeze-thawed and cooked to 60.0 or 65.5°C. Thus, cooking ground beef patties that were refrigerated, frozen, or freeze-thawed to internal temperatures of 71.1 and 76.6°C was effective for eliminating ca. 5.1 to 7.0 log CFU of E. coli O157:H7 and STEC per g.
Logging residues in principal forest types of the Northern Rocky Mountains
Robert E. Benson; Joyce A. Schlieter
1980-01-01
An estimated 466 million ft³ of forest residue material (nonmerchantable, 3 inches in diameter and larger) is generated annually in the Northern Rocky Mountains (Montana, Idaho, Wyoming). Extensive studies of residues in the major forest types show a considerable portion is suited for various products. The lodgepole pine type has the greatest potential for increased...
James T. Bones; David R. Dickson
1970-01-01
Early in 1969 a survey of the veneer industry in the Northeast was made to determine veneer-log receipts for calendar year 1968, by species and states of origin. The survey revealed the following changes in the past 5 years: A drop in the number of container veneer plants, offset by an increase in the number of other types of veneer plants. A 15-percent increase in...
Predicting permeability with NMR imaging in the Edwards Limestone/Stuart City Trend
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dewitt, H.; Globe, M.; Sorenson, R.
1996-09-01
Determining pore size and pore geometry relationships in carbonate rocks and relating both to permeability is difficult using traditional logging methods. This problem is further complicated by the presence of abundant microporosity (pore size less than 62 microns) in the Edwards Limestone. The use of Nuclear Magnetic Resonance Imaging (NMR) allows for an alternative approach to evaluating the pore types present by examining the response of hydrogen nuclei contained within the free fluid pore space. By testing the hypothesis that larger pore types exhibit an NMR signal decay much slower than smaller pore types, an estimate of the pore type present (i.e., vuggy, interparticle, or micropores) can be inferred. Calibration of the NMR decay curve to known samples with measured petrophysical properties allows for improved predictability of pore types and permeability. The next stage of the analysis involves the application of the calibration technique to the borehole environment using an NMR logging tool to more accurately predict production performance.
Log corrections to entropy of three dimensional black holes with soft hair
NASA Astrophysics Data System (ADS)
Grumiller, Daniel; Perez, Alfredo; Tempo, David; Troncoso, Ricardo
2017-08-01
We calculate log corrections to the entropy of three-dimensional black holes with "soft hairy" boundary conditions. Their thermodynamics possesses some special features that preclude a naive direct evaluation of these corrections, so we follow two different approaches. The first one exploits that the BTZ black hole belongs to the spectrum of Brown-Henneaux as well as soft hairy boundary conditions, so that the respective log corrections are related through a suitable change of the thermodynamic ensemble. In the second approach the analogue of modular invariance is considered for dual theories with anisotropic scaling of Lifshitz type with dynamical exponent z at the boundary. On the gravity side such scalings arise for KdV-type boundary conditions, which provide a specific 1-parameter family of multi-trace deformations of the usual AdS3/CFT2 setup, with Brown-Henneaux corresponding to z = 1 and soft hairy boundary conditions to the limiting case z → 0+. Both approaches agree in the case of BTZ black holes for any non-negative z. Finally, for soft hairy boundary conditions we show that not only the leading term, but also the log corrections to the entropy of black flowers endowed with affine û(1) soft hair charges exclusively depend on the zero modes and hence coincide with the ones for BTZ black holes.
Phage inactivation of Staphylococcus aureus in fresh and hard-type cheeses.
Bueno, Edita; García, Pilar; Martínez, Beatriz; Rodríguez, Ana
2012-08-01
Bacteriophages are regarded as natural antibacterial agents in food since they are able to specifically infect and lyse food-borne pathogenic bacteria without disturbing the indigenous microbiota. Two obligately lytic Staphylococcus aureus bacteriophages (vB_SauS-phi-IPLA35 and vB_SauS-phi-SauS-IPLA88), previously isolated from the dairy environment, were evaluated for their potential as biocontrol agents against this pathogenic microorganism in both fresh and hard-type cheeses. Pasteurized milk was contaminated with S. aureus Sa9 (about 10⁶ CFU/mL) and a cocktail of the two lytic phages (about 10⁶ PFU/mL) was also added. For control purposes, cheeses were manufactured without addition of phages. In both types of cheeses, the presence of phages resulted in a marked decrease of S. aureus viable counts during curdling. In test fresh cheeses, a reduction of 3.83 log CFU/g of S. aureus occurred in 3 h compared with control cheese, and viable counts were under the detection limits after 6 h. The staphylococcal strain was undetected in both test and control cheeses at the end of the curdling process (24 h) and, of note, no re-growth occurred during cold storage. In hard cheeses, the presence of phages resulted in a continuous reduction of staphylococcal counts. In curd, viable counts of S. aureus were reduced by 4.64 log CFU/g compared with the control cheeses. At the end of ripening, 1.24 log CFU/g of the staphylococcal strain was still detected in test cheeses whereas 6.73 log CFU/g was present in control cheeses. Starter strains were not affected by the presence of phages in the cheese-making processes and cheeses maintained their expected physico-chemical properties. Copyright © 2012 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leibler, C. N.; Berger, E.
2010-12-10
We present multi-band optical and near-infrared observations of 19 short γ-ray burst (GRB) host galaxies, aimed at measuring their stellar masses and population ages. The goals of this study are to evaluate whether short GRBs track the stellar mass distribution of galaxies, to investigate the progenitor delay time distribution, and to explore any connection between long and short GRB progenitors. Using single stellar population models we infer masses of log(M*/M⊙) ≈ 8.8-11.6, with a median of ⟨log(M*/M⊙)⟩ ≈ 10.1, and population ages of τ* ≈ 0.03-4.4 Gyr with a median of ⟨τ*⟩ ≈ 0.3 Gyr. We further infer maximal masses of log(M*/M⊙) ≈ 9.7-11.9 by assuming stellar population ages equal to the age of the universe at each host's redshift. Comparing the distribution of stellar masses to the general galaxy mass function, we find that short GRBs track the cosmic stellar mass distribution only if the late-type hosts generally have maximal masses. However, there is an apparent dearth of early-type hosts compared to the equal contribution of early- and late-type galaxies to the cosmic stellar mass budget. Similarly, the short GRB rate per unit old stellar mass appears to be elevated in the late-type hosts. These results suggest that stellar mass may not be the sole parameter controlling the short GRB rate, and raise the possibility of a two-component model with both mass and star formation playing a role (reminiscent of the case for Type Ia supernovae). If short GRBs in late-type galaxies indeed track the star formation activity, the resulting typical delay time is ≈0.2 Gyr, while those in early-type hosts have a typical delay of ≈3 Gyr. Using the same stellar population models, we fit the broadband photometry for 22 long GRB host galaxies in a similar redshift range and find that they have significantly lower masses and younger population ages, with ⟨log(M*/M⊙)⟩ ≈ 9.1 and ⟨τ*⟩ ≈ 0.06 Gyr, respectively; their maximal masses are similarly lower, ⟨log(M*/M⊙)⟩ ≈ 9.6, and as expected do not track the galaxy mass function. Most importantly, the two GRB host populations remain distinct even if we consider only the star-forming hosts of short GRBs, supporting our previous findings (based on star formation rates and metallicities) that the progenitors of long and short GRBs in late-type galaxies are distinct. Given the much younger stellar populations of long GRB hosts (and hence of long GRB progenitors), and the substantial differences in host properties, we caution against the use of Type I and II designations for GRBs since this may erroneously imply that all GRBs which track star formation activity share the same massive star progenitors.
75 FR 15726 - Polyvinyl Alcohol From Taiwan; Determination
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-30
...\\ Vice Chairman Pearson and Commissioners Okun and Lane dissented, having determined that there is no... remand, Vice Chairman Pearson and Commissioners Okun and Lane reaffirmed their negative preliminary...
Well log characterization of natural gas-hydrates
Collett, Timothy S.; Lee, Myung W.
2012-01-01
In the last 25 years there have been significant advancements in the use of well-logging tools to acquire detailed information on the occurrence of gas hydrates in nature: whereas wireline electrical resistivity and acoustic logs were formerly used to identify gas-hydrate occurrences in wells drilled in Arctic permafrost environments, more advanced wireline and logging-while-drilling (LWD) tools are now routinely used to examine the petrophysical nature of gas-hydrate reservoirs and the distribution and concentration of gas hydrates within various complex reservoir systems. Resistivity- and acoustic-logging tools are the most widely used for estimating the gas-hydrate content (i.e., reservoir saturations) in various sediment types and geologic settings. Recent integrated sediment coring and well-log studies have confirmed that electrical-resistivity and acoustic-velocity data can yield accurate gas-hydrate saturations in sediment grain-supported (isotropic) systems such as sand reservoirs, but more advanced log-analysis models are required to characterize gas hydrate in fractured (anisotropic) reservoir systems. New well-logging tools designed to make directionally oriented acoustic and propagation-resistivity log measurements provide the data needed to analyze the acoustic and electrical anisotropic properties of both highly interbedded and fracture-dominated gas-hydrate reservoirs. Advancements in nuclear magnetic resonance (NMR) logging and wireline formation testing (WFT) also allow for the characterization of gas hydrate at the pore scale. Integrated NMR and formation testing studies from northern Canada and Alaska have yielded valuable insight into how gas hydrates are physically distributed in sediments and the occurrence and nature of pore fluids (i.e., free water along with clay- and capillary-bound water) in gas-hydrate-bearing reservoirs. Information on the distribution of gas hydrate at the pore scale has provided invaluable insight on the mechanisms controlling the formation and occurrence of gas hydrate in nature along with data on gas-hydrate reservoir properties (i.e., porosities and permeabilities) needed to accurately predict gas production rates for various gas-hydrate production schemes.
Evaluation of Hirst-type spore trap to monitor environmental fungal load in hospital.
Dananché, Cédric; Gustin, Marie-Paule; Cassier, Pierre; Loeffert, Sophie Tiphaine; Thibaudon, Michel; Bénet, Thomas; Vanhems, Philippe
2017-01-01
The main purpose was to validate the use of an outdoor-indoor volumetric impaction sampler with Hirst-type spore traps (HTSTs) to continuously monitor fungal load in order to prevent invasive fungal infections during major structural work in hospital settings. For 4 weeks, outdoor fungal loads were quantified continuously by 3 HTSTs. Indoor air was sampled by both HTST and viable impaction sampler. Results were expressed as particles/m³ (HTST) or colony-forming units (CFU)/m³ (biocollector). Paired comparisons by day were made with Wilcoxon's paired signed-rank test or paired Student's t-test as appropriate. Paired airborne spore levels were correlated 2 by 2, after log-transformation, with Pearson's cross-correlation. Concordance was calculated with the kappa coefficient (κ). Median total fungal loads (TFLs) sampled by the 3 outdoor HTSTs were 3,025.0, 3,287.5 and 3,625.0 particles/m³ (P = 0.6, 0.6 and 0.3). Concordance between Aspergillaceae fungal loads (AFLs, including Aspergillus spp. + Penicillium spp.) was low (κ = 0.2). A low positive correlation was found between TFLs sampled with outdoor HTST and indoor HTST when applying a 4-hour time lag, r = 0.30, 95% CI (0.23-0.43), P < 0.001. In indoor air, Aspergillus spp. were detected by the viable impaction sampler in 63.1% of the samples, whereas AFLs were found by indoor HTST in only 3.6% of the samples. Concordance between Aspergillus spp. loads and AFLs sampled with the 2 methods was very low (κ = 0.1). This study showed a 4-hour time lag between the increase of outdoor and indoor TFLs, possibly due to the insulation and aeraulic flow of the building. Outdoor HTSTs may permit rapid identification (within 48 hours) of time periods with high outdoor fungal loads. An identified drawback is that too small a read sample area did not seem to enable efficient detection of Aspergillaceae spores. Indoor HTSTs may not be recommended at this time, and outdoor HTSTs need further study. Air sampling by viable impaction sampler remains the reference tool for quantifying fungal contamination of indoor air in hospitals.
Kwiatkowska-Stenzel, Agnieszka; Witkowska, Dorota; Sowińska, Janina; Stopyra, Artur
2017-12-01
The choice of bedding material affects the quality of air in a stable and, consequently, the respiratory health of horses and humans. The risk of respiratory problems can be mitigated by improving the quality of air in the stable. The choice of bedding material is particularly important in cold climate conditions where horses are kept indoors throughout the year. This study examined the impact of three bedding materials: straw (S), peat with shavings (PS), and crushed wood pellets (CWP). The investigated factors were air contamination, including dust contamination and microbial (bacterial and fungal) contamination, and the condition of the equine respiratory tract. The condition of the respiratory tract was evaluated based on the results of arterial blood biochemistry tests and endoscopic evaluations of the upper respiratory tract. Mechanical dust contamination was lowest for PS (1.09 mg/m³) and highest for CWP (4.07 mg/m³). Bacterial contamination (in CFU, colony-forming units) was highest for PS (5.14 log10 CFU/m³) and lowest for CWP (4.81 log10 CFU/m³). Fungal air contamination was lowest for CWP (4.54 log10 CFU/m³) and highest for S (4.82 log10 CFU/m³) and PS (4.88 log10 CFU/m³). An analysis of physiological indicators revealed that all horses were clinically healthy regardless of the type of applied bedding. The type of bedding material did not exert a clear influence on arterial blood biochemistry or the results of endoscopic evaluations of the respiratory tract; however, the use of bedding materials alternative to straw improved endoscopy results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Berger, Philip; Messner, Michael J; Crosby, Jake; Vacs Renwick, Deborah; Heinrich, Austin
2018-05-01
Spore reduction can be used as a surrogate measure of Cryptosporidium natural filtration efficiency. Estimates of log10 (log) reduction were derived from spore measurements in paired surface and well water samples in Casper, Wyoming and Kearney, Nebraska. We found that these data were suitable for testing the hypothesis (H₀) that the average reduction at each site was 2 log or less, using a one-sided Student's t-test. After establishing data quality objectives for the test (expressed as tolerable Type I and Type II error rates), we evaluated the test's performance as a function of (a) the true log reduction, (b) the number of paired samples assayed and (c) the variance of observed log reductions. We found that 36 paired spore samples are sufficient to achieve the objectives over a wide range of variance, including the variances observed in the two data sets. We also explored the feasibility of using smaller numbers of paired spore samples to supplement bioparticle counts for screening purposes in alluvial aquifers, to differentiate wells with large-volume surface-water-induced recharge from wells with negligible surface-water-induced recharge. With key assumptions, we propose a normal statistical test of the same hypothesis (H₀), but with different performance objectives. As few as six paired spore samples appear adequate as a screening metric to supplement bioparticle counts to differentiate wells in alluvial aquifers with large-volume surface-water-induced recharge. For the case when all available information (including failure to reject H₀ based on the limited paired spore data) leads to the conclusion that wells have large surface-water-induced recharge, we recommend further evaluation using additional paired biweekly spore samples. Published by Elsevier GmbH.
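The core procedure is a one-sided, one-sample Student's t-test of H₀: mean log reduction ≤ 2. A sketch with simulated paired reductions; the mean, spread, and seed are assumptions, while n = 36 matches the paper's recommended sample count.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Hypothetical paired log10 spore reductions (surface vs. well, one value per pair).
lr = rng.normal(2.6, 0.9, 36)

# Reject H0 (mean <= 2 log) only when the evidence clearly exceeds 2 log.
t, p = stats.ttest_1samp(lr, popmean=2.0, alternative='greater')
print(f"t = {t:.2f}, one-sided p = {p:.4f}")
```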
Blood harmane concentrations and dietary protein consumption in essential tremor
Louis, E.D.; Zheng, W.; Applegate, L.; Shi, L.; Factor-Litvak, P.
2016-01-01
Background: β-Carboline alkaloids (e.g., harmane) are highly tremorogenic chemicals. Animal protein (meat) is the major dietary source of these alkaloids. The authors previously demonstrated that blood harmane concentrations were elevated in patients with essential tremor (ET) vs controls. Whether this difference is due to greater animal protein consumption by patients or their failure to metabolize harmane is unknown. Objective: The aim of this study was to determine whether patients with ET and controls differ with regard to 1) daily animal protein consumption and 2) the correlation between animal protein consumption and blood harmane concentration. Methods: Data on current diet were collected with a semiquantitative food frequency questionnaire, and daily calories and consumption of animal protein and other food types were calculated. Blood harmane concentrations were log-transformed (logHA). Results: The mean logHA was higher in 106 patients than in 161 controls (0.61 ± 0.67 vs 0.43 ± 0.72, concentrations in 10⁻¹⁰ g/mL; p = 0.035). Patients and controls consumed similar amounts of animal protein (50.2 ± 19.6 vs 49.4 ± 19.1 g/day, p = 0.74) and other food types (animal fat, carbohydrates, vegetable fat) and had similar caloric intakes. In controls, logHA was correlated with daily consumption of animal protein (r = 0.24, p = 0.003); in patients, there was no such correlation (r = −0.003, p = 0.98). Conclusions: The similarity between patients and controls in daily animal protein consumption and the absence of the normal correlation between daily animal protein consumption and logHA in patients suggest that another factor (e.g., a metabolic defect) may be increasing blood harmane concentration in patients. PMID:16087903
Constraining the weak-wind problem: an XMM-HST campaign for the magnetic O9.7 V star HD 54879
NASA Astrophysics Data System (ADS)
Shenar, T.; Oskinova, L. M.; Järvinen, S. P.; Luckas, P.; Hainich, R.; Todt, H.; Hubrig, S.; Sander, A. A. C.; Ilyin, I.; Hamann, W.-R.
2018-01-01
Mass-loss rates of massive, late-type main-sequence stars are much weaker than currently predicted, but their true values are very difficult to measure. We suggest that confined stellar winds of magnetic stars can be exploited to constrain the true mass-loss rates Ṁ of massive main-sequence stars. We acquired UV, X-ray, and optical amateur data of HD 54879 (O9.7 V), one of a few O-type stars with a detected atmospheric magnetic field (Bd ≳ 2 kG). We analyze these data with the Potsdam Wolf-Rayet (PoWR) and XSPEC codes. We can roughly estimate the mass-loss rate the star would have in the absence of a magnetic field as log Ṁ(B = 0) ≈ -9.0 M⊙ yr⁻¹. Since the wind is partially trapped within the Alfvén radius rA ≳ 12 R*, the true mass-loss rate of HD 54879 is log Ṁ ≲ -10.2 M⊙ yr⁻¹. Moreover, we find that the microturbulent, macroturbulent, and projected rotational velocities are lower than previously suggested (< 4 km s⁻¹). An initial mass of 16 M⊙ and an age of 5 Myr are inferred. We derive a mean X-ray emitting temperature of log (TX/K) = 6.7 and an X-ray luminosity of log (LX/erg s⁻¹) = 32. The latter implies a significant X-ray excess (log LX/LBol ≈ -6.0), most likely stemming from collisions at the magnetic equator. A tentative period of P ≈ 5 yr is derived from variability of the Hα line. Our study confirms that strongly magnetized stars lose little or no mass, and supplies important constraints on the weak-wind problem of massive main-sequence stars.
Xiong, Xiaoping; Wu, Jianrong
2017-01-01
The treatment of cancer has progressed dramatically in recent decades, such that it is no longer uncommon to see a cure or long-term survival in a significant proportion of patients with various types of cancer. To adequately account for the cure fraction when designing clinical trials, cure models should be used. In this article, a sample size formula for the weighted log-rank test is derived under the fixed alternative hypothesis for the proportional hazards cure models. Simulation showed that the proposed sample size formula provides an accurate estimation of sample size for designing clinical trials under the proportional hazards cure models. Copyright © 2016 John Wiley & Sons, Ltd.
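For orientation only, the classical Schoenfeld formula gives the required event count for the ordinary log-rank test; the paper's contribution is the analogous derivation under proportional hazards cure models, which is not reproduced here.

```python
import math
from scipy.stats import norm

def schoenfeld_events(hr: float, alpha: float = 0.05, power: float = 0.8,
                      p1: float = 0.5) -> float:
    """Events needed for the ordinary (non-cure) log-rank test, per Schoenfeld;
    NOT the cure-model formula derived in the paper."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 / (p1 * (1 - p1) * math.log(hr) ** 2)

# e.g., hazard ratio 0.7, 80% power, two-sided alpha 0.05, 1:1 allocation.
print(f"{schoenfeld_events(0.7):.0f} events")  # ~247
```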
Hallberg, Laura L.; Mason, Jon P.
2007-01-01
The U.S. Geological Survey, in cooperation with the Wyoming State Engineer's Office, created a hydrogeologic database for southwestern Laramie County, Wyoming. The database contains records from 166 wells and test holes drilled during 1931-2006. Several types of information, including well construction; well or test hole locations; lithologic logs; gamma, neutron, spontaneous-potential, and single-point resistivity logs; water levels; and transmissivities and storativities estimated from aquifer tests, are available in the database. Most wells and test holes in the database have records containing information about construction, location, and lithology; 77 wells and test holes have geophysical logs; 70 wells have tabulated water-level data; and 60 wells have records of aquifer-test results.
Measuring User Similarity Using Electric Circuit Analysis: Application to Collaborative Filtering
Yang, Joonhyuk; Kim, Jinwook; Kim, Wonjoon; Kim, Young Hwan
2012-01-01
We propose a new technique of measuring user similarity in collaborative filtering using electric circuit analysis. Electric circuit analysis is used to measure the potential differences between nodes on an electric circuit. In this paper, by applying this method to transaction networks comprising users and items, i.e., the user-item matrix, and by using the full information about the relationship structure of users in the perspective of item adoption, we overcome the limitations of the one-to-one similarity calculation approach, such as the Pearson correlation, Tanimoto coefficient, and Hamming distance, in collaborative filtering. We found that electric circuit analysis can be successfully incorporated into recommender systems and has the potential to significantly enhance predictability, especially when combined with user-based collaborative filtering. We also propose four types of hybrid algorithms that combine the Pearson correlation method and electric circuit analysis. One of the algorithms exceeds the performance of the traditional collaborative filtering by 37.5% at most. This work opens new opportunities for interdisciplinary research between physics and computer science and the development of new recommendation systems. PMID:23145095
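The one-to-one Pearson baseline that the paper contrasts with circuit analysis can be written in a few lines: correlate two users' ratings over their co-rated items only. A sketch with a hypothetical user-item matrix (0 = unrated):

```python
import numpy as np

# Hypothetical user-item rating matrix (rows = users, columns = items).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4]], dtype=float)

def pearson_sim(u: np.ndarray, v: np.ndarray) -> float:
    """Pearson similarity restricted to items both users rated."""
    mask = (u > 0) & (v > 0)
    if mask.sum() < 2:
        return 0.0
    uc = u[mask] - u[mask].mean()
    vc = v[mask] - v[mask].mean()
    denom = np.linalg.norm(uc) * np.linalg.norm(vc)
    return float(uc @ vc / denom) if denom else 0.0

print(f"sim(user 0, user 2) = {pearson_sim(R[0], R[2]):.2f}")
```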
2 × 2 Tables: a note on Campbell's recommendation.
Busing, F M T A; Weaver, B; Dubois, S
2016-04-15
For 2 × 2 tables, Egon Pearson's N - 1 chi-squared statistic is theoretically more sound than Karl Pearson's chi-squared statistic, and provides more accurate p values. Moreover, Egon Pearson's N - 1 chi-squared statistic is equal to the Mantel-Haenszel chi-squared statistic for a single 2 × 2 table, and as such, is often available in statistical software packages like SPSS, SAS, Stata, or R, which facilitates compliance with Ian Campbell's recommendations. Copyright © 2015 John Wiley & Sons, Ltd.
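Since Egon Pearson's statistic equals Karl Pearson's chi-squared scaled by (N − 1)/N, it is easy to recover from standard output. A sketch with a hypothetical 2 × 2 table, using scipy's chi2_contingency without the Yates correction:

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

table = np.array([[12, 8],
                  [5, 15]])  # hypothetical 2 x 2 counts
n = table.sum()

chi2_karl, _, _, _ = chi2_contingency(table, correction=False)  # Karl Pearson
chi2_n1 = chi2_karl * (n - 1) / n                               # Egon Pearson (N - 1)
p_n1 = chi2.sf(chi2_n1, df=1)
print(f"N-1 chi-squared = {chi2_n1:.3f}, p = {p_n1:.4f}")
```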
Wintrob, Zachary A P; Hammel, Jeffrey P; Nimako, George K; Gaile, Dan P; Forrest, Alan; Ceacareanu, Alice C
2017-04-01
Growth factor profiles could be influenced by the utilization of exogenous insulin. The data presented show the relationship between pre-existing use of injectable insulin in women diagnosed with breast cancer and type 2 diabetes mellitus, the growth factor profiles at the time of breast cancer diagnosis, and subsequent cancer outcomes. A Pearson correlation analysis evaluating the relationship between growth factors stratified by insulin use and controls is also provided.
3. Historic American Buildings Survey, Elmer R. Pearson, Photographer, 1968 ...
3. Historic American Buildings Survey, Elmer R. Pearson, Photographer, 1968 ELEVATION, LOOKING NORTHWEST. - Shaker Centre Family, Broom Shop, East side of Oxford Road, White Water Park, Hamilton County, OH
76 FR 14372 - Glenn/Colusa County Resource Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-16
... agenda items contact Eduardo Olmedo, DFO, 825 N. Humboldt Ave., Willows, CA 95988 or Laurie Pearson..., Stonyford, CA 95979. FOR FURTHER INFORMATION CONTACT: Laurie Pearson, Glenn/Colusa RAC Coordinator, USDA...
Assessing the relationship between groundwater nitrate and animal feeding operations in Iowa (USA)
Zirkle, Keith W.; Nolan, Bernard T.; Jones, Rena R.; Weyer, Peter J.; Ward, Mary H.; Wheeler, David C.
2016-01-01
Nitrate-nitrogen is a common contaminant of drinking water in many agricultural areas of the United States of America (USA). Ingested nitrate from contaminated drinking water has been linked to an increased risk of several cancers, specific birth defects, and other diseases. In this research, we assessed the relationship between animal feeding operations (AFOs) and groundwater nitrate in private wells in Iowa. We characterized AFOs by swine and total animal units and type (open, confined, or mixed), and we evaluated the number and spatial intensities of AFOs in proximity to private wells. The types of AFO indicate the extent to which a facility is enclosed by a roof. Using linear regression models, we found significant positive associations between the total number of AFOs within 2 km of a well (p trend < 0.001), number of open AFOs within 5 km of a well (p trend < 0.001), and number of mixed AFOs within 30 km of a well (p trend < 0.001) and the log nitrate concentration. Additionally, we found significant increases in log nitrate in the top quartiles for AFO spatial intensity, open AFO spatial intensity, and mixed AFO spatial intensity compared to the bottom quartile (0.171 log(mg/L), 0.319 log(mg/L), and 0.541 log(mg/L), respectively; all p < 0.001). We also explored the spatial distribution of nitrate-nitrogen in drinking wells and found significant spatial clustering of high-nitrate wells (> 5 mg/L) compared with low-nitrate (≤ 5 mg/L) wells (p = 0.001). A generalized additive model for high-nitrate status identified statistically significant areas of risk for high levels of nitrate. Adjustment for some AFO predictor variables explained a portion of the elevated nitrate risk. These results support a relationship between animal feeding operations and groundwater nitrate concentrations and differences in nitrate loss from confined AFOs vs. open or mixed types.
Activity trends in young solar-type stars
NASA Astrophysics Data System (ADS)
Lehtinen, J.; Jetsu, L.; Hackman, T.; Kajatkari, P.; Henry, G. W.
2016-04-01
Aims: We study a sample of 21 young and active solar-type stars with spectral types ranging from late F to mid K and characterize the behaviour of their activity. Methods: We apply the continuous period search (CPS) time series analysis method on Johnson B- and V-band photometry of the sample stars, collected over a period of 16 to 27 years. Using the CPS method, we estimate the surface differential rotation and determine the existence and behaviour of active longitudes and activity cycles on the stars. We supplement the time series results by calculating new log R'_HK = log(F'_HK/(σT_eff^4)) emission indices for the stars from high-resolution spectroscopy. Results: The measurements of the photometric rotation period variations reveal a positive correlation between the relative differential rotation coefficient and the rotation period as k ∝ P_rot^1.36, but do not reveal any dependence of the differential rotation on the effective temperature of the stars. Secondary period searches reveal activity cycles in 18 of the stars and temporary or persistent active longitudes in 11 of them. The activity cycles fall into specific activity branches when examined in the log(P_rot/P_cyc) vs. log Ro^-1 diagram, where Ro^-1 = 2Ωτ_c, or in the log(P_rot/P_cyc) vs. log R'_HK diagram. We find a new split into sub-branches within this diagram, indicating multiple simultaneously present cycle modes. Active longitudes appear to be present only on the more active stars. There is a sharp break at approximately log R'_HK = -4.46 separating the less active stars with long-term axisymmetric spot distributions from the more active ones with non-axisymmetric configurations. In seven of the eleven stars with clearly detected long-term non-axisymmetric spot activity, the estimated active longitude periods are significantly shorter than the mean photometric rotation periods. This systematic trend can be interpreted either as a sign of the active longitudes being sustained from a deeper level in the stellar interior than the individual spots, or as azimuthal dynamo waves exhibiting prograde propagation. Based on observations made as part of the automated astronomy program at Tennessee State University and with the Nordic Optical Telescope, operated on the island of La Palma jointly by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. Photometric data and results are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/588/A38
Quantitative Literacy: Working with Log Graphs
NASA Astrophysics Data System (ADS)
Shawl, S.
2013-04-01
The need for working with and understanding different types of graphs is a common occurrence in everyday life. Examples include anything having to do with investments, serving as an educated juror in a case that involves evidence presented graphically, and understanding many aspects of our current political discourse. Within a science class, graphs play a crucial role in presenting and interpreting data. In astronomy, where the range of graphed values spans many orders of magnitude, log axes must be used and understood. Experience shows that students do not understand how to read and interpret log axes or how they differ from linear ones. Alters (1996), in a study of college students in an algebra-based physics class, found little understanding of log plotting. The purpose of this poster is to show the method and progression I have developed for use in my "ASTRO 101" class, with the goal of helping students better understand the H-R diagram, the mass-luminosity relationship, and digital spectra.
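As a worked illustration of why log axes matter in this setting, the sketch below plots a power law, a stand-in for the mass-luminosity relationship mentioned above, on linear and log-log axes. The data are synthetic and the exponent 3.5 is the textbook approximation, not taken from the poster.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical main-sequence stars: the relation L ~ M**3.5 appears as a
# straight line of slope ~3.5 only on log-log axes.
M = np.logspace(-1, 1.5, 50)    # stellar masses, solar units
L = M ** 3.5                    # luminosities, solar units

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(8, 3.5))
ax_lin.plot(M, L)
ax_lin.set_title("linear axes: curve hugs the axes")
ax_log.loglog(M, L)
ax_log.set_title("log-log axes: power law is a line")
for ax in (ax_lin, ax_log):
    ax.set_xlabel("mass (solar masses)")
    ax.set_ylabel("luminosity (solar units)")
plt.tight_layout()
plt.show()
```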
[Utilization suitability of forest resources in typical forest zone of Changbai Mountains].
Hao, Zhanqing; Yu, Deyong; Xiong, Zaiping; Ye, Ji
2004-10-01
Conservation of natural forest does not simply mean a ban on logging. The Northeast China Forest Region has a logging quota for mature forest as part of the natural forest conservation project, so determining logging sites rationally and scientifically is very important. Recent theories of forest resources management advocate that the utilization of forest resources should adhere to the principle of sustainable use and pay attention to the ecological function of forest resources. According to the logging standards, RS and GIS techniques can be used to detect the precise location of forest resources and to obtain information on forest areas and types, and thus provide more rational and scientific support for spatial choices in the future utilization of forest resources. In this paper, the Lushuihe Forest Bureau was selected as a typical case in the Changbai Mountains Forest Region to assess the utilization conditions of forest resources, and some advice on spatial choices for the future management of forest resources in the study area is offered.
Toward prediction of alkane/water partition coefficients.
Toulmin, Anita; Wood, J Matthew; Kenny, Peter W
2008-07-10
Partition coefficients were measured for 47 compounds in the hexadecane/water (P_hxd) and 1-octanol/water (P_oct) systems. Some types of hydrogen bond acceptor presented by these compounds to the partitioning systems are not well represented in the literature of alkane/water partitioning. The difference, ΔlogP, between logP_oct and logP_hxd is a measure of the hydrogen bonding potential of a molecule and is identified as a target for predictive modeling. Minimized molecular electrostatic potential (V_min) was shown to be an effective predictor of the contribution of hydrogen bond acceptors to ΔlogP. Carbonyl oxygen atoms were found to be stronger hydrogen bond acceptors for their electrostatic potential than heteroaromatic nitrogen or oxygen bound to hypervalent sulfur or nitrogen. Values of V_min calculated for hydrogen-bonded complexes were used to explore polarization effects. Predicted logP_hxd and ΔlogP were shown to be more effective than logP_oct for modeling brain penetration for a data set of 18 compounds.
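The ΔlogP descriptor itself is simple arithmetic once both partition coefficients are in hand; a minimal sketch with hypothetical measured values:

```python
# Hypothetical measured partition coefficients for one compound
logP_oct = 2.1   # 1-octanol/water system
logP_hxd = 0.4   # hexadecane/water system

# DeltalogP, the hydrogen-bonding descriptor defined in the abstract:
# larger values indicate stronger hydrogen-bonding potential.
delta_logP = logP_oct - logP_hxd
print(f"DeltalogP = {delta_logP:.2f}")
```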
Nonuniversality of the Archie exponent due to multifractality of resistivity well logs
NASA Astrophysics Data System (ADS)
Dashtian, Hassan; Yang, Yafan; Sahimi, Muhammad
2015-12-01
Archie's law expresses a relation between the formation factor F of porous media and their porosity ϕ, F ∝ ϕ^(-m), where m is the Archie or cementation exponent. Despite widespread use of Archie's law, the value of m and whether it is universal and independent of the type of reservoir have remained controversial. We analyze various porosity and resistivity logs along 36 wells in six Iranian oil and gas reservoirs using wavelet transform coherence and multifractal detrended fluctuation analysis. m is estimated for two sets of data: one set contains the resistivity data that include those segments of the well that contain significant clay content and one without. The analysis indicates that the well logs are multifractal and that due to the multifractality the exponent m is nonuniversal. Thus, analysis of the resistivity of laboratory or outcrop samples that are not multifractal yields estimates of m that are not applicable to well logs in oil or gas reservoirs.
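A common way to estimate m from paired (F, ϕ) measurements is a straight-line fit in log-log space, since F ∝ ϕ^(-m) becomes log F = log a - m log ϕ. The sketch below uses hypothetical values, not the paper's well-log data:

```python
import numpy as np

# Hypothetical (porosity, formation factor) pairs from one log segment
phi = np.array([0.08, 0.12, 0.18, 0.25])
F = np.array([250.0, 105.0, 38.0, 16.0])

# Archie's law F = a * phi**(-m) is linear in log-log space:
# log10(F) = log10(a) - m * log10(phi), so -m is the fitted slope.
slope, intercept = np.polyfit(np.log10(phi), np.log10(F), 1)
m = -slope
print(f"estimated cementation exponent m = {m:.2f}")
```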
Fracture characteristics of gas hydrate-bearing sediments in the Ulleung Basin, East Sea
NASA Astrophysics Data System (ADS)
Kim, Gil Young; Narantsetseg, Buyanbat; Yoo, Dong Geun; Ryu, Byong Jae
2015-04-01
LWD (logging-while-drilling) and wireline logging, together with coring (including pressure coring), were conducted during the UBGH2 (Ulleung Basin Gas Hydrate) expedition. LWD data were obtained from 13 logged sites, and most of the sites showed log responses typical of the presence of gas hydrate. In particular, prominent fractures were clearly identified on the resistivity borehole images from the seismic chimney structures. The strike and dip of each fracture at all sites were calculated and displayed on stereographic plots and rosette diagrams. Fracture orientations on the stereographic plots are broadly distributed, indicating that the fracture pattern is not well ordered, although the maximum horizontal stress is dominantly NW-SE at most sites. This indicates that accurate horizontal stress directions cannot be completely resolved from the fractures. Moreover, the fractures may have developed from overburden compaction (e.g., a gravitational effect) associated with sediment dewatering after deposition. Thus, various factors affecting fracture formation should be considered when interpreting the origin of the fractures. Nevertheless, the results of the fracture analysis can be used to interpret the distribution pattern and type of gas hydrate in the Ulleung Basin.
Paterson, Gord; Liu, Jian; Haffner, G Douglas; Drouillard, Ken G
2010-08-01
This research investigated dose-dependent whole body and fecal elimination of 39 polychlorinated biphenyl (PCB) congeners spanning a range of chemical hydrophobicities (log K_ow) by the Japanese koi (Cyprinus carpio). Both whole body (k_tot) and fecal (k_eg) PCB congener elimination rate coefficients were negatively correlated with log K_ow and observed to be dose independent. PCB congener k_tot values determined for koi were representative of those generated for fish species of similar size and reared at near optimal temperatures. For persistent and metabolized-type PCB congeners, no significant difference was observed between the regressions describing the relationships between k_tot and log K_ow for these congeners. Individual PCB congener k_eg coefficient estimates ranged between 1% and 20% of their respective k_tot values but averaged only 5% of the magnitude of k_tot over a log K_ow range of 5.7-7.8. These results verify first-order kinetics of PCB elimination by a fish species and demonstrate that the relative contribution of k_eg to k_tot is negligible, even for highly hydrophobic (log K_ow > 6.5) compounds. It was concluded that gill elimination is the primary mechanism of elimination for persistent organic pollutants such as PCBs by Japanese koi.
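First-order elimination of the kind verified here is usually quantified by a log-linear fit of concentration against time; a minimal sketch with hypothetical depuration data (the symbol k_tot follows the abstract's notation):

```python
import numpy as np

# Hypothetical whole-body PCB concentrations (ng/g) over depuration days
t = np.array([0, 20, 40, 80, 120])
C = np.array([100.0, 78.0, 61.0, 37.0, 22.5])

# First-order elimination C(t) = C0 * exp(-k_tot * t) is linear in ln C,
# so k_tot is minus the slope of ln C versus t.
slope, _ = np.polyfit(t, np.log(C), 1)
k_tot = -slope
print(f"k_tot = {k_tot:.4f} per day, half-life = {np.log(2) / k_tot:.1f} days")
```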
Yelland, Lisa N; Salter, Amy B; Ryan, Philip
2011-10-15
Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
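A minimal sketch of the modified Poisson approach for clustered data, assuming the statsmodels GEE implementation with a Poisson family, log link, robust (sandwich) standard errors, and an exchangeable working correlation; the data are simulated and all variable names are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated clustered data: binary outcome, binary treatment, cluster id
rng = np.random.default_rng(0)
n_clusters, m = 50, 10
df = pd.DataFrame({
    "cluster": np.repeat(np.arange(n_clusters), m),
    "treated": rng.integers(0, 2, n_clusters * m),
})
df["y"] = rng.binomial(1, np.where(df["treated"] == 1, 0.30, 0.20))

# Modified Poisson via GEE: Poisson family (log link) on a binary outcome,
# with clustering handled by an exchangeable working correlation and
# robust variance estimation (the statsmodels default for GEE).
X = sm.add_constant(df[["treated"]])
model = sm.GEE(df["y"], X, groups=df["cluster"],
               family=sm.families.Poisson(),
               cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print(np.exp(res.params["treated"]))   # estimated relative risk
```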
Thorn, Conde R.
2000-01-01
Over the last several years, an improved conceptual understanding of the aquifer system in the Albuquerque area, New Mexico, has led to better knowledge about the location and extent of the aquifer system. This information will aid in refining ground-water simulations and in locating sites for future water-production wells. With an impeller-type flowmeter, well-bore flow was logged under pumping conditions along the screened interval of the well bore in six City of Albuquerque water-production wells: the Ponderosa 3, Love 6, Volcano Cliffs 1, Gonzales 2, Zamora 2, and Gonzales 3 wells. From each of these six wells, a well-bore flow log was collected that represents the cumulative upward well-bore flow. Evaluation of the well-bore flow log for each well allowed delineation of the more productive zones supplying water to the well along the logged interval. Yields from the more productive zones in the six wells ranged from about 70 to 880 gallons per minute. The lithology of these zones is predominantly gravel and sand with varying amounts of sandy clay.
76 FR 35399 - Glenn/Colusa Resource Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-17
... building to view comments. FOR FURTHER INFORMATION CONTACT: Laurie L. Pearson, Visitor Information...: Laurie L. Pearson, Glenn/Colusa R.A.C. Coordinator, PO Box 160, Stonyford, CA 95979, or by e-mail to...
MaxEnt alternatives to Pearson family distributions
NASA Astrophysics Data System (ADS)
Stokes, Barrie J.
2012-05-01
In a previous MaxEnt conference [11] a method of obtaining MaxEnt univariate distributions under a variety of constraints was presented. The Mathematica function Interpolation[], normally used with numerical data, can also process "semi-symbolic" data, and Lagrange Multiplier equations were solved for a set of symbolic ordinates describing the required MaxEnt probability density function. We apply a more developed version of this approach to finding MaxEnt distributions having prescribed β1 and β2 values, and compare the entropy of the MaxEnt distribution to that of the Pearson family distribution having the same β1 and β2. These MaxEnt distributions do have, in general, greater entropy than the related Pearson distribution. In accordance with Jaynes' Maximum Entropy Principle, these MaxEnt distributions are thus to be preferred to the corresponding Pearson distributions as priors in Bayes' Theorem.
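One standard way to obtain such MaxEnt densities numerically is to minimize the dual of the constrained-entropy problem: for moment constraints E[x^k] = m_k, k = 1..4 (which fix β1 and β2), the solution has the form p(x) ∝ exp(-Σ λ_k x^k), and the multipliers minimize ψ(λ) = log Z(λ) + λ·m. The sketch below is a generic numerical route, not the semi-symbolic Mathematica Interpolation[] approach of the paper, and the target moments are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical target raw moments m1..m4 (mean 0, variance 1, some skew/kurtosis)
m = np.array([0.0, 1.0, 0.5, 4.0])
x = np.linspace(-12, 12, 4001)
powers = np.vstack([x ** k for k in range(1, 5)])   # rows: x, x^2, x^3, x^4

def dual(lam):
    # psi(lam) = log Z(lam) + lam.m; its minimizer matches E[x^k] = m_k
    # for p(x) proportional to exp(-sum_k lam_k * x**k).
    expo = -lam @ powers
    c = expo.max()                                   # stabilize the integral
    logZ = c + np.log(np.trapz(np.exp(expo - c), x))
    return logZ + lam @ m

res = minimize(dual, x0=np.array([0.0, 0.5, 0.0, 0.01]),
               method="Nelder-Mead", options={"maxiter": 20000, "fatol": 1e-12})
p = np.exp(-(res.x @ powers))
p /= np.trapz(p, x)                                  # normalized MaxEnt density
entropy = -np.trapz(p * np.log(p + 1e-300), x)
print("multipliers:", res.x.round(3), " entropy:", round(entropy, 4))
```

The resulting entropy can then be compared against that of the Pearson family member with the same β1 and β2, as the abstract describes.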
Some engineering aspects of the Nicholson-Koch mobile chipper
Donald L. Sirois
1981-01-01
A prototype mobile chip harvester has been designed to harvest forest biomass, in the form of logging residuals, for use as energy wood. The prototype is presently undergoing developmental tests. Results are encouraging, indicating mechanical feasibility with prospects of working systems within the next several years.
Well log characterization of natural gas hydrates
Collett, Timothy S.; Lee, Myung W.
2011-01-01
In the last 25 years we have seen significant advancements in the use of downhole well logging tools to acquire detailed information on the occurrence of gas hydrate in nature: From an early start of using wireline electrical resistivity and acoustic logs to identify gas hydrate occurrences in wells drilled in Arctic permafrost environments to today where wireline and advanced logging-while-drilling tools are routinely used to examine the petrophysical nature of gas hydrate reservoirs and the distribution and concentration of gas hydrates within various complex reservoir systems. The most established and well known use of downhole log data in gas hydrate research is the use of electrical resistivity and acoustic velocity data (both compressional- and shear-wave data) to make estimates of gas hydrate content (i.e., reservoir saturations) in various sediment types and geologic settings. New downhole logging tools designed to make directionally oriented acoustic and propagation resistivity log measurements have provided the data needed to analyze the acoustic and electrical anisotropic properties of both highly inter-bedded and fracture dominated gas hydrate reservoirs. Advancements in nuclear-magnetic-resonance (NMR) logging and wireline formation testing have also allowed for the characterization of gas hydrate at the pore scale. Integrated NMR and formation testing studies from northern Canada and Alaska have yielded valuable insight into how gas hydrates are physically distributed in sediments and the occurrence and nature of pore fluids (i.e., free-water along with clay and capillary bound water) in gas-hydrate-bearing reservoirs. Information on the distribution of gas hydrate at the pore scale has provided invaluable insight on the mechanisms controlling the formation and occurrence of gas hydrate in nature along with data on gas hydrate reservoir properties (i.e., permeabilities) needed to accurately predict gas production rates for various gas hydrate production schemes.
Lee-Cruz, Larisa; Edwards, David P; Tripathi, Binu M; Adams, Jonathan M
2013-12-01
Tropical forests are being rapidly altered by logging and cleared for agriculture. Understanding the effects of these land use changes on soil bacteria, which constitute a large proportion of total biodiversity and perform important ecosystem functions, is a major conservation frontier. Here we studied the effects of logging history and forest conversion to oil palm plantations in Sabah, Borneo, on the soil bacterial community. We used paired-end Illumina sequencing of the 16S rRNA gene, V3 region, to compare the bacterial communities in primary, once-logged, and twice-logged forest and land converted to oil palm plantations. Bacteria were grouped into operational taxonomic units (OTUs) at the 97% similarity level, and OTU richness and local-scale α-diversity showed no difference between the various forest types and oil palm plantations. Focusing on the turnover of bacteria across space, true β-diversity was higher in oil palm plantation soil than in forest soil, whereas community dissimilarity-based metrics of β-diversity were only marginally different between habitats, suggesting that at large scales, oil palm plantation soil could have higher overall γ-diversity than forest soil, driven by a slightly more heterogeneous community across space. Clearance of primary and logged forest for oil palm plantations did, however, significantly impact the composition of soil bacterial communities, reflecting in part the loss of some forest bacteria, whereas primary and logged forests did not differ in composition. Overall, our results suggest that the soil bacteria of tropical forest are to some extent resilient or resistant to logging but that the impacts of forest conversion to oil palm plantations are more severe.
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic (TG), originally developed for logistic GLMCCs, so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J². © 2015 John Wiley & Sons Ltd/London School of Economics.
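For reference, the Hosmer-Lemeshow statistic mentioned above groups observations by fitted probability and compares observed with expected counts per group. A minimal sketch on simulated data follows; the decile grouping and the g - 2 degrees of freedom are the common conventions, which may differ from the paper's "common grouping method":

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, g=10):
    """Summary GOF statistic for a binary-outcome GLM: bin observations into
    g groups by predicted probability and sum (O - E)^2 / (E * (1 - E/n))
    over groups; compare against chi-squared with g - 2 df."""
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y)), g):
        n, O, E = len(idx), y[idx].sum(), p[idx].sum()
        stat += (O - E) ** 2 / (E * (1 - E / n))
    return stat, chi2.sf(stat, g - 2)

# Hypothetical fitted probabilities and outcomes from a well-specified model
rng = np.random.default_rng(1)
p_hat = rng.uniform(0.05, 0.95, 500)
y_obs = rng.binomial(1, p_hat)
print(hosmer_lemeshow(y_obs, p_hat))
```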
Colón-Emeric, Cathleen; Pieper, Carl F.; Grubber, Janet; Van Scoyoc, Lynn; Schnell, Merritt L; Van Houtven, Courtney Harold; Pearson, Megan; Lafleur, Joanne; Lyles, Kenneth W.; Adler, Robert A.
2016-01-01
Purpose: With ethical requirements favoring the enrollment of lower-risk subjects, osteoporosis trials are underpowered to detect reductions in hip fractures. Different skeletal sites have different levels of fracture risk and response to treatment. We sought to identify fracture sites that cluster with hip fracture at higher than expected frequency; if these sites respond to treatment similarly, then a composite fracture endpoint could provide a better estimate of hip fracture reduction. Methods: Cohort study using Veterans Affairs and Medicare administrative data. Male Veterans (n=5,036,536) aged 50-99 years receiving VA primary care between 1999 and 2009 were included. Fractures were ascertained using ICD9 and CPT codes and classified by skeletal site. Pearson correlation coefficients, logistic regression, and kappa statistics were used to describe the correlation between each fracture type and hip fracture within individuals, without regard to the timing of the events. Results: 595,579 (11.8%) men suffered 1 or more fractures and 179,597 (3.6%) suffered 2 or more fractures during the time under study. Of those with one or more fractures, rib was the most common site (29%), followed by spine (22%), hip (21%) and femur (20%). The fracture types most highly correlated with hip fracture were pelvic/acetabular (Pearson correlation coefficient 0.25, p<0.0001), femur (0.15, p<0.0001), and shoulder (0.11, p<0.0001). Conclusions: Pelvic, acetabular, femur, and shoulder fractures cluster with hip fractures within individuals at greater than expected frequency. If similar treatment risk reductions are observed within that cluster, subsequent trials could consider use of a composite endpoint to better estimate hip fracture risk. PMID:26151123
Biofilms associated with poultry processing equipment.
Lindsay, D; Geornaras, I; von Holy, A
1996-01-01
Aerobic and Gram-negative bacteria were enumerated on non-metallic surfaces and on stainless steel test pieces attached to equipment surfaces by swabbing and a mechanical dislodging procedure, respectively, in a South African grade B poultry processing plant. Changes in bacterial numbers were also monitored over time on metal test pieces. The highest bacterial counts were obtained from non-metallic surfaces such as rubber-fingered pluckers and plastic defeathering curtains, which exceeded the highest counts found on the metal surfaces by at least 1 log CFU cm⁻². Gram-negative bacterial counts on all non-metallic surface types were at least 2 log CFU cm⁻² lower than the corresponding aerobic plate counts. On metal surfaces, the highest microbial numbers were obtained after 14 days of exposure, with aerobic plate counts ranging from 3.57 log CFU cm⁻² to 5.13 log CFU cm⁻², and Gram-negative counts from 0.70 log CFU cm⁻² to 3.31 log CFU cm⁻². Scanning electron microscopy confirmed the presence of bacterial cells on non-metallic and metallic surfaces associated with poultry processing. Rubber 'fingers', plastic curtains, conveyor belt material and stainless steel test surfaces placed on the scald tank overflow and several chutes revealed extensive and often confluent bacterial biofilms. Extracellular polymeric substances, but few bacterial cells, were visible on test pieces placed on evisceration equipment, spinchiller blades and the spinchiller outlet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galbraith, R.M.
1978-05-01
The Coso Geothermal Exploration Hole number one (CGEH-1) was drilled in the Coso Hot Springs KGRA, California, from September 2 to December 2, 1977. Chip samples were collected at ten-foot intervals and extensive geophysical logging surveys were conducted to document the geologic character of the geothermal system as penetrated by CGEH-1. The major rock units encountered include a mafic metamorphic sequence and a leucogranite which intruded the metamorphic rocks. Only weak hydrothermal alteration was noted in these rocks. Drillhole surveys and drilling rate data indicate that the geothermal system is structurally controlled and that the drillhole itself was strongly influenced by structural zones. Water chemistry indicates that this geothermal resource is a hot-water rather than a vapor-dominated system. Several geophysical logs were employed to characterize the drillhole geology. The natural gamma and neutron porosity logs indicate gross rock type, and the acoustic logs indicate fractured rock and potentially permeable zones. A series of temperature logs run as a function of time during and after the completion of drilling were most useful in delineating the zones of maximum heat flux. Convective heat flow and temperatures greater than 350°F appear to occur only along an open fracture system encountered between depths of 1850 and 2775 feet. Temperature logs indicate a negative thermal gradient below 3000 feet.
Bailey, Emily S; Casanova, Lisa M; Simmons, Otto D; Sobsey, Mark D
2018-07-15
Treated wastewater is increasingly of interest either for nonpotable purposes, such as agriculture and industrial use, or as source water for drinking water supplies; however, this type of advanced treatment for water supply is not always possible in many low-resource settings. As an alternative, multiple barriers of physical, chemical and biological treatment with lower cost and simpler operation and maintenance have been proposed as more globally applicable. One such water reclamation system for both non-potable and potable reuse is that approved by the State of North Carolina for "Type 2" reclaimed water (NCT2RW). NC Type 2 potable reuse systems consist of a sequence of tertiary treatment to produce well-oxidized reclaimed water that is then further treated by two steps of disinfection, typically UV radiation and chlorination. In this case study, the log10 microbial reduction performance of water reclamation facilities producing NCT2RW is evaluated. Based on the results presented here, NCT2RW consistently achieved high log10 reductions (6 for bacteria, 4 for virus and 4 for protozoan parasite surrogates) using the NC proposed treatment methods. Additionally, lower but significant log10 reduction performance was also documented for protozoan parasites and human enteric viruses. Copyright © 2018 Elsevier B.V. All rights reserved.
Quantification of DNA using the luminescent oxygen channeling assay.
Patel, R; Pollner, R; de Keczer, S; Pease, J; Pirio, M; DeChene, N; Dafforn, A; Rose, S
2000-09-01
Simplified and cost-effective methods for the detection and quantification of nucleic acid targets are still a challenge in molecular diagnostics. Luminescent oxygen channeling assay (LOCI(TM)) latex particles can be conjugated to synthetic oligodeoxynucleotides and hybridized, via linking probes, to different DNA targets. These oligomer-conjugated LOCI particles survive thermocycling in a PCR reaction and allow quantified detection of DNA targets in both real-time and endpoint formats. The endpoint DNA quantification format utilized two sensitizer bead types that are sensitive to separate illumination wavelengths. These two bead types were uniquely annealed to target or control amplicons, and separate illuminations generated time-resolved chemiluminescence, which distinguished the two amplicon types. In the endpoint method, ratios of the two signals allowed determination of the target DNA concentration over a three-log range. The real-time format allowed quantification of the DNA target over a six-log range with a linear relationship between threshold cycle and log of the number of DNA targets. This is the first report of the use of an oligomer-labeled latex particle assay capable of producing DNA quantification and sequence-specific chemiluminescent signals in a homogeneous format. It is also the first report of the generation of two signals from a LOCI assay. The methods described here have been shown to be easily adaptable to new DNA targets because of the generic nature of the oligomer-labeled LOCI particles.
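The real-time format's linear relationship between threshold cycle and log target number is the usual standard-curve calibration; a minimal sketch with hypothetical dilution-series data:

```python
import numpy as np

# Hypothetical threshold cycles for a dilution series of known target copies
copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6, 1e7])
ct     = np.array([33.1, 29.8, 26.4, 23.1, 19.7, 16.4])

# Real-time quantification rests on Ct = slope * log10(N0) + intercept.
slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1 / slope) - 1    # 1.0 would mean perfect doubling
print(f"slope = {slope:.2f}, amplification efficiency = {efficiency:.2%}")

# Invert the curve to quantify an unknown sample from its threshold cycle.
ct_unknown = 24.6
print(f"estimated copies: {10 ** ((ct_unknown - intercept) / slope):.3g}")
```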
Getting out of bed after surgery
... to Advanced Skills . 9th ed. New York, NY: Pearson; 2017:chap 13. Smith SF, Duell DJ, Martin ... to Advanced Skills . 9th ed. New York, NY: Pearson; 2017:chap 26. Patient Instructions Gallbladder removal - open - ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Tae-Soo; Bobev, Svilen, E-mail: bobev@udel.ed
Two types of strontium-, barium- and europium-containing germanides have been synthesized using high temperature reactions and characterized by single-crystal X-ray diffraction. All reported compounds also contain mixed-occupied Li and In atoms, resulting in quaternary phases with narrow homogeneity ranges. The first type comprises EuLi_0.91(1)In_0.09Ge_2, SrLi_0.95(1)In_0.05Ge_2 and BaLi_0.99(1)In_0.01Ge_2, which crystallize in the orthorhombic space group Pnma (BaLi_0.9Mg_0.1Si_2 structure type, Pearson code oP16). The lattice parameters are a = 7.129(4)-7.405(4) Å; b = 4.426(3)-4.638(2) Å; and c = 11.462(7)-11.872(6) Å. The second type includes Eu_2Li_1.36(1)In_0.64Ge_3 and Sr_2Li_1.45(1)In_0.55Ge_3, which adopt the orthorhombic space group Cmcm (Ce_2Li_2Ge_3 structure type, Pearson code oC28) with lattice parameters a = 4.534(2)-4.618(2) Å; b = 19.347(8)-19.685(9) Å; and c = 7.164(3)-7.260(3) Å. The polyanionic sub-structures in both cases feature one-dimensional Ge chains with alternating Ge-Ge bonds in cis- and trans-conformation. Theoretical studies using the tight-binding linear muffin-tin orbital (LMTO) method provide the rationale for optimizing the overall bonding by diminishing the π-p delocalization along the Ge chains, accounting for the experimentally confirmed substitution of Li for In. Graphical abstract: Presented are the single-crystal structures of two types of closely related intermetallics, as well as their band structures, calculated using the tight-binding linear muffin-tin orbital (TB-LMTO-ASA) method.
Leong, Wan Mei; Geier, Renae; Engstrom, Sarah; Ingham, Steve; Ingham, Barbara; Smukowski, Marianne
2014-08-01
Potentially hazardous foods require time/temperature control for safety. According to the U.S. Food and Drug Administration Food Code, most cheeses are potentially hazardous foods based on pH and water activity, and a product assessment is required to evaluate safety of storage >6 h at 21°C. We tested the ability of 67 market cheeses to support growth of Listeria monocytogenes (LM), Salmonella spp. (SALM), Escherichia coli O157:H7 (EC), and Staphylococcus aureus (SA) over 15 days at 25°C. Hard (Asiago and Cheddar), semi-hard (Colby and Havarti), and soft cheeses (mozzarella and Mexican-style), and reduced-sodium or reduced-fat types were tested. Single-pathogen cocktails were prepared and individually inoculated onto cheese slices (∼10^5 CFU/g). Cocktails were 10 strains of L. monocytogenes, 6 of Salmonella spp., or 5 of E. coli O157:H7 or S. aureus. Inoculated slices were vacuum packaged and stored at 25°C for ≤ 15 days, with surviving inocula enumerated every 3 days. Percent salt-in-the-moisture phase, percent titratable acidity, pH, water activity, and levels of indigenous/starter bacteria were measured. Pathogens did not grow on 53 cheeses, while 14 cheeses supported growth of SA, 6 of SALM, 4 of LM, and 3 of EC. Of the cheeses supporting pathogen growth, all supported growth of SA, ranging from 0.57 to 3.08 log CFU/g (average 1.70 log CFU/g). Growth of SALM, LM, and EC ranged from 1.01 to 3.02 log CFU/g (average 2.05 log CFU/g), 0.60 to 2.68 log CFU/g (average 1.60 log CFU/g), and 0.41 to 2.90 log CFU/g (average 1.69 log CFU/g), respectively. Pathogen growth varied within cheese types or lots. Pathogen growth was influenced by pH and percent salt-in-the-moisture phase, and these two factors were used to establish growth/no-growth boundary conditions for safe, extended storage (≤25°C) of pasteurized milk cheeses. Pathogen growth/no-growth could not be predicted for Swiss-style cheeses, mold-ripened or bacterial surface-ripened cheeses, and cheeses made with nonbovine milk, as insufficient data were gathered. This challenge study data can support science-based decision making in a regulatory framework.
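Growth/no-growth boundary conditions of the kind described are often modeled with logistic regression on pH and percent salt-in-the-moisture phase, with the boundary taken as the p = 0.5 contour. The sketch below uses hypothetical illustrative data, not the study's measurements:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical cheeses: pH, % salt-in-moisture phase, and whether any
# pathogen grew during 15 days at 25 C (1 = growth, 0 = no growth).
pH   = np.array([5.9, 5.8, 5.7, 5.6, 5.4, 5.3, 5.2, 5.1, 5.0, 4.9, 4.8, 5.0])
smp  = np.array([2.5, 3.0, 3.0, 4.5, 3.5, 5.5, 4.0, 6.0, 5.0, 6.5, 7.0, 5.5])
grew = np.array([1,   0,   1,   1,   1,   0,   1,   0,   0,   0,   0,   1])

X = sm.add_constant(np.column_stack([pH, smp]))
fit = sm.Logit(grew, X).fit(disp=False)
b0, b1, b2 = fit.params

# The growth/no-growth boundary is the p = 0.5 contour of the model:
# b0 + b1*pH + b2*SMP = 0  =>  SMP = -(b0 + b1*pH) / b2
for ph in (4.9, 5.3, 5.7):
    print(f"pH {ph}: boundary at ~{-(b0 + b1 * ph) / b2:.1f}% salt-in-moisture")
```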
[Pearson syndrome. Case report].
Cammarata-Scalisi, Francisco; López-Gallardo, Ester; Emperador, Sonia; Ruiz-Pesini, Eduardo; Da Silva, Gloria; Camacho, Nolis; Montoya, Julio
2011-09-01
Among the etiologies of anemia in infancy, mitochondrial cytopathies are infrequent. Pearson syndrome is diagnosed principally during the initial stages of life and is characterized by refractory sideroblastic anemia with vacuolization of marrow progenitor cells, exocrine pancreatic dysfunction, and variable neurologic, hepatic, renal and endocrine failures. We report the case of a 14-month-old girl, evaluated in a multicentric study, with clinical and molecular diagnosis of Pearson syndrome carrying the 4,977-base-pair common deletion of mitochondrial DNA. This entity has been associated with diverse phenotypes within the broad clinical spectrum of mitochondrial disease.
Wieczyńska, Justyna; Cavoski, Ivana
2018-09-01
In this study, bio-based emitting sachets containing eugenol (EUG), carvacrol (CAR) and trans-anethole (ANT) were inserted into cellulose (CE) and polypropylene (PP) pillow packages of organic ready-to-eat (RTE) iceberg lettuce to investigate their functional features. EUG, CAR and ANT sachets in CE, and CAR in PP packages, showed antimicrobial activities against coliforms (Δlog CFU g⁻¹ of -1.38, -0.91, -0.93 and -0.93, respectively). EUG and ANT sachets in both packages reduced discoloration (ΔE of 9.5, 1.8, 9.4 and 5.6, respectively). ANT in both, and EUG only in PP packages, induced biosynthesis of caffeoyl derivatives (CaTA, DiCaTA, DiCaQA), total phenolics and antioxidant activity (FRAP). Also, ANT and EUG in both packages improved overall freshness and odor. Principal component analysis separated ANT and EUG from CAR in both packages. The Pearson correlation confirmed that overall quality improvements were more pronounced by ANT inside the packages in comparison to EUG and CAR. Copyright © 2018 Elsevier Ltd. All rights reserved.
Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations.
Zhao, Liya; Jia, Kebin
2015-01-01
This paper proposes a new framework for capturing large and complex deformation in image registration. Traditionally, this challenging problem relies first on a preregistration, usually an affine matrix containing rotation, scale, and translation, and afterwards on a nonrigid transformation. In the preregistration, the directly calculated affine matrix, which is obtained from limited pixel information, may misregister when large biases exist, thus subverting the registration that follows. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies the scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences by triplanar 2D CNNs. Deformation removal is then done iteratively through preregistration and demons registration. Compared with state-of-the-art registration frameworks, our method achieves more accurate registration results on both synthetic and real datasets. In addition, principal component analysis (PCA) is combined with correlation measures such as Pearson and Spearman to form new similarity standards in 2D and 3D registration. Experimental results also show faster convergence. PMID:26120356
Detecting lane departures from steering wheel signal.
Sandström, Max; Lampsijärvi, Eetu; Holmström, Axi; Maconi, Göran; Ahmadzai, Shabana; Meriläinen, Antti; Hæggström, Edward; Forsman, Pia
2017-02-01
Current lane departure warning systems are video-based and lose data when road and weather conditions are bad. This study sought to develop a lane departure warning algorithm based on the signal drawn from the steering wheel. The rationale is that a car-based lane departure warning system should be robust regardless of road and weather conditions. N=34 professional driver students drove in a high-fidelity driving simulator at 80 km/h for 55 min every third hour during 36 h of sustained wakefulness. During each driving session we logged the steering wheel and lane position signals at 60 Hz. To derive the lane position signal, we quantified the transfer function of the simulated vehicle and used it to derive the absolute lane position signal from the steering wheel signal. The Pearson correlation between the derived and actual lane position signals was r=0.48 (based on 12,000 km). Next we designed an algorithm that alerted, up to three seconds in advance, about upcoming lane deviations that exceeded 0.2 m. The sensitivity of the algorithm was 47% and the specificity was 71%. To our knowledge this exceeds the performance of current video-based systems. Copyright © 2016 Elsevier Ltd. All rights reserved.
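A minimal sketch of the signal chain described above, assuming a stand-in second-order transfer function from steering angle to lateral position (the study's identified transfer function is not given in the abstract) and measuring agreement with the Pearson correlation:

```python
import numpy as np
from scipy import signal

fs = 60.0                          # sampling rate used in the study (Hz)
t = np.arange(0, 60, 1 / fs)

# Hypothetical steering-angle signal (rad): slow drift plus small corrections
rng = np.random.default_rng(2)
steer = (0.02 * np.sin(2 * np.pi * 0.05 * t)
         + 0.005 * rng.standard_normal(t.size))

# Assumed vehicle transfer function G(s) = K / (s^2 + a*s + b), a stand-in
# for the simulator dynamics quantified in the paper.
G = signal.TransferFunction([1.5], [1.0, 0.8, 0.4])
_, lane_derived, _ = signal.lsim(G, U=steer, T=t)

# "Actual" lane position: the derived signal plus measurement-like noise,
# standing in for the simulator's ground truth.
lane_actual = lane_derived + 0.05 * rng.standard_normal(t.size)

r = np.corrcoef(lane_derived, lane_actual)[0, 1]
print(f"Pearson r between derived and actual lane position: {r:.2f}")
```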
Wilson, Robin Taylor; Donahue, Mark; Gridley, Gloria; Adami, Johanna; El Ghormli, Laure; Dosemeci, Mustafa
2009-01-01
Background: Unlike cancer of the bladder, cancer of the renal pelvis is not considered an occupational cancer, and little is known about risks among women. Methods: Using the Swedish national census and cancer registry-linked data (1971-1989), we identified transitional cell cancers of the renal pelvis (N=1374) and bladder (N=21,591). Correlation between cancer sites in the Standardized Incidence Ratios (SIR) was determined using Pearson's coefficient of the log SIR. Relative risks for job exposure matrix variables were calculated using Poisson regression. Results: Both cancer sites were significantly elevated among women and men employed in the machine/electronics industry, sedentary work, and indoor work, as well as among men employed in the shop and construction metal industry, contributing 10-14% of cases among men. Risks by industry were more highly correlated among women (r=0.49, p=0.002) than men (r=0.24, p=0.04). Conclusion: Cancers of the renal pelvis and bladder share common occupational risk factors that may be more frequent among women. In addition, there may be several jobs that pose an increased risk specifically for cancer of the renal pelvis but not the bladder. PMID:18067176
Seeing visual word forms: spatial summation, eccentricity and spatial configuration.
Kao, Chien-Hui; Chen, Chien-Chung
2012-06-01
We investigated observers' performance in detecting and discriminating visual word forms as a function of target size and retinal eccentricity. The contrast threshold of visual words was measured with a spatial two-alternative forced-choice paradigm and a PSI adaptive method. The observers were to indicate which of two sides contained a stimulus in the detection task, and which contained a real character (as opposed to a pseudo- or non-character) in the discrimination task. When the target size was sufficiently small, the detection threshold of a character decreased as its size increased, with a slope of -1/2 on log-log coordinates, up to a critical size at all eccentricities and for all stimulus types. The discrimination threshold decreased with target size with a slope of -1 up to a critical size that was dependent on stimulus type and eccentricity. Beyond that size, the threshold decreased with a slope of -1/2 on log-log coordinates before leveling out. The data was well fit by a spatial summation model that contains local receptive fields (RFs) and a summation across these filters within an attention window. Our result implies that detection is mediated by local RFs smaller than any tested stimuli and thus detection performance is dominated by summation across receptive fields. On the other hand, discrimination is dominated by a summation within a local RF in the fovea but a cross RF summation in the periphery. Copyright © 2012 Elsevier Ltd. All rights reserved.
Blaschke, A. P.; Toze, S.; Sidhu, J. P. S.; Ahmed, W.; van Driezum, I. H.; Sommer, R.; Kirschner, A. K. T.; Cervero-Aragó, S.; Farnleitner, A. H.; Pang, L.
2015-01-01
Members of the genus Cryptosporidium are waterborne protozoa of great health concern. Many studies have attempted to find appropriate surrogates for assessing Cryptosporidium filtration removal in porous media. In this study, we evaluated the filtration of Cryptosporidium parvum in granular limestone medium by the use of biotin- and glycoprotein-coated carboxylated polystyrene microspheres (CPMs) as surrogates. Column experiments were carried out with core material taken from a managed aquifer recharge site in Adelaide, Australia. For the experiments with injection of a single type of particle, we observed the total removal of the oocysts and glycoprotein-coated CPMs, a 4.6- to 6.3-log10 reduction of biotin-coated CPMs, and a 2.6-log10 reduction of unmodified CPMs. When two different types of particles were simultaneously injected, glycoprotein-coated CPMs showed a 5.3-log10 reduction, while the uncoated CPMs displayed a 3.7-log10 reduction, probably due to particle-particle interactions. Our results confirm that glycoprotein-coated CPMs are the most accurate surrogates for C. parvum; biotin-coated CPMs are slightly more conservative, while unmodified CPMs are markedly overly conservative for predicting C. parvum removal in granular limestone medium. The total removal of C. parvum observed in our study suggests that granular limestone medium is very effective for the filtration removal of C. parvum and could potentially be used for the pretreatment of drinking water and aquifer storage recovery of recycled water. PMID:25888174
Correlating Species and Spectral Diversity using Remote Sensing in Successional Fields in Virginia
NASA Astrophysics Data System (ADS)
Aneece, I.; Epstein, H. E.
2015-12-01
Conserving biodiversity can help preserve ecosystem properties and function. As the increasing prevalence of invasive plant species threatens biodiversity, advances in remote sensing technology can help monitor invasive species and their effects on ecosystems and plant communities. To assess whether we could study the effects of invasive species on biodiversity using remote sensing, we asked whether species diversity was positively correlated with spectral diversity, and whether correlations differed among spectral regions along the visible and near-infrared range. To answer these questions, we established community plots in secondary successional fields at the Blandy Experimental Farm in northern Virginia and collected vegetation surveys and ground-level hyperspectral data from 350 to 1025 nm wavelengths. Pearson correlation analysis revealed a positive correlation between spectral diversity and species diversity in the visible ranges of 350-499 nm (Pearson correlation=0.69, p=0.01), 500-589 nm (Pearson=0.64, p=0.03), and 590-674 nm (Pearson=0.70, p=0.01), slight positive correlation in the red edge range of 675-754 nm (Pearson=0.56, p=0.06), and no correlation in the near-infrared ranges of 755-924 nm (Pearson=-0.06, p=0.85) and 925-1025 nm (Pearson=0.30, p=0.34). These differences in correlations across spectral regions may be due to the elements that contribute to signatures in those regions and spectral data transformation methods. To investigate the role of pigment variability in these correlations, we estimated chlorophyll, carotenoid, and anthocyanin concentrations of five dominant species in the plots using vegetation indices. Although interspecific variability in pigment levels exceeded intraspecific variability, chlorophyll (F value=118) was more varied within species than carotenoids (F=322) and anthocyanins (F=126), perhaps contributing to the lack of correlation between species diversity and spectral diversity in the red edge region. Interspecific differences in pigment levels, however, make it possible to differentiate species remotely.
Yadav, Arjita; Singh, Sudhi
2014-05-01
To study whether chronotype is linked with sleep characteristics among college-going students, assessed during college days and vacation days, adult female undergraduate students were asked to answer the Hindi/English version of the Munich Chronotype Questionnaire (MCTQ) and to fill in a sleep log and drinking and feeding logs for three weeks covering college and vacation days. Based on chronotype categorization as morning type, intermediate type and evening type, sleep onset and offset times, sleep duration and mid-sleep times for each group were compared, separately for college and vacation days. Results indicate that the sleep duration of the morning types was significantly longer than that of the evening types, both during college and vacation days. Similarly, the sleep onset and sleep offset times were significantly earlier in the morning types than in the evening type students. During the vacation days, the individuals exhibited longer sleep duration with delayed mid-sleep times. Further, there was no significant difference among the chronotypes regarding their feeding and drinking frequency percentages during the college and the vacation days. It is suggested that students should be made aware of their chronotype so that they can utilize their time optimally and develop a schedule more suitable to their natural needs.
Al-Qadiri, Hamzah M; Ovissipour, Mahmoudreza; Al-Alami, Nivin; Govindan, Byju N; Shiroodi, Setareh Ghorban; Rasco, Barbara
2016-05-01
Bactericidal activity of neutral electrolyzed water (NEW), quaternary ammonium (QUAT), and lactic acid-based solutions was investigated using a manual spraying technique against Salmonella Typhimurium, Escherichia coli O157:H7, Campylobacter jejuni, Listeria monocytogenes and Staphylococcus aureus that were inoculated onto the surface of scarred polypropylene and wooden food cutting boards. Antimicrobial activity was also examined when using cutting boards in preparation of raw chopped beef, chicken tenders or salmon fillets. Viable counts of survivors were determined as log10 CFU/100 cm² within 0 (untreated control), 1, 3, and 5 min of treatment at ambient temperature. Within the first minute of treatment, NEW and QUAT solutions caused more than 3 log10 bacterial reductions on polypropylene surfaces, whereas less than 3 log10 reductions were achieved on wooden surfaces. After 5 min of treatment, more than 5 log10 reductions were achieved for all bacterial strains inoculated onto polypropylene surfaces. Using NEW and QUAT solutions within 5 min reduced Gram-negative bacteria by 4.58 to 4.85 log10, compared to more than 5 log10 reductions in Gram-positive bacteria inoculated onto wooden surfaces. Lactic acid treatment was significantly less effective (P < 0.05) compared to NEW and QUAT treatments. A decline in antimicrobial effectiveness was observed (0.5 to <2 log10 reductions were achieved within the first minute) when both cutting board types were used to prepare raw chopped beef, chicken tenders or salmon fillets. © 2016 Institute of Food Technologists®
31. Historic American Buildings Survey E. R. Pearson, Photographer 1972 ...
31. Historic American Buildings Survey E. R. Pearson, Photographer 1972 CLOTHES ROOM, FIRST ATTIC, SOUTHEAST CORNER, LOOKING EAST - Shaker Centre Family Dwelling House, West side of U.S. Route 68, South Union, Logan County, KY
75 FR 68618 - Diamond Sawblades and Parts Thereof From China and Korea
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-08
... Pearson dissent, having determined that an industry in the United States is not materially injured or... remand, Vice Chairman Pearson and Commissioners Okun and Lane voted in the negative. On January 13, 2009...
A General Purpose Connections type CTI Server Based on SIP Protocol and Its Implementation
NASA Astrophysics Data System (ADS)
Watanabe, Toru; Koizumi, Hisao
In this paper, we propose a general-purpose connections-type CTI (Computer Telephony Integration) server that provides various CTI services, such as voice logging, in which the CTI server communicates with an IP-PBX using SIP (Session Initiation Protocol) and accumulates the voice packets of external line telephone calls flowing between an IP telephone extension and a VoIP gateway connected to outside line networks. The CTI server realizes CTI services such as voice logging, telephone conferencing, or IVR (interactive voice response) by accumulating and processing the sampled voice packets. Furthermore, the CTI server incorporates a web server function which can provide various CTI services, such as a Web telephone directory, via a Web browser to PCs, cellular telephones or smart-phones in mobile environments.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. From left, Carl Benoit, senior national science consultant, Pearson Scott Foresman; Paul McFall, president, Pearson Scott Foresman; Dr. Adena Williams Loston, NASA chief education officer; and James Lippe, science product manager, Pearson Scott Foresman, participate in the unveiling of 'The Science in Space Challenge' at the Doubletree Hotel in Orlando, Fla. The national challenge program is sponsored by NASA and Pearson Scott Foresman, publisher of pre-K through grade six educational books. To participate in the challenge, teachers may submit proposals, on behalf of their students, for a science and technology investigation. Astronauts will conduct the winning projects on a Space Shuttle mission or on the International Space Station, while teachers and students follow along via television or the Web. For more information about the announcement, see the news release at http://www.nasa.gov/home/hqnews/2004/oct/HQ_04341_publication.html
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. From left, NASA astronaut Patrick Forrester; Paul McFall, president, Pearson Scott Foresman; Dr. Adena Williams Loston, NASA chief education officer; James Lippe, science product manager, Pearson Scott Foresman; and Carl Benoit, senior national science consultant, Pearson Scott Foresman, participate in the unveiling of 'The Science in Space Challenge' at the Doubletree Hotel in Orlando, Fla. The national challenge program is sponsored by NASA and Pearson Scott Foresman, publisher of pre-K through grade six educational books. To participate in the challenge, teachers may submit proposals, on behalf of their students, for a science and technology investigation. Astronauts will conduct the winning projects on a Space Shuttle mission or on the International Space Station, while teachers and students follow along via television or the Web. For more information about the announcement, see the news release at http://www.nasa.gov/home/hqnews/2004/oct/HQ_04341_publication.html
Rao, Prethy; Lum, Flora; Wood, Kevin; Salman, Craig; Burugapalli, Bhavya; Hall, Rebecca; Singh, Sukhminder; Parke, David W; Williams, George A
2018-04-01
The purpose of this study is to compare real-world visual acuity (VA) in patients with neovascular age-related macular degeneration (nAMD) treated with a single anti-vascular endothelial growth factor (VEGF) drug as monotherapy for 1 year, using data from the American Academy of Ophthalmology (AAO) Intelligent Research in Sight (IRIS) Registry. Retrospective, nonrandomized, comparative study. IRIS Registry patients with nAMD who received bevacizumab, ranibizumab, or aflibercept only for 1 year between 2013 and 2016. Participants were divided into 3 groups based on monotherapy type. Multivariate analysis of covariance (ANCOVA) models were constructed in a stepwise fashion. The logarithm of the minimum angle of resolution (logMAR) VA at 1 year and the mean change in logMAR VA between baseline and 1 year were compared between drug types. Of 13 859 patients, 6723 received bevacizumab, 2749 received ranibizumab, and 4387 received aflibercept only for 1 year. A total of 84 828 injections were performed. The mean number of injections (standard deviation) at 1 year was higher in the ranibizumab (6.4 [±2.4]) and aflibercept groups (6.2 [±2.4]) than in the bevacizumab group (5.9 [±2.4]; P < 0.0001). In the age-adjusted model, both ranibizumab and aflibercept achieved better logMAR VA at 1 year compared with bevacizumab (0.50 [±0.49], 0.49 [±0.44], 0.55 [±0.57]; P < 0.0001). However, this difference was not significant after multivariate adjustment (age, baseline VA, diabetes, posterior vitreous detachment, number of injections, race, insurance). There was no statistical difference in the age-adjusted or multivariate-adjusted mean logMAR VA change (standard deviation) at 1 year among treatment groups (-0.048 [0.44] bevacizumab, -0.053 [0.46] ranibizumab, -0.040 [0.39] aflibercept; P = 0.46). A higher percentage of patients achieved a ≥3-line VA improvement at 1 year in the bevacizumab group (22.7%) than with ranibizumab (20.1%; P = 0.0093) or aflibercept (17.8%; P < 0.0001). However, after multivariate adjustment, aflibercept exhibited greater log odds of a ≥3-line VA loss compared with bevacizumab only (1.25 log odds ratio; P < 0.0016). This study suggests that all 3 drugs improve VA similarly over 1 year of monotherapy. Copyright © 2017 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
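As background for the logMAR figures quoted above: logMAR is the base-10 logarithm of the minimum angle of resolution, each chart line is 0.1 logMAR, and a 3-line change is therefore 0.3 logMAR. A small sketch of the arithmetic (the example acuities are ours, not from the study):

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """logMAR = log10(MAR), where MAR = denominator/numerator of a Snellen fraction."""
    return math.log10(denominator / numerator)

baseline = snellen_to_logmar(20, 80)  # 0.60 logMAR
one_year = snellen_to_logmar(20, 40)  # 0.30 logMAR
change = one_year - baseline          # -0.30, i.e. a 3-line (>=0.3 logMAR) gain
print(baseline, one_year, change)
```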
NASA Astrophysics Data System (ADS)
Peña Angulo, Dhais; Trigo, Ricardo; Cortesi, Nicola; Gonzalez-Hidalgo, Jose Carlos
2016-04-01
We have analyzed, at the monthly scale, the spatial distribution of Pearson correlations between the monthly means of maximum (Tmax) and minimum (Tmin) temperatures and weather types (WTs) in the Iberian Peninsula (IP), represented on a high-spatial-resolution grid (10 km x 10 km) from the MOTEDAS dataset (Gonzalez-Hidalgo et al., 2015a). The WT classification is that developed by Jenkinson and Collison, adapted to the Iberian Peninsula by Trigo and DaCamara, using Sea Level Pressure data from the NCAR/NCEP Reanalysis dataset (period 1951-2010). The spatial distribution of Pearson correlations shows a clear zonal gradient in Tmax under the zonal advection produced by westerly (W) and easterly (E) flows, with negative correlations in the coastal areas the air mass comes from but positive correlations in the inland areas. The same is true under the North-West (NW), North-East (NE), South-West (SW) and South-East (SE) WTs. These spatial gradients are coherent with the spatial distribution of the main mountain chains and offer an example of regional adiabatic phenomena that affect the entire IP (Peña-Angulo et al., 2015b). These spatial gradients have not been observed in Tmin. We suggest that Tmin values are less sensitive to changes in Sea Level Pressure and more related to local factors. These directional WTs have a monthly frequency of over 10 days and could be a valuable tool for downscaling processes. González-Hidalgo J.C., Peña-Angulo D., Brunetti M., Cortesi, C. (2015a): MOTEDAS: a new monthly temperature database for mainland Spain and the trend in temperature (1951-2010). International Journal of Climatology 31, 715-731. DOI: 10.1002/joc.4298. Peña-Angulo, D., Trigo, R., Cortesi, C., González-Hidalgo, J.C. (2015b): The influence of weather types on the monthly average maximum and minimum temperatures in the Iberian Peninsula. Submitted to Hydrology and Earth System Sciences.
de Winter, Joost C F; Gosling, Samuel D; Potter, Jeff
2016-09-01
The Pearson product–moment correlation coefficient (r_p) and the Spearman rank correlation coefficient (r_s) are widely used in psychological research. We compare r_p and r_s on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes we show that, for normally distributed variables, r_p and r_s have similar expected values but r_s is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, r_p is more variable than r_s. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, r_p had lower variability than r_s in the psychometric dataset. In the survey datasets with heavy-tailed variables in particular, r_s had lower variability than r_p, and often corresponded more accurately to the population Pearson correlation coefficient (R_p) than r_p did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing r_s instead of r_p. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of r_s and r_p. In conclusion, r_p is suitable for light-tailed distributions, whereas r_s is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research. PsycINFO Database Record (c) 2016 APA, all rights reserved
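A minimal simulation in the spirit of the comparison above, checking that r_s is the more variable estimator for bivariate normal data with a strong correlation; the sample size, correlation value, and repetition count are illustrative choices of ours, not the paper's settings:

```python
import numpy as np
from scipy import stats

# Sample r_p and r_s repeatedly from a bivariate normal population and
# compare the variability (standard deviation) of the two estimators.
rng = np.random.default_rng(0)

def sample_correlations(rho=0.8, n=50, reps=5000):
    cov = [[1.0, rho], [rho, 1.0]]
    rp, rs = [], []
    for _ in range(reps):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        rp.append(stats.pearsonr(x, y)[0])
        rs.append(stats.spearmanr(x, y)[0])
    return np.std(rp), np.std(rs)

sd_rp, sd_rs = sample_correlations()
print(f"SD of r_p: {sd_rp:.4f}, SD of r_s: {sd_rs:.4f}")  # r_s is more variable here
```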
Factors influencing avian communities in high-elevation southern Allegheny mountain forests
Harry A. Kahler; James T. Anderson
2010-01-01
Myriad factors may influence bird community characteristics among subalpine, central, and northern hardwood forest cover types of the southern Allegheny Mountains. Differences in forest cover types may result from natural characteristics, such as tree species composition, topography, or elevation, as well as from past influences, such as poor logging practices. Our...
Elementary School Students' Strategic Learning: Does Task-Type Matter?
ERIC Educational Resources Information Center
Malmberg, Jonna; Järvelä, Sanna; Kirschner, Paul A.
2014-01-01
This study investigated what types of learning patterns and strategies elementary school students use to carry out ill- and well-structured tasks. Specifically, it was investigated which and when learning patterns actually emerge with respect to students' task solutions. The present study uses computer log file traces to investigate how…
14 CFR 61.55 - Second-in-command qualifications.
Code of Federal Regulations, 2010 CFR
2010-01-01
... logged pilot time in the type of aircraft or in a flight simulator that represents the type of aircraft... management training. (c) If a person complies with the requirements in paragraph (b) of this section in the... lieu of the trainer, it is permissible for a qualified management official within the organization to...
Fisher, K; Rowe, C; Phillips, C A
2007-05-01
To test the effect of oils and vapours of lemon, sweet orange and bergamot, and of their components, against three Arcobacter butzleri strains. The disc diffusion method was used to screen the oils and vapours against three strains of A. butzleri. In vitro, bergamot was the most inhibitory essential oil (EO), and both citral and linalool were effective. On cabbage leaf, the water isolate was the least susceptible to bergamot EO, citral and linalool (1-2 log reduction), with the chicken isolate being the most susceptible (6-8 log reduction). However, the latter appeared not to be susceptible to vapours over 24 h, although type strain and water isolate populations were reduced by 8 logs. On chicken skin, the effectiveness of the oils was reduced compared with that on cabbage leaf. Bergamot was the most effective of the oils tested and linalool the most effective component. All strains tested were less susceptible in food systems than in vitro. Arcobacter isolates vary in their response to EOs, suggesting that the results of type-strain studies should be interpreted with caution. Bergamot EO has potential for the inhibition of this 'emerging' pathogen.
Biological legacies buffer local species extinction after logging
Rudolphi, Jörgen; Jönsson, Mari T; Gustafsson, Lena
2014-01-01
Clearcutting has been identified as a main threat to forest biodiversity. In the last few decades, alternatives to clearcutting have gained much interest. Living and dead trees are often retained after harvest to serve as structural legacies to mitigate negative effects of forestry. However, this practice is widely employed without information from systematic before–after control-impact studies to assess the processes involved in species responses after clearcutting with retention. We performed a large-scale survey of the occurrence of logging-sensitive and red-listed bryophytes and lichens before and after clearcutting with the retention approach. A methodology was adopted that, for the first time in studies on retention approaches, enabled monitoring of location-specific substrates. We used uncut stands as controls to assess the variables affecting the survival of species after a major disturbance. In total, 12 bryophyte species and 27 lichen species were analysed. All were classified as sensitive to logging, and most species are also currently red-listed. We found that living and dead trees retained after final harvest acted as refugia in which logging-sensitive species were able to survive for 3 to 7 years after logging. Depending on type of retention and organism group, between 35% and 92% of the species occurrences persisted on retained structures. Most species observed outside retention trees or patches disappeared. Larger pre-harvest population sizes of bryophytes on dead wood increased the survival probability of the species and hence buffered the negative effects of logging. Synthesis and applications. Careful spatial planning of retention structures is required to fully embrace the habitats of logging-sensitive species. Bryophytes and lichens persisted to a higher degree in retention patches compared to solitary trees or in the clearcut area. Retaining groups of trees in logged areas will help to sustain populations of species over the clearcut phase. When possible, old logs should be moved into retention patches to provide a more beneficial environment for dead wood-dependent species. Our study also highlights the need for more before–after control-impact studies of retention forestry to explore factors influencing the survival of species after logging. PMID:25653456
O'Boyle, Cathy; Chen, Sean I; Little, Julie-Anne
2017-04-01
Clinically, picture acuity tests are thought to overestimate visual acuity (VA) compared with letter tests, but this has not been systematically investigated in children with amblyopia. This study compared VA measurements with the LogMAR Crowded Kay Picture test to the LogMAR Crowded Keeler Letter acuity test in a group of young children with amblyopia. 58 children (34 male) with amblyopia (22 anisometropic, 18 strabismic and 18 with both strabismic/anisometropic amblyopia) aged 4-6 years (mean=68.7, range=48-83 months) underwent VA measurements. VA chart testing order was randomised, but the amblyopic eye was tested before the fellow eye. All participants wore up-to-date refractive correction. The Kay Picture test significantly overestimated VA by 0.098 logMAR (95% limits of agreement (LOA), 0.13) in the amblyopic eye and 0.088 logMAR (95% LOA, 0.13) in the fellow eye, respectively (p<0.001). No interactions were found from occlusion therapy, refractive correction or type of amblyopia on VA results (p>0.23). For both the amblyopic and fellow eyes, Bland-Altman plots demonstrated a systematic and predictable difference between Kay Picture and Keeler Letter charts across the range of acuities tested (Keeler acuity: amblyopic eye 0.75 to -0.05 logMAR; fellow eye 0.45 to -0.15 logMAR). Linear regression analysis (p<0.00001) and also slope values close to one (amblyopic 0.98, fellow 0.86) demonstrate that there is no proportional bias. The Kay Picture test consistently overestimated VA by approximately 0.10 logMAR when compared with the Keeler Letter test in young children with amblyopia. Due to the predictable difference found between both crowded logMAR acuity tests, it is reasonable to adjust Kay Picture acuity thresholds by +0.10 logMAR to compute expected Keeler Letter acuity scores. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
A new estimate of the volume and distribution of gas hydrate in the northern Gulf of Mexico
NASA Astrophysics Data System (ADS)
Majumdar, U.; Cook, A.
2016-12-01
In spite of the wealth of information gained over the last several decades about gas hydrate in the northern Gulf of Mexico, there is still considerable uncertainty about the distribution and volume of gas hydrate. In our assessment we build a dataset of basin-wide gas hydrate distribution and thickness, as appraised from publicly available petroleum industry well logs within the gas hydrate stability zone (HSZ), and subsequently develop a Monte Carlo simulation to determine a volumetric estimate of gas hydrate from the dataset. We evaluate the presence of gas hydrate from electrical resistivity well logs, and categorize the possible reservoir type (either sand or clay) based on the gamma ray response and resistivity curve characteristics. Out of the 798 wells with resistivity well log data within the HSZ that we analyzed, we found evidence of gas hydrate in 124 wells. In this research we present a new stochastic estimate of the gas hydrate volume in the northern Gulf of Mexico guided by our well log dataset. For our Monte Carlo simulation, we divided our assessment area of 200,000 km2 into 1 km2 grid cells. Our volume assessment model incorporates variables unique to our well log dataset, such as the likelihood of gas hydrate occurrence, the fraction of the HSZ occupied by gas hydrate, the reservoir type, and the reservoir-dependent gas hydrate saturation, in each grid cell, in addition to other basic variables such as HSZ thickness and porosity. Preliminary results from our model suggest that the total volume of gas at standard temperature and pressure in gas hydrate in the northern Gulf of Mexico is in the range of 430 trillion cubic feet (TCF) to 730 TCF, with a mean volume of 585 TCF. While our well log dataset found gas hydrate in sand reservoirs in only 30 of the 124 wells with evidence of gas hydrate (≈24%), we find that sand reservoirs contain over half of the total volume of gas hydrate in the Gulf of Mexico, as a result of the relatively high gas hydrate saturation in sand.
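A minimal sketch of the per-cell Monte Carlo logic described above. All distributions and parameter ranges below are placeholders of ours; the paper derives its per-cell inputs (occurrence likelihood, HSZ fraction, reservoir type, saturation, thickness, porosity) from the well log dataset.

```python
import numpy as np

rng = np.random.default_rng(42)
N_CELLS = 200_000          # 1 km^2 cells over the 200,000 km^2 assessment area
DRAWS = 100                # Monte Carlo realizations

p_hydrate = 124 / 798      # occurrence likelihood, from the well counts above

def one_realization() -> float:
    occupied = rng.random(N_CELLS) < p_hydrate
    hsz_thickness_m = rng.uniform(200, 600, N_CELLS)        # assumed range
    frac_hsz = rng.uniform(0.01, 0.10, N_CELLS)             # fraction of HSZ with hydrate
    porosity = rng.uniform(0.3, 0.5, N_CELLS)
    is_sand = rng.random(N_CELLS) < 0.24                    # sand fraction, from the abstract
    saturation = np.where(is_sand,
                          rng.uniform(0.5, 0.9, N_CELLS),   # high S_h in sand (assumed)
                          rng.uniform(0.02, 0.1, N_CELLS))  # low S_h in clay (assumed)
    # hydrate volume per cell (m^3): cell area (1e6 m^2) * thickness * porosity * S_h
    vol = occupied * 1e6 * hsz_thickness_m * frac_hsz * porosity * saturation
    return vol.sum()

totals = np.array([one_realization() for _ in range(DRAWS)])
# Converting hydrate volume to gas volume at STP would multiply by an
# expansion factor (roughly 160 for methane hydrate).
print(f"mean hydrate volume: {totals.mean():.3e} m^3")
```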
Gkana, E; Chorianopoulos, N; Grounta, A; Koutsoumanis, K; Nychas, G-J E
2017-04-01
The objective of the present study was to determine the factors affecting the transfer of foodborne pathogens from inoculated beef fillets to non-inoculated ones, through food processing surfaces. Three different levels of inoculation of the beef fillet surface were prepared: a high one of approximately 10^7 CFU/cm^2, a medium one of 10^5 CFU/cm^2 and a low one of 10^3 CFU/cm^2, using mixed strains of Listeria monocytogenes, Salmonella enterica Typhimurium, or Escherichia coli O157:H7. The inoculated fillets were then placed on 3 different types of surfaces (stainless steel-SS, polyethylene-PE and wood-WD), for 1 or 15 min. Subsequently, these fillets were removed from the cutting boards and six sequential non-inoculated fillets were placed on the same surfaces for the same period of time. All non-inoculated fillets were contaminated, with a progressive reduction trend of each pathogen's population level from the inoculated fillets to the sixth non-inoculated ones that came in contact with the surfaces, and regardless of the initial inoculum, a reduction of approximately 2 log CFU/g between the inoculated and the 1st non-inoculated fillet was observed. S. Typhimurium was transferred at a lower mean population (2.39 log CFU/g) to contaminated fillets than E. coli O157:H7 (2.93 log CFU/g), followed by L. monocytogenes (3.12 log CFU/g; P < 0.05). Wooden surfaces (2.77 log CFU/g) enhanced the transfer of bacteria to subsequent fillets compared with the other materials (2.66 log CFU/g for SS and PE; P < 0.05). Cross-contamination between meat and surfaces is a multifactorial process strongly dependent on the species, the initial contamination level, the kind of surface, the contact time and the number of the subsequent fillet, according to the analysis of variance. Thus, quantifying the cross-contamination risk associated with various steps of meat processing in food establishments or households can provide a scientific basis for risk management of such products. Copyright © 2016 Elsevier Ltd. All rights reserved.
BRIEF REPORT: A simple interpolation formula for the spectra of power-law and log potentials
NASA Astrophysics Data System (ADS)
Hall, Richard L.
2000-06-01
Non-relativistic potential models are considered of the pure power V(r) = sgn(q) r^q and logarithmic V(r) = ln(r) types. It is shown that, from the spectral viewpoint, these potentials are actually in a single family. The log spectra can be obtained from the power spectra by the limit q → 0 taken in a smooth representation P_nl(q) for the eigenvalues E_nl(q). A simple approximation formula is developed which yields the first 30 eigenvalues with an error less than 0.04%.
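The q → 0 connection rests on an elementary identity (our gloss; the paper's smooth representation P_nl(q) is its own construction):

```latex
% The log potential as the q -> 0 limit of (shifted, scaled) power potentials:
\lim_{q \to 0} \frac{r^{q} - 1}{q} = \ln r ,
\qquad
\operatorname{sgn}(q)\, r^{q} = \operatorname{sgn}(q) + |q| \ln r + O(q^{2}) .
```

So for small |q| the power potential is, up to an additive constant and an overall factor |q|, the log potential, which is why the two spectra join smoothly into one family.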
Calibration Tests of a German Log Rodmeter
NASA Technical Reports Server (NTRS)
Mottard, Elmo J.; Stillman, Everette R.
1949-01-01
A German log rodmeter of the pitot-static type was calibrated in Langley tank no. 1 at speeds up to 34 knots and angles of yaw from 0 deg to plus or minus 10 3/4 deg. The dynamic head approximated the theoretical head at 0 deg yaw but decreased as the yaw was increased. The static head was negative and in general became more negative with increasing speed and yaw. Cavitation occurred at speeds above 31 knots at 0 deg yaw and 21 knots at 10 3/4 deg yaw.
Categorical Data Analysis Using a Skewed Weibull Regression Model
NASA Astrophysics Data System (ADS)
Caron, Renault; Sinha, Debajyoti; Dey, Dipak; Polpo, Adriano
2018-03-01
In this paper, we present a Weibull link (skewed) model for categorical response data arising from binomial as well as multinomial models. We show that, for such types of categorical data, the most commonly used models (logit, probit and complementary log-log) can be obtained as limiting cases. We further compare the proposed model with some other asymmetrical models. The Bayesian as well as frequentist estimation procedures for binomial and multinomial data responses are presented in detail. The analysis of two data sets is performed to show the efficiency of the proposed model.
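To make the limiting-case claim concrete in one instance: writing the inverse link as a Weibull CDF evaluated at e^z (our parameterization, not necessarily the authors'), shape k = 1 recovers the complementary log-log link exactly.

```python
import numpy as np

def weibull_link_inv(z, k):
    """Inverse link p(z) = 1 - exp(-(exp(z))**k); illustrative parameterization."""
    return 1.0 - np.exp(-np.exp(z) ** k)

def cloglog_inv(z):
    """Standard complementary log-log inverse link."""
    return 1.0 - np.exp(-np.exp(z))

z = np.linspace(-3, 3, 7)
print(np.allclose(weibull_link_inv(z, k=1.0), cloglog_inv(z)))  # True: k=1 is cloglog
print(weibull_link_inv(z, k=0.5))  # k != 1 gives a differently skewed response curve
```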
NASA Astrophysics Data System (ADS)
Moe, Maxwell; Di Stefano, Rosanne
2017-06-01
We compile observations of early-type binaries identified via spectroscopy, eclipses, long-baseline interferometry, adaptive optics, common proper motion, etc. Each observational technique is sensitive to companions across a narrow parameter space of orbital periods P and mass ratios q = M_comp/M_1. After combining the samples from the various surveys and correcting for their respective selection effects, we find that the properties of companions to O-type and B-type main-sequence (MS) stars differ among three regimes. First, at short orbital periods P ≲ 20 days (separations a ≲ 0.4 au), the binaries have small eccentricities e ≲ 0.4, favor modest mass ratios ⟨q⟩ ≈ 0.5, and exhibit a small excess of twins q > 0.95. Second, the companion frequency peaks at intermediate periods log P (days) ≈ 3.5 (a ≈ 10 au), where the binaries have mass ratios weighted toward small values q ≈ 0.2-0.3 and follow a Maxwellian “thermal” eccentricity distribution. Finally, companions with long orbital periods log P (days) ≈ 5.5-7.5 (a ≈ 200-5000 au) are outer tertiary components in hierarchical triples and have a mass ratio distribution across q ≈ 0.1-1.0 that is nearly consistent with random pairings drawn from the initial mass function. We discuss these companion distributions and properties in the context of binary-star formation and evolution. We also reanalyze the binary statistics of solar-type MS primaries, taking into account that 30% ± 10% of single-lined spectroscopic binaries likely contain white dwarf companions instead of low-mass stellar secondaries. The mean frequency of stellar companions with q > 0.1 and log P (days) < 8.0 per primary increases from 0.50 ± 0.04 for solar-type MS primaries to 2.1 ± 0.3 for O-type MS primaries. We fit joint probability density functions f(M_1, q, P, e) …
Burnout Syndrome Among Health Care Students: The Role of Type D Personality.
Skodova, Zuzana; Lajciakova, Petra; Banovcinova, Lubica
2016-07-18
The aim of this study was to examine the effect of Type D personality, along with other personality traits (resilience and sense of coherence), on burnout syndrome and its counterpart, engagement, among students of nursing, midwifery, and psychology. A cross-sectional study was conducted on 97 university students (91.9% females; M age = 20.2 ± 1.49 years). A Type D personality subscale, School Burnout Inventory, Utrecht Work Engagement Scale, Sense of Coherence Questionnaire, and Baruth Protective Factor Inventory were used. Linear regression models, Student's t test, and Pearson's correlation analysis were employed. Negative affectivity, a dimension of Type D personality, was a significant personality predictor for burnout syndrome (β = .54; 95% CI = [0.33, 1.01]). The only significant personality predictor of engagement was a sense of coherence. Students who were identified as having Type D personality characteristics scored significantly higher on the burnout syndrome questionnaire (t = -2.58, p < .01). In health care professions, personality predictors should be addressed to prevent burnout. © The Author(s) 2016.
Kaur, Ishtdeep; Suthar, Nancy; Kaur, Jasmeen; Bansal, Yogita; Bansal, Gulshan
2016-10-01
Regulatory guidelines recommend systematic stability studies on a herbal product to establish its shelf life. In the present study, commercial extracts (Types I and II) and freshly prepared extract (Type III) of Centella asiatica were subjected to accelerated stability testing for 6 months. Control and stability samples were evaluated for organoleptics, pH, moisture, total phenolic content (TPC), asiatic acid, kaempherol, and high-performance thin layer chromatography fingerprints, and for antioxidant and acetylcholinesterase inhibitory activities. Markers and TPC and both the activities of each extract decreased in stability samples with respect to control. These losses were maximum in Type I extract and minimum in Type III extract. Higher stability of Type III extract than others might be attributed to the additional phytoconstituents and/or preservatives in it. Pearson correlation analysis of the results suggested that TPC, asiatic acid, and kaempferol can be taken as chemical markers to assess chemical and therapeutic shelf lives of herbal products containing Centella asiatica. © The Author(s) 2016.
Measurement of Harm Outcomes in Older Adults after Hospital Discharge: Reliability and Validity
Douglas, Alison; Letts, Lori; Eva, Kevin; Richardson, Julie
2012-01-01
Objectives. Defining and validating a measure of safety contributes to further validation of clinical measures. The objective was to define and examine the psychometric properties of the outcome “incidents of harm.” Methods. The Incident of Harm Caregiver Questionnaire was administered by telephone to caregivers of older adults discharged from hospital. Caregivers completed daily logs for one month, and medical charts were examined. Results. Test-retest reliability (n = 38) was high for the occurrence of an incident of harm (yes/no; kappa = 1.0) and the type of incident (agreement = 100%). Validation against daily logs found no disagreement regarding occurrence or types of incidents. Validation with medical charts found no disagreement regarding incident occurrence and disagreement in half regarding incident type. Discussion. The data support the Incident of Harm Caregiver Questionnaire as a reliable and valid estimation of incidents for this sample and are important to researchers as a method to measure safety when validating clinical measures. PMID:22649728
Clinical manifestations and management of four children with Pearson syndrome.
Tumino, Manuela; Meli, Concetta; Farruggia, Piero; La Spina, Milena; Faraci, Maura; Castana, Cinzia; Di Raimondo, Vincenzo; Alfano, Marivana; Pittalà, Annarita; Lo Nigro, Luca; Russo, Giovanna; Di Cataldo, Andrea
2011-12-01
Pearson marrow-pancreas syndrome is a fatal disorder mostly diagnosed during infancy and caused by mutations of mitochondrial DNA. We hereby report on four children affected by Pearson syndrome with hematological disorders at onset. The disease was fatal to three of them and the fourth one, who received hematopoietic stem cell transplantation, died of secondary malignancy. In this latter patient transplantation corrected hematological and non-hematological issues like metabolic acidosis, and we therefore argue that it could be considered as a useful option in an early stage of the disease. Copyright © 2011 Wiley Periodicals, Inc.
Pearson Syndrome, A Medical Diagnosis Difficult to Sustain Without Genetic Testing.
Sur, Lucia; Floca, Emanuela; Samasca, Gabriel; Lupan, Iulia; Aldea, Cornel; Sur, Genel
2018-03-01
Detection of sideroblastic anemia in a newborn may suggest developing Pearson syndrome. The prognosis of these patients is severe, and death occurs in the first 3 years of life, so it is important to find new ways of diagnosis. Case Presentation: In our patient, the diagnosis was supported only at the age of 5 months, highlighting the difficulties of diagnosis at an earlier age. A diagnosis of Pearson syndrome with neonatal onset is difficult or even impossible to sustain at that age; it can be confirmed and supported as the disease progresses.
Heavy-tailed fractional Pearson diffusions.
Leonenko, N N; Papić, I; Sikorskii, A; Šuvak, N
2017-11-01
We define heavy-tailed fractional reciprocal gamma and Fisher-Snedecor diffusions by a non-Markovian time change in the corresponding Pearson diffusions. Pearson diffusions are governed by the backward Kolmogorov equations with space-varying polynomial coefficients and are widely used in applications. The corresponding fractional reciprocal gamma and Fisher-Snedecor diffusions are governed by the fractional backward Kolmogorov equations and have heavy-tailed marginal distributions in the steady state. We derive the explicit expressions for the transition densities of the fractional reciprocal gamma and Fisher-Snedecor diffusions and strong solutions of the associated Cauchy problems for the fractional backward Kolmogorov equation.
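For orientation, the standard definitions behind the abstract (not specific to this paper's derivations): a Pearson diffusion has linear drift and an at-most-quadratic squared diffusion coefficient, and the fractional variant replaces the time derivative in the backward Kolmogorov equation with a Caputo derivative of order 0 < α < 1.

```latex
% Pearson diffusion (polynomial coefficients):
dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t, \qquad
\mu(x) = a_0 + a_1 x, \quad \sigma^2(x) = b_0 + b_1 x + b_2 x^2 .

% Backward Kolmogorov equation and its time-fractional analogue
% (D_t^\alpha a Caputo derivative, 0 < \alpha < 1):
\frac{\partial u}{\partial t}
  = \mu(x)\,\frac{\partial u}{\partial x}
  + \tfrac{1}{2}\,\sigma^{2}(x)\,\frac{\partial^{2} u}{\partial x^{2}},
\qquad
D_t^{\alpha} u
  = \mu(x)\,\frac{\partial u}{\partial x}
  + \tfrac{1}{2}\,\sigma^{2}(x)\,\frac{\partial^{2} u}{\partial x^{2}} .
```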
Shapira, Adi; Konopnicki, Muriel; Hammad-Saied, Mohammed; Shabad, Evelyn
2014-07-01
Pearson disease is a rare, usually fatal, mitochondrial disorder affecting primarily the bone marrow and the exocrine pancreas. We report a previously healthy 10-week-old girl who presented with profound macrocytic anemia followed by pancytopenia, synthetic liver dysfunction with liver steatosis, and metabolic acidosis with high lactate levels. She had no pancreatic involvement. Multiple cytoplasmic vacuoles in myelocytes and monocytes were seen upon microscopic evaluation of the bone marrow. Genetic analysis of the mitochondrial genome revealed a 5 kbp deletion, thus establishing the diagnosis of Pearson disease.
"Describing our whole experience": the statistical philosophies of W. F. R. Weldon and Karl Pearson.
Pence, Charles H
2011-12-01
There are two motivations commonly ascribed to historical actors for taking up statistics: to reduce complicated data to a mean value (e.g., Quetelet), and to take account of diversity (e.g., Galton). Different motivations will, it is assumed, lead to different methodological decisions in the practice of the statistical sciences. Karl Pearson and W. F. R. Weldon are generally seen as following directly in Galton's footsteps. I argue for two related theses in light of this standard interpretation, based on a reading of several sources in which Weldon, independently of Pearson, reflects on his own motivations. First, while Pearson does approach statistics from this "Galtonian" perspective, he is, consistent with his positivist philosophy of science, utilizing statistics to simplify the highly variable data of biology. Weldon, on the other hand, is brought to statistics by a rich empiricism and a desire to preserve the diversity of biological data. Secondly, we have here a counterexample to the claim that divergence in motivation will lead to a corresponding separation in methodology. Pearson and Weldon, despite embracing biometry for different reasons, settled on precisely the same set of statistical tools for the investigation of evolution. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Piao, Lin; Fu, Zuntao
2016-11-01
Cross-correlation between pairs of variables has a multi-time-scale character, and it can be totally different on different time scales (changing from positive correlation to negative), e.g., the associations between mean air temperature and relative humidity over regions to the east of the Taihang mountains in China. Therefore, how to correctly unveil these correlations on different time scales is of great importance, since we generally do not know in advance whether the correlation varies with scale. Here, we compare two methods, Detrended Cross-Correlation Analysis (DCCA for short) and Pearson correlation, in quantifying scale-dependent correlations directly from raw observed records and from artificially generated sequences with known cross-correlation features. The studies show that (1) DCCA-related methods can indeed quantify scale-dependent correlations, but the Pearson method cannot; (2) the correlation features from DCCA-related methods are robust to contaminating noise, whereas the results from the Pearson method are sensitive to noise; (3) the scale-dependent correlation results from DCCA-related methods are robust to the amplitude ratio between slow and fast components, while the Pearson method may be sensitive to this ratio. All these features indicate that DCCA-related methods have advantages in correctly quantifying scale-dependent correlations arising from different physical processes.
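A compact sketch of the scale-dependent coefficient the abstract refers to: the DCCA cross-correlation coefficient ρ_DCCA(s), i.e. the detrended covariance normalized by the two DFA fluctuation functions. Window handling (non-overlapping, linear detrending) and the synthetic example are our choices.

```python
import numpy as np

def rho_dcca(x, y, s):
    """DCCA cross-correlation coefficient at window scale s (a minimal sketch)."""
    X = np.cumsum(x - np.mean(x))   # integrated profile of x
    Y = np.cumsum(y - np.mean(y))   # integrated profile of y
    n_win = len(x) // s
    t = np.arange(s)
    f2_xy, f2_xx, f2_yy = [], [], []
    for w in range(n_win):
        seg = slice(w * s, (w + 1) * s)
        rx = X[seg] - np.polyval(np.polyfit(t, X[seg], 1), t)  # detrended residuals
        ry = Y[seg] - np.polyval(np.polyfit(t, Y[seg], 1), t)
        f2_xy.append(np.mean(rx * ry))
        f2_xx.append(np.mean(rx * rx))
        f2_yy.append(np.mean(ry * ry))
    return np.mean(f2_xy) / np.sqrt(np.mean(f2_xx) * np.mean(f2_yy))

# Two noisy signals sharing an anti-correlated slow component:
rng = np.random.default_rng(1)
n = 4096
slow = np.cumsum(rng.normal(size=n)) * 0.05
x = slow + rng.normal(size=n)
y = -slow + rng.normal(size=n)
for s in (8, 64, 512):
    print(s, round(rho_dcca(x, y, s), 3))  # drifts negative as the scale grows
```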
NASA Astrophysics Data System (ADS)
Shojaeefard, Mohammad Hasan; Khalkhali, Abolfazl; Yarmohammadisatri, Sadegh
2017-06-01
The main purpose of this paper is to propose a new method for designing the Macpherson suspension, based on Sobol indices expressed in terms of Pearson correlations, which determine the importance of each member for the behaviour of the vehicle suspension. The formulation of the dynamic analysis of the Macpherson suspension system is developed using the suspension members as modified links in order to achieve the desired kinematic behaviour. The mechanical system is replaced with equivalent constrained links, and kinematic laws are then utilised to obtain a new modified geometry of the Macpherson suspension. The equivalent mechanism increases the speed of analysis and reduces its complexity. The ADAMS/CAR software is utilised to simulate a full vehicle, a Renault Logan car, in order to analyse the accuracy of the modified geometry model. An experimental 4-poster test rig is used to validate both the ADAMS/CAR simulation and the analytical geometry model. The Pearson correlation coefficient is applied to analyse the sensitivity of each suspension member with respect to vehicle objective functions such as sprung-mass acceleration. The estimation of the Pearson correlation coefficient between variables is also analysed in this method. It is found that the Pearson correlation coefficient is an efficient tool for analysing the vehicle suspension, leading to a better design of the Macpherson suspension system.
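A generic illustration of the screening step described above, using Pearson correlations between sampled design variables and an objective function; the variable names, ranges, and the toy objective are hypothetical, not the paper's vehicle model.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 1000
# Hypothetical suspension design variables (units arbitrary):
spring_rate = rng.uniform(20, 60, N)
damper_coef = rng.uniform(1, 4, N)
arm_length = rng.uniform(0.30, 0.45, N)

# Hypothetical objective: a sprung-mass acceleration proxy, not the paper's model.
accel = 0.8 * spring_rate - 5.0 * damper_coef + rng.normal(0, 3, N)

# Rank variables by the strength of their Pearson correlation with the objective:
for name, v in [("spring_rate", spring_rate),
                ("damper_coef", damper_coef),
                ("arm_length", arm_length)]:
    r = np.corrcoef(v, accel)[0, 1]
    print(f"{name:12s} Pearson r = {r:+.2f}")
```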
Gamma-Ray Burst Jet Breaks Revisited
NASA Astrophysics Data System (ADS)
Wang, Xiang-Gao; Zhang, Bing; Liang, En-Wei; Lu, Rui-Jing; Lin, Da-Bin; Li, Jing; Li, Long
2018-06-01
Gamma-ray burst (GRB) collimation has been inferred from the observation of achromatic steepening in GRB light curves, known as jet breaks. Identifying a jet break in a GRB afterglow light curve allows a measurement of the jet opening angle and the true energetics of GRBs. In this paper, we re-investigate this problem using a large sample of GRBs that have an optical jet break that is consistent with being achromatic in the X-ray band. Our sample includes 99 GRBs from 1997 February to 2015 March that have optical and, for Swift GRBs, X-ray light curves consistent with the jet break interpretation. Out of the 99 GRBs we have studied, 55 GRBs are found to have temporal and spectral behaviors both before and after the break that are consistent with the theoretical predictions of the jet break models. These include 53 long/soft (Type II) and 2 short/hard (Type I) GRBs. Only 1 GRB is classified as a candidate jet break with energy injection. Another 41 and 3 GRBs are classified as candidates with lower and upper limits on the jet break time, respectively. Most jet breaks occur around 90 ks, with a typical opening angle θ_j = (2.5 ± 1.0)°. This gives a typical beaming correction factor f_b^-1 ∼ 1000 for Type II GRBs, suggesting an even higher total GRB event rate density in the universe. Both isotropic and jet-corrected energies have a wide span in their distributions: log(Eγ,iso/erg) = 53.11 with σ = 0.84; log(EK,iso/erg) = 54.82 with σ = 0.56; log(Eγ/erg) = 49.54 with σ = 1.29; and log(EK/erg) = 51.33 with σ = 0.58. We also investigate several empirical correlations (Amati, Frail, Ghirlanda, and Liang–Zhang) previously discussed in the literature. We find that in general most of these relations are less tight than before. The existence of early jet breaks, and hence small opening angle jets, which were detected in the Swift era, is most likely the source of scatter. If one limits the sample to jet breaks later than 10^4 s, the Liang–Zhang relation remains tight and the Ghirlanda relation still exists. These relations are derived from Type II GRBs, and Type I GRBs usually deviate from them.
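The quoted beaming correction follows from the solid-angle fraction f_b = 1 − cos θ_j ≈ θ_j²/2 covered by a two-sided jet; with the sample's typical θ_j = 2.5°:

```python
import math

theta_j = math.radians(2.5)      # typical jet opening angle from the sample
f_b = 1.0 - math.cos(theta_j)    # fraction of the sky covered by the two-sided jet
print(1.0 / f_b)                 # about 1.05e3, i.e. f_b^-1 ~ 1000
```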
NASA Astrophysics Data System (ADS)
Nyssen, Jan; Gebreslassie, Seifu; Assefa, Romha; Deckers, Jozef; Guyassa, Etefa; Poesen, Jean; Frankl, Amaury
2017-04-01
Many thousands of gabion check dams have been installed to control gully erosion in Ethiopia, but several challenges remain, such as gabion failure in ephemeral streams with coarse bed load, which abrades the gabions at the chute step. As an alternative to gabion check dams in torrents with coarse bed load, boulder-faced log dams were conceived, installed transversally across torrents and tested (n = 30). For this, logs (22-35 cm across) were embedded in the banks of torrents, 0.5-1 m above the bed, and their upstream sides were faced with boulders (0.3-0.7 m across). Similar to gabion check dams, boulder-faced log dams lead to temporary ponding, spreading of peak flow over the entire channel width, and sediment deposition. Results of testing under extreme flow conditions (including two storms with return periods of 5.6 and 7 years) show that 18 dams resisted strong floods. Beyond certain flood thresholds (represented by proxies such as Strahler's stream order, catchment area, D95, or channel width), 11 log dams were completely destroyed. Smallholder farmers see much potential in this type of structure for controlling first-order torrents with coarse bed load, since the technique is cost-effective and easy to install.
Kotwal, Grishma; Harrison, Mark A.; Law, S. Edward; Harrison, Judy A.
2013-01-01
Human noroviruses are major etiologic agents of epidemic gastroenteritis. Outbreaks are often accompanied by contamination of environmental surfaces, but since these viruses cannot be routinely propagated in laboratory cultures, their response to surface disinfectants is predicted by using surrogates, such as murine norovirus 1 (MNV-1). This study compared the virucidal efficacies of various liquid treatments (three sanitizer liquids: 5% levulinic acid plus 2% SDS [LEV/SDS], 200 ppm chlorine, and an isopropanol-based quaternary ammonium compound [Alpet D2]; and two control liquids: sterile tap water and sterile tap water plus 2% SDS) when delivered to MNV-1-inoculated stainless steel surfaces by conventional hydraulic or air-assisted, induction-charged (AAIC) electrostatic spraying or by wiping with impregnated towelettes. For the spray treatments, LEV/SDS proved effective when applied with hydraulic and AAIC electrostatic spraying, providing virus reductions of 2.71 and 1.66 log PFU/ml, respectively. Alpet D2 provided a 2.23-log PFU/ml reduction with hydraulic spraying, outperforming chlorine (1.16-log PFU/ml reduction). Chlorine and LEV/SDS were equally effective as wipes, reducing the viral load by 7.05 log PFU/ml. Controls reduced the viral load by <1 log with spraying applications and by >3 log PFU/ml with wiping. Results indicated that both sanitizer type and application method should be carefully considered when choosing a surface disinfectant to best prevent and control environmental contamination by noroviruses. PMID:23263949
Mohammad Al Alfy, Ibrahim
2018-01-01
A set of three pads was constructed from primary materials (sand, gravel and cement) to calibrate the gamma-gamma density tool. A simple equation was devised to convert the qualitative cps values to quantitative g/cc values. The neutron-neutron porosity tool measures qualitative cps porosity values; a direct equation was derived to calculate the porosity percentage from the cps porosity values. The cement-bond log illustrates the quantity of cement surrounding well pipes. Interpreting this log is complicated by various parameters, such as the drilled well diameter and the internal diameter, thickness and type of the well pipes. An equation was developed to calculate the cement percentage at standard conditions; this equation can be modified according to varying conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.
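The conversion equation itself is not given in the abstract; as a generic illustration of how a pad-based calibration turns cps into g/cc, a two-point linear fit could look like this (the pad readings below are hypothetical, not the paper's values):

```python
def density_from_cps(cps, cal_points):
    """Linear interpolation between two calibration-pad readings.
    cal_points: [(cps_1, rho_1), (cps_2, rho_2)] measured on pads of known density."""
    (c1, r1), (c2, r2) = cal_points
    slope = (r2 - r1) / (c2 - c1)
    return r1 + slope * (cps - c1)

# Gamma-gamma density tools count fewer scattered photons in denser rock,
# so the slope is negative for this hypothetical calibration pair:
pads = [(12000.0, 1.80), (6000.0, 2.60)]
print(density_from_cps(9000.0, pads))  # 2.2 g/cc
```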
Campbell, Mary S.; Kahle, Erin M.; Celum, Connie; Lingappa, Jairam R.; Kapiga, Saidi; Mujugira, Andrew; Mugo, Nelly R.; Fife, Kenneth H.; Mullins, James I.; Baeten, Jared M.; Celum, Connie; Wald, Anna; Lingappa, Jairam; Baeten, Jared M.; Campbell, Mary S.; Corey, Lawrence; Coombs, Robert W.; Hughes, James P.; Magaret, Amalia; McElrath, M. Juliana; Morrow, Rhoda; Mullins, James I.; Coetzee, David; Fife, Kenneth; Were, Edwin; Essex, Max; Makhema, Joseph; Katabira, Elly; Ronald, Allan; Allen, Susan; Kayitenkore, Kayitesi; Karita, Etienne; Bukusi, Elizabeth; Cohen, Craig; Allen, Susan; Kanweka, William; Allen, Susan; Vwalika, Bellington; Kapiga, Saidi; Manongi, Rachel; Farquhar, Carey; John-Stewart, Grace; Kiarie, James; Allen, Susan; Inambao, Mubiana; Farm, Orange; Delany-Moretlwe, Sinead; Rees, Helen; de Bruyn, Guy; Gray, Glenda; McIntyre, James; Mugo, Nelly Rwamba
2013-01-01
Recent data suggest that infection with human immunodeficiency virus type 1 (HIV-1) subtype C results in prolonged high-level viremia (>5 log10 copies/mL) during early infection. We examined the relationship between HIV-1 subtype and plasma viremia among 153 African seroconverters. Mean setpoint viral loads were similar for C and non-C subtypes: 4.36 vs 4.42 log10 copies/mL (P = .61). The proportion of subtype C–infected participants with viral loads >5 log10 copies/mL was not greater than the proportion for those with non-C infection. Our data do not support the hypothesis that higher early viral load accounts for the rapid spread of HIV-1 subtype C in southern Africa. PMID:23315322
Formation and evolution of dwarf elliptical galaxies. I. Structural and kinematical properties
NASA Astrophysics Data System (ADS)
de Rijcke, S.; Michielsen, D.; Dejonghe, H.; Zeilinger, W. W.; Hau, G. K. T.
2005-08-01
This paper is the first in a series in which we present the results of an ESO Large Program on the kinematics and internal dynamics of dwarf elliptical galaxies (dEs). We obtained deep major and minor axis spectra of 15 dEs and broad-band imaging of 22 dEs. Here, we investigate the relations between the parameters that quantify the structure (B-band luminosity L_B, half-light radius R_e, and mean surface brightness within the half-light radius I_e = L_B / (2π R_e^2)) and internal dynamics (velocity dispersion σ) of dEs. We confront predictions of the currently popular theories for dE formation and evolution with the observed position of dEs in log L_B vs. log σ, log L_B vs. log R_e, log L_B vs. log I_e, and log R_e vs. log I_e diagrams and in the (log σ, log R_e, log I_e) parameter space in which bright and intermediate-luminosity elliptical galaxies and bulges of spirals define a Fundamental Plane (FP). In order to achieve statistical significance and to cover a parameter interval that is large enough for reliable inferences to be made, we merge the data set presented in this paper with two other recently published, equally large data sets. We show that the dE sequences in the various univariate diagrams are disjunct from those traced by bright and intermediate-luminosity elliptical galaxies and bulges of spirals. It appears that semi-analytical models (SAMs) that incorporate quiescent star formation with an essentially z-independent star-formation efficiency, combined with post-merger starbursts and the dynamical response after supernova-driven gas-loss, are able to reproduce the position of the dEs in the various univariate diagrams. SAMs with star-formation efficiencies that rise as a function of redshift are excluded since they leave the observed sequences traced by dEs virtually unpopulated. dEs tend to lie above the FP and the FP residual declines as a function of luminosity. Again, models that take into account the response after supernova-driven mass-loss correctly predict the position of dEs in the (log σ, log R_e, log I_e) parameter space as well as the trend of the FP residual as a function of luminosity. While these findings are clearly a success for the hierarchical-merging picture of galaxy formation, they do not necessarily invalidate the alternative “harassment” scenario, which posits that dEs stem from perturbed and stripped late-type disk galaxies that entered clusters and groups of galaxies about 5 Gyr ago.
March, Jordon K; Pratt, Michael D; Lowe, Chinn-Woan; Cohen, Marissa N; Satterfield, Benjamin A; Schaalje, Bruce; O'Neill, Kim L; Robison, Richard A
2015-01-01
This study investigated (1) the susceptibility of Bacillus anthracis (Ames strain), Bacillus subtilis (ATCC 19659), and Clostridium sporogenes (ATCC 3584) spores to commercially available peracetic acid (PAA)- and glutaraldehyde (GA)-based disinfectants, (2) the effects that heat-shocking spores after treatment with these disinfectants has on spore recovery, and (3) the timing of heat-shocking after disinfectant treatment that promotes the optimal recovery of spores deposited on carriers. Suspension tests were used to obtain inactivation kinetics for the disinfectants against three spore types. The effects of heat-shocking spores after disinfectant treatment were also determined. Generalized linear mixed models were used to estimate 6-log reduction times for each spore type, disinfectant, and heat treatment combination. Reduction times were compared statistically using the delta method. Carrier tests were performed according to AOAC Official Method 966.04 and a modified version that employed immediate heat-shocking after disinfectant treatment. Carrier test results were analyzed using Fisher's exact test. PAA-based disinfectants had significantly shorter 6-log reduction times than the GA-based disinfectant. Heat-shocking B. anthracis spores after PAA treatment resulted in significantly shorter 6-log reduction times. Conversely, heat-shocking B. subtilis spores after PAA treatment resulted in significantly longer 6-log reduction times. Significant interactions were also observed between spore type, disinfectant, and heat treatment combinations. Immediately heat-shocking spore carriers after disinfectant treatment produced greater spore recovery. Sporicidal activities of disinfectants were not consistent across spore species. The effects of heat-shocking spores after disinfectant treatment were dependent on both disinfectant and spore species. Caution must be used when extrapolating sporicidal data of disinfectants from one spore species to another. Heat-shocking provides a more accurate picture of spore survival for only some disinfectant/spore combinations. Collaborative studies should be conducted to further examine a revision of AOAC Official Method 966.04 relative to heat-shocking. PMID:26185111
Ota, Koki; Kikuchi, Yuichiro; Imamura, Kentaro; Kita, Daichi; Yoshikawa, Kouki; Saito, Atsushi; Ishihara, Kazuyuki
2017-02-01
Extracytoplasmic function (ECF) sigma factors play an important role in the bacterial response to various environmental stresses. Porphyromonas gingivalis, a prominent etiological agent in human periodontitis, possesses six putative ECF sigma factors. So far, information is limited on the ECF sigma factor, PGN_0319. The aim of this study was to investigate the role of PGN_0319 (SigCH) of P. gingivalis, focusing on the regulation of hmuY and hmuR, which encode outer-membrane proteins involved in hemin utilization, and cdhR, a transcriptional regulator of hmuYR. First, we evaluated the gene expression profile of the sigCH mutant by DNA microarray. Among the genes with altered expression levels, those involved in hemin utilization were downregulated in the sigCH mutant. To verify the microarray data, quantitative reverse transcription PCR analysis was performed. The RNA samples used were obtained from bacterial cells grown to early-log phase, in which sigCH expression in the wild type was significantly higher than that in mid-log and late-log phases. The expression levels of hmuY, hmuR, and cdhR were significantly decreased in the sigCH mutant compared to wild type. Transcription of these genes was restored in a sigCH complemented strain. Compared to the wild type, the sigCH mutant showed reduced growth in log phase under hemin-limiting conditions. Electrophoretic mobility shift assays showed that recombinant SigCH protein bound to the promoter region of hmuY and cdhR. These results suggest that SigCH plays an important role in the early growth of P. gingivalis, and directly regulates cdhR and hmuYR, thereby playing a potential role in the mechanisms of hemin utilization by P. gingivalis. Copyright © 2016 Elsevier Ltd. All rights reserved.
Jinggut, Tajang; Yule, Catherine M; Boyero, Luz
2012-10-15
In common with most of Borneo, the Bakun region of Sarawak is currently subject to heavy deforestation mainly due to logging and, to a lesser extent, traditional slash-and-burn farming practices. This has the potential to affect stream ecosystems, which are integrators of environmental change in the surrounding terrestrial landscape. This study evaluated the effects of both types of deforestation by using functional and structural indicators (leaf litter decomposition rates and associated detritivores or 'shredders', respectively) to compare a fundamental ecosystem process, leaf litter decomposition, within logged, farmed and pristine streams. Slash-and-burn agricultural practices increased the overall rate of decomposition despite a decrease in shredder species richness (but not shredder abundance) due to increased microbial decomposition. In contrast, decomposition by microbes and invertebrates was slowed down in the logged streams, where shredders were less abundant and less species rich. This study suggests that shredder communities are less affected by traditional agricultural farming practices, while modern mechanized deforestation has an adverse effect on both shredder communities and leaf breakdown. Copyright © 2012 Elsevier B.V. All rights reserved.
Estimating the Aqueous Solubility of Pharmaceutical Hydrates
Franklin, Stephen J.; Younis, Usir S.; Myrdal, Paul B.
2016-01-01
Estimation of crystalline solute solubility is well documented throughout the literature. However, these models typically consider the anhydrous crystal form, which is not always the most stable crystal form in water. In this study, an equation that predicts the aqueous solubility of a hydrate is presented. This research attempts to extend the utility of the ideal solubility equation by incorporating the desolvation energetics of the hydrated crystal. Similar to the ideal solubility equation, which accounts for the energetics of melting, this model approximates the entropy of dehydration by the entropy of vaporization of water. Aqueous solubilities, dehydration and melting temperatures, and log P values were collected experimentally and from the literature. The data set includes different hydrate types and a range of log P values. Three models are evaluated; the most accurate approximates the entropy of dehydration (ΔSd) by the entropy of vaporization (ΔSvap) of water and utilizes onset dehydration and melting temperatures in combination with log P. With this model, the average absolute error for the prediction of the solubility of 14 compounds was 0.32 log units. PMID:27238488
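For context, a standard reference point (not this paper's final model): Yalkowsky's General Solubility Equation estimates the aqueous solubility S (mol/L) of a non-electrolyte crystal from its melting point and log P; the study's extension adds an analogous dehydration term built from the onset dehydration temperature and the entropy of vaporization of water.

```latex
% General Solubility Equation (Yalkowsky); S in mol/L, T_m in degrees Celsius:
\log S = 0.5 - 0.01\,(T_m - 25) - \log P
```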
A composite lithology log while drilling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tannenbaum, E.; Sutcliffe, B.; Franks, A.
A new method for producing a computerized composite lithology log (CLL) while drilling by integrating MWD (measurement while drilling) and surface data is described. The CLL integrates three types of data (MWD mechanical, MWD geophysical, and surface cuttings) acquired during drilling, in three time stages: (1) Real Time. MWD drilling mechanical data including the rate of penetration and the downhole torque. This stage would provide bed boundaries and some inferred lithology. This would assist the driller with immediate drilling decisions and determine formation tops for coring, casing point, and correlation. (2) MWD Time. Recomputation of the above by adding MWD geophysical data (gamma-ray, resistivity, neutron-density). This stage would upgrade the lithology inference, and give higher resolution of bed boundaries. (3) Lag Time. Detailed analysis of surface cuttings to confirm the inferred lithologies. This last input will result in a high-quality CLL with accurate lithologies and bed boundaries. The log will serve the geologist as well as the driller, petrophysicist, and reservoir engineer. It will form the basis for more comprehensive formation evaluation while drilling by adding hydrocarbon and MWD log data.
Kasurinen, Stefanie; Jalava, Pasi I; Happo, Mikko S; Sippula, Olli; Uski, Oskari; Koponen, Hanna; Orasche, Jürgen; Zimmermann, Ralf; Jokiniemi, Jorma; Hirvonen, Maija-Riitta
2017-05-01
According to the World Health Organization, particulate emissions from the combustion of solid fuels caused more than 110,000 premature deaths worldwide in 2010. Log wood combustion is the most prevalent form of residential biomass heating in developed countries, but it is unknown how the type of wood logs used in furnaces influences the chemical composition of the particulate emissions and their toxicological potential. We burned logs of birch, beech and spruce, which are commonly used as firewood in Central and Northern Europe, in a modern masonry heater and compared the emissions to those from an automated pellet boiler fired with softwood pellets. We determined the chemical composition (elements, ions, and carbonaceous compounds) of the particulate emissions with a diameter of less than 1 µm and tested their cytotoxicity, genotoxicity, inflammatory potential, and ability to induce oxidative stress in a human lung epithelial cell line. The chemical composition of the samples differed significantly, especially with regard to the carbonaceous and metal contents. The toxic effects in our tested endpoints also varied considerably between each of the three log wood combustion samples, as well as between the log wood combustion samples and the pellet combustion sample. The difference in the toxicological potential of the samples across the various endpoints indicates the involvement of different pathways of toxicity depending on the chemical composition. All three emission samples from the log wood combustions were considerably more toxic in all endpoints than the emissions from the pellet combustion. © 2016 Wiley Periodicals, Inc. Environ Toxicol 32: 1487-1499, 2017.
Impact of virus surface characteristics on removal mechanisms within membrane bioreactors.
Chaudhry, Rabia M; Holloway, Ryan W; Cath, Tzahi Y; Nelson, Kara L
2015-11-01
In this study, we investigated the removal of viruses with similar size and shape but with different external surface capsid proteins by a bench-scale membrane bioreactor (MBR). The goal was to determine which virus removal mechanisms (retention by clean backwashed membrane, retention by cake layer, attachment to biomass, and inactivation) were most impacted by differences in the virus surface properties. Seven bench-scale MBR experiments were performed using mixed liquor wastewater sludge that was seeded with three lab-cultured bacteriophages with icosahedral capsids of ∼30 nm diameter (MS2, phiX174, and fr). The operating conditions were designed to simulate those at a reference, full-scale MBR facility. The virus removal mechanism most affected by virus type was attachment to biomass (removals of 0.2 log for MS2, 1.2 log for phiX174, and 3 log for fr). These differences in removal could not be explained by electrostatic interactions, as the three viruses had similar net negative charge when suspended in MBR permeate. Removals by the clean backwashed membrane (less than 1 log) and cake layer (∼0.6 log) were similar for the three viruses. A comparison between the clean-membrane removals seen at bench scale using a virgin membrane (∼1 log) and at full scale using 10-year-old membranes (∼2-3 logs) suggests that irreversible fouling (material accumulated on the membrane over years of operation that cleaning cannot remove) also contributes to virus removal. This study enhances the current mechanistic understanding of virus removal in MBRs and will contribute to more reliable treatment for water reuse applications. Copyright © 2015 Elsevier Ltd. All rights reserved.
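The log-removal arithmetic underlying these figures is simple; the sketch below shows how per-mechanism credits (clean membrane, cake layer, attachment to biomass) sum to an overall log removal value, using the approximate MS2 figures quoted above as example inputs.

    import math

    def log_removal(c_in, c_out):
        """Log10 removal value from influent/effluent concentrations."""
        return math.log10(c_in / c_out)

    # per-mechanism credits for MS2 from the study (approximate)
    credits = {"clean membrane": 1.0, "cake layer": 0.6, "biomass": 0.2}
    total = sum(credits.values())
    print(f"total ~{total:.1f} log, i.e. {100 * (1 - 10 ** -total):.2f}% removal")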
Accuracy and precision of Legionella isolation by US laboratories in the ELITE program pilot study.
Lucas, Claressa E; Taylor, Thomas H; Fields, Barry S
2011-10-01
A pilot study for the Environmental Legionella Isolation Techniques Evaluation (ELITE) Program, a proficiency testing scheme for US laboratories that culture Legionella from environmental samples, was conducted September 1, 2008 through March 31, 2009. Participants (n=20) processed panels consisting of six sample types: pure and mixed positive, pure and mixed negative, pure and mixed variable. The majority (93%) of all samples (n=286) were correctly characterized, with 88.5% of Legionella-positive samples and 100% of negative samples identified correctly. Variable samples were incorrectly identified as negative in 36.9% of reports. For all samples reported positive (n=128), participants underestimated the cfu/ml by a mean of 1.25 logs, with a standard deviation of 0.78 logs, a standard error of 0.07 logs, and a range of 3.57 logs compared to the CDC re-test value. Centering results around the interlaboratory mean yielded a standard deviation of 0.65 logs, a standard error of 0.06 logs, and a range of 3.22 logs. Sampling protocol, treatment regimen, culture procedure, and laboratory experience did not significantly affect the accuracy or precision of reported concentrations. Qualitative and quantitative results from the ELITE pilot study were similar to reports from a corresponding proficiency testing scheme available in the European Union, indicating that these results are probably valid for most environmental laboratories worldwide. The large enumeration error observed suggests that the need for remediation of a water system should not be determined solely by the concentration of Legionella observed in a sample, since that value is likely to underestimate the true level of contamination. Published by Elsevier Ltd.
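A minimal sketch of the enumeration-error statistics reported above: given paired participant and reference counts (the values below are hypothetical), it computes the mean, standard deviation, standard error, and range of the log10 errors, then the same spread after centering on the interlaboratory mean.

    import numpy as np

    reported = np.array([120.0, 45.0, 800.0, 15.0])     # cfu/ml, hypothetical
    reference = np.array([2400.0, 900.0, 9000.0, 600.0])

    err = np.log10(reported) - np.log10(reference)      # negative = underestimate
    print(err.mean(), err.std(ddof=1),
          err.std(ddof=1) / np.sqrt(err.size), err.max() - err.min())

    centered = err - err.mean()                         # interlaboratory centering
    print(centered.std(ddof=1))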
Analyser-based phase contrast image reconstruction using geometrical optics.
Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A
2007-07-21
Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 µm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.
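For readers unfamiliar with the Pearson type VII form, a minimal curve-fitting sketch follows; the synthetic rocking curve, angle scale, and starting values are illustrative, with w parameterized as the half width at half maximum.

    import numpy as np
    from scipy.optimize import curve_fit

    def pearson_vii(x, i0, x0, w, m):
        # symmetric Pearson type VII; reduces to a Lorentzian at m = 1
        # and approaches a Gaussian as m grows large
        return i0 * (1.0 + ((x - x0) / w) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** -m

    x = np.linspace(-20.0, 20.0, 401)        # analyser angle, arbitrary units
    y = pearson_vii(x, 1.0, 0.5, 4.0, 1.8)
    y += np.random.default_rng(0).normal(0.0, 0.01, x.size)

    popt, _ = curve_fit(pearson_vii, x, y, p0=[1.0, 0.0, 5.0, 2.0])
    print(popt)                               # fitted [i0, x0, w, m]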
Novel four-sided neural probe fabricated by a thermal lamination process of polymer films.
Shin, Soowon; Kim, Jae-Hyun; Jeong, Joonsoo; Gwon, Tae Mok; Lee, Seung-Hee; Kim, Sung June
2017-02-15
Ideally, neural probes should have channels with a three-dimensional (3-D) configuration to record the activities of 3-D neural circuits. Many types of 3-D neural probes have been developed; however, most of them were designed as an array of multiple shanks with electrodes located along one side of the shanks. We developed a novel liquid crystal polymer (LCP)-based neural probe with four-sided electrodes. This probe has electrodes on four sides of the shank, i.e., the front, back, and two sidewalls. To generate the proposed configuration of the electrodes, we used a thermal lamination process involving LCP films and laser micromachining. The proposed four-sided neural probe was used to perform successful in vivo multichannel neural recording in the mouse primary somatosensory cortex. The multichannel recording showed that the proposed four-sided neural probe can record spiking activities from a more diverse neuronal population than single-sided probes. This was confirmed by a pairwise Pearson correlation coefficient (Pearson's r) analysis and a cross-correlation analysis. The developed four-sided neural probe can be used to record various signals from a complex neural network. Copyright © 2016 Elsevier B.V. All rights reserved.
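The diversity claim rests on pairwise correlations between channels. Below is a minimal sketch of a Pearson's r analysis on binned spike counts; the data, channel count, and bin size are synthetic stand-ins for real recordings.

    import numpy as np

    rng = np.random.default_rng(1)
    # spike counts in 100 ms bins for 4 channels, synthetic
    counts = rng.poisson(lam=[[2.0], [2.0], [5.0], [1.0]], size=(4, 600))

    r = np.corrcoef(counts)              # 4 x 4 pairwise Pearson matrix
    iu = np.triu_indices_from(r, k=1)
    print(r[iu])                         # lower pairwise r suggests a more
                                         # diverse recorded population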
Identifying the Source of Misfit in Item Response Theory Models.
Liu, Yang; Maydeu-Olivares, Alberto
2014-01-01
When an item response theory model fails to fit adequately, the items for which the model provides a good fit and those for which it does not must be determined. To this end, we compare the performance of several fit statistics for item pairs with known asymptotic distributions under maximum likelihood estimation of the item parameters: (a) a mean and variance adjustment to bivariate Pearson's X², (b) a bivariate subtable analog to Reiser's (1996) overall goodness-of-fit test, (c) a z statistic for the bivariate residual cross product, and (d) Maydeu-Olivares and Joe's (2006) M2 statistic applied to bivariate subtables. The unadjusted Pearson's X² with heuristically determined degrees of freedom is also included in the comparison. For binary and ordinal data, our simulation results suggest that the z statistic has the best Type I error and power behavior among all the statistics under investigation when the observed information matrix is used in its computation. However, if one has to use the cross-product information, the mean and variance adjusted X² is recommended. We illustrate the use of pairwise fit statistics in 2 real-data examples and discuss possible extensions of the current research in various directions.
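As a toy illustration of a pairwise fit statistic (not the adjusted statistics compared in the article), the sketch below computes an unadjusted Pearson X² between observed counts and model-implied probabilities for one item pair's 2 x 2 subtable; the counts and probabilities are hypothetical.

    import numpy as np

    def pairwise_x2(obs_counts, model_probs):
        """Unadjusted Pearson X^2 for one item pair's contingency table."""
        obs = np.asarray(obs_counts, dtype=float)
        n = obs.sum()
        expected = n * np.asarray(model_probs, dtype=float)
        return ((obs - expected) ** 2 / expected).sum()

    obs = [[220, 80], [90, 110]]             # hypothetical 2x2 counts
    probs = [[0.42, 0.18], [0.20, 0.20]]     # model-implied cell probabilities
    print(pairwise_x2(obs, probs))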
Stability of physical activity, fitness components and diet quality indices.
Mertens, E; Clarys, P; Mullie, P; Lefevre, J; Charlier, R; Knaeps, S; Huybrechts, I; Deforche, B
2017-04-01
Regular physical activity (PA), a high level of fitness and a high diet quality are positively associated with health. However, information about the stability of fitness components and diet quality indices is limited. This study aimed to evaluate the stability of those parameters. The study includes 652 adults (mean (SD) age at follow-up: men 57.56 (10.28) years; women 55.90 (8.34) years) who participated in 2002-2004 and returned for follow-up at the Policy Research Centre Leuven in 2012-2014. Minutes of sport per day and physical activity level (PAL) were calculated from the Flemish Physical Activity Computerized Questionnaire. Cardiorespiratory fitness (CRF), morphological fitness (MORF; body mass index and waist circumference) and metabolic fitness (METF; blood cholesterol and triglycerides) were used as fitness components. Diet quality indices (Healthy Eating Index-2010 (HEI), Diet Quality Index (DQI), Mediterranean Diet Score (MDS)) were calculated from a diet record. Tracking coefficients were calculated using Pearson/Spearman correlation coefficients (r_Pearson) and intra-class correlation coefficients (r_ICC). In both men (r_Pearson = r_ICC = 0.51) and women (r_Pearson = 0.62 and r_ICC = 0.60), PAL showed good stability, while minutes of sport remained stable in women (r_Pearson = r_ICC = 0.57) but less so in men (r_Pearson = r_ICC = 0.45). Most fitness components remained stable (r ≥ 0.50), except some METF components in women. In general, the diet quality indices and their components were unstable (r < 0.50). PAL and the majority of the fitness components remained stable, while diet quality was unstable over 10 years. For unstable parameters such as diet quality, measurements are needed at both time points in prospective research.
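A minimal sketch of how such tracking coefficients can be computed for one variable measured at two time points; the data are synthetic, and the ICC shown is a simple one-way random-effects form, which may differ from the exact variant used in the study.

    import numpy as np

    def icc_oneway(t1, t2):
        """One-way random-effects ICC for two repeated measurements."""
        data = np.column_stack([t1, t2])
        n, k = data.shape
        ms_between = k * np.var(data.mean(axis=1), ddof=1)
        ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() \
                    / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    rng = np.random.default_rng(2)
    pal_2003 = rng.normal(1.7, 0.2, 200)                    # synthetic PAL
    pal_2013 = 0.7 * pal_2003 + rng.normal(0.5, 0.12, 200)

    print(np.corrcoef(pal_2003, pal_2013)[0, 1])            # Pearson tracking
    print(icc_oneway(pal_2003, pal_2013))                   # ICC tracking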
Diaper area skin microflora of normal children and children with atopic dermatitis.
Keswick, B H; Seymour, J L; Milligan, M C
1987-01-01
In vitro studies established that neither cloth nor disposable diapers demonstrably contributed to the growth of Escherichia coli, Proteus vulgaris, Staphylococcus aureus, or Candida albicans when urine was present as a growth medium. In a clinical study of 166 children, the microbial skin flora of children with atopic dermatitis was compared with the flora of children with normal skin to determine the influence of diaper type. No biologically significant differences were detected between groups wearing disposable or cloth diapers in terms of frequency of isolation or log mean recovery of selected skin flora. Repeated isolation of S. aureus correlated with atopic dermatitis. The log mean recovery of S. aureus was higher in the atopic groups. The effects of each diaper type on skin microflora were equivalent in the normal and atopic populations. PMID:3546360
Low resolution spectroscopic investigation of Am stars using Automated method
NASA Astrophysics Data System (ADS)
Sharma, Kaushal; Joshi, Santosh; Singh, Harinder P.
2018-04-01
The automated method of full spectrum fitting gives reliable estimates of stellar atmospheric parameters (Teff, log g and [Fe/H]) for late A, F, G, and early K type stars. Recently, the technique was further improved in the cooler regime and the validity range was extended up to a spectral type of M6-M7 (Teff ≈ 2900 K). The present study aims to explore the application of this method on the low-resolution spectra of Am stars, a class of chemically peculiar stars, to examine its robustness for these objects. We use ULySS with the Medium-resolution INT Library of Empirical Spectra (MILES) V2 spectral interpolator for parameter determination. The determined Teff and log g values are found to be in good agreement with those obtained from high-resolution spectroscopy.
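Full-spectrum fitting amounts to χ² minimization of an observed spectrum against a model interpolated in (Teff, log g, [Fe/H]). The sketch below is schematic only: model_spectrum is a hypothetical stand-in for a spectral interpolator (ULySS provides a real one built on the MILES library), and the spectra are synthetic.

    import numpy as np
    from scipy.optimize import minimize

    def model_spectrum(teff, logg, feh, wave):
        # hypothetical stand-in for a spectral interpolator; a real
        # application would interpolate within an empirical library
        line = 0.3 * np.exp(-0.5 * ((wave - 4861.0) / (teff / 2000.0)) ** 2)
        return 1.0 - line + 0.01 * feh - 0.005 * logg

    def chi2(params, wave, flux, err):
        teff, logg, feh = params
        return (((flux - model_spectrum(teff, logg, feh, wave)) / err) ** 2).sum()

    wave = np.linspace(4500.0, 5500.0, 500)
    flux = model_spectrum(7500.0, 4.0, 0.1, wave)           # synthetic "observation"
    res = minimize(chi2, x0=[7000.0, 4.2, 0.0],
                   args=(wave, flux, np.full(wave.size, 0.01)),
                   method="Nelder-Mead")
    print(res.x)                                            # recovered parameters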
Cheiloscopy and dactyloscopy: Do they dictate personality patterns?
Abidullah, Mohammed; Kumar, M Naveen; Bhorgonde, Kavita D; Reddy, D Shyam Prasad
2015-01-01
Cheiloscopy and dactyloscopy are both well-established forensic tools used for individual identification in any scenario, be it a crime scene or a civil case. Like finger prints, lip prints are unique and distinguishable for every individual, but their relationship to personality types has not been established, apart from the hypothesis that finger prints could explain these personality patterns. The study aimed to record lip and finger prints and correlate them with a person's character/personality. The study sample comprised 200 subjects, 100 males and 100 females, aged between 18 and 30 years. For recording lip prints, brown/pink-colored lipstick was applied on the lips and the subjects were asked to spread it uniformly over the lips; lip prints were then traced in the normal rest position on plain white bond paper. For recording the finger prints, imprints of the fingers were taken on plain white bond paper using an ink pad. The collected prints were visualized using a magnifying lens. To record the character of a person, a pro forma manual for the multivariable personality inventory by Dr. BC Muthayya was used. The data obtained were subjected to statistical analysis, in particular Pearson's Chi-square test, and the correlation/association between the groups was also studied. In males, the predominant lip pattern recorded was Type I with a whorls-type finger pattern, the character being ego ideal, pessimism, introvert, and dogmatic; in females, the predominant lip pattern recorded was Type II with a loops-type finger pattern, the character being neurotic, need achievers, and dominant. Many studies on lip patterns, finger patterns, palatal rugae, etc., for individual identification and gender determination exist, but correlative studies are scanty. This is the first study correlating lip and finger patterns with the character of a person. We conclude that this correlation can be used as an adjunct in the investigatory process in forensic sciences.
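The statistical core of such a study is a chi-square test of association between categorical patterns. A minimal sketch with hypothetical counts follows.

    import numpy as np
    from scipy.stats import chi2_contingency

    # rows: lip print Types I-IV; columns: finger patterns
    # (loops, whorls, arches) -- all counts are hypothetical
    table = np.array([[30, 45, 5],
                      [40, 20, 10],
                      [15, 20, 5],
                      [5, 3, 2]])

    chi2, p, dof, expected = chi2_contingency(table)
    print(chi2, p, dof)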
Zhou, Meicen; Li, Zengyi; Min, Rui; Dong, Yaxiu; Sun, Qi; Li, Yuxiu
2015-09-01
The aim of the present study was to explore whether the triglyceride to high-density lipoprotein cholesterol ratio [log(TG)/HDL-C] and peripheral blood leukocyte DNA telomere length could predict future decline of islet beta cell function in Chinese type 2 diabetes mellitus (T2DM) patients during a 6-year cohort study. Sixty T2DM patients (without insulin treatment at baseline) were included. Peripheral blood leukocyte DNA telomere length, HbA1c, blood lipid profile, fatty acids, glucose, insulin and C peptide (3 h after a mixed meal) were determined. The delta C peptide area under the curve (ΔCP AUC) was used to reflect the change in beta cell secretion function (ΔCP AUC = baseline CP AUC - CP AUC after 6 years). Subjects were divided into a slow beta cell function decrease group (ΔCP AUC slow) and a fast decrease group (ΔCP AUC fast) according to the median of ΔCP AUC. Baseline demographic characteristics and clinical variables were compared between the two groups, and correlations between baseline data and ΔCP AUC were analyzed. Baseline log(TG)/HDL-C was positively correlated with ΔCP AUC (r = 0.306, P = 0.027); log(TG)/HDL-C in the ΔCP AUC fast group was higher than that in the ΔCP AUC slow group (0.103 ± 0.033 vs 0.083 ± 0.030, P = 0.027). There was no significant difference in DNA telomere length between the two groups, and the change in DNA telomere length over 6 years was not significantly correlated with baseline blood lipids. In Chinese T2DM patients, a high baseline log(TG)/HDL-C ratio predicts fast progression of islet beta cell dysfunction and may be a simple index for predicting the speed of that progression. © 2014 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and Wiley Publishing Asia Pty Ltd.
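A minimal sketch of the index itself and the group comparison reported above; the data are synthetic, and the index is written as log10(TG)/HDL-C to match the abstract's notation (flagged as an assumption, since log(TG/HDL-C) is also common in the literature).

    import numpy as np
    from scipy.stats import ttest_ind

    def tg_hdl_index(tg, hdl):
        """log10(TG)/HDL-C, following the notation used in the abstract."""
        return np.log10(tg) / hdl

    rng = np.random.default_rng(3)
    fast = tg_hdl_index(rng.lognormal(0.5, 0.3, 30), rng.normal(1.1, 0.2, 30))
    slow = tg_hdl_index(rng.lognormal(0.3, 0.3, 30), rng.normal(1.3, 0.2, 30))

    print(fast.mean(), slow.mean(), ttest_ind(fast, slow).pvalue)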
NASA Astrophysics Data System (ADS)
Chung, Kee-Choo; Park, Hwangseo
2016-11-01
The performance of the extended solvent-contact model has been addressed in the SAMPL5 blind prediction challenge for the distribution coefficient (LogD) of drug-like molecules with respect to the cyclohexane/water partitioning system. All the atomic parameters defined for 41 atom types in the solvation free energy function were optimized by operating a standard genetic algorithm with respect to water and cyclohexane solvents. In the parameterizations for cyclohexane, the experimental solvation free energy (ΔGsol) data of 15 molecules for 1-octanol were combined with those of 77 molecules for cyclohexane to construct a training set because ΔGsol values of the former were unavailable for cyclohexane in publicly accessible databases. Using this hybrid training set, we established the LogD prediction model with the correlation coefficient (R), average error (AE), and root mean square error (RMSE) of 0.55, 1.53, and 3.03, respectively, for the comparison of experimental and computational results for 53 SAMPL5 molecules. The modest accuracy in LogD prediction could be attributed to the incomplete optimization of atomic solvation parameters for cyclohexane. With respect to 31 SAMPL5 molecules containing the atom types for which experimental reference data for ΔGsol were available for both water and cyclohexane, the accuracy in LogD prediction increased remarkably with the R, AE, and RMSE values of 0.82, 0.89, and 1.60, respectively. This significant enhancement in performance stemmed from the better optimization of atomic solvation parameters by limiting the elements of the training set to the molecules with experimental ΔGsol data for cyclohexane. Due to the simplicity in model building and the low computational cost of parameterizations, the extended solvent-contact model is anticipated to serve as a valuable computational tool for LogD prediction upon the enrichment of experimental ΔGsol data for organic solvents.
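The connection between solvation free energies and LogD is a simple thermodynamic cycle: LogD = (ΔGsol,water - ΔGsol,cyclohexane)/(2.303 RT). A minimal sketch of this conversion plus the reported error metrics, with hypothetical free energies, follows.

    import numpy as np

    R = 1.987e-3   # gas constant, kcal/(mol*K)
    T = 298.15

    def log_d(dg_water, dg_cyclohexane):
        """Cyclohexane/water distribution coefficient from solvation
        free energies in kcal/mol."""
        return (dg_water - dg_cyclohexane) / (2.303 * R * T)

    pred = np.array([log_d(-8.4, -6.1), log_d(-5.0, -6.5), log_d(-9.8, -7.2)])
    expt = np.array([-1.9, 1.3, -2.2])          # hypothetical measurements

    r = np.corrcoef(pred, expt)[0, 1]
    ae = np.abs(pred - expt).mean()
    rmse = np.sqrt(((pred - expt) ** 2).mean())
    print(r, ae, rmse)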
NASA Astrophysics Data System (ADS)
Kahraman Aliçavuş, F.; Niemczura, E.; Polińska, M.; Hełminiak, K. G.; Lampens, P.; Molenda-Żakowicz, J.; Ukita, N.; Kambe, E.
2017-10-01
δ Scuti stars are remarkable objects for asteroseismology. In spite of decades of investigation, important questions about these pulsating stars remain to be answered, such as their positions in the log Teff-log g diagram, or the dependence of the pulsation modes on atmospheric parameters and rotation. We therefore performed a detailed spectroscopic study of 41 δ Scuti stars. The selected objects are located near the γ Doradus instability strip to allow a reliable comparison between both types of variables. Spectral classification, stellar atmospheric parameters (Teff, log g, ξ) and v sin i values were determined. The spectral types and luminosity classes of the stars were found to be A1-F5 and III-V, respectively. Teff ranges from 6600 to 9400 K, whereas the obtained log g values range from 3.4 to 4.3. The v sin i values were found to be between 10 and 222 km s-1. The derived chemical abundances of the δ Scuti stars were compared to those of non-pulsating stars and γ Doradus variables. It turned out that both δ Scuti and γ Doradus variables have similar abundance patterns, which are slightly different from those of the non-pulsating stars. These chemical differences can help us to understand why there are non-pulsating stars in the classical instability strip. The effects of the obtained parameters on pulsation period and amplitude were examined. It appears that the pulsation period decreases with increasing Teff. No significant correlations were found between pulsation period, amplitude and v sin i.
Jelden, Katelyn C; Gibbs, Shawn G; Smith, Philip W; Hewlett, Angela L; Iwen, Peter C; Schmid, Kendra K; Lowe, John J
2016-09-01
The estimated 721,800 hospital-acquired infections per year in the United States have necessitated the development of novel environmental decontamination technologies such as ultraviolet germicidal irradiation (UVGI). This study evaluated the efficacy of a novel, portable UVGI generator (the TORCH, ChlorDiSys Solutions, Inc., Lebanon, NJ) to disinfect surface coupons composed of plastic from a bedrail, stainless steel, chrome-plated light switch cover, and a porcelain tile that were inoculated with methicillin-resistant Staphylococcus aureus (MRSA) or vancomycin-resistant Enterococcus faecalis (VRE). Each surface type was placed at 6 different sites within a hospital room and treated by 10-min ultraviolet-C (UVC) exposures using the TORCH, with doses ranging from 0-688 mJ/cm² between sites. Organism reductions were compared with untreated surface coupons as controls. Overall, UVGI significantly reduced MRSA by an average of 4.6 log10 (GSD: 1.7 log10, 77% inactivation, p < 0.0001) and VRE by an average of 3.9 log10 (GSD: 1.7 log10, 65% inactivation, p < 0.0001). MRSA on bedrail was reduced significantly (p < 0.0001) less than on other surfaces, while VRE was reduced significantly less on chrome (p = 0.0004) and stainless steel (p = 0.0012) than on porcelain tile. Organisms out of direct line of sight of the UVC generator were reduced significantly less (p < 0.0001) than those directly in line of sight. UVGI was found to be an effective method for inactivating nosocomial pathogens on the surfaces evaluated within the hospital environment in direct line of sight of UVGI treatment, with variation between organism and surface types.
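The log-reduction arithmetic behind these figures, in a minimal sketch (counts are hypothetical): per-site log10 reductions from control and treated recoveries, summarized by their mean and spread in log10 units.

    import numpy as np

    control = np.array([5.0e6, 2.0e6, 8.0e6, 1.0e7])   # cfu per coupon
    treated = np.array([1.2e2, 3.0e1, 5.0e3, 2.0e2])

    lr = np.log10(control / treated)        # per-site log10 reductions
    print(lr.mean(), lr.std(ddof=1))        # mean reduction and spread (log10)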
Workplace Incivility: Worker and Organizational Antecedents and Outcomes
ERIC Educational Resources Information Center
Bartlett, James E., II; Bartlett, Michelle E.; Reio, Thomas G., Jr.
2008-01-01
Unresolved workplace conflicts represent the largest reducible costs to an organization (Keenan & Newton, 1985). As incivility increases (Buhler, 2003; Pearson, Andersson, & Wegner, 2001; Pearson & Porath, 2005) more research is being conducted (Tepper, Duffy, Henle, & Lambert, 2006; Vickers, 2006). This review examined antecedents (variables that…
Ayed, Imen Ben; Chamkha, Imen; Mkaouar-Rebai, Emna; Kammoun, Thouraya; Mezghani, Najla; Chabchoub, Imen; Aloulou, Hajer; Hachicha, Mongia; Fakhfakh, Faiza
2011-07-29
Pearson syndrome (PS) is a multisystem disease including refractory anemia, vacuolization of marrow precursors and pancreatic fibrosis. The disease starts during infancy, affects various tissues and organs, and most affected children die before the age of 3 years. Pearson syndrome is caused by de novo large-scale deletions or, more rarely, duplications in the mitochondrial genome. In the present report, we describe a Pearson syndrome patient harboring multiple mitochondrial deletions, which is, to our knowledge, the first case described and studied in Tunisia. We report the common 4.977 kb deletion and two novel heteroplasmic deletions (5.030 and 5.234 kb) of the mtDNA. These deletions affect several protein-coding and tRNA genes and could strongly impair mitochondrial polypeptide synthesis, oxidative phosphorylation and energy metabolism in the respiratory chain of the studied patient. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Chun-mei; Zhang, Chong-ming; Zou, Jun-zhong; Zhang, Jian
2012-02-01
The diagnosis of several neurological disorders is based on the detection of typical pathological patterns in electroencephalograms (EEGs). This is a time-consuming task requiring significant training and experience, so considerable effort has been devoted to developing automatic detection techniques that might help not only to accelerate this process but also to avoid disagreement among readers of the same record. In this work, Neyman-Pearson criteria and a support vector machine (SVM) are applied to the detection of epileptic EEGs. Decision making is performed in two stages: feature extraction, by computing the wavelet coefficients and the approximate entropy (ApEn), and detection, by using Neyman-Pearson criteria and an SVM. The detection performance of the proposed method is then evaluated. Simulation results demonstrate that the wavelet coefficients and the ApEn are features that represent the EEG signals well. By comparison with the Neyman-Pearson criteria, an SVM applied to these features achieved higher detection accuracies.
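A condensed sketch of the two-stage pipeline described above: approximate entropy plus wavelet-energy features, then an SVM classifier. The feature choices, wavelet, window length, and synthetic signals are illustrative, and the Neyman-Pearson thresholding step is reduced here to the classifier's decision rule.

    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def approx_entropy(x, m=2, r_frac=0.2):
        """Approximate entropy (ApEn) of a 1-D signal."""
        x = np.asarray(x, dtype=float)
        r = r_frac * x.std()
        def phi(mm):
            emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
            d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
            return np.log((d <= r).mean(axis=1)).mean()
        return phi(m) - phi(m + 1)

    def features(sig):
        # subband energies from a 4-level discrete wavelet decomposition
        coeffs = pywt.wavedec(sig, "db4", level=4)
        return [np.sum(c ** 2) for c in coeffs] + [approx_entropy(sig)]

    rng = np.random.default_rng(4)
    normal = [rng.normal(size=256) for _ in range(20)]
    spiky = [rng.normal(size=256) + 3 * np.sin(np.linspace(0, 40, 256))
             for _ in range(20)]
    X = np.array([features(s) for s in normal + spiky])
    y = np.array([0] * 20 + [1] * 20)
    print(SVC().fit(X, y).score(X, y))      # training accuracy of the sketch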
Inoue, Tomoaki; Maeda, Yasutaka; Sonoda, Noriyuki; Sasaki, Shuji; Kabemura, Teppei; Kobayashi, Kunihisa; Inoguchi, Toyoshi
2016-01-01
Objective: Although diabetes mellitus is associated with an increased risk of heart failure with preserved ejection fraction, the underlying mechanisms leading to left ventricular diastolic dysfunction (LVDD) remain poorly understood. The study was designed to assess the risk factors for LVDD in patients with type 2 diabetes mellitus. Research design and methods: The study cohort included 101 asymptomatic patients with type 2 diabetes mellitus without overt heart disease. Left ventricular diastolic function was estimated as the ratio of early diastolic velocity (E) from transmitral inflow to early diastolic velocity (e’) of tissue Doppler at the mitral annulus (E/e’). Parameters of glycemic control, plasma insulin concentration, treatment with antidiabetic drugs, lipid profile, and other clinical characteristics were evaluated, and their association with E/e’ determined. Patients with New York Heart Association class >1, ejection fraction <50%, history of coronary artery disease, severe valvulopathy, chronic atrial fibrillation, or creatinine clearance <30 mL/min, as well as those receiving insulin treatment, were excluded. Results: Univariate analysis showed that E/e’ was significantly correlated with age (p<0.001), sex (p<0.001), duration of diabetes (p=0.002), systolic blood pressure (p=0.017), pulse pressure (p=0.010), fasting insulin concentration (p=0.025), and sulfonylurea use (p<0.001). Multivariate linear regression analysis showed that log E/e’ was significantly and positively correlated with log age (p=0.034), female sex (p=0.019), log fasting insulin concentration (p=0.010), and sulfonylurea use (p=0.027). Conclusions: Hyperinsulinemia and sulfonylurea use may be important in the development of LVDD in patients with type 2 diabetes mellitus. PMID:27648285
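A minimal sketch of the multivariate step: regressing log E/e’ on log age, sex, log fasting insulin, and sulfonylurea use with ordinary least squares. The data are synthetic and the variable names and coefficients are illustrative, not the study's fitted values.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 101
    log_age = np.log10(rng.uniform(40, 80, n))
    female = rng.integers(0, 2, n)
    log_insulin = np.log10(rng.lognormal(1.8, 0.5, n))
    sulfonylurea = rng.integers(0, 2, n)

    # synthetic outcome built from assumed effects plus noise
    log_ee = (0.6 + 0.4 * log_age + 0.05 * female
              + 0.10 * log_insulin + 0.06 * sulfonylurea
              + rng.normal(0, 0.05, n))

    X = sm.add_constant(np.column_stack([log_age, female,
                                         log_insulin, sulfonylurea]))
    print(sm.OLS(log_ee, X).fit().summary())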