Sample records for sampling time intervals

  1. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    PubMed

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. 
    We recommend using the cumulative probability of observed values to fit parametric probability distributions to propagule retention time, with maximum likelihood for parameter estimation. Furthermore, an experimental design aimed at optimal characterization of unimodal propagule retention time should include at least 500 recovered propagules and sampling time-intervals no larger than the time to the peak of propagule retrieval, except in the tail of the distribution, where broader sampling time-intervals may also produce accurate fits.
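    The "cumulative distribution" approach recommended above can be illustrated with a small interval-censored fitting sketch. The code below is not from the study: it simulates lognormal retention times with hypothetical parameters, discretizes them into fixed sampling intervals, and recovers the parameters by maximizing the interval-censored likelihood (a crude grid search stands in for a proper optimizer).

```python
import math, random

def lognorm_cdf(t, mu, sigma):
    """CDF of a lognormal distribution at time t (t > 0)."""
    if t <= 0:
        return 0.0
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2))))

# Simulate "true" retention times from a known lognormal, then discretize
# them into fixed sampling intervals, as in a gut-passage experiment.
random.seed(1)
MU, SIGMA, WIDTH = 2.0, 0.5, 2.0          # hypothetical true parameters, 2 h bins
times = [random.lognormvariate(MU, SIGMA) for _ in range(500)]
intervals = [(WIDTH * (t // WIDTH), WIDTH * (t // WIDTH) + WIDTH) for t in times]

def interval_censored_nll(mu, sigma):
    """Negative log-likelihood using only the interval bounds (the
    'cumulative distribution' approach): P(lo < T <= hi) = F(hi) - F(lo)."""
    nll = 0.0
    for lo, hi in intervals:
        p = lognorm_cdf(hi, mu, sigma) - lognorm_cdf(lo, mu, sigma)
        nll -= math.log(max(p, 1e-300))
    return nll

# Crude grid-search MLE (a real analysis would use a proper optimizer).
best = min(((interval_censored_nll(m / 100, s / 100), m / 100, s / 100)
            for m in range(150, 251, 2) for s in range(30, 81, 2)))
_, mu_hat, sigma_hat = best
print(mu_hat, sigma_hat)   # estimates close to the true (2.0, 0.5)
```

    Fitting the interval midpoints as if they were exact observations, instead of using the probability mass F(hi) - F(lo), tends to bias the estimates, which is the contrast the study quantifies.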

  2. Improved confidence intervals when the sample is counted an integer times longer than the blank.

    PubMed

    Potter, William Edward; Strzelczyk, Jadwiga Jodi

    2011-05-01

    Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
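    The net-count construction described above is easy to reproduce numerically. The sketch below (with illustrative parameter values, not ones from the paper) builds the PDF of the net count OC = gross - IRR x blank by direct convolution of the two Poisson distributions:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

# Hypothetical parameters: the sample is counted IRR times longer than the
# blank, the expected blank count in the *sample* count time is MU_B, and
# the sample contributes MU_S counts on average.
IRR, MU_S, MU_B = 3, 10.0, 6.0
lam_gross = MU_S + MU_B          # gross count ~ Poisson(MU_S + MU_B)
lam_blank = MU_B / IRR           # blank count (in blank time) ~ Poisson(MU_B/IRR)

# PDF of the net count OC = gross - IRR * blank, by direct convolution.
pdf = {}
for b in range(0, 40):
    pb = poisson_pmf(b, lam_blank)
    for g in range(0, 80):
        oc = g - IRR * b
        pdf[oc] = pdf.get(oc, 0.0) + pb * poisson_pmf(g, lam_gross)

mean_oc = sum(k * p for k, p in pdf.items())
print(round(mean_oc, 3))   # ~= MU_S: the net count is unbiased for the sample contribution
```

    The truncation bounds on the two sums are chosen so the neglected Poisson tails are negligible at these rates.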

  3. Analysis of single ion channel data incorporating time-interval omission and sampling

    PubMed Central

    The, Yu-Kai; Timmer, Jens

    2005-01-01

    Hidden Markov models are widely used to describe single-channel currents from patch-clamp experiments. The inevitable anti-aliasing filter limits the time resolution of the measurements, so the standard hidden Markov model is no longer adequate. The notion of time-interval omission, in which brief events go undetected, has been introduced to address this. The exact solutions developed for this problem do not take into account that the measured intervals are limited by the sampling time; in this case the dead-time that specifies the minimal detectable interval length is not defined unambiguously. We show that a wrong choice of dead-time leads to considerably biased estimates, and we present the appropriate equations to describe sampled data. PMID:16849220

  4. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
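    The three recording methods compared in this simulation score each observation interval differently: momentary time sampling looks only at the final instant, partial-interval recording scores any occurrence, and whole-interval recording requires the event to fill the interval. A minimal sketch, using a simplified 1-second Bernoulli event stream rather than the study's event-duration model:

```python
import random

def simulate(method, event_prob=0.2, interval_s=10, period_s=600, trials=200):
    """Estimate the bias of an interval sampling method against the true
    proportion of time an event stream is 'on' (1-s resolution)."""
    random.seed(0)
    errors = []
    for _ in range(trials):
        stream = [random.random() < event_prob for _ in range(period_s)]
        true_prop = sum(stream) / period_s
        scored = []
        for start in range(0, period_s, interval_s):
            chunk = stream[start:start + interval_s]
            if method == "MTS":                 # momentary time sampling:
                scored.append(chunk[-1])        # look only at the last moment
            elif method == "PIR":               # partial-interval recording:
                scored.append(any(chunk))       # any occurrence scores the interval
            elif method == "WIR":               # whole-interval recording:
                scored.append(all(chunk))       # event must fill the interval
        est = sum(scored) / len(scored)
        errors.append(est - true_prop)
    return sum(errors) / trials                 # mean signed error (bias)

for m in ("MTS", "PIR", "WIR"):
    print(m, round(simulate(m), 3))
```

    Even with this toy stream, MTS is roughly unbiased while PIR overestimates and WIR underestimates prevalence, matching the error directions reported for these methods.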

  5. Fixed-interval matching-to-sample: intermatching time and intermatching error runs

    PubMed Central

    Nelson, Thomas D.

    1978-01-01

    Four pigeons were trained on a matching-to-sample task in which reinforcers followed either the first matching response (fixed interval) or the fifth matching response (tandem fixed-interval fixed-ratio) that occurred 80 seconds or longer after the last reinforcement. Relative frequency distributions of the matching-to-sample responses that concluded intermatching times and runs of mismatches (intermatching error runs) were computed for the final matching responses directly followed by grain access and also for the three matching responses immediately preceding the final match. Comparison of these two distributions showed that the fixed-interval schedule arranged for the preferential reinforcement of matches concluding relatively extended intermatching times and runs of mismatches. Differences in matching accuracy and rate during the fixed interval, compared to the tandem fixed-interval fixed-ratio, suggested that reinforcers following matches concluding various intermatching times and runs of mismatches influenced the rate and accuracy of the last few matches before grain access, but did not control rate and accuracy throughout the entire fixed-interval period. PMID:16812032

  6. Observer-based output feedback control of networked control systems with non-uniform sampling and time-varying delay

    NASA Astrophysics Data System (ADS)

    Meng, Su; Chen, Jie; Sun, Jian

    2017-10-01

    This paper investigates the problem of observer-based output feedback control for networked control systems with non-uniform sampling and time-varying transmission delay. The sampling intervals are assumed to vary within a given interval. The transmission delay belongs to a known interval. A discrete-time model is first established, which contains time-varying delay and norm-bounded uncertainties coming from non-uniform sampling intervals. It is then converted to an interconnection of two subsystems in which the forward channel is delay-free. The scaled small gain theorem is used to derive the stability condition for the closed-loop system. Moreover, the observer-based output feedback controller design method is proposed by utilising a modified cone complementary linearisation algorithm. Finally, numerical examples illustrate the validity and superiority of the proposed method.

  7. Effect of increasing the sampling interval to 2 seconds on the radiation dose and accuracy of CT perfusion of the head and neck.

    PubMed

    Tawfik, Ahmed M; Razek, Ahmed A; Elhawary, Galal; Batouty, Nihal M

    2014-01-01

    To evaluate the effect of increasing the sampling interval from 1 second (1 image per second) to 2 seconds (1 image every 2 seconds) on computed tomographic (CT) perfusion (CTP) of head and neck tumors. Twenty patients underwent CTP studies of head and neck tumors with images acquired in cine mode for 50 seconds using sampling interval of 1 second. Using deconvolution-based software, analysis of CTP was done with sampling interval of 1 second and then 2 seconds. Perfusion maps representing blood flow, blood volume, mean transit time, and permeability surface area product (PS) were obtained. Quantitative tumor CTP values were compared between the 2 sampling intervals. Two blinded radiologists compared the subjective quality of CTP maps using a 3-point scale between the 2 sampling intervals. Radiation dose parameters were recorded for the 2 sampling interval rates. No significant differences were observed between the means of the 4 perfusion parameters generated using both sampling intervals; all P >0.05. The 95% limits of agreement between the 2 sampling intervals were -65.9 to 48.1 mL/min per 100 g for blood flow, -3.6 to 3.1 mL/100 g for blood volume, -2.9 to 3.8 seconds for mean transit time, and -10.0 to 12.5 mL/min per 100 g for PS. There was no significant difference between the subjective quality scores of CTP maps obtained using the 2 sampling intervals; all P > 0.05. Radiation dose was halved when sampling interval increased from 1 to 2 seconds. Increasing the sampling interval rate to 1 image every 2 seconds does not compromise the image quality and has no significant effect on quantitative perfusion parameters of head and neck tumors. The radiation dose is halved.

  8. Measuring Safety Performance: A Comparison of Whole, Partial, and Momentary Time-Sampling Recording Methods

    ERIC Educational Resources Information Center

    Alvero, Alicia M.; Struss, Kristen; Rappaport, Eva

    2008-01-01

    Partial-interval (PIR), whole-interval (WIR), and momentary time sampling (MTS) estimates were compared against continuous measures of safety performance for three postural behaviors: feet, back, and shoulder position. Twenty-five samples of safety performance across five undergraduate students were scored using a second-by-second continuous…

  9. Technical note: Instantaneous sampling intervals validated from continuous video observation for behavioral recording of feedlot lambs.

    PubMed

    Pullin, A N; Pairis-Garcia, M D; Campbell, B J; Campler, M R; Proudfoot, K L

    2017-11-01

    When considering methodologies for collecting behavioral data, continuous sampling provides the most complete and accurate data set whereas instantaneous sampling can provide similar results and also increase the efficiency of data collection. However, instantaneous time intervals require validation to ensure accurate estimation of the data. Therefore, the objective of this study was to validate scan sampling intervals for lambs housed in a feedlot environment. Feeding, lying, standing, drinking, locomotion, and oral manipulation were measured on 18 crossbred lambs housed in an indoor feedlot facility for 14 h (0600-2000 h). Data from continuous sampling were compared with data from instantaneous scan sampling intervals of 5, 10, 15, and 20 min using a linear regression analysis. Three criteria determined if a time interval accurately estimated behaviors: 1) R² ≥ 0.90, 2) slope not statistically different from 1 (P > 0.05), and 3) intercept not statistically different from 0 (P > 0.05). Estimations for lying behavior were accurate up to 20-min intervals, whereas feeding and standing behaviors were accurate only at 5-min intervals (i.e., met all 3 regression criteria). Drinking, locomotion, and oral manipulation demonstrated poor associations (R² < 0.90) for all tested intervals. The results from this study suggest that a 5-min instantaneous sampling interval will accurately estimate lying, feeding, and standing behaviors for lambs housed in a feedlot, whereas continuous sampling is recommended for the remaining behaviors. This methodology will contribute toward the efficiency, accuracy, and transparency of future behavioral data collection in lamb behavior research.
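    The three regression criteria can be sketched in a few lines. The code below uses hypothetical lying-time data (not the study's), computes R², slope, and intercept from an ordinary least-squares fit of scan estimates against continuous values, and omits the formal significance tests on slope and intercept:

```python
def regression_criteria(continuous, scanned):
    """Fit scanned estimates against continuous values and return
    (R^2, slope, intercept); validation requires high R^2, slope ~ 1,
    intercept ~ 0 (significance tests omitted in this sketch)."""
    n = len(continuous)
    mx = sum(continuous) / n
    my = sum(scanned) / n
    sxx = sum((x - mx) ** 2 for x in continuous)
    sxy = sum((x - mx) * (y - my) for x, y in zip(continuous, scanned))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2
                 for x, y in zip(continuous, scanned))
    ss_tot = sum((y - my) ** 2 for y in scanned)
    r2 = 1.0 - ss_res / ss_tot
    return r2, slope, intercept

# Hypothetical per-animal daily lying times (h): continuous observation vs.
# estimates from 5-min scans that track the continuous record closely.
continuous = [10.2, 11.5, 9.8, 12.1, 10.9, 11.2, 9.5, 10.6]
scans_5min = [10.4, 11.3, 9.9, 12.0, 11.1, 11.0, 9.6, 10.5]
r2, slope, intercept = regression_criteria(continuous, scans_5min)
print(round(r2, 3), round(slope, 2), round(intercept, 2))
```

    A scan interval would pass this screen only if R² ≥ 0.90 and the slope and intercept tests both fail to reject 1 and 0, respectively.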

  10. Ultrasonic sensor and method of use

    DOEpatents

    Condreva, Kenneth J.

    2001-01-01

    An ultrasonic sensor system and method of use for measuring transit time through a liquid sample, using one ultrasonic transducer coupled to a precision time interval counter. The timing circuit captures changes in transit time, representing small changes in the velocity of sound transmitted, over necessarily small time intervals (nanoseconds) and uses the transit time changes to identify the presence of non-conforming constituents in the sample.

  11. Method and apparatus for measuring nuclear magnetic properties

    DOEpatents

    Weitekamp, D.P.; Bielecki, A.; Zax, D.B.; Zilm, K.W.; Pines, A.

    1987-12-01

    A method for studying the chemical and structural characteristics of materials is disclosed. The method includes placement of a sample material in a high strength polarizing magnetic field to order the sample nuclei. The condition used to order the sample is then removed abruptly and the ordering of the sample allowed to evolve for a time interval. At the end of the time interval, the ordering of the sample is measured by conventional nuclear magnetic resonance techniques. 5 figs.

  12. Method and apparatus for measuring nuclear magnetic properties

    DOEpatents

    Weitekamp, Daniel P.; Bielecki, Anthony; Zax, David B.; Zilm, Kurt W.; Pines, Alexander

    1987-01-01

    A method for studying the chemical and structural characteristics of materials is disclosed. The method includes placement of a sample material in a high strength polarizing magnetic field to order the sample nuclei. The condition used to order the sample is then removed abruptly and the ordering of the sample allowed to evolve for a time interval. At the end of the time interval, the ordering of the sample is measured by conventional nuclear magnetic resonance techniques.

  13. Outcome-Dependent Sampling with Interval-Censored Failure Time Data

    PubMed Central

    Zhou, Qingning; Cai, Jianwen; Zhou, Haibo

    2017-01-01

    Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computation difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method work well for practical situations and are more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664

  14. Time interval measurement device based on surface acoustic wave filter excitation, providing 1 ps precision and stability.

    PubMed

    Panek, Petr; Prochazka, Ivan

    2007-09-01

    This article deals with the time interval measurement device, which is based on a surface acoustic wave (SAW) filter as a time interpolator. The operating principle is based on the fact that a transversal SAW filter excited by a short pulse can generate a finite signal with highly suppressed spectra outside a narrow frequency band. If the responses to two excitations are sampled at clock ticks, they can be precisely reconstructed from a finite number of samples and then compared so as to determine the time interval between the two excitations. We have designed and constructed a two-channel time interval measurement device which allows independent timing of two events and evaluation of the time interval between them. The device has been constructed using commercially available components. The experimental results proved the concept. We have assessed the single-shot time interval measurement precision of 1.3 ps rms that corresponds to the time of arrival precision of 0.9 ps rms in each channel. The temperature drift of the measured time interval is lower than 0.5 ps/K, and the long term stability is better than +/-0.2 ps/h. These are to our knowledge the best values reported for a time interval measurement device. The results are in good agreement with the error budget based on the theoretical analysis.

  15. A New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.

    2018-05-01

    In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters that affect the quality of pulsed lidar data. The traditional time interval measurement technique has the disadvantages of low measurement accuracy, complicated circuit structure and large error. High-precision time interval data cannot be obtained with these traditional methods. In order to obtain higher-quality remote sensing cloud images based on time interval measurement, a higher-accuracy time interval measurement method is proposed. The method is based on charging a capacitor while simultaneously sampling its voltage. First, an approximate model of the capacitor voltage curve during the pulse time of flight is fitted to the sampled data. Then, the whole charging time is obtained from the fitted function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
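    The charging-and-fitting idea can be sketched as follows. The RC constant, full-scale voltage, noise level, and sample clock below are illustrative assumptions, not values from the paper: linearizing the charging curve, ln(1 - V/V0) = -(t - t0)/tau, turns recovery of the unknown charge start time t0 into ordinary least squares.

```python
import math, random

# Sketch of the capacitor-charging TDC idea: the flight time charges an RC
# circuit, a fast ADC samples the rising voltage, and fitting the linearized
# charging curve recovers the unknown start of charging t0.
random.seed(7)
V0, TAU = 1.0, 50.0                    # full-scale volts, RC constant in ns
T0_TRUE = 3.217                        # true, unknown start of charging (ns)
t_samples = [5.0 + 0.5 * i for i in range(40)]   # 2 GS/s ADC clock ticks (ns)
v_samples = [V0 * (1 - math.exp(-(t - T0_TRUE) / TAU)) + random.gauss(0, 2e-4)
             for t in t_samples]

# Linearize: ln(1 - V/V0) = -(t - t0)/tau, then ordinary least squares.
ys = [math.log(1 - v / V0) for v in v_samples]
n = len(t_samples)
mt, my = sum(t_samples) / n, sum(ys) / n
sxx = sum((t - mt) ** 2 for t in t_samples)
sxy = sum((t - mt) * (y - my) for t, y in zip(t_samples, ys))
slope = sxy / sxx                      # = -1/tau
intercept = my - slope * mt            # = t0/tau
t0_est = -intercept / slope
print(round(t0_est, 2))                # close to T0_TRUE = 3.217 ns
```

    The same fit also yields the time constant (from the slope), so the method needs no separately calibrated ramp.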

  16. Estimating the duration of geologic intervals from a small number of age determinations: A challenge common to petrology and paleobiology

    NASA Astrophysics Data System (ADS)

    Glazner, Allen F.; Sadler, Peter M.

    2016-12-01

    The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ~80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is (n+1)/(n-1). Systematic undersampling of interval lengths can have a large effect on calculated magma fluxes in plutonic systems. The problem is analogous to determining the duration of an extinct species from its fossil occurrences. Confidence interval statistics developed for species origination and extinction times are applicable to the onset and cessation of magmatic events.
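    The undersampling effect and the quoted correction factor are easy to check by Monte Carlo. Under the simplest model, n ages drawn uniformly from an interval of unit length, the expected observed span (oldest minus youngest age) is (n-1)/(n+1) of the truth, so multiplying by (n+1)/(n-1) removes the bias:

```python
import random

def mean_span(n, trials=50_000):
    """Average span of n ages drawn uniformly from a unit-length interval."""
    random.seed(42)
    total = 0.0
    for _ in range(trials):
        ages = [random.random() for _ in range(n)]
        total += max(ages) - min(ages)
    return total / trials

for n in (5, 10):
    raw = mean_span(n)
    corrected = raw * (n + 1) / (n - 1)
    print(n, round(raw, 3), round(corrected, 3))   # raw ~ (n-1)/(n+1); corrected ~ 1.0
```

    For n = 10 the raw span averages about 0.82 of the true interval, consistent with the ~80% figure above; the corrected span averages 1.0.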

  17. A Comparison of Momentary Time Sampling and Partial-Interval Recording for Assessment of Effects of Social Skills Training

    ERIC Educational Resources Information Center

    Radley, Keith C.; O'Handley, Roderick D.; Labrot, Zachary C.

    2015-01-01

    Assessment in social skills training often utilizes procedures such as partial-interval recording (PIR) and momentary time sampling (MTS) to estimate changes in duration in social engagements due to intervention. Although previous research suggests PIR to be more inaccurate than MTS in estimating levels of behavior, treatment analysis decisions…

  18. Predictive sensor method and apparatus

    NASA Technical Reports Server (NTRS)

    Nail, William L. (Inventor); Koger, Thomas L. (Inventor); Cambridge, Vivien (Inventor)

    1990-01-01

    A predictive algorithm is used to determine, in near real time, the steady state response of a slow responding sensor such as hydrogen gas sensor of the type which produces an output current proportional to the partial pressure of the hydrogen present. A microprocessor connected to the sensor samples the sensor output at small regular time intervals and predicts the steady state response of the sensor in response to a perturbation in the parameter being sensed, based on the beginning and end samples of the sensor output for the current sample time interval.
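    For a first-order (exponential) sensor response, the steady-state prediction from the beginning and end samples of one interval has a closed form. This sketch assumes a known time constant tau and a simple step input, which are illustrative choices, not details from the patent:

```python
import math

# If a slow sensor behaves as y(t) = y_ss * (1 - exp(-t/tau)) after a step,
# two samples dt apart satisfy y2 - y_ss = (y1 - y_ss) * r with
# r = exp(-dt/tau), so y_ss = (y2 - r*y1) / (1 - r).
TAU, DT, Y_SS = 30.0, 1.0, 4.2          # time constant (s), interval (s), "unknown" steady state

def sensor(t):                           # simulated slow sensor after a step at t = 0
    return Y_SS * (1 - math.exp(-t / TAU))

r = math.exp(-DT / TAU)
y1, y2 = sensor(2.0), sensor(2.0 + DT)   # beginning/end samples of one interval
y_pred = (y2 - r * y1) / (1 - r)
print(round(y_pred, 3))                  # recovers 4.2 long before the sensor settles
```

    In practice the samples are noisy and tau may drift, so a real implementation would filter or refit over several intervals.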

  19. Considerations for Time Sampling Interval Durations in the Measurement of Young Children's Classroom Engagement

    ERIC Educational Resources Information Center

    Zakszeski, Brittany N.; Hojnoski, Robin L.; Wood, Brenna K.

    2017-01-01

    Classroom engagement is important to young children's academic and social development. Accurate methods of capturing this behavior are needed to inform and evaluate intervention efforts. This study compared the accuracy of interval durations (i.e., 5 s, 10 s, 15 s, 20 s, 30 s, and 60 s) of momentary time sampling (MTS) in approximating the…

  20. Validity and Generalizability of Measuring Student Engaged Time in Physical Education.

    ERIC Educational Resources Information Center

    Silverman, Stephen; Zotos, Connee

    The validity of interval and time sampling methods of measuring student engaged time was investigated in a study estimating the actual time students spent engaged in relevant motor performance in physical education classes. Two versions of the interval Academic Learning Time in Physical Education (ALT-PE) instrument and an equivalent time sampling…

  1. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    USGS Publications Warehouse

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields, is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.

  2. Effect of the time interval from harvesting to the pre-drying step on natural fumonisin contamination in freshly harvested corn from the State of Parana, Brazil.

    PubMed

    Da Silva, M; Garcia, G T; Vizoni, E; Kawamura, O; Hirooka, E Y; Ono, E Y S

    2008-05-01

    Natural mycoflora and fumonisins were analysed in 490 samples of freshly harvested corn (Zea mays L.) (2003 and 2004 crops) collected at three points in the producing chain from the Northern region of Parana State, Brazil, and correlated to the time interval between the harvesting and the pre-drying step. The two crops showed a similar profile concerning the fungal frequency, and Fusarium sp. was the prevalent genus (100%) for the sampling sites from both crops. Fumonisins were detected in all samples from the three points of the producing chain (2003 and 2004 crops). The levels ranged from 0.11 to 15.32 microg g(-1) in field samples, from 0.16 to 15.90 microg g(-1) in reception samples, and from 0.02 to 18.78 microg g(-1) in pre-drying samples (2003 crop). Samples from the 2004 crop showed lower contamination, and fumonisin levels ranged from 0.07 to 4.78 microg g(-1) in field samples, from 0.03 to 4.09 microg g(-1) in reception samples, and from 0.11 to 11.21 microg g(-1) in pre-drying samples. The mean fumonisin level increased gradually from < or = 5.0 to 19.0 microg g(-1) as the time interval between the harvesting and the pre-drying step increased from 3.22 to 8.89 h (2003 crop). The same profile was observed for samples from the 2004 crop. Fumonisin levels and the time interval (rho = 0.96) showed positive correlation (p < or = 0.05), indicating that delay in the drying process can increase fumonisin levels.

  3. Normal reference intervals and the effects of time and feeding on serum bile acid concentrations in llamas.

    PubMed

    Andreasen, C B; Pearson, E G; Smith, B B; Gerros, T C; Lassen, E D

    1998-04-01

    Fifty clinically healthy llamas, 0.5-13 years of age (22 intact males, 10 neutered males, 18 females), with no biochemical evidence of liver disease or hematologic abnormalities, were selected to establish serum bile acid reference intervals. Serum samples submitted to the clinical pathology laboratory were analyzed using a colorimetric enzymatic assay to establish bile acid reference intervals. A nonparametric distribution of llama bile acid concentrations was 1-23 micromol/liter for llamas >1 year of age and 10-44 micromol/liter for llamas < or = 1 year of age. A significant difference was found between these 2 age groups. No correlation was detected between gender and bile acid concentrations. The reference intervals were 1.1-22.9 micromol/liter for llamas >1 year of age and 1.8-49.8 micromol/liter for llamas < or = 1 year of age. Additionally, a separate group of 10 healthy adult llamas (5 males, 5 females, 5-11 years of age) without biochemical or hematologic abnormalities was selected to assess the effects of feeding and time intervals on serum bile acid concentrations. These 10 llamas were provided fresh water and hay ad libitum, and serum samples were obtained via an indwelling jugular catheter hourly for 11 hours. Llamas were then kept from food overnight (12 hours), and subsequent samples were taken prior to feeding (fasting baseline time, 23 hours after trial initiation) and postprandially at 0.5, 1, 2, 4, and 8 hours. In feeding trials, there was no consistent interaction between bile acid concentrations and time, feeding, or 12-hour fasting. Prior feeding or time of day did not result in serum bile acid concentrations outside the reference interval, but concentrations from individual llamas varied within this interval over time.

  4. Modulation Based on Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
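    The histogram-as-PDF observation is easy to demonstrate: samples of a unit-amplitude sinusoid taken over at least a half cycle follow the arcsine density p(x) = 1/(pi*sqrt(1 - x^2)), so the histogram piles up near the peaks. A minimal sketch:

```python
import math

# Sample one full cycle of a sinusoidal carrier and histogram the sample
# values; the resulting empirical PDF concentrates at the waveform extremes.
N, BINS = 100_000, 20
samples = [math.sin(2 * math.pi * k / N) for k in range(N)]

hist = [0] * BINS
for x in samples:
    b = min(int((x + 1.0) / 2.0 * BINS), BINS - 1)   # map [-1, 1] onto a bin index
    hist[b] += 1

density = [h / N for h in hist]
print(density[0], density[BINS // 2])   # edge bins far exceed the center bins
```

    A modulation scheme built on this idea would compare such per-interval histograms against the reference PDFs of the candidate symbols.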

  5. High sensitivity Troponin T: an audit of implementation of its protocol in a district general hospital.

    PubMed

    Kalim, Shahid; Nazir, Shaista; Khan, Zia Ullah

    2013-01-01

    Protocols based on newer high-sensitivity Troponin T (hsTropT) assays can rule in a suspected acute myocardial infarction (AMI) as early as 3 hours. We conducted this study to audit adherence to our Trust's newly introduced AMI diagnostic protocol based on paired hsTropT testing at 0 and 3 hours. We retrospectively reviewed data of all patients who had an hsTropT test done between 1st and 7th May 2012. Patients' demographics, use of single or paired samples, time interval between paired samples, presenting symptoms, and ECG findings were noted, and their means, medians, standard deviations, and proportions were calculated. A total of 66 patients had an hsTropT test done during this period. Mean age was 63.30 +/- 17.46 years and 38 (57.57%) were males. Twenty-four (36.36%) patients had only a single hsTropT sample taken, rather than the protocol-recommended paired samples. Among the 42 (63.63%) patients with paired samples, the mean time interval was 4.41 +/- 5.7 hours. Contrary to the recommendations, 15 (22.73%) had a very long, whereas 2 (3.03%) had a very short, time interval between the two samples. A subgroup analysis of patients with single samples found only 2 (3.03%) patients with ST-segment elevation, for whom single testing was appropriate. Our study confirmed that in a large number of patients the protocol for paired sampling, or the recommended time interval of 3 hours between the 2 samples, was not being followed.

  6. A new time calibration method for switched-capacitor-array-based waveform samplers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H.; Chen, C. -T.; Eclov, N.

    2014-08-24

    Here we have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be ~2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. Ultimately, the new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration.

  7. A new time calibration method for switched-capacitor-array-based waveform samplers

    NASA Astrophysics Data System (ADS)

    Kim, H.; Chen, C.-T.; Eclov, N.; Ronzhin, A.; Murat, P.; Ramberg, E.; Los, S.; Moses, W.; Choong, W.-S.; Kao, C.-M.

    2014-12-01

    We have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be 2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. The new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration.

  8. A New Time Calibration Method for Switched-capacitor-array-based Waveform Samplers.

    PubMed

    Kim, H; Chen, C-T; Eclov, N; Ronzhin, A; Murat, P; Ramberg, E; Los, S; Moses, W; Choong, W-S; Kao, C-M

    2014-12-11

    We have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be ~2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. The new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration.

  9. A New Time Calibration Method for Switched-capacitor-array-based Waveform Samplers

    PubMed Central

    Kim, H.; Chen, C.-T.; Eclov, N.; Ronzhin, A.; Murat, P.; Ramberg, E.; Los, S.; Moses, W.; Choong, W.-S.; Kao, C.-M.

    2014-01-01

    We have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be ~2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. The new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration. PMID:25506113

  10. OSL response bleaching of BeO samples, using fluorescent light and blue LEDs

    NASA Astrophysics Data System (ADS)

    Groppo, D. P.; Caldas, L. V. E.

    2016-07-01

    Optically stimulated luminescence (OSL) is widely used as a dosimetric technique for many applications. In this work, the OSL response bleaching of BeO samples was studied. The samples were irradiated using a beta radiation source (90Sr+90Y); the bleaching treatments (fluorescent light and blue LEDs) were performed, and the results were compared. Various optical treatment time intervals were tested until reaching the complete bleaching of the OSL response. The best combination of time interval and bleaching type was analyzed.

  11. The effects of sampling frequency on the climate statistics of the European Centre for Medium-Range Weather Forecasts

    NASA Astrophysics Data System (ADS)

    Phillips, Thomas J.; Gates, W. Lawrence; Arpe, Klaus

    1992-12-01

    The effects of sampling frequency on the first- and second-moment statistics of selected European Centre for Medium-Range Weather Forecasts (ECMWF) model variables are investigated in a simulation of "perpetual July" with a diurnal cycle included and with surface and atmospheric fields saved at hourly intervals. The shortest characteristic time scales (as determined by the e-folding time of lagged autocorrelation functions) are those of ground heat fluxes and temperatures, precipitation and runoff, convective processes, cloud properties, and atmospheric vertical motion, while the longest time scales are exhibited by soil temperature and moisture, surface pressure, and atmospheric specific humidity, temperature, and wind. The time scales of surface heat and momentum fluxes and of convective processes are substantially shorter over land than over oceans. An appropriate sampling frequency for each model variable is obtained by comparing the estimates of first- and second-moment statistics determined at intervals ranging from 2 to 24 hours with the "best" estimates obtained from hourly sampling. Relatively accurate estimation of first- and second-moment climate statistics (10% errors in means, 20% errors in variances) can be achieved by sampling a model variable at intervals that usually are longer than the bandwidth of its time series but that often are shorter than its characteristic time scale. For the surface variables, sampling at intervals that are nonintegral divisors of a 24-hour day yields relatively more accurate time-mean statistics because of a reduction in errors associated with aliasing of the diurnal cycle and higher-frequency harmonics. The superior estimates of first-moment statistics are accompanied by inferior estimates of the variance of the daily means due to the presence of systematic biases, but these probably can be avoided by defining a different measure of low-frequency variability. 
Estimates of the intradiurnal variance of accumulated precipitation and surface runoff also are strongly impacted by the length of the storage interval. In light of these results, several alternative strategies for storage of the ECMWF model variables are recommended.
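
    The characteristic time scale used above, the e-folding time of the lagged autocorrelation function, can be estimated directly from a series. A minimal sketch (illustrative, not the ECMWF analysis code):

```python
import math, random

def efold_time(x, max_lag=200):
    """First lag at which the lagged autocorrelation drops below 1/e."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    for lag in range(1, max_lag):
        r = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag)) / var
        if r < 1 / math.e:
            return lag
    return max_lag

# An AR(1) process with coefficient a has autocorrelation a**lag, so its
# e-folding time should be close to -1/ln(a) (about 4.5 samples for a = 0.8).
random.seed(1)
a, x = 0.8, [0.0]
for _ in range(20000):
    x.append(a * x[-1] + random.gauss(0, 1))
tau = efold_time(x)
assert 3 <= tau <= 7
```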

  12. Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR

    PubMed Central

    Mobli, Mehdi; Hoch, Jeffrey C.

    2017-01-01

    Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. PMID:25456315
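
    One widely used family of sampling strategies surveyed in this review is exponentially biased random sampling, which concentrates indirect-dimension increments where the decaying signal is strongest. A minimal sketch, with the weighting and function names purely illustrative (not a specific published scheduler):

```python
import math, random

def exp_biased_schedule(n_total, n_keep, t2, seed=0):
    """Pick n_keep of n_total indirect-dimension increments, biased toward
    early evolution times where the decaying signal is strongest."""
    rng = random.Random(seed)
    weights = [math.exp(-i / t2) for i in range(n_total)]  # assumed decay profile
    chosen = set()
    while len(chosen) < n_keep:
        # rejection sampling against the exponential weight profile
        i = rng.randrange(n_total)
        if rng.random() < weights[i]:
            chosen.add(i)
    return sorted(chosen)

sched = exp_biased_schedule(256, 64, t2=100.0)
assert len(sched) == 64 and all(0 <= i < 256 for i in sched)
# early increments should be over-represented relative to late ones
assert sum(1 for i in sched if i < 128) > sum(1 for i in sched if i >= 128)
```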

  13. Quantification of transuranic elements by time interval correlation spectroscopy of the detected neutrons

    PubMed

    Baeten; Bruggeman; Paepen; Carchon

    2000-03-01

    The non-destructive quantification of transuranic elements in nuclear waste management or in safeguards verifications is commonly performed by passive neutron assay techniques. To minimise the number of unknown sample-dependent parameters, Neutron Multiplicity Counting (NMC) is applied. We developed a new NMC-technique, called Time Interval Correlation Spectroscopy (TICS), which is based on the measurement of Rossi-alpha time interval distributions. Compared to other NMC-techniques, TICS offers several advantages.
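
    A Rossi-alpha time interval distribution of the kind TICS is built on can be tallied directly from detection timestamps. A toy sketch (illustrative only; real systems work from hardware time stamps and far longer pulse trains):

```python
def rossi_alpha(timestamps, window, n_bins):
    """Histogram of intervals from each detection to every later detection
    falling within `window` (a Rossi-alpha time interval distribution)."""
    width = window / n_bins
    hist = [0] * n_bins
    for i, t0 in enumerate(timestamps):
        for t in timestamps[i + 1:]:
            dt = t - t0
            if dt >= window:
                break          # timestamps are sorted, so stop early
            hist[int(dt / width)] += 1
    return hist

# Toy pulse train (microseconds): correlated pairs 2 us apart plus lone counts.
times = sorted([0.0, 2.0, 50.0, 52.0, 120.0, 300.0, 302.0])
hist = rossi_alpha(times, window=10.0, n_bins=5)
assert hist[1] == 3   # the three 2-us pairs land in the 2-4 us bin
```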

  14. Demodulator for binary-phase modulated signals having a variable clock rate

    NASA Technical Reports Server (NTRS)

    Wu, Ta Tzu (Inventor)

    1976-01-01

    Method and apparatus for demodulating binary-phase modulated signals recorded on a magnetic stripe on a card as the card is manually inserted into a card reader. Magnetic transitions are sensed as the card is read, and the time interval between immediately preceding basic transitions determines the duration of a data sampling pulse which detects the presence or absence of an intermediate transition pulse indicative of two respective logic states. The duration of the data sampling pulse is approximately 75 percent of the preceding interval between basic transitions to permit tracking succeeding time differences in basic transition intervals of up to approximately 25 percent.
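
    The decoding rule described above can be sketched in software: after each basic transition, open a window of 75% of the previous basic interval and test for a mid-cell transition. This illustrative Python sketch (the patent describes hardware, not code) decodes a clean transition list:

```python
def decode_f2f(transitions):
    """Decode two-frequency (F2F) biphase data from transition times.

    After each basic transition, a data sampling window of 75% of the
    previous basic interval is opened; a transition inside the window
    marks a logic 1, its absence a logic 0.  Tracking the previous cell
    length tolerates gradual (+/-25%) changes in card speed."""
    bits = []
    prev_cell = transitions[1] - transitions[0]   # clocking reference
    i = 1
    while i + 1 < len(transitions):
        window = 0.75 * prev_cell
        gap = transitions[i + 1] - transitions[i]
        if gap < window:                  # mid-cell transition -> logic 1
            bits.append(1)
            prev_cell = gap + (transitions[i + 2] - transitions[i + 1])
            i += 2
        else:                             # full cell, no mid pulse -> logic 0
            bits.append(0)
            prev_cell = gap
            i += 1
    return bits

# Build a clean transition list: a leading 0-cell sets the clock, then
# each 1-bit splits its cell with a mid-cell transition.
t = [0.0, 1.0]
for bit in [0, 1, 1, 0, 1]:
    start = t[-1]
    if bit:
        t.append(start + 0.5)
    t.append(start + 1.0)
bits = decode_f2f(t)
assert bits == [0, 1, 1, 0, 1]
```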

  15. Technical note: A device for obtaining time-integrated samples of ruminal fluid

    USGS Publications Warehouse

    Corley, R. N.; Murphy, M.R.; Lucena, J.; Panno, S.V.

    1999-01-01

    A device was adapted to allow for time-integrated sampling of fluid from the rumen via a cannula. The sampler consisted of a cup-shaped ceramic filter positioned in the ventral rumen of a cannulated cow and attached to a tube through which fluid entering the filter was removed continuously using a peristaltic pump. Rate of ruminal fluid removal using the device was monitored over two 36-h periods (at 6-h intervals) and was not affected (P > .05) by time, indicating that the system was not susceptible to clogging during this period. Two cows having ad libitum access to a totally mixed ration were used in a split-block design to evaluate the utility of the system for obtaining time-integrated samples of ruminal fluid. Ruminal fluid VFA concentration and pattern in samples collected in two replicated 8-h periods by the time-integrated sampler (at 1-h intervals) were compared with composite samples collected using a conventional suction-strainer device (at 30-min intervals). Each 8-h collection period started 2 h before or 6 h after feeding. Results indicated that total VFA concentration was not affected (P > .05) by the sampling method. Volatile fatty acid patterns were likewise unaffected (P > .05) except that acetate was 2.5% higher (P < .05) in samples collected 2 h before feeding and valerate was 5% higher (P < .05) in samples collected 6 h after feeding by the suction-strainer device. Although significant, these differences were not considered physiologically important. We concluded that use of the ceramic filter improved the sampling of ruminal fluid by simplifying the technique and allowing time-integrated samples to be obtained.

  16. Appropriate time scales for nonlinear analyses of deterministic jump systems

    NASA Astrophysics Data System (ADS)

    Suzuki, Tomoya

    2011-06-01

    In the real world, there are many phenomena that are derived from deterministic systems but which fluctuate with nonuniform time intervals. This paper discusses the appropriate time scales that can be applied to such systems to analyze their properties. The financial markets are an example of such systems wherein price movements fluctuate with nonuniform time intervals. However, it is common to apply uniform time scales such as 1-min data and 1-h data to study price movements. This paper examines the validity of such time scales by using surrogate data tests to ascertain whether the deterministic properties of the original system can be identified from uniform sampled data. The results show that uniform time samplings are often inappropriate for nonlinear analyses. However, for other systems such as neural spikes and Internet traffic packets, which produce similar outputs, uniform time samplings are quite effective in extracting the system properties. Nevertheless, uniform samplings often generate overlapping data, which can cause false rejections of surrogate data tests.

  17. Seasonal variation in size-dependent survival of juvenile Atlantic salmon (Salmo salar): Performance of multistate capture-mark-recapture models

    USGS Publications Warehouse

    Letcher, B.H.; Horton, G.E.

    2008-01-01

    We estimated the magnitude and shape of size-dependent survival (SDS) across multiple sampling intervals for two cohorts of stream-dwelling Atlantic salmon (Salmo salar) juveniles using multistate capture-mark-recapture (CMR) models. Simulations designed to test the effectiveness of multistate models for detecting SDS in our system indicated that error in SDS estimates was low and that both time-invariant and time-varying SDS could be detected with sample sizes of >250, average survival of >0.6, and average probability of capture of >0.6, except for cases of very strong SDS. In the field (N ≈ 750, survival 0.6-0.8 among sampling intervals, probability of capture 0.6-0.8 among sampling occasions), about one-third of the sampling intervals showed evidence of SDS, with poorer survival of larger fish during the age-2+ autumn and quadratic survival (opposite direction between cohorts) during age-1+ spring. The varying magnitude and shape of SDS among sampling intervals suggest a potential mechanism for the maintenance of the very wide observed size distributions. Estimating SDS using multistate CMR models appears complementary to established approaches, can provide estimates with low error, and can be used to detect intermittent SDS. © 2008 NRC Canada.

  18. Cleaning frequency and the microbial load in ice-cream.

    PubMed

    Holm, Sonya; Toma, Ramses B; Reiboldt, Wendy; Newcomer, Chris; Calicchia, Melissa

    2002-07-01

    This study investigates the efficacy of a 62 h cleaning frequency in the manufacturing of ice-cream. Various product and product contact surfaces were sampled progressively throughout the time period between cleaning cycles, and analyzed for microbial growth. The coliform and standard plate counts (SPC) of these samples did not vary significantly over time after 0, 24, 48, or 62 h from Cleaning in Place (CiP). Data for product contact surfaces were significant for the SPC representing sample locations. Some of the variables in cleaning practices had significant influence on microbial loads. An increase in the number of flavors manufactured caused a decrease in SPC within the 24 h interval, but by the 48 h interval the SPC increased. More washouts within the first 24 h interval were favorable, as indicated by decreased SPC. The more frequently the liquefier was sanitized within the 62 h interval, the lower the SPC. This study indicates that food safety was not compromised and safety practices were effectively implemented throughout the process.

  19. Improving laboratory results turnaround time by reducing pre analytical phase.

    PubMed

    Khalifa, Mohamed; Khalid, Parwaiz

    2014-01-01

    Laboratory turnaround time is considered one of the most important indicators of work efficiency in hospitals. Physicians need timely results to make effective clinical decisions, especially in the emergency department, where results can guide whether to admit patients to the hospital, discharge them home or do further investigations. A retrospective data analysis study was performed to identify the effects of ER and lab staff training on new routines for sample collection and transportation on the pre-analytical phase of turnaround time. Renal profile tests requested by the ER and performed in 2013 were selected as a sample, and data on 7,519 tests were retrieved and analyzed to compare turnaround time intervals before and after implementing the new routines. Results showed significant time reduction in the "Request to Sample Collection" and "Collection to In Lab Delivery" intervals, with less significant improvement in the analytical phase of the turnaround time.

  20. Interval Timing Accuracy and Scalar Timing in C57BL/6 Mice

    PubMed Central

    Buhusi, Catalin V.; Aziz, Dyana; Winslow, David; Carter, Rickey E.; Swearingen, Joshua E.; Buhusi, Mona C.

    2010-01-01

    In many species, interval timing behavior is accurate—appropriate estimated durations—and scalar—errors vary linearly with estimated durations. While accuracy has been previously examined, scalar timing has not been yet clearly demonstrated in house mice (Mus musculus), raising concerns about mouse models of human disease. We estimated timing accuracy and precision in C57BL/6 mice, the most used background strain for genetic models of human disease, in a peak-interval procedure with multiple intervals. Both when timing two intervals (Experiment 1) or three intervals (Experiment 2), C57BL/6 mice demonstrated varying degrees of timing accuracy. Importantly, both at individual and group level, their precision varied linearly with the subjective estimated duration. Further evidence for scalar timing was obtained using an intraclass correlation statistic. This is the first report of consistent, reliable scalar timing in a sizable sample of house mice, thus validating the PI procedure as a valuable technique, the intraclass correlation statistic as a powerful test of the scalar property, and the C57BL/6 strain as a suitable background for behavioral investigations of genetically engineered mice modeling disorders of interval timing. PMID:19824777
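
    The scalar property tested here (timing precision growing linearly with the timed duration, so the coefficient of variation stays constant) can be illustrated with a simple simulation; this sketch is purely illustrative and does not reproduce the mouse data:

```python
import random, statistics

def cv(samples):
    """Coefficient of variation: SD divided by the mean."""
    return statistics.stdev(samples) / statistics.mean(samples)

random.seed(7)
targets = [10.0, 30.0, 90.0]   # target durations, e.g. seconds
cvs = []
for t in targets:
    # scalar timing: the SD of produced times scales with the interval timed
    produced = [random.gauss(t, 0.15 * t) for _ in range(5000)]
    cvs.append(cv(produced))
# near-constant coefficient of variation across very different durations
assert max(cvs) - min(cvs) < 0.02
```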

  1. Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR.

    PubMed

    Mobli, Mehdi; Hoch, Jeffrey C

    2014-11-01

    Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Study of the Effect of Temporal Sampling Frequency on DSCOVR Observations Using the GEOS-5 Nature Run Results. Part II; Cloud Coverage

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Yang, Yuekui

    2016-01-01

    This is the second part of a study on how temporal sampling frequency affects satellite retrievals in support of the Deep Space Climate Observatory (DSCOVR) mission. Continuing from Part 1, which looked at Earth's radiation budget, this paper presents the effect of sampling frequency on DSCOVR-derived cloud fraction. The output from NASA's Goddard Earth Observing System version 5 (GEOS-5) Nature Run is used as the "truth". The effect of temporal resolution on potential DSCOVR observations is assessed by subsampling the full Nature Run data. A set of metrics, including uncertainty and absolute error in the subsampled time series, correlation between the original and the subsamples, and Fourier analysis have been used for this study. Results show that, for a given sampling frequency, the uncertainties in the annual mean cloud fraction of the sunlit half of the Earth are larger over land than over ocean. Analysis of correlation coefficients between the subsamples and the original time series demonstrates that even though sampling at certain longer time intervals may not increase the uncertainty in the mean, the subsampled time series is further and further away from the "truth" as the sampling interval becomes larger and larger. Fourier analysis shows that the simulated DSCOVR cloud fraction has underlying periodical features at certain time intervals, such as 8, 12, and 24 h. If the data is subsampled at these frequencies, the uncertainties in the mean cloud fraction are higher. These results provide helpful insights for the DSCOVR temporal sampling strategy.
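
    The aliasing effect reported here, where uncertainty rises when the sampling interval locks onto the diurnal cycle, can be illustrated on a synthetic hourly series (illustrative only; not the GEOS-5 data):

```python
import math

# Hourly "truth" with a pure diurnal cycle.
truth = [10 + 3 * math.sin(2 * math.pi * h / 24) for h in range(24 * 365)]
true_mean = sum(truth) / len(truth)   # equals 10 up to rounding

def subsample_mean(series, step, phase):
    sub = series[phase::step]
    return sum(sub) / len(sub)

# Sampling once every 24 h always sees the same phase of the diurnal
# cycle, so the estimated mean is biased; a 7 h interval (a nonintegral
# divisor of 24 h) walks through all phases and stays near the truth.
err24 = abs(subsample_mean(truth, 24, phase=2) - true_mean)
err7 = abs(subsample_mean(truth, 7, phase=2) - true_mean)
assert err24 > err7
```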

  3. Observer Error when Measuring Safety-Related Behavior: Momentary Time Sampling versus Whole-Interval Recording

    ERIC Educational Resources Information Center

    Taylor, Matthew A.; Skourides, Andreas; Alvero, Alicia M.

    2012-01-01

    Interval recording procedures are used by persons who collect data through observation to estimate the cumulative occurrence and nonoccurrence of behavior/events. Although interval recording procedures can increase the efficiency of observational data collection, they can also induce error from the observer. In the present study, 50 observers were…

  4. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    NASA Astrophysics Data System (ADS)

    Abu-Zinadah, Hanaa H.

    2017-08-01

    In several industrial settings the product comes from more than one production line, which calls for comparative life tests. This in turn requires sampling from the different production lines, giving rise to a joint censoring scheme. In this article we consider the Pareto lifetime distribution under a joint type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as bootstrap confidence intervals, of the model parameters are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo simulation results are presented to assess the performance of the proposed methods.

  5. Automated storm water sampling on small watersheds

    USGS Publications Warehouse

    Harmel, R.D.; King, K.W.; Slade, R.M.

    2003-01-01

    Few guidelines are currently available to assist in designing appropriate automated storm water sampling strategies for small watersheds. Therefore, guidance is needed to develop strategies that achieve an appropriate balance between accurate characterization of storm water quality and loads and limitations of budget, equipment, and personnel. In this article, we explore the important sampling strategy components (minimum flow threshold, sampling interval, and discrete versus composite sampling) and project-specific considerations (sampling goal, sampling and analysis resources, and watershed characteristics) based on personal experiences and pertinent field and analytical studies. These components and considerations are important in achieving the balance between sampling goals and limitations because they determine how and when samples are taken and the potential sampling error. Several general recommendations are made, including: setting low minimum flow thresholds, using flow-interval or variable time-interval sampling, and using composite sampling to limit the number of samples collected. Guidelines are presented to aid in selection of an appropriate sampling strategy based on user's project-specific considerations. Our experiences suggest these recommendations should allow implementation of a successful sampling strategy for most small watershed sampling projects with common sampling goals.
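
    The recommended flow-interval strategy (trigger a sample each time a fixed volume of flow has passed, skipping flow below a minimum threshold) can be sketched as follows; the names and numbers are illustrative:

```python
def flow_interval_sampler(flows, dt, min_flow, flow_increment):
    """Return time-step indices at which an automated sampler triggers.

    flows          : discharge at each time step (e.g. m^3/s)
    dt             : time step length (s)
    min_flow       : threshold below which no sampling occurs
    flow_increment : cumulative volume (m^3) between successive samples
    """
    samples, accumulated = [], 0.0
    for i, q in enumerate(flows):
        if q < min_flow:
            continue                 # below the minimum flow threshold
        accumulated += q * dt
        if accumulated >= flow_increment:
            samples.append(i)
            accumulated = 0.0
    return samples

# Storm hydrograph: low base flow, a rising limb, a peak, a recession.
hydrograph = [0.1, 0.1, 0.5, 2.0, 5.0, 8.0, 6.0, 3.0, 1.0, 0.3, 0.1]
idx = flow_interval_sampler(hydrograph, dt=600, min_flow=0.2, flow_increment=3000)
assert idx == [4, 5, 6]   # samples cluster around the peak, as intended
```

    Note how flow-interval pacing concentrates samples on the high-flow portion of the hydrograph, where most of the load is transported, without wasting bottles on base flow.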

  6. A new variable interval schedule with constant hazard rate and finite time range.

    PubMed

    Bugallo, Mehdi; Machado, Armando; Vasconcelos, Marco

    2018-05-27

    We propose a new variable interval (VI) schedule that achieves a constant probability of reinforcement over time while using a bounded range of intervals. By sampling each trial duration from a uniform distribution ranging from 0 to 2T seconds, and then applying a reinforcement rule that depends linearly on trial duration, the schedule alternates reinforced and unreinforced trials, each shorter than 2T seconds, while preserving a constant hazard function. © 2018 Society for the Experimental Analysis of Behavior.

  7. Analysis of the Factors Affecting the Interval between Blood Donations Using Log-Normal Hazard Model with Gamma Correlated Frailties.

    PubMed

    Tavakol, Najmeh; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    Time between blood donations plays a major role in a first-time donor becoming a regular one. The aim of this study was to determine the factors affecting the interval between blood donations. In a longitudinal study in 2008, 864 first-time donors at the Shahrekord Blood Transfusion Center, in the capital city of Chaharmahal and Bakhtiari Province, Iran, were selected by systematic sampling and followed up for five years. Among these, a subset of 424 donors who had at least two successful blood donations was chosen for this study, and the time intervals between their donations were measured as the response variable. Sex, body weight, age, marital status, education, residence and job were recorded as independent variables. Data analysis was based on a log-normal hazard model with gamma correlated frailty, in which the frailties are the sum of two independent components assumed to follow a gamma distribution. The analysis was done via a Bayesian approach using a Markov chain Monte Carlo algorithm in OpenBUGS. Convergence was checked via the Gelman-Rubin criterion using the BOA package in R. Age, job and education had significant effects on the chance of donating blood (P<0.05). The chances of blood donation were higher for older donors, clerical workers, laborers, the self-employed, students and more educated donors, and correspondingly the time intervals between their donations were shorter. Given the significant effects of these variables in the log-normal correlated frailty model, educational and cultural programs should be planned to encourage people with longer inter-donation intervals to donate more frequently.

  8. Estimating fluvial wood discharge from timelapse photography with varying sampling intervals

    NASA Astrophysics Data System (ADS)

    Anderson, N. K.

    2013-12-01

    There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been employed monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1-minute dataset, precision decreased by 23%, 46% and 60% for the 5-, 10- and 15-minute datasets, respectively. Five- and 10-minute sampling intervals provided unbiased, equal-variance estimates of 1-minute sampling, whereas 15-minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m3 for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. Proportions and variances were compared across sample intervals using bootstrap sampling to achieve equal n; each trial was sampled with n = 100, repeated 10,000 times, and the trials averaged to obtain an estimate for each sample interval.

  9. Complexity quantification of cardiac variability time series using improved sample entropy (I-SampEn).

    PubMed

    Marwaha, Puneeta; Sunkaria, Ramesh Kumar

    2016-09-01

    The sample entropy (SampEn) has been widely used to quantify the complexity of RR-interval time series. Higher complexity, and hence entropy, is associated with the RR-interval time series of healthy subjects. But SampEn suffers from the disadvantage that it assigns higher entropy to randomized surrogate time series, as well as to certain pathological time series, which is a misleading result. This incorrect estimation of complexity may arise because the existing SampEn technique sets the threshold value as a function of the long-term standard deviation (SD) of a time series, whereas the time series of certain pathologies exhibit substantial variability in beat-to-beat fluctuations. The SD of the first-order differences (short-term SD) of the time series should therefore be considered when setting the threshold value, to account for period-to-period variations inherent in a time series. In the present work, a new methodology, improved sample entropy (I-SampEn), is proposed in which the threshold value is set using the period-to-period variations of a time series. The I-SampEn technique assigns higher entropy values to age-matched healthy subjects than to patients suffering from atrial fibrillation (AF) or diabetes mellitus (DM). Our results are in agreement with the theory of reduced complexity of RR-interval time series in patients suffering from chronic cardiovascular and non-cardiovascular diseases.
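
    The change proposed here is small in code terms: derive the SampEn tolerance r from the SD of the first-order differences instead of the long-term SD. An illustrative sketch (standard sample entropy with that threshold rule swapped in; not the authors' implementation):

```python
import math, random, statistics

def sampen(x, m=2, r=None):
    """Sample entropy; the tolerance r is tied here to the SD of the
    first-order differences (short-term variability), following the
    I-SampEn idea, rather than to the long-term SD."""
    if r is None:
        diffs = [b - a for a, b in zip(x, x[1:])]
        r = 0.2 * statistics.pstdev(diffs)
    def count(mm):
        n = len(x) - mm + 1
        templates = [x[i:i + mm] for i in range(n)]
        c = 0
        for i in range(n):
            for j in range(i + 1, n):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A regular alternating series should score lower entropy than an
# irregular one under the same rule.
regular = [1.0, 5.0] * 50
random.seed(3)
irregular = [random.uniform(1.0, 5.0) for _ in range(100)]
assert sampen(regular) < sampen(irregular)
```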

  10. Sampling effects on the identification of roadkill hotspots: Implications for survey design.

    PubMed

    Santos, Sara M; Marques, J Tiago; Lourenço, André; Medinas, Denis; Barbosa, A Márcia; Beja, Pedro; Mira, António

    2015-10-01

    Although locating wildlife roadkill hotspots is essential to mitigate road impacts, the influence of study design on hotspot identification remains uncertain. We evaluated how sampling frequency affects the accuracy of hotspot identification, using a dataset of vertebrate roadkills (n = 4427) recorded over a year of daily surveys along 37 km of roads. "True" hotspots were identified using this baseline dataset, as the 500-m segments where the number of road-killed vertebrates exceeded the upper 95% confidence limit of the mean, assuming a Poisson distribution of road-kills per segment. "Estimated" hotspots were identified likewise, using datasets representing progressively lower sampling frequencies, which were produced by extracting data from the baseline dataset at appropriate time intervals (1-30 days). Overall, 24.3% of segments were "true" hotspots, concentrating 40.4% of roadkills. For different groups, "true" hotspots accounted from 6.8% (bats) to 29.7% (small birds) of road segments, concentrating from <40% (frogs and toads, snakes) to >60% (lizards, lagomorphs, carnivores) of roadkills. Spatial congruence between "true" and "estimated" hotspots declined rapidly with increasing time interval between surveys, due primarily to increasing false negatives (i.e., missing "true" hotspots). There were also false positives (i.e., wrong "estimated" hotspots), particularly at low sampling frequencies. Spatial accuracy decay with increasing time interval between surveys was higher for smaller-bodied (amphibians, reptiles, small birds, small mammals) than for larger-bodied species (birds of prey, hedgehogs, lagomorphs, carnivores). Results suggest that widely used surveys at weekly or longer intervals may produce poor estimates of roadkill hotspots, particularly for small-bodied species. Surveying daily or at two-day intervals may be required to achieve high accuracy in hotspot identification for multiple species. Copyright © 2015 Elsevier Ltd. All rights reserved.
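
    The hotspot rule above (flag a segment when its count exceeds the upper 95% confidence limit of the mean under a Poisson assumption) can be sketched with the standard library; the function names and the example counts are ours, not the study's data:

```python
import math

def poisson_upper_limit(lam, conf=0.95):
    """Smallest count k whose Poisson(lam) CDF reaches `conf`."""
    k, p = 0, math.exp(-lam)
    cdf = p
    while cdf < conf:
        k += 1
        p *= lam / k   # Poisson pmf recurrence P(k) = P(k-1) * lam / k
        cdf += p
    return k

def flag_hotspots(counts, conf=0.95):
    """True for road segments whose count exceeds the upper
    confidence limit of the mean count (the rule described above)."""
    lam = sum(counts) / len(counts)
    threshold = poisson_upper_limit(lam, conf)
    return [c > threshold for c in counts]
```

    For example, segment counts of [1, 2, 0, 3, 9, 2, 0, 1] have mean 2.25, so only the segment with 9 roadkills exceeds the Poisson upper limit and is flagged.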

  11. Investigation of within- and between-herd variability of bovine leukaemia virus bulk tank milk antibody levels over different sampling intervals in the Canadian Maritimes.

    PubMed

    John, Emily E; Nekouei, Omid; McClure, J T; Cameron, Marguerite; Keefe, Greg; Stryhn, Henrik

    2018-06-01

    Bulk tank milk (BTM) samples are used to determine the infection status and estimate dairy herd prevalence for bovine leukaemia virus (BLV) using an antibody ELISA assay. BLV ELISA variability between samples from the same herd or from different herds has not been investigated over long time periods. The main objective of this study was to determine the within-herd and between-herd variability of a BTM BLV ELISA assay over 1-month, 3-month, and 3-year sampling intervals. All of the Canadian Maritime region dairy herds (n = 523) that were active in 2013 and 2016 were included (83.9% and 86.9% of total herds in 2013 and 2016, respectively). BLV antibody levels were measured in three BTM samples collected at 1-month intervals in early 2013 as well as two BTM samples collected over a 3-month interval in early 2016. Random-effects models, with fixed effects for sample replicate and province and random effects for herd, were used to estimate the variability between BTM samples from the same herd and between herds for 1-month, 3-month, and 3-year sampling intervals. The majority of variability of BTM BLV ELISA results was seen between herds (1-month, 6.792 ± 0.533; 3-month, 7.806 ± 0.652; 3-year, 6.222 ± 0.528). Unexplained variance between samples from the same herd, on square-root scale, was greatest for the 3-year (0.976 ± 0.104), followed by the 1-month (0.611 ± 0.035) then the 3-month (0.557 ± 0.071) intervals. Variability of BTM antibody levels within the same herd was present but was much smaller than the variability between herds, and was greatest for the 3-year sampling interval. The 3-month sampling interval resulted in the least variability and is appropriate to use for estimating the baseline level of within-herd prevalence for BLV control programs. Knowledge of the baseline variability and within-herd prevalence can help to determine effectiveness of control programs when BTM sampling is repeated at longer intervals. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Bayesian analyses of time-interval data for environmental radiation monitoring.

    PubMed

    Luo, Peng; Sharp, Julia L; DeVol, Timothy A

    2013-01-01

    Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and conventional frequentist analyses of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for Statistical Computing. Bayesian analysis of time-interval information provided a detection probability similar to that of Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for cases with a very short presence of the source (less than the count time), time-interval information is more sensitive for detecting a change than count information, since the source counts are averaged with the background counts over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
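
    Under a Poisson source, the pulse time-intervals are exponentially distributed, so a conjugate Gamma prior on the count rate yields a closed-form posterior after each pulse. The sketch below illustrates that Bayesian time-interval update; it is a minimal illustration of the principle, not the decision algorithm of the paper:

```python
def update_rate_posterior(intervals, a0=1.0, b0=1.0):
    """Gamma(a0, b0) prior on the Poisson pulse rate, updated with
    observed exponential inter-pulse intervals. Returns the posterior
    shape, rate, and posterior mean; each new interval costs O(1)
    to incorporate, which suits online monitoring."""
    a = a0 + len(intervals)
    b = b0 + sum(intervals)
    return a, b, a / b
```

    With 100 intervals of 0.5 s each, the posterior mean rate is (1 + 100) / (1 + 50) ≈ 1.98 counts per second, close to the true rate of 2 implied by the data.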

  13. Measurement of trained speech patterns in stuttering: interjudge and intrajudge agreement of experts by means of modified time-interval analysis.

    PubMed

    Alpermann, Anke; Huber, Walter; Natke, Ulrich; Willmes, Klaus

    2010-09-01

    Improved fluency after stuttering therapy is usually measured by the percentage of stuttered syllables. However, outcome studies rarely evaluate the use of trained speech patterns that speakers use to manage stuttering. This study investigated whether the modified time interval analysis can distinguish between trained speech patterns, fluent speech, and stuttered speech. Seventeen German experts on stuttering judged a speech sample on two occasions. Speakers of the sample were stuttering adults, who were not undergoing therapy, as well as participants in a fluency shaping and a stuttering modification therapy. Results showed satisfactory inter-judge and intra-judge agreement above 80%. Intervals with trained speech patterns were identified as consistently as stuttered and fluent intervals. We discuss limitations of the study, as well as implications of our findings for the development of training for identification of trained speech patterns and future outcome studies. The reader will be able to (a) explain different methods to measure the use of trained speech patterns, (b) evaluate whether German experts are able to discriminate intervals with trained speech patterns reliably from fluent and stuttered intervals and (c) describe how the measurement of trained speech patterns can contribute to outcome studies.

  14. Sampling Development

    ERIC Educational Resources Information Center

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  15. Estimating daily fat yield from a single milking on test day for herds with a robotic milking system.

    PubMed

    Peeters, R; Galesloot, P J B

    2002-03-01

    The objective of this study was to estimate the daily fat yield and fat percentage from one sampled milking per cow per test day in an automatic milking system herd, when the milking times and milk yields of all individual milkings are recorded by the automatic milking system. Multiple regression models were used to estimate the 24-h fat percentage when only one milking is sampled for components and milk yields and milking times are known for all milkings in the 24-h period before the sampled milking. In total, 10,697 cow test day records, from 595 herd tests at 91 Dutch herds milked with an automatic milking system, were used. The best model to predict 24-h fat percentage included the fat percentage, protein percentage, milk yield, and milking interval of the sampled milking; the milk yield and milking interval of the preceding milking; and the interaction between milking interval and the ratio of fat to protein percentage of the sampled milking. This model gave a standard deviation of the prediction error (SE) for 24-h fat percentage of 0.321 and a correlation between the predicted and actual 24-h fat percentage of 0.910. For the 24-h fat yield, we found SE = 90 g and correlation = 0.967. This precision is slightly better than that of present a.m.-p.m. testing schemes. Extra attention must be paid to correctly matching the sample jars and the milkings. Furthermore, milkings with an interval of less than 4 h must be excluded from sampling, as well as milkings that are interrupted or that follow an interrupted milking. Under these restrictions (correct matching, interval of at least 4 h, and no interrupted milkings), one sampled milking suffices to obtain a satisfactory estimate of the test-day fat yield.

  16. Baseline-dependent sampling and windowing for radio interferometry: data compression, field-of-interest shaping, and outer field suppression

    NASA Astrophysics Data System (ADS)

    Atemkeng, M.; Smirnov, O.; Tasse, C.; Foster, G.; Keimpema, A.; Paragi, Z.; Jonas, J.

    2018-07-01

    Traditional radio interferometric correlators produce regular-gridded samples of the true uv-distribution by averaging the signal over constant, discrete time-frequency intervals. This regular sampling and averaging translates into irregular-gridded samples in the uv-space and results in a baseline-length-dependent loss of amplitude and phase coherence, which depends on the distance from the image phase centre. The effect is often referred to as `decorrelation' in the uv-space, which is equivalent in the source domain to `smearing'. This work discusses and implements a regular-gridded sampling scheme in the uv-space (baseline-dependent sampling) and windowing that allow for data compression, field-of-interest shaping, and source suppression. Baseline-dependent sampling requires irregular-gridded sampling in the time-frequency space, i.e. the time-frequency interval becomes baseline dependent. Analytic models and simulations are used to show that decorrelation remains constant across all baselines when applying baseline-dependent sampling and windowing. Simulations using the MeerKAT telescope and the European Very Long Baseline Interferometry Network show that data compression, field-of-interest shaping, and outer field-of-interest suppression are all achieved.

  17. Sample interval modulation for the simultaneous acquisition of displacement vector data in magnetic resonance elastography: theory and application

    NASA Astrophysics Data System (ADS)

    Klatt, Dieter; Yasar, Temel K.; Royston, Thomas J.; Magin, Richard L.

    2013-12-01

    SampLe Interval Modulation-magnetic resonance elastography (SLIM-MRE) is introduced for simultaneously encoding all three displacement projections of a monofrequency vibration into the MR signal phase. In SLIM-MRE, the individual displacement components are observed using different sample intervals. In doing so, the components are modulated with different apparent frequencies in the MR signal phase expressed as a harmonic function of the start time of the motion encoding gradients and can thus be decomposed by applying a Fourier transform to the sampled multidirectional MR phases. In this work, the theoretical foundations of SLIM-MRE are presented and the new idea is implemented using a high field (11.7 T) vertical bore magnetic resonance imaging system on an inhomogeneous agarose gel phantom sample. The local frequency estimation-derived stiffness values were the same within the error margins for both the new SLIM-MRE method and for conventional MRE, while the number of temporally-resolved MRE experiments needed for each study was reduced from three to one. In this work, we present for the first time, monofrequency displacement data along three sensitization directions that were acquired simultaneously and stored in the same k-space.
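
    The core multiplexing idea, each displacement component appearing at its own apparent frequency in the sampled MR phase and recovered by a Fourier transform, can be illustrated with a toy signal (the amplitudes and frequencies below are arbitrary assumptions, not MRE acquisition parameters):

```python
import cmath, math

def dft_bin(samples, k):
    """Normalized DFT coefficient of `samples` at integer frequency k."""
    n = len(samples)
    return sum(s * cmath.exp(-2j * math.pi * k * i / n)
               for i, s in enumerate(samples)) / n

# Three displacement amplitudes, each encoded at its own apparent
# frequency (1, 2 and 3 cycles per record) in a single sampled phase.
amps = [0.5, 0.8, 0.3]
n = 64
phase = [sum(a * math.cos(2 * math.pi * (d + 1) * i / n)
             for d, a in enumerate(amps))
         for i in range(n)]

# A Fourier transform of the multiplexed phase separates the components:
# each real cosine of amplitude a contributes a/2 to its DFT bin.
recovered = [2 * abs(dft_bin(phase, d + 1)) for d in range(len(amps))]
```

    Because the three apparent frequencies are distinct integer multiples of the record's fundamental, the bins are orthogonal and the amplitudes are recovered exactly, which is the sense in which one acquisition can carry all three sensitization directions.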

  18. Sample interval modulation for the simultaneous acquisition of displacement vector data in magnetic resonance elastography: theory and application.

    PubMed

    Klatt, Dieter; Yasar, Temel K; Royston, Thomas J; Magin, Richard L

    2013-12-21

    SampLe Interval Modulation-magnetic resonance elastography (SLIM-MRE) is introduced for simultaneously encoding all three displacement projections of a monofrequency vibration into the MR signal phase. In SLIM-MRE, the individual displacement components are observed using different sample intervals. In doing so, the components are modulated with different apparent frequencies in the MR signal phase expressed as a harmonic function of the start time of the motion encoding gradients and can thus be decomposed by applying a Fourier transform to the sampled multidirectional MR phases. In this work, the theoretical foundations of SLIM-MRE are presented and the new idea is implemented using a high field (11.7 T) vertical bore magnetic resonance imaging system on an inhomogeneous agarose gel phantom sample. The local frequency estimation-derived stiffness values were the same within the error margins for both the new SLIM-MRE method and for conventional MRE, while the number of temporally-resolved MRE experiments needed for each study was reduced from three to one. In this work, we present for the first time, monofrequency displacement data along three sensitization directions that were acquired simultaneously and stored in the same k-space.

  19. A novel recursive Fourier transform for nonuniform sampled signals: application to heart rate variability spectrum estimation.

    PubMed

    Holland, Alexander; Aboy, Mateo

    2009-07-01

    We present a novel method to iteratively calculate discrete Fourier transforms for discrete-time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT-based spectrum estimation with Lomb-Scargle transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to estimation performance comparable to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimation.
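
    One way to realize an O(N)-per-update transform for nonuniform samples is to keep running Fourier sums that each new sample increments. This is a hedged sketch of that idea, not the authors' exact RFT algorithm; the class name, frequencies, and sampling scheme are ours:

```python
import cmath, math, random

class RecursiveFourier:
    """Running Fourier sums for nonuniformly sampled data: each new
    (t, x) pair updates all N frequency bins in O(N), with no
    interpolation onto a uniform grid."""

    def __init__(self, freqs):
        self.freqs = list(freqs)
        self.sums = [0j] * len(self.freqs)
        self.n = 0

    def update(self, t, x):
        # accumulate x * exp(-2*pi*i*f*t) for every tracked frequency
        for k, f in enumerate(self.freqs):
            self.sums[k] += x * cmath.exp(-2j * math.pi * f * t)
        self.n += 1

    def amplitude(self, k):
        return 2 * abs(self.sums[k]) / self.n

# A 5 Hz cosine observed at irregular times over one second, loosely
# mimicking heart-beat-like sampling; the numbers are toy values.
rng = random.Random(1)
rf = RecursiveFourier(freqs=[3, 5, 9])
for _ in range(2000):
    t = rng.random()  # irregular sample time in [0, 1) s
    rf.update(t, math.cos(2 * math.pi * 5 * t))
```

    After the updates, the 5 Hz bin approaches the true amplitude of 1 while the off-frequency bins stay near zero, without ever resampling the data onto a uniform grid.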

  20. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.

  1. Temporal Structure of Volatility Fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong; Yamasaki, Kazuko; Stanley, H. Eugene; Havlin, Shlomo

    Volatility fluctuations are of great importance for the study of financial markets, and the temporal structure is an essential feature of fluctuations. To explore the temporal structure, we employ a new approach based on the return interval, which is defined as the time interval between two successive volatility values that are above a given threshold. We find that the distribution of the return intervals follows a scaling law over a wide range of thresholds and over a broad range of sampling intervals. Moreover, this scaling law is universal for stocks of different countries, for commodities, for interest rates, and for currencies. However, further and more detailed analysis of the return intervals shows some systematic deviations from the scaling law. We also demonstrate a significant memory effect in the temporal organization of the return intervals. We find that the distribution of return intervals is strongly related to the correlations in the volatility.
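
    Extracting return intervals from a sampled volatility series is straightforward; a minimal sketch (the function name and example values are ours):

```python
def return_intervals(series, threshold):
    """Time gaps between successive values exceeding `threshold`
    (the return-interval statistic described above), measured in
    sampling steps."""
    hits = [i for i, v in enumerate(series) if v > threshold]
    return [b - a for a, b in zip(hits, hits[1:])]
```

    For example, the series [0, 3, 1, 0, 5, 0, 0, 4] with threshold 2 has exceedances at positions 1, 4 and 7, giving return intervals of [3, 3]; the scaling analysis in the abstract is then performed on the distribution of such intervals.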

  2. Quantitative investigation of resolution increase of free-flow electrophoresis via simple interval sample injection and separation.

    PubMed

    Shao, Jing; Fan, Liu-Yin; Cao, Cheng-Xi; Huang, Xian-Qing; Xu, Yu-Quan

    2012-07-01

    Interval free-flow zone electrophoresis (FFZE) has been used to suppress the sample band broadening that greatly hinders the development of free-flow electrophoresis (FFE). However, there has still been no quantitative study of the resolution increase of interval FFZE. Herein, we compare bandwidths in the interval and continuous modes of FFZE. A commercial dye containing methyl green and crystal violet was chosen to show the bandwidth. The comparative experiments were conducted under the same sample loading of the model dye (viz. 3.49, 1.75, 1.17, and 0.88 mg/h), the same running time (viz. 5, 10, 15, and 20 min), and the same flux ratio between sample and background buffer (= 10.64 × 10⁻³). Under the given conditions, the experiments demonstrated that (i) the band broadening in continuous mode was evidently caused by a hydrodynamic factor, and (ii) the interval mode could clearly eliminate the hydrodynamic broadening existing in continuous mode, greatly increasing the resolution of dye separation. Finally, interval FFZE was successfully used for the complete separation of two model antibiotics (pyoluteorin and phenazine-1-carboxylic acid, coexisting in the fermentation broth of the new strain Pseudomonas aeruginosa M18), demonstrating the feasibility of the interval FFZE mode for the separation of biomolecules. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Identifying Issues and Concerns with the Use of Interval-Based Systems in Single Case Research Using a Pilot Simulation Study

    ERIC Educational Resources Information Center

    Ledford, Jennifer R.; Ayres, Kevin M.; Lane, Justin D.; Lam, Man Fung

    2015-01-01

    Momentary time sampling (MTS), whole interval recording (WIR), and partial interval recording (PIR) are commonly used in applied research. We discuss potential difficulties with analyzing data when these systems are used and present results from a pilot simulation study designed to determine the extent to which these issues are likely to be…

  4. Evaluating the efficiency of environmental monitoring programs

    USGS Publications Warehouse

    Levine, Carrie R.; Yanai, Ruth D.; Lampman, Gregory G.; Burns, Douglas A.; Driscoll, Charles T.; Lawrence, Gregory B.; Lynch, Jason; Schoch, Nina

    2014-01-01

    Statistical uncertainty analyses can be used to improve the efficiency of environmental monitoring, allowing sampling designs to maximize information gained relative to resources required for data collection and analysis. In this paper, we illustrate four methods of data analysis appropriate to four types of environmental monitoring designs. To analyze a long-term record from a single site, we applied a general linear model to weekly stream chemistry data at Biscuit Brook, NY, to simulate the effects of reducing sampling effort and to evaluate statistical confidence in the detection of change over time. To illustrate a detectable difference analysis, we analyzed a one-time survey of mercury concentrations in loon tissues in lakes in the Adirondack Park, NY, demonstrating the effects of sampling intensity on statistical power and the selection of a resampling interval. To illustrate a bootstrapping method, we analyzed the plot-level sampling intensity of forest inventory at the Hubbard Brook Experimental Forest, NH, to quantify the sampling regime needed to achieve a desired confidence interval. Finally, to analyze time-series data from multiple sites, we assessed the number of lakes and the number of samples per year needed to monitor change over time in Adirondack lake chemistry using a repeated-measures mixed-effects model. Evaluations of time series and synoptic long-term monitoring data can help determine whether sampling should be re-allocated in space or time to optimize the use of financial and human resources.
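
    The bootstrapping approach mentioned for the forest-inventory analysis follows a standard pattern: resample the data with replacement and read a confidence interval off the percentiles of the resampled statistic. A generic sketch, with defaults that are ours rather than the study's settings:

```python
import random

def bootstrap_ci(data, stat=lambda d: sum(d) / len(d),
                 trials=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for `stat` of `data`.
    Resamples `data` with replacement `trials` times and returns the
    (alpha/2, 1 - alpha/2) percentiles of the resampled statistic."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(stat([rng.choice(data) for _ in range(n)])
                   for _ in range(trials))
    lo = stats[int(trials * alpha / 2)]
    hi = stats[int(trials * (1 - alpha / 2)) - 1]
    return lo, hi
```

    Repeating this at different simulated sampling intensities shows how the interval widens as plots are dropped, which is how a desired confidence interval translates into a required sampling regime.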

  5. Dynamic response analysis of structure under time-variant interval process model

    NASA Astrophysics Data System (ADS)

    Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao

    2016-10-01

    Due to the aggressiveness of environmental factors, the variation of dynamic loads, the degeneration of material properties and the wear of machine surfaces, parameters related to a structure are distinctly time-variant. The typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model which can effectively deal with time-variant uncertainties when only limited information is available. Two methods are then presented for the dynamic response analysis of a structure under the time-variant interval process model. The first is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second is the Monte Carlo method based on Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials, which can be efficiently calculated, and the variational range of the dynamic response is then estimated from the samples yielded by the Monte Carlo method. To solve the dependency phenomenon of interval operations, affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE are verified by two numerical examples, including a spring-mass-damper system and a shell structure.

  6. Test plan for evaluating the operational performance of the prototype nested, fixed-depth fluidic sampler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    REICH, F.R.

    The PHMC will provide Low Activity Wastes (LAW) tank wastes for final treatment by a privatization contractor from two double-shell feed tanks, 241-AP-102 and 241-AP-104. Concerns about the inability of the baseline "grab" sampling to provide large-volume samples within time constraints have led to the development of a nested, fixed-depth sampling system. This sampling system will provide large-volume, representative samples without the environmental, radiation exposure, and sample volume impacts of the current baseline "grab" sampling method. A plan has been developed for the cold testing of this nested, fixed-depth sampling system with simulant materials. The sampling system will fill the 500-ml bottles and provide inner packaging to interface with the Hanford Site's cask shipping systems (PAS-1 and/or "safe-send"). The sampling system will provide a waste stream that will be used for on-line, real-time measurements with an at-tank analysis system. The cold tests evaluate the performance and the ability to provide samples that are representative of the tanks' content within a 95 percent confidence interval, to sample while mixing pumps are operating, to provide large sample volumes (1-15 liters) within a short time interval, to sample supernatant wastes with over 25 wt% solids content, to recover from precipitation- and settling-based plugging, and the potential to operate over the 20-year expected time span of the privatization contract.

  7. Evaluating test-retest reliability in patient-reported outcome measures for older people: A systematic review.

    PubMed

    Park, Myung Sook; Kang, Kyung Ja; Jang, Sun Joo; Lee, Joo Yun; Chang, Sun Ju

    2018-03-01

    This study aimed to evaluate the components of test-retest reliability, including time interval, sample size, and statistical methods, used in patient-reported outcome measures in older people, and to provide suggestions on the methodology for calculating test-retest reliability for patient-reported outcomes in older people. This was a systematic literature review. MEDLINE, Embase, CINAHL, and PsycINFO were searched from January 1, 2000 to August 10, 2017 by an information specialist. This systematic review was guided by both the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist and the guideline for systematic review published by the National Evidence-based Healthcare Collaborating Agency in Korea. The methodological quality was assessed using the Consensus-based Standards for the selection of health Measurement Instruments checklist box B. Ninety-five out of 12,641 studies were selected for the analysis. The median time interval for test-retest reliability was 14 days, and the ratio of sample size for test-retest reliability to the number of items in each measure ranged from 1:1 to 1:4. The most frequently used statistical method for continuous scores was the intraclass correlation coefficient (ICC). Among the 63 studies that used ICCs, 21 studies presented models for ICC calculations and 30 studies reported 95% confidence intervals of the ICCs. Additional analyses using 17 studies that reported a strong ICC (>0.9) showed that the mean time interval was 12.88 days and the mean ratio of the number of items to sample size was 1:5.37. When researchers plan to assess the test-retest reliability of patient-reported outcome measures for older people, they need to consider an adequate time interval of approximately 13 days and a sample size of about 5 times the number of items. In particular, statistical methods should not only be selected based on the types of scores of the patient-reported outcome measures, but should also be described clearly in the studies that report the results of test-retest reliability. Copyright © 2017 Elsevier Ltd. All rights reserved.
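
    An ICC can be computed from a subjects-by-occasions table via a one-way ANOVA decomposition; the sketch below implements ICC(1,1) (many of the reviewed studies may instead have used two-way models such as ICC(2,1), which is why the review asks authors to report the model):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for a table of subjects
    (rows) by repeated measurements (columns)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # between-subject and within-subject mean squares
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(ratings, row_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

    Perfectly reproducible test-retest scores give an ICC of 1, while scores whose variation is entirely within-subject give a non-positive ICC.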

  8. The effects of morphine on fixed-interval patterning and temporal discrimination.

    PubMed Central

    Odum, A L; Schaal, D W

    2000-01-01

    Changes produced by drugs in response patterns under fixed-interval schedules of reinforcement have been interpreted to result from changes in temporal discrimination. To examine this possibility, this experiment determined the effects of morphine on the response patterning of 4 pigeons during a fixed-interval 1-min schedule of food delivery with interpolated temporal discrimination trials. Twenty of the 50 total intervals were interrupted by choice trials. Pecks to one key color produced food if the interval was interrupted after a short time (after 2 or 4.64 s). Pecks to another key color produced food if the interval was interrupted after a long time (after 24.99 or 58 s). Morphine (1.0 to 10.0 mg/kg) decreased the index of curvature (a measure of response patterning) during fixed intervals and accuracy during temporal discrimination trials. Accuracy was equally disrupted following short and long sample durations. Although morphine disrupted temporal discrimination in the context of a fixed-interval schedule, these effects are inconsistent with interpretations of the disruption of response patterning as a selective overestimation of elapsed time. The effects of morphine may be related to the effects of more conventional external stimuli on response patterning. PMID:11029024

  9. Gas, Oil, and Water Production from Grand Valley, Parachute, Rulison, and Mamm Creek Fields in the Piceance Basin, Colorado

    USGS Publications Warehouse

    Nelson, Philip H.; Santus, Stephen L.

    2010-01-01

    Gas, oil, and water production data for tight gas reservoirs were compiled from selected wells in western Colorado. These reservoir rocks-the relatively shallow Paleogene Wasatch G sandstone interval in the Parachute and Rulison fields and fluvial sandstones in the deeper Upper Cretaceous Mesaverde Group in the Grand Valley, Parachute, Rulison, and Mamm Creek fields-are characterized by low permeability, low porosity, and the presence of clay minerals in pore space. Production from each well is represented by two samples spaced five years apart, the first sample typically taken two years after production commenced, which was generally in the 1990s. For each producing interval, summary diagrams of oil-versus-gas and water-versus-gas production show fluid production rates, the change in rates during five years, the water-gas and oil-gas ratios, and the fluid type. These diagrams permit well-to-well and field-to-field comparisons. Fields producing water at low rates (water dissolved in gas in the reservoir) can be distinguished from fields producing water at moderate or high rates, and the water-gas ratios are quantified. Dry gas is produced from the Wasatch G interval and wet gas is produced from the Mesaverde Group. Production from the Wasatch G interval is also almost completely free of water, but water production commences with gas production in wells producing from the Mesaverde Group-all of these wells have water-gas ratios exceeding the amount that could exist dissolved in gas at reservoir temperature and pressure. The lack of produced water from the Wasatch G interval is attributed to expansion of the gas accumulation with uplift and erosion. The reported underpressure of the Wasatch G interval is here attributed to hydraulic connection to the atmosphere by outcrops in the Colorado River valley at an elevation lower than that of the gas fields. 
The amount of reduction of gas production over the five-year time span between the first and second samples is roughly one-half, with median values of second-sample to first-sample gas-production ratios ranging from 0.40 for Rulison-Mesaverde to 0.63 for Rulison-Wasatch G. Commencing with the first sample, the logarithm-of-production rate appears to decline linearly with time in many wells. However, water production is much more erratic as a function of time from an individual well and also from one well to the next within a field. Water production can either decrease or increase with time (from the first to the second sample). In this study, slightly more than half the wells producing from the Mesaverde Group show decreases in water production with time. Plots of water decline versus gas decline show little relation between the two, with only the wells in Rulison field displaying some tendency for water and gas to decline proportionately.

  10. Quantifying the impact of time-varying baseline risk adjustment in the self-controlled risk interval design.

    PubMed

    Li, Lingling; Kulldorff, Martin; Russek-Cohen, Estelle; Kawai, Alison Tse; Hua, Wei

    2015-12-01

    The self-controlled risk interval design is commonly used to assess the association between an acute exposure and an adverse event of interest, implicitly adjusting for fixed, non-time-varying covariates. Explicit adjustment needs to be made for time-varying covariates, for example, age in young children. It can be performed via either a fixed or random adjustment. The random-adjustment approach can provide valid point and interval estimates but requires access to individual-level data for an unexposed baseline sample. The fixed-adjustment approach does not have this requirement and will provide a valid point estimate but may underestimate the variance. We conducted a comprehensive simulation study to evaluate their performance. We designed the simulation study using empirical data from the Food and Drug Administration-sponsored Mini-Sentinel Post-licensure Rapid Immunization Safety Monitoring Rotavirus Vaccines and Intussusception study in children 5-36.9 weeks of age. The time-varying confounder is age. We considered a variety of design parameters including sample size, relative risk, time-varying baseline risks, and risk interval length. The random-adjustment approach has very good performance in almost all considered settings. The fixed-adjustment approach can be used as a good alternative when the number of events used to estimate the time-varying baseline risks is at least the number of events used to estimate the relative risk, which is almost always the case. We successfully identified settings in which the fixed-adjustment approach can be used as a good alternative and provided guidelines on the selection and implementation of appropriate analyses for the self-controlled risk interval design. Copyright © 2015 John Wiley & Sons, Ltd.
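The core of the design can be sketched as a ratio of event rates in the risk and control windows, with interval lengths weighted by externally estimated baseline risks. This is an illustrative simplification (function name, toy numbers, and the bare length-weighting are assumptions; the article's fixed- and random-adjustment estimators are model-based):

```python
def scri_relative_risk(events_risk, events_control, len_risk, len_control,
                       w_risk=1.0, w_control=1.0):
    """Point estimate of relative risk in a self-controlled risk interval
    design: events per unit of (baseline-weighted) interval length in the
    risk window divided by the same quantity in the control window. The
    weights stand in for externally estimated time-varying baseline risks
    (the 'fixed-adjustment' idea); with weights of 1 the estimate is
    unadjusted."""
    return ((events_risk / (len_risk * w_risk))
            / (events_control / (len_control * w_control)))

# Toy numbers: 30 events in a 7-day risk window vs 20 in a 14-day control window.
rr = scri_relative_risk(30, 20, 7, 14)
```

The fixed-adjustment approach plugs in the baseline weights as known constants, which is why it yields a valid point estimate but can understate the variance.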

  11. Estimating clinical chemistry reference values based on an existing data set of unselected animals.

    PubMed

    Dimauro, Corrado; Bonelli, Piero; Nicolussi, Paola; Rassu, Salvatore P G; Cappio-Borlino, Aldo; Pulina, Giuseppe

    2008-11-01

    In an attempt to standardise the determination of biological reference values, the International Federation of Clinical Chemistry (IFCC) has published a series of recommendations on developing reference intervals. The IFCC recommends the use of an a priori sampling of at least 120 healthy individuals. However, sampling and analysing such a large number of individuals is expensive, time-consuming and not always feasible, especially in veterinary medicine. In this paper, an alternative (a posteriori) method is described and is used to determine reference intervals for biochemical parameters of farm animals using an existing laboratory data set. The method is based on the detection and removal of outliers, which yields a large sample of animals likely to be healthy from the existing data set. This allowed the estimation of reliable reference intervals for biochemical parameters in Sarda dairy sheep. The method may also be useful for determining reference intervals for other species, ages and genders.
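The general a-posteriori idea (not the authors' exact algorithm) can be sketched as: iteratively strip outliers from the unselected data set, then take the central 95% of what remains as the reference interval. The Tukey-fence rule below is an assumed stand-in for whatever outlier test is used:

```python
import statistics

def reference_interval(values, k=1.5, max_iter=10):
    """A-posteriori reference interval: iteratively drop Tukey outliers
    (outside Q1 - k*IQR, Q3 + k*IQR), then take the central 95%
    (2.5th-97.5th percentiles) of the remaining values."""
    data = sorted(values)
    for _ in range(max_iter):
        q1, _, q3 = statistics.quantiles(data, n=4)
        lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
        kept = [x for x in data if lo <= x <= hi]
        if len(kept) == len(data):
            break
        data = kept
    cuts = statistics.quantiles(data, n=40)  # cut points at 2.5%, 5%, ..., 97.5%
    return cuts[0], cuts[-1]

lo_ri, hi_ri = reference_interval(list(range(1, 101)) + [1000])  # one gross outlier
```

The single gross outlier is removed before the percentiles are taken, so the interval reflects the presumed-healthy bulk of the data.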

  12. Retrospective analysis of mercury content in feathers of birds collected from the state of Michigan (1895-2007).

    PubMed

    Head, Jessica A; DeBofsky, Abigail; Hinshaw, Janet; Basu, Niladri

    2011-10-01

    Museum specimens were used to analyze temporal trends in feather mercury (Hg) concentrations in birds collected from the state of Michigan between the years 1895 and 2007. Hg was measured in flank and secondary feathers from three species of birds that breed in the Great Lakes region; common terns (n = 32), great blue herons (n = 35), and herring gulls (n = 35). More than 90% of the Hg in feathers should be organic, but some of the heron and gull feathers collected prior to 1936 showed evidence of contamination with inorganic Hg, likely from museum preservatives. The data presented here therefore consist of organic Hg in pre-1936 samples and total Hg in post-1936 samples. Insufficient tissue was available from terns to assess organic Hg content. Mean Hg concentrations ranged from 2.9 ± 2.5 μg/g Hg in tern flank feathers to 12.4 ± 10.6 μg/g Hg in gull flank feathers. No linear trend of Hg contamination over time was detected in herons and gulls. Though a significant decrease was noted for terns, these data are presented with caution given the strong likelihood that earlier samples were preserved with inorganic mercury. When data were separated into 30-year intervals, Hg content in heron and gull feathers from birds sampled between 1920 and 1949 was consistently highest, though the differences were not statistically significant. For example, the mean Hg concentration in gull secondary feathers collected in the second time interval (1920-1949) was 11.5 ± 7.8 μg/g. This value was 67% higher than in the first time interval (1890-1919), 44% higher than in the third interval (1950-1979), and 187% higher than in the fourth interval (1980-2009). Studies on Great Lakes sediments also showed greatest Hg accumulations in the mid-twentieth century. Through the use of museum specimens, these results present a unique snapshot of Hg concentrations in Great Lakes biota in the early part of the twentieth century.

  13. Comparison of Observational Methods and Their Relation to Ratings of Engagement in Young Children

    ERIC Educational Resources Information Center

    Wood, Brenna K.; Hojnoski, Robin L.; Laracy, Seth D.; Olson, Christopher L.

    2016-01-01

    Although, collectively, results of earlier direct observation studies suggest momentary time sampling (MTS) may offer certain technical advantages over whole-interval (WIR) and partial-interval (PIR) recording, no study has compared these methods for measuring engagement in young children in naturalistic environments. This study compared direct…

  14. Discrimination of Variable Schedules Is Controlled by Interresponse Times Proximal to Reinforcement

    ERIC Educational Resources Information Center

    Tanno, Takayuki; Silberberg, Alan; Sakagami, Takayuki

    2012-01-01

    In Experiment 1, food-deprived rats responded to one of two schedules that were, with equal probability, associated with a sample lever. One schedule was always variable ratio, while the other schedule, depending on the trial within a session, was: (a) a variable-interval schedule; (b) a tandem variable-interval,…

  15. Lone star tick abundance, fire, and bison grazing in tall-grass prairie

    USGS Publications Warehouse

    Cully, J.F.

    1999-01-01

    Lone star ticks (Amblyomma americanum L.) were collected by drag samples along 1-km transects on 12 watersheds at Konza Prairie Research Natural Area near Manhattan, Kans., during the summers of 1995-1996. Watersheds received factorial combinations of two experimental treatments: burn interval (1-year, 4-year, or 20-year) and grazing (grazed by bison (Bos bison L.) or ungrazed). The objectives were to determine whether fire interval, time since the most recent burn, and the presence of large ungulate grazers cause changes in lone star tick abundance in tallgrass prairie in central Kansas. Watersheds burned at 1-year intervals had fewer larvae and adults than watersheds burned at 4-year or 20-year intervals. Watersheds burned during the year of sampling had fewer ticks than watersheds burned one or more years in the past; among the latter, time since burn had no further effect. The presence of bison did not affect tick abundance. Spring burning is an effective method to reduce tick populations in tallgrass prairie during the year of the burn.

  16. A Further Assessment of Momentary Time-Sampling across Extended Interval Lengths

    ERIC Educational Resources Information Center

    Alvero, Alicia M.; Rappaport, Eva; Taylor, Matthew A.

    2011-01-01

    The current study compared the estimation of momentary time-sampling (MTS) to actual safety performance of three ergonomic responses: back, shoulder, and feet. Actual safety performance was established for the five participants by measuring the target responses with a continuous procedure. MTS 90, 105, 120, 135, 150, 165, 180, 195, 210, 240, and…

  17. Exponential synchronization of neural networks with discrete and distributed delays under time-varying sampling.

    PubMed

    Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian

    2012-09-01

    This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed, which captures the characteristic of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, and thus the master systems synchronize with the slave systems. The desired sampled-data controller can be achieved by solving a set of linear matrix inequalities, which depend upon the maximum sampling interval and the decay rate. The obtained conditions are not only less conservative but also involve fewer decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.

  18. Extended Task Space Control for Robotic Manipulators

    NASA Technical Reports Server (NTRS)

    Backes, Paul G. (Inventor); Long, Mark K. (Inventor)

    1996-01-01

    The invention is a method of operating a robot in successive sampling intervals to perform a task, the robot having joints and joint actuators with actuator control loops, by decomposing the task into behavior forces, accelerations, velocities, and positions of plural behaviors to be exhibited by the robot simultaneously; computing actuator accelerations of the joint actuators for the current sampling interval from both the behavior forces, accelerations, velocities, and positions of the current sampling interval and the actuator velocities and positions of the previous sampling interval; computing actuator velocities and positions of the joint actuators for the current sampling interval from the actuator velocities and positions of the previous sampling interval; and, finally, controlling the actuators in accordance with the actuator accelerations, velocities, and positions of the current sampling interval. The actuator accelerations, velocities, and positions of the current sampling interval are stored for use during the next sampling interval.
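The per-interval bookkeeping described above amounts to integrating commanded accelerations into the velocity and position setpoints handed to the actuator loops, carrying the previous interval's state forward. A minimal Euler-integration sketch (names and the absence of limits/feedback terms are simplifying assumptions, not the patent's full method):

```python
def control_step(q_prev, qd_prev, qdd_cmd, dt):
    """One sampling interval: integrate commanded joint accelerations into
    velocity and position setpoints, using the previous interval's state.
    Semi-implicit Euler: velocity is updated first, position uses the new
    velocity."""
    qd = [v + a * dt for v, a in zip(qd_prev, qdd_cmd)]
    q = [p + v * dt for p, v in zip(q_prev, qd)]
    return q, qd

# Two joints, constant commanded accelerations, 10 ms sampling interval.
q, qd = [0.0, 0.0], [0.0, 0.0]
for _ in range(100):                      # 1 second of control
    q, qd = control_step(q, qd, [1.0, -0.5], 0.01)
```

Storing `q` and `qd` between calls mirrors the patent's step of saving the current interval's values for use in the next one.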

  19. Self-calibrating threshold detector

    NASA Technical Reports Server (NTRS)

    Barnes, J. R.; Huang, M. Y. (Inventor)

    1980-01-01

    A self-calibrating threshold detector comprises a single demodulating channel which includes a mixer having one input receiving the incoming signal and another input receiving a local replica code. During a short time interval, an incorrect local code is applied to the mixer to incorrectly demodulate the incoming signal and to provide a reference level that calibrates the noise propagating through the channel. A sample and hold circuit is coupled to the channel for storing a sample of the reference level. During a relatively long time interval, the correct replica code provides an output level which ranges between the reference level and a maximum level that represents incoming signal presence and synchronism with the replica code. A summer subtracts the stored sample reference from the output level to provide a resultant difference signal indicative of the acquisition of the expected signal.
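The decision logic reduces to a subtract-and-compare step. A toy sketch of that idea only (function name, levels, and the fixed margin are illustrative assumptions, not values from the patent):

```python
def detect(correlator_out, noise_ref, margin):
    """Self-calibrating threshold test: subtract the stored noise reference
    (sampled while the incorrect replica code was applied) from the channel
    output and declare acquisition when the difference exceeds a margin."""
    return (correlator_out - noise_ref) > margin

noise_ref = 0.8                          # stored by the sample-and-hold circuit
acquired = detect(3.1, noise_ref, 1.0)   # well above the calibrated noise floor
not_yet = detect(1.2, noise_ref, 1.0)    # still near the noise floor
```

Because the reference is measured through the same channel, drifts in channel gain or noise cancel out of the comparison, which is the point of the self-calibration.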

  20. Representativeness of direct observations selected using a work-sampling equation.

    PubMed

    Sharp, Rebecca A; Mudford, Oliver C; Elliffe, Douglas

    2015-01-01

    Deciding on appropriate sampling to obtain representative samples of behavior is important but not straightforward, because the relative duration of the target behavior may affect its observation in a given sampling interval. Work-sampling methods, which offer a way to adjust the frequency of sampling according to a priori or ongoing estimates of the behavior to achieve a preselected level of representativeness, may provide a solution. Full-week observations of 7 behaviors were conducted for 3 students with autism spectrum disorder and intellectual disabilities. Work-sampling methods were used to select momentary time samples from the full time-of-interest, which produced representative samples. However, work sampling required impractically high numbers of time samples to obtain representative samples. More practical momentary time samples produced less representative samples, particularly for low-duration behaviors. The utility and limits of work-sampling methods for applied behavior analysis are discussed. © Society for the Experimental Analysis of Behavior.
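The representativeness problem can be made concrete with a toy momentary time sample drawn from a continuous record. This sketch illustrates the mechanism only (the session layout and step sizes are invented; the article's work-sampling equations choose the number of samples to bound the error):

```python
def mts_estimate(record, step):
    """Momentary time sampling: inspect the continuous record only at every
    `step`-th instant and estimate the proportion of time the behavior
    occurred."""
    samples = record[::step]
    return sum(samples) / len(samples)

# 600-s session at 1-s resolution; behavior present for one 180-s bout (30%).
record = [1] * 180 + [0] * 420
true_p = sum(record) / len(record)   # 0.30
dense = mts_estimate(record, 5)      # many samples: matches the true proportion
sparse = mts_estimate(record, 97)    # few samples: estimate drifts from 0.30
```

With low-duration behaviors the same sparse schedule can miss the behavior entirely, which is the pattern the study reports.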

  1. Magnetic Resonance Fingerprinting with short relaxation intervals.

    PubMed

    Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter

    2017-09-01

    The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: The largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences.
The method enables fast implementations of 3D spatially resolved MRF. Copyright © 2017 Elsevier Inc. All rights reserved.
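Whatever the relaxation interval, the MRF reconstruction step is a pattern match of the measured signal against precomputed dictionary entries. A bare-bones sketch of that matching step only (the tiny dictionary and its (T1, T2) keys are invented; simulating the dictionary, including the stationary-fingerprint correction for short relaxation intervals, is the hard part and is not shown):

```python
import math

def match_fingerprint(signal, dictionary):
    """Pick the dictionary entry (keyed here by assumed (T1, T2) pairs in ms)
    whose normalized inner product with the measured signal is largest."""
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    s = norm(signal)
    best_key, best_score = None, -1.0
    for key, entry in dictionary.items():
        score = abs(sum(a * b for a, b in zip(s, norm(entry))))
        if score > best_score:
            best_key, best_score = key, score
    return best_key

# Toy two-entry dictionary; real dictionaries hold thousands of simulated signals.
dictionary = {(800, 60): [1.0, 0.5, 0.25], (1200, 100): [1.0, 0.8, 0.6]}
estimate = match_fingerprint([2.0, 1.6, 1.2], dictionary)
```

Normalization makes the match insensitive to overall signal scale, so only the signal shape (the "fingerprint") determines the estimated parameters.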

  2. Point Intercept (PO)

    Treesearch

    John F. Caratti

    2006-01-01

    The FIREMON Point Intercept (PO) method is used to assess changes in plant species cover or ground cover for a macroplot. This method uses a narrow diameter sampling pole or sampling pins, placed at systematic intervals along line transects to sample within plot variation and quantify statistically valid changes in plant species cover and height over time. Plant...

  3. Online Doppler Effect Elimination Based on Unequal Time Interval Sampling for Wayside Acoustic Bearing Fault Detecting System

    PubMed Central

    Ouyang, Kesai; Lu, Siliang; Zhang, Shangbin; Zhang, Haibin; He, Qingbo; Kong, Fanrang

    2015-01-01

    The railway occupies a fairly important position in transportation due to its high speed and strong transportation capability. As a consequence, it is a key issue to guarantee continuous running and transportation safety of trains. Meanwhile, time consumption of the diagnosis procedure is of extreme importance for the detecting system. However, most of the current adopted techniques in the wayside acoustic defective bearing detector system (ADBD) are offline strategies, which means that the signal is analyzed after the sampling process. This would result in unavoidable time latency. Besides, the acquired acoustic signal would be corrupted by the Doppler effect because of high relative speed between the train and the data acquisition system (DAS). Thus, it is difficult to effectively diagnose the bearing defects immediately. In this paper, a new strategy called online Doppler effect elimination (ODEE) is proposed to remove the Doppler distortion online by the introduced unequal interval sampling scheme. The steps of proposed strategy are as follows: The essential parameters are acquired in advance. Then, the introduced unequal time interval sampling strategy is used to restore the Doppler distortion signal, and the amplitude of the signal is demodulated as well. Thus, the restored Doppler-free signal is obtained online. The proposed ODEE method has been employed in simulation analysis. Ultimately, the ODEE method is implemented in the embedded system for fault diagnosis of the train bearing. The results are in good accordance with the bearing defects, which verifies the good performance of the proposed strategy. PMID:26343657
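The essence of unequal time interval sampling is to evaluate the received record at non-uniform instants chosen so that the samples are uniform in the source's own time, undoing the Doppler time-warping. A self-contained toy sketch (the quadratic warp, tone frequency, and grid sizes are invented stand-ins for the pass-by kinematics; the real method also demodulates amplitude, which is omitted):

```python
import math

def interp(t, times, values):
    """Piecewise-linear interpolation (times strictly increasing)."""
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    for i in range(1, len(times)):
        if t <= times[i]:
            f = (t - times[i - 1]) / (times[i] - times[i - 1])
            return values[i - 1] + f * (values[i] - values[i - 1])

# Toy monotone time warp standing in for the Doppler distortion: source time
# u relates to receiver time t by u = g(t) = t - 0.05*t**2 on [0, 2] s.
src = lambda u: math.sin(2 * math.pi * 5 * u)       # 5 Hz source tone
recv_t = [0.001 * k for k in range(2001)]           # uniform receiver sampling
recv = [src(t - 0.05 * t * t) for t in recv_t]      # Doppler-distorted record

# Unequal-interval resampling: read the record at t_k = g^{-1}(u_k) so the
# samples are uniform in source time, removing the distortion.
g_inv = lambda u: (1 - math.sqrt(1 - 0.2 * u)) / 0.1
uniform_u = [0.0015 * k for k in range(1001)]       # 0 .. 1.5 s of source time
restored = [interp(g_inv(u), recv_t, recv) for u in uniform_u]
```

After resampling, `restored` matches the undistorted source tone to within the linear-interpolation error, which is the Doppler-free signal the ODEE pipeline hands to the fault-diagnosis stage.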

  4. Reversing the Course of Forgetting

    PubMed Central

    White, K. Geoffrey; Brown, Glenn S

    2011-01-01

    Forgetting functions were generated for pigeons in a delayed matching-to-sample task, in which accuracy decreased with increasing retention-interval duration. In baseline training with dark retention intervals, accuracy was high overall. Illumination of the experimental chamber by a houselight during the retention interval impaired performance accuracy by increasing the rate of forgetting. In novel conditions, the houselight was lit at the beginning of a retention interval and then turned off partway through the retention interval. Accuracy was low at the beginning of the retention interval and then increased later in the interval. Thus the course of forgetting was reversed. Such a dissociation of forgetting from the passage of time is consistent with an interference account in which attention or stimulus control switches between the remembering task and extraneous events. PMID:21909163

  5. An apparatus for sequentially combining microvolumes of reagents by infrasonic mixing.

    PubMed

    Camien, M N; Warner, R C

    1984-05-01

    A method employing high-speed infrasonic mixing for obtaining timed samples for following the progress of a moderately rapid chemical reaction is described. Drops of 10 to 50 microliter each of two reagents are mixed to initiate the reaction, followed, after a measured time interval, by mixing with a drop of a third reagent to quench the reaction. The method was developed for measuring the rate of denaturation of covalently closed, circular DNA in NaOH at several temperatures. For this purpose the timed samples were analyzed by analytical ultracentrifugation. The apparatus was tested by determination of the rate of hydrolysis of 2,4-dinitrophenyl acetate in an alkaline buffer. The important characteristics of the method are (i) it requires very small volumes of sample and reagents; (ii) the components of the reaction mixture are pre-equilibrated and mixed with no transfer outside the prescribed constant temperature environment; (iii) the mixing is very rapid; and (iv) satisfactorily precise measurements of relatively short time intervals (approximately 2 sec minimum) between sequential mixings of the components are readily obtainable.

  6. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    PubMed

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
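The bootstrap-plus-shortest-interval idea can be sketched in miniature. The per-run gain values below are invented, and the real analysis bootstraps contributions inside a single Monte Carlo run rather than a list of independent gains:

```python
import random

def shortest_ci(samples, level=0.95):
    """Shortest contiguous interval containing `level` of the sampled values."""
    s = sorted(samples)
    k = int(round(level * len(s)))
    best = min(range(len(s) - k + 1), key=lambda i: s[i + k - 1] - s[i])
    return s[best], s[best + k - 1]

def bootstrap_gain_ci(gains, n_boot=2000, seed=1):
    """Bootstrap the distribution of the mean efficiency gain and report the
    shortest 95% interval from the resampled means."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        draw = [rng.choice(gains) for _ in gains]
        means.append(sum(draw) / len(draw))
    return shortest_ci(means)

gains = [1.0, 1.2, 0.9, 1.1, 5.0, 1.3, 0.8, 1.0, 1.4, 1.1]  # toy skewed gains
lo, hi = bootstrap_gain_ci(gains)
```

The shortest interval is preferred over equal-tailed percentiles precisely because the skewed, heavy-weight contributions make the gain distribution non-normal.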

  7. The Performance of the Date-Randomization Test in Phylogenetic Analyses of Time-Structured Virus Data.

    PubMed

    Duchêne, Sebastián; Duchêne, David; Holmes, Edward C; Ho, Simon Y W

    2015-07-01

    Rates and timescales of viral evolution can be estimated using phylogenetic analyses of time-structured molecular sequences. This involves the use of molecular-clock methods, calibrated by the sampling times of the viral sequences. However, the spread of these sampling times is not always sufficient to allow the substitution rate to be estimated accurately. We conducted Bayesian phylogenetic analyses of simulated virus data to evaluate the performance of the date-randomization test, which is sometimes used to investigate whether time-structured data sets have temporal signal. An estimate of the substitution rate passes this test if its mean does not fall within the 95% credible intervals of rate estimates obtained using replicate data sets in which the sampling times have been randomized. We find that the test sometimes fails to detect rate estimates from data with no temporal signal. This error can be minimized by using a more conservative criterion, whereby the 95% credible interval of the estimate with correct sampling times should not overlap with those obtained with randomized sampling times. We also investigated the behavior of the test when the sampling times are not uniformly distributed throughout the tree, which sometimes occurs in empirical data sets. The test performs poorly in these circumstances, such that a modification to the randomization scheme is needed. Finally, we illustrate the behavior of the test in analyses of nucleotide sequences of cereal yellow dwarf virus. Our results validate the use of the date-randomization test and allow us to propose guidelines for interpretation of its results. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
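The two decision rules compared in the article reduce to simple interval checks. A sketch under the assumption that each analysis is summarized by a mean and a 95% credible interval (the toy rate values are illustrative):

```python
def passes_drt(true_mean, true_ci, randomized_cis, conservative=True):
    """Date-randomization test decision rule. Standard criterion: the estimate
    passes if its mean falls outside every randomized-dates credible interval.
    Conservative criterion (recommended here): the true-dates credible
    interval must not overlap any randomized interval."""
    if conservative:
        return all(true_ci[1] < lo or true_ci[0] > hi
                   for lo, hi in randomized_cis)
    return all(not (lo <= true_mean <= hi) for lo, hi in randomized_cis)

rand_cis = [(1e-4, 5e-4), (2e-4, 6e-4)]          # toy randomized-dates intervals
wide = passes_drt(8e-4, (4e-4, 1.2e-3), rand_cis, conservative=True)   # fails
std = passes_drt(8e-4, (4e-4, 1.2e-3), rand_cis, conservative=False)   # passes
```

The same estimate can pass the standard criterion yet fail the conservative one, which is exactly the gap that lets rates from signal-free data slip through the weaker test.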

  8. Interlaboratory Reproducibility and Proficiency Testing within the Human Papillomavirus Cervical Cancer Screening Program in Catalonia, Spain

    PubMed Central

    Ibáñez, R.; Félez-Sánchez, M.; Godínez, J. M.; Guardià, C.; Caballero, E.; Juve, R.; Combalia, N.; Bellosillo, B.; Cuevas, D.; Moreno-Crespi, J.; Pons, L.; Autonell, J.; Gutierrez, C.; Ordi, J.; de Sanjosé, S.

    2014-01-01

    In Catalonia, a screening protocol for cervical cancer, including human papillomavirus (HPV) DNA testing using the Digene Hybrid Capture 2 (HC2) assay, was implemented in 2006. In order to monitor interlaboratory reproducibility, a proficiency testing (PT) survey of the HPV samples was launched in 2008. The aim of this study was to explore the repeatability of the HC2 assay's performance. Participating laboratories provided 20 samples annually, 5 randomly chosen samples from each of the following relative light unit (RLU) intervals: <0.5, 0.5 to 0.99, 1 to 9.99, and ≥10. Kappa statistics were used to determine the agreement levels between the original and the PT readings. The nature and origin of the discrepant results were calculated by bootstrapping. A total of 946 specimens were retested. The kappa values were 0.91 for positive/negative categorical classification and 0.79 for the four RLU intervals studied. Sample retesting yielded systematically lower RLU values than the original test (P < 0.005), independently of the time elapsed between the two determinations (median, 53 days), possibly due to freeze-thaw cycles. The probability for a sample to show clinically discrepant results upon retesting was a function of the RLU value; samples with RLU values in the 0.5 to 5 interval showed 10.80% probability to yield discrepant results (95% confidence interval [CI], 7.86 to 14.33) compared to 0.85% probability for samples outside this interval (95% CI, 0.17 to 1.69). Globally, the HC2 assay shows high interlaboratory concordance. We have identified differential confidence thresholds and suggested the guidelines for interlaboratory PT in the future, as analytical quality assessment of HPV DNA detection remains a central component of the screening program for cervical cancer prevention. PMID:24574284
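The agreement statistic used above is Cohen's kappa, which discounts the agreement expected by chance. A minimal implementation (the toy paired calls are invented; the study's kappa of 0.91 comes from its 946 retested specimens):

```python
def cohen_kappa(a, b):
    """Unweighted Cohen's kappa for two paired categorical readings, e.g.
    original vs proficiency-testing HC2 positive/negative calls.
    Undefined (division by zero) when chance agreement is exactly 1."""
    n = len(a)
    cats = sorted(set(a) | set(b))
    po = sum(x == y for x, y in zip(a, b)) / n                # observed agreement
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

orig = ['pos', 'pos', 'neg', 'neg', 'pos', 'neg']
retest = ['pos', 'pos', 'neg', 'pos', 'pos', 'neg']
kappa = cohen_kappa(orig, retest)
```

The same formula applies to the four-category RLU-interval classification; only the category set changes.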

  9. 40 CFR 60.285a - Test methods and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... performance test. (b) The owner or operator must determine compliance with the filterable particulate matter... used to determine the filterable particulate matter concentration. The sampling time and sample volume... repeat performance tests for filterable particulate matter at intervals no longer than 5 years following...

  10. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

    Introduction: Quantitative imaging biomarkers (QIBs) are increasingly being used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods: A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results: Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision.
Conclusion: Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
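Why unmodeled fixed bias erodes nominal coverage can be checked with a few lines of Monte Carlo. The numbers below are toy values, not the article's simulation design:

```python
import random

def coverage(bias, precision_sd, n_trials=20000, seed=7):
    """Measure a fixed true value with additive fixed bias and Gaussian noise,
    build the usual 95% interval (+/- 1.96 * sd) around each measurement under
    a no-bias assumption, and count how often it covers the truth."""
    rng = random.Random(seed)
    truth = 100.0
    hits = 0
    for _ in range(n_trials):
        y = truth + bias + rng.gauss(0.0, precision_sd)
        if y - 1.96 * precision_sd <= truth <= y + 1.96 * precision_sd:
            hits += 1
    return hits / n_trials

cov_unbiased = coverage(bias=0.0, precision_sd=5.0)   # near the nominal 0.95
cov_biased = coverage(bias=10.0, precision_sd=5.0)    # well below nominal
```

A fixed bias of two noise standard deviations roughly halves the coverage, illustrating why the article bounds the tolerable fixed bias (<12% in its settings).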

  11. Pediatric-specific reference intervals in a nationally representative sample of Iranian children and adolescents: the CASPIAN-III study.

    PubMed

    Kelishadi, Roya; Marateb, Hamid Reza; Mansourian, Marjan; Ardalan, Gelayol; Heshmat, Ramin; Adeli, Khosrow

    2016-08-01

    This study aimed to determine for the first time the age- and gender-specific reference intervals for biomarkers of bone, metabolism, nutrition, and obesity in a nationally representative sample of Iranian children and adolescents. We assessed the data of blood samples obtained from healthy Iranian children and adolescents, aged 7 to 19 years. The reference intervals of glucose, lipid profile, liver enzymes, zinc, copper, chromium, magnesium, and 25-hydroxy vitamin D [25(OH)D] were determined according to the Clinical & Laboratory Standards Institute C28-A3 guidelines. The reference intervals were partitioned using the Harris-Boyd method according to age and gender. The study population consisted of 4800 school students (50% boys, mean age of 13.8 years). Twelve chemistry analytes were partitioned by age and gender, displaying the range of results between the 2.5th and 97.5th percentiles. Significant differences existed between boys and girls only at 18 to 19 years of age, for low-density lipoprotein cholesterol. 25(OH)D was the only analyte whose reference interval was similar across all age groups and both sexes. This study presented the first national database of reference intervals for a number of biochemical markers in Iranian children and adolescents. It is the first report of its kind from the Middle East and North Africa. The findings underscore the importance of providing reference intervals for different ethnicities and in various regions.

  12. Recent progresses in outcome-dependent sampling with failure time data.

    PubMed

    Ding, Jieli; Lu, Tsui-Shan; Cai, Jianwen; Zhou, Haibo

    2017-01-01

    An outcome-dependent sampling (ODS) design is a retrospective sampling scheme where one observes the primary exposure variables with a probability that depends on the observed value of the outcome variable. When the outcome of interest is failure time, the observed data are often censored. By allowing the selection of the supplemental samples to depend on whether the event of interest happens and by oversampling subjects from the most informative regions, an ODS design for time-to-event data can reduce the cost of the study and improve the efficiency. We review recent progress and advances in research on ODS designs with failure time data. This includes research on ODS-related designs such as the case-cohort design, generalized case-cohort design, stratified case-cohort design, general failure-time ODS design, length-biased sampling design and interval sampling design.

  14. An analysis of first-time blood donors return behaviour using regression models.

    PubMed

    Kheiri, S; Alibeigi, Z

    2015-08-01

    Blood products have a vital role in saving many patients' lives. The aim of this study was to analyse blood donor return behaviour. Using a cross-sectional follow-up design of 5-year duration, 864 first-time donors who had donated blood were selected using systematic sampling. Donor behaviour was analysed via three response variables, return to donation, frequency of return to donation and the time interval between donations, using logistic regression, negative binomial regression and Cox's shared frailty model for recurrent events, respectively. The successful return-to-donation rate was 49·1% and the deferral rate was 13·3%. There was a significant inverse relationship between the frequency of return to donation and the time interval between donations. Sex, body weight and job had an effect on return to donation; weight and frequency of donation during the first year had a direct effect on the total frequency of donations. Age, weight and job had a significant effect on the time intervals between donations. Ageing decreases the chances of return to donation and increases the time interval between donations. Body weight affects all three response variables: the higher the weight, the greater the chances of return to donation and the shorter the time interval between donations. There is a positive correlation between the frequency of donations in the first year and the total number of returns to donation. Also, the shorter the time interval between donations, the higher the frequency of donations. © 2015 British Blood Transfusion Society.

  15. High-frequency rock temperature data from hyper-arid desert environments in the Atacama and the Antarctic Dry Valleys and implications for rock weathering

    NASA Astrophysics Data System (ADS)

    McKay, Christopher P.; Molaro, Jamie L.; Marinova, Margarita M.

    2009-09-01

    In desert environments with low water and salt contents, rapid thermal variations may be an important source of rock weathering. We have obtained temperature measurements of the surface of rocks in hyper-arid hot and cold desert environments at a rate of 1/s over several days. The values of temperature change over 1-second intervals were similar in hot and cold deserts despite a 30 °C difference in absolute rock surface temperature. The average percentage of the time dT/dt > 2 °C/min was ~ 8 ± 3%, > 4 °C/min was 1 ± 0.9%, and > 8 °C/min was 0.02 ± 0.03%. The maximum change over a 1-second interval was ~ 10 °C/min. When sampled to simulate data taken over intervals longer than 1 s, we found a reduction in time spent above the 2 °C/min temperature gradient threshold. For 1-minute samples, the time spent above any given threshold was about two orders of magnitude lower than the corresponding value for 1-second sampling. We suggest that a rough measure of efficacy of weathering as a function of frequency is the product of the percentage of time spent above a given threshold value multiplied by the damping depth for the corresponding frequency. This product has a broad maximum for periods between 3 and 10 s.
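
    The damping-depth part of the efficacy metric above follows from the standard conduction solution for periodic surface heating. A minimal sketch in Python; the thermal diffusivity value and the exceedance fractions below are illustrative assumptions, not the paper's data:

```python
import math

def damping_depth(period_s, kappa=1.0e-6):
    """Thermal skin depth d = sqrt(kappa * P / pi) for a periodic surface
    forcing of period P (seconds); kappa is the thermal diffusivity in m^2/s,
    and ~1e-6 m^2/s is a typical rock value assumed here for illustration."""
    return math.sqrt(kappa * period_s / math.pi)

def weathering_efficacy(frac_above_threshold, period_s):
    """Rough efficacy measure from the abstract: the fraction of time spent
    above a dT/dt threshold multiplied by the damping depth for that period."""
    return frac_above_threshold * damping_depth(period_s)

# Hypothetical exceedance fractions at several forcing periods (seconds):
for period, frac in [(1, 0.0002), (3, 0.02), (10, 0.05), (60, 0.0008)]:
    print(period, weathering_efficacy(frac, period))
```

    Because the damping depth grows only as the square root of the period while the exceedance fraction falls steeply at short periods, the product peaks at intermediate periods, consistent with the broad maximum the authors report between 3 and 10 s.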

  16. Sex Differences in the Age of Peak Marathon Race Time.

    PubMed

    Nikolaidis, Pantelis T.; Rosemann, Thomas; Knechtle, Beat

    2018-04-30

    Recent studies showed that women were older than men when achieving their fastest marathon race time. These studies, however, investigated a limited sample of athletes. We investigated the age of peak marathon performance in a large sample of female and male marathon finishers by using data from all finishers. We analyzed the age of peak marathon performance in 1-year and 5-year age intervals of 451,637 runners (i.e. 168,702 women and 282,935 men) who finished the ‘New York City Marathon’ between 2006 and 2016, using analysis of variance and non-linear regression analysis. During these 11 years, men were faster and older than women, the participation of women increased disproportionately to that of men resulting in a decrease of the male-to-female ratio, and relatively more women participated in the younger age groups. Most women were in the age group 30-34 years and most men in the age group 40-44 years. The fastest race time was shown at 29.7 years in women and 34.8 years in men in the 1-year age intervals, and in age group 30-34 years in women and 35-39 years in men in the 5-year age intervals. In contrast to existing findings reporting a higher age of peak marathon performance in women compared to men, we found that women achieved their best marathon race time ~5 years earlier in life than men in both 1-year and 5-year age intervals. Female athletes and their coaches should plan to achieve their fastest marathon race time at the age of ~30 years.

  17. A general method to determine sampling windows for nonlinear mixed effects models with an application to population pharmacokinetic studies.

    PubMed

    Foo, Lee Kien; McGree, James; Duffull, Stephen

    2012-01-01

    Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.

  18. A 3.9 ps Time-Interval RMS Precision Time-to-Digital Converter Using a Dual-Sampling Method in an UltraScale FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Liu, Chong

    2016-10-01

    Field programmable gate arrays (FPGAs) manufactured with more advanced processing technology have faster carry chains and smaller delay elements, which are favorable for the design of tapped delay line (TDL)-style time-to-digital converters (TDCs) in FPGA. However, new challenges are posed in using them to implement TDCs with a high time precision. In this paper, we propose a bin realignment method and a dual-sampling method for TDC implementation in a Xilinx UltraScale FPGA. The former realigns the disordered time delay taps so that the TDC precision can approach the limit of its delay granularity, while the latter doubles the number of taps in the delay line so that the TDC precision beyond the cell delay limitation can be expected. Two TDC channels were implemented in a Kintex UltraScale FPGA, and the effectiveness of the new methods was evaluated. For fixed time intervals in the range from 0 to 440 ns, the average RMS precision measured by the two TDC channels reaches 5.8 ps using the bin realignment, and it further improves to 3.9 ps by using the dual-sampling method. The time precision has a 5.6% variation in the measured temperature range. Every part of the TDC, including dual-sampling, encoding, and on-line calibration, could run at a 500 MHz clock frequency. The system measurement dead time is only 4 ns.
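
    The abstract does not detail the on-line calibration, but TDL-style TDCs are conventionally calibrated with a code-density test: uniformly random hits are histogrammed over the raw codes, and each bin's width is its share of hits times the clock period. A hedged sketch of that standard step only (all numbers hypothetical, not the authors' implementation):

```python
def code_density_calibration(hist, clock_period_ps):
    """Given a histogram of raw TDC codes from uniformly random hits,
    return (bin_widths, bin_centers) in picoseconds: bin k's width is its
    share of hits times the clock period, and the running-edge centers give
    the calibrated time value for each code."""
    total = sum(hist)
    widths = [h / total * clock_period_ps for h in hist]
    centers = []
    edge = 0.0
    for w in widths:
        centers.append(edge + w / 2.0)
        edge += w
    return widths, centers

# Hypothetical 8-bin delay line within a 2000 ps clock period (500 MHz):
hist = [120, 80, 100, 140, 90, 110, 130, 230]
widths, centers = code_density_calibration(hist, 2000.0)
print(widths[0], centers[0])
```

    The same histogram arithmetic underlies bin realignment: once per-bin widths are known, disordered or zero-width taps can be detected and reordered before encoding.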

  19. Tooth enamel mineralization in ungulates: implications for recovering a primary isotopic time-series

    NASA Astrophysics Data System (ADS)

    Passey, Benjamin H.; Cerling, Thure E.

    2002-09-01

    Temporal changes in the carbon and oxygen isotopic composition of an animal are an environmental and behavioral input signal that is recorded into the enamel of developing teeth. In this paper, we evaluate changes in phosphorus content and density along the axial lengths of three developing ungulate teeth to illustrate the protracted nature of mineral accumulation in a volume of developing enamel. The least mature enamel in these teeth contains by volume about 25% of the mineral mass of mature enamel, and the remaining 75% of the mineral accumulates during maturation. Using data from one of these teeth (a Hippopotamus amphibius canine), we develop a model for teeth growing at a constant rate that describes how an input signal is recorded into tooth enamel. The model accounts for both the temporal and spatial patterns of amelogenesis (enamel formation) and the sampling geometry. The model shows that input signal attenuation occurs as a result of time-averaging during amelogenesis when the maturation interval is long compared to the duration of features in the input signal. Sampling does not induce significant attenuation, provided that the sampling interval is several times shorter than the maturation interval. We present a detailed δ13C and δ18O record for the H. amphibius canine and suggest possible input isotope signals that may have given rise to the measured isotope signal.
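
    The attenuation mechanism described here behaves like a moving-average filter: averaging an input cycle over a maturation window that approaches the cycle's period drives its recorded amplitude toward zero. A small numerical sketch with illustrative values (not the paper's model):

```python
import math

def attenuated_amplitude(period, window, n=4000):
    """Amplitude of a unit-amplitude sine of the given period after a
    moving average over 'window' (the maturation interval), both in the
    same time units. Analytically this equals |sin(pi*w/P) / (pi*w/P)|;
    here it is measured numerically from a finely sampled signal."""
    dt = period / 1000.0
    half = int(window / dt / 2)
    samples = [math.sin(2 * math.pi * i * dt / period) for i in range(n)]
    out = []
    for i in range(half, n - half):
        seg = samples[i - half:i + half + 1]
        out.append(sum(seg) / len(seg))
    return (max(out) - min(out)) / 2.0

# A 1-year input cycle averaged over a 0.1-year maturation window is barely
# damped; averaging over a full year nearly erases it.
print(attenuated_amplitude(1.0, 0.1))
print(attenuated_amplitude(1.0, 1.0))
```

    This is why sampling matters much less than maturation in the authors' model: a sampling interval several times shorter than the maturation window adds almost no further smoothing on top of what maturation has already done.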

  20. Adrenal Hormones in Common Bottlenose Dolphins (Tursiops truncatus): Influential Factors and Reference Intervals.

    PubMed

    Hart, Leslie B; Wells, Randall S; Kellar, Nick; Balmer, Brian C; Hohn, Aleta A; Lamb, Stephen V; Rowles, Teri; Zolman, Eric S; Schwacke, Lori H

    2015-01-01

    Inshore common bottlenose dolphins (Tursiops truncatus) are exposed to a broad spectrum of natural and anthropogenic stressors. In response to these stressors, the mammalian adrenal gland releases hormones such as cortisol and aldosterone to maintain physiological and biochemical homeostasis. Consequently, adrenal gland dysfunction results in disruption of hormone secretion and an inappropriate stress response. Our objective herein was to develop diagnostic reference intervals (RIs) for adrenal hormones commonly associated with the stress response (i.e., cortisol, aldosterone) that account for the influence of intrinsic (e.g., age, sex) and extrinsic (e.g., time) factors. Ultimately, these reference intervals will be used to gauge an individual's response to chase-capture stress and could indicate adrenal abnormalities. Linear mixed models (LMMs) were used to evaluate demographic and sampling factors contributing to differences in serum cortisol and aldosterone concentrations among bottlenose dolphins sampled in Sarasota Bay, Florida, USA (2000-2012). Serum cortisol concentrations were significantly associated with elapsed time from initial stimulation to sample collection (p<0.05), and RIs were constructed using nonparametric methods based on elapsed sampling time for dolphins sampled in less than 30 minutes following net deployment (95% RI: 0.91-4.21 µg/dL) and following biological sampling aboard a research vessel (95% RI: 2.32-6.68 µg/dL). To examine the applicability of the pre-sampling cortisol RI across multiple estuarine stocks, data from three additional southeast U.S. sites were compared, revealing that all of the dolphins sampled from the other sites (N = 34) had cortisol concentrations within the 95th percentile RI. Significant associations between serum concentrations of aldosterone and variables reported in previous studies (i.e., age, elapsed sampling time) were not observed in the current project (p>0.05). Also, approximately 16% of Sarasota Bay bottlenose dolphin aldosterone concentrations were below the assay's detection limit (11 pg/mL), thus hindering the ability to derive 95th percentile RIs. Serum aldosterone concentrations from animals sampled at the three additional sites were compared to the detection limit, and the proportion of animals with low aldosterone concentrations was not significantly different from the expected prevalence of 16%. Although this study relied upon long-term, free-ranging bottlenose dolphin health data from a single site, these objective RIs can be used for future evaluation of adrenal function among individuals sampled during capture-release health assessments.

  1. Event- and interval-based measurement of stuttering: a review.

    PubMed

    Valente, Ana Rita S; Jesus, Luis M T; Hall, Andreia; Leahy, Margaret

    2015-01-01

    Event- and interval-based measurements are two different ways of computing the frequency of stuttering. Interval-based methodology emerged as an alternative measure to overcome problems with reproducibility in the event-based methodology. No review has examined the effect of methodological factors on interval-based absolute reliability data, or computed the agreement between the two methodologies in terms of inter-judge and intra-judge reliability and accuracy (i.e., correspondence between raters' scores and an established criterion). Our aims were to review the reproducibility of event-based and time-interval measurement, to verify the effect of methodological factors (training, experience, interval duration, sample presentation order and judgment conditions) on agreement of time-interval measurement, and to determine whether the agreement between the two methodologies can be quantified. The first two authors searched for articles on ERIC, MEDLINE, PubMed, B-on, CENTRAL and Dissertation Abstracts during January-February 2013 and retrieved 495 articles. Forty-eight articles were selected for review. Content tables were constructed with the main findings. Articles related to event-based measurements revealed inter- and intra-judge values greater than 0.70 and agreement percentages beyond 80%. The articles related to time-interval measures revealed that, in general, judges with more experience with stuttering presented significantly higher levels of intra- and inter-judge agreement. Inter- and intra-judge values were beyond the references for high reproducibility values for both methodologies. Accuracy (regarding the closeness of raters' judgements to an established criterion) and intra- and inter-judge agreement were higher for trained groups than for non-trained groups. Sample presentation order and audio/video conditions did not result in differences in inter- or intra-judge results.
    An interval duration of 5 s appears to yield acceptable agreement. Explanations for the high reproducibility values, as well as the choice of parameters used to report those data, are discussed. Both interval- and event-based methodologies used trained or experienced judges for inter- and intra-judge determination, and data were beyond the references for good reproducibility values. Inter- and intra-judge values were reported on different metric scales across event- and interval-based studies, making it unfeasible to quantify the agreement between the two methods. © 2014 Royal College of Speech and Language Therapists.

  2. Microbial Monitoring of Common Opportunistic Pathogens by Comparing Multiple Real-time PCR Platforms for Potential Space Applications

    NASA Technical Reports Server (NTRS)

    Roman, Monserrate C.; Jones, Kathy U.; Oubre, Cherie M.; Castro, Victoria; Ott, Mark C.; Birmele, Michele; Venkateswaran, Kasthuri J.; Vaishampayan, Parag A.

    2013-01-01

    Current methods for microbial detection: a) labor- and time-intensive cultivation-based approaches that can fail to detect or characterize all cells present; b) require collection of samples on orbit and transportation back to the ground for analysis. Disadvantages of current detection methods: a) unable to perform quick and reliable detection on orbit; b) lengthy sampling intervals; c) no microbe identification.

  3. Study design and sampling intensity for demographic analyses of bear populations

    USGS Publications Warehouse

    Harris, R.B.; Schwartz, C.C.; Mace, R.D.; Haroldson, M.A.

    2011-01-01

    The rate of population change through time (λ) is a fundamental element of a wildlife population's conservation status, yet estimating it with acceptable precision for bears is difficult. For studies that follow known (usually marked) bears, λ can be estimated during some defined time by applying either life-table or matrix projection methods to estimates of individual vital rates. Usually, however, confidence intervals surrounding the estimate are broader than one would like. Using an estimator suggested by Doak et al. (2005), we explored the precision to be expected in λ from demographic analyses of typical grizzly (Ursus arctos) and American black (U. americanus) bear data sets. We also evaluated some trade-offs among vital rates in sampling strategies. Confidence intervals around λ were more sensitive to adding to the duration of a short (e.g., 3 yrs) than a long (e.g., 10 yrs) study, and more sensitive to adding additional bears to studies with small (e.g., 10 adult females/yr) than large (e.g., 30 adult females/yr) sample sizes. Confidence intervals of λ projected using process-only variance of vital rates were only slightly smaller than those projected using total variances of vital rates. Under sampling constraints typical of most bear studies, it may be more efficient to invest additional resources into monitoring recruitment and juvenile survival rates of females already a part of the study than to simply increase the sample size of study females. © 2011 International Association for Bear Research and Management.
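
    In the matrix projection approach mentioned here, the rate of population change is the dominant eigenvalue of the projection matrix, which can be found by power iteration. A minimal sketch with hypothetical vital rates (not the study's data):

```python
def dominant_eigenvalue(A, iters=500):
    """Power iteration: returns the dominant eigenvalue (lambda) of a
    non-negative, primitive projection matrix A (list of rows)."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Hypothetical 3-stage (cub, subadult, adult) female-only projection matrix:
# top-right entry is adult fecundity, the sub-diagonal holds stage survival,
# and the bottom-right entry is adult survival with stasis.
A = [[0.0, 0.0, 0.30],
     [0.60, 0.0, 0.0],
     [0.0, 0.80, 0.92]]
lam = dominant_eigenvalue(A)
print(round(lam, 3))  # lambda > 1 implies a growing population
```

    In practice the confidence interval around this point estimate is obtained by resampling the vital rates (e.g., parametric bootstrap) and recomputing the eigenvalue, which is where the sample-size and study-duration trade-offs in the abstract enter.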

  4. Factors affecting blood sample haemolysis: a cross-sectional study.

    PubMed

    Barnard, Ed B G; Potter, David L; Ayling, Ruth M; Higginson, Ian; Bailey, Andrew G; Smith, Jason E

    2016-04-01

    To determine the effect of blood sampling through an intravenous catheter compared with a needle in Emergency Department blood sampling. We undertook a prospective, cross-sectional study in a UK university teaching hospital Emergency Department. A convenience sample of 985 patients who required blood sampling via venepuncture was collected. A total of 844 complete sets of data were analysed. The median age was 63 years, and 57% of patients were male. The primary outcome measure was the incidence of haemolysis in blood samples obtained via a needle compared with samples obtained via an intravenous catheter. Secondary outcome measures defined the effect on sample haemolysis of the side of the patient the sample was obtained from, the anatomical location of sampling, the perceived difficulty in obtaining the sample, the order of sample tubes collected, estimated tourniquet time and bench time. Data were analysed with logistic regression, and expressed as odds ratios (95% confidence intervals; P-values). Blood samples obtained through an intravenous catheter were more likely to be haemolysed than those obtained via a needle, odds ratio 5.63 (95% confidence interval 2.49-12.73; P<0.001). Blood sampling via an intravenous catheter was significantly associated with an increase in the likelihood of sample haemolysis compared with sampling with a needle. Wherever practicable, blood samples should be obtained via a needle in preference to an intravenous catheter. Future research should include both an economic evaluation, and staff and patient satisfaction of separating blood sampling and intravenous catheter placement.
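
    The odds ratio and confidence interval reported here are standard outputs of logistic regression on a binary exposure; for a single 2x2 table the Wald interval can be computed directly. A sketch with hypothetical counts chosen only to illustrate the arithmetic (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = haemolysed / catheter,  b = not haemolysed / catheter,
    c = haemolysed / needle,    d = not haemolysed / needle."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (NOT the study's data), for illustration only:
or_, lo, hi = odds_ratio_ci(40, 360, 8, 436)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

    A confidence interval whose lower bound stays well above 1, as in the study's reported 2.49-12.73, is what justifies the conclusion that catheter sampling raises haemolysis risk.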

  5. Seasonal and high-resolution variability in hydrochemistry of the Andes-Amazon

    NASA Astrophysics Data System (ADS)

    Burt, E.; West, A. J.

    2017-12-01

    Stream hydrochemistry acts as a record of integrated catchment processes such as the amount of time it takes precipitation to flow through the subsurface and become streamflow (water transit times), water-rock interaction and biogeochemical cycling. Although it is understood that sampling interval affects observed patterns in hydrochemistry, most studies collect samples on a weekly, bi-weekly or monthly schedule due to lack of resources or the difficulty of maintaining automated sampling devices. Here, we attempt to combine information from two sampling time scales, comparing a year-long hydrochemical time series to data from a recent sub-daily sampling campaign. Starting in April 2016, river, soil and rain waters have been collected every two weeks at five small catchments spanning the tropical Andes and Amazon - a natural laboratory for its gradients in topography, erosion rates, precipitation, temperature and flora. Between January and March, 2017, we conducted high frequency sampling for approximately one week at each catchment, sampling at least every four hours including overnight. We will constrain young water fractions (Kirchner, 2016) and storm water fluxes for the experimental catchments using stable isotopes of water as conservative tracers. Major element data will provide the opportunity to make initial constraints on geochemical and hydrologic coupling. Preliminary results suggest that in the Amazon, hydrochemistry patterns are dependent on sampling frequency: the seasonal cycle in stable isotopes of water is highly damped, while the high resolution sampling displays large variability. This suggests that a two-week sampling interval is not frequent enough to capture rapid transport of water, perhaps through preferential flow networks. In the Andes, stable isotopes of water are highly damped in both the seasonal and high resolution cycle, suggesting that the catchment behaves as a "well-mixed" system.
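
    The young water fraction referenced here (Kirchner, 2016) is commonly estimated as the ratio of the seasonal tracer-cycle amplitude in streamflow to that in precipitation, which is exactly the damping the authors describe. A sketch with synthetic data (all values hypothetical); the Fourier projection is exact only because the samples are evenly spaced over whole years:

```python
import math

def seasonal_amplitude(times_yr, values):
    """Amplitude of the annual cycle in a tracer time series, with times in
    years. Assumes samples evenly spaced over a whole number of years, so
    projecting onto the annual sine/cosine pair recovers the amplitude."""
    n = len(values)
    a = 2.0 / n * sum(y * math.cos(2 * math.pi * t) for t, y in zip(times_yr, values))
    b = 2.0 / n * sum(y * math.sin(2 * math.pi * t) for t, y in zip(times_yr, values))
    return math.hypot(a, b)

# Synthetic (hypothetical) d18O series: precipitation with a 3 permil
# seasonal amplitude, stream damped to 0.9 permil by mixing in storage.
times = [i / 52.0 for i in range(104)]  # weekly samples over 2 years
precip = [-10 + 3.0 * math.sin(2 * math.pi * t) for t in times]
stream = [-10 + 0.9 * math.sin(2 * math.pi * t + 0.6) for t in times]
fyw = seasonal_amplitude(times, stream) / seasonal_amplitude(times, precip)
print(round(fyw, 2))  # amplitude ratio; ~0.3 suggests ~30% young water
```

    The abstract's caution about sampling interval applies here too: if rapid preferential-flow variability aliases into a sparse series, the fitted stream amplitude, and hence the young water fraction, will be biased.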

  6. Confidence intervals for the population mean tailored to small sample sizes, with applications to survey sampling.

    PubMed

    Rosenblum, Michael A; Laan, Mark J van der

    2009-01-07

    The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
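
    Bernstein's inequality, named in the abstract as one source of finite-sample tail bounds, yields a confidence half-width by solving the bound for the deviation at a chosen significance level. A sketch for variables known to lie in [0, 1], where the worst-case bounds b <= 1 and variance <= 1/4 always hold (the sample size below is an illustrative assumption):

```python
import math

def bernstein_halfwidth(n, b, var, alpha=0.05):
    """Half-width t with P(|sample mean - mu| >= t) <= alpha by Bernstein's
    inequality, for i.i.d. variables with |X - mu| <= b and variance var.
    Solves n*t^2 = ln(2/alpha) * (2*var + (2/3)*b*t) for t (a quadratic)."""
    L = math.log(2.0 / alpha)
    return ((2 * b * L / 3) + math.sqrt((2 * b * L / 3) ** 2 + 8 * n * var * L)) / (2 * n)

def normal_halfwidth(n, var, z=1.96):
    """Usual CLT-based half-width, for comparison."""
    return z * math.sqrt(var / n)

# For data in [0, 1] the Bernstein interval has guaranteed coverage at any
# sample size, at the price of being wider than the CLT interval -- the
# trade-off the abstract describes.
n = 30
print(bernstein_halfwidth(n, b=1.0, var=0.25))
print(normal_halfwidth(n, var=0.25))
```

    The authors' narrower intervals come from sharper constructions than this worst-case bound, but the comparison above shows why a guaranteed-coverage interval is "often quite wide" at small n.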

  7. Fourier-transform infrared derivative spectroscopy with an improved signal-to-noise ratio.

    PubMed

    Fetterman, M R

    2005-09-01

    Infrared derivative spectroscopy is a useful technique for finding peaks hidden in broad spectral features. A data acquisition technique is shown that will improve the signal-to-noise ratio (SNR) of Fourier-transform infrared (FTIR) derivative spectroscopy. Typically, in a FTIR measurement one samples each point for the same time interval. The effect of using a graded time interval is studied. The simulations presented show that the SNR of first-derivative FTIR spectroscopy will improve by 15% and that the SNR of second-derivative FTIR will improve by 34%.

  8. Modeling stream fish distributions using interval-censored detection times.

    PubMed

    Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro

    2016-08-01

    Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. 
This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.
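
    For the exponential case, the interval-censored likelihood contribution of a detection in (t_lo, t_hi] is exp(-rate*t_lo) - exp(-rate*t_hi), with a right-censored term for surveys yielding no detection. A minimal maximum-likelihood sketch with hypothetical data; the authors' Bayesian hierarchical model with covariates is far richer than this:

```python
import math

def neg_log_lik(rate, detections, nondetection_times):
    """Negative log-likelihood for an exponential time-to-detection model.
    'detections' holds (lo, hi) interval bounds for first detection;
    'nondetection_times' holds total search times with no detection
    (right-censored observations)."""
    ll = 0.0
    for lo, hi in detections:
        ll += math.log(math.exp(-rate * lo) - math.exp(-rate * hi))
    for t in nondetection_times:
        ll -= rate * t
    return -ll

def fit_rate(detections, nondetection_times, lo=1e-4, hi=10.0, iters=200):
    """Golden-section search for the MLE detection rate (the interval-
    censored exponential log-likelihood is concave, hence unimodal)."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if neg_log_lik(c, detections, nondetection_times) < \
           neg_log_lik(d, detections, nondetection_times):
            b = d
        else:
            a = c
    return (a + b) / 2

# Hypothetical data: detections recorded only as "within minute 0-1, 1-2, ..."
dets = [(0, 1), (0, 1), (1, 2), (2, 3), (0, 1), (3, 4)]
no_dets = [5.0, 5.0]  # two 5-minute surveys with no detection
print(round(fit_rate(dets, no_dets), 3))
```

    Interval censoring costs surprisingly little information when intervals are short relative to 1/rate, which is consistent with the unbiased estimates the authors report from their simulations.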

  9. 'Aussie normals': an a priori study to develop clinical chemistry reference intervals in a healthy Australian population.

    PubMed

    Koerbin, G; Cavanaugh, J A; Potter, J M; Abhayaratna, W P; West, N P; Glasgow, N; Hawkins, C; Armbruster, D; Oakman, C; Hickman, P E

    2015-02-01

    Development of reference intervals is difficult, time consuming, expensive and beyond the scope of most laboratories. The Aussie Normals study is a direct a priori study to determine reference intervals in healthy Australian adults. All volunteers completed a health and lifestyle questionnaire, and exclusion was based on conditions such as pregnancy, diabetes, renal or cardiovascular disease. Up to 91 biochemical analyses were undertaken on a variety of analytical platforms using serum samples collected from 1856 volunteers. We report our findings for 40 of these analytes and two calculated parameters performed on the Abbott ARCHITECT ci8200/ci16200 analysers. Not all samples were analysed for all assays due to volume requirements or assay/instrument availability. Results with elevated interference indices, and those deemed unsuitable after clinical evaluation, were removed from the database. Reference intervals were partitioned using the method of Harris and Boyd into three scenarios: combined gender, males and females, and age and gender. We have performed a detailed reference interval study on a healthy Australian population, considering the effects of sex, age and body mass. These reference intervals may be adapted to other manufacturers' analytical methods using method transference.
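
    The nonparametric percentile method and the Harris-Boyd partitioning statistic used in studies like this can be sketched as follows. The data are hypothetical, and the comparison of z against the Harris-Boyd critical value specified in the CLSI guideline is deliberately left out of the code:

```python
import math

def nonparametric_ri(values):
    """2.5th-97.5th percentile reference interval (a simple nearest-rank
    variant of the nonparametric method described in CLSI C28-A3)."""
    v = sorted(values)
    n = len(v)
    lo = v[max(0, math.ceil(0.025 * n) - 1)]
    hi = v[min(n - 1, math.ceil(0.975 * n) - 1)]
    return lo, hi

def harris_boyd_z(m1, s1, n1, m2, s2, n2):
    """Harris-Boyd z statistic for deciding whether two subgroups (e.g.,
    males vs. females, or adjacent age bands) warrant separate reference
    intervals; large z argues for partitioning."""
    return abs(m1 - m2) / math.sqrt(s1 * s1 / n1 + s2 * s2 / n2)

# Hypothetical analyte results for one subgroup (illustrative numbers):
males = [4.0 + 0.01 * i for i in range(200)]
lo, hi = nonparametric_ri(males)
print(lo, hi)
print(harris_boyd_z(5.0, 0.5, 200, 5.2, 0.5, 200))
```

    Nearest-rank percentiles need roughly 120 or more subjects per partition to be stable, which is why a priori studies of this kind recruit on the scale of the 1856 volunteers reported here.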

  10. Usability of Immunohistochemistry in Forensic Samples With Varying Decomposition.

    PubMed

    Lesnikova, Iana; Schreckenbach, Marc Niclas; Kristensen, Maria Pihlmann; Papanikolaou, Liv Lindegaard; Hamilton-Dutoit, Stephen

    2018-05-24

    Immunohistochemistry (IHC) is an important diagnostic tool in anatomic and surgical pathology but is used less frequently in forensic pathology. Degradation of tissue because of postmortem decomposition is believed to be a major limiting factor, although it is unclear what impact such degradation actually has on IHC staining validity. This study included 120 forensic autopsy samples of liver, lung, and brain tissues obtained for diagnostic purposes. The time from death to autopsy ranged from 1 to more than 14 days. Samples were prepared using the tissue microarray technique. The antibodies chosen for the study included KL1 (for staining bile duct epithelium), S100 (for staining glial cells and myelin), vimentin (for endothelial cells in cerebral blood vessels), and CD45 (for pulmonary lymphocytes). Slides were evaluated by light microscopy. Immunohistochemistry reactions were scored according to a system based on the extent and intensity of the positive stain. An overall correlation between the postmortem interval and the IHC score was found for all tissue samples. Samples from decedents with a postmortem interval of 1 to 3 days showed positive staining with all antibodies, whereas samples from decedents with a longer postmortem interval showed decreased staining rates. Our results suggest that IHC analysis can be successfully used for postmortem diagnosis in a range of autopsy samples showing lesser degrees of decomposition.

  11. Validation of mercury tip-switch and accelerometer activity sensors for identifying resting and active behavior in bears

    USGS Publications Warehouse

    Ware, Jasmine; Rode, Karyn D.; Pagano, Anthony M.; Bromaghin, Jeffrey F.; Robbins, Charles T.; Erlenbach, Joy; Jensen, Shannon; Cutting, Amy; Nicassio-Hiskey, Nicole; Hash, Amy; Owen, Megan A.; Jansen, Heiko

    2015-01-01

    Activity sensors are often included in wildlife transmitters and can provide information on the behavior and activity patterns of animals remotely. However, interpreting activity-sensor data relative to animal behavior can be difficult if animals cannot be continuously observed. In this study, we examined the performance of a mercury tip-switch and a tri-axial accelerometer housed in collars to determine whether sensor data can be accurately classified as resting and active behaviors and whether data are comparable for the 2 sensor types. Five captive bears (3 polar [Ursus maritimus] and 2 brown [U. arctos horribilis]) were fitted with a collar specially designed to internally house the sensors. The bears’ behaviors were recorded, classified, and then compared with sensor readings. A separate tri-axial accelerometer that sampled continuously at a higher frequency and provided raw acceleration values from 3 axes was also mounted on the collar to compare with the lower resolution sensors. Both accelerometers more accurately identified resting and active behaviors at time intervals ranging from 1 minute to 1 hour (≥91.1% accuracy) compared with the mercury tip-switch (range = 75.5–86.3%). However, mercury tip-switch accuracy improved when sampled at longer intervals (e.g., 30–60 min). Data from the lower resolution accelerometer, but not the mercury tip-switch, accurately predicted the percentage of time spent resting during an hour. Although the number of bears available for this study was small, our results suggest that these activity sensors can remotely identify resting versus active behaviors across most time intervals. We recommend that investigators consider both study objectives and the variation in accuracy of classifying resting and active behaviors reported here when determining sampling interval.

  12. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    NASA Astrophysics Data System (ADS)

    Saha, Debasish; Kemanian, Armen R.; Rau, Benjamin M.; Adler, Paul R.; Montes, Felipe

    2017-04-01

    Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (corn-soybean rotation), College Station, TX (corn-vetch rotation), Fort Collins, CO (irrigated corn), and Pullman, WA (winter wheat), representing diverse agro-ecoregions of the United States. Fertilization source, rate, and timing were site-specific. These simulated fluxes served as surrogates for daily measurements in the analysis. We "sampled" the fluxes using a fixed interval (1-32 days) or a rule-based (decision tree-based) sampling method. Two types of decision trees were built: a high-input tree (HI) that included soil inorganic nitrogen (SIN) as a predictor variable, and a low-input tree (LI) that excluded SIN. Other predictor variables were identified with Random Forest. The decision trees were inverted to be used as rules for sampling a representative number of members from each terminal node. The uncertainty of the annual N2O flux estimation increased with the length of the fixed sampling interval. A 4- and 8-day fixed sampling interval was required at College Station and Ames, respectively, to yield ±20% accuracy in the flux estimate; a 12-day interval rendered the same accuracy at Fort Collins and Pullman. Both the HI and the LI rule-based methods provided the same accuracy as the fixed-interval method with up to a 60% reduction in sampling events, particularly at locations with greater temporal flux variability. For instance, at Ames, the HI rule-based and the fixed-interval methods required 16 and 91 sampling events, respectively, to achieve the same absolute bias of 0.2 kg N ha-1 yr-1 in estimating cumulative N2O flux. 
These results suggest that using simulation models along with decision trees can reduce the cost and improve the accuracy of the estimations of cumulative N2O fluxes using the discrete chamber-based method.
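    The fixed-interval part of this design can be sketched numerically. The flux series below (baseline level, pulse dates, and magnitudes) is invented for illustration and is not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily N2O flux series (kg N/ha/day): low lognormal baseline
# plus a few fertilization/rain-driven emission pulses.
days = 365
baseline = rng.lognormal(mean=-6.0, sigma=0.5, size=days)
pulses = np.zeros(days)
pulses[[120, 121, 122, 180, 181]] = [0.05, 0.04, 0.02, 0.03, 0.02]
flux = baseline + pulses

true_annual = flux.sum()

def fixed_interval_estimate(flux, interval, start=0):
    """Annual flux estimated from samples taken every `interval` days,
    scaling the mean of the sampled days up to the full year."""
    sampled = flux[start::interval]
    return sampled.mean() * len(flux)

for interval in (1, 4, 8, 16, 32):
    # Average the relative error magnitude over all possible start days.
    errs = [abs(fixed_interval_estimate(flux, interval, s) - true_annual) / true_annual
            for s in range(interval)]
    print(f"interval {interval:2d} d: mean |error| = {100 * np.mean(errs):5.1f}%")
```

As in the abstract, the error of the estimate grows as the fixed interval lengthens, because infrequent sampling is more likely to miss or over-weight emission pulses.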

  13. Effect of temporal sampling and timing for soil moisture measurements at field scale

    NASA Astrophysics Data System (ADS)

    Snapir, B.; Hobbs, S.

    2012-04-01

    Estimating soil moisture at field scale is valuable for various applications such as irrigation scheduling in cultivated watersheds, flood and drought prediction, waterborne disease spread assessment, or even determination of mobility with lightweight vehicles. Synthetic aperture radar on satellites in low Earth orbit can provide fine-resolution images with a repeat time of a few days. For an Earth observing satellite, the choice of the orbit is driven in particular by the frequency of measurements required to meet a certain accuracy in retrieving the parameters of interest. For a given target, having only one image every week may not be sufficient to capture the full dynamic range of soil moisture - soil moisture can change significantly within a day when rainfall occurs. Hence, this study focuses on the effect of temporal sampling and timing of measurements in terms of error on the retrieved signal. All the analyses are based on in situ measurements of soil moisture (acquired every 30 min) from the OzNet Hydrological Monitoring Network in Australia for different fields over several years. The first study concerns sampling frequency. Measurements at different frequencies were simulated by sub-sampling the original data. Linear interpolation was used to estimate the missing intermediate values, and then this time series was compared to the original. The difference between these two signals is computed for different levels of sub-sampling. Results show that the error increases linearly when the interval is less than 1 day. For intervals longer than a day, a sinusoidal component appears on top of the linear growth due to the diurnal variation of surface soil moisture. Thus, for example, the error with measurements every 4.5 days can be slightly less than the error with measurements every 2 days. Next, for a given sampling interval, this study evaluated the effect of the time during the day at which measurements are made. 
Of course, when measurements are very frequent, the time of acquisition does not matter, but when few measurements are available (sampling interval > 1 day), the time of acquisition can be important. It is shown that with daily measurements the error can double depending on the time of acquisition. This result is very sensitive to the phase of the sinusoidal variation of soil moisture. For example, in autumn for a given field with soil moisture ranging from 7.08% to 11.44% (mean and standard deviation being respectively 8.68% and 0.74%), daily measurements at 2 pm lead to a mean error of 0.47% v/v, while daily measurements at 9 am/pm produce a mean error of 0.24% v/v. The minimum of the sinusoid occurs every afternoon around 2 pm; after interpolation, measurements acquired at this time underestimate soil moisture, whereas measurements around 9 am/pm correspond to nodes of the sinusoid and hence represent the average soil moisture. These results concerning the frequency and the timing of measurements can potentially drive the schedule of satellite image acquisition over some fields.
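    The sub-sampling experiment described above can be reproduced in outline. The soil-moisture series below is synthetic (the drying trend, diurnal sinusoid, and rain-event dates are assumptions for illustration, not OzNet data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 30-min soil moisture record (% v/v) over 60 days:
# slow drying trend + diurnal sinusoid + rain jumps + noise.
t = np.arange(0, 60, 1 / 48.0)             # days, 48 samples per day
moisture = (20.0 - 0.05 * t
            + 0.5 * np.sin(2 * np.pi * t)  # diurnal cycle
            + 0.1 * rng.standard_normal(t.size))
for day in (10, 35):                       # rain events with slow recession
    moisture[t >= day] += 3.0 * np.exp(-(t[t >= day] - day) / 5.0)

def subsample_error(t, x, interval_days):
    """RMS error after keeping one sample every `interval_days` and
    linearly interpolating back to the original 30-min time base."""
    step = int(round(interval_days * 48))
    idx = np.arange(0, t.size, step)
    x_interp = np.interp(t, t[idx], x[idx])
    return float(np.sqrt(np.mean((x_interp - x) ** 2)))

for interval in (0.5, 1, 2, 4.5, 7):
    print(f"{interval:4.1f} d sampling -> RMS error "
          f"{subsample_error(t, moisture, interval):.3f} % v/v")
```

Because of the diurnal sinusoid, the error does not grow monotonically with the interval, which is the effect the abstract reports (4.5-day sampling can beat 2-day sampling).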

  14. Rethinking Timing of First Sex and Delinquency

    ERIC Educational Resources Information Center

    Harden, K. Paige; Mendle, Jane; Hill, Jennifer E.; Turkheimer, Eric; Emery, Robert E.

    2008-01-01

    The relation between timing of first sex and later delinquency was examined using a genetically informed sample of 534 same-sex twin pairs from the National Longitudinal Study of Adolescent Health, who were assessed at three time points over a 7-year interval. Genetic and environmental differences between families were found to account for the…

  15. Gas, water, and oil production from Wattenberg field in the Denver Basin, Colorado

    USGS Publications Warehouse

    Nelson, Philip H.; Santus, Stephen L.

    2011-01-01

    Gas, oil, and water production data were compiled from selected wells in two tight gas reservoirs of Cretaceous age in the Wattenberg field in the Denver Basin of Colorado: the Codell-Niobrara interval, comprising the Codell Sandstone Member of the Carlile Shale and the Niobrara Formation, and the Dakota J interval, comprising mostly the Muddy (J) Sandstone of the Dakota Group. Production from each well is represented by two samples spaced five years apart, the first sample typically taken two years after production commenced, which generally was in the 1990s. For each producing interval, summary diagrams and tables of oil-versus-gas production and water-versus-gas production are shown with fluid-production rates, the change in production over five years, the water-gas and oil-gas ratios, and the fluid type. These diagrams and tables permit well-to-well and field-to-field comparisons. Fields producing water at low rates (water dissolved in gas in the reservoir) can be distinguished from fields producing water at moderate or high rates, and the water-gas ratios are quantified. The Dakota J interval produces gas on a per-well basis at roughly three times the rate of the Codell-Niobrara interval. After five years of production, gas data from the second samples show that both intervals produce gas, on average, at about one-half the rate of the first sample. Oil-gas ratios in the Codell-Niobrara interval are characteristic of a retrograde gas and are considerably higher than oil-gas ratios in the Dakota J interval, which are characteristic of a wet gas. Water production from both intervals is low, and records in many wells are discontinuous, particularly in the Codell-Niobrara interval. Water-gas ratios are broadly variable, with some of the variability possibly due to the difficulty of measuring small production rates. 
Most wells for which water is reported have water-gas ratios exceeding the amount that could exist dissolved in gas at reservoir pressure and temperature. The Codell-Niobrara interval is reported to be overpressured (that is, pressure greater than hydrostatic) whereas the underlying Dakota J interval is underpressured (less than hydrostatic), demonstrating a lack of hydraulic communication between the two intervals despite their proximity over a broad geographical area. The underpressuring in the Dakota J interval has been attributed by others to outcropping strata east of the basin. We agree with this interpretation and postulate that the gas accumulation also may contribute to hydraulic isolation from outcrops immediately west of the basin.

  16. Sampling Theory and Confidence Intervals for Effect Sizes: Using ESCI To Illustrate "Bouncing" Confidence Intervals.

    ERIC Educational Resources Information Center

    Du, Yunfei

    This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…
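    The "bouncing" behavior of confidence intervals under repeated sampling is easy to demonstrate. This sketch assumes a standardized mean difference (Cohen's d) as the effect size and uses a common large-sample approximation for its standard error; neither choice comes from the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: two groups differing by a true effect of d = 0.5.
mu_diff, sigma, n, true_d = 0.5, 1.0, 30, 0.5

covered = 0
reps = 1000
for _ in range(reps):
    a = rng.normal(0.0, sigma, n)
    b = rng.normal(mu_diff, sigma, n)
    sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)   # pooled SD
    d = (b.mean() - a.mean()) / sp
    # Approximate large-sample standard error of d.
    se = np.sqrt(2.0 / n + d**2 / (2 * (2 * n - 2)))
    lo, hi = d - 1.96 * se, d + 1.96 * se               # one "bouncing" CI
    covered += lo <= true_d <= hi

print(f"{covered / reps:.1%} of the resampled CIs captured the true d")
```

Each replication produces a CI in a different location; individually they bounce around the true parameter, while in the long run roughly 95% of them cover it.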

  17. Investigation of modulation parameters in multiplexing gas chromatography.

    PubMed

    Trapp, Oliver

    2010-10-22

    Combination of information technology and separation sciences opens a new avenue to achieve high sample throughputs and therefore is of great interest to bypass bottlenecks in catalyst screening of parallelized reactors or using multitier well plates in reaction optimization. Multiplexing gas chromatography utilizes pseudo-random injection sequences derived from Hadamard matrices to perform rapid sample injections which gives a convoluted chromatogram containing the information of a single sample or of several samples with similar analyte composition. The conventional chromatogram is obtained by application of the Hadamard transform using the known injection sequence or in case of several samples an averaged transformed chromatogram is obtained which can be used in a Gauss-Jordan deconvolution procedure to obtain all single chromatograms of the individual samples. The performance of such a system depends on the modulation precision and on the parameters, e.g. the sequence length and modulation interval. Here we demonstrate the effects of the sequence length and modulation interval on the deconvoluted chromatogram, peak shapes and peak integration for sequences between 9-bit (511 elements) and 13-bit (8191 elements) and modulation intervals Δt between 5 s and 500 ms using a mixture of five components. It could be demonstrated that even for high-speed modulation at time intervals of 500 ms the chromatographic information is very well preserved and that the separation efficiency can be improved by very narrow sample injections. Furthermore this study shows that the relative peak areas in multiplexed chromatograms do not deviate from conventionally recorded chromatograms. Copyright © 2010 Elsevier B.V. All rights reserved.
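    The pseudo-random injection scheme can be illustrated with a toy example. The 4-bit LFSR, the peak positions, and the dense matrix solve below are simplifications for demonstration; a real system uses far longer sequences and the fast Hadamard transform rather than a general linear solve:

```python
import numpy as np

def prbs(nbits, taps):
    """Maximal-length pseudo-random binary sequence from a Fibonacci LFSR.
    (4 bits with taps (4, 3) gives the 15-element m-sequence.)"""
    state = [1] * nbits
    out = []
    for _ in range(2**nbits - 1):
        out.append(state[-1])
        fb = 0
        for tap in taps:
            fb ^= state[tap - 1]
        state = [fb] + state[:-1]
    return np.array(out)

s = prbs(4, (4, 3))                 # 15-element injection sequence
n = s.size

# Hypothetical single-sample chromatogram sliced into n time bins.
x = np.zeros(n)
x[[3, 7, 8]] = [1.0, 0.6, 0.3]      # three "peaks"

# Multiplexed signal: every 1 in the sequence triggers an injection, so the
# detector records a circulant superposition of shifted chromatograms.
S = np.array([[s[(i - j) % n] for j in range(n)] for i in range(n)])
y = S @ x

# Deconvolution with the known injection sequence recovers the chromatogram.
x_rec = np.linalg.solve(S, y)
print(np.allclose(x_rec, x))
```

The circulant matrix built from an m-sequence is nonsingular, which is what makes exact recovery possible from the convoluted detector trace.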

  18. An actual load forecasting methodology by interval grey modeling based on the fractional calculus.

    PubMed

    Yang, Yang; Xue, Dingyü

    2017-07-17

    The operation processes of a thermal power plant are measured as real-time data, and a large number of historical interval data can be obtained from the resulting dataset. Within defined periods of time, this interval information can support decision making and equipment maintenance. Actual load is one of the most important parameters, and the trends hidden in the historical data reflect the overall operating status of the equipment. However, modeling and prediction with interval grey numbers are more complicated than with real numbers. In order not to lose any information, this paper uses geometric coordinate features, the coordinates of the area and middle-point lines, which are proven to carry the same information as the original interval data. A grey prediction model for interval grey numbers based on fractional-order accumulation calculus is proposed. Compared with the integer-order model, the proposed method has more degrees of freedom and better modeling and prediction performance, and it can be widely applied to small samples of historical interval sequences from industry. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
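    The fractional-order accumulation underlying such grey models can be sketched as follows. This uses the standard r-AGO (fractional accumulated generating operation) with generalized binomial weights; it is the generic building block, not the paper's specific interval-number construction:

```python
from math import gamma

def frac_ago(x, r):
    """Fractional-order accumulated generating operation (r-AGO):
    x_r[k] = sum_i C(k - i + r - 1, k - i) * x[i]  (0-indexed here).
    r = 1 reduces to the ordinary cumulative sum used in classic GM(1,1)."""
    def binom(a, b):            # generalized binomial coefficient via gamma
        return gamma(a + 1) / (gamma(b + 1) * gamma(a - b + 1))
    return [sum(binom(k - i + r - 1, k - i) * x[i] for i in range(k + 1))
            for k in range(len(x))]

x = [2.0, 3.0, 1.5, 2.5]
print(frac_ago(x, 1.0))   # ordinary cumulative sum: [2.0, 5.0, 6.5, 9.0]
print(frac_ago(x, 0.5))   # fractional accumulation down-weights older terms
```

Choosing a non-integer r gives the extra degree of freedom the abstract refers to: the accumulation order itself can be tuned to the data before fitting the grey model.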

  19. Sampling and Control Circuit Board for an Inertial Measurement Unit

    NASA Technical Reports Server (NTRS)

    Chelmins, David T (Inventor); Sands, Obed (Inventor); Powis, Richard T., Jr. (Inventor)

    2016-01-01

    A circuit board that serves as a control and sampling interface to an inertial measurement unit ("IMU") is provided. The circuit board is also configured to interface with a local oscillator and an external trigger pulse. The circuit board is further configured to receive the external trigger pulse from an external source that time aligns the local oscillator and initiates sampling of the inertial measurement device for data at precise time intervals based on pulses from the local oscillator. The sampled data may be synchronized by the circuit board with other sensors of a navigation system via the trigger pulse.

  20. Comparison of two methods for recovering migrating Ascaris suum larvae from the liver and lungs of pigs.

    PubMed

    Slotved, H C; Roepstorff, A; Barnes, E H; Eriksen, L; Nansen, P

    1996-08-01

    Nine groups of 5 pigs were inoculated with Ascaris suum eggs on day 0. Groups 1, 2, and 3 were inoculated with 100 eggs, groups 4, 5, and 6 with 1,000 eggs, and groups 7, 8, and 9 with 10,000 eggs. On day 3, groups 1, 4, and 7 were slaughtered, on day 7 groups 2, 5, and 8, and on day 10 groups 3, 6, and 9. The liver (days 3 and 7) and lungs (days 3, 7, and 10) were removed, and two 25% samples of each organ were collected. Larvae were recovered from 1 sample by the Baermann method and from the other by an agar-gel method. Overall there were no significant differences in the liver larval recovery between the 2 methods. The use of the agar-gel method resulted in a very clean suspension of larvae and thereby reduced the sample counting time by a factor of 5-10 compared to the Baermann method. With both methods larval recovery from the lungs resulted in a clean larval suspension that was easy to count, and there were overall no significant differences between the 2 methods, although there was a tendency toward the Baermann method recovering more larvae from the lungs than the agar-gel method. The tissue sample dry weight did not significantly influence larval recovery by the agar-gel method, and the time interval from slaughtering to start of incubation on day 3 (interval 51-92 min), day 7 (interval 37-114 min), and day 10 (interval 50-129 min) had no significant effect on recovery by either method.

  1. Hematology and biochemistry reference intervals for Ontario commercial nursing pigs close to the time of weaning

    PubMed Central

    Perri, Amanda M.; O’Sullivan, Terri L.; Harding, John C.S.; Wood, R. Darren; Friendship, Robert M.

    2017-01-01

    The evaluation of pig hematology and biochemistry parameters is rarely done largely due to the costs associated with laboratory testing and labor, and the limited availability of reference intervals needed for interpretation. Within-herd and between-herd biological variation of these values also make it difficult to establish reference intervals. Regardless, baseline reference intervals are important to aid veterinarians in the interpretation of blood parameters for the diagnosis and treatment of diseased swine. The objective of this research was to provide reference intervals for hematology and biochemistry parameters of 3-week-old commercial nursing piglets in Ontario. A total of 1032 pigs lacking clinical signs of disease from 20 swine farms were sampled for hematology and iron panel evaluation, with biochemistry analysis performed on a subset of 189 randomly selected pigs. The 95% reference interval, mean, median, range, and 90% confidence intervals were calculated for each parameter. PMID:28373729
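    The statistics reported here (a 95% reference interval plus confidence intervals on its limits) can be computed nonparametrically. The values below are simulated under an assumed normal distribution for illustration; they are not the study's data, and the analyte name is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical hemoglobin values (g/L) for 1032 clinically normal piglets.
values = rng.normal(100.0, 12.0, 1032)

# Nonparametric 95% reference interval: 2.5th and 97.5th percentiles.
lower, upper = np.percentile(values, [2.5, 97.5])

# 90% confidence intervals around each reference limit via bootstrap.
boot = rng.choice(values, size=(2000, values.size), replace=True)
boot_lo = np.percentile(boot, 2.5, axis=1)
boot_hi = np.percentile(boot, 97.5, axis=1)
lo_ci = np.percentile(boot_lo, [5, 95])
hi_ci = np.percentile(boot_hi, [5, 95])

print(f"95% reference interval: {lower:.1f} - {upper:.1f} g/L")
print(f"  lower limit 90% CI: {lo_ci[0]:.1f} - {lo_ci[1]:.1f}")
print(f"  upper limit 90% CI: {hi_ci[0]:.1f} - {hi_ci[1]:.1f}")
```

With roughly 1000 reference animals, the percentile estimates are stable and the bootstrap CIs on the limits are narrow, which is one reason large reference samples are preferred.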

  2. Measuring discharge with ADCPs: Inferences from synthetic velocity profiles

    USGS Publications Warehouse

    Rehmann, C.R.; Mueller, D.S.; Oberg, K.A.

    2009-01-01

    Synthetic velocity profiles are used to determine guidelines for sampling discharge with acoustic Doppler current profilers (ADCPs). The analysis allows the effects of instrument characteristics, sampling parameters, and properties of the flow to be studied systematically. For mid-section measurements, the averaging time required for a single profile measurement always exceeded the 40 s usually recommended for velocity measurements, and it increased with increasing sample interval and increasing time scale of the large eddies. Similarly, simulations of transect measurements show that discharge error decreases as the number of large eddies sampled increases. The simulations allow sampling criteria that account for the physics of the flow to be developed. ?? 2009 ASCE.

  3. Tobacco smoking trajectory and associated ethnic differences among adolescent smokers seeking cessation treatment.

    PubMed

    Robinson, Miqun L; Berlin, Ivan; Moolchan, Eric T

    2004-09-01

    To examine smoking trajectories in a clinical sample of adolescent smokers seeking cessation treatment, including: (a) smoking onset (initial, daily) and time intervals from initial to daily smoking and from daily smoking to treatment request, (b) associations between current level of tobacco dependence and smoking history, and (c) differences in smoking trajectory between African-American and non-African-American youth. Four hundred and thirty-two adolescent smokers (aged 13-17 years, 61.8% female, 32% African-American) responding to various media advertisements completed a telephone interview as part of pre-eligibility screening for a smoking cessation trial. Smoking trajectory data included age at onset of initial and daily smoking, intervals between those time points, and cigarettes smoked per day (CPD). Tobacco dependence was assessed using the Fagerström Test for Nicotine Dependence (FTND). Data were analyzed using regression models and multiple analyses of covariance. Initial smoking occurred at a mean age of less than 12 years and daily smoking at age 13 years. Earlier onset of daily smoking was associated with higher FTND scores and longer duration from daily smoking to treatment request. For the entire sample, the time interval from initial to daily smoking was 1.14 years. When the sample was divided into early (before age 14 years) and later (at or after age 14 years) initiators, early initiators showed a slower progression from initial to daily smoking compared with late initiators (16 months vs. 6 months). Compared with non-African-American teen smokers, African-American youth reported a 1-year delay in onset of both initial and daily smoking. Early age of daily smoking and short time interval from initial to daily smoking highlight a brief window of opportunity to prevent the development of tobacco addiction and its consequences. 
Ethnic differences in smoking trajectory uncovered in this report call for ethnically tailored interventions to reduce youth smoking.

  4. Comparative evaluation of human pulp tissue dissolution by different concentrations of chlorine dioxide, calcium hypochlorite and sodium hypochlorite: An in vitro study

    PubMed Central

    Taneja, Sonali; Mishra, Neha; Malik, Shubhra

    2014-01-01

    Introduction: Irrigation plays an indispensable role in removal of tissue remnants and debris from the complicated root canal system. This study compared the human pulp tissue dissolution by different concentrations of chlorine dioxide, calcium hypochlorite and sodium hypochlorite. Materials and Methods: Pulp tissue was standardized to a weight of 9 mg for each sample. In all, 60 samples obtained were divided into 6 groups according to the irrigating solution used: 2.5% sodium hypochlorite (NaOCl), 5.25% NaOCl, 5% calcium hypochlorite (Ca(OCl)2), 10% Ca(OCl)2, 5% chlorine dioxide (ClO2) and 13% ClO2. Pulp tissue was placed in each test tube carrying irrigants of measured volume (5 ml) according to their specified subgroup time interval: 30 minutes (Subgroup A) and 60 minutes (Subgroup B). The solution from each sample test tube was filtered and was left for drying overnight. The residual weight was calculated by filtration method. Results: Mean tissue dissolution increases with increase in time period. Results showed 5.25% NaOCl to be most effective at both time intervals followed by 2.5% NaOCl at 60 minutes, 10% Ca(OCl)2 and 13% ClO2 at 60 minutes. Least amount of tissue dissolving ability was demonstrated by 5% Ca(OCl)2 and 5% ClO2 at 30 minutes. Distilled water showed no pulp tissue dissolution. Conclusion: Within the limitations of the study, NaOCl most efficiently dissolved the pulp tissue at both concentrations and at both time intervals. Mean tissue dissolution by Ca(OCl)2 and ClO2 gradually increased with time and with their increase in concentration. PMID:25506141

  5. Impacts of sampling design and estimation methods on nutrient leaching of intensively monitored forest plots in the Netherlands.

    PubMed

    de Vries, W; Wieggers, H J J; Brus, D J

    2010-08-05

    Element fluxes through forest ecosystems are generally based on measurements of concentrations in soil solution at regular time intervals at plot locations sampled in a regular grid. Here we present spatially averaged annual element leaching fluxes in three Dutch forest monitoring plots using a new sampling strategy in which both sampling locations and sampling times are selected by probability sampling. Locations were selected by stratified random sampling with compact geographical blocks of equal surface area as strata. In each sampling round, six composite soil solution samples were collected, consisting of five aliquots, one per stratum. The plot-mean concentration was estimated by linear regression, so that the bias due to one or more strata not being represented in the composite samples is eliminated. The sampling times were selected in such a way that the cumulative precipitation surplus of the time interval between two consecutive sampling times was constant, using an estimated precipitation surplus averaged over the past 30 years. The spatially averaged annual leaching flux was estimated by using the modeled daily water flux as an ancillary variable. An important advantage of the new method is that the uncertainty in the estimated annual leaching fluxes due to spatial and temporal variation and resulting sampling errors can be quantified. Results of this new method were compared with the reference approach in which daily leaching fluxes were calculated by multiplying daily interpolated element concentrations with daily water fluxes and then aggregated to a year. Results show that the annual fluxes calculated with the reference method for the period 2003-2005, including all plots, elements and depths, lie within the range of the mean ±2 standard errors of the new method in only 53% of cases. 
Despite the differences in results, both methods indicate comparable N retention and strong Al mobilization in all plots, with Al leaching being nearly equal to the leaching of SO(4) and NO(3) with fluxes expressed in mol(c) ha(-1) yr(-1). This illustrates that Al release, which is the clearest signal of soil acidification, is mainly due to the external input of SO(4) and NO(3).
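    The constant-precipitation-surplus scheduling rule can be sketched directly. The daily surplus series below is synthetic (a seasonal cosine plus noise, clipped at zero), standing in for the 30-year average series the authors used:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical daily precipitation surplus (precipitation minus
# evapotranspiration, mm): wetter at the start/end of the year.
doy = np.arange(365)
surplus = 2.0 * np.cos(2 * np.pi * doy / 365) + rng.normal(0, 1.5, 365)
surplus = np.clip(surplus, 0, None)      # keep only net-downward-flux days

def sampling_days(surplus, n_rounds):
    """Pick sampling days so that each interval between consecutive rounds
    carries an equal share of the cumulative precipitation surplus
    (rather than an equal number of calendar days)."""
    cum = np.cumsum(surplus)
    targets = np.linspace(cum[-1] / n_rounds, cum[-1], n_rounds)
    return [int(np.searchsorted(cum, t)) for t in targets]

days = sampling_days(surplus, 6)
print("sampling days of year:", days)
```

The resulting schedule clusters sampling rounds in wet periods, when most leaching occurs, and spaces them out in dry periods.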

  6. A review of statistical issues with progression-free survival as an interval-censored time-to-event endpoint.

    PubMed

    Sun, Xing; Li, Xiaoyun; Chen, Cong; Song, Yang

    2013-01-01

    Frequent rise of interval-censored time-to-event data in randomized clinical trials (e.g., progression-free survival [PFS] in oncology) challenges statistical researchers in the pharmaceutical industry in various ways. These challenges exist in both trial design and data analysis. Conventional statistical methods treating intervals as fixed points, which are generally practiced by pharmaceutical industry, sometimes yield inferior or even flawed analysis results in extreme cases for interval-censored data. In this article, we examine the limitation of these standard methods under typical clinical trial settings and further review and compare several existing nonparametric likelihood-based methods for interval-censored data, methods that are more sophisticated but robust. Trial design issues involved with interval-censored data comprise another topic to be explored in this article. Unlike right-censored survival data, expected sample size or power for a trial with interval-censored data relies heavily on the parametric distribution of the baseline survival function as well as the frequency of assessments. There can be substantial power loss in trials with interval-censored data if the assessments are very infrequent. Such an additional dependency controverts many fundamental assumptions and principles in conventional survival trial designs, especially the group sequential design (e.g., the concept of information fraction). In this article, we discuss these fundamental changes and available tools to work around their impacts. Although progression-free survival is often used as a discussion point in the article, the general conclusions are equally applicable to other interval-censored time-to-event endpoints.
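    The dependence of interval-censored PFS data on assessment frequency can be illustrated with a small simulation. The exponential event-time model, the 10-month median, and the regular visit schedule are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical trial: true progression times are exponential with a
# 10-month median, but progression is only detectable at scheduled visits.
true_t = rng.exponential(10 / np.log(2), size=5000)

def censor_to_intervals(times, visit_gap):
    """Each event is only known to lie in (last negative visit, first
    positive visit]; return those interval bounds."""
    right = np.ceil(times / visit_gap) * visit_gap
    left = right - visit_gap
    return left, right

for gap in (1, 2, 4, 8):
    left, right = censor_to_intervals(true_t, gap)
    mid = (left + right) / 2             # naive fixed-point imputation
    print(f"{gap}-month visits: median(midpoint) = {np.median(mid):5.2f} "
          f"vs true median = {np.median(true_t):5.2f}")
```

As the assessments become infrequent, the intervals widen and any fixed-point treatment of them (midpoint, right endpoint) increasingly distorts the estimated survival distribution, which is the motivation for the likelihood-based interval-censoring methods the article reviews.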

  7. Cigarette smoke chemistry market maps under Massachusetts Department of Public Health smoking conditions.

    PubMed

    Morton, Michael J; Laffoon, Susan W

    2008-06-01

    This study extends the market mapping concept introduced by Counts et al. (Counts, M.E., Hsu, F.S., Tewes, F.J., 2006. Development of a commercial cigarette "market map" comparison methodology for evaluating new or non-conventional cigarettes. Regul. Toxicol. Pharmacol. 46, 225-242) to include both temporal cigarette and testing variation and also machine smoking with more intense puffing parameters, as defined by the Massachusetts Department of Public Health (MDPH). The study was conducted over a two year period and involved a total of 23 different commercial cigarette brands from the U.S. marketplace. Market mapping prediction intervals were developed for 40 mainstream cigarette smoke constituents and the potential utility of the market map as a comparison tool for new brands was demonstrated. The over-time character of the data allowed for the variance structure of the smoke constituents to be more completely characterized than is possible with one-time sample data. The variance was partitioned among brand-to-brand differences, temporal differences, and the remaining residual variation using a mixed random and fixed effects model. It was shown that a conventional weighted least squares model typically gave similar prediction intervals to those of the more complicated mixed model. For most constituents there was less difference in the prediction intervals calculated from over-time samples and those calculated from one-time samples than had been anticipated. One-time sample maps may be adequate for many purposes if the user is aware of their limitations. Cigarette tobacco fillers were analyzed for nitrate, nicotine, tobacco-specific nitrosamines, ammonia, chlorogenic acid, and reducing sugars. The filler information was used to improve predicting relationships for several of the smoke constituents, and it was concluded that the effects of filler chemistry on smoke chemistry were partial explanations of the observed brand-to-brand variation.

  8. Longitudinal study of fingerprint recognition.

    PubMed

    Yoon, Soweon; Jain, Anil K

    2015-07-14

    Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject's age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis.

  9. Longitudinal study of fingerprint recognition

    PubMed Central

    Yoon, Soweon; Jain, Anil K.

    2015-01-01

    Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject’s age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis. PMID:26124106

  10. Irrigation Water Sources and Time Intervals as Variables on the Presence of Campylobacter spp. and Listeria monocytogenes on Romaine Lettuce Grown in Muck Soil.

    PubMed

    Guévremont, Evelyne; Lamoureux, Lisyanne; Généreux, Mylène; Côté, Caroline

    2017-07-01

    Irrigation water has been identified as a possible source of vegetable contamination by foodborne pathogens. Risk management for pathogens such as Campylobacter spp. and Listeria monocytogenes in fields can be influenced by the source of the irrigation water and the time interval between last irrigation and harvest. Plots of romaine lettuce were irrigated with manure-contaminated water or aerated pond water 21, 7, or 3 days prior to harvesting, and water and muck soil samples were collected at each irrigation treatment. Lettuce samples were collected at the end of the trials. The samples were tested for the presence of Campylobacter spp. and L. monocytogenes. Campylobacter coli was isolated from 33% of hog manure samples (n = 9) and from 11% of the contaminated water samples (n = 27), but no lettuce samples were positive (n = 288). L. monocytogenes was not found in manure, and only one sample of manure-contaminated irrigation water (n = 27) and one lettuce sample (n = 288) were positive. No Campylobacter or L. monocytogenes was recovered from the soil samples (n = 288). Because of the low incidence of pathogens, it was not possible to link the contamination of either soil or lettuce with the type of irrigation water. Nevertheless, experimental field trials mimicking real conditions provide new insights into the survival of two significant foodborne pathogens on romaine lettuce.

  11. Methods for estimating confidence intervals in interrupted time series analyses of health interventions.

    PubMed

    Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis

    2009-02-01

    Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
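A minimal sketch of the bootstrap idea on synthetic data (plain residual resampling without the autocorrelation correction the authors apply; the segmented model, data, and effect sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly outcome: 24 pre- and 24 post-intervention points,
# with a baseline trend and an abrupt level drop at the interruption.
n_pre, n_post = 24, 24
t = np.arange(n_pre + n_post)
post = (t >= n_pre).astype(float)
t_post = np.where(t >= n_pre, t - n_pre + 1.0, 0.0)
y = 50.0 + 0.2 * t - 8.0 * post + 0.1 * t_post + rng.normal(0, 2, t.size)

X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_post])

def relative_level_change(yy):
    """Segmented-regression level change relative to the counterfactual
    (pre-intervention trend extrapolated to the interruption)."""
    b, *_ = np.linalg.lstsq(X, yy, rcond=None)
    return b[2] / (b[0] + b[1] * n_pre)

est = relative_level_change(y)

# Percentile bootstrap on residuals; a real ITS analysis would also handle
# autocorrelated errors, e.g. by block resampling or modeling the errors.
b, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted, resid = X @ b, y - X @ b
boot = np.array([relative_level_change(fitted + rng.choice(resid, resid.size))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"relative level change {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The key property the record highlights is that this CI construction needs no large-sample normality argument, only a correctly specified model from which to resample.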

  12. Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing

    2018-05-01

The round trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
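The alias-suppressing effect of randomizing the sampling interval can be illustrated with a toy sketch (plain nonuniform-DFT detection on synthetic data, not the paper's φ-OTDR processing; the tone frequency and interval statistics are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

f_sig = 420.0            # vibration tone (Hz), above the mean Nyquist rate
mean_dt = 1.0 / 500.0    # mean pulse interval -> mean Nyquist rate of 250 Hz
n = 2000

# Additive random sampling: each interval is jittered uniformly around the
# mean, so sample times are the cumulative sum of random intervals.
t = np.cumsum(mean_dt * (0.5 + rng.random(n)))
x = np.sin(2 * np.pi * f_sig * t)

# Nonuniform-DFT magnitude over a grid extending well past 250 Hz. With
# uniform 500 Hz sampling the 420 Hz tone would alias to 80 Hz; the random
# intervals instead smear the aliases into a broadband noise floor.
freqs = np.arange(1.0, 1000.0)
spec = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ x) / n
f_peak = freqs[np.argmax(spec)]
print(f"recovered tone: {f_peak:.0f} Hz")
```

The spectral peak lands at the true tone frequency even though the mean sampling rate is sub-Nyquist for it, which is the property sNARS exploits for sparse wideband signals.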

  13. Dating young geomorphic surfaces using age of colonizing Douglas fir in southwestern Washington and northwestern Oregon, USA

    USGS Publications Warehouse

    Pierson, T.C.

    2007-01-01

    Dating of dynamic, young (<500 years) geomorphic landforms, particularly volcanofluvial features, requires higher precision than is possible with radiocarbon dating. Minimum ages of recently created landforms have long been obtained from tree-ring ages of the oldest trees growing on new surfaces. But to estimate the year of landform creation requires that two time corrections be added to tree ages obtained from increment cores: (1) the time interval between stabilization of the new landform surface and germination of the sampled trees (germination lag time or GLT); and (2) the interval between seedling germination and growth to sampling height, if the trees are not cored at ground level. The sum of these two time intervals is the colonization time gap (CTG). Such time corrections have been needed for more precise dating of terraces and floodplains in lowland river valleys in the Cascade Range, where significant eruption-induced lateral shifting and vertical aggradation of channels can occur over years to decades, and where timing of such geomorphic changes can be critical to emergency planning. Earliest colonizing Douglas fir (Pseudotsuga menziesii) were sampled for tree-ring dating at eight sites on lowland (<750 m a.s.l.), recently formed surfaces of known age near three Cascade volcanoes - Mount Rainier, Mount St. Helens and Mount Hood - in southwestern Washington and northwestern Oregon. Increment cores or stem sections were taken at breast height and, where possible, at ground level from the largest, oldest-looking trees at each study site. At least ten trees were sampled at each site unless the total of early colonizers was less. Results indicate that a correction of four years should be used for GLT and 10 years for CTG if the single largest (and presumed oldest) Douglas fir growing on a surface of unknown age is sampled. This approach would have a potential error of up to 20 years. 
Error can be reduced by sampling the five largest Douglas fir instead of the single largest. A GLT correction of 5 years should be added to the mean ring-count age of the five largest trees growing on the surface being dated, if the trees are cored at ground level. This correction would have an approximate error of ±5 years. If the trees are cored at about 1.4 m above the ground surface (breast height), a CTG correction of 11 years should be added to the mean age of the five sampled trees (with an error of about ±7 years).
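The corrections translate into simple arithmetic; a sketch using the five-largest-tree corrections from the abstract, with hypothetical ring counts and sampling year:

```python
# Corrections (years) for the five-largest-tree approach from the abstract.
GLT_FIVE = 5    # cores taken at ground level
CTG_FIVE = 11   # cores taken at breast height (~1.4 m)

def landform_year(core_year, ring_count_ages, cored_at_breast_height):
    """Estimated year of surface stabilization: sampling year minus the
    mean ring-count age minus the appropriate correction."""
    mean_age = sum(ring_count_ages) / len(ring_count_ages)
    correction = CTG_FIVE if cored_at_breast_height else GLT_FIVE
    return round(core_year - mean_age - correction)

# Hypothetical ring counts for the five largest Douglas firs, cored in 2007:
ages = [62, 60, 59, 57, 55]
print(landform_year(2007, ages, cored_at_breast_height=True))   # -> 1937
print(landform_year(2007, ages, cored_at_breast_height=False))  # -> 1943
```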

  14. On the continuous dependence with respect to sampling of the linear quadratic regulator problem for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.; Wang, C.

    1990-01-01

The convergence of solutions to the discrete or sampled time linear quadratic regulator problem and associated Riccati equation for infinite dimensional systems to the solutions to the corresponding continuous time problem and equation, as the length of the sampling interval (the sampling rate) tends toward zero (infinity) is established. Both the finite and infinite time horizon problems are studied. In the finite time horizon case, strong continuity of the operators which define the control system and performance index together with a stability and consistency condition on the sampling scheme are required. For the infinite time horizon problem, in addition, the sampled systems must be stabilizable and detectable, uniformly with respect to the sampling rate. Classes of systems for which this condition can be verified are discussed. Results of numerical studies involving the control of a heat/diffusion equation, a hereditary or delay system, and a flexible beam are presented and discussed.
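For readers unfamiliar with the setup, a standard textbook statement (not taken from the paper itself) of the two problems being connected is:

```latex
% Continuous-time LQR: minimize over controls u
J(u) = \int_0^T \left( \langle Q x(t), x(t) \rangle
     + \langle R u(t), u(t) \rangle \right) dt,
\qquad \dot{x}(t) = A x(t) + B u(t), \quad x(0) = x_0 .

% Sampled-time approximation: piecewise-constant control on intervals of
% length \tau, with the cost summed at the sampling instants
J_\tau(u) = \sum_{k=0}^{N-1} \tau \left( \langle Q x_k, x_k \rangle
          + \langle R u_k, u_k \rangle \right),
\qquad x_{k+1} = e^{A\tau} x_k
       + \left( \int_0^{\tau} e^{A s} \, ds \right) B u_k .
```

The record's result is that, under the stated continuity, stability, and consistency conditions, the optimal sampled controls and Riccati operators converge to their continuous-time counterparts as the interval length τ tends to zero.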

  15. On the continuous dependence with respect to sampling of the linear quadratic regulator problem for distributed parameter system

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.; Wang, C.

    1992-01-01

The convergence of solutions to the discrete- or sampled-time linear quadratic regulator problem and associated Riccati equation for infinite-dimensional systems to the solutions to the corresponding continuous-time problem and equation, as the length of the sampling interval (the sampling rate) tends toward zero (infinity) is established. Both the finite- and infinite-time horizon problems are studied. In the finite-time horizon case, strong continuity of the operators that define the control system and performance index, together with a stability and consistency condition on the sampling scheme are required. For the infinite-time horizon problem, in addition, the sampled systems must be stabilizable and detectable, uniformly with respect to the sampling rate. Classes of systems for which this condition can be verified are discussed. Results of numerical studies involving the control of a heat/diffusion equation, a hereditary or delay system, and a flexible beam are presented and discussed.

  16. Estimation of Rainfall Sampling Uncertainty: A Comparison of Two Diverse Approaches

    NASA Technical Reports Server (NTRS)

    Steiner, Matthias; Zhang, Yu; Baeck, Mary Lynn; Wood, Eric F.; Smith, James A.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

The spatial and temporal intermittence of rainfall causes the averages of satellite observations of rain rate to differ from the "true" average rain rate over any given area and time period, even if the satellite observations are perfectly accurate. The difference between averages based on occasional satellite observations and the continuous-time average of rain rate is referred to as sampling error. In this study, rms sampling error estimates are obtained for average rain rates over boxes 100 km, 200 km, and 500 km on a side, for averaging periods of 1 day, 5 days, and 30 days. The study uses a multi-year, merged radar data product provided by Weather Services International Corp. at a resolution of 2 km in space and 15 min in time, over an area of the central U.S. extending from 35N to 45N in latitude and 100W to 80W in longitude. The intervals between satellite observations are assumed to be equal, and similar in size to what present and future satellite systems are able to provide (from 1 h to 12 h). The sampling error estimates are obtained using a resampling method called "resampling by shifts," and are compared to sampling error estimates proposed by Bell based on earlier work by Laughlin. The resampling estimates are found to scale with areal size and time period as the theory predicts. The dependence on average rain rate and time interval between observations is also similar to what the simple theory suggests.
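"Resampling by shifts" can be sketched directly: subsample a dense rain-rate record at a fixed revisit time for every possible phase shift, and take the rms deviation of the subsampled means from the full-record mean. The data below are synthetic (the study used the merged radar product):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 30-day record of area-average rain rate at 15-min resolution
# (a highly skewed, mostly-dry series standing in for the radar product).
steps_per_hour = 4
rain = rng.gamma(shape=0.08, scale=4.0, size=30 * 24 * steps_per_hour)

def rms_sampling_error(rain, revisit_hours):
    """Resampling by shifts: keep only the snapshots a satellite with a
    fixed revisit time would see, once per possible phase shift, and take
    the rms difference of the subsampled means from the full-record mean."""
    stride = revisit_hours * steps_per_hour
    errors = [rain[shift::stride].mean() - rain.mean()
              for shift in range(stride)]
    return float(np.sqrt(np.mean(np.square(errors))))

for h in (1, 3, 6, 12):
    print(f"revisit every {h:2d} h: "
          f"rms sampling error {rms_sampling_error(rain, h):.4f}")
```

As the record reports, the estimated sampling error grows with the interval between observations; on real rain fields the temporal correlation of rainfall also enters, which this independent synthetic series does not capture.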

  17. Does major depression result in lasting personality change?

    PubMed

    Shea, M T; Leon, A C; Mueller, T I; Solomon, D A; Warshaw, M G; Keller, M B

    1996-11-01

    Individuals with a history of depression are characterized by high levels of certain personality traits, particularly neuroticism, introversion, and interpersonal dependency. The authors examined the "scar hypothesis," i.e., the possibility that episodes of major depression result in lasting personality changes that persist beyond recovery from the depression. A large sample of first-degree relatives, spouses, and comparison subjects ascertained in connection with the proband sample from the National Institute of Mental Health Collaborative Program on the Psychobiology of Depression were assessed at two points in time separated by an interval of 6 years. Subjects with a prospectively observed first episode of major depression during the interval were compared with subjects remaining well in terms of change from time 1 to time 2 in self-reported personality traits. All subjects studied were well (had no mental disorders) at the time of both assessments. There was no evidence of negative change from premorbid to postmorbid assessment in any of the personality traits for subjects with a prospectively observed first episode of major depression during the interval. The results suggested a possible association of number and length of episodes with increased levels of emotional reliance and introversion, respectively. The findings suggest that self-reported personality traits do not change after a typical episode of major depression. Future studies are needed to determine whether such change occurs following more severe, chronic, or recurrent episodes of depression.

  18. A possible simplification for the estimation of area under the curve (AUC₀₋₁₂) of enteric-coated mycophenolate sodium in renal transplant patients receiving tacrolimus.

    PubMed

    Fleming, Denise H; Mathew, Binu S; Prasanna, Samuel; Annapandian, Vellaichamy M; John, George T

    2011-04-01

    Enteric-coated mycophenolate sodium (EC-MPS) is widely used in renal transplantation. With a delayed absorption profile, it has not been possible to develop limited sampling strategies to estimate area under the curve (mycophenolic acid [MPA] AUC₀₋₁₂), which have limited time points and are completed in 2 hours. We developed and validated simplified strategies to estimate MPA AUC₀₋₁₂ in an Indian renal transplant population prescribed EC-MPS together with prednisolone and tacrolimus. Intensive pharmacokinetic sampling (17 samples each) was performed in 18 patients to measure MPA AUC₀₋₁₂. The profiles at 1 month were used to develop the simplified strategies and those at 5.5 months used for validation. We followed two approaches. In one, the AUC was calculated using the trapezoidal rule with fewer time points followed by an extrapolation. In the second approach, by stepwise multiple regression analysis, models with different time points were identified and linear regression analysis performed. Using the trapezoidal rule, two equations were developed with six time points and sampling to 6 or 8 hours (8hrAUC[₀₋₁₂exp]) after the EC-MPS dose. On validation, the 8hrAUC(₀₋₁₂exp) compared with total measured AUC₀₋₁₂ had a coefficient of correlation (r²) of 0.872 with a bias and precision (95% confidence interval) of 0.54% (-6.07-7.15) and 9.73% (5.37-14.09), respectively. Second, limited sampling strategies were developed with four, five, six, seven, and eight time points and completion within 2 hours, 4 hours, 6 hours, and 8 hours after the EC-MPS dose. On validation, six, seven, and eight time point equations, all with sampling to 8 hours, had an acceptable r with the total measured MPA AUC₀₋₁₂ (0.817-0.927). 
For the six-, seven-, and eight-time-point equations, the bias (95% confidence interval) was 3.00% (-4.59 to 10.59), 0.29% (-5.4 to 5.97), and -0.72% (-5.34 to 3.89), and the precision (95% confidence interval) was 10.59% (5.06-16.13), 8.33% (4.55-12.1), and 6.92% (3.94-9.90), respectively. Of the eight simplified approaches, inclusion of seven or eight time points improved the accuracy of the predicted AUC compared with the measured AUC and can be advocated based on the priorities of the user.
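A sketch of the two computational ingredients above, the trapezoidal AUC over limited time points and the bias/precision metrics, using hypothetical concentration data rather than the study's profiles:

```python
def trapezoid_auc(times, conc):
    """Linear trapezoidal AUC over the sampled interval."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))

def bias_and_precision(predicted, measured):
    """Percent prediction error: bias = mean PE, precision = mean |PE|."""
    pe = [100 * (p - m) / m for p, m in zip(predicted, measured)]
    return sum(pe) / len(pe), sum(abs(e) for e in pe) / len(pe)

times = [0, 0.5, 1, 2, 4, 6, 8]             # h, six post-dose samples to 8 h
conc = [0.2, 1.5, 3.8, 2.9, 1.6, 1.0, 0.7]  # hypothetical MPA levels, mg/L
auc_0_8 = trapezoid_auc(times, conc)
print(f"AUC(0-8h) = {auc_0_8:.2f} mg*h/L")  # -> 13.90 mg*h/L
```

In the study's first approach this partial AUC is then extrapolated to the full 12-h interval; the delayed-release profile of EC-MPS is why sampling must extend to 8 hours before the estimate becomes reliable.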

  19. Shear Bond Strengths of Different Adhesive Systems to Biodentine

    PubMed Central

    Odabaş, Mesut Enes; Bani, Mehmet; Tirali, Resmiye Ebru

    2013-01-01

The aim of this study was to measure the shear bond strength of different adhesive systems to Biodentine at different time intervals. Eighty specimens of Biodentine were prepared and divided into 8 groups. After 12 minutes, 40 samples were randomly selected and divided into 4 groups of 10 each: group 1: (etch-and-rinse adhesive system) Prime & Bond NT; group 2: (2-step self-etch adhesive system) Clearfil SE Bond; group 3: (1-step self-etch adhesive system) Clearfil S3 Bond; group 4: control (no adhesive). After the application of adhesive systems, composite resin was applied over Biodentine. This procedure was repeated with the remaining 40 samples 24 hours after mixing. Shear bond strengths were measured using a universal testing machine, and the data were subjected to 1-way analysis of variance and the Scheffé post hoc test. No significant differences were found between the adhesive groups at the same time interval (12 minutes or 24 hours) (P > .05). Across the two time intervals, the lowest value was obtained for group 1 (etch-and-rinse adhesive) at the 12-minute period, and the highest was obtained for group 2 (two-step self-etch adhesive) at the 24-hour period. Composite resin placed over Biodentine with self-etch adhesive systems showed better shear bond strength. PMID:24222742

  1. In-well time-of-travel approach to evaluate optimal purge duration during low-flow sampling of monitoring wells

    USGS Publications Warehouse

    Harte, Philip T.

    2017-01-01

    A common assumption with groundwater sampling is that low (<0.5 L/min) pumping rates during well purging and sampling captures primarily lateral flow from the formation through the well-screened interval at a depth coincident with the pump intake. However, if the intake is adjacent to a low hydraulic conductivity part of the screened formation, this scenario will induce vertical groundwater flow to the pump intake from parts of the screened interval with high hydraulic conductivity. Because less formation water will initially be captured during pumping, a substantial volume of water already in the well (preexisting screen water or screen storage) will be captured during this initial time until inflow from the high hydraulic conductivity part of the screened formation can travel vertically in the well to the pump intake. Therefore, the length of the time needed for adequate purging prior to sample collection (called optimal purge duration) is controlled by the in-well, vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time–volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), which were based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of arrival time of formation water, which has been postulated, then in-well, vertical flow may be an important factor at wells where low-flow sampling is the sample method of choice.
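The time-volume reasoning can be sketched with simple geometry (hypothetical well dimensions and pumping rate; the study's analytical model additionally relates in-well vertical travel time to the hydraulic-conductivity contrast of the screened formation):

```python
import math

def screen_storage_liters(diameter_m, screen_length_m):
    """Volume of water standing in the screened interval (screen storage)."""
    return math.pi * (diameter_m / 2.0) ** 2 * screen_length_m * 1000.0

def minutes_to_displace(storage_l, rate_l_per_min, formation_fraction=0.0):
    """Purge time to flush screen storage when only (1 - formation_fraction)
    of the pumped water is preexisting screen water."""
    return storage_l / (rate_l_per_min * (1.0 - formation_fraction))

storage = screen_storage_liters(0.05, 3.0)   # 5 cm well, 3 m screen
print(f"screen storage: {storage:.2f} L")    # ~5.89 L
print(f"purge at 0.3 L/min: {minutes_to_displace(storage, 0.3):.1f} min")
```

Even this crude sketch shows why low-flow purging of a long screen can take tens of minutes before formation water dominates, consistent with using field-parameter stabilization as the arrival indicator.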

  2. Bioequivalence and Pharmacokinetic Evaluation Study of Acetaminophen vs. Acetaminophen Plus Caffeine Tablets in Healthy Mexican Volunteers.

    PubMed

    Guzmán, Nora Angélica Núñez; Molina, Daniel Ruiz; Núñez, Benigno Figueroa; Soto-Sosa, Juan Carlos; Abarca, Jorge Eduardo Herrera

    2016-12-01

    The aim of this clinical trial was to establish the bioequivalence of two tablets containing acetaminophen 650 mg (reference) and acetaminophen 650 mg plus caffeine 65 mg (test), administered orally, in fasting conditions in healthy Mexican volunteers. Blood samples were taken from 21 male and five female individuals, during a 24-h period, to characterize the pharmacokinetic profile of acetaminophen. Plasma samples were quantified by ultra-performance liquid chromatography, tandem mass spectrometry. Pharmacokinetic metrics (maximum plasma concentration, area under the curve from time zero to the last sampling time, and area under the curve from time zero to infinity) were used to determine the 90 % confidence interval of the test/reference coefficient. The geometric mean values for maximum plasma concentration obtained for the reference and test products were 9.46 ± 34.21 and 9.72 ± 32.38 µg/mL, respectively, whereas for the area under the curve from time zero to the last sampling time the values obtained were 34.93 ± 32.58 and 35.89 ± 31.03 µg h/mL for the reference and test formulations, respectively. The 90 % confidence intervals were within the acceptance range (80-125 %). The test product was bioequivalent to the reference product. A faster absorption was seen in the test formulation in the Mexican population.
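The bioequivalence criterion can be sketched as a 90% confidence interval on the geometric mean ratio of a log-transformed PK metric (hypothetical paired Cmax values; the fixed t quantile below assumes 8 subjects, and a real analysis would use the ANOVA residual error from the crossover design):

```python
import math
import statistics

# Hypothetical paired Cmax values (ug/mL) for test and reference products.
test = [9.1, 10.4, 8.7, 11.2, 9.9, 10.8, 9.5, 10.1]
ref  = [9.0,  9.8, 9.2, 10.9, 9.6, 10.2, 9.9,  9.7]

# Bioequivalence statistics are computed on the log scale.
diffs = [math.log(t) - math.log(r) for t, r in zip(test, ref)]
n = len(diffs)
mean_d = statistics.mean(diffs)
se = statistics.stdev(diffs) / math.sqrt(n)

t_crit = 1.895  # two-sided 90% t quantile for n - 1 = 7 degrees of freedom
lo = math.exp(mean_d - t_crit * se) * 100
hi = math.exp(mean_d + t_crit * se) * 100

print(f"GMR 90% CI: {lo:.1f}% - {hi:.1f}%")
print("bioequivalent:", 80.0 <= lo and hi <= 125.0)
```

Back-transforming the CI from the log scale is what yields the familiar 80-125% acceptance window applied to Cmax and the AUC metrics in the record.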

  3. Adrenal Hormones in Common Bottlenose Dolphins (Tursiops truncatus): Influential Factors and Reference Intervals

    PubMed Central

    Hart, Leslie B.; Wells, Randall S.; Kellar, Nick; Balmer, Brian C.; Hohn, Aleta A.; Lamb, Stephen V.; Rowles, Teri; Zolman, Eric S.; Schwacke, Lori H.

    2015-01-01

    Inshore common bottlenose dolphins (Tursiops truncatus) are exposed to a broad spectrum of natural and anthropogenic stressors. In response to these stressors, the mammalian adrenal gland releases hormones such as cortisol and aldosterone to maintain physiological and biochemical homeostasis. Consequently, adrenal gland dysfunction results in disruption of hormone secretion and an inappropriate stress response. Our objective herein was to develop diagnostic reference intervals (RIs) for adrenal hormones commonly associated with the stress response (i.e., cortisol, aldosterone) that account for the influence of intrinsic (e.g., age, sex) and extrinsic (e.g., time) factors. Ultimately, these reference intervals will be used to gauge an individual’s response to chase-capture stress and could indicate adrenal abnormalities. Linear mixed models (LMMs) were used to evaluate demographic and sampling factors contributing to differences in serum cortisol and aldosterone concentrations among bottlenose dolphins sampled in Sarasota Bay, Florida, USA (2000–2012). Serum cortisol concentrations were significantly associated with elapsed time from initial stimulation to sample collection (p<0.05), and RIs were constructed using nonparametric methods based on elapsed sampling time for dolphins sampled in less than 30 minutes following net deployment (95% RI: 0.91–4.21 µg/dL) and following biological sampling aboard a research vessel (95% RI: 2.32–6.68 µg/dL). To examine the applicability of the pre-sampling cortisol RI across multiple estuarine stocks, data from three additional southeast U.S. sites were compared, revealing that all of the dolphins sampled from the other sites (N = 34) had cortisol concentrations within the 95th percentile RI. Significant associations between serum concentrations of aldosterone and variables reported in previous studies (i.e., age, elapsed sampling time) were not observed in the current project (p<0.05). 
Also, approximately 16% of Sarasota Bay bottlenose dolphin aldosterone concentrations were below the assay’s detection limit (11 pg/mL), thus hindering the ability to derive 95th percentile RIs. Serum aldosterone concentrations from animals sampled at the three additional sites were compared to the detection limit, and the proportion of animals with low aldosterone concentrations was not significantly different than an expected prevalence of 16%. Although this study relied upon long-term, free-ranging bottlenose dolphin health data from a single site, the objective RIs can be used for future evaluation of adrenal function among individuals sampled during capture-release health assessments. PMID:25993341
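The nonparametric RI construction described above reduces to empirical percentiles; a sketch with simulated cortisol values (the 0.91-4.21 µg/dL interval quoted above is the study's result, not this sketch's output):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical serum cortisol values (ug/dL) for animals sampled within
# 30 min of net deployment; distribution parameters are invented.
cortisol = rng.lognormal(mean=0.6, sigma=0.4, size=120)

def nonparametric_ri(values, coverage=0.95):
    """Central reference interval from empirical percentiles."""
    tail = (1 - coverage) / 2 * 100
    lo, hi = np.percentile(values, [tail, 100 - tail])
    return float(lo), float(hi)

lo, hi = nonparametric_ri(cortisol)
print(f"95% RI: {lo:.2f}-{hi:.2f} ug/dL")

# A new individual's value is then flagged against the interval:
print("within RI:", lo <= 3.0 <= hi)
```

Stratifying such intervals by elapsed sampling time, as the study does, simply means computing them separately for each sampling-time group.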

  4. Influence of glass-ionomer cement on the interface and setting reaction of mineral trioxide aggregate when used as a furcal repair material using laser Raman spectroscopic analysis.

    PubMed

    Nandini, Suresh; Ballal, Suma; Kandaswamy, Deivanayagam

    2007-02-01

The prolonged setting time of mineral trioxide aggregate (MTA) is the main disadvantage of this material. This study analyzes the influence of glass-ionomer cement on the setting of MTA using laser Raman spectroscopy (LRS). Forty hollow glass molds were filled with MTA. In Group I specimens, MTA was layered with glass-ionomer cement after 45 minutes. Similar procedures were done for Groups II and III at 4 hours and 3 days, respectively. No glass ionomer was added in Group IV, which served as the control. Each sample was scanned at various time intervals. At each time interval, the interface between MTA and glass-ionomer cement was also scanned (excluding Group IV). The spectral analysis showed that placement of glass-ionomer cement over MTA after 45 minutes did not affect its setting reaction and that calcium salts may form at the interface of the two materials.

  5. A comparison of single and multiple stressor protocols to assess acute stress in a coastal shark species, Rhizoprionodon terraenovae.

    PubMed

    Hoffmayer, Eric R; Hendon, Jill M; Parsons, Glenn R; Driggers, William B; Campbell, Matthew D

    2015-10-01

    Elasmobranch stress responses are traditionally measured in the field by either singly or serially sampling an animal after a physiologically stressful event. Although capture and handling techniques are effective at inducing a stress response, differences in protocols could affect the degree of stress experienced by an individual, making meaningful comparisons between the protocols difficult, if not impossible. This study acutely stressed Atlantic sharpnose sharks, Rhizoprionodon terraenovae, by standardized capture (rod and reel) and handling methods and implemented either a single or serial blood sampling protocol to monitor four indicators of the secondary stress response. Single-sampled sharks were hooked and allowed to swim around the boat until retrieved for a blood sample at either 0, 15, 30, 45, or 60 min post-hooking. Serially sampled sharks were retrieved, phlebotomized, released while still hooked, and subsequently resampled at 15, 30, 45, and 60 min intervals post-hooking. Blood was analyzed for hematocrit, and plasma glucose, lactate, and osmolality levels. Although both single and serial sampling protocols resulted in an increase in glucose, no significant difference in glucose level was found between protocols. Serially sampled sharks exhibited cumulatively heightened levels for lactate and osmolality at all time intervals when compared to single-sampled animals at the same time. Maximal concentration differences of 217.5, 9.8, and 41.6 % were reported for lactate, osmolality, and glucose levels, respectively. Hematocrit increased significantly over time for the single sampling protocol but did not change significantly during the serial sampling protocol. The differences in resultant blood chemistry levels between implemented stress protocols and durations are significant and need to be considered when assessing stress in elasmobranchs.

  6. Repeated measurements of mite and pet allergen levels in house dust over a time period of 8 years.

    PubMed

    Antens, C J M; Oldenwening, M; Wolse, A; Gehring, U; Smit, H A; Aalberse, R C; Kerkhof, M; Gerritsen, J; de Jongste, J C; Brunekreef, B

    2006-12-01

Studies of the association between indoor allergen exposure and the development of allergic diseases have often measured allergen exposure at one point in time. We investigated the variability of house dust mite (Der p 1, Der f 1) and cat (Fel d 1) allergen in Dutch homes over a period of 8 years. Data were obtained in the Dutch PIAMA birth cohort study. Dust from the child's mattress, the parents' mattress and the living room floor was collected at four points in time, when the child was 3 months, 4, 6 and 8 years old. Dust samples were analysed for Der p 1, Der f 1 and Fel d 1 by sandwich enzyme immunoassay. Mite allergen concentrations for the child's mattress, the parents' mattress and the living room floor were moderately correlated between time-points. Agreement was better for cat allergen. For Der p 1 and Der f 1 on the child's mattress, the within-home variance was close to or smaller than the between-home variance in most cases. For Fel d 1, the within-home variance was almost always smaller than the between-home variance. Results were similar for allergen levels expressed per gram of dust and allergen levels expressed per square metre of the sampled surface. Variance ratios were smaller when samples were taken at shorter time intervals than at longer time intervals. Over a period of 4 years, mite and cat allergens measured in house dust are sufficiently stable to use single measurements with confidence in epidemiological studies. The within-home variance was larger when samples were taken 8 years apart so that over such long periods, repetition of sampling is recommended.

  7. Biodegradation and attenuation of steroidal hormones and alkylphenols by stream biofilms and sediments

    USGS Publications Warehouse

    Writer, Jeffrey; Barber, Larry B.; Ryan, Joseph N.; Bradley, Paul M.

    2011-01-01

    Biodegradation of select endocrine-disrupting compounds (17β-estradiol, estrone, 17α-ethynylestradiol, 4-nonylphenol, 4-nonylphenolmonoexthoylate, and 4-nonylphenoldiethoxylate) was evaluated in stream biofilm, sediment, and water matrices collected from locations upstream and downstream from a wastewater treatment plant effluent discharge. Both biologically mediated transformation to intermediate metabolites and biologically mediated mineralization were evaluated in separate time interval experiments. Initial time intervals (0–7 d) evaluated biodegradation by the microbial community dominant at the time of sampling. Later time intervals (70 and 185 d) evaluated the biodegradation potential as the microbial community adapted to the absence of outside energy sources. The sediment matrix was more effective than the biofilm and water matrices at biodegrading 4-nonylphenol and 17β-estradiol. Biodegradation by the sediment matrix of 17α-ethynylestradiol occurred at later time intervals (70 and 185 d) and was not observed in the biofilm or water matrices. Stream biofilms play an important role in the attenuation of endocrine-disrupting compounds in surface waters due to both biodegradation and sorption processes. Because sorption to stream biofilms and bed sediments occurs on a faster temporal scale (<1 h) than the potential to biodegrade the target compounds (50% mineralization at >185 d), these compounds can accumulate in stream biofilms and sediments.

  8. Autocorrelation of location estimates and the analysis of radiotracking data

    USGS Publications Warehouse

    Otis, D.L.; White, Gary C.

    1999-01-01

    The wildlife literature has been contradictory about the importance of autocorrelation in radiotracking data used for home range estimation and hypothesis tests of habitat selection. By definition, the concept of a home range involves autocorrelated movements, but estimates or hypothesis tests based on sampling designs that predefine a time frame of interest, and that generate representative samples of an animal's movement during this time frame, should not be affected by length of the sampling interval and autocorrelation. Intensive sampling of the individual's home range and habitat use during the time frame of the study leads to improved estimates for the individual, but use of location estimates as the sample unit to compare across animals is pseudoreplication. We therefore recommend against use of habitat selection analysis techniques that use locations instead of individuals as the sample unit. We offer a general outline for sampling designs for radiotracking studies.

  9. Relationship between menstruation status and work conditions in Japan.

    PubMed

    Nishikitani, Mariko; Nakao, Mutsuhiro; Tsurugano, Shinobu; Inoure, Mariko; Yano, Eiji

    2017-01-01

    Menstrual problems can significantly impact daily and work life. In response to a shrinking population, the Japanese government is encouraging more women to participate in the labor force. Actual success in achieving this aim, however, is limited. Specifically, participation in the workforce by women during their reproductive years is impacted by their health, which involves not only work conditions, but also traditional family circumstances. Therefore, it is important to further assess and gather more information about the health status of women who work during their reproductive years in Japan. In particular, women's health can be represented by menstruation status, which is a pivotal indicator. In this study, we assessed the association between short rest periods in work intervals and menstruation and other health status indicators among female workers in Japan. Study participants were recruited from the alumnae of a university, which provided a uniform educational level. All 9864 female alumnae were asked to join the survey and 1630 (17%) accepted. The final sample of study participants (n = 505) were aged 23-43 years, had maintained the same job status for at least 1 year, were not shift workers, had no maternal status, and had no missing data on related variables. The participants were divided into two groups according to interval time, with 11 h between end of work and resumption of daily work as a benchmark. This interval time was based on EU regulations and the goal set by the government of Japan. Health outcomes included: menstrual cycle, dysmenorrhoea symptoms, anxiety regarding health, and satisfaction in terms of health. Multiple logistic regression analyses were conducted to estimate the odds ratios (ORs) and 95% confidence intervals (CIs) for health indexes in association with interval time by adjusting for confounding variables that included both psychosocial and biological factors.
We compared the health status of women in the workforce with and without a sufficient interval time of 11 h/day. Workers who had a short interval time had a significantly higher prevalence of anxiety about health and dissatisfaction with their health. For menstruation status, only abnormal menstruation cycles were observed more often among workers in the short interval group than in the long interval group. However, this association disappeared when biological confounding factors were adjusted in a multivariable regression model. Dysmenorrhea symptoms did not show a statistically significant association with short interval time. This study found a significant association between a short interval time of less than 11 h/day and subjective health indicators and the menstrual health status of women in the workforce. Menstrual health was more affected by biological factors than social psychological factors. A long work time and short interval time could increase worker anxiety and dissatisfaction and may adversely affect the menstrual cycle.

  10. Time-dependent Reliability of Dynamic Systems using Subset Simulation with Splitting over a Series of Correlated Time Intervals

    DTIC Science & Technology

    2013-08-01

    cost due to potential warranty costs, repairs and loss of market share. Reliability is the probability that the system will perform its intended...MCMC and splitting sampling schemes. Our proposed SS/STP method is presented in Section 4, including accuracy bounds and computational effort

  11. Use of Self-Matching to Control for Stable Patient Characteristics While Addressing Time-Varying Confounding on Treatment Effect: A Case Study of Older Intensive Care Patients.

    PubMed

    Han, Ling; Pisani, M A; Araujo, K L B; Allore, Heather G

    Exposure-crossover design offers a non-experimental option to control for stable baseline confounding through self-matching while examining causal effect of an exposure on an acute outcome. This study extends this approach to longitudinal data with repeated measures of exposure and outcome using data from a cohort of 340 older medical patients in an intensive care unit (ICU). The analytic sample included 92 patients who received ≥1 dose of haloperidol, an antipsychotic medication often used for patients with delirium. Exposure-crossover design was implemented by sampling the 3-day time segments prior (Induction) and posterior (Subsequent) to each treatment episode of receiving haloperidol. In the full cohort, there was a trend of increasing delirium severity scores (Mean±SD: 4.4±1.7) over the course of the ICU stay. After exposure-crossover sampling, the delirium severity score decreased from the Induction (4.9) to the Subsequent (4.1) intervals, with the treatment episode falling in-between (4.5). Based on a GEE Poisson model accounting for self-matching and within-subject correlation, the unadjusted mean delirium severity score differed by -0.55 (95% CI: -1.10, -0.01) points for the Subsequent relative to the Induction intervals. The association diminished by 32% (-0.38, 95%CI: -0.99, 0.24) after adjusting only for ICU confounding, while being slightly increased by 7% (-0.60, 95%CI: -1.15, -0.04) when adjusting only for baseline characteristics. These results suggest that longitudinal exposure-crossover design is feasible and capable of partially removing stable baseline confounding through self-matching. Loss of power due to eliminating treatment-irrelevant person-time and uncertainty around allocating person-time to comparison intervals remain methodological challenges.
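    The exposure-crossover sampling described above can be sketched as follows, assuming (hypothetically) that treatment episodes are indexed by day of ICU stay and that the Induction and Subsequent windows are the 3 days immediately before and after each episode; the function name and window convention are illustrative.

```python
# Hypothetical sketch of exposure-crossover sampling: for each treatment day,
# take the `window` days before it (Induction) and after it (Subsequent).
# The treatment day itself sits between the two comparison intervals.

def crossover_segments(treatment_days, window=3):
    segs = []
    for d in treatment_days:
        segs.append(("induction", list(range(d - window, d))))
        segs.append(("subsequent", list(range(d + 1, d + 1 + window))))
    return segs

# a patient treated on day 5 contributes days 2-4 and days 6-8
print(crossover_segments([5]))
```

    Outcomes measured within each segment would then be compared with a model that accounts for self-matching, such as the GEE Poisson model the abstract describes.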

  12. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies

    PubMed Central

    2014-01-01

    Background The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. Methods The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. Results The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a comparable coverage probability as the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. Conclusions If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used. PMID:24552686

  13. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.

    PubMed

    Kottas, Martina; Kuss, Oliver; Zapf, Antonia

    2014-02-19

    The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a comparable coverage probability as the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.
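    The pocket-calculator arithmetic described above can be sketched as follows. This is the textbook Wald form for a single proportion applied to an estimated AUC with the total sample size, plus an optional 1/(2n) continuity correction; the paper's exact modification may differ, and the function name and numbers are illustrative.

```python
import math

def wald_auc_ci(auc, n, z=1.96, continuity=False):
    """Wald-type confidence interval treating the AUC as a proportion.
    auc: estimated AUC; n: total sample size. Optionally widens the
    interval by the 1/(2n) continuity correction."""
    half = z * math.sqrt(auc * (1 - auc) / n)
    if continuity:
        half += 1.0 / (2 * n)
    # clip to the meaningful [0, 1] range for a probability
    return max(0.0, auc - half), min(1.0, auc + half)

lo, hi = wald_auc_ci(0.85, 60)
print(round(lo, 3), round(hi, 3))
```

    With `continuity=True` the interval widens slightly, matching the abstract's recommendation to use the correction for small sample sizes.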

  14. Time Intervals in Sequence Sampling, Not Data Modifications, Have a Major Impact on Estimates of HIV Escape Rates

    PubMed Central

    2018-01-01

    The ability of human immunodeficiency virus (HIV) to avoid recognition by humoral and cellular immunity (viral escape) is well-documented, but the strength of the immune response needed to cause such a viral escape remains poorly quantified. Several previous studies observed a more rapid escape of HIV from CD8 T cell responses in the acute phase of infection compared to chronic infection. The rate of HIV escape was estimated with the help of simple mathematical models, and results were interpreted to suggest that CD8 T cell responses causing escape in acute HIV infection may be more efficient at killing virus-infected cells than responses that cause escape in chronic infection, or alternatively, that early escapes occur in epitopes in which mutations carry minimal fitness cost to the virus. However, these conclusions were challenged on several grounds, including linkage and interference of multiple escape mutations due to a low population size and because of potential issues associated with modifying the data to estimate escape rates. Here we use a sampling method which does not require data modification to show that previous results on the decline of the viral escape rate with time since infection remain unchanged. However, using this method we also show that estimates of the escape rate are highly sensitive to the time interval between measurements, with longer intervals biasing estimates of the escape rate downwards. Our results thus suggest that data modifications for early and late escapes were not the primary reason for the observed decline in the escape rate with time since infection. However, longer sampling periods for escapes in chronic infection strongly influence estimates of the escape rate. More frequent sampling of viral sequences in chronic infection may improve our understanding of factors influencing the rate of HIV escape from CD8 T cell responses. PMID:29495443

  15. Time Intervals in Sequence Sampling, Not Data Modifications, Have a Major Impact on Estimates of HIV Escape Rates.

    PubMed

    Ganusov, Vitaly V

    2018-02-27

    The ability of human immunodeficiency virus (HIV) to avoid recognition by humoral and cellular immunity (viral escape) is well-documented, but the strength of the immune response needed to cause such a viral escape remains poorly quantified. Several previous studies observed a more rapid escape of HIV from CD8 T cell responses in the acute phase of infection compared to chronic infection. The rate of HIV escape was estimated with the help of simple mathematical models, and results were interpreted to suggest that CD8 T cell responses causing escape in acute HIV infection may be more efficient at killing virus-infected cells than responses that cause escape in chronic infection, or alternatively, that early escapes occur in epitopes in which mutations carry minimal fitness cost to the virus. However, these conclusions were challenged on several grounds, including linkage and interference of multiple escape mutations due to a low population size and because of potential issues associated with modifying the data to estimate escape rates. Here we use a sampling method which does not require data modification to show that previous results on the decline of the viral escape rate with time since infection remain unchanged. However, using this method we also show that estimates of the escape rate are highly sensitive to the time interval between measurements, with longer intervals biasing estimates of the escape rate downwards. Our results thus suggest that data modifications for early and late escapes were not the primary reason for the observed decline in the escape rate with time since infection. However, longer sampling periods for escapes in chronic infection strongly influence estimates of the escape rate. More frequent sampling of viral sequences in chronic infection may improve our understanding of factors influencing the rate of HIV escape from CD8 T cell responses.
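    Escape rates in this literature are commonly estimated from the change in the logit of the escape-variant frequency between two sampling times, assuming logistic growth of the variant. The sketch below (names and numbers mine, not the paper's data) also illustrates the downward bias the abstract describes: when observed frequencies are bounded at detection limits, the estimated rate scales as 1/Δt, so longer sampling intervals yield smaller estimates.

```python
import math

def logit(f):
    return math.log(f / (1.0 - f))

def escape_rate(f1, t1, f2, t2):
    """Escape rate (per day) from the change in the logit of the
    escape-variant frequency between two sampling times, assuming
    logistic growth of the variant between measurements."""
    return (logit(f2) - logit(f1)) / (t2 - t1)

# If the variant is below detection at the first sample and essentially fixed
# at the second, frequencies get bounded (here at 0.01 and 0.99), and the
# estimate shrinks as the sampling interval grows:
for dt in (10, 30, 100):
    print(dt, round(escape_rate(0.01, 0, 0.99, dt), 3))
```

    Under these bounds the numerator is fixed at 2·ln(99), so only the interval length drives the estimate, which is the bias mechanism at issue.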

  16. Synthesis of compact patterns for NMR relaxation decay in intelligent "electronic tongue" for analyzing heavy oil composition

    NASA Astrophysics Data System (ADS)

    Lapshenkov, E. M.; Volkov, V. Y.; Kulagin, V. P.

    2018-05-01

    The article is devoted to the problem of pattern creation of the NMR sensor signal for subsequent recognition by the artificial neural network in the intelligent device "the electronic tongue". The specific problem of removing redundant data from the spin-spin relaxation signal pattern that is used as a source of information in analyzing the composition of oil and petroleum products is considered. The proposed method makes it possible to remove redundant data from the relaxation decay pattern without introducing additional distortion. It is based on combining relaxation decay curve intervals whose individual increments are below the noise level, such that the increment of each combined interval rises above the noise level. In this case, the relaxation decay curve samples located inside the combined intervals are removed from the pattern. This method was tested on heavy-oil NMR signal patterns created by using the Carr-Purcell-Meiboom-Gill (CPMG) sequence for recording the relaxation process. The CPMG sequence parameters were a 100 μs time interval between 180° pulses and a 0.4 s measurement duration. As a result, it was revealed that the proposed method reduced the number of samples by a factor of 15 (from 4000 to 270), with a maximum detected root mean square error (RMS error) of 0.00239 (equivalent to a signal-to-noise ratio of 418).
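    The interval-combining step described above can be sketched with a simple rule: drop interior samples until the accumulated change since the last kept sample exceeds the noise level. The function name, threshold, and synthetic decay below are illustrative assumptions, not the article's data or exact algorithm.

```python
import math

def compress_decay(samples, noise):
    """Keep a sample only once the decay has changed by at least `noise`
    since the last kept sample; interior points whose accumulated
    increments stay below the noise level are dropped."""
    kept = [samples[0]]
    for s in samples[1:]:
        if abs(s - kept[-1]) >= noise:
            kept.append(s)
    if kept[-1] != samples[-1]:
        kept.append(samples[-1])    # always retain the end of the decay
    return kept

# synthetic CPMG-like exponential decay, 4000 echoes as in the article
decay = [math.exp(-0.002 * i) for i in range(4000)]
reduced = compress_decay(decay, noise=0.01)
print(len(decay), len(reduced))
```

    Because the decay flattens at long times, most of the removed points come from the tail, where successive echoes differ by far less than the noise, which is exactly where the pattern carries redundant data.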

  17. EFFECT ON PERFUSION VALUES OF SAMPLING INTERVAL OF CT PERFUSION ACQUISITIONS IN NEUROENDOCRINE LIVER METASTASES AND NORMAL LIVER

    PubMed Central

    Ng, Chaan S.; Hobbs, Brian P.; Wei, Wei; Anderson, Ella F.; Herron, Delise H.; Yao, James C.; Chandler, Adam G.

    2014-01-01

    Objective To assess the effects of sampling interval (SI) of CT perfusion acquisitions on CT perfusion values in normal liver and liver metastases from neuroendocrine tumors. Methods CT perfusion studies in 16 patients with neuroendocrine liver metastases were analyzed by distributed parameter modeling to yield tissue blood flow, blood volume, mean transit time, permeability, and hepatic arterial fraction, for tumor and normal liver. CT perfusion values for the reference sampling interval of 0.5s (SI0.5) were compared with those of SI datasets of 1s, 2s, 3s and 4s, using mixed-effects model analyses. Results Increases in SI beyond 1s were associated with significant and increasing departures of CT perfusion parameters from reference values at SI0.5 (p≤0.0009). CT perfusion values deviated from reference with increasing uncertainty with increasing SIs. Findings for normal liver were concordant. Conclusion Increasing SIs beyond 1s yield significantly different CT perfusion parameter values compared to reference values at SI0.5. PMID:25626401

  18. ODP Site 1063 (Bermuda Rise) revisited: Oxygen isotopes, excursions and paleointensity in the Brunhes Chron

    NASA Astrophysics Data System (ADS)

    Channell, J. E. T.; Hodell, D. A.; Curtis, J. H.

    2012-02-01

    An age model for the Brunhes Chron of Ocean Drilling Program (ODP) Site 1063 (Bermuda Rise) is constructed by tandem correlation of oxygen isotope and relative paleointensity data to calibrated reference templates. Four intervals in the Brunhes Chron where paleomagnetic inclinations are negative for both u-channel samples and discrete samples are correlated to the following magnetic excursions with Site 1063 ages in brackets: Laschamp (41 ka), Blake (116 ka), Iceland Basin (190 ka), Pringle Falls (239 ka). These ages are consistent with current age estimates for three of these excursions, but not for "Pringle Falls" which has an apparent age older than a recently published estimate by ˜28 kyr. For each of these excursions (termed Category 1 excursions), virtual geomagnetic poles (VGPs) reach high southerly latitudes implying paired polarity reversals of the Earth's main dipole field, that apparently occurred in a brief time span (<2 kyr in each case), several times shorter than the apparent duration of regular polarity transitions. In addition, several intervals of low paleomagnetic inclination (low and negative in one case) are observed both in u-channel and discrete samples at ˜318 ka (MIS 9), ˜412 ka (MIS 11) and in the 500-600 ka interval (MIS 14-15). These "Category 2" excursions may constitute inadequately recorded (Category 1) excursions, or high amplitude secular variation.

  19. Correlation of lithologic and sonic logs from the COST No. B-2 well with seismic reflection data

    USGS Publications Warehouse

    King, K.C.

    1979-01-01

    The purpose of this study was to correlate events recorded on seismic records with changes in lithology recorded from sample descriptions from the Continental Offshore Stratigraphic Test (COST) No. B-2 well.  The well is located on the U.S. mid-Atlantic Outer Continental Shelf about 146 km east of Atlantic City, N.J. (see location map).  Lithologic data are summarized from the sample descriptions of Smith and others (1976).  Sonic travel times were read at 0.15 m intervals in the well using a long-space sonic logging tool.  Interval velocities, reflection coefficients and a synthetic seismogram were calculated from the sonic log.
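    The log-to-seismogram chain described above (interval velocities, reflection coefficients, synthetic seismogram) can be sketched as follows. Densities are omitted here, approximating impedance by velocity alone, a simplification sometimes used when only a sonic log is available; the velocities and wavelet are illustrative, not the COST No. B-2 data.

```python
# Sketch of the sonic-log workflow: reflection coefficients at layer
# boundaries from interval velocities, then a synthetic seismogram by
# convolving the reflectivity series with a source wavelet.

def reflection_coefficients(velocities):
    """R_i = (v_{i+1} - v_i) / (v_{i+1} + v_i) at each interface
    (density dropped from the acoustic-impedance contrast)."""
    return [(v2 - v1) / (v2 + v1)
            for v1, v2 in zip(velocities, velocities[1:])]

def convolve(series, wavelet):
    """Plain full convolution of the reflectivity series with a wavelet."""
    out = [0.0] * (len(series) + len(wavelet) - 1)
    for i, r in enumerate(series):
        for j, w in enumerate(wavelet):
            out[i + j] += r * w
    return out

velocities = [1800.0, 2100.0, 2000.0, 2600.0]   # m/s, illustrative layers
rc = reflection_coefficients(velocities)
seismogram = convolve(rc, [0.5, 1.0, 0.5])      # toy symmetric wavelet
print([round(r, 3) for r in rc])
```

    A velocity increase across an interface gives a positive reflection coefficient and a decrease gives a negative one, which is what ties lithologic changes in the well to events on the seismic records.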

  20. Centralized and decentralized global outer-synchronization of asymmetric recurrent time-varying neural network by data-sampling.

    PubMed

    Lu, Wenlian; Zheng, Ren; Chen, Tianping

    2016-03-01

    In this paper, we discuss outer-synchronization of the asymmetrically connected recurrent time-varying neural networks. By using both centralized and decentralized discretization data sampling principles, we derive several sufficient conditions based on three vector norms to guarantee that the difference of any two trajectories starting from different initial values of the neural network converges to zero. The lower bounds of the common time intervals between data samples in centralized and decentralized principles are proved to be positive, which guarantees exclusion of Zeno behavior. A numerical example is provided to illustrate the efficiency of the theoretical results.

  1. Recalcitrant Behavior of Temperate Forest Tree Seeds: Storage, Biochemistry, and Physiology

    Treesearch

    Kristina F. Connor; Sharon Sowa

    2002-01-01

    The recalcitrant behavior of seeds of live oak (Quercus virginiana Mill.), and Durand oak (Quercus durandii Buckl.) was examined after hydrated storage at two temperatures, +4°C and -2°C for up to 1 year. Samples were collected and analyses performed at monthly intervals. At each sampling time, seeds were tested for viability and...

  2. In vivo gamma-rays induced initial DNA damage and the effect of famotidine in mouse leukocytes as assayed by the alkaline comet assay.

    PubMed

    Mozdarani, Hossein; Nasirian, Borzo; Haeri, S Abolghasem

    2007-03-01

    Ionizing radiation induces a variety of lesions in DNA, each of which can be used as a bio-indicator for biological dosimetry or the study of the radioprotective effects of substances. To assess gamma ray-induced DNA damage in vivo in mouse leukocytes at various doses and the effect of famotidine, blood was collected from Balb/c male mice after irradiation with 4 Gy gamma-rays at different time intervals post-irradiation. To assess the dose response, mice were irradiated with gamma-ray doses of 1 to 4 Gy. Famotidine was injected intra-peritoneally (i.p.) at a dose of 5 mg/kg at various time intervals before irradiation. Four slides were prepared from each sample and the alkaline comet assay was performed using standard protocols. The results show that radiation significantly increases DNA damage in leukocytes in a dose-dependent manner (p < 0.01) when an appropriate post-irradiation sampling time is used, because longer sampling times after irradiation resulted in a time-dependent disappearance of DNA damage. Treatment with only 5 mg/kg famotidine before 4 Gy irradiation led to an almost 50% reduction in DNA damage compared with animals that received radiation alone. The radioprotective capability of famotidine might be attributed to radical scavenging properties and an anti-oxidation mechanism.

  3. A validation of ground ambulance pre-hospital times modeled using geographic information systems.

    PubMed

    Patel, Alka B; Waters, Nigel M; Blanchard, Ian E; Doig, Christopher J; Ghali, William A

    2012-10-03

    Evaluating geographic access to health services often requires determining the patient travel time to a specified service. For urgent care, many research studies have modeled patient pre-hospital time by ground emergency medical services (EMS) using geographic information systems (GIS). The purpose of this study was to determine if the modeling assumptions proposed through prior United States (US) studies are valid in a non-US context, and to use the resulting information to provide revised recommendations for modeling travel time using GIS in the absence of actual EMS trip data. The study sample contained all emergency adult patient trips within the Calgary area for 2006. Each record included four components of pre-hospital time (activation, response, on-scene and transport interval). The actual activation and on-scene intervals were compared with those used in published models. The transport interval was calculated within GIS using the Network Analyst extension of Esri ArcGIS 10.0 and the response interval was derived using previously established methods. These GIS derived transport and response intervals were compared with the actual times using descriptive methods. We used the information acquired through the analysis of the EMS trip data to create an updated model that could be used to estimate travel time in the absence of actual EMS trip records. There were 29,765 complete EMS records for scene locations inside the city and 529 outside. The actual median on-scene intervals were 7-8 minutes longer than the averages previously reported. Actual EMS pre-hospital times across our study area were significantly higher than the estimated times modeled using GIS and the original travel time assumptions. Our updated model, although still underestimating the total pre-hospital time, more accurately represents the true pre-hospital time in our study area.
The widespread use of generalized EMS pre-hospital time assumptions based on US data may not be appropriate in a non-US context. The preference for researchers should be to use actual EMS trip records from the proposed research study area. In the absence of EMS trip data researchers should determine which modeling assumptions more accurately reflect the EMS protocols across their study area.

  4. Comparison of oral and intramuscular recombinant canine distemper vaccination in African wild dogs (Lycaon pictus).

    PubMed

    Connolly, Maren; Thomas, Patrick; Woodroffe, Rosie; Raphael, Bonnie L

    2013-12-01

    A series of three doses of recombinant canary-pox-vectored canine distemper virus vaccine was administered at 1-mo intervals, orally (n = 8) or intramuscularly (n = 13), to 21 previously unvaccinated juvenile African wild dogs (Lycaon pictus) at the Wildlife Conservation Society's Bronx Zoo. Titers were measured by serum neutralization at each vaccination and at intervals over a period of 3.5-21.5 mo after the initial vaccination. All postvaccination titers were negative for orally vaccinated animals at all sampling time points. Of the animals that received intramuscular vaccinations, 100% had presumed protective titers by the end of the course of vaccination, but only 50% of those sampled at 6.5 mo postvaccination had positive titers. None of the three animals sampled at 21.5 mo postvaccination had positive titers.

  5. Changes in crash risk following re-timing of traffic signal change intervals.

    PubMed

    Retting, Richard A; Chapline, Janella F; Williams, Allan F

    2002-03-01

    More than 1 million motor vehicle crashes occur annually at signalized intersections in the USA. The principal method used to prevent crashes associated with routine changes in signal indications is employment of a traffic signal change interval--a brief yellow and all-red period that follows the green indication. No universal practice exists for selecting the duration of change intervals, and little is known about the influence of the duration of the change interval on crash risk. The purpose of this study was to estimate potential crash effects of modifying the duration of traffic signal change intervals to conform with values associated with a proposed recommended practice published by the Institute of Transportation Engineers. A sample of 122 intersections was identified and randomly assigned to experimental and control groups. Of 51 eligible experimental sites, 40 (78%) needed signal timing changes. For the 3-year period following implementation of signal timing changes, there was an 8% reduction in reportable crashes at experimental sites relative to those occurring at control sites (P = 0.08). For injury crashes, a 12% reduction at experimental sites relative to those occurring at control sites was found (P = 0.03). Pedestrian and bicycle crashes at experimental sites decreased 37% (P = 0.03) relative to controls. Given these results and the relatively low cost of re-timing traffic signals, modifying the duration of traffic signal change intervals to conform with values associated with the Institute of Transportation Engineers' proposed recommended practice should be strongly considered by transportation agencies to reduce the frequency of urban motor vehicle crashes.

  6. Soil moisture determination study. [Guymon, Oklahoma

    NASA Technical Reports Server (NTRS)

    Blanchard, B. J.

    1979-01-01

    Soil moisture data collected in conjunction with aircraft sensor and SEASAT SAR data taken near Guymon, Oklahoma are summarized. In order to minimize the effects of vegetation and roughness, three bare and uniformly smooth fields were sampled 6 times at three day intervals on the flight days from August 2 through 17. Two fields remained unirrigated and dry. A similar pair of fields was irrigated at different times during the sample period. In addition, eighteen other fields were sampled on the nonflight days, with no field being sampled more than 24 hours from a flight time. The aircraft sensors used included either black and white or color infrared photography, L and C band passive microwave radiometers, the 13.3, 4.75, 1.6 and 0.4 GHz scatterometers, the 11 channel modular microwave scanner, and the PRT5.

  7. Postmortem Cholesterol Levels in Peripheral Nerve Tissue: Preliminary Considerations on Interindividual and Intraindividual Variation.

    PubMed

    Vacchiano, Giuseppe; Luna Maldonado, Aurelio; Matas Ros, Maria; Fiorenza, Elisa; Silvestre, Angela; Simonetti, Biagio; Pieri, Maria

    2018-06-01

    The study reports the evolution of the demyelinization process based on cholesterol ([CHOL]) levels quantified in median nerve samples collected at different times from death, from both the right and left wrists. The statistical data show that the phenomenon evolves differently in the right and left nerves. Such a difference can reasonably be attributed to a different multicenter evolution of the demyelinization. For data analysis, the enrolled subjects were grouped by similar postmortem intervals (PMIs), considering 3 intervals: PMI < 48 hours, 48 hours < PMI < 78 hours, and PMI > 78 hours. Data obtained from tissue dissected within 48 hours of death allowed for a PMI estimation according to the following equations: PMI = 0.000 + 0.7623 [CHOL]right (R = 0.581) for the right wrist and PMI = 0.000 + 0.8911 [CHOL]left (R = 0.794) for the left wrist. At present, this correlation cannot be considered definitive because of the small size of the samples analyzed, and because differences in sampling time and interindividual and intraindividual variation may influence the demyelinization process.
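    Applying the regression equations reported above is straightforward arithmetic; the sketch below simply encodes them. Note the stated validity limit (tissue dissected within 48 hours of death); the cholesterol value used is illustrative and units follow the original study.

```python
# The abstract's zero-intercept regressions for PMI (hours) from median
# nerve cholesterol, one equation per wrist. Input value is illustrative.

def pmi_right(chol_right):
    return 0.7623 * chol_right    # PMI = 0.000 + 0.7623 [CHOL]right

def pmi_left(chol_left):
    return 0.8911 * chol_left     # PMI = 0.000 + 0.8911 [CHOL]left

print(round(pmi_right(30.0), 2), round(pmi_left(30.0), 2))
```

    The modest correlation coefficients (R = 0.581 right, R = 0.794 left) are a reminder that these point estimates carry wide uncertainty, as the authors themselves caution.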

  8. Circulating intact and cleaved forms of the urokinase-type plasminogen activator receptor: biological variation, reference intervals and clinical useful cut-points.

    PubMed

    Thurison, Tine; Christensen, Ib J; Lund, Ida K; Nielsen, Hans J; Høyer-Hansen, Gunilla

    2015-01-15

    High levels of circulating forms of the urokinase-type plasminogen activator receptor (uPAR) are significantly associated to poor prognosis in cancer patients. Our aim was to determine biological variations and reference intervals of the uPAR forms in blood, and in addition, to test the clinical relevance of using these as cut-points in colorectal cancer (CRC) prognosis. uPAR forms were measured in citrated and EDTA plasma samples using time-resolved fluorescence immunoassays. Diurnal, intra- and inter-individual variations were assessed in plasma samples from cohorts of healthy individuals. Reference intervals were determined in plasma from healthy individuals randomly selected from a Danish multi-center cross-sectional study. A cohort of CRC patients was selected from the same cross-sectional study. The reference intervals showed a slight increase with age and women had ~20% higher levels. The intra- and inter-individual variations were ~10% and ~20-30%, respectively and the measured levels of the uPAR forms were within the determined 95% reference intervals. No diurnal variation was found. Applying the normal upper limit of the reference intervals as cut-point for dichotomizing CRC patients revealed significantly decreased overall survival of patients with levels above this cut-point of any uPAR form. The reference intervals for the different uPAR forms are valid and the upper normal limits are clinically relevant cut-points for CRC prognosis.

  9. Effect of adhesive materials on shear bond strength of a mineral trioxide aggregate.

    PubMed

    Ali, Ahmed; Banerjee, Avijit; Mannocci, Francesco

    2016-02-01

    To compare the shear bond strength (SBS) and fractography between mineral trioxide aggregate (MTA) and glass-ionomer cement (GIC) or resin composite (RC) after varying MTA setting time intervals. MTA was mixed and packed into standardized cavities (4 mm diameter × 3 mm depth) in acrylic blocks. RC with 37% H₃PO₄ and a type 2 (etch-and-rinse) adhesive, or conventional GIC, was bonded to the exposed MTA sample surfaces after 10-minute, 24-hour, 72-hour and 30-day MTA setting intervals (n = 10/group, eight groups). Samples were stored (37°C, 24 hours, 100% humidity) before SBS testing and statistical analysis (ANOVA, Tukey LSD, P < 0.05). Fractography was undertaken using stereomicroscopy for all samples and SEM for three random samples/group. Significant differences between all groups were found (P = 0.002). SBS of RC:MTA (max 5.09 ± 1.79 MPa) was higher than that of GIC:MTA (max 3.74 ± 0.70 MPa) in the 24-hour, 72-hour and 30-day groups, except in the 10-minute MTA setting time groups, where SBS of GIC:MTA was higher. There was a significant effect of time on SBS of RC:MTA (P = 0.008) and no effect on SBS of GIC:MTA (P = 0.300). Fractography revealed mixed (adhesive/cohesive) failures in all groups; in the RC:MTA groups adhesive failure decreased with time, in contrast to the GIC:MTA groups.

  10. Detection of Gastrointestinal Pathogens from Stool Samples on Hemoccult Cards by Multiplex PCR.

    PubMed

    Alberer, Martin; Schlenker, Nicklas; Bauer, Malkin; Helfrich, Kerstin; Mengele, Carolin; Löscher, Thomas; Nothdurft, Hans Dieter; Bretzel, Gisela; Beissner, Marcus

    2017-01-01

    Purpose. Up to 30% of international travelers are affected by travelers' diarrhea (TD). Reliable data on the etiology of TD is lacking. Sufficient laboratory capacity at travel destinations is often unavailable, and transporting conventional stool samples to the home country is inconvenient. We evaluated the use of Hemoccult cards for stool sampling combined with a multiplex PCR for the detection of model viral, bacterial, and protozoal TD pathogens. Methods. Following the creation of serial dilutions for each model pathogen, last positive dilution steps (LPDs) and the last positive sample concentrations (LPCs) calculated from them were compared between conventional stool samples and card samples. Furthermore, card samples were tested after a prolonged time interval simulating storage during a travel duration of up to 6 weeks. Results. The LPDs/LPCs were comparable to testing of conventional stool samples. After storage on Hemoccult cards, the recovery rate was 97.6% for C. jejuni, 100% for E. histolytica, 97.6% for norovirus GI, and 100% for GII. Detection of expected pathogens was possible at weekly intervals up to 42 days. Conclusion. Stool samples on Hemoccult cards stored at room temperature can be used in combination with a multiplex PCR as a reliable tool for testing of TD pathogens.

  11. The impact of children's internalizing and externalizing problems on parenting: Transactional processes and reciprocal change over time.

    PubMed

    Serbin, Lisa A; Kingdon, Danielle; Ruttle, Paula L; Stack, Dale M

    2015-11-01

    Most theoretical models of developmental psychopathology involve a transactional, bidirectional relation between parenting and children's behavior problems. The present study utilized a cross-lagged panel, multiple-interval design to model change in bidirectional relations between child and parent behavior across successive developmental periods. Two major categories of child behavior problems, internalizing and externalizing, and two aspects of parenting, positive parenting (use of support and structure) and harsh discipline (use of physical punishment), were modeled across three time points spaced 3 years apart. Two successive developmental intervals, from approximately age 7.5 to 10.5 and from 10.5 to 13.5, were included. Mother-child dyads (N = 138; 65 boys) from a lower-income longitudinal sample of families participated; using standardized measures, mothers rated their own parenting behavior and teachers reported on children's behavior. Results revealed different types of reciprocal relations between specific aspects of child and parent behavior, with internalizing problems predicting an increase in positive parenting over time, which subsequently led to a reduction in internalizing problems across the successive 3-year interval. In contrast, externalizing problems predicted reduced levels of positive parenting in a reciprocal sequence that extended across the two successive intervals and predicted increased levels of externalizing over time. Implications for prevention and early intervention are discussed.

  12. A 4.2 ps Time-Interval RMS Resolution Time-to-Digital Converter Using a Bin Decimation Method in an UltraScale FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Liu, Chong

    2016-10-01

    The common solution for a field programmable gate array (FPGA)-based time-to-digital converter (TDC) is to construct a tapped delay line (TDL) for time interpolation to yield sub-clock time resolution. The granularity and uniformity of the delay elements of the TDL determine the TDC time resolution. In this paper, we propose a dual-sampling TDL architecture and a bin decimation method that make the delay elements as small and uniform as possible, so that the implemented TDCs can achieve a time resolution beyond the intrinsic cell delay. Two identical, fully hardware-based TDCs were implemented in a Xilinx UltraScale FPGA for performance evaluation. For fixed time intervals in the range from 0 to 440 ns, the average time-interval RMS resolution measured between the two TDCs is 4.2 ps; the timestamp resolution of a single TDC is thus derived as 2.97 ps. The maximum hit rate of the TDC is as high as half the FPGA system clock rate, namely 250 MHz in our demo prototype. Because conventional online bin-by-bin calibration is not needed, the implementation of the proposed TDC is straightforward and relatively resource-saving.
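    The single-channel figure follows from the two-channel measurement: if the two identical TDCs contribute independent, equal timestamp jitter, the measured interval RMS is √2 times the per-channel RMS. A sketch of that derivation (the function name is illustrative):

    ```python
    import math

    def single_channel_rms(interval_rms_ps):
        # Independent, equal jitter in each channel adds in quadrature,
        # so sigma_interval = sqrt(2) * sigma_channel.
        return interval_rms_ps / math.sqrt(2)

    # 4.2 ps measured between the two TDCs -> about 2.97 ps per TDC
    ```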

  13. Solid sorbent air sampler

    NASA Technical Reports Server (NTRS)

    Galen, T. J. (Inventor)

    1986-01-01

    A fluid sampler for collecting a plurality of discrete samples over separate time intervals is described. The sampler comprises a sample assembly having an inlet and a plurality of discrete sample tubes, each of which has inlet and outlet sides. A multiport, dual-acting valve is provided in the sampler in order to sequentially pass air from the sample inlet into the selected sample tubes. The sample tubes extend longitudinally of the housing and are located about its outer periphery so that, upon removal of an enclosure cover, they are readily accessible for operation of the sampler in an analysis mode.

  14. Assessment of flubendiamide residues in pigeon pea in different agro-climatic zones of India.

    PubMed

    Kale, V D; Walunj, A R; Battu, R S; Sahoo, Sanjay K; Singh, Balwinder; Paramasivam, M; Roy, Sankhajit; Banerjee, Tirthankar; Banerjee, Hemanta; Rao, Cherukuri Sreenivasa; Reddy, D Jagdishwar; Reddy, K Narasimha; Reddy, C Narendra; Tripathy, Vandana; Jaya, Maisnam; Pant, Shashi; Gupta, Monika; Singh, Geeta; Sharma, K K

    2012-07-01

    Supervised field trials were conducted at the research farms of four agricultural universities located in different agro-climatic zones of India to determine the harvest-time residues of flubendiamide and its des-iodo metabolite on pigeon pea (Cajanus cajan) during 2006-2007. Two spray applications of flubendiamide 20 WDG at 50 g (T(1)) and 100 g (T(2)) a.i./ha were given to the crop at a 15-day interval. Foliage samples at different time intervals were drawn at only one location; however, the harvest-time samples of pigeon pea grain, shell, and straw were drawn at all four locations. The residues were estimated by HPLC coupled with a UV-VIS variable-wavelength detector. No residues of flubendiamide or its des-iodo metabolite were found at harvest of the crop at or above the LOQ level of 0.05 μg/g. On the basis of the data generated, a pre-harvest interval (PHI) of 28 days has been recommended, flubendiamide 20 WDG has been registered for use on pigeon pea by the Central Insecticide Board and Registration Committee, Ministry of Agriculture, Government of India, and the MRL has been fixed by the Ministry of Health and Family Welfare, Government of India, under the Prevention of Food Adulteration Act as 0.05 μg/g on pigeon pea grains.

  15. A gravimetric technique for evaluating flow continuity from two infusion devices.

    PubMed

    Leff, R D; True, W R; Roberts, R J

    1987-06-01

    A computerized gravimetric technique for examining the flow continuity from infusion devices was developed, and two infusion devices with different mechanisms of pump operation were evaluated to illustrate this technique. A BASIC program that records serial weight measurements and calculates weight change from previous determinations was written for and interfaced with a gravimetric balance and IBM PC. A plot of effused weight (normalized weight change that reflects the difference between desired timed-sample interval and actual time) versus time (desired timed-sample interval) was constructed. The gravimetric technique was evaluated using both a peristaltic-type and a piston-type infusion pump. Intravenous solution (5% dextrose and 0.9% sodium chloride) was effused at 10 mL/hr and collected in a beaker. Weights were measured at 10-second intervals over a two-hour infusion period, and the weights of the effused solution were plotted versus time. Flow continuity differed between the two infusion devices. Actual effused weight decreased to 0.007 g/10 sec during the refill cycle of the piston-type pump; the mean (+/- S.D.) effused weight was 0.029 +/- 0.002 g/10 sec. The desired effusion rate was 0.028 g/10 sec. The peristaltic pump had greater flow continuity, with a mean effusion weight of 0.028 +/- 0.003 g/10 sec. The gravimetric technique described in this report can be used to quantitatively depict the effusion profiles of infusion devices. Further studies are needed to identify the degree of flow continuity that is clinically acceptable for infusion devices.
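    The normalization described above — effused weight corrected for the difference between the desired and actual timed-sample interval — can be sketched as follows. The original was a BASIC program interfaced with the balance; this Python version and its names are illustrative only:

    ```python
    def effused_weights(weights_g, times_s, interval_s=10.0):
        """Normalized weight change per desired sampling interval.

        Scales each raw weight change by desired/actual interval length,
        compensating for timing jitter in the serial balance readings.
        """
        out = []
        for i in range(1, len(weights_g)):
            dw = weights_g[i] - weights_g[i - 1]
            dt = times_s[i] - times_s[i - 1]
            out.append(dw * interval_s / dt)
        return out
    ```

    Plotting these values against the desired sample times reproduces the effusion profile used to compare the two pumps.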

  16. Effects of time and sampling location on concentrations of β-hydroxybutyric acid in dairy cows.

    PubMed

    Mahrt, A; Burfeind, O; Heuwieser, W

    2014-01-01

    Two trials were conducted to examine factors potentially influencing the measurement of blood β-hydroxybutyric acid (BHBA) in dairy cows. The objective of the first trial was to study effects of sampling time on BHBA concentration in continuously fed dairy cows. Furthermore, we determined the test characteristics of a single BHBA measurement at a random time of the day to diagnose subclinical ketosis, considering commonly used cut-points (1.2 and 1.4 mmol/L). Finally, we set out to evaluate whether test characteristics could be enhanced by repeating measurements after different time intervals. During 4 herd visits, a total of 128 cows (8 to 28 d in milk) fed 10 times daily were screened at 0900 h and preselected by BHBA concentration. Blood samples were drawn from the tail vessels and BHBA concentrations were measured using an electronic BHBA meter (Precision Xceed, Abbott Diabetes Care Ltd., Witney, UK). Cows with BHBA concentrations ≥0.8 mmol/L at this time were enrolled in the trial (n = 92). Subsequent BHBA measurements took place every 3 h for a total of 8 measurements during 24 h. The effect of sampling time on BHBA concentrations was tested in a repeated-measures ANOVA with sampling time as the repeated factor. Sampling time did not affect BHBA concentrations in continuously fed dairy cows. Defining the average daily BHBA concentration calculated from the 8 measurements as the gold standard, a single measurement at a random time of the day to diagnose subclinical ketosis had a sensitivity of 0.90 or 0.89 at the 2 BHBA cut-points (1.2 and 1.4 mmol/L). Specificity was 0.88 or 0.90 using the same cut-points. Repeating measurements after different time intervals improved test characteristics only slightly. In the second experiment, we compared BHBA concentrations of samples drawn from 3 different blood sampling locations (tail vessels, jugular vein, and mammary vein) of 116 lactating dairy cows. Concentrations of BHBA differed among the 3 sampling locations. Mean BHBA concentration was 0.3 mmol/L lower when measured in the mammary vein compared with the jugular vein and 0.4 mmol/L lower in the mammary vein compared with the tail vessels. We conclude that, to measure BHBA, blood samples of continuously fed dairy cows can be drawn at any time of the day. A single measurement provides very good test characteristics for on-farm conditions. Blood samples for BHBA measurement should be drawn from the jugular vein or tail vessels; the mammary vein should not be used for this purpose. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
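    The test characteristics above can be reproduced from paired measurements, with the daily average as the gold standard and the single random-time value as the index test. A minimal sketch (function name and toy data are illustrative, not the study data):

    ```python
    def test_characteristics(single, daily_mean, cutoff):
        # Gold standard: average daily BHBA concentration (8 measurements);
        # index test: one measurement at a random time of the day.
        tp = fp = tn = fn = 0
        for s, m in zip(single, daily_mean):
            truth, test = m >= cutoff, s >= cutoff
            if truth and test:
                tp += 1
            elif truth:
                fn += 1
            elif test:
                fp += 1
            else:
                tn += 1
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return sensitivity, specificity
    ```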

  17. A novel statistical methodology to overcome sampling irregularities in the forest inventory data and to model forest changes under dynamic disturbance regimes

    Treesearch

    Nikolay Strigul; Jean Lienard

    2015-01-01

    Forest inventory datasets offer unprecedented opportunities to model forest dynamics under evolving environmental conditions but they are analytically challenging due to irregular sampling time intervals of the same plot, across the years. We propose here a novel method to model dynamic changes in forest biomass and basal area using forest inventory data. Our...

  18. Examining Temporal Sample Scale and Model Choice with Spatial Capture-Recapture Models in the Common Leopard Panthera pardus.

    PubMed

    Goldberg, Joshua F; Tempa, Tshering; Norbu, Nawang; Hebblewhite, Mark; Mills, L Scott; Wangchuk, Tshewang R; Lukacs, Paul

    2015-01-01

    Many large carnivores occupy a wide geographic distribution and face threats from habitat loss and fragmentation, poaching, prey depletion, and human-wildlife conflicts. Conservation requires robust techniques for estimating population densities and trends, but the elusive nature and low densities of many large carnivores make them difficult to detect. Spatial capture-recapture (SCR) models provide a means of handling imperfect detectability while linking population estimates to individual movement patterns, providing more accurate estimates than standard approaches. Within this framework, we investigate the effect of different sample interval lengths on density estimates, using simulations and a common leopard (Panthera pardus) model system. We apply Bayesian SCR methods to 89 simulated datasets and camera-trapping data from 22 leopards captured 82 times during winter 2010-2011 in Royal Manas National Park, Bhutan. We show that sample interval lengths of daily, weekly, monthly or quarterly periods did not appreciably affect median abundance or density, but did influence precision. We observed the largest gains in precision when moving from quarterly to shorter intervals. We therefore recommend daily sampling intervals for monitoring rare or elusive species where practicable, but note that monthly or quarterly sample periods can have similar informative value. We further develop a novel application of Bayes factors to select models where multiple ecological factors are integrated into density estimation. Our simulations demonstrate that these methods can help identify the "true" explanatory mechanisms underlying the data. Using this method, we found strong evidence for sex-specific movement distributions in leopards, suggesting that sexual patterns of space use influence density. This model estimated a density of 10.0 leopards/100 km2 (95% credibility interval: 6.25-15.93), comparable to contemporary estimates in Asia. 
These SCR methods provide a guide for monitoring and observing the effects of management interventions on leopards and other species of conservation interest.

  19. Examining Temporal Sample Scale and Model Choice with Spatial Capture-Recapture Models in the Common Leopard Panthera pardus

    PubMed Central

    Goldberg, Joshua F.; Tempa, Tshering; Norbu, Nawang; Hebblewhite, Mark; Mills, L. Scott; Wangchuk, Tshewang R.; Lukacs, Paul

    2015-01-01

    Many large carnivores occupy a wide geographic distribution and face threats from habitat loss and fragmentation, poaching, prey depletion, and human-wildlife conflicts. Conservation requires robust techniques for estimating population densities and trends, but the elusive nature and low densities of many large carnivores make them difficult to detect. Spatial capture-recapture (SCR) models provide a means of handling imperfect detectability while linking population estimates to individual movement patterns, providing more accurate estimates than standard approaches. Within this framework, we investigate the effect of different sample interval lengths on density estimates, using simulations and a common leopard (Panthera pardus) model system. We apply Bayesian SCR methods to 89 simulated datasets and camera-trapping data from 22 leopards captured 82 times during winter 2010–2011 in Royal Manas National Park, Bhutan. We show that sample interval lengths of daily, weekly, monthly or quarterly periods did not appreciably affect median abundance or density, but did influence precision. We observed the largest gains in precision when moving from quarterly to shorter intervals. We therefore recommend daily sampling intervals for monitoring rare or elusive species where practicable, but note that monthly or quarterly sample periods can have similar informative value. We further develop a novel application of Bayes factors to select models where multiple ecological factors are integrated into density estimation. Our simulations demonstrate that these methods can help identify the “true” explanatory mechanisms underlying the data. Using this method, we found strong evidence for sex-specific movement distributions in leopards, suggesting that sexual patterns of space use influence density. This model estimated a density of 10.0 leopards/100 km2 (95% credibility interval: 6.25–15.93), comparable to contemporary estimates in Asia. 
These SCR methods provide a guide for monitoring and observing the effects of management interventions on leopards and other species of conservation interest. PMID:26536231

  20. "Reliability of the Norwegian version of the short physical performance battery in older people with and without dementia".

    PubMed

    Olsen, Cecilie Fromholt; Bergland, Astrid

    2017-06-09

    The purpose of the study was to establish the test-retest reliability of the Norwegian version of the Short Physical Performance Battery (SPPB). This was a cross-sectional reliability study. A convenience sample of 61 older adults with a mean age of 88.4 (8.1) years was tested by two different physiotherapists at two time points. The mean time interval between tests was 2.5 days. The Intraclass Correlation Coefficient, model 3.1 (ICC 3.1), with 95% confidence intervals, as well as the weighted Kappa (K), were used as measures of relative reliability. The Standard Error of Measurement (SEM) and Minimal Detectable Change (MDC) were used to measure absolute reliability. The results were also analyzed for a subgroup of 24 older people with dementia. The ICC reflected high relative reliability for the SPPB summary score and the 4-m walk test (4mwt), both for the total sample (ICC = 0.92 and 0.91, respectively) and for the subgroup with dementia (ICC = 0.84 and 0.90, respectively). Furthermore, weighted Ks for the SPPB subscales were 0.64 for the chair stand, 0.80 for gait and 0.52 for balance for the total sample, and almost identical for the subgroup with dementia. MDC values at the 95% confidence level (MDC95) were 0.8 for the SPPB total score and 0.39 m/s for the 4mwt in the total sample. For the subgroup with dementia, MDC95 was 1.88 for the SPPB total score and 0.28 m/s for the 4mwt. The SPPB total score and the timed walking test showed overall high relative and absolute reliability for the total sample, indicating that the Norwegian version of the SPPB is reliable when used by trained physiotherapists with older people. The reliability of the Norwegian SPPB in older people with dementia also seems high, but due to the small sample size this needs further investigation.
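    The absolute-reliability measures used here follow standard formulas: SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. A sketch under that standard definition (the SD input is an assumption; the study's own SDs are not given in the abstract):

    ```python
    import math

    def sem(sd, icc):
        # Standard error of measurement from the test-retest ICC.
        return sd * math.sqrt(1.0 - icc)

    def mdc95(sd, icc):
        # Minimal detectable change at the 95% confidence level: the smallest
        # change exceeding measurement error in a test-retest design.
        return 1.96 * math.sqrt(2.0) * sem(sd, icc)
    ```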

  1. Optimal spatio-temporal design of water quality monitoring networks for reservoirs: Application of the concept of value of information

    NASA Astrophysics Data System (ADS)

    Maymandi, Nahal; Kerachian, Reza; Nikoo, Mohammad Reza

    2018-03-01

    This paper presents a new methodology for optimizing Water Quality Monitoring (WQM) networks of reservoirs and lakes using the concept of the value of information (VOI) and the results of a calibrated numerical water quality simulation model. With reference to value of information theory, the water quality of every checkpoint with a specific prior probability differs in time. After analyzing water quality samples taken from potential monitoring points, the posterior probabilities are updated using Bayes' theorem, and the VOI of the samples is calculated. In the next step, the stations with maximum VOI are selected as optimal stations. This process is repeated for each sampling interval to obtain optimal monitoring network locations for each interval. The results of the proposed VOI-based methodology are compared with those obtained using an entropy-theoretic approach. As the results of the two methodologies can be partially different, they are then combined using a weighting method. Finally, the optimal sampling interval and locations of WQM stations are chosen using the Evidential Reasoning (ER) decision-making method. The efficiency and applicability of the methodology are evaluated using available water quantity and quality data for the Karkheh Reservoir in southwestern Iran.
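    The Bayes-update and VOI steps can be sketched in preposterior form: VOI is the expected payoff of the best action after observing a sample minus the best-action payoff under the prior alone. The state, observation, and payoff structures below are illustrative assumptions, not the paper's formulation:

    ```python
    def expected_voi(prior, likelihoods, payoff):
        """Expected value of information for sampling one checkpoint.

        prior[s]: prior probability of water-quality state s.
        likelihoods[obs][s]: P(observing obs | state s) at that checkpoint.
        payoff[a][s]: payoff of management action a when the state is s.
        """
        def best(p):
            # Expected payoff of the best action under belief p.
            return max(sum(p[s] * payoff[a][s] for s in p) for a in payoff)

        voi = -best(prior)
        for lik in likelihoods.values():
            pz = sum(prior[s] * lik[s] for s in prior)  # marginal P(obs)
            if pz == 0.0:
                continue
            posterior = {s: prior[s] * lik[s] / pz for s in prior}  # Bayes' theorem
            voi += pz * best(posterior)
        return voi
    ```

    Candidate stations would then be ranked by VOI and the maximum-VOI station chosen for each sampling interval.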

  2. Joint Command Decision Support 21st Century Technology Demonstration: Human Factors Style Guide

    DTIC Science & Technology

    2009-03-01

    education, religion, etc.) or for a variable sampled at discrete intervals. Histograms (bar charts without spaces between the bars) shall be used when...with Zulu time as the default. The seconds are optional. All colons are required and the Zulu (Z) shall be capitalized. 3. The display shall...include Zulu time and local time; Zulu time shall be presented above local time. 4. The date time group shall be presented on the COMDAT status bar on the

  3. Accelerometer Data Analysis and Presentation Techniques

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Hrovat, Kenneth; McPherson, Kevin; Moskowitz, Milton E.; Reckart, Timothy

    1997-01-01

    The NASA Lewis Research Center's Principal Investigator Microgravity Services project analyzes Orbital Acceleration Research Experiment and Space Acceleration Measurement System data for principal investigators of microgravity experiments. Principal investigators need a thorough understanding of data analysis techniques so that they can request appropriate analyses to best interpret accelerometer data. Accelerometer data sampling and filtering is introduced along with the related topics of resolution and aliasing. Specific information about the Orbital Acceleration Research Experiment and Space Acceleration Measurement System data sampling and filtering is given. Time domain data analysis techniques are discussed and example environment interpretations are made using plots of acceleration versus time, interval average acceleration versus time, interval root-mean-square acceleration versus time, trimmean acceleration versus time, quasi-steady three dimensional histograms, and prediction of quasi-steady levels at different locations. An introduction to Fourier transform theory and windowing is provided along with specific analysis techniques and data interpretations. The frequency domain analyses discussed are power spectral density versus frequency, cumulative root-mean-square acceleration versus frequency, root-mean-square acceleration versus frequency, one-third octave band root-mean-square acceleration versus frequency, and power spectral density versus frequency versus time (spectrogram). Instructions for accessing NASA Lewis Research Center accelerometer data and related information using the internet are provided.
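    Several of the time-domain quantities listed above are windowed statistics; for instance, interval RMS acceleration is the root-mean-square over successive non-overlapping windows of samples, plotted against time. A minimal sketch (the function name and window size are illustrative, not from the PIMS documentation):

    ```python
    import math

    def interval_rms(samples, n):
        # RMS acceleration over successive non-overlapping windows of n samples;
        # each output value corresponds to one interval on the time axis.
        out = []
        for i in range(0, len(samples) - n + 1, n):
            window = samples[i:i + n]
            out.append(math.sqrt(sum(x * x for x in window) / n))
        return out
    ```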

  4. Daily and Long Term Variations of Out-Door Gamma Dose Rate in Khorasan Province, Iran

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toossi, M. T. Bahreyni; Bayani, SH.

    2008-08-07

    In Iran before 1996, only a few hot spots had been identified and no systematic study had been envisaged. Since then, preparation of an out-door environmental gamma radiation map of Iran has been defined as a long-term goal in our center; at the same time, simultaneous monitoring of out-door gamma levels in Khorasan was also proposed. A Rados area monitoring system (AAM-90), including 10 intelligent RD-02 detectors and all associated components, was purchased. From 2003, seven stations have gradually been set up in Khorasan. For all seven stations, monthly averages and one-hour daily averages on four time intervals have been computed. Statistically, no significant differences have been observed; this is also true for the monthly averages. The overall average dose rate for the present seven stations varies from 0.11 μSv·h⁻¹ for Ferdows to 0.04 μSv·h⁻¹ for Dargaz. Based on our data, a 50-minute sample in any time interval is an accurate sample size to estimate the out-door gamma dose rate.

  5. Cross-sample entropy of foreign exchange time series

    NASA Astrophysics Data System (ADS)

    Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao

    2010-11-01

    The correlation of foreign exchange rates in currency markets is investigated based on the empirical data of DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD for the period from 1995 to 2002. The cross-SampEn (cross-sample entropy) method is used to compare the returns of each pair of exchange rate time series to assess their degree of asynchrony. The calculation method for the confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for each pair of the exchange rate time series in the periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis) are calculated. The results show that the cross-SampEn of each pair of these exchange rates becomes higher after the Asian currency crisis, indicating greater asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before it. Comparison with the correlation coefficient shows that cross-SampEn is superior for describing the correlation between time series.
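    Cross-SampEn itself compares templates across the two series: with B cross-matches of length m and A of length m + 1 within tolerance r (Chebyshev distance), cross-SampEn = −ln(A/B). A minimal sketch (parameter defaults are illustrative; in practice r is usually scaled to the series' standard deviation):

    ```python
    import math

    def cross_sampen(u, v, m=2, r=0.2):
        # Cross-sample entropy of two equal-length series: -ln(A/B), where
        # B and A count cross-series template matches of length m and m + 1
        # within tolerance r under the Chebyshev (maximum) distance.
        def matches(length):
            count = 0
            for i in range(len(u) - length + 1):
                for j in range(len(v) - length + 1):
                    if max(abs(u[i + k] - v[j + k]) for k in range(length)) <= r:
                        count += 1
            return count

        a, b = matches(m + 1), matches(m)
        return -math.log(a / b) if a and b else float("inf")
    ```

    Higher values indicate greater asynchrony between the two series, which is the direction of the effect reported after the crisis.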

  6. Understory Bird Communities in Amazonian Rainforest Fragments: Species Turnover through 25 Years Post-Isolation in Recovering Landscapes

    PubMed Central

    Stouffer, Philip C.; Johnson, Erik I.; Bierregaard, Richard O.; Lovejoy, Thomas E.

    2011-01-01

    Inferences about species loss following habitat conversion are typically drawn from short-term surveys, which cannot reconstruct long-term temporal dynamics of extinction and colonization. A long-term view can be critical, however, to determine the stability of communities within fragments. Likewise, landscape dynamics must be considered, as second growth structure and overall forest cover contribute to processes in fragments. Here we examine bird communities in 11 Amazonian rainforest fragments of 1–100 ha, beginning before the fragments were isolated in the 1980s, and continuing through 2007. Using a method that accounts for imperfect detection, we estimated extinction and colonization based on standardized mist-net surveys within discrete time intervals (1–2 preisolation samples and 4–5 post-isolation samples). Between preisolation and 2007, all fragments lost species in an area-dependent fashion, with loss of as few as <10% of preisolation species from 100-ha fragments, but up to 70% in 1-ha fragments. Analysis of individual time intervals revealed that the 2007 result was not due to gradual species loss beginning at isolation; both extinction and colonization occurred in every time interval. In the last two samples, 2000 and 2007, extinction and colonization were approximately balanced. Further, 97 of 101 species netted before isolation were detected in at least one fragment in 2007. Although a small subset of species is extremely vulnerable to fragmentation, and predictably goes extinct in fragments, developing second growth in the matrix around fragments encourages recolonization in our landscapes. Species richness in these fragments now reflects local turnover, not long-term attrition of species. We expect that similar processes could be operating in other fragmented systems that show unexpectedly low extinction. PMID:21731616

  7. Understory bird communities in Amazonian rainforest fragments: species turnover through 25 years post-isolation in recovering landscapes.

    PubMed

    Stouffer, Philip C; Johnson, Erik I; Bierregaard, Richard O; Lovejoy, Thomas E

    2011-01-01

    Inferences about species loss following habitat conversion are typically drawn from short-term surveys, which cannot reconstruct long-term temporal dynamics of extinction and colonization. A long-term view can be critical, however, to determine the stability of communities within fragments. Likewise, landscape dynamics must be considered, as second growth structure and overall forest cover contribute to processes in fragments. Here we examine bird communities in 11 Amazonian rainforest fragments of 1-100 ha, beginning before the fragments were isolated in the 1980s, and continuing through 2007. Using a method that accounts for imperfect detection, we estimated extinction and colonization based on standardized mist-net surveys within discrete time intervals (1-2 preisolation samples and 4-5 post-isolation samples). Between preisolation and 2007, all fragments lost species in an area-dependent fashion, with loss of as few as <10% of preisolation species from 100-ha fragments, but up to 70% in 1-ha fragments. Analysis of individual time intervals revealed that the 2007 result was not due to gradual species loss beginning at isolation; both extinction and colonization occurred in every time interval. In the last two samples, 2000 and 2007, extinction and colonization were approximately balanced. Further, 97 of 101 species netted before isolation were detected in at least one fragment in 2007. Although a small subset of species is extremely vulnerable to fragmentation, and predictably goes extinct in fragments, developing second growth in the matrix around fragments encourages recolonization in our landscapes. Species richness in these fragments now reflects local turnover, not long-term attrition of species. We expect that similar processes could be operating in other fragmented systems that show unexpectedly low extinction.

  8. Influence of an acidic beverage (Coca-Cola) on the pharmacokinetics of phenytoin in healthy rabbits.

    PubMed

    Kondal, A; Garg, S K

    2003-12-01

    This study was carried out to evaluate the influence of an acidic beverage (Coca-Cola) on the pharmacokinetics of phenytoin in rabbits. In a cross-over study, phenytoin was given orally at a dose of 30 mg/kg and blood samples were taken at different intervals from 0-24 h. After a washout period of 7 days, Coca-Cola (5 ml/kg) was administered in combination with phenytoin (30 mg/kg) and blood samples were taken at various time intervals from 0-24 h. The same rabbits continued to receive Coca-Cola (5 ml/kg) for another 7 days. On the 8th day, Coca-Cola (5 ml/kg) in combination with phenytoin (30 mg/kg) was administered and blood samples were taken at similar intervals. Plasma was separated and assayed for phenytoin by high performance liquid chromatography (HPLC), and various pharmacokinetic parameters were calculated. It was concluded that an acidic beverage (Coca-Cola) increases the extent of absorption of phenytoin by significantly increasing its Cmax and AUC(0-∞). These results warrant a reduction of the phenytoin dose when it is administered in combination with Coca-Cola, to avoid any toxicity. (c) 2003 Prous Science

  9. Breaking through the bandwidth barrier in distributed fiber vibration sensing by sub-Nyquist randomized sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdong; Zhu, Tao; Zheng, Hua; Kuang, Yang; Liu, Min; Huang, Wei

    2017-04-01

    The round trip time of the light pulse limits the maximum detectable frequency response range of vibration in phase-sensitive optical time domain reflectometry (φ-OTDR). We propose a method to break the frequency response range restriction of the φ-OTDR system by randomly modulating the light pulse interval, which enables random sampling for every vibration point along a long sensing fiber. This sub-Nyquist randomized sampling method is well suited to detecting sparse, wideband-frequency vibration signals. A MHz-level resonance vibration signal with dozens of frequency components and a 1.153 MHz single-frequency vibration signal are clearly identified over a sensing range of 9.6 km with a 10 kHz maximum sampling rate.
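
    The core idea — random pulse intervals yield nonuniform samples from which a sparse tone above the mean-rate Nyquist limit can still be identified — can be sketched numerically. This toy example (synthetic tone and a periodogram-style frequency scan, not the paper's φ-OTDR implementation) recovers an 80 Hz tone from samples taken at an average rate of 100 Hz, whose Nyquist limit under uniform sampling would be 50 Hz:

```python
import math
import random

random.seed(1)

# Toy illustration of randomized (nonuniform) sampling: an 80 Hz tone is
# identified from samples at an average rate of 100 Hz, even though 80 Hz
# exceeds the 50 Hz Nyquist limit of that average rate.
f_true = 80.0
mean_dt = 0.01                                # mean interval -> 100 Hz average

t, times = 0.0, []
for _ in range(400):
    t += mean_dt * random.uniform(0.5, 1.5)   # randomized pulse interval
    times.append(t)
x = [math.cos(2 * math.pi * f_true * ti) for ti in times]

def power(f):
    """Periodogram-style power at frequency f for nonuniform sample times."""
    re = sum(xi * math.cos(2 * math.pi * f * ti) for xi, ti in zip(x, times))
    im = sum(xi * math.sin(2 * math.pi * f * ti) for xi, ti in zip(x, times))
    return re * re + im * im

# Scan 1..99 Hz; with *uniform* 100 Hz sampling, 80 Hz would alias to 20 Hz.
f_hat = max(range(1, 100), key=power)
```

    The random intervals scramble the phase of would-be aliases, so the spectral peak stays at the true frequency instead of folding into the sub-Nyquist band.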

  10. Analysis of wind bias change with respect to time at Cape Kennedy, Florida, and Vandenberg AFB, California

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1978-01-01

    A statistical analysis is presented of the temporal variability of wind vectors at 1 km altitude intervals from 0 to 27 km altitude after applying a digital filter to the original wind profile data sample.

  11. Proportionality between Doppler noise and integrated signal path electron density validated by differenced S-X range

    NASA Technical Reports Server (NTRS)

    Berman, A. L.

    1977-01-01

    Observations of Viking differenced S-band/X-band (S-X) range are shown to correlate strongly with Viking Doppler noise. A ratio of proportionality between downlink S-band plasma-induced range error and two-way Doppler noise is calculated. A new parameter (similar to the parameter epsilon which defines the ratio of local electron density fluctuations to mean electron density) is defined as a function of observed data sample interval (Tau), where the time-scale of the observations is 15 Tau. This parameter is interpreted to yield the ratio of net observed phase (or electron density) fluctuations to integrated electron density (in RMS meters/meter). Using this parameter and the thin phase-changing screen approximation, a value for the scale size L is calculated. To be consistent with Doppler noise observations, L must be proportional to the closest approach distance a and a strong function of the observed data sample interval, and hence of the time-scale of the observations.

  12. Manual control models of industrial management

    NASA Technical Reports Server (NTRS)

    Crossman, E. R. F. W.

    1972-01-01

    The industrial engineer is often required to design and implement control systems and organization for manufacturing and service facilities, to optimize quality, delivery, and yield, and minimize cost. Despite progress in computer science most such systems still employ human operators and managers as real-time control elements. Manual control theory should therefore be applicable to at least some aspects of industrial system design and operations. Formulation of adequate model structures is an essential prerequisite to progress in this area, since real-world production systems invariably include multilevel and multiloop control and are implemented by timeshared human effort. A modular structure incorporating certain new types of functional element has been developed. This forms the basis for analysis of an industrial process operation. In this case it appears that managerial controllers operate in a discrete predictive mode based on fast time modelling, with sampling interval related to plant dynamics. Successive aggregation causes reduced response bandwidth and hence increased sampling interval as a function of level.

  13. Sample size calculation for studies with grouped survival data.

    PubMed

    Li, Zhiguo; Wang, Xiaofei; Wu, Yuan; Owzar, Kouros

    2018-06-10

    Grouped survival data arise often in studies where the disease status is assessed at regular clinic visits. The time to the event of interest can only be determined to be between two adjacent visits or is right censored at one visit. In data analysis, replacing the survival time with the endpoint or midpoint of the grouping interval leads to biased estimators of the effect size in group comparisons. Prentice and Gloeckler developed a maximum likelihood estimator for the proportional hazards model with grouped survival data and the method has been widely applied. Previous work on sample size calculation for designing studies with grouped data is based on either the exponential distribution assumption or the approximation of variance under the alternative with variance under the null. Motivated by studies in HIV trials, cancer trials and in vitro experiments to study drug toxicity, we develop a sample size formula for studies with grouped survival endpoints that use the method of Prentice and Gloeckler for comparing two arms under the proportional hazards assumption. We do not impose any distributional assumptions, nor do we use any approximation of variance of the test statistic. The sample size formula only requires estimates of the hazard ratio and survival probabilities of the event time of interest and the censoring time at the endpoints of the grouping intervals for one of the two arms. The formula is shown to perform well in a simulation study and its application is illustrated in the three motivating examples. Copyright © 2018 John Wiley & Sons, Ltd.
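
    The bias from midpoint replacement that motivates this work is easy to demonstrate. A minimal simulation (exponential event times with mean 1 and width-2 grouping intervals — assumed values, and deliberately not the Prentice-Gloeckler estimator) shows the midpoint-based mean noticeably overshooting the truth:

```python
import random

random.seed(0)

# Grouping exponential event times (true mean 1.0) into width-2 intervals
# and replacing each time by the interval midpoint inflates the estimated
# mean -- the bias the abstract warns about.
width, n = 2.0, 50_000
times = [random.expovariate(1.0) for _ in range(n)]
midpoints = [(int(t // width) + 0.5) * width for t in times]

mean_true = sum(times) / n          # close to 1.0
mean_mid = sum(midpoints) / n       # noticeably larger than 1.0
```

    The overshoot arises because, within each interval, exponential times cluster toward the lower endpoint, so the midpoint systematically sits above the typical event time.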

  14. Experimental and numerical investigation of low-drag intervals in turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Park, Jae Sung; Ryu, Sangjin; Lee, Jin

    2017-11-01

    It is well documented that there is substantial intermittency between high- and low-drag states in wall-bounded shear flows. Recent experimental and computational studies in a turbulent channel flow have identified low-drag time intervals based on wall shear stress measurements. These intervals are a weak turbulence state characterized by low-speed streaks and weak streamwise vortices. In this study, the spatiotemporal dynamics of low-drag intervals in a turbulent boundary layer is investigated using experiments and simulations. The low-drag intervals are monitored based on the wall shear stress measurement. We show that, near the wall, conditionally sampled mean velocity profiles during low-drag intervals closely approach that of a low-drag nonlinear traveling wave solution, as well as that of the so-called maximum drag reduction asymptote. This observation is consistent with the channel flow studies. Interestingly, the large spatial stretching of the streak is very evident in the wall-normal direction during low-drag intervals. Lastly, a possible connection between the mean velocity profile during the low-drag intervals and the Blasius profile will be discussed. This work was supported by startup funds from the University of Nebraska-Lincoln.

  15. Willingness to pay for flexible working conditions of people with type 2 diabetes: discrete choice experiments.

    PubMed

    Nexo, M A; Cleal, B; Hagelund, Lise; Willaing, I; Olesen, K

    2017-12-15

    The increasing number of people with chronic diseases challenges workforce capacity. Type 2 diabetes (T2D) can have work-related consequences, such as early retirement. Laws of most high-income countries require workplaces to provide accommodations to enable people with chronic disabilities to manage their condition at work. A barrier to successful implementation of such accommodations can be lack of co-workers' willingness to support people with T2D. This study aimed to examine the willingness to pay (WTP) of people with and without T2D for five workplace initiatives that help individuals with type 2 diabetes manage their diabetes at work. Three samples with employed Danish participants were drawn from existing online panels: a general population sample (n = 600), a T2D sample (n = 693), and a matched sample of people without diabetes (n = 539). Participants completed discrete choice experiments eliciting their WTP (reduction in monthly salary, €/month) for five hypothetical workplace initiatives: part-time job, customized work, extra breaks with pay, and time off for medical consultations with and without pay. WTP was estimated by conditional logit models. Bootstrapping was used to estimate confidence intervals for WTP. There was an overall WTP for all initiatives. Average WTP for all attributes was 34 €/month (95% confidence interval [CI]: 27-43] in the general population sample, 32 €/month (95% CI: 26-38) in the T2D sample, and 55 €/month (95% CI: 43-71) in the matched sample. WTP for additional breaks with pay was considerably lower than for the other initiatives in all samples. People with T2D had significantly lower WTP than people without diabetes for part-time work, customized work, and time off without pay, but not for extra breaks or time off with pay. For people with and without T2D, WTP was present for initiatives that could improve management of diabetes at the workplace. WTP was lowest among people with T2D. 
Implementation of these initiatives seems feasible and may help prevent unnecessary exclusion of people with T2D from work.
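
    The bootstrap confidence intervals reported for WTP can be illustrated generically. The sketch below applies a percentile bootstrap to a hypothetical sample of individual WTP values; the study itself derives WTP from conditional logit coefficients, which is not reproduced here:

```python
import random

random.seed(42)

def bootstrap_ci(data, stat, n_boot=5000, alpha=0.05):
    """Percentile-bootstrap confidence interval for statistic `stat`."""
    n = len(data)
    boots = sorted(stat([random.choice(data) for _ in range(n)])
                   for _ in range(n_boot))
    return boots[int(n_boot * alpha / 2)], boots[int(n_boot * (1 - alpha / 2))]

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical individual WTP values (EUR/month), not study data
wtp = [random.gauss(34, 20) for _ in range(200)]
lo, hi = bootstrap_ci(wtp, mean)
```

    The percentile method makes no normality assumption, which is why it is a common choice for ratio-type quantities such as WTP.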

  16. Note: A fast pneumatic sample-shuttle with attenuated shocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biancalana, Valerio; Dancheva, Yordanka; Stiaccini, Leonardo

    2014-03-15

    We describe a home-built pneumatic shuttle suitable for the fast displacement of samples in the vicinity of a highly sensitive atomic magnetometer. The samples are magnetized at 1 T using a Halbach assembly of magnets. The device enables the remote detection of free-induction-decay in ultra-low-field and zero-field nuclear magnetic resonance (NMR) experiments, in relaxometric measurements and in other applications involving the displacement of magnetized samples within time intervals as short as a few tens of milliseconds. Other possible applications of fast sample shuttling exist in radiological studies, where samples have to be irradiated and then analyzed in a cold environment.

  17. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
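
    The coverage analysis this study advocates can be made concrete in a simpler setting. For a binomial proportion, the exact coverage of a nominal 95% Wald interval can be computed by summing over all possible outcomes — a generic illustration of undercoverage, not the paper's ordinal CFA analysis:

```python
import math

def wald_coverage(n, p, z=1.96):
    """Exact coverage probability of the nominal 95% Wald interval."""
    cover = 0.0
    for x in range(n + 1):
        ph = x / n
        half = z * math.sqrt(ph * (1 - ph) / n)
        if ph - half <= p <= ph + half:
            cover += math.comb(n, x) * p**x * (1 - p)**(n - x)
    return cover

cov = wald_coverage(20, 0.05)   # far below the nominal 0.95
```

    Checking coverage this way, rather than only inspecting standard errors, is exactly the distinction the abstract draws: an interval procedure can look reasonable yet miss the true value far more often than its nominal rate implies.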

  18. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  19. A validation of ground ambulance pre-hospital times modeled using geographic information systems

    PubMed Central

    2012-01-01

    Background Evaluating geographic access to health services often requires determining the patient travel time to a specified service. For urgent care, many research studies have modeled patient pre-hospital time by ground emergency medical services (EMS) using geographic information systems (GIS). The purpose of this study was to determine if the modeling assumptions proposed through prior United States (US) studies are valid in a non-US context, and to use the resulting information to provide revised recommendations for modeling travel time using GIS in the absence of actual EMS trip data. Methods The study sample contained all emergency adult patient trips within the Calgary area for 2006. Each record included four components of pre-hospital time (activation, response, on-scene and transport interval). The actual activation and on-scene intervals were compared with those used in published models. The transport interval was calculated within GIS using the Network Analyst extension of Esri ArcGIS 10.0 and the response interval was derived using previously established methods. These GIS derived transport and response intervals were compared with the actual times using descriptive methods. We used the information acquired through the analysis of the EMS trip data to create an updated model that could be used to estimate travel time in the absence of actual EMS trip records. Results There were 29,765 complete EMS records for scene locations inside the city and 529 outside. The actual median on-scene intervals were 7–8 minutes longer than the averages previously reported. Actual EMS pre-hospital times across our study area were significantly higher than the estimated times modeled using GIS and the original travel time assumptions. Our updated model, although still underestimating the total pre-hospital time, more accurately represents the true pre-hospital time in our study area. 
Conclusions The widespread use of generalized EMS pre-hospital time assumptions based on US data may not be appropriate in a non-US context. The preference for researchers should be to use actual EMS trip records from the proposed research study area. In the absence of EMS trip data researchers should determine which modeling assumptions more accurately reflect the EMS protocols across their study area. PMID:23033894

  20. Complications following incident stroke resulting in readmissions: an analysis of data from three Scottish health surveys.

    PubMed

    Ponomarev, Dmitry; Miller, Claire; Govan, Lindsay; Haig, Caroline; Wu, Olivia; Langhorne, Peter

    2015-08-01

    Stroke is widely recognized as the major contributor to morbidity and mortality in the United Kingdom. We analyzed the data obtained from the three consecutive Scottish Health Surveys and the Scottish Morbidity records, with the aim of identifying risk factors for, and timing of, common poststroke complications. There were 19,434 individuals sampled during three Scottish Health Surveys in 1995, 1998, and 2001. For these individuals their morbidity and mortality outcomes were obtained in 2007. Incident stroke prevalence, risk factors for a range of poststroke complications, and average times until such complications in the sample were established. Of the total of 168 incident stroke admissions (0.86% of the survey), 16.1% of people died during incident stroke hospitalization. Of the remaining 141 stroke survivors, 75.2% were rehospitalized at least once. The most frequent reason for readmission after stroke was a cardiovascular complication (28.6%), median time until event 1412 days, followed by infection (17.3%, median 1591 days). The risk of cardiovascular readmission was higher in those with 'poor' self-assessed health (odds ratio 7.70; 95% confidence interval 1.64-43.27), smokers (odds ratio 4.24; 95% confidence interval 1.11-21.59), and doubled with every five-year increase in age (odds ratio 1.97; 95% confidence interval 1.46-2.65). 'Poor' self-assessed health increased the chance of readmission for infection (odds ratio 14.11; 95% confidence interval 2.27-276.56). Cardiovascular events and infections are the most frequent poststroke complications resulting in readmissions. The time period until event provides a possibility to focus monitoring on those people at risk of readmission and introduce preventative measures, thereby reducing readmission-associated costs. © 2013 The Authors. International Journal of Stroke © 2013 World Stroke Organization.
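
    Odds ratios with Wald confidence intervals, as reported above, are computed on the log scale from a 2x2 table. A generic sketch with hypothetical counts (not the survey's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Wald 95% CI for the odds ratio of a 2x2 table [[a, b], [c, d]]."""
    or_hat = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_hat) - z * se_log)
    hi = math.exp(math.log(or_hat) + z * se_log)
    return or_hat, lo, hi

# Hypothetical counts: exposed readmitted/not = 20/80, unexposed = 10/140
or_hat, lo, hi = odds_ratio_ci(20, 80, 10, 140)
```

    The very wide intervals in the abstract (e.g. 2.27-276.56) are typical of sparse cells: the log-scale standard error is dominated by the reciprocal of the smallest count.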

  1. Detection of Gastrointestinal Pathogens from Stool Samples on Hemoccult Cards by Multiplex PCR

    PubMed Central

    Schlenker, Nicklas; Bauer, Malkin; Helfrich, Kerstin; Mengele, Carolin; Löscher, Thomas; Nothdurft, Hans Dieter; Bretzel, Gisela; Beissner, Marcus

    2017-01-01

    Purpose. Up to 30% of international travelers are affected by travelers' diarrhea (TD). Reliable data on the etiology of TD is lacking. Sufficient laboratory capacity at travel destinations is often unavailable and transporting conventional stool samples to the home country is inconvenient. We evaluated the use of Hemoccult cards for stool sampling combined with a multiplex PCR for the detection of model viral, bacterial, and protozoal TD pathogens. Methods. Following the creation of serial dilutions for each model pathogen, last positive dilution steps (LPDs) and thereof calculated last positive sample concentrations (LPCs) were compared between conventional stool samples and card samples. Furthermore, card samples were tested after a prolonged time interval simulating storage during a travel duration of up to 6 weeks. Results. The LPDs/LPCs were comparable to testing of conventional stool samples. After storage on Hemoccult cards, the recovery rate was 97.6% for C. jejuni, 100% for E. histolytica, 97.6% for norovirus GI, and 100% for GII. Detection of expected pathogens was possible at weekly intervals up to 42 days. Conclusion. Stool samples on Hemoccult cards stored at room temperature can be used in combination with a multiplex PCR as a reliable tool for testing of TD pathogens. PMID:28408937

  2. Effects of sampling interval on spatial patterns and statistics of watershed nitrogen concentration

    USGS Publications Warehouse

    Wu, S.-S.D.; Usery, E.L.; Finn, M.P.; Bosch, D.D.

    2009-01-01

    This study investigates how spatial patterns and statistics of a 30 m resolution, model-simulated, watershed nitrogen concentration surface change with sampling intervals from 30 m to 600 m for every 30 m increase for the Little River Watershed (Georgia, USA). The results indicate that the mean, standard deviation, and variogram sills do not have consistent trends with increasing sampling intervals, whereas the variogram ranges remain constant. A sampling interval smaller than or equal to 90 m is necessary to build a representative variogram. The interpolation accuracy, clustering level, and total hot spot areas show decreasing trends approximating a logarithmic function. The trends correspond to the nitrogen variogram and start to level at a sampling interval of 360 m, which is therefore regarded as a critical spatial scale of the Little River Watershed. Copyright © 2009 by Bellwether Publishing, Ltd. All rights reserved.
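
    The variogram sills and ranges discussed here come from the empirical semivariogram, γ(h) = half the mean squared difference between values separated by lag h. A minimal 1-D sketch on synthetic data (a Gaussian random walk, whose semivariance keeps growing with lag; a real nitrogen surface would instead level off at the sill within the variogram range):

```python
import random

random.seed(7)

def semivariogram(values, lag):
    """Empirical semivariance at integer lag h: mean squared difference / 2."""
    diffs = [(a - b) ** 2 for a, b in zip(values, values[lag:])]
    return sum(diffs) / (2 * len(diffs))

# Synthetic 1-D transect: a Gaussian random walk, for which the expected
# semivariance is gamma(h) = h/2 (no finite range).
z = [0.0]
for _ in range(999):
    z.append(z[-1] + random.gauss(0, 1))

gammas = {h: semivariogram(z, h) for h in (1, 2, 5, 10)}
```

    Coarsening the sampling interval amounts to dropping the small lags from this computation, which is why the abstract finds that intervals above 90 m can no longer resolve a representative variogram.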

  3. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…
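
    A large-sample (z-based) version of the idea — choosing n so the expected confidence-interval half-width meets a target — can be sketched as follows; the article's t-based refinement and its "power of the confidence interval" calculation are not reproduced here:

```python
import math

def n_for_halfwidth(sigma, halfwidth, z=1.96):
    """Smallest n with expected 95% CI half-width z*sigma/sqrt(n) <= target
    (large-sample z approximation; a t-based version would be slightly larger)."""
    return math.ceil((z * sigma / halfwidth) ** 2)

n = n_for_halfwidth(sigma=10, halfwidth=2)   # -> 97
```

    Because the half-width shrinks with the square root of n, halving the target width quadruples the required sample size.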

  4. Simulating recurrent event data with hazard functions defined on a total time scale.

    PubMed

    Jahn-Eimermacher, Antje; Ingel, Katharina; Ozga, Ann-Kathrin; Preussler, Stella; Binder, Harald

    2015-03-08

    In medical studies with recurrent event data a total time scale perspective is often needed to adequately reflect disease mechanisms. This means that the hazard process is defined on the time since some starting point, e.g. the beginning of some disease, in contrast to a gap time scale where the hazard process restarts after each event. While techniques such as the Andersen-Gill model have been developed for analyzing data from a total time perspective, techniques for the simulation of such data, e.g. for sample size planning, have not been investigated so far. We have derived a simulation algorithm covering the Andersen-Gill model that can be used for sample size planning in clinical trials as well as the investigation of modeling techniques. Specifically, we allow for fixed and/or random covariates and an arbitrary hazard function defined on a total time scale. Furthermore we take into account that individuals may be temporarily insusceptible to a recurrent incidence of the event. The methods are based on conditional distributions of the inter-event times conditional on the total time of the preceding event or study start. Closed form solutions are provided for common distributions. The derived methods have been implemented in a readily accessible R script. The proposed techniques are illustrated by planning the sample size for a clinical trial with complex recurrent event data. The required sample size is shown to be affected not only by censoring and intra-patient correlation, but also by the presence of risk-free intervals. This demonstrates the need for a simulation algorithm that particularly allows for complex study designs where no analytical sample size formulas might exist. The derived simulation algorithm is seen to be useful for the simulation of recurrent event data that follow an Andersen-Gill model. 
Next to the use of a total time scale, it allows for intra-patient correlation and risk-free intervals as are often observed in clinical trial data. Its application therefore allows the simulation of data that closely resemble real settings and thus can improve the use of simulation studies for designing and analysing studies.
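
    The conditional-inversion idea can be sketched for a Weibull-type cumulative hazard H(t) = (λt)^p on the total time scale: given the preceding event at time s, the next event time T solves H(T) - H(s) = E with E ~ Exp(1). This simplified version (assumed parameters; no covariates, risk-free intervals, or random effects, unlike the paper's algorithm) reduces to a nonhomogeneous Poisson process, so E[N(τ)] = H(τ) provides a check:

```python
import random

random.seed(3)

def simulate_subject(lam, p, tau):
    """Recurrent event times on a total time scale with cumulative hazard
    H(t) = (lam * t) ** p, by inverting H(T) - H(s) = Exp(1) increments."""
    events, s = [], 0.0
    while True:
        e = random.expovariate(1.0)
        t = ((lam * s) ** p + e) ** (1 / p) / lam
        if t > tau:                       # administrative censoring at tau
            return events
        events.append(t)
        s = t

# Sanity check: for this process E[N(tau)] = H(tau) = (lam * tau) ** p
lam, p, tau = 1.0, 1.5, 5.0
counts = [len(simulate_subject(lam, p, tau)) for _ in range(4000)]
mean_events = sum(counts) / len(counts)
```

    Note that the inverted conditional distribution depends on the total time s of the previous event, not on the gap since it — the defining feature of the total time scale.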

  5. Influence of acidic beverage (Coca-Cola) on pharmacokinetics of ibuprofen in healthy rabbits.

    PubMed

    Kondal, Amit; Garg, S K

    2003-11-01

    The study was aimed at determining the effect of Coca-Cola on the pharmacokinetics of ibuprofen in rabbits. In a cross-over study, ibuprofen was given orally in a dose of 56 mg/kg, prepared as 0.5% suspension in carboxymethyl cellulose (CMC), and blood samples (1 ml) were drawn at different time intervals from 0-12 hr. After a washout period of 7 days, Coca-Cola in a dose of 5 ml/kg was administered along with ibuprofen (56 mg/kg) and blood samples were drawn from 0-12 hr. To these rabbits, 5 ml/kg Coca-Cola was administered once daily for another 7 days. On the 8th day, Coca-Cola (5 ml/kg) along with ibuprofen (56 mg/kg), prepared as a suspension, was administered and blood samples (1 ml each) were drawn at similar time intervals. Plasma was separated and assayed for ibuprofen by HPLC technique and various pharmacokinetic parameters were calculated. The Cmax and AUC(0-∞) of ibuprofen were significantly increased after single and multiple doses of Coca-Cola, thereby indicating increased extent of absorption of ibuprofen. The results warrant a reduction of the ibuprofen daily dosage and frequency when administered with Coca-Cola.

  6. Best Practices to Achieve the Lowest Uncertainty in Measuring with Respect

    Science.gov Websites

    Use a packaged cell or module as a control sample to monitor the test bed and any potential drift in the reference device's calibration. If control charts are used, measure the control sample at least once a week and plot the percentage deviation on the chart.

  7. Confidence Intervals for Proportion Estimates in Complex Samples. Research Report. ETS RR-06-21

    ERIC Educational Resources Information Center

    Oranje, Andreas

    2006-01-01

    Confidence intervals are an important tool to indicate uncertainty of estimates and to give an idea of probable values of an estimate if a different sample from the population was drawn or a different sample of measures was used. Standard symmetric confidence intervals for proportion estimates based on a normal approximation can yield bounds…
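
    The problem alluded to — normal-approximation (Wald) intervals for proportions can produce bounds outside [0, 1] — is easy to exhibit, along with the Agresti-Coull adjustment commonly used to repair it (a generic illustration, not the report's complex-sample method):

```python
import math

def wald(x, n, z=1.96):
    """Standard Wald interval for a binomial proportion."""
    p = x / n
    h = z * math.sqrt(p * (1 - p) / n)
    return p - h, p + h

def agresti_coull(x, n, z=1.96):
    # Add z^2/2 pseudo-successes and z^2/2 pseudo-failures, then apply Wald
    nt = n + z * z
    pt = (x + z * z / 2) / nt
    h = z * math.sqrt(pt * (1 - pt) / nt)
    return pt - h, pt + h

lo_w, hi_w = wald(2, 20)             # lower bound falls below 0
lo_ac, hi_ac = agresti_coull(2, 20)  # stays inside [0, 1]
```

    The adjustment shrinks the point estimate toward 0.5 and widens the effective sample size, which both pulls the bounds inside [0, 1] and improves coverage for small proportions.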

  8. Investigation of the influence of sampling schemes on quantitative dynamic fluorescence imaging

    PubMed Central

    Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Wang, Guodong; Wang, Bo; Zhan, Yonghua; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin

    2018-01-01

    Dynamic optical data from a series of sampling intervals can be used for quantitative analysis to obtain meaningful kinetic parameters of a probe in vivo. The sampling schemes may affect the quantification results of dynamic fluorescence imaging. Here, we investigate the influence of different sampling schemes on the quantification of binding potential (BP) with theoretically simulated and experimentally measured data. Three groups of sampling schemes are investigated, including the sampling starting point, sampling sparsity, and sampling uniformity. In the investigation of the influence of the sampling starting point, we further summarize two cases by considering the missing timing sequence between the probe injection and the sampling starting time. Results show that the mean value of BP exhibits an obvious growth trend with an increase in the delay of the sampling starting point, and has a strong correlation with the sampling sparsity. The growth trend is much more obvious if the missing timing sequence is discarded. The standard deviation of BP is inversely related to the sampling sparsity, and independent of the sampling uniformity and the delay of the sampling starting time. Moreover, the mean value of BP obtained by uniform sampling is significantly higher than that obtained by non-uniform sampling. Our results collectively suggest that a suitable sampling scheme can help compartmental modeling of dynamic fluorescence imaging provide more accurate results and simpler operations. PMID:29675325

  9. The effect of short-duration sprint interval exercise on plasma postprandial triacylglycerol levels in young men.

    PubMed

    Allen, Edward; Gray, Partick; Kollias-Pearson, Angeliki; Oag, Erlend; Pratt, Katrina; Henderson, Jennifer; Gray, Stuart Robert

    2014-01-01

    It is well established that regular exercise can reduce the risk of cardiovascular disease, although the most time-efficient exercise protocol to confer benefits has yet to be established. The aim of the current study was to determine the effects of short-duration sprint interval exercise on postprandial triacylglycerol. Fifteen healthy male participants completed two 2 day trials. On day 1, participants rested (control) or carried out twenty 6 s sprints, interspersed with 24 s recovery (sprint interval exercise--14 min for total exercise session). On day 2, participants consumed a high-fat meal for breakfast with blood samples collected at baseline, 2 h and 4 h. Gas exchange was also measured at these time points. On day 2 of control and sprint interval exercise trials, there were no differences (P > 0.05) between trials in plasma glucose, triacylglycerol, insulin or respiratory exchange ratio (RER). The area under the curve for plasma triacylglycerol was 7.67 ± 2.37 mmol · l(-1) x 4 h(-1) in the control trial and 7.26 ± 2.49 mmol · l(-1) x 4 h(-1) in the sprint interval exercise trial. Although the sprint exercise protocol employed had no significant effect on postprandial triacylglycerol, there was a clear variability in responses that warrants further investigation.

  10. The enhancement in electrical analysis of the nitrogen doped amorphous carbon thin films (a-C:N) prepared by aerosol-assisted CVD

    NASA Astrophysics Data System (ADS)

    Fadzilah, A. N.; Dayana, K.; Rusop, M.

    2018-05-01

    This paper reports on the deposition of nitrogen-doped amorphous carbon (a-C:N) by aerosol-assisted chemical vapor deposition (AACVD) using a natural source, camphor oil, as the precursor material. Five samples were deposited at five different deposition times from 15 min to 90 min, with a 15 min interval between samples. The highest slope of the linear graph was noted for the sample with a 45 min deposition time, indicating the lowest electrical resistance. From the I-V characteristic, the sample deposited for 45 min has the highest electrical conductivity due to its high sp2 carbon bonding ratio. The nanostructured behavior of the N-doped a-C:N was also investigated by FESEM micrographs, revealing particle sizes of less than 100 nm.

  11. Soil Carbon Variability and Change Detection in the Forest Inventory Analysis Database of the United States

    NASA Astrophysics Data System (ADS)

    Wu, A. M.; Nater, E. A.; Dalzell, B. J.; Perry, C. H.

    2014-12-01

    The USDA Forest Service's Forest Inventory Analysis (FIA) program is a national effort assessing current forest resources to ensure sustainable management practices, to assist planning activities, and to report critical status and trends. For example, estimates of carbon stocks and stock change in FIA are reported as the official United States submission to the United Nations Framework Convention on Climate Change. While the main effort in FIA has been focused on aboveground biomass, soil is a critical component of this system. FIA sampled forest soils in the early 2000s and has remeasurement now underway. However, soil sampling is repeated on a 10-year interval (or longer), and it is uncertain what magnitude of changes in soil organic carbon (SOC) may be detectable with the current sampling protocol. We aim to identify the sensitivity and variability of SOC in the FIA database, and to determine the amount of SOC change that can be detected with the current sampling scheme. For this analysis, we attempt to answer the following questions: 1) What is the sensitivity (power) of SOC data in the current FIA database? 2) How does the minimum detectable change in forest SOC respond to changes in sampling intervals and/or sample point density? Soil samples in the FIA database represent 0-10 cm and 10-20 cm depth increments with a 10-year sampling interval. We are investigating the variability of SOC and its change over time for composite soil data in each FIA region (Pacific Northwest, Interior West, Northern, and Southern). To guide future sampling efforts, we are employing statistical power analysis to examine the minimum detectable change in SOC storage. We are also investigating the sensitivity of SOC storage changes under various scenarios of sample size and/or sample frequency. 
This research will inform the design of future FIA soil sampling schemes and improve the information available to international policy makers, university and industry partners, and the public.
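
    A minimal sketch of the power-analysis step described above, using the usual normal-approximation formula for the minimum detectable change (MDC) of a paired remeasurement design; the sample size and standard deviation of plot-level SOC change below are hypothetical placeholders, not FIA values.

```python
from statistics import NormalDist

def minimum_detectable_change(sd_diff, n, alpha=0.05, power=0.80):
    """Minimum detectable change for a two-sided paired test
    (normal approximation): MDC = (z_{1-a/2} + z_{power}) * sd / sqrt(n)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return (z_alpha + z_power) * sd_diff / n ** 0.5

# Hypothetical inputs: SD of plot-level SOC change 12 Mg C/ha, 500 remeasured plots
mdc = minimum_detectable_change(sd_diff=12.0, n=500)
```

    Under these assumptions the MDC shrinks with the square root of the number of remeasured plots, which is why both sample point density and remeasurement interval enter the paper's scenarios.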

  12. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

    ERIC Educational Resources Information Center

    Bonett, Douglas G.; Price, Robert M.

    2012-01-01

    Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
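
    One common form of adjusted Wald interval for a paired difference of proportions adds one to each discordant cell; the sketch below illustrates this general approach and is not necessarily the exact adjustment Bonett and Price propose.

```python
from statistics import NormalDist

def adjusted_wald_paired(n12, n21, n, conf=0.95):
    """Adjusted Wald CI for d = p12 - p21 from paired binary data.
    n12, n21: discordant pair counts; n: total pairs.
    Illustrative adjustment: add 1 to each discordant cell, 2 to n."""
    p12 = (n12 + 1) / (n + 2)
    p21 = (n21 + 1) / (n + 2)
    d = p12 - p21
    se = ((p12 + p21 - d * d) / (n + 2)) ** 0.5
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return d - z * se, d + z * se

# Hypothetical paired data: 15 pairs changed one way, 5 the other, out of 100
lo, hi = adjusted_wald_paired(n12=15, n21=5, n=100)
```

    The adjustment pulls the estimate slightly toward zero and keeps the interval well behaved when the discordant counts are small, which is where the unadjusted Wald interval performs worst.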

  13. Assessing total fungal concentrations on commercial passenger aircraft using mixed-effects modeling.

    PubMed

    McKernan, Lauralynn Taylor; Hein, Misty J; Wallingford, Kenneth M; Burge, Harriet; Herrick, Robert

    2008-01-01

    The primary objective of this study was to compare airborne fungal concentrations onboard commercial passenger aircraft at various in-flight times with concentrations measured inside and outside airport terminals. A secondary objective was to investigate the use of mixed-effects modeling of repeat measures from multiple sampling intervals and locations. Sequential triplicate culturable and total spore samples were collected on wide-body commercial passenger aircraft (n = 12) in the front and rear of coach class during six sampling intervals: boarding, midclimb, early cruise, midcruise, late cruise, and deplaning. Comparison samples were collected inside and outside airport terminals at the origin and destination cities. The MIXED procedure in SAS was used to model the mean and the covariance matrix of the natural-log-transformed fungal concentrations. Five covariance structures were tested to determine the appropriate models for analysis. Fixed effects considered included the sampling interval and, for samples obtained onboard the aircraft, location (front/rear of coach section), occupancy rate, and carbon dioxide concentrations. Overall, both culturable and total spore fungal concentrations were low while the aircraft were in flight. No statistical difference was observed between measurements made in the front and rear sections of the coach cabin for either culturable or total spore concentrations. Both culturable and total spore concentrations were significantly higher outside the airport terminal than inside the airport terminal (p-value < 0.0001) and inside the aircraft (p-value < 0.0001). On the aircraft, the majority of total fungal exposure occurred during the boarding and deplaning processes, when the aircraft utilized ancillary ventilation and passenger activity was at its peak.

  14. Reference Intervals for Urinary Cotinine Levels and the Influence of Sampling Time and Other Predictors on Its Excretion Among Italian Schoolchildren

    PubMed Central

    Protano, Carmela; Andreoli, Roberta; Manigrasso, Maurizio; Vitali, Matteo

    2018-01-01

    (1) Background: Environmental Tobacco Smoke (ETS) exposure remains a public health problem worldwide. The aims are to establish urinary (u-) cotinine reference values for healthy Italian children and to evaluate the effects of sampling time and other factors on children's u-cotinine excretion. (2) Methods: A cross-sectional study was performed on 330 children. Information on participants was gathered by a questionnaire, and u-cotinine was determined in two samples for each child, collected during the evening and the next morning. (3) Results: Reference intervals (as the 2.5th and 97.5th percentiles of the distribution) in evening and morning samples were respectively 0.98–4.29 and 0.91–4.50 µg/L (ETS unexposed) and 1.39–16.34 and 1.49–20.95 µg/L (ETS exposed). No statistically significant differences were found between median values in evening and morning samples, in either the ETS-unexposed or the ETS-exposed group. Significant predictors of u-cotinine excretion were ponderal status according to body mass index (β = 0.202, p-value = 0.041 for evening samples; β = 0.169, p-value = 0.039 for morning samples) and paternal educational level (β = −0.258, p-value = 0.010 for evening samples; β = −0.013, p-value = 0.003 for morning samples). (4) Conclusions: The results highlight the need for further studies assessing the role of confounding factors in ETS exposure, and the need for educational interventions for smokers to raise their awareness of ETS. PMID:29690510
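
    The reference intervals above are the 2.5th and 97.5th percentiles of the observed distribution; a minimal nonparametric sketch, with hypothetical u-cotinine values, is:

```python
def reference_interval(values, low=2.5, high=97.5):
    """Nonparametric reference interval: empirical percentiles
    with linear interpolation between order statistics."""
    xs = sorted(values)
    n = len(xs)
    def pct(p):
        k = (n - 1) * p / 100.0
        i = int(k)
        frac = k - i
        return xs[i] if i + 1 >= n else xs[i] + frac * (xs[i + 1] - xs[i])
    return pct(low), pct(high)

# Hypothetical u-cotinine values (µg/L) from unexposed children
sample = [1.0, 1.2, 1.5, 1.8, 2.0, 2.3, 2.7, 3.1, 3.6, 4.2]
lo, hi = reference_interval(sample)
```

    Nonparametric limits are generally recommended only with on the order of 120 or more reference subjects; the ten values here are purely illustrative.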

  15. Development of a New Paradigm for Analysis of Disdrometric Data

    NASA Astrophysics Data System (ADS)

    Larsen, Michael L.; Kostinski, Alexander B.

    2017-04-01

    A number of disdrometers currently on the market are able to characterize hydrometeors on a drop-by-drop basis with arrival timestamps associated with each arriving hydrometeor. This allows an investigator to parse a time series into disjoint intervals that have equal numbers of drops, instead of the traditional subdivision into equal time intervals. Such a "fixed-N" partitioning of the data can provide several advantages over the traditional equal time binning method, especially within the context of quantifying measurement uncertainty (which typically scales with the number of hydrometeors in each sample). An added bonus is the natural elimination of measurements that are devoid of all drops. This analysis method is investigated by utilizing data from a dense array of disdrometers located near Charleston, South Carolina, USA. Implications for the usefulness of this method in future studies are explored.
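
    The "fixed-N" partitioning described above can be sketched as follows; the arrival timestamps are hypothetical. Each group has exactly N drops, so the counting uncertainty per sample is constant, while the dwell time of each group varies with rain rate.

```python
def fixed_n_partition(arrival_times, n):
    """Split a sorted list of drop arrival timestamps (seconds) into
    disjoint groups of exactly n drops; the trailing remainder is dropped."""
    return [arrival_times[i:i + n]
            for i in range(0, len(arrival_times) - n + 1, n)]

# Hypothetical drop arrival times from a single disdrometer
times = [0.1, 0.4, 0.5, 1.2, 1.3, 1.9, 2.6, 3.0, 3.3, 4.1, 4.4]
groups = fixed_n_partition(times, 4)  # two full groups; the last 3 drops are unused
```

    Compare this with fixed-time binning, where a dry interval yields an empty bin; here empty samples cannot occur by construction, matching the abstract's point about eliminating drop-free measurements.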

  16. The influence of sampling interval on the accuracy of trail impact assessment

    USGS Publications Warehouse

    Leung, Y.-F.; Marion, J.L.

    1999-01-01

    Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. Accuracy loss in lineal extent estimates responded to increasing sampling intervals differently across impact types, while accuracy loss in frequency-of-occurrence estimates responded consistently, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent, but not the frequency, of trail impacts. Sampling intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than by the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing effort in data collection.
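
    The resampling simulation can be sketched as follows, with a hypothetical presence/absence census of one trail; note how the coarse point interval inflates the lineal extent estimate even when it happens to detect both impact occurrences.

```python
def systematic_sample(census, interval):
    """Resample a per-meter presence/absence census at a fixed point
    interval; estimate the number of distinct impact occurrences and
    the lineal extent (impacted fraction of points x trail length)."""
    points = census[::interval]
    occurrences = sum(1 for i, v in enumerate(points)
                      if v and (i == 0 or not points[i - 1]))
    lineal_extent = sum(points) / len(points) * len(census)
    return occurrences, lineal_extent

# Hypothetical 1 km trail with erosion on meters 100-249 and 600-649
census = [1 if 100 <= m < 250 or 600 <= m < 650 else 0 for m in range(1000)]
occ, extent = systematic_sample(census, 100)   # 100 m point interval
```

    With a 1 m "census" interval the true lineal extent (200 m) is recovered exactly; at 100 m spacing the estimate is 300 m, illustrating the accuracy loss the study quantifies.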

  17. National Survey of Adult and Pediatric Reference Intervals in Clinical Laboratories across Canada: A Report of the CSCC Working Group on Reference Interval Harmonization.

    PubMed

    Adeli, Khosrow; Higgins, Victoria; Seccombe, David; Collier, Christine P; Balion, Cynthia M; Cembrowski, George; Venner, Allison A; Shaw, Julie

    2017-11-01

    Reference intervals are widely used decision-making tools in laboratory medicine, serving as health-associated standards to interpret laboratory test results. Numerous studies have shown wide variation in reference intervals, even between laboratories using assays from the same manufacturer. Lack of consistency in either sample measurement or reference intervals across laboratories challenges the expectation of standardized patient care regardless of testing location. Here, we present data from a national survey conducted by the Canadian Society of Clinical Chemists (CSCC) Reference Interval Harmonization (hRI) Working Group that examines variation in laboratory reference sample measurements, as well as pediatric and adult reference intervals currently used in clinical practice across Canada. Data on reference intervals currently used by 37 laboratories were collected through a national survey to examine the variation in reference intervals for seven common laboratory tests. Additionally, 40 clinical laboratories participated in a baseline assessment by measuring six analytes in a reference sample. Of the seven analytes examined, alanine aminotransferase (ALT), alkaline phosphatase (ALP), and creatinine reference intervals were most variable. As expected, reference interval variation was more substantial in the pediatric population and varied between laboratories using the same instrumentation. Reference sample results differed between laboratories, particularly for ALT and free thyroxine (FT4). Reference interval variation was greater than test result variation for the majority of analytes. It is evident that there is a critical lack of harmonization in laboratory reference intervals, particularly for the pediatric population. Furthermore, the observed variation in reference intervals across instruments cannot be explained by the bias between the results obtained on instruments by different manufacturers. 
Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  18. Complex reference values for endocrine and special chemistry biomarkers across pediatric, adult, and geriatric ages: establishment of robust pediatric and adult reference intervals on the basis of the Canadian Health Measures Survey.

    PubMed

    Adeli, Khosrow; Higgins, Victoria; Nieuwesteeg, Michelle; Raizman, Joshua E; Chen, Yunqi; Wong, Suzy L; Blais, David

    2015-08-01

    Defining laboratory biomarker reference values in a healthy population and understanding the fluctuations in biomarker concentrations throughout life and between sexes are critical to clinical interpretation of laboratory test results in different disease states. The Canadian Health Measures Survey (CHMS) has collected blood samples and health information from the Canadian household population. In collaboration with the Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER), the data have been analyzed to determine reference value distributions and reference intervals for several endocrine and special chemistry biomarkers in pediatric, adult, and geriatric age groups. CHMS collected data and blood samples from thousands of community participants aged 3 to 79 years. We used serum samples to measure 13 immunoassay-based special chemistry and endocrine markers. We assessed reference value distributions and, after excluding outliers, calculated age- and sex-specific reference intervals, along with corresponding 90% CIs, according to CLSI C28-A3 guidelines. We observed fluctuations in biomarker reference values across the pediatric, adult, and geriatric age range, with stratification required on the basis of age for all analytes. Additional sex partitions were required for apolipoprotein AI, homocysteine, ferritin, and high sensitivity C-reactive protein. The unique collaboration between CALIPER and CHMS has enabled, for the first time, a detailed examination of the changes in various immunochemical markers that occur in healthy individuals of different ages. The robust age- and sex-specific reference intervals established in this study provide insight into the complex biological changes that take place throughout development and aging and will contribute to improved clinical test interpretation. © 2015 American Association for Clinical Chemistry.

  19. Average variograms to guide soil sampling

    NASA Astrophysics Data System (ADS)

    Kerry, R.; Oliver, M. A.

    2004-10-01

    To manage land in a site-specific way for agriculture requires detailed maps of the variation in the soil properties of interest. To predict accurately for mapping, the interval at which the soil is sampled should relate to the scale of spatial variation. A variogram can be used to guide sampling in two ways. A sampling interval of less than half the range of spatial dependence can be used, or the variogram can be used with the kriging equations to determine an optimal sampling interval that achieves a given tolerable error. A variogram might not be available for the site, but if variograms of several soil properties are available for a similar parent material and/or particular topographic positions, an average variogram can be calculated from them. Averages of the variogram ranges and standardized average variograms from four different parent materials in southern England were used to suggest suitable sampling intervals, based on half the variogram range, for future surveys in similar pedological settings. The standardized average variograms were also used to determine optimal sampling intervals using the kriging equations. Similar sampling intervals were suggested by each method, and the maps of predictions based on data at different grid spacings were evaluated for the different parent materials. Variograms of loss on ignition (LOI) taken from the literature for other sites in southern England with similar parent materials had ranges close to the average for a given parent material, showing the possible wider application of such averages to guide sampling.
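
    The two ingredients above, an empirical semivariance and the half-range rule applied to averaged ranges, can be sketched as follows; the fitted ranges are hypothetical, not the paper's values.

```python
def semivariance(values, lag):
    """Empirical semivariance gamma(h) for a regularly spaced 1-D transect."""
    pairs = [(values[i], values[i + lag]) for i in range(len(values) - lag)]
    return sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))

def suggested_interval(ranges_m):
    """Half the average variogram range, per the half-range rule of thumb."""
    return 0.5 * sum(ranges_m) / len(ranges_m)

# Hypothetical fitted ranges (m) for four soil properties on one parent material
interval = suggested_interval([120.0, 150.0, 90.0, 140.0])  # 62.5 m
```

    The kriging-equations route instead chooses the coarsest grid whose maximum kriging variance stays below the tolerable error; the abstract reports that both routes gave similar intervals.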

  20. Multi-year longitudinal profiles of cortisol and corticosterone recovered from baleen of North Atlantic right whales (Eubalaena glacialis).

    PubMed

    Hunt, Kathleen E; Lysiak, Nadine S; Moore, Michael; Rolland, Rosalind M

    2017-12-01

    Research into stress physiology of mysticete whales has been hampered by difficulty in obtaining repeated physiological samples from individuals over time. We investigated whether multi-year longitudinal records of glucocorticoids can be reconstructed from serial sampling along full-length baleen plates (representing ∼10 years of baleen growth), using baleen recovered from two female North Atlantic right whales (Eubalaena glacialis) of known reproductive history. Cortisol and corticosterone were quantified with immunoassay of subsamples taken every 4 cm (representing ∼60 d time intervals) along a full-length baleen plate from each female. In both whales, corticosterone was significantly elevated during known pregnancies (inferred from calf sightings and necropsy data) as compared to intercalving intervals; cortisol was significantly elevated during pregnancies in one female but not the other. Within intercalving intervals, corticosterone was significantly elevated during the first year (lactation year) and/or the second year (post-lactation year) as compared to later years of the intercalving interval, while cortisol showed more variable patterns. Cortisol occasionally showed brief high elevations ("spikes") not paralleled by corticosterone, suggesting that the two glucocorticoids might be differentially responsive to certain stressors. Generally, immunoreactive corticosterone was present in higher concentration in baleen than immunoreactive cortisol; the corticosterone:cortisol ratio was usually >4 and was highly variable in both individuals. Further investigation of baleen cortisol and corticosterone profiles could prove fruitful for elucidating long-term, multi-year patterns in stress physiology of large whales, determined retrospectively from stranded or archived specimens. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Mass load estimation errors utilizing grab sampling strategies in a karst watershed

    USGS Publications Warehouse

    Fogle, A.W.; Taraba, J.L.; Dinger, J.S.

    2003-01-01

    Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects undertaken are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling with continuous flow to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.
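
    The load calculation and the diurnal-timing bias discussed above can be sketched as follows; the concentration pattern and flows are hypothetical, and the grab-sample estimate simply scales one instantaneous load to a full day.

```python
def mass_load(concentrations, flows, dt_seconds):
    """Rectangular-rule mass load: sum(C_i * Q_i * dt).
    Concentrations in mg/L and flows in L/s give load in mg."""
    return sum(c * q * dt_seconds for c, q in zip(concentrations, flows))

# Hypothetical diurnal pattern sampled hourly: concentration peaks near midday
conc = [4 + 2 * (1 if 10 <= h <= 14 else 0) for h in range(24)]  # mg/L
flow = [50.0] * 24                                               # L/s, constant
true_load = mass_load(conc, flow, 3600)

# A single grab collected at the midday peak, scaled to the whole day
grab_load = conc[12] * flow[12] * 3600 * 24
```

    Here the peak-time grab overestimates the daily load by well over 30 percent, which is the kind of error the study attributes to ignoring diurnal variability; a grab taken when the cycle crosses the daily mean would match the true load.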

  2. A national cross-sectional study of adherence to timely mammography use in Malta.

    PubMed

    Marmarà, Danika; Marmarà, Vincent; Hubbard, Gill

    2018-03-27

    Routine mammography improves survival. To achieve health benefits, women must attend breast screening regularly at recommended time intervals. Maltese women are routinely invited to undergo mammography at three-year intervals at an organized breast screening programme (MBSP) or can opt to attend a private clinic. Previous research shows that health beliefs, particularly perceived barriers, were the most significant predictors of uptake to the first MBSP invitation. Whether these beliefs and other factors are predictive of adherence to recommended time intervals for mammography at organized or private screening in Malta is unknown. For the first time, this paper explores the predictors for Maltese women screened within or exceeding the recommended three-year frequency in organized or private screening in Malta. Information was obtained from a cross-sectional survey of 404 women, aged 50 to 60 years at the time of their first MBSP invitation, where women's characteristics, knowledge, health beliefs and illness perceptions were compared. The main variable of interest was women's mammography attendance within a three-year interval (ADHERENT) or exceeding three years (NON-ADHERENT). Data were analysed using descriptive statistics, the chi-square test, Mann-Whitney test, Independent Samples t-test and Shapiro-Wilk test. At the time of the survey, 80.2% (n = 324) had been screened within three years (ADHERENT), 5.9% (n = 24) had exceeded the three-year frequency (NON-ADHERENT), while 13.9% (n = 56) never had a mammogram. No significant associations were found between ADHERENT and NON-ADHERENT women in relation to sociodemographic or health status variables (p > 0.05). Knowledge of screening frequency was significantly associated with women's mammography adherence (χ² = 5.5, p = 0.020). Health beliefs were the strongest significant predictors of the difference between ADHERENT and NON-ADHERENT screeners. When the Mann-Whitney test and Independent Samples t-test were applied to mammography adherence, perceived barriers and cues to action were found to be the most important predictors (p < 0.001 and p = 0.039, respectively). To increase routine and timely mammography practices, women who are non-adherent to recommended time frequency guidelines should be targeted, together with their health beliefs, predominantly perceived barriers and cues to action.

  3. 40 CFR 91.314 - Analyzer accuracy and specifications.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... deflection should generally not be used. (2) Some high resolution read-out systems, such as computers, data...-second time interval. (b) Operating procedure for analyzers and sampling system. Follow the start-up and... systems may be used provided that additional calibrations are made to ensure the accuracy of the...

  4. Analysis of vector wind change with respect to time for Vandenberg Air Force Base, California

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1978-01-01

    A statistical analysis of the temporal variability of wind vectors at 1 km altitude intervals from 0 to 27 km altitude, taken from a 10-year data sample of twice-daily rawinsonde wind measurements over Vandenberg Air Force Base, California, is presented.

  5. Rate of Visual Information Pick-Up in Learning Disabled and Normal Boys.

    ERIC Educational Resources Information Center

    Bryant, Susan K.; And Others

    1983-01-01

    A span-of-apprehension task and a backward masking technique were combined to allow measurement of the apprehension span of a sample of 34 learning disabled and normal boys about 8 to 13 years old at various time intervals following stimulus presentation. (Author/SW)

  6. Statistical analysis of environmental monitoring data: does a worst case time for monitoring clean rooms exist?

    PubMed

    Cundell, A M; Bean, R; Massimore, L; Maier, C

    1998-01-01

    To determine the relationship between the sampling time of environmental monitoring (i.e., viable counts) in aseptic filling areas and the microbial count and frequency of alerts for air, surface, and personnel microbial monitoring, statistical analyses were conducted on 1) the frequency of alerts versus the time of day for routine environmental sampling conducted in calendar year 1994, and 2) environmental monitoring data collected at 30-minute intervals during routine aseptic filling operations over two separate days in four different clean rooms with multiple shifts and equipment set-ups at a parenteral manufacturing facility. Statistical analyses showed that, except for one floor location that had a significantly higher number of counts but no alert- or action-level samplings in the first two hours of operation, there was no relationship between the number of counts and the time of sampling. Further studies over a 30-day period at that floor location showed no relationship between time of sampling and microbial counts. The conclusion reached in the study was that there is no worst-case time for environmental monitoring at that facility and that sampling at any time during the aseptic filling operation will give a satisfactory measure of the microbial cleanliness in the clean room during the set-up and aseptic filling operation.

  7. Geologic and hydraulic characteristics of selected shaly geologic units in Oklahoma

    USGS Publications Warehouse

    Becker, C.J.; Overton, M.D.; Johnson, K.S.; Luza, K.V.

    1997-01-01

    Information was collected on the geologic and hydraulic characteristics of three shale-dominated units in Oklahoma: the Dog Creek Shale and Chickasha Formation in Canadian County, the Hennessey Group in Oklahoma County, and the Boggy Formation in Pittsburg County. The purpose of this project was to gain insight into the characteristics controlling fluid flow in shaly units that could be targeted for confinement of hazardous waste in the State and to evaluate methods of measuring hydraulic characteristics of shales. Permeameter results may not indicate in-place small-scale hydraulic characteristics, due to pretest disturbance and deterioration of core samples. The Dog Creek Shale and Chickasha Formation hydraulic conductivities measured by permeameter methods ranged from 2.8 × 10⁻¹¹ to 3.0 × 10⁻⁷ meter per second in nine samples, and specific storage ranged from 3.3 × 10⁻⁴ to 1.6 × 10⁻³ per meter in four samples. Hennessey Group hydraulic conductivities ranged from 4.0 × 10⁻¹² to 4.0 × 10⁻¹⁰ meter per second in eight samples. Hydraulic conductivity in the Boggy Formation ranged from 1.7 × 10⁻¹² to 1.0 × 10⁻⁸ meter per second in 17 samples. The hydraulic properties of isolated borehole intervals of average length 4.5 meters in the Hennessey Group and the Boggy Formation were evaluated by a pressurized slug-test method. Hydraulic conductivities obtained with this method tend to be low because intervals with features that transmitted large volumes of water were not tested. Hennessey Group hydraulic conductivities measured by this method ranged from 3.0 × 10⁻¹³ to 1.1 × 10⁻⁹ meter per second; the specific storage values are small and may be unreliable.
    Boggy Formation hydraulic conductivities ranged from 2.0 × 10⁻¹³ to 2.7 × 10⁻¹⁰ meter per second, and specific storage values in these tests also are small and may be unreliable. A substantially higher hydraulic conductivity of 3.0 × 10⁻⁸ meter per second was measured in one 30-meter-deep borehole in the Boggy Formation using an open-hole slug-test method.

  8. Oversampling of digitized images. [effects on interpolation in signal processing

    NASA Technical Reports Server (NTRS)

    Fischel, D.

    1976-01-01

    Oversampling is defined as sampling with a device whose characteristic width is greater than the interval between samples. This paper shows why oversampling should be avoided and discusses the limitations in data processing if circumstances dictate that oversampling cannot be circumvented. Principally, oversampling should not be used to provide interpolating data points. Rather, the time spent oversampling should be used to obtain more signal with less relative error, and the Sampling Theorem should be employed to provide any desired interpolated values. The concepts are applicable to single-element and multielement detectors.
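
    The Sampling Theorem interpolation advocated above can be sketched with Whittaker-Shannon (sinc) reconstruction; truncating the sum to a finite record introduces a small error, so the signal and sample count here are illustrative.

```python
import math

def sinc_interpolate(samples, T, t):
    """Whittaker-Shannon reconstruction of a bandlimited signal from
    samples taken at interval T; no oversampling is needed to interpolate."""
    total = 0.0
    for n, x in enumerate(samples):
        u = (t - n * T) / T
        total += x * (1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u))
    return total

# A 1 Hz sine sampled at 8 Hz (above Nyquist), reconstructed between samples
T = 1 / 8
samples = [math.sin(2 * math.pi * n * T) for n in range(400)]
mid = sinc_interpolate(samples, T, 25.0625)  # a point between two sample times
```

    This is the paper's recommendation in code form: spend the observing time on well-separated samples with better signal-to-noise, and let sinc interpolation supply any intermediate values.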

  9. Reaching multi-nanosecond timescales in combined QM/MM molecular dynamics simulations through parallel horsetail sampling.

    PubMed

    Martins-Costa, Marilia T C; Ruiz-López, Manuel F

    2017-04-15

    We report an enhanced sampling technique that makes it possible to reach the multi-nanosecond timescale in quantum mechanics/molecular mechanics molecular dynamics simulations. The proposed technique, called horsetail sampling, is a specific type of multiple molecular dynamics approach exhibiting high parallel efficiency. It couples a main simulation with a large number of shorter trajectories launched on independent processors at periodic time intervals. The technique is applied to study hydrogen peroxide at the water liquid-vapor interface, a system of considerable atmospheric relevance. A total simulation time of a little more than 6 ns was attained for a total CPU time of 5.1 years, representing only about 20 days of wall-clock time. The discussion of the results highlights the strong influence of solvation effects at the interface on the structure and the electronic properties of the solute. © 2017 Wiley Periodicals, Inc.

  10. Gravity separation of pericardial fat in cardiotomy suction blood: an in vitro model.

    PubMed

    Kinard, M Rhett; Shackelford, Anthony G; Sistino, Joseph J

    2009-06-01

    Fat emboli generated during cardiac surgery have been shown to cause neurologic complications in patients postoperatively. Cardiotomy suction has been known to be a large generator of emboli. This study examines the efficacy of a separation technique in which the cardiotomy suction blood is stored in a cardiotomy reservoir for various time intervals to allow spontaneous separation of fat from blood by density. Soybean oil was added to heparinized porcine blood to simulate the blood of a patient with hypertriglyceridemia (>150 mg/dL). Roller pump suction was used to transfer the room-temperature blood into the cardiotomy reservoir. Blood was removed from the reservoir in 200-mL aliquots at 0, 15, 30, 45, and 60 minutes. Samples were taken at each interval and centrifuged to facilitate further separation of liquid fat. Fat content in each sample was determined by a point-of-care triglyceride analyzer. Three trials were conducted, for a total of 30 samples. The 0-minute group was considered a baseline and was compared to the other four times. Fat concentration was reduced significantly in the 45- and 60-minute groups compared to the 0-, 15-, and 30-minute groups (p < .05). Gravity separation of cardiotomy suction blood is effective; however, it may require retention of blood for more time than is clinically acceptable during routine coronary artery bypass graft surgery.

  11. Time-Resolved Molecular Characterization of Limonene/Ozone Aerosol using High-Resolution Electrospray Ionization Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bateman, Adam P.; Nizkorodov, Serguei; Laskin, Julia

    2009-09-09

    Molecular composition of limonene/O3 secondary organic aerosol (SOA) was investigated using high-resolution electrospray ionization mass spectrometry (HR-ESI-MS) as a function of reaction time. SOA was generated by ozonation of D-limonene in a reaction chamber and sampled at different time intervals using a cascade impactor. The SOA samples were extracted into acetonitrile and analyzed using a HR-ESI-MS instrument with a resolving power of 100,000 (m/Δm). The resulting mass spectra provided detailed information about the extent of oxidation inferred from the O:C ratios, double bond equivalency (DBE) factors, and aromaticity indexes (AI) in hundreds of identified individual SOA species.
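
    The O:C ratio and double bond equivalency (DBE) values mentioned above follow directly from an assigned molecular formula; a minimal sketch, in which the product formula C10H16O3 is an illustrative assumption rather than a species reported in the paper:

```python
def dbe(c, h, n=0):
    """Double bond equivalents: DBE = 1 + C - H/2 + N/2 (oxygen has no effect)."""
    return 1 + c - h / 2 + n / 2

def o_to_c(c, o):
    """Elemental O:C ratio, a common proxy for the extent of oxidation."""
    return o / c

# D-limonene is C10H16 (DBE 3: one ring plus two C=C bonds);
# C10H16O3 is a hypothetical ozonolysis-product formula for illustration
limonene_dbe = dbe(10, 16)
product_oc = o_to_c(10, 3)
```

    Applied across hundreds of assigned formulas, these two quantities summarize how oxidation progresses with reaction time, which is what the HR-ESI-MS analysis tracks.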

  12. Tracking a changing environment: optimal sampling, adaptive memory and overnight effects.

    PubMed

    Dunlap, Aimee S; Stephens, David W

    2012-02-01

    Foraging in a variable environment presents a classic problem of decision making with incomplete information. Animals must track the changing environment, remember the best options and make choices accordingly. While several experimental studies have explored the idea that sampling behavior reflects the amount of environmental change, we take the next logical step in asking how change influences memory. We explore the hypothesis that memory length should be tied to the ecological relevance and the value of the information learned, and that environmental change is a key determinant of the value of memory. We use a dynamic programming model to confirm our predictions and then test memory length in a factorial experiment. In our experimental situation we manipulate rates of change in a simple foraging task for blue jays over a 36 h period. After jays experienced an experimentally determined change regime, we tested them at a range of retention intervals, from 1 to 72 h. Manipulated rates of change influenced learning and sampling rates: subjects sampled more and learned more quickly in the high change condition. Tests of retention revealed significant interactions between retention interval and the experienced rate of change. We observed a striking and surprising difference between the high and low change treatments at the 24 h retention interval. In agreement with earlier work we find that a circadian retention interval is special, but we find that the extent of this 'specialness' depends on the subject's prior experience of environmental change. Specifically, experienced rates of change seem to influence how subjects balance recent information against past experience in a way that interacts with the passage of time. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Method and Apparatus for Evaluating Multilayer Objects for Imperfections

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph S. (Inventor); Abedin, Nurul (Inventor); Sun, Kuen J. (Inventor)

    1999-01-01

    A multilayer object having multiple layers arranged in a stacking direction is evaluated for imperfections such as voids, delaminations and microcracks. First, an acoustic wave is transmitted into the object in the stacking direction via an appropriate transducer/waveguide combination. The wave propagates through the multilayer object and is received by another transducer/waveguide combination preferably located on the same surface as the transmitting combination. The received acoustic wave is correlated with the presence or absence of imperfections by, e.g., generating pulse echo signals indicative of the received acoustic wave, wherein the successive signals form distinct groups over time. The respective peak amplitudes of each group are sampled and curve fit to an exponential curve, wherein a substantial fit of approximately 80-90% indicates an absence of imperfections and a significant deviation indicates the presence of imperfections. Alternatively, the time interval between distinct groups can be measured, wherein equal intervals indicate the absence of imperfections and unequal intervals indicate the presence of imperfections.

  14. Method and apparatus for evaluating multilayer objects for imperfections

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph S. (Inventor); Abedin, Nurul (Inventor); Sun, Kuen J. (Inventor)

    1997-01-01

    A multilayer object having multiple layers arranged in a stacking direction is evaluated for imperfections such as voids, delaminations and microcracks. First, an acoustic wave is transmitted into the object in the stacking direction via an appropriate transducer/waveguide combination. The wave propagates through the multilayer object and is received by another transducer/waveguide combination preferably located on the same surface as the transmitting combination. The received acoustic wave is correlated with the presence or absence of imperfections by, e.g., generating pulse echo signals indicative of the received acoustic wave, wherein the successive signals form distinct groups over time. The respective peak amplitudes of each group are sampled and curve fit to an exponential curve, wherein a substantial fit of approximately 80-90% indicates an absence of imperfections and a significant deviation indicates the presence of imperfections. Alternatively, the time interval between distinct groups can be measured, wherein equal intervals indicate the absence of imperfections and unequal intervals indicate the presence of imperfections.
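
    The curve-fitting step described in both patents can be sketched as a log-linear least-squares fit of the sampled peak amplitudes, with the coefficient of determination standing in for the 80-90% fit criterion. The decay constant and amplitudes below are illustrative assumptions, not values from the patent:

```python
import math

def exp_fit_quality(amplitudes):
    """Least-squares fit of log(amplitude) vs. echo-group index.
    Returns (A0, decay_per_echo, r_squared); a high r_squared
    (e.g. >= ~0.8-0.9) indicates clean exponential decay."""
    n = len(amplitudes)
    xs = list(range(n))
    ys = [math.log(a) for a in amplitudes]
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return math.exp(intercept), slope, r2

# A clean pulse-echo train: each successive echo group at 70% amplitude.
clean = [100 * 0.7 ** k for k in range(6)]
print(exp_fit_quality(clean)[2])   # near-perfect fit suggests no imperfections
```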

  15. Balloon borne in-situ detection of OH in the stratosphere from 37 to 23 km

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpfle, R.M.; Lapson, L.B.; Wennberg, P.O.; Anderson, J.G.

    1989-12-01

    The OH number density in the stratosphere has been measured over the altitude interval of 37 to 23 km at midday via a balloon-borne gondola launched from Palestine, Texas on July 6, 1988. OH radicals are detected with a laser-induced fluorescence instrument employing a 17 kHz repetition rate copper vapor laser pumped dye laser optically coupled to an enclosed-flow, in-situ sampling chamber. OH abundances ranged from 88 ± 31 pptv (1.1 ± 0.4 × 10⁷ molec cm⁻³) in the 36 to 35 km interval to 0.9 ± 0.8 pptv (8.7 ± 7.7 × 10⁵ molec cm⁻³) in the 24 to 23 km interval. The stated uncertainty (±1σ) includes both measurement precision and accuracy. Simultaneous detection of ozone and water vapor densities was carried out with separate on-board instruments.

  16. Combining Speed Information Across Space

    NASA Technical Reports Server (NTRS)

    Verghese, Preeti; Stone, Leland S.

    1995-01-01

    We used speed discrimination tasks to measure the ability of observers to combine speed information from multiple stimuli distributed across space. We compared speed discrimination thresholds in a classical discrimination paradigm to those in an uncertainty/search paradigm. Thresholds were measured using a temporal two-interval forced-choice design. In the discrimination paradigm, the n gratings in each interval all moved at the same speed and observers were asked to choose the interval with the faster gratings. Discrimination thresholds for this paradigm decreased as the number of gratings increased. This decrease was not due to increasing the effective stimulus area as a control experiment that increased the area of a single grating did not show a similar improvement in thresholds. Adding independent speed noise to each of the n gratings caused thresholds to decrease at a rate similar to the original no-noise case, consistent with observers combining an independent sample of speed from each grating in both the added- and no-noise cases. In the search paradigm, observers were asked to choose the interval in which one of the n gratings moved faster. Thresholds in this case increased with the number of gratings, behavior traditionally attributed to an input bottleneck. However, results from the discrimination paradigm showed that the increase was not due to observers' inability to process these gratings. We have also shown that the opposite trends of the data in the two paradigms can be predicted by a decision theory model that combines independent samples of speed information across space. This demonstrates that models typically used in classical detection and discrimination paradigms are also applicable to search paradigms. As our model does not distinguish between samples in space and time, it predicts that discrimination performance should be the same regardless of whether the gratings are presented in two spatial intervals or two temporal intervals. 
Our last experiment largely confirmed this prediction.
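
    The decision-theory account above can be illustrated with a toy two-interval simulation: averaging one noisy speed sample per grating predicts improving discrimination as n grows, while taking the maximum over gratings (to find the single faster target) predicts worsening search performance. A hedged sketch with arbitrary noise units, not the authors' fitted model:

```python
import random

random.seed(0)

def pc_discrimination(n, delta, sigma=1.0, trials=4000):
    """2IFC discrimination: all n gratings in the signal interval are faster
    by delta. The observer averages one noisy sample per grating and picks
    the interval with the larger mean."""
    correct = 0
    for _ in range(trials):
        sig = sum(delta + random.gauss(0, sigma) for _ in range(n)) / n
        nul = sum(random.gauss(0, sigma) for _ in range(n)) / n
        correct += sig > nul
    return correct / trials

def pc_search(n, delta, sigma=1.0, trials=4000):
    """2IFC search: only one of n gratings in the signal interval is faster
    by delta. The observer picks the interval with the larger maximum sample."""
    correct = 0
    for _ in range(trials):
        sig = max([delta + random.gauss(0, sigma)] +
                  [random.gauss(0, sigma) for _ in range(n - 1)])
        nul = max(random.gauss(0, sigma) for _ in range(n))
        correct += sig > nul
    return correct / trials

# Averaging improves with n; finding one odd target gets harder with n.
print(pc_discrimination(1, 1.0), pc_discrimination(8, 1.0))
print(pc_search(1, 1.0), pc_search(8, 1.0))
```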

  17. 40 CFR 1065.245 - Sample flow meter for batch sampling.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... rates or total flow sampled into a batch sampling system over a test interval. You may use the... rates or total raw exhaust flow over a test interval. (b) Component requirements. We recommend that you... averaging Pitot tube, or a hot-wire anemometer. Note that your overall system for measuring sample flow must...

  18. System implications of the ambulance arrival-to-patient contact interval on response interval compliance.

    PubMed

    Campbell, J P; Gratton, M C; Salomone, J A; Lindholm, D J; Watson, W A

    1994-01-01

    In some emergency medical services (EMS) system designs, response time intervals are mandated with monetary penalties for noncompliance. These times are set with the goal of providing rapid, definitive patient care. The time interval of vehicle at scene-to-patient access (VSPA) has been measured, but its effect on response time interval compliance has not been determined. To determine the effect of the VSPA interval on the mandated code 1 (< 9 min) and code 2 (< 13 min) response time interval compliance in an urban, public-utility model system. A prospective, observational study used independent third-party riders to collect the VSPA interval for emergency life-threatening (code 1) and emergency nonlife-threatening (code 2) calls. The VSPA interval was added to the 9-1-1 call-to-dispatch and vehicle dispatch-to-scene intervals to determine the total time interval from call received until paramedic access to the patient (9-1-1 call-to-patient access). Compliance with the mandated response time intervals was determined using the traditional time intervals (9-1-1 call-to-scene) plus the VSPA time intervals (9-1-1 call-to-patient access). Chi-square was used to determine statistical significance. Of the 216 observed calls, 198 were matched to the traditional time intervals. Sixty-three were code 1, and 135 were code 2. Of the code 1 calls, 90.5% were compliant using 9-1-1 call-to-scene intervals dropping to 63.5% using 9-1-1 call-to-patient access intervals (p < 0.0005). Of the code 2 calls, 94.1% were compliant using 9-1-1 call-to-scene intervals. Compliance decreased to 83.7% using 9-1-1 call-to-patient access intervals (p = 0.012). The addition of the VSPA interval to the traditional time intervals impacts system response time compliance. Using 9-1-1 call-to-scene compliance as a basis for measuring system performance underestimates the time for the delivery of definitive care. This must be considered when response time interval compliances are defined.
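
    The compliance comparison above reduces to a 2x2 chi-square test on compliant/noncompliant counts under the two interval definitions. A minimal sketch; the counts are reconstructed approximately from the code 1 percentages reported (90.5% and 63.5% of 63 calls) purely for illustration:

```python
def compliance(times, limit):
    """Fraction of response times (minutes) within the mandated limit."""
    return sum(t < limit for t in times) / len(times)

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for a 2x2 table
    [[a, b], [c, d]] of compliant/noncompliant counts in two conditions."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Code 1 calls: call-to-scene vs. call-to-patient-access compliance counts.
scene_ok, scene_no = 57, 6     # ~90.5% of 63
access_ok, access_no = 40, 23  # ~63.5% of 63
print(chi_square_2x2(scene_ok, scene_no, access_ok, access_no))

print(compliance([5, 8, 12], 9))  # 2 of 3 calls under a 9-minute limit
```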

  19. Ehrenfest model with large jumps in finance

    NASA Astrophysics Data System (ADS)

    Takahashi, Hisanao

    2004-02-01

    Changes (returns) in stock index prices and exchange rates for currencies are argued, based on empirical data, to obey a stable distribution with characteristic exponent α<2 for short sampling intervals and a Gaussian distribution for long sampling intervals. In order to explain this phenomenon, an Ehrenfest model with large jumps (ELJ) is introduced to explain the empirical density function of price changes for both short and long sampling intervals.
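
    The qualitative crossover described here (fat-tailed at short sampling intervals, near-Gaussian at long ones) can be reproduced with any finite-variance heavy-tailed tick distribution, since sums of such ticks obey the central limit theorem; a strictly α-stable law with α < 2 would not converge. The sketch below uses Student-t ticks as an illustrative stand-in, not the ELJ model itself:

```python
import random
import statistics

random.seed(1)

def heavy_tailed_step():
    """One heavy-tailed but finite-variance 'tick' return: Student-t with
    5 degrees of freedom, built as gauss / sqrt(chi2 / df)."""
    df = 5
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    return random.gauss(0, 1) / (chi2 / df) ** 0.5

def excess_kurtosis(xs):
    """Sample excess kurtosis: ~0 for a Gaussian, large for fat tails."""
    m = statistics.fmean(xs)
    s2 = statistics.fmean([(x - m) ** 2 for x in xs])
    m4 = statistics.fmean([(x - m) ** 4 for x in xs])
    return m4 / s2 ** 2 - 3

ticks = [heavy_tailed_step() for _ in range(40000)]             # short sampling interval
agg = [sum(ticks[i:i + 40]) for i in range(0, len(ticks), 40)]  # long sampling interval

print(excess_kurtosis(ticks))  # clearly fat-tailed
print(excess_kurtosis(agg))    # much closer to Gaussian
```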

  20. Construction and testing of a simple and economical soil greenhouse gas automatic sampler

    USGS Publications Warehouse

    Ginting, D.; Arnold, S.L.; Arnold, N.S.; Tubbs, R.S.

    2007-01-01

    Quantification of soil greenhouse gas emissions requires considerable sampling to account for spatial and/or temporal variation. With manual sampling, additional personnel are often not available to sample multiple sites within a narrow time interval. The objectives were to construct an automatic gas sampler and to compare the accuracy and precision of automatic versus manual sampling. The automatic sampler was tested with carbon dioxide (CO2) fluxes that mimicked the range of CO2 fluxes during a typical corn-growing season in eastern Nebraska. Gas samples were drawn from the chamber at 0, 5, and 10 min manually and with the automatic sampler. The three samples drawn with the automatic sampler were transferred to pre-vacuumed vials after 1 h; thus the samples in the syringe barrels remained connected to the increasing CO2 concentration in the chamber. The automatic sampler sustains accuracy and precision in greenhouse gas sampling while improving time efficiency and reducing labor stress. Copyright © Taylor & Francis Group, LLC.
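
    Chamber samples like these typically feed a flux estimate taken as the least-squares slope of concentration against time over the 0-10 min deployment. A minimal sketch; the ppm values are invented for illustration, and converting ppm/min to a mass flux would additionally need chamber volume and area:

```python
def chamber_flux(times_min, concs_ppm):
    """Least-squares slope of chamber CO2 concentration vs. time, in ppm/min.
    (A mass flux would also require chamber volume/area, omitted here.)"""
    n = len(times_min)
    tbar = sum(times_min) / n
    cbar = sum(concs_ppm) / n
    num = sum((t - tbar) * (c - cbar) for t, c in zip(times_min, concs_ppm))
    den = sum((t - tbar) ** 2 for t in times_min)
    return num / den

# Samples drawn at 0, 5, and 10 min, as in the protocol above.
print(chamber_flux([0, 5, 10], [400, 460, 520]))  # 12.0 ppm/min
```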

  1. Blood and Plasma Biochemistry Reference Intervals for Wild Juvenile American Alligators (Alligator mississippiensis).

    PubMed

    Hamilton, Matthew T; Kupar, Caitlin A; Kelley, Meghan D; Finger, John W; Tuberville, Tracey D

    2016-07-01

    American alligators (Alligator mississippiensis) are one of the most studied crocodilian species in the world, yet blood and plasma biochemistry information is limited for juvenile alligators in their northern range, where individuals may be exposed to extreme abiotic and biotic stressors. We collected blood samples over a 2-yr period from 37 juvenile alligators in May, June, and July to establish reference intervals for 22 blood and plasma analytes. We observed no effect of either sex or blood collection time on any analyte investigated. However, our results indicate a significant correlation between a calculated body condition index and aspartate aminotransferase and creatine kinase. Glucose, total protein, and potassium varied significantly between sampling sessions. In addition, glucose and potassium were highly correlated between the two point-of-care devices used, although they were significantly lower with the i-STAT 1 CG8+ cartridge than with the Vetscan VS2 Avian/Reptile Rotor. The reference intervals presented herein should provide baseline data for evaluating wild juvenile alligators in the northern portion of their range.
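
    A common nonparametric way to derive a reference interval is the central 95% of the observed distribution (2.5th to 97.5th percentiles). A sketch with invented analyte values; this is a generic method, not a reproduction of the study's data or its exact procedure, and with only ~37 animals such percentile estimates are imprecise:

```python
import statistics

def reference_interval(values, coverage=0.95):
    """Nonparametric reference interval: the central `coverage` fraction of
    the observed distribution (2.5th-97.5th percentiles for coverage=0.95)."""
    qs = statistics.quantiles(values, n=1000, method="inclusive")
    tail = (1 - coverage) / 2
    lo = qs[round(tail * 1000) - 1]        # ~2.5th percentile
    hi = qs[round((1 - tail) * 1000) - 1]  # ~97.5th percentile
    return lo, hi

# Invented glucose-like values (mg/dL) purely for illustration.
glucose = [88, 92, 95, 100, 104, 97, 90, 110, 85, 99, 102, 93, 96, 101, 98]
print(reference_interval(glucose))
```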

  2. Multivariate survivorship analysis using two cross-sectional samples.

    PubMed

    Hill, M E

    1999-11-01

    As an alternative to survival analysis with longitudinal data, I introduce a method that can be applied when one observes the same cohort in two cross-sectional samples collected at different points in time. The method allows for the estimation of log-probability survivorship models that estimate the influence of multiple time-invariant factors on survival over a time interval separating two samples. This approach can be used whenever the survival process can be adequately conceptualized as an irreversible single-decrement process (e.g., mortality, the transition to first marriage among a cohort of never-married individuals). Using data from the Integrated Public Use Microdata Series (Ruggles and Sobek 1997), I illustrate the multivariate method through an investigation of the effects of race, parity, and educational attainment on the survival of older women in the United States.
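
    The core idea can be sketched with invented counts: treat the ratio of a cohort's count in the later cross-section to its count in the earlier one as a survival probability over the interval, then model log survival as a linear function of time-invariant covariates. A minimal single-covariate sketch under the assumption that both samples represent the same closed cohort:

```python
import math

def cell_survival(n_t1, n_t2):
    """Survival probability over the interval separating two cross-sections,
    assuming both are representative samples of the same closed cohort
    undergoing an irreversible single-decrement process."""
    return n_t2 / n_t1

# Hypothetical counts for two education groups observed in both samples.
s_low = cell_survival(1000, 700)   # low-education cell
s_high = cell_survival(800, 640)   # high-education cell

# In a log-probability model ln S = b0 + b1*x (x = 1 for the high group),
# the covariate effect is the difference of log survival probabilities.
b1 = math.log(s_high) - math.log(s_low)
print(s_low, s_high, b1)
```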

  3. Changes in the saltwater interface corresponding to the installation of a seepage barrier near Lake Okeechobee, Florida

    USGS Publications Warehouse

    Prinos, Scott T.; Valderrama, Robert

    2015-01-01

    At five of the monitoring-well cluster locations, a long-screened well was also installed for monitoring and comparison purposes. These long-screened wells are 160 to 200 ft deep, and have open intervals ranging from 145 to 185 ft in length. Water samples were collected at depth intervals of about 5 to 10 ft, using 3-ft-long straddle packers to isolate each sampling interval. The results of monitoring conducted using these long-screened interval wells were generally too variable to identify any changes that might be associated with the seepage barrier. Samples from one of these long-screened interval wells failed to detect the saltwater interface evident in samples and TSEMIL datasets from a collocated well cluster. This failure may have been caused by downward flow of freshwater from above the saltwater interface in the well bore.

  4. Confidence intervals for correlations when data are not normal.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2017-02-01

    With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval, for example leading to a 95% confidence interval that had actual coverage as low as 68%. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
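
    The baseline method being evaluated, the Fisher z' interval, transforms r to z = atanh(r), builds a normal interval with standard error 1/sqrt(n - 3), and back-transforms; this is the interval the simulations show can undercover badly under nonnormality. A minimal sketch:

```python
import math

def fisher_ci(r, n, z_crit=1.959964):
    """Classic Fisher z' confidence interval for a correlation r from n pairs.
    Accurate for bivariate-normal data; can undercover badly otherwise."""
    z = math.atanh(r)               # Fisher transform
    se = 1 / math.sqrt(n - 3)       # approximate standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)   # back-transform to r scale

print(fisher_ci(0.5, 50))
```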

  5. Atmospheric measurements on Mars - The Viking meteorology experiment

    NASA Technical Reports Server (NTRS)

    Chamberlain, T. E.; Cole, H. L.; Dutton, R. G.; Greene, G. C.; Tillman, J. E.

    1976-01-01

    The Viking meteorology experiment is one of nine experiments to be carried out on the surface of Mars by each of two Viking Landers positioned at different latitudes and longitudes in the Northern Hemisphere. The meteorology experiment will measure pressure, temperature, wind speed, and wind direction at 1.5-hr intervals throughout the Martian day. The duration of each measurement period, the interval between data samples for a measurement period, and the time at which the measurement period is started will be varied throughout the mission. The scientific investigation and the sensors and electronics used for making the atmospheric measurement are discussed.

  6. Cuing effects for informational masking

    NASA Astrophysics Data System (ADS)

    Richards, Virginia M.; Neff, Donna L.

    2004-01-01

    The detection of a tone added to a random-frequency, multitone masker can be very poor even when the maskers have little energy in the frequency region of the signal. This paper examines the effects of adding a pretrial cue to reduce uncertainty for the masker or the signal. The first two experiments examined the effect of cuing a fixed-frequency signal as the number of masker components and presentation methods were manipulated. Cue effectiveness varied across observers, but could reduce thresholds by as much as 20 dB. Procedural comparisons indicated observers benefited more from having two masker samples to compare, with or without a signal cue, than having a single interval with one masker sample and a signal cue. The third experiment used random-frequency signals and compared no-cue, signal-cue, and masker-cue conditions, and also systematically varied the time interval between cue offset and trial onset. Thresholds with a cued random-frequency signal remained higher than for a cued fixed-frequency signal. For time intervals between the cue and trial of 50 ms or longer, thresholds were approximately the same with a signal or a masker cue and lower than when there was no cue. Without a cue or with a masker cue, analyses of possible decision strategies suggested observers attended to the potential signal frequencies, particularly the highest signal frequency. With a signal cue, observers appeared to attend to the frequency of the subsequent signal.

  7. Heinrich Events in the Southern Hemisphere: A Tropical Trigger?

    NASA Astrophysics Data System (ADS)

    Farmer, E. C.; Choi, W. S.; Quadri, M.

    2006-12-01

    Heinrich events, or periodic pulses of ice-rafted debris into the North Atlantic, have long been envisioned as having a high-latitude trigger. This trigger is most often cited as internal ice-sheet dynamics. McIntyre and Molfino (1996) proposed a tropical trigger for Heinrich Events: they suggested that precession-related strengthening of the North African monsoonal atmospheric circulation would alter oceanic circulation in such a way as to reduce heat transport to the high latitudes. Discovery of possible evidence (Mg/Ca ratios and faunal abundance) for Heinrich Event 1 (H1) in the Benguela upwelling region (Farmer et al. 2005) stimulated a search for a Heinrich Event 2 (H2) signal. From the review by Hemming (2004), we estimated the time horizon in which we would expect to find H2 as 22.4 - 26.8 ka. We extrapolated the published ODP1084B age model (Farmer et al. 2005) to estimate the depth interval of this time horizon, and counted the relative abundance of the four dominant species of planktonic foraminifera (G. inflata, G. bulloides, N. pachyderma right- and left-coiling) in each sample within this interval. We found an interval spanning five centimeters, or approximately 22.9 to 23.1 ka, in which the relative abundance of N. pachyderma (left-coiling) rose to an average of 31.5%. Compared to the average level of 23.5% in the surrounding samples, and our pooled standard deviation of 3.4%, this shift appears significant. Increases in relative abundance of N. pachyderma (left-coiling) in earlier intervals are associated with decreased Mg/Ca-based sea surface temperatures (SSTs) and are inferred to be a likely result of increased upwelling (Farmer et al. 2005). Although this new interval is shorter than would be expected from Heinrich event studies elsewhere, the shift in relative abundance of N. pachyderma (left-coiling) is consistent with an H2 signal.
Better age control is needed to constrain the timing of this event, however, and data from an additional proxy such as Mg/Ca is needed to confirm the signal.

  8. Six Sessions of Sprint Interval Training Improves Running Performance in Trained Athletes.

    PubMed

    Koral, Jerome; Oranchuk, Dustin J; Herrera, Roberto; Millet, Guillaume Y

    2018-03-01

    Koral, J, Oranchuk, DJ, Herrera, R, and Millet, GY. Six sessions of sprint interval training improves running performance in trained athletes. J Strength Cond Res 32(3): 617-623, 2018-Sprint interval training (SIT) is gaining popularity with endurance athletes. Various studies have shown that SIT allows for similar or greater endurance, strength, and power performance improvements than traditional endurance training but demands less time and volume. One of the main limitations in SIT research is that most studies were performed in a laboratory using expensive treadmills or ergometers. The aim of this study was to assess the performance effects of a novel short-term and highly accessible training protocol based on maximal shuttle runs in the field (SIT-F). Sixteen (12 male, 4 female) trained trail runners completed a 2-week procedure consisting of 4-7 bouts of 30 seconds at maximal intensity interspersed by 4 minutes of recovery, 3 times a week. Maximal aerobic speed (MAS), time to exhaustion at 90% of MAS before test (Tmax at 90% MAS), and 3,000-m time trial (TT3000m) were evaluated before and after training. Data were analyzed using a paired samples t-test, and Cohen's (d) effect sizes were calculated. Maximal aerobic speed improved by 2.3% (p = 0.01, d = 0.22), whereas peak power (PP) and mean power (MP) increased by 2.4% (p = 0.009, d = 0.33) and 2.8% (p = 0.002, d = 0.41), respectively. TT3000m was 6% shorter (p < 0.001, d = 0.35), whereas Tmax at 90% MAS was 42% longer (p < 0.001, d = 0.74). Sprint interval training in the field significantly improved the 3,000-m run, time to exhaustion, PP, and MP in trained trail runners. Sprint interval training in the field is a time-efficient and cost-free means of improving both endurance and power performance in trained athletes.
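
    The analysis reported (paired-samples t-test plus Cohen's d) can be sketched as follows. The MAS values are invented for illustration, and note that conventions for d in paired designs vary; mean change divided by the SD of the pre-test scores is used here:

```python
import math
import statistics

def paired_t_and_d(pre, post):
    """Paired-samples t statistic and a Cohen's d (mean change / SD of
    pre-test scores; other d conventions exist for paired designs)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    md = statistics.fmean(diffs)
    sd_diff = statistics.stdev(diffs)
    t = md / (sd_diff / math.sqrt(n))
    d = md / statistics.stdev(pre)
    return t, d

# Hypothetical MAS values (km/h) before/after a 2-week SIT block.
pre = [15.0, 15.5, 16.0, 14.5, 15.2, 16.1]
post = [15.4, 15.8, 16.3, 14.9, 15.5, 16.5]
t, d = paired_t_and_d(pre, post)
print(round(t, 2), round(d, 2))
```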

  9. Six Sessions of Sprint Interval Training Improves Running Performance in Trained Athletes

    PubMed Central

    Oranchuk, Dustin J.; Herrera, Roberto; Millet, Guillaume Y.

    2018-01-01

    Abstract Koral, J, Oranchuk, DJ, Herrera, R, and Millet, GY. Six sessions of sprint interval training improves running performance in trained athletes. J Strength Cond Res 32(3): 617–623, 2018—Sprint interval training (SIT) is gaining popularity with endurance athletes. Various studies have shown that SIT allows for similar or greater endurance, strength, and power performance improvements than traditional endurance training but demands less time and volume. One of the main limitations in SIT research is that most studies were performed in a laboratory using expensive treadmills or ergometers. The aim of this study was to assess the performance effects of a novel short-term and highly accessible training protocol based on maximal shuttle runs in the field (SIT-F). Sixteen (12 male, 4 female) trained trail runners completed a 2-week procedure consisting of 4–7 bouts of 30 seconds at maximal intensity interspersed by 4 minutes of recovery, 3 times a week. Maximal aerobic speed (MAS), time to exhaustion at 90% of MAS before test (Tmax at 90% MAS), and 3,000-m time trial (TT3000m) were evaluated before and after training. Data were analyzed using a paired samples t-test, and Cohen's (d) effect sizes were calculated. Maximal aerobic speed improved by 2.3% (p = 0.01, d = 0.22), whereas peak power (PP) and mean power (MP) increased by 2.4% (p = 0.009, d = 0.33) and 2.8% (p = 0.002, d = 0.41), respectively. TT3000m was 6% shorter (p < 0.001, d = 0.35), whereas Tmax at 90% MAS was 42% longer (p < 0.001, d = 0.74). Sprint interval training in the field significantly improved the 3,000-m run, time to exhaustion, PP, and MP in trained trail runners. Sprint interval training in the field is a time-efficient and cost-free means of improving both endurance and power performance in trained athletes. PMID:29076961

  10. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies

    PubMed Central

    Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether newly improved methods are better than the standard or reference test. To decide whether the new diagnostic test is better than the gold standard test/imperfect standard test, the differences of estimated sensitivity/specificity are calculated with the help of information obtained from samples. However, to generalize this value to the population, it should be given with confidence intervals. The aim of this study is to evaluate the confidence interval methods developed for the differences between two dependent sensitivity/specificity values on a clinical application. Materials and Methods. In this study, confidence interval methods like Asymptotic Intervals, Conditional Intervals, Unconditional Interval, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As a clinical application, data from the diagnostic study by Dickel et al. (2010) have been taken as a sample. Results. The results for the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are given in a table. Conclusion. When choosing a confidence interval method, researchers have to consider whether the comparison involves a single ratio or differences between dependent binary ratios, the correlation between the two dependent ratios, and the sample sizes. PMID:27478491

  11. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies.

    PubMed

    Erdoğan, Semra; Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether newly improved methods are better than the standard or reference test. To decide whether the new diagnostic test is better than the gold standard test/imperfect standard test, the differences of estimated sensitivity/specificity are calculated with the help of information obtained from samples. However, to generalize this value to the population, it should be given with confidence intervals. The aim of this study is to evaluate the confidence interval methods developed for the differences between two dependent sensitivity/specificity values on a clinical application. Materials and Methods. In this study, confidence interval methods like Asymptotic Intervals, Conditional Intervals, Unconditional Interval, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As a clinical application, data from the diagnostic study by Dickel et al. (2010) have been taken as a sample. Results. The results for the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are given in a table. Conclusion. When choosing a confidence interval method, researchers have to consider whether the comparison involves a single ratio or differences between dependent binary ratios, the correlation between the two dependent ratios, and the sample sizes.

  12. Complex Envelope Properties, Interpretation, Filtering, and Evaluation

    DTIC Science & Technology

    1991-02-01

    BEHAVIOR The example of y(t) in (45) (when n# ± /2) illustrates the general rule that if a time function has a discontinuity of value D at time t0, then... integration rule like trapezoidal, the "alias-free" interval in the time domain is approximately halved, as shown below. This does not necessarily mean that... approximated by the trapezoidal rule and the results added together. The two final approximations of interest come from sampling the results for causal real y(t)...

  13. Comparison of direct observational methods for measuring stereotypic behavior in children with autism spectrum disorders.

    PubMed

    Gardenier, Nicole Ciotti; MacDonald, Rebecca; Green, Gina

    2004-01-01

    We compared partial-interval recording (PIR) and momentary time sampling (MTS) estimates against continuous measures of the actual durations of stereotypic behavior in young children with autism or pervasive developmental disorder-not otherwise specified. Twenty-two videotaped samples of stereotypy were scored using a low-tech duration recording method, and relative durations (i.e., proportions of observation periods consumed by stereotypy) were calculated. Then 10-, 20-, and 30-s MTS and 10-s PIR estimates of relative durations were derived from the raw duration data. Across all samples, PIR was found to grossly overestimate the relative duration of stereotypy. Momentary time sampling both over- and underestimated the relative duration of stereotypy, but with much smaller errors than PIR (Experiment 1). These results were replicated across 27 samples of low, moderate, and high levels of stereotypy (Experiment 2).
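
    The three measures being compared can all be computed from the same event record. The sketch below, with invented event times, reproduces the qualitative result: PIR scores an interval if the behavior touches it at all and so overestimates relative duration, while MTS scores only the instant each interval ends and stays closer on average:

```python
def true_relative_duration(events, total):
    """events: list of (start, end) times of the behavior within [0, total]."""
    return sum(e - s for s, e in events) / total

def pir_estimate(events, total, interval):
    """Partial-interval recording: an interval is scored if the behavior
    occurs at ANY point within it; estimate = scored / total intervals."""
    n = int(total / interval)
    scored = sum(
        any(s < (k + 1) * interval and e > k * interval for s, e in events)
        for k in range(n))
    return scored / n

def mts_estimate(events, total, interval):
    """Momentary time sampling: score only whether the behavior is
    occurring at the instant each interval ends."""
    n = int(total / interval)
    scored = sum(
        any(s <= k * interval < e for s, e in events) for k in range(1, n + 1))
    return scored / n

events = [(5, 12), (30, 33), (48, 60)]   # invented bouts of stereotypy (s)
total = 120.0
print(true_relative_duration(events, total))
print(pir_estimate(events, total, 10))   # overestimates
print(mts_estimate(events, total, 10))   # closer to the true value
```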

  14. Influence of In-Well Convection on Well Sampling

    USGS Publications Warehouse

    Vroblesky, Don A.; Casey, Clifton C.; Lowery, Mark A.

    2006-01-01

    Convective transport of dissolved oxygen (DO) from shallow to deeper parts of wells was observed as the shallow water in wells in South Carolina became cooler than the deeper water in the wells due to seasonal changes. Wells having a relatively small depth to water were more susceptible to thermally induced convection than wells where the depth to water was greater because the shallower water levels were more influenced by air temperature. The potential for convective transport of DO to maintain oxygenated conditions in a well was diminished as ground-water exchange through the well screen increased and as oxygen demand increased. Convective flow did not transport oxygen to the screened interval when the screened interval was deeper than the range of the convective cell. The convective movement of water in wells has potential implications for passive, or no-purge, and low-flow sampling approaches. Transport of DO to the screened interval can adversely affect the ability of passive samplers to produce accurate concentrations of oxygen-sensitive solutes, such as iron. Other potential consequences include mixing the screened-interval water with casing water and potentially allowing volatilization loss at the water surface. A field test of diffusion samplers in a convecting well during the winter, however, showed good agreement of chlorinated solvent concentrations with pumped samples, indicating that there was no negative impact of the convection on the utility of the samplers to collect volatile organic compound concentrations in that well. In the cases of low-flow sampling, convective circulation can cause the pumped sample to be a mixture of casing water and aquifer water. This can substantially increase the equilibration time of oxygen as an indicator parameter and can give false indications of the redox state. Data from this investigation show that simple in-well devices can effectively mitigate convective transport of oxygen. 
The devices can range from inflatable packers to simple baffle systems.

  15. The Lambert-Beer law in time domain form and its application.

    PubMed

    Mosorov, Volodymyr

    2017-10-01

    The majority of current radioisotope gauges measure intensity by counting detector pulses over a chosen sampling time interval. This approach has several disadvantages: the temporal resolution of the gauge is fixed, and measurement accuracy differs across count rates. Using a stronger radioactive source would help, but conflicts with the ALARA (As Low As Reasonably Achievable) principle. The article therefore presents an alternative approach based on a modified Lambert-Beer law: registering the time intervals between pulses instead of the counts themselves. This makes it possible to increase the temporal resolution of a gauge without using a stronger radioactive source, and measurement accuracy no longer depends on count rate. Copyright © 2017 Elsevier Ltd. All rights reserved.
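
    The switch from counting pulses per fixed sampling window to registering inter-arrival times can be sketched as follows: for Poisson-distributed pulses, the count rate is recoverable as the number of registered intervals divided by their sum, with relative precision set by the number of intervals rather than by a fixed window length. The 500 counts/s rate below is an arbitrary illustration, not a value from the article:

```python
import random

random.seed(7)

def rate_from_intervals(intervals):
    """Estimate count rate (counts/s) from registered inter-arrival times:
    rate = number of intervals / total elapsed time."""
    return len(intervals) / sum(intervals)

# Simulate Poisson pulse arrivals at 500 counts/s and recover the rate
# from the first 200 registered inter-arrival times alone.
true_rate = 500.0
intervals = [random.expovariate(true_rate) for _ in range(200)]
print(rate_from_intervals(intervals))
```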

  16. The effect of preinjury sleep difficulties on neurocognitive impairment and symptoms after sport-related concussion.

    PubMed

    Sufrinko, Alicia; Pearce, Kelly; Elbin, R J; Covassin, Tracey; Johnson, Eric; Collins, Michael; Kontos, Anthony P

    2015-04-01

    Researchers have reported that sleep duration is positively related to baseline neurocognitive performance. However, researchers have yet to examine the effect of preinjury sleep difficulties on postconcussion impairments. To compare neurocognitive impairment and symptoms of athletes with preinjury sleep difficulties to those without after a sport-related concussion (SRC). Cohort study; Level of evidence, 3. The sample included 348 adolescent and adult athletes (age, mean ± SD, 17.43 ± 2.34 years) with a diagnosed SRC. The sample was divided into 2 groups: (1) 34 (10%) participants with preinjury sleep difficulties (sleeping less as well as having trouble falling asleep; SLEEP SX) and (2) 231 (66%) participants without preinjury sleep difficulties (CONTROL). The remaining 84 (24%) participants with minimal sleep difficulties (1 symptom) were excluded. Participants completed the Immediate Postconcussion Assessment and Cognitive Test (ImPACT) and Postconcussion Symptom Scale (PCSS) at baseline and 3 postinjury intervals (2, 5-7, and 10-14 days after injury). A series of repeated-measures analyses of covariance with Bonferroni correction, controlling for baseline non-sleep-related symptoms, were conducted to compare postinjury neurocognitive performance between groups. Follow-up exploratory t tests examined between-group differences at each time interval. A series of analyses of variance were used to examine total PCSS score, sleep-related symptoms, and non-sleep-related symptoms across time intervals between groups. Groups differed significantly in reaction time across postinjury intervals (P < .001), with the preinjury SLEEP SX group performing worse than controls at 5-7 days (mean ± SD, 0.70 ± 0.32 [SLEEP SX], 0.60 ± 0.14 [CONTROL]) and 10-14 days (0.61 ± 0.17 [SLEEP SX], 0.57 ± 0.10 [CONTROL]) after injury.
Groups also differed significantly on verbal memory performance (P = .04), with the SLEEP SX (68.21 ± 18.64) group performing worse than the CONTROL group (76.76 ± 14.50) 2 days after injury. The SLEEP SX group reported higher total symptom (P = .02) and sleep-related symptom (P = .02) scores across postinjury time intervals. Preinjury sleep difficulties may exacerbate neurocognitive impairment and symptoms after concussion. The findings may help clinicians identify athletes who are at risk for worse impairments after a concussion due to preinjury sleep difficulties. © 2015 The Author(s).

  17. Do flexible inter-injection intervals improve the effects of botulinum toxin A treatment in reducing impairment and disability in patients with spasticity?

    PubMed

    Trompetto, Carlo; Marinelli, Lucio; Mori, Laura; Puce, Luca; Pelosin, Elisa; Serrati, Carlo; Fattapposta, Francesco; Rinalduzzi, Steno; Abbruzzese, Giovanni; Currà, Antonio

    2017-05-01

    In patients treated with botulinum toxin-A (BoNT-A), toxin-directed antibody formation has been related to the dosage and frequency of injections, leading to the empirical adoption of minimum time intervals between injections of 3 months or longer. However, recent data suggest that the low immunogenicity of current BoNT-A preparations could allow more frequent injections. Our hypothesis is that a short time interval between injections may be safe and effective in reducing upper limb spasticity and related disability. IncobotulinumtoxinA was injected under ultrasound guidance into spastic muscles of 11 subjects, who were evaluated just before BoNT-A injection (T0) and 1 month (T1), 2 months (T2) and 4 months (T3) after injection. At T1, in the case of persistent disability related to spasticity interfering with normal activities, patients received an additional toxin dose. Seven subjects received the additional dose at T1 because of persistent disability; 4 of them had a decrease in disability 1 month later (T2). Rethinking the injection scheme for BoNT-A treatment may have a major impact on the management of spasticity and related disability. Future studies with larger sample sizes are warranted to confirm that injection schedules with short time intervals should no longer be discouraged in clinical practice. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Role of enamel demineralization and remineralization on microtensile bond strength of resin composite.

    PubMed

    Rizvi, Abbas; Zafar, Muhammad S; Al-Wasifi, Yasser; Fareed, Wamiq; Khurshid, Zohaib

    2016-01-01

    This study aimed to establish the microtensile bond strength of enamel following exposure to an aerated drink at various time intervals, with and without application of a remineralization agent. In addition, the degree of remineralization and demineralization of tooth enamel was assessed using polarized light microscopy. Seventy extracted human incisors, split into two halves, were immersed in an aerated beverage (cola drink) for 5 min and stored in saliva until the time of microtensile bond testing. Prepared specimens were divided randomly into two study groups: a remineralizing group (n = 70), in which specimens were treated with a casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) remineralization agent (Recaldent™; GC Europe), and a control group (n = 70), in which specimens received no remineralization treatment and were kept in artificial saliva. All specimens were tested for microtensile bond strength at regular intervals (1 h, 1 day, 2 days, 1 week, and 2 weeks) using a universal testing machine. The results were statistically analyzed (P = 0.05) using a two-way ANOVA test. Results showed a statistically significant increase in bond strength in the CPP-ACP group (P < 0.05) at all time intervals. The bond strength of the remineralizing group samples at 2 days (~13.64 megapascals [MPa]) was comparable to that of the control group after 1 week (~12.44 MPa). CPP-ACP treatment of teeth exposed to an aerated drink provided a significant increase in bond strength at a shorter interval compared to teeth exposed to saliva alone.

  19. Emergency Medical Services Intervals and Survival in Trauma: Assessment of the “Golden Hour” in a North American Prospective Cohort

    PubMed Central

    Newgard, Craig D.; Schmicker, Robert H.; Hedges, Jerris R.; Trickett, John P.; Davis, Daniel P.; Bulger, Eileen M.; Aufderheide, Tom P.; Minei, Joseph P.; Hata, J. Steven; Gubler, K. Dean; Brown, Todd B.; Yelle, Jean-Denis; Bardarson, Berit; Nichol, Graham

    2010-01-01

    Study objective The first hour after the onset of out-of-hospital traumatic injury is referred to as the “golden hour,” yet the relationship between time and outcome remains unclear. We evaluate the association between emergency medical services (EMS) intervals and mortality among trauma patients with field-based physiologic abnormality. Methods This was a secondary analysis of an out-of-hospital, prospective cohort registry of adult (aged ≥15 years) trauma patients transported by 146 EMS agencies to 51 Level I and II trauma hospitals in 10 sites across North America from December 1, 2005, through March 31, 2007. Inclusion criteria were systolic blood pressure less than or equal to 90 mm Hg, respiratory rate less than 10 or greater than 29 breaths/min, Glasgow Coma Scale score less than or equal to 12, or advanced airway intervention. The outcome was inhospital mortality. We evaluated EMS intervals (activation, response, on-scene, transport, and total time) with logistic regression and 2-step instrumental variable models, adjusted for field-based confounders. Results There were 3,656 trauma patients available for analysis, of whom 806 (22.0%) died. In multivariable analyses, there was no significant association between time and mortality for any EMS interval: activation (odds ratio [OR] 1.00; 95% confidence interval [CI] 0.95 to 1.05), response (OR 1.00; 95% CI 0.97 to 1.04), on-scene (OR 1.00; 95% CI 0.99 to 1.01), transport (OR 1.00; 95% CI 0.98 to 1.01), or total EMS time (OR 1.00; 95% CI 0.99 to 1.01). Subgroup and instrumental variable analyses did not qualitatively change these findings. Conclusion In this North American sample, there was no association between EMS intervals and mortality among injured patients with physiologic abnormality in the field. PMID:19783323

  20. [Structure of maxillary sinus mucous membrane under normal conditions and in odontogenic perforative sinusitis].

    PubMed

    Baĭdik, O D; Logvinov, S V; Zubarev, S G; Sysoliatin, P G; Gurin, A A

    2011-01-01

    Methods of light microscopy, electron microscopy and immunohistochemistry were used to study samples of maxillary sinus (MS) mucous membrane (MM) under normal conditions and in odontogenic sinusitis. To study the normal structure, samples were obtained at autopsy from 26 human corpses 12-24 hours after death. Electron microscopic and immunohistochemical studies were performed on biopsies of grossly morphologically unchanged MS MM, obtained during operations for retention cysts in 6 patients. MS MM in perforative sinusitis was studied using biopsies obtained from 43 patients. The material was divided into 4 groups depending on perforative sinusitis duration. Under normal conditions, MS MM is lined with a pseudostratified columnar ciliated epithelium. Degenerative changes of ciliated epithelial cells were already detected at short time intervals after MS perforation and manifested as a reduction in the specific volume of mitochondria and rough endoplasmic reticulum and an increase in the nuclear-cytoplasmic ratio. In the goblet cells, a reduction of the nuclear-cytoplasmic ratio was associated with disturbance of secretory product release. At time intervals exceeding 3 months, the epithelium underwent metaplasia into simple cuboidal and stratified squamous keratinized epithelium, while cellular infiltration increased in the MS MM lamina propria. CD4+ cell content in sinus MM gradually increased, then decreased at late periods after perforation. A low CD4+ cell count within the epithelium and an absence of muramidase on the surface of MS MM were detected. With increasing time since MS perforation, the number of CD8+ and CD20+ cells in MS MM increased.

  1. Continuous inventories and the components of change

    Treesearch

    Francis A. Roesch

    2004-01-01

    The consequences of conducting a continuous inventory that utilizes measurements on overlapping temporal intervals of varying length on compatible estimation systems for the components of growth are explored. The time interpenetrating sample design of the USDA Forest Service Forest Inventory and Analysis Program is used as an example. I show why estimation of the...

  2. IDENTIFICATION OF SOURCES AND ESTIMATION OF EMISSION PROFILES FROM HIGHLY TIME-RESOLVED POLLUTANT MEASUREMENTS IN TAMPA, FL

    EPA Science Inventory

    Aerosol slurry samples were collected at 30-min intervals for sequential 1-month periods at each of two sites (Sydney and "Dairy") in the Tampa Bay area during the 2002 Bay Regional Atmospheric Chemistry Experiment using the University of Maryland Semicontinuous Elements in Aeros...

  3. Formation Flying Control Implementation in Highly Elliptical Orbits

    NASA Technical Reports Server (NTRS)

    Capo-Lugo, Pedro A.; Bainum, Peter M.

    2009-01-01

    The Tschauner-Hempel equations are widely used to correct the separation distance drifts between a pair of satellites within a constellation in highly elliptical orbits [1]. This set of equations was discretized in the true anomaly angle [1] to be used in a digital steady-state hierarchical controller [2]. This controller [2] performed the drift correction between a pair of satellites within the constellation. The objective of a discretized system is to develop a simple algorithm to be implemented in the computer onboard the satellite. The main advantage of the discrete systems is that the computational time can be reduced by selecting a suitable sampling interval. For this digital system, the amount of data will depend on the sampling interval in the true anomaly angle [3]. The purpose of this paper is to implement the discrete Tschauner-Hempel equations and the steady-state hierarchical controller in the computer onboard the satellite. This set of equations is expressed in the true anomaly angle in which a relation will be formulated between the time and the true anomaly angle domains.
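    As a much simpler illustration of the discretization step (not the Tschauner-Hempel equations themselves, which are discretized in the true anomaly angle), the following sketch performs an exact zero-order-hold discretization of a double integrator for a chosen sampling interval:

```python
import numpy as np

def zoh_double_integrator(T):
    """Exact zero-order-hold discretization of x' = Ax + Bu with
    A = [[0, 1], [0, 0]], B = [[0], [1]].  Because A is nilpotent
    (A @ A = 0), the matrix-exponential series truncates exactly."""
    Ad = np.array([[1.0, T],
                   [0.0, 1.0]])
    Bd = np.array([[0.5 * T ** 2],
                   [T]])
    return Ad, Bd

# A larger sampling interval T means fewer onboard update steps,
# trading computational load against tracking resolution.
Ad, Bd = zoh_double_integrator(0.1)
```

    The same trade-off applies to a true-anomaly-sampled system: the sampling interval sets both the data volume and the achievable control resolution.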

  4. Profile local linear estimation of generalized semiparametric regression model for longitudinal data.

    PubMed

    Sun, Yanqing; Sun, Liuquan; Zhou, Jie

    2013-07-01

    This paper studies the generalized semiparametric regression model for longitudinal data, in which the covariate effects are constant for some covariates and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation, and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as possibly time-dependent covariates, without specifically modelling such dependence. A [Formula: see text]-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit to the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performance of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example.
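    A bare-bones sketch of the nonparametric building block, a local linear kernel estimate with a leave-one-out cross-validated bandwidth, is shown below; the simulated data, Gaussian kernel, and bandwidth grid are illustrative assumptions and not the authors' estimating equations.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of E[y | x = x0] with a Gaussian kernel;
    a tiny ridge term guards against near-singular weight matrices."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    A = X.T @ (w[:, None] * X) + 1e-9 * np.eye(2)
    b = X.T @ (w * y)
    return np.linalg.solve(A, b)[0]   # intercept = fitted value at x0

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 3.0, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)

# Leave-one-out cross-validation over a small bandwidth grid
# (a simplification of the paper's k-fold scheme)
def loocv_score(h):
    errs = [(y[i] - local_linear(x[i], np.delete(x, i), np.delete(y, i), h)) ** 2
            for i in range(len(x))]
    return np.mean(errs)

best_h = min((0.1, 0.2, 0.4, 0.8), key=loocv_score)
```

    In the paper the same smoothing idea is embedded in estimating equations with profile estimation for the parametric part; this sketch only shows the bandwidth-selection mechanics.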

  5. Reference intervals for putative biomarkers of drug-induced liver injury and liver regeneration in healthy human volunteers.

    PubMed

    Francis, Ben; Clarke, Joanna I; Walker, Lauren E; Brillant, Nathalie; Jorgensen, Andrea L; Park, B Kevin; Pirmohamed, Munir; Antoine, Daniel J

    2018-05-02

    The potential of mechanistic biomarkers to improve the prediction of drug-induced liver injury (DILI) and hepatic regeneration is widely acknowledged. We sought to determine reference intervals for new biomarkers of DILI and regeneration, and to characterize their natural variability and the impact of diurnal variation. Serum samples were collected from 200 healthy volunteers recruited as part of a cross-sectional study; of these, 50 subjects underwent weekly serial sampling over 3 weeks, while 24 underwent intensive blood sampling over a 24-h period. Alanine aminotransferase (ALT), microRNA-122 (miR-122), high mobility group box-1 (HMGB1), total keratin-18 (FL-K18), caspase-cleaved keratin-18 (cc-K18), glutamate dehydrogenase (GLDH) and colony stimulating factor-1 (CSF-1) were assessed by validated assays. Reference intervals were established for each biomarker based on the 97.5% quantile (90% CI) following the assessment of fixed effects in univariate and multivariable models (ALT 50 (41-50) U/l, miR-122 3548 (2912-4321) copies/µl, HMGB1 2.3 (2.2-2.4) ng/ml, FL-K18 475 (456-488) U/l, cc-K18 272 (256-291) U/l, GLDH 27 (26-30) U/l and CSF-1 2.4 (2.3-2.9) ng/ml). A small but significant intra-individual time random effect was detected, but no significant impact of diurnal variation was observed, with the exception of GLDH. Reference intervals for novel DILI biomarkers have been described for the first time. An upper limit of a reference range may represent the most appropriate way to utilize these data. Regulatory authorities have published letters of support encouraging further qualification of leading candidate biomarkers. These data can now be used to interpret data from exploratory clinical DILI studies and to assist their further qualification. Drug-induced liver injury (DILI) has a major impact on patient health and the development of new medicines. Unfortunately, the blood-based tests currently used to assess liver injury and recovery have recognised shortcomings. Newer blood-based tests (biomarkers) have been described that accurately predict the onset of and recovery from DILI. Here we describe reference intervals from investigations designed, for the first time, with the intention of assessing the natural variation of these newer biomarkers in healthy volunteers. These results can be used to aid the interpretation of data from patients with suspected liver toxicity. Copyright © 2018. Published by Elsevier B.V.
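    The headline computation, an upper reference limit taken as the 97.5% quantile with a 90% confidence interval, can be sketched with a simple percentile bootstrap; the simulated biomarker values and sample sizes are placeholders, not study data.

```python
import random

random.seed(7)
# hypothetical biomarker values from 200 healthy subjects
values = [random.gauss(30.0, 6.0) for _ in range(200)]

def quantile(xs, q):
    """Linear-interpolation sample quantile of a list."""
    s = sorted(xs)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

url = quantile(values, 0.975)          # upper reference limit

# 90% bootstrap confidence interval for the upper limit
boot = []
for _ in range(2000):
    resample = [random.choice(values) for _ in values]
    boot.append(quantile(resample, 0.975))
ci = (quantile(boot, 0.05), quantile(boot, 0.95))
```

    The study additionally adjusted for fixed effects (e.g. demographics) in regression models before deriving the quantiles; this sketch shows only the unadjusted limit.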

  6. Temporal performance of amorphous selenium mammography detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Bo; Zhao Wei

    2005-01-01

    We investigated the temporal performance of amorphous selenium (a-Se) detectors specifically designed for mammographic imaging. Our goal was to quantify the inherent lag and ghosting of the a-Se photoconductor as a function of imaging conditions. Two small-area electroded a-Se samples, one positively and the other negatively biased on the x-ray entrance side, were used in the experiments. Lag and ghosting were studied by delivering a number of raw exposures, as experienced in screening mammography, to the samples at different electric field strengths E_Se while measuring the current through the a-Se sample. Ghosting at different operational conditions was quantified as the percentage reduction in x-ray sensitivity (x-ray generated photocurrent measured from the sample) compared to before irradiation. Lag was determined by measuring the residual current of a-Se at a given time after the end of each x-ray exposure. Both lag and ghosting were measured as a function of E_Se and cumulative exposure. The values of E_Se used in our experiments ranged from 1 to 20 V/µm. It was found that ghosting increases with exposure and decreases with E_Se for both samples because of the dominant effect of recombination between trapped electrons and x-ray generated holes. Lag, on the other hand, has a different dependence on E_Se and cumulative exposure. At E_Se ≤ 10 V/µm, the first-frame lag for both samples changed slowly with cumulative exposure, with a range of 0.2%-1.7% for the positively biased sample and 0.5%-8% for the negatively biased sample. Overall the positively biased sample had better temporal performance than the negatively biased sample due to the lower density of trapped electrons. The impact of the time interval between exposures on the temporal performance was also investigated.
Recovery of ghosting with longer time intervals was observed, which was attributed to the neutralization of trapped electrons by injected holes through dark current.

  7. Improving regression-model-based streamwater constituent load estimates derived from serially correlated data

    USGS Publications Warehouse

    Aulenbach, Brent T.

    2013-01-01

    A regression-model-based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads, the Adjusted Maximum Likelihood Estimator (AMLE) and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model’s calibration period time scale, precision was progressively worse at shorter reporting periods, from annual to monthly. Serial correlation in model residuals caused observed AMLE precision to be significantly worse than the model-calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of 15 days or shorter when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration–discharge relationship. The models with the largest errors typically had poor high-flow sampling coverage, resulting in unrepresentative models. Increasing sampling frequency and/or targeted high-flow sampling are more efficient approaches to ensure sufficient sampling and avoid poorly performing models than increasing calibration period length.
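    A minimal rating-curve sketch of regression-based load estimation (ordinary log-log least squares with a smearing-type retransformation correction, not the AMLE or composite method the study evaluates) might look like this; all data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical calibration data: concentration rises with discharge
q_cal = rng.lognormal(mean=3.0, sigma=0.5, size=60)        # discharge samples
c_cal = 2.0 * q_cal ** 0.7 * rng.lognormal(0.0, 0.2, 60)   # concentrations

# Fit ln(C) = a + b * ln(Q); polyfit returns [slope, intercept]
b, a = np.polyfit(np.log(q_cal), np.log(c_cal), 1)

# The naive back-transform exp(a) * Q**b is biased low, so apply a
# simple Duan smearing correction from the log-space residuals.
resid = np.log(c_cal) - (a + b * np.log(q_cal))
bias = np.mean(np.exp(resid))

# Estimate load on a "daily" discharge record: sum of C * Q
q_daily = rng.lognormal(3.0, 0.5, 365)
load = np.sum(np.exp(a) * q_daily ** b * bias * q_daily)
```

    The AMLE adds censored-data handling and a different bias correction, and the composite method further folds interpolated residuals back into the predictions; the serial-correlation issues the study describes arise in exactly these log-space residuals.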

  8. Bias Assessment of General Chemistry Analytes using Commutable Samples.

    PubMed

    Koerbin, Gus; Tate, Jillian R; Ryan, Julie; Jones, Graham Rd; Sikaris, Ken A; Kanowski, David; Reed, Maxine; Gill, Janice; Koumantakis, George; Yen, Tina; St John, Andrew; Hickman, Peter E; Simpson, Aaron; Graham, Peter

    2014-11-01

    Harmonisation of reference intervals for routine general chemistry analytes has been a goal for many years. Analytical bias may prevent this harmonisation. To determine whether analytical bias is present when comparing methods, commutable samples (samples that have the same properties as the clinical samples routinely analysed) should be used as reference samples, eliminating the possibility of matrix effects. The use of commutable samples has improved the identification of unacceptable analytical performance in the Netherlands and Spain. The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has undertaken a pilot study using commutable samples in an attempt not only to determine country-specific reference intervals but to make them comparable between countries. Australia and New Zealand, through the Australasian Association of Clinical Biochemists (AACB), have also undertaken an assessment of analytical bias using commutable samples and determined that of the 27 general chemistry analytes studied, 19 showed between-method biases sufficiently small as not to prevent harmonisation of reference intervals. Application of evidence-based approaches, including the determination of analytical bias using commutable material, is necessary when seeking to harmonise reference intervals.

  9. Spectral and cross-spectral analysis of uneven time series with the smoothed Lomb-Scargle periodogram and Monte Carlo evaluation of statistical significance

    NASA Astrophysics Data System (ADS)

    Pardo-Igúzquiza, Eulogio; Rodríguez-Tovar, Francisco J.

    2012-12-01

    Many spectral analysis techniques have been designed assuming sequences taken with a constant sampling interval. However, there are empirical time series in the geosciences (sediment cores, fossil abundance data, isotope analysis, …) that do not follow regular sampling because of missing data, gapped data, random sampling or incomplete sequences, among other reasons. In general, interpolating an uneven series in order to obtain a succession with a constant sampling interval alters the spectral content of the series. In such cases it is preferable to follow an approach that works with the uneven data directly, avoiding the need for an explicit interpolation step. The Lomb-Scargle periodogram is a popular choice in such circumstances, as there are programs available in the public domain for its computation. One new computer program for spectral analysis improves the standard Lomb-Scargle periodogram approach in two ways: (1) it explicitly adjusts the statistical significance for any bias introduced by variance-reduction smoothing, and (2) it uses a permutation test to evaluate confidence levels, which is better suited than parametric methods when neighbouring frequencies are highly correlated. Another novel program for cross-spectral analysis offers the advantage of estimating the Lomb-Scargle cross-periodogram of two uneven time series defined on the same interval, and it evaluates the confidence levels of the estimated cross-spectra by a non-parametric, computer-intensive permutation test. Thus, the cross-spectrum, the squared coherence spectrum, the phase spectrum, and the Monte Carlo statistical significance of the cross-spectrum and the squared-coherence spectrum can be obtained. Both programs are written in ANSI Fortran 77, in view of its simplicity and compatibility. The program code is in the public domain, provided on the website of the journal (http://www.iamg.org/index.php/publisher/articleview/frmArticleID/112/).
Different examples (with simulated and real data) are described in this paper to corroborate the methodology and the implementation of these two new programs.
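    The basic computation can be reproduced with SciPy's Lomb-Scargle implementation rather than the authors' Fortran programs; the permutation check of the spectral peak below follows the same non-parametric idea the paper describes, applied to simulated unevenly sampled data.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 200))        # uneven sampling times
y = np.sin(2 * np.pi * 0.1 * t) + 0.3 * rng.standard_normal(t.size)

freqs = 2 * np.pi * np.linspace(0.01, 0.5, 500)  # angular frequencies
pgram = lombscargle(t, y - y.mean(), freqs)
peak_freq = freqs[np.argmax(pgram)] / (2 * np.pi)

# Permutation test: shuffling y destroys any periodicity while keeping
# the sampling times; compare observed maximum with the null maxima.
null_max = [lombscargle(t, rng.permutation(y - y.mean()), freqs).max()
            for _ in range(200)]
p_value = np.mean(np.array(null_max) >= pgram.max())
```

    Permuting the values against fixed (uneven) times is what makes the significance level honest here, since parametric thresholds assume independent periodogram ordinates.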

  10. Temporal Doppler Effect and Future Orientation: Adaptive Function and Moderating Conditions.

    PubMed

    Gan, Yiqun; Miao, Miao; Zheng, Lei; Liu, Haihua

    2017-06-01

    The objectives of this study were to examine whether the temporal Doppler effect exists in different time intervals and whether certain individual and environmental factors act as moderators of the effect. Using hierarchical linear modeling, we examined the existence of the temporal Doppler effect and the moderating effect of future orientation among 139 university students (Study 1), and then the moderating conditions of the temporal Doppler effect using two independent samples of 143 and 147 university students (Studies 2 and 3). Results indicated that the temporal Doppler effect existed in all of our studies, and that future orientation moderated the temporal Doppler effect. Further, time interval perception mediated the relationship between future orientation and the motivation to cope at long time intervals. Finally, positive affect was found to enhance the temporal Doppler effect, whereas control deprivation did not influence the effect. The temporal Doppler effect is moderated by the personality trait of future orientation and by the situational variable of experimentally manipulated positive affect. We have identified personality and environmental processes that could enhance the temporal Doppler effect, which could be valuable in cases where attention to a future task is necessary. © 2016 Wiley Periodicals, Inc.

  11. Reliable change, sensitivity, and specificity of a multidimensional concussion assessment battery: implications for caution in clinical practice.

    PubMed

    Register-Mihalik, Johna K; Guskiewicz, Kevin M; Mihalik, Jason P; Schmidt, Julianne D; Kerr, Zachary Y; McCrea, Michael A

    2013-01-01

    To provide reliable change confidence intervals for common clinical concussion measures using a healthy sample of collegiate athletes and to apply these reliable change parameters to a sample of concussed collegiate athletes. Two independent samples were included in the study and evaluated on common clinical measures of concussion. The healthy sample included male, collegiate football student-athletes (n = 38) assessed at 2 time points. The concussed sample included college-aged student-athletes (n = 132) evaluated before and after a concussion. Outcome measures included symptom severity scores, Automated Neuropsychological Assessment Metrics throughput scores, and Sensory Organization Test composite scores. Application of the reliable change parameters suggests that a small percentage of concussed participants were impaired on each measure. We identified a low sensitivity of the entire battery (all measures combined) of 50% but high specificity of 96%. Clinicians should be trained in understanding clinical concussion measures and should be aware of evidence suggesting the multifaceted battery is more sensitive than any single measure. Clinicians should be cautioned that sensitivity to balance and neurocognitive impairments was low for each individual measure. Applying the confidence intervals to our injured sample suggests that these measures do not adequately identify postconcussion impairments when used in isolation.
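    The reliable change machinery behind such confidence intervals (in the common Jacobson-Truax form) reduces to a few lines; the baseline SD and test-retest reliability below are illustrative values, not the study's.

```python
import math

def reliable_change_cutoff(sd_baseline, retest_r, z=1.96):
    """Smallest test-retest difference unlikely (at ~95% confidence)
    to be measurement error alone: z * SEdiff, where
    SEM = SD * sqrt(1 - r) and SEdiff = sqrt(2) * SEM."""
    sem = sd_baseline * math.sqrt(1.0 - retest_r)
    return z * math.sqrt(2.0) * sem

# e.g. a hypothetical symptom-severity score with SD 8, reliability 0.80:
cutoff = reliable_change_cutoff(8.0, 0.80)
# a pre-to-post change smaller than `cutoff` points should not be
# labelled impairment or recovery on that measure alone
```

    Lower reliability widens the cutoff, which is one reason single measures showed low sensitivity in the study's battery.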

  12. Reference Intervals of Common Clinical Chemistry Analytes for Adults in Hong Kong.

    PubMed

    Lo, Y C; Armbruster, David A

    2012-04-01

    Defining reference intervals is a major challenge because of the difficulty in recruiting volunteers and testing samples from a significant number of healthy reference individuals. Historical literature citation intervals are often suboptimal because they may be based on obsolete methods and/or only a small number of poorly defined reference samples. Blood donors in Hong Kong gave permission for additional blood to be collected for reference interval testing. The samples were tested for twenty-five routine analytes on the Abbott ARCHITECT clinical chemistry system. Results were analyzed using the Rhoads EP Evaluator software program, which is based on the CLSI/IFCC C28-A guideline and defines the reference interval as the 95% central range. Method-specific reference intervals were established for twenty-five common clinical chemistry analytes for a Chinese ethnic population. The intervals were defined for each gender separately and for genders combined, and gender-specific or combined-gender intervals were adopted as appropriate for each analyte. A large number of healthy, apparently normal blood donors from a local ethnic population were tested to provide current reference intervals for a new clinical chemistry system. Intervals were determined following an accepted international guideline. Laboratories using the same or similar methodologies may adopt these intervals if validated and deemed suitable for their patient populations. Laboratories using different methodologies may be able to adapt the intervals for their facilities using the reference interval transference technique, based on a method comparison study.
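    For the nonparametric approach in the CLSI/IFCC C28-A guideline, the 95% central range comes down to the 2.5th and 97.5th sample percentiles; the simulated analyte values below are placeholders, not the Hong Kong data.

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical analyte results from 240 reference individuals;
# C28-A recommends at least 120 per partition (e.g. per gender)
values = rng.normal(loc=100.0, scale=10.0, size=240)

# reference interval = central 95% of the reference distribution
lower, upper = np.percentile(values, [2.5, 97.5])
```

    Partitioned (e.g. gender-specific) intervals are produced by simply repeating this on each subgroup, which is why the required sample size multiplies with each partition.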

  13. Population Pharmacokinetic Model-Based Evaluation of Standard Dosing Regimens for Cefuroxime Used in Coronary Artery Bypass Graft Surgery with Cardiopulmonary Bypass.

    PubMed

    Alqahtani, Saeed A; Alsultan, Abdullah S; Alqattan, Hussain M; Eldemerdash, Ahmed; Albacker, Turki B

    2018-04-01

    The purpose of this study was to investigate the population pharmacokinetics (PK) of cefuroxime in patients undergoing coronary artery bypass graft (CABG) surgery. In this observational pharmacokinetic study, multiple blood samples were collected over a 48-h interval of intravenous cefuroxime administration. The samples were analyzed using a validated high-performance liquid chromatography (HPLC) method. Population pharmacokinetic models were developed using Monolix (version 4.4) software. Pharmacokinetic-pharmacodynamic (PD) simulations were performed to explore the ability of different dosage regimens to achieve the pharmacodynamic targets. A total of 468 blood samples from 78 patients were analyzed. The PK of cefuroxime were best described by a two-compartment model with between-subject variability on clearance, the volume of distribution of the central compartment, and the volume of distribution of the peripheral compartment. The clearance of cefuroxime was related to creatinine clearance (CLCR). Dosing simulations showed that standard dosing regimens of 1.5 g could achieve the PK-PD target, namely maintenance of the free concentration above the MIC for 65% of the dosing interval (fT>MIC of 65%) for an MIC of 8 mg/liter, in patients with a CLCR of 30, 60, or 90 ml/min, whereas this dosing regimen failed to achieve the PK-PD target in patients with a CLCR of ≥125 ml/min. In conclusion, administration of standard doses of 1.5 g three times daily provided adequate antibiotic prophylaxis in patients undergoing CABG surgery. Lower doses failed to achieve the PK-PD target. Patients with high CLCR values required either higher doses or shorter cefuroxime dosing intervals. On the other hand, lower doses (1 g three times daily) produced adequate target attainment for patients with low CLCR values (≤30 ml/min). Copyright © 2018 American Society for Microbiology.
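    The PK-PD target itself, the fraction of a dosing interval with free drug concentration above the MIC, can be sketched with a one-compartment IV bolus model (the study used a two-compartment model; every parameter value below is an illustrative assumption, not a study estimate):

```python
import math

def f_t_above_mic(dose_mg, v_L, cl_L_h, fu, mic_mg_L, tau_h, dt=0.01):
    """Fraction of the dosing interval tau_h during which the free
    concentration of a single one-compartment IV bolus dose exceeds
    the MIC (evaluated on a time grid of step dt hours)."""
    ke = cl_L_h / v_L              # elimination rate constant
    c0 = dose_mg / v_L             # initial total concentration
    steps = int(round(tau_h / dt))
    above = sum(1 for i in range(steps)
                if fu * c0 * math.exp(-ke * i * dt) > mic_mg_L)
    return above / steps

# Illustrative cefuroxime-like numbers: 1.5 g every 8 h, V = 15 L,
# CL = 6 L/h, fraction unbound 0.67, target MIC 8 mg/L
ft = f_t_above_mic(1500, 15.0, 6.0, 0.67, 8.0, 8.0)
```

    Raising clearance (as with a high CLCR) steepens the decay and shrinks this fraction, which is the mechanism behind the failed target attainment the study reports.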

  14. Stability and accuracy of total and free PSA values in samples stored at room temperature.

    PubMed

    Forde, J C; Blake, O; Crowley, V E; Lynch, T H

    2016-11-01

In 2010, an estimated 476,076 total PSA tests were performed in Ireland, at a cost of €3.6 million, with the majority ordered by general practitioners. We aimed to replicate room-temperature storage conditions and determine whether prolonged storage affected total and free PSA values. Blood samples were taken from 20 male patients in four VACUETTE ® Serum Separator tubes (Greiner-Bio-One, Austria) and stored at room temperature (22 °C) for different time intervals (4, 8, 24, 48 h) before being centrifuged and analyzed. Total PSA (tPSA) and free PSA (fPSA) values were determined using the Tosoh AIA 1800 assay (Tokyo, Japan). Mean tPSA values were measured at 4, 8, 24 and 48 h with values of 7.9, 8.1, 7.8 and 8.0 μg/L, respectively. Values ranged from -1.26 to +2.53 % compared to the initial 4 h interval reading, indicating tPSA remained consistent at room temperature. The tPSA showed no significant difference between groups (ANOVA, p = 0.283). Mean fPSA values at 4, 8, 24 and 48 h were 2.05, 2.04, 1.83, 1.82 μg/L, respectively. At 24 and 48 h there was a 10.73 and 11.22 % reduction, respectively, in fPSA compared to the 4-h time interval, indicating prolonged storage resulted in reduced fPSA values. After 24 h, there was an 8.8 % reduction in the free/total PSA %. The fPSA showed significant differences between groups (ANOVA, p = 0.024). Our recommendation is that samples stored for prolonged periods (greater than 24 h) should not be used for free PSA testing.
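
    The reported fPSA reductions follow directly from the mean values in the abstract; a minimal arithmetic check:

```python
def pct_change_from_baseline(values):
    """Percent change of each measurement relative to the first (baseline) value."""
    base = values[0]
    return [100.0 * (v - base) / base for v in values]

mean_fpsa = [2.05, 2.04, 1.83, 1.82]   # mean fPSA (ug/L) at 4, 8, 24, 48 h
changes = pct_change_from_baseline(mean_fpsa)
# changes[2] and changes[3] reproduce the quoted 10.73% and 11.22% reductions
```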

  15. Predicting long-term survival after coronary artery bypass graft surgery.

    PubMed

    Karim, Md N; Reid, Christopher M; Huq, Molla; Brilleman, Samuel L; Cochrane, Andrew; Tran, Lavinia; Billah, Baki

    2018-02-01

To develop a model for predicting long-term survival following coronary artery bypass graft surgery. This study included 46 573 patients from the Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZCTS) registry, who underwent isolated coronary artery bypass graft surgery between 2001 and 2014. Data were randomly split into development (23 282) and validation (23 291) samples. Cox regression models were fitted separately, using the important preoperative variables, for 4 'time intervals' (31-90 days, 91-365 days, 1-3 years and >3 years), with optimal predictors selected using the bootstrap bagging technique. Model performance was assessed both in validation data and in combined data (development and validation samples). Coefficients of all 4 final models were estimated on the combined data adjusting for hospital-level clustering. The Kaplan-Meier mortality rates estimated in the sample were 1.7% at 90 days, 2.8% at 1 year, 4.4% at 2 years and 6.1% at 3 years. Age, peripheral vascular disease, respiratory disease, reduced ejection fraction, renal dysfunction, arrhythmia, diabetes, hypercholesterolaemia, cerebrovascular disease, hypertension, congestive heart failure, steroid use and smoking were included in all 4 models. However, their magnitude of effect varied across the time intervals. Harrell's C-statistic was 0.83, 0.78, 0.75 and 0.74 for the 31-90-day, 91-365-day, 1-3-year and >3-year models, respectively. Models showed excellent discrimination and calibration in validation data. Models were developed for predicting long-term survival at 4 time intervals after isolated coronary artery bypass graft surgery. These models can be used in conjunction with the existing 30-day mortality prediction model. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
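
    The Kaplan-Meier mortality rates quoted above come from the standard product-limit estimator; a self-contained sketch on a toy dataset (not the registry data):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate. `events` is 1 for death, 0 for censoring.
    Returns (t, S(t)) at each distinct event time."""
    data = sorted(zip(times, events))
    at_risk, s, out, i = len(data), 1.0, [], 0
    while i < len(data):
        t, deaths, j = data[i][0], 0, i
        while j < len(data) and data[j][0] == t:
            deaths += data[j][1]
            j += 1
        if deaths:
            s *= 1.0 - deaths / at_risk       # multiply in this interval's survival
            out.append((t, s))
        at_risk -= j - i                       # deaths and censorings leave the risk set
        i = j
    return out

# Toy data: deaths at t=1, 2, 4; censoring at t=3, 5
km = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```

    Mortality at time t is then simply 1 - S(t), as in the 90-day and 1-, 2- and 3-year figures reported in the abstract.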

  16. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?

    ERIC Educational Resources Information Center

    Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

    2005-01-01

    Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…

  17. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. 
Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of prescribed β values and gap distributions. The aliasing method, however, does not itself account for sampling irregularity, and this introduces some bias in the result. Nonetheless, the wavelet method is recommended for estimating β in irregular time series until improved methods are developed. Finally, all methods' performances depend strongly on the sampling irregularity, highlighting that the accuracy and precision of each method are data specific. Accurately quantifying the strength of fractal scaling in irregular water-quality time series remains an unresolved challenge for the hydrologic community and for other disciplines that must grapple with irregular sampling.
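
    As an illustration of the Lomb-Scargle approach evaluated above, a plain-Python periodogram for irregularly sampled data (the spectral slope β would then be fitted to the log-log periodogram). The test signal here is a hypothetical sinusoid, not water-quality data:

```python
import math
import random

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle periodogram for irregularly sampled data."""
    ybar = sum(y) / len(y)
    yc = [v - ybar for v in y]                 # work with mean-centred values
    power = []
    for f in freqs:
        w = 2.0 * math.pi * f
        s2 = sum(math.sin(2.0 * w * ti) for ti in t)
        c2 = sum(math.cos(2.0 * w * ti) for ti in t)
        tau = math.atan2(s2, c2) / (2.0 * w)   # time offset making terms orthogonal
        cs = [math.cos(w * (ti - tau)) for ti in t]
        sn = [math.sin(w * (ti - tau)) for ti in t]
        yc_cs = sum(v * c for v, c in zip(yc, cs))
        yc_sn = sum(v * s for v, s in zip(yc, sn))
        power.append(0.5 * (yc_cs ** 2 / sum(c * c for c in cs)
                            + yc_sn ** 2 / sum(s * s for s in sn)))
    return power

random.seed(1)
t = sorted(random.uniform(0.0, 100.0) for _ in range(200))   # irregular sampling times
y = [math.sin(2.0 * math.pi * 0.1 * ti) for ti in t]          # 0.1 Hz signal
freqs = [0.01 * k for k in range(1, 31)]
power = lomb_scargle(t, y, freqs)
peak = freqs[power.index(max(power))]
```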

  18. Prevalence Incidence Mixture Models

    Cancer.gov

The R package and webtool fit Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time-to-event data commonly found in screening cohorts assembled from electronic health records. Absolute and relative risk can be estimated for simple random sampling and stratified sampling (both superpopulation and finite-population target populations are supported). Non-parametric (absolute risks only), semi-parametric, weakly-parametric (using B-splines), and some fully parametric (such as the logistic-Weibull) models are supported.

  19. Assessing heat load in drylot dairy cattle: Refining on-farm sampling methodology.

    PubMed

    Tresoldi, Grazyne; Schütz, Karin E; Tucker, Cassandra B

    2016-11-01

Identifying dairy cattle experiencing heat stress and adopting appropriate mitigation strategies can improve welfare and profitability. However, little is known about how cattle use heat abatement resources (shade, sprayed water) on drylot dairies. It is also unclear how often we need to observe animals to measure high heat load, or the relevance of specific aspects of this response, particularly in terms of panting. Our objectives were to describe and determine sampling intervals to measure cattle use of heat abatement resources, respiration rate (RR) and panting characteristics (drooling, open mouth, protruding tongue), and to evaluate the relationship between the latter 2. High-producing cows were chosen from 4 drylots (8 cows/dairy, n=32) and observed for at least 5.9 h (1000 to 1800 h, excluding milking) when air temperature, humidity, and the combined index averaged 33°C, 30%, and 79, respectively. Use of heat abatement resources was recorded continuously; RR and the presence and absence of each panting characteristic were recorded every 5 min. From the observed values, estimates using the specified sub-sampling intervals were calculated for heat abatement resource use (1, 5, 10, 15, 20, 30, 60, 90, and 120 min), and for RR and panting (10, 15, 20, 30, 60, 90, and 120 min). Estimates and observed values were compared using linear regression. Sampling intervals were considered accurate if they met 3 criteria: R² ≥ 0.9, intercept = 0, and slope = 1. The relationship between RR and each panting characteristic was analyzed using mixed models. Cows used shade (at corral or over feed bunk) and feed bunk area (where water was sprayed) for about 90 and 50% of the observed time, respectively, and used areas with no cooling for 2 min at a time, on average. Cows exhibited drooling (34±4% of observations) more often than open mouth and protruding tongue (11±3 and 8±3% of observations, respectively). Respiration rate varied depending on the presence of panting (with vs.
without drool present: 97±3 vs. 74±3 breaths/min; open vs. closed mouth: 104±4 vs. 85±4 breaths/min; protruding vs. non-protruding tongue: 105±5 vs. 91±5 breaths/min). Accurate estimates were obtained when using sampling intervals ≤90min for RR, ≤60min for corral shade and sprayed water use, and ≤30min for drooling. In a hot and dry climate, cows kept in drylots had higher RR when showing panting characteristics than when these were absent, and used shade extensively, avoiding areas with no cooling. In general, 30min intervals were most efficient for measuring heat load responses. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
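
    The accuracy criteria above (R² ≥ 0.9, intercept = 0, slope = 1) amount to regressing the sub-sampled estimates on the fully observed values. A sketch with synthetic per-cow respiration rates (all numbers hypothetical, not the study's data):

```python
import math
import random

def ols(x, y):
    """Slope, intercept and R^2 of a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - intercept - slope * a) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

random.seed(0)
# 8 cows, RR recorded every 5 min over 8 h (96 records/cow), differing baselines
cows = [[60.0 + 5.0 * i + 10.0 * math.sin(j / 10.0) + random.gauss(0.0, 3.0)
         for j in range(96)] for i in range(8)]
observed = [sum(c) / len(c) for c in cows]               # full 5-min record
est_30min = [sum(c[::6]) / len(c[::6]) for c in cows]    # 30-min sub-sampling
slope, intercept, r2 = ols(observed, est_30min)
```

    A sub-sampling interval would be accepted if slope ≈ 1, intercept ≈ 0 and R² ≥ 0.9 held for the regression above.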

  20. Sonoclot evaluation of whole blood coagulation in healthy adult dogs.

    PubMed

    Babski, Danielle M; Brainard, Benjamin M; Krimer, Paula M; Ralph, Alan G; Pittman, Jennifer R; Koenig, Amie

    2012-12-01

To establish a standard protocol for analysis of canine whole blood and generate reference intervals for healthy dogs using the Sonoclot analyzer, and to compare Sonoclot values to standard and viscoelastic coagulation tests. Prospective study. Veterinary University research facility and teaching hospital. Twelve healthy random source dogs and 52 healthy dogs from the general veterinary school population. Blood sampling for viscoelastic coagulation testing. Blood was collected from 12 healthy adult dogs by jugular venipuncture. After a rest period at room temperature of 30, 60, or 120 minutes, 340 μL of citrated blood was added to 20 μL of 0.2 M CaCl(2) in 1 of 2 cuvette types warmed to 37°C. Cuvettes contained a magnetic stir-bar with glass beads (gbACT+) or only a magnetic stir-bar (nonACT). Reference interval samples were collected from 52 healthy adult dogs and analyzed in duplicate. The activated clotting time (ACT), clot rate (CR), and platelet function (PF) were not affected by duration of rest period for either cuvette type. ACT variability was decreased when using gbACT+ cuvettes (P < 0.05). In normal dogs, reference intervals (mean ± 2 SD) using gbACT+ cuvettes were: ACT 56.0-154.0 seconds, CR 14.85-46.0, and PF 2.1-4.05. ACT correlated with TEG R-time, K-time, and angle, while CR correlated with all TEG parameters. Fibrinogen correlated with ACT, CR, and PF. Sonoclot did not correlate with other common coagulation tests. Sonoclot provides viscoelastic evaluation of canine whole blood coagulation and correlated to several TEG parameters and fibrinogen. A standard protocol and reference intervals were established. © Veterinary Emergency and Critical Care Society 2012.

  1. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    PubMed

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
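
    The AIPE planning logic (pick the smallest n whose expected confidence interval is narrower than a target width W) can be illustrated with a simpler analogue: a Fisher-z interval for a correlation. This is not the authors' composite-reliability procedure, only a sketch of the same planning idea:

```python
import math

def ci_width(rho, n, z_crit=1.959964):
    """Expected 95% CI width for a correlation via the Fisher z approximation."""
    z = math.atanh(rho)
    half = z_crit / math.sqrt(n - 3)
    return math.tanh(z + half) - math.tanh(z - half)   # back-transformed width

def planned_n(rho, target_width):
    """Smallest n whose expected CI width falls below target_width (AIPE-style)."""
    n = 10
    while ci_width(rho, n) > target_width:
        n += 1
    return n

# Hypothetical planning value rho = 0.8: halving the target width raises n sharply
n_wide = planned_n(0.8, 0.10)
n_narrow = planned_n(0.8, 0.05)
```

    The paper's second method additionally adds an assurance step, inflating n until the width criterion holds with, say, 99% probability rather than merely in expectation.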

  2. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  3. Hybrid Model Predictive Control for Sequential Decision Policies in Adaptive Behavioral Interventions.

    PubMed

    Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E; Downs, Danielle S; Savage, Jennifer S

    2014-06-01

    Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or "just-in-time" behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components.

  4. Ex vivo 12 h bactericidal activity of oral co-amoxiclav (1.125 g) against beta-lactamase-producing Haemophilus influenzae.

    PubMed

    Bronner, S; Pompei, D; Elkhaïli, H; Dhoyen, N; Monteil, H; Jehl, F

    2001-10-01

The aim of the study was to evaluate the in vitro/ex vivo bactericidal activity of a new co-amoxiclav single-dose sachet formulation (1 g amoxicillin + 0.125 g clavulanic acid) against a beta-lactamase-producing strain of Haemophilus influenzae. The evaluation covered the 12 h period after antibiotic administration. Serum specimens from the 12 healthy volunteers included in the pharmacokinetic study were pooled by time point and in equal volumes. Eight of 12 pharmacokinetic sampling time points were included in the study. At time points 0.5, 0.75, 1, 1.5, 2.5, 5, 8 and 12 h post-dosing, the kinetics of bactericidal activity were determined for each of the serial dilutions. Each specimen was serially diluted from 1:2 to 1:256. The index of surviving bacteria (ISB) was subsequently determined for each pharmacokinetic time point. For all the serum samples, bactericidal activity was fast (3-6 h), marked (3-6 log(10) reduction in the initial inoculum) and sustained over the 12 h between-dosing interval. The results obtained also confirmed that the potency of the amoxicillin plus clavulanic acid combination was time dependent against the species under study and that the time interval over which the concentrations were greater than the MIC (t > MIC) was 100% of the dosing interval for the strain under study. The data thus generated constitute an interesting prerequisite with a view to using co-amoxiclav 1.125 g in a twice-daily (bd) oral regimen.

  5. A confidence interval analysis of sampling effort, sequencing depth, and taxonomic resolution of fungal community ecology in the era of high-throughput sequencing.

    PubMed

    Oono, Ryoko

    2017-01-01

High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh those of greater sequencing depths per sample for accurate estimations of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions 'how and why are communities different?' This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sample size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences.
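
    One concrete way to obtain such confidence intervals is a percentile bootstrap over samples for a mean pairwise dissimilarity between two groups. The sketch below uses Bray-Curtis dissimilarity on toy abundance tables (hypothetical counts, not the endophyte data):

```python
import random

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / (sum(a) + sum(b))

def bootstrap_ci(g1, g2, n_boot=1000, alpha=0.05, seed=42):
    """Percentile CI for the mean pairwise dissimilarity between two groups,
    resampling whole samples (not reads) with replacement."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        r1 = [g1[rng.randrange(len(g1))] for _ in g1]
        r2 = [g2[rng.randrange(len(g2))] for _ in g2]
        ds = [bray_curtis(a, b) for a in r1 for b in r2]
        means.append(sum(ds) / len(ds))
    means.sort()
    return means[int(alpha / 2 * n_boot)], means[int((1 - alpha / 2) * n_boot) - 1]

comm1 = [[10, 0, 5], [8, 1, 4], [12, 0, 6]]   # hypothetical OTU counts, group 1
comm2 = [[0, 10, 2], [1, 9, 3], [0, 11, 1]]   # group 2
point = sum(bray_curtis(a, b) for a in comm1 for b in comm2) / 9
lo, hi = bootstrap_ci(comm1, comm2)
```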

  7. Confidence intervals from single observations in forest research

    Treesearch

    Harry T. Valentine; George M. Furnival; Timothy G. Gregoire

    1991-01-01

    A procedure for constructing confidence intervals and testing hypothese from a single trial or observation is reviewed. The procedure requires a prior, fixed estimate or guess of the outcome of an experiment or sampling. Two examples of applications are described: a confidence interval is constructed for the expected outcome of a systematic sampling of a forested tract...

  8. Improving EEG-Based Motor Imagery Classification for Real-Time Applications Using the QSA Method.

    PubMed

    Batres-Mendoza, Patricia; Ibarra-Manzano, Mario A; Guerra-Hernandez, Erick I; Almanza-Ojeda, Dora L; Montoro-Sanjose, Carlos R; Romero-Troncoso, Rene J; Rostro-Gonzalez, Horacio

    2017-01-01

We present an improvement to the quaternion-based signal analysis (QSA) technique to extract electroencephalography (EEG) signal features with a view to developing real-time applications, particularly in motor imagery (MI) cognitive processes. The proposed methodology (iQSA, improved QSA) extracts features such as the average, variance, homogeneity, and contrast of EEG signals related to motor imagery in a more efficient manner (i.e., by reducing the number of samples needed to classify the signal and improving the classification percentage) compared to the original QSA technique. Specifically, we can sample the signal in variable time periods (from 0.5 s to 3 s, in half-a-second intervals) to determine the relationship between the number of samples and their effectiveness in classifying signals. In addition, to strengthen the classification process, a number of boosting-technique-based decision trees were implemented. The results show an 82.30% accuracy rate for 0.5 s samples and 73.16% for 3 s samples. This is a significant improvement compared to the original QSA technique that offered results from 33.31% to 40.82% without sampling window and from 33.44% to 41.07% with sampling window, respectively. We can thus conclude that iQSA is better suited to develop real-time applications.

  10. SIMULATION FROM ENDPOINT-CONDITIONED, CONTINUOUS-TIME MARKOV CHAINS ON A FINITE STATE SPACE, WITH APPLICATIONS TO MOLECULAR EVOLUTION.

    PubMed

    Hobolth, Asger; Stone, Eric A

    2009-09-01

    Analyses of serially-sampled data often begin with the assumption that the observations represent discrete samples from a latent continuous-time stochastic process. The continuous-time Markov chain (CTMC) is one such generative model whose popularity extends to a variety of disciplines ranging from computational finance to human genetics and genomics. A common theme among these diverse applications is the need to simulate sample paths of a CTMC conditional on realized data that is discretely observed. Here we present a general solution to this sampling problem when the CTMC is defined on a discrete and finite state space. Specifically, we consider the generation of sample paths, including intermediate states and times of transition, from a CTMC whose beginning and ending states are known across a time interval of length T. We first unify the literature through a discussion of the three predominant approaches: (1) modified rejection sampling, (2) direct sampling, and (3) uniformization. We then give analytical results for the complexity and efficiency of each method in terms of the instantaneous transition rate matrix Q of the CTMC, its beginning and ending states, and the length of sampling time T. In doing so, we show that no method dominates the others across all model specifications, and we give explicit proof of which method prevails for any given Q, T, and endpoints. Finally, we introduce and compare three applications of CTMCs to demonstrate the pitfalls of choosing an inefficient sampler.
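
    A minimal sketch of approach (1) on a hypothetical two-state chain: simulate forward with the Gillespie algorithm and accept the path if it ends in the required state at time T. (The paper's modified version also conditions the first jump when the endpoints differ; this plain-rejection sketch omits that refinement.)

```python
import random

def sample_path_conditional(Q, a, b, T, rng, max_tries=100000):
    """Endpoint-conditioned CTMC path by rejection sampling: simulate forward
    from state a and keep the path only if it occupies state b at time T.
    Returns a list of (time, state) pairs starting with (0.0, a)."""
    n_states = len(Q)
    for _ in range(max_tries):
        t, s, path = 0.0, a, [(0.0, a)]
        while True:
            rate = -Q[s][s]                 # total exit rate of current state
            if rate == 0.0:
                break                       # absorbing state: stay until T
            dt = rng.expovariate(rate)
            if t + dt >= T:
                break                       # no further jump before T
            t += dt
            r, acc = rng.random() * rate, 0.0
            for j in range(n_states):       # next state ~ off-diagonal rates
                if j == s:
                    continue
                acc += Q[s][j]
                if r <= acc:
                    s = j
                    break
            path.append((t, s))
        if s == b:                          # accept only if endpoint matches
            return path
    raise RuntimeError("no accepted path within max_tries")

Q = [[-1.0, 1.0], [0.5, -0.5]]              # assumed 2-state rate matrix
path = sample_path_conditional(Q, 0, 1, 2.0, random.Random(7))
```

    As the abstract notes, rejection becomes inefficient when the endpoint constraint is improbable, which is where direct sampling or uniformization prevails.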

  11. Multiport well design for sampling of ground water at closely spaced vertical intervals

    USGS Publications Warehouse

    Delin, G.N.; Landon, M.K.

    1996-01-01

Detailed vertical sampling is useful in aquifers where vertical mixing is limited and steep vertical gradients in chemical concentrations are expected. Samples can be collected at closely spaced vertical intervals from nested wells with short screened intervals. However, this approach may not be appropriate in all situations. An easy-to-construct and easy-to-install multiport sampling well to collect ground-water samples from closely spaced vertical intervals was developed and tested. The multiport sampling well was designed to sample ground water from surficial sand-and-gravel aquifers. The device consists of multiple stainless-steel tubes within a polyvinyl chloride (PVC) protective casing. The tubes protrude through the wall of the PVC casing at the desired sampling depths. A peristaltic pump is used to collect ground-water samples from the sampling ports. The difference in hydraulic head between any two sampling ports can be measured with a vacuum pump and a modified manometer. The usefulness and versatility of this multiport well design was demonstrated at an agricultural research site near Princeton, Minnesota, where sampling ports were installed to a maximum depth of about 12 m below land surface. Tracer experiments were conducted using potassium bromide to document the degree to which short-circuiting occurred between sampling ports. Samples were successfully collected for analysis of major cations and anions, nutrients, selected herbicides, isotopes, dissolved gases, and chlorofluorocarbon concentrations.

  12. What Is the Shape of Developmental Change?

    PubMed Central

    Adolph, Karen E.; Robinson, Scott R.; Young, Jesse W.; Gill-Alvarez, Felix

    2009-01-01

    Developmental trajectories provide the empirical foundation for theories about change processes during development. However, the ability to distinguish among alternative trajectories depends on how frequently observations are sampled. This study used real behavioral data, with real patterns of variability, to examine the effects of sampling at different intervals on characterization of the underlying trajectory. Data were derived from a set of 32 infant motor skills indexed daily during the first 18 months. Larger sampling intervals (2-31 days) were simulated by systematically removing observations from the daily data and interpolating over the gaps. Infrequent sampling caused decreasing sensitivity to fluctuations in the daily data: Variable trajectories erroneously appeared as step-functions and estimates of onset ages were increasingly off target. Sensitivity to variation decreased as an inverse power function of sampling interval, resulting in severe degradation of the trajectory with intervals longer than 7 days. These findings suggest that sampling rates typically used by developmental researchers may be inadequate to accurately depict patterns of variability and the shape of developmental change. Inadequate sampling regimes therefore may seriously compromise theories of development. PMID:18729590
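
    The effect of coarser sampling on estimated onset ages can be sketched directly. Here onset is estimated as the first sampled day on which the skill is observed, applied to a hypothetical daily record with variable expression around the true onset (the study additionally interpolated over the gaps, which this sketch omits):

```python
def onset_from_samples(daily, step):
    """Estimated onset day: first sampled day (every `step` days) with the skill."""
    for day in range(0, len(daily), step):
        if daily[day]:
            return day
    return None

# Hypothetical daily record: true onset at day 20, then variable expression
daily = [0] * 20 + [1, 0, 1, 1, 0, 1] + [1] * 30
true_onset = 20
errors = {k: abs(onset_from_samples(daily, k) - true_onset) for k in (1, 7, 14)}
```

    Daily sampling recovers the onset exactly, while weekly and biweekly sampling can miss the fluctuating early days and push the estimated onset later, as the abstract describes.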

  13. Monte Carlo-based interval transformation analysis for multi-criteria decision analysis of groundwater management strategies under uncertain naphthalene concentrations and health risks

    NASA Astrophysics Data System (ADS)

    Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong

    2016-08-01

A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with inexactness of input data represented as intervals, (2) mitigating computational time due to the introduction of the Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered for each duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management horizon.
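
    The core idea (Monte Carlo sampling within interval-valued criteria to rank alternatives) can be sketched generically. The alternatives, intervals and weights below are invented for illustration, not the case-study values:

```python
import random

def mc_interval_mcda(alternatives, weights, n_draws=5000, seed=1):
    """Each criterion value is an (lo, hi) interval; lower scores are better
    (e.g., cost, risk). Sample uniformly within each interval, score by a
    weighted sum, and count how often each alternative has the lowest score."""
    rng = random.Random(seed)
    wins = {name: 0 for name in alternatives}
    for _ in range(n_draws):
        scores = {
            name: sum(w * rng.uniform(lo, hi)
                      for w, (lo, hi) in zip(weights, crits))
            for name, crits in alternatives.items()
        }
        wins[min(scores, key=scores.get)] += 1
    return {name: wins[name] / n_draws for name in alternatives}

alts = {
    "action A": [(1.0, 2.0), (0.2, 0.4)],   # hypothetical cost and risk intervals
    "action B": [(2.5, 3.5), (0.5, 0.9)],
}
probs = mc_interval_mcda(alts, weights=[0.6, 0.4])
```

    Here "action A" dominates on both criteria, so it is selected in every draw; with overlapping intervals the win frequencies quantify how robust the ranking is to the input uncertainty.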

  14. Effects of spectrometer band pass, sampling, and signal-to-noise ratio on spectral identification using the Tetracorder algorithm

    USGS Publications Warehouse

    Swayze, G.A.; Clark, R.N.; Goetz, A.F.H.; Chrien, T.H.; Gorelick, N.S.

    2003-01-01

    Estimates of spectrometer band pass, sampling interval, and signal-to-noise ratio required for identification of pure minerals and plants were derived using reflectance spectra convolved to AVIRIS, HYDICE, MIVIS, VIMS, and other imaging spectrometers. For each spectral simulation, various levels of random noise were added to the reflectance spectra after convolution, and then each was analyzed with the Tetracorder spectra identification algorithm [Clark et al., 2003]. The outcome of each identification attempt was tabulated to provide an estimate of the signal-to-noise ratio at which a given percentage of the noisy spectra were identified correctly. Results show that spectral identification is most sensitive to the signal-to-noise ratio at narrow sampling interval values but is more sensitive to the sampling interval itself at broad sampling interval values because of spectral aliasing, a condition when absorption features of different materials can resemble one another. The band pass is less critical to spectral identification than the sampling interval or signal-to-noise ratio because broadening the band pass does not induce spectral aliasing. These conclusions are empirically corroborated by analysis of mineral maps of AVIRIS data collected at Cuprite, Nevada, between 1990 and 1995, a period during which the sensor signal-to-noise ratio increased up to sixfold. There are values of spectrometer sampling and band pass beyond which spectral identification of materials will require an abrupt increase in sensor signal-to-noise ratio due to the effects of spectral aliasing. Factors that control this threshold are the uniqueness of a material's diagnostic absorptions in terms of shape and wavelength isolation, and the spectral diversity of the materials found in nature and in the spectral library used for comparison. Array spectrometers provide the best data for identification when they critically sample spectra. 
The sampling interval should not be broadened to increase the signal-to-noise ratio in a photon-noise-limited system when high levels of accuracy are desired. It is possible, using this simulation method, to select optimum combinations of band-pass, sampling interval, and signal-to-noise ratio values for a particular application that maximize identification accuracy and minimize the volume of imaging data.

  15. Pesticide leaching via subsurface drains in different hydrologic situations

    NASA Astrophysics Data System (ADS)

    Zajíček, Antonín; Fučík, Petr; Liška, Marek; Dobiáš, Jakub

    2017-04-01

    Pesticides and their degradates in tile drainage waters were studied in two small, predominantly agricultural, tile-drained subcatchments in the Bohemian-Moravian Highlands, Czech Republic. The goal was to evaluate their occurrence and the dynamics of their concentrations in drainage waters under different hydrologic situations, using discharge and concentration monitoring together with 18O and 2H isotope analysis for mean residence time (MRT) estimation and hydrograph separations during rainfall-runoff (R-R) events. Drainage and stream discharges were measured continuously at the closing outlets of three drainage groups and one small stream. During periods of prevailing base flow and interflow, samples were collected manually at two-week intervals for isotope analysis and, during the spraying period (March to October), also for pesticide analysis. During R-R events, samples were taken by automatic samplers at intervals varying from 20 min (summer) to 1 hour (winter). To enable isotopic analysis, precipitation was sampled both manually at two-week intervals and with an automatic rainfall sampler that collected precipitation during R-R events at 20-min intervals. The isotopic analysis showed that the MRT of drainage base flow and interflow varies from 2.2 to 3.3 years, while the MRT of base flow and interflow in the surface stream is several months. During R-R events, the proportion of event water varied from 0 to 60% in both drainage and surface runoff. The occurrence of pesticides and their degradates in drainage waters is strongly dependent on the hydrologic situation. While degradates were permanently present in drainage waters in high but varying concentrations according to the instantaneous runoff composition, parent compounds were detected almost exclusively during R-R events. In periods with prevailing base flow and interflow (grab samples), ESA forms of chloroacetanilide degradates in particular occurred in high concentrations in all samples. 
The average sum of degradates varied between 1,730 and 5,760 ng/l. During R-R events, pesticide concentrations varied according to the runoff composition and the time between spraying and the event. Events with no proportion of event water in the drainage runoff were typified by an increase in degradate concentrations (up to 20,000 ng/l) and no or low occurrence of parent compounds. Events with a significant event-water proportion in the drainage runoff were characterized by a decrease in degradate concentrations and, when the event happened soon after spraying, by the presence of parent pesticides in the drainage runoff. Instantaneous concentrations of parent compounds can be extremely high in such cases, up to 23,000 ng/l in drainage waters and up to 40,000 ng/l in the small stream. These results suggest that drainage systems can act as a significant source of pesticide leaching. For parent compounds to leach via tile drainage systems, several boundary conditions must coincide, such as the occurrence of an R-R event soon after pesticide application and the presence of event water (or water with a short residence time in the catchment) in the drainage runoff.
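The event-water proportions reported above are typically obtained from a two-component isotope mixing model; a minimal sketch, with hypothetical δ18O values chosen only for illustration:

```python
def event_water_fraction(delta_total, delta_pre, delta_event):
    """Two-component isotope hydrograph separation:
    f_event = (d_total - d_pre) / (d_event - d_pre),
    where the deltas are e.g. d18O values (permil) of the mixed
    runoff, the pre-event water, and the event (rain) water."""
    return (delta_total - delta_pre) / (delta_event - delta_pre)

# hypothetical values: pre-event (base flow) d18O = -9.5 permil,
# event rain = -5.0 permil, mixed drainage runoff = -8.6 permil
f = event_water_fraction(-8.6, -9.5, -5.0)   # 0.2, i.e. 20 % event water
```

The separation is only well-posed when the two end-members are isotopically distinct, which is why the 20-min event sampling of both precipitation and runoff matters.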

  16. How preservation time changes the linear viscoelastic properties of porcine liver.

    PubMed

    Wex, C; Stoll, A; Fröhlich, M; Arndt, S; Lippert, H

    2013-01-01

    The preservation time of a liver graft is one of the crucial factors for the success of a liver transplantation. Grafts are kept in a preservation solution to delay cell destruction and cellular edema and to maximize organ function after transplantation. However, longer preservation times are not always avoidable. In this paper we focus on the mechanical changes of porcine liver with increasing preservation time, in order to establish an indicator for the quality of a liver graft dependent on preservation time. A time interval of 26 h was covered and the rheological properties of liver tissue were studied using a stress-controlled rheometer. For samples of 1 h preservation time, 0.8% strain was found to be the limit of linear viscoelasticity. With increasing preservation time, a decrease in the complex shear modulus as an indicator for stiffness was observed for the frequency range from 0.1 to 10 Hz. A simple fractional derivative representation of the Kelvin-Voigt model was applied to gain further information about the changes of the mechanical properties of liver with increasing preservation time. Within the small shear rate interval of 0.0001-0.01 s⁻¹ the liver showed Newtonian-like flow behavior.
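A common textbook form of the fractional Kelvin-Voigt model gives the complex shear modulus as G*(ω) = G + η(iω)^α. The sketch below evaluates |G*| over the 0.1-10 Hz range studied; the parameter values are illustrative, and the paper's exact parameterisation may differ:

```python
import numpy as np

def fractional_kelvin_voigt(omega, G, eta, alpha):
    """Complex shear modulus of a fractional Kelvin-Voigt element,
    G*(w) = G + eta * (i*w)**alpha. For alpha = 1 this reduces to
    the classical Kelvin-Voigt model G + i*eta*w."""
    return G + eta * (1j * omega) ** alpha

w = 2 * np.pi * np.logspace(-1, 1, 50)   # angular frequency, 0.1-10 Hz
Gstar = fractional_kelvin_voigt(w, 800.0, 120.0, 0.25)   # illustrative Pa values
stiffness = np.abs(Gstar)                # |G*| as the stiffness indicator
```

Fitting G, η, and α to measured moduli at successive preservation times is the kind of comparison the fractional model enables.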

  17. Estimating the Seasonal Importance of Precipitation to Plant Source Water over Time and Space with Water Isotopes

    NASA Astrophysics Data System (ADS)

    Nelson, D. B.; Kahmen, A.

    2017-12-01

    The stable isotopic composition of hydrogen and oxygen are physical properties of water molecules that can carry information on their sources or transport histories. This provides a useful tool for assessing the importance of rainfall at different times of the year for plant growth, provided that rainwater values vary over time and that waters do not partially evaporate after deposition. We tested the viability of this approach using data from samples collected at nineteen sites throughout Europe at monthly intervals over two consecutive growing seasons in 2014 and 2015. We compared isotope measurements of plant xylem water with soil water from multiple depths, and measured and modeled precipitation isotope values. Paired analyses of oxygen and hydrogen isotope values were used to screen out a limited number of water samples that were influenced by evaporation, with the majority of all water samples indicating meteoric sources. The isotopic composition of soil and xylem waters varied over the course of an individual growing season, with many trending towards more enriched values, suggesting integration of the plant-relevant water pool at a timescale shorter than the annual mean. We then quantified how soil water residence times varied at each site by calculating the interval between measured xylem water and the most recently preceding match in modeled precipitation isotope values. Results suggest a generally increasing interval between rainfall and plant uptake throughout each year, with source water corresponding to dates in the spring, likely reflecting a combination of spring rain, and mixing with winter and summer precipitation. The seasonally evolving spatial distribution of source water-precipitation lag values was then modeled as a function of location and climatology to develop continental-scale predictions. 
This spatial portrait of the average date for filling the plant source water pool provides insights on the seasonal importance of rainfall for plant growth. It also permits continental scale predictions of monthly plant source water isotope values, with applications to improving isotopic paleoclimate proxies from plants such as tree rings or sedimentary leaf waxes, and for using oxygen and hydrogen isotopes to track the origins of agricultural products.

  18. User's manual for the Graphical Constituent Loading Analysis System (GCLAS)

    USGS Publications Warehouse

    Koltun, G.F.; Eberle, Michael; Gray, J.R.; Glysson, G.D.

    2006-01-01

    This manual describes the Graphical Constituent Loading Analysis System (GCLAS), an interactive cross-platform program for computing the mass (load) and average concentration of a constituent that is transported in stream water over a period of time. GCLAS computes loads as a function of an equal-interval streamflow time series and an equal- or unequal-interval time series of constituent concentrations. The constituent-concentration time series may be composed of measured concentrations or a combination of measured and estimated concentrations. GCLAS is not intended for use in situations where concentration data (or an appropriate surrogate) are collected infrequently or where an appreciable proportion of the concentration values is censored. It is assumed that the constituent-concentration time series used by GCLAS adequately represents the true time-varying concentration. Commonly, measured constituent concentrations are collected at a frequency that is less than ideal (from a load-computation standpoint), so estimated concentrations must be inserted in the time series to better approximate the expected chemograph. GCLAS provides tools to facilitate estimation and entry of instantaneous concentrations for that purpose. Water-quality samples collected for load computation frequently are collected in a single vertical or at a single point in a stream cross section. Several factors, some of which may vary as a function of time and (or) streamflow, can affect whether the sample concentrations are representative of the mean concentration in the cross section. GCLAS provides tools to aid the analyst in assessing whether concentrations in samples collected in a single vertical or at a single point in a stream cross section exhibit systematic bias with respect to the mean concentrations. In cases where bias is evident, the analyst can construct coefficient relations in GCLAS to reduce or eliminate the observed bias. 
GCLAS can export load and concentration data in formats suitable for entry into the U.S. Geological Survey's National Water Information System. GCLAS can also import and export data in formats that are compatible with various commonly used spreadsheet and statistics programs.

  19. Applying the Pseudo-Panel Approach to International Large-Scale Assessments: A Methodology for Analyzing Subpopulation Trend Data

    ERIC Educational Resources Information Center

    Hooper, Martin

    2017-01-01

    TIMSS and PIRLS assess representative samples of students at regular intervals, measuring trends in student achievement and student contexts for learning. Because individual students are not tracked over time, analysis of international large-scale assessment data is usually conducted cross-sectionally. Gustafsson (2007) proposed examining the data…

  20. The Teenage Nonviolence Test: Internal Consistency and Stability.

    ERIC Educational Resources Information Center

    Mayton, Daniel M., II; Weedman, Jonathon; Sonnen, Jennifer; Grubb, Celeste; Hirose, Masa

    This research study was designed to establish the reliability of the Teenage Nonviolence Test (TNT). The consistency and factor structure of the TNT using a sample of 376 adolescents were evaluated. The stability of the TNT was assessed over time by administering the TNT twice with a two week intervening interval to 87 adolescents. The TNT appears…

  1. High Resolution Time Series Observations of Bio-Optical and Physical Variability in the Arabian Sea

    DTIC Science & Technology

    1998-09-30

    1995-October 20, 1995). Multi-variable moored systems (MVMS) were deployed by our group at 35 and 80 m. The MVMS utilizes a VMCM to measure currents...similar to that of the UCSB MVMSs. WORK COMPLETED Our MVMS interdisciplinary systems with sampling intervals of a few minutes were placed on a mooring

  2. Interval Timing Is Preserved Despite Circadian Desynchrony in Rats: Constant Light and Heavy Water Studies.

    PubMed

    Petersen, Christian C; Mistlberger, Ralph E

    2017-08-01

    The mechanisms that enable mammals to time events that recur at 24-h intervals (circadian timing) and at arbitrary intervals in the seconds-to-minutes range (interval timing) are thought to be distinct at the computational and neurobiological levels. Recent evidence that disruption of circadian rhythmicity by constant light (LL) abolishes interval timing in mice challenges this assumption and suggests a critical role for circadian clocks in short interval timing. We sought to confirm and extend this finding by examining interval timing in rats in which circadian rhythmicity was disrupted by long-term exposure to LL or by chronic intake of 25% D2O. Adult, male Sprague-Dawley rats were housed in a light-dark (LD) cycle or in LL until free-running circadian rhythmicity was markedly disrupted or abolished. The rats were then trained and tested on 15- and 30-sec peak-interval procedures, with water restriction used to motivate task performance. Interval timing was found to be unimpaired in LL rats, but a weak circadian activity rhythm was apparently rescued by the training procedure, possibly due to binge feeding that occurred during the 15-min water access period that followed training each day. A second group of rats in LL were therefore restricted to 6 daily meals scheduled at 4-h intervals. Despite a complete absence of circadian rhythmicity in this group, interval timing was again unaffected. To eliminate all possible temporal cues, we tested a third group of rats in LL by using a pseudo-randomized schedule. Again, interval timing remained accurate. Finally, rats tested in LD received 25% D2O in place of drinking water. This markedly lengthened the circadian period and caused a failure of LD entrainment but did not disrupt interval timing. These results indicate that interval timing in rats is resistant to disruption by manipulations of circadian timekeeping previously shown to impair interval timing in mice.

  3. Knowledge-based nonuniform sampling in multidimensional NMR.

    PubMed

    Schuyler, Adam D; Maciejewski, Mark W; Arthanari, Haribabu; Hoch, Jeffrey C

    2011-07-01

    The full resolution afforded by high-field magnets is rarely realized in the indirect dimensions of multidimensional NMR experiments because of the time cost of uniformly sampling to long evolution times. Emerging methods utilizing nonuniform sampling (NUS) enable high resolution along indirect dimensions by sampling long evolution times without sampling at every multiple of the Nyquist sampling interval. While the earliest NUS approaches matched the decay of sampling density to the decay of the signal envelope, recent approaches based on coupled evolution times attempt to optimize sampling by choosing projection angles that increase the likelihood of resolving closely-spaced resonances. These approaches employ knowledge about chemical shifts to predict optimal projection angles, whereas prior applications of tailored sampling employed only knowledge of the decay rate. In this work we adapt the matched filter approach as a general strategy for knowledge-based nonuniform sampling that can exploit prior knowledge about chemical shifts and is not restricted to sampling projections. Based on several measures of performance, we find that exponentially weighted random sampling (envelope matched sampling) performs better than shift-based sampling (beat matched sampling). While shift-based sampling can yield small advantages in sensitivity, the gains are generally outweighed by diminished robustness. Our observation that more robust sampling schemes are only slightly less sensitive than schemes highly optimized using prior knowledge about chemical shifts has broad implications for any multidimensional NMR study employing NUS. The results derived from simulated data are demonstrated with a sample application to PfPMT, the phosphoethanolamine methyltransferase of the human malaria parasite Plasmodium falciparum.
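Envelope matched (exponentially weighted random) sampling as described above can be sketched by biasing the selection of Nyquist-grid points toward short evolution times, where the signal envelope is largest. Function and parameter names below are illustrative, not from any particular NUS software:

```python
import numpy as np

def envelope_matched_schedule(n_total, n_sample, r2, sw, seed=0):
    """Pick n_sample of n_total Nyquist grid points with sampling
    density matched to an exponential decay exp(-r2 * t), i.e.
    the expected signal envelope. sw = spectral width (Hz), so
    the dwell time between grid points is 1/sw."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_total) / sw          # evolution times on the grid
    p = np.exp(-r2 * t)                  # envelope-matched density
    p /= p.sum()
    return np.sort(rng.choice(n_total, size=n_sample, replace=False, p=p))

# 64 of 256 increments, assumed R2 = 20 1/s decay, 5 kHz spectral width
idx = envelope_matched_schedule(n_total=256, n_sample=64, r2=20.0, sw=5000.0)
```

A shift-based (beat matched) scheme would instead modulate `p` with knowledge of expected chemical shifts; the abstract reports that this extra optimization buys little sensitivity at a cost in robustness.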

  4. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    PubMed

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
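For intuition only, a generic normal-approximation interval with a finite-population correction illustrates why n > 30 diploids is often needed for ±0.05 accuracy. This is NOT the paper's exact finite-diploid method, just a standard textbook approximation:

```python
import math

def allele_freq_ci(x, n_alleles, N_alleles, z=1.96):
    """Approximate CI for a population allele frequency:
    normal approximation with finite-population correction.
    x = observed copies of the allele, n_alleles = 2 * sample size,
    N_alleles = 2 * population size (diploid organisms)."""
    p = x / n_alleles
    fpc = math.sqrt((N_alleles - n_alleles) / (N_alleles - 1))
    se = math.sqrt(p * (1 - p) / n_alleles) * fpc
    return max(0.0, p - z * se), min(1.0, p + z * se)

# hypothetical: 18 copies in a sample of 30 diploids from 1000 diploids
lo, hi = allele_freq_ci(x=18, n_alleles=60, N_alleles=2000)
half_width = (hi - lo) / 2   # roughly 0.11 here, i.e. wider than +/-0.05
```

The half-width exceeding 0.05 even at n = 30 is consistent with the sample-size conclusion quoted above, though the rigorous intervals in the paper differ in construction.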

  5. Automated saliva processing for LC-MS/MS: Improving laboratory efficiency in cortisol and cortisone testing.

    PubMed

    Antonelli, Giorgia; Padoan, Andrea; Artusi, Carlo; Marinova, Mariela; Zaninotto, Martina; Plebani, Mario

    2016-04-01

    The aim of this study was to implement in our routine practice an automated saliva preparation protocol for quantification of cortisol (F) and cortisone (E) by LC-MS/MS using a liquid handling platform, maintaining the previously defined reference intervals with the manual preparation. Addition of internal standard solution to saliva samples and calibrators and SPE on μ-elution 96-well plate were performed by liquid handling platform. After extraction, the eluates were submitted to LC-MS/MS analysis. The manual steps within the entire process were to transfer saliva samples in suitable tubes, to put the cap mat and transfer of the collection plate to the LC auto sampler. Transference of the reference intervals from the manual to the automated procedure was established by Passing Bablok regression on 120 saliva samples analyzed simultaneously with the two procedures. Calibration curves were linear throughout the selected ranges. The imprecision ranged from 2 to 10%, with recoveries from 95 to 116%. Passing Bablok regression demonstrated no significant bias. The liquid handling platform translates the manual steps into automated operations allowing for saving hands-on time, while maintaining assay reproducibility and ensuring reliability of results, making it implementable in our routine with the previous established reference intervals. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  6. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    PubMed Central

    Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.

    2017-01-01

    Objective Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random effects meta-analysis methods. I2 statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I2 statistics and prediction intervals for c-statistics. Conclusion This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID:27262237

  7. A critical evaluation of the Beckman Coulter Access hsTnI: Analytical performance, reference interval and concordance.

    PubMed

    Pretorius, Carel J; Tate, Jillian R; Wilgen, Urs; Cullen, Louise; Ungerer, Jacobus P J

    2018-05-01

    We investigated the analytical performance, outlier rate, carryover and reference interval of the Beckman Coulter Access hsTnI in detail and compared it with historical and other commercial assays. We compared the imprecision, detection capability, analytical sensitivity, outlier rate and carryover against two previous Access AccuTnI assay versions. We established the reference interval with stored samples from a previous study and compared the concordances and variances with the Access AccuTnI+3 as well as with two commercial assays. The Access hsTnI had excellent analytical sensitivity, with a calibration slope 5.6 times steeper than that of the Access AccuTnI+3. The detection capability was markedly improved, with the SD of the blank 0.18-0.20 ng/L, LoB 0.29-0.33 ng/L and LoD 0.58-0.69 ng/L. All the reference interval samples had a result above the LoB value. At a mean concentration of 2.83 ng/L the SD was 0.28 ng/L (CV 9.8%). Carryover (0.005%) and outlier (0.046%) rates were similar to the Access AccuTnI+3. The combined male and female 99th percentile reference interval was 18.2 ng/L (90% CI 13.2-21.1 ng/L). Concordance amongst the assays was poor, with only 16.7%, 19.6% and 15.2% of samples identified by all 4 assays as above the 99th, 97.5th and 95th percentiles. Analytical imprecision was a minor contributor to the observed variances between assays. The Beckman Coulter Access hsTnI assay has excellent analytical sensitivity and precision at concentrations close to zero. This allows cTnI measurement in all healthy individuals and the capability to identify numerically small differences between serial samples as statistically significant. Concordance in healthy individuals remains poor amongst assays. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.

  8. Reliability of confidence intervals calculated by bootstrap and classical methods using the FIA 1-ha plot design

    Treesearch

    H. T. Schreuder; M. S. Williams

    2000-01-01

    In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots respectively, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with those based on the classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...
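A minimal illustration of the comparison, contrasting a classical t interval with a bootstrap percentile interval for the mean of simulated skewed per-plot values (the data and sample size here are made up; the study used mapped forest populations and also evaluated accelerated and bootstrap-t variants):

```python
import numpy as np

rng = np.random.default_rng(42)
plot_values = rng.lognormal(mean=3.0, sigma=0.8, size=40)  # skewed toy data

# classical t-based 95% interval for the mean
n = len(plot_values)
m, s = plot_values.mean(), plot_values.std(ddof=1)
t975 = 2.023  # approx. 0.975 t quantile, 39 df (scipy.stats.t.ppf(0.975, 39))
classical = (m - t975 * s / np.sqrt(n), m + t975 * s / np.sqrt(n))

# bootstrap percentile 95% interval: resample plots with replacement
boot_means = np.array([rng.choice(plot_values, n, replace=True).mean()
                       for _ in range(2000)])
percentile = (np.percentile(boot_means, 2.5), np.percentile(boot_means, 97.5))
```

For skewed data the percentile interval can be asymmetric about the mean, which is one of the behaviors the simulation study compared against the symmetric classical interval.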

  9. Continuous-time interval model identification of blood glucose dynamics for type 1 diabetes

    NASA Astrophysics Data System (ADS)

    Kirchsteiger, Harald; Johansson, Rolf; Renard, Eric; del Re, Luigi

    2014-07-01

    While good physiological models of the glucose metabolism in type 1 diabetic patients are well known, their parameterisation is difficult. The high intra-patient variability observed is a further major obstacle. This holds for data-based models too, so that no good patient-specific models are available. Against this background, this paper proposes the use of interval models to cover the different metabolic conditions. The control-oriented models contain a carbohydrate and insulin sensitivity factor to be used for insulin bolus calculators directly. Available clinical measurements were sampled on an irregular schedule which prompts the use of continuous-time identification, also for the direct estimation of the clinically interpretable factors mentioned above. An identification method is derived and applied to real data from 28 diabetic patients. Model estimation was done on a clinical data-set, whereas validation results shown were done on an out-of-clinic, everyday life data-set. The results show that the interval model approach allows a much more regular estimation of the parameters and avoids physiologically incompatible parameter estimates.

  10. Secretaries, depression and absenteeism.

    PubMed

    Garrison, R; Eaton, W W

    1992-01-01

    This study examines the prevalence of Major Depressive Disorder; missed work; and mental health services use among secretaries and other women employed full-time. In a random sample of 3,484 women employed full-time, women employed as secretaries were significantly more likely to be depressed than other women even after controlling for socio-demographic characteristics (odds ratio = 1.69, 95% confidence interval = 1.05, 2.73). Secretaries were significantly more likely to report missing work in the last three months (odds ratio = 1.77, confidence interval = 1.01, 3.11); a finding not attributable to depression. Secretaries were also more likely to seek mental health services, but this finding was not significant (odds ratio = 1.78, confidence interval = 0.55, 5.78). It is possible that these findings are attributable to a selection effect whereby depressed women, and women who are likely to miss work, become secretaries. A second possibility is that women employed as secretaries have more "nonwork role stress" than other employed women. Alternatively, job conditions which result in dissatisfaction and stress may lead to depression and absenteeism. We believe our findings warrant further investigation into the work environment of secretaries.

  11. Near-real-time trace element measurements in a rural, traffic-influenced environment with some fireworks

    NASA Astrophysics Data System (ADS)

    Furger, Markus; Slowik, Jay G.; Cruz Minguillón, María; Hueglin, Christoph; Koch, Chris; Prévôt, André S. H.; Baltensperger, Urs

    2016-04-01

    Aerosol-bound trace elements can affect the environment in significant ways, especially when they are toxic. Characterizing the spatial and temporal variability of trace elements is a prerequisite for human exposure studies. The requirement for high time resolution, and the consequently low sample masses, previously called for analysis methods that are not easily accessible, such as synchrotron radiation-induced X-ray fluorescence spectrometry (SR-XRF). In recent years, instrumentation that samples and analyzes airborne particulate matter with time resolutions of less than an hour in near real time has entered the market. We present the results of a three-week campaign in a rural environment close to a freeway. The measurement period included the fireworks of the Swiss National Day. The XRF instrument was set up at the monitoring station Härkingen of the Swiss Monitoring Network for Air Pollution (NABEL). It was configured to sample and analyze ambient PM10 aerosols in 1-hour intervals. Sample analysis with XRF was performed by the instrument immediately after collection, i.e. during the next sampling interval. 24 elements were analyzed and quantified (Si, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Cd, Sn, Sb, Ba, Pt, Hg, Pb, Bi). The element concentrations obtained by the XRF instrument were compared to those determined by ICP-AES and ICP-MS in PM10 samples collected by NABEL high-volume samplers. The results demonstrate the capability of the instrument to measure over a wide range of concentrations, from a few ng m-3 to μg m-3, under ambient conditions. The time resolution allows for the characterization of diurnal variations in element concentrations, which provides information on the contribution of emission sources such as road traffic, soil, or fireworks. Some elements (V, Co, As, Pt) were below their detection limits most of the time, but As could be quantified during the fireworks. The transition metals Cr, Mn, Fe, Cu, and Zn could be attributed to freeway traffic. 
K, S, Ba, and Bi were strongly linked to the fireworks. The field test provided good evidence for the applicability and ease of use of the instrument. It also gave an indication of the sensitivity of the method under realistic ambient conditions, although the three-week period was too short for a thorough assessment, e.g. across different weather conditions.

  12. Monitoring of NMR porosity changes in the full-size core salvage through the drying process

    NASA Astrophysics Data System (ADS)

    Fattakhov, Artur; Kosarev, Victor; Doroginitskii, Mikhail; Skirda, Vladimir

    2015-04-01

    Currently the principle of nuclear magnetic resonance (NMR) is one of the most popular technologies in the field of borehole geophysics and core analysis. Results of NMR studies allow the porosity and permeability of sedimentary rocks to be calculated with sufficient reliability. However, all standard tools for NMR study of core salvage have a significant limitation: they consider only the long relaxation times corresponding to mobile formation fluid. Current trends in energy require a move away from conventional oil to various alternative sources, one of which is deposits of bitumen and high-viscosity oil. At Kazan (Volga Region) Federal University (Russia), a mobile unit for studying full-length core salvage by the NMR method ("NMR-Core") was developed together with specialists of "TNG-Group" (a company providing maintenance services to oil companies). This unit is designed for the study of core material directly at the well, immediately after its removal from the core receiver. The maximum diameter of a core sample may be up to 116 mm, and its length (or the length of a set of samples) up to 1000 mm. The positional precision of the core sample relative to the measurement system is 1 mm, and the spatial resolution along the axis of the core is 10 mm. The acquisition time for 1 m of core salvage varies depending on the mode of research and is at least 20 minutes. Furthermore, the tool implements a special investigation mode for core samples with very short relaxation times (for example, heavy oil). The aim of this work is to track changes in NMR porosity in full-size core salvage over time. A water-saturated core salvage from a shallow educational well was used as a sample. The diameter of the studied core samples is 93 mm. Several 1-m sections were selected from the 200-meter coring interval, and the studied core samples were measured several times. 
The time interval between measurements ranged from 1 hour to 48 hours. The measurements make it possible to conclude that NMR porosity changes over time as a result of evaporation of part of the fluid from the surface layer of the core salvage, and to suggest a core-analysis technique applicable directly at the well. This work is supported by a grant of the Ministry of Education and Science of the Russian Federation (project No. 02.G25.31.0029).

  13. Agreement between microscopic examination and bacterial culture of bile samples for detection of bactibilia in dogs and cats with hepatobiliary disease.

    PubMed

    Pashmakova, Medora B; Piccione, Julie; Bishop, Micah A; Nelson, Whitney R; Lawhon, Sara D

    2017-05-01

    OBJECTIVE To evaluate the agreement between results of microscopic examination and bacterial culture of bile samples from dogs and cats with hepatobiliary disease for detection of bactibilia. DESIGN Cross-sectional study. ANIMALS 31 dogs and 21 cats with hepatobiliary disease for which subsequent microscopic examination and bacterial culture of bile samples was performed from 2004 through 2014. PROCEDURES Electronic medical records of included dogs and cats were reviewed to extract data regarding diagnosis, antimicrobials administered, and results of microscopic examination and bacterial culture of bile samples. Agreement between these 2 diagnostic tests was assessed by calculation of the Cohen κ value. RESULTS 17 (33%) dogs and cats had bactibilia identified by microscopic examination of bile samples, and 11 (21%) had bactibilia identified via bacterial culture. Agreement between these 2 tests was substantial (percentage agreement [positive and negative results], 85%; κ = 0.62; 95% confidence interval, 0.38 to 0.89) and improved to almost perfect when calculated for only animals that received no antimicrobials within 24 hours prior to sample collection (percentage agreement, 94%; κ = 0.84; 95% confidence interval, 0.61 to 1.00). CONCLUSIONS AND CLINICAL RELEVANCE Results indicated that agreement between microscopic examination and bacterial culture of bile samples for detection of bactibilia is optimized when dogs and cats are not receiving antimicrobials at the time of sample collection. Concurrent bacterial culture and microscopic examination of bile samples are recommended for all cats and dogs evaluated for hepatobiliary disease.
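
    The agreement statistics reported here are easy to recompute. A minimal sketch of Cohen's κ for a 2 × 2 table; the counts below are hypothetical, chosen only to give a κ near the reported range, and are not the study's raw data:

```python
def cohen_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table.

    a = both tests positive, b = test 1 positive / test 2 negative,
    c = test 1 negative / test 2 positive, d = both negative.
    """
    n = a + b + c + d
    po = (a + d) / n                       # observed agreement
    p_pos = ((a + b) / n) * ((a + c) / n)  # chance both positive
    p_neg = ((c + d) / n) * ((b + d) / n)  # chance both negative
    pe = p_pos + p_neg                     # expected agreement by chance
    return (po - pe) / (1 - pe)

# Hypothetical counts for 52 animals (not the study's data):
# 10 positive on both tests, 7 discordant, 35 negative on both.
kappa = cohen_kappa(10, 5, 2, 35)
```

    With these counts the observed agreement is 45/52 ≈ 87% and κ ≈ 0.65, i.e. "substantial" on the usual Landis-Koch scale.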

  14. Monthly Fluctuations of Insomnia Symptoms in a Population-Based Sample

    PubMed Central

    Morin, Charles M.; LeBlanc, M.; Ivers, H.; Bélanger, L.; Mérette, Chantal; Savard, Josée; Jarrin, Denise C.

    2014-01-01

    Study Objectives: To document the monthly changes in sleep/insomnia status over a 12-month period; to determine the optimal time intervals to reliably capture new incident cases and recurrent episodes of insomnia and the likelihood of its persistence over time. Design: Participants were 100 adults (mean age = 49.9 years; 66% women) randomly selected from a larger population-based sample enrolled in a longitudinal study of the natural history of insomnia. They completed 12 monthly telephone interviews assessing insomnia, use of sleep aids, stressful life events, and physical and mental health problems in the previous month. A total of 1,125 interviews of a potential 1,200 were completed. Based on data collected at each assessment, participants were classified into one of three subgroups: good sleepers, insomnia symptoms, and insomnia syndrome. Results: At baseline, 42 participants were classified as good sleepers, 34 met criteria for insomnia symptoms, and 24 for an insomnia syndrome. There were significant fluctuations of insomnia over time, with 66% of the participants changing sleep status at least once over the 12 monthly assessments (51.5% for good sleepers, 59.5% for insomnia syndrome, and 93.4% for insomnia symptoms). Changes of status were more frequent among individuals with insomnia symptoms at baseline (mean = 3.46, SD = 2.36) than among those initially classified as good sleepers (mean = 2.12, SD = 2.70). Among the subgroup with insomnia symptoms at baseline, 88.3% reported improved sleep (i.e., became good sleepers) at least once over the 12 monthly assessments compared to 27.7% whose sleep worsened (i.e., met criteria for an insomnia syndrome) during the same period. Among individuals classified as good sleepers at baseline, risks of developing insomnia symptoms and syndrome over the subsequent months were, respectively, 48.6% and 14.5%. 
Monthly assessment over an interval of 6 months was found most reliable to estimate incidence rates, while an interval of 3 months proved the most reliable for defining chronic insomnia. Conclusions: Monthly assessment of insomnia and sleep patterns revealed significant variability over the course of a 12-month period. These findings highlight the importance for future epidemiological studies of conducting repeated assessment at shorter than the typical yearly interval in order to reliably capture the natural course of insomnia over time. Citation: Morin CM; LeBlanc M; Ivers H; Bélanger L; Mérette C; Savard J; Jarrin DC. Monthly fluctuations of insomnia symptoms in a population-based sample. SLEEP 2014;37(2):319-326. PMID:24497660

  15. Dinoflagellate cyst biostratigraphy of the Upper Cretaceous succession in the sub-Arctic region

    NASA Astrophysics Data System (ADS)

    Radmacher, Wiesława; Tyszka, Jarosław; Mangerud, Gunn; Pearce, Martin

    2017-04-01

    The study provides a solid basis for the first palynostratigraphic zonation of the Upper Cretaceous sub-Arctic succession. Dinoflagellate cysts from the unique composite section, combining samples from the shallow stratigraphic core 6711/4-U-1 and core-samples from well 6707/10-1 in the Norwegian Sea, were studied and compared to palynological data from the south-western Barents Sea, wells 7119/12-1, 7119/9-1, 7120/7-3, 7120/5-1 and 7121/5-1. Dinoflagellate cysts diagnostic for late Maastrichtian that are missing in the Barents Sea are recorded in both sections in the Norwegian Sea. This adds new valuable data from the time interval often represented by a significant regional hiatus in the area. Seven new and three previously recognised zones are identified, based on top and base occurrence of selected age diagnostic taxa. In addition, one Abundance Subzone is introduced. The biostratigraphic zonation includes: the intra late Albian to intra early Cenomanian Subtilisphaera kalaalliti Interval Zone sensu Nøhr-Hansen (1993); the intra early Cenomanian to intra late Cenomanian Palaeohystrichophora infusorioides-Palaeohystrichophora palaeoinfusa Interval Zone sensu Radmacher et al. (2014); the intra Turonian to ?intra early Coniacian Heterosphaeridium difficile Interval Zone sensu Nøhr-Hansen (2012); the ?intra early Coniacian to late Santonian Dinopterygium alatum Interval Zone sensu Radmacher et al. (2014); the ?early Campanian Palaeoglenodinium cretaceum Interval Zone sensu Radmacher et al. (2014); the intra Campanian Hystrichosphaeridium dowlingii-Heterosphaeridium spp. Interval Zone sensu Radmacher et al. (2015); the intra late Campanian Chatangiella bondarenkoi Interval Zone sensu Radmacher et al. (2014) encompassing the Heterosphaeridium bellii Abundance Subzone; the early Maastrichtian Cerodinium diebelii Interval Zone sensu Nøhr-Hansen (1996) and the intra late Maastrichtian Wodehouseia spinata Range Zone sensu Nøhr-Hansen (1996). 
Heterosphaeridium bellii is a newly described species important for biostratigraphic and palaeoenvironmental interpretations. Comparison of the recorded dinoflagellate cyst events with published data from adjacent areas, such as west and east Greenland, the North Sea, offshore eastern Canada and northern Siberia, allows for sub-Arctic interregional correlations. This research was partially supported by the EEA Financial Mechanism, the Norwegian Financial Mechanism and the Research Council of Norway.

  16. The effect of sampling rate on observed statistics in a correlated random walk

    PubMed Central

    Rosser, G.; Fletcher, A. G.; Maini, P. K.; Baker, R. E.

    2013-01-01

    Tracking the movement of individual cells or animals can provide important information about their motile behaviour, with key examples including migrating birds, foraging mammals and bacterial chemotaxis. In many experimental protocols, observations are recorded with a fixed sampling interval and the continuous underlying motion is approximated as a series of discrete steps. The size of the sampling interval significantly affects the tracking measurements, the statistics computed from observed trajectories, and the inferences drawn. Despite the widespread use of tracking data to investigate motile behaviour, many open questions remain about these effects. We use a correlated random walk model to study the variation with sampling interval of two key quantities of interest: apparent speed and angle change. Two variants of the model are considered, in which reorientations occur instantaneously and with a stationary pause, respectively. We employ stochastic simulations to study the effect of sampling on the distributions of apparent speeds and angle changes, and present novel mathematical analysis in the case of rapid sampling. Our investigation elucidates the complex nature of sampling effects for sampling intervals ranging over many orders of magnitude. Results show that inclusion of a stationary phase significantly alters the observed distributions of both quantities. PMID:23740484
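
    The central effect — apparent speed biased low at coarse sampling, because path curvature between observations is hidden — can be reproduced in a few lines. A minimal stochastic sketch with illustrative parameters (not the paper's model variants):

```python
import math
import random

def crw(n_steps, speed, turn_sd, dt=0.01, seed=1):
    """Simulate a 2-D correlated random walk: constant speed, with a
    Gaussian turning-angle increment at every fine time step."""
    rng = random.Random(seed)
    x = y = theta = 0.0
    path = [(0.0, 0.0)]
    for _ in range(n_steps):
        theta += rng.gauss(0.0, turn_sd)
        x += speed * dt * math.cos(theta)
        y += speed * dt * math.sin(theta)
        path.append((x, y))
    return path

def apparent_speed(path, every, dt=0.01):
    """Mean apparent speed when the track is observed only every
    `every` fine steps (sampling interval = every * dt)."""
    pts = path[::every]
    chords = [math.dist(p, q) for p, q in zip(pts, pts[1:])]
    return sum(chords) / (len(chords) * every * dt)

path = crw(20000, speed=1.0, turn_sd=0.1)
v_fine = apparent_speed(path, every=1)      # recovers the true speed
v_coarse = apparent_speed(path, every=100)  # biased low
```

    The finely sampled estimate recovers the true speed, while sampling every 100 steps underestimates it: the chord between successive observations is shorter than the path actually travelled.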

  17. Monte Carlo Method for Determining Earthquake Recurrence Parameters from Short Paleoseismic Catalogs: Example Calculations for California

    USGS Publications Warehouse

    Parsons, Tom

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
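
    The idea can be sketched as an accept/reject Monte Carlo over candidate recurrence parameters: draw (mean, coefficient of variation) pairs, simulate a synthetic series as short as the observed one, and keep candidates whose synthetic series resembles the observation. This is a toy version under a lognormal recurrence model with hypothetical interval data, not the paper's implementation:

```python
import math
import random
import statistics

def mc_recurrence_fit(observed, n_draws=20000, tol=0.25, seed=7):
    """Accept/reject Monte Carlo fit of a lognormal recurrence model
    to a short interval series: keep (mean, CoV) candidates whose
    equal-length synthetic series matches the observed mean and CoV
    within a relative tolerance `tol`."""
    rng = random.Random(seed)
    obs_mean = statistics.mean(observed)
    obs_cov = statistics.pstdev(observed) / obs_mean
    accepted = []
    for _ in range(n_draws):
        mean = rng.uniform(0.2 * obs_mean, 5.0 * obs_mean)
        cov = rng.uniform(0.05, 1.5)
        # lognormal parameters reproducing this mean and CoV
        sigma2 = math.log(1.0 + cov * cov)
        mu = math.log(mean) - 0.5 * sigma2
        sim = [rng.lognormvariate(mu, math.sqrt(sigma2)) for _ in observed]
        sim_mean = statistics.mean(sim)
        sim_cov = statistics.pstdev(sim) / sim_mean
        if (abs(sim_mean - obs_mean) <= tol * obs_mean
                and abs(sim_cov - obs_cov) <= tol):
            accepted.append((mean, cov))
    means = sorted(m for m, _ in accepted)
    covs = sorted(c for _, c in accepted)
    return means[len(means) // 2], covs[len(covs) // 2]

# Five intervals (years) from a hypothetical paleoseismic site:
mean_hat, cov_hat = mc_recurrence_fit([110, 180, 95, 260, 140])
```

    The medians of the accepted set serve as point estimates, while its spread reflects the large uncertainty inherent in a five-interval record.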

  19. Combined VSWIR/TIR Products Overview: Issues and Examples

    NASA Technical Reports Server (NTRS)

    Knox, Robert G.

    2010-01-01

    The presentation provides a summary of VSWIR data collected at 19-day intervals for most areas. TIR data was collected both day and night on a 5-day cycle (more frequently at higher latitudes), the TIR swath is four times as wide as VSWIR, and the 5-day orbit repeat is approximate. Topics include nested swath geometry for reference point design and coverage simulations for sample FLUXNET tower sites. Other points examined include variation in latitude for revisit frequency, overpass times, and TIR overlap geometry and timing between VSWIR data collections.

  20. HIF1α protein and mRNA expression as a new marker for post mortem interval estimation in human gingival tissue.

    PubMed

    Fais, Paolo; Mazzotti, Maria Carla; Teti, Gabriella; Boscolo-Berto, Rafael; Pelotti, Susi; Falconi, Mirella

    2018-06-01

    Estimating the post mortem interval (PMI) is still a crucial step in forensic pathology. Although several methods are available for assessing the PMI, precise estimation remains unreliable. The present study aimed to investigate the immunohistochemical distribution and mRNA expression of hypoxia-inducible factor 1α (HIF-1α) in post mortem gingival tissues to establish a correlation between the presence of HIF-1α and the time since death, with the final goal of achieving a more accurate PMI estimation. Samples of gingival tissues were obtained from 10 cadavers at different PMIs (1-3 days, 4-5 days and 8-9 days), and were processed for immunohistochemistry and quantitative reverse transcription-polymerase chain reaction. The results showed a time-dependent correlation of HIF-1α protein and its mRNA with time since death, which suggests that HIF-1α is a potential marker for PMI estimation. A strong HIF-1α protein signal, localized mainly in the stratum basale of the oral mucosa, was observed in samples collected at a short PMI (1-3 days); it gradually decreased in samples collected at a medium PMI (4-5 days) and was not detected in samples collected at a long PMI (8-9 days). These results are in agreement with the mRNA data and indicate the potential utility of forensic anatomy-based techniques, such as immunohistochemistry, as complementary tools in forensic investigations. © 2018 The Authors. Journal of Anatomy published by John Wiley & Sons Ltd on behalf of Anatomical Society.

  1. Practical Advice on Calculating Confidence Intervals for Radioprotection Effects and Reducing Animal Numbers in Radiation Countermeasure Experiments

    PubMed Central

    Landes, Reid D.; Lensing, Shelly Y.; Kodell, Ralph L.; Hauer-Jensen, Martin

    2014-01-01

    The dose of a substance that causes death in P% of a population is called an LDP, where LD stands for lethal dose. In radiation research, a common LDP of interest is the radiation dose that kills 50% of the population by a specified time, i.e., lethal dose 50 or LD50. When comparing LD50 between two populations, relative potency is the parameter of interest. In radiation research, this is commonly known as the dose reduction factor (DRF). Unfortunately, statistical inference on dose reduction factor is seldom reported. We illustrate how to calculate confidence intervals for dose reduction factor, which may then be used for statistical inference. Further, most dose reduction factor experiments use hundreds, rather than tens of animals. Through better dosing strategies and the use of a recently available sample size formula, we also show how animal numbers may be reduced while maintaining high statistical power. The illustrations center on realistic examples comparing LD50 values between a radiation countermeasure group and a radiation-only control. We also provide easy-to-use spreadsheets for sample size calculations and confidence interval calculations, as well as SAS® and R code for the latter. PMID:24164553
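
    As a companion to the confidence-interval discussion, an LD50 point estimate can be obtained with the Spearman-Kärber method, which needs only a monotone dose-mortality series spanning 0-100% mortality. The doses and mortality fractions below are hypothetical, and the sketch omits the paper's CI and sample-size machinery:

```python
import math

def log_ld50_spearman_karber(doses, prop_dead):
    """Spearman-Karber estimate of log10(LD50) from a monotone
    dose-mortality series whose proportions run from 0 to 1."""
    logs = [math.log10(d) for d in doses]
    return sum((prop_dead[i + 1] - prop_dead[i]) * (logs[i] + logs[i + 1]) / 2.0
               for i in range(len(doses) - 1))

# Hypothetical radiation doses (Gy) and mortality fractions for a
# control arm and a countermeasure arm (not the paper's data):
doses = [6.0, 7.0, 8.0, 9.0, 10.0]
control = [0.0, 0.2, 0.5, 0.8, 1.0]
treated = [0.0, 0.0, 0.2, 0.6, 1.0]
ld50_c = 10 ** log_ld50_spearman_karber(doses, control)
ld50_t = 10 ** log_ld50_spearman_karber(doses, treated)
drf = ld50_t / ld50_c  # dose reduction factor > 1 => radioprotection
```

    Here the countermeasure shifts the LD50 upward, giving a DRF slightly above 1; a full analysis would attach a confidence interval to this ratio as the abstract describes.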

  2. Temperature-time issues in bioburden control for planetary protection

    NASA Astrophysics Data System (ADS)

    Clark, Benton C.

    2004-01-01

    Heat energy, administered in the form of an elevated temperature heat soak over a specific interval of time, is a well-known method for inactivating organisms. Sterilization protocols, from commercial pasteurization to laboratory autoclaving, specify both temperature and time, as well as water activity, for treatments to achieve either acceptable reduction of bioburden or complete sterilization. In practical applications of planetary protection, whether to reduce spore load in forward or roundtrip contamination, or to exterminate putative organisms in returned samples from bodies suspected of possible life, avoidance of expensive or potentially damaging treatments of hardware (or samples) could be accomplished if reciprocal relationships between time duration and soak temperature could be established. Conservative rules can be developed from consideration of empirical test data, derived relationships, current standards and various theoretical or proven mechanisms for thermal damage to biological systems.
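
    The reciprocal time-temperature relationship described here is classically captured by the D-value/z-value model, in which each z degrees of cooling multiplies the required holding time tenfold. A minimal sketch; the parameter values (D at 121 °C = 1 min, z = 10 °C) are illustrative assumptions, not values from this paper:

```python
def holding_time(log_reduction, temp_c, d_ref=1.0, t_ref=121.0, z=10.0):
    """Time (min) at temp_c to achieve the requested log10 reduction,
    using the classical D/z thermal-death model:
    D(T) = d_ref * 10 ** ((t_ref - T) / z)."""
    d_t = d_ref * 10 ** ((t_ref - temp_c) / z)
    return log_reduction * d_t

# A 12-log spore reduction at 121 C needs 12 min (with D121 = 1 min);
# dropping the soak temperature by one z (10 C) requires 10x the time.
t_121 = holding_time(12, 121.0)
t_111 = holding_time(12, 111.0)
```

    Equivalently, raising the soak temperature by one z-value cuts the required holding time tenfold, which is the kind of conservative reciprocal rule the abstract argues for.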

  3. Sampling factors influencing accuracy of sperm kinematic analysis.

    PubMed

    Owen, D H; Katz, D F

    1993-01-01

    Sampling conditions that influence the accuracy of experimental measurement of sperm head kinematics were studied by computer simulation methods. Several archetypal sperm trajectories were studied. First, mathematical models of typical flagellar beats were input to hydrodynamic equations of sperm motion. The instantaneous swimming velocities of such sperm were computed over sequences of flagellar beat cycles, from which the resulting trajectories were determined. In a second, idealized approach, direct mathematical models of trajectories were utilized, based upon similarities to the previous hydrodynamic constructs. In general, it was found that analyses of sampling factors produced similar results for the hydrodynamic and idealized trajectories. A number of experimental sampling factors were studied, including the number of sperm head positions measured per flagellar beat, and the time interval over which these measurements are taken. It was found that when one flagellar beat is sampled, values of amplitude of lateral head displacement (ALH) and linearity (LIN) approached their actual values when five or more sample points per beat were taken. Mean angular displacement (MAD) values, however, remained sensitive to sampling rate even when large sampling rates were used. Values of MAD were also much more sensitive to the initial starting point of the sampling procedure than were ALH or LIN. On the basis of these analyses of measurement accuracy for individual sperm, simulations were then performed of cumulative effects when studying entire populations of motile cells. It was found that substantial (double digit) errors occurred in the mean values of curvilinear velocity (VCL), LIN, and MAD under the conditions of 30 video frames per second and 0.5 seconds of analysis time. Increasing the analysis interval to 1 second did not appreciably improve the results. However, increasing the analysis rate to 60 frames per second significantly reduced the errors. 
These findings thus suggest that computer-aided sperm analysis (CASA) application at 60 frames per second will significantly improve the accuracy of kinematic analysis in most applications to human and other mammalian sperm.
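
    The kinematic quantities discussed (VCL, LIN) are straightforward to compute from sampled head positions; the sensitivity to sampling rate arises entirely from how the discrete positions approximate the continuous track. A minimal sketch with a hypothetical zig-zag track (ALH and MAD are omitted for brevity):

```python
import math

def kinematics(points, fps):
    """Curvilinear velocity (VCL), straight-line velocity (VSL) and
    linearity (LIN = VSL/VCL) from head positions sampled at fps Hz."""
    dt = 1.0 / fps
    elapsed = dt * (len(points) - 1)
    # VCL: total point-to-point path length per unit time
    vcl = sum(math.dist(p, q) for p, q in zip(points, points[1:])) / elapsed
    # VSL: net straight-line displacement per unit time
    vsl = math.dist(points[0], points[-1]) / elapsed
    return vcl, vsl, vsl / vcl

# A hypothetical zig-zag track sampled at 60 Hz (positions in micrometres):
pts = [(0, 0), (1, 1), (2, -1), (3, 1), (4, 0)]
vcl, vsl, lin = kinematics(pts, fps=60)
```

    Coarser sampling of the same track would shorten the measured path, lowering VCL and raising LIN — the bias the simulations above quantify.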

  4. Generalization of Turbulent Pair Dispersion to Large Initial Separations

    NASA Astrophysics Data System (ADS)

    Shnapp, Ron; Liberzon, Alex; International Collaboration for Turbulence Research

    2018-06-01

    We present a generalization of turbulent pair dispersion to large initial separations (η

  5. Data that describe at-a-point temporal variations in the transport rate and particle-size distribution of bedload; East Fork River, Wyoming, and Fall River, Colorado

    USGS Publications Warehouse

    Gomez, Basil; Emmett, W.W.

    1990-01-01

    Data from the East Fork River, Wyoming, and the Fall River, Colorado, that document at-a-point temporal variations in the transport rate and particle-size distribution of bedload, associated with the downstream migration of dunes, are presented. Bedload sampling was undertaken, using a 76.2 x 76.2 mm Helley-Smith sampler, on three separate occasions at each site in June 1988. In each instance, the sampling time was 30 seconds and the sampling interval was 5 minutes. The sampling period ranged from 4.92 to 8.25 hours. Water stage did not vary appreciably during any of the sampling periods. (USGS)

  6. Role of enamel demineralization and remineralization on microtensile bond strength of resin composite

    PubMed Central

    Rizvi, Abbas; Zafar, Muhammad S.; Al-Wasifi, Yasser; Fareed, Wamiq; Khurshid, Zohaib

    2016-01-01

    Objective: This study aimed to establish the microtensile bond strength of enamel following exposure to an aerated drink at various time intervals, with and without application of a remineralization agent. In addition, the degree of remineralization and demineralization of tooth enamel was assessed using polarized light microscopy. Materials and Methods: Seventy extracted human incisors split into two halves were immersed in an aerated beverage (cola drink) for 5 min and stored in saliva until the time of microtensile bond testing. Prepared specimens were divided randomly into two study groups: a remineralizing group (n = 70), in which specimens were treated with a casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) remineralization agent (Recaldent™; GC Europe), and a control group (n = 70), in which specimens received no remineralization treatment and were kept in artificial saliva. All specimens were tested for microtensile bond strength at regular intervals (1 h, 1 day, 2 days, 1 week, and 2 weeks) using a universal testing machine. The results were statistically analyzed (P = 0.05) using a two-way ANOVA. Results: There was a statistically significant increase in bond strength in the CPP-ACP group (P < 0.05) at all time intervals. The bond strength of the remineralizing group at 2 days (~13.64 megapascals [MPa]) was comparable to that of the control group after 1 week (~12.44 MPa). Conclusions: CPP-ACP treatment of teeth exposed to an aerated drink provided a significant increase in bond strength at a shorter interval compared to teeth exposed to saliva alone. PMID:27403057

  7. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
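
    The setting can be reproduced in a few lines: a monoexponential (one-compartment i.v. bolus) model dosed every 12 h with a terminal half-life longer than the dosing interval (k = 0.03 h⁻¹, half-life ≈ 23 h), and patient-reported dose times jittered by up to ±2 h. All parameter values are illustrative assumptions, not those of the paper:

```python
import math
import random

def conc(t, dose_times, dose=100.0, v=50.0, k=0.03):
    """One-compartment i.v. bolus (monoexponential) model:
    superpose dose/v * exp(-k * elapsed) over all prior doses."""
    return sum(dose / v * math.exp(-k * (t - td))
               for td in dose_times if td <= t)

rng = random.Random(3)
true_times = [0.0, 12.0, 24.0, 36.0]  # q12h dosing
# Patient-reported dose times with up to +/- 2 h of error:
reported = [td + rng.uniform(-2.0, 2.0) for td in true_times]

t_obs = 40.0  # one sparse sample, 4 h after the last true dose
c_true = conc(t_obs, true_times)
c_reported = conc(t_obs, reported)
rel_err = abs(c_reported - c_true) / c_true
```

    With a slow elimination rate, a ±2 h reporting error perturbs each exponential term by at most exp(±0.06), so the relative error in the predicted concentration stays in the single-digit-percent range — consistent with the modest parameter-estimation inaccuracy the paper reports.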

  8. Comparison of nutritional status assessment parameters in predicting length of hospital stay in cancer patients.

    PubMed

    Mendes, J; Alves, P; Amaral, T F

    2014-06-01

    Undernutrition has been associated with an increased length of hospital stay, which may reflect the patient's prognosis. The aim of this study was to quantify and compare the association of nutritional status and handgrip strength at hospital admission with time to discharge in cancer patients. An observational prospective study was conducted in an oncology center. Patient-Generated Subjective Global Assessment, Nutritional Risk Screening 2002 and handgrip strength were conducted in a probabilistic sample of 130 cancer patients. The association between baseline nutritional status, handgrip strength and time to discharge was evaluated using survival analysis with discharge alive as the outcome. Nutritional risk ranged from 42.3 to 53.1% depending on the tool used. According to Patient-Generated Subjective Global Assessment, severe undernutrition was present in 22.3% of the sample. The association between baseline data and time to discharge was stronger in patients with low handgrip strength (adjusted hazard ratio, low handgrip strength: 0.33; 95% confidence interval: 0.19-0.55) than in undernourished patients evaluated by the other tools; Patient-Generated Subjective Global Assessment (adjusted hazard ratio, severe undernutrition: 0.45; 95% confidence interval: 0.27-0.75) and Nutritional Risk Screening 2002 (adjusted hazard ratio, with nutritional risk: 0.55; 95% confidence interval: 0.37-0.80). An approximately 3-fold decrease in the probability of discharge alive was observed in patients with low handgrip strength. Decreasing handgrip strength tertiles, as well as undernutrition and nutritional risk assessed by Patient-Generated Subjective Global Assessment and Nutritional Risk Screening 2002, discriminated between patients with longer and shorter hospital stays. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
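
    The survival analysis behind these hazard ratios treats discharge alive as the event and length of stay as the time axis. A minimal Kaplan-Meier sketch over hypothetical stays (the study's adjusted Cox model is beyond a few lines):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve. times = lengths of stay;
    events[i] = 1 if discharged alive (the event), 0 if censored.
    Returns [(time, S(time))] at each distinct event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    s = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = n_t = 0  # events and total observations at this time
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]
            n_t += 1
            i += 1
        if d:
            s *= 1.0 - d / at_risk  # product-limit update
            curve.append((t, s))
        at_risk -= n_t
    return curve

# Hypothetical days-to-discharge for six patients (0 = censored):
curve = kaplan_meier([3, 5, 5, 7, 8, 10], [1, 1, 0, 1, 0, 1])
```

    A group with a hazard ratio of 0.33 for discharge would show a survival curve (probability of still being hospitalized) that drops roughly three times more slowly than the reference group's.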

  9. DNA and RNA profiling of excavated human remains with varying postmortem intervals.

    PubMed

    van den Berge, M; Wiskerke, D; Gerretsen, R R R; Tabak, J; Sijen, T

    2016-11-01

    When postmortem intervals (PMIs) increase, such as with longer burial times, human remains suffer increasingly from the taphonomic effects of decomposition processes such as autolysis and putrefaction. In this study, various DNA analysis techniques and a messenger RNA (mRNA) profiling method were applied to examine trends in nucleic acid degradation with the postmortem interval. The DNA analysis techniques include highly sensitive DNA quantitation (with and without a degradation index), standard and low-template STR profiling, insertion and null alleles (INNUL) of retrotransposable elements typing, and mitochondrial DNA profiling. The mRNA profiling system used targets genes with tissue-specific expression for seven human organs, as reported by Lindenbergh et al. (Int J Legal Med 127:891-900, 27), and has been applied to forensic evidentiary traces but not to excavated tissues. The techniques were applied to a total of 81 brain, lung, liver, skeletal muscle, heart, kidney and skin samples obtained from 19 excavated graves with burial times ranging from 4 to 42 years. Results show that brain and heart are the organs in which both DNA and RNA remain remarkably stable, notwithstanding long PMIs. The other organ tissues either show poor overall profiling results or vary in DNA and RNA profiling success, with sometimes DNA and other times RNA profiling being more successful. No straightforward relation was observed between nucleic acid profiling results and the PMI. This study shows that not only DNA but also RNA molecules can be remarkably stable and used for profiling of long-buried human remains, which corroborates forensic applications. The insight that brain and heart tissues tend to provide the best profiling results may change sampling policies in identification cases of degrading cadavers.

  10. The effect of the interval-between-sessions on prefrontal transcranial direct current stimulation (tDCS) on cognitive outcomes: a systematic review and meta-analysis.

    PubMed

    Dedoncker, Josefien; Brunoni, Andre R; Baeken, Chris; Vanderhasselt, Marie-Anne

    2016-10-01

    Recently, there has been wide interest in the effects of transcranial direct current stimulation (tDCS) of the dorsolateral prefrontal cortex (DLPFC) on cognitive functioning. However, many methodological questions remain unanswered. One of them is whether the time interval between active and sham-controlled stimulation sessions, i.e. the interval between sessions (IBS), influences DLPFC tDCS effects on cognitive functioning. Therefore, a systematic review and meta-analysis was performed of experimental studies published in PubMed, Science Direct, and other databases from the first data available to February 2016. Single session sham-controlled within-subject studies reporting the effects of tDCS of the DLPFC on cognitive functioning in healthy controls and neuropsychiatric patients were included. Cognitive tasks were categorized in tasks assessing memory, attention, and executive functioning. Evaluation of 188 trials showed that anodal vs. sham tDCS significantly decreased response times and increased accuracy, and specifically for the executive functioning tasks, in a sample of healthy participants and neuropsychiatric patients (although a slightly different pattern of improvement was found in analyses for both samples separately). The effects of cathodal vs. sham tDCS (45 trials), on the other hand, were not significant. IBS ranged from less than 1 h to up to 1 week (i.e. cathodal tDCS) or 2 weeks (i.e. anodal tDCS). This IBS length had no influence on the estimated effect size when performing a meta-regression of IBS on reaction time and accuracy outcomes in all three cognitive categories, both for anodal and cathodal stimulation. Practical recommendations and limitations of the study are further discussed.
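
    Pooled effects in such a meta-analysis come from inverse-variance weighting of the per-study effect sizes. A minimal fixed-effect sketch (the review itself would likely use random-effects models and meta-regression; the numbers below are hypothetical):

```python
def pooled_effect(effects, ses):
    """Fixed-effect inverse-variance pooled effect size and its
    standard error: weight each study by 1/SE^2."""
    weights = [1.0 / (se * se) for se in ses]
    wsum = sum(weights)
    est = sum(w * e for w, e in zip(weights, effects)) / wsum
    return est, wsum ** -0.5

# Two hypothetical within-subject effect sizes with equal precision:
est, se = pooled_effect([0.2, 0.4], [0.1, 0.1])
```

    With equal precision the pooled estimate is simply the mean of the two effects, and the pooled standard error shrinks by a factor of √2; unequal precision pulls the estimate toward the better-measured study.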

  11. 40Ar/39Ar age constraints on Neogene sedimentary beds, Upper Ramparts, Half-way Pillar and Canyon Village sites, Porcupine River, east-central Alaska

    USGS Publications Warehouse

    Kunk, Michael J.; Rieck, H.; Fouch, T.D.; Carter, L.D.

    1994-01-01

    40Ar/39Ar ages of volcanic rocks are used to provide numerical constraints on the age of middle and upper Miocene sedimentary strata collected along the Porcupine River. Intercalated sedimentary rocks north of latitude 67°10′N in the Porcupine terrane of east-central Alaska contain a rich record of plant fossils. The fossils are valuable indicators of this interior region's paleoclimate during the time of their deposition. Integration of the 40Ar/39Ar results with paleomagnetic and sedimentological data allows for refinements in estimating the timing of deposition and duration of selected sedimentary intervals. 40Ar/39Ar plateau age spectra, from whole-rock basalt samples collected along the Upper Ramparts and near Half-way Pillar on the Porcupine River, range from 15.7 ± 0.1 Ma at site 90-6 to 14.4 ± 0.1 Ma at site 90-2. With the exception of the youngest basalt flow at site 90-2, all of the samples are of reversed magnetic polarity, and all 40Ar/39Ar age spectrum results are consistent with the deposition of the entire stratigraphic section during a single interval of reversed magnetic polarity. The youngest flow at site 90-2 was emplaced during an interval of normal polarity. With the age, paleomagnetic, and sedimentological data, the Middle Miocene sedimentary rocks between the basalt flows at sites 90-1 and 90-2 can be assigned to an interval within the limits of analytical precision of 15.2 ± 0.1 Ma; thus, the sediments were deposited during the peak of the Middle Miocene thermal maximum. Sediments in the upper parts of sites 90-1 and 90-2 were probably deposited during cooling from the Middle Miocene thermal maximum. 40Ar/39Ar results of plagioclase and biotite from a single tephra, collected at sites 90-7 and 90-8 along the Canyon Village section of the Porcupine River, indicate an age of 6.57 ± 0.02 Ma for its time of eruption and deposition. 
These results, together with sedimentological and paleomagnetic data, suggest that all of the Upper Miocene lacustrine sedimentary rocks at these sites were deposited during a single interval of reversed magnetic polarity and may represent a duration of only about 40,000 years. The age of this tephra corresponds with a late late Miocene warm climatic interval. The results from the Upper Ramparts and Half-way Pillar sites are used to estimate a minimum interval of continental flood basalt activity of 1.1-1.5 million years, and to set limits for the timing and duration of Tertiary extensional tectonic activity in the Porcupine terrane. Our data indicate that the oroclinal flexure that formed before the deposition of the basalts at the eastern end of the Brooks Range was created prior to 15.7 ± 0.1 Ma. © 1994.

  12. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    PubMed Central

    Albers, D. J.; Hripcsak, George

    2012-01-01

    A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database. PMID:22536009
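
    The bias-estimation idea above can be sketched numerically: compute a histogram estimate of the time-delayed mutual information, and take its value at a lag far beyond the system's memory as the bias. The AR(1) surrogate series, bin count, and lags below are illustrative assumptions, not the paper's data or estimator.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in histogram estimate of mutual information (nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

def delayed_mi(series, lag, bins=16):
    """Time-delayed mutual information I(x_t; x_{t+lag})."""
    return mutual_information(series[:-lag], series[lag:], bins=bins)

# AR(1) surrogate: correlation decays with lag, so a very large lag separates
# the two distributions by (effectively) infinite time, exposing the bias.
rng = np.random.default_rng(0)
n = 20000
x = np.zeros(n)
for i in range(1, n):
    x[i] = 0.9 * x[i - 1] + rng.normal()

bias = delayed_mi(x, lag=n // 2)   # ~ pure estimator bias
signal = delayed_mi(x, lag=5)      # real dependence plus bias
corrected = signal - bias
```

    The corrected value approximates the true time-delayed dependence; real applications would use the glucose or Lorenz data discussed in the paper.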

  13. Effect of supplemental oxygen on post-exercise inflammatory response and oxidative stress.

    PubMed

    White, Jodii; Dawson, Brian; Landers, Grant; Croft, Kevin; Peeling, Peter

    2013-04-01

    This investigation explored the influence of supplemental oxygen administered during the recovery periods of an interval-based running session on the post-exercise markers of reactive oxygen species (ROS) and inflammation. Ten well-trained male endurance athletes completed two sessions of 10 × 3 min running intervals at 85 % of the maximal oxygen consumption velocity (vVO2peak) on a motorised treadmill. A 90-s recovery period was given between each interval, during which time the participants were administered either a hyperoxic (HYP; fraction of inspired oxygen (FIO2) 99.5 %) or normoxic (NORM; FIO2 21 %) gas, in a randomized, single-blind fashion. Pulse oximetry (SpO2), heart rate (HR), blood lactate (BLa), perceived exertion (RPE), and perceived recovery (TQRper) were recorded during each trial. Venous blood samples were taken pre-exercise, post-exercise and 1 h post-exercise to measure interleukin-6 (IL-6) and isoprostanes (F2-IsoP). The SpO2 was significantly lower than baseline following all interval repetitions in both experimental trials (p < 0.05). The SpO2 recovery time was significantly quicker in the HYP when compared to the NORM (p < 0.05), with a trend for improved perceptual recovery. The IL-6 and F2-IsoP were significantly elevated immediately post-exercise, but had significantly decreased by 1 h post-exercise in both trials (p < 0.05). There were no differences in IL-6 or F2-IsoP levels between trials. Supplemental oxygen provided during the recovery periods of interval-based exercise improves the recovery time of SpO2 but has no effect on post-exercise ROS or inflammatory responses.

  14. Individualized Infliximab Treatment Guided by Patient-managed eHealth in Children and Adolescents with Inflammatory Bowel Disease.

    PubMed

    Carlsen, Katrine; Houen, Gunnar; Jakobsen, Christian; Kallemose, Thomas; Paerregaard, Anders; Riis, Lene B; Munkholm, Pia; Wewer, Vibeke

    2017-09-01

    To individualize timing of infliximab (IFX) treatment in children and adolescents with inflammatory bowel disease (IBD) using a patient-managed eHealth program. Patients with IBD, 10 to 17 years old, treated with IFX were prospectively included. Starting 4 weeks after their last infusion, patients reported a weekly symptom score and provided a stool sample for fecal calprotectin analysis. Based on symptom scores and fecal calprotectin results, the eHealth program calculated a total inflammation burden score that determined the timing of the next IFX infusion (4-12 wk after the previous infusion). Quality of Life was scored by IMPACT III. A control group was included to compare trough levels of IFX antibodies and concentrations and treatment intervals. Patients and their parents evaluated the eHealth program. There were 29 patients with IBD in the eHealth group and 21 patients with IBD in the control group. During the control period, 94 infusions were provided in the eHealth group (mean interval 9.5 wk; SD 2.3) versus 105 infusions in the control group (mean interval 6.9 wk; SD 1.4). Treatment intervals were longer in the eHealth group (P < 0.001). Quality of Life did not change during the study. Appearance of IFX antibodies did not differ between the 2 groups. Eighty percent of patients reported increased disease control and 63% (86% of parents) reported an improved knowledge of the disease. Self-managed, eHealth-individualized timing of IFX treatments, with treatment intervals of 4 to 12 weeks, was accompanied by no significant development of IFX antibodies. Patients reported better control and improved knowledge of their IBD.

  15. Psychophysics of remembering.

    PubMed Central

    White, K G; Wixted, J T

    1999-01-01

    We present a new model of remembering in the context of conditional discrimination. For procedures such as delayed matching to sample, the effect of the sample stimuli at the time of remembering is represented by a pair of Thurstonian (normal) distributions of effective stimulus values. The critical assumption of the model is that, based on prior experience, each effective stimulus value is associated with a ratio of reinforcers obtained for previous correct choices of the comparison stimuli. That ratio determines the choice that is made on the basis of the matching law. The standard deviations of the distributions are assumed to increase with increasing retention-interval duration, and the distance between their means is assumed to be a function of other factors that influence overall difficulty of the discrimination. It is a behavioral model in that choice is determined by its reinforcement history. The model predicts that the biasing effects of the reinforcer differential increase with decreasing discriminability and with increasing retention-interval duration. Data from several conditions using a delayed matching-to-sample procedure with pigeons support the predictions. PMID:10028693
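
    A minimal numerical sketch of this class of model (all parameter values assumed for illustration): effective stimulus values after samples S1 and S2 are drawn from two normal distributions, choice of a comparison follows the matching law applied to the local reinforcer ratio, and widening the distributions (a longer retention interval) amplifies the reinforcer-ratio bias.

```python
import numpy as np

def norm_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def accuracy_per_sample(sigma, r1, r2, d=1.0):
    """P(correct | S1) and P(correct | S2) when effective stimulus values are
    N(+d/2, sigma) after sample S1 and N(-d/2, sigma) after S2, and the choice
    of comparison B1 follows the matching law on the local reinforcer ratio."""
    x = np.linspace(-10, 10, 20001)
    dx = x[1] - x[0]
    f1, f2 = norm_pdf(x, +d / 2, sigma), norm_pdf(x, -d / 2, sigma)
    p_b1 = r1 * f1 / (r1 * f1 + r2 * f2)      # matching-law choice rule
    return np.sum(f1 * p_b1) * dx, np.sum(f2 * (1 - p_b1)) * dx

# A 3:1 reinforcer ratio biases choice toward B1; the bias grows as the
# retention interval lengthens (modeled here as a larger sigma).
short = accuracy_per_sample(sigma=0.5, r1=3.0, r2=1.0)
long_ = accuracy_per_sample(sigma=2.0, r1=3.0, r2=1.0)
```

    As the model predicts, the gap between accuracy on S1 and S2 trials (the reinforcer bias) is larger at the longer retention interval.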

  16. Astrochronology of the Pliensbachian-Toarcian transition in the Foum Tillicht section (central High Atlas, Morocco)

    NASA Astrophysics Data System (ADS)

    Martinez, Mathieu; Bodin, Stéphane; Krencker, François-Nicolas

    2015-04-01

    The Pliensbachian and Toarcian stages (Early Jurassic) are marked by a series of carbon cycle disturbances, major climatic changes and severe faunal turnovers. An accurate knowledge of the timing of the Pliensbachian-Toarcian age is a key for quantifying fluxes and rhythms of faunal and geochemical processes during these major environmental perturbations. Although many studies provided astrochronological frameworks of the Toarcian Stage and the Toarcian oceanic anoxic event, no precise time frame exists for the Pliensbachian-Toarcian transition, often condensed in the previously studied sections. Here, we provide an astrochronology of the Pliensbachian-Toarcian transition in the Foum Tillicht section (central High Atlas, Morocco). The section is composed of decimetric hemipelagic marl-limestone alternations accompanied by cyclic fluctuations in the δ13Cmicrite. In this section, the marl-limestone alternations reflect cyclic sea-level/climatic changes, which trigger rhythmic migrations of the surrounding carbonate platforms and modulate the amount of carbonate exported to the basin. The studied interval encompasses 142.15 m of the section, from the base of the series to a hiatus in the Early Toarcian, marked by an erosional surface. The Pliensbachian-Toarcian (P-To) Event, a negative excursion in carbonate δ13Cmicrite, is observed pro parte in this studied interval. δ13Cmicrite measurements were performed every ~2 m at the base of the section and every 0.20 m within the P-To Event interval. Spectral analyses were performed using the multi-taper method and the evolutive Fast Fourier Transform to obtain an accurate assessment of the main significant periods and their evolution throughout the studied interval. Two main cycles are observed in the series: the 405-kyr eccentricity cycle is observed throughout the series, while the obliquity cycle is observed within the P-To Event, in the most densely sampled interval. The studied interval covers a 3.6-Myr interval. 
The duration of the part of the P-To Event covered by this analysis is assessed at 0.70 Myr. In addition, the interval from the base of the Toarcian to the first occurrence of the calcareous nannofossil C. superbus has a duration assessed at 0.47 to 0.55 Myr. This duration is significantly longer than most assessments obtained in previous cyclostratigraphic analyses, showing that earlier studies underestimated the duration of this interval, which is often condensed in the Western Tethys. This study shows the potential of the Foum Tillicht section to provide a refined time frame of the Pliensbachian-Toarcian boundary, which could be integrated into the next Geological Time Scale.
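
    The logic of anchoring a sedimentary cycle to the 405-kyr eccentricity period can be sketched as follows. A plain FFT periodogram stands in for the multi-taper analysis, and the 16-m cycle wavelength, sample spacing, and noise level are invented for illustration (not the section's measured values).

```python
import numpy as np

dx = 0.2                                    # sample spacing (m)
depth = np.arange(0.0, 142.0, dx)
rng = np.random.default_rng(1)
# Invented series: one sedimentary cycle every 16 m plus noise.
series = np.sin(2 * np.pi * depth / 16.0) + 0.3 * rng.normal(size=depth.size)

power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(series.size, d=dx)  # cycles per metre
peak = freqs[np.argmax(power[1:]) + 1]      # skip the zero-frequency bin
wavelength_m = 1.0 / peak                   # recovered cycle thickness, ~16 m

# Anchoring that wavelength to the 405-kyr eccentricity cycle yields a
# sedimentation rate and hence a duration for the 142-m interval:
sed_rate_m_per_kyr = wavelength_m / 405.0
duration_kyr = 142.0 / sed_rate_m_per_kyr
```

    With a ~16-m eccentricity cycle, the 142-m interval spans roughly 3.6 Myr, matching the order of magnitude reported in the abstract.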

  17. In Vitro UV-Visible Spectroscopy Study of Yellow Laser Irradiation on Human Blood

    NASA Astrophysics Data System (ADS)

    Fuad, Siti Sakinah Mohd; Suardi, N.; Mustafa, I. S.

    2018-04-01

    This experimental study was performed to investigate the effect of a low-level yellow laser of 589 nm wavelength at various laser irradiation times. Human blood samples with random diseases were irradiated with the yellow laser at a power density of 450 mW/cm2 from 10 minutes to 60 minutes at 10-minute intervals. The morphology of the red blood cells was also observed for the different irradiation times. The results show a significant difference in the absorption of light with varying laser irradiation time (p < 0.01). The maximum absorption was recorded at 40 minutes of irradiation at the 340 nm peak. Blood smears of the samples reveal observable changes in the morphology of the red blood cells at 40 minutes and 60 minutes of irradiation.

  18. Depth-dependent groundwater quality sampling at City of Tallahassee test well 32, Leon County, Florida, 2013

    USGS Publications Warehouse

    McBride, W. Scott; Wacker, Michael A.

    2015-01-01

    A test well was drilled by the City of Tallahassee to assess the suitability of the site for the installation of a new well for public water supply. The test well is in Leon County in north-central Florida. The U.S. Geological Survey delineated high-permeability zones in the Upper Floridan aquifer, using borehole-geophysical data collected from the open interval of the test well. A composite water sample was collected from the open interval during high-flow conditions, and three discrete water samples were collected from specified depth intervals within the test well during low-flow conditions. Water-quality, source tracer, and age-dating results indicate that the open interval of the test well produces water of consistently high quality throughout its length. The cavernous nature of the open interval makes it likely that the highly permeable zones are interconnected in the aquifer by secondary porosity features.

  19. Monitoring Progress in Vocal Development in Young Cochlear Implant Recipients: Relationships between Speech Samples and Scores from the Conditioned Assessment of Speech Production (CASP)

    PubMed Central

    Ertmer, David J.; Jung, Jongmin

    2012-01-01

    Background Evidence of auditory-guided speech development can be heard as the prelinguistic vocalizations of young cochlear implant recipients become increasingly complex, phonetically diverse, and speech-like. In research settings, these changes are most often documented by collecting and analyzing speech samples. Sampling, however, may be too time-consuming and impractical for widespread use in clinical settings. The Conditioned Assessment of Speech Production (CASP; Ertmer & Stoel-Gammon, 2008) is an easily administered and time-efficient alternative to speech sample analysis. The current investigation examined the concurrent validity of the CASP and data obtained from speech samples recorded at the same intervals. Methods Nineteen deaf children who received CIs before their third birthdays participated in the study. Speech samples and CASP scores were gathered at 6, 12, 18, and 24 months post-activation. Correlation analyses were conducted to assess the concurrent validity of CASP scores and data from samples. Results CASP scores showed strong concurrent validity with scores from speech samples gathered across all recording sessions (6–24 months). Conclusions The CASP was found to be a valid, reliable, and time-efficient tool for assessing progress in vocal development during young CI recipients’ first 2 years of device experience. PMID:22628109

  20. Experiential sampling in the study of multiple personality disorder.

    PubMed

    Loewenstein, R J; Hamilton, J; Alagna, S; Reid, N; deVries, M

    1987-01-01

    The authors describe the application of experiential sampling, a new time-sampling method, to the assessment of rapid state changes in a woman with multiple personality disorder. She was signaled at random intervals during study periods and asked to provide information on alternate personality switches, amnesia, and mood state. The alternates displayed some characteristics that were as different as those occurring between separate individuals studied previously with this method. There were notable discrepancies between the self-report study data and information reported during therapy hours. The authors conclude that the phenomenology of multiple personality disorder is frequently more complex than is suspected early in the course of treatment.

  1. Salinity of ground water at sampling wells located in southeastern Nassau County, Long Island, New York

    USGS Publications Warehouse

    Lusczynski, Norbert J.

    1950-01-01

    In 1939, a special program for the systematic collection of chloride data in southeastern Nassau County was inaugurated in which three agencies participated. The Nassau County Department of Public Works constructed the sampling wells, and the Ground Water Branch of the U.S. Geological Survey began to collect water samples at periodic intervals, which were analysed at the Mount Prospect Laboratory of the New York Department of Water Supply, Gas and Electricity. The Nassau County Department of Public Works and the U.S. Geological Survey have continued financial cooperation for the maintenance of this program up to the present time.

  2. The Impact of Surgical Timing in Acute Traumatic Spinal Cord Injury

    DTIC Science & Technology

    2016-10-01

    However, their study was conducted with a small sample of 63 patients and only cervical T-SCI, and did not account for other possible factors that... contributors are the time of transfer from the site of trauma to the SCI center, the interval between the first medical assessment and surgical plan...requiring surgery will depend upon the availability of the operating rooms and of the spine surgeons, considering the high number of elective cases

  3. Water-quality response to a high-elevation wildfire in the Colorado Front Range

    USGS Publications Warehouse

    Mast, M. Alisa; Murphy, Sheila F.; Clow, David W.; Penn, Colin A.; Sexstone, Graham A.

    2016-01-01

    Water quality of the Big Thompson River in the Front Range of Colorado was studied for 2 years following a high-elevation wildfire that started in October 2012 and burned 15% of the watershed. A combination of fixed-interval sampling and continuous water-quality monitors was used to examine the timing and magnitude of water-quality changes caused by the wildfire. Prefire water quality was well characterized because the site has been monitored at least monthly since the early 2000s. Major ions and nitrate showed the largest changes in concentrations; major ion increases were greatest in the first postfire snowmelt period, but nitrate increases were greatest in the second snowmelt period. The delay in nitrate release until the second snowmelt season likely reflected a combination of factors including fire timing, hydrologic regime, and rates of nitrogen transformations. Despite the small size of the fire, annual yields of dissolved constituents from the watershed increased 20–52% in the first 2 years following the fire. Turbidity data from the continuous sensor indicated high-intensity summer rain storms had a much greater effect on sediment transport compared to snowmelt. High-frequency sensor data also revealed that weekly sampling missed the concentration peak during snowmelt and short-duration spikes during rain events, underscoring the challenge of characterizing postfire water-quality response with fixed-interval sampling.

  4. Effects of hemolysis and lipemia interference on kaolin-activated thromboelastography, and comparison with conventional coagulation tests.

    PubMed

    Tang, Ning; Jin, Xi; Sun, Ziyong; Jian, Cui

    2017-04-01

    The effects of hemolysis and lipemia on thromboelastography (TEG) analysis have been scarcely evaluated in human samples, and neglected in clinical practice. We aimed to investigate the effects of in vitro mechanical hemolysis and lipemia on TEG analysis and conventional coagulation tests. Twenty-four healthy volunteers were enrolled in the study. Besides the controls, three groups with slight, moderate and severe mechanical hemolysis were constituted according to free hemoglobin (Hb) concentrations of 0.5-1.0, 2.0-6.0 and 7.0-13.0 g/L, respectively; and three groups with mild, moderate and high lipemia were established according to triglyceride concentrations of ∼6.0, ∼12.0, and ∼18.0 mmol/L, respectively. Four TEG parameters, reaction time (R), coagulation time (K), angle (α), and maximum amplitude (MA), were measured alongside conventional plasma tests including prothrombin time (PT), activated partial thromboplastin time (APTT) and fibrinogen (FIB) by mechanical method, and platelet count by optical method. Results showed that the median R and MA values at moderate and severe hemolysis and K at severe hemolysis exceeded respective reference intervals, and were considered unacceptable. Median values of TEG parameters in lipemic samples were all within reference intervals. Bias values of conventional plasma tests PT, APTT and FIB in hemolyzed or lipemic samples were all lower than the Clinical Laboratory Improvement Amendments (CLIA) allowable limits. Bias values of platelet count at moderate to severe hemolysis and lipemia exceeded the CLIA allowable limits. In conclusion, the detection of TEG was in general more affected by mechanical hemolysis than plasma coagulation tests. Pre-analytical variables should be taken into account when unexpected TEG results are obtained.

  5. Autoverification process improvement by Six Sigma approach: Clinical chemistry & immunoassay.

    PubMed

    Randell, Edward W; Short, Garry; Lee, Natasha; Beresford, Allison; Spencer, Margaret; Kennell, Marina; Moores, Zoë; Parry, David

    2018-05-01

    This study examines the effectiveness of a project to enhance an autoverification (AV) system through application of Six Sigma (DMAIC) process improvement strategies. Similar AV systems set up at three sites underwent examination and modification to produce improved systems while monitoring the proportions of samples autoverified, the time required for manual review and verification, sample processing time, and the characteristics of tests not autoverified. This information was used to identify areas for improvement and monitor the impact of changes. Use of reference-range-based criteria had the greatest impact on the proportion of tests autoverified. To improve the AV process, reference-range-based criteria were replaced with extreme-value limits based on a 99.5% test result interval, delta check criteria were broadened, and new specimen consistency rules were implemented. Decision guidance tools were also developed to assist staff using the AV system. The mean proportion of tests and samples autoverified improved from <62% for samples and <80% for tests to >90% for samples and >95% for tests across all three sites. The new AV system significantly decreased turn-around time and total sample review time (to about a third); however, time spent on manual review of held samples almost tripled. There was no evidence of compromise to the quality of the testing process, and <1% of samples held for exceeding delta check or extreme limits required corrective action. The Six Sigma (DMAIC) process improvement methodology was successfully applied to AV systems, resulting in an increase in overall test and sample AV to >90%, improved turn-around time, and reduced time for manual verification, with no obvious compromise to quality or error detection. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
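
    A toy sketch of the rule structure described above (extreme-value limits plus a delta check). The analyte, limits, and thresholds are hypothetical, not the laboratory's validated criteria:

```python
# Assumed analyte, limits, and delta threshold; a real system derives these
# from the lab's own 99.5% result intervals and validated delta-check rules.
EXTREME_LIMITS = {"sodium": (115.0, 160.0)}   # mmol/L
DELTA_LIMITS = {"sodium": 10.0}               # max |change| vs. previous result

def autoverify(test, value, previous=None):
    """Return (verified, reason); hold the result for manual review when it
    exceeds extreme-value limits or fails the delta check."""
    lo, hi = EXTREME_LIMITS[test]
    if not lo <= value <= hi:
        return False, "extreme value"
    if previous is not None and abs(value - previous) > DELTA_LIMITS[test]:
        return False, "delta check"
    return True, "autoverified"
```

    Results inside the extreme limits and consistent with the patient's previous value pass without manual review; everything else is held, mirroring the <1% corrective-action rate reported.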

  6. Ensemble-Based Source Apportionment of Fine Particulate Matter and Emergency Department Visits for Pediatric Asthma

    PubMed Central

    Gass, Katherine; Balachandran, Sivaraman; Chang, Howard H.; Russell, Armistead G.; Strickland, Matthew J.

    2015-01-01

    Epidemiologic studies utilizing source apportionment (SA) of fine particulate matter have shown that particles from certain sources might be more detrimental to health than others; however, it is difficult to quantify the uncertainty associated with a given SA approach. In the present study, we examined associations between source contributions of fine particulate matter and emergency department visits for pediatric asthma in Atlanta, Georgia (2002–2010) using a novel ensemble-based SA technique. Six daily source contributions from 4 SA approaches were combined into an ensemble source contribution. To better account for exposure uncertainty, 10 source profiles were sampled from their posterior distributions, resulting in 10 time series with daily SA concentrations. For each of these time series, Poisson generalized linear models with varying lag structures were used to estimate the health associations for the 6 sources. The rate ratios for the source-specific health associations from the 10 imputed source contribution time series were combined, resulting in health associations with inflated confidence intervals to better account for exposure uncertainty. Adverse associations with pediatric asthma were observed for 8-day exposure to particles generated from diesel-fueled vehicles (rate ratio = 1.06, 95% confidence interval: 1.01, 1.10) and gasoline-fueled vehicles (rate ratio = 1.10, 95% confidence interval: 1.04, 1.17). PMID:25776011
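
    One standard way to combine estimates across multiple imputed exposure series, so that the pooled confidence interval is inflated for between-imputation uncertainty, is Rubin's rules; whether the authors used exactly this pooling is not stated, and the log rate ratios and standard errors below are invented for illustration.

```python
import numpy as np

def pool_imputations(log_rr, se):
    """Rubin's rules: pool estimates over m imputed exposure series, adding
    between-imputation variance so the interval reflects exposure uncertainty."""
    m = len(log_rr)
    qbar = float(np.mean(log_rr))
    within = float(np.mean(np.square(se)))
    between = float(np.var(log_rr, ddof=1))
    return qbar, float(np.sqrt(within + (1.0 + 1.0 / m) * between))

# Invented log rate ratios and SEs from 10 source-contribution time series:
rng = np.random.default_rng(3)
log_rr = rng.normal(np.log(1.06), 0.01, 10)
se = np.full(10, 0.02)

est, pooled_se = pool_imputations(log_rr, se)
rr = np.exp(est)
ci = (np.exp(est - 1.96 * pooled_se), np.exp(est + 1.96 * pooled_se))
```

    The pooled standard error always exceeds the per-series standard error, which is what widens the confidence intervals relative to treating any single source-contribution series as known.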

  7. Effect of addition of lycopene to calcium hydroxide and chlorhexidine as intracanal medicament on fracture resistance of radicular dentin at two different time intervals: An in vitro study.

    PubMed

    Madhusudhana, Koppolu; Archanagupta, Kasamsetty; Suneelkumar, Chinni; Lavanya, Anumula; Deepthi, Mandava

    2015-01-01

    Long-term use of intracanal medicaments such as calcium hydroxide (CH) reduces the fracture resistance of dentin. The present study was undertaken to evaluate the fracture resistance of radicular dentin on long-term use of CH and chlorhexidine (CHX) with lycopene (LP). To compare the fracture resistance of radicular dentin when intracanal medicaments such as CH and CHX with LP were used for 1-week and 1-month time intervals. Sixty single-rooted extracted human permanent premolars were collected, and complete instrumentation was done. Samples were divided into three groups based on the intracanal medicament used: group 1 - no medicament placed (CON); group 2 - a mixture of 1.5 g of CH and 1 ml of 2% CHX (CHCHX); group 3 - a mixture of 1.5 g of CH, 1 ml of CHX, and 1 ml of 5% LP solution (CHCHXLP). After each group's storage period of 1 week or 1 month, the middle 8 mm root cylinder was sectioned and tested for fracture resistance. Results were analyzed using a paired t-test. At the 1-month time interval, there was a statistically significant difference in fracture resistance between the CHCHX and CHCHXLP groups. The addition of LP did not decrease the fracture resistance of radicular dentin after 1 month.

  8. The matching law in and within groups of rats

    PubMed Central

    Graft, D. A.; Lea, S. E. G.; Whitworth, T. L.

    1977-01-01

    In each of the two experiments, a group of five rats lived in a complex maze containing four small single-lever operant chambers. In two of these chambers, food was available on variable-interval schedules of reinforcement. In Experiment I, nine combinations of variable intervals were used, and the aggregate lever-pressing rates (by the five rats together) were studied. The log ratio of the rates in the two chambers was linearly related to the log ratio of the reinforcement rates in them; this is an instance of Herrnstein's matching law, as generalized by Baum. Summing over the two food chambers, food consumption decreased, and response output increased, as the time required to earn each pellet increased. In Experiment II, the behavior of individual rats was observed by time-sampling on selected days, while different variable-interval schedules were arranged in the two chambers where food was available. Individual lever-pressing rates for the rats were obtained, and their median bore the same “matching” relationship to the reinforcement rates as the group aggregate in Experiment I. There were differences between the rats in their distribution of time and responses between the two food chambers; these differences were correlated with differences in the proportions of reinforcements the rats obtained from each chamber. PMID:16811975
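
    The linear relationship described, Baum's generalized matching law log(B1/B2) = a·log(R1/R2) + log c, can be fitted as a straight line in log-log coordinates. The response and reinforcement ratios below are synthetic, not the 1977 data:

```python
import numpy as np

# Synthetic aggregate data for nine variable-interval combinations:
# log(B1/B2) = a * log(R1/R2) + log c, with slight undermatching (a < 1).
reinf_ratio = np.array([0.125, 0.25, 0.5, 0.8, 1.0, 1.25, 2.0, 4.0, 8.0])
rng = np.random.default_rng(2)
log_r = np.log(reinf_ratio)
log_b = 0.9 * log_r + 0.05 + rng.normal(0.0, 0.05, log_r.size)

# Fit the generalized matching law as a line in log-log coordinates.
a, log_c = np.polyfit(log_r, log_b, 1)   # sensitivity a, bias log c
```

    A slope a near 1 with negligible log c is strict matching; a < 1 (undermatching) and a nonzero intercept (bias) are the usual departures.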

  9. An examination of wilderness first aid knowledge, self-efficacy, and skill retention.

    PubMed

    Schumann, Scott A; Schimelpfenig, Tod; Sibthorp, Jim; Collins, Rachel H

    2012-09-01

    The purpose of this study was to examine the retention of wilderness first aid (WFA) knowledge, self-efficacy beliefs, and skills over time in a sample of WFA course participants. Seventy-two open enrollment (volunteer) WFA course participants were assessed at 4 months, 8 months, or 12 months after training. Changes in WFA knowledge and self-efficacy were assessed by written instruments after the course and at the follow-up interval (4, 8, or 12 months). The WFA skills were assessed by a scored medical scenario at the follow-up interval. As the time interval increased, WFA knowledge, self-efficacy, and skill proficiency decreased. The WFA knowledge and self-efficacy beliefs were not highly correlated with skill performance. Without additional training, regular use of the course content, or efforts to refresh thinking on key topics, the ability of WFA students to effectively apply their learning will likely decrease as time from training increases. With respect to these WFA courses, student scores on written tests did not accurately reflect competence in performing practical skills related to a medical scenario. In addition, student self-confidence in the ability to perform such skills did not strongly correlate with actual skills and ability. Copyright © 2012 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.

  10. Comment on short-term variation in subjective sleepiness.

    PubMed

    Eriksen, Claire A; Akerstedt, Torbjörn; Kecklund, Göran; Akerstedt, Anna

    2005-12-01

    Subjective sleepiness at different times is often measured in studies on sleep loss, night work, or drug effects. However, the context at the time of rating may influence results. The present study examined sleepiness throughout the day at hourly intervals and, during controlled activities [reading, writing, walking, social interaction (discussion), etc.], at 10-min intervals for 3 hr. This was done on a normal working day preceded by a scheduled early rising (to invite sleepiness) for six subjects. Analysis showed a significant U-shaped pattern across the day with peaks in the early morning and late evening. A walk and social interaction were associated with low sleepiness compared to sedentary and quiet office work. None of this was visible in the hourly ratings. There was also a pronounced afternoon increase in sleepiness that was not observable with hourly ratings. It was concluded that there are large variations in sleepiness related to time of day and also to context, and that sparse sampling of subjective sleepiness may miss much of this variation.

  11. Dating human skeletal remains: investigating the viability of measuring the equilibrium between 210Po and 210Pb as a means of estimating the post-mortem interval.

    PubMed

    Swift, B

    1998-11-30

    Estimating the post-mortem interval in skeletal remains is a notoriously difficult task; forensic pathologists often rely heavily upon experience in recognising morphological appearances. Previous techniques have involved measuring physical or chemical changes within the hydroxyapatite matrix, radiocarbon dating and 90Sr dating, though no individual test has been advocated. Within this paper it is proposed that measuring the equilibrium between two naturally occurring radio-isotopes, 210Po and 210Pb, and comparison with post-mortem examination samples would produce a new method of dating human skeletal remains. Possible limitations exist, notably the effect of diagenesis, time limitations and relative cost, though this technique could provide a relatively accurate means of determining the post-mortem interval. It is therefore proposed that a large study be undertaken to provide a calibration scale against which bones uncovered can be dated.
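
    The proposed dating principle can be sketched with the standard parent-daughter ingrowth equation: skeletal 210Pb decays slowly (22.3-yr half-life) while 210Po (138.4-d half-life) grows toward secular equilibrium, so the 210Po/210Pb activity ratio tracks time since death. The assumption of zero unsupported 210Po at death (r0 = 0) is a simplification for illustration.

```python
import numpy as np

T_PB = 22.3 * 365.25        # 210Pb half-life (days)
T_PO = 138.4                # 210Po half-life (days)
LAM_PB, LAM_PO = np.log(2) / T_PB, np.log(2) / T_PO
K = LAM_PO - LAM_PB
EQ = LAM_PO / K             # secular-equilibrium activity ratio (~1.02)

def po_pb_ratio(t_days, r0=0.0):
    """210Po/210Pb activity ratio t days after death (r0: ratio at death)."""
    return EQ + (r0 - EQ) * np.exp(-K * t_days)

def pmi_days(ratio, r0=0.0):
    """Invert the ingrowth curve to estimate the post-mortem interval."""
    return -np.log((ratio - EQ) / (r0 - EQ)) / K

r_one_year = po_pb_ratio(365.0)   # one year on: still well below equilibrium
```

    Because the ratio approaches equilibrium within a few 210Po half-lives, the method is most sensitive over roughly the first two years, which is consistent with the time limitations the author notes.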

  12. Repeat sample intraocular pressure variance in induced and naturally ocular hypertensive monkeys.

    PubMed

    Dawson, William W; Dawson, Judyth C; Hope, George M; Brooks, Dennis E; Percicot, Christine L

    2005-12-01

    To compare the repeat-sample mean variance of laser-induced ocular hypertension (OH) in rhesus monkeys with the repeat-sample mean variance of natural OH in age-range-matched monkeys of similar and dissimilar pedigrees. Multiple monocular, retrospective, intraocular pressure (IOP) measures were recorded repeatedly during a short sampling interval (SSI, 1-5 months) and a long sampling interval (LSI, 6-36 months). There were 5-13 eyes in each SSI and LSI subgroup. Each interval contained subgroups of Florida monkeys with natural hypertension (NHT), Florida monkeys with induced hypertension (IHT1), unrelated (Strasbourg, France) induced hypertensives (IHT2), and Florida age-range-matched controls (C). Repeat-sample individual variance means and related IOPs were analyzed by a parametric analysis of variance (ANOV) and the results compared to a non-parametric Kruskal-Wallis ANOV. As designed, all group intraocular pressure distributions were significantly different (P < or = 0.009) except for the two (Florida/Strasbourg) induced OH groups. A parametric 2 x 4 design ANOV for mean variance showed large significant effects due to treatment group and sampling interval. Similar results were produced by the nonparametric ANOV. The induced OH sample variance mean (LSI) was 43 times the natural OH sample variance mean; for the SSI the ratio was 12. Laser-induced ocular hypertension in rhesus monkeys produces large IOP repeat-sample variance means compared to controls and natural OH.

  13. Timescale- and Sensory Modality-Dependency of the Central Tendency of Time Perception.

    PubMed

    Murai, Yuki; Yotsumoto, Yuko

    2016-01-01

    When individuals are asked to reproduce stimulus intervals of various durations presented in intermixed order, longer intervals are often underestimated and shorter intervals overestimated. This phenomenon may be attributed to the central tendency of time perception, and suggests that our brain optimally encodes a stimulus interval based on current stimulus input and prior knowledge of the distribution of stimulus intervals. Two distinct systems are thought to be recruited in the perception of sub- and supra-second intervals. Sub-second timing is subject to local sensory processing, whereas supra-second timing depends on more centralized mechanisms. To clarify the factors that influence time perception, the present study investigated how both sensory modality and timescale affect the central tendency. In Experiment 1, participants were asked to reproduce sub- or supra-second intervals, defined by visual or auditory stimuli. In the sub-second range, the magnitude of the central tendency was significantly larger for visual intervals than for auditory intervals, while visual and auditory intervals exhibited a correlated and comparable central tendency in the supra-second range. In Experiment 2, the ability to discriminate sub-second intervals in the reproduction task was controlled across modalities by using an interval discrimination task. Even when the ability to discriminate intervals was controlled, visual intervals exhibited a larger central tendency than auditory intervals in the sub-second range. In addition, the magnitude of the central tendency for visual and auditory sub-second intervals was significantly correlated. These results suggest that a common modality-independent mechanism is responsible for the supra-second central tendency, and that both the modality-dependent and modality-independent components of the timing system contribute to the central tendency in the sub-second range.
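The optimal-encoding account described above is often modeled as simple Gaussian cue combination, in which the weight given to prior knowledge grows with sensory noise. The function below is an illustrative stand-in for that idea, not the authors' model; the names and numeric values are assumptions.

```python
def bayes_interval_estimate(observed, prior_mean, sensory_var, prior_var):
    """Fuse a noisy measurement of an interval with prior knowledge of the
    stimulus distribution; noisier senses weight the prior more heavily,
    producing a stronger central tendency (regression to the mean)."""
    w = prior_var / (prior_var + sensory_var)  # weight on the observation
    return w * observed + (1.0 - w) * prior_mean

# A less precise modality (e.g., vision for sub-second intervals) is pulled
# harder toward the prior mean than a more precise one (e.g., audition).
visual = bayes_interval_estimate(1.0, prior_mean=0.6, sensory_var=0.04, prior_var=0.02)
auditory = bayes_interval_estimate(1.0, prior_mean=0.6, sensory_var=0.01, prior_var=0.02)
```

Under these assumed noise levels, both estimates of a 1.0-s interval are shrunk toward the 0.6-s prior mean, the visual one more strongly.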

  14. Hematology and serum clinical chemistry reference intervals for free-ranging Scandinavian gray wolves (Canis lupus).

    PubMed

    Thoresen, Stein I; Arnemo, Jon M; Liberg, Olof

    2009-06-01

    Scandinavian free-ranging wolves (Canis lupus) are endangered, such that laboratory data with which to assess their health status are increasingly important. Although wolves have been studied for decades, most biological information comes from captive animals. The objective of the present study was to establish reference intervals for 30 clinical chemical and 8 hematologic analytes in Scandinavian free-ranging wolves. All wolves were tracked and chemically immobilized from a helicopter before examination and blood sampling in the winters of 7 consecutive years (1998-2004). Seventy-nine blood samples were collected from 57 gray wolves, including 24 juveniles (24 samples), 17 adult females (25 samples), and 16 adult males (30 samples). Whole blood and serum samples were stored at refrigeration temperature for 1-3 days before hematologic analyses and for 1-5 days before serum biochemical analyses. Reference intervals were calculated as 95% confidence intervals, except for juveniles, where the minimum and maximum values were used. Significant differences were observed between adult and juvenile wolves for RBC parameters, alkaline phosphatase and amylase activities, and total protein, albumin, gamma-globulins, cholesterol, creatinine, calcium, chloride, magnesium, phosphate, and sodium concentrations. Compared with published reference values for captive wolves, reference intervals for free-ranging wolves reflected exercise activity associated with capture (higher creatine kinase activity, higher glucose concentration) and differences in nutritional status (higher urea concentration).

  15. High resolution data acquisition

    DOEpatents

    Thornton, G.W.; Fuller, K.R.

    1993-04-06

    A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided by a clock pulse train and analog circuitry for generating a triangular wave synchronously with the pulse train. The triangular wave has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter counts the clock pulse train during the interval to form a gross event interval time. A computer then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.

  16. High resolution data acquisition

    DOEpatents

    Thornton, Glenn W.; Fuller, Kenneth R.

    1993-01-01

    A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock (38) pulse train (37) and analog circuitry (44) for generating a triangular wave (46) synchronously with the pulse train (37). The triangular wave (46) has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter (18, 32) forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter (26) counts the clock pulse train (37) during the interval to form a gross event interval time. A computer (52) then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
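The combination step described in both patent records can be illustrated numerically: a coarse count of whole clock periods is refined by converting the digitized triangle-wave samples at the start and end of the event into fractions of a period. The ramp model, clock rate, and function names below are illustrative assumptions, not taken from the patent.

```python
CLOCK_PERIOD_NS = 10.0  # hypothetical 100 MHz clock

def fine_fraction(amplitude, slope, peak=1.0):
    """Map a triangle-wave sample (amplitude plus slope sign) to the fraction
    of a clock period elapsed: the wave is assumed to rise 0 -> peak over the
    first half of each period and fall back over the second half."""
    if slope > 0:
        return amplitude / (2.0 * peak)
    return 1.0 - amplitude / (2.0 * peak)

def event_interval_ns(gross_count, amp_start, slope_start, amp_end, slope_end):
    """Combine the gross clock count with the fractional phases sampled at
    the start and end of the event into a high-resolution interval."""
    f_start = fine_fraction(amp_start, slope_start)
    f_end = fine_fraction(amp_end, slope_end)
    return (gross_count + f_end - f_start) * CLOCK_PERIOD_NS
```

For example, an event starting 0.2 of a period into one cycle (rising wave at amplitude 0.4) and ending 0.7 of a period into a cycle 5 counts later (falling wave at amplitude 0.6) yields (5 + 0.7 - 0.2) x 10 ns = 55 ns.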

  17. Interpregnancy interval and risk of autistic disorder.

    PubMed

    Gunnes, Nina; Surén, Pål; Bresnahan, Michaeline; Hornig, Mady; Lie, Kari Kveim; Lipkin, W Ian; Magnus, Per; Nilsen, Roy Miodini; Reichborn-Kjennerud, Ted; Schjølberg, Synnve; Susser, Ezra Saul; Øyen, Anne-Siri; Stoltenberg, Camilla

    2013-11-01

    A recent California study reported increased risk of autistic disorder in children conceived within a year after the birth of a sibling. We assessed the association between interpregnancy interval and risk of autistic disorder using nationwide registry data on pairs of singleton full siblings born in Norway. We defined interpregnancy interval as the time from birth of the first-born child to conception of the second-born child in a sibship. The outcome of interest was autistic disorder in the second-born child. Analyses were restricted to sibships in which the second-born child was born in 1990-2004. Odds ratios (ORs) were estimated by fitting ordinary logistic models and logistic generalized additive models. The study sample included 223,476 singleton full-sibling pairs. In sibships with interpregnancy intervals <9 months, 0.25% of the second-born children had autistic disorder, compared with 0.13% in the reference category (≥ 36 months). For interpregnancy intervals shorter than 9 months, the adjusted OR of autistic disorder in the second-born child was 2.18 (95% confidence interval 1.42-3.26). The risk of autistic disorder in the second-born child was also increased for interpregnancy intervals of 9-11 months in the adjusted analysis (OR = 1.71 [95% CI = 1.07-2.64]). Consistent with a previous report from California, interpregnancy intervals shorter than 1 year were associated with increased risk of autistic disorder in the second-born child. A possible explanation is depletion of micronutrients in mothers with closely spaced pregnancies.

  18. [Evaluation of the principles of distribution of electrocardiographic R-R intervals for elaboration of methods of automated diagnosis of cardiac rhythm disorders].

    PubMed

    Tsukerman, B M; Finkel'shteĭn, I E

    1987-07-01

    A statistical analysis of prolonged ECG records has been carried out in patients with various heart rhythm and conductivity disorders. The distribution of absolute R-R duration values and the relationships between adjacent intervals have been examined. A two-step algorithm has been constructed that excludes anomalous and "suspicious" intervals from a sample of consecutively recorded R-R intervals, until only the intervals between contractions of truly sinus origin remain in the sample. The algorithm has been developed into a programme for the Electronica NC-80 microcomputer. It operates reliably even in cases of complex combined rhythm and conductivity disorders.
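The abstract does not give the exclusion criteria of the two-step algorithm, so the sketch below substitutes a simple iterative median filter to illustrate the general idea of discarding anomalous intervals until only plausibly sinus beats remain; the tolerance value and function name are assumptions.

```python
from statistics import median

def plausibly_sinus(rr_ms, tolerance=0.2):
    """Iteratively drop R-R intervals deviating from the running median by
    more than a fixed fraction (`tolerance`) until the sample stabilizes."""
    kept = list(rr_ms)
    while True:
        m = median(kept)
        filtered = [x for x in kept if abs(x - m) <= tolerance * m]
        if len(filtered) == len(kept):
            return filtered
        kept = filtered

# A premature beat (400 ms) and a post-extrasystolic pause (1200 ms) are
# excluded; the regular ~800 ms sinus intervals remain.
cleaned = plausibly_sinus([800, 810, 790, 400, 805, 1200])
```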

  19. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    PubMed

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. The Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RI. Using nonparametric methods (or alternatively Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
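The core hazard described here, that the Gaussian mean ± 2 SD rule misplaces reference limits when the parent population is really lognormal, can be reproduced in a few lines of simulation. The distribution parameters, seed, and sample size below are illustrative, not those of the study.

```python
import math
import random
import statistics

random.seed(1)
z = 1.959964  # 2.5th/97.5th standard-normal quantile

# True 95% reference limits of a lognormal(mu=0, sigma=0.5) population.
true_low, true_high = math.exp(-z * 0.5), math.exp(z * 0.5)

# A small "reference sample" drawn from that skewed population.
sample = [random.lognormvariate(0.0, 0.5) for _ in range(30)]

# Gaussian-style parametric limits, as if normality had been (falsely) accepted.
m, s = statistics.mean(sample), statistics.stdev(sample)
param_low, param_high = m - 2 * s, m + 2 * s

# After a log (Box-Cox family) transformation, the same parametric rule is
# applied on a scale where the data really are Gaussian, then back-transformed.
logs = [math.log(x) for x in sample]
lm, ls = statistics.mean(logs), statistics.stdev(logs)
trans_low, trans_high = math.exp(lm - 2 * ls), math.exp(lm + 2 * ls)
```

The untransformed lower limit typically falls far below the true 2.5th percentile (often below zero, a physically impossible analyte value), whereas the back-transformed limits remain positive.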

  20. Accuracy in parameter estimation for targeted effects in structural equation modeling: sample size planning for narrow confidence intervals.

    PubMed

    Lai, Keke; Kelley, Ken

    2011-06-01

    In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association
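For the simplest possible case, a single mean with known population SD, the accuracy-in-parameter-estimation logic reduces to solving the expected confidence-interval width for n. The closed form below only illustrates the approach; the SEM procedures implemented in MBESS target model parameters and require iterative computation.

```python
import math

def n_for_ci_width(sigma, desired_width, z=1.959964):
    """Smallest sample size for which the expected 95% CI for a mean,
    of width 2 * z * sigma / sqrt(n), is no wider than desired_width."""
    return math.ceil((2 * z * sigma / desired_width) ** 2)

# Halving the acceptable width roughly quadruples the required n.
n_wide = n_for_ci_width(sigma=1.0, desired_width=0.4)
n_narrow = n_for_ci_width(sigma=1.0, desired_width=0.2)
```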

  1. Comparison of Techniques for Sampling Adult Necrophilous Insects From Pig Carcasses.

    PubMed

    Cruise, Angela; Hatano, Eduardo; Watson, David W; Schal, Coby

    2018-02-06

    Studies of the pre-colonization interval and mechanisms driving necrophilous insect ecological succession depend on effective sampling of adult insects and knowledge of their diel and successional activity patterns. The number of insects trapped, their diversity, and diel periodicity were compared with four sampling methods on neonate pigs. Sampling method, time of day, and decomposition age of the pigs significantly affected the number of insects sampled from pigs. We also found significant interactions of sampling method with decomposition day, and of sampling time with decomposition day. No single method was superior to the other methods during all three decomposition days. Sampling times after noon yielded the largest samples during the first 2 d of decomposition. On day 3 of decomposition, however, all sampling times were equally effective. Therefore, to maximize insect collections from neonate pigs, the method used to sample must vary by decomposition day. The suction trap collected the most species-rich samples, but sticky trap samples were the most diverse when both species richness and evenness were factored into a Shannon diversity index. Repeated sampling during the noon to 18:00 hours period was most effective to obtain the maximum diversity of trapped insects. The integration of multiple sampling techniques would most effectively sample the necrophilous insect community. However, because all four tested methods were deficient at sampling beetle species, future work should focus on optimizing the most promising methods, alone or in combinations, and incorporate hand-collections of beetles. © The Author(s) 2018. Published by Oxford University Press on behalf of Entomological Society of America.

  2. Venetoclax does not prolong the QT interval in patients with hematological malignancies: an exposure-response analysis.

    PubMed

    Freise, Kevin J; Dunbar, Martin; Jones, Aksana K; Hoffman, David; Enschede, Sari L Heitner; Wong, Shekman; Salem, Ahmed Hamed

    2016-10-01

    Venetoclax (ABT-199/GDC-0199) is a selective first-in-class B cell lymphoma-2 inhibitor being developed for the treatment of hematological malignancies. The aim of this study was to determine the potential of venetoclax to prolong the corrected QT (QTc) interval and to evaluate the relationship between systemic venetoclax concentration and QTc interval. The study population included 176 male and female patients with relapsed or refractory chronic lymphocytic leukemia/small lymphocytic lymphoma (n = 105) or non-Hodgkin's lymphoma (n = 71) enrolled in a phase 1 safety, pharmacokinetic, and efficacy study. Electrocardiograms were collected in triplicate at time-matched points (2, 4, 6, and 8 h) prior to the first venetoclax administration and after repeated venetoclax administration to achieve steady state conditions. Venetoclax doses ranged from 100 to 1200 mg daily. Plasma venetoclax samples were collected after steady state electrocardiogram measurements. The mean and upper bound of the 2-sided 90 % confidence interval (CI) QTc change from baseline were <5 and <10 ms, respectively, at all time points and doses (<400, 400, and >400 mg). Three subjects had single QTc values >500 ms and/or ΔQTc > 60 ms. The effect of venetoclax concentration on both ΔQTc and QTc was not statistically significant (P > 0.05). At the mean maximum concentrations achieved with therapeutic (400 mg) and supra-therapeutic (1200 mg) venetoclax doses, the estimated drug effects on QTc were 0.137 (90 % CI [-1.01 to 1.28]) and 0.263 (90 % CI [-1.92 to 2.45]) ms, respectively. Venetoclax does not prolong QTc interval even at supra-therapeutic doses, and there is no relationship between venetoclax concentrations and QTc interval.

  3. Priorities for treatment, care and information if faced with serious illness: a comparative population-based survey in seven European countries.

    PubMed

    Higginson, Irene J; Gomes, Barbara; Calanzani, Natalia; Gao, Wei; Bausewein, Claudia; Daveson, Barbara A; Deliens, Luc; Ferreira, Pedro L; Toscani, Franco; Gysels, Marjolein; Ceulemans, Lucas; Simon, Steffen T; Cohen, Joachim; Harding, Richard

    2014-02-01

    Health-care costs are growing, with little population-based data about people's priorities for end-of-life care, to guide service development and aid discussions. We examined variations in people's priorities for treatment, care and information across seven European countries. Telephone survey of a random sample of households; we asked respondents their priorities if 'faced with a serious illness, like cancer, with limited time to live' and used multivariable logistic regressions to identify associated factors. Members of the general public aged ≥ 16 years residing in England, Flanders, Germany, Italy, the Netherlands, Portugal and Spain. In total, 9344 individuals were interviewed. Most people chose 'improve quality of life for the time they had left', ranging from 57% (95% confidence interval: 55%-60%, Italy) to 81% (95% confidence interval: 79%-83%, Spain). Only 2% (95% confidence interval: 1%-3%, England) to 6% (95% confidence interval: 4%-7%, Flanders) said extending life was most important, and 15% (95% confidence interval: 13%-17%, Spain) to 40% (95% confidence interval: 37%-43%, Italy) said quality and extension were equally important. Prioritising quality of life was associated with higher education in all countries (odds ratio = 1.3 (Flanders) to 7.9 (Italy)), experience of caregiving or bereavement (England, Germany, Portugal), prioritising pain/symptom control over having a positive attitude and preferring death in a hospice/palliative care unit. Those prioritising extending life had the highest home death preference of all groups. Health status did not affect priorities. Across all countries, extending life was prioritised by a minority, regardless of health status. Treatment and care needs to be reoriented with patient education and palliative care becoming mainstream for serious conditions such as cancer.

  4. Plasma creatinine in dogs: intra- and inter-laboratory variation in 10 European veterinary laboratories

    PubMed Central

    2011-01-01

    Background There is substantial variation in reported reference intervals for canine plasma creatinine among veterinary laboratories, thereby influencing the clinical assessment of analytical results. The aims of the study were to determine the inter- and intra-laboratory variation in plasma creatinine among 10 veterinary laboratories, and to compare results from each laboratory with the upper limit of its reference interval. Methods Samples were collected from 10 healthy dogs, 10 dogs with expected intermediate plasma creatinine concentrations, and 10 dogs with azotemia. Overlap was observed for the first two groups. The 30 samples were divided into 3 batches and shipped in random order by postal delivery for plasma creatinine determination. Statistical testing was performed in accordance with ISO standard methodology. Results Inter- and intra-laboratory variation was clinically acceptable, as plasma creatinine values for most samples were usually of the same magnitude. A few extreme outliers caused three laboratories to fail statistical testing for consistency. Laboratory sample means above or below the overall sample mean did not unequivocally reflect high or low reference intervals in that laboratory. Conclusions In spite of close analytical results, further standardization among laboratories is warranted. The discrepant reference intervals seem to largely reflect different populations used in establishing the reference intervals, rather than analytical variation due to different laboratory methods. PMID:21477356

  5. Measuring the EMS patient access time interval and the impact of responding to high-rise buildings.

    PubMed

    Morrison, Laurie J; Angelini, Mark P; Vermeulen, Marian J; Schwartz, Brian

    2005-01-01

    To measure the patient access time interval and characterize its contribution to the total emergency medical services (EMS) response time interval; to compare the patient access time intervals for patients located three or more floors above ground with those less than three floors above or below ground, and specifically in the apartment subgroup; and to identify barriers that significantly impede EMS access to patients in high-rise apartments. An observational study of all patients treated by an emergency medical technician-paramedic (EMT-P) crew was conducted, using a trained independent observer to collect time intervals and identify potential barriers to access. Of 118 observed calls, 25 (21%) originated from patients three or more floors above ground. The overall median and 90th percentile (95% confidence interval) patient access time intervals were 1.61 (1.27, 1.91) and 3.47 (3.08, 4.05) minutes, respectively. The median interval was 2.73 (2.22, 3.03) minutes among calls from patients located three or more stories above ground compared with 1.25 (1.07, 1.55) minutes among those at lower levels. The patient access time interval represented 23.5% of the total EMS response time interval among calls originating less than three floors above or below ground and 32.2% of those located three or more stories above ground. The most frequently encountered barriers to access included security code entry requirements, lack of directional signs, and inability to fit the stretcher into the elevator. The patient access time interval is long and represents a substantial component of the total EMS response time interval, especially among ambulance calls originating three or more floors above ground. A number of barriers appear to contribute to delayed paramedic access.

  6. Temperature-Time Issues in Bioburden Control for Planetary Protection

    NASA Astrophysics Data System (ADS)

    Clark, B.

    Heat energy, administered in the form of an elevated temperature heat soak over a specific interval of time, is a well-known method of inactivating organisms. Sterilization protocols, from commercial pasteurization to laboratory autoclaving, specify both the temperature and the time, as well as water activity, for treatments to achieve either acceptable reduction of bioburden or complete sterilization. In practical applications of planetary protection, whether to reduce spore load in forward or roundtrip contamination, or to exterminate putative organisms in returned samples from planetary bodies suspected of possible life, avoidance of expensive or potentially damaging treatments of hardware (or samples) could be accomplished if reciprocal relationships between time duration and soak temperature could be established. Conservative rules can be developed from consideration of empirical test data, derived relationships, current standards and various theoretical or proven mechanisms for thermal damage to biological systems.

  7. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    PubMed

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which often are impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
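The spread among estimators that the authors describe is easy to see on a single small sample. The 27 evenly spaced "creatinine-like" values below are purely illustrative; real reference data would be skewed, which is exactly when the methods diverge most.

```python
import statistics

# Hypothetical 27-value reference sample (units: micromol/L).
data = list(range(60, 141, 3))  # 60, 63, ..., 138 (n = 27)

# Two of the small-sample estimators compared in the study:
min_max = (min(data), max(data))      # observed extremes as reference limits
m, s = statistics.mean(data), statistics.stdev(data)
mean_2sd = (m - 2 * s, m + 2 * s)     # Gaussian parametric limits

# Even on this well-behaved sample the parametric limits reach beyond every
# observed value, one reason the authors urge plotting all points as well.
```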

  8. Dieting and smoking initiation in early adolescent girls and boys: a prospective study.

    PubMed Central

    Austin, S B; Gortmaker, S L

    2001-01-01

    OBJECTIVES: This analysis tested the relation between dieting frequency and risk of smoking initiation in a longitudinal sample of adolescents. METHODS: From 1995 to 1997, 1295 middle school girls and boys participated in a nutrition and physical activity intervention study. The prospective association between dieting frequency at baseline and smoking initiation 2 years later was tested. RESULTS: Compared with girls who reported no dieting at baseline, girls who dieted up to once per week had 2 times the adjusted odds of becoming smokers (odds ratio = 2.0; 95% confidence interval = 1.1, 3.5), and girls who dieted more often had 4 times the adjusted odds of becoming smokers (odds ratio = 3.9; 95% confidence interval = 1.5, 10.4). CONCLUSIONS: Dieting among girls may exacerbate risk of initiating smoking, with increasing risk with greater dieting frequency. PMID:11236412

  9. A rapid sample screening method for authenticity control of whiskey using capillary electrophoresis with online preconcentration.

    PubMed

    Heller, Melina; Vitali, Luciano; Oliveira, Marcone Augusto Leal; Costa, Ana Carolina O; Micke, Gustavo Amadeu

    2011-07-13

    The present study aimed to develop a methodology using capillary electrophoresis for the determination of sinapaldehyde, syringaldehyde, coniferaldehyde, and vanillin in whiskey samples. The main objective was to obtain a screening method to differentiate authentic samples from seized samples suspected of being false using the phenolic aldehydes as chemical markers. The optimized background electrolyte was composed of 20 mmol L(-1) sodium tetraborate with 10% MeOH at pH 9.3. The study examined two kinds of sample stacking, using a long-end injection mode: normal sample stacking (NSM) and sample stacking with matrix removal (SWMR). In SWMR, the optimized injection time of the samples was 42 s (SWMR42); at this time, no matrix effects were observed. Values of r were >0.99 for the both methods. The LOD and LOQ were better than 100 and 330 mg mL(-1) for NSM and better than 22 and 73 mg L(-1) for SWMR. The CE-UV reliability in the aldehyde analysis in the real sample was compared statistically with LC-MS/MS methodology, and no significant differences were found, with a 95% confidence interval between the methodologies.

  10. Multicenter Clinical Trials Using 18F-FDG PET to Measure Early Response to Oncologic Therapy: Effects of Injection-to-Acquisition Time Variability on Required Sample Size.

    PubMed

    Kurland, Brenda F; Muzi, Mark; Peterson, Lanell M; Doot, Robert K; Wangerin, Kristen A; Mankoff, David A; Linden, Hannah M; Kinahan, Paul E

    2016-02-01

    Uptake time (interval between tracer injection and image acquisition) affects the SUV measured for tumors in (18)F-FDG PET images. With dissimilar uptake times, changes in tumor SUVs will be under- or overestimated. This study examined the influence of uptake time on tumor response assessment using a virtual clinical trials approach. Tumor kinetic parameters were estimated from dynamic (18)F-FDG PET scans of breast cancer patients and used to simulate time-activity curves for 45-120 min after injection. Five-minute uptake time frames followed 4 scenarios: the first was a standardized static uptake time (the SUV from 60 to 65 min was selected for all scans), the second was uptake times sampled from an academic PET facility with strict adherence to standardization protocols, the third was a distribution similar to scenario 2 but with greater deviation from standards, and the fourth was a mixture of hurried scans (45- to 65-min start of image acquisition) and frequent delays (58- to 115-min uptake time). The proportion of out-of-range scans (<50 or >70 min, or >15-min difference between paired scans) was 0%, 20%, 44%, and 64% for scenarios 1, 2, 3, and 4, respectively. A published SUV correction based on local linearity of uptake-time dependence was applied in a separate analysis. Influence of uptake-time variation was assessed as sensitivity for detecting response (probability of observing a change of ≥30% decrease in (18)F-FDG PET SUV given a true decrease of 40%) and specificity (probability of observing an absolute change of <30% given no true change). Sensitivity was 96% for scenario 1, and ranged from 73% for scenario 4 (95% confidence interval, 70%-76%) to 92% (90%-93%) for scenario 2. Specificity for all scenarios was at least 91%. Single-arm phase II trials required an 8%-115% greater sample size for scenarios 2-4 than for scenario 1. 
If uptake time is known, SUV correction methods may raise sensitivity to 87%-95% and reduce the sample size increase to less than 27%. Uptake-time deviations from standardized protocols occur frequently, potentially decreasing the performance of (18)F-FDG PET response biomarkers. Correcting SUV for uptake time improves sensitivity, but algorithm refinement is needed. Stricter uptake-time control and effective correction algorithms could improve power and decrease costs for clinical trials using (18)F-FDG PET endpoints. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
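The "local linearity" style of correction evaluated above can be sketched as a first-order extrapolation of the measured SUV back to the 60-min standard uptake time. The slope value and function name here are hypothetical placeholders, not the published coefficient or algorithm.

```python
def suv_at_60(suv_measured, uptake_min, frac_rise_per_min=0.01):
    """Extrapolate a measured SUV to a 60-min uptake time, assuming the SUV
    changes by a constant fraction per minute near the standard time."""
    return suv_measured * (1.0 - frac_rise_per_min * (uptake_min - 60.0))

# Under this assumed 1%/min slope, a scan acquired 10 min late reads high
# and is scaled back down; a scan at exactly 60 min is unchanged.
corrected = suv_at_60(5.0, uptake_min=70.0)
```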

  11. Modulation of human time processing by subthalamic deep brain stimulation.

    PubMed

    Wojtecki, Lars; Elben, Saskia; Timmermann, Lars; Reck, Christiane; Maarouf, Mohammad; Jörgens, Silke; Ploner, Markus; Südmeyer, Martin; Groiss, Stefan Jun; Sturm, Volker; Niedeggen, Michael; Schnitzler, Alfons

    2011-01-01

    Timing in the range of seconds, referred to as interval timing, is crucial for cognitive operations and conscious time processing. According to recent models of interval timing, basal ganglia (BG) oscillatory loops are involved in time-interval recognition. Parkinson's disease (PD) is a typical disease of the basal ganglia that shows distortions in interval timing. Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a powerful treatment of PD which modulates motor and cognitive functions, depending on stimulation frequency, by affecting subcortical-cortical oscillatory loops. Thus, for understanding BG involvement in interval timing, it is of interest whether STN-DBS can modulate timing in a frequency-dependent manner by interfering with oscillatory time-recognition processes. We examined production and reproduction of 5- and 15-second intervals, as well as millisecond timing, in a double-blind, randomised, within-subject repeated-measures design of 12 PD patients applying no, 10-Hz and ≥ 130-Hz STN-DBS, compared to healthy controls. We found under-(re)production of the 15-second interval and a significant enhancement of this under-(re)production by 10-Hz stimulation compared to no stimulation, ≥ 130-Hz STN-DBS and controls. Millisecond timing was not affected. We provide first evidence for a frequency-specific modulatory effect of STN-DBS on interval timing. Our results corroborate the involvement of the BG in general, and of the STN in particular, in the cognitive representation of time intervals in the range of multiple seconds.

  12. Modulation of Human Time Processing by Subthalamic Deep Brain Stimulation

    PubMed Central

    Timmermann, Lars; Reck, Christiane; Maarouf, Mohammad; Jörgens, Silke; Ploner, Markus; Südmeyer, Martin; Groiss, Stefan Jun; Sturm, Volker; Niedeggen, Michael; Schnitzler, Alfons

    2011-01-01

    Timing in the range of seconds, referred to as interval timing, is crucial for cognitive operations and conscious time processing. According to recent models of interval timing, basal ganglia (BG) oscillatory loops are involved in time-interval recognition. Parkinson's disease (PD) is a typical disease of the basal ganglia that shows distortions in interval timing. Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a powerful treatment of PD which modulates motor and cognitive functions, depending on stimulation frequency, by affecting subcortical-cortical oscillatory loops. Thus, for understanding BG involvement in interval timing, it is of interest whether STN-DBS can modulate timing in a frequency-dependent manner by interfering with oscillatory time-recognition processes. We examined production and reproduction of 5- and 15-second intervals, as well as millisecond timing, in a double-blind, randomised, within-subject repeated-measures design of 12 PD patients applying no, 10-Hz and ≥130-Hz STN-DBS, compared to healthy controls. We found under-(re)production of the 15-second interval and a significant enhancement of this under-(re)production by 10-Hz stimulation compared to no stimulation, ≥130-Hz STN-DBS and controls. Millisecond timing was not affected. We provide first evidence for a frequency-specific modulatory effect of STN-DBS on interval timing. Our results corroborate the involvement of the BG in general, and of the STN in particular, in the cognitive representation of time intervals in the range of multiple seconds. PMID:21931767

  13. Strongyle infections and parasitic control strategies in German horses - a risk assessment.

    PubMed

    Schneider, Stephanie; Pfister, Kurt; Becher, Anne M; Scheuerle, Miriam C

    2014-11-12

    As a consequence of the increasing levels of anthelmintic resistance in cyathostomes, new strategies for equine parasite control are being implemented. To assess the potential risks of these, the occurrence of strongyles was evaluated in a group of 1887 horses. The distribution of fecal egg counts (FECs), the frequency of anthelmintic drug use, and the deworming intervals were also analyzed. Between June 2012 and May 2013, 1887 fecal samples from either selectively or strategically dewormed horses were collected at 195 horse farms all over Germany and analyzed quantitatively with a modified McMaster technique. All samples with FEC ≥20 eggs per gram (EPG) were subjected to coproculture to generate third-stage larvae (LIII) for species differentiation. Egg counts were below the limit of detection (20 EPG) in 1046 (55.4%) samples and above it in 841 (44.6%) samples. Strongylus vulgaris larvae were identified in two of the 841 positive samples. Infections with cyathostomes were found on every farm. The most frequently applied anthelmintic was ivermectin (788/50.8%), followed by pyrantel (336/21.6%). The mean time since last treatment was 6.3 months. High-egg-shedding (>500 EPG) strategically dewormed horses (183/1357) were treated, on average, three times/year. The planned treatment date was already exceeded by 72.5% of the high egg-shedders and by 58.1% of the moderate (200-500 EPG) and low egg-shedders (20-199 EPG). S. vulgaris seems to be rare in Germany and no difference in its frequency has yet been found between selectively treated horses and horses receiving treatment in strategic intervals. However, inconsistent parasite control has been observed. Therefore, to minimize the risks for disease, consistent and efficient parasite control should be implemented.

  14. Scale-dependent Normalized Amplitude and Weak Spectral Anisotropy of Magnetic Field Fluctuations in the Solar Wind Turbulence

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Tu, Chuanyi; Marsch, Eckart; He, Jiansen; Wang, Linghua

    2016-01-01

    Turbulence in the solar wind was recently reported to be anisotropic, with the average power spectral index close to -2 when sampling parallel to the local mean magnetic field B0 and close to -5/3 when sampling perpendicular to the local B0. This result was widely considered to be observational evidence for the critical balance theory (CBT), which is derived by making the assumption that the turbulence strength is close to one. However, this basic assumption has not yet been checked carefully with observational data. Here we present for the first time the scale-dependent magnetic-field fluctuation amplitude, which is normalized by the local B0 and evaluated for both parallel and perpendicular sampling directions, using two 30-day intervals of Ulysses data. From our results, the turbulence strength is evaluated as much less than one at small scales in the parallel direction. An even stricter criterion is imposed when selecting the wavelet coefficients for a given sampling direction, so that the time stationarity of the local B0 is better ensured during the local sampling interval. The spectral index for the parallel direction is then found to be -1.75, whereas the spectral index in the perpendicular direction remains close to -1.65. These two new results, namely that the value of the turbulence strength is much less than one in the parallel direction and that the angle dependence of the spectral index is weak, cannot be explained by existing turbulence theories, like CBT, and thus will require new theoretical considerations and promote further observations of solar-wind turbulence.
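The parallel and perpendicular spectral indices quoted above come from power-law fits to magnetic-field power spectra. A minimal sketch of estimating such an index by least squares in log-log space (the function name is an assumption; the synthetic -5/3 spectrum stands in for real Ulysses data):

```python
import numpy as np

def spectral_index(freqs, psd):
    """Spectral index = slope of log10(PSD) vs log10(frequency),
    estimated by ordinary least squares (np.polyfit, degree 1)."""
    slope, _intercept = np.polyfit(np.log10(freqs), np.log10(psd), 1)
    return slope

# Synthetic inertial-range spectrum with a Kolmogorov-like -5/3 index.
f = np.logspace(-3, -1, 50)          # spacecraft-frame frequencies (Hz)
psd = 2.0 * f ** (-5.0 / 3.0)        # power spectral density
```

For the exact power law above the fit recovers -5/3; on real data one would restrict the fit to the inertial range and condition on the local field direction, as the study does with wavelet coefficients.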

  15. Virus occurrence in private and public wells in a fractured dolostone aquifer in Canada

    NASA Astrophysics Data System (ADS)

    Allen, Amy S.; Borchardt, Mark A.; Kieke, Burney A.; Dunfield, Kari E.; Parker, Beth L.

    2017-06-01

    Groundwater samples from 22 wells completed in a regional fractured dolostone aquifer in the Guelph region of southern Ontario, Canada, were collected over an 8-month period and analyzed for viruses and Campylobacter jejuni. Only 8% of the 118 samples exhibited viruses at extremely low concentrations, but of the 22 wells sampled, 10 (45%) were positive for human enteric viruses (polyomavirus, adenovirus A, and GII norovirus) including 5 of the 8 public supply wells (62.5%) and 5 of the 11 private wells (45%). Each virus-positive well had only one virus occurrence with six sampling events during the 8-month sampling campaign and only one virus type was detected in each well. The probability of virus detection was positively associated with well open-interval length. Virus concentration (in the wells that were virus-positive) was negatively associated with well depth and open-interval length and positively associated with overburden thickness (i.e., the thickness of unconsolidated materials overlying bedrock facies) and the amount of precipitation 8-14 and 15-21 days prior to the sampling date. The ephemeral nature of the virus detections and the low detection rate on a per sample basis were consistent with previous studies. The percentage of virus-positive wells, however, was much higher than previous studies, but consistent with the fact that the hydrogeologic conditions of fractured bedrock aquifers create wide capture zones and short groundwater travel times to wells making them more vulnerable to contamination occurrence but at very low concentrations.

  16. Interaction effects between the 5-hydroxy tryptamine transporter-linked polymorphic region (5-HTTLPR) genotype and family conflict on adolescent alcohol use and misuse.

    PubMed

    Kim, Jueun; Park, Aesoon; Glatt, Stephen J; Eckert, Tanya L; Vanable, Peter A; Scott-Sheldon, Lori A J; Carey, Kate B; Ewart, Craig K; Carey, Michael P

    2015-02-01

    To investigate whether the effects of family conflict on adolescent drinking differed as a function of 5-hydroxy tryptamine transporter-linked polymorphic region (5-HTTLPR) genotype cross-sectionally and prospectively in two independent samples of adolescents. Path analysis and multi-group analysis of two prospective datasets were conducted. United States and United Kingdom. Sample 1 was 175 adolescents in the United States (mean age = 15 at times 1 and 2 with a 6-month interval); Sample 2 was 4916 adolescents in the United Kingdon (mean age = 12 at time 1 and 15 at time 2). In both samples, demographics, tri-allelic 5-HTTLPR genotype and perceived family conflict were assessed at time 1. Alcohol use (frequency of drinking) and alcohol misuse (frequency of intoxication, frequency of drinking three or more drinks, maximum number of drinks) were assessed at times 1 and 2. A significant gene-environment interaction on alcohol misuse at time 1 was found in both sample 1 (β = 0.57, P = 0.001) and sample 2 (β = 0.19, P = 0.01), indicating that the 5-HTTLPR low-activity allele carriers exposed to higher levels of family conflict were more likely to engage in alcohol misuse than non-carriers. A significant gene-environment interaction effect on change in alcohol misuse over time was found only in sample 1 (β = 0.48, P = 0.04) but not in sample 2. Compared with non-carriers, adolescents carrying the 5-HTTLPR low-activity allele are more susceptible to the effects of family conflict on alcohol misuse. © 2014 Society for the Study of Addiction.

  17. Accuracy of time-domain and frequency-domain methods used to characterize catchment transit time distributions

    NASA Astrophysics Data System (ADS)

    Godsey, S. E.; Kirchner, J. W.

    2008-12-01

    The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. 
Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths, and convolve these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution, and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
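The central modeling step above — stream tracer output as the convolution of the travel-time distribution with the precipitation tracer input — can be sketched numerically. The gamma shape and scale below are hypothetical, chosen only to illustrate the damping of input variations:

```python
import numpy as np

dt = 1.0                              # daily time step (days)
t = np.arange(0.0, 365.0, dt)

# Assumed travel-time distribution: gamma with shape 2, scale 30 days
# (hypothetical parameters), discretized as pdf * dt.
scale = 30.0
h = (t / scale**2) * np.exp(-t / scale) * dt

# Synthetic seasonal tracer input (e.g., an isotope cycle in rainfall).
c_in = np.sin(2.0 * np.pi * t / 365.0)

# Stream output: convolution of input with the travel-time distribution.
# Its seasonal amplitude is damped relative to the input, as the
# abstract describes for mixed precipitation inputs.
c_out = np.convolve(c_in, h)[: len(t)]
```

Fitting would then proceed either by matching `c_out` to observations in the time domain, or by comparing input and output spectral power in the frequency domain, which is the comparison the study evaluates.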

  18. Weighted regression analysis and interval estimators

    Treesearch

    Donald W. Seegrist

    1974-01-01

    A method is presented for deriving the weighted least-squares estimators for the parameters of a multiple regression model. Confidence intervals for expected values, and prediction intervals for the means of future samples, are given.
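The weighted least-squares estimator referred to above has the standard closed form b = (XᵀWX)⁻¹XᵀWy with W a diagonal weight matrix (typically inverse variances). A minimal sketch:

```python
import numpy as np

def wls(X, y, w):
    """Weighted least-squares estimator b = (X'WX)^-1 X'Wy.

    X : (n, p) design matrix (include a column of ones for an intercept)
    y : (n,) response vector
    w : (n,) observation weights, typically inverse variances
    """
    W = np.diag(np.asarray(w, float))
    # Solve the normal equations rather than inverting explicitly.
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

When the data lie exactly on a line, the estimator recovers the true coefficients for any positive weights; with noisy data, the weights downweight high-variance observations.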

  19. Supporting the growth of peer-professional workforces in healthcare settings: an evaluation of a targeted training approach for volunteer leaders of the STEPS Program.

    PubMed

    Turner, Benjamin; Kennedy, Areti; Kendall, Melissa; Muenchberger, Heidi

    2014-01-01

    To examine the effectiveness of a targeted training approach to foster and support a peer-professional workforce in the delivery of a community rehabilitation program for adults with acquired brain injury (ABI) and their families. A prospective longitudinal design was used to evaluate the effectiveness of a targeted two-day training forum for peer (n = 25) and professional (n = 15) leaders of the Skills to Enable People and Communities Program. Leaders completed a set of questionnaires (General Self-Efficacy Scale - GSES, Rosenberg Self-Esteem Scale, Volunteer Motivation Inventory - VMI and Community Involvement Scale - CIS) both prior to and immediately following the forum. Data analysis entailed paired sample t-test to explore changes in scores over time, and independent sample t-tests for comparisons between the two participant groups. The results indicated a significant increase in scores over time for the GSES (p = 0.047). Improvements in leaders' volunteer motivations and community involvement were also observed between the two time intervals. The between group comparisons highlighted that the peer leader group scored significantly higher than the professional leader group on the CIS and several domains of the VMI at both time intervals. The study provides an enhanced understanding of the utility of innovative workforce solutions for community rehabilitation after ABI; and further highlights the benefits of targeted training approaches to support the development of such workforce configurations.

  20. Exact intervals and tests for median when one sample value possibly an outlier

    NASA Technical Reports Server (NTRS)

    Keller, G. J.; Walsh, J. E.

    1973-01-01

    Available are independent observations (continuous data) that are believed to be a random sample. Desired are distribution-free confidence intervals and significance tests for the population median. However, there is the possibility that either the smallest or the largest observation is an outlier. Then, use of a procedure for rejection of an outlying observation might seem appropriate. Such a procedure would consider that two alternative situations are possible and would select one of them. Either (1) the n observations are truly a random sample, or (2) an outlier exists and its removal leaves a random sample of size n-1. For either situation, confidence intervals and tests are desired for the median of the population yielding the random sample. Unfortunately, satisfactory rejection procedures of a distribution-free nature do not seem to be available. Moreover, all rejection procedures impose undesirable conditional effects on the observations, and also, can select the wrong one of the two above situations. It is found that two-sided intervals and tests based on two symmetrically located order statistics (not the largest and smallest) of the n observations remain valid under both of the above situations, so no preliminary rejection procedure is needed.
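The distribution-free interval described above takes the k-th smallest and k-th largest observations as endpoints; its exact confidence level follows from the binomial distribution, since each observation falls below the median with probability 1/2. A sketch (the function name is an assumption):

```python
from math import comb

def median_ci_coverage(n, k):
    """Exact confidence level of the distribution-free interval
    (X_(k), X_(n-k+1)) for the population median, for a random sample
    of size n from any continuous distribution:

        1 - 2 * P[Binomial(n, 1/2) <= k - 1]

    Choosing k >= 2 avoids using the extreme order statistics, which is
    what makes the interval robust to a single outlying observation."""
    tail = sum(comb(n, i) for i in range(k)) / 2 ** n
    return 1.0 - 2.0 * tail
```

For example, with n = 10 and k = 2 the interval (X_(2), X_(9)) has exact confidence level 1 - 22/1024 ≈ 97.9%; moving k inward (larger k) shortens the interval but lowers the coverage.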

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurley, D.F.; Whitehouse, J.M.

    A dedicated low-flow groundwater sample collection system was designed for implementation in a post-closure ACL monitoring program at the Yaworski Lagoon NPL site in Canterbury, Connecticut. The system includes dedicated bladder pumps with intake ports located in the screened interval of the monitoring wells. This sampling technique was implemented in the spring of 1993. The system was designed to simultaneously obtain samples directly from the screened interval of nested wells in three distinct water bearing zones. Sample collection is begun upon stabilization of field parameters. Other than line volume, no prior purging of the well is required. It was found that dedicated low-flow sampling from the screened interval provides a method of representative sample collection without the bias of suspended solids introduced by traditional techniques of pumping and bailing. Analytical data indicate that measured chemical constituents are representative of groundwater migrating through the screened interval. Upon implementation of the low-flow monitoring system, analytical results exhibited a decrease in concentrations of some organic compounds and metals. The system has also proven to be a cost effective alternative to pumping and bailing which generate large volumes of purge water requiring containment and disposal.

  2. Sensitivity, specificity, and reproducibility of four measures of laboratory turnaround time.

    PubMed

    Valenstein, P N; Emancipator, K

    1989-04-01

    The authors studied the performance of four measures of laboratory turnaround time: the mean, median, 90th percentile, and proportion of tests reported within a predetermined cut-off interval (proportion of acceptable tests [PAT]). Measures were examined with the use of turnaround time data from 11,070 stat partial thromboplastin times, 16,761 urine cultures, and 28,055 stat electrolyte panels performed by a single laboratory. For laboratories with long turnaround times, the most important quality of a turnaround time measure is high reproducibility, so that improvement in reporting speed can be distinguished from random variation resulting from sampling. The mean was found to be the most reproducible of the four measures, followed by the median. The mean achieved acceptable precision with sample sizes of 100-500 tests. For laboratories with normally rapid turnaround times, the most important quality of a measure is high sensitivity and specificity for detecting whether turnaround time has dropped below standards. The PAT was found to be the best measure of turnaround time in this setting but required sample sizes of at least 500 tests to achieve acceptable accuracy. Laboratory turnaround time may be measured for different reasons. The method of measurement should be chosen with an eye toward its intended application.
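The four turnaround-time summaries compared in the study can be computed directly from a sample of turnaround times; a minimal sketch (the function name and the cutoff value in the example are illustrative):

```python
import numpy as np

def tat_measures(times, cutoff):
    """Four turnaround-time (TAT) measures from the study:
    mean, median, 90th percentile, and PAT, the proportion of tests
    reported within the predetermined `cutoff` interval."""
    t = np.asarray(times, float)
    return {
        "mean": t.mean(),
        "median": np.median(t),
        "p90": np.percentile(t, 90),          # linear interpolation
        "pat": float(np.mean(t <= cutoff)),   # proportion of acceptable tests
    }
```

Note how the measures react differently to a single slow result: the mean and 90th percentile shift, while the median and PAT are largely insensitive, which is why the study finds them suited to different monitoring goals.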

  3. The "Chaos Theory" and nonlinear dynamics in heart rate variability analysis: does it work in short-time series in patients with coronary heart disease?

    PubMed

    Krstacic, Goran; Krstacic, Antonija; Smalcelj, Anton; Milicic, Davor; Jembrek-Gostovic, Mirjana

    2007-04-01

    Dynamic analysis techniques may quantify abnormalities in heart rate variability (HRV) based on nonlinear and fractal analysis (chaos theory). The article emphasizes the clinical and prognostic significance of dynamic changes in short-time series applied to patients with coronary heart disease (CHD) during the exercise electrocardiograph (ECG) test. The subjects were included in the series after complete cardiovascular diagnostic data. Series of R-R and ST-T intervals were obtained from digitally sampled exercise ECG data. The rescaled range analysis method determined the fractal dimension of the intervals. To quantify the fractal long-range correlation properties of heart rate variability, the detrended fluctuation analysis technique was used. Approximate entropy (ApEn) was applied to quantify the regularity and complexity of the time series, as well as the unpredictability of fluctuations in the time series. It was found that the short-term fractal scaling exponent (alpha(1)) is significantly lower in patients with CHD (0.93 +/- 0.07 vs 1.09 +/- 0.04; P < 0.001). The patients with CHD had a higher fractal dimension in each exercise test program separately, as well as in the exercise program as a whole. ApEn was significantly lower in the CHD group in both R-R and ST-T ECG intervals (P < 0.001). The nonlinear dynamic methods could also have clinical and prognostic applicability in short-time ECG series. Dynamic analysis based on chaos theory during the exercise ECG test points to multifractal time series in CHD patients, who lose normal fractal characteristics and regularity in HRV. Nonlinear analysis techniques may complement traditional ECG analysis.
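Approximate entropy, one of the measures used above, can be sketched in the standard Pincus formulation; lower values indicate more regular, predictable series. The default tolerance of 0.2 times the series standard deviation is a common convention, assumed here rather than taken from the article:

```python
import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series.

    m : embedding (template) length
    r : similarity tolerance; defaults to 0.2 * std(x) (common convention)
    """
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])      # templates
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)    # fraction of templates within r
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

A regular signal (e.g., a sampled sine) yields a lower ApEn than white noise of the same length, which is the direction of the CHD finding: reduced ApEn reflects loss of complexity.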

  4. Response interval comparison between urban fire departments and ambulance services.

    PubMed

    Jermyn, B D

    1999-01-01

    To measure the response intervals of fire departments compared with ambulance services in three urban centers to determine whether defibrillators should be added to fire vehicles. A prospective sample of 1,882 code 4 (life-threatening) tiered calls was collected over a six-month period from March 1, 1994, to August 31, 1994. A matched-pairs experimental design compared the response interval of the fire department with that of the ambulance service for each call. This emergency medical services (EMS) system encompasses three urban centers with populations of 80,000, 95,000, and 170,000. In two of the three urban centers, the fire department arrived on scene more than a minute sooner than the ambulance service: Cambridge (n = 571, mean = 2.22 min, p < 0.0001); Kitchener (n = 1,011, mean = 1.24 min, p < 0.003); and Waterloo (n = 300, mean = 0.69 min, p < 0.98). The shorter response interval of fire departments suggests placing defibrillators on fire response vehicles in an effort to decrease the time to defibrillation for cardiac arrest victims in this EMS system.

  5. Proceedings of the Annual Precise Time and Time Interval (PTTI) applications and Planning Meeting (25th) Held in Marina Del Rey, California on 29 November - 2 December 1993

    DTIC Science & Technology

    1993-12-02

    electronics. In other words, while the main driving force of the past has been the desire for greater performance by way of accuracy, the future will demand ...that can match him in terms of number of years in the program; but there are a lot of folks that are brand new to the program. What is precise time...International Telecommunications Union (ITU). The additional development of a digital-filter view of all of these two-sample variances113) has

  6. Real-time control systems: feedback, scheduling and robustness

    NASA Astrophysics Data System (ADS)

    Simon, Daniel; Seuret, Alexandre; Sename, Olivier

    2017-08-01

    The efficient control of real-time distributed systems, where continuous components are governed through digital devices and communication networks, needs a careful examination of the constraints arising from the different involved domains inside co-design approaches. Thanks to the robustness of feedback control, both new control methodologies and slackened real-time scheduling schemes are proposed beyond the frontiers between these traditionally separated fields. A methodology to design robust aperiodic controllers is provided, where the sampling interval is considered as a control variable of the system. Promising experimental results are provided to show the feasibility and robustness of the approach.

  7. Programmable noise bandwidth reduction by means of digital averaging

    NASA Technical Reports Server (NTRS)

    Poklemba, John J. (Inventor)

    1993-01-01

    Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. As the averaged sample is clocked to a suitable detector at a much slower rate than the input signal sampling rate the noise bandwidth at the input to the detector is reduced, the input to the detector having an improved signal to noise ratio as a result of the averaging process, and the rate at which such subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter having an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
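The core averaging-and-decimation step of the invention — one output sample per block of input samples, with the block length set by the ratio of input to output sampling rates — can be sketched as plain block averaging. This is a simplification: the patented design weights samples with stored FIR filter coefficients, which a uniform average replaces here.

```python
import numpy as np

def pre_average(samples, n):
    """Average non-overlapping blocks of n input samples, producing one
    output sample per block (decimation ratio n = input rate / output rate).

    For white noise, averaging n samples reduces the noise variance by a
    factor of n, which is the predetection noise-bandwidth reduction."""
    s = np.asarray(samples, float)
    usable = len(s) - len(s) % n          # drop a trailing partial block
    return s[:usable].reshape(-1, n).mean(axis=1)
```

Feeding the detector these averaged samples at the slower rate improves the signal-to-noise ratio at its input and proportionally reduces the rate at which downstream processing must run.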

  8. Department of Defense Precise Time and Time Interval program improvement plan

    NASA Technical Reports Server (NTRS)

    Bowser, J. R.

    1981-01-01

    The United States Naval Observatory is responsible for ensuring uniformity in precise time and time interval operations including measurements, the establishment of overall DOD requirements for time and time interval, and the accomplishment of objectives requiring precise time and time interval with minimum cost. An overview of the objectives, the approach to the problem, the schedule, and a status report, including significant findings relative to organizational relationships, current directives, principal PTTI users, and future requirements as currently identified by the users are presented.

  9. Correlates of depression in bipolar disorder

    PubMed Central

    Moore, Paul J.; Little, Max A.; McSharry, Patrick E.; Goodwin, Guy M.; Geddes, John R.

    2014-01-01

    We analyse time series from 100 patients with bipolar disorder for correlates of depression symptoms. As the sampling interval is non-uniform, we quantify the extent of missing and irregular data using new measures of compliance and continuity. We find that uniformity of response is negatively correlated with the standard deviation of sleep ratings (ρ = –0.26, p = 0.01). To investigate the correlation structure of the time series themselves, we apply the Edelson–Krolik method for correlation estimation. We examine the correlation between depression symptoms for a subset of patients and find that self-reported measures of sleep and appetite/weight show a lower average correlation than other symptoms. Using surrogate time series as a reference dataset, we find no evidence that depression is correlated between patients, though we note a possible loss of information from sparse sampling. PMID:24352942
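The Edelson-Krolik method cited above handles non-uniform sampling by binning pairwise products of mean-subtracted values according to their time lag, rather than interpolating onto a regular grid. A minimal sketch (the measurement-error terms of the full estimator are omitted; the function name is an assumption):

```python
import numpy as np

def dcf(t1, a, t2, b, lag_edges):
    """Discrete correlation function (Edelson-Krolik style sketch) of two
    irregularly sampled series (t1, a) and (t2, b).

    Each unbinned coefficient is (a_i - mean(a))(b_j - mean(b)) / (sd_a sd_b);
    coefficients are averaged within bins of the lag t2_j - t1_i."""
    a0, b0 = a - a.mean(), b - b.mean()
    udcf = np.outer(a0, b0) / (a.std() * b.std())   # unbinned coefficients
    lags = t2[None, :] - t1[:, None]                # all pairwise lags
    out = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (lags >= lo) & (lags < hi)
        out.append(udcf[mask].mean() if mask.any() else np.nan)
    return np.array(out)
```

As a sanity check, correlating a series with itself in a bin around zero lag returns 1; sparse sampling shows up as bins with few pairs, which is the information loss the abstract notes.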

  10. The Simulation Realization of Pavement Roughness in the Time Domain

    NASA Astrophysics Data System (ADS)

    XU, H. L.; He, L.; An, D.

    2017-10-01

    Given the needs of dynamic studies of the vehicle-pavement system and of simulated vibration-table tests, simulating pavement roughness realistically is essential if calculations and tests are to reflect actual conditions. Using the power spectral density function, the simulation of pavement roughness can be realized by the inverse Fourier transform. The main idea of this method is that the spectrum amplitudes and random phases are obtained separately from the power spectrum, and the pavement roughness profile is then recovered in the spatial domain through the inverse fast Fourier transform (IFFT). In this work, a sampling interval (Δl) of 0.1 m and a sample length (N) of 4096 points satisfied the accuracy requirements. Using this method, simulated pavement roughness profiles for grades A-H were obtained.
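The amplitude-plus-random-phase IFFT synthesis described above can be sketched as follows, using the paper's Δl = 0.1 m and N = 4096. The power-law PSD in the usage example and the function name are illustrative assumptions, not the paper's grade A-H spectra:

```python
import numpy as np

def road_profile(psd_func, dl=0.1, n=4096, seed=0):
    """Synthesize a roughness profile from a one-sided spatial PSD.

    Each harmonic's amplitude comes from the PSD (amp^2 / 2 = PSD * df),
    its phase is uniform random; the profile is the inverse FFT of the
    resulting Hermitian-symmetric spectrum."""
    rng = np.random.default_rng(seed)
    df = 1.0 / (n * dl)                    # spatial-frequency resolution
    f = np.arange(1, n // 2) * df          # positive spatial frequencies
    amp = np.sqrt(2.0 * psd_func(f) * df)  # cosine amplitude per harmonic
    phase = rng.uniform(0.0, 2.0 * np.pi, len(f))
    spec = np.zeros(n, dtype=complex)
    spec[1:n // 2] = 0.5 * n * amp * np.exp(1j * phase)
    # Hermitian symmetry so the inverse transform is real-valued.
    spec[-(n // 2) + 1:] = np.conj(spec[1:n // 2][::-1])
    return np.fft.ifft(spec).real
```

By construction, the variance of the synthesized profile equals the sum of PSD·df over the represented frequencies, so the simulated roughness reproduces the target spectrum's energy.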

  11. Extensive theoretical/numerical comparative studies on H2 and generalised H2 norms in sampled-data systems

    NASA Astrophysics Data System (ADS)

    Kim, Jung Hoon; Hagiwara, Tomomichi

    2017-11-01

    This paper is concerned with linear time-invariant (LTI) sampled-data systems (by which we mean sampled-data systems with LTI generalised plants and LTI controllers) and studies their H2 norms from the viewpoint of impulse responses and generalised H2 norms from the viewpoint of the induced norms from L2 to L∞. A new definition of the H2 norm of LTI sampled-data systems is first introduced through a sort of intermediate standpoint of those for the existing two definitions. We then establish unified treatment of the three definitions of the H2 norm through a matrix function G(τ) defined on the sampling interval [0, h). This paper next considers the generalised H2 norms, in which two types of the L∞ norm of the output are considered as the temporal supremum magnitude under the spatial 2-norm and ∞-norm of a vector-valued function. We further give unified treatment of the generalised H2 norms through another matrix function F(θ) which is also defined on [0, h). Through a close connection between G(τ) and F(θ), some theoretical relationships between the H2 and generalised H2 norms are provided. Furthermore, appropriate extensions associated with the treatment of G(τ) and F(θ) to the closed interval [0, h] are discussed to facilitate numerical computations and comparisons of the H2 and generalised H2 norms. Through theoretical and numerical studies, it is shown that the two generalised H2 norms coincide with neither of the three H2 norms of LTI sampled-data systems even though all the five definitions coincide with each other when single-output continuous-time LTI systems are considered as a special class of LTI sampled-data systems. To summarise, this paper clarifies that the five control performance measures are mutually related with each other but they are also intrinsically different from each other.

  12. Development of a unique multi-contaminant air sampling device for a childhood asthma cohort in an agricultural environment.

    PubMed

    Armstrong, Jenna L; Fitzpatrick, Cole F; Loftus, Christine T; Yost, Michael G; Tchong-French, Maria; Karr, Catherine J

    2013-09-01

    This research describes the design, deployment, performance, and acceptability of a novel outdoor active air sampler to provide simultaneous measurements of multiple contaminants at timed intervals for the Aggravating Factors of Asthma in Rural Environment (AFARE) study, a longitudinal cohort of 50 children in Yakima Valley, Washington. The sampler was constructed of multiple sampling media connected to individual critical orifices and a rotary vane vacuum pump. It was connected to a timed control valve system to collect 24-hour samples every six days over 18 months. We describe a spatially representative approach with both quantitative and qualitative location criteria to deploy a network of 14 devices at participant residences in a rural region (20 × 60 km). Overall the sampler performed well, as the concurrent mean sample flow rates were within or above the ranges of recommended sampling rates for each exposure metric of interest. Acceptability was high among the study population of Hispanic farmworker participant households. The sampler design may prove useful for future urban and rural community-based studies aimed at collecting multiple-contaminant data during specific time periods.

  13. The impact of time and field conditions on brown bear (Ursus arctos) faecal DNA amplification

    USGS Publications Warehouse

    Murphy, M.A.; Kendall, K.C.; Robinson, A.; Waits, L.P.

    2007-01-01

    To establish longevity of faecal DNA samples under varying summer field conditions, we collected 53 faeces from captive brown bears (Ursus arctos) on a restricted vegetation diet. Each faeces was divided, and one half was placed on a warm, dry field site while the other half was placed on a cool, wet field site on Moscow Mountain, Idaho, USA. Temperature, relative humidity, and dew point data were collected on each site, and faeces were sampled for DNA extraction at <1, 3, 6, 14, 30, 45, and 60 days. Faecal DNA sample viability was assessed by attempting PCR amplification of a mitochondrial DNA (mtDNA) locus (~150 bp) and a nuclear DNA (nDNA) microsatellite locus (180-200 bp). Time in the field, temperature, and dew point impacted mtDNA and nDNA amplification success with the greatest drop in success rates occurring between 1 and 3 days. In addition, genotyping errors significantly increased over time at both field sites. Based on these results, we recommend collecting samples at frequent transect intervals and focusing sampling efforts during drier portions of the year when possible. © 2007 Springer Science+Business Media, Inc.

  14. A Transactional Approach to Person-Environment Fit: Reciprocal Relations between Personality Development and Career Role Growth across Young to Middle Adulthood

    ERIC Educational Resources Information Center

    Wille, Bart; Beyers, Wim; De Fruyt, Filip

    2012-01-01

    In order to enhance our understanding of person-environment transactions, the present longitudinal cohort study examined the dynamic interactions between career role development and personality development over a time interval of 15 years. A sample of college alumni (N = 260) provided self-reports on Big five traits three months prior to…

  15. Unraveling the redox evolution of the Yangtze Block across the Precambrian/Cambrian transition

    NASA Astrophysics Data System (ADS)

    Diamond, C. W.; Zhang, F.; Chen, Y.; Lyons, T. W.

    2016-12-01

    Rocks preserved on the South China Craton have played a critical role in refining our understanding of the co-evolution of life and Earth's surface environments in the Late Neoproterozoic and earliest Paleozoic. From the earliest metazoan embryos to the many examples of exceptional preservation throughout the Cambrian Explosion, South China has preserved an outstanding record of animal evolution across this critical transition. Similarly, rocks preserved in South China hold key insights into the changing ocean chemistry that accompanied this extraordinary time. Recent work from Sahoo and others (2016, Geobiology) used redox-sensitive metal enrichments in the Ediacaran Doushantuo Formation to demonstrate that the redox state of the latest Neoproterozoic oceans was highly dynamic, rather than stably oxygenated or anoxic as had both been suggested previously. In an attempt to follow on from this and other studies, we have examined samples from a drill core taken in eastern Guizhou capturing deep-water facies of the Liuchapo and Jiumenchong formations, which contain the Precambrian/Cambrian boundary. In addition to containing the boundary, the sampled interval contains an enigmatic, widespread horizon that is strongly enriched in Ni and Mo. We have taken a multi-proxy approach in our investigation of this layer, the possible implications it has for the strata above and below (i.e., how its presence affects their utility as archives of paleo-redox conditions), and what those strata can tell us about local and global redox conditions during this pivotal time in Earth's history. Our Fe speciation data indicate that conditions were sulfidic at this location throughout the majority of the sampled interval. While redox-sensitive metal concentrations are dramatically enriched in the Ni/Mo interval, their concentrations return to modest enrichments above it and continue to decrease upward.
This trend suggests that while the conditions that favored extreme enrichment during the deposition of the Ni/Mo layer may have continued to provide a source of metals above the layer itself, by the time this source was exhausted, the background reservoir of these metals was low, sufficient only to provide small enrichments - consistent with the notion that deep ocean anoxia was a regular, if not dominant, feature of the Cambrian world.

  16. Refining the Early Devonian time scale using Milankovitch cyclicity in Lochkovian-Pragian sediments (Prague Synform, Czech Republic)

    NASA Astrophysics Data System (ADS)

    Da Silva, A. C.; Hladil, J.; Chadimová, L.; Slavík, L.; Hilgen, F. J.; Bábek, O.; Dekkers, M. J.

    2016-12-01

    The Early Devonian geological time scale (base of the Devonian at 418.8 ± 2.9 Myr, Becker et al., 2012) suffers from poor age control, with associated large uncertainties between 2.5 and 4.2 Myr on the stage boundaries. Identifying orbital cycles from sedimentary successions can serve as a very powerful chronometer to test and, where appropriate, improve age models. Here, we focus on the Lochkovian and Pragian, the two lowermost Devonian stages. High-resolution magnetic susceptibility (χin, 5 to 10 cm sampling interval) and gamma ray spectrometry (GRS, 25 to 50 cm sampling interval) records were gathered from two main limestone sections, Požár-CS (118 m, spanning the Lochkov and Praha Formations) and Pod Barrandovem (174 m; Praha Formation), both in the Czech Republic. An additional section (Branžovy, 65 m, Praha Formation) was sampled for GRS (every 50 cm). The χin and GRS records are very similar, so χin variations are driven by variations in the samples' paramagnetic clay mineral content, reflecting changes in detrital input. Therefore, climatic variations are very likely captured in our records. Multiple spectral analysis and statistical techniques, such as Continuous Wavelet Transform, Evolutive Harmonic Analysis, the Multi-taper method and Average Spectral Misfit, were used in concert to reach an optimal astronomical interpretation. The Požár-CS section shows distinctly varying sediment accumulation rates. The Lochkovian (essentially equivalent to the Lochkov Formation (Fm.)) is interpreted to include a total of nineteen 405 kyr eccentricity cycles, constraining its duration to 7.7 ± 2.8 Myr. The Praha Fm. includes fourteen 405 kyr eccentricity cycles in the three sampled sections, while the Pragian Stage only includes about four 405 kyr eccentricity cycles, thus exhibiting durations of 5.7 ± 0.6 Myr and 1.7 ± 0.7 Myr respectively. Because the Lochkov Fm. contains an interval with very low sediment accumulation rate and because the Praha Fm. was cross-validated in three different sections, the uncertainty in the duration of the Lochkov Fm. and the Lochkovian is larger than that of the Praha Fm. and Pragian. The new floating time scales for the Lochkovian and Pragian stages have an unprecedented precision, with reduction in the uncertainty by a factor of 1.7 for the Lochkovian and of ∼6 for the Pragian. Furthermore, longer orbital modulation cycles are also identified with periodicities of ∼1000 kyr and 2000-2500 kyr.
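
    The stage durations quoted above follow directly from the counted 405-kyr long-eccentricity cycles; a trivial worked check of that arithmetic (cycle counts as reported, differences from the quoted values are rounding):

```python
ECC_LONG_KYR = 405  # period of the long eccentricity cycle, kyr

def duration_myr(n_cycles):
    """Duration (Myr) implied by a count of 405-kyr eccentricity cycles."""
    return n_cycles * ECC_LONG_KYR / 1000.0

print(duration_myr(19))  # Lochkovian: nineteen cycles, 7.695 ~ 7.7 Myr
print(duration_myr(14))  # Praha Fm.: fourteen cycles, 5.67 ~ 5.7 Myr
print(duration_myr(4))   # Pragian: about four cycles, 1.62 Myr
```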

  17. Testing the molecular clock using mechanistic models of fossil preservation and molecular evolution

    PubMed Central

    2017-01-01

    Molecular sequence data provide information about relative times only, and fossil-based age constraints are the ultimate source of information about absolute times in molecular clock dating analyses. Thus, fossil calibrations are critical to molecular clock dating, but competing methods are difficult to evaluate empirically because the true evolutionary time scale is never known. Here, we combine mechanistic models of fossil preservation and sequence evolution in simulations to evaluate different approaches to constructing fossil calibrations and their impact on Bayesian molecular clock dating, and the relative impact of fossil versus molecular sampling. We show that divergence time estimation is impacted by the model of fossil preservation, sampling intensity and tree shape. The addition of sequence data may improve molecular clock estimates, but accuracy and precision are dominated by the quality of the fossil calibrations. Posterior means and medians are poor representatives of true divergence times; posterior intervals provide a much more accurate estimate of divergence times, though they may be wide and often do not have high coverage probability. Our results highlight the importance of increased fossil sampling and improved statistical approaches to generating calibrations, which should incorporate the non-uniform nature of ecological and temporal fossil species distributions. PMID:28637852

  18. Using regression methods to estimate stream phosphorus loads at the Illinois River, Arkansas

    USGS Publications Warehouse

    Haggard, B.E.; Soerens, T.S.; Green, W.R.; Richards, R.P.

    2003-01-01

    The development of total maximum daily loads (TMDLs) requires evaluating existing constituent loads in streams. Accurate estimates of constituent loads are needed to calibrate watershed and reservoir models for TMDL development. The best approach to estimate constituent loads is high frequency sampling, particularly during storm events, and mass integration of constituents passing a point in a stream. Most often, resources are limited and discrete water quality samples are collected on fixed intervals and sometimes supplemented with directed sampling during storm events. When resources are limited, mass integration is not an accurate means to determine constituent loads and other load estimation techniques such as regression models are used. The objective of this work was to determine a minimum number of water-quality samples needed to provide constituent concentration data adequate to estimate constituent loads at a large stream. Twenty sets of water quality samples with and without supplemental storm samples were randomly selected at various fixed intervals from a database at the Illinois River, northwest Arkansas. The random sets were used to estimate total phosphorus (TP) loads using regression models. The regression-based annual TP loads were compared to the integrated annual TP load estimated using all the data. At a minimum, monthly sampling plus supplemental storm samples (six samples per year) was needed to produce a root mean square error of less than 15%. Water quality samples should be collected at least semi-monthly (every 15 days) in studies less than two years if seasonal time factors are to be used in the regression models. Annual TP loads estimated from independently collected discrete water quality samples further demonstrated the utility of using regression models to estimate annual TP loads in this stream system.
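
    The regression approach described above amounts to a log-log rating curve: fit concentration against discharge on the sparse samples, predict concentration for every day, and integrate the load. The Python sketch below uses synthetic data (a hypothetical power-law rating, not the Illinois River dataset) and omits the retransformation bias corrections that production tools apply:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily record: discharge Q and a TP concentration following a
# hypothetical power-law rating with lognormal scatter. This stands in
# for the Illinois River data; it is not the study's dataset or model.
n_days = 365
Q = rng.lognormal(mean=2.0, sigma=0.8, size=n_days)          # discharge
conc = 0.05 * Q**0.4 * rng.lognormal(0.0, 0.2, size=n_days)  # TP, mg/L
true_load = float(np.sum(conc * Q))  # proportional to the annual mass flux

# Suppose only semi-monthly samples (every 15 days) were collected...
idx = np.arange(0, n_days, 15)
# ...fit the rating curve log(conc) = b0 + b1*log(Q) by least squares.
X = np.column_stack([np.ones(idx.size), np.log(Q[idx])])
b, *_ = np.linalg.lstsq(X, np.log(conc[idx]), rcond=None)

# Predict concentration for every day and integrate the estimated load.
est_load = float(np.sum(np.exp(b[0] + b[1] * np.log(Q)) * Q))
print(f"relative error of regression load: {abs(est_load - true_load) / true_load:.3f}")
```

    Real applications add seasonal terms (as the abstract notes) and a retransformation (smearing) correction when exponentiating log-scale predictions; both are omitted here for brevity.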

  19. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    NASA Technical Reports Server (NTRS)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
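
    The sampling-related uncertainty examined above can be illustrated with a toy Monte Carlo experiment: generate an intermittent "rain" series, subsample it at a fixed time interval, and measure the RMS relative error of the subsampled mean. The sketch below is purely illustrative (an AR(1)-based toy rain process, not the radar data set or the paper's scaling law):

```python
import numpy as np

rng = np.random.default_rng(0)
HOURS = 30 * 24  # one 30-day accumulation period at hourly resolution

def make_storm_series():
    """Toy intermittent rain series: an AR(1) latent process thresholded
    so that rain falls only during correlated 'storm' excursions."""
    z = np.empty(HOURS)
    z[0] = rng.normal()
    for t in range(1, HOURS):
        z[t] = 0.9 * z[t - 1] + np.sqrt(1 - 0.9**2) * rng.normal()
    return np.where(z > 0.8, z - 0.8, 0.0)

def sampling_rmse(interval_h, n_rep=100):
    """RMS relative error of the 30-day mean rain rate estimated from
    snapshots every `interval_h` hours, relative to the full record."""
    errs = []
    for _ in range(n_rep):
        r = make_storm_series()
        if r.mean() > 0:
            errs.append((r[::interval_h].mean() - r.mean()) / r.mean())
    return float(np.sqrt(np.mean(np.square(errs))))

for h in (1, 3, 6, 12):
    print(f"{h:2d} h sampling: RMSE = {sampling_rmse(h):.3f}")
```

    As in the study, the error grows with the sampling interval and varies from realization to realization, which is why the abstract recommends expressing it in probabilistic terms.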

  20. Trichinella spiralis: effect of high temperature on infectivity in pork.

    PubMed

    Kotula, A W; Murrell, K D; Acosta-Stein, L; Lamb, L; Douglass, L

    1983-08-01

    Twenty-gram samples of homogenized Boston shoulder from swine experimentally infected with Trichinella spiralis were sealed in plastic pouches, pressed to a uniform thickness of 2 mm, and subjected to water bath temperatures of 49, 52, 55, 60, and 63 +/- 0.5 C for intervals of 2 min to 6 hr, with sampling times concentrated in the interval of 0 to 15 min. These times included a period of about 1 min at the start and a period of about 1 min at the end for temperature equilibration. Treated samples were rapidly chilled to 25 C and then digested in a 1% pepsin-HCl solution at 37 C for 18 hr to recover T. spiralis larvae. The recovered larvae were suspended in 2 ml saline; 1 ml of this suspension was introduced into the stomach of each of two rats. The linear equation, log(time) = 17.3 - 0.302 (temperature), was calculated from the time required at each temperature for the inactivation of T. spiralis larvae. The correlation coefficient for that relationship was r = -0.994. Larvae heated in the meat to 55 C for 4 min retained their infectivity, but were rendered noninfective after 6 min at 55 C. At 60 C, larvae were not infective after only 2 min (zero dwell time); whereas at 52 C, 47 min were required to render the larvae noninfective. Larvae in meat heated to 49 C were infective after 5 hr but not after 6 hr. These data demonstrate that the destruction of infectivity of T. spiralis is time-temperature related.
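
    The fitted time-temperature relation can be evaluated directly; a minimal sketch, assuming (as the reported values imply) that the log is base 10 and time is in minutes:

```python
def inactivation_time_min(temp_c):
    """Time (minutes) to inactivate T. spiralis larvae at temp_c (deg C),
    from the fitted relation log10(time) = 17.3 - 0.302 * temperature.
    Base-10 log and minutes are assumptions consistent with the
    reported experimental values."""
    return 10 ** (17.3 - 0.302 * temp_c)

for t in (52, 55, 60):
    print(t, round(inactivation_time_min(t), 2))
```

    The regression brackets the observed endpoints: roughly 39 min at 52 C (47 min observed), about 5 min at 55 C (noninfective between 4 and 6 min), and well under a minute at 60 C (noninfective at the zero-dwell 2-min treatment).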

  1. Variability of reflectance measurements with sensor altitude and canopy type

    NASA Technical Reports Server (NTRS)

    Daughtry, C. S. T.; Vanderbilt, V. C.; Pollara, V. J.

    1981-01-01

    Data were acquired on canopies of mature corn planted in 76 cm rows, mature soybeans planted in 96 cm rows with 71 percent soil cover, and mature soybeans planted in 76 cm rows with 100 percent soil cover. A LANDSAT band radiometer with a 15 degree field of view was used at ten altitudes ranging from 0.2 m to 10 m above the canopy. At each altitude, measurements were taken at 15 cm intervals along a 2.0 m transect perpendicular to the crop row direction. Reflectance data were plotted as a function of altitude and horizontal position to verify that the variance of measurements at low altitudes was attributable to row effects, which disappear at higher altitudes where the sensor integrates across several rows. The coefficient of variation of reflectance decreased exponentially as the sensor was elevated. Systematic sampling (at odd multiples of 0.5 times the row spacing interval) required fewer measurements than simple random sampling over row crop canopies.

  2. Consistency of signal intensity and T2* in frozen ex vivo heart muscle, kidney, and liver tissue.

    PubMed

    Kaye, Elena A; Josan, Sonal; Lu, Aiming; Rosenberg, Jarrett; Daniel, Bruce L; Pauly, Kim Butts

    2010-03-01

    To investigate tissue dependence of MRI-based thermometry in frozen tissue by quantification and comparison of signal intensity and T2* of ex vivo frozen tissue of three different types: heart muscle, kidney, and liver. Tissue samples were frozen and imaged on a 0.5 Tesla MRI scanner with an ultrashort echo time (UTE) sequence. Signal intensity and T2* were determined as the temperature of the tissue samples was decreased from room temperature to approximately -40 degrees C. Statistical analysis was performed for the (-20 degrees C, -5 degrees C) temperature interval. The findings of this study demonstrate that signal intensity and T2* are consistent across the three types of tissue for the (-20 degrees C, -5 degrees C) temperature interval. Both parameters can be used to calculate a single temperature calibration curve for all three types of tissue and potentially in the future serve as a foundation for tissue-independent MRI-based thermometry.

  3. Periodicity in extinction and the problem of catastrophism in the history of life

    NASA Technical Reports Server (NTRS)

    Sepkoski, J. J., Jr. (Principal Investigator)

    1989-01-01

    The hypothesis that extinction events have recurred periodically over the last quarter billion years is greatly strengthened by new data on the stratigraphic ranges of marine animal genera. In the interval from the Permian to Recent, these data encompass some 13,000 generic extinctions, providing a more sensitive indicator of species-level extinctions than previously used familial data. Extinction time series computed from the generic data display nine strong peaks that are nearly uniformly spaced at 26 Ma intervals over the last 270 Ma. Most of these peaks correspond to extinction events recognized in more detailed, if limited, biostratigraphic studies. These new data weaken or negate most arguments against periodicity, which have involved criticisms of the taxonomic data base, sampling intervals, chronometric time scales, and statistical methods used in previous analyses. The criticisms are reviewed in some detail and various new calculations and simulations, including one assessing the effects of paraphyletic taxa, are presented. Although the new data strengthen the case for periodicity, they offer little new insight into the driving mechanism behind the pattern. However, they do suggest that many of the periodic events may not have been catastrophic, occurring instead over several stratigraphic stages or substages.

  4. Impact of different sampling strategies on score results of the Nine Equivalents of Nursing Manpower Use Score (NEMS).

    PubMed

    Junger, A; Hartmann, B; Klasen, J; Brenck, F; Röhrig, R; Hempelmann, G

    2007-01-01

    Prospective observational study to assess the impact of two different sampling strategies on the score results of the NEMS, used widely to estimate the amount of nursing workload in an ICU. NEMS scores of all patients admitted to the surgical ICU over a one-year period were automatically calculated twice a day with a patient data management system for each patient day on the ICU using two different sampling strategies (NEMS(individual): 24-hour intervals starting from the time of admission; NEMS(8 a.m.): 24-hour intervals starting at 8 a.m.). NEMS(individual) and NEMS(8 a.m.) were collected on 3236 patient days; 687 patients were involved. Significantly lower scores were found for the NEMS(8 a.m.) (25.0 +/- 8.7) compared to the NEMS(individual) (26.1 +/- 8.9, p < 0.01); the intraclass correlation coefficient (ICC) was good but not excellent: 0.78. The inter-rater correlation between the two NEMS scores was high or very high (kappa = 0.6-1.0) for six out of nine variables of the NEMS. Different sampling strategies produce different score values, especially at the end of the ICU stay. This has to be taken into account when using the NEMS in quality assurance projects and multi-center studies.

  5. Monthly fluctuations of insomnia symptoms in a population-based sample.

    PubMed

    Morin, Charles M; Leblanc, M; Ivers, H; Bélanger, L; Mérette, Chantal; Savard, Josée; Jarrin, Denise C

    2014-02-01

    To document the monthly changes in sleep/insomnia status over a 12-month period; to determine the optimal time intervals to reliably capture new incident cases and recurrent episodes of insomnia and the likelihood of its persistence over time. Participants were 100 adults (mean age = 49.9 years; 66% women) randomly selected from a larger population-based sample enrolled in a longitudinal study of the natural history of insomnia. They completed 12 monthly telephone interviews assessing insomnia, use of sleep aids, stressful life events, and physical and mental health problems in the previous month. A total of 1,125 interviews of a potential 1,200 were completed. Based on data collected at each assessment, participants were classified into one of three subgroups: good sleepers, insomnia symptoms, and insomnia syndrome. At baseline, 42 participants were classified as good sleepers, 34 met criteria for insomnia symptoms, and 24 for an insomnia syndrome. There were significant fluctuations of insomnia over time, with 66% of the participants changing sleep status at least once over the 12 monthly assessments (51.5% for good sleepers, 59.5% for insomnia syndrome, and 93.4% for insomnia symptoms). Changes of status were more frequent among individuals with insomnia symptoms at baseline (mean = 3.46, SD = 2.36) than among those initially classified as good sleepers (mean = 2.12, SD = 2.70). Among the subgroup with insomnia symptoms at baseline, 88.3% reported improved sleep (i.e., became good sleepers) at least once over the 12 monthly assessments compared to 27.7% whose sleep worsened (i.e., met criteria for an insomnia syndrome) during the same period. Among individuals classified as good sleepers at baseline, risks of developing insomnia symptoms and syndrome over the subsequent months were, respectively, 48.6% and 14.5%. 
    Monthly assessment over an interval of 6 months was found to be the most reliable for estimating incidence rates, while an interval of 3 months proved the most reliable for defining chronic insomnia. Monthly assessment of insomnia and sleep patterns revealed significant variability over the course of a 12-month period. These findings highlight the importance for future epidemiological studies of conducting repeated assessments at intervals shorter than the typical yearly one in order to reliably capture the natural course of insomnia over time.

  6. Subjective versus objective evening chronotypes in bipolar disorder.

    PubMed

    Gershon, Anda; Kaufmann, Christopher N; Depp, Colin A; Miller, Shefali; Do, Dennis; Zeitzer, Jamie M; Ketter, Terence A

    2018-01-01

    Disturbed sleep timing is common in bipolar disorder (BD). However, most research is based upon self-reports. We examined relationships between subjective versus objective assessments of sleep timing in BD patients versus controls. We studied 61 individuals with bipolar I or II disorder and 61 healthy controls. Structured clinical interviews assessed psychiatric diagnoses, and clinician-administered scales assessed current mood symptom severity. For subjective chronotype, we used the Composite Scale of Morningness (CSM) questionnaire, using original and modified (1, ¾, ⅔, and ½ SD below mean CSM score) thresholds to define evening chronotype. Objective chronotype was calculated as the percentage of nights (50%, 66.7%, 75%, or 90% of all nights) with sleep interval midpoints at or before (non-evening chronotype) vs. after (evening chronotype) 04:15:00 (4:15:00a.m.), based on 25-50 days of continuous actigraph data. BD participants and controls differed significantly with respect to CSM mean scores and CSM evening chronotypes using modified, but not original, thresholds. Groups also differed significantly with respect to chronotype based on sleep interval midpoint means, and based on the threshold of 75% of sleep intervals with midpoints after 04:15:00. Subjective and objective chronotypes correlated significantly with one another. Twenty-one consecutive intervals were needed to yield an evening chronotype classification match of ≥ 95% with that made using the 75% of sleep intervals threshold. Limited sample size/generalizability. Subjective and objective chronotype measurements were correlated with one another in participants with BD. Using population-specific thresholds, participants with BD had a later chronotype than controls. Copyright © 2017 Elsevier B.V. All rights reserved.
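
    The objective-chronotype criterion above (sleep-interval midpoint after 04:15 on at least a given fraction of nights) can be sketched as follows. The helper functions and example nights are hypothetical; real actigraph processing involves far more than onset/offset pairs:

```python
CUTOFF_MIN = 4 * 60 + 15  # 04:15, in minutes after midnight

def midpoint_minutes(onset, offset):
    """Midpoint of one night's sleep interval, in minutes after midnight.
    onset/offset are (hour, minute) pairs; an offset at or before the
    onset is taken to fall on the following day."""
    start = onset[0] * 60 + onset[1]
    end = offset[0] * 60 + offset[1]
    if end <= start:
        end += 24 * 60
    return ((start + end) / 2) % (24 * 60)

def evening_chronotype(intervals, threshold=0.75):
    """Evening chronotype when at least `threshold` of nights have a
    sleep midpoint strictly after 04:15 (the 75%-of-nights criterion)."""
    late = sum(1 for on, off in intervals if midpoint_minutes(on, off) > CUTOFF_MIN)
    return late / len(intervals) >= threshold

# Hypothetical nights with midpoints at 05:30, 06:00, 03:00, and 06:30:
nights = [((1, 30), (9, 30)), ((2, 0), (10, 0)), ((23, 0), (7, 0)), ((2, 30), (10, 30))]
print(evening_chronotype(nights))  # 3 of 4 midpoints fall after 04:15
```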

  7. Lack of effect of oral cabotegravir on the pharmacokinetics of a levonorgestrel/ethinyl oestradiol‐containing oral contraceptive in healthy adult women

    PubMed Central

    Trezza, Christine; Ford, Susan L.; Gould, Elizabeth; Lou, Yu; Huang, Chuyun; Ritter, James M.; Buchanan, Ann M.; Spreen, William

    2017-01-01

    Aims This study aimed to investigate whether cabotegravir (CAB), an integrase inhibitor in development for treatment and prevention of human immunodeficiency virus‐1, influences the pharmacokinetics (PK) of a levonorgestrel (LNG) and ethinyl oestradiol (EO)–containing oral contraceptive (OC) in healthy women. Methods In this open‐label, fixed‐sequence crossover study, healthy female subjects received LNG 0.15 mg/EO 0.03 mg tablet once daily Days 1–10 alone and with oral CAB 30 mg once daily Days 11–21. At the end of each treatment period, subjects underwent predose sampling for concentrations of follicle‐stimulating hormone, luteinizing hormone, and progesterone and serial PK sampling for plasma LNG, EO, and CAB concentrations. Results Twenty women were enrolled, and 19 completed the study. One subject was withdrawn due to an adverse event unrelated to study medications. Geometric least squares mean ratios (90% confidence interval) of LNG + CAB vs. LNG alone for LNG area under the plasma concentration–time curve over the dosing interval of duration τ and maximum observed plasma concentration were 1.12 (1.07–1.18) and 1.05 (0.96–1.15), respectively. Geometric least squares mean ratio (90% confidence interval) of EO + CAB vs. EO alone for EO area under the plasma concentration–time curve over the dosing interval of duration τ and maximum observed plasma concentration were 1.02 (0.97–1.08) and 0.92 (0.83–1.03), respectively. Steady‐state CAB PK parameters were comparable to historical values. There was no apparent difference in mean luteinizing hormone, follicle‐stimulating hormone, and progesterone concentrations between periods. No clinically significant trends in laboratory values, vital signs, or electrocardiography values were observed. Conclusions Repeat doses of oral CAB had no significant effect on LNG/EO PK or pharmacodynamics, which supports CAB coadministration with LNG/EO OCs in clinical practice. PMID:28087972
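
    Geometric least squares mean ratios like those above are formed on the log scale. A simplified paired-analysis sketch (illustrative mechanics only, not the study's least-squares mixed model; the function and input values are hypothetical):

```python
import math
from statistics import mean, stdev
from scipy import stats

def gmr_90ci(test_vals, ref_vals):
    """Geometric mean ratio (test/reference) with a 90% CI from paired
    log-normal PK parameters: analyse log-differences, then exponentiate."""
    d = [math.log(t) - math.log(r) for t, r in zip(test_vals, ref_vals)]
    m, se = mean(d), stdev(d) / math.sqrt(len(d))
    tcrit = stats.t.ppf(0.95, len(d) - 1)  # two-sided 90% interval
    return tuple(math.exp(x) for x in (m, m - tcrit * se, m + tcrit * se))

# Hypothetical AUC values for four subjects, with and without the probe drug:
print(gmr_90ci([110.0, 105.0, 98.0, 120.0], [100.0, 100.0, 100.0, 100.0]))
```

    A ratio near 1 with the CI inside the conventional no-effect bounds is what supports the "lack of effect" conclusion above.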

  8. When continuous observations just won't do: developing accurate and efficient sampling strategies for the laying hen.

    PubMed

    Daigle, Courtney L; Siegford, Janice M

    2014-03-01

    Continuous observation is the most accurate way to determine animals' actual time budget and can provide a 'gold standard' representation of resource use, behavior frequency, and duration. Continuous observation is useful for capturing behaviors that are of short duration or occur infrequently. However, collecting continuous data is labor intensive and time consuming, making multiple individual or long-term data collection difficult. Six non-cage laying hens were video recorded for 15 h and behavioral data collected every 2 s were compared with data collected using scan sampling intervals of 5, 10, 15, 30, and 60 min and subsamples of 2 second observations performed for 10 min every 30 min, 15 min every 1 h, 30 min every 1.5 h, and 15 min every 2 h. Three statistical approaches were used to provide a comprehensive analysis to examine the quality of the data obtained via different sampling methods. General linear mixed models identified how the time budget from the sampling techniques differed from continuous observation. Correlation analysis identified how strongly results from the sampling techniques were associated with those from continuous observation. Regression analysis identified how well the results from the sampling techniques were associated with those from continuous observation, changes in magnitude, and whether a sampling technique had bias. Static behaviors were well represented with scan and time sampling techniques, while dynamic behaviors were best represented with time sampling techniques. Methods for identifying an appropriate sampling strategy based upon the type of behavior of interest are outlined and results for non-caged laying hens are presented. Copyright © 2013 Elsevier B.V. All rights reserved.
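
    The comparison above between continuous (2-s) observation and scan sampling can be mimicked with a toy simulation: build a synthetic 15-h behavioural record containing rare bouts, then compare the time budget recovered at coarser scan intervals. Entirely illustrative; not the study's data or statistical analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "continuous" record: one observation every 2 s over 15 h, with a
# dozen rare, variable-length bouts of a behaviour of interest embedded
# in a background state.
n = 15 * 3600 // 2                 # number of 2-s observations
state = np.zeros(n, dtype=int)     # 0 = background, 1 = rare bout
for start in rng.integers(0, n - 300, size=12):
    state[start:start + rng.integers(30, 300)] = 1

def time_budget(seq):
    """Fraction of observations spent in the rare behaviour."""
    return float(seq.mean())

def scan_sample(seq, every_s):
    """Keep one 2-s observation per scan interval of `every_s` seconds."""
    return seq[:: every_s // 2]

truth = time_budget(state)             # the continuous 'gold standard'
for every in (300, 900, 1800, 3600):   # 5, 15, 30, 60 min scans
    err = abs(time_budget(scan_sample(state, every)) - truth)
    print(f"{every // 60:3d} min scans: |error| = {err:.4f}")
```

    Short, infrequent bouts like these are exactly the behaviours the abstract flags as poorly captured by coarse scan sampling.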

  9. A computer system for analysis and transmission of spirometry waveforms using volume sampling.

    PubMed

    Ostler, D V; Gardner, R M; Crapo, R O

    1984-06-01

    A microprocessor-controlled data gathering system for telemetry and analysis of spirometry waveforms was implemented using a completely digital design. Spirometry waveforms were obtained from an optical shaft encoder attached to a rolling seal spirometer. Time intervals between 10-ml volume changes (volume sampling) were stored. The digital design eliminated problems of analog signal sampling. The system measured flows up to 12 liters/sec with 5% accuracy and volumes up to 10 liters with 1% accuracy. Transmission of 10 waveforms took about 3 min. Error detection assured that no data were lost or distorted during transmission. A pulmonary physician at the central hospital reviewed the volume-time and flow-volume waveforms and interpretations generated by the central computer before forwarding the results and consulting with the rural physician. This system is suitable for use in a major hospital, rural hospital, or small clinic because of the system's simplicity and small size.
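
    Volume sampling as described above stores the time interval between successive 10-ml volume changes, so instantaneous flow is simply the fixed volume increment divided by each interval. A minimal sketch:

```python
def flows_from_intervals(dt_s, dv_l=0.01):
    """Instantaneous flow (L/s) recovered from volume sampling: the
    spirometer logs the time interval (s) between successive fixed
    10-ml (0.01 L) volume increments, so flow = increment / interval."""
    return [dv_l / dt for dt in dt_s]

# Intervals shrink as the subject exhales harder:
print(flows_from_intervals([0.020, 0.010, 0.005]))  # 0.5, 1.0, 2.0 L/s
```

    At the stated maximum of 12 liters/sec, consecutive 10-ml increments arrive roughly every 0.8 ms, which indicates the timing resolution the shaft-encoder interface must support.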

  10. Atmospheric Delta14C Record from Wellington (1954-1993)

    DOE Data Explorer

    Manning, M R. [National Institute of Water and Atmospheric Research, Ltd., Lower Hutt, New Zealand; Melhuish, W. H. [National Institute of Water and Atmospheric Research, Ltd., Lower Hutt, New Zealand

    1994-09-01

    Trays containing ~2 L of 5 normal NaOH carbonate-free solution are typically exposed for intervals of 1-2 weeks, and the atmospheric CO2 absorbed during that time is recovered by acid evolution. Considerable fractionation occurs during absorption into the NaOH solution, and the standard fractionation correction (Stuiver and Polach 1977) is used to determine a δ14C value corrected to δ13C = -25 per mil. Some samples reported here were taken using BaOH solution or with extended tray exposure times. These variations in procedure do not appear to affect the results (Manning et al. 1990). A few early measurements were made by bubbling air through columns of NaOH for several hours. These samples have higher δ13C values. Further details on the sampling methods are provided in Manning et al. (1990) and Rafter and Fergusson (1959).
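
    The fractionation correction cited above normalises δ14C to δ13C = -25 per mil. The sketch below assumes the conventional algebraic form of the Stuiver and Polach (1977) correction, written from memory; verify against the original reference before use:

```python
def delta14C(d14C_measured, d13C):
    """Fractionation-corrected Delta14C (per mil), normalising to
    d13C = -25 per mil, under the assumed conventional form
        Delta = d14C - 2 * (d13C + 25) * (1 + d14C / 1000).
    """
    return d14C_measured - 2.0 * (d13C + 25.0) * (1.0 + d14C_measured / 1000.0)

# A sample already at d13C = -25 per mil needs no correction:
print(delta14C(150.0, -25.0))  # -> 150.0
```

    Samples that fractionated toward heavier δ13C during absorption, like the bubbled-column measurements mentioned above, receive a correspondingly larger downward correction.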

  11. Application of a simple recording system to the analysis of free-play behavior in autistic children1

    PubMed Central

    Boer, Arend P.

    1968-01-01

    An observational system, which has been developed to facilitate recording of the total behavioral repertoire of autistic children, involves time-sampling recording of behavior with the help of a common Stenograph machine. Categories which exhausted all behavior were defined. Each category corresponded with a designated key on the Stenograph machine. The observer depressed one key at each 1-sec interval. The observer was paced by audible beats from a metronome. A naive observer can be used with this method. The observer is not mechanically limited and a minimum of observer training is required to obtain reliable measures. The data sampled during a five-week observation period indicated the stability of a taxonomic instrument of behavior based upon direct, time-sampling observations and the stability of spontaneous autistic behavior. Results showed that the behavior of the subjects was largely nonrandom and unsocialized in character. PMID:16795193

  12. Application of a simple recording system to the analysis of free-play behavior in autistic children.

    PubMed

    Boer, A P

    1968-01-01

    An observational system, which has been developed to facilitate recording of the total behavioral repertoire of autistic children, involves time-sampling recording of behavior with the help of a common Stenograph machine. Categories which exhausted all behavior were defined. Each category corresponded with a designated key on the Stenograph machine. The observer depressed one key at each 1-sec interval. The observer was paced by audible beats from a metronome. A naive observer can be used with this method. The observer is not mechanically limited and a minimum of observer training is required to obtain reliable measures. The data sampled during a five-week observation period indicated the stability of a taxonomic instrument of behavior based upon direct, time-sampling observations and the stability of spontaneous autistic behavior. Results showed that the behavior of the subjects was largely nonrandom and unsocialized in character.

  13. Resampling methods in Microsoft Excel® for estimating reference intervals

    PubMed Central

    Theodorsson, Elvar

    2015-01-01

Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions, which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5 and 97.5 percentiles.
The purpose of this paper is to introduce the reader to resampling estimation techniques in general and in using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular.
Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples. PMID:26527366

  14. Resampling methods in Microsoft Excel® for estimating reference intervals.

    PubMed

    Theodorsson, Elvar

    2015-01-01

Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions, which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5 and 97.5 percentiles.
The purpose of this paper is to introduce the reader to resampling estimation techniques in general and in using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular.
Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.
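The percentile-bootstrap procedure these two records describe is easy to sketch outside Excel as well. The version below uses illustrative data and simple nearest-rank percentiles (Excel's recommended functions interpolate instead): resample the reference values with replacement many times and average the resampled 2.5th/97.5th percentile limits.

```python
# Sketch of a percentile bootstrap for reference intervals: resample the
# reference values with replacement and estimate the 2.5th and 97.5th
# percentiles of each resample. Data and sample size are illustrative.
import random
import statistics

def bootstrap_reference_interval(values, n_boot=1000, seed=42):
    """Bootstrap estimates of the 2.5 and 97.5 percentile reference limits."""
    rng = random.Random(seed)
    lows, highs = [], []
    for _ in range(n_boot):
        resample = sorted(rng.choices(values, k=len(values)))
        # Nearest-rank percentiles; Excel uses interpolated variants.
        lows.append(resample[max(0, round(0.025 * len(values)) - 1)])
        highs.append(resample[min(len(values) - 1, round(0.975 * len(values)) - 1)])
    return statistics.mean(lows), statistics.mean(highs)

ref = [4.1, 4.5, 4.7, 5.0, 5.2, 5.3, 5.5, 5.8, 6.0, 6.4] * 4  # 40 values
low, high = bootstrap_reference_interval(ref)
print(low, high)
```

With only ~40 reference values, as the abstract notes, these resampled limits are more defensible than parametric ones when the data are clearly non-Gaussian.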

  15. Paleomagnetic and rock magnetic study of IODP Site U1408 in the Northwest Atlantic - toward the high-resolution relative paleointensity estimate during the middle Eocene

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Yamazaki, T.; Oda, H.

    2015-12-01

We have conducted paleomagnetic and rock magnetic measurements on the sedimentary sections recovered from Integrated Ocean Drilling Program (IODP) Site U1408 in the Northwest Atlantic, off Newfoundland. The measurements were made on u-channel samples using a pass-through superconducting rock magnetometer: remanent magnetizations (natural, anhysteretic, and isothermal remanent magnetizations: NRM, ARM, and IRM) were subjected to stepwise alternating field (AF) demagnetization up to 80 mT and measured at 1-cm spacing at each step. The characteristic remanent magnetization (ChRM) was resolved after AF demagnetization of 20-30 mT for most of the studied interval. As a result, we could identify several polarity reversals that were correlated with the geomagnetic polarity time scale of Gradstein et al. (2012) (Geologic Time Scale 2012), with reference to the shipboard biostratigraphy (Norris et al., 2014). The interval at ~33-157 mcd (meter composite depth) was interpreted to span Chrons C18n.1n to C20n, with Chron C19n missing because of somewhat ambiguous magnetic signals in the interval at ~70-110 mcd. The correlation provided an age model implying a sedimentation rate of about 2-4 cm/kyr during these chrons. One interval shows relatively constant ARM and IRM intensities as well as ratios of ARM to IRM (ARM/IRM): the interval at ~37-90 mcd yielded ARM intensities of 0.2-0.4 A/m, IRM intensities of 1-2 A/m, and ARM/IRM of 0.17-0.20. This interval corresponds to Chron C18, and its estimated sedimentation rate is ~2 cm/kyr. These results suggest that a high-resolution relative paleointensity estimate for the middle Eocene is potentially possible. We will report a preliminary estimate.

  16. Long-term Stability and Reliability of Baseline Cognitive Assessments in High School Athletes Using ImPACT at 1-, 2-, and 3-year Test-Retest Intervals.

    PubMed

    Brett, Benjamin L; Smyk, Nathan; Solomon, Gary; Baughman, Brandon C; Schatz, Philip

    2016-08-18

The ImPACT (Immediate Post-Concussion Assessment and Cognitive Testing) neurocognitive testing battery is a widely used tool for the assessment and management of sports-related concussion. Research on the stability of ImPACT in high school athletes at 1- and 2-year intervals has been inconsistent, requiring further investigation. We documented 1-, 2-, and 3-year test-retest reliability of repeated ImPACT baseline assessments in a sample of high school athletes, using multiple statistical methods for examining stability. A total of 1,510 high school athletes completed baseline cognitive testing using the online ImPACT test battery at test-retest intervals of approximately 1 (N = 250), 2 (N = 1,146), and 3 years (N = 114). No participant sustained a concussion between assessments. Intraclass correlation coefficients (ICCs) for composite scores ranged from 0.36 to 0.90 and showed little change as intervals between assessments increased. Reliable change indices and regression-based measures (RBMs) examining test-retest stability demonstrated a lack of significant change in composite scores across the various time intervals, with very few cases (0%-6%) falling outside of 95% confidence intervals. The results suggest ImPACT composite scores remain considerably stable across 1-, 2-, and 3-year test-retest intervals in high school athletes, when considering both ICCs and RBMs. Annually ascertaining baseline scores continues to be optimal for ensuring accurate and individualized management of injury for concussed athletes. For instances in which more recent baselines are not available (1-2 years), clinicians should utilize more conservative range estimates in determining the presence of clinically meaningful change in cognitive performance. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
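The reliable change indices mentioned in this record can be illustrated with the common Jacobson-Truax formulation; the paper may use a different variant, and the score, SD, and reliability values below are invented for illustration only:

```python
# Sketch of a Jacobson-Truax style reliable change index (RCI): the
# retest-minus-baseline difference scaled by the standard error of the
# difference, derived from baseline SD and test-retest reliability.
import math

def reliable_change_index(x1, x2, sd_baseline, r_testretest):
    """|RCI| > 1.96 indicates change beyond the 95% interval."""
    sem = sd_baseline * math.sqrt(1.0 - r_testretest)   # standard error of measurement
    se_diff = math.sqrt(2.0) * sem                      # SE of the difference score
    return (x2 - x1) / se_diff

# A hypothetical composite that moves from 85 to 90 with SD 10 and ICC 0.70:
rci = reliable_change_index(85.0, 90.0, 10.0, 0.70)
print(round(rci, 2))  # about 0.65, well under 1.96 -> not a reliable change
```

Lower ICCs widen the standard error of the difference, which is one reason the study's 0%-6% rate of cases outside the 95% interval is reassuring about baseline stability.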

  17. Upfront dilution of ferritin samples to reduce hook effect, improve turnaround time and reduce costs.

    PubMed

    Wu, Shu Juan; Hayden, Joshua A

    2018-02-15

Sandwich immunoassays offer advantages in the clinical laboratory but can yield erroneously low results due to the hook (prozone) effect, especially with analytes whose concentrations span several orders of magnitude, such as ferritin. This study investigated a new approach to reduce the likelihood of hook effect in ferritin immunoassays by performing upfront, five-fold dilutions of all samples for ferritin analysis. The impact of this change on turnaround time and costs was also investigated. Ferritin concentrations were analysed in routine clinical practice with and without upfront dilutions on Siemens Centaur® XP (Siemens Healthineers, Erlangen, Germany) immunoanalysers. In addition, one month of baseline data (1026 results) were collected prior to implementing upfront dilutions and one month of data (1033 results) were collected after implementation. Without upfront dilutions, hook effect was observed in samples with ferritin concentrations as low as 86,028 µg/L. With upfront dilutions, samples with ferritin concentrations as high as 126,050 µg/L yielded values greater than the measurement interval and would have been diluted until an accurate value was obtained. The implementation of upfront dilution of ferritin samples led to a decrease in turnaround time from a median of 2 hours and 3 minutes to 1 hour and 18 minutes (P = 0.002). Implementation of upfront dilutions of all ferritin samples reduced the possibility of hook effect, improved turnaround time and saved the cost of performing additional dilutions.

  18. Comparison of passive diffusion bag samplers and submersible pump sampling methods for monitoring volatile organic compounds in ground water at Area 6, Naval Air Station, Whidbey Island, Washington

    USGS Publications Warehouse

    Huffman, Raegan L.

    2002-01-01

    Ground-water samples were collected in April 1999 at Naval Air Station Whidbey Island, Washington, with passive diffusion samplers and a submersible pump to compare concentrations of volatile organic compounds (VOCs) in water samples collected using the two sampling methods. Single diffusion samplers were installed in wells with 10-foot screened intervals, and multiple diffusion samplers were installed in wells with 20- to 40-foot screened intervals. The diffusion samplers were recovered after 20 days and the wells were then sampled using a submersible pump. VOC concentrations in the 10-foot screened wells in water samples collected with diffusion samplers closely matched concentrations in samples collected with the submersible pump. Analysis of VOC concentrations in samples collected from the 20- to 40-foot screened wells with multiple diffusion samplers indicated vertical concentration variation within the screened interval, whereas the analysis of VOC concentrations in samples collected with the submersible pump indicated mixing during pumping. The results obtained using the two sampling methods indicate that the samples collected with the diffusion samplers were comparable with and can be considerably less expensive than samples collected using a submersible pump.

  19. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation. However, little consideration has been given to the issue of optimal sampling-time selection for parameter estimation. Time-course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the morass of selecting good initial values or of becoming stuck in local optima, as usually accompanies conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
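As a toy illustration of the variance-minimization idea (not the paper's quantum-inspired algorithm), consider a one-parameter decay model y(t) = exp(-kt). The maximum-likelihood variance of k shrinks as the Fisher information, the summed squared sensitivity (dy/dk)^2, grows, so informative time points cluster where |dy/dk| = t*exp(-kt) is largest:

```python
# Illustrative sketch: choose sampling times for y(t) = exp(-k*t) by
# maximizing the Fisher information sum_i (dy/dk)^2, here by exhaustive
# search over pairs of candidate times. Model and grid are assumptions.
import math
from itertools import combinations

def fisher_information(times, k):
    # dy/dk = -t * exp(-k*t); sum of squared sensitivities over the times.
    return sum((t * math.exp(-k * t)) ** 2 for t in times)

def best_two_points(candidates, k):
    """Most informative pair of sampling times from a candidate grid."""
    return max(combinations(candidates, 2), key=lambda ts: fisher_information(ts, k))

grid = [0.25 * i for i in range(1, 41)]  # candidate times 0.25 .. 10
print(best_two_points(grid, k=1.0))     # sensitivity peaks near t = 1/k
```

Real pathway models have many parameters and constraints, which is why the paper resorts to an evolutionary search instead of enumeration.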

  20. ECCM Scheme against Interrupted Sampling Repeater Jammer Based on Parameter-Adjusted Waveform Design

    PubMed Central

    Wei, Zhenhua; Peng, Bo; Shen, Rui

    2018-01-01

Interrupted sampling repeater jamming (ISRJ) is an effective way of deceiving coherent radar sensors, especially linear frequency modulated (LFM) radar. In this paper, for a simplified scenario with a single jammer, we propose a dynamic electronic counter-countermeasure (ECCM) scheme based on jammer parameter estimation and transmitted signal design. Firstly, the LFM waveform is transmitted to estimate the main jamming parameters by investigating the discontinuousness of the ISRJ's time-frequency (TF) characteristics. Then, a parameter-adjusted intra-pulse frequency coded signal, whose ISRJ signal after matched filtering forms only a single false target, is designed adaptively according to the estimated parameters, i.e., sampling interval, sampling duration and repeater times. Ultimately, for typical jamming scenarios with different jamming-to-signal ratios (JSR) and duty cycles, we propose two particular ISRJ suppression approaches. Simulation results validate the effective performance of the proposed scheme for countering the ISRJ, and the trade-off relationship between the two approaches is demonstrated. PMID:29642508

  1. Determination of ammonium ion by fluorometry or spectrophotometry after on-line derivatization with o-phthalaldehyde

    NASA Technical Reports Server (NTRS)

    Goyal, S. S.; Rains, D. W.; Huffaker, R. C.

    1988-01-01

    A fast, sensitive, simple, and highly reproducible method for routine assay of ammonium ion (NH4+) was developed by using HPLC equipment. The method is based on the reaction of NH4+ with o-phthalaldehyde (OPA) in the presence of 2-mercaptoethanol. After an on-line derivatization, the resulting NH4(+)-OPA product was quantified by using fluorometric or spectrophotometric detection. For fluorometric detection, the excitation and emission wavelengths were 410 and 470 nm, respectively. The spectrophotometric detection was made by measuring absorbance at 410 nm. Results on the effects of OPA-reagent composition and pH, reaction temperature, sample matrix, and linearity of the assay are presented. Even though it took about 2 min from the time of sample injection to the appearance of sample peak, sample injections could be overlapped at an interval of about 1 min. Thus, the actual time needed for analysis was about 1 min per assay. The method can be used in a fully automated mode by using an autosampler injector.

  2. Ratio-based lengths of intervals to improve fuzzy time series forecasting.

    PubMed

    Huarng, Kunhuang; Yu, Tiffany Hui-Kuang

    2006-04-01

    The objective of this study is to explore ways of determining the useful lengths of intervals in fuzzy time series. It is suggested that ratios, instead of equal lengths of intervals, can more properly represent the intervals among observations. Ratio-based lengths of intervals are, therefore, proposed to improve fuzzy time series forecasting. Algebraic growth data, such as enrollments and the stock index, and exponential growth data, such as inventory demand, are chosen as the forecasting targets, before forecasting based on the various lengths of intervals is performed. Furthermore, sensitivity analyses are also carried out for various percentiles. The ratio-based lengths of intervals are found to outperform the effective lengths of intervals, as well as the arbitrary ones in regard to the different statistical measures. The empirical analysis suggests that the ratio-based lengths of intervals can also be used to improve fuzzy time series forecasting.
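The core idea of ratio-based intervals can be sketched simply: instead of slicing the universe of discourse into equal-width bins, each boundary grows by a fixed ratio, so interval length scales with magnitude. This is a hedged illustration of the concept only; the paper's percentile-based procedure for choosing the ratio differs in detail.

```python
# Sketch of ratio-based interval boundaries for a fuzzy time series
# universe of discourse: each boundary is the previous one times
# (1 + ratio), so larger observations get proportionally wider intervals.

def ratio_based_boundaries(low, high, ratio):
    """Boundaries from `low` (must be > 0) to at least `high`, growing by `ratio`."""
    bounds = [low]
    while bounds[-1] < high:
        bounds.append(bounds[-1] * (1.0 + ratio))
    return bounds

# A 10% ratio over a stock-index-like range:
print(ratio_based_boundaries(1000.0, 2000.0, 0.10))
```

Compare this with equal-length bins of the same count: near the top of the range the ratio-based bins are much wider, matching the larger absolute fluctuations of algebraic or exponential growth data.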

  3. Herbal dryer: drying of ginger (zingiber officinale) using tray dryer

    NASA Astrophysics Data System (ADS)

    Haryanto, B.; Hasibuan, R.; Alexander; Ashari, M.; Ridha, M.

    2018-02-01

Drying is widely used as a method to preserve food because of its convenience and affordability. Drying of ginger using a tray dryer was carried out under various drying conditions, such as air-drying flow, air-drying temperature, and sample dimensions, to achieve the highest drying rate. Samples of various dimensions were placed in the tray dryer and dried using various air-drying flows and temperatures. The weights of the samples were recorded at 3-minute intervals. Drying was stopped after three consecutive constant weighings. The drying data were collected to construct drying curves. The drying curves show that the highest drying rate is achieved using the highest air flow and temperature.
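Turning the 3-minute weight readings into a drying curve is straightforward: express moisture as a ratio relative to the final constant (equilibrium) weight, and the drying rate as weight loss per interval. A sketch with made-up numbers:

```python
# Sketch: build a drying curve from periodic weight readings, assuming
# the final constant reading is the equilibrium (dry) weight. The weight
# series below is invented for illustration.

def drying_curve(weights_g, dt_min=3.0):
    """Return (moisture_ratios, drying_rates_g_per_min)."""
    w_dry = weights_g[-1]  # last constant reading taken as dry weight
    w0 = weights_g[0]
    moisture = [(w - w_dry) / (w0 - w_dry) for w in weights_g]
    rates = [(weights_g[i] - weights_g[i + 1]) / dt_min
             for i in range(len(weights_g) - 1)]
    return moisture, rates

weights = [50.0, 44.0, 39.5, 36.5, 35.0, 35.0, 35.0]  # grams, every 3 min
m, r = drying_curve(weights)
print(m[0], m[-1])  # 1.0 at the start, 0.0 once the weight is constant
```

The "three consecutive constant weighings" stopping rule in the abstract corresponds to the trailing flat portion of this series.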

  4. Global Geopotential Modelling from Satellite-to-Satellite Tracking,

    DTIC Science & Technology

    1981-10-01

measured range-rate sampled at regular intervals. The expansion of the potential has been truncated at degree n = 331, because little information on... The averaging interval is 4 s, and sampling takes place every 4 s; if residual data are used, with respect to a reference model of specified accuracy, complete... (The remaining fragments are table-of-contents residue: appendix listings for LEGFDN, MODEL, and NVAR; a sample output; and degree-by-degree detailed listings.)

  5. Importance of the Time Interval between Bowel Preparation and Colonoscopy in Determining the Quality of Bowel Preparation for Full-Dose Polyethylene Glycol Preparation

    PubMed Central

    Kim, Tae Kyung; Kim, Hyung Wook; Kim, Su Jin; Ha, Jong Kun; Jang, Hyung Ha; Hong, Young Mi; Park, Su Bum; Choi, Cheol Woong; Kang, Dae Hwan

    2014-01-01

Background/Aims The quality of bowel preparation (QBP) is an important factor in performing a successful colonoscopy. Several factors influencing QBP have been reported; however, some factors, such as the optimal preparation-to-colonoscopy time interval, remain controversial. This study aimed to determine the factors influencing QBP and the optimal time interval for full-dose polyethylene glycol (PEG) preparation. Methods A total of 165 patients who underwent colonoscopy from June 2012 to August 2012 were prospectively evaluated. The QBP was assessed using the Ottawa Bowel Preparation Scale (Ottawa) score, and several factors potentially influencing the QBP were analyzed. Results Colonoscopies with a time interval of 5 to 6 hours had the best Ottawa scores in all parts of the colon. Patients with time intervals of 6 hours or less had better QBP than those with time intervals of more than 6 hours (p=0.046). In the multivariate analysis, the time interval (odds ratio, 1.897; 95% confidence interval, 1.006 to 3.577; p=0.048) was the only significant contributor to a satisfactory bowel preparation. Conclusions The optimal time interval was 5 to 6 hours for the full-dose PEG method, and the time interval was the only significant contributor to a satisfactory bowel preparation. PMID:25368750

  6. Increasing sensitivity in the measurement of heart rate variability: the method of non-stationary RR time-frequency analysis.

    PubMed

    Melkonian, D; Korner, A; Meares, R; Bahramali, H

    2012-10-01

    A novel method of the time-frequency analysis of non-stationary heart rate variability (HRV) is developed which introduces the fragmentary spectrum as a measure that brings together the frequency content, timing and duration of HRV segments. The fragmentary spectrum is calculated by the similar basis function algorithm. This numerical tool of the time to frequency and frequency to time Fourier transformations accepts both uniform and non-uniform sampling intervals, and is applicable to signal segments of arbitrary length. Once the fragmentary spectrum is calculated, the inverse transform recovers the original signal and reveals accuracy of spectral estimates. Numerical experiments show that discontinuities at the boundaries of the succession of inter-beat intervals can cause unacceptable distortions of the spectral estimates. We have developed a measure that we call the "RR deltagram" as a form of the HRV data that minimises spectral errors. The analysis of the experimental HRV data from real-life and controlled breathing conditions suggests transient oscillatory components as functionally meaningful elements of highly complex and irregular patterns of HRV. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
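The similar basis function algorithm itself is not reproduced in this record. As a stand-in illustration of the key property the abstract emphasizes, acceptance of non-uniform sampling instants, the sketch below evaluates a direct Fourier sum at arbitrary frequencies on an unevenly sampled RR-like series (all numbers synthetic; this is not the authors' method):

```python
# Stand-in sketch: a direct Fourier sum evaluated at chosen frequencies
# works on non-uniformly sampled data, unlike FFT-based estimates that
# require a fixed sampling grid. Synthetic RR-like data below.
import math

def nonuniform_spectrum(times, values, freqs_hz):
    """Magnitude of a direct Fourier sum at each frequency (arbitrary units)."""
    mean = sum(values) / len(values)
    mags = []
    for f in freqs_hz:
        re = sum((v - mean) * math.cos(2 * math.pi * f * t) for t, v in zip(times, values))
        im = sum((v - mean) * math.sin(2 * math.pi * f * t) for t, v in zip(times, values))
        mags.append(math.hypot(re, im) / len(values))
    return mags

# RR-like series with a 0.25 Hz (respiratory-band) oscillation, uneven timing:
times = [0.8 * i + 0.05 * math.sin(i) for i in range(200)]
rr = [0.85 + 0.05 * math.sin(2 * math.pi * 0.25 * t) for t in times]
spec = nonuniform_spectrum(times, rr, [0.05, 0.15, 0.25, 0.35])
print(spec.index(max(spec)))  # peak lands in the 0.25 Hz bin
```

Mean removal here plays the same role as the abstract's concern about boundary discontinuities: uncorrected offsets leak across all frequency bins.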

  7. Precision, time, and cost: a comparison of three sampling designs in an emergency setting.

    PubMed

    Deitchler, Megan; Deconinck, Hedwig; Bergeron, Gilles

    2008-05-02

The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 x 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 x 30 cluster survey with two alternative sampling designs: a 33 x 6 cluster design (33 clusters, 6 observations per cluster) and a 67 x 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 x 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 x 6 and 67 x 3 designs for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage, which show a high intra-cluster correlation. Although the 33 x 6 and 67 x 3 designs provide wider confidence intervals than the 30 x 30 design for child anthropometric indicators, the 33 x 6 and 67 x 3 designs provide the opportunity to conduct an LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 x 30 design does not. For the household-level indicators tested in this study, the 67 x 3 design provides the most precise results. However, our results show that neither the 33 x 6 nor the 67 x 3 design is appropriate for assessing indicators of mortality. In this field application, data collection for the 33 x 6 and 67 x 3 designs required substantially less time and cost than that required for the 30 x 30 design. The findings of this study suggest the 33 x 6 and 67 x 3 designs can provide useful time- and resource-saving alternatives to the 30 x 30 method of data collection in emergency settings.

  8. Precision, time, and cost: a comparison of three sampling designs in an emergency setting

    PubMed Central

    Deitchler, Megan; Deconinck, Hedwig; Bergeron, Gilles

    2008-01-01

The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 × 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 × 30 cluster survey with two alternative sampling designs: a 33 × 6 cluster design (33 clusters, 6 observations per cluster) and a 67 × 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 × 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 × 6 and 67 × 3 designs for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage, which show a high intra-cluster correlation. Although the 33 × 6 and 67 × 3 designs provide wider confidence intervals than the 30 × 30 design for child anthropometric indicators, the 33 × 6 and 67 × 3 designs provide the opportunity to conduct an LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 × 30 design does not. For the household-level indicators tested in this study, the 67 × 3 design provides the most precise results. However, our results show that neither the 33 × 6 nor the 67 × 3 design is appropriate for assessing indicators of mortality. In this field application, data collection for the 33 × 6 and 67 × 3 designs required substantially less time and cost than that required for the 30 × 30 design. The findings of this study suggest the 33 × 6 and 67 × 3 designs can provide useful time- and resource-saving alternatives to the 30 × 30 method of data collection in emergency settings. PMID:18454866
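The precision trade-off these two records report empirically can be sketched with the textbook design-effect formula DEFF = 1 + (m - 1)ρ, where m is the observations per cluster and ρ the intra-cluster correlation. The ρ and p values below are illustrative assumptions; the study measured precision directly, not from this formula.

```python
# Sketch: 95% CI half-width for a proportion under a simple design-effect
# model. Fewer observations per cluster (smaller m) shrinks the design
# effect, which is why 67 x 3 can beat 30 x 30 when rho is high.
import math

def ci_halfwidth(p, n_clusters, m_per_cluster, rho, z=1.96):
    """Half-width of a 95% CI for proportion p under DEFF = 1 + (m-1)*rho."""
    n = n_clusters * m_per_cluster
    deff = 1.0 + (m_per_cluster - 1) * rho
    return z * math.sqrt(deff * p * (1 - p) / n)

for label, c, m in [("30x30", 30, 30), ("33x6", 33, 6), ("67x3", 67, 3)]:
    print(label, round(ci_halfwidth(0.5, c, m, rho=0.3), 3))
```

At ρ = 0 the 30 x 30 design's sheer sample size wins, mirroring the study's finding that the conventional design is more precise for most child-level indicators but not for highly clustered ones like immunization coverage.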

  9. Analysis of groundwater response to tidal fluctuations, Operable Unit 2, Area 8, Naval Base Kitsap, Keyport, Washington

    USGS Publications Warehouse

    Opatz, Chad C.; Dinicola, Richard S.

    2018-05-21

Operable Unit 2, Area 8, at Naval Base Kitsap, Keyport is the site of a former chrome-plating facility that released metals (primarily chromium and cadmium), chlorinated volatile organic compounds, and petroleum compounds into the local environment. To ensure long-term protectiveness, as stipulated in the Fourth Five-Year Review for the site, Naval Facilities Engineering Command Northwest collaborated with the U.S. Environmental Protection Agency, the Washington State Department of Ecology, and the Suquamish Tribe to collect data to monitor the contamination left in place and to ensure the site does not pose a risk to human health or the environment. To support these efforts, refined information was needed on the interaction of fresh groundwater with seawater in response to the tidal fluctuations of up to 13 ft at this nearshore site adjacent to Port Orchard Bay. The information was analyzed to meet the primary objective of this investigation, which was to determine the optimal time during the semi-diurnal and neap-spring tidal cycles to sample groundwater for freshwater contaminants in Area 8 monitoring wells. Groundwater levels and specific conductance in five monitoring wells, along with marine water levels (tidal levels) in Port Orchard Bay, were monitored every 15 minutes over a 3-week period to determine how nearshore groundwater responds to tidal forcing. Time series data were collected from October 24, 2017, to November 16, 2017, a period that included neap and spring tides. Vertical profiles of specific conductance were also measured once in the screened interval of each well prior to instrument deployment to determine whether a freshwater/saltwater interface was present in the well at that particular time. The vertical profiles of specific conductance were measured only once, during an ebbing tide, at approximately the top, middle, and bottom of the saturated thickness within the screened interval of each well.
The landward-most well, MW8-8, was completely freshwater, while one of the most seaward wells, MW8-9, was completely saline. A distinct saltwater interface was measured in the three other shallow wells (MW8-11, MW8-12, and MW8-14), with fresh groundwater at the top underlain by higher-conductivity water. Lag times between minimum spring-tide level and minimum groundwater levels in wells ranged from about 2 to 4.5 hours in the less-than-20-ft-deep wells screened across the water table, and the lag was about 7 hours for the single 48-ft-deep well screened below the water table. Those lag times were surprisingly long considering that the wells are all located within 200 ft of the shoreline and the local geology is largely coarse-grained glacial outwash deposits. Various manmade subsurface features, such as slurry walls and backfilled excavations, likely influence and complicate the connectivity between seawater and groundwater. The specific-conductance time-series data showed clear evidence of substantial saltwater intrusion into the screened intervals of most shallow wells. Unexpectedly, the intrusion was associated with the neap part of the tidal cycle around November 13–16, when relatively low barometric pressure and high southerly winds led to the highest high and low tides measured during the monitoring period. The data consistently indicated that the groundwater had the lowest specific conductance (was least mixed with seawater) during the prior neap tides around October 30, the same period when the shallow groundwater levels were lowest.
Although the specific-conductance response differed somewhat between wells, the data suggest that it is the heights of the actual high-high and low-low tides, regardless of whether they occur during the neap or spring part of the cycle, that allow seawater intrusion into the nearshore aquifer at Area 8. With all the data taken into consideration, the optimal time for sampling the shallow monitoring wells at Area 8 would be centered on a 2–5-hour period following the predicted low-low tide during a neap tide, with due consideration of local atmospheric pressure and wind conditions that can generate tides substantially higher than those predicted from lunar-solar tidal forces. The optimal time for sampling the deeper monitoring wells at Area 8 would be during the 6–8-hour period following a predicted low-low tide, also during the neap part of the tidal cycle. The specific time window for sampling each well following a low tide can be found in table 5. Those periods are when groundwater in the wells is most fresh and least diluted by seawater intrusion. In addition to timing, consideration should be given to collecting undisturbed samples from the top of the screened interval (or the top of the water table, if it lies below the top of the interval) to best characterize contaminant concentrations in freshwater. A downhole conductivity probe could be used to identify the saltwater interface, above which would be the ideal depth for sampling.
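The tide-to-well lag times central to this analysis can be estimated from paired 15-minute series by finding the shift that maximizes their cross-correlation. The sketch below uses synthetic sinusoidal series with a built-in 3-hour lag; the study's actual wells showed lags of roughly 2 to 7 hours.

```python
# Sketch: estimate the lag between tide stage and well water level by
# maximizing the lagged Pearson correlation. Data are synthetic.
import math

def best_lag(tide, well, max_lag):
    """Lag (in samples) at which well levels best track the tide."""
    def corr(lag):
        a = tide[: len(tide) - lag]
        b = well[lag:]
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
        return num / den
    return max(range(max_lag + 1), key=corr)

# Semi-diurnal tide sampled every 15 min; the well lags by 12 samples (3 h).
period = 12.42 * 4  # samples per ~12.42-hour tidal cycle
tide = [math.sin(2 * math.pi * t / period) for t in range(800)]
well = [math.sin(2 * math.pi * (t - 12) / period) for t in range(800)]
print(best_lag(tide, well, max_lag=30) * 15, "minutes")
```

With real data, barometric and wind-driven departures from the predicted tide, like the November 13-16 event described above, would add noise that this simple correlation approach averages over.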

  10. Reproducibility of onset and recovery oxygen uptake kinetics in moderately impaired patients with chronic heart failure

    PubMed Central

    De Vries, Wouter R.; Hoogeveen, Adwin R.; Zonderland, Maria L.; Thijssen, Eric J. M.; Schep, Goof

    2007-01-01

Oxygen (O2) kinetics reflect the ability to adapt to or recover from exercise that is indicative of daily life. In patients with chronic heart failure (CHF), parameters of O2 kinetics have been shown to be useful for clinical purposes such as grading of functional impairment and assessment of prognosis. This study compared the goodness of fit and reproducibility of previously described methods to assess O2 kinetics in these patients. Nineteen CHF patients, New York Heart Association class II–III, performed two constant-load tests on a cycle ergometer at 50% of the maximum workload. Time constants of O2 onset and recovery kinetics (τ) were calculated by mono-exponential modeling with four different sampling intervals (5 and 10 s, 5 and 8 breaths). The goodness of fit was expressed as the coefficient of determination (R2). Onset kinetics were also evaluated by the mean response time (MRT). Considering O2 onset kinetics, τ showed a significant inverse correlation with peak V̇O2 (R = −0.88, using 10-s sampling intervals). The limits of agreement of both τ and MRT, however, were not clinically acceptable. O2 recovery kinetics yielded better reproducibility and goodness of fit. Using the most optimal sampling interval (5 breaths), a change of at least 13 s in τ is needed to exceed normal test-to-test variations. In conclusion, O2 recovery kinetics are more reproducible for clinical purposes than O2 onset kinetics in moderately impaired patients with CHF. It should be recognized that this observation cannot be assumed to be generalizable to more severely impaired CHF patients. PMID:17277937
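The mono-exponential modelling used for the time constant τ can be sketched as follows. With the recovery baseline known, τ falls out of a log-linear least-squares fit; all numbers below are synthetic (noise-free, so the fit recovers τ exactly), and real analyses fit baseline and amplitude too.

```python
# Sketch of mono-exponential O2 recovery kinetics:
#   VO2(t) = baseline + amplitude * exp(-t / tau)
# With baseline known, the slope of ln(VO2 - baseline) vs t is -1/tau.
import math

def fit_tau(times, vo2, baseline):
    """Least-squares slope of ln(VO2 - baseline) vs t gives -1/tau."""
    y = [math.log(v - baseline) for v in vo2]
    n = len(times)
    mt, my = sum(times) / n, sum(y) / n
    slope = (sum((t - mt) * (yy - my) for t, yy in zip(times, y))
             / sum((t - mt) ** 2 for t in times))
    return -1.0 / slope

# Recovery sampled every 10 s with an assumed tau of 45 s (synthetic data):
ts = [10.0 * i for i in range(1, 25)]
data = [0.3 + 1.2 * math.exp(-t / 45.0) for t in ts]
print(round(fit_tau(ts, data, baseline=0.3), 1))  # recovers 45.0
```

The study's finding that a change of at least 13 s in τ is needed to exceed test-to-test variation sets the scale against which such fitted values should be compared.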

  11. Interresponse Time Structures in Variable-Ratio and Variable-Interval Schedules

    ERIC Educational Resources Information Center

    Bowers, Matthew T.; Hill, Jade; Palya, William L.

    2008-01-01

    The interresponse-time structures of pigeon key pecking were examined under variable-ratio, variable-interval, and variable-interval plus linear feedback schedules. Whereas the variable-ratio and variable-interval plus linear feedback schedules generally resulted in a distinct group of short interresponse times and a broad distribution of longer…

  12. Sample entropy predicts lifesaving interventions in trauma patients with normal vital signs.

    PubMed

    Naraghi, L; Mejaddam, A Y; Birkhan, O A; Chang, Y; Cropano, C M; Mesar, T; Larentzakis, A; Peev, M; Sideris, A C; Van der Wilden, G M; Imam, A M; Hwabejire, J O; Velmahos, G C; Fagenholz, P J; Yeh, D; de Moya, M A; King, D R

    2015-08-01

    Heart rate complexity, commonly described as a "new vital sign," has shown promise in predicting injury severity, but its use in clinical practice is not yet widely adopted. We previously demonstrated the ability of this noninvasive technology to predict lifesaving interventions (LSIs) in trauma patients. This study was conducted to prospectively evaluate the utility of real-time, automated, noninvasive, instantaneous sample entropy (SampEn) analysis to predict the need for an LSI in a trauma alert population presenting with normal vital signs. Prospective enrollment of patients who met criteria for trauma team activation and presented with normal vital signs was conducted at a level I trauma center. High-fidelity electrocardiogram recording was used to calculate SampEn and SD of the normal-to-normal R-R interval (SDNN) continuously in real time for 2 hours with a portable, handheld device. Patients who received an LSI were compared to patients without any intervention (non-LSI). Multivariable analysis was performed to control for differences between the groups. Treating clinicians were blinded to results. Of 129 patients enrolled, 38 (29%) received 136 LSIs within 24 hours of hospital arrival. Initial systolic blood pressure was similar in both groups. Lifesaving intervention patients had a lower Glasgow Coma Scale. The mean SampEn on presentation was 0.7 (0.4-1.2) in the LSI group compared to 1.5 (1.1-2.0) in the non-LSI group (P < .0001). The area under the curve with initial SampEn alone was 0.73 (95% confidence interval [CI], 0.64-0.81) and increased to 0.93 (95% CI, 0.89-0.98) after adding sedation to the model. Sample entropy of less than 0.8 yields sensitivity, specificity, negative predictive value, and positive predictive value of 58%, 86%, 82%, and 65%, respectively, with an overall accuracy of 76% for predicting an LSI. SD of the normal-to-normal R-R interval had no predictive value. 
In trauma patients with normal presenting vital signs, decreased SampEn is an independent predictor of the need for LSI. Real-time SampEn analysis may be a useful adjunct to standard vital signs monitoring. Adoption of real-time, instantaneous SampEn monitoring for trauma patients, especially in resource-constrained environments, should be considered. Copyright © 2015 Elsevier Inc. All rights reserved.
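Sample entropy (SampEn) as used in the record above is a standard algorithm. A minimal pure-Python sketch follows; m = 2 and r = 0.2 × SD are conventional defaults, not values taken from the study, and the O(n²) pair counting is fine for short windows:

```python
import math
import random

def sample_entropy(series, m=2, r=None):
    """SampEn(m, r): -ln(A/B), where B counts template pairs of length m
    and A pairs of length m + 1 that match within tolerance r (Chebyshev
    distance, self-matches excluded)."""
    n = len(series)
    if r is None:
        mean = sum(series) / n
        r = 0.2 * math.sqrt(sum((x - mean) ** 2 for x in series) / n)

    def matches(length):
        count = 0
        for i in range(n - m):              # same template count for both lengths
            for j in range(i + 1, n - m):
                if max(abs(series[i + k] - series[j + k])
                       for k in range(length)) <= r:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A smooth periodic signal is far more regular (lower SampEn) than noise
sine = [math.sin(2 * math.pi * i / 20) for i in range(200)]
rng = random.Random(1)
noise = [rng.gauss(0, 1) for _ in range(200)]
print(sample_entropy(sine), sample_entropy(noise))
```

Lower values indicate a more regular (less complex) R-R series, which is why a SampEn below a threshold such as 0.8 flags the higher-risk patients in the study.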

  13. Not All Prehospital Time is Equal: Influence of Scene Time on Mortality

    PubMed Central

    Brown, Joshua B.; Rosengart, Matthew R.; Forsythe, Raquel M.; Reynolds, Benjamin R.; Gestring, Mark L.; Hallinan, William M.; Peitzman, Andrew B.; Billiar, Timothy R.; Sperry, Jason L.

    2016-01-01

    Background Trauma is time-sensitive and minimizing prehospital (PH) time is appealing. However, most studies have not linked increasing PH time with worse outcomes, as raw PH times are highly variable. It is unclear whether specific PH time patterns affect outcomes. Our objective was to evaluate the association of PH time interval distribution with mortality. Methods Patients transported by EMS in the Pennsylvania trauma registry 2000-2013 with total prehospital time (TPT)≥20min were included. TPT was divided into three PH time intervals: response, scene, and transport time. The number of minutes in each PH time interval was divided by TPT to determine the relative proportion each interval contributed to TPT. A prolonged interval was defined as any one PH interval contributing ≥50% of TPT. Patients were classified by prolonged PH interval or no prolonged PH interval (all intervals<50% of TPT). Patients were matched for TPT, and conditional logistic regression determined the association of mortality with PH time pattern, controlling for confounders. PH interventions were explored as potential mediators, and prehospital triage criteria were used to identify patients with time-sensitive injuries. Results There were 164,471 patients included. Patients with prolonged scene time had increased odds of mortality (OR 1.21; 95%CI 1.02–1.44, p=0.03). Prolonged response, transport, and no prolonged interval were not associated with mortality. When adjusting for mediators including extrication and PH intubation, prolonged scene time was no longer associated with mortality (OR 1.06; 0.90–1.25, p=0.50). Together these factors mediated 61% of the effect between prolonged scene time and mortality. Mortality remained associated with prolonged scene time in patients with hypotension, penetrating injury, and flail chest. Conclusions Prolonged scene time is associated with increased mortality. PH interventions partially mediate this association. 
Further study should evaluate whether these interventions drive increased mortality because they prolong scene time or by another mechanism, as reducing scene time may be a target for intervention. Level of Evidence IV, prognostic study PMID:26886000
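The interval-proportion classification in the Methods above can be sketched directly from its definition; the helper name and the example inputs are hypothetical, while the ≥50% threshold is the one stated in the abstract:

```python
def classify_prehospital(response_min, scene_min, transport_min):
    """Return which prehospital interval is 'prolonged' (contributing
    >= 50% of total prehospital time), or 'none' if no single interval
    dominates."""
    total = response_min + scene_min + transport_min
    for name, minutes in (("response", response_min),
                          ("scene", scene_min),
                          ("transport", transport_min)):
        if minutes / total >= 0.5:
            return name
    return "none"

print(classify_prehospital(5, 20, 10))   # scene: 20/35 ≈ 57% of TPT
print(classify_prehospital(10, 12, 11))  # none: every interval < 50%
```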

  14. Relationship between analysis of laser speckle image and Knoop hardness on softening enamel.

    PubMed

    Koshoji, Nelson H; Prates, Renato A; Bussadori, Sandra K; Bortoletto, Carolina C; de Miranda Junior, Walter G; Librantz, André F H; Leal, Cintia Raquel Lima; Oliveira, Marcelo T; Deana, Alessandro M

    2016-09-01

    This study presents the correlation between laser speckle images and enamel hardness loss. In order to vary the enamel hardness, a dental demineralization model was applied to 32 samples of vestibular bovine teeth. After being cleaned, cut and polished, the samples were divided into 4 groups and immersed in 30 ml of a cola-based soft drink for 10, 20, 30 and 40 min twice a day for 7 consecutive days, with half the surface protected by two layers of nail polish. Each sample was analyzed by Knoop hardness testing and laser speckle imaging. Pearson's correlation analysis demonstrated that the laser speckle image technique presents a strong correlation with the hardness loss of the enamel (r=0.7085, p<0.0001). This finding is corroborated by Bland & Altman analysis, in which the data presented a constant behavior throughout the whole interval. For both analyses, more than 95% of the data is within the confidence interval, as expected. This work demonstrates, for the first time to our knowledge, an empirical model for correlating laser speckle images with the loss of tooth enamel hardness. Copyright © 2016. Published by Elsevier B.V.

  15. Application of a temperature-dependent fluorescent dye (Rhodamine B) to the measurement of radiofrequency radiation-induced temperature changes in biological samples.

    PubMed

    Chen, Yuen Y; Wood, Andrew W

    2009-10-01

    We have applied a non-contact method for studying the temperature changes produced by radiofrequency (RF) radiation specifically to small biological samples. A temperature-dependent fluorescent dye, Rhodamine B, as imaged by laser scanning confocal microscopy (LSCM), was used to do this. The results were calibrated against real-time temperature measurements from fiber optic probes, with a calibration factor of 3.4% intensity change per °C and a reproducibility of ±6%. This non-contact method provided two-dimensional and three-dimensional images of temperature change and distribution in biological samples, at a spatial resolution of a few micrometers, with an estimated absolute precision of around 1.5 °C and a differential precision of 0.4 °C. The temperature rise within tissue was found to be non-uniform. Estimates of specific absorption rate (SAR) from absorbed power measurements were greater than those estimated from the rate of temperature rise measured at 1 min intervals, probably because this interval is too long to permit accurate estimation of the initial temperature rise following the start of RF exposure. Future experiments will aim to explore this.
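The SAR underestimate discussed above follows from SAR = c·dT/dt evaluated at the start of exposure: once heat loss sets in, an average slope over a 1-min interval is flatter than the initial slope. A sketch under a Newtonian-cooling assumption; the specific heat, cooling time constant, and true SAR here are illustrative values, not from the study:

```python
import math

# Newtonian-cooling model of RF heating: dT/dt = SAR/c - (T - T0)/tau_c,
# so T(t) - T0 = (SAR * tau_c / c) * (1 - exp(-t / tau_c)), and the
# *initial* slope is exactly SAR / c.
c = 3600.0        # J kg^-1 K^-1, assumed tissue specific heat
sar_true = 10.0   # W/kg, assumed true specific absorption rate
tau_c = 40.0      # s, assumed cooling time constant

def temp_rise(t):
    return (sar_true * tau_c / c) * (1.0 - math.exp(-t / tau_c))

# SAR estimated from the average slope over the first measurement interval
sar_fine = c * (temp_rise(1.0) - temp_rise(0.0)) / 1.0      # 1-s interval
sar_coarse = c * (temp_rise(60.0) - temp_rise(0.0)) / 60.0  # 1-min interval
print(round(sar_fine, 2), round(sar_coarse, 2))
# the 1-min estimate markedly underestimates the true 10 W/kg
```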

  16. Comparative evaluation of nickel discharge from brackets in artificial saliva at different time intervals.

    PubMed

    Jithesh, C; Venkataramana, V; Penumatsa, Narendravarma; Reddy, S N; Poornima, K Y; Rajasigamani, K

    2015-08-01

    To determine and compare the difference in nickel release from three different orthodontic brackets in artificial saliva of different pH at different time intervals. Twenty-seven samples of three different orthodontic brackets were selected and grouped as 1, 2, and 3. Each group was divided into three subgroups depending on the type of orthodontic bracket, salivary pH and the time interval. The nickel release from each subgroup was analyzed using an inductively coupled plasma-atomic emission spectrophotometer (Perkin Elmer, Optima 2100 DV, USA). Quantitative analysis of nickel was performed three times, and the mean value was used as the result. ANOVA (F-test) was used to test the significance of differences among the groups at the 0.05 level of significance (P < 0.05). Descriptive statistics were used to calculate the mean, standard deviation, minimum and maximum. SPSS 18 software (SPSS Ltd., Quarry Bay, Hong Kong, PASW Statistics 18) was used to analyze the study. The analysis shows a significant difference between the three groups. Nickel release from the recycled stainless steel brackets was highest at pH 4.2 at all time intervals except 120 h. The results show that nickel release from recycled stainless steel brackets is highest, while metal slot ceramic brackets release significantly less nickel. Recycled stainless steel brackets should therefore not be used for nickel-allergic patients; metal slot ceramic brackets are advisable.

  17. Comparative evaluation of nickel discharge from brackets in artificial saliva at different time intervals

    PubMed Central

    Jithesh, C.; Venkataramana, V.; Penumatsa, Narendravarma; Reddy, S. N.; Poornima, K. Y.; Rajasigamani, K.

    2015-01-01

    Objectives: To determine and compare the difference in nickel release from three different orthodontic brackets in artificial saliva of different pH at different time intervals. Materials and Methods: Twenty-seven samples of three different orthodontic brackets were selected and grouped as 1, 2, and 3. Each group was divided into three subgroups depending on the type of orthodontic bracket, salivary pH and the time interval. The nickel release from each subgroup was analyzed using an inductively coupled plasma-atomic emission spectrophotometer (Perkin Elmer, Optima 2100 DV, USA). Quantitative analysis of nickel was performed three times, and the mean value was used as the result. ANOVA (F-test) was used to test the significance of differences among the groups at the 0.05 level of significance (P < 0.05). Descriptive statistics were used to calculate the mean, standard deviation, minimum and maximum. SPSS 18 software (SPSS Ltd., Quarry Bay, Hong Kong, PASW Statistics 18) was used to analyze the study. Result: The analysis shows a significant difference between the three groups. Nickel release from the recycled stainless steel brackets was highest at pH 4.2 at all time intervals except 120 h. Conclusion: The results show that nickel release from recycled stainless steel brackets is highest, while metal slot ceramic brackets release significantly less nickel. Recycled stainless steel brackets should therefore not be used for nickel-allergic patients; metal slot ceramic brackets are advisable. PMID:26538924

  18. In situ time-series measurements of subseafloor sediment properties

    USGS Publications Warehouse

    Wheatcroft, R.A.; Stevens, A.W.; Johnson, R.V.

    2007-01-01

    The capabilities and diversity of subsurface sediment sensors lag significantly behind what is available for the water column, thereby limiting progress in understanding time-dependent seabed exchange and high-frequency acoustics. To help redress this imbalance, a new instrument, the autonomous sediment profiler (ASP), is described herein. The ASP consists of a four-electrode, Wenner-type resistivity probe and a thermistor that log data at 0.1-cm vertical intervals over a 58-cm vertical profile. To avoid resampling the same spot on the seafloor, the probes are moved horizontally within a 20 × 100-cm area in one of three preselected patterns. Memory and power capacities permit sampling at hourly intervals for up to 3-mo duration. The system was tested in a laboratory tank and shown to be able to resolve high-frequency sediment consolidation, as well as changes in sediment roughness. In a field test off the southern coast of France, the system collected resistivity and temperature data at hourly intervals for 16 d. Coupled with environmental data collected on waves, currents, and suspended sediment, the ASP is shown to be useful for understanding the temporal evolution of subsurface sediment porosity, although no large depositional or erosional events occurred during the deployment. Following a rapid decrease in bottom-water temperature, the evolution of the subsurface temperature field was consistent with the 1-D thermal diffusion equation coupled with advection in the upper 3-4 cm. Collectively, the laboratory and field tests yielded promising results on time-dependent seabed change.

  19. Effect of Missing Inter-Beat Interval Data on Heart Rate Variability Analysis Using Wrist-Worn Wearables.

    PubMed

    Baek, Hyun Jae; Shin, JaeWook

    2017-08-15

    Most of the wrist-worn devices on the market provide a continuous heart rate measurement function using photoplethysmography, but have not yet provided a function to measure the continuous heart rate variability (HRV) using beat-to-beat pulse interval. The reason for such is the difficulty of measuring a continuous pulse interval during movement using a wearable device because of the nature of photoplethysmography, which is susceptible to motion noise. This study investigated the effect of missing heart beat interval data on the HRV analysis in cases where pulse interval cannot be measured because of movement noise. First, we performed simulations by randomly removing data from the RR interval of the electrocardiogram measured from 39 subjects and observed the changes of the relative and normalized errors for the HRV parameters according to the total length of the missing heart beat interval data. Second, we measured the pulse interval from 20 subjects using a wrist-worn device for 24 h and observed the error value for the missing pulse interval data caused by the movement during actual daily life. The experimental results showed that mean NN and RMSSD were the most robust for the missing heart beat interval data among all the parameters in the time and frequency domains. Most of the pulse interval data could not be obtained during daily life. In other words, the sample number was too small for spectral analysis because of the long missing duration. Therefore, the frequency domain parameters often could not be calculated, except for the sleep state with little motion. The errors of the HRV parameters were proportional to the missing data duration in the presence of missing heart beat interval data. Based on the results of this study, the maximum missing duration for acceptable errors for each parameter is recommended for use when the HRV analysis is performed on a wrist-worn device.
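The robustness of mean NN and RMSSD to missing beat data, reported in the record above, can be illustrated with synthetic RR intervals. Everything here is simulated, not data from the study; note that closing a gap splices the series and creates one artificial successive difference:

```python
import math
import random

def mean_nn(rr):
    return sum(rr) / len(rr)

def rmssd(rr):
    """Root mean square of successive differences of the NN intervals."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rng = random.Random(42)
rr_full = [rng.gauss(800.0, 50.0) for _ in range(500)]  # ms, simulated NN intervals

# Motion noise hides 50 consecutive beats (10% of the recording)
rr_gap = rr_full[:200] + rr_full[250:]

for name, f in (("mean NN", mean_nn), ("RMSSD", rmssd)):
    err = abs(f(rr_gap) - f(rr_full)) / f(rr_full) * 100
    print(f"{name}: {err:.2f}% relative error")
```

Both time-domain parameters change only slightly despite the missing segment, consistent with the study's finding, whereas frequency-domain parameters would need an uninterrupted window for spectral estimation.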

  20. Assessment of cardiac time intervals using high temporal resolution real-time spiral phase contrast with UNFOLDed-SENSE.

    PubMed

    Kowalik, Grzegorz T; Knight, Daniel S; Steeden, Jennifer A; Tann, Oliver; Odille, Freddy; Atkinson, David; Taylor, Andrew; Muthurangu, Vivek

    2015-02-01

    To develop a real-time phase contrast MR sequence with high enough temporal resolution to assess cardiac time intervals. The sequence utilized spiral trajectories with an acquisition strategy that allowed a combination of temporal encoding (Unaliasing by fourier-encoding the overlaps using the temporal dimension; UNFOLD) and parallel imaging (Sensitivity encoding; SENSE) to be used (UNFOLDed-SENSE). An in silico experiment was performed to determine the optimum UNFOLD filter. In vitro experiments were carried out to validate the accuracy of time intervals calculation and peak mean velocity quantification. In addition, 15 healthy volunteers were imaged with the new sequence, and cardiac time intervals were compared to reference standard Doppler echocardiography measures. For comparison, in silico, in vitro, and in vivo experiments were also carried out using sliding window reconstructions. The in vitro experiments demonstrated good agreement between real-time spiral UNFOLDed-SENSE phase contrast MR and the reference standard measurements of velocity and time intervals. The protocol was successfully performed in all volunteers. Subsequent measurement of time intervals produced values in keeping with literature values and good agreement with the gold standard echocardiography. Importantly, the proposed UNFOLDed-SENSE sequence outperformed the sliding window reconstructions. Cardiac time intervals can be successfully assessed with UNFOLDed-SENSE real-time spiral phase contrast. Real-time MR assessment of cardiac time intervals may be beneficial in assessment of patients with cardiac conditions such as diastolic dysfunction. © 2014 Wiley Periodicals, Inc.

  1. Study of the Effect of Temporal Sampling Frequency on DSCOVR Observations Using the GEOS-5 Nature Run Results (Part I): Earth's Radiation Budget

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Yang, Yuekui

    2016-01-01

    Satellites always sample the Earth-atmosphere system at a finite temporal resolution. This study investigates the effect of sampling frequency on the satellite-derived Earth radiation budget, with the Deep Space Climate Observatory (DSCOVR) as an example. The output from NASA's Goddard Earth Observing System Version 5 (GEOS-5) Nature Run is used as the truth. The Nature Run is a high spatial and temporal resolution atmospheric simulation spanning a two-year period. The effect of temporal resolution on potential DSCOVR observations is assessed by sampling the full Nature Run data with 1-h to 24-h frequencies. The uncertainty associated with a given sampling frequency is measured by computing means over daily, monthly, seasonal and annual intervals and determining the spread across different possible starting points. The skill with which a particular sampling frequency captures the structure of the full time series is measured using correlations and normalized errors. Results show that higher sampling frequency gives more information and less uncertainty in the derived radiation budget. A sampling frequency coarser than every 4 h results in significant error, and correlations between the true and sampled time series also fall off more rapidly at frequencies coarser than every 4 h.
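The spread-across-starting-points uncertainty measure described above can be sketched on a toy series. The synthetic "flux" signal, its diurnal amplitude, and the drifting noise are illustrative assumptions, not GEOS-5 output:

```python
import math
import random

rng = random.Random(0)
n_hours = 24 * 60              # 60 days of hourly "truth"
flux, drift = [], 0.0
for t in range(n_hours):
    drift += rng.gauss(0.0, 0.5)          # slow weather-like wander
    flux.append(240.0 + 50.0 * math.sin(2 * math.pi * t / 24) + 0.1 * drift)

def sampled_mean(series, every, offset):
    sub = series[offset::every]
    return sum(sub) / len(sub)

def spread(series, every):
    """Range of the estimated long-term mean across all possible
    starting points, for a given sampling interval in hours."""
    means = [sampled_mean(series, every, o) for o in range(every)]
    return max(means) - min(means)

# Coarser sampling leaves more uncertainty; 24-h sampling aliases the
# diurnal cycle completely, so the estimate depends on the start hour
print(round(spread(flux, 4), 3), round(spread(flux, 24), 3))
```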

  2. Palaeonummulites venosus: Natural growth rates and quantification by means of CT investigation

    NASA Astrophysics Data System (ADS)

    Kinoshita, Shunichi; Eder, Wolfgang; Woeger, Julia; Hohenegger, Johann; Briguglio, Antonino

    2016-04-01

    Symbiont-bearing larger benthic Foraminifera (LBF) are long-lived (possibly >1 year), single-celled marine organisms with complex calcium carbonate shells. The reproduction period, longevity and chamber building rate of LBF are important for population dynamics studies. Growth experiments in laboratory cultures were not expected to yield reliable estimates of chamber building rates and longevity, even when laboratory conditions simulated natural ones; individual and population growth must therefore be studied under natural conditions. To this end, the 'natural laboratory' method was developed to calculate the average chamber building rate and average longevity of species based on monthly sampling at fixed stations, and to compare the results with laboratory cultures simulating environmental conditions as close to the natural environment as possible. In this study, samples of living individuals were collected at 16 monthly intervals at 50 m depth off Sesoko Island, Okinawa, Japan. Micro-computed tomography (microCT) was used to determine the chamber number of every specimen dried immediately after sampling. Specimens that were not dried were cultured, and the time of chamber building was obtained from microphotographs taken at 2- to 4-day intervals. The natural laboratory investigation of Palaeonummulites venosus is based on decomposing the monthly frequency distributions into normally distributed components; the shift in the component means and standard deviations was then used to calculate the maximum chamber number, and the Michaelis-Menten function was applied to estimate the chamber building rate under natural conditions. 
    This yielded two reproduction periods, the first starting in May and the second in September, both showing the same chamber building rate, with the first showing a slightly stronger increase in the initial part. Longevity appears to be about one year. The multiple reproduction periods explain both the coexistence of small and large specimens in the same sample and the bimodal distributions. The cultured individuals show a much lower chamber building rate, often with a long period of no chamber production just after sampling, a result of sampling shock. This is the first demonstration that chamber building rates and longevities cannot be based on laboratory investigations.

  3. Digital ac monitor

    DOEpatents

    Hart, George W.; Kern, Jr., Edward C.

    1987-06-09

    An apparatus and method are provided for monitoring a plurality of analog ac circuits by sampling the voltage and current waveforms in each circuit at predetermined intervals, converting the analog current and voltage samples to digital format, storing the digitized samples, and using them to calculate a variety of electrical parameters, some of which are derived from the stored samples. The non-derived quantities are repeatedly calculated and stored over many separate cycles, then averaged. The derived quantities are then calculated at the end of an averaging period. This produces a more accurate reading, especially when averaging over a period in which the power varies over a wide dynamic range. Frequency is measured by timing three cycles of the voltage waveform, using the upward zero crossover point as the starting point for a digital timer.
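The patent's frequency measurement, timing three cycles of the voltage waveform between upward zero crossings, can be sketched on sampled data. The sampling rate and amplitude below are illustrative assumptions; the patent uses a hardware timer rather than interpolating between samples:

```python
import math

def measure_frequency(samples, sample_rate_hz, cycles=3):
    """Estimate frequency by timing `cycles` full cycles between upward
    zero crossings, interpolating each crossing instant linearly
    between adjacent samples."""
    crossings = []
    for i in range(1, len(samples)):
        if samples[i - 1] < 0 <= samples[i]:
            # Linear interpolation of the crossing time within the step
            frac = -samples[i - 1] / (samples[i] - samples[i - 1])
            crossings.append((i - 1 + frac) / sample_rate_hz)
        if len(crossings) == cycles + 1:
            break
    elapsed = crossings[cycles] - crossings[0]
    return cycles / elapsed

fs = 6000.0                                    # 6 kHz sampling, assumed
wave = [170 * math.sin(2 * math.pi * 60 * t / fs) for t in range(1200)]
print(round(measure_frequency(wave, fs), 3))   # ≈ 60.0 Hz
```

Timing three cycles instead of one averages out quantization of the crossing instants, which is the same motivation as the averaging of the non-derived quantities described above.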

  4. Stabilization for sampled-data neural-network-based control systems.

    PubMed

    Zhu, Xun-Lin; Wang, Youyi

    2011-02-01

    This paper studies the problem of stabilization for sampled-data neural-network-based control systems with an optimal guaranteed cost. Unlike previous works, the resulting closed-loop system with variable uncertain sampling cannot simply be regarded as an ordinary continuous-time system with a fast-varying delay in the state. By defining a novel piecewise Lyapunov functional and using a convex combination technique, the characteristic of sampled-data systems is captured. A new delay-dependent stabilization criterion is established in terms of linear matrix inequalities such that the maximal sampling interval and the minimal guaranteed cost control performance can be obtained. It is shown that the newly proposed approach can lead to less conservative and less complex results than the existing ones. Application examples are given to illustrate the effectiveness and the benefits of the proposed method.

  5. Digital ac monitor

    DOEpatents

    Hart, G.W.; Kern, E.C. Jr.

    1987-06-09

    An apparatus and method are provided for monitoring a plurality of analog ac circuits by sampling the voltage and current waveforms in each circuit at predetermined intervals, converting the analog current and voltage samples to digital format, storing the digitized samples, and using them to calculate a variety of electrical parameters, some of which are derived from the stored samples. The non-derived quantities are repeatedly calculated and stored over many separate cycles, then averaged. The derived quantities are then calculated at the end of an averaging period. This produces a more accurate reading, especially when averaging over a period in which the power varies over a wide dynamic range. Frequency is measured by timing three cycles of the voltage waveform, using the upward zero crossover point as the starting point for a digital timer. 24 figs.

  6. A comparison of temporal and location-based sampling strategies for global positioning system-triggered electronic diaries.

    PubMed

    Törnros, Tobias; Dorn, Helen; Reichert, Markus; Ebner-Priemer, Ulrich; Salize, Hans-Joachim; Tost, Heike; Meyer-Lindenberg, Andreas; Zipf, Alexander

    2016-11-21

    Self-reporting is a well-established approach within the medical and psychological sciences. In order to avoid recall bias, i.e. past events being remembered inaccurately, the reports can be filled out on a smartphone in real-time and in the natural environment. This is often referred to as ambulatory assessment and the reports are usually triggered at regular time intervals. With this sampling scheme, however, rare events (e.g. a visit to a park or recreation area) are likely to be missed. When addressing the correlation between mood and the environment, it may therefore be beneficial to include participant locations within the ambulatory assessment sampling scheme. Based on the geographical coordinates, the database query system then decides if a self-report should be triggered or not. We simulated four different ambulatory assessment sampling schemes based on movement data (coordinates by minute) from 143 voluntary participants tracked for seven consecutive days. Two location-based sampling schemes incorporating the environmental characteristics (land use and population density) at each participant's location were introduced and compared to a time-based sampling scheme triggering a report on the hour as well as to a sampling scheme incorporating physical activity. We show that location-based sampling schemes trigger a report less often, but we obtain more unique trigger positions and a greater spatial spread in comparison to sampling strategies based on time and distance. Additionally, the location-based methods trigger significantly more often at rarely visited types of land use and less often outside the study region where no underlying environmental data are available.

  7. Development of Automatic Control of Bayer Plant Digestion

    NASA Astrophysics Data System (ADS)

    Riffaud, J. P.

    Supervisory computer control has been achieved in Alcan's Bayer Plants at Arvida, Quebec, Canada. The purpose of the automatic control system is to stabilize, and consequently increase, the alumina/caustic ratio within the digester train and in the blow-off liquor. Measurements of the electrical conductivity of the liquor are obtained from electrodeless conductivity meters. These signals, along with several others, are scanned by the computer and converted to engineering units using specific relationships that are updated periodically for calibration purposes. At regular time intervals, values of the ratio are compared to target values and adjustments are made to the bauxite flow entering the digesters. Dead-time compensation included in the control algorithm enables a faster rate of correction. Modification of the production rate is achieved through careful timing of various flow changes. Calibration of the conductivity meters is achieved by sampling at intervals the liquor flowing through them and analysing it with a thermometric titrator. Calibration of the thermometric titrator is done at intervals with a standard solution. Calculations for both calibrations are performed by the computer from data entered by the analyst. The computer was used for on-line data collection, modelling of the digester system, calculation of disturbances and simulation of control strategies before implementing the most successful strategy in the Plant. Control of the ratio has been improved by the integrated system, resulting in increased Plant productivity.

  8. Synthesis and radiosensitization properties of hydrogen peroxide and sodium hyaluronate complex

    NASA Astrophysics Data System (ADS)

    Rosli, Nur Ratasha Alia Md.; Mohamed, Faizal; Heng, Cheong Kai; Rahman, Irman Abdul; Ahmad, Ainee Fatimah; Mohamad, Hur Munawar Kabir

    2014-09-01

    Large cancer cells are resistant to radiation therapy due to the presence of large amounts of anti-oxidative enzymes and hypoxic cancer cells. Radiosensitizer agents have therefore been developed to enhance the therapeutic effect of radiotherapy by increasing the sensitivity of these cancer cells to radiation. This study investigates the radiosensitization properties of a radiosensitizer complex containing hydrogen peroxide and sodium hyaluronate. Combination with sodium hyaluronate may decrease the reactivity of hydrogen peroxide while maintaining the oxygen concentration needed for the radiosensitizing effect. HepG2 cancer cells were cultured as the test subjects. Cancer cell samples that were and were not treated with the radiosensitizer were irradiated with a single fractionated dose of 2 Gy. The results show that cancer cells not treated with the radiosensitizer had a cell viability of 98.80±0.37% after a time interval of 48 hours and had even repopulated to over 100% after a 72-hour time interval, showing that these cancer cells are resistant to radiation. When the cancer cells were treated with the radiosensitizer prior to irradiation, however, there was a reduction in cell viability of 25.50±10.81% and 10.30±5.10% at time intervals of 48 and 72 hours, respectively. This indicates that through the use of these radiosensitizers, cancer cells are more sensitive to radiation.

  9. VARIABLE TIME-INTERVAL GENERATOR

    DOEpatents

    Gross, J.E.

    1959-10-31

    This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.
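The counting scheme in this record, expressing the variable oscillator's period in periods of the fixed oscillator, can be sketched numerically. All frequencies and the divider factor below are illustrative assumptions, not values from the patent:

```python
F_FIXED = 1_000_000.0      # Hz, fixed reference oscillator (assumed)
f_variable = 12_345.0      # Hz, variable oscillator setting (assumed)
divider = 16               # frequency divider factor (assumed)

# One cycle of the divided variable frequency spans `divider` periods of
# the variable oscillator; count fixed-oscillator periods in that window.
count = int(F_FIXED * divider / f_variable)

# Period of the variable oscillator expressed in fixed-oscillator ticks;
# dividing first reduces the +/-1-count quantization error per period.
period_ticks = count / divider
period_seconds = period_ticks / F_FIXED
print(count, round(period_seconds * 1e6, 3))   # ticks, and period in µs

# A time interval of N variable-oscillator periods is then known precisely
n_periods = 1000
interval_s = n_periods * period_seconds
```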

  10. Estimation of aquifer scale proportion using equal area grids: assessment of regional scale groundwater quality

    USGS Publications Warehouse

    Belitz, Kenneth; Jurgens, Bryant C.; Landon, Matthew K.; Fram, Miranda S.; Johnson, Tyler D.

    2010-01-01

    The proportion of an aquifer with constituent concentrations above a specified threshold (high concentrations) is taken as a nondimensional measure of regional scale water quality. If computed on the basis of area, it can be referred to as the aquifer scale proportion. A spatially unbiased estimate of aquifer scale proportion and a confidence interval for that estimate are obtained through the use of equal area grids and the binomial distribution. Traditionally, the confidence interval for a binomial proportion is computed using either the standard interval or the exact interval. Research from the statistics literature has shown that the standard interval should not be used and that the exact interval is overly conservative. On the basis of coverage probability and interval width, the Jeffreys interval is preferred. If more than one sample per cell is available, cell declustering is used to estimate the aquifer scale proportion, and Kish's design effect may be useful for estimating an effective number of samples. The binomial distribution is also used to quantify the adequacy of a grid with a given number of cells for identifying a small target, defined as a constituent that is present at high concentrations in a small proportion of the aquifer. Case studies illustrate a consistency between approaches that use one well per grid cell and many wells per cell. The methods presented in this paper provide a quantitative basis for designing a sampling program and for utilizing existing data.
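    The Jeffreys interval preferred here is the central credible interval of a Beta(k + 1/2, n − k + 1/2) posterior for k high-concentration cells out of n. A stdlib-only sketch can approximate its quantiles by Monte Carlo draws from that Beta distribution (the function name and the Monte Carlo shortcut are ours; in practice a closed-form Beta quantile routine would be used).

```python
import random

def jeffreys_interval(k, n, conf=0.95, draws=200_000, seed=42):
    """Approximate Jeffreys interval for a binomial proportion k/n:
    central quantiles of the Beta(k + 1/2, n - k + 1/2) posterior,
    estimated here by Monte Carlo sampling."""
    rng = random.Random(seed)
    a, b = k + 0.5, n - k + 0.5
    samples = sorted(rng.betavariate(a, b) for _ in range(draws))
    tail = (1.0 - conf) / 2.0
    lo = samples[int(tail * draws)]
    hi = samples[int((1.0 - tail) * draws) - 1]
    return lo, hi
```

    For example, 3 high-concentration cells out of 30 give a point estimate of 0.10 but an interval spanning several-fold uncertainty, which is why the paper uses interval width to judge grid adequacy.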

  11. Sample entropy applied to the analysis of synthetic time series and tachograms

    NASA Astrophysics Data System (ADS)

    Muñoz-Diosdado, A.; Gálvez-Coyt, G. G.; Solís-Montufar, E.

    2017-01-01

    Entropy-based methods of non-linear analysis estimate the irregularity of a system. Several types of computational entropy were considered and tested in order to find one that yields an index of signal complexity while taking into account the length of the analysed time series, the computational resources demanded by the method, and the accuracy of the calculation. An algorithm for generating fractal time series with a prescribed value of β was used to characterize the different entropy algorithms. Most of the algorithms showed a significant dependence on series length, which can be counterproductive for the study of real signals of different lengths. The method chosen was sample entropy, which is largely independent of series length. With this method, time series of heart interbeat intervals, or tachograms, of healthy subjects and patients with congestive heart failure were analysed. Sample entropy was calculated for 24-hour tachograms and for 6-hour subseries corresponding to sleep and wakefulness. The comparison between the two populations shows a significant difference that is accentuated during sleep.
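    Sample entropy itself is straightforward to compute: count pairs of length-m templates that match within tolerance r (Chebyshev distance), count how many still match at length m + 1, and take −ln of the ratio. A minimal stdlib sketch follows; the defaults m = 2 and r = 0.2·SD are the conventional choices, not values stated in the abstract.

```python
import math

def sample_entropy(series, m=2, r=None):
    """SampEn(m, r): -ln(A/B), where B counts template pairs of length m
    within tolerance r (Chebyshev distance) and A counts pairs of length
    m + 1; self-matches are excluded."""
    n = len(series)
    if r is None:  # conventional default: 20% of the series' std dev
        mu = sum(series) / n
        r = 0.2 * (sum((x - mu) ** 2 for x in series) / n) ** 0.5

    def matches(length):
        count = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if max(abs(series[i + k] - series[j + k])
                       for k in range(length)) <= r:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return math.inf if a == 0 or b == 0 else -math.log(a / b)
```

    A strictly periodic series scores near 0 while an irregular one scores much higher, which is what makes the measure useful for comparing tachograms of healthy and diseased subjects.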

  12. Effects of Chicken Litter Storage Time and Ammonia Content on Thermal Resistance of Desiccation-Adapted Salmonella spp.

    PubMed

    Chen, Zhao; Wang, Hongye; Ionita, Claudia; Luo, Feng; Jiang, Xiuping

    2015-10-01

    Broiler chicken litter was kept as a stacked heap on a poultry farm, and samples were collected up to 9 months of storage. Chicken litter inoculated with desiccation-adapted Salmonella cells was heat-treated at 75, 80, 85, and 150°C. Salmonella populations decreased in all these samples during heat treatment, and the inactivation rates became lower in chicken litter when storage time was extended from 0 to 6 months. There was no significant difference (P > 0.05) in thermal resistance of Salmonella in 6- and 9-month litter samples, indicating that a threshold for thermal resistance was reached after 6 months. Overall, the thermal resistance of Salmonella in chicken litter was affected by the storage time of the litter. The changes in some chemical, physical, and microbiological properties during storage could possibly contribute to this difference. Moisture and ammonia could be two of the most significant factors influencing the thermal resistance of Salmonella cells in chicken litter. Our results emphasize the importance of adjusting time and temperature conditions for heat processing chicken litter when it is removed from the chicken house at different time intervals. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  13. Effects of Chicken Litter Storage Time and Ammonia Content on Thermal Resistance of Desiccation-Adapted Salmonella spp.

    PubMed Central

    Chen, Zhao; Wang, Hongye; Ionita, Claudia; Luo, Feng

    2015-01-01

    Broiler chicken litter was kept as a stacked heap on a poultry farm, and samples were collected up to 9 months of storage. Chicken litter inoculated with desiccation-adapted Salmonella cells was heat-treated at 75, 80, 85, and 150°C. Salmonella populations decreased in all these samples during heat treatment, and the inactivation rates became lower in chicken litter when storage time was extended from 0 to 6 months. There was no significant difference (P > 0.05) in thermal resistance of Salmonella in 6- and 9-month litter samples, indicating that a threshold for thermal resistance was reached after 6 months. Overall, the thermal resistance of Salmonella in chicken litter was affected by the storage time of the litter. The changes in some chemical, physical, and microbiological properties during storage could possibly contribute to this difference. Moisture and ammonia could be two of the most significant factors influencing the thermal resistance of Salmonella cells in chicken litter. Our results emphasize the importance of adjusting time and temperature conditions for heat processing chicken litter when it is removed from the chicken house at different time intervals. PMID:26209673

  14. Excellent amino acid racemization results from Holocene sand dollars

    NASA Astrophysics Data System (ADS)

    Kosnik, M.; Kaufman, D. S.; Kowalewski, M.; Whitacre, K.

    2015-12-01

    Amino acid racemization (AAR) is widely used as a cost-effective method to date molluscs in time-averaging and taphonomic studies, but it has not been attempted for echinoderms despite their paleobiological importance. Here we demonstrate the feasibility of AAR geochronology in Holocene-aged Peronella peronii (Echinodermata: Echinoidea) collected from Sydney Harbour (Australia). Using standard HPLC methods we determined the extent of AAR in 74 Peronella tests and performed replicate analyses on 18 tests. We sampled multiple areas of two individuals and identified the outer edge as a good sampling location. Multiple replicate analyses from the outer edge of 18 tests spanning the observed range of D/Ls yielded median coefficients of variation < 4% for Asp, Phe, Ala, and Glu D/L values, which overlaps with the analytical precision. Correlations between D/L values across 155 HPLC injections sampled from 74 individuals are also very high (Pearson r2 > 0.95) for these four amino acids. The ages of 11 individuals spanning the observed range of D/L values were determined using 14C analyses, and Bayesian model averaging was used to determine the best AAR age model. The averaged age model was mainly composed of time-dependent reaction kinetics models (TDK, 71%) based on phenylalanine (Phe, 94%). Modelled ages ranged from 14 to 5539 yrs, and the median 95% confidence interval for the 74 analysed individuals is ±28% of the modelled age. In comparison, the median 95% confidence interval for the 11 calibrated 14C ages was ±9% of the median age estimate. Overall, Peronella yields exceptionally high-quality AAR D/L values and appears to be an excellent substrate for AAR geochronology. This work opens the way for time-averaging and taphonomic studies of echinoderms similar to those in molluscs.
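    The replicate-quality measure used throughout, the coefficient of variation of repeated D/L measurements, is simple to reproduce (a generic helper, not the authors' code).

```python
from statistics import mean, stdev

def coefficient_of_variation(replicates):
    """CV (%) of replicate D/L measurements: sample standard deviation
    divided by the mean."""
    return 100.0 * stdev(replicates) / mean(replicates)
```

    For instance, three hypothetical replicate D/L readings of 0.10, 0.11 and 0.09 give a CV of 10%, well above the <4% medians reported here for Peronella.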

  15. Mapping the spatio-temporal evolution of irrigation in the Coastal Plain of Georgia, USA

    Treesearch

    Marcus D. Williams; Christie M.S. Hawley; Marguerite Madden; J. Marshall Shepherd

    2017-01-01

    This study maps the spatial and temporal evolution of acres irrigated in the Coastal Plain of Georgia over a 38 year period. The goal of this analysis is to create a time-series of irrigated areas in the Coastal Plain of Georgia at a sub-county level. From 1976 through 2013, Landsat images were obtained and sampled at four year intervals to manually...

  16. The Use of Coding Methods to Estimate the Social Behavior Directed toward Peers and Adults of Preschoolers with ASD in TEACCH, LEAP, and Eclectic ''BAU'' Classrooms

    ERIC Educational Resources Information Center

    Sam, Ann; Reszka, Stephanie; Odom, Samuel; Hume, Kara; Boyd, Brian

    2015-01-01

    Momentary time sampling, partial-interval recording, and event coding are observational coding methods commonly used to examine the social and challenging behaviors of children at risk for or with developmental delays or disabilities. Yet there is limited research comparing the accuracy of and relationship between these three coding methods. By…

  17. Generalized Monitoring Facility. Users Manual.

    DTIC Science & Technology

    1982-05-01

    based monitor. The RMC will sample system queues and tables on a 30-second time interval. The data captured from these queues and cells are written...period, only the final change will be reported. The following communication region cells are constantly monitored for changes, since a processor...is reported as zeros in WW6.4. When GMC terminates, it writes a record containing information read from communication region cells and information

  18. Persistence in soil and on foliage of the nucleopolyhedrosis virus of the European pine sawfly, Neodiprion sertifer (Hymenoptera: Diprionidae)

    Treesearch

    M.A. Mohamed; H.C. Coppel; J.D. Podgwaite

    1982-01-01

    Six plots of pine trees harboring high densities of N. sertifer larvae were sprayed with the nucleopolyhedrosis virus (NPV) of this species. Half of these plots were resprayed in the second year of the study. Polyhedral inclusion bodies (PIB) were recovered in all plots from soil and foliage sampled at fixed time intervals within a 21-month period...

  19. Agent Based Computing Machine

    DTIC Science & Technology

    2005-12-09

    decision making logic that responds to the environment (concentration of operands - the state vector), and bias or "mood" as established by its history of...mentioned in the chart, there is no need for file management in an ABC Machine. Information is distributed, no history is maintained. The instruction set... Postgresql ) for collection of cluster samples/snapshots over intervals of time. A prototypical example of an XML file to configure and launch the ABC

  20. Endogenous or exogenous origin of vaginal candidiasis in Polish women?

    PubMed

    Mnichowska-Polanowskai, Magdalena; Wojciechowska-Koszko, Iwona; Klimowicz, Bogumia; Szymaniak, Ludmia; Krasnodebska-Szponder, Barbara; Szych, Zbigniew; Giedrys-Kalemba, Stefania

    2013-01-01

    Vaginal candidiasis is a common problem in clinical practice. Many studies have been conducted to explain its origin, but only a few have included Polish women. The aim of the study was to determine the prevalence and similarity of oral, anal and vaginal Candida albicans strains isolated from Polish women with vaginal candidiasis. The study involved 20 of 37 recruited women. Swab samples were collected from the vagina, anus and oral cavity at two-month intervals. All the women were treated with nystatin. Yeasts were recovered and identified by the germ-tube test and the API/Vitek system, and typed by API ZYM and RAPD-PCR. The chi-square test was used to analyze the data. A total of 170 Candida albicans isolates were recovered from 180 samples collected three times from three body sites of the 20 women. Positive vaginal yeast cultures were found in all patients before administration of nystatin. The vaginal yeast recovery rate decreased statistically significantly at both follow-up visits (p = 0.001; p = 0.003). The same and different genotypes/biotypes were found concomitantly at several body sites and/or repeatedly over time at the same body site. The results support the concept of a dynamic exchange of yeasts within one woman and an endogenous or exogenous origin of vaginal candidiasis.

  1. Comparative analysis of bones, mites, soil chemistry, nematodes and soil micro-eukaryotes from a suspected homicide to estimate the post-mortem interval.

    PubMed

    Szelecz, Ildikó; Lösch, Sandra; Seppey, Christophe V W; Lara, Enrique; Singer, David; Sorge, Franziska; Tschui, Joelle; Perotti, M Alejandra; Mitchell, Edward A D

    2018-01-08

    Criminal investigations of suspected murder cases require estimating the post-mortem interval (PMI, or time after death), which is challenging for long PMIs. Here we present the case of human remains found in a Swiss forest. We used a multidisciplinary approach involving the analysis of bones and of soil samples collected beneath the remains of the head, upper and lower body, and "control" samples taken a few meters away. We analysed soil chemical characteristics, mites and nematodes (by microscopy) and micro-eukaryotes (by Illumina high-throughput sequencing). The PMI estimate based on hair 14C data (bomb-peak radiocarbon dating) gave a time range of 1 to 3 years before the discovery of the remains. Cluster analyses for soil chemical constituents, nematodes, mites and micro-eukaryotes revealed two clusters: 1) head and upper body and 2) lower body and controls. From the mite evidence, we conclude that the body was probably brought to the site after death. However, chemical analyses, nematode community analyses and the analyses of micro-eukaryotes indicate that decomposition took place at least partly on site. This study illustrates the usefulness of combining several lines of evidence in the study of homicide cases to better calibrate PMI inference tools.

  2. Cu(2+) and Fe(2+) mediated photodegradation studies of soil-incorporated chlorpyrifos.

    PubMed

    Rafique, Nazia; Tariq, Saadia R; Ahad, Karam; Taj, Touqeer

    2016-03-01

    The influences of Cu(2+) and Fe(2+) on the photodegradation of soil-incorporated chlorpyrifos were investigated in the present study. Soil samples spiked with chlorpyrifos and the selected metal ions were irradiated with UV light for different intervals of time and analyzed by HPLC. Unsterile and sterile control soil samples amended with pesticide and the selected metals were incubated in the dark at 25 °C for the same time intervals. The results showed that photodegradation of chlorpyrifos followed first-order kinetics. The dissipation half-life (t0.5) of chlorpyrifos decreased from 41 to 20 days under UV irradiation. The rate of chlorpyrifos photodegradation increased in the presence of both metals, i.e., Cu(2+) and Fe(2+): the initially observed t0.5 of 19.8 days decreased to 4.39 days in the case of Cu(2+) and to 19.25 days for Fe(2+). Copper thus increased the rate of photodegradation about 4.5-fold, while the microbial degradation of chlorpyrifos was increased only twofold. The microbial degradation of chlorpyrifos was only negligibly affected by Fe(2+) amendment. The studied trace metals also affected the abiotic degradation of the pesticide in the order Cu(2+) > Fe(2+).
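    Under the first-order kinetics reported here, half-life and rate constant are linked by k = ln 2 / t½, so the quoted half-lives convert directly to rate constants. The helpers below are illustrative; the 19.8-day and 4.39-day values are the abstract's Cu(2+) figures.

```python
import math

def rate_constant(t_half_days):
    """First-order rate constant (per day) from a half-life: k = ln2 / t1/2."""
    return math.log(2.0) / t_half_days

def remaining_fraction(t_half_days, t_days):
    """Fraction of pesticide left after t days of first-order decay."""
    return math.exp(-rate_constant(t_half_days) * t_days)

# Cu(2+) amendment: half-life drops from 19.8 to 4.39 days, i.e. the
# rate constant rises by a factor of 19.8 / 4.39, roughly 4.5.
speedup = rate_constant(4.39) / rate_constant(19.8)
```

    The same arithmetic shows why the microbial pathway (a twofold speed-up) is minor next to the Cu(2+)-mediated photolytic one.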

  3. Hospital factors impact variation in emergency department length of stay more than physician factors.

    PubMed

    Krall, Scott P; Cornelius, Angela P; Addison, J Bruce

    2014-03-01

    To analyze the correlation between the many different emergency department (ED) treatment metric intervals and determine whether the metrics directly impacted by the physician correlate with the "door to room" interval (an interval determined by ED bed availability). Our null hypothesis was that the variation in delay to receiving a room was multifactorial and did not correlate with any one metric interval. We collected daily interval averages from the ED information system, Meditech©. Patient flow metrics were collected on a 24-hour basis. We analyzed the relationship between the time intervals that make up an ED visit and the "arrival to room" interval using simple correlation (Pearson correlation coefficients). Summary statistics of industry-standard metrics were also computed by dividing the intervals into two groups based on the average ED length of stay (LOS) from the National Hospital Ambulatory Medical Care Survey: 2008 Emergency Department Summary. Simple correlation analysis showed that the doctor-to-discharge interval had no correlation with the "door to room" (waiting room) interval (correlation coefficient, CC = 0.000; p = 0.96). "Room to doctor" had a low correlation with "door to room" (CC = 0.143), while "decision to admitted patients departing the ED" had a moderate correlation of 0.29 (p < 0.001). "New arrivals" (daily patient census) correlated strongly with longer "door to room" times (CC = 0.657, p < 0.001), and "door to discharge" times correlated very strongly (CC = 0.804, p < 0.001) with extended "door to room" times. Physician-dependent intervals thus had minimal correlation with the variation in arrival-to-room time, whereas the "door to room" interval was a significant component of the variation in "door to discharge" time, i.e. LOS. The hospital-influenced "admit decision to hospital bed" interval, i.e. hospital inpatient capacity, correlated with delayed "door to room" times.
The other major factor affecting department bed availability was the total number of patients per day; its correlation with increasing "door to room" time reflects the effect of the availability of ED resources (beds) on patient evaluation time. Overall, the time it took for a patient to receive a room appeared to depend more on system resources, such as beds in the ED and the hospital, than on the physician.
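    The Pearson coefficients reported throughout can be reproduced with a few lines (a generic implementation applied to daily interval averages; the example values are illustrative, not the study's data).

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series,
    e.g. daily 'door to room' vs. daily 'door to discharge' averages."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```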

  4. Intact interval timing in circadian CLOCK mutants.

    PubMed

    Cordes, Sara; Gallistel, C R

    2008-08-28

    While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval-timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/- and -/- mutant male mice in a peak-interval procedure with 10 and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing.

  5. Resonance Shift of Single-Axis Acoustic Levitation

    NASA Astrophysics Data System (ADS)

    Xie, Wen-Jun; Wei, Bing-Bo

    2007-01-01

    The resonance shift due to the presence and movement of a rigid spherical sample in a single-axis acoustic levitator is studied with the boundary element method on the basis of a two-cylinder model of the levitator. The introduction of a sample into the sound pressure nodes, where it is usually levitated, reduces the resonant interval Hn (n is the mode number) between the reflector and emitter. The larger the sample radius, the greater the resonance shift. When the sample moves along the symmetric axis, the resonance interval Hn varies in an approximately periodical manner, which reaches the minima near the pressure nodes and the maxima near the pressure antinodes. This suggests a resonance interval oscillation around its minimum if the stably levitated sample is slightly perturbed. The dependence of the resonance shift on the sample radius R and position h for the single-axis acoustic levitator is compared with Leung's theory for a closed rectangular chamber, which shows a good agreement.

  6. The Anaesthetic-ECT Time Interval in Electroconvulsive Therapy Practice--Is It Time to Time?

    PubMed

    Gálvez, Verònica; Hadzi-Pavlovic, Dusan; Wark, Harry; Harper, Simon; Leyden, John; Loo, Colleen K

    2016-01-01

    Because most common intravenous anaesthetics used in ECT have anticonvulsant properties, their plasma-brain concentration at the time of seizure induction might affect seizure expression. The quality of ECT seizure expression has been repeatedly associated with efficacy outcomes. The time interval between the anaesthetic bolus injection and the ECT stimulus (the anaesthetic-ECT time interval) determines the anaesthetic plasma-brain concentration when the ECT stimulus is administered. The aim of this study was to examine the effect of the anaesthetic-ECT time interval on ECT seizure quality and duration. The anaesthetic-ECT time interval was recorded in 771 ECT sessions (84 patients). Right unilateral brief pulse ECT was applied. Anaesthesia given was propofol (1-2 mg/kg) and succinylcholine (0.5-1.0 mg/kg). Seizure quality indices (slow wave onset, amplitude, regularity, stereotypy and post-ictal suppression) and duration were rated with a structured rating scale by a single blinded trained rater. Linear mixed-effects models were used to analyse the effect of the anaesthetic-ECT time interval on seizure quality indices, controlling for propofol dose (mg), ECT charge (mC), ECT session number, days between ECT, age (years), initial seizure threshold (mC) and concurrent medication. Longer anaesthetic-ECT time intervals led to significantly higher quality seizures (p < 0.001 for amplitude, regularity, stereotypy and post-ictal suppression). These results suggest that the anaesthetic-ECT time interval is an important factor to consider in ECT practice. This time interval should be extended to as long as practically possible to facilitate the production of better quality seizures. Close collaboration between the anaesthetist and the psychiatrist is essential. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. A genetic algorithm-based framework for wavelength selection on sample categorization.

    PubMed

    Anzanello, Michel J; Yamashita, Gabrielli; Marcelo, Marcelo; Fogliatto, Flávio S; Ortiz, Rafael S; Mariotti, Kristiane; Ferrão, Marco F

    2017-08-01

    In forensic and pharmaceutical scenarios, the application of chemometrics and optimization techniques has unveiled common and peculiar features of seized medicine and drug samples, helping investigative forces to track illegal operations. This paper proposes a novel framework aimed at identifying relevant subsets of attenuated total reflectance Fourier transform infrared (ATR-FTIR) wavelengths for classifying samples into two classes, for example authentic or forged categories in the case of medicines, or salt or base form in cocaine analysis. In the first step of the framework, the ATR-FTIR spectra were partitioned into equidistant intervals and the k-nearest neighbour (KNN) classification technique was applied to each interval to assign samples to the proper classes. In the next step, the selected intervals were refined through a genetic algorithm (GA), which identifies a limited number of wavelengths from the previously selected intervals with the aim of maximizing classification accuracy. When applied to Cialis®, Viagra®, and cocaine ATR-FTIR datasets, the proposed method substantially decreased the number of wavelengths needed for categorization and increased the classification accuracy. From a practical perspective, the proposed method provides investigative forces with valuable information for monitoring the illegal production of drugs and medicines. In addition, focusing on a reduced subset of wavelengths allows the development of portable devices capable of testing the authenticity of samples during police checks, avoiding the need for later laboratory analyses and reducing equipment expenses. Theoretically, the proposed GA-based approach yields more refined solutions than current methods relying on interval approaches, which tend to include irrelevant wavelengths in the retained intervals. Copyright © 2016 John Wiley & Sons, Ltd.
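    The framework's refinement step, selecting wavelength subsets with a genetic algorithm, can be sketched with bit-vector chromosomes. The toy fitness below stands in for the KNN classification accuracy the authors maximise; all names, parameters and the "informative wavelength" set are illustrative, not from the paper.

```python
import random

def ga_select(n_wavelengths, fitness, pop_size=30, gens=40, p_mut=0.02, seed=0):
    """Minimal GA over bit-vectors: a 1 marks a retained wavelength.
    Tournament selection, one-point crossover, bit-flip mutation,
    keeping the current best each generation (elitism)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_wavelengths)]
           for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        new = [max(pop, key=fitness)]          # elitism
        while len(new) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_wavelengths)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            child = [bit ^ (rng.random() < p_mut) for bit in child]
            new.append(child)
        pop = new
    return max(pop, key=fitness)

# Toy stand-in for KNN accuracy: wavelengths 3, 7 and 11 are "informative",
# and every extra retained wavelength carries a small penalty.
INFORMATIVE = {3, 7, 11}

def toy_fitness(bits):
    chosen = {i for i, b in enumerate(bits) if b}
    return 2.0 * len(chosen & INFORMATIVE) - 0.5 * len(chosen - INFORMATIVE)

best = ga_select(16, toy_fitness)
```

    Penalising subset size mirrors the paper's goal of a small wavelength set suitable for portable devices.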

  8. Screening of CO2 Laser (10.6 μm) Parameters for Prevention of Enamel Erosion

    PubMed Central

    Yu, Hao; de Paula Eduardo, Carlos; Meister, Jörg; Lampert, Friedrich; Attin, Thomas; Wiegand, Annette

    2012-01-01

    Objective: The aim of this study was to screen CO2 laser (10.6 μm) parameters to increase enamel resistance to a continuous-flow erosive challenge. Background data: A new clinical CO2 laser providing pulses of hundreds of microseconds, a range known to increase tooth acid-resistance, has been introduced in the market. Methods: Different laser parameters were tested in 12 groups (n=20) with varying fluences from 0.1 to 0.9 J/cm2, pulse durations from 80 to 400 μs and repetition rates from 180 to 700 Hz. Non-lased samples (n=30) served as controls. All samples were eroded by exposure to hydrochloric acid (pH 2.6) under continuous acid flow (60 μL/min). Calcium and phosphate release into acid was monitored colorimetrically at 30 sec intervals up to 5 min and at 1 min intervals up to a total erosion time of 15 min. Scanning electron microscopic (SEM) analysis was performed in lased samples (n=3). Data were statistically analysed by one-way ANOVA (p<0.05) and Dunnett's post-hoc tests. Results: Calcium and phosphate release were significantly reduced by a maximum of 20% over time in samples irradiated with 0.4 J/cm2 (200 μs) at 450 Hz. Short-time reduction of calcium loss (≤1.5 min) could also be achieved by irradiation with 0.7 J/cm2 (300 μs) at 200 and 300 Hz. Both parameters revealed surface modification. Conclusions: A set of CO2 laser parameters was found that could significantly reduce enamel mineral loss (20%) under in vitro erosive conditions. However, as all parameters also caused surface cracking, they are not recommended for clinical use. PMID:22462778

  9. Differential Nuclear and Mitochondrial DNA Preservation in Post-Mortem Teeth with Implications for Forensic and Ancient DNA Studies

    PubMed Central

    Higgins, Denice; Rohrlach, Adam B.; Kaidonis, John; Townsend, Grant; Austin, Jeremy J.

    2015-01-01

    Major advances in genetic analysis of skeletal remains have been made over the last decade, primarily due to improvements in post-DNA-extraction techniques. Despite this, a key challenge for DNA analysis of skeletal remains is the limited yield of DNA recovered from these poorly preserved samples. Enhanced DNA recovery by improved sampling and extraction techniques would allow further advancements. However, little is known about the post-mortem kinetics of DNA degradation and whether the rate of degradation varies between nuclear and mitochondrial DNA or across different skeletal tissues. This knowledge, along with information regarding ante-mortem DNA distribution within skeletal elements, would inform sampling protocols facilitating development of improved extraction processes. Here we present a combined genetic and histological examination of DNA content and rates of DNA degradation in the different tooth tissues of 150 human molars over short-to-medium post-mortem intervals. DNA was extracted from coronal dentine, root dentine, cementum and pulp of 114 teeth via a silica column method, and the remaining 36 teeth were examined histologically. Real-time quantification assays based on two nuclear DNA fragments (67 bp and 156 bp) and one mitochondrial DNA fragment (77 bp) showed that nuclear and mitochondrial DNA degraded exponentially, but at different rates, depending on post-mortem interval and soil temperature. In contrast to previous studies, we identified differential survival of nuclear and mtDNA in different tooth tissues. Furthermore, histological examination showed that pulp and dentine were rapidly affected by loss of structural integrity, and pulp was completely destroyed in a relatively short time period. Conversely, cementum showed little structural change over the same time period. 
Finally, we confirm that targeted sampling of cementum from teeth buried for up to 16 months can provide a reliable source of nuclear DNA for STR-based genotyping using standard extraction methods, without the need for specialised equipment or large-volume demineralisation steps. PMID:25992635
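    The exponential decay the quantification assays revealed can be characterised by a log-linear least-squares fit, which yields a decay rate per unit post-mortem interval (a generic fit on synthetic numbers, not the authors' statistical model).

```python
import math

def fit_exponential_decay(times, quantities):
    """Fit q(t) = q0 * exp(-k t) by least squares on ln(q) vs t;
    returns (k, q0)."""
    logs = [math.log(q) for q in quantities]
    n = len(times)
    mt, ml = sum(times) / n, sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
             / sum((t - mt) ** 2 for t in times))
    return -slope, math.exp(ml - slope * mt)
```

    Fitting nuclear and mitochondrial quantities separately gives the two rate constants whose difference the study highlights.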

  10. A combined basalt and peridotite perspective on 14 million years of melt generation at the Atlantis Bank segment of the Southwest Indian Ridge: Evidence for temporal changes in mantle dynamics?

    USGS Publications Warehouse

    Coogan, L.A.; Thompson, G.M.; MacLeod, C.J.; Dick, H.J.B.; Edwards, S.J.; Hosford, Scheirer A.; Barry, T.L.

    2004-01-01

    Little is known about temporal variations in melt generation and extraction at mid-ocean ridges, largely due to the paucity of sampling along flow lines. Here we present new whole-rock major and trace element data, and mineral and glass major element data, for 71 basaltic samples (lavas and dykes) and 23 peridotites from the same ridge segment (the Atlantis Bank segment of the Southwest Indian Ridge). These samples span an age range of almost 14 My and, in combination with the large amount of published data from this area, allow temporal variations in melting processes to be investigated. Basalts show systematic changes in incompatible trace element ratios, with the older samples (from ~8-14 Ma) having more depleted incompatible trace element ratios than the younger ones. There is, however, no corresponding change in peridotite compositions. Peridotites come from the top of the melting column, where the extent of melting is highest, suggesting that the maximum degree of melting did not change over this interval of time. New and published Nd isotopic ratios of basalts, dykes and gabbros from this segment suggest that the average source composition has been approximately constant over this time interval. These data are most readily explained by a model in which the average source composition and temperature have not changed over the last 14 My, but the dynamics of mantle flow (active-to-passive) or melt extraction (less-to-more efficient extraction from the 'wings' of the melting column) has changed significantly. This hypothesised change in mantle dynamics occurs at roughly the same time as a change from a period of detachment faulting to 'normal' crustal accretion. We speculate that active mantle flow may impart sufficient shear stress on the base of the lithosphere to rotate the regional stress field and promote the formation of low angle normal faults. © 2004 Elsevier B.V. All rights reserved.

  11. Optimal estimation of suspended-sediment concentrations in streams

    USGS Publications Warehouse

    Holtschlag, D.J.

    2001-01-01

    Optimal estimators are developed for computation of suspended-sediment concentrations in streams. The estimators are a function of parameters, computed by use of generalized least squares, which simultaneously account for effects of streamflow, seasonal variations in average sediment concentrations, a dynamic error component, and the uncertainty in concentration measurements. The parameters are used in a Kalman filter for on-line estimation and an associated smoother for off-line estimation of suspended-sediment concentrations. The accuracies of the optimal estimators are compared with alternative time-averaging interpolators and flow-weighting regression estimators by use of long-term daily-mean suspended-sediment concentration and streamflow data from 10 sites within the United States. For sampling intervals from 3 to 48 days, the standard errors of on-line and off-line optimal estimators ranged from 52.7 to 107%, and from 39.5 to 93.0%, respectively. The corresponding standard errors of linear and cubic-spline interpolators ranged from 48.8 to 158%, and from 50.6 to 176%, respectively. The standard errors of simple and multiple regression estimators, which did not vary with the sampling interval, were 124 and 105%, respectively. Thus, the optimal off-line estimator (Kalman smoother) had the lowest error characteristics of those evaluated. Because suspended-sediment concentrations are typically measured at less than 3-day intervals, use of optimal estimators will likely result in significant improvements in the accuracy of continuous suspended-sediment concentration records. Additional research on the integration of direct suspended-sediment concentration measurements and optimal estimators applied at hourly or shorter intervals is needed.
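    The on-line estimator developed here is a Kalman filter. For intuition, a scalar random-walk version shows the predict/update cycle and how days without a sediment sample are handled; this is a didactic sketch, not the paper's generalized-least-squares parameterisation with streamflow and seasonal terms.

```python
def kalman_filter_1d(obs, q, r, x0, p0):
    """Scalar Kalman filter for a random-walk state (e.g. log of
    suspended-sediment concentration). q is the process variance, r the
    measurement variance; None entries are days without a sample, for
    which only the predict step runs."""
    x, p = x0, p0
    estimates = []
    for z in obs:
        p = p + q                  # predict: uncertainty grows each day
        if z is not None:          # update only when a measurement exists
            k = p / (p + r)        # Kalman gain
            x = x + k * (z - x)
            p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

    The off-line smoother the paper favours additionally runs a backward pass over these filtered estimates, which is why its standard errors are lower.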

  12. Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources.

    PubMed

    Klumpp, John; Brandl, Alexander

    2015-03-01

    A particle counting and detection system is proposed that searches for elevated count rates in multiple energy regions simultaneously. The system analyzes time-interval data (e.g., time between counts), as this was shown to be a more sensitive technique for detecting low count rate sources compared to analyzing counts per unit interval (Luo et al. 2013). Two distinct versions of the detection system are developed. The first is intended for situations in which the sample is fixed and can be measured for an unlimited amount of time. The second version is intended to detect sources that are physically moving relative to the detector, such as a truck moving past a fixed roadside detector or a waste storage facility passing beneath an airborne detector. In both cases, the detection system is expected to be active indefinitely; i.e., it is an online detection system. Both versions of the multi-energy detection system are compared to their respective gross count rate detection systems in terms of Type I and Type II error rates and sensitivity.
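    The advantage of time-interval data can be illustrated with a conjugate Bayesian update: the times between counts of a Poisson process are exponentially distributed, so a Gamma prior on the count rate has a closed-form posterior. This is a generic sketch of the idea, not the detection system proposed in the abstract; the prior parameters and the background rate `lam0` are hypothetical.

```python
import numpy as np
from scipy.stats import gamma

def posterior_rate(intervals, a=1.0, b=1.0):
    """Posterior over a Poisson count rate from time-between-counts data.

    Inter-arrival times of a Poisson process are Exponential(lam), so a
    Gamma(a, b) prior on lam is conjugate: the posterior is
    Gamma(a + n, b + sum(intervals)).  Returns (shape, rate, posterior mean).
    """
    n = len(intervals)
    shape = a + n
    rate = b + float(np.sum(intervals))
    return shape, rate, shape / rate

def alarm_probability(shape, rate, lam0):
    """P(lam > lam0 | data): probability the rate exceeds a background lam0."""
    return gamma.sf(lam0, a=shape, scale=1.0 / rate)
```

    Because every inter-arrival time updates the posterior immediately, this style of analysis accumulates evidence continuously, which suits the indefinitely active (online) operation described above.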

  13. Forensic use of a subtropical blowfly: the first case indicating minimum postmortem interval (mPMI) in southern Brazil and first record of Sarconesia chlorogaster from a human corpse.

    PubMed

    Vairo, Karine P; Corrêa, Rodrigo C; Lecheta, Melise C; Caneparo, Maria F; Mise, Kleber M; Preti, Daniel; de Carvalho, Claudio J B; Almeida, Lucia M; Moura, Mauricio O

    2015-01-01

    Southern Brazil is unique due to its subtropical climate. Here, we report on the first forensic entomology case and the first record of Sarconesia chlorogaster (Wiedemann) in a human corpse in this region. Fly samples were collected from a body found indoors at 20°C. Four species were found, but only Chrysomya albiceps (Wiedemann) and S. chlorogaster were used to estimate the minimum postmortem interval (mPMI). The mPMI was calculated using accumulated degree hours (ADH) and developmental time. The S. chlorogaster puparium collected was light in color, so we conducted an experiment to estimate the time since initiation of pupation more accurately, finding full tanning after 3 h. Development of C. albiceps at 20°C to the end of the third instar takes 7.4 days. The mPMI based on S. chlorogaster (developmental time until the third instar plus no more than 3 h of pupal development) was 7.6 days. © 2014 American Academy of Forensic Sciences.
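    The ADH arithmetic behind such estimates is simple: hourly temperatures above a developmental threshold are accumulated until the species' required total is reached. A minimal sketch, assuming (purely for illustration) a threshold of 0 °C; real casework uses species-specific thresholds and measured hourly temperatures.

```python
def accumulated_degree_hours(temps_c, base_temp_c=0.0):
    """Sum of hourly temperatures above a developmental threshold."""
    return sum(max(t - base_temp_c, 0.0) for t in temps_c)

def mpmi_days(required_adh, temp_c, base_temp_c=0.0):
    """Days needed to accumulate the required ADH at a constant temperature,
    as in the constant 20 degC scenario described above."""
    hourly = max(temp_c - base_temp_c, 0.0)
    return required_adh / hourly / 24.0
```

    Under these assumptions, 7.4 days of development at a constant 20 °C corresponds to 7.4 × 24 × 20 = 3552 ADH, so the calculation round-trips to the abstract's figure.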

  14. Intercalibration of radioisotopic and astrochronologic time scales for the Cenomanian-Turonian boundary interval, Western Interior Basin, USA

    USGS Publications Warehouse

    Meyers, S.R.; Siewert, S.E.; Singer, B.S.; Sageman, B.B.; Condon, D.J.; Obradovich, J.D.; Jicha, B.R.; Sawyer, D.A.

    2012-01-01

    We develop an intercalibrated astrochronologic and radioisotopic time scale for the Cenomanian-Turonian boundary (CTB) interval near the Global Stratotype Section and Point in Colorado, USA, where orbitally influenced rhythmic strata host bentonites that contain sanidine and zircon suitable for 40Ar/39Ar and U-Pb dating. Paired 40Ar/39Ar and U-Pb ages are determined from four bentonites that span the Vascoceras diartianum to Pseudaspidoceras flexuosum ammonite biozones, utilizing both newly collected material and legacy sanidine samples of J. Obradovich. Comparison of the 40Ar/39Ar and U-Pb results underscores the strengths and limitations of each system, and supports an astronomically calibrated Fish Canyon sanidine standard age of 28.201 Ma. The radioisotopic data and published astrochronology are employed to develop a new CTB time scale, using two statistical approaches: (1) a simple integration that yields a CTB age of 93.89 ± 0.14 Ma (2σ; total radioisotopic uncertainty), and (2) a Bayesian intercalibration that explicitly accounts for orbital time scale uncertainty, and yields a CTB age of 93.90 ± 0.15 Ma (95% credible interval; total radioisotopic and orbital time scale uncertainty). Both approaches firmly anchor the floating orbital time scale, and the Bayesian technique yields astronomically recalibrated radioisotopic ages for individual bentonites, with analytical uncertainties at the permil level of resolution, and total uncertainties below 2‰. Using our new results, the duration between the Cenomanian-Turonian and the Cretaceous-Paleogene boundaries is 27.94 ± 0.16 Ma, with an uncertainty of less than one-half of a long eccentricity cycle. © 2012 Geological Society of America.

  15. Resistance to penicillin of Staphylococcus aureus isolates from cows with high somatic cell counts in organic and conventional dairy herds in Denmark

    PubMed Central

    Bennedsgaard, Torben W; Thamsborg, Stig M; Aarestrup, Frank M; Enevoldsen, Carsten; Vaarst, Mette; Christoffersen, Anna B

    2006-01-01

    Background Quarter milk samples from cows with high risk of intramammary infection were examined to determine the prevalence of Staphylococcus aureus (SA) and penicillin resistant SA (SAr) in conventional and organic dairy herds and herds converting to organic farming in a combined longitudinal and cross-sectional study. Methods 20 conventional herds, 18 organic herds that converted before 1995, and 19 herds converting to organic farming in 1999 or 2000 were included in the study. Herds converting to organic farming were sampled three times one year apart; the other herds were sampled once. Risk of infection was estimated based on somatic cell count, milk production, breed, age and lactation stage. Results The high-risk cows represented about 49% of the cows in the herds. The overall prevalence of SA and SAr among these cows was 29% (95% confidence interval: 24%–34%) and 4% (95% confidence interval: 2%–5%) respectively. The prevalence of penicillin resistance among SA infected cows was 12% (95% confidence interval: 6%–19%) when calculated from the first herd visits. No statistically significant differences were observed in the prevalence of SAr or the proportion of isolates resistant to penicillin between herd groups. Conclusion The proportion of isolates resistant to penicillin was low compared to studies in other countries except Norway and Sweden. Based on the low prevalence of penicillin resistance of SA, penicillin should still be the first choice of antimicrobial agent for treatment of bovine intramammary infection in Denmark. PMID:17125515
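    The abstract does not state how its confidence intervals were computed; one standard choice for binomial proportions like these prevalences is the Wilson score interval, sketched below. The counts in the usage example are hypothetical round numbers, not the study's raw data.

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion.

    z = 1.96 gives an approximate 95% interval.  Unlike the simple
    normal ("Wald") interval, it behaves sensibly near 0% and 100%.
    """
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half
```

    For example, 29 positives out of a hypothetical 100 high-risk cows gives an interval of roughly 21%–39%; the study's narrower 24%–34% interval reflects its larger sample.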

  16. An Extension of the Time-Spectral Method to Overset Solvers

    NASA Technical Reports Server (NTRS)

    Leffell, Joshua Isaac; Murman, Scott M.; Pulliam, Thomas

    2013-01-01

    Relative motion in the Cartesian or overset framework causes certain spatial nodes to move in and out of the physical domain as they are dynamically blanked by moving solid bodies. This poses a problem for the conventional Time-Spectral approach, which expands the solution at every spatial node into a Fourier series spanning the period of motion. The proposed extension to the Time-Spectral method treats unblanked nodes in the conventional manner but expands the solution at dynamically blanked nodes in a basis of barycentric rational polynomials spanning partitions of contiguously defined temporal intervals. Rational polynomials avoid Runge's phenomenon on the equidistant time samples of these sub-periodic intervals. Fourier- and rational polynomial-based differentiation operators are used in tandem to provide a consistent hybrid Time-Spectral overset scheme capable of handling relative motion. The hybrid scheme is tested with a linear model problem and implemented within NASA's OVERFLOW Reynolds-averaged Navier-Stokes (RANS) solver. The hybrid Time-Spectral solver is then applied to inviscid and turbulent RANS cases of plunging and pitching airfoils and compared to time-accurate and experimental data. A limiter was applied in the turbulent case to avoid undershoots in the undamped turbulent eddy viscosity while maintaining accuracy. The hybrid scheme matches the performance of the conventional Time-Spectral method and converges to the time-accurate results with increased temporal resolution.
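    The conventional Time-Spectral building block retained at unblanked nodes is the Fourier differentiation operator acting on equidistant samples of one period. A minimal sketch for an odd number of samples N (the standard odd-N spectral differentiation formula), not tied to the OVERFLOW implementation:

```python
import numpy as np

def time_spectral_operator(N, T):
    """Fourier differentiation matrix for N (odd) equidistant samples over
    one period T.  D @ u approximates du/dt with spectral accuracy for
    periodic, band-limited u."""
    assert N % 2 == 1, "odd sample count gives the standard csc formula"
    D = np.zeros((N, N))
    for j in range(N):
        for k in range(N):
            if j != k:
                D[j, k] = 0.5 * (-1) ** (j - k) / np.sin(np.pi * (j - k) / N)
    return D * (2 * np.pi / T)  # rescale from period 2*pi to period T
```

    Applied to samples of sin(t) on a 9-point periodic grid, the operator reproduces cos(t) to machine precision, which is the property that makes the conventional scheme attractive at unblanked nodes.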

  17. Factors affecting hematology and plasma biochemistry in the southwest carpet python (Morelia spilota imbricata).

    PubMed

    Bryant, Gillian L; Fleming, Patricia A; Twomey, Leanne; Warren, Kristin A

    2012-04-01

    Despite increased worldwide popularity of keeping reptiles as pets, we know little about hematologic and biochemical parameters of most reptile species, or how these measures may be influenced by intrinsic and extrinsic factors. Blood samples from 43 wild-caught pythons (Morelia spilota imbricata) were collected at various stages of a 3-yr ecological study in Western Australia. Reference intervals are reported for 35 individuals sampled at the commencement of the study. As pythons were radiotracked for varying lengths of time (radiotransmitters were surgically implanted), repeated sampling was undertaken from some individuals. However, because of our ad hoc sampling design we cannot be definitive about temporal factors that were most important or that exclusively influenced blood parameters. There was no significant effect of sex or the presence of a hemogregarine parasite on blood parameters. Erythrocyte measures were highest for pythons captured in the jarrah forest and at the stage of radiotransmitter implantation, which was also linked with shorter time in captivity. Basophil count, the only leukocyte influenced by the factors tested, was highest when the python was anesthetized, as was globulin concentration. Albumin and the albumin:globulin ratio were more concentrated in summer (as was phosphorus) and at the initial stage of radiotransmitter placement (as was calcium). No intrinsic or extrinsic factors influenced creatine kinase, aspartate aminotransferase, uric acid, or total protein. This study demonstrates that factors including season, location, surgical radiotransmitter placement, and anesthetic state can influence blood parameters of M. s. imbricata. For accurate diagnosis, veterinarians should be aware that the existing reference intervals for this species are outdated and that interpretation is constrained by a limited understanding of the influence of intrinsic and extrinsic factors.

  18. Missing the Mark? A Two Time Point Cohort Study Estimating Intestinal Parasite Prevalence in Informal Settlements in Lima, Peru.

    PubMed

    Cooper, Michael Townsend; Searing, Rapha A; Thompson, David M; Bard, David; Carabin, Hélène; Gonzales, Carlos; Zavala, Carmen; Woodson, Kyle; Naifeh, Monique

    2017-01-01

    Objectives: The World Health Organization's (WHO) recommendations list Peru as potentially needing prevention of soil-transmitted helminthiasis (STH). Prevalence of STH varies regionally and remains understudied in the newest informal settlements of the capital city, Lima. The purpose of this study was to evaluate the need for Mass Drug Administration (MDA) of antiparasitic drugs in the newest informal settlements of Lima. The aim of this study was to estimate the season-specific prevalence of STH to determine if these prevalence estimates met the WHO threshold for MDA in 3 informal settlements. Methods: A 2 time point cohort study was conducted among a sample of 140 children aged 1 to 10 years living in 3 purposively sampled informal settlements of Lima, Peru. Children were asked to provide 2 stool samples that were analyzed with the spontaneous sedimentation in tube technique. The season-specific prevalence proportions of MDA-targeted STH were estimated using a hidden (latent) Markov modeling approach to adjust for repeated measurements over the 2 seasons and the imperfect validity of the screening tests. Results: The prevalence of MDA targeted STH was low at 2.2% (95% confidence interval = 0.3% to 6%) and 3.8% (95% confidence interval = 0.7% to 9.3%) among children sampled in the summer and winter months, respectively, when using the most conservative estimate of test sensitivity. These estimates were below the WHO threshold for MDA (20%). Conclusions: Empiric treatment for STH by organizations active in the newest informal settlements is not supported by the data and could contribute to unnecessary medication exposures and poor allocation of resources.

  19. Missing the Mark? A Two Time Point Cohort Study Estimating Intestinal Parasite Prevalence in Informal Settlements in Lima, Peru

    PubMed Central

    Cooper, Michael Townsend; Searing, Rapha A.; Thompson, David M.; Bard, David; Carabin, Hélène; Gonzales, Carlos; Zavala, Carmen; Woodson, Kyle; Naifeh, Monique

    2017-01-01

    Objectives: The World Health Organization’s (WHO) recommendations list Peru as potentially needing prevention of soil-transmitted helminthiasis (STH). Prevalence of STH varies regionally and remains understudied in the newest informal settlements of the capital city, Lima. The purpose of this study was to evaluate the need for Mass Drug Administration (MDA) of antiparasitic drugs in the newest informal settlements of Lima. The aim of this study was to estimate the season-specific prevalence of STH to determine if these prevalence estimates met the WHO threshold for MDA in 3 informal settlements. Methods: A 2 time point cohort study was conducted among a sample of 140 children aged 1 to 10 years living in 3 purposively sampled informal settlements of Lima, Peru. Children were asked to provide 2 stool samples that were analyzed with the spontaneous sedimentation in tube technique. The season-specific prevalence proportions of MDA-targeted STH were estimated using a hidden (latent) Markov modeling approach to adjust for repeated measurements over the 2 seasons and the imperfect validity of the screening tests. Results: The prevalence of MDA targeted STH was low at 2.2% (95% confidence interval = 0.3% to 6%) and 3.8% (95% confidence interval = 0.7% to 9.3%) among children sampled in the summer and winter months, respectively, when using the most conservative estimate of test sensitivity. These estimates were below the WHO threshold for MDA (20%). Conclusions: Empiric treatment for STH by organizations active in the newest informal settlements is not supported by the data and could contribute to unnecessary medication exposures and poor allocation of resources. PMID:29152541

  20. A single-sweep, nanosecond time resolution laser temperature-jump apparatus

    NASA Astrophysics Data System (ADS)

    Ballew, R. M.; Sabelko, J.; Reiner, C.; Gruebele, M.

    1996-10-01

    We describe a fast temperature-jump (T-jump) apparatus capable of acquiring kinetic relaxation transients via real-time fluorescence detection over a time interval from nanoseconds to milliseconds in a single sweep. The method is suitable for aqueous solutions, relying upon the direct absorption of laser light by the bulk water. This obviates the need for additives (serving as optical or conductive heaters) that may interact with the sample under investigation. The longitudinal temperature profile is made uniform by counterpropagating heating pulses. Dead time is limited to one period of the probe laser (16 ns). The apparatus response is tested with aqueous tryptophan and the diffusion-controlled dimerization of proflavine.

  1. Testing for time-based correlates of perceived gender discrimination.

    PubMed

    Blau, Gary; Tatum, Donna Surges; Ward-Cook, Kory; Dobria, Lidia; McCoy, Keith

    2005-01-01

    Using a sample of 201 medical technologists (MTs) over a five-year period, this study extends initial findings on perceived gender discrimination (PGD) by Blau and Tatum (2000) by applying organizational justice variables and internal-external locus of control as hypothesized correlates of PGD. Three types of organizational justice were measured: distributive, procedural, and interactional. Locus of control was related to PGD, with internals perceiving lower PGD, and distributive, procedural, and interactional justice were all negatively related to PGD. However, increasing the time interval between these correlates and PGD weakened their relationships. The relationship of interactional justice to PGD remained the most "resistant" to attenuation over time.

  2. Transverse development of the human jaws between the ages of 8.5 and 15.5 years, studied longitudinally with use of implants.

    PubMed

    Korn, E L; Baumrind, S

    1990-06-01

    We report longitudinal data on the transverse widening of the maxilla and mandible from a sample of normal subjects (11 males and 20 females) with metallic implants of the Bjork type. Data were from measurements on lateral and frontal (posterior-anterior) cephalograms generated at annual intervals between the ages of 8.5 and 15.5 years (although data were not available for all subjects at all time points). The maxillary data were, in general, similar to those reported by Bjork and Skieller (1974, 1977) for a smaller sample of slightly younger boys. During the age interval studied, transverse widening was greater in the more posterior part of the palate. [The mean annual rate of change in the posterior-most (zygomatic) region was 0.43 mm, SD = 0.18 mm; p < 0.001.] Although the rate of palatal widening was not large in absolute terms, widening appeared to continue throughout the age interval under study, and there was no evidence to support the conventionally accepted idea that palatal growth in the transverse dimension tapers off substantially or even ceases during the age interval under observation. Evidence of statistically significant widening of the mandibular arch by means of transverse rotation of the osseous matrix was noted in nine of the 29 subjects for whom three-dimensional mandibular information was available. For these nine subjects, the estimated annual increase in mandibular arch angle ranged from 0.52° to 1.40°. As far as we are aware, this is the first report of mandibular matrix rotation in the transverse direction from a sample of subjects with metallic implants. The finding that spontaneous changes in this dimension are relatively common raises the possibility that classical attitudes concerning the immutability of osseous relationships in the symphyseal region during growth may be inappropriate.

  3. Discrete- vs. Continuous-Time Modeling of Unequally Spaced Experience Sampling Method Data.

    PubMed

    de Haan-Rietdijk, Silvia; Voelkle, Manuel C; Keijsers, Loes; Hamaker, Ellen L

    2017-01-01

    The Experience Sampling Method is a common approach in psychological research for collecting intensive longitudinal data with high ecological validity. One characteristic of ESM data is that it is often unequally spaced, because the measurement intervals within a day are deliberately varied, and measurement continues over several days. This poses a problem for discrete-time (DT) modeling approaches, which are based on the assumption that all measurements are equally spaced. Nevertheless, DT approaches such as (vector) autoregressive modeling are often used to analyze ESM data, for instance in the context of affective dynamics research. There are equivalent continuous-time (CT) models, but they are more difficult to implement. In this paper we take a pragmatic approach and evaluate the practical relevance of the violated model assumption in DT AR(1) and VAR(1) models, for the N = 1 case. We use simulated data under an ESM measurement design to investigate the bias in the parameters of interest under four different model implementations, ranging from the true CT model that accounts for all the exact measurement times, to the crudest possible DT model implementation, where even the nighttime is treated as a regular interval. An analysis of empirical affect data illustrates how the differences between DT and CT modeling can play out in practice. We find that the size and the direction of the bias in DT (V)AR models for unequally spaced ESM data depend quite strongly on the true parameter in addition to data characteristics. Our recommendation is to use CT modeling whenever possible, especially now that new software implementations have become available.
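    The DT/CT relationship at the heart of this comparison can be stated in one line: for a first-order CT process with drift parameter theta (an Ornstein-Uhlenbeck process), the autoregressive coefficient implied at measurement interval dt is phi(dt) = exp(-theta * dt). A sketch, with theta and the interval pattern chosen only for illustration:

```python
import numpy as np

def implied_ar_coefficient(theta, dt):
    """DT autoregressive coefficient implied by a CT (OU) drift theta at
    measurement interval dt: phi(dt) = exp(-theta * dt)."""
    return np.exp(-theta * dt)

def average_phi(theta, intervals):
    """Rough average phi seen when intervals vary (illustrative only; an
    actual DT AR(1) estimate is a more complex weighting of the data)."""
    return float(np.mean([implied_ar_coefficient(theta, d) for d in intervals]))
```

    A naive DT AR(1) fit that pools a 12-hour overnight gap with 1-hour daytime gaps mixes very different implied phi values, which is one route by which the bias described above arises.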

  4. Discrete- vs. Continuous-Time Modeling of Unequally Spaced Experience Sampling Method Data

    PubMed Central

    de Haan-Rietdijk, Silvia; Voelkle, Manuel C.; Keijsers, Loes; Hamaker, Ellen L.

    2017-01-01

    The Experience Sampling Method is a common approach in psychological research for collecting intensive longitudinal data with high ecological validity. One characteristic of ESM data is that it is often unequally spaced, because the measurement intervals within a day are deliberately varied, and measurement continues over several days. This poses a problem for discrete-time (DT) modeling approaches, which are based on the assumption that all measurements are equally spaced. Nevertheless, DT approaches such as (vector) autoregressive modeling are often used to analyze ESM data, for instance in the context of affective dynamics research. There are equivalent continuous-time (CT) models, but they are more difficult to implement. In this paper we take a pragmatic approach and evaluate the practical relevance of the violated model assumption in DT AR(1) and VAR(1) models, for the N = 1 case. We use simulated data under an ESM measurement design to investigate the bias in the parameters of interest under four different model implementations, ranging from the true CT model that accounts for all the exact measurement times, to the crudest possible DT model implementation, where even the nighttime is treated as a regular interval. An analysis of empirical affect data illustrates how the differences between DT and CT modeling can play out in practice. We find that the size and the direction of the bias in DT (V)AR models for unequally spaced ESM data depend quite strongly on the true parameter in addition to data characteristics. Our recommendation is to use CT modeling whenever possible, especially now that new software implementations have become available. PMID:29104554

  5. A Three Month Comparative Evaluation of the Effect of Different Surface Treatment Agents on the Surface Integrity and Softness of Acrylic based Soft Liner: An In vivo Study

    PubMed Central

    Mahajan, Neerja; Naveen, Y. G.; Sethuraman, Rajesh

    2017-01-01

    Introduction Acrylic based soft liners are cost-effective, yet are inferior in durability as compared to silicone based liners. Hence, this study was conducted to evaluate if the softness and surface integrity of acrylic based soft liner can be maintained by using different surface treatment agents. Aim To comparatively evaluate the effects of Varnish, Monopoly and Kregard surface treatment agents on the surface integrity and softness of acrylic based soft liner at baseline, at one month and after three months. Materials and Methods A total of 37 participants who required conventional maxillary dentures were selected according to the determined inclusion and exclusion criteria of the study. In the maxillary denture on the denture bearing surface, eight palatal recesses (5 mm × 3 mm) were made and filled with acrylic based soft liner (Permasoft). The soft liners in these recesses were given surface treatment and divided as control (uncoated), Varnish, Monopoly and Kregard groups. The hardness and surface integrity were evaluated with Shore A Durometer and Scanning Electron Microscope (SEM) respectively at baseline, one month and three months. Surface integrity between groups was compared using Kruskal-Wallis test. Intergroup comparison for hardness was done using ANOVA and Tukey’s HSD post-hoc tests. Results Amongst all the groups tested, surface integrity was maintained in the Kregard group, as compared to control, Varnish and Monopoly groups for all three time intervals (p < 0.001). Kregard treated samples also demonstrated significantly higher softness at all the time intervals (p < 0.001). Conclusion Surface treatment with Kregard demonstrated better surface integrity and softness at all the time intervals. PMID:29207842

  6. A new stochastic model considering satellite clock interpolation errors in precise point positioning

    NASA Astrophysics Data System (ADS)

    Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong

    2018-03-01

    Precise clock products are typically interpolated based on the sampling interval of the observational data when they are used in precise point positioning. However, due to the occurrence of white noise in atomic clocks, a residual component of such noise will inevitably reside within the observations when clock errors are interpolated, and such noise will affect the resolution of the positioning results. In this paper, which is based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was structured that considered the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 stations worldwide from the IGS showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were respectively shortened by 4.8% and 4.0% when the IGR and IGS 300-s-interval clock products were used and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively suppressed.

  7. Intact Interval Timing in Circadian CLOCK Mutants

    PubMed Central

    Cordes, Sara; Gallistel, C. R.

    2008-01-01

    While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/− and −/− mutant male mice in a peak-interval procedure with 10- and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing. PMID:18602902

  8. Reliability of the Parabola Approximation Method in Heart Rate Variability Analysis Using Low-Sampling-Rate Photoplethysmography.

    PubMed

    Baek, Hyun Jae; Shin, JaeWook; Jin, Gunwoo; Cho, Jaegeol

    2017-10-24

    Photoplethysmographic signals are useful for heart rate variability analysis in practical ambulatory applications. While reducing the sampling rate of signals is an important consideration for modern wearable devices that enable 24/7 continuous monitoring, few studies have investigated how to compensate for the low timing resolution of low-sampling-rate signals to achieve accurate heart rate variability analysis. In this study, we evaluated the parabola approximation method against the conventional cubic spline interpolation method for the time-, frequency-, and nonlinear-domain variables of heart rate variability. For each parameter, the intra-class correlation, standard error of measurement, Bland-Altman 95% limits of agreement and root mean squared relative error were presented. We also measured the elapsed time required to compute each interpolation algorithm. The results indicated that parabola approximation is a simple, fast, and accurate method for compensating the low timing resolution of pulse beat intervals, with performance comparable to conventional cubic spline interpolation. Even though the absolute values of the heart rate variability variables calculated from a signal sampled at 20 Hz did not exactly match those calculated from a reference signal sampled at 250 Hz, the parabola approximation method remains a good interpolation method for assessing trends in HRV measurements for low-power wearable applications.
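    The parabola approximation itself is compact enough to state directly: fit a parabola through three consecutive samples around a pulse peak and take its vertex as the refined peak location. A sketch of that vertex formula (the standard three-point expression; variable names are ours, not from the paper):

```python
def parabola_peak_offset(y0, y1, y2):
    """Sub-sample offset of the true peak from the middle of three
    consecutive samples (y1 the local maximum), assuming the signal is
    locally parabolic:

        d = 0.5 * (y0 - y2) / (y0 - 2*y1 + y2),   with |d| <= 0.5.

    Divide the result by the sampling rate to convert samples to seconds.
    """
    denom = y0 - 2.0 * y1 + y2
    return 0.0 if denom == 0.0 else 0.5 * (y0 - y2) / denom
```

    At 20 Hz the raw beat-to-beat timing is quantized to 50 ms; this sub-sample refinement is what recovers finer pulse-interval resolution without raising the sampling rate.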

  9. Evaluation of listener-based anuran surveys with automated audio recording devices

    USGS Publications Warehouse

    Shearin, A. F.; Calhoun, A.J.K.; Loftin, C.S.

    2012-01-01

    Volunteer-based audio surveys are used to document long-term trends in anuran community composition and abundance. Current sampling protocols, however, are not region- or species-specific and may not detect relatively rare or audibly cryptic species. We used automated audio recording devices to record calling anurans during 2006–2009 at wetlands in Maine, USA. We identified species calling, chorus intensity, time of day, and environmental variables when each species was calling and developed logistic and generalized mixed models to determine the time interval and environmental variables that optimize detection of each species during peak calling periods. We detected eight of nine anurans documented in Maine. Individual recordings selected from the sampling period (0.5 h past sunset to 0100 h) described in the North American Amphibian Monitoring Program (NAAMP) detected fewer species than were detected in recordings from 30 min past sunset until sunrise. Time of maximum detection of presence and full chorusing for three species (green frogs, mink frogs, pickerel frogs) occurred after the NAAMP sampling end time (0100 h). The NAAMP protocol’s sampling period may result in omissions and misclassifications of chorus sizes for certain species. These potential errors should be considered when interpreting trends generated from standardized anuran audio surveys.

  10. speed-ne: Software to simulate and estimate genetic effective population size (Ne ) from linkage disequilibrium observed in single samples.

    PubMed

    Hamilton, Matthew B; Tartakovsky, Maria; Battocletti, Amy

    2018-05-01

    The genetic effective population size, Ne, can be estimated from the average gametic disequilibrium (r̂²) between pairs of loci, but such estimates require evaluation of assumptions and currently have few methods to estimate confidence intervals. speed-ne is a suite of MATLAB computer code functions to estimate N̂e from r̂² with a graphical user interface and a rich set of outputs that aid in understanding data patterns and comparing multiple estimators. speed-ne includes functions to either generate or input simulated genotype data to facilitate comparative studies of N̂e estimators under various population genetic scenarios. speed-ne was validated with data simulated under both time-forward and time-backward coalescent models of genetic drift. Three classes of estimators were compared with simulated data to examine several general questions: what are the impacts of microsatellite null alleles on N̂e, how should missing data be treated, and does disequilibrium contributed by reduced recombination among some loci in a sample impact N̂e. Estimators differed greatly in precision in the scenarios examined, and a widely employed N̂e estimator exhibited the largest variances among replicate data sets. speed-ne implements several jackknife approaches to estimate confidence intervals, and simulated data showed that jackknifing over loci and jackknifing over individuals provided ~95% confidence interval coverage for some estimators and should be useful for empirical studies. speed-ne provides an open-source extensible tool for estimation of N̂e from empirical genotype data and to conduct simulations of both microsatellite and single nucleotide polymorphism (SNP) data types to develop expectations and to compare N̂e estimators. © 2018 John Wiley & Sons Ltd.
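    As a pointer to what such estimators look like, one widely used moment-style estimator (in the spirit of Hill's 1981 result) inverts the approximation E[r²] ≈ 1/(3Ne) + 1/S, where S is the sample size; speed-ne implements several estimators and bias corrections beyond this sketch.

```python
def ne_from_r2(mean_r2, sample_size):
    """Moment estimator of effective population size from mean gametic
    disequilibrium:  E[r2] ~ 1/(3*Ne) + 1/S  =>  Ne = 1/(3*(r2 - 1/S)).

    The 1/S term removes the disequilibrium expected from finite sampling
    alone.  Returns float('inf') when r2 <= 1/S (no residual drift signal).
    """
    adj = mean_r2 - 1.0 / sample_size
    return float("inf") if adj <= 0 else 1.0 / (3.0 * adj)
```

    The infinite estimate for r² at or below the sampling expectation is one reason confidence intervals (e.g., the jackknife approaches above) matter so much for this class of estimator.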

  11. Levonorgestrel release rates over 5 years with the Liletta® 52-mg intrauterine system.

    PubMed

    Creinin, Mitchell D; Jansen, Rolf; Starr, Robert M; Gobburu, Joga; Gopalakrishnan, Mathangi; Olariu, Andrea

    2016-10-01

    To understand the potential duration of action of Liletta®, we conducted this study to estimate levonorgestrel (LNG) release rates over approximately 5½ years of product use. Clinical sites in the U.S. Phase 3 study of Liletta collected the LNG intrauterine systems (IUSs) from women who discontinued the study. We randomly selected samples within 90-day intervals after discontinuation of IUS use through 900 days (approximately 2.5 years), and within 180-day intervals for the remaining duration through 5.4 years (1980 days), to evaluate residual LNG content. We also performed an initial LNG content analysis using 10 randomly selected samples from a single lot. We calculated the average ex vivo release rate using the residual LNG content over the duration of the analysis. We analyzed 64 samples within 90-day intervals (range 6-10 samples per interval) through 900 days and 36 samples within 180-day intervals (6 samples per interval) for the remaining duration. The initial content analysis averaged 52.0 ± 1.8 mg. We calculated an average initial release rate of 19.5 mcg/day that decreased to 17.0, 14.8, 12.9, 11.3 and 9.8 mcg/day after 1, 2, 3, 4 and 5 years, respectively. The 5-year average release rate is 14.7 mcg/day. The estimated initial LNG release rate and the gradual decay of the estimated release rate are consistent with the target design and function of the product. The calculated LNG content and release rate curves support the continued evaluation of Liletta as a contraceptive for 5 or more years of use. Liletta LNG content and release rates are comparable to published data for another LNG 52-mg IUS. The release rate at 5 years is more than double the published release rate at 3 years with an LNG 13.5-mg IUS, suggesting continued efficacy of Liletta beyond 5 years. Copyright © 2016 Elsevier Inc. All rights reserved.
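    The average ex vivo release rate the abstract describes is simply content lost divided by duration of use. A minimal sketch, in which the residual content of 25.2 mg for a 5-year device is an assumed illustrative value (back-calculated to be consistent with the reported 14.7 mcg/day average, not a figure from the study):

    ```python
    def avg_release_rate_mcg_per_day(initial_mg, residual_mg, days_in_use):
        """Average ex vivo LNG release rate: content released over the
        duration of use, converted from mg to mcg."""
        if days_in_use <= 0:
            raise ValueError("duration of use must be positive")
        return (initial_mg - residual_mg) * 1000.0 / days_in_use

    # Illustrative only: a device explanted after 5 years (1825 days)
    # with an assumed 25.2 mg remaining of the measured 52.0 mg initial
    # content gives roughly the reported 5-year average of ~14.7 mcg/day.
    rate = avg_release_rate_mcg_per_day(52.0, 25.2, 1825)
    ```
    
    
    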

  12. Comparison of psychomotor function between music students and students participating in music training.

    PubMed

    Chansirinukor, Wunpen; Khemthong, Supalak

    2014-07-01

    To compare psychomotor function between a group of music students with formal music education and a group of non-music students who participated in music training. Consecutive sampling was used for completing questionnaires, testing reaction times (visual, auditory and tactile), measuring electromyography of the upper trapezius muscles on both sides, and photographing the craniovertebral (CV) angle in the sitting position. Data were collected twice for each group: for the music students at a one-hour interval, during which they rested and performed non-music activities; for the non-music students at a two-day interval, during which they performed music training (20 minutes/session, using a manual of keyboard notation). The non-music students (n = 65) improved their reaction times but responded more slowly than the music students in all but the tactile system. The music students (n = 28) showed faster reaction times and higher trapezius muscle activity than the non-music students at post-test. In addition, the CV angle of the non-music students improved significantly. The level of musical ability may influence psychomotor function. Significant improvement was observed in visual, auditory and tactile reaction times and in CV angle in the non-music students; however, upper trapezius muscle activity was unchanged in both student groups.

  13. Simultaneous determination of radionuclides separable into natural decay series by use of time-interval analysis.

    PubMed

    Hashimoto, Tetsuo; Sanada, Yukihisa; Uezu, Yasuhiro

    2004-05-01

    A delayed coincidence method, time-interval analysis (TIA), has been applied to successive alpha-alpha decay events on the millisecond time-scale. Such decay events are part of the 220Rn → 216Po (T1/2 = 145 ms; Th series) and 219Rn → 215Po (T1/2 = 1.78 ms; Ac series) decay chains. By using TIA in addition to measurement of 226Ra (U series) from alpha-spectrometry by liquid scintillation counting (LSC), two natural decay series could be identified and separated. The TIA detection efficiency was improved by using pulse-shape discrimination (PSD) to reject beta pulses, by solvent extraction of Ra combined with simple chemical separation, and by purging the scintillation solution with dry N2 gas. The U and Th series, together with the Ac series, were determined from alpha spectra and from TIA carried out immediately after Ra extraction, respectively. Using the 221Fr → 217At (T1/2 = 32.3 ms) decay process as a tracer, overall yields were estimated by applying TIA to 225Ra (Np decay series) at the time of maximum growth. The present method has proven useful for simultaneous determination of three radioactive decay series in environmental samples.
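    The delayed-coincidence idea can be sketched in two pieces: count successive events whose spacing falls inside a coincidence window, and compute what fraction of genuine parent-daughter pairs such a window captures from the exponential decay law. This is a conceptual sketch, not the paper's analysis code; function names and the example window are assumptions:

    ```python
    import math

    def tia_pairs(timestamps_ms, window_ms):
        """Count successive alpha events whose spacing falls inside the
        coincidence window (e.g. a few half-lives of 216Po, T1/2 = 145 ms)."""
        ts = sorted(timestamps_ms)
        return sum(1 for a, b in zip(ts, ts[1:]) if b - a <= window_ms)

    def detection_fraction(window_ms, half_life_ms):
        """Fraction of genuine parent-daughter decay pairs whose interval
        falls inside the window: 1 - exp(-lambda * window)."""
        lam = math.log(2) / half_life_ms
        return 1.0 - math.exp(-lam * window_ms)
    ```

    A window of one half-life captures exactly half of the true pairs; widening the window raises that fraction at the cost of admitting more accidental coincidences, which is the trade-off the PSD and chemical-purification steps above help manage.
    
    
    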

  14. Characteristic Lifelength of Coherent Structure in the Turbulent Boundary Layer

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.

    2006-01-01

    A characteristic lifelength is defined by fitting a Gaussian distribution to data correlated over a three-sensor array sampling streamwise sidewall pressure. The data were acquired at subsonic, transonic and supersonic speeds aboard a Tu-144. Lifelengths are estimated using the cross spectrum and are shown to compare favorably with Efimtsov's prediction of correlation space scales. Lifelength distributions are computed in the time/frequency domain using an interval correlation technique on the continuous wavelet transform of the original time data. The median values of the lifelength distributions are found to be very close to the frequency-averaged result. The interval correlation technique is shown to allow the retrieval and inspection of the original time data of each event in the lifelength distribution, thus providing a means to locate and study the nature of the coherent structure in the turbulent boundary layer. The lifelength data can be converted to lifetimes using the convection velocity. The lifetimes of events in the time/frequency domain are displayed in Lifetime Maps. The primary purpose of the paper is to validate these new analysis techniques so that they can be used with confidence to further characterize coherent structure in the turbulent boundary layer.
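    Fitting a Gaussian length scale to coherence measured at a few sensor separations can be done in closed form by linearizing the model. The exact Gaussian form used in the paper is not stated here, so the model C(ξ) = exp(-(ξ/L)²) and the least-squares-through-the-origin fit below are assumptions for illustration:

    ```python
    import math

    def gaussian_lifelength(separations, coherences):
        """Fit L in C(xi) = exp(-(xi/L)^2) to measured coherence versus
        sensor separation. Linearizing gives -ln C = xi^2 / L^2, so 1/L^2
        is found by least squares through the origin in the variable xi^2."""
        num = sum(x * x * (-math.log(c))
                  for x, c in zip(separations, coherences))
        den = sum((x * x) ** 2 for x in separations)
        return 1.0 / math.sqrt(num / den)
    ```

    With three sensors this is exactly the situation described: three separation/coherence pairs over-determine the single parameter L, and the fit averages them in a least-squares sense.
    
    
    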

  15. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    NASA Astrophysics Data System (ADS)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, little work has addressed the confidence intervals that indicate the prediction accuracy of quantile estimates for the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML) and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of sample size, return period and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, but distinct differences for MOM.
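    The quantile computations underlying such a frequency analysis are short. The sketch below uses Hosking's parameterization of the generalized logistic quantile function, x(F) = ξ + (α/κ)[1 − ((1−F)/F)^κ], which is an assumption; the paper's exact parameterization may differ:

    ```python
    import math

    def glo_quantile(F, loc, scale, shape):
        """Quantile function of the generalized logistic distribution
        (Hosking's GLO parameterization, assumed here). F is the
        non-exceedance probability, 0 < F < 1."""
        if shape == 0.0:
            return loc - scale * math.log((1.0 - F) / F)
        return loc + scale / shape * (1.0 - ((1.0 - F) / F) ** shape)

    def design_quantile(return_period_years, loc, scale, shape):
        """T-year event: evaluate the quantile at F = 1 - 1/T."""
        return glo_quantile(1.0 - 1.0 / return_period_years, loc, scale, shape)
    ```

    The asymptotic variance of such a quantile estimator grows with the return period because the quantile function steepens in the upper tail, consistent with the reported increase of RBIAS and RRMSE at long return periods.
    
    
    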

  16. Palaeoclimatic oscillations in the Pliensbachian (Early Jurassic) of the Asturian Basin (Northern Spain)

    NASA Astrophysics Data System (ADS)

    Gómez, Juan J.; Comas-Rengifo, María J.; Goy, Antonio

    2016-05-01

    One of the main controversies in palaeoclimatology is whether climate during the Jurassic was warmer than at present and whether it was uniform over Pangaea, with no major latitudinal gradients. Abundant evidence points to oscillations in seawater temperature throughout the Jurassic. The Pliensbachian (Early Jurassic) constitutes a distinctive time interval for which several seawater temperature oscillations, including an exceptional cooling event, have been documented. To constrain the timing and magnitude of these climate changes, the Rodiles section of the Asturian Basin (Northern Spain), a well-exposed succession of uppermost Sinemurian, Pliensbachian and Lower Toarcian deposits, has been studied. A total of 562 beds were measured and sampled for ammonites, for biochronostratigraphical purposes, and for belemnites, to determine the palaeoclimatic evolution through stable-isotope studies. Comparison of the recorded latest Sinemurian, Pliensbachian and Early Toarcian changes in seawater palaeotemperature with other European sections allows characterization of several climatic changes that are likely global in extent. A warming interval partly coinciding with a negative δ13Cbel excursion was recorded in the Late Sinemurian. After a "normal" temperature interval, with temperatures close to the average for the Late Sinemurian-Early Toarcian period, a new warming interval containing a short-lived positive δ13Cbel peak developed during the Early-Late Pliensbachian transition. The Late Pliensbachian represents an outstanding cooling interval containing a positive δ13Cbel excursion interrupted by a small negative δ13Cbel peak. Finally, the Early Toarcian represents an exceptional warming period, which has been identified as responsible for the prominent Early Toarcian mass extinction.

  17. Reference intervals and longitudinal changes in copeptin and MR-proADM concentrations during pregnancy.

    PubMed

    Joosen, Annemiek M C P; van der Linden, Ivon J M; Schrauwen, Lianne; Theeuwes, Alisia; de Groot, Monique J M; Ermens, Antonius A M

    2017-11-27

    Vasopressin and adrenomedullin and their stable by-products, copeptin and the midregional part of proadrenomedullin (MR-proADM), are promising biomarkers for the development of preeclampsia. However, clinical use is hampered by the lack of trimester-specific reference intervals. We therefore estimated reference intervals for copeptin and MR-proADM in disease-free Dutch women throughout pregnancy. Apparently healthy, low-risk pregnant women were recruited. Exclusion criteria included a current or past history of endocrine disease, multiple pregnancy, use of medication known to influence thyroid function, and a current pregnancy resulting from hormonal stimulation. Women who miscarried, developed hyperemesis gravidarum, hypertension, pre-eclampsia, HELLP syndrome (hemolysis, elevated liver enzymes and low platelets), diabetes or other disease, delivered prematurely or had a small-for-gestational-age neonate were excluded from the analyses. Blood samples were collected at 9-13 weeks (n=98), 27-29 weeks (n=94) and 36-39 weeks (n=91) of gestation and at 4-13 weeks post-partum (PP) (n=89). Sixty-two women had complete data during pregnancy and PP. All analyses were performed on a Kryptor Compact Plus analyzer. Copeptin increases during pregnancy, but the 97.5th percentiles remain below the non-pregnant upper reference limit (URL) provided by the manufacturer. MR-proADM concentrations also increase during pregnancy; in trimesters 2 and 3 the 97.5th percentiles are over three times the non-pregnant URL provided by the manufacturer. Trimester- and assay-specific reference intervals for copeptin and MR-proADM should therefore be used. In addition, consecutive measurements and the time frame between measurements should be considered, as the differences seen with or in advance of preeclampsia can be expected to be relatively small compared to the reference intervals.
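    Reference intervals of this kind are conventionally the central 95% of a healthy reference sample, bounded by the 2.5th and 97.5th percentiles. The study's exact percentile estimator is not stated, so the interpolation scheme below (linear interpolation between order statistics) is an assumption for illustration:

    ```python
    def reference_interval(values, lower_pct=2.5, upper_pct=97.5):
        """Nonparametric reference interval: the 2.5th and 97.5th
        percentiles of a reference sample, with linear interpolation
        between order statistics."""
        xs = sorted(values)
        n = len(xs)

        def pct(p):
            # fractional rank on a 0..n-1 scale
            k = (n - 1) * p / 100.0
            f, c = int(k), min(int(k) + 1, n - 1)
            return xs[f] + (k - f) * (xs[c] - xs[f])

        return pct(lower_pct), pct(upper_pct)
    ```

    Estimating two extreme percentiles is data-hungry, which is why each trimester group here holds roughly a hundred women; far smaller groups would make the 97.5th-percentile bound unstable.
    
    
    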

  18. Working times of elastomeric impression materials determined by dimensional accuracy.

    PubMed

    Tan, E; Chai, J; Wozniak, W T

    1996-01-01

    The working times of five poly(vinyl siloxane) impression materials were estimated by evaluating the dimensional accuracy of stone dies of impressions of a standard model made at successive time intervals. The stainless steel standard model was represented by two abutments having known distances between landmarks in three dimensions. Three dimensions in the x-, y-, and z-axes of the stone dies were measured with a traveling microscope. A time interval was rejected as being within the working time if the percentage change of the resultant dies, in any dimension, was statistically different from those measured from stone dies from previous time intervals. The absolute dimensions of those dies from the rejected time interval also must have exceeded all those from previous time intervals. Results showed that the working times estimated with this method generally were about 30 seconds longer than those recommended by the manufacturers.

  19. Exploring repeat HIV testing among men who have sex with men in Cape Town and Port Elizabeth, South Africa.

    PubMed

    Siegler, Aaron J; Sullivan, Patrick S; de Voux, Alex; Phaswana-Mafuya, Nancy; Bekker, Linda-Gail; Baral, Stefan D; Winskell, Kate; Kose, Zamakayise; Wirtz, Andrea L; Brown, Ben; Stephenson, Rob

    2015-01-01

    Despite the high prevalence of HIV among men who have sex with men (MSM) - and the general adult population - in South Africa, there is little data regarding the extent to which MSM seek repeat testing for HIV. This study explores reported histories of HIV testing, and the rationales for test seeking, among a purposive sample of 34 MSM in two urban areas of South Africa. MSM participated in activity-based in-depth interviews that included a timeline element to facilitate discussion. Repeat HIV testing was limited among participants, with three-quarters having two or fewer lifetime HIV tests, and over one-third of the sample having one or fewer lifetime tests. For most repeat testers, the time gap between their HIV tests was greater than the one-year interval recommended by national guidelines. Analysis of the reasons for seeking HIV testing revealed several types of rationale. The reasons for a first HIV test were frequently one-time occurrences, such as a requirement prior to circumcision, or motivations likely satisfied by a single HIV test. For MSM who reported repeat testing at more timely intervals, the most common rationale was seeking test results with a sex partner. Results indicate a need to shift HIV test promotion messaging and programming for MSM in South Africa away from a one-off model to one that frames HIV testing as a repeated, routine health maintenance behavior.

  20. Single-channel autocorrelation functions: the effects of time interval omission.

    PubMed Central

    Ball, F G; Sansom, M S

    1988-01-01

    We present a general mathematical framework for analyzing the dynamic aspects of single channel kinetics incorporating time interval omission. An algorithm for computing model autocorrelation functions, incorporating time interval omission, is described. We show, under quite general conditions, that the form of these autocorrelations is identical to that which would be obtained if time interval omission was absent. We also show, again under quite general conditions, that zero correlations are necessarily a consequence of the underlying gating mechanism and not an artefact of time interval omission. The theory is illustrated by a numerical study of an allosteric model for the gating mechanism of the locust muscle glutamate receptor-channel. PMID:2455553
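    For contrast with the model-based autocorrelations the paper derives, an empirical lag-k autocorrelation of a dwell-time sequence, with intervals shorter than the recording dead time omitted, can be sketched as follows. This estimator, its name, and the simple filtering rule are illustrative assumptions, not the paper's method:

    ```python
    def dwell_autocorrelation(durations, lag, dead_time=0.0):
        """Empirical lag-k autocorrelation of a sequence of dwell times,
        after discarding intervals shorter than the dead time (a crude
        stand-in for the time-interval omission discussed in the text)."""
        xs = [d for d in durations if d >= dead_time]
        n = len(xs)
        mean = sum(xs) / n
        var = sum((x - mean) ** 2 for x in xs) / n
        cov = sum((xs[i] - mean) * (xs[i + lag] - mean)
                  for i in range(n - lag)) / n
        return cov / var
    ```

    A strictly alternating short/long sequence gives a strongly negative lag-1 value, while an uncorrelated sequence gives a value near zero; the paper's result is that omission changes the magnitudes of such correlations but not their functional form, so a genuinely zero correlation cannot be an omission artefact.
    
    
    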
