Sample records for time point samples

  1. Selective Data Acquisition in NMR. The Quantification of Anti-phase Scalar Couplings

    NASA Astrophysics Data System (ADS)

    Hodgkinson, P.; Holmes, K. J.; Hore, P. J.

    Almost all time-domain NMR experiments employ "linear sampling," in which the NMR response is digitized at equally spaced times, with uniform signal averaging. Here, the possibilities of nonlinear sampling are explored using anti-phase doublets in the indirectly detected dimensions of multidimensional COSY-type experiments as an example. The Cramér-Rao lower bounds are used to evaluate and optimize experiments in which the sampling points, or the extent of signal averaging at each point, or both, are varied. The optimal nonlinear sampling for the estimation of the coupling constant J, by model fitting, turns out to involve just a few key time points, for example, at the first node (t = 1/J) of the sin(πJt) modulation. Such sparse sampling patterns can be used to derive more practical strategies, in which the sampling or the signal averaging is distributed around the most significant time points. The improvements in the quantification of NMR parameters can be quite substantial especially when, as is often the case for indirectly detected dimensions, the total number of samples is limited by the time available.
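
    As a back-of-envelope illustration of the calculation described above (the decaying anti-phase model, parameter values, and noise assumptions below are ours, not the authors' code), the Cramér-Rao lower bound on the variance of J can be compared between uniform "linear" sampling and sampling clustered near the first node t = 1/J:

    ```python
    import numpy as np

    def crlb_J(t, J=10.0, T2=0.1, sigma=1.0):
        """Cramer-Rao lower bound on Var(J) for the illustrative model
        s(t) = sin(pi*J*t) * exp(-t/T2) with i.i.d. Gaussian noise,
        treating the other parameters as known."""
        ds_dJ = np.pi * t * np.cos(np.pi * J * t) * np.exp(-t / T2)
        fisher = np.sum(ds_dJ ** 2) / sigma ** 2  # Fisher information for J
        return 1.0 / fisher                       # CRLB = 1 / information

    n = 16  # identical sample budget for both schemes
    uniform = np.linspace(0.005, 0.2, n)                  # linear sampling
    clustered = 1.0 / 10.0 + np.linspace(-0.02, 0.02, n)  # around t = 1/J
    print("CRLB, uniform  :", crlb_J(uniform))
    print("CRLB, clustered:", crlb_J(clustered))  # markedly smaller bound
    ```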

  2. A test of alternative estimators for volume at time 1 from remeasured point samples

    Treesearch

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1993-01-01

    Two estimators for volume at time 1 for use with permanent horizontal point samples are evaluated. One estimator, used traditionally, uses only the trees sampled at time 1, while the second estimator, originally presented by Roesch and coauthors (F.A. Roesch, Jr., E.J. Green, and C.T. Scott. 1989. For. Sci. 35(2):281-293), takes advantage of additional sample...

  3. Statistical aspects of point count sampling

    USGS Publications Warehouse

    Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.

    1995-01-01

    The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.

  4. Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.

    PubMed

    Omer, Travis; Intes, Xavier; Hahn, Juergen

    2015-01-01

    Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
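
    A minimal sketch of the D-optimal selection idea in this abstract (greedy search over a candidate gate-time grid for a hypothetical bi-exponential FLIM-FRET decay; the model, lifetimes, and greedy strategy are illustrative assumptions, not the authors' implementation):

    ```python
    import numpy as np

    def sensitivities(t, frac=0.3, tau_q=0.5, tau_u=2.0):
        """Jacobian of the hypothetical decay model
        A*(frac*exp(-t/tau_q) + (1-frac)*exp(-t/tau_u)) with A = 1,
        w.r.t. (frac, A); the two lifetimes (ns) are treated as known."""
        d_frac = np.exp(-t / tau_q) - np.exp(-t / tau_u)
        d_amp = frac * np.exp(-t / tau_q) + (1 - frac) * np.exp(-t / tau_u)
        return np.column_stack([d_frac, d_amp])

    def greedy_d_optimal(candidates, n_select):
        """Greedily add the gate time that most increases det(S^T S),
        i.e. the D-optimality criterion."""
        chosen = []
        for _ in range(n_select):
            scores = []
            for c in candidates:
                if c in chosen:
                    scores.append(-np.inf)
                    continue
                S = sensitivities(np.array(chosen + [c]))
                scores.append(np.linalg.det(S.T @ S))
            chosen.append(candidates[int(np.argmax(scores))])
        return sorted(chosen)

    gates = list(np.linspace(0.1, 9.0, 90))   # a 90-point candidate grid (ns)
    print(greedy_d_optimal(gates, 10))        # a reduced 10-point design
    ```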

  5. Identification of driving network of cellular differentiation from single sample time course gene expression data

    NASA Astrophysics Data System (ADS)

    Chen, Ye; Wolanyk, Nathaniel; Ilker, Tunc; Gao, Shouguo; Wang, Xujing

    Methods developed based on bifurcation theory have demonstrated their potential in driving network identification for complex human diseases, including the work by Chen, et al. Recently, bifurcation theory has been successfully applied to model cellular differentiation. However, one often faces a technical challenge in driving network prediction: time course cellular differentiation studies often contain only one sample at each time point, while driving network prediction typically requires multiple samples at each time point to infer the variation and interaction structures of candidate genes for the driving network. In this study, we investigate several methods to identify both the critical time point and the driving network through examination of how each time point affects the autocorrelation and phase locking. We apply these methods to a high-throughput sequencing (RNA-Seq) dataset of 42 subsets of thymocytes and mature peripheral T cells at multiple time points during their differentiation (GSE48138 from GEO). We compare the predicted driving genes with known transcription regulators of cellular differentiation. We will discuss the advantages and limitations of our proposed methods, as well as potential further improvements.

  6. Ruminal bacteria and protozoa composition, digestibility, and amino acid profile determined by multiple hydrolysis times.

    PubMed

    Fessenden, S W; Hackmann, T J; Ross, D A; Foskolos, A; Van Amburgh, M E

    2017-09-01

    Microbial samples from 4 independent experiments in lactating dairy cattle were obtained and analyzed for nutrient composition, AA digestibility, and AA profile after multiple hydrolysis times ranging from 2 to 168 h. Similar bacterial and protozoal isolation techniques were used for all isolations. Omasal bacteria and protozoa samples were analyzed for AA digestibility using a new in vitro technique. Multiple time point hydrolysis and least squares nonlinear regression were used to determine the AA content of omasal bacteria and protozoa, and equivalency comparisons were made against single time point hydrolysis. Formalin was used in 1 experiment, which negatively affected AA digestibility and likely limited the complete release of AA during acid hydrolysis. The mean AA digestibility was 87.8 and 81.6% for non-formalin-treated bacteria and protozoa, respectively. Preservation of microbe samples in formalin likely decreased recovery of several individual AA. Results from the multiple time point hydrolysis indicated that Ile, Val, and Met hydrolyzed at a slower rate compared with other essential AA. Single time point hydrolysis was found to be nonequivalent to multiple time point hydrolysis when considering biologically important changes in estimated microbial AA profiles. Several AA, including Met, Ile, and Val, were underpredicted using AA determination after a single 24-h hydrolysis. Models for predicting postruminal supply of AA might need to consider potential bias present in postruminal AA flow literature when AA determinations are performed after single time point hydrolysis and when using formalin as a preservative for microbial samples.

  7. Investigation of the influence of sampling schemes on quantitative dynamic fluorescence imaging

    PubMed Central

    Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Wang, Guodong; Wang, Bo; Zhan, Yonghua; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin

    2018-01-01

    Dynamic optical data from a series of sampling intervals can be used for quantitative analysis to obtain meaningful kinetic parameters of a probe in vivo. The sampling scheme may affect the quantification results of dynamic fluorescence imaging. Here, we investigate the influence of different sampling schemes on the quantification of binding potential (BP) with theoretically simulated and experimentally measured data. Three groups of sampling schemes are investigated, including the sampling starting point, sampling sparsity, and sampling uniformity. In the investigation of the influence of the sampling starting point, we further consider two cases depending on whether the timing sequence between the probe injection and the sampling starting time is missing. Results show that the mean value of BP exhibits an obvious growth trend with an increase in the delay of the sampling starting point, and has a strong correlation with the sampling sparsity. The growth trend is much more obvious if the missing timing sequence is discarded. The standard deviation of BP is inversely related to the sampling sparsity, and independent of the sampling uniformity and the delay of the sampling starting time. Moreover, the mean value of BP obtained by uniform sampling is significantly higher than that obtained by non-uniform sampling. Our results collectively suggest that a suitable sampling scheme can help compartmental modeling of dynamic fluorescence imaging provide more accurate results with simpler operations. PMID:29675325

  8. A general method to determine sampling windows for nonlinear mixed effects models with an application to population pharmacokinetic studies.

    PubMed

    Foo, Lee Kien; McGree, James; Duffull, Stephen

    2012-01-01

    Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models.

  9. Analysis of Sample Size, Counting Time, and Plot Size from an Avian Point Count Survey on Hoosier National Forest, Indiana

    Treesearch

    Frank R. Thompson; Monica J. Schwalbach

    1995-01-01

    We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...

  10. Portable Dew Point Mass Spectrometry System for Real-Time Gas and Moisture Analysis

    NASA Technical Reports Server (NTRS)

    Arkin, C.; Gillespie, Stacey; Ratzel, Christopher

    2010-01-01

    A portable instrument incorporates both mass spectrometry and dew point measurement to provide real-time, quantitative gas measurements of helium, nitrogen, oxygen, argon, and carbon dioxide, along with real-time, quantitative moisture analysis. The Portable Dew Point Mass Spectrometry (PDP-MS) system comprises a single quadrupole mass spectrometer and a high vacuum system consisting of a turbopump and a diaphragm-backing pump. A capacitive membrane dew point sensor was placed upstream of the MS, but still within the pressure-flow control pneumatic region. Pressure-flow control was achieved with an upstream precision metering valve, a capacitance diaphragm gauge, and a downstream mass flow controller. User configurable LabVIEW software was developed to provide real-time concentration data for the MS, dew point monitor, and sample delivery system pressure control, pressure and flow monitoring, and recording. The system has been designed to include in situ, NIST-traceable calibration. Certain sample tubing retains sufficient water that even if the sample is dry, the sample tube will desorb water to an amount resulting in moisture concentration errors up to 500 ppm for as long as 10 minutes. It was determined that Bev-A-Line IV was the best sample line to use. As a result of this issue, it is prudent to add a high-level humidity sensor to PDP-MS so such events can be prevented in the future.

  11. Evaluation of limited sampling models for prediction of oral midazolam AUC for CYP3A phenotyping and drug interaction studies.

    PubMed

    Mueller, Silke C; Drewelow, Bernd

    2013-05-01

    The area under the concentration-time curve (AUC) after oral midazolam administration is commonly used for cytochrome P450 (CYP) 3A phenotyping studies. The aim of this investigation was to evaluate a limited sampling strategy for the prediction of AUC with oral midazolam. A total of 288 concentration-time profiles from 123 healthy volunteers who participated in four previously performed drug interaction studies with intense sampling after a single oral dose of 7.5 mg midazolam were available for evaluation. Of these, 45 profiles served for model building, which was performed by stepwise multiple linear regression, and the remaining 243 datasets served for validation. Mean prediction error (MPE), mean absolute error (MAE), and root mean squared error (RMSE) were calculated to determine bias and precision. The one- to four-sampling point models with the best coefficient of correlation were the one-sampling point model (8 h; r² = 0.84), the two-sampling point model (0.5 and 8 h; r² = 0.93), the three-sampling point model (0.5, 2, and 8 h; r² = 0.96), and the four-sampling point model (0.5, 1, 2, and 8 h; r² = 0.97). However, the one- and two-sampling point models were unable to predict the midazolam AUC due to unacceptable bias and precision. Only the four-sampling point model predicted the very low and very high midazolam AUC of the validation dataset with acceptable precision and bias. The four-sampling point model was also able to predict the geometric mean ratio of the treatment phase over the baseline (with 90 % confidence interval) results of three drug interaction studies in the categories of strong, moderate, and mild induction, as well as no interaction. A four-sampling point limited sampling strategy to predict the oral midazolam AUC for CYP3A phenotyping is proposed. The one-, two- and three-sampling point models were not able to predict midazolam AUC accurately.
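
    Building and checking such a limited sampling model reduces to ordinary least squares plus the three error metrics named above; a sketch with invented concentrations and AUCs, not the study's data:

    ```python
    import numpy as np

    def fit_lss(conc, auc):
        """Fit AUC ~ b0 + b1*C(t1) + ... + bk*C(tk) by ordinary least
        squares. conc: (n_subjects, k) concentrations at the k chosen
        sampling times; auc: (n_subjects,) observed AUCs."""
        X = np.column_stack([np.ones(len(auc)), conc])
        beta, *_ = np.linalg.lstsq(X, auc, rcond=None)
        return beta

    def prediction_errors(beta, conc, auc):
        """MPE (bias), MAE, and RMSE (precision), all in percent."""
        X = np.column_stack([np.ones(len(auc)), conc])
        err = (X @ beta - auc) / auc * 100.0
        return err.mean(), np.abs(err).mean(), np.sqrt((err ** 2).mean())

    # Made-up example: 0.5 h and 8 h samples from 6 training subjects
    conc = np.array([[42, 9], [55, 12], [30, 7], [61, 15], [38, 8], [50, 11]])
    auc = np.array([310.0, 420.0, 250.0, 480.0, 290.0, 390.0])
    beta = fit_lss(conc, auc)
    print(prediction_errors(beta, conc, auc))
    ```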

  12. Selecting the most appropriate time points to profile in high-throughput studies

    PubMed Central

    Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv

    2017-01-01

    Biological systems are increasingly being studied by high throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method that solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation for the expression values of the non-selected points. Further, even though the selection is only based on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high throughput time series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972

  13. A possible simplification for the estimation of area under the curve (AUC₀₋₁₂) of enteric-coated mycophenolate sodium in renal transplant patients receiving tacrolimus.

    PubMed

    Fleming, Denise H; Mathew, Binu S; Prasanna, Samuel; Annapandian, Vellaichamy M; John, George T

    2011-04-01

    Enteric-coated mycophenolate sodium (EC-MPS) is widely used in renal transplantation. With a delayed absorption profile, it has not been possible to develop limited sampling strategies to estimate area under the curve (mycophenolic acid [MPA] AUC₀₋₁₂), which have limited time points and are completed in 2 hours. We developed and validated simplified strategies to estimate MPA AUC₀₋₁₂ in an Indian renal transplant population prescribed EC-MPS together with prednisolone and tacrolimus. Intensive pharmacokinetic sampling (17 samples each) was performed in 18 patients to measure MPA AUC₀₋₁₂. The profiles at 1 month were used to develop the simplified strategies and those at 5.5 months used for validation. We followed two approaches. In one, the AUC was calculated using the trapezoidal rule with fewer time points followed by an extrapolation. In the second approach, by stepwise multiple regression analysis, models with different time points were identified and linear regression analysis performed. Using the trapezoidal rule, two equations were developed with six time points and sampling to 6 or 8 hours (8hrAUC(₀₋₁₂exp)) after the EC-MPS dose. On validation, the 8hrAUC(₀₋₁₂exp) compared with total measured AUC₀₋₁₂ had a coefficient of correlation (r²) of 0.872 with a bias and precision (95% confidence interval) of 0.54% (-6.07 to 7.15) and 9.73% (5.37 to 14.09), respectively. Second, limited sampling strategies were developed with four, five, six, seven, and eight time points and completion within 2 hours, 4 hours, 6 hours, and 8 hours after the EC-MPS dose. On validation, the six, seven, and eight time point equations, all with sampling to 8 hours, had an acceptable r with the total measured MPA AUC₀₋₁₂ (0.817-0.927). For the six-, seven-, and eight-time point equations, the bias (95% confidence interval) was 3.00% (-4.59 to 10.59), 0.29% (-5.4 to 5.97), and -0.72% (-5.34 to 3.89) and the precision (95% confidence interval) was 10.59% (5.06 to 16.13), 8.33% (4.55 to 12.1), and 6.92% (3.94 to 9.90), respectively. Of the eight simplified approaches, inclusion of seven or eight time points improved the accuracy of the predicted AUC compared with the actual AUC and can be advocated based on the priority of the user.
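
    A sketch of the first approach (trapezoidal rule over the measured points plus an extrapolation to 12 hours); the log-linear terminal extrapolation from the last two samples and all numbers are our simplifying assumptions, not the paper's equations:

    ```python
    import numpy as np

    def auc_trapz_extrap(t, c, t_end=12.0):
        """Trapezoidal AUC over the sampled interval plus a log-linear
        extrapolation of the terminal phase out to t_end (hours).
        The terminal rate constant is taken from the last two samples,
        a simplifying assumption for illustration."""
        auc = np.trapz(c, t)                           # measured part, e.g. 0-8 h
        lz = np.log(c[-2] / c[-1]) / (t[-1] - t[-2])   # terminal rate constant
        auc += c[-1] / lz * (1.0 - np.exp(-lz * (t_end - t[-1])))
        return auc

    t = np.array([0, 0.5, 1, 2, 4, 8])             # hours after the EC-MPS dose
    c = np.array([0.2, 1.1, 3.5, 2.4, 1.5, 0.9])   # mg/L, made-up values
    print(auc_trapz_extrap(t, c))
    ```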

  14. The relevance of time series in molecular ecology and conservation biology.

    PubMed

    Habel, Jan C; Husemann, Martin; Finger, Aline; Danley, Patrick D; Zachos, Frank E

    2014-05-01

    The genetic structure of a species is shaped by the interaction of contemporary and historical factors. Analyses of individuals from the same population sampled at different points in time can help to disentangle the effects of current and historical forces and facilitate the understanding of the forces driving the differentiation of populations. The use of such time series allows for the exploration of changes at the population and intraspecific levels over time. Material from museum collections plays a key role in understanding and evaluating observed population structures, especially if large numbers of individuals have been sampled from the same locations at multiple time points. In these cases, changes in population structure can be assessed empirically. The development of new molecular markers relying on short DNA fragments (such as microsatellites or single nucleotide polymorphisms) allows for the analysis of long-preserved and partially degraded samples. Recently developed techniques to construct genome libraries with a reduced complexity and next generation sequencing and their associated analysis pipelines have the potential to facilitate marker development and genotyping in non-model species. In this review, we discuss the problems with sampling and available marker systems for historical specimens and demonstrate that temporal comparative studies are crucial for the estimation of important population genetic parameters and to measure empirically the effects of recent habitat alteration. While many of these analyses can be performed with samples taken at a single point in time, the measurements are more robust if multiple points in time are studied. Furthermore, examining the effects of habitat alteration, population declines, and population bottlenecks is only possible if samples before and after the respective events are included.

  15. Point Intercept (PO)

    Treesearch

    John F. Caratti

    2006-01-01

    The FIREMON Point Intercept (PO) method is used to assess changes in plant species cover or ground cover for a macroplot. This method uses a narrow diameter sampling pole or sampling pins, placed at systematic intervals along line transects, to sample within-plot variation and quantify statistically valid changes in plant species cover and height over time. Plant...

  16. Time-of-travel data for Nebraska streams, 1968 to 1977

    USGS Publications Warehouse

    Petri, L.R.

    1984-01-01

    This report documents the results of 10 time-of-travel studies, using 'dye-tracer' methods, conducted on five streams in Nebraska during the period 1968 to 1977. Streams involved in the studies were the North Platte, North Loup, Elkhorn, and Big Blue Rivers and Salt Creek. Rhodamine WT dye in a 20 percent solution was used as the tracer for all 10 time-of-travel studies. Water samples were collected at several points below each injection site. Concentrations of dye in the samples were measured by determining fluorescence of the sample and comparing that value to fluorescence-concentration curves. Stream discharges were measured before and during each study. Results of each time-of-travel study are shown in two tables and a graph. The first table shows water discharge at injection and sampling sites, distance between sites, and time and rate of travel of the dye between sites. The second table provides descriptions of study sites, amounts of dye injected in the streams, actual sampling times, and actual concentrations of dye detected. The graphs for each time-of-travel study provide indications of changing travel rates between sampling sites, information on length of dye clouds, and times for dye passage past given points. (USGS)
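
    The core arithmetic of a time-of-travel study is simple: travel time is the difference between dye-peak arrival times at successive sampling points, and the travel rate is the distance between sites divided by that time. The numbers below are hypothetical, not values from the report:

    ```python
    # Hypothetical dye-cloud timing between two sampling sites
    distance_mi = 12.4         # river miles between site 1 and site 2
    peak_site1_hr = 3.0        # hours after injection, peak dye at site 1
    peak_site2_hr = 18.5       # hours after injection, peak dye at site 2

    travel_time_hr = peak_site2_hr - peak_site1_hr
    travel_rate_mph = distance_mi / travel_time_hr
    print(f"travel time {travel_time_hr:.1f} h, rate {travel_rate_mph:.2f} mi/h")
    ```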

  17. Communication: importance sampling including path correlation in semiclassical initial value representation calculations for time correlation functions.

    PubMed

    Pan, Feng; Tao, Guohua

    2013-03-07

    Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.

  18. 49 CFR 563.8 - Data format.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the first acceleration data point; (3) The number of the last point (NLP), which is an integer that...; and (4) NLP − NFP + 1 acceleration values sequentially beginning with the acceleration at time NFP * TS and continue sampling the acceleration at TS increments in time until the time NLP * TS is reached...
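
    Read literally, the excerpt specifies NLP − NFP + 1 acceleration samples at times NFP·TS, (NFP+1)·TS, ..., NLP·TS; a sketch of reconstructing that time base (the function name and example values are ours, not from the regulation):

    ```python
    import numpy as np

    def edr_time_base(nfp: int, nlp: int, ts: float) -> np.ndarray:
        """Times (s) of the NLP - NFP + 1 recorded acceleration samples,
        running from NFP*TS to NLP*TS in TS increments. NFP is allowed
        to be negative here (samples before time zero)."""
        return np.arange(nfp, nlp + 1) * ts

    print(edr_time_base(nfp=-10, nlp=250, ts=0.001)[:5])  # e.g. 1 kHz sampling
    ```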

  19. The utility of point count surveys to predict wildlife interactions with wind energy facilities: An example focused on golden eagles

    USGS Publications Warehouse

    Sur, Maitreyi; Belthoff, James R.; Bjerre, Emily R.; Millsap, Brian A.; Katzner, Todd

    2018-01-01

    Wind energy development is rapidly expanding in North America, often accompanied by requirements to survey potential facility locations for existing wildlife. Within the USA, golden eagles (Aquila chrysaetos) are among the most high-profile species of birds that are at risk from wind turbines. To minimize golden eagle fatalities in areas proposed for wind development, modified point count surveys are usually conducted to estimate use by these birds. However, it is not always clear what drives variation in the relationship between on-site point count data and actual use by eagles of a wind energy project footprint. We used existing GPS-GSM telemetry data, collected at 15 min intervals from 13 golden eagles in 2012 and 2013, to explore the relationship between point count data and eagle use of an entire project footprint. To do this, we overlaid the telemetry data on hypothetical project footprints and simulated a variety of point count sampling strategies for those footprints. We compared the time an eagle was found in the sample plots with the time it was found in the project footprint using a metric we called “error due to sampling”. Error due to sampling for individual eagles appeared to be influenced by interactions between the size of the project footprint (20, 40, 90 or 180 km²) and the sampling type (random, systematic or stratified) and was greatest on 90 km² plots. However, use of random sampling resulted in the lowest error due to sampling within intermediate-sized plots. In addition, sampling intensity and sampling frequency both influenced the effectiveness of point count sampling. Although our work focuses on individual eagles (not the eagle populations typically surveyed in the field), our analysis shows both the utility of simulations for identifying specific influences on error and potential improvements to sampling that consider the context-specific manner in which point counts are laid out on the landscape.

  20. A quantitative evaluation of two methods for preserving hair samples

    USGS Publications Warehouse

    Roon, David A.; Waits, L.P.; Kendall, K.C.

    2003-01-01

    Hair samples are an increasingly important DNA source for wildlife studies, yet optimal storage methods and DNA degradation rates have not been rigorously evaluated. We tested amplification success rates over a one-year storage period for DNA extracted from brown bear (Ursus arctos) hair samples preserved using silica desiccation and −20°C freezing. For three nuclear DNA microsatellites, success rates decreased significantly after a six-month time point, regardless of storage method. For a 1000 bp mitochondrial fragment, a similar decrease occurred after a two-week time point. Minimizing delays between collection and DNA extraction will maximize success rates for hair-based noninvasive genetic sampling projects.

  1. Using learned under-sampling pattern for increasing speed of cardiac cine MRI based on compressive sensing principles

    NASA Astrophysics Data System (ADS)

    Zamani, Pooria; Kayvanrad, Mohammad; Soltanian-Zadeh, Hamid

    2012-12-01

    This article presents a compressive sensing approach for reducing data acquisition time in cardiac cine magnetic resonance imaging (MRI). In cardiac cine MRI, several images are acquired throughout the cardiac cycle, each of which is reconstructed from the raw data acquired in the Fourier transform domain, traditionally called k-space. In the proposed approach, a majority, e.g., 62.5%, of the k-space lines (trajectories) are acquired at the odd time points and a minority, e.g., 37.5%, of the k-space lines are acquired at the even time points of the cardiac cycle. Optimal data acquisition at the even time points is learned from the data acquired at the odd time points. To this end, statistical features of the k-space data at the odd time points are clustered by fuzzy c-means and the results are considered as the states of Markov chains. The resulting data is used to train hidden Markov models and find their transition matrices. Then, the trajectories corresponding to transition matrices far from an identity matrix are selected for data acquisition. At the end, an iterative thresholding algorithm is used to reconstruct the images from the under-sampled k-space datasets. The proposed approaches for selecting the k-space trajectories and reconstructing the images generate more accurate images compared to alternative methods. The proposed under-sampling approach achieves an acceleration factor of 2 for cardiac cine MRI.
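
    The reconstruction step mentioned at the end can be sketched generically as iterative soft-thresholding with data consistency enforced at the acquired k-space lines; imposing sparsity directly in the image domain (rather than in a transform domain such as wavelets) is a simplification on our part, not the authors' exact algorithm:

    ```python
    import numpy as np

    def ist_reconstruct(kspace, mask, lam=0.05, n_iter=100):
        """Iterative soft-thresholding for under-sampled k-space.
        kspace: measured 2-D k-space (zeros where not sampled);
        mask:   boolean array, True at acquired trajectories."""
        img = np.zeros_like(kspace)
        for _ in range(n_iter):
            # Enforce data consistency at the acquired k-space locations
            k = np.fft.fft2(img)
            k[mask] = kspace[mask]
            img = np.fft.ifft2(k)
            # Soft-threshold image magnitudes to promote sparsity
            mag = np.abs(img)
            shrink = np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)
            img = shrink * img
        return img
    ```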

  2. Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?

    PubMed

    Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D

    2018-02-01

    Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems has been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm² and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm² (focal activation), 1.05 ± 0.32 points/cm² (macro-re-entry) and 1.23 ± 0.26 points/cm² (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm²; re-entry 1.44 ± 0.49 points/cm²; spiral-wave 1.50 ± 0.34 points/cm², P < 0.0001) and in vivo (porcine RA pre-ablation 0.45 ± 0.13 vs. post-ablation 0.78 ± 0.17 points/cm², P = 0.0008). Increasing chamber geometric complexity was also associated with increased optimal sampling density (0.61 ± 0.22 points/cm² vs. 1.0 ± 0.34 points/cm², P = 0.0015). Optimal sampling densities can be identified to maximize the diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm².
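
    The in silico procedure can be mimicked in miniature: sample a known activation map at a given density, re-interpolate, and measure the error as the density grows. The synthetic focal-activation map and linear interpolation below are our choices, not the study's simulation:

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(0)

    # Synthetic 4 x 4 cm focal activation: LAT = distance from a source (a.u.)
    gx, gy = np.meshgrid(np.linspace(0, 4, 81), np.linspace(0, 4, 81))
    lat_true = np.hypot(gx - 1.0, gy - 2.0)

    for density in [0.25, 0.5, 1.0, 2.0, 4.0]:        # points per cm^2
        n = int(density * 16)                          # 16 cm^2 monolayer
        pts = rng.uniform(0, 4, size=(n, 2))
        vals = np.hypot(pts[:, 0] - 1.0, pts[:, 1] - 2.0)
        lat_hat = griddata(pts, vals, (gx, gy), method="linear")
        err = np.nanmean(np.abs(lat_hat - lat_true))   # ignore unhit border
        print(f"{density:4.2f} points/cm^2 -> mean abs LAT error {err:.3f}")
    ```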

  3. A Statistical Guide to the Design of Deep Mutational Scanning Experiments

    PubMed Central

    Matuszewski, Sebastian; Hildebrandt, Marcel E.; Ghenu, Ana-Hermina; Jensen, Jeffrey D.; Bank, Claudia

    2016-01-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. PMID:27412710
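
    For context, the per-mutant estimator in such time-sampled bulk competitions is often a log-linear regression of mutant-to-reference read ratios against time; a minimal sketch under that assumption, with simulated counts rather than the paper's pipeline:

    ```python
    import numpy as np

    def estimate_s(t, mut_counts, ref_counts):
        """Estimate a selection coefficient as the slope of
        log(mutant/reference reads) against time -- a common estimator
        for bulk competitions (our simplification)."""
        y = np.log(mut_counts / ref_counts)
        slope, _ = np.polyfit(t, y, 1)
        return slope

    t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])          # sampled time points
    ref = np.array([5000, 5200, 4900, 5100, 5000])   # reference read counts
    mut = np.array([500, 560, 610, 700, 790])        # mutant read counts
    print(f"estimated s = {estimate_s(t, mut, ref):.3f}")  # ~0.057 per unit time
    ```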

  4. The effect of different control point sampling sequences on convergence of VMAT inverse planning

    NASA Astrophysics Data System (ADS)

    Pardo Montero, Juan; Fenwick, John D.

    2011-04-01

    A key component of some volumetric-modulated arc therapy (VMAT) optimization algorithms is the progressive addition of control points to the optimization. This idea was introduced in Otto's seminal VMAT paper, in which a coarse sampling of control points was used at the beginning of the optimization and new control points were progressively added one at a time. A different form of the methodology is also present in the RapidArc optimizer, which adds new control points in groups called 'multiresolution levels', each doubling the number of control points in the optimization. This progressive sampling accelerates convergence, improving the results obtained, and has similarities with the ordered subset algorithm used to accelerate iterative image reconstruction. In this work we have used a VMAT optimizer developed in-house to study the performance of optimization algorithms which use different control point sampling sequences, most of which fall into three different classes: doubling sequences, which add new control points in groups such that the number of control points in the optimization is (roughly) doubled; Otto-like progressive sampling, which adds one control point at a time; and equi-length sequences, which contain several multiresolution levels, each with the same number of control points. Results are presented in this study for two clinical geometries, prostate and head-and-neck treatments. A dependence of the quality of the final solution on the number of starting control points has been observed, in agreement with previous works. We have found that some sequences, especially E20 and E30 (equi-length sequences with 20 and 30 multiresolution levels, respectively), generate better results than a 5-multiresolution-level RapidArc-like sequence. The final value of the cost function is reduced by up to 20%, such reductions leading to small improvements in dosimetric parameters characterizing the treatments: slightly more homogeneous target doses and better sparing of the organs at risk.
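
    The sampling sequences being compared can be written down directly; a sketch generating the two main families for a 180-control-point arc (the parameterization is ours, based on the description above):

    ```python
    def doubling_sequence(total, start=6):
        """Numbers of control points in the optimization at each
        multiresolution level, roughly doubling per level."""
        seq, n = [], start
        while n < total:
            seq.append(n)
            n *= 2
        seq.append(total)
        return seq

    def equi_length_sequence(total, levels):
        """Equi-length sequence: every level adds the same number of
        control points (e.g. E20 has 20 levels)."""
        step = total // levels
        return [step * i for i in range(1, levels + 1)]

    print(doubling_sequence(180))         # [6, 12, 24, 48, 96, 180]
    print(equi_length_sequence(180, 20))  # E20: [9, 18, ..., 180]
    ```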

  5. Composite analysis for Escherichia coli at coastal beaches

    USGS Publications Warehouse

    Bertke, E.E.

    2007-01-01

    At some coastal beaches, concentrations of fecal-indicator bacteria can differ substantially between multiple points at the same beach at the same time. Because of this spatial variability, the recreational water quality at beaches is sometimes determined by stratifying a beach into several areas and collecting a sample from each area to analyze for the concentration of fecal-indicator bacteria. The average concentration of bacteria from those points is often used to compare to the recreational standard for advisory postings. Alternatively, if funds are limited, a single sample is collected to represent the beach. Compositing the samples collected from each section of the beach may yield equally accurate data as averaging concentrations from multiple points, at a reduced cost. In the study described herein, water samples were collected at multiple points from three Lake Erie beaches and analyzed for Escherichia coli on modified mTEC agar (EPA Method 1603). From the multiple-point samples, a composite sample (n = 116) was formed at each beach by combining equal aliquots of well-mixed water from each point. Results from this study indicate that E. coli concentrations from the arithmetic average of multiple-point samples and from composited samples are not significantly different (t = 1.59, p = 0.1139) and yield similar measures of recreational water quality; additionally, composite samples could result in a significant cost savings.
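
    The comparison at the heart of the study reduces to a paired test between the average of the multiple-point concentrations and the composite concentration; a sketch with made-up E. coli counts, log-transformed before testing (a conventional choice and our assumption):

    ```python
    import numpy as np
    from scipy import stats

    # Made-up CFU/100 mL values; each row is one beach visit.
    multi_point = np.array([[120, 340, 95], [60, 80, 210], [400, 150, 380]])
    composite = np.array([190, 105, 310])

    avg = multi_point.mean(axis=1)
    # Paired t-test on log10 concentrations (counts are roughly lognormal)
    t_stat, p = stats.ttest_rel(np.log10(avg), np.log10(composite))
    print(f"t = {t_stat:.2f}, p = {p:.4f}")
    ```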

  6. A comparison of cover calculation techniques for relating point-intercept vegetation sampling to remote sensing imagery

    USDA-ARS?s Scientific Manuscript database

    Accurate and timely spatial predictions of vegetation cover from remote imagery are an important data source for natural resource management. High-quality in situ data are needed to develop and validate these products. Point-intercept sampling techniques are a common method for obtaining quantitativ...

  7. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    NASA Astrophysics Data System (ADS)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive for a large spectrum of gravity waves.
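
    The basic operation, a least-squares cubic spline with equidistant interior sampling points whose residuals carry the superimposed fluctuations, might look as follows; the repeating-spline refinement itself is not reproduced here, and the test series is invented:

    ```python
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 500)
    series = (np.sin(0.4 * t)            # slowly varying background
              + 0.3 * np.sin(5.0 * t)    # superimposed fluctuation
              + 0.1 * rng.standard_normal(500))

    n_knots = 8  # equidistant spline sampling points
    knots = np.linspace(t[0], t[-1], n_knots + 2)[1:-1]  # interior knots only
    spline = LSQUnivariateSpline(t, series, knots, k=3)

    background = spline(t)           # approximated background state
    residuals = series - background  # fluctuations, e.g. gravity waves
    ```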

  8. Comparing line-intersect, fixed-area, and point relascope sampling for dead and downed coarse woody material in a managed northern hardwood forest

    Treesearch

    G. J. Jordan; M. J. Ducey; J. H. Gove

    2004-01-01

    We present the results of a timed field trial comparing the bias characteristics and relative sampling efficiency of line-intersect, fixed-area, and point relascope sampling for downed coarse woody material. Seven stands in a managed northern hardwood forest in New Hampshire were inventoried. Significant differences were found among estimates in some stands, indicating...

  9. Instance-Based Learning: Integrating Sampling and Repeated Decisions from Experience

    ERIC Educational Resources Information Center

    Gonzalez, Cleotilde; Dutt, Varun

    2011-01-01

    In decisions from experience, there are 2 experimental paradigms: sampling and repeated-choice. In the sampling paradigm, participants sample between 2 options as many times as they want (i.e., the stopping point is variable), observe the outcome with no real consequences each time, and finally select 1 of the 2 options that cause them to earn or…

  10. Light beam range finder

    DOEpatents

    McEwan, Thomas E.

    1998-01-01

    A "laser tape measure" for measuring distance includes a transmitter such as a laser diode which transmits a sequence of electromagnetic pulses in response to a transmit timing signal. A receiver samples reflections from objects within the field of the sequence of visible electromagnetic pulses with controlled timing, in response to a receive timing signal. The receiver generates a sample signal in response to the samples which indicates distance to the object causing the reflections. The timing circuit supplies the transmit timing signal to the transmitter and supplies the receive timing signal to the receiver. The receive timing signal causes the receiver to sample the reflection such that the time between transmission of pulses in the sequence and sampling by the receiver sweeps over a range of delays. The transmit timing signal causes the transmitter to transmit the sequence of electromagnetic pulses at a pulse repetition rate, and the receive timing signal sweeps over the range of delays in a sweep cycle such that reflections are sampled at the pulse repetition rate and with different delays in the range of delays, such that the sample signal represents received reflections in equivalent time. The receiver according to one aspect of the invention includes an avalanche photodiode and a sampling gate coupled to the photodiode which is responsive to the receive timing signal. The transmitter includes a laser diode which supplies a sequence of visible electromagnetic pulses. A bright spot projected on to the target clearly indicates the point that is being measured, and the user can read the range to that point with precision of better than 0.1%.
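
    The equivalent-time sampling scheme described here can be simulated in a few lines: one sample per transmitted pulse, with the receive delay advanced slightly each repetition, so the slow sample sequence traces out the fast repetitive reflection. All waveform parameters below are invented:

    ```python
    import numpy as np

    def reflection(t, t_flight=8e-9, width=0.5e-9):
        """Received echo: a Gaussian pulse delayed by the round-trip
        time of flight (invented waveform, for illustration)."""
        return np.exp(-((t - t_flight) / width) ** 2)

    n_pulses = 1000       # one sample is taken per transmitted pulse
    delay_step = 20e-12   # receive delay advances 20 ps per repetition

    delays = np.arange(n_pulses) * delay_step  # the swept sampling delays
    samples = reflection(delays)               # equivalent-time waveform

    t_flight_est = delays[np.argmax(samples)]  # peak locates the echo
    print(f"estimated range: {3e8 * t_flight_est / 2:.2f} m")  # c * t / 2
    ```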

  11. Light beam range finder

    DOEpatents

    McEwan, T.E.

    1998-06-16

    A "laser tape measure" for measuring distance is disclosed which includes a transmitter such as a laser diode which transmits a sequence of electromagnetic pulses in response to a transmit timing signal. A receiver samples reflections from objects within the field of the sequence of visible electromagnetic pulses with controlled timing, in response to a receive timing signal. The receiver generates a sample signal in response to the samples which indicates distance to the object causing the reflections. The timing circuit supplies the transmit timing signal to the transmitter and supplies the receive timing signal to the receiver. The receive timing signal causes the receiver to sample the reflection such that the time between transmission of pulses in the sequence and sampling by the receiver sweeps over a range of delays. The transmit timing signal causes the transmitter to transmit the sequence of electromagnetic pulses at a pulse repetition rate, and the receive timing signal sweeps over the range of delays in a sweep cycle such that reflections are sampled at the pulse repetition rate and with different delays in the range of delays, such that the sample signal represents received reflections in equivalent time. The receiver according to one aspect of the invention includes an avalanche photodiode and a sampling gate coupled to the photodiode which is responsive to the receive timing signal. The transmitter includes a laser diode which supplies a sequence of visible electromagnetic pulses. A bright spot projected on to the target clearly indicates the point that is being measured, and the user can read the range to that point with precision of better than 0.1%. 7 figs.

  12. Longitudinal Effects on Early Adolescent Language: A Twin Study

    PubMed Central

    DeThorne, Laura Segebart; Smith, Jamie Mahurin; Betancourt, Mariana Aparicio; Petrill, Stephen A.

    2016-01-01

    Purpose We evaluated genetic and environmental contributions to individual differences in language skills during early adolescence, measured by both language sampling and standardized tests, and examined the extent to which these genetic and environmental effects are stable across time. Method We used structural equation modeling on latent factors to estimate additive genetic, shared environmental, and nonshared environmental effects on variance in standardized language skills (i.e., Formal Language) and productive language-sample measures (i.e., Productive Language) in a sample of 527 twins across 3 time points (mean ages 10–12 years). Results Individual differences in the Formal Language factor were influenced primarily by genetic factors at each age, whereas individual differences in the Productive Language factor were primarily due to nonshared environmental influences. For the Formal Language factor, the stability of genetic effects was high across all 3 time points. For the Productive Language factor, nonshared environmental effects showed low but statistically significant stability across adjacent time points. Conclusions The etiology of language outcomes may differ substantially depending on assessment context. In addition, the potential mechanisms for nonshared environmental influences on language development warrant further investigation. PMID:27732720

  13. Sampling command generator corrects for noise and dropouts in recorded data

    NASA Technical Reports Server (NTRS)

    Anderson, T. O.

    1973-01-01

    Generator measures period between zero crossings of reference signal and accepts as correct timing points only those zero crossings which occur acceptably close to nominal time predicted from last accepted command. Unidirectional crossover points are used exclusively so errors from analog nonsymmetry of crossover detector are avoided.

  14. A Statistical Guide to the Design of Deep Mutational Scanning Experiments.

    PubMed

    Matuszewski, Sebastian; Hildebrandt, Marcel E; Ghenu, Ana-Hermina; Jensen, Jeffrey D; Bank, Claudia

    2016-09-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates.

  15. Time-integrated passive sampling as a complement to conventional point-in-time sampling for investigating drinking-water quality, McKenzie River Basin, Oregon, 2007 and 2010-11

    USGS Publications Warehouse

    McCarthy, Kathleen A.; Alvarez, David A.

    2014-01-01

    The Eugene Water & Electric Board (EWEB) supplies drinking water to approximately 200,000 people in Eugene, Oregon. The sole source of this water is the McKenzie River, which has consistently excellent water quality relative to established drinking-water standards. To ensure that this quality is maintained as land use in the source basin changes and water demands increase, EWEB has developed a proactive management strategy that includes a combination of conventional point-in-time discrete water sampling and time-integrated passive sampling with a combination of chemical analyses and bioassays to explore water quality and identify where vulnerabilities may lie. In this report, we present the results from six passive-sampling deployments at six sites in the basin, including the intake and outflow from the EWEB drinking-water treatment plant (DWTP). This is the first known use of passive samplers to investigate both the source and finished water of a municipal DWTP. Results indicate that low concentrations of several polycyclic aromatic hydrocarbons and organohalogen compounds are consistently present in source waters, and that many of these compounds are also present in finished drinking water. The nature and patterns of compounds detected suggest that land-surface runoff and atmospheric deposition act as ongoing sources of polycyclic aromatic hydrocarbons, some currently used pesticides, and several legacy organochlorine pesticides. Comparison of results from point-in-time and time-integrated sampling indicates that these two methods are complementary and, when used together, provide a clearer understanding of contaminant sources than either method alone.
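
    For context, time-integrated passive sampler results are commonly converted to a time-weighted average (TWA) water concentration via the standard relation C_TWA = N / (Rs · t), where N is the analyte mass accumulated by the sampler, Rs its sampling rate, and t the deployment time. This general relation, not a formula from the report, is sketched below:

    ```python
    def twa_concentration(n_accumulated_ng, rs_l_per_day, t_days):
        """Standard passive-sampler conversion (general relation, not
        from this report): time-weighted average concentration, ng/L."""
        return n_accumulated_ng / (rs_l_per_day * t_days)

    # e.g. 42 ng accumulated over a 28-day deployment at Rs = 0.2 L/day
    print(twa_concentration(42.0, 0.2, 28.0))  # -> 7.5 ng/L
    ```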

  16. A new method for mapping multidimensional data to lower dimensions

    NASA Technical Reports Server (NTRS)

    Gowda, K. C.

    1983-01-01

    A multispectral mapping method is proposed which is based on the new concept of BEND (Bidimensional Effective Normalised Difference). The method, which involves taking one sample point at a time and finding the interrelationships between its features, is found to be very economical in terms of storage and processing time. It has good dimensionality reduction and clustering properties, and is highly suitable for computer analysis of large amounts of data. The transformed values obtained by this procedure are suitable for either a planar 2-space mapping of geological sample points or for making grayscale and color images of geo-terrains. A few examples are given to justify the efficacy of the proposed procedure.

  17. Instance-based learning: integrating sampling and repeated decisions from experience.

    PubMed

    Gonzalez, Cleotilde; Dutt, Varun

    2011-10-01

    In decisions from experience, there are 2 experimental paradigms: sampling and repeated-choice. In the sampling paradigm, participants sample between 2 options as many times as they want (i.e., the stopping point is variable), observe the outcome with no real consequences each time, and finally select 1 of the 2 options that cause them to earn or lose money. In the repeated-choice paradigm, participants select 1 of the 2 options for a fixed number of times and receive immediate outcome feedback that affects their earnings. These 2 experimental paradigms have been studied independently, and different cognitive processes have often been assumed to take place in each, as represented in widely diverse computational models. We demonstrate that behavior in these 2 paradigms relies upon common cognitive processes proposed by the instance-based learning theory (IBLT; Gonzalez, Lerch, & Lebiere, 2003) and that the stopping point is the only difference between the 2 paradigms. A single cognitive model based on IBLT (with an added stopping point rule in the sampling paradigm) captures human choices and predicts the sequence of choice selections across both paradigms. We integrate the paradigms through quantitative model comparison, where IBLT outperforms the best models created for each paradigm separately. We discuss the implications for the psychology of decision making.

  18. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    PubMed

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) was used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC(0-t) (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC(0-t) of saroglitazar. Only models with regression coefficients (R²) > 0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlations between predicted and observed AUC(0-t) of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time point models achieved R² > 0.90. Among the various 3-concentration-time point models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC(0-t) prediction of saroglitazar. The same models, when applied to the AUC(0-t) prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error <30% and correlation (r) of at least 0.9339 in the same pool of healthy subjects. A 3-concentration-time point limited sampling model predicts the exposure of saroglitazar (ie, AUC(0-t)) within the predefined acceptable bias and imprecision limits. The same model was also used to predict AUC(0-∞). The same limited sampling model was found to predict the exposure of saroglitazar sulfoxide within the predefined criteria. This model can find utility during late-phase clinical development of saroglitazar in the patient population.

  19. Comparisons of discrete and integrative sampling accuracy in estimating pulsed aquatic exposures.

    PubMed

    Morrison, Shane A; Luttbeg, Barney; Belden, Jason B

    2016-11-01

    Most current-use pesticides have short half-lives in the water column and thus the most relevant exposure scenarios for many aquatic organisms are pulsed exposures. Quantifying exposure using discrete water samples may not be accurate as few studies are able to sample frequently enough to accurately determine time-weighted average (TWA) concentrations of short aquatic exposures. Integrative sampling methods that continuously sample freely dissolved contaminants over time intervals (such as integrative passive samplers) have been demonstrated to be a promising measurement technique. We conducted several modeling scenarios to test the assumption that integrative methods may require many fewer samples for accurate estimation of peak 96-h TWA concentrations. We compared the accuracies of discrete point samples and integrative samples while varying sampling frequencies and a range of contaminant water half-lives (t₅₀ = 0.5, 2, and 8 d). Differences in the predictive accuracy of discrete point samples and integrative samples were greatest at low sampling frequencies. For example, when the half-life was 0.5 d, discrete point samples required 7 sampling events to ensure median values > 50% and no sampling events reporting highly inaccurate results (defined as < 10% of the true 96-h TWA). Across all water half-lives investigated, integrative sampling only required two samples to prevent highly inaccurate results and measurements resulting in median values > 50% of the true concentration. Regardless, the need for integrative sampling diminished as water half-life increased. For an 8-d water half-life, two discrete samples produced accurate estimates and median values greater than those obtained for two integrative samples. Overall, integrative methods are the more accurate method for monitoring contaminants with short water half-lives due to reduced frequency of extreme values, especially with uncertainties around the timing of pulsed events. However, the acceptability of discrete sampling methods for providing accurate concentration measurements increases with increasing aquatic half-lives.
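
    The comparison the authors model can be reproduced in miniature: a pulsed exposure with first-order decay, the true 96-h TWA (what an ideal integrative sampler reports), and estimates from n discrete grab samples. The pulse shape and all numbers are our assumptions:

    ```python
    import numpy as np

    t = np.linspace(0, 96, 9601)                 # hours
    half_life_h = 12.0                           # a 0.5-d water half-life
    conc = 10.0 * np.exp(-np.log(2) * t / half_life_h)  # ug/L after the pulse

    true_twa = np.trapz(conc, t) / 96.0          # the integrative-sampler value

    for n in [2, 4, 7]:                          # discrete grab samples
        t_grab = np.linspace(0, 96, n)
        grabs = 10.0 * np.exp(-np.log(2) * t_grab / half_life_h)
        est = np.trapz(grabs, t_grab) / 96.0
        print(f"{n} grabs: {est:.2f} vs true {true_twa:.2f} ug/L")
    ```

    With this short half-life, two grabs badly overestimate the TWA while seven come close, consistent with the sampling-frequency result above.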

  20. Compatible Basal Area and Number of Trees Estimators from Remeasured Horizontal Point Samples

    Treesearch

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1989-01-01

    Compatible groups of estimators for total value at time 1 (V1), survivor growth (S), and ingrowth (I) for use with permanent horizontal point samples are evaluated for the special cases of estimating the change in both the number of trees and basal area. Caveats which should be observed before any one compatible grouping of estimators is chosen...

  1. Sampling Development

    ERIC Educational Resources Information Center

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  2. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    PubMed

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (ie, one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (eg, the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
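
    The pseudoprofile-based bootstrap that the proposed method is compared against can be sketched compactly: draw one subject per time point (preserving the plasma/tissue pairing), assemble a profile, and take the ratio of trapezoidal AUCs. The data below are simulated, and the 2-phase refinement of the proposed random sampling approach is not reproduced:

```python
# Pseudoprofile-based bootstrap for a tissue-to-plasma AUC ratio (sketch).
import numpy as np

rng = np.random.default_rng(2)

times = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 24.0])   # nominal sampling times (h)
n_per_time = 4                                       # subjects sampled per time point

# Hypothetical paired observations, grouped by time point (one pair per subject).
plasma = {t: rng.lognormal(np.log(10.0 * np.exp(-0.1 * t)), 0.2, n_per_time)
          for t in times}
tissue = {t: rng.lognormal(np.log(30.0 * np.exp(-0.1 * t)), 0.2, n_per_time)
          for t in times}

def auc_trap(c, t):
    # Linear trapezoidal AUC.
    return float(np.sum((c[1:] + c[:-1]) * np.diff(t) / 2.0))

ratios = []
for _ in range(1000):
    # One pseudoprofile per replicate: one subject drawn per time point,
    # with the plasma/tissue pairing kept intact.
    idx = {t: rng.integers(n_per_time) for t in times}
    cp = np.array([plasma[t][idx[t]] for t in times])
    ct = np.array([tissue[t][idx[t]] for t in times])
    ratios.append(auc_trap(ct, times) / auc_trap(cp, times))

lo, hi = np.percentile(ratios, [2.5, 97.5])
print(f"tissue-to-plasma AUC ratio = {np.mean(ratios):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```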

  3. Random phase detection in multidimensional NMR.

    PubMed

    Maciejewski, Mark W; Fenwick, Matthew; Schuyler, Adam D; Stern, Alan S; Gorbatyuk, Vitaliy; Hoch, Jeffrey C

    2011-10-04

    Despite advances in resolution accompanying the development of high-field superconducting magnets, biomolecular applications of NMR require multiple dimensions in order to resolve individual resonances, and the achievable resolution is typically limited by practical constraints on measuring time. In addition to the need for measuring long evolution times to obtain high resolution, the need to distinguish the sign of the frequency constrains the ability to shorten measuring times. Sign discrimination is typically accomplished by sampling the signal with two different receiver phases or by selecting a reference frequency outside the range of frequencies spanned by the signal and then sampling at a higher rate. In the parametrically sampled (indirect) time dimensions of multidimensional NMR experiments, either method imposes an additional factor of 2 sampling burden for each dimension. We demonstrate that by using a single detector phase at each time sample point, but randomly altering the phase for different points, the sign ambiguity that attends fixed single-phase detection is resolved. Random phase detection enables a reduction in experiment time by a factor of 2 for each indirect dimension, amounting to a factor of 8 for a four-dimensional experiment, albeit at the cost of introducing sampling artifacts. Alternatively, for fixed measuring time, random phase detection can be used to double resolution in each indirect dimension. Random phase detection is complementary to nonuniform sampling methods, and their combination offers the potential for additional benefits. In addition to applications in biomolecular NMR, random phase detection could be useful in magnetic resonance imaging and other signal processing contexts.
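
    The sign-recovery idea admits a toy illustration: detect a single complex exponential through one random, known receiver phase per time point, then compare fit residuals for the two candidate frequency signs. This is a sketch of the principle only, not an NMR processing chain:

```python
# Random phase detection resolves the sign of a frequency (toy example).
import numpy as np

rng = np.random.default_rng(3)

f_true = -7.0                                  # Hz; the sign is what we want
t = np.arange(64) * 0.01                       # sampling times (s)
phases = rng.uniform(0.0, 2.0 * np.pi, t.size) # one random detector phase per point

# Single-phase detection: Re{exp(i*2*pi*f*t) * exp(-i*phi)} = cos(2*pi*f*t - phi).
y = np.cos(2.0 * np.pi * f_true * t - phases) + 0.05 * rng.normal(size=t.size)

def rss(f):
    # Residual sum of squares for a candidate signed frequency.
    return float(np.sum((y - np.cos(2.0 * np.pi * f * t - phases)) ** 2))

print("residual for +7 Hz:", rss(+7.0))
print("residual for -7 Hz:", rss(-7.0))        # far smaller: sign resolved

# With a fixed detector phase (phi = 0) the two signs are indistinguishable:
print(np.allclose(np.cos(2 * np.pi * 7 * t), np.cos(-2 * np.pi * 7 * t)))  # True
```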

  4. A Compressed Sensing Based Method for Reducing the Sampling Time of A High Resolution Pressure Sensor Array System

    PubMed Central

    Sun, Chenglu; Li, Wei; Chen, Wei

    2017-01-01

    For extracting the pressure distribution image and respiratory waveform unobtrusively and comfortably, we proposed a smart mat which utilizes a flexible pressure sensor array, printed electrodes and a novel soft seven-layer structure to monitor this physiological information. However, obtaining a high-resolution pressure distribution and a more accurate respiratory waveform requires more time to acquire the signal from all the pressure sensors embedded in the smart mat. In order to reduce the sampling time while keeping the same resolution and accuracy, a novel method based on compressed sensing (CS) theory was proposed. By utilizing the CS-based method, the sampling time can be reduced by 40%, with only about one-third of the original sampling points acquired. Several experiments were then carried out to validate the performance of the CS-based method. While less than one-third of the original sampling points were measured, the correlation coefficient between the reconstructed respiratory waveform and the original waveform reached 0.9078, and the accuracy of the respiratory rate (RR) extracted from the reconstructed waveform reached 95.54%. The experimental results demonstrate that the novel method fits the high-resolution smart mat system and is a viable option for reducing the sampling time of the pressure sensor array. PMID:28796188
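
    The abstract does not name the reconstruction algorithm, so the sketch below uses orthogonal matching pursuit, a standard CS solver, to recover a synthetic sparse pressure map from roughly one-third as many random measurements; all dimensions are illustrative:

```python
# Compressed-sensing recovery of a sparse pressure map via OMP (sketch).
import numpy as np

rng = np.random.default_rng(4)

n, m, k = 90, 30, 4                 # 90 cells, 30 measurements (~1/3), 4 active
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

A = rng.normal(size=(m, n)) / np.sqrt(m)     # random measurement matrix
y = A @ x                                    # compressed measurements

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily grow the support, refit by LS.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

print("max reconstruction error:", np.max(np.abs(omp(A, y, k) - x)))
```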

  5. Time of travel of solutes in selected reaches of the Sandusky River Basin, Ohio, 1972 and 1973

    USGS Publications Warehouse

    Westfall, Arthur O.

    1976-01-01

    A time of travel study of a 106-mile (171-kilometer) reach of the Sandusky River and a 39-mile (63-kilometer) reach of Tymochtee Creek was made to determine the time required for water released from Killdeer Reservoir on Tymochtee Creek to reach selected downstream points. In general, two dye sample runs were made through each subreach to define the time-discharge relation for approximating travel times at selected discharges within the measured range, and time-discharge graphs are presented for 38 subreaches. Graphs of dye dispersion and variation in relation to time are given for three selected sampling sites. For estimating travel time and velocities between points in the study reach, tables for selected flow durations are given. Duration curves of daily discharge for four index stations are presented to indicate the low-flow characteristics and for use in shaping downward extensions of the time-discharge curves.

  6. An efficient reliability algorithm for locating design point using the combination of importance sampling concepts and response surface method

    NASA Astrophysics Data System (ADS)

    Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin

    2017-06-01

    Monte Carlo simulation (MCS) is a useful tool for computation of the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm, employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a proposed two-step rule for updating the design point. This part finishes after a small number of samples have been generated. Then RSM starts to work using Bucher's experimental design, with the last design point as the center point and a proposed effective length as the radius of Bucher's approach. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules are shown.
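
    The importance-sampling half of such an algorithm is easy to illustrate: centre the sampling density at the design point and reweight each sample by the density ratio. A minimal sketch for a linear limit state in standard normal space (here the design point is assumed known, whereas the paper's two-step rule updates it iteratively):

```python
# Importance sampling centred at the design point of a linear limit state.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

beta = 3.0
alpha = np.array([1.0, 1.0]) / np.sqrt(2.0)  # unit normal of the limit state
u_star = beta * alpha                        # design point (most probable failure point)

N = 5000
u = u_star + rng.normal(size=(N, 2))         # sample around the design point
fails = (beta - u @ alpha) <= 0.0            # failure domain: g(u) <= 0

# Weight = true standard-normal density / shifted sampling density
# (normalisation constants cancel).
log_w = -0.5 * np.sum(u ** 2, axis=1) + 0.5 * np.sum((u - u_star) ** 2, axis=1)
pf = float(np.mean(fails * np.exp(log_w)))

print(f"IS estimate: {pf:.2e}   exact: {norm.cdf(-beta):.2e}")
```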

  7. Effects of time and sampling location on concentrations of β-hydroxybutyric acid in dairy cows.

    PubMed

    Mahrt, A; Burfeind, O; Heuwieser, W

    2014-01-01

    Two trials were conducted to examine factors potentially influencing the measurement of blood β-hydroxybutyric acid (BHBA) in dairy cows. The objective of the first trial was to study effects of sampling time on BHBA concentration in continuously fed dairy cows. Furthermore, we determined test characteristics of a single BHBA measurement at a random time of the day to diagnose subclinical ketosis considering commonly used cut-points (1.2 and 1.4 mmol/L). Finally, we set out to evaluate if test characteristics could be enhanced by repeating measurements after different time intervals. During 4 herd visits, a total of 128 cows (8 to 28 d in milk) fed 10 times daily were screened at 0900 h and preselected by BHBA concentration. Blood samples were drawn from the tail vessels and BHBA concentrations were measured using an electronic BHBA meter (Precision Xceed, Abbott Diabetes Care Ltd., Witney, UK). Cows with BHBA concentrations ≥0.8 mmol/L at this time were enrolled in the trial (n = 92). Subsequent BHBA measurements took place every 3 h for a total of 8 measurements during 24 h. The effect of sampling time on BHBA concentrations was tested in a repeated-measures ANOVA repeating sampling time. Sampling time did not affect BHBA concentrations in continuously fed dairy cows. Defining the average daily BHBA concentration calculated from the 8 measurements as the gold standard, a single measurement at a random time of the day to diagnose subclinical ketosis had a sensitivity of 0.90 or 0.89 at the 2 BHBA cut-points (1.2 and 1.4 mmol/L). Specificity was 0.88 or 0.90 using the same cut-points. Repeating measurements after different time intervals improved test characteristics only slightly. In the second experiment, we compared BHBA concentrations of samples drawn from 3 different blood sampling locations (tail vessels, jugular vein, and mammary vein) of 116 lactating dairy cows. Concentrations of BHBA differed in samples from the 3 sampling locations. Mean BHBA concentration was 0.3 mmol/L lower when measured in the mammary vein compared with the jugular vein and 0.4 mmol/L lower in the mammary vein compared with the tail vessels. We conclude that to measure BHBA, blood samples of continuously fed dairy cows can be drawn at any time of the day. A single measurement provides very good test characteristics for on-farm conditions. Blood samples for BHBA measurement should be drawn from the jugular vein or tail vessels; the mammary vein should not be used for this purpose. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  8. The Effect of Hydraulic Gradient and Pattern of Conduit Systems on Tracing Tests: Bench-Scale Modeling.

    PubMed

    Mohammadi, Zargham; Gharaat, Mohammad Javad; Field, Malcolm

    2018-03-13

    Tracer breakthrough curves provide valuable information about the traced media, especially in inherently heterogeneous karst aquifers. In order to study the effect of variations in hydraulic gradient and conduit systems on breakthrough curves, a bench-scale karst model was constructed. The bench-scale karst model contains both matrix and a conduit. Eight tracing tests were conducted under a wide range of hydraulic gradients from 1 to greater than 5 for branchwork and network-conduit systems. Sampling points at varying distances from the injection point were utilized. Results demonstrate that mean tracer velocities, tracer mass recovery and the linear rising slope of the breakthrough curves were directly controlled by hydraulic gradient. As hydraulic gradient increased, both one half the time for peak concentration and one fifth the time for peak concentration decreased. The results demonstrate that the variations in one half the time for peak concentration and one fifth the time for peak concentration of the descending limb for different sampling points under differing hydraulic gradients are mainly controlled by the interactions of advection with dispersion. The results are discussed from three perspectives: different conduit systems, different hydraulic-gradient conditions, and different sampling points. The research confirmed the undeniable role of the hydrogeological setting (i.e., hydraulic gradient and conduit system) in the shape of the breakthrough curve. The extracted parameters (mobile-fluid velocity, tracer-mass recovery, linear rising limb, one half the time for peak concentration, and one fifth the time for peak concentration) allow for differentiating hydrogeological settings and enhance interpretation of the tracing tests in karst aquifers. © 2018, National Ground Water Association.

  9. Repeated measurements of mite and pet allergen levels in house dust over a time period of 8 years.

    PubMed

    Antens, C J M; Oldenwening, M; Wolse, A; Gehring, U; Smit, H A; Aalberse, R C; Kerkhof, M; Gerritsen, J; de Jongste, J C; Brunekreef, B

    2006-12-01

    Studies of the association between indoor allergen exposure and the development of allergic diseases have often measured allergen exposure at one point in time. We investigated the variability of house dust mite (Der p 1, Der f 1) and cat (Fel d 1) allergen in Dutch homes over a period of 8 years. Data were obtained in the Dutch PIAMA birth cohort study. Dust from the child's mattress, the parents' mattress and the living room floor was collected at four points in time, when the child was 3 months, 4, 6 and 8 years old. Dust samples were analysed for Der p 1, Der f 1 and Fel d 1 by sandwich enzyme immuno assay. Mite allergen concentrations for the child's mattress, the parents' mattress and the living room floor were moderately correlated between time-points. Agreement was better for cat allergen. For Der p 1 and Der f 1 on the child's mattress, the within-home variance was close to or smaller than the between-home variance in most cases. For Fel d 1, the within-home variance was almost always smaller than the between-home variance. Results were similar for allergen levels expressed per gram of dust and allergen levels expressed per square metre of the sampled surface. Variance ratios were smaller when samples were taken at shorter time intervals than at longer time intervals. Over a period of 4 years, mite and cat allergens measured in house dust are sufficiently stable to use single measurements with confidence in epidemiological studies. The within-home variance was larger when samples were taken 8 years apart so that over such long periods, repetition of sampling is recommended.

  10. Analyzing survival curves at a fixed point in time for paired and clustered right-censored data

    PubMed Central

    Su, Pei-Fang; Chi, Yunchan; Lee, Chun-Yi; Shyr, Yu; Liao, Yi-De

    2018-01-01

    In clinical trials, information about certain time points may be of interest in making decisions about treatment effectiveness. Rather than comparing entire survival curves, researchers can focus on the comparison at fixed time points that may have a clinical utility for patients. For two independent samples of right-censored data, Klein et al. (2007) compared survival probabilities at a fixed time point by studying a number of tests based on some transformations of the Kaplan-Meier estimators of the survival function. However, to compare the survival probabilities at a fixed time point for paired right-censored data or clustered right-censored data, their approach would need to be modified. In this paper, we extend the statistics to accommodate the possible within-paired correlation and within-clustered correlation, respectively. We use simulation studies to present comparative results. Finally, we illustrate the implementation of these methods using two real data sets. PMID:29456280
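
    For the two-independent-sample case that this work extends, the fixed-time-point test reduces to Kaplan-Meier estimates, Greenwood variances, and a transformation such as the complementary log-log. A minimal sketch on simulated data (the paired and clustered variance corrections described in the abstract are not included):

```python
# Compare two survival curves at a fixed time point t0 (independent samples).
import numpy as np

def km_at(t0, time, event):
    """Kaplan-Meier S(t0) and Greenwood-type variance of log S(t0) (no ties)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n = len(time)
    s, var_log_s = 1.0, 0.0
    for i, (t, d) in enumerate(zip(time, event)):
        if t > t0:
            break
        at_risk = n - i
        if d:
            s *= 1.0 - 1.0 / at_risk
            var_log_s += 1.0 / (at_risk * (at_risk - 1.0))
    return s, var_log_s

rng = np.random.default_rng(6)
t1, t2 = rng.exponential(10.0, 80), rng.exponential(14.0, 80)   # event times
c1, c2 = rng.exponential(25.0, 80), rng.exponential(25.0, 80)   # censoring times
s1, v1 = km_at(8.0, np.minimum(t1, c1), t1 <= c1)
s2, v2 = km_at(8.0, np.minimum(t2, c2), t2 <= c2)

# cloglog transform phi(S) = log(-log S); the delta method gives
# Var[phi(S)] = Var[log S] / (log S)^2.
z = (np.log(-np.log(s1)) - np.log(-np.log(s2))) / np.sqrt(
    v1 / np.log(s1) ** 2 + v2 / np.log(s2) ** 2)
print(f"S1(8)={s1:.3f}  S2(8)={s2:.3f}  z={z:.2f}")
```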

  11. Efficient computation of the Grünwald-Letnikov fractional diffusion derivative using adaptive time step memory

    NASA Astrophysics Data System (ADS)

    MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.

    2015-09-01

    Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that truncate part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
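
    The trade-off being addressed fits in a few lines: Grünwald-Letnikov weights are cheap to generate recursively, full memory is accurate, and naive truncation loses accuracy. A minimal sketch (the paper's adaptive scheme instead keeps sparse far-history points, which is not reproduced here):

```python
# Grünwald-Letnikov fractional derivative: full vs truncated memory (sketch).
import numpy as np
from math import gamma

def gl_weights(alpha, n):
    # w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1)/k)   (= (-1)^k * binom(alpha, k))
    w = np.ones(n + 1)
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative(f_vals, h, alpha, memory=None):
    """GL fractional derivative at the last grid point, optionally truncated."""
    n = len(f_vals) - 1
    w = gl_weights(alpha, n)
    kmax = n if memory is None else min(n, memory)
    return h ** (-alpha) * sum(w[k] * f_vals[n - k] for k in range(kmax + 1))

h, alpha = 0.01, 0.5
t = np.arange(0.0, 2.0 + h, h)
f = t ** 2                      # D^0.5 t^2 = Gamma(3)/Gamma(2.5) * t^1.5
exact = gamma(3.0) / gamma(3.0 - alpha) * t[-1] ** (2.0 - alpha)

print("full memory:", gl_derivative(f, h, alpha), "  exact:", exact)
print("truncated  :", gl_derivative(f, h, alpha, memory=20))  # short-memory error
```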

  12. Diversity of human small intestinal Streptococcus and Veillonella populations.

    PubMed

    van den Bogert, Bartholomeus; Erkus, Oylum; Boekhorst, Jos; de Goffau, Marcus; Smid, Eddy J; Zoetendal, Erwin G; Kleerebezem, Michiel

    2013-08-01

    Molecular and cultivation approaches were employed to study the phylogenetic richness and temporal dynamics of Streptococcus and Veillonella populations in the small intestine. Microbial profiling of human small intestinal samples collected from four ileostomy subjects at four time points displayed abundant populations of Streptococcus spp. most affiliated with S. salivarius, S. thermophilus, and S. parasanguinis, as well as Veillonella spp. affiliated with V. atypica, V. parvula, V. dispar, and V. rogosae. Relative abundances varied per subject and time of sampling. Streptococcus and Veillonella isolates were cultured using selective media from ileostoma effluent samples collected at two time points from a single subject. The richness of the Streptococcus and Veillonella isolates was assessed at species and strain level by 16S rRNA gene sequencing and genetic fingerprinting, respectively. A total of 160 Streptococcus and 37 Veillonella isolates were obtained. Genetic fingerprinting differentiated seven Streptococcus lineages from ileostoma effluent, illustrating the strain richness within this ecosystem. The Veillonella isolates were represented by a single phylotype. Our study demonstrated that the small intestinal Streptococcus populations displayed considerable changes over time at the genetic lineage level because only representative strains of a single Streptococcus lineage could be cultivated from ileostoma effluent at both time points. © 2013 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.

  13. Evaluation of single-point sampling strategies for the estimation of moclobemide exposure in depressive patients.

    PubMed

    Ignjatovic, Anita Rakic; Miljkovic, Branislava; Todorovic, Dejan; Timotijevic, Ivana; Pokrajac, Milena

    2011-05-01

    Because moclobemide pharmacokinetics vary considerably among individuals, monitoring of plasma concentrations lends insight into its pharmacokinetic behavior and enhances its rational use in clinical practice. The aim of this study was to evaluate whether single concentration-time points could adequately predict moclobemide systemic exposure. Pharmacokinetic data (full 7-point pharmacokinetic profiles), obtained from 21 depressive inpatients receiving moclobemide (150 mg 3 times daily), were randomly split into development (n = 18) and validation (n = 16) sets. Correlations between the single concentration-time points and the area under the concentration-time curve within a 6-hour dosing interval at steady-state (AUC(0-6)) were assessed by linear regression analyses. The predictive performance of single-point sampling strategies was evaluated in the validation set by mean prediction error, mean absolute error, and root mean square error. Plasma concentrations in the absorption phase yielded unsatisfactory predictions of moclobemide AUC(0-6). The best estimation of AUC(0-6) was achieved from concentrations at 4 and 6 hours following dosing. As the most reliable surrogate for moclobemide systemic exposure, concentrations at 4 and 6 hours should be used instead of predose trough concentrations as an indicator of between-patient variability and a guide for dose adjustments in specific clinical situations.

  14. Time as a dimension of the sample design in national-scale forest inventories

    Treesearch

    Francis Roesch; Paul Van Deusen

    2013-01-01

    Historically, the goal of forest inventories has been to determine the extent of the timber resource. Predictions of how the resource was changing were made by comparing differences between successive inventories. The general view of the associated sample design was with selection probabilities based on land area observed at a discrete point in time. Time was not...

  15. [Application of Fourier amplitude sensitivity test in Chinese healthy volunteer population pharmacokinetic model of tacrolimus].

    PubMed

    Guan, Zheng; Zhang, Guan-min; Ma, Ping; Liu, Li-hong; Zhou, Tian-yan; Lu, Wei

    2010-07-01

    In this study, we evaluated the influence of the variance of each of the parameters on the output of a tacrolimus population pharmacokinetic (PopPK) model in Chinese healthy volunteers, using the Fourier amplitude sensitivity test (FAST). In addition, we estimated the sensitivity index over the whole course of blood sampling, designed different sampling schedules, and evaluated the quality of the parameter estimates and the efficiency of prediction. It was observed that, besides CL1/F, the sensitivity indices for all of the other four parameters (V1/F, V2/F, CL2/F and ka) in the tacrolimus PopPK model were relatively high and changed rapidly over time. With an increase in the variance of ka, its sensitivity index increased markedly, associated with a significant decrease in the sensitivity indices of the other parameters and an obvious change in peak time as well. According to the simulation in NONMEM and the comparison among different fitting results, we found that the sampling time points designed according to FAST surpassed the other time points. This suggests that FAST can assess the sensitivities of model parameters effectively, and can assist the design of clinical sampling times and the construction of PopPK models.
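
    FAST itself is a variance-based global method; as a simplified stand-in, the sketch below computes normalized local sensitivities over time for a hypothetical one-compartment oral-absorption model, which conveys why the informative sampling times differ from parameter to parameter. All values are illustrative, not the tacrolimus model's:

```python
# Time-resolved, normalized local sensitivities of a one-compartment PK model.
import numpy as np

def conc(t, ka, cl, v, dose=5.0):
    ke = cl / v
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

params = {"ka": 1.2, "cl": 30.0, "v": 100.0}   # hypothetical (1/h, L/h, L)
t = np.linspace(0.25, 12.0, 48)
base = conc(t, **params)

for name in params:
    p = dict(params)
    p[name] *= 1.01                             # 1% perturbation
    sens = (conc(t, **p) - base) / base / 0.01  # d ln C / d ln p
    print(f"{name}: |sensitivity| peaks at t = {t[np.argmax(np.abs(sens))]:.2f} h")
```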

  16. Sampling Error in a Particulate Mixture: An Analytical Chemistry Experiment.

    ERIC Educational Resources Information Center

    Kratochvil, Byron

    1980-01-01

    Presents an undergraduate experiment demonstrating sampling error. The sampling system selected is a mixture of potassium hydrogen phthalate and sucrose; a self-zeroing, automatically refillable buret minimizes the titration time of multiple samples, and a dilute back-titrant yields high end-point precision. (CS)

  17. Temporal Variability of Microplastic Concentrations in Freshwater Streams

    NASA Astrophysics Data System (ADS)

    Watkins, L.; Walter, M. T.

    2016-12-01

    Plastic pollution, specifically the size fraction less than 5 mm known as microplastics, is an emerging contaminant in waterways worldwide. The ability of microplastics to adsorb and transport contaminants and microbes, as well as be ingested by organisms, makes them a concern in both freshwater and marine ecosystems. Recent efforts to determine the extent of microplastic pollution are increasingly focused on freshwater systems, but most studies have reported concentrations at a single time-point; few have begun to uncover how plastic concentrations in riverine systems may change through time. We hypothesize that the time of day and season of sampling influence the concentrations of microplastics in water samples and, more specifically, that daytime stormflow samples contain the highest microplastic concentrations due to maximized runoff and wastewater discharge. In order to test this hypothesis, we sampled in two similar streams in Ithaca, New York using a 333 µm mesh net deployed within the thalweg. Repeat samples were collected to identify diurnal patterns as well as monthly variation. Samples were processed in the laboratory following the NOAA wet peroxide oxidation protocol. This work improves our ability to interpret existing single-time-point survey results by providing information on how microplastic concentrations change over time and whether concentrations in existing stream studies are likely representative of their location. Additionally, these results will inform future studies by providing insight into representative sample timing and capturing temporal trends for the purposes of modeling and of developing regulations for microplastic pollution.

  18. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    PubMed

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
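
    The recommended approach, maximum likelihood on the cumulative distribution of interval-censored data, can be written directly: the likelihood contribution of each sampling interval is the CDF mass it contains. A minimal sketch on simulated lognormal data, with sample size and interval width loosely following the recommendations:

```python
# Fit a lognormal retention-time distribution to interval-censored counts (MLE).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

rng = np.random.default_rng(7)

true_dist = lognorm(s=0.5, scale=6.0)           # "true" retention time (h)
samples = true_dist.rvs(500, random_state=rng)  # 500 recovered propagules
edges = np.arange(0.0, 49.0, 2.0)               # 2-h sampling intervals
counts, _ = np.histogram(samples, bins=edges)
tail = int(np.sum(samples >= edges[-1]))        # recovered after the last interval

def neg_loglik(theta):
    s, scale = np.exp(theta)                    # log-parameters stay positive
    cdf = lognorm.cdf(edges, s=s, scale=scale)
    p = np.clip(np.diff(cdf), 1e-12, None)      # probability mass per interval
    p_tail = max(1.0 - cdf[-1], 1e-12)
    return -(np.sum(counts * np.log(p)) + tail * np.log(p_tail))

res = minimize(neg_loglik, x0=np.log([1.0, 5.0]), method="Nelder-Mead")
s_hat, scale_hat = np.exp(res.x)
print(f"fitted s = {s_hat:.3f} (true 0.5), scale = {scale_hat:.2f} h (true 6.0)")
```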

  19. Minimum and Maximum Times Required to Obtain Representative Suspended Sediment Samples

    NASA Astrophysics Data System (ADS)

    Gitto, A.; Venditti, J. G.; Kostaschuk, R.; Church, M. A.

    2014-12-01

    Bottle sampling is a convenient method of obtaining suspended sediment measurements for the development of sediment budgets. While these methods are generally considered to be reliable, recent analysis of depth-integrated sampling has identified considerable uncertainty in measurements of grain-size concentration between grain-size classes of multiple samples. Point-integrated bottle sampling is assumed to represent the mean concentration of suspended sediment, but the uncertainty surrounding this method is not well understood. Here we examine at-a-point variability in velocity, suspended sediment concentration, grain-size distribution, and grain-size moments to determine if traditional point-integrated methods provide a representative sample of suspended sediment. We present continuous hour-long observations of suspended sediment from the sand-bedded portion of the Fraser River at Mission, British Columbia, Canada, using a LISST laser-diffraction instrument. Spectral analysis suggests that there is no statistically significant peak in energy density, indicating the absence of periodic fluctuations in flow and suspended sediment. However, a slope break in the spectra at 0.003 Hz corresponds to a period of 5.5 minutes. This coincides with the threshold between large-scale turbulent eddies that scale with channel width/mean velocity and hydraulic phenomena related to channel dynamics. This suggests that suspended sediment samples taken over a period longer than 5.5 minutes incorporate variability that is larger in scale than turbulent phenomena in this channel. Examination of 5.5-minute periods of our time series indicates that ~20% of the time a stable mean value of volumetric concentration is reached within 30 seconds, a typical bottle sample duration. In ~12% of measurements a stable mean was not reached over the 5.5-minute sample duration. The remaining measurements achieve a stable mean in an even distribution over the intervening interval.

  20. Evaluating the Whitening and Microstructural Effects of a Novel Whitening Strip on Porcelain and Composite Dental Materials

    PubMed Central

    Takesh, Thair; Sargsyan, Anik; Lee, Matthew; Anbarani, Afarin; Ho, Jessica; Wilder-Smith, Petra

    2017-01-01

    Aims The aim of this project was to evaluate the effects of 2 different whitening strips on color, microstructure and roughness of tea-stained porcelain and composite surfaces. Methods 54 porcelain and 72 composite chips served as samples for timed application of over-the-counter (OTC) test or control dental whitening strips. Chips were divided randomly into three groups of 18 porcelain and 24 composite chips each. Of these groups, 1 porcelain and 1 composite set served as controls. The remaining 2 groups were randomized to treatment with either Oral Essentials® Whitening Strips or Crest® 3D White Whitestrips™. Sample surface structure was examined by light microscopy, profilometry and Scanning Electron Microscopy (SEM). Additionally, a reflectance spectrophotometer was used to assess color changes in the porcelain and composite samples over 24 hours of whitening. Data points were analyzed at each time point using ANOVA. Results In the light microscopy and SEM images, no discrete physical defects were observed in any of the samples at any time points. However, high-resolution SEM images showed an appearance of increased surface roughness in all composite samples. Using profilometry, significantly increased post-whitening roughness was documented in the composite samples exposed to the control bleaching strips. Composite samples underwent a significant and equivalent shift in color following exposure to Crest® 3D White Whitestrips™ and Oral Essentials® Whitening Strips. Conclusions A novel commercial tooth whitening strip demonstrated a comparable bleaching effect to a widely used OTC whitening strip. Neither whitening strip caused physical defects in the sample surfaces. However, the control strip caused roughening of the composite samples whereas the test strip did not. PMID:29226023

  1. Calling, Vocational Development, and Well Being: A Longitudinal Study of Medical Students

    ERIC Educational Resources Information Center

    Duffy, Ryan D.; Manuel, R. Stephen; Borges, Nicole J.; Bott, Elizabeth M.

    2011-01-01

    The present study investigated the relation of calling to the vocational development and well-being of a sample of medical students. Students were surveyed at two time points: prior to beginning the first year of medical school and prior to beginning the third year of medical school. At each time point, calling moderately correlated with positive…

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cornwell, Paris A; Bunn, Jeffrey R; Schmidlin, Joshua E

    The December 2010 version of the guide, ORNL/TM-2008/159, by Jeff Bunn, Josh Schmidlin, Camden Hubbard, and Paris Cornwell, has been further revised due to a major change in the GeoMagic Studio software for constructing a surface model. The Studio software update also includes a plug-in module to operate the FARO Scan Arm. Other revisions for clarity were also made. The purpose of this revision document is to guide the reader through the process of laser alignment used by NRSF2 at HFIR and VULCAN at SNS. This system was created to increase the spatial accuracy of the measurement points in a sample, reduce the use of neutron time used for alignment, improve experiment planning, and reduce operator error. The need for spatial resolution has been driven by the reduction in gauge volumes to the sub-millimeter level, steep strain gradients in some samples, and requests to mount multiple samples within a few days for relating data from each sample to a common sample coordinate system. The first step in this process involves mounting the sample on an indexer table in a laboratory set up for offline sample mounting and alignment in the same manner it would be mounted at either instrument. In the shared laboratory, a FARO ScanArm is used to measure the coordinates of points on the sample surface ('point cloud'), specific features and fiducial points. A Sample Coordinate System (SCS) needs to be established first. This is an advantage of the technique because the SCS can be defined in such a way to facilitate simple definition of measurement points within the sample. Next, samples are typically mounted to a frame of 80/20 and fiducial points are attached to the sample or frame then measured in the established sample coordinate system. The laser scan probe on the ScanArm can then be used to scan in an 'as-is' model of the sample as well as mounting hardware. GeoMagic Studio 12 is the software package used to construct the model from the point cloud the scan arm creates. Once a model, fiducial, and measurement files are created, a special program called SScanSS combines the information and, by simulating the sample on the diffractometer, can help plan the experiment before using neutron time. Finally, the sample is mounted on the relevant stress measurement instrument and the fiducial points are measured again. In the HFIR beam room, a laser tracker is used in conjunction with a program called CAM2 to measure the fiducial points in the NRSF2 instrument's sample positioner coordinate system. SScanSS is then used again to perform a coordinate system transformation of the measurement file locations to the sample positioner coordinate system. A procedure file is then written with the coordinates in the sample positioner coordinate system for the desired measurement locations. This file is often called a script or command file and can be further modified using Excel. It is very important to note that this process is not a linear one, but rather, it often is iterative. Many of the steps in this guide are interdependent on one another. It is very important to discuss the process as it pertains to the specific sample being measured. What works with one sample may not necessarily work for another. This guide attempts to provide a typical work flow that has been successful in most cases.

  3. Piecewise multivariate modelling of sequential metabolic profiling data.

    PubMed

    Rantalainen, Mattias; Cloarec, Olivier; Ebbels, Timothy M D; Lundstedt, Torbjörn; Nicholson, Jeremy K; Holmes, Elaine; Trygg, Johan

    2008-02-19

    Modelling the time-related behaviour of biological systems is essential for understanding their dynamic responses to perturbations. In metabolic profiling studies, the sampling rate and number of sampling points are often restricted due to experimental and biological constraints. A supervised multivariate modelling approach with the objective to model the time-related variation in the data for short and sparsely sampled time-series is described. A set of piecewise Orthogonal Projections to Latent Structures (OPLS) models are estimated, describing changes between successive time points. The individual OPLS models are linear, but the piecewise combination of several models accommodates modelling and prediction of changes which are non-linear with respect to the time course. We demonstrate the method on both simulated and metabolic profiling data, illustrating how time related changes are successfully modelled and predicted. The proposed method is effective for modelling and prediction of short and multivariate time series data. A key advantage of the method is model transparency, allowing easy interpretation of time-related variation in the data. The method provides a competitive complement to commonly applied multivariate methods such as OPLS and Principal Component Analysis (PCA) for modelling and analysis of short time-series data.
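
    A piecewise analogue can be sketched with ordinary PLS regression (scikit-learn has no OPLS, so the orthogonal signal correction step is omitted): one local model per pair of successive time points, each interpretable on its own. Data and dimensions are simulated:

```python
# Piecewise PLS models between successive time points (OPLS stand-in, sketch).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(8)

n_subjects, n_metabolites, n_times = 20, 50, 4
X = [rng.normal(size=(n_subjects, n_metabolites)) for _ in range(n_times)]
for t in range(n_times):
    X[t][:, :5] += 1.0 * t        # only the first five variables drift with time

models = []
for t in range(n_times - 1):
    # One local linear model per step: time t vs time t+1.
    Xt = np.vstack([X[t], X[t + 1]])
    yt = np.concatenate([np.zeros(n_subjects), np.ones(n_subjects)])
    models.append(PLSRegression(n_components=2).fit(Xt, yt))

# The weights of each piecewise model show which variables change at that step.
for t, m in enumerate(models):
    top = np.argsort(np.abs(m.x_weights_[:, 0]))[-3:]
    print(f"step {t}->{t + 1}: most influential variables {sorted(top)}")
```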

  4. Strengths and weaknesses of temporal stability analysis for monitoring and estimating grid-mean soil moisture in a high-intensity irrigated agricultural landscape

    NASA Astrophysics Data System (ADS)

    Ran, Youhua; Li, Xin; Jin, Rui; Kang, Jian; Cosh, Michael H.

    2017-01-01

    Monitoring and estimating grid-mean soil moisture is very important for assessing many hydrological, biological, and biogeochemical processes and for validating remotely sensed surface soil moisture products. Temporal stability analysis (TSA) is a valuable tool for identifying a small number of representative sampling points to estimate the grid-mean soil moisture content. This analysis was evaluated and improved using high-quality surface soil moisture data that were acquired by a wireless sensor network in a high-intensity irrigated agricultural landscape in an arid region of northwestern China. The performance of the TSA was limited in areas where the representative error was dominated by random events, such as irrigation events. This shortcoming can be effectively mitigated by using a stratified TSA (STSA) method, proposed in this paper. In addition, the following methods were proposed for rapidly and efficiently identifying representative sampling points when using TSA. (1) Instantaneous measurements can be used to identify representative sampling points to some extent; however, the error resulting from this method is significant when validating remotely sensed soil moisture products. Thus, additional representative sampling points should be considered to reduce this error. (2) The calibration period can be determined from the time span of the full range of the grid-mean soil moisture content during the monitoring period. (3) The representative error is sensitive to the number of calibration sampling points, especially when only a few representative sampling points are used. Multiple sampling points are recommended to reduce data loss and improve the likelihood of representativeness at two scales.
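
    The core TSA computation is short: for each sampling point, the mean relative difference (MRD) from the grid mean and its temporal standard deviation, with representative points having both near zero. A minimal sketch on simulated network data (the stratified STSA extension proposed here is not reproduced):

```python
# Temporal stability analysis: rank sampling points by MRD and its std (sketch).
import numpy as np

rng = np.random.default_rng(9)

n_points, n_dates = 30, 60
offsets = rng.normal(0.0, 0.04, n_points)[:, None]   # persistent wet/dry biases
theta = (0.20 + 0.05 * np.sin(np.linspace(0.0, 6.0, n_dates)) + offsets
         + rng.normal(0.0, 0.01, (n_points, n_dates)))  # soil moisture series

grid_mean = theta.mean(axis=0)                 # spatial mean on each date
rel_diff = (theta - grid_mean) / grid_mean     # relative difference delta_ij
mrd = rel_diff.mean(axis=1)                    # mean relative difference
sdrd = rel_diff.std(axis=1, ddof=1)            # temporal std of rel. difference

# Representative point: MRD near zero and small SDRD, often combined as below.
best = int(np.argmin(np.sqrt(mrd ** 2 + sdrd ** 2)))
print(f"most representative point: #{best} (MRD={mrd[best]:+.3f}, SDRD={sdrd[best]:.3f})")
```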

  5. Definition of a new thermal contrast and pulse correction for defect quantification in pulsed thermography

    NASA Astrophysics Data System (ADS)

    Benítez, Hernán D.; Ibarra-Castanedo, Clemente; Bendada, AbdelHakim; Maldague, Xavier; Loaiza, Humberto; Caicedo, Eduardo

    2008-01-01

    It is well known that methods of thermographic non-destructive testing based on thermal contrast are strongly affected by non-uniform heating at the surface. Hence, the results obtained from these methods depend considerably on the chosen reference point. The differential absolute contrast (DAC) method was developed to eliminate the need to determine a reference point, defining the thermal contrast with respect to an ideal sound area. Although very useful at early times, the DAC accuracy decreases when the heat front approaches the sample rear face. We propose a new DAC version that explicitly introduces the sample thickness using thermal quadrupoles theory, and show that the new DAC's range of validity extends to longer times while preserving its validity at short times. This new contrast is used for defect quantification in composite, Plexiglas™ and aluminum samples.

  6. Quantification of HIV-1 DNA using real-time recombinase polymerase amplification.

    PubMed

    Crannell, Zachary Austin; Rohrman, Brittany; Richards-Kortum, Rebecca

    2014-06-17

    Although recombinase polymerase amplification (RPA) has many advantages for the detection of pathogenic nucleic acids in point-of-care applications, RPA has not yet been implemented to quantify sample concentration using a standard curve. Here, we describe a real-time RPA assay with an internal positive control and an algorithm that analyzes real-time fluorescence data to quantify HIV-1 DNA. We show that DNA concentration and the onset of detectable amplification are correlated by an exponential standard curve. In a set of experiments in which the standard curve and algorithm were used to analyze and quantify additional DNA samples, the algorithm predicted an average concentration within 1 order of magnitude of the correct concentration for all HIV-1 DNA concentrations tested. These results suggest that quantitative RPA (qRPA) may serve as a powerful tool for quantifying nucleic acids and may be adapted for use in single-sample point-of-care diagnostic systems.
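
    A qRPA-style standard curve can be sketched as a regression of amplification onset time on log10 concentration, inverted to quantify an unknown sample; the numbers below are illustrative, not the authors' calibration data:

```python
# Onset-time standard curve for quantitative RPA (illustrative numbers).
import numpy as np

conc = np.array([1e2, 1e3, 1e4, 1e5, 1e6])      # template copies per reaction
onset = np.array([14.0, 11.6, 9.1, 6.8, 4.5])   # minutes to threshold fluorescence

# Onset time is ~linear in log10(concentration), i.e. an exponential standard curve.
slope, intercept = np.polyfit(np.log10(conc), onset, 1)

def quantify(onset_min):
    # Invert the calibration line to estimate concentration.
    return 10 ** ((onset_min - intercept) / slope)

print(f"unknown with 8.0 min onset ≈ {quantify(8.0):.1e} copies")
```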

  7. Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images

    PubMed Central

    Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun

    2013-01-01

    This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by over three times compared with the conventional algorithm, and the image quality is well preserved. PMID:23424608

  8. Variability of residue concentrations of ciprofloxacin in honey from treated hives.

    PubMed

    Chan, Danny; Macarthur, Roy; Fussell, Richard J; Wilford, Jack; Budge, Giles

    2017-04-01

    Honey bees (Apis mellifera L.) were treated with a model veterinary drug compound (ciprofloxacin) in a 3-year study (2012-14) to investigate the variability of residue concentration in honey. Sucrose solution containing ciprofloxacin was administered to 45 hives (1 g of ciprofloxacin per hive) at the beginning of the honey flow in late May/mid-June 2012, 2013 and 2014. Buckfast honey bees (A. mellifera - hybrid) were used in years 2012 and 2013. Carniolan honey bees (A. mellifera carnica) were used in place of the Buckfast honey bees due to unforeseen circumstances in the final year of the study (2014). Honey was collected over nine scheduled time points from May/June till late October each year. Up to five hives were removed and their honey analysed per time point. Honey samples were analysed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) to determine ciprofloxacin concentration. Statistical assessment of the data shows that the inter-hive variation of ciprofloxacin concentrations in 2012/13 is very different from that of 2014, with relative standard deviations (RSDs) of 138% and 61%, respectively. The average ciprofloxacin concentration for 2014 at the last time point was more than 10 times that of samples from 2012/13 at the same time point. The difference between the 2012/13 and 2014 data is likely due to the different type of honey bees used in this study (2012/13 Buckfast versus 2014 Carniolan). Uncertainty estimates for honey with high ciprofloxacin concentration (upper 95th percentile) across all hives for 55-day withdrawal samples gave residual standard errors (RSEs) of 22%, 20% and 11% for 2012, 2013 and 2014, respectively. If the number of hives were to be reduced for future studies, RSEs were estimated to be 52% (2012), 54% (2013) and 26% (2014) for one hive per time point (nine total hives).

  9. Detecting a Change in School Performance: A Bayesian Analysis for a Multilevel Join Point Problem. CSE Technical Report 542.

    ERIC Educational Resources Information Center

    Thum, Yeow Meng; Bhattacharya, Suman Kumar

    To better describe individual behavior within a system, this paper uses a sample of longitudinal test scores from a large urban school system to consider hierarchical Bayes estimation of a multilevel linear regression model in which each individual regression slope of test score on time switches at some unknown point in time, "kj."…

  10. Automated DBS microsampling, microscale automation and microflow LC-MS for therapeutic protein PK.

    PubMed

    Zhang, Qian; Tomazela, Daniela; Vasicek, Lisa A; Spellman, Daniel S; Beaumont, Maribel; Shyong, BaoJen; Kenny, Jacqueline; Fauty, Scott; Fillgrove, Kerry; Harrelson, Jane; Bateman, Kevin P

    2016-04-01

    Reduce animal usage for discovery-stage PK studies for biologics programs using microsampling-based approaches and microscale LC-MS. We report the development of an automated DBS-based serial microsampling approach for studying the PK of therapeutic proteins in mice. Automated sample preparation and microflow LC-MS were used to enable assay miniaturization and improve overall assay throughput. Serial sampling of mice was possible over the full 21-day study period with the first six time points over 24 h being collected using automated DBS sample collection. Overall, this approach demonstrated comparable data to a previous study using single mice per time point liquid samples while reducing animal and compound requirements by 14-fold. Reduction in animals and drug material is enabled by the use of automated serial DBS microsampling for mice studies in discovery-stage studies of protein therapeutics.

  11. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    PubMed

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  12. A Time-Domain CMOS Oscillator-Based Thermostat with Digital Set-Point Programming

    PubMed Central

    Chen, Chun-Chi; Lin, Shih-Hao

    2013-01-01

    This paper presents a time-domain CMOS oscillator-based thermostat with digital set-point programming [without a digital-to-analog converter (DAC) or external resistor] to achieve on-chip thermal management of modern VLSI systems. A time-domain delay-line-based thermostat with multiplexers (MUXs) was used to substantially reduce the power consumption and chip size, and can benefit from the performance enhancement due to the scaling down of fabrication processes. For further cost reduction and accuracy enhancement, this paper proposes a thermostat using two oscillators that are suitable for time-domain curvature compensation instead of longer linear delay lines. The final time comparison was achieved using a time comparator with a built-in custom hysteresis to generate the corresponding temperature alarm and control. The chip size of the circuit was reduced to 0.12 mm² in a 0.35-μm TSMC CMOS process. The thermostat operates from 0 to 90 °C, and achieved a fine resolution better than 0.05 °C and an improved inaccuracy of ±0.6 °C after two-point calibration for eight packaged chips. The power consumption was 30 μW at a sample rate of 10 samples/s. PMID:23385403

  13. Influence of land use intensity on the diversity of ammonia oxidizing bacteria and archaea in soils from grassland ecosystems.

    PubMed

    Meyer, Annabel; Focks, Andreas; Radl, Viviane; Welzl, Gerhard; Schöning, Ingo; Schloter, Michael

    2014-01-01

    In the present study, the influence of the land use intensity on the diversity of ammonia oxidizing bacteria (AOB) and archaea (AOA) in soils from different grassland ecosystems has been investigated in spring and summer of the season (April and July). Diversity of AOA and AOB was studied by TRFLP fingerprinting of amoA amplicons. The diversity from AOB was low and dominated by a peak that could be assigned to Nitrosospira. The obtained profiles for AOB were very stable and neither influenced by the land use intensity nor by the time point of sampling. In contrast, the obtained patterns for AOA were more complex although one peak that could be assigned to Nitrosopumilus was dominating all profiles independent from the land use intensity and the sampling time point. Overall, the AOA profiles were much more dynamic than those of AOB and responded clearly to the land use intensity. An influence of the sampling time point was again not visible. Whereas AOB profiles were clearly linked to potential nitrification rates in soil, major TRFs from AOA were negatively correlated to DOC and ammonium availability and not related to potential nitrification rates.

  14. A single-chip 32-channel analog beamformer with 4-ns delay resolution and 768-ns maximum delay range for ultrasound medical imaging with a linear array transducer.

    PubMed

    Um, Ji-Yong; Kim, Yoon-Jee; Cho, Seong-Eun; Chae, Min-Kyun; Kim, Byungsub; Sim, Jae-Yoon; Park, Hong-June

    2015-02-01

    A single-chip 32-channel analog beamformer is proposed. It achieves a delay resolution of 4 ns and a maximum delay range of 768 ns. It has a focal-point based architecture, which consists of 7 sub-analog beamformers (sub-ABF). Each sub-ABF performs a RX focusing operation for a single focal point. Seven sub-ABFs perform a time-interleaving operation to achieve the maximum delay range of 768 ns. Phase interpolators are used in sub-ABFs to generate sampling clocks with the delay resolution of 4 ns from a low frequency system clock of 5 MHz. Each sub-ABF samples 32 echo signals at different times into sampling capacitors, which work as analog memory cells. The sampled 32 echo signals of each sub-ABF are originated from one target focal point at one instance. They are summed at one instance in a sub-ABF to perform the RX focusing for the target focal point. The proposed ABF chip has been fabricated in a 0.13-μm CMOS process with an active area of 16 mm². The total power consumption is 287 mW. In measurement, the digital echo signals from a commercial ultrasound medical imaging machine were applied to the fabricated chip through commercial DAC chips. Due to the speed limitation of the DAC chips, the delay resolution was relaxed to 10 ns for the real-time measurement. A linear array transducer with no steering operation is used in this work.

  15. Strategies to assess systemic exposure of chemicals in subchronic/chronic diet and drinking water studies.

    PubMed

    Saghir, Shakil A; Mendrala, Alan L; Bartels, Michael J; Day, Sue J; Hansen, Steve C; Sushynski, Jacob M; Bus, James S

    2006-03-15

    Strategies were developed for the estimation of systemically available daily doses of chemicals, diurnal variations in blood levels, and rough elimination rates in subchronic feeding/drinking water studies, utilizing a minimal number of blood samples. Systemic bioavailability of chemicals was determined by calculating area under the plasma concentration curve over 24 h (AUC-24 h) using complete sets of data (≥5 data points) and also three, two, and one selected time points. The best predictions of AUC-24 h were made when three time points were used, corresponding to Cmax, a mid-morning sample, and Cmin. These values were found to be 103 ± 10% of the original AUC-24 h, with 13 out of 17 values ranging between 96 and 105% of the original. Calculation of AUC-24 h from two samples (Cmax and Cmin) or one mid-morning sample afforded slightly larger variations in the calculated AUC-24 h (69-136% of the actual). Following drinking water exposure, prediction of AUC-24 h using 3 time points (Cmax, mid-morning, and Cmin) was very close to actual values (80-100%) among mice, while values for rats were only 63% of the original due to less frequent drinking behavior of rats during the light cycle. Collection and analysis of 1-3 blood samples per dose may provide insight into dose-proportional or non-dose-proportional differences in systemic bioavailability, pointing towards saturation of absorption or elimination or some other phenomenon warranting further investigation. In addition, collection of the terminal blood samples from rats, which is usually conducted after 18 h of fasting, will be helpful in rough estimation of blood/plasma half-life of the compound. The amount of chemical(s) and/or metabolite(s) in excreta and their possible use as biomarkers in predicting the daily systemic exposure levels are also discussed. Determining these parameters in the early stages of testing will provide critical information to improve the appropriate design of other longer-term toxicity studies.

  17. Impact of urine preservation methods and duration of storage on measured levels of environmental contaminants.

    PubMed

    Hoppin, Jane A; Ulmer, Ross; Calafat, Antonia M; Barr, Dana B; Baker, Susan V; Meltzer, Helle M; Rønningen, Kjersti S

    2006-01-01

    Collection of urine samples in human studies involves choices regarding shipping, sample preservation, and storage that may ultimately influence future analysis. As more studies collect and archive urine samples to evaluate environmental exposures in the future, we were interested in assessing the impact of urine preservative, storage temperature, and time since collection on nonpersistent contaminants in urine samples. In spiked urine samples stored in three types of urine vacutainers (no preservative, boric acid, and chlorhexidine), we measured five groups of contaminants to assess the levels of these analytes at five time points (0, 24, 48, and 72 h, and 1 week) and at two temperatures (room temperature and 4 degrees C). The target chemicals were bisphenol A (BPA), metabolites of organophosphate (OP), carbamate, and pyrethroid insecticides, chlorinated phenols, and phthalate monoesters, and were measured using five different mass spectrometry-based methods. Three samples were analyzed at each time point, with the exception of BPA. Repeated measures analysis of variance was used to evaluate effects of storage time, temperature, and preservative. Stability was summarized with percent change in mean concentration from time 0. In general, most analytes were stable under all conditions with changes in mean concentration over time, temperature, and preservative being generally less than 20%, with the exception of the OP metabolites in the presence of boric acid. The effect of storage temperature was less important than time since collection. The precision of the laboratory measurements was high allowing us to observe small differences, which may not be important when categorizing individuals into broader exposure groups.

  18. Reassessing the educational environment among undergraduate students in a chiropractic training institution: A study over time.

    PubMed

    Palmgren, Per J; Sundberg, Tobias; Laksov, Klara Bolander

    2015-10-01

    The aim of the study was twofold: (1) to compare the perceived educational environment at 2 points in time and (2) to longitudinally examine potential changes in perceptions of the educational environment over time. The validated Dundee Ready Educational Environment Measure (DREEM), a 50-item, self-administered Likert-type inventory, was used in this prospective study. Employing convenience sampling, undergraduate chiropractic students were investigated at 2 points in time: 2009 (n = 124) and 2012 (n = 127). An analysis of 2 matching samples was performed on 27% (n = 34) of the respondents in 2009. A total of 251 students (79%) completed the inventory, 83% (n = 124) in 2009 and 75% (n = 127) in 2012. The overall DREEM scores in both years were excellent: 156 (78%) and 153 (77%), respectively. The students' perceptions of teachers differed significantly between the 2 cohort years, decreasing from 77% to 73%. Three items received deprived scores: limited support for stressed students, authoritarian teachers, and an overemphasis on factual learning; the latter significantly decreased in 2012. In the longitudinal sample these items also displayed scores below the expected mean. Students viewed the educational environment as excellent both in 2009 and 2012. The perceptions of teachers declined with time; however, this could be attributed to teachers' new roles. Certain aspects of the educational environment factored prominently during the comparative points in time, as well as longitudinally, and these ought to be further investigated and addressed to provide an enhanced educational environment.

  19. Point-of-care testing (POCT) in molecular diagnostics: Performance evaluation of GeneXpert HCV RNA test in diagnosing and monitoring of HCV infection.

    PubMed

    Gupta, Ekta; Agarwala, Pragya; Kumar, Guresh; Maiwall, Rakhi; Sarin, Shiv Kumar

    2017-03-01

    Molecular testing at the point-of-care may turn out to be a game changer for HCV diagnosis and treatment monitoring, through increased sensitivity, reduced turnaround time, and ease of performance. One such assay, GeneXpert®, has recently been released. A comparative analysis of the performance of GeneXpert® and Abbott HCV-RNA was done. 174 HCV-infected patients were recruited; one-time plasma samples from 154 patients and repeated samples from 20 patients, obtained at specific treatment time points (0, 4, 12 and 24 weeks), were serially re-tested on Xpert®. Genotype 3 was the commonest, seen in 80 (66%) of the cases, genotype 1 in 34 (28.3%), genotype 4 in 4 (3.3%), and genotypes 2 and 5 in 1 (0.8%) each. Median HCV RNA load was 4.69 log10 (range: 0-6.98 log10) IU/ml. Overall a very good correlation was seen between the two assays (R² = 0.985); concordance of the results between the assays was seen in 138 samples (89.6%). High and low positive standards were tested ten times on Xpert® to evaluate precision, and the coefficient of variation was 0.01 for the high positive control and 0.07 for the low positive control. Monitoring of patients on two different treatment regimens, pegylated interferon plus ribavirin and sofosbuvir plus ribavirin, was done on both systems at baseline, 4, 12 and 24 weeks. Perfect correlation between the assays over the course of therapy at the different treatment time points was seen in genotypes 3 and 1. The study demonstrates excellent performance of the Xpert® HCV assay in viral load assessment and in treatment monitoring consistency. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Permissive Attitude Towards Drug Use, Life Satisfaction, and Continuous Drug Use Among Psychoactive Drug Users in Hong Kong.

    PubMed

    Cheung, N Wt; Cheung, Y W; Chen, X

    2016-06-01

    To examine the effects of a permissive attitude towards regular and occasional drug use, life satisfaction, self-esteem, depression, and other psychosocial variables on the drug use of psychoactive drug users. Psychosocial factors that might affect a permissive attitude towards regular/occasional drug use and life satisfaction were further explored. We analysed data from a sample of psychoactive drug users in a longitudinal survey of psychoactive drug abusers in Hong Kong who were interviewed at 6 time points at 6-month intervals between January 2009 and December 2011. Data from the second to the sixth time points were stacked into an individual time point structure. Random-effects probit regression analysis was performed to estimate the relative contribution of the independent variables to the binary dependent variable of drug use in the last 30 days. A permissive attitude towards drug use, life satisfaction, and depression at the concurrent time point, and self-esteem at the previous time point, had direct effects on drug use in the last 30 days. Interestingly, permissiveness to occasional drug use was a stronger predictor of drug use than permissiveness to regular drug use. These 2 permissive attitude variables were affected by the belief that doing extreme things shows the vitality of young people (at the concurrent time point), life satisfaction (at the concurrent time point), and self-esteem (at the concurrent and previous time points). Life satisfaction was affected by a sense of uncertainty about the future (at the concurrent time point), self-esteem (at the concurrent time point), depression (at both the concurrent and previous time points), and being stricken by stressful events (at the previous time point). A number of psychosocial factors could affect the continuation or discontinuation of drug use, as well as the permissive attitude towards regular and occasional drug use, and life satisfaction. Implications of the findings for prevention and intervention work targeted at psychoactive drug users are discussed.

  1. The impact of time and field conditions on brown bear (Ursus arctos) faecal DNA amplification

    USGS Publications Warehouse

    Murphy, M.A.; Kendall, K.C.; Robinson, A.; Waits, L.P.

    2007-01-01

    To establish longevity of faecal DNA samples under varying summer field conditions, we collected 53 faeces from captive brown bears (Ursus arctos) on a restricted vegetation diet. Each faeces was divided, and one half was placed on a warm, dry field site while the other half was placed on a cool, wet field site on Moscow Mountain, Idaho, USA. Temperature, relative humidity, and dew point data were collected on each site, and faeces were sampled for DNA extraction at <1, 3, 6, 14, 30, 45, and 60 days. Faecal DNA sample viability was assessed by attempting PCR amplification of a mitochondrial DNA (mtDNA) locus (~150 bp) and a nuclear DNA (nDNA) microsatellite locus (180-200 bp). Time in the field, temperature, and dew point impacted mtDNA and nDNA amplification success with the greatest drop in success rates occurring between 1 and 3 days. In addition, genotyping errors significantly increased over time at both field sites. Based on these results, we recommend collecting samples at frequent transect intervals and focusing sampling efforts during drier portions of the year when possible. © 2007 Springer Science+Business Media, Inc.

  2. Biomechanical and Histologic Evaluation of LifeMesh™: A Novel Self-Fixating Mesh Adhesive.

    PubMed

    Shahan, Charles P; Stoikes, Nathaniel N; Roan, Esra; Tatum, James; Webb, David L; Voeller, Guy R

    2018-04-01

    Mesh fixation with the use of adhesives results in an immediate and total surface area adhesion of the mesh, removing the need for penetrating fixation points. The purpose of this study was to evaluate LifeMesh™, a prototype mesh adhesive technology which coats polypropylene mesh. The strength of the interface between mesh and tissue, inflammatory responses, and histology were measured at varying time points in a swine model, and these results were compared with sutures. Twenty Mongrel swine underwent implantation of LifeMesh™ and one piece of bare polypropylene mesh secured with suture (control). One additional piece of either LifeMesh™ or control was used for histopathologic evaluation. The implants were retrieved at 3, 7, and 14 days. Only 3- and 7-day specimens underwent lap shear testing. On Day 3, LifeMesh™ samples showed considerably less contraction than sutured samples. The interfacial strength of Day 3 LifeMesh™ samples was similar to that of sutured samples. At seven days, LifeMesh™ samples continued to show significantly less contraction than sutured samples. The strength of fixation at seven days was greater in the control samples. The histologic findings were similar in LifeMesh™ and control samples. LifeMesh™ showed significantly less contraction than sutured samples at all measured time points. Although fixation strength was similar at three days, the interfacial strength of LifeMesh™ remained unchanged, whereas sutured controls increased by day 7. With histologic equivalence, considerably less contraction, and similar early fixation strength, LifeMesh™ is a viable mesh fixation technology.

  3. Data that describe at-a-point temporal variations in the transport rate and particle-size distribution of bedload; East Fork River, Wyoming, and Fall River, Colorado

    USGS Publications Warehouse

    Gomez, Basil; Emmett, W.W.

    1990-01-01

    Data from the East Fork River, Wyoming, and the Fall River, Colorado, that document at-a-point temporal variations in the transport rate and particle-size distribution of bedload, associated with the downstream migration of dunes, are presented. Bedload sampling was undertaken, using a 76.2 x 76.2 mm Helley-Smith sampler, on three separate occasions at each site in June 1988. In each instance, the sampling time was 30 seconds and the sampling intervals 5 minutes. The sampling period ranged from 4.92 to 8.25 hours. Water stage did not vary appreciably during any of the sampling periods. (USGS)

  4. Concrete thawing studied by single-point ramped imaging.

    PubMed

    Prado, P J; Balcom, B J; Beyea, S D; Armstrong, R L; Bremner, T W

    1997-12-01

    A series of two-dimensional images of proton distribution in a hardened concrete sample has been obtained during the thawing process (from -50 degrees C up to 11 degrees C). The SPRITE sequence is optimal for this study given the characteristic short relaxation times of water in this porous medium (T2* < 200 μs and T1 < 3.6 ms). The relaxation parameters of the sample were determined in order to optimize the time efficiency of the sequence, permitting a 4-scan 64 x 64 acquisition in under 3 min. The image acquisition is fast on the time scale of the temperature evolution of the specimen. The frozen water distribution is quantified through a position-based study of the image contrast. A multiple-point acquisition method is presented and the signal sensitivity improvement is discussed.

  5. 16S Based Microbiome Analysis from Healthy Subjects’ Skin Swabs Stored for Different Storage Periods Reveal Phylum to Genus Level Changes

    PubMed Central

    Klymiuk, Ingeborg; Bambach, Isabella; Patra, Vijaykumar; Trajanoski, Slave; Wolf, Peter

    2016-01-01

    Microbiome research and improvements in high throughput sequencing technologies revolutionize our current scientific viewpoint. The human associated microbiome is a prominent focus of clinical research. Large cohort studies are often required to investigate the human microbiome composition and its changes in a multitude of human diseases. Reproducible analyses of large cohort samples require standardized protocols in study design, sampling, storage, processing, and data analysis. In particular, the effect of sample storage on actual results is critical for reproducibility. So far, the effect of storage conditions on the results of microbial analysis has been examined for only a few human biological materials (e.g., stool samples). There is a lack of data and information on appropriate storage conditions on other human derived samples, such as skin. Here, we analyzed skin swab samples collected from three different body locations (forearm, V of the chest and back) of eight healthy volunteers. The skin swabs were soaked in sterile buffer and total DNA was isolated after freezing at -80°C for 24 h, 90 or 365 days. Hypervariable regions V1-2 were amplified from total DNA and libraries were sequenced on an Illumina MiSeq desktop sequencer in paired end mode. Data were analyzed using Qiime 1.9.1. Summarizing all body locations per time point, we found no significant differences in alpha diversity and multivariate community analysis among the three time points. Considering body locations separately significant differences in the richness of forearm samples were found between d0 vs. d90 and d90 vs. d365. Significant differences in the relative abundance of major skin genera (Propionibacterium, Streptococcus, Bacteroides, Corynebacterium, and Staphylococcus) were detected in our samples in Bacteroides only among all time points in forearm samples and between d0 vs. d90 and d90 vs. d365 in V of the chest and back samples. Accordingly, significant differences were detected in the ratios of the main phyla Actinobacteria, Firmicutes, and Bacteroidetes: Actinobacteria vs. Bacteroidetes at d0 vs. d90 (p-value = 0.0234), at d0 vs. d365 (p-value = 0.0234) and d90 vs. d365 (p-value = 0.0234) in forearm samples and at d90 vs. d365 in V of the chest (p-value = 0.0234) and back samples (p-value = 0.0234). The ratios of Firmicutes vs. Bacteroidetes showed no significant changes in any of the body locations as well as the ratios of Actinobacteria vs. Firmicutes at any time point. Studies with larger sample sizes are required to verify our results and determine long term storage effects with regard to specific biological questions. PMID:28066342

  6. End points for adjuvant therapy trials: has the time come to accept disease-free survival as a surrogate end point for overall survival?

    PubMed

    Gill, Sharlene; Sargent, Daniel

    2006-06-01

    The intent of adjuvant therapy is to eradicate micro-metastatic residual disease following curative resection with the goal of preventing or delaying recurrence. The time-honored standard for demonstrating efficacy of new adjuvant therapies is an improvement in overall survival (OS). This typically requires phase III trials of large sample size with lengthy follow-up. With the intent of reducing the cost and time of completing such trials, there is considerable interest in developing alternative or surrogate end points. A surrogate end point may be employed as a substitute to directly assess the effects of an intervention on an already accepted clinical end point such as mortality. When used judiciously, surrogate end points can accelerate the evaluation of new therapies, resulting in the more timely dissemination of effective therapies to patients. The current review provides a perspective on the suitability and validity of disease-free survival (DFS) as an alternative end point for OS. Criteria for establishing surrogacy and the advantages and limitations associated with the use of DFS as a primary end point in adjuvant clinical trials and as the basis for approval of new adjuvant therapies are discussed.

  7. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulties of selecting good initial values and becoming stuck in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
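
    The paper's own likelihood formulation and quantum-inspired optimizer are not reproduced here; as a minimal illustration of variance-minimising time-point selection, the sketch below brute-forces the most informative three-point design for a toy one-parameter exponential decay, where maximising the Fisher information is equivalent to a D-optimal design. All model and grid values are invented.

```python
import numpy as np
from itertools import combinations

# Toy model y(t) = exp(-k t) with a single parameter k. For one parameter,
# a D-optimal design simply maximises the Fisher information, which under
# i.i.d. Gaussian noise is proportional to the sum of squared sensitivities.
k = 0.5
candidates = np.linspace(0.25, 10.0, 40)   # allowed sampling times

def information(ts):
    s = -np.asarray(ts) * np.exp(-k * np.asarray(ts))  # dy/dk at each time
    return float(np.sum(s ** 2))

best = max(combinations(candidates, 3), key=information)
print("D-optimal 3-point design:", [round(t, 2) for t in best])
# The sensitivity |t exp(-k t)| peaks at t = 1/k (here 2.0), so the chosen
# points cluster around that single most informative time.
```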

  8. Effect of the time interval from harvesting to the pre-drying step on natural fumonisin contamination in freshly harvested corn from the State of Parana, Brazil.

    PubMed

    Da Silva, M; Garcia, G T; Vizoni, E; Kawamura, O; Hirooka, E Y; Ono, E Y S

    2008-05-01

    Natural mycoflora and fumonisins were analysed in 490 samples of freshly harvested corn (Zea mays L.) (2003 and 2004 crops) collected at three points in the producing chain from the Northern region of Parana State, Brazil, and correlated to the time interval between the harvesting and the pre-drying step. The two crops showed a similar profile concerning the fungal frequency, and Fusarium sp. was the prevalent genus (100%) for the sampling sites from both crops. Fumonisins were detected in all samples from the three points of the producing chain (2003 and 2004 crops). The levels ranged from 0.11 to 15.32 microg g(-1) in field samples, from 0.16 to 15.90 microg g(-1) in reception samples, and from 0.02 to 18.78 microg g(-1) in pre-drying samples (2003 crop). Samples from the 2004 crop showed lower contamination and fumonisin levels ranged from 0.07 to 4.78 microg g(-1) in field samples, from 0.03 to 4.09 microg g(-1) in reception samples, and from 0.11 to 11.21 microg g(-1) in pre-drying samples. The mean fumonisin level increased gradually from ≤5.0 to 19.0 microg g(-1) as the time interval between the harvesting and the pre-drying step increased from 3.22 to 8.89 h (2003 crop). The same profile was observed for samples from the 2004 crop. Fumonisin levels and the time interval (rho = 0.96) showed positive correlation (p ≤ 0.05), indicating that delay in the drying process can increase fumonisin levels.

  9. Galaxy evolution and large-scale structure in the far-infrared. I - IRAS pointed observations

    NASA Astrophysics Data System (ADS)

    Lonsdale, Carol J.; Hacking, Perry B.

    1989-04-01

    Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape with those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution.

  10. Galaxy evolution and large-scale structure in the far-infrared. I. IRAS pointed observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lonsdale, C.J.; Hacking, P.B.

    1989-04-01

    Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape with those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution. 81 refs.

  11. Galaxy evolution and large-scale structure in the far-infrared. I - IRAS pointed observations

    NASA Technical Reports Server (NTRS)

    Lonsdale, Carol J.; Hacking, Perry B.

    1989-01-01

    Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape with those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution.

  12. Ex vivo 12 h bactericidal activity of oral co-amoxiclav (1.125 g) against beta-lactamase-producing Haemophilus influenzae.

    PubMed

    Bronner, S; Pompei, D; Elkhaïli, H; Dhoyen, N; Monteil, H; Jehl, F

    2001-10-01

    The aim of the study was to evaluate the in vitro/ex vivo bactericidal activity of a new co-amoxiclav single-dose sachet formulation (1 g amoxicillin + 0.125 g clavulanic acid) against a beta-lactamase-producing strain of Haemophilus influenzae. The evaluation covered the 12 h period after antibiotic administration. Serum specimens from the 12 healthy volunteers included in the pharmacokinetic study were pooled by time point and in equal volumes. Eight of 12 pharmacokinetic sampling time points were included in the study. At time points 0.5, 0.75, 1, 1.5, 2.5, 5, 8 and 12 h post-dosing, the kinetics of bactericidal activity were determined for each of the serial dilutions. Each specimen was serially diluted from 1:2 to 1:256. The index of surviving bacteria (ISB) was subsequently determined for each pharmacokinetic time point. For all the serum samples, bactericidal activity was fast (3-6 h), marked (3-6 log(10) reduction in the initial inoculum) and sustained over the 12 h dosing interval. The results obtained also confirmed that the potency of the amoxicillin plus clavulanic acid combination was time-dependent against the species under study and that the time interval over which the concentrations were greater than the MIC (t > MIC) was 100% for the strain under study. The data thus generated constitute an interesting prerequisite with a view to using co-amoxiclav 1.125 g in a bd oral regimen.

  13. Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.

    PubMed

    Ette, E I; Howie, C A; Kelman, A W; Whiting, B

    1995-05-01

    A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide a basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on the three- and four-time-point designs was evaluated in terms of percent prediction error, design number, individual and joint confidence-interval coverage for the parameter estimates, and correlation analysis. The data sets contained random terms for both inter-animal and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while inter-animal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time points for the three- and four-time-point designs, respectively, was not critical to the efficiency of overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
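
    The one-compartment intravenous bolus model named in the abstract is standard: C(t) = (dose/V) exp(-(CL/V) t). The sketch below simulates it with invented population values, inter-animal and residual variability, and a three-time-point design, then recovers CL and V by a per-animal log-linear fit; this per-animal fit is a simplification for illustration, not the population analysis used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented population values: dose (mg), clearance (L/h), volume (L),
# lognormal inter-animal variability, and multiplicative residual error.
dose, CL_pop, V_pop = 10.0, 1.0, 5.0
omega, sigma = 0.2, 0.1
times = np.array([0.5, 2.0, 8.0])          # a three-time-point design
n_animals = 12

est_CL, est_V = [], []
for _ in range(n_animals):
    CL = CL_pop * np.exp(rng.normal(0, omega))
    V = V_pop * np.exp(rng.normal(0, omega))
    conc = (dose / V) * np.exp(-(CL / V) * times)
    conc *= np.exp(rng.normal(0, sigma, conc.size))   # residual error
    # Log-linear fit: ln C = ln(dose/V) - (CL/V) t
    slope, intercept = np.polyfit(times, np.log(conc), 1)
    V_hat = dose / np.exp(intercept)
    est_V.append(V_hat)
    est_CL.append(-slope * V_hat)

print(f"CL: true {CL_pop}, mean estimate {np.mean(est_CL):.2f}")
print(f"V : true {V_pop}, mean estimate {np.mean(est_V):.2f}")
```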

  14. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    PubMed Central

    Albers, D. J.; Hripcsak, George

    2012-01-01

    A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database. PMID:22536009
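
    A minimal sketch of the large-separation idea, under simplifying assumptions: a plug-in histogram estimator of mutual information and a toy AR(1) series standing in for the clinical data. The bias baseline is taken as the time-delayed mutual information at a lag far beyond the correlation time, standing in for "infinite" separation.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in (histogram) estimate of I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def tdmi(series, lag, bins=16):
    """Time-delayed mutual information between the series and its lagged copy."""
    return mutual_information(series[:-lag], series[lag:], bins)

rng = np.random.default_rng(0)
x = np.zeros(5000)                       # toy AR(1) series
for i in range(1, x.size):
    x[i] = 0.9 * x[i - 1] + rng.normal()

bias = tdmi(x, lag=2000)                 # lag far beyond the correlation time
for lag in (1, 10, 100):
    print(f"lag {lag:4d}: TDMI {tdmi(x, lag):.3f}  (bias baseline {bias:.3f})")
```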

  15. Evaluation of microRNA stability in feces from healthy dogs.

    PubMed

    Cirera, Susanna; Willumsen, Line M; Johansen, Thea T; Nielsen, Lise N

    2018-03-01

    Gastrointestinal cancer accounts for approximately 8% of all canine malignancies. Early detection of cancer may have a tremendous impact on both treatment options and prognosis. MicroRNAs (miRNAs), a class of noncoding RNAs that can be found stably expressed in body fluids and feces, have been suggested as valuable human cancer biomarkers. The purpose of the study was to investigate the feasibility of detecting miRNAs in canine feces and to determine the miRNA stability in fecal samples stored at different temperatures for different durations. The levels of 4 Canis familiaris (cfa) miRNAs (cfa-miR-16, cfa-miR-20a, cfa-miR-21, and cfa-miR-92a) were investigated by quantitative real-time PCR (qPCR) in fecal samples from 10 healthy dogs. Fecal samples were collected at 3 different time points and samples from the first time point were stored at different temperatures and for different durations. A statistically significant difference was found in miRNA levels from samples stored at room temperature compared with samples stored at -20°C for cfa-miR-16 and cfa-miR-21. No significant difference was found in the level of the investigated miRNAs over time. Overall, miRNAs are present in dog feces at measurable levels. Some miRNAs seem to be subject to a higher degree of degradation in samples stored at room temperature for 24 hours compared with samples frozen after collection at -20°C. The investigated miRNAs were stably expressed over time. This study provides the basis for further research on miRNA expression profiles as biomarkers for gastrointestinal cancer in dogs. © 2018 American Society for Veterinary Clinical Pathology.

  16. A novel method for sampling the suspended sediment load in the tidal environment using bi-directional time-integrated mass-flux sediment (TIMS) samplers

    NASA Astrophysics Data System (ADS)

    Elliott, Emily A.; Monbureau, Elaine; Walters, Glenn W.; Elliott, Mark A.; McKee, Brent A.; Rodriguez, Antonio B.

    2017-12-01

    Identifying the source and abundance of sediment transported within tidal creeks is essential for studying the connectivity between coastal watersheds and estuaries. The fine-grained suspended sediment load (SSL) makes up a substantial portion of the total sediment load carried within an estuarine system and efficient sampling of the SSL is critical to our understanding of nutrient and contaminant transport, anthropogenic influence, and the effects of climate. Unfortunately, traditional methods of sampling the SSL, including instantaneous measurements and automatic samplers, can be labor intensive, expensive and often yield insufficient mass for comprehensive geochemical analysis. In estuaries this issue is even more pronounced due to bi-directional tidal flow. This study tests the efficacy of a time-integrated mass sediment sampler (TIMS) design, originally developed for uni-directional flow within the fluvial environment, modified in this work for implementation in the tidal environment under bi-directional flow conditions. Our new TIMS design utilizes an 'L' shaped outflow tube to prevent backflow, and when deployed in mirrored pairs, each sampler collects sediment uniquely in one direction of tidal flow. Laboratory flume experiments using dye and particle image velocimetry (PIV) were used to characterize the flow within the sampler, specifically, to quantify the settling velocities and identify stagnation points. Further laboratory tests of sediment indicate that bi-directional TIMS capture up to 96% of incoming SSL across a range of flow velocities (0.3-0.6 m s-1). The modified TIMS design was tested in the field at two distinct sampling locations within the tidal zone. Single-time-point suspended sediment samples were collected at high and low tide and compared to time-integrated suspended sediment samples collected by the bi-directional TIMS over the same four-day period. Particle-size composition from the bi-directional TIMS was representative of the array of single-time-point samples, but yielded greater mass, representative of flow and sediment-concentration conditions at the site throughout the deployment period. This work proves the efficacy of the modified bi-directional TIMS design, offering a novel tool for collection of suspended sediment in the tidally-dominated portion of the watershed.

  17. Predictors of Disordered Eating in Adolescence and Young Adulthood: A Population-Based, Longitudinal Study of Females and Males in Norway

    ERIC Educational Resources Information Center

    Abebe, Dawit Shawel; Torgersen, Leila; Lien, Lars; Hafstad, Gertrud S.; von Soest, Tilmann

    2014-01-01

    We investigated longitudinal predictors for disordered eating from early adolescence to young adulthood (12-34 years) across gender and different developmental phases among Norwegian young people. Survey data from a population-based sample were collected at four time points (T) over a 13-year time span. A population-based sample of 5,679 females…

  18. Nonparametric change point estimation for survival distributions with a partially constant hazard rate.

    PubMed

    Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang

    2018-04-05

    We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.
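
    The sketch below is a loose illustration of the stump-fit idea, not the authors' estimator: a one-jump piecewise-constant ("stump") regression is fitted to per-interval p-values, and the break minimising the squared error is taken as the change point. The p-value profile is invented, with small p-values while the hazard is still elevated and roughly uniform p-values once it has become constant.

```python
import numpy as np

def fit_stump(t, p):
    """Fit a one-jump piecewise-constant regression to p-values observed
    at interval midpoints t; return the break minimising the total SSE."""
    best_sse, best_break = np.inf, None
    for i in range(1, len(t)):                 # candidate break after t[i-1]
        left, right = p[:i], p[i:]
        sse = (np.sum((left - left.mean()) ** 2)
               + np.sum((right - right.mean()) ** 2))
        if sse < best_sse:
            best_sse, best_break = sse, t[i - 1]
    return best_break

rng = np.random.default_rng(3)
t = np.arange(1, 61)                            # days after admission
p = np.where(t < 20, rng.uniform(0, 0.05, t.size), rng.uniform(0, 1, t.size))
print("estimated change point: day", fit_stump(t, p))
```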

  19. Advances in paper-based sample pretreatment for point-of-care testing.

    PubMed

    Tang, Rui Hua; Yang, Hui; Choi, Jane Ru; Gong, Yan; Feng, Shang Sheng; Pingguan-Murphy, Belinda; Huang, Qing Sheng; Shi, Jun Ling; Mei, Qi Bing; Xu, Feng

    2017-06-01

    In recent years, paper-based point-of-care testing (POCT) has been widely used in medical diagnostics, food safety and environmental monitoring. However, a high-cost, time-consuming and equipment-dependent sample pretreatment technique is generally required for raw sample processing, which is impractical for low-resource and disease-endemic areas. Therefore, there is an escalating demand for a cost-effective, simple and portable pretreatment technique, to be coupled with the commonly used paper-based assay (e.g. lateral flow assay) in POCT. In this review, we focus on the importance of using paper as a platform for sample pretreatment. We first discuss the beneficial use of paper for sample pretreatment, including sample collection and storage, separation, extraction, and concentration. We highlight the working principle and fabrication of each sample pretreatment device, the existing challenges and the future perspectives for developing paper-based sample pretreatment techniques.

  20. Multiply Degenerate Exceptional Points and Quantum Phase Transitions

    NASA Astrophysics Data System (ADS)

    Borisov, Denis I.; Ružička, František; Znojil, Miloslav

    2015-12-01

    The realization of a genuine phase transition in quantum mechanics requires that at least one of the Kato's exceptional-point parameters becomes real. A new family of finite-dimensional and time-parametrized quantum-lattice models with such a property is proposed and studied. All of them exhibit, at a real exceptional-point time t = 0, the Jordan-block spectral degeneracy structure of some of their observables sampled by the Hamiltonian H(t) and site-position Q(t). The passes through the critical instant t = 0 are interpreted as schematic simulations of non-equivalent versions of the Big-Bang-like quantum catastrophes.

  1. Atmospheric plume progression as a function of time and distance from the release point for radioactive isotopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Bowyer, Ted W.; Cameron, Ian M.

    2015-10-01

    The International Monitoring System contains up to 80 stations around the world that have aerosol and xenon monitoring systems designed to detect releases of radioactive materials to the atmosphere from nuclear tests. A rule of thumb description of plume concentration and duration versus time and distance from the release point is useful when designing and deploying new sample collection systems. This paper uses plume development from atmospheric transport modeling to provide a power-law rule describing atmospheric dilution factors as a function of distance from the release point.
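
    The paper's fitted coefficients are not quoted here; the sketch below shows only the functional form such a power-law rule takes, with placeholder values for the prefactor and exponent.

```python
# Generic power-law dilution rule of the kind described above:
# DF(d) = a * d**(-b), with a and b fitted to transport-model output.
# The coefficients here are placeholders, not values from the paper.
def dilution_factor(distance_km, a=1.0e-6, b=1.5):
    return a * distance_km ** (-b)

for d in (100, 1000, 5000):
    print(f"{d:5d} km -> relative dilution {dilution_factor(d):.2e}")
```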

  2. Modeling abundance using hierarchical distance sampling

    USGS Publications Warehouse

    Royle, Andy; Kery, Marc

    2016-01-01

    In this chapter, we provide an introduction to classical distance sampling ideas for point and line transect data, and for continuous and binned distance data. We introduce the conditional and the full likelihood, and we discuss Bayesian analysis of these models in BUGS using the idea of data augmentation, which we discussed in Chapter 7. We then extend the basic ideas to the problem of hierarchical distance sampling (HDS), where we have multiple point or transect sample units in space (or possibly in time). The benefit of HDS in practice is that it allows us to directly model spatial variation in population size among these sample units. This is a preeminent concern of most field studies that use distance sampling methods, but it is not a problem that has received much attention in the literature. We show how to analyze HDS models in both the unmarked package and in the BUGS language for point and line transects, and for continuous and binned distance data. We provide a case study of HDS applied to a survey of the island scrub-jay on Santa Cruz Island, California.
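
    The chapter's analyses are written for the unmarked package and the BUGS language; as a language-neutral illustration of the core ingredient of distance sampling, the sketch below fits a half-normal detection function to simulated line-transect distances by maximum likelihood. All values are invented.

```python
import numpy as np
from math import erf, sqrt, pi
from scipy.optimize import minimize_scalar

# Simulate perpendicular distances and thin them with a half-normal
# detection function g(x) = exp(-x^2 / (2 sigma^2)); invented values.
rng = np.random.default_rng(7)
w, sigma_true = 100.0, 25.0                      # truncation distance (m), scale
x = rng.uniform(0, w, 2000)                      # true distances
keep = rng.uniform(size=x.size) < np.exp(-x**2 / (2 * sigma_true**2))
detected = x[keep]

def neg_log_lik(sigma):
    """Conditional likelihood of observed distances, f(x) = g(x) / int g."""
    g = np.exp(-detected**2 / (2 * sigma**2))
    norm = sigma * sqrt(pi / 2) * erf(w / (sigma * sqrt(2)))  # int of g on [0, w]
    return -float(np.sum(np.log(g / norm)))

fit = minimize_scalar(neg_log_lik, bounds=(1.0, w), method="bounded")
print(f"sigma-hat = {fit.x:.1f} m (true {sigma_true})")
```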

  3. Simplified Therapeutic Intervention Scoring System: the TISS-28 items--results from a multicenter study.

    PubMed

    Miranda, D R; de Rijk, A; Schaufeli, W

    1996-01-01

    To validate a simplified version of the Therapeutic Intervention Scoring System, the TISS-28, and to determine the association of TISS-28 with the time spent on scored and nonscored nursing activities. Prospective, multicenter study. Twenty-two adult medical, surgical, and general Dutch intensive care units (ICUs). A total of 903 patients consecutively admitted to the ICUs. TISS-28 was constructed from a random sample of 10,000 records of TISS-76 items. The respective weights were calculated using multivariable regression analysis through the origin; TISS-76 scores were used as predicted values. Cross validation was performed in another random sample of 10,000 records and the scores of TISS-76 were compared with those obtained with TISS-28 (r = .96, r2 = .93). Nursing activities in the ICU were inventoried and divided into six categories: a) activities in TISS-28; b) patient care activities not in TISS-28; c) indirect patient care (activities related to but not in direct contact with the patient, such as contact with family, maintaining supplies); d) organizational activities (e.g., meetings, trainee supervision, research); e) personal activities (for the nurse him/herself, such as taking a break, going to the bathroom); f) other. During a 1-month period, TISS-76 and TISS-28 scores were determined daily from the patient's records by independent raters. During a 1-wk period, all of the nurses on duty scored their activities using a method called "work sampling." The analysis of validation included 1,820 valid pairs of TISS-76 and TISS-28 records. The mean value of TISS-28 (28.8 +/- 11.1) was higher (p < .00) than that of TISS-76 (24.2 +/- 10.2). TISS-28 explained 86% of the variation in TISS-76 (r = .93, r2 = .86). "Work sampling" generated 10,079 registrations of nursing activities, of which 5,530 could be matched with TISS-28 records. Samples were taken from medical (19.3%), surgical (19.1%), and general (61.6%) ICUs. Of these samples, 51.1% originated from university hospitals, 35.8% from hospitals with > 500 beds, 7.1% from hospitals with 300 to 500 beds, and 5.8% from hospitals with < 300 beds. Samples were scored in the morning (43.0%), evening (32.9%), and night shifts (24.1%). This sample of work activities was divided into four groups, according to their matched TISS scores (0 to 20, 20 to 35, 35 to 60, and > 60 points). In the successive groups of TISS scores, there was a significant increase in the proportion of time spent on the activities scored with TISS-28. In the lower TISS score group (0 to 20 points), there was a significantly larger proportion of time allocated to patient care activities not in TISS-28. There was no significant difference in the proportion of the time spent when associating indirect patient care and organizational activities with the level of TISS score. There was a significant decrease in the proportion of time spent on personal activities in the successive groups of TISS scores. The mean time spent per shift with personal activities varied between 1 hr and 40 mins (group 0 to 20 points TISS), and 1 hr and 16 mins (group > 60 points TISS). Significantly more time was used for patient care activities during the evening shift than during the day or the night shift. Conversely, nurses spent significantly less time on activities regarding their personal care during the evening shift. The time consumed for the activities of indirect patient care did not differ significantly among the three shifts.
A typical nurse was capable of delivering work equal to 46.35 TISS-28 points per shift (one TISS-28 point equals 10.6 mins of each nurse's shift). The simplified TISS-28 explains 86% of the variation in TISS-76 and can therefore replace the original version in the clinical practice in the ICU. Per shift, a typical nurse is capable of delivering nursing activities equal to 46 TISS-28 points.
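
    A quick check of the workload arithmetic reported above, using only the figures quoted in the abstract:

```python
# Workload equivalence from the abstract: one TISS-28 point corresponds to
# 10.6 min of a nurse's shift, and a typical nurse delivers 46.35 points.
points_per_nurse = 46.35
minutes_per_point = 10.6
shift_minutes = points_per_nurse * minutes_per_point
print(f"{shift_minutes:.0f} min = {shift_minutes / 60:.1f} h of scored work per shift")
# ~491 min, i.e. roughly a full 8 h shift of scored nursing activity.
```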

  4. Reassessing the educational environment among undergraduate students in a chiropractic training institution: A study over time

    PubMed Central

    Palmgren, Per J.; Sundberg, Tobias; Laksov, Klara Bolander

    2015-01-01

    Objective The aim of the study was twofold: (1) to compare the perceived educational environment at 2 points in time and (2) to longitudinally examine potential changes in perceptions of the educational environment over time. Methods The validated Dundee Ready Educational Environment Measure (DREEM), a 50-item, self-administered Likert-type inventory, was used in this prospective study. Employing convenience sampling, undergraduate chiropractic students were investigated at 2 points in time: 2009 (n = 124) and 2012 (n = 127). An analysis of 2 matching samples was performed on 27% (n = 34) of the respondents in 2009. Results A total of 251 students (79%) completed the inventory, 83% (n = 124) in 2009 and 75% (n = 127) in 2012. The overall DREEM scores in both years were excellent: 156 (78%) and 153 (77%), respectively. The students' perceptions of teachers differed significantly between the 2 cohort years, decreasing from 77% to 73%. Three items received deprived scores: limited support for stressed students, authoritarian teachers, and an overemphasis on factual learning; the latter significantly decreased in 2012. In the longitudinal sample these items also displayed scores below the expected mean. Conclusion Students viewed the educational environment as excellent both in 2009 and 2012. The perceptions of teachers declined with time; however, this could be attributed to teachers' new roles. Certain aspects of the educational environment factored prominently during the comparative points in time, as well as longitudinally, and these ought to be further investigated and addressed to provide an enhanced educational environment. PMID:26023892

  5. On the improvement of blood sample collection at clinical laboratories

    PubMed Central

    2014-01-01

    Background Blood samples are usually collected daily from different collection points, such as hospitals and health centers, and transported to a core laboratory for testing. This paper presents a project to improve the collection routes of two of the largest clinical laboratories in Spain. These routes must be designed in a cost-efficient manner while satisfying two important constraints: (i) two-hour time windows between collection and delivery, and (ii) vehicle capacity. Methods A heuristic method based on a genetic algorithm has been designed to solve the problem of blood sample collection. The user enters the following information for each collection point: postal address, average collecting time, and average demand (in thermal containers). After implementation of the algorithm in C, it runs and, in a few seconds, obtains optimal (or near-optimal) collection routes that specify the collection sequence for each vehicle. Different scenarios using various types of vehicles have been considered. Unless new collection points are added or problem parameters are changed substantially, routes need to be designed only once. Results The two laboratories in this study previously planned routes manually for 43 and 74 collection points, respectively. These routes were covered by an external carrier company. With the implementation of this algorithm, the number of routes could be reduced from ten to seven in one laboratory and from twelve to nine in the other, which represents significant annual savings in transportation costs. Conclusions The algorithm presented can be easily implemented in other laboratories that face this type of problem, and it is particularly interesting and useful as the number of collection points increases. The method designs blood collection routes with reduced costs that meet the time and capacity constraints of the problem. PMID:24406140
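
    The published method is a genetic algorithm; the sketch below uses a much simpler greedy construction only to make the two constraints concrete: vehicle capacity and the two-hour window between a sample's collection and its delivery to the lab. Coordinates, demands, service times, and speed are all invented.

```python
import math

# Greedy route construction honouring capacity and the two-hour window.
LAB = (0.0, 0.0)
POINTS = {  # name: (x_km, y_km, containers, minutes_on_site) -- invented
    "A": (4, 3, 2, 10), "B": (8, 1, 1, 8), "C": (2, 9, 3, 12), "D": (7, 7, 2, 10),
}
CAPACITY, SPEED_KMH, WINDOW_MIN = 6, 40.0, 120.0

def travel_min(p, q):
    return 60.0 * math.dist(p, q) / SPEED_KMH

def build_route(remaining):
    route, load, clock, pos, first_pickup = [], 0, 0.0, LAB, None
    while remaining:
        name = min(remaining, key=lambda n: travel_min(pos, POINTS[n][:2]))
        x, y, demand, service = POINTS[name]
        pickup = clock + travel_min(pos, (x, y))           # collection time
        back_at_lab = pickup + service + travel_min((x, y), LAB)
        earliest = first_pickup if first_pickup is not None else pickup
        # The earliest-collected sample is the binding one for the window.
        if load + demand > CAPACITY or back_at_lab - earliest > WINDOW_MIN:
            break
        if first_pickup is None:
            first_pickup = pickup
        route.append(name)
        remaining.remove(name)
        load, clock, pos = load + demand, pickup + service, (x, y)
    return route

remaining = set(POINTS)
while remaining:
    route = build_route(remaining)
    if not route:            # no individually feasible point left
        break
    print("route:", route)
```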

  6. Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.

    PubMed

    Zhao, Dongfang; Yang, Li

    2009-01-01

    Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high dimensional space. These methods often share the first step which defines neighbor points of every data point by building a connected neighborhood graph so that all data points can be embedded into a single coordinate system. These methods are required to work incrementally for dimensionality reduction in many applications. Because the input data stream may be under-sampled or skewed from time to time, building a connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating k-edge-connected and k-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pair shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low dimensional configurations of high dimensional data under various data distributions.

  7. Cost-Benefit Analysis of Computer Resources for Machine Learning

    USGS Publications Warehouse

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
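
    A minimal sketch of the error-driven stratified sampling idea, under stand-in assumptions: a polynomial fit plays the role of the PNN, the data are synthetic, and each round adds calibration points in the stratum where the current fit is worst.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 'pattern' to learn and four spatial strata along x.
x = np.linspace(0.0, 1.0, 2000)
y = np.sin(8 * x) + 0.05 * rng.normal(size=x.size)
strata = np.digitize(x, [0.25, 0.5, 0.75])

idx = rng.choice(x.size, 40, replace=False)      # initial uniform sample
for step in range(5):
    coeffs = np.polyfit(x[idx], y[idx], 7)       # stand-in for the PNN
    sq_err = (np.polyval(coeffs, x) - y) ** 2
    worst = max(range(4), key=lambda s: sq_err[strata == s].mean())
    extra = rng.choice(np.where(strata == worst)[0], 10, replace=False)
    idx = np.union1d(idx, extra)                 # densify the worst stratum
    print(f"step {step}: worst stratum {worst}, mean sq. error {sq_err.mean():.4f}")
```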

  8. Off-Time Pubertal Timing Predicts Physiological Reactivity to Postpuberty Interpersonal Stress

    ERIC Educational Resources Information Center

    Smith, Anne Emilie; Powers, Sally I.

    2009-01-01

    We investigated associations between retrospectively assessed timing of pubertal development, interpersonal interactions, and hypothalamic-pituitary-adrenal axis reactivity to an interpersonal stress task in 110 young adult women. Participants provided salivary cortisol samples at points prior and subsequent to a video-taped conflict discussion…

  9. Effects of the H-3 Highway Stormwater Runoff on the Water Quality of Halawa Stream, Oahu, Hawaii, November 1998 to August 2004

    USGS Publications Warehouse

    Wolff, Reuben H.; Wong, Michael F.

    2008-01-01

    Since November 1998, water-quality data have been collected from the H-3 Highway Storm Drain C, which collects runoff from a 4-mi-long viaduct, and from Halawa Stream on Oahu, Hawaii. From January 2001 to August 2004, data were collected from the storm drain and four stream sites in the Halawa Stream drainage basin as part of the State of Hawaii Department of Transportation Storm Water Monitoring Program. Data from the stormwater monitoring program have been published in annual reports. This report uses these water-quality data to explore how the highway storm-drain runoff affects Halawa Stream and the factors that might be controlling the water quality in the drainage basin. In general, concentrations of nutrients, total dissolved solids, and total suspended solids were lower in highway runoff from Storm Drain C than at stream sites upstream and downstream of Storm Drain C. The opposite trend was observed for most trace metals, which generally occurred in higher concentrations in the highway runoff from Storm Drain C than in the samples collected from Halawa Stream. The absolute contribution from Storm Drain C highway runoff, in terms of total storm loads, was much smaller than at stations upstream and downstream, whereas the constituent yields (the relative contribution per unit drainage basin area) at Storm Drain C were comparable to or higher than storm yields at stations upstream and downstream. Most constituent concentrations and loads in stormwater runoff increased in a downstream direction. The timing of the storm sampling is an important factor controlling constituent concentrations observed in stormwater runoff samples. Automated point samplers were used to collect grab samples during the period of increasing discharge of the storm throughout the stormflow peak and during the period of decreasing discharge of the storm, whereas manually collected grab samples were generally collected during the later stages near the end of the storm. Grab samples were analyzed to determine concentrations and loads at a particular point in time. Flow-weighted time composite samples from the automated point samplers were analyzed to determine mean constituent concentrations or loads during a storm. Chemical analysis of individual grab samples from the automated point sampler at Storm Drain C demonstrated the "first flush" phenomenon (higher constituent concentrations at the beginning of runoff events) for the trace metals cadmium, lead, zinc, and copper, whose concentrations were initially high during the period of increasing discharge and gradually decreased over the duration of the storm. Water-quality data from Storm Drain C and four stream sites were compared to the State of Hawaii Department of Health (HDOH) water-quality standards to determine the effects of highway storm runoff on the water quality of Halawa Stream. The geometric-mean standards and the 10- and 2-percent-of-the-time concentration standards for total nitrogen, nitrite plus nitrate, total phosphorus, total suspended solids, and turbidity were exceeded in many of the comparisons. However, these standards were not designed for stormwater sampling, in which constituent concentrations would be expected to increase for short periods of time. With the aim of enhancing the usefulness of the water-quality data, several modifications to the stormwater monitoring program are suggested.
These suggestions include (1) periodic analysis of discrete samples from the automated point samplers over the course of a storm to get a clearer profile of the storm, from first flush to the end of the receding discharge; (2) adding an analysis of the dissolved fractions of metals to the sampling plan; (3) installation of an automatic sampler at Bridge 8 to enable sampling earlier in the storms; (4) a one-time sampling and analysis of soils upstream of Bridge 8 for base-line contaminant concentrations; (5) collection of samples from Halawa Stream during low-flow conditions

  10. Standard Samples and Reference Standards Issued by the National Bureau of Standards

    DTIC Science & Technology

    1954-08-31

    [Fragmented OCR excerpt; recoverable content only] The report catalogs standard samples and reference standards issued by the National Bureau of Standards, including standards used to improve the precision and accuracy of control testing in the rubber industry (melting point, density, index of refraction, heat of combustion, color, and gloss), pH standards, and melting-point standards (e.g., sample 44d Aluminum, 659.70 C). It also describes data for calculating the best frequencies for communication between any two points in the world at any time during a given month.

  11. Assessment of real-time PCR cycle threshold values in Microsporum canis culture-positive and culture-negative cats in an animal shelter: a field study.

    PubMed

    Jacobson, Linda S; McIntyre, Lauren; Mykusz, Jenny

    2018-02-01

    Objectives Real-time PCR provides quantitative information, recorded as the cycle threshold (Ct) value, about the number of organisms detected in a diagnostic sample. The Ct value correlates with the number of copies of the target organism in an inversely proportional and exponential relationship. The aim of the study was to determine whether Ct values could be used to distinguish between culture-positive and culture-negative samples. Methods This was a retrospective analysis of Ct values from dermatophyte PCR results in cats with suspicious skin lesions or suspected exposure to dermatophytosis. Results One hundred and thirty-two samples were included. Using culture as the gold standard, 28 were true positives, 12 were false positives and 92 were true negatives. The area under the curve for the pretreatment time point was 96.8% (95% confidence interval [CI] 94.2-99.5) compared with 74.3% (95% CI 52.6-96.0) for pooled data during treatment. Before treatment, a Ct cut-off of <35.7 (approximate DNA count 300) provided a sensitivity of 92.3% and specificity of 95.2%. There was no reliable cut-off Ct value between culture-positive and culture-negative samples during treatment. Ct values prior to treatment differed significantly between the true-positive and false-positive groups (P = 0.0056). There was a significant difference between the pretreatment and first and second negative culture time points (P = 0.0002 and P < 0.0001, respectively). However, there was substantial overlap between Ct values for true positives and true negatives, and for pre- and intra-treatment time points. Conclusions and relevance Ct values had limited usefulness for distinguishing between culture-positive and culture-negative cases when field study samples were analyzed. In addition, Ct values were less reliable than fungal culture for determining mycological cure.
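
    The inverse exponential relationship between Ct and copy number can be made concrete using the abstract's own anchor point (Ct 35.7, approximate DNA count 300). Treating that pair as a calibration reference and assuming perfect doubling per cycle is our simplification, not part of the study.

```python
# Under ideal amplification, each PCR cycle doubles the template, so
# copies = copies_ref * 2 ** (Ct_ref - Ct). The reference pair below is
# the abstract's pretreatment cut-off, used here only as an anchor.
def approx_copies(ct, ct_ref=35.7, copies_ref=300.0):
    return copies_ref * 2 ** (ct_ref - ct)

for ct in (30.0, 35.7, 38.0):
    print(f"Ct {ct:4.1f} -> ~{approx_copies(ct):,.0f} copies")
```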

  12. Simulating the complex output of rainfall and hydrological processes using the information contained in large data sets: the Direct Sampling approach.

    NASA Astrophysics Data System (ADS)

    Oriani, Fabio

    2017-04-01

    The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach allows preserving complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model. Softw., vol. 86, pp. 264-276, http://dx.doi.org/10.1016/j.envsoft.2016.10.002
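
    A minimal sketch of the core Direct Sampling idea for a univariate time series, using a simple normalized L2 pattern distance. This is a simplification for illustration only; the full algorithm of Mariethoz et al. (2010) also handles spatial grids, multivariate data, and conditioning:

```python
import numpy as np

def direct_sampling_ts(train, n_sim, k=8, threshold=0.05, max_scan=0.5, seed=0):
    """Simulate a series by resampling a training series wherever a
    sufficiently similar k-point neighborhood is found (simplified DS)."""
    rng = np.random.default_rng(seed)
    train = np.asarray(train, dtype=float)
    scale = train.std() + 1e-12                # normalizes the pattern distance
    sim = list(train[:k])                      # seed with the first k values
    starts = np.arange(len(train) - k)         # candidate pattern positions
    n_scan = max(1, int(max_scan * len(starts)))
    for _ in range(n_sim - k):
        pattern = np.array(sim[-k:])
        best_pos, best_d = starts[0], np.inf
        for pos in rng.permutation(starts)[:n_scan]:   # random partial scan
            d = np.linalg.norm(train[pos:pos + k] - pattern) / (np.sqrt(k) * scale)
            if d < best_d:
                best_pos, best_d = pos, d
            if d <= threshold:                 # accept first close-enough match
                break
        sim.append(train[best_pos + k])        # copy the value that follows it
    return np.array(sim)

# toy usage: resimulate a skewed, intermittent "rainfall-like" signal
t = np.arange(2000)
train = np.maximum(0.0, np.sin(t / 20) + 0.5 * np.random.default_rng(1).standard_normal(t.size))
sim = direct_sampling_ts(train, n_sim=500)
```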

  13. Combined VSWIR/TIR Products Overview: Issues and Examples

    NASA Technical Reports Server (NTRS)

    Knox, Robert G.

    2010-01-01

    The presentation provides a summary of VSWIR data collected at 19-day intervals for most areas. TIR data were collected both day and night on a 5-day cycle (more frequently at higher latitudes), the TIR swath is four times as wide as VSWIR, and the 5-day orbit repeat is approximate. Topics include nested swath geometry for reference point design and coverage simulations for sample FLUXNET tower sites. Other points examined include variation in latitude for revisit frequency, overpass times, and TIR overlap geometry and timing between VSWIR data collections.

  14. Using 50 years of soil radiocarbon data to identify optimal approaches for estimating soil carbon residence times

    NASA Astrophysics Data System (ADS)

    Baisden, W. T.; Canessa, S.

    2013-01-01

    In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of 14C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ∼500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of 14C to determine residence times, by estimating the amount of ‘bomb 14C’ incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point 14C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C (‘passive fraction’), (2) the lag time between photosynthesis and C entering the modelled pool, (3) changes in the rates of C input. When carrying out approaches using robust assumptions on time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.

  15. Clinical usefulness of limited sampling strategies for estimating AUC of proton pump inhibitors.

    PubMed

    Niioka, Takenori

    2011-03-01

    Cytochrome P450 (CYP) 2C19 (CYP2C19) genotype is regarded as a useful tool to predict the area under the blood concentration-time curve (AUC) of proton pump inhibitors (PPIs). In our results, however, CYP2C19 genotypes had no influence on the AUC of any PPI during fluvoxamine treatment. These findings suggest that CYP2C19 genotyping is not always a good indicator for estimating the AUC of PPIs. Limited sampling strategies (LSS) were developed to estimate AUC simply and accurately. It is important to minimize the number of blood samples for the sake of patient acceptance. This article reviews the usefulness of LSS for estimating the AUC of three PPIs (omeprazole: OPZ, lansoprazole: LPZ and rabeprazole: RPZ). The best prediction formulas for each PPI were AUC(OPZ) = 9.24 x C(6h) + 2638.03, AUC(LPZ) = 12.32 x C(6h) + 3276.09 and AUC(RPZ) = 1.39 x C(3h) + 7.17 x C(6h) + 344.14, respectively. In order to optimize the sampling strategy for LPZ, we tried to establish an LSS for LPZ using a time point within 3 hours, exploiting the pharmacokinetic properties of its enantiomers. The best prediction formula using the fewest sampling points (one point) was AUC(racemic LPZ) = 6.5 x C(3h) of (R)-LPZ + 13.7 x C(3h) of (S)-LPZ - 9917.3 x G1 - 14387.2 x G2 + 7103.6 (G1: homozygous extensive metabolizer is 1 and the other genotypes are 0; G2: heterozygous extensive metabolizer is 1 and the other genotypes are 0). These strategies, requiring plasma concentration monitoring at only one or two time points, might be more suitable for AUC estimation than reference to CYP2C19 genotypes, particularly in the case of coadministration of CYP mediators.
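
    Since the formulas above are plain linear predictors, applying them is a one-liner; a minimal sketch (the example concentrations are placeholders, not study data, and units must match those used when the formulas were fitted):

```python
def auc_opz(c6h):
    """Omeprazole AUC from a single 6 h concentration."""
    return 9.24 * c6h + 2638.03

def auc_lpz(c6h):
    """Lansoprazole AUC from a single 6 h concentration."""
    return 12.32 * c6h + 3276.09

def auc_rpz(c3h, c6h):
    """Rabeprazole AUC from 3 h and 6 h concentrations."""
    return 1.39 * c3h + 7.17 * c6h + 344.14

# placeholder concentrations, for illustration only
print(auc_rpz(c3h=450.0, c6h=210.0))
```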

  16. Survival of Salmonella enterica in poultry feed is strain dependent

    PubMed Central

    Andino, Ana; Pendleton, Sean; Zhang, Nan; Chen, Wei; Critzer, Faith; Hanning, Irene

    2014-01-01

    Feed components have low water activity, making bacterial survival difficult. The mechanisms of Salmonella survival in feed and subsequent colonization of poultry are unknown. The purpose of this research was to compare the ability of Salmonella serovars and strains to survive in broiler feed and to evaluate molecular mechanisms associated with survival and colonization by measuring the expression of genes associated with colonization (hilA, invA) and survival via fatty acid synthesis (cfa, fabA, fabB, fabD). Feed was inoculated with 1 of 15 strains of Salmonella enterica consisting of 11 serovars (Typhimurium, Enteritidis, Kentucky, Senftenberg, Heidelberg, Mbandaka, Newport, Bareilly, Javiana, Montevideo, and Infantis). To inoculate feed, cultures were suspended in PBS and survival was evaluated by plating samples onto XLT4 agar plates at specific time points (0 h, 4 h, 8 h, 24 h, 4 d, and 7 d). To evaluate gene expression, RNA was extracted from the samples at specific time points (0, 4, 8, and 24 h) and gene expression was measured with real-time PCR. The largest reductions in Salmonella occurred at the first and third sampling time points (4 h and 4 d), with average reductions of 1.9 and 1.6 log cfu per g, respectively. For the remaining time points (8 h, 24 h, and 7 d), the average reduction was less than 1 log cfu per g (0.6, 0.4, and 0.6, respectively). Most strains upregulated cfa (cyclopropane fatty acid synthesis) within 8 h, which would modify the fluidity of the cell membrane to aid in survival. There was a weak negative correlation between survival and virulence gene expression, indicating downregulation of virulence genes to focus energy on other gene expression efforts, such as survival-related genes. These data indicate that the ability of strains to survive over time in poultry feed was strain dependent and that upregulation of cyclopropane fatty acid synthesis and downregulation of virulence genes were associated with a response to desiccation stress. PMID:24570467

  17. Survival of Salmonella enterica in poultry feed is strain dependent.

    PubMed

    Andino, Ana; Pendleton, Sean; Zhang, Nan; Chen, Wei; Critzer, Faith; Hanning, Irene

    2014-02-01

    Feed components have low water activity, making bacterial survival difficult. The mechanisms of Salmonella survival in feed and subsequent colonization of poultry are unknown. The purpose of this research was to compare the ability of Salmonella serovars and strains to survive in broiler feed and to evaluate molecular mechanisms associated with survival and colonization by measuring the expression of genes associated with colonization (hilA, invA) and survival via fatty acid synthesis (cfa, fabA, fabB, fabD). Feed was inoculated with 1 of 15 strains of Salmonella enterica consisting of 11 serovars (Typhimurium, Enteritidis, Kentucky, Senftenberg, Heidelberg, Mbandaka, Newport, Bareilly, Javiana, Montevideo, and Infantis). To inoculate feed, cultures were suspended in PBS and survival was evaluated by plating samples onto XLT4 agar plates at specific time points (0 h, 4 h, 8 h, 24 h, 4 d, and 7 d). To evaluate gene expression, RNA was extracted from the samples at specific time points (0, 4, 8, and 24 h) and gene expression was measured with real-time PCR. The largest reductions in Salmonella occurred at the first and third sampling time points (4 h and 4 d), with average reductions of 1.9 and 1.6 log cfu per g, respectively. For the remaining time points (8 h, 24 h, and 7 d), the average reduction was less than 1 log cfu per g (0.6, 0.4, and 0.6, respectively). Most strains upregulated cfa (cyclopropane fatty acid synthesis) within 8 h, which would modify the fluidity of the cell membrane to aid in survival. There was a weak negative correlation between survival and virulence gene expression, indicating downregulation of virulence genes to focus energy on other gene expression efforts, such as survival-related genes. These data indicate that the ability of strains to survive over time in poultry feed was strain dependent and that upregulation of cyclopropane fatty acid synthesis and downregulation of virulence genes were associated with a response to desiccation stress.

  18. [Automated analyzer of enzyme immunoassay].

    PubMed

    Osawa, S

    1995-09-01

    Automated analyzers for enzyme immunoassay can be classified from several points of view: the kind of labeled antibodies or enzymes, the detection method, the number of tests per unit time, and the analytical time and speed per run. In practice, it is important to consider several points such as detection limits, the number of tests per unit time, analytical range, and precision. Most of the automated analyzers on the market can randomly access and measure samples. I describe recent advances in automated analyzers, reviewing their labeling antibodies and enzymes, detection methods, the number of tests per unit time, and analytical time and speed per test.

  19. Limited sampling strategies to predict the area under the concentration-time curve for rifampicin.

    PubMed

    Medellín-Garibay, Susanna E; Correa-López, Tania; Romero-Méndez, Carmen; Milán-Segovia, Rosa C; Romano-Moreno, Silvia

    2014-12-01

    Rifampicin (RMP) is the most effective first-line antituberculosis drug. One of the most critical aspects of using it in fixed-drug combination formulations is to ensure it reaches therapeutic levels in blood. The determination of the area under the concentration-time curve (AUC) and appropriate dose adjustment of this drug may contribute to optimization of therapy. Although the maximal concentration (Cmax) of RMP also predicts its sterilizing effect, the time to reach it (Tmax) ranges from 40 minutes to 6 hours. The aim of this study was to develop a limited sampling strategy (LSS) to assist therapeutic drug monitoring of RMP. Full concentration-time curves were obtained from 58 patients with tuberculosis (TB) after the oral administration of RMP in a fixed-drug combination formulation. A validated high-performance liquid chromatographic method was used. Pharmacokinetic parameters were estimated with a noncompartmental model. Generalized linear models were obtained by forward steps, and bootstrapping was performed to develop an LSS to predict the AUC from time 0 to the last measurement at 24 hours postdose (AUC0-24). The predictive performance of the proposed models was assessed using RMP profiles from 25 other TB patients by comparing predicted and observed AUC0-24. The mean AUC0-24 in the current study was 91.46 ± 36.7 mg·h/L, and the most convenient sampling time points to predict it were 2, 4 and 12 hours postdose (slope [m] = 0.955 ± 0.06; r = 0.92). The mean prediction error was -0.355%, and the root mean square error was 5.6% in the validation group. Alternate LSSs are proposed with 2 of these sampling time points, which also provide good predictions when the 3 most convenient are not feasible. The AUC0-24 for RMP in TB patients can be predicted with acceptable precision through a 2- or 3-point sampling strategy, despite wide interindividual variability. These LSSs could be applied in clinical practice to optimize anti-TB therapy based on therapeutic drug monitoring.
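
    A sketch of how such an LSS is typically developed, assuming full profiles are available: compute a reference AUC0-24 by the trapezoidal rule, regress it on concentrations at the candidate time points (here the 2, 4 and 12 h triple named above), and check bias and precision on a validation cohort. Variable names and layout are illustrative, and ordinary least squares stands in for the paper's forward-step GLM-plus-bootstrap procedure:

```python
import numpy as np

def trapz_auc(times, conc):
    """Reference AUC0-24 by the trapezoidal rule."""
    return np.trapz(conc, times)

def fit_lss(C_train, auc_train):
    """Least-squares fit of AUC ~ b0 + b . C at the chosen sampling times.
    C_train: (n_patients, n_points) concentrations at e.g. 2, 4, 12 h."""
    X = np.column_stack([np.ones(len(C_train)), C_train])
    beta, *_ = np.linalg.lstsq(X, auc_train, rcond=None)
    return beta

def predict_lss(beta, C):
    return beta[0] + np.asarray(C) @ beta[1:]

def validate(beta, C_val, auc_val):
    """Mean prediction error (%) and RMSE (%) on a validation cohort."""
    err = (predict_lss(beta, C_val) - auc_val) / auc_val * 100
    return err.mean(), np.sqrt((err ** 2).mean())
```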

  20. Adaptive control of theophylline therapy: importance of blood sampling times.

    PubMed

    D'Argenio, D Z; Khakmahd, K

    1983-10-01

    A two-observation protocol for estimating theophylline clearance during a constant-rate intravenous infusion is used to examine the importance of blood sampling schedules with regard to the information content of resulting concentration data. Guided by a theory for calculating maximally informative sample times, population simulations are used to assess the effect of specific sampling times on the precision of resulting clearance estimates and subsequent predictions of theophylline plasma concentrations. The simulations incorporated noise terms for intersubject variability, dosing errors, sample collection errors, and assay error. Clearance was estimated using Chiou's method, least squares, and a Bayesian estimation procedure. The results of these simulations suggest that clinically significant estimation and prediction errors may result when using the above two-point protocol for estimating theophylline clearance if the time separating the two blood samples is less than one population mean elimination half-life.
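
    Chiou's method mentioned above has a closed form for clearance during a constant-rate infusion; a sketch assuming the standard two-point equation, with the volume of distribution taken from population values (all numbers are placeholders). The abstract's conclusion corresponds to requiring t2 - t1 to exceed roughly one population mean elimination half-life:

```python
def chiou_clearance(rate, c1, c2, t1, t2, vd):
    """Two-point clearance estimate during a constant-rate IV infusion
    (standard Chiou equation): a steady-state term plus a correction for
    the still-changing concentration between the two samples."""
    return 2 * rate / (c1 + c2) + 2 * vd * (c1 - c2) / ((c1 + c2) * (t2 - t1))

# hypothetical: 40 mg/h infusion, samples at 2 h and 10 h, Vd ~ 35 L
print(chiou_clearance(rate=40.0, c1=8.0, c2=10.0, t1=2.0, t2=10.0, vd=35.0))  # L/h
```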

  1. The Importance and Role of Intracluster Correlations in Planning Cluster Trials

    PubMed Central

    Preisser, John S.; Reboussin, Beth A.; Song, Eun-Young; Wolfson, Mark

    2008-01-01

    There is increasing recognition of the critical role of intracluster correlations of health behavior outcomes in cluster intervention trials. This study examines the estimation, reporting, and use of intracluster correlations in planning cluster trials. We use an estimating equations approach to estimate the intracluster correlations corresponding to the multiple-time-point nested cross-sectional design. Sample size formulae incorporating 2 types of intracluster correlations are examined for the purpose of planning future trials. The traditional intracluster correlation is the correlation among individuals within the same community at a specific time point. A second type is the correlation among individuals within the same community at different time points. For a “time × condition” analysis of a pretest–posttest nested cross-sectional trial design, we show that statistical power considerations based upon a posttest-only design generally are not an adequate substitute for sample size calculations that incorporate both types of intracluster correlations. Estimation, reporting, and use of intracluster correlations are illustrated for several dichotomous measures related to underage drinking collected as part of a large nonrandomized trial to enforce underage drinking laws in the United States from 1998 to 2004. PMID:17879427
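
    The role of the two correlations can be made concrete with a standard variance-inflation argument for the "time × condition" contrast in a two-period nested cross-sectional design: with m individuals per cluster per period, the design effect is 1 + (m - 1)ρ₁ - mρ₂, where ρ₁ is the within-period ICC and ρ₂ the within-cluster between-period correlation. The sketch below follows that textbook form and is not a restatement of the paper's own formulae:

```python
from math import ceil
from statistics import NormalDist

def clusters_per_condition(delta, sigma, m, rho1, rho2, alpha=0.05, power=0.8):
    """Clusters per condition to detect a net (time x condition) change delta,
    sampling m new individuals per cluster at each of two time points.
    A positive between-period correlation rho2 reduces the required size,
    because repeated cross-sections of the same clusters are correlated."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    deff = 1 + (m - 1) * rho1 - m * rho2
    var_unit = 4 * sigma**2 * deff / m   # difference-in-differences variance x k
    return ceil(z**2 * var_unit / delta**2)

# rho2 = 0 recovers the usual posttest-only design effect
print(clusters_per_condition(delta=0.1, sigma=0.5, m=100, rho1=0.02, rho2=0.01))
```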

  2. Occupational Gender Desegregation in the 1980s.

    ERIC Educational Resources Information Center

    Cotter, David A.; And Others

    1995-01-01

    Analysis of 1980 and 1990 Public Use Microdata Samples showed that, among full-time workers, occupational sex segregation declined 6.5 percentage points, less than the 8.5 point decline in the 1970s. Three-quarters of the desegregation was due to changed gender composition of occupations, one-quarter due to faster growth in more integrated…

  3. Estimation of point source fugitive emission rates from a single sensor time series: a conditionally-sampled Gaussian plume reconstruction

    EPA Science Inventory

    This paper presents a technique for determining the trace gas emission rate from a point source. The technique was tested using data from controlled methane release experiments and from measurement downwind of a natural gas production facility in Wyoming. Concentration measuremen...

  4. Incorporating availability for detection in estimates of bird abundance

    USGS Publications Warehouse

    Diefenbach, D.R.; Marshall, M.R.; Mattice, J.A.; Brauning, D.W.

    2007-01-01

    Several bird-survey methods have been proposed that provide an estimated detection probability so that bird-count statistics can be used to estimate bird abundance. However, some of these estimators adjust counts of birds observed by the probability that a bird is detected and assume that all birds are available to be detected at the time of the survey. We marked male Henslow's Sparrows (Ammodramus henslowii) and Grasshopper Sparrows (A. savannarum) and monitored their behavior during May-July 2002 and 2003 to estimate the proportion of time they were available for detection. We found that the availability of Henslow's Sparrows declined in late June to <10% for 5- or 10-min point counts when a male had to sing and be visible to the observer; but during 20 May-19 June, males were available for detection 39.1% (SD = 27.3) of the time for 5-min point counts and 43.9% (SD = 28.9) of the time for 10-min point counts (n = 54). We detected no temporal changes in availability for Grasshopper Sparrows, but estimated availability to be much lower for 5-min point counts (10.3%, SD = 12.2) than for 10-min point counts (19.2%, SD = 22.3) when males had to be visible and sing during the sampling period (n = 80). For distance sampling, we estimated the availability of Henslow's Sparrows to be 44.2% (SD = 29.0) and the availability of Grasshopper Sparrows to be 20.6% (SD = 23.5). We show how our estimates of availability can be incorporated in the abundance and variance estimators for distance sampling and modify the abundance and variance estimators for the double-observer method. Methods that directly estimate availability from bird counts but also incorporate detection probabilities need further development and will be important for obtaining unbiased estimates of abundance for these species.

  5. Evaluation of satisfaction in an extracurricular enrichment program for high-intellectual ability participants.

    PubMed

    Sastre I Riba, Sylvia; Fonseca-Pedrero, Eduardo; Santarén-Rosell, Marta; Urraca-Martínez, María Luz

    2015-01-01

    The objective of this study was to evaluate satisfaction with an extracurricular enrichment program targeting the cognitive and personal management of participants with high intellectual ability. At the first time point, the sample consisted of n = 38 participants and n = 20 parents; n = 48 participants at the second time point; and n = 60 participants at the third time point. The Satisfaction Questionnaire (CSA in Spanish), both for students (CSA-S) and for parents (CSA-P), was constructed. The CSA-S scores showed adequate psychometric properties. Exploratory factor analysis yielded a unidimensional structure. Cronbach's alpha ranged between .85 and .86. Test-retest reliability was 0.45 (p < .05). The generalizability coefficient was .98. A high percentage of the sample was satisfied with the program, perceived improvements in cognitive and emotional management, motivation and interest in learning, and in the frequency and quality of their interpersonal relationships. The evaluation of educational programs is necessary in order to determine the efficacy and the effects of their implementation on the participants' personal and intellectual management.

  6. Phase II Trials for Heterogeneous Patient Populations with a Time-to-Event Endpoint.

    PubMed

    Jung, Sin-Ho

    2017-07-01

    In this paper, we consider a single-arm phase II trial with a time-to-event end-point. We assume that the study population has multiple subpopulations with different prognosis, but the study treatment is expected to be similarly efficacious across the subpopulations. We review a stratified one-sample log-rank test and present its sample size calculation method under some practical design settings. Our sample size method requires specification of the prevalence of subpopulations. We observe that the power of the resulting sample size is not very sensitive to misspecification of the prevalence.

  7. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    PubMed

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be used effectively to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as a weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which can define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design; a minimal sketch of the annealing loop is given after this abstract. The proposed approach found the optimal solution in a reasonable computation time. The use of the bulk ECa gradient as an exhaustive variable, known at every node of an interpolation grid, allowed the optimization of the sampling scheme, distinguishing among areas with different priority levels.
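
    A minimal sketch of spatial simulated annealing under the MMSD criterion alone, on a square field with no sampling constraints (the ECa-weighted and kriging-variance criteria, and the MSANOS software itself, are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
# fine evaluation grid over a 100 m x 100 m field
grid = np.stack(np.meshgrid(np.linspace(0, 100, 50),
                            np.linspace(0, 100, 50)), -1).reshape(-1, 2)

def mmsd(samples):
    """Mean, over the evaluation grid, of the distance to the nearest sample."""
    d = np.linalg.norm(grid[:, None, :] - samples[None, :, :], axis=2)
    return d.min(axis=1).mean()

def anneal(n_pts=30, n_iter=5000, t0=1.0, cooling=0.999):
    pts = rng.uniform(0, 100, size=(n_pts, 2))
    obj, temp = mmsd(pts), t0
    for _ in range(n_iter):
        cand = pts.copy()
        i = rng.integers(n_pts)
        cand[i] = np.clip(cand[i] + rng.normal(0, 5, 2), 0, 100)  # jitter one point
        cobj = mmsd(cand)
        # accept improvements always, deteriorations with Metropolis probability
        if cobj < obj or rng.random() < np.exp((obj - cobj) / temp):
            pts, obj = cand, cobj
        temp *= cooling                      # exponential cooling law
    return pts, obj

points, score = anneal()
```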

  8. Ion photon emission microscope

    DOEpatents

    Doyle, Barney L.

    2003-04-22

    An ion beam analysis system that creates microscopic multidimensional image maps of the effects of high energy ions from an unfocussed source upon a sample by correlating the exact entry point of an ion into a sample by projection imaging of the ion-induced photons emitted at that point with a signal from a detector that measures the interaction of that ion within the sample. The emitted photons are collected in the lens system of a conventional optical microscope, and projected on the image plane of a high resolution single photon position sensitive detector. Position signals from this photon detector are then correlated in time with electrical effects, including the malfunction of digital circuits, detected within the sample that were caused by the individual ion that created these photons initially.

  9. Modeling Canadian Quality Control Test Program for Steroid Hormone Receptors in Breast Cancer: Diagnostic Accuracy Study.

    PubMed

    Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan

    The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. Results show 88 laboratories participated in quality control at up to 13 time points using typically 37 to 54 histology samples. In meta-analysis across all time points no laboratories have sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to reference standard identified with Generalized Estimating Equation modeling also have reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with a significantly lower performance to be targeted for advice.

  10. Microbial succession in an inflated lunar/Mars analog habitat during a 30-day human occupation.

    PubMed

    Mayer, Teresa; Blachowicz, Adriana; Probst, Alexander J; Vaishampayan, Parag; Checinska, Aleksandra; Swarmer, Tiffany; de Leon, Pablo; Venkateswaran, Kasthuri

    2016-06-02

    For potential future human missions to the Moon or Mars and sustained presence in the International Space Station, a safe enclosed habitat environment for astronauts is required. Potential microbial contamination of closed habitats presents a risk for crewmembers due to reduced human immune response during long-term confinement. To make future habitat designs safer for crewmembers, lessons learned from characterizing analogous habitats are critical. One of the key issues is how human presence influences the accumulation of microorganisms in the closed habitat. Molecular technologies, along with traditional microbiological methods, were utilized to catalog microbial succession during a 30-day human occupation of a simulated inflatable lunar/Mars habitat. Surface samples were collected at different time points to capture the complete spectrum of viable and potentially opportunistic pathogenic bacterial populations. Traditional cultivation, propidium monoazide (PMA)-quantitative polymerase chain reaction (qPCR), and adenosine triphosphate (ATP) assays were employed to estimate the cultivable, viable, and metabolically active microbial populations, respectively. Next-generation sequencing was used to elucidate the microbial dynamics and community profiles at different locations of the habitat during varying time points. Statistical analyses confirm that occupation time has a strong influence on bacterial community profiles. The Day 0 samples (before human occupation) have a very different microbial diversity compared with the three later time points. Members of Proteobacteria (esp. Oxalobacteraceae and Caulobacteraceae) and Firmicutes (esp. Bacillaceae) were most abundant before human occupation (Day 0), while other members of Firmicutes (Clostridiales) and Actinobacteria (esp. Corynebacteriaceae) were abundant during the 30-day occupation. Treatment of samples with PMA (a DNA-intercalating dye for selective detection of the viable microbial population) had a significant effect on the measured microbial diversity compared with non-PMA-treated samples. Statistical analyses revealed a significant difference in the community structure of samples over time, particularly between the bacteriomes existing before human occupation of the habitat (Day 0 sampling) and after occupation (Day 13, Day 20, and Day 30 samplings). Actinobacteria (mainly Corynebacteriaceae) and Firmicutes (mainly Clostridiales Incertae Sedis XI and Staphylococcaceae) were shown to increase over the occupation time period. The results of this study revealed a strong relationship between human presence and succession of microbial diversity in a closed habitat. Consequently, it is necessary to develop methods and tools for effective maintenance of a closed system to enable safe human habitation in enclosed environments on Earth and beyond.

  11. Sampling through time and phylodynamic inference with coalescent and birth–death models

    PubMed Central

    Volz, Erik M.; Frost, Simon D. W.

    2014-01-01

    Many population genetic models have been developed for the purpose of inferring population size and growth rates from random samples of genetic data. We examine two popular approaches to this problem, the coalescent and the birth–death-sampling model (BDM), in the context of estimating population size and birth rates in a population growing exponentially according to the birth–death branching process. For sequences sampled at a single time, we found the coalescent and the BDM gave virtually indistinguishable results in terms of the growth rates and fraction of the population sampled, even when sampling from a small population. For sequences sampled at multiple time points, we find that the birth–death model estimators are subject to large bias if the sampling process is misspecified. Since BDMs incorporate a model of the sampling process, we show how much of the statistical power of BDMs arises from the sequence of sample times and not from the genealogical tree. This motivates the development of a new coalescent estimator, which is augmented with a model of the known sampling process and is potentially more precise than the coalescent that does not use sample time information. PMID:25401173

  12. Influence of Survey Length and Radius Size on Grassland Bird Surveys by Point Counts at Williams Lake, British Columbia

    Treesearch

    Jean-Pierre L. Savard; Tracey D. Hooper

    1995-01-01

    We examine the effect of survey length and radius on the results of point count surveys for grassland birds at Williams Lake, British Columbia. Four- and 8-minute counts detected on average 68 percent and 85 percent of the number of birds detected during 12-minute counts. The most efficient sampling duration was 4 minutes, as long as travel time between points was...

  13. Local Sampling of the Wigner Function at Telecom Wavelength with Loss-Tolerant Detection of Photon Statistics.

    PubMed

    Harder, G; Silberhorn, Ch; Rehacek, J; Hradil, Z; Motka, L; Stoklasa, B; Sánchez-Soto, L L

    2016-04-01

    We report the experimental point-by-point sampling of the Wigner function for nonclassical states created in an ultrafast pulsed type-II parametric down-conversion source. We use a loss-tolerant time-multiplexed detector based on a fiber-optical setup and a pair of photon-number-resolving avalanche photodiodes. By capitalizing on an expedient data-pattern tomography, we assess the properties of the light states with outstanding accuracy. The method allows us to reliably infer the squeezing of genuine two-mode states without any phase reference.

  14. Auto covariance computer

    NASA Technical Reports Server (NTRS)

    Hepner, T. E.; Meyers, J. F. (Inventor)

    1985-01-01

    A laser velocimeter covariance processor which calculates the auto covariance and cross covariance functions for a turbulent flow field based on Poisson sampled measurements in time from a laser velocimeter is described. The device will process a block of data that is up to 4096 data points in length and return a 512 point covariance function with 48-bit resolution along with a 512 point histogram of the interarrival times which is used to normalize the covariance function. The device is designed to interface and be controlled by a minicomputer from which the data is received and the results returned. A typical 4096 point computation takes approximately 1.5 seconds to receive the data, compute the covariance function, and return the results to the computer.
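
    A software analogue of this processor is the slotted covariance estimator used for randomly sampled laser-velocimeter data: products of velocity fluctuations are accumulated into lag slots and normalized by the pair-count (interarrival) histogram, mirroring the 512-point covariance plus 512-point histogram design described above. A sketch, assuming sorted sample times:

```python
import numpy as np

def slotted_autocovariance(t, u, n_slots=512, max_lag=None):
    """Slotted auto covariance for irregularly (e.g. Poisson) sampled data.
    t must be sorted ascending; u are the velocity samples at times t."""
    t = np.asarray(t, dtype=float)
    u = np.asarray(u, dtype=float) - np.mean(u)     # remove the mean velocity
    max_lag = max_lag if max_lag is not None else (t[-1] - t[0]) / 10
    dt = max_lag / n_slots
    acc = np.zeros(n_slots)                         # sum of products per lag slot
    cnt = np.zeros(n_slots, dtype=np.int64)         # pair-count histogram
    for i in range(len(t)):
        for j in range(i, len(t)):
            lag = t[j] - t[i]
            if lag >= max_lag:
                break                               # later j only give larger lags
            k = int(lag / dt)
            acc[k] += u[i] * u[j]
            cnt[k] += 1
    # normalize each slot by its pair count; empty slots become NaN
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), np.nan)
```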

  15. Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.

    PubMed

    Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan

    2013-06-01

    The aim of this work is to reduce the cost of required sampling for the estimation of the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t ). The limited sampling strategy (LSS) models were established and validated by the multiple regression model within 4 or fewer gliclazide concentration values. Absolute prediction error (APE), root of mean square error (RMSE) and visual prediction check were used as criterion. The results of Jack-Knife validation showed that 10 (25.0 %) of the 40 LSS based on the regression analysis were not within an APE of 15 % using one concentration-time point. 90.2, 91.5 and 92.4 % of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were developed and validated for estimating AUC0-60t of gliclazide. This study indicates that the implementation of an 80 mg dosage regimen enabled accurate predictions of AUC0-60t by the LSS model. This study shows that 12, 6, 4, 2 h after administration are the key sampling times. The combination of (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling hours for predicting AUC0-60t in practical application according to requirement.

  16. qPCR-based mitochondrial DNA quantification: Influence of template DNA fragmentation on accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackson, Christopher B., E-mail: Christopher.jackson@insel.ch; Gallati, Sabina, E-mail: sabina.gallati@insel.ch; Schaller, Andre, E-mail: andre.schaller@insel.ch

    2012-07-06

    Highlights: Serial qPCR accurately determines the fragmentation state of any given DNA sample. Serial qPCR demonstrates the different preservation of the nuclear and mitochondrial genomes. Serial qPCR provides a diagnostic tool to validate the integrity of bioptic material. Serial qPCR excludes degradation-induced erroneous quantification. -- Abstract: Real-time PCR (qPCR) is the method of choice for quantification of mitochondrial DNA (mtDNA) by relative comparison of a nuclear to a mitochondrial locus. Quantitatively abnormal mtDNA content is indicative of mitochondrial disorders and is mostly confined to specific tissues. Thus, handling of degradation-prone bioptic material is inevitable. We established a serial qPCR assay based on increasing amplicon size to measure the degradation status of any DNA sample. Using this approach we can exclude erroneous mtDNA quantification due to degraded samples (e.g. long post-excision time, autolytic processes, freeze-thaw cycles) and ensure reliable measurement of abnormal DNA content (e.g. depletion) in non-degraded patient material. By preparing degraded DNA under controlled conditions using sonification and DNaseI digestion, we show that erroneous quantification is due to the different preservation qualities of the nuclear and mitochondrial genomes. This disparate degradation of the two genomes results in over- or underestimation of mtDNA copy number in degraded samples. Moreover, as analysis of defined archival tissue would allow the molecular pathomechanism of mitochondrial disorders presenting with abnormal mtDNA content to be specified, we compared fresh frozen (FF) with formalin-fixed paraffin-embedded (FFPE) skeletal muscle tissue of the same sample. By extrapolation of the measured decay constants for nuclear DNA (λ(nDNA)) and mtDNA (λ(mtDNA)), we present an approach to possibly correct measurements in degraded samples in the future. To our knowledge this is the first time the different degradation impact on the two genomes is demonstrated and the impact of DNA degradation on quantification of mtDNA copy number is systematically evaluated.
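
    The correction idea in the final sentences can be sketched as fitting a single-exponential decay of measured copy number versus amplicon length for each genome, then comparing the extrapolated zero-length intercepts. This illustrates the stated approach but is not the study's actual regression procedure, and the readouts below are invented:

```python
import numpy as np

def fit_decay(amplicon_len, copies):
    """Fit copies ~ N0 * exp(-lam * L) by linear regression on log(copies);
    returns the decay constant lam and the undegraded intercept N0."""
    slope, intercept = np.polyfit(np.asarray(amplicon_len, float),
                                  np.log(np.asarray(copies, float)), 1)
    return -slope, np.exp(intercept)

# hypothetical serial-qPCR readout: four amplicon sizes per genome (bp)
sizes = [80, 150, 300, 600]
lam_mt, n0_mt = fit_decay(sizes, [9.8e4, 9.0e4, 7.5e4, 5.2e4])
lam_n,  n0_n  = fit_decay(sizes, [9.5e4, 8.1e4, 5.6e4, 2.7e4])
print(n0_mt / n0_n)   # degradation-corrected mtDNA/nDNA ratio
```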

  17. Design of point-of-care (POC) microfluidic medical diagnostic devices

    NASA Astrophysics Data System (ADS)

    Leary, James F.

    2018-02-01

    Designing inexpensive, portable, hand-held microfluidic flow/image cytometry devices for initial medical diagnostics at the point of first patient contact by emergency medical personnel in the field requires careful attention to power and weight, so that the device is realistically portable as a hand-held, point-of-care medical diagnostics device. True portability also requires small micro-pumps for high-throughput capability. Weight and power requirements dictate the use of super-bright LEDs and very small silicon photodiodes or nanophotonic sensors that can be powered by batteries. Signal-to-noise characteristics can be greatly improved by appropriately pulsing the LED excitation sources and sampling and subtracting noise between excitation pulses. The requirements for basic computing, imaging, GPS and basic telecommunications can be met simultaneously by smartphone technologies, which become part of the overall device. Software for a user-interface system, limited real-time computing, real-time imaging, and offline data analysis can be implemented with multi-platform software development systems that are well suited to a variety of currently available cellphone technologies, which already contain all of these capabilities. Microfluidic cytometry requires judicious use of small sample volumes and appropriate statistical sampling by microfluidic cytometry or imaging for adequate statistical significance to permit real-time (typically < 15 minutes) medical decisions for patients at the physician's office or in the field. One or two drops of blood obtained by pin-prick should be able to provide statistically meaningful results for use in making real-time medical decisions without the need for blood fractionation, which is not realistic in the field.
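
    The pulsed-excitation point can be illustrated with a toy synchronous-subtraction scheme: integrate the detector during LED-on windows, integrate equally long LED-off windows in between, and subtract, which cancels slowly varying ambient background. Every number below is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                  # detector samples
true_signal, ambient = 0.05, 1.0             # fluorescence << ambient light
led_on = (np.arange(n) // 50) % 2 == 0       # 50-sample on/off excitation pulses
drift = ambient * (1 + 0.1 * np.sin(np.arange(n) / 3000))   # slow background drift
reading = drift + true_signal * led_on + 0.02 * rng.standard_normal(n)

# subtracting the LED-off mean removes the drifting background
estimate = reading[led_on].mean() - reading[~led_on].mean()
print(f"recovered signal ~ {estimate:.3f} (true {true_signal})")
```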

  18. Comparison of Support Vector Machine, Neural Network, and CART Algorithms for the Land-Cover Classification Using Limited Training Data Points

    EPA Science Inventory

    Support vector machine (SVM) was applied for land-cover characterization using MODIS time-series data. Classification performance was examined with respect to training sample size, sample variability, and landscape homogeneity (purity). The results were compared to two convention...

  19. [Effect of hydroxyethyl starch 130/0.4 on S100B protein level and cerebral oxygen metabolism in open cardiac surgery under cardiopulmonary bypass].

    PubMed

    Pi, Zhi-bing; Tan, Guan-xian; Wang, Jun-lu

    2007-07-17

    To observe the effect of hydroxyethyl starch (HES) 130/0.4 on the S100B protein level and cerebral oxygen metabolism in open cardiac surgery under cardiopulmonary bypass (CPB), and to explore whether 6% HES 130/0.4 used as a priming solution protects against cerebral injury during CPB and, if so, the probable mechanism. Forty patients with atrial septal defect or ventricular septal defect scheduled for elective surgical repair under CPB with moderate hypothermia were randomly divided into two equal groups: the HES 130/0.4 group (HES group), in which HES 130/0.4 (Voluven) was used as the priming solution, and the gelatin group (GEL group), in which gelofusine (succinylated gelatin) was used as the priming solution. ECG, heart rate (HR), blood pressure (BP), mean arterial pressure (MAP), central venous pressure (CVP), arterial partial pressure of oxygen (P(a)O(2)), end-tidal partial pressure of carbon dioxide (P(et)CO(2)) and body temperature (naso-pharyngeal and rectal) were continuously monitored during the operation. Blood samples were obtained from the central vein for determination of blood concentrations of S100B protein at the following time points: before CPB (T(0)), 20 minutes after the beginning of CPB (T(1)), immediately after the termination of CPB (T(2)), 60 minutes after the termination of CPB (T(3)), and 24 hours after the termination of CPB (T(4)). The serum S100B protein levels were measured by ELISA. At the same time points, blood samples were obtained from the jugular vein and radial artery for blood gas analysis and measurement of blood glucose, from which the cerebral oxygen metabolic rate/cerebral metabolic rate of glucose (CMRO(2)/CMR(GLU)) was calculated. Compared with the time point immediately before CPB (T(0)), the S100B protein levels of the 2 groups began to increase at time point T(1), peaked at time point T(2), began to decrease gradually from time point T(3), and were still significantly higher than those before CPB at time point T(4) (all P < 0.01); the S100B protein levels at the different time points in the HES group were all significantly lower than those in the GEL group (all P < 0.01). The S(jv)O(2) and CMRO(2)/CMR(GLU) levels of both groups increased at time point T(1), decreased at time points T(2) and T(3), and then returned to normal at time point T(4). In the GEL group there were no significant differences in these levels between any 2 time points; however, in the HES group the S(jv)O(2) and CMRO(2)/CMR(GLU) levels at T(1) were significantly higher than those at the other time points (P < 0.05 or P < 0.01). S100B protein increases significantly in open cardiac surgery under CPB. HES 130/0.4 lowers the S100B protein levels from the beginning of CPB to one hour after the termination of CPB, with the probable mechanism of improving cerebral oxygen metabolism. 6% HES 130/0.4 as a priming solution may play a protective role in reducing cerebral injury during CPB and open cardiac surgery.

  20. Statistical approaches to the analysis of point count data: A little extra information can go a long way

    USGS Publications Warehouse

    Farnsworth, G.L.; Nichols, J.D.; Sauer, J.R.; Fancy, S.G.; Pollock, K.H.; Shriner, S.A.; Simons, T.R.; Ralph, C. John; Rich, Terrell D.

    2005-01-01

    Point counts are a standard sampling procedure for many bird species, but lingering concerns still exist about the quality of information produced from the method. It is well known that variation in observer ability and environmental conditions can influence the detection probability of birds in point counts, but many biologists have been reluctant to abandon point counts in favor of more intensive approaches to counting. However, over the past few years a variety of statistical and methodological developments have begun to provide practical ways of overcoming some of the problems with point counts. We describe some of these approaches, and show how they can be integrated into standard point count protocols to greatly enhance the quality of the information. Several tools now exist for estimation of detection probability of birds during counts, including distance sampling, double observer methods, time-depletion (removal) methods, and hybrid methods that combine these approaches. Many counts are conducted in habitats that make auditory detection of birds much more likely than visual detection. As a framework for understanding detection probability during such counts, we propose separating two components of the probability a bird is detected during a count into (1) the probability a bird vocalizes during the count and (2) the probability this vocalization is detected by an observer. In addition, we propose that some measure of the area sampled during a count is necessary for valid inferences about bird populations. This can be done by employing fixed-radius counts or more sophisticated distance-sampling models. We recommend any studies employing point counts be designed to estimate detection probability and to include a measure of the area sampled.
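
    The proposed decomposition can be written as p = p_a * p_d (availability times conditional detectability), which plugs directly into an adjusted-count density estimator for a fixed-radius count; a sketch with hypothetical inputs:

```python
import math

def density_per_ha(count, p_available, p_detect_given_available, radius_m):
    """Adjusted density from a fixed-radius point count:
    N = C / (p_a * p_d), divided by the area actually sampled."""
    p = p_available * p_detect_given_available
    area_ha = math.pi * radius_m**2 / 10_000
    return count / p / area_ha

# hypothetical: 3 males detected, 44% available, 70% detected when available,
# 100 m fixed-radius count
print(density_per_ha(3, 0.44, 0.70, 100))
```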

  1. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    NASA Astrophysics Data System (ADS)

    Yang, Kuojun; Tian, Shulin; Zeng, Hao; Qiu, Lei; Guo, Lianping

    2014-04-01

    In a traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal. This duration is therefore called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO, which causes the loss of measured signal, needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data to a displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the probability information of the waveform is also displayed with different brightness. Thus, a three-dimensional waveform is shown to the user. To reduce processing time further, parallel TWM, which processes several sampled points simultaneously, and a dual-port random access memory based pipelining technique, which can process one sampled point per clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) devices are used to store sampled data alternately, so that acquisition can continue during data processing. Therefore, the dead time of the DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope, and a combined-pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6 250 000 wfms/s (waveforms per second), the highest value among existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition.
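
    The mapping step can be sketched in software as accumulating each acquisition into a (time, voltage) hit matrix whose counts drive pixel brightness; the parallel and pipelined versions described above perform the same accumulation in hardware. A minimal sketch:

```python
import numpy as np

def waveform_map(acquisitions, n_x=512, n_y=256, v_range=(-1.0, 1.0)):
    """Accumulate many sampled waveforms into a brightness (probability) map."""
    hits = np.zeros((n_y, n_x), dtype=np.uint32)
    lo, hi = v_range
    cols = np.arange(n_x)
    for wf in acquisitions:                      # wf: n_x voltage samples
        rows = ((np.asarray(wf) - lo) / (hi - lo) * (n_y - 1)).astype(int)
        hits[np.clip(rows, 0, n_y - 1), cols] += 1
    # log scaling keeps rare events visible next to the dense trace
    return np.log1p(hits) / np.log1p(max(hits.max(), 1))
```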

  2. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Kuojun, E-mail: kuojunyang@gmail.com; Guo, Lianping; School of Electrical and Electronic Engineering, Nanyang Technological University

    In a traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal. This duration is therefore called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO, which causes the loss of measured signal, needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data to a displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the probability information of the waveform is also displayed with different brightness. Thus, a three-dimensional waveform is shown to the user. To reduce processing time further, parallel TWM, which processes several sampled points simultaneously, and a dual-port random access memory based pipelining technique, which can process one sampled point per clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) devices are used to store sampled data alternately, so that acquisition can continue during data processing. Therefore, the dead time of the DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope, and a combined-pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6 250 000 wfms/s (waveforms per second), the highest value among existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition.

  3. Connections between Transcription Downstream of Genes and cis-SAGe Chimeric RNA.

    PubMed

    Chwalenia, Katarzyna; Qin, Fujun; Singh, Sandeep; Tangtrongstittikul, Panjapon; Li, Hui

    2017-11-22

    cis-Splicing between adjacent genes (cis-SAGe) is being recognized as one way to produce chimeric fusion RNAs. However, its detailed mechanism is not clear. A recent study revealed induction of transcription downstream of genes (DoGs) under osmotic stress. Here, we investigated the influence of osmotic stress on cis-SAGe chimeric RNAs and their connection to DoGs. We found an absence of induction of at least some cis-SAGe fusions and/or their corresponding DoGs at early time point(s). In fact, these DoGs and their cis-SAGe fusions are inversely correlated. This negative correlation changed to positive at a later time point. These results suggest a direct competition between the two categories of transcripts when the total pool of readthrough transcripts is limited at an early time point. At a later time point, DoGs and corresponding cis-SAGe fusions are both induced, indicating that total readthrough transcripts become more abundant. Finally, we observed overall enhancement of cis-SAGe chimeric RNAs in KCl-treated samples by RNA-Seq analysis.

  4. Digital ac monitor

    DOEpatents

    Hart, George W.; Kern, Jr., Edward C.

    1987-06-09

    An apparatus and method is provided for monitoring a plurality of analog ac circuits by sampling the voltage and current waveform in each circuit at predetermined intervals, converting the analog current and voltage samples to digital format, storing the digitized current and voltage samples and using the stored digitized current and voltage samples to calculate a variety of electrical parameters; some of which are derived from the stored samples. The non-derived quantities are repeatedly calculated and stored over many separate cycles then averaged. The derived quantities are then calculated at the end of an averaging period. This produces a more accurate reading, especially when averaging over a period in which the power varies over a wide dynamic range. Frequency is measured by timing three cycles of the voltage waveform using the upward zero crossover point as a starting point for a digital timer.
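
    The frequency-measurement rule in the last sentence (time three cycles of the voltage waveform between upward zero crossings) can be sketched directly on stored samples; linear interpolation between the bracketing samples refines each crossover instant:

```python
import numpy as np

def mains_frequency(v, fs, n_cycles=3):
    """Estimate frequency by timing n_cycles between upward zero crossings."""
    v = np.asarray(v, dtype=float)
    up = np.where((v[:-1] < 0) & (v[1:] >= 0))[0]   # upward crossing indices
    if len(up) < n_cycles + 1:
        raise ValueError("not enough cycles captured")
    def t_cross(i):                                 # interpolated crossing time
        return (i + v[i] / (v[i] - v[i + 1])) / fs
    return n_cycles / (t_cross(up[n_cycles]) - t_cross(up[0]))

fs = 10_000.0
t = np.arange(4096) / fs
print(mains_frequency(np.sin(2 * np.pi * 60.0 * t + 0.3), fs))   # ~60.0 Hz
```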

  5. Open-loop measurement of data sampling point for SPM

    NASA Astrophysics Data System (ADS)

    Wang, Yueyu; Zhao, Xuezeng

    2006-03-01

    SPM (Scanning Probe Microscope) provides "three-dimensional images" with nanometer level resolution, and some of them can be used as metrology tools. However, SPM's images are commonly distorted by non-ideal properties of SPM's piezoelectric scanner, which reduces metrological accuracy and data repeatability. In order to eliminate this limit, an "open-loop sampling" method is presented. In this method, the positional values of sampling points in all three directions on the surface of the sample are measured by the position sensor and recorded in SPM's image file, which is used to replace the image file from a conventional SPM. Because the positions in X and Y directions are measured at the same time of sampling height information in Z direction, the image distortion caused by scanner locating error can be reduced by proper image processing algorithm.
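
    One way to use such position-annotated image files, assuming the recorded (x, y, z) arrays and SciPy are available: interpolate the heights onto a regular grid so downstream analysis sees distortion-corrected coordinates. A sketch, not the instrument's actual processing chain:

```python
import numpy as np
from scipy.interpolate import griddata

def regrid_spm(x, y, z, n=256):
    """Resample heights z, measured at sensor-reported (x, y) positions,
    onto a regular n x n grid, correcting scanner positioning distortion."""
    xi = np.linspace(x.min(), x.max(), n)
    yi = np.linspace(y.min(), y.max(), n)
    XI, YI = np.meshgrid(xi, yi)
    return griddata((x, y), z, (XI, YI), method="cubic")
```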

  6. Digital ac monitor

    DOEpatents

    Hart, G.W.; Kern, E.C. Jr.

    1987-06-09

    An apparatus and method is provided for monitoring a plurality of analog ac circuits by sampling the voltage and current waveform in each circuit at predetermined intervals, converting the analog current and voltage samples to digital format, storing the digitized current and voltage samples and using the stored digitized current and voltage samples to calculate a variety of electrical parameters; some of which are derived from the stored samples. The non-derived quantities are repeatedly calculated and stored over many separate cycles then averaged. The derived quantities are then calculated at the end of an averaging period. This produces a more accurate reading, especially when averaging over a period in which the power varies over a wide dynamic range. Frequency is measured by timing three cycles of the voltage waveform using the upward zero crossover point as a starting point for a digital timer. 24 figs.

  7. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... You may extend the sampling time to improve measurement accuracy of PM emissions, using good..., you may omit speed, torque, and power points from the duty-cycle regression statistics if the... mapped. (2) For variable-speed engines without low-speed governors, you may omit torque and power points...

  8. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... You may extend the sampling time to improve measurement accuracy of PM emissions, using good..., you may omit speed, torque, and power points from the duty-cycle regression statistics if the... mapped. (2) For variable-speed engines without low-speed governors, you may omit torque and power points...

  9. Particle acceleration due to shocks in the interplanetary field: High time resolution data and simulation results

    NASA Technical Reports Server (NTRS)

    Kessel, R. L.; Armstrong, T. P.; Nuber, R.; Bandle, J.

    1985-01-01

    Data were examined from two experiments aboard the Explorer 50 (IMP 8) spacecraft. The Johns Hopkins University Applied Physics Laboratory Charged Particle Measurement Experiment (CPME) provides 10.12-second resolution ion and electron count rates as well as 5.5-minute or longer averages of the same, with data sampled in the ecliptic plane. The high time resolution of the data allows for an explicit, point-by-point merging of the magnetic field and particle data and thus a close examination of the pre- and post-shock conditions and particle fluxes associated with large-angle oblique shocks in the interplanetary field. A computer simulation has been developed wherein sample particle trajectories, taken from observed fluxes, are allowed to interact with a planar shock either forward or backward in time. One event, the 1974 Day 312 shock, is examined in detail.

  10. Gender differences in the long-term associations between posttraumatic stress disorder and depression symptoms: findings from the Detroit Neighborhood Health Study.

    PubMed

    Horesh, Danny; Lowe, Sarah R; Galea, Sandro; Uddin, Monica; Koenen, Karestan C

    2015-01-01

    Posttraumatic stress disorder (PTSD) and depression are known to be highly comorbid. However, previous findings regarding the nature of this comorbidity have been inconclusive. This study prospectively examined whether PTSD and depression are distinct constructs in an epidemiologic sample, as well as assessed the directionality of the PTSD-depression association across time. Nine hundred and forty-two Detroit residents (males: n = 387; females: n = 555) were interviewed by phone at three time points, 1 year apart. At each time point, they were assessed for PTSD (using the PCL-C), depression (PHQ-9), trauma exposure, and stressful life events. First, a confirmatory factor analysis showed PTSD and depression to be two distinct factors at all three waves of assessments (W1, W2, and W3). Second, chi-square analysis detected significant differences between observed and expected rates of comorbidity at each time point, with significantly more no-disorder and comorbid cases, and significantly fewer PTSD only and depression only cases, than would be expected by chance alone. Finally, a cross-lagged analysis revealed a bidirectional association between PTSD and depression symptoms across time for the entire sample, as well as for women separately, wherein PTSD symptoms at an early wave predicted later depression symptoms, and vice versa. For men, however, only the paths from PTSD symptoms to subsequent depression symptoms were significant. Across time, PTSD and depression are distinct, but correlated, constructs among a highly-exposed epidemiologic sample. Women and men differ in both the risk of these conditions, and the nature of the long-term associations between them. © 2014 Wiley Periodicals, Inc.

  11. An elevated neutrophil-lymphocyte ratio is associated with adverse outcomes following single time-point paracetamol (acetaminophen) overdose: a time-course analysis.

    PubMed

    Craig, Darren G; Kitto, Laura; Zafar, Sara; Reid, Thomas W D J; Martin, Kirsty G; Davidson, Janice S; Hayes, Peter C; Simpson, Kenneth J

    2014-09-01

    The innate immune system is profoundly dysregulated in paracetamol (acetaminophen)-induced liver injury. The neutrophil-lymphocyte ratio (NLR) is a simple bedside index with prognostic value in a number of inflammatory conditions. To evaluate the prognostic accuracy of the NLR in patients with significant liver injury following single time-point and staggered paracetamol overdoses. Time-course analysis of 100 single time-point and 50 staggered paracetamol overdoses admitted to a tertiary liver centre. Timed laboratory samples were correlated with time elapsed after overdose or admission, respectively, and the NLR was calculated. A total of 49/100 single time-point patients developed hepatic encephalopathy (HE). Median NLRs were higher at both 72 (P=0.0047) and 96 h after overdose (P=0.0041) in single time-point patients who died or were transplanted. Maximum NLR values by 96 h were associated with increasing HE grade (P=0.0005). An NLR of more than 16.7 during the first 96 h following overdose was independently associated with the development of HE [odds ratio 5.65 (95% confidence interval 1.67-19.13), P=0.005]. Maximum NLR values by 96 h were strongly associated with the requirement for intracranial pressure monitoring (P<0.0001), renal replacement therapy (P=0.0002) and inotropic support (P=0.0005). In contrast, in the staggered overdose cohort, the NLR was not associated with adverse outcomes or death/transplantation either at admission or subsequently. The NLR is a simple test which is strongly associated with adverse outcomes following single time-point, but not staggered, paracetamol overdoses. Future studies should assess the value of incorporating the NLR into existing prognostic and triage indices of single time-point paracetamol overdose.

  12. Protocol for monitoring forest-nesting birds in National Park Service parks

    USGS Publications Warehouse

    Dawson, Deanna K.; Efford, Murray G.

    2013-01-01

    These documents detail the protocol for monitoring forest-nesting birds in National Park Service parks in the National Capital Region Network (NCRN). In the first year of sampling, counts of birds should be made at 384 points on the NCRN spatially randomized grid, developed to sample terrestrial resources. Sampling should begin on or about May 20 and continue into early July; on each day the sampling period begins at sunrise and ends five hours later. Each point should be counted twice, once in the first half of the field season and once in the second half, with visits made by different observers, balancing the within-season coverage of points and their spatial coverage by observers, and allowing observer differences to be tested. Three observers, skilled in identifying birds of the region by sight and sound and with previous experience in conducting timed counts of birds, will be needed for this effort. Observers should be randomly assigned to ‘routes’ consisting of eight points, in close proximity and, ideally, in similar habitat, that can be covered in one morning. Counts are 10 minutes in length, subdivided into four 2.5-min intervals. Within each time interval, new birds (i.e., those not already detected) are recorded as within or beyond 50 m of the point, based on where first detected. Binomial distance methods are used to calculate annual estimates of density for species. The data are also amenable to estimation of abundance and detection probability via the removal method. Generalized linear models can be used to assess between-year changes in density estimates or unadjusted count data. This level of sampling is expected to be sufficient to detect a 50% decline in 10 years for approximately 50 bird species, including 14 of 19 species that are priorities for conservation efforts, if analyses are based on unadjusted count data, and for 30 species (6 priority species) if analyses are based on density estimates. The estimates of required sample sizes are based on the mean number of individuals detected per 10 minutes in available data from surveys in three NCRN parks. Once network-wide data from the first year of sampling are available, this and other aspects of the protocol should be re-assessed, and changes made as desired or necessary before the start of the second field season. Thereafter, changes should not be made to the field methods, and sampling should be conducted annually for at least ten years. NCRN staff should keep apprised of new analytical methods developed for analysis of point-count data.
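
    The four 2.5-min count intervals support a removal estimator of detection probability, one of the analyses named above. The sketch below is a generic geometric removal model with a constant per-interval detection probability, an illustrative assumption rather than the NCRN protocol's specified analysis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def removal_estimate(interval_counts):
    """MLE of per-interval detection probability p under a geometric removal
    model, P(first detected in interval j) = p * (1 - p)**(j - 1),
    plus the implied estimate of birds present."""
    counts = np.asarray(interval_counts, dtype=float)
    J, n = len(counts), counts.sum()

    def neg_loglik(p):
        probs = p * (1 - p) ** np.arange(J)  # cell probabilities, j = 1..J
        probs /= probs.sum()                 # condition on being detected
        return -np.sum(counts * np.log(probs))

    p = minimize_scalar(neg_loglik, bounds=(1e-6, 1 - 1e-6),
                        method="bounded").x
    p_total = 1 - (1 - p) ** J               # detected at least once
    return p, n / p_total                    # (p per interval, abundance)

# Example with hypothetical counts from the four intervals of one point count
p_hat, n_hat = removal_estimate([30, 12, 6, 2])
```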

  13. Rethinking Timing of First Sex and Delinquency

    ERIC Educational Resources Information Center

    Harden, K. Paige; Mendle, Jane; Hill, Jennifer E.; Turkheimer, Eric; Emery, Robert E.

    2008-01-01

    The relation between timing of first sex and later delinquency was examined using a genetically informed sample of 534 same-sex twin pairs from the National Longitudinal Study of Adolescent Health, who were assessed at three time points over a 7-year interval. Genetic and environmental differences between families were found to account for the…

  14. Collective efficacy: How is it conceptualized, how is it measured, and does it really matter for understanding perceived neighborhood crime and disorder?

    PubMed Central

    Hipp, John R.

    2016-01-01

    Building on the insights of the self-efficacy literature, this study highlights that collective efficacy is a collective perception that comes from a process. This study emphasizes that 1) there is updating, as there are feedback effects from success or failure by the group to the perception of collective efficacy, and 2) this updating raises the importance of accounting for members' degree of uncertainty regarding neighborhood collective efficacy. Using a sample of 113 block groups in three rural North Carolina counties, this study finds evidence of updating as neighborhoods perceiving more crime or disorder reported less collective efficacy at the next time point. Furthermore, collective efficacy was only associated with lower perceived disorder at the next time point when it occurred in highly cohesive neighborhoods. Finally, neighborhoods with more perceived disorder and uncertainty regarding collective efficacy at one time point had lower levels of collective efficacy at the next time point, illustrating the importance of uncertainty along with updating. PMID:27069285

  15. Power of tests for comparing trend curves with application to national immunization survey (NIS).

    PubMed

    Zhao, Zhen

    2011-02-28

    To develop statistical tests for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and compare statistical power of the proposed tests under different trend curves data, three statistical tests were proposed. For large sample size with independent normal assumption among strata and across consecutive time points, the Z and Chi-square test statistics were developed, which are functions of outcome estimates and the standard errors at each of the study time points for the two strata. For small sample size with independent normal assumption, the F-test statistic was generated, which is a function of sample size of the two strata and estimated parameters across study period. If two trend curves are approximately parallel, the power of Z-test is consistently higher than that of both Chi-square and F-test. If two trend curves cross at low interaction, the power of Z-test is higher than or equal to the power of both Chi-square and F-test; however, at high interaction, the powers of Chi-square and F-test are higher than that of Z-test. The measurement of interaction of two trend curves was defined. These tests were applied to the comparison of trend curves of vaccination coverage estimates of standard vaccine series with National Immunization Survey (NIS) 2000-2007 data. Copyright © 2011 John Wiley & Sons, Ltd.
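
    The abstract describes the Z and Chi-square statistics only as functions of the per-time-point estimates and standard errors. One plausible construction, assuming independence across strata and time points and not necessarily the exact statistics of the paper, is sketched below; it matches the stated behaviour (the pooled Z is strongest for near-parallel curves, the per-point Chi-square for strongly crossing ones).

```python
import numpy as np
from scipy import stats

def z_test(est1, se1, est2, se2):
    """Pooled Z across time points: powerful when curves are near-parallel."""
    d = np.asarray(est1) - np.asarray(est2)
    var = np.asarray(se1) ** 2 + np.asarray(se2) ** 2
    z = d.sum() / np.sqrt(var.sum())
    return z, 2 * stats.norm.sf(abs(z))

def chi2_test(est1, se1, est2, se2):
    """Per-point Chi-square: sensitive when curves cross with interaction."""
    d = np.asarray(est1) - np.asarray(est2)
    var = np.asarray(se1) ** 2 + np.asarray(se2) ** 2
    x2 = np.sum(d ** 2 / var)
    return x2, stats.chi2.sf(x2, df=len(d))
```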

  16. A hierarchical model combining distance sampling and time removal to estimate detection probability during avian point counts

    USGS Publications Warehouse

    Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.

    2014-01-01

    Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point-within-transect and park-level effect. Our results suggest that this model can provide insight into the detection process during avian surveys and reduce bias in estimates of relative abundance but is best applied to surveys of species with greater availability (e.g., breeding songbirds).

  17. A dryer for rapid response on-line expired gas measurements.

    PubMed

    Deno, N S; Kamon, E

    1979-06-01

    A dryer is described for use in on-line breath-by-breath gas analysis systems. The dryer continuously removes water vapor by condensation and controls the sample gas at 2 degrees C dew-point temperature or 5 Torr water vapor partial pressure. It is designed to operate at gas sampling flow rates from 0.5 to 1 l.min-1. The step-response time for the described system including a Beckman LB-2 CO2 analyzer, sampling tubing, and dryer is 120 ms at 1 l.min-1. The time required for gas samples to transport through the dryer is 105 ms at a gas sampling flow rate of 1 l.min-1.

  18. Imaging systems and algorithms to analyze biological samples in real-time using mobile phone microscopy.

    PubMed

    Shanmugam, Akshaya; Usmani, Mohammad; Mayberry, Addison; Perkins, David L; Holcomb, Daniel E

    2018-01-01

    Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate versatility of the systems, real time analysis and post-processing results of the sample count and sample size are presented in both still images and videos of flowing samples.

  19. An evaluation of potential sampling locations in a reservoir with emphasis on conserved spatial correlation structure.

    PubMed

    Yenilmez, Firdes; Düzgün, Sebnem; Aksoy, Aysegül

    2015-01-01

    In this study, kernel density estimation (KDE) was coupled with ordinary two-dimensional kriging (OK) to reduce the number of sampling locations in measurement and kriging of dissolved oxygen (DO) concentrations in Porsuk Dam Reservoir (PDR), with the goal of conserving the spatial correlation structure of the DO distribution. KDE was used as a tool to aid in identifying the sampling locations that would be removed from the sampling network in order to decrease the total number of samples. Accordingly, several networks were generated in which sampling locations were reduced from 65 to 10, in increments of 4 or 5 points at a time, based on kernel density maps. DO variograms were constructed, and DO values in PDR were kriged. The performance of the networks in estimating DO was evaluated through various error metrics, standard error maps (SEM), and whether the spatial correlation structure was conserved. Results indicated that a smaller number of sampling points resulted in a loss of information about the spatial correlation structure of DO. The minimum representative number of sampling points for PDR was 35. The efficacy of the sampling location selection method was tested against networks generated by experts. It was shown that the evaluation approach proposed in this study provided a better sampling network design, one in which the spatial correlation structure of DO was sustained for kriging.
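
    One simple way to operationalize KDE-guided thinning is to repeatedly drop the station with the highest estimated station density, so removals come from the most over-sampled parts of the network first. The greedy sketch below is an assumption about the workflow; it omits the variogram and kriging checks the study used to validate each reduced network.

```python
import numpy as np
from scipy.stats import gaussian_kde

def thin_by_kde(xy, n_remove):
    """Greedily remove the sampling location sitting in the densest part of
    the network, recomputing the KDE after each removal."""
    pts = [tuple(p) for p in xy]
    for _ in range(n_remove):
        arr = np.array(pts).T              # gaussian_kde wants shape (2, n)
        density = gaussian_kde(arr)(arr)   # density evaluated at each station
        pts.pop(int(np.argmax(density)))   # drop the most redundant station
    return np.array(pts)
```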

  20. [Demonstration plan used in the study of human reproduction in the district of Sao Paulo. 1967].

    PubMed

    Silva, Eunice Pinho de Castro

    2006-10-01

    This work presents the sampling procedure used to select the sample for a "Human Reproduction Study in the District of São Paulo" (Brazil), conducted by the Department of Applied Statistics of the "Faculdade de Higiene e Saúde Pública da Universidade de São Paulo". The procedure was designed to cope with limitations of cost and time and with the lack of a sampling frame from which a probability sample could be drawn within the fixed schedule and budget. It consisted of two-stage sampling with dwelling units as primary units and women as secondary units. At the first stage, stratified sampling was used, with sub-districts taken as strata. To select primary units, "starting points" were selected on the maps of the sub-districts by a procedure similar to the so-called "square grid" method, though differing from it in several respects. Fixed rules established a correspondence between each selected starting point and a set of three dwelling units in which at least one woman of the target population lived. In selected dwelling units where more than one woman of the target population lived, sub-sampling was used to select one of them, each woman living in the dwelling unit having an equal probability of selection. Several "no-answer" cases and the corresponding instructions to be followed by the interviewers are also presented.

  1. Human blood metabolite timetable indicates internal body time

    PubMed Central

    Kasukawa, Takeya; Sugimoto, Masahiro; Hida, Akiko; Minami, Yoichi; Mori, Masayo; Honma, Sato; Honma, Ken-ichi; Mishima, Kazuo; Soga, Tomoyoshi; Ueda, Hiroki R.

    2012-01-01

    A convenient way to estimate internal body time (BT) is essential for chronotherapy and time-restricted feeding, both of which use body-time information to maximize potency and minimize toxicity during drug administration and feeding, respectively. Previously, we proposed a molecular timetable based on circadian-oscillating substances in multiple mouse organs or blood to estimate internal body time from samples taken at only a few time points. Here we applied this molecular-timetable concept to estimate and evaluate internal body time in humans. We constructed a 1.5-d reference timetable of oscillating metabolites in human blood samples with 2-h sampling frequency while simultaneously controlling for the confounding effects of activity level, light, temperature, sleep, and food intake. By using this metabolite timetable as a reference, we accurately determined internal body time within 3 h from just two anti-phase blood samples. Our minimally invasive, molecular-timetable method with human blood enables highly optimized and personalized medicine. PMID:22927403
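
    A toy version of the molecular-timetable idea: given z-scored levels of many oscillating metabolites from two samples taken 12 h apart, grid-search the body time that best matches each metabolite's reference peak phase. The cosine model, the least-squares criterion, and all names below are illustrative simplifications of the published method.

```python
import numpy as np

def estimate_body_time(peak_hours, z1, z2, dt=12.0, period=24.0):
    """Grid-search the body time of sample 1 given z-scored metabolite levels
    z1 and z2 from two samples dt hours apart; peak_hours holds each
    metabolite's reference peak time from the timetable."""
    peak = np.asarray(peak_hours, dtype=float)
    grid = np.linspace(0.0, period, 1441)  # 1-min resolution over one day
    def cost(b):
        pred1 = np.cos(2 * np.pi * (b - peak) / period)
        pred2 = np.cos(2 * np.pi * (b + dt - peak) / period)
        return np.sum((z1 - pred1) ** 2) + np.sum((z2 - pred2) ** 2)
    return grid[int(np.argmin([cost(b) for b in grid]))]
```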

  2. Limnology of Blue Mesa, Morrow Point, and Crystal Reservoirs, Curecanti National Recreation area, during 1999, and a 25-year retrospective of nutrient conditions in Blue Mesa Reservoir, Colorado

    USGS Publications Warehouse

    Bauch, Nancy J.; Malick, Matt

    2003-01-01

    The U.S. Geological Survey and the National Park Service conducted a water-quality investigation in Curecanti National Recreation Area in Colorado from April through December 1999. Current (as of 1999) limnological characteristics, including nutrients, phytoplankton, chlorophyll-a, trophic status, and the water quality of stream inflows and reservoir outflows, of Blue Mesa, Morrow Point, and Crystal Reservoirs were assessed, and a 25-year retrospective of nutrient conditions in Blue Mesa Reservoir was conducted. The three reservoirs are in a series on the Gunnison River, with an upstream to downstream order of Blue Mesa, Morrow Point, and Crystal Reservoirs. Physical properties and water-quality samples were collected four times during 1999 from reservoir, inflow, and outflow sites in and around the recreation area. Samples were analyzed for nutrients, phytoplankton and chlorophyll-a (reservoir sites only), and suspended sediment (stream inflows only). Nutrient concentrations in the reservoirs were low; median total nitrogen and phosphorus concentrations were less than 0.4 and 0.06 milligram per liter, respectively. During water-column stratification, samples collected at depth had higher nutrient concentrations than photic-zone samples. Phytoplankton community and density were affected by water temperature, nutrients, and water residence time. Diatoms were the dominant phytoplankton throughout the year in Morrow Point and Crystal Reservoirs and during spring and early winter in Blue Mesa Reservoir. Blue-green algae were dominant in Blue Mesa Reservoir during summer and fall. Phytoplankton density was highest in Blue Mesa Reservoir and lowest in Crystal Reservoir. Longer residence times and warmer temperatures in Blue Mesa Reservoir were favorable for phytoplankton growth and development. Shorter residence times and cooler temperatures in the downstream reservoirs probably limited phytoplankton growth and development. Median chlorophyll-a concentrations were higher in Blue Mesa Reservoir than Morrow Point or Crystal Reservoirs. Blue Mesa Reservoir was mesotrophic in upstream areas and oligotrophic downstream. Both Morrow Point and Crystal Reservoirs were oligotrophic. Trophic-state index values were determined for total phosphorus, chlorophyll-a, and Secchi depth for each reservoir by the Carlson method; all values ranged between 29 and 55. Only the upstream areas in Blue Mesa Reservoir had total phosphorus and chlorophyll-a indices above 50, reflecting mesotrophic conditions. Nutrient inflows to Blue Mesa Reservoir, which were derived primarily from the Gunnison River, varied on a seasonal basis, whereas nutrient inflows to Morrow Point and Crystal Reservoirs, which were derived primarily from deep water releases from the respective upstream reservoir, were steady throughout the sampling period. Total phosphorus concentrations were elevated in many stream inflows. A comparison of current (as of 1999) and historical nutrient, chlorophyll-a, and trophic conditions in Blue Mesa Reservoir and its tributaries indicated that the trophic status in Blue Mesa Reservoir has not changed over the last 25 years, and more recent nutrient enrichment has not occurred.

  3. Coalescent Inference Using Serially Sampled, High-Throughput Sequencing Data from Intrahost HIV Infection

    PubMed Central

    Dialdestoro, Kevin; Sibbesen, Jonas Andreas; Maretty, Lasse; Raghwani, Jayna; Gall, Astrid; Kellam, Paul; Pybus, Oliver G.; Hein, Jotun; Jenkins, Paul A.

    2016-01-01

    Human immunodeficiency virus (HIV) is a rapidly evolving pathogen that causes chronic infections, so genetic diversity within a single infection can be very high. High-throughput “deep” sequencing can now measure this diversity in unprecedented detail, particularly since it can be performed at different time points during an infection, and this offers a potentially powerful way to infer the evolutionary dynamics of the intrahost viral population. However, population genomic inference from HIV sequence data is challenging because of high rates of mutation and recombination, rapid demographic changes, and ongoing selective pressures. In this article we develop a new method for inference using HIV deep sequencing data, using an approach based on importance sampling of ancestral recombination graphs under a multilocus coalescent model. The approach further extends recent progress in the approximation of so-called conditional sampling distributions, a quantity of key interest when approximating coalescent likelihoods. The chief novelties of our method are that it is able to infer rates of recombination and mutation, as well as the effective population size, while handling sampling over different time points and missing data without extra computational difficulty. We apply our method to a data set of HIV-1, in which several hundred sequences were obtained from an infected individual at seven time points over 2 years. We find mutation rate and effective population size estimates to be comparable to those produced by the software BEAST. Additionally, our method is able to produce local recombination rate estimates. The software underlying our method, Coalescenator, is freely available. PMID:26857628

  4. A robust method of thin plate spline and its application to DEM construction

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Li, Yanyan

    2012-11-01

    In order to avoid the ill-conditioning problem of thin plate spline (TPS) interpolation, the orthogonal least squares (OLS) method was introduced and a modified OLS (MOLS) was developed. The MOLS version of TPS (TPS-M) can not only select significant points, termed knots, from large and dense sampling data sets, but also compute the weights of the knots easily by back-substitution. For interpolating large numbers of sampling points, we developed a local TPS-M, in which some neighboring sampling points around the point being estimated are selected for computation. Numerical tests indicate that, irrespective of sampling noise level, the average performance of TPS-M is advantageous compared with smoothing TPS. For the same simulation accuracy, the computational time of TPS-M decreases with the increase of the number of sampling points. The smooth fitting results on lidar-derived noisy data indicate that TPS-M has an obvious smoothing effect, on par with smoothing TPS. The example of constructing a series of large-scale DEMs, located in Shandong province, China, was employed to comparatively analyze the estimation accuracies of the two versions of TPS and the classical interpolation methods, including inverse distance weighting (IDW), ordinary kriging (OK) and universal kriging with a second-order drift function (UK). Results show that regardless of sampling interval and spatial resolution, TPS-M is more accurate than the classical interpolation methods, except for smoothing TPS at the finest sampling interval of 20 m and the two versions of kriging at the spatial resolution of 15 m. In conclusion, TPS-M, which avoids the ill-conditioning problem, is considered a robust method for DEM construction.
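
    For reference, the classic global TPS solve that knot selection makes tractable: with kernel U(r) = r^2 log r and a linear polynomial part, the weights come from one dense symmetric system. The sketch below is the textbook formulation, not the MOLS/TPS-M algorithm itself; selecting a small knot subset first, as TPS-M does, is what keeps the system small and well conditioned.

```python
import numpy as np

def tps_kernel(r):
    """Thin plate spline radial basis U(r) = r^2 * log(r), with U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0.0, r * r * np.log(r), 0.0)

def fit_tps(pts, z, smooth=0.0):
    """Solve [[K + lam*I, P], [P.T, 0]] [w, a] = [z, 0], with P = [1, x, y]."""
    n = len(pts)
    K = tps_kernel(np.linalg.norm(pts[:, None] - pts[None, :], axis=2))
    P = np.hstack([np.ones((n, 1)), pts])
    A = np.block([[K + smooth * np.eye(n), P], [P.T, np.zeros((3, 3))]])
    coef = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    return coef[:n], coef[n:]              # radial weights w, affine part a

def eval_tps(pts, w, a, query):
    """Evaluate the fitted surface at query points."""
    K = tps_kernel(np.linalg.norm(query[:, None] - pts[None, :], axis=2))
    return K @ w + a[0] + query @ a[1:]
```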

  5. Simultaneous optimization of limited sampling points for pharmacokinetic analysis of amrubicin and amrubicinol in cancer patients.

    PubMed

    Makino, Yoshinori; Watanabe, Michiko; Makihara, Reiko Ando; Nokihara, Hiroshi; Yamamoto, Noboru; Ohe, Yuichiro; Sugiyama, Erika; Sato, Hitoshi; Hayashi, Yoshikazu

    2016-09-01

    Limited sampling points for both amrubicin (AMR) and its active metabolite amrubicinol (AMR-OH) were simultaneously optimized using Akaike's information criterion (AIC) calculated by pharmacokinetic modeling. In this pharmacokinetic study, 40 mg/m(2) of AMR was administered as a 5-min infusion on three consecutive days to 21 Japanese lung cancer patients. Blood samples were taken at 0, 0.08, 0.25, 0.5, 1, 2, 4, 8 and 24 h after drug infusion, and AMR and AMR-OH concentrations in plasma were quantitated using high-performance liquid chromatography. The pharmacokinetic profile of AMR was characterized using a three-compartment model and that of AMR-OH using a one-compartment model following a first-order absorption process. These pharmacokinetic profiles were then integrated into one pharmacokinetic model for simultaneous fitting of AMR and AMR-OH. After fitting to the pharmacokinetic model, 65 combinations of four sampling points from the concentration profiles were evaluated for their AICs. Stepwise regression analysis was applied to select the sampling points for AMR and AMR-OH that best predict the area under the concentration-time curves (AUCs). Of the three combinations that yielded favorable AIC values, 0.25, 2, 4 and 8 h yielded the best AUC prediction for both AMR (R(2) = 0.977) and AMR-OH (R(2) = 0.886). The prediction error for AUC was less than 15%. The optimal limited sampling points for AMR and AMR-OH after AMR infusion were found to be 0.25, 2, 4 and 8 h, enabling less frequent blood sampling in further expanded pharmacokinetic studies of both AMR and AMR-OH. © 2016 John Wiley & Sons Australia, Ltd.
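
    A sketch of the subset-selection logic: fit a simple model to each candidate combination of four sampling times and rank the combinations by AIC. The Bateman-type model, the RSS-based AIC, and the per-subject fitting below are deliberate simplifications of the paper's integrated parent-metabolite model, and the names are illustrative.

```python
import itertools
import numpy as np
from scipy.optimize import curve_fit

def bateman(t, ka, ke, a):
    """One-compartment concentration curve with first-order absorption."""
    return a * (np.exp(-ke * t) - np.exp(-ka * t))

def aic_for_subset(t_all, c_all, subset):
    """Fit the model on a candidate subset of sampling times; return AIC."""
    idx = list(subset)
    t, c = t_all[idx], c_all[idx]
    popt, _ = curve_fit(bateman, t, c, p0=[2.0, 0.1, 5.0], maxfev=10000)
    rss = float(np.sum((c - bateman(t, *popt)) ** 2))
    n, k = len(t), len(popt)
    return n * np.log(rss / n + 1e-12) + 2 * k   # guard against rss == 0

def best_four_points(t_all, c_all):
    """Enumerate 4-point subsets (the study pre-screened these to 65)."""
    subsets = itertools.combinations(range(len(t_all)), 4)
    return min(subsets, key=lambda s: aic_for_subset(t_all, c_all, s))
```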

  6. Birth-death models and coalescent point processes: the shape and probability of reconstructed phylogenies.

    PubMed

    Lambert, Amaury; Stadler, Tanja

    2013-12-01

    Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
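
    The CPP construction described above is straightforward to simulate: vertical edges are added one at a time, with node depths drawn i.i.d., until a depth exceeds the stem age. A minimal sketch, where the exponential depth distribution is only an example:

```python
import numpy as np

def simulate_cpp(stem_age, draw_depth, rng=None):
    """Simulate coalescent point process node depths: i.i.d. draws are kept
    until one exceeds stem_age; the resulting tree has len(depths) + 1 tips."""
    rng = rng or np.random.default_rng()
    depths = []
    while True:
        h = draw_depth(rng)
        if h > stem_age:          # stopping rule: a depth past the root
            return depths
        depths.append(h)

# Example: exponential node-depth distribution
depths = simulate_cpp(10.0, lambda rng: rng.exponential(2.0))
n_tips = len(depths) + 1
```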

  7. An innovative procedure to assess multi-scale temporal trends in groundwater quality: Example of the nitrate in the Seine-Normandy basin, France

    NASA Astrophysics Data System (ADS)

    Lopez, Benjamin; Baran, Nicole; Bourgine, Bernard

    2015-03-01

    The European Water Framework Directive (WFD) asks Member States to identify trends in contaminant concentrations in groundwater and to take measures to reach a good chemical status by 2015. In this study, carried out in a large hydrological basin (95,300 km2), an innovative procedure is described for the assessment of recent trends in groundwater nitrate concentrations both at sampling point and regional scales. Temporal variograms of piezometric and nitrate concentration time series are automatically calculated and fitted in order to classify groundwater according to their temporal pattern. These results are then coupled with aquifer lithology to map spatial units within which the modes of diffuse transport of contaminants towards groundwater are assumed to be the same at all points. These spatial units are suitable for evaluating regional trends. The stability over time of the time series is tested based on the cumulative sum principle, to determine the time period during which the trend should be sought. The Mann-Kendall and Regional-Kendall nonparametric tests for monotonic trends, coupled with the Sen-slope test, are applied to the periods following the point breaks thus determined at both the sampling point or regional scales. This novel procedure is robust and enables rapid processing of large databases of raw data. It would therefore be useful for managing groundwater quality in compliance with the aims of the WFD.
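
    The trend machinery named above (Mann-Kendall with the Sen slope) is standard; below is a minimal per-sampling-point version, without the tie and autocorrelation corrections a production analysis of groundwater series would need.

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Mann-Kendall S statistic and two-sided p-value (normal approximation,
    continuity-corrected, no tie correction)."""
    x = np.asarray(x)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, 2 * stats.norm.sf(abs(z))

def sen_slope(t, x):
    """Sen's slope: the median of all pairwise slopes, a robust trend size."""
    t, x = np.asarray(t, float), np.asarray(x, float)
    slopes = [(x[j] - x[i]) / (t[j] - t[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if t[j] != t[i]]
    return np.median(slopes)
```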

  8. Methods for measuring populations of small, diurnal forest birds.

    Treesearch

    D.A. Manuwal; A.B. Carey

    1991-01-01

    Before a bird population is measured, the objectives of the study should be clearly defined. Important factors to be considered in designing a study are study site selection, plot size or transect length, distance between sampling points, duration of counts, and frequency and timing of sampling. Qualified field personnel are especially important. Assumptions applying...

  9. Sex-Specific Associations between Umbilical Cord Blood Testosterone Levels and Language Delay in Early Childhood

    ERIC Educational Resources Information Center

    Whitehouse, Andrew J. O.; Mattes, Eugen; Maybery, Murray T.; Sawyer, Michael G.; Jacoby, Peter; Keelan, Jeffrey A.; Hickey, Martha

    2012-01-01

    Background: Preliminary evidence suggests that prenatal testosterone exposure may be associated with language delay. However, no study has examined a large sample of children at multiple time-points. Methods: Umbilical cord blood samples were obtained at 861 births and analysed for bioavailable testosterone (BioT) concentrations. When…

  10. Language skills of children during the first 12 months after stuttering onset.

    PubMed

    Watts, Amy; Eadie, Patricia; Block, Susan; Mensah, Fiona; Reilly, Sheena

    2017-03-01

    The language development of a sample of young children who stutter during the first 12 months after reported stuttering onset is described. Language production was analysed in a sample of 66 children who stuttered (aged 2-4 years), identified from a pre-existing prospective, community-based longitudinal cohort. Data were collected at three time points within the first year after stuttering onset. Stuttering severity was measured, and global indicators of expressive language proficiency (length of utterances and grammatical complexity) were derived from the samples and summarised. The language production abilities of the children who stutter were contrasted with normative data. The majority of the children's stuttering was rated as mild in severity, with more than 83% of participants demonstrating very mild or mild stuttering at each of the time points studied. The participants demonstrated developmentally appropriate spoken language skills comparable with available normative data. In the first year following the report of stuttering onset, the language skills of the children who were stuttering progressed in a manner consistent with developmental expectations. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Vibration Pattern Imager (VPI): A control and data acquisition system for scanning laser vibrometers

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Brown, Donald E.; Shaffer, Thomas A.

    1993-01-01

    The Vibration Pattern Imager (VPI) system was designed to control and acquire data from scanning laser vibrometer sensors. The PC computer based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor, but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. The sensor itself is not part of the VPI system. A graphical interface program, which runs on a PC under the MS-DOS operating system, functions in an interactive mode and communicates with the DSP and I/O boards in a user-friendly fashion through the aid of pop-up menus. Two types of data may be acquired with the VPI system: single point or 'full field.' In the single point mode, time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and is stored by the PC. The position of the measuring point (adjusted by mirrors in the sensor) is controlled via a mouse input. The mouse input is translated to output voltages by the D/A converter on the I/O board to control the mirror servos. In the 'full field' mode, the measurement point is moved over a user-selectable rectangular area. The time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and converted to a root-mean-square (rms) value by the DSP board. The rms 'full field' velocity distribution is then uploaded for display and storage on the PC.

  12. Validation of two dilution models to predict chloramine-T concentrations in aquaculture facility effluent

    USGS Publications Warehouse

    Gaikowski, M.P.; Larson, W.J.; Steuer, J.J.; Gingerich, W.H.

    2004-01-01

    Accurate estimates of drug concentrations in hatchery effluent are critical to assess the environmental risk of hatchery drug discharge resulting from disease treatment. This study validated two simple dilution models for estimating chloramine-T environmental introduction concentrations by comparing measured and predicted chloramine-T concentrations, using the US Geological Survey's Upper Midwest Environmental Sciences Center aquaculture facility effluent as an example. The hydraulic characteristics of the treated raceway and effluent and the accuracy of the water flow rate measurements were confirmed with the marker dye rhodamine WT. The rhodamine WT data were also used to develop dilution models that would (1) estimate the chloramine-T concentration at a given time and location in the effluent system and (2) estimate the average chloramine-T concentration at a given location over the entire discharge period. To test the models, the chloramine-T concentration was predicted at two sample points based on effluent flow and the maintenance of chloramine-T at 20 mg/l for 60 min in the same raceway used with rhodamine WT. The effluent sample points selected (sample points A and B) represented 47 and 100% of the total effluent flow, respectively. Sample point B is analogous to the discharge of a hatchery that does not have a detention lagoon, i.e., a sample site downstream of the last dilution water addition following treatment. Four chloramine-T flow-through treatments at 20 mg/l for 60 min were then applied, and the chloramine-T concentration was measured in water samples collected every 15 min for about 180 min from the treated raceway and sample points A and B during and after application. The predicted chloramine-T concentration at each sampling interval was similar to the measured concentration at sample points A and B and was generally bounded by the measured 90% confidence intervals. The predicted average chloramine-T concentrations at sample points A and B (2.8 and 1.3 mg/l, respectively) were not significantly different (P > 0.05) from the average measured concentrations (2.7 and 1.3 mg/l, respectively). The close agreement between the predicted and measured chloramine-T concentrations indicates that either of the dilution models could be used to adequately predict the chloramine-T environmental introduction concentration in Upper Midwest Environmental Sciences Center effluent. (C) 2003 Elsevier B.V. All rights reserved.
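
    The dilution models rest on a steady-state mass balance: the concentration at a downstream point is the treated-raceway concentration scaled by the ratio of raceway flow to total flow. The sketch below, with hypothetical flow values, gives only the instantaneous plateau concentration; the paper's models additionally account for timing and averaging over the discharge period.

```python
def downstream_concentration(c_raceway, q_raceway, q_dilution):
    """Steady-state mixing: mass flux in divided by total flow out."""
    return c_raceway * q_raceway / (q_raceway + q_dilution)

# Hypothetical flows: a raceway treated at 20 mg/l, diluted by six raceway
# volumes per unit time of untreated water
print(downstream_concentration(20.0, 1.0, 6.0))   # about 2.86 mg/l
```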

  13. Trajectories of Affective States in Adolescent Hockey Players: Turning Point and Motivational Antecedents

    ERIC Educational Resources Information Center

    Gaudreau, Patrick; Amiot, Catherine E.; Vallerand, Robert J.

    2009-01-01

    This study examined longitudinal trajectories of positive and negative affective states with a sample of 265 adolescent elite hockey players followed across 3 measurement points during the 1st 11 weeks of a season. Latent class growth modeling, incorporating a time-varying covariate and a series of predictors assessed at the onset of the season,…

  14. Scanning electron microscopy, X-ray diffraction and thermal analysis study of the TiH2 foaming agent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandrino, Djordje, E-mail: djordje.mandrino@imt.si; Paulin, Irena; Skapin, Sreco D.

    2012-10-15

    The decomposition of commercially available TiH2 was investigated while performing different thermal treatments. TiH2 powder, which is widely used as a foaming agent, was heat treated at 450 °C for various times, from 15 min to 120 min. Scanning electron microscopy (SEM) images of the surfaces at different magnifications were obtained and interpreted. A Bragg-Brentano X-ray diffractometer was used to measure the X-ray diffraction (XRD) spectra of all five samples. A close examination of the diffraction spectra showed that for the as-received sample and the samples undergoing the longest thermal treatments (1 and 2 h) these spectra can be explained as deriving from cubic TiH1.924, while for the other two samples they can be explained as deriving from tetragonal TiH1.924. A constant-unit-cell-volume phase transition between the cubic and tetragonal phases in TiH2-y-type compounds had been described in the literature. The unit-cell parameters obtained from the measured spectra confirm that, within the measurement uncertainty, the unit-cell volume is indeed constant in all five samples. Thermo-gravimetry (TG) and differential thermal analysis (DTA) measurements were performed on all the samples, showing that the intensity of the dehydrogenation depends on the previous treatment of the TiH2. After the thermal analysis, XRD of the samples was performed again and the material was found to exhibit a Ti-like unit cell, slightly enlarged due to the unreleased hydrogen. Highlights: TiH2 samples were cubic or tetragonal TiH1.924. The onset of the hydrogen-release temperature increases with the pre-treatment time. Thermal dehydrogenation of the as-prepared TiH2 is a three-step process. After thermal analysis, two residual-hydrogen TiHx phases, close to αTi, appeared.

  15. Adaptive sampling of information in perceptual decision-making.

    PubMed

    Cassey, Thomas C; Evens, David R; Bogacz, Rafal; Marshall, James A R; Ludwig, Casimir J H

    2013-01-01

    In many perceptual and cognitive decision-making problems, humans sample multiple noisy information sources serially, and integrate the sampled information to make an overall decision. We derive the optimal decision procedure for two-alternative choice tasks in which the different options are sampled one at a time, sources vary in the quality of the information they provide, and the available time is fixed. To maximize accuracy, the optimal observer allocates time to sampling different information sources in proportion to their noise levels. We tested human observers in a corresponding perceptual decision-making task. Observers compared the direction of two random dot motion patterns that were triggered only when fixated. Observers allocated more time to the noisier pattern, in a manner that correlated with their sensory uncertainty about the direction of the patterns. There were several differences between the optimal observer predictions and human behaviour. These differences point to a number of other factors, beyond the quality of the currently available sources of information, that influence the sampling strategy.

  16. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    NASA Astrophysics Data System (ADS)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities, of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012) and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales.

    Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike.

    Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales, from diurnal and weekly to seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary on time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes and hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies.

    Space-based remote sensing offers the potential of global coverage by a single sensor. However, no single combination of orbit and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short-term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al., 2013).

    The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities. Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes, with daily to weekly revisits and moderate to high spatial resolution; extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and of the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period; these solutions typically have longer times between revisits, limiting their ability to resolve temporal variations. Geostationary and non-sun-synchronous low Earth orbits (precessing local solar time, diurnal information possible) with agile pointing have the potential to provide comprehensive mapping of distributed area sources such as megacities, with longer stare times and multiple revisits per day, at the expense of global access and spatial coverage.

    An ad hoc CO2 remote sensing constellation is emerging. NASA's OCO-2 satellite (launch July 2014) joins JAXA's GOSAT satellite in orbit. These will be followed by GOSAT-2 and NASA's OCO-3 on the International Space Station as early as 2017. Additional polar orbiting satellites (e.g., CarbonSat, under consideration at ESA) and geostationary platforms may also become available. However, the individual assets have been designed with independent science goals and requirements, and limited consideration of coordinated observing strategies. Every effort must be made to maximize the science return from this constellation. We discuss the opportunities to exploit the complementary spatial and temporal coverage provided by these assets as well as the crucial gaps in the capabilities of this constellation.

    References: Burton, M.R., Sawyer, G.M., and Granieri, D. (2013). Deep carbon emissions from volcanoes. Rev. Mineral. Geochem. 75: 323-354. Duren, R.M., and Miller, C.E. (2012). Measuring the carbon emissions of megacities. Nature Climate Change 2, 560-562. Schwandner, F.M., Oda, T., Duren, R., Carn, S.A., Maksyutov, S., Crisp, D., and Miller, C.E. (2013). Scientific Opportunities from Target-Mode Capabilities of GOSAT-2. NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena CA, White Paper, 6p., March 2013.

  17. Combined Vocal Exercises for Rehabilitation After Supracricoid Laryngectomy: Evaluation of Different Execution Times.

    PubMed

    Silveira, Hevely Saray Lima; Simões-Zenari, Marcia; Kulcsar, Marco Aurélio; Cernea, Claudio Roberto; Nemr, Kátia

    2017-10-27

    The supracricoid partial laryngectomy allows the preservation of laryngeal functions with good local cancer control. The aim was to assess laryngeal configuration and voice analysis data following the performance of a combination of two vocal exercises, the prolonged /b/ exercise combined with the vowel /e/ using chest and arm pushing, executed for different durations by individuals who have undergone supracricoid laryngectomy. Eleven patients who had undergone supracricoid partial laryngectomy with cricohyoidoepiglottopexy (CHEP) were evaluated using voice recordings. Four judges separately performed an auditory-perceptual analysis of the voices, presented in random order; for the analysis of intrajudge reliability, 70% of the voices were repeated. The intraclass correlation coefficient was used to analyze the reliability of the judges. For each judge, the comparison among baseline (time point 0) and the recordings after the first series of exercises (time point 1), the second series (time point 2), the third series (time point 3), the fourth series (time point 4), and the fifth and final series (time point 5) was made using the Friedman test with a significance level of 5%. The data relative to the configuration of the larynx were subjected to a descriptive analysis. The evaluation considered the results of judge 1, who showed the greatest reliability. There was an improvement in the overall grade of vocal deviation, roughness, and breathiness from time point 4 (T4). The prolonged /b/ vocal exercise, combined with the vowel /e/ using chest- and arm-pushing exercises, was associated with an improvement in the overall grade of vocal deviation, roughness, and breathiness starting at the fourth series among patients who had undergone supracricoid laryngectomy with CHEP reconstruction. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  18. Standard error of estimated average timber volume per acre under point sampling when trees are measured for volume on a subsample of all points.

    Treesearch

    Floyd A. Johnson

    1961-01-01

    This report assumes a knowledge of the principles of point sampling as described by Grosenbaugh, Bell and Alexander, and others. Whenever trees are counted at every point in a sample of points (large sample) and measured for volume at a portion (small sample) of these points, the sampling design could be called ratio double sampling. If the large...

  19. Ion-induced electron emission microscopy

    DOEpatents

    Doyle, Barney L.; Vizkelethy, Gyorgy; Weller, Robert A.

    2001-01-01

    An ion beam analysis system that creates multidimensional maps of the effects of high-energy ions from an unfocussed source upon a sample by correlating the exact entry point of an ion into the sample, obtained by projection imaging of the secondary electrons emitted at that point, with a signal from a detector that measures the interaction of that ion within the sample. The emitted secondary electrons are collected in a strong electric field perpendicular to the sample surface and (optionally) projected and refocused by the electron lenses found in a photoemission electron microscope, amplified by microchannel plates, and their exact position is then sensed by a very sensitive X-Y position detector. Position signals from this secondary electron detector are then correlated in time with nuclear, atomic or electrical effects, including the malfunction of digital circuits, detected within the sample and caused by the individual ion that created these secondary electrons in the first place.

  20. Study on the stability of adrenaline and on the determination of its acidity constants

    NASA Astrophysics Data System (ADS)

    Corona-Avendaño, S.; Alarcón-Angeles, G.; Rojas-Hernández, A.; Romero-Romo, M. A.; Ramírez-Silva, M. T.

    2005-01-01

    In this work, results are presented concerning the influence of time on the spectral behaviour of adrenaline (C9H13NO3) (AD) and the determination of its acidity constants by means of spectrophotometric titrations and point-by-point analysis, using for the latter freshly prepared samples for each analysis at every single pH. As catecholamines are sensitive to light, all samples were protected against it during the course of the experiments. Each method yielded four acidity constants, corresponding to the four acidic protons of the functional groups present in the molecule; for the point-by-point analysis the values found were: log β1 = 38.25 ± 0.21, log β2 = 29.65 ± 0.17, log β3 = 21.01 ± 0.14, log β4 = 11.34 ± 0.071.

  1. Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range

    NASA Technical Reports Server (NTRS)

    Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    We prove mathematically that, in order to avoid point-optimization at the sampled design points in multipoint airfoil optimization, the number of design points must be greater than the number of free design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free design variables. A comparison with other airfoil optimization methods is also included.

  2. Curvature-correction-based time-domain CMOS smart temperature sensor with an inaccuracy of -0.8 °C-1.2 °C after one-point calibration from -40 °C to 120 °C

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Chi; Lin, Shih-Hao; Lin, Yi

    2014-06-01

    This paper proposes a time-domain CMOS smart temperature sensor featuring on-chip curvature correction and one-point calibration support for thermal management systems. Time-domain inverter-based temperature sensors, which exhibit the advantages of low power and low cost, have been proposed for on-chip thermal monitoring. However, the curvature is large for the thermal transfer curve, which substantially affects the accuracy as the temperature range increases. Another problem is that the inverter is sensitive to process variations, resulting in difficulty for the sensors to achieve an acceptable accuracy for one-point calibration. To overcome these two problems, a temperature-dependent oscillator with curvature correction is proposed to increase the linearity of the oscillatory width, thereby resolving the drawback caused by a costly off-chip second-order master curve fitting. For one-point calibration support, an adjustable-gain time amplifier was adopted to eliminate the effect of process variations, with the assistance of a calibration circuit. The proposed circuit occupied a small area of 0.073 mm2 and was fabricated in a TSMC CMOS 0.35-μm 2P4M digital process. The linearization of the oscillator and the effect cancellation of process variations enabled the sensor, which featured a fixed resolution of 0.049 °C/LSB, to achieve an optimal inaccuracy of -0.8 °C to 1.2 °C after one-point calibration of 12 test chips from -40 °C to 120 °C. The power consumption was 35 μW at a sample rate of 10 samples/s.

  3. Exploring the Impact of Work Experience on Part-Time Students' Academic Success in Malaysian Polytechnics

    ERIC Educational Resources Information Center

    Ibrahim, Norhayati; Freeman, Steven A.; Shelley, Mack C.

    2012-01-01

    The study explored the influence of work experience on adult part-time students' academic success as defined by their cumulative grade point average. The sample consisted of 614 part-time students from four polytechnic institutions in Malaysia. The study identified six factors to measure the perceived influence of work experiences--positive…

  4. A comparison of four-sample slope-intercept and single-sample 51Cr-EDTA glomerular filtration rate measurements.

    PubMed

    Porter, Charlotte A; Bradley, Kevin M; McGowan, Daniel R

    2018-05-01

    The aim of this study was to verify, with a large dataset of 1394 51Cr-EDTA glomerular filtration rate (GFR) studies, the equivalence of slope-intercept and single-sample GFR. Raw data from 1394 patient studies were used to calculate four-sample slope-intercept GFR in addition to four individual single-sample GFR values (blood samples taken at 90, 150, 210 and 270 min after injection). The percentage differences between the four-sample slope-intercept and each of the single-sample GFR values were calculated, to identify the optimum single-sample time point. Having identified the optimum time point, the percentage difference between the slope-intercept and optimal single-sample GFR was calculated across a range of GFR values to investigate whether there was a GFR value below which the two methodologies cannot be considered equivalent. It was found that the lowest percentage difference between slope-intercept and single-sample GFR was for the third blood sample, taken at 210 min after injection. The median percentage difference was 2.5% and only 6.9% of patient studies had a percentage difference greater than 10%. Above a GFR value of 30 ml/min/1.73 m2, the median percentage difference between the slope-intercept and optimal single-sample GFR values was below 10%, and so it was concluded that, above this value, the two techniques are sufficiently equivalent. This study supports the recommendation of performing single-sample GFR measurements for GFRs greater than 30 ml/min/1.73 m2.
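
    A minimal sketch of the slope-intercept calculation follows, assuming a mono-exponential terminal phase: fit ln(concentration) against time for the four samples, back-extrapolate to t = 0, and form clearance as dose x k / C0. The dose and plasma values are hypothetical, and the body-surface-area normalisation and Brochner-Mortensen correction used in routine processing are omitted.

```python
# Slope-intercept GFR from four timed plasma samples. Dose and plasma values
# are hypothetical; body-surface-area normalisation and the
# Brochner-Mortensen correction of routine processing are omitted.
import numpy as np

dose_mbq = 3.0                                   # injected 51Cr-EDTA activity
t_min = np.array([90.0, 150.0, 210.0, 270.0])    # sampling times post-injection
conc = np.array([0.139, 0.110, 0.086, 0.068])    # plasma activity (MBq/L)

# Mono-exponential terminal phase: fit ln(concentration) against time.
slope, intercept = np.polyfit(t_min, np.log(conc), 1)
k = -slope                  # elimination rate constant (1/min)
c0 = np.exp(intercept)      # back-extrapolated concentration at t = 0

gfr_ml_min = dose_mbq * k / c0 * 1000.0          # clearance, L/min -> ml/min

# Percentage difference against a (hypothetical) single-sample estimate.
single_sample_gfr = 58.0
pct_diff = 100.0 * (single_sample_gfr - gfr_ml_min) / gfr_ml_min
print(f"slope-intercept GFR = {gfr_ml_min:.1f} ml/min; difference = {pct_diff:.1f}%")
```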

  5. Voronoi Tessellation for reducing the processing time of correlation functions

    NASA Astrophysics Data System (ADS)

    Cárdenas-Montes, Miguel; Sevilla-Noarbe, Ignacio

    2018-01-01

    The increase of data volume in Cosmology is motivating the search for new solutions to the difficulties associated with large processing times and the precision of calculations. This is especially true for several relevant statistics of the galaxy distribution of the Large Scale Structure of the Universe, namely the two- and three-point angular correlation functions. For these, the processing time has grown critically with the increase in the size of the data sample. Beyond parallel implementations to overcome the barrier of processing time, space partitioning algorithms are necessary to reduce the computational load. These can delimit the elements involved in the correlation function estimation to those that can potentially contribute to the final result. In this work, Voronoi Tessellation is used to reduce the processing time of the two-point and three-point angular correlation functions. The results of this proof-of-concept show a significant reduction of the processing time when preprocessing the galaxy positions with Voronoi Tessellation.
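
    The following sketch shows the pair-counting core of a two-point correlation estimate with a spatial partition limiting the search. It uses scipy's k-d tree as a stand-in for the paper's Voronoi preprocessing (both delimit which pairs can contribute), on a toy flat catalogue rather than real survey data.

```python
# Binned pair counts for a two-point correlation estimate, using scipy's
# k-d tree as the spatial partition (a stand-in for Voronoi preprocessing:
# both restrict which pairs are examined). Toy uniform catalogues, so the
# estimated correlation should be near zero; clustered data would give xi > 0.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, size=(5000, 2))   # toy "galaxy" positions
rand = rng.uniform(0.0, 1.0, size=(5000, 2))   # random comparison catalogue
r_bins = np.linspace(0.01, 0.10, 10)

def pair_counts(points, bins):
    tree = cKDTree(points)
    cum = np.array([tree.count_neighbors(tree, r) for r in bins], dtype=float)
    return np.diff(cum)                         # counts per radial bin

dd = pair_counts(data, r_bins)
rr = pair_counts(rand, r_bins)
xi = dd / rr - 1.0                              # natural (Peebles-Hauser) estimator
print(np.round(xi, 3))
```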

  6. Harmonised investigation of the occurrence of human enteric viruses in the leafy green vegetable supply chain in three European countries.

    PubMed

    Kokkinos, P; Kozyra, I; Lazic, S; Bouwknegt, M; Rutjes, S; Willems, K; Moloney, R; de Roda Husman, A M; Kaupke, A; Legaki, E; D'Agostino, M; Cook, N; Rzeżutka, A; Petrovic, T; Vantarakis, A

    2012-12-01

    Numerous outbreaks have been attributed to the consumption of raw or minimally processed leafy green vegetables contaminated with enteric viral pathogens. The aim of the present study was an integrated virological monitoring of the salad vegetable supply chain in Europe, from production and processing to point-of-sale. Samples were collected and analysed in Greece, Serbia and Poland, from 'general' and 'ad hoc' sampling points, which were perceived as critical points for virus contamination. General sampling points were identified through the analysis of background information questionnaires based on HACCP audit principles and were sampled on each sampling occasion, whereas ad hoc sampling points were identified during food safety fact-finding visits and were sampled only during those visits. Human (hAdV) and porcine (pAdV) adenovirus, hepatitis A (HAV) and E (HEV) virus, norovirus GI and GII (NoV) and bovine polyomavirus (bPyV) were detected by means of real-time (RT-) PCR-based protocols. General samples were positive for hAdV, pAdV, HAV, HEV, NoV GI, NoV GII and bPyV at 20.09% (134/667), 5.53% (13/235), 1.32% (4/304), 3.42% (5/146), 2% (6/299), 2.95% (8/271) and 0.82% (2/245), respectively. Ad hoc samples were positive for hAdV, pAdV, bPyV and NoV GI at 9% (3/33), 9% (2/22), 4.54% (1/22) and 7.14% (1/14), respectively. These results demonstrate the existence of viral contamination routes from human and animal sources to the salad vegetable supply chain and, more specifically, indicate the potential for public health risks due to virus contamination of leafy green vegetables at primary production.

  7. Virus-Induced Gene Silencing Identifies an Important Role of the TaRSR1 Transcription Factor in Starch Synthesis in Bread Wheat.

    PubMed

    Liu, Guoyu; Wu, Yufang; Xu, Mengjun; Gao, Tian; Wang, Pengfei; Wang, Lina; Guo, Tiancai; Kang, Guozhang

    2016-09-23

    The function of wheat starch regulator 1 (TaRSR1) in regulating the synthesis of grain storage starch was determined using the barley stripe mosaic virus-induced gene silencing (BSMV-VIGS) method in field experiments. Chlorotic stripes appeared on wheat spikes infected with the BSMV-VIGS construct targeting wheat starch regulator 1 (BSMV-VIGS-TaRSR1) at 15 days after anthesis, at which time the transcription levels of the TaRSR1 gene significantly decreased. Quantitative real-time PCR was also used to measure the transcription levels of 26 starch synthesis-related enzyme genes in the grains of BSMV-VIGS-TaRSR1-silenced wheat plants at 20, 27, and 31 days after anthesis. The results showed that the transcription levels of some starch synthesis-related enzyme genes were markedly induced at different sampling time points: the TaSSI, TaSSIV, TaBEIII, TaISA1, TaISA3, TaPHOL, and TaDPE1 genes were induced at each of the three sampling time points, and the TaAGPS1-b, TaAGPL1, TaAGPL2, TaSSIIb, TaSSIIc, TaSSIIIb, TaBEI, TaBEIIa, TaBEIIb, TaISA2, TaPHOH, and TaDPE2 genes were induced at one sampling time point. Moreover, the grain starch contents, thousand-kernel weights, and grain lengths and widths of BSMV-VIGS-TaRSR1-infected wheat plants all significantly increased. These results suggest that TaRSR1 acts as a negative regulator and plays an important role in starch synthesis in wheat grains by temporally regulating the expression of specific starch synthesis-related enzyme genes.

  8. Virus-Induced Gene Silencing Identifies an Important Role of the TaRSR1 Transcription Factor in Starch Synthesis in Bread Wheat

    PubMed Central

    Liu, Guoyu; Wu, Yufang; Xu, Mengjun; Gao, Tian; Wang, Pengfei; Wang, Lina; Guo, Tiancai; Kang, Guozhang

    2016-01-01

    The function of wheat starch regulator 1 (TaRSR1) in regulating the synthesis of grain storage starch was determined using the barley stripe mosaic virus-induced gene silencing (BSMV-VIGS) method in field experiments. Chlorotic stripes appeared on wheat spikes infected with the BSMV-VIGS construct targeting wheat starch regulator 1 (BSMV-VIGS-TaRSR1) at 15 days after anthesis, at which time the transcription levels of the TaRSR1 gene significantly decreased. Quantitative real-time PCR was also used to measure the transcription levels of 26 starch synthesis-related enzyme genes in the grains of BSMV-VIGS-TaRSR1-silenced wheat plants at 20, 27, and 31 days after anthesis. The results showed that the transcription levels of some starch synthesis-related enzyme genes were markedly induced at different sampling time points: the TaSSI, TaSSIV, TaBEIII, TaISA1, TaISA3, TaPHOL, and TaDPE1 genes were induced at each of the three sampling time points, and the TaAGPS1-b, TaAGPL1, TaAGPL2, TaSSIIb, TaSSIIc, TaSSIIIb, TaBEI, TaBEIIa, TaBEIIb, TaISA2, TaPHOH, and TaDPE2 genes were induced at one sampling time point. Moreover, the grain starch contents, thousand-kernel weights, and grain lengths and widths of BSMV-VIGS-TaRSR1-infected wheat plants all significantly increased. These results suggest that TaRSR1 acts as a negative regulator and plays an important role in starch synthesis in wheat grains by temporally regulating the expression of specific starch synthesis-related enzyme genes. PMID:27669224

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ottesen, Elizabeth A.; Marin, Roman; Preston, Christina M.

    Planktonic microbial activity and community structure are dynamic and can change dramatically on time scales of hours to days. Yet for logistical reasons, this temporal scale is typically undersampled in the marine environment. In order to facilitate higher-resolution, long-term observation of microbial diversity and activity, we developed a protocol for automated collection and fixation of marine microbes using the Environmental Sample Processor (ESP) platform. The protocol applies a preservative (RNALater) to cells collected on filters, for long-term storage and preservation of total cellular RNA. Microbial samples preserved using this protocol yielded high-quality RNA after 30 days of storage at room temperature, or onboard the ESP at in situ temperatures. Pyrosequencing of complementary DNA libraries generated from ESP-collected and preserved samples yielded transcript abundance profiles nearly indistinguishable from those derived from conventionally treated replicate samples. To demonstrate the utility of the method, we used a moored ESP to remotely and autonomously collect Monterey Bay seawater for metatranscriptomic analysis. Community RNA was extracted and pyrosequenced from samples collected at four time points over the course of a single day. In all four samples, the oxygenic photoautotrophs were predominantly eukaryotic, while the bacterial community was dominated by Polaribacter-like Flavobacteria and a Rhodobacterales bacterium sharing high similarity with Rhodobacterales sp. HTCC2255. However, each time point was associated with distinct species abundance and gene transcript profiles. These laboratory and field tests confirmed that autonomous collection and preservation is a feasible and useful approach for characterizing the expressed genes and environmental responses of marine microbial communities.

  10. A high speed implementation of the random decrement algorithm

    NASA Technical Reports Server (NTRS)

    Kiraly, L. J.

    1982-01-01

    The algorithm is useful for measuring net system damping levels in stochastic processes and for the development of equivalent linearized system response models. The algorithm works by summing together all subrecords which occur after a predefined threshold level is crossed. The random decrement signature is normally developed by scanning stored data and adding subrecords together. The high-speed implementation of the random decrement algorithm exploits the digital character of sampled data and uses fixed record lengths of 2^n samples to greatly speed up the process. The contribution of each data point to the random decrement signature is calculated only once, and in the same sequence as the data were taken. A hardware implementation of the algorithm using random logic is diagrammed, and the process is shown to be limited only by the record size and the threshold crossing frequency of the sampled data. With a hardware cycle time of 200 ns and a 1024-point signature, a threshold crossing frequency of 5000 Hz can be processed and a stably averaged signature presented in real time.
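
    A software analogue of the summing step is straightforward: locate every upward crossing of the threshold and average the fixed-length subrecords that follow. The sketch below uses a synthetic decaying oscillation with hypothetical parameters and a 2^10-sample signature, mirroring the fixed record lengths described above.

```python
# Random decrement signature: average the fixed-length subrecord that follows
# each upward crossing of a threshold. Synthetic decaying oscillation with
# hypothetical parameters; the 1024-point (2**10) signature mirrors the
# fixed 2^n record lengths described above.
import numpy as np

fs = 10_000.0
t = np.arange(0.0, 5.0, 1.0 / fs)
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 120.0 * t) * np.exp(-0.3 * t) + 0.5 * rng.normal(size=t.size)

threshold = 0.4
n_sig = 1024

# Sample indices where the signal crosses the threshold going upward.
starts = np.flatnonzero((x[:-1] < threshold) & (x[1:] >= threshold)) + 1
starts = starts[starts + n_sig <= x.size]

signature = np.mean([x[s:s + n_sig] for s in starts], axis=0)
print(f"{starts.size} subrecords averaged into a {n_sig}-point signature")
```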

  11. Reduction of VSC and salivary bacteria by a multibenefit mouthrinse.

    PubMed

    Boyd, T; Vazquez, J; Williams, M

    2008-03-01

    To evaluate the effectiveness of a multibenefit mouthrinse containing 0.05% cetylpyridinium chloride (CPC) and 0.025% sodium fluoride in reducing volatile sulfur compound (VSC) levels and total cultivable salivary bacteria, at both 4 h and overnight. In vitro analysis of efficacy was performed using saliva-coated hydroxyapatite disc substrates first treated with the mouthrinse, then exposed to whole human saliva, followed by overnight incubation in air-tight vials. Headspace VSC was quantified by gas chromatography (GC). A clinical evaluation was conducted with 14 subjects using a crossover design. After a seven-day washout period, baseline clinical measurement of VSC was performed by GC analysis of mouth air sampled in the morning prior to eating, drinking or performing any oral hygiene. A 10 mL saline rinse was used to sample and enumerate cultivable salivary bacterial levels via serial dilution and plating. Subjects were instructed to use the treatment rinse twice daily in combination with a controlled brushing regimen. After one week the subjects returned in the morning, prior to eating, drinking or performing oral hygiene, to provide samples of overnight mouth air and salivary bacteria. The subjects then immediately rinsed with the test product, and provided additional mouth air and saliva rinse samples 4 h later. A multibenefit rinse containing 0.05% CPC and 0.025% sodium fluoride was found to reduce VSC in vitro by 52%. The rinse also demonstrated a significant clinical reduction in breath VSC (p < 0.05) of 55.8% at 4 h and 23.4% overnight relative to baseline VSC levels. At both time points, the multibenefit rinse was more effective than the control; this difference was statistically significant at the overnight time point (p < 0.05). Total cultivable salivary bacteria levels were also reduced significantly (p < 0.05) at 4 h and overnight by this mouthrinse compared to baseline levels and the control. A multibenefit mouthrinse was shown to reduce VSC levels in vitro via headspace analysis and clinically at the 4 h and overnight time points. A significant reduction in total cultivable salivary bacteria was also observed at all time points, supporting the VSC data.

  12. A Scanning Quantum Cryogenic Atom Microscope

    NASA Astrophysics Data System (ADS)

    Lev, Benjamin

    Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity, high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room to cryogenic temperatures with unprecedented DC-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a field sensitivity of 1.4 nT per resolution-limited point (2 μm), or 6 nT/Hz^(1/2) per point at its duty cycle. Compared to point-by-point sensors, the long length of the BEC provides a naturally parallel measurement, allowing one to measure nearly one hundred points with an effective field sensitivity of 600 pT/Hz^(1/2) per point during the same time as a point-by-point scanner would measure these points sequentially. Moreover, it has a noise floor of 300 pT and provides nearly two orders of magnitude improvement in magnetic flux sensitivity (down to 10^(-6) Φ0/Hz^(1/2)) over previous atomic probe magnetometers capable of scanning near samples. These capabilities are for the first time carefully benchmarked by imaging magnetic fields arising from microfabricated wire patterns, using samples that may be scanned, cryogenically cooled, and easily exchanged. We anticipate the SQCRAMscope will provide charge transport images at temperatures from room temperature to 4 K in unconventional superconductors and topologically nontrivial materials.

  13. Towards point of care testing for C. difficile infection by volatile profiling, using the combination of a short multi-capillary gas chromatography column with metal oxide sensor detection

    NASA Astrophysics Data System (ADS)

    McGuire, N. D.; Ewen, R. J.; de Lacy Costello, B.; Garner, C. E.; Probert, C. S. J.; Vaughan, K.; Ratcliffe, N. M.

    2014-06-01

    Rapid volatile profiling of stool sample headspace was achieved using a combination of a short multi-capillary chromatography column (SMCC), a highly sensitive heated metal oxide semiconductor sensor, and artificial neural network software. For direct analysis of biological samples this prototype offers alternatives to conventional gas chromatography (GC) detectors and electronic nose technology. The performance was compared to an identical instrument incorporating a long single capillary column (LSCC). The ability of the prototypes to separate complex mixtures was assessed using gas standards and homogenized in-house ‘standard’ stool samples, with both capable of detecting more than 24 peaks per sample. The elution time was considerably faster with the SMCC, resulting in a run time of 10 min compared with 30 min for the LSCC. The diagnostic potential of the prototypes was assessed using 50 C. difficile positive and 50 negative samples. The prototypes demonstrated similar capability of discriminating between positive and negative samples, with sensitivity and specificity of 85% and 80%, respectively. C. difficile is an important cause of hospital-acquired diarrhoea, with significant morbidity and mortality around the world. A device capable of rapidly diagnosing the disease at the point of care would reduce cases, deaths and financial burden.

  14. Towards point of care testing for C. difficile infection by volatile profiling, using the combination of a short multi-capillary gas chromatography column with metal oxide sensor detection

    PubMed Central

    McGuire, N D; Ewen, R J; de Lacy Costello, B; Garner, C E; Probert, C S J; Vaughan, K.; Ratcliffe, N M

    2016-01-01

    Rapid volatile profiling of stool sample headspace was achieved using a combination of a short multi-capillary chromatography column (SMCC), a highly sensitive heated metal oxide semiconductor (MOS) sensor, and artificial neural network (ANN) software. For direct analysis of biological samples this prototype offers alternatives to conventional GC detectors and electronic nose technology. The performance was compared to an identical instrument incorporating a long single capillary column (LSCC). The ability of the prototypes to separate complex mixtures was assessed using gas standards and homogenised in-house ‘standard’ stool samples, with both capable of detecting more than 24 peaks per sample. The elution time was considerably faster with the SMCC, resulting in a run time of 10 minutes compared with 30 minutes for the LSCC. The diagnostic potential of the prototypes was assessed using 50 C. difficile positive and 50 negative samples. The prototypes demonstrated similar capability of discriminating between positive and negative samples, with sensitivity and specificity of 85% and 80%, respectively. C. difficile is an important cause of hospital-acquired diarrhoea, with significant morbidity and mortality around the world. A device capable of rapidly diagnosing the disease at the point of care would reduce cases, deaths and financial burden. PMID:27212803

  15. Validation of a modification to Performance-Tested Method 070601: Reveal Listeria Test for detection of Listeria spp. in selected foods and selected environmental samples.

    PubMed

    Alles, Susan; Peng, Linda X; Mozola, Mark A

    2009-01-01

    A modification to Performance-Tested Method (PTM) 070601, Reveal Listeria Test (Reveal), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there was a statistically significant difference in performance between the Reveal and reference culture [U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA/BAM) or U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS)] methods for only a single food in one trial (pasteurized crab meat) at the 27 h enrichment time point, with more positive results obtained with the FDA/BAM reference method. No foods showed statistically significant differences in method performance at the 30 h time point. Independent laboratory testing of 3 foods again produced a statistically significant difference in results for crab meat at the 27 h time point; otherwise results of the Reveal and reference methods were statistically equivalent. Overall, considering both internal and independent laboratory trials, sensitivity of the Reveal method relative to the reference culture procedures in testing of foods was 85.9% at 27 h and 97.1% at 30 h. Results from 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the Reveal method was more productive than the reference USDA-FSIS culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the Reveal method at the 24 h time point. Overall, sensitivity of the Reveal method at 24 h relative to that of the USDA-FSIS method was 153%. The Reveal method exhibited extremely high specificity, with only a single false-positive result in all trials combined for overall specificity of 99.5%.

  16. Spatio-temporal evaluation of organic contaminants and their transformation products along a river basin affected by urban, agricultural and industrial pollution.

    PubMed

    Gómez, María José; Herrera, Sonia; Solé, David; García-Calvo, Eloy; Fernández-Alba, Amadeo R

    2012-03-15

    This study aims to assess the occurrence, fate, and temporal and spatial distribution of anthropogenic contaminants in a river subjected to different pressures (industrial, agricultural, wastewater discharges). For this purpose, the Henares River basin (central Spain) can be considered a representative basin within a continental Mediterranean climate. As the studied river runs through several residential, industrial and agricultural areas, its chemical water quality would be expected to change along its course. The selection of sampling points and the timing of sample collection are therefore critical factors in the monitoring of a river basin. In this study, six monitoring campaigns were performed in 2010, and contaminants were measured at the effluent point of the main wastewater treatment plant (WWTP) in the river basin and at five points upstream and downstream from the WWTP emission point. The target compounds evaluated were personal care products (PCPs), polycyclic aromatic hydrocarbons (PAHs) and pesticides. Results show that the river is clearly influenced by wastewater discharges and also by its proximity to agricultural areas. The contaminants detected at the highest concentrations were the PCPs. The spatial distribution of the contaminants indicates that the studied contaminants persist along the river. In the time period studied, no great seasonal variations of PCPs at the river collection points were observed. In contrast, a temporal trend of pesticides and PAHs was observed. Besides the target compounds, other new contaminants were identified and evaluated in the water samples, some of them investigated for the first time in the aquatic environment. The behaviour of three important transformation products was also evaluated: 9,10-anthracenedione, galaxolide-lactone and 4-amino musk xylene. These were found at higher concentrations than their parent compounds, indicating the significance of including transformation products in monitoring programmes.

  17. Device for modular input high-speed multi-channel digitizing of electrical data

    DOEpatents

    VanDeusen, Alan L.; Crist, Charles E.

    1995-09-26

    A multi-channel high-speed digitizer module converts a plurality of analog signals to digital signals (digitizing) and stores the signals in a memory device. The analog input channels are digitized simultaneously at high speed with a relatively large number of on-board memory data points per channel. The module provides an automated calibration based upon a single voltage reference source. Low signal noise at such a high density and sample rate is accomplished by ensuring the A/D converters are clocked at the same point in the noise cycle each time so that synchronous noise sampling occurs. This sampling process, in conjunction with an automated calibration, yields signal noise levels well below the noise level present on the analog reference voltages.

  18. Imaging systems and algorithms to analyze biological samples in real-time using mobile phone microscopy

    PubMed Central

    Mayberry, Addison; Perkins, David L.; Holcomb, Daniel E.

    2018-01-01

    Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate versatility of the systems, real time analysis and post-processing results of the sample count and sample size are presented in both still images and videos of flowing samples. PMID:29509786

  19. Detection of image structures using the Fisher information and the Rao metric.

    PubMed

    Maybank, Stephen J

    2004-12-01

    In many detection problems, the structures to be detected are parameterized by the points of a parameter space. If the conditional probability density function for the measurements is known, then detection can be achieved by sampling the parameter space at a finite number of points and checking each point to see if the corresponding structure is supported by the data. The number of samples and the distances between neighboring samples are calculated using the Rao metric on the parameter space. The Rao metric is obtained from the Fisher information which is, in turn, obtained from the conditional probability density function. An upper bound is obtained for the probability of a false detection. The calculations are simplified in the low noise case by making an asymptotic approximation to the Fisher information. An application to line detection is described. Expressions are obtained for the asymptotic approximation to the Fisher information, the volume of the parameter space, and the number of samples. The time complexity for line detection is estimated. An experimental comparison is made with a Hough transform-based method for detecting lines.
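
    A numerical sketch of the underlying computation, under a Gaussian-noise measurement model: the Fisher information follows from the sensitivity of the model to its parameter, and the spacing between neighbouring parameter samples is set by a target Rao distance (ds ≈ sqrt(I) dθ in one dimension). The slope-only line model and numbers below are illustrative, not the paper's full line-detection geometry.

```python
# Fisher information for a Gaussian measurement model y_i = mu_i(theta) + noise,
# and the parameter spacing that puts neighbouring samples at a fixed Rao
# distance (ds ~ sqrt(I(theta)) * dtheta in 1-D). Illustrative slope-only
# "line" model, not the paper's full line-detection geometry.
import numpy as np

sigma = 0.1                        # measurement noise standard deviation
x = np.linspace(0.0, 1.0, 50)      # measurement locations

def mu(theta):
    return theta * x               # hypothetical structure: line through origin

def fisher(theta, eps=1e-6):
    # I(theta) = (1/sigma^2) * sum_i (d mu_i / d theta)^2, by central difference.
    dmu = (mu(theta + eps) - mu(theta - eps)) / (2.0 * eps)
    return np.sum(dmu ** 2) / sigma ** 2

info = fisher(0.5)
delta = 1.0 / np.sqrt(info)        # spacing giving unit Rao distance
print(f"I(theta) = {info:.1f}; sample spacing for unit Rao distance = {delta:.4f}")
```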

  20. Measurement of bite force variables related to human discrimination of left-right hardness differences of silicone rubber samples placed between the incisors.

    PubMed

    Dan, Haruka; Azuma, Teruaki; Hayakawa, Fumiyo; Kohyama, Kaoru

    2005-05-01

    This study was designed to examine human subjects' ability to discriminate between spatially different bite pressures. We measured actual bite pressure distribution when subjects simultaneously bit two silicone rubber samples with different hardnesses using their right and left incisors. They were instructed to compare the hardness of these two rubber samples and indicate which was harder (right or left). The correct-answer rates were statistically significant at P < 0.05 for all pairs of different right and left silicone rubber hardnesses. Simultaneous bite measurements using a multiple-point sheet sensor demonstrated that the bite force, active pressure and maximum pressure point were greater for the harder silicone rubber sample. The difference between the left and right was statistically significant (P < 0.05) for all pairs with different silicone rubber hardnesses. We demonstrated for the first time that subjects could perceive and discriminate between spatially different bite pressures during a single bite with incisors. Differences of the bite force, pressure and the maximum pressure point between the right and left silicone samples should be sensory cues for spatial hardness discrimination.

  1. Interleukin-6 Detection with a Plasmonic Chip

    NASA Astrophysics Data System (ADS)

    Tawa, Keiko; Sumiya, Masashi; Toma, Mana; Sasakawa, Chisato; Sujino, Takuma; Miyaki, Tatsuki; Nakazawa, Hikaru; Umetsu, Mitsuo

    Interleukin-6, a cytokine involved in inflammatory and autoimmune activity, was detected with three fluorescence assays using a plasmonic chip. In these assays, the manner of surface modification, the sample volume, the incubation time, and the mixing of the solution were found to influence the detection sensitivity. When the assay was revised to favour a rapid and easy process, the detection sensitivity was not compromised compared with assays using larger sample volumes and longer assay times. Assay conditions should therefore be chosen to suit the purpose of the immunosensing application.

  2. The effect of substrate composition and storage time on urine specific gravity in dogs.

    PubMed

    Steinberg, E; Drobatz, K; Aronson, L

    2009-10-01

    The purpose of this study was to evaluate the effects of substrate composition and storage time on urine specific gravity in dogs. A descriptive cohort study of 15 dogs was performed. The urine specific gravity of free-catch urine samples was analysed over a 5-hour period using three separate storage methods: a closed syringe, a diaper pad and non-absorbable cat litter. The urine specific gravity increased over time in all three substrates. The syringe sample had the least change from baseline and the diaper sample the greatest. The urine specific gravity for the litter and diaper samples showed a statistically significant increase from the 1-hour to the 5-hour time point. The urine specific gravity of canine urine stored either on a diaper or in non-absorbable litter thus increased over time. Although the change was statistically significant over the 5-hour study period, it is unlikely to be clinically significant.

  3. The procedures manual of the Environmental Measurements Laboratory. Volume 2, 28. edition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chieco, N.A.

    1997-02-01

    This report contains environmental sampling and analytical chemistry procedures that are performed by the Environmental Measurements Laboratory. The purpose of environmental sampling and analysis is to obtain data that describe a particular site at a specific point in time from which an evaluation can be made as a basis for possible action.

  4. 40 CFR 141.40 - Monitoring requirements for unregulated contaminants.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-12/31/2015 Column headings are: 1—Contaminant: The name of the contaminant to be analyzed. 2—CAS..., for List 2 Screening Survey, or List 3 Pre-Screen Testing, during the time frame indicated in column 6... paragraph (a)(3) of this section. Samples must be collected at each sample point that is specified in column...

  5. 40 CFR 141.40 - Monitoring requirements for unregulated contaminants.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-12/31/2015 Column headings are: 1—Contaminant: The name of the contaminant to be analyzed. 2—CAS..., for List 2 Screening Survey, or List 3 Pre-Screen Testing, during the time frame indicated in column 6... paragraph (a)(3) of this section. Samples must be collected at each sample point that is specified in column...

  6. 40 CFR 141.40 - Monitoring requirements for unregulated contaminants.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-12/31/2015 Column headings are: 1—Contaminant: The name of the contaminant to be analyzed. 2—CAS..., for List 2 Screening Survey, or List 3 Pre-Screen Testing, during the time frame indicated in column 6... paragraph (a)(3) of this section. Samples must be collected at each sample point that is specified in column...

  7. Trends in Adolescent Emotional Problems in England: A Comparison of Two National Cohorts Twenty Years Apart

    ERIC Educational Resources Information Center

    Collishaw, Stephan; Maughan, Barbara; Natarajan, Lucy; Pickles, Andrew

    2010-01-01

    Background: Evidence about trends in adolescent emotional problems (depression and anxiety) is inconclusive, because few studies have used comparable measures and samples at different points in time. We compared rates of adolescent emotional problems in two nationally representative English samples of youth 20 years apart using identical symptom…

  8. Studies in astronomical time series analysis. III - Fourier transforms, autocorrelation functions, and cross-correlation functions of unevenly spaced data

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.

    1989-01-01

    This paper develops techniques to evaluate the discrete Fourier transform (DFT), the autocorrelation function (ACF), and the cross-correlation function (CCF) of time series which are not evenly sampled. The series may consist of quantized point data (e.g., yes/no processes such as photon arrival). The DFT, which can be inverted to recover the original data and the sampling, is used to compute correlation functions by means of a procedure which is effectively, but not explicitly, an interpolation. The CCF can be computed for two time series not even sampled at the same set of times. Techniques for removing the distortion of the correlation functions caused by the sampling, determining the value of a constant component to the data, and treating unequally weighted data are also discussed. FORTRAN code for the Fourier transform algorithm and numerical examples of the techniques are given.
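
    For unevenly spaced times the DFT can be evaluated as a direct sum over the actual sampling instants; a minimal sketch follows. Scargle's formulation adds normalisation and weighting terms (and the inversion and correlation machinery) that are omitted here, and the test series is synthetic.

```python
# Direct-sum DFT of an unevenly sampled series: F(f) = sum_k x_k e^(-2*pi*i*f*t_k).
# Scargle's formulation adds normalisation and weighting terms omitted here;
# the test series is synthetic with a 0.1-cycle/unit tone.
import numpy as np

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 100.0, 300))      # uneven sampling times
x = np.sin(2 * np.pi * 0.1 * t) + 0.3 * rng.normal(size=t.size)

freqs = np.linspace(0.01, 0.5, 500)
F = np.array([np.sum(x * np.exp(-2j * np.pi * f * t)) for f in freqs])

print("spectral peak near f =", freqs[np.argmax(np.abs(F))])   # expect ~0.1
```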

  9. Theoretical repeatability assessment without repetitive measurements in gradient high-performance liquid chromatography.

    PubMed

    Kotani, Akira; Tsutsumi, Risa; Shoji, Asaki; Hayashi, Yuzuru; Kusu, Fumiyo; Yamamoto, Kazuhiro; Hakamata, Hideki

    2016-07-08

    This paper puts forward a time- and material-saving method for evaluating the repeatability of area measurements in gradient HPLC with UV detection (HPLC-UV), based on the function of mutual information (FUMI) theory, which can theoretically provide the measurement standard deviation (SD) and detection limits from the stochastic properties of baseline noise with no recourse to repetitive measurements of real samples. The chromatographic determination of terbinafine hydrochloride and enalapril maleate is taken as an example. The best choice of the number of noise data points, inevitable for the theoretical evaluation, is shown to be 512 data points (10.24 s at the 50 points/s sampling rate of an A/D converter). Coupled with the relative SD (RSD) of sample injection variability in the instrument used, the theoretical evaluation is proved to give area measurement RSDs identical to those estimated by the usual repetitive method (n=6) over a wide concentration range of the analytes, within the 95% confidence intervals of the latter RSD. The FUMI theory is not a statistical one, but the "statistical" reliability of its SD estimates (n=1) is observed to be as high as that attained by thirty-one measurements of the same samples (n=31).

  10. Methane Flux Estimation from Point Sources using GOSAT Target Observation: Detection Limit and Improvements with Next Generation Instruments

    NASA Astrophysics Data System (ADS)

    Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.

    2017-12-01

    Atmospheric methane (CH4) has an important role in global radiative forcing of climate, but its emission estimates have larger uncertainties than those of carbon dioxide (CO2). The area of an anthropogenic emission source is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the earth's surface. It has an agile pointing system, and its footprint can cover 87 km2 with a single detector. By specifying pointing angles and observation times for every orbit, TANSO-FTS can target various CH4 point sources together with reference points every 3 days over years. We selected a reference point that represents the CH4 background density before or after targeting a point source. By combining the satellite-measured enhancement of the CH4 column density with surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts. Here, we picked two sites on the US West Coast, where clear-sky frequency is high and a series of data are available. The natural gas leak at Aliso Canyon showed a large enhancement that decreased with time after the initial blowout. We present a time series of flux estimates assuming the source is a single point without influx. The cattle feedlot in Chino, California has a weather station within the TANSO-FTS footprint; the wind speed is monitored continuously, and the wind direction is stable at the time of GOSAT overpass. The large TANSO-FTS footprint and strong wind decrease the enhancement below the noise level; weak wind yields measurable CH4 enhancements, but the velocity data then have large uncertainties. We show the detection limit of single samples and how to reduce uncertainty using a time series of satellite data. We propose that next-generation instruments for accurate anthropogenic CO2 and CH4 flux estimation should have improved spatial resolution (~1 km2) to further enhance column density changes, and we also propose adding imaging capability to monitor plume orientation. We will present laboratory model results and a sampling pattern optimization study that combines local emission source and global survey observations.
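
    The simplest version of such an estimate is a mass-balance product of the column enhancement, an effective plume width, and the wind speed. The sketch below is that generic calculation with entirely hypothetical numbers; it is not the GOSAT/TANSO-FTS retrieval chain or the WRF-based analysis itself.

```python
# Generic mass-balance point-source estimate: emission ~ column enhancement
# x effective plume width x wind speed. All numbers are hypothetical; this
# is not the GOSAT/TANSO-FTS retrieval chain or the WRF-based analysis.
enhancement_g_m2 = 0.004   # CH4 column enhancement over background (g/m^2)
plume_width_m = 5_000.0    # effective cross-wind plume width (m)
wind_m_s = 3.0             # wind speed at overpass (m/s)

flux_g_s = enhancement_g_m2 * plume_width_m * wind_m_s
print(f"source strength ~ {flux_g_s:.0f} g/s "
      f"(~{flux_g_s * 86400 / 1e6:.1f} t CH4/day)")
```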

  11. Plasma serotonin in horses undergoing surgery for small intestinal colic

    PubMed Central

    Torfs, Sara C.; Maes, An A.; Delesalle, Catherine J.; Pardon, Bart; Croubels, Siska M.; Deprez, Piet

    2015-01-01

    This study compared serotonin concentrations in platelet-poor plasma (PPP) from healthy horses and horses with surgical small intestinal (SI) colic, and evaluated their association with postoperative ileus, strangulation and non-survival. Plasma samples (with EDTA) from 33 horses with surgical SI colic were collected at several pre- and post-operative time points. Serotonin concentrations were determined using liquid chromatography-tandem mass spectrometry. Results were compared with those for 24 healthy control animals. The serotonin concentrations in PPP were significantly lower (P < 0.01) in pre- and post-operative samples from surgical SI colic horses compared to controls. However, no association with postoperative ileus or non-survival could be demonstrated at any time point. In this clinical study, plasma serotonin was not a suitable prognostic factor in horses with SI surgical colic. PMID:25694668

  12. Pulse-Echo Ultrasonic Imaging Method for Eliminating Sample Thickness Variation Effects

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1997-01-01

    A pulse-echo, immersion method for ultrasonic evaluation of a material, which accounts for and eliminates nonlevelness in the equipment set-up and sample thickness variation effects, employs a single transducer with automatic scanning and digital imaging to obtain an image of a property of the material, such as pore fraction. The nonlevelness and thickness variation effects are accounted for by pre-scan adjustments of the time window to ensure that the echoes received at each scan point are gated in the center of the window. This information is input into the scan file so that, during the automatic scanning for the material evaluation, each received echo is centered in its time window. A cross-correlation function calculates the velocity at each scan point, which is then mapped proportionally to a color or grey scale and displayed on a video screen.
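
    The velocity step reduces to estimating the delay between two echoes by cross-correlation and converting it with the known thickness (v = 2d/Δt for the two-way path). The sketch below uses synthetic waveforms and hypothetical digitizer and sample parameters.

```python
# Cross-correlation estimate of the delay between two pulse echoes, and the
# velocity it implies for a known sample thickness (synthetic waveforms;
# digitizer rate and thickness are hypothetical).
import numpy as np

fs = 100e6                                   # 100 MHz digitizer
t = np.arange(0, 20e-6, 1 / fs)
pulse = lambda t0: np.exp(-((t - t0) / 0.2e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)

echo1 = pulse(4e-6)                          # front-wall echo
echo2 = 0.6 * pulse(9e-6)                    # back-wall echo, 5 us later

xc = np.correlate(echo2, echo1, mode="full")
lag = np.argmax(xc) - (t.size - 1)           # samples of delay between echoes
dt = lag / fs                                # time of flight between echoes

thickness = 0.0125                           # sample thickness (m), hypothetical
velocity = 2 * thickness / dt                # two-way path
print(f"dt = {dt * 1e6:.2f} us, velocity = {velocity:.0f} m/s")
```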

  13. Computer Analysis of 400 HZ Aircraft Electrical Generator Test Data.

    DTIC Science & Technology

    1980-06-01

    [Front-matter figure list: Data Acquisition System; Voltage Waveform with Data Points; Zero Crossover Interpolation.] The fundamental frequency is determined from the difference between successive positive-sloped zero crossovers of the waveform. However, the exact time of zero crossover is not known, because the data sampling and the generator output are not synchronized; as a result, sampled data points rarely coincide with an exact zero crossover.
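
    Zero crossover interpolation itself is simple: linearly interpolate between the two samples that bracket each sign change, then take the fundamental frequency from the mean spacing of successive positive-sloped crossovers. The sketch below uses a synthetic 400 Hz waveform with a hypothetical sampling rate.

```python
# Linear interpolation of positive-sloped zero crossovers to estimate the
# fundamental frequency of a sampled waveform (synthetic 400 Hz example,
# hypothetical sampling rate).
import numpy as np

fs = 10_000.0
t = np.arange(0.0, 0.1, 1.0 / fs)
v = np.sin(2 * np.pi * 400.0 * t + 0.3)

i = np.flatnonzero((v[:-1] < 0) & (v[1:] >= 0))    # samples bracketing a crossover
# Fractional crossover time between samples i and i+1 by linear interpolation.
t_cross = t[i] + (0.0 - v[i]) / (v[i + 1] - v[i]) / fs

freq = 1.0 / np.mean(np.diff(t_cross))
print(f"estimated fundamental: {freq:.2f} Hz")     # ~400 Hz
```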

  14. How Much Is Too Little to Detect Impacts? A Case Study of a Nuclear Power Plant

    PubMed Central

    Széchy, Maria T. M.; Viana, Mariana S.; Curbelo-Fernandez, Maria P.; Lavrado, Helena P.; Junqueira, Andrea O. R.; Vilanova, Eduardo; Silva, Sérgio H. G.

    2012-01-01

    Several approaches have been proposed to assess impacts on natural assemblages. Ideally, the potentially impacted site and multiple reference sites are sampled through time, before and after the impact. Often, however, the lack of information regarding the potential overall impact, the lack of knowledge about the environment in many regions worldwide, budget constraints and the increasing dimensions of human activities compromise the reliability of the impact assessment. We evaluated the impact, if any, of a nuclear power plant effluent on sessile epibiota assemblages, and its extent, using a suitable and feasible sampling design with no ‘before’ data and with budget and logistic constraints. Assemblages were sampled at multiple times and at increasing distances from the point of discharge of the effluent. There was a clear and localized effect of the power plant effluent (up to 100 m from the point of discharge); however, depending on the time of the year, the impact reached up to 600 m. We found significantly lower richness of taxa at the Effluent site compared with other sites. Furthermore, at all times, the variability of assemblages near the discharge was smaller than at other sites. Although the sampling design used here (in particular the number of replicates) did not allow an unambiguous evaluation of the full extent of the impact in relation to its intensity and temporal variability, the multiple temporal and spatial scales used allowed the detection of some differences in the intensity of the impact depending on the time of sampling. Our findings greatly contribute to knowledge of the effects of multiple stressors caused by the effluent of a power plant and also have important implications for management strategies and conservation ecology in general. PMID:23110117

  15. How much is too little to detect impacts? A case study of a nuclear power plant.

    PubMed

    Mayer-Pinto, Mariana; Ignacio, Barbara L; Széchy, Maria T M; Viana, Mariana S; Curbelo-Fernandez, Maria P; Lavrado, Helena P; Junqueira, Andrea O R; Vilanova, Eduardo; Silva, Sérgio H G

    2012-01-01

    Several approaches have been proposed to assess impacts on natural assemblages. Ideally, the potentially impacted site and multiple reference sites are sampled through time, before and after the impact. Often, however, the lack of information regarding the potential overall impact, the lack of knowledge about the environment in many regions worldwide, budget constraints and the increasing dimensions of human activities compromise the reliability of the impact assessment. We evaluated the impact, if any, of a nuclear power plant effluent on sessile epibiota assemblages, and its extent, using a suitable and feasible sampling design with no 'before' data and with budget and logistic constraints. Assemblages were sampled at multiple times and at increasing distances from the point of discharge of the effluent. There was a clear and localized effect of the power plant effluent (up to 100 m from the point of discharge); however, depending on the time of the year, the impact reached up to 600 m. We found significantly lower richness of taxa at the Effluent site compared with other sites. Furthermore, at all times, the variability of assemblages near the discharge was smaller than at other sites. Although the sampling design used here (in particular the number of replicates) did not allow an unambiguous evaluation of the full extent of the impact in relation to its intensity and temporal variability, the multiple temporal and spatial scales used allowed the detection of some differences in the intensity of the impact depending on the time of sampling. Our findings greatly contribute to knowledge of the effects of multiple stressors caused by the effluent of a power plant and also have important implications for management strategies and conservation ecology in general.

  16. High resolution 4-D spectroscopy with sparse concentric shell sampling and FFT-CLEAN.

    PubMed

    Coggins, Brian E; Zhou, Pei

    2008-12-01

    Recent efforts to reduce the measurement time for multidimensional NMR experiments have fostered the development of a variety of new procedures for sampling and data processing. We recently described concentric ring sampling for 3-D NMR experiments, which is superior to radial sampling as input for processing by a multidimensional discrete Fourier transform. Here, we report the extension of this approach to 4-D spectroscopy as Randomized Concentric Shell Sampling (RCSS), where sampling points for the indirect dimensions are positioned on concentric shells, and where random rotations in the angular space are used to avoid coherent artifacts. With simulations, we show that RCSS produces a very low level of artifacts, even with a very limited number of sampling points. The RCSS sampling patterns can be adapted to fine rectangular grids to permit use of the Fast Fourier Transform in data processing, without an apparent increase in the artifact level. These artifacts can be further reduced to the noise level using the iterative CLEAN algorithm developed in radioastronomy. We demonstrate these methods on the high resolution 4-D HCCH-TOCSY spectrum of protein G's B1 domain, using only 1.2% of the sampling that would be needed conventionally for this resolution. The use of a multidimensional FFT instead of the slow DFT for initial data processing and for subsequent CLEAN significantly reduces the calculation time, yielding an artifact level that is on par with the level of the true spectral noise.
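
    A sketch of the pattern-generation step follows: points placed on concentric shells in the three indirect dimensions, each shell given an independent random rotation, and the result snapped to a rectangular grid so FFT processing applies. The shell count, radii, and per-shell point counts are illustrative, not the paper's actual schedule.

```python
# Randomized concentric shell sampling (sketch): points on concentric shells
# in the 3 indirect dimensions, each shell independently rotated at random,
# then snapped to a rectangular grid so FFT processing applies. Shell count,
# radii, and per-shell point counts are illustrative only.
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(4)
grid = 32                                        # grid points per indirect dimension

samples = set()
for shell, radius in enumerate(np.linspace(1.0, grid / 2 - 1, 12)):
    n_pts = max(8, int(4 * radius))              # more points on larger shells
    d = rng.normal(size=(n_pts, 3))              # ~uniform directions on the sphere
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    d = Rotation.random(random_state=shell).apply(d)   # random shell rotation
    ijk = np.rint(radius * d).astype(int) + grid // 2  # snap to grid
    samples.update(map(tuple, np.clip(ijk, 0, grid - 1)))

frac = 100.0 * len(samples) / grid ** 3
print(f"{len(samples)} of {grid ** 3} grid points sampled ({frac:.1f}%)")
```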

  17. High Resolution 4-D Spectroscopy with Sparse Concentric Shell Sampling and FFT-CLEAN

    PubMed Central

    Coggins, Brian E.; Zhou, Pei

    2009-01-01

    Recent efforts to reduce the measurement time for multidimensional NMR experiments have fostered the development of a variety of new procedures for sampling and data processing. We recently described concentric ring sampling for 3-D NMR experiments, which is superior to radial sampling as input for processing by a multidimensional discrete Fourier transform. Here, we report the extension of this approach to 4-D spectroscopy as Randomized Concentric Shell Sampling (RCSS), where sampling points for the indirect dimensions are positioned on concentric shells, and where random rotations in the angular space are used to avoid coherent artifacts. With simulations, we show that RCSS produces a very low level of artifacts, even with a very limited number of sampling points. The RCSS sampling patterns can be adapted to fine rectangular grids to permit use of the Fast Fourier Transform in data processing, without an apparent increase in the artifact level. These artifacts can be further reduced to the noise level using the iterative CLEAN algorithm developed in radioastronomy. We demonstrate these methods on the high resolution 4-D HCCH-TOCSY spectrum of protein G's B1 domain, using only 1.2% of the sampling that would be needed conventionally for this resolution. The use of a multidimensional FFT instead of the slow DFT for initial data processing and for subsequent CLEAN significantly reduces the calculation time, yielding an artifact level that is on par with the level of the true spectral noise. PMID:18853260

  18. Exploring revictimization risk in a community sample of sexual assault survivors.

    PubMed

    Chu, Ann T; Deprince, Anne P; Mauss, Iris B

    2014-01-01

    Previous research points to links between risk detection (the ability to detect danger cues in various situations) and sexual revictimization in college women. Given important differences between college and community samples that may be relevant to revictimization risk (e.g., the complexity of trauma histories), the current study explored the link between risk detection and revictimization in a community sample of women. Community-recruited women (N = 94) reported on their trauma histories in a semistructured interview. In a laboratory session, participants listened to a dating scenario involving a woman and a man that culminated in sexual assault. Participants were instructed to press a button "when the man had gone too far." Unlike in college samples, revictimized community women (n = 47) did not differ in terms of risk detection response times from women with histories of no victimization (n = 10) or single victimization (n = 15). Data from this study point to the importance of examining revictimization in heterogeneous community samples where risk mechanisms may differ from college samples.

  19. Quasi-elastic light scattering: Signal storage, correlation, and spectrum analysis under control of an 8-bit microprocessor

    NASA Astrophysics Data System (ADS)

    Glatter, Otto; Fuchs, Heribert; Jorde, Christian; Eigner, Wolf-Dieter

    1987-03-01

    The microprocessor of an 8-bit PC system is used as a central control unit for the acquisition and evaluation of data from quasi-elastic light scattering experiments. Data are sampled with a width of 8 bits under control of the CPU. This limits the minimum sample time to 20 μs; shorter sample times would need a direct memory access channel. The 8-bit CPU can address a 64-kbyte RAM without additional paging. Up to 49 000 sample points can be measured without interruption. After storage, a correlation function or a power spectrum can be calculated from such a primary data set. Furthermore, access is provided to the primary data for stability control, statistical tests, and for comparison of different evaluation methods for the same experiment. A detailed analysis of the signal (histogram) and of the effect of overflows is possible and shows that the number of pulses, but not the number of overflows, determines the error in the result. The correlation function can be computed with reasonable accuracy from data with a mean pulse rate greater than one; the power spectrum needs a three times higher pulse rate for convergence. The statistical accuracy of the results from 49 000 sample points is of the order of a few percent, and additional averages are necessary to improve their quality. The hardware extensions for the PC system are inexpensive. The main disadvantage of the present system is the high minimum sampling time of 20 μs and the fact that the correlogram or the power spectrum cannot be computed on-line, as can be done with hardware correlators or spectrum analyzers. These shortcomings and the storage size restrictions can be removed with a faster 16/32-bit CPU.
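
    The offline evaluation of the correlation function from a stored count record reduces to a lag-by-lag dot product. The sketch below mimics that post-acquisition step on a synthetic 8-bit photon-count trace of 49 000 samples; the correlated-intensity signal model is hypothetical.

```python
# Offline autocorrelation of a stored 8-bit photon-count record, mimicking
# the post-acquisition computation described above. The correlated-intensity
# signal model is hypothetical.
import numpy as np

rng = np.random.default_rng(5)
n, tau_c = 49_000, 50.0                      # record length; correlation time (samples)

kernel = np.exp(-np.arange(300) / tau_c)
intensity = 5.0 + np.convolve(rng.normal(size=n), kernel, mode="same")
counts = np.minimum(rng.poisson(np.clip(intensity, 0.0, None)), 255)  # 8-bit

def autocorr(x, max_lag):
    x = x - x.mean()
    return np.array([np.dot(x[:x.size - k], x[k:]) / (x.size - k)
                     for k in range(max_lag)])

g = autocorr(counts.astype(float), 200)
print("normalised ACF at lags 0, 50, 100:", np.round(g[[0, 50, 100]] / g[0], 3))
```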

  20. Evaluation of process errors in bed load sampling using a Dune Model

    USGS Publications Warehouse

    Gomez, Basil; Troutman, Brent M.

    1997-01-01

    Reliable estimates of the streamwide bed load discharge obtained using sampling devices are dependent upon good at-a-point knowledge across the full width of the channel. Using field data and information derived from a model that describes the geometric features of a dune train in terms of a spatial process observed at a fixed point in time, we show that sampling errors decrease as the number of samples collected increases, and the number of traverses of the channel over which the samples are collected increases. It also is preferable that bed load sampling be conducted at a pace which allows a number of bed forms to pass through the sampling cross section. The situations we analyze and simulate pertain to moderate transport conditions in small rivers. In such circumstances, bed load sampling schemes typically should involve four or five traverses of a river, and the collection of 20–40 samples at a rate of five or six samples per hour. By ensuring that spatial and temporal variability in the transport process is accounted for, such a sampling design reduces both random and systematic errors and hence minimizes the total error involved in the sampling process.
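
    The flavour of this result can be reproduced with a toy Monte Carlo: sample a transport rate that varies as a dune train passes the point, and watch the error of the mean estimate fall as the number of samples grows. The sinusoidal transport model below is a stand-in for the paper's stochastic dune geometry, with arbitrary units throughout.

```python
# Toy Monte Carlo of at-a-point bed load sampling over a passing dune train:
# estimate the mean transport rate from n samples and watch the error fall
# as n grows. A sinusoidal stand-in, not the paper's stochastic Dune Model.
import numpy as np

rng = np.random.default_rng(6)
true_mean = 10.0                     # true mean transport rate (arbitrary units)

def sample_transport(n):
    # Samples taken at random phases of the dune cycle, plus measurement noise.
    phase = rng.uniform(0, 2 * np.pi, n)
    return true_mean * (1 + 0.8 * np.sin(phase)) + rng.normal(0, 1.0, n)

for n in (5, 10, 20, 40):
    errs = [abs(sample_transport(n).mean() - true_mean) for _ in range(2000)]
    print(f"n={n:2d}: mean absolute error = {np.mean(errs):.2f}")
```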

  1. Control of oral malodour by dentifrices measured by gas chromatography.

    PubMed

    Newby, Evelyn E; Hickling, Jenneth M; Hughes, Francis J; Proskin, Howard M; Bosma, Marylynn P

    2008-04-01

    To evaluate the effect of toothpaste treatments on levels of oral volatile sulphur compounds (VSCs) measured by gas chromatography in two clinical studies. These were blinded, randomised, controlled, crossover studies with 16 (study A) or 20 (study B) healthy volunteers aged 19-54. Study A: breath samples were collected at baseline, immediately and 1 hour after brushing. Four dentifrices (Zinc A, Zinc B, a commercially available triclosan dentifrice and a zinc-free control) were evaluated. Study B: breath samples were collected at baseline, immediately, 1, 2, 3 and 7 hours after treatment. Subjects consumed a light breakfast, then provided an additional breath sample between the baseline assessment and treatment. Two dentifrices (a gel-to-foam product and a commercially available triclosan dentifrice) were evaluated. Breath samples were collected in syringes and analysed for VSCs (hydrogen sulphide, methyl mercaptan and Total VSCs) using gas chromatography (GC) with flame photometric detection. Study A: immediately after treatment, a statistically significant reduction in VSCs from baseline was observed for the Zinc A product only. A statistically significant reduction in VSCs from baseline was observed after 1 hour for all products. Both zinc products exhibited a significantly greater reduction from baseline VSCs than Colgate Total and the control at all time points. Study B: a statistically significant reduction in VSCs from baseline was observed at all time points for both products. The gel-to-foam product exhibited a significantly greater reduction from baseline Total VSC concentration than Colgate Total at all time points from 1 hour post-treatment. Control of oral malodour by toothpaste treatment, evaluated as VSC levels using GC, has been demonstrated. Zinc is effective at reducing VSCs, and the efficacy of zinc is formulation dependent. A gel-to-foam dentifrice was more effective at reducing VSCs than Colgate Total for up to 7 hours.

  2. Evaluation of needle trap micro-extraction and automatic alveolar sampling for point-of-care breath analysis.

    PubMed

    Trefz, Phillip; Rösner, Lisa; Hein, Dietmar; Schubert, Jochen K; Miekisch, Wolfram

    2013-04-01

    Needle trap devices (NTDs) have shown many advantages, such as improved detection limits, reduced sampling time and volume, and improved stability and reproducibility, compared with other techniques used in breath analysis such as solid-phase extraction and solid-phase micro-extraction. Effects of sampling flow (2-30 ml/min) and volume (10-100 ml) were investigated in dry gas standards containing hydrocarbons, aldehydes, and aromatic compounds and in humid breath samples. NTDs contained (single-bed) polymer packing and (triple-bed) combinations of divinylbenzene/Carbopack X/Carboxen 1000. Substances were desorbed from the NTDs by means of thermal expansion and analyzed by gas chromatography-mass spectrometry. An automated CO2-controlled sampling device for direct alveolar sampling at the point-of-care was developed and tested in pilot experiments. Adsorption efficiency for small volatile organic compounds decreased and breakthrough increased when sampling was done with polymer needles from a water-saturated matrix (breath) instead of from dry gas. Humidity did not affect analysis with triple-bed NTDs. These NTDs showed only small dependencies on sampling flow and low breakthrough of 1-5%. The new sampling device was able to control crucial parameters such as sampling flow and volume. With triple-bed NTDs, substance amounts increased linearly with increasing sample volume when alveolar breath was pre-concentrated automatically. When compared with manual sampling, automatic sampling showed comparable or better results. Thorough control of sampling and an adequate choice of adsorption material are mandatory for application of needle trap micro-extraction in vivo. The new CO2-controlled sampling device allows direct alveolar sampling at the point-of-care without the need for any additional sampling, storage, or pre-concentration steps.

  3. General constraints on sampling wildlife on FIA plots

    USGS Publications Warehouse

    Bailey, L.L.; Sauer, J.R.; Nichols, J.D.; Geissler, P.H.; McRoberts, Ronald E.; Reams, Gregory A.; Van Deusen, Paul C.; McWilliams, William H.; Cieszewski, Chris J.

    2005-01-01

    This paper reviews the constraints on sampling wildlife populations at FIA points. Wildlife sampling programs must have well-defined goals and provide information adequate to meet those goals. Investigators should choose a state variable based on information needs and the spatial sampling scale. We discuss estimation-based methods for three state variables: species richness, abundance, and patch occupancy. All methods incorporate two essential sources of variation: imperfect detectability and spatial variation. FIA sampling imposes specific space and time criteria that may need to be adjusted to meet local wildlife objectives.

  4. Multi-point estimation of total energy expenditure: a comparison between zinc-reduction and platinum-equilibration methodologies.

    PubMed

    Sonko, Bakary J; Miller, Leland V; Jones, Richard H; Donnelly, Joseph E; Jacobsen, Dennis J; Hill, James O; Fennessey, Paul V

    2003-12-15

    Reducing water to hydrogen gas with zinc or uranium metal for determining the D/H ratio is both tedious and time consuming. This has forced most energy metabolism investigators to use the "two-point" technique instead of the "multi-point" technique for estimating total energy expenditure (TEE). Recently, we purchased a new platinum (Pt)-equilibration system that significantly reduces both the time and labor required for D/H ratio determination. In this study, we compared TEE estimates for nine overweight but healthy subjects obtained using the traditional Zn-reduction method with those obtained from the new Pt-equilibration system. Rate constants, pool spaces, and CO2 production rates obtained with the two methodologies were not significantly different. Correlation analysis demonstrated that TEEs estimated using the two methods were significantly correlated (r=0.925, p=0.0001). Sample equilibration time was reduced by 66% compared with similar methods. The data demonstrate that the Zn-reduction method can be replaced by the Pt-equilibration method when TEE is estimated using the "multi-point" technique. Furthermore, D equilibration time was significantly reduced.
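
    The two-point versus multi-point distinction comes down to how the isotope elimination rate constant is estimated from the enrichment samples. A minimal sketch with synthetic deuterium enrichments (the conversion of rate constants and pool sizes into TEE follows the standard doubly labeled water equations, which are not reproduced here):

```python
# "Two-point" vs. "multi-point" estimation of an isotope elimination
# rate constant (all enrichment values are synthetic placeholders).
import numpy as np

days = np.array([0.0, 1.0, 3.0, 5.0, 7.0, 10.0, 14.0])
k_true = 0.12                        # hypothetical elimination rate, 1/day
rng = np.random.default_rng(2)
enrich = 100.0 * np.exp(-k_true * days) * np.exp(rng.normal(0, 0.02, days.size))

# Two-point estimate: first and last samples only.
k_two = np.log(enrich[0] / enrich[-1]) / (days[-1] - days[0])

# Multi-point estimate: least-squares slope of log(enrichment) vs. time.
k_multi = -np.polyfit(days, np.log(enrich), 1)[0]

print(f"two-point k = {k_two:.4f} /day, multi-point k = {k_multi:.4f} /day")
```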

  5. Evaluating Mass Analyzers as Candidates for Small, Portable, Rugged Single Point Mass Spectrometers for Analysis of Permanent Gases

    NASA Technical Reports Server (NTRS)

    Arkin, C. Richard; Ottens, Andrew K.; Diaz, Jorge A.; Griffin, Timothy P.; Follestein, Duke; Adams, Fredrick; Steinrock, T. (Technical Monitor)

    2001-01-01

    For Space Shuttle launch safety, there is a need to monitor the concentration of H2, He, O2 and Ar around the launch vehicle. Currently a large mass spectrometry system performs this task, using long transport lines to draw in samples. There is great interest in replacing this stationary system with several miniature, portable, rugged mass spectrometers that act as point sensors placed at the sampling point. Five commercial and two non-commercial analyzers are evaluated. The five commercial systems include the Leybold Inficon XPR-2 linear quadrupole, the Stanford Research (SRS-100) linear quadrupole, the Ferran linear quadrupole array, the ThermoQuest Polaris-Q quadrupole ion trap, and the IonWerks time-of-flight (TOF). The non-commercial systems include a compact double focusing sector (CDFMS) developed at the University of Minnesota and a quadrupole ion trap (UF-IT) developed at the University of Florida. The system volume is the volume of the entire system, including the mass analyzer, its associated electronics, the associated vacuum system, the high vacuum pump and rough pump, and any ion gauge controllers or other required equipment; computers are not included. Scan time is the time required for one scan to be acquired and the data to be transferred; it is determined by measuring the time required to acquire a known number of scans and dividing by that number of scans. The limit of detection (LOD) is determined by first performing a zero-span calibration (using a 10-point data set) and is then defined as 3 times the standard deviation of the zero data set. (An LOD of 10 ppm or less is considered acceptable.)
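
    The LOD rule quoted above is a one-liner in practice; here is a minimal sketch with made-up zero-span readings:

```python
# LOD = 3 x standard deviation of a 10-point zero-span data set.
import numpy as np

zero_readings_ppm = np.array([0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 1.2, 0.9, 1.1, 1.0])
lod = 3.0 * zero_readings_ppm.std(ddof=1)   # sample standard deviation
print(f"LOD = {lod:.2f} ppm; acceptable (<= 10 ppm): {lod <= 10.0}")
```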

  6. Scanner baseliner monitoring and control in high volume manufacturing

    NASA Astrophysics Data System (ADS)

    Samudrala, Pavan; Chung, Woong Jae; Aung, Nyan; Subramany, Lokesh; Gao, Haiyong; Gomez, Juan-Manuel

    2016-03-01

    We analyze the performance of different customized models on baseliner overlay data and demonstrate a reduction in overlay residuals of ~10%. Smart sampling sets were assessed and compared with full wafer measurements. We found that grid performance can still be maintained with one-third of the total sampling points, while reducing metrology time by 60%. We also demonstrate the feasibility of achieving time-to-time matching using the scanner fleet manager, and thus of identifying tool drifts even when the tool monitoring controls are within spec limits. Finally, we explore the variation of the scanner feedback constants with illumination source.

  7. Trends and advances in food analysis by real-time polymerase chain reaction.

    PubMed

    Salihah, Nur Thaqifah; Hossain, Mohammad Mosharraf; Lubis, Hamadah; Ahmed, Minhaz Uddin

    2016-05-01

    Analyses to ensure food safety and quality are more relevant now because of rapid changes in the quantity, diversity and mobility of food. Food contamination must be determined to maintain health and uphold laws, as well as for ethical and cultural concerns. Real-time polymerase chain reaction (RT-PCR), a rapid and inexpensive quantitative method to detect the presence of targeted DNA segments in samples, helps in determining both accidental and intentional adulterations of foods by biological contaminants. This review presents recent developments in the theory, techniques, and applications of RT-PCR in food analyses. RT-PCR addresses the limitations of traditional food analyses in terms of sensitivity, range of analytes, multiplexing ability, cost, time, and point-of-care applications. A range of targets, including species of plants or animals used as food ingredients, food-borne bacteria or viruses, genetically modified organisms, and allergens, can be identified by RT-PCR, even at very low concentrations and in highly processed foods. Microfluidic RT-PCR eliminates the separate sample-processing step, creating opportunities for point-of-care analyses. We also cover the challenges of using RT-PCR for food analyses, such as the need to further improve sample handling.

  8. Comparison of polyacrylamide and agarose gel thin-layer isoelectric focusing for the characterization of beta-lactamases.

    PubMed

    Vecoli, C; Prevost, F E; Ververis, J J; Medeiros, A A; O'Leary, G P

    1983-08-01

    Plasmid-mediated beta-lactamases from strains of Escherichia coli and Pseudomonas aeruginosa were separated by isoelectric focusing on a 0.8-mm thin-layer agarose gel with a pH gradient of 3.5 to 9.5. Their banding patterns and isoelectric points were compared with those obtained with a 2.0-mm polyacrylamide gel as the support medium. The agarose method produced banding patterns and isoelectric points which corresponded to the polyacrylamide gel data for most samples. Differences were observed for HMS-1 and PSE-1 beta-lactamases. The HMS-1 sample produced two highly resolvable enzyme bands in agarose gels rather than the single faint enzyme band observed on polyacrylamide gels. The PSE-1 sample showed an isoelectric point shift of 0.2 pH unit between polyacrylamide and agarose gel (pI 5.7 and 5.5, respectively). The short focusing time, lack of toxic hazard, and ease of formulation make agarose a practical medium for the characterization of beta-lactamases.

  9. Comparison of polyacrylamide and agarose gel thin-layer isoelectric focusing for the characterization of beta-lactamases.

    PubMed Central

    Vecoli, C; Prevost, F E; Ververis, J J; Medeiros, A A; O'Leary, G P

    1983-01-01

    Plasmid-mediated beta-lactamases from strains of Escherichia coli and Pseudomonas aeruginosa were separated by isoelectric focusing on a 0.8-mm thin-layer agarose gel with a pH gradient of 3.5 to 9.5. Their banding patterns and isoelectric points were compared with those obtained with a 2.0-mm polyacrylamide gel as the support medium. The agarose method produced banding patterns and isoelectric points which corresponded to the polyacrylamide gel data for most samples. Differences were observed for HMS-1 and PSE-1 beta-lactamases. The HMS-1 sample produced two highly resolvable enzyme bands in agarose gels rather than the single faint enzyme band observed on polyacrylamide gels. The PSE-1 sample showed an isoelectric point shift of 0.2 pH unit between polyacrylamide and agarose gel (pI 5.7 and 5.5, respectively). The short focusing time, lack of toxic hazard, and ease of formulation make agarose a practical medium for the characterization of beta-lactamases. PMID:6605714

  10. Dietary patterns among Norwegian 2-year-olds in 1999 and in 2007 and associations with child and parent characteristics.

    PubMed

    Kristiansen, Anne Lene; Lande, Britt; Sexton, Joseph Andrew; Andersen, Lene Frost

    2013-07-14

    Infant and childhood nutrition influences short- and long-term health. The objective of the present paper was to explore dietary patterns and their associations with child and parent characteristics at two time points. Parents of Norwegian 2-year-olds were invited, in 1999 (n 3000) and in 2007 (n 2984), to participate in a national dietary survey. At both time points, diet was assessed by a semi-quantitative FFQ that also provided information on several child and parent characteristics. A total of 1373 participants in the 1999 sample and 1472 participants in the 2007 sample were included in the analyses. Dietary patterns were identified by principal components analysis and related to child and parent characteristics using the general linear model, as sketched below. Four dietary patterns were identified at each time point. The 'unhealthy' and 'healthy' patterns in 1999 and 2007 showed similarities with regard to loadings of food groups. Both the 'bread and spread-based' pattern in 1999 and the 'traditional' pattern in 2007 had high positive loadings for bread and spreads; the 'traditional' pattern was, however, also positively associated with a warm meal. The last patterns identified in 1999 and in 2007 were not comparable with regard to loadings of food groups. All dietary patterns were significantly associated with one or several child and parent characteristics. In conclusion, the 'unhealthy' patterns in 1999 and in 2007 showed similarities with regard to loadings of food groups and were, at both time points, associated with sex, breastfeeding at 12 months of age, parity, maternal age and maternal work situation.
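
    A minimal sketch of the pattern-identification step, using principal components analysis on a food-frequency matrix; the food groups, sample size, and intake values are placeholder assumptions, not the survey data. The component scores would then be related to child and parent characteristics via a general linear model, as in the record.

```python
# Identify "dietary patterns" as principal components of a food-frequency
# matrix (rows: children, columns: food groups). Data are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
food_groups = ["bread", "spreads", "fruit", "vegetables", "sweets", "soda"]
X = rng.gamma(2.0, 1.0, size=(1400, len(food_groups)))   # intake frequencies

pca = PCA(n_components=4)                    # four patterns, as in the record
scores = pca.fit_transform(StandardScaler().fit_transform(X))

# A pattern is read off from the loadings: food groups with high positive
# loadings characterize it (e.g. bread and spreads -> "bread-based").
for i, comp in enumerate(pca.components_):
    top = [food_groups[j] for j in np.argsort(comp)[::-1][:2]]
    print(f"pattern {i + 1}: highest loadings on {top}")
```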

  11. Latin Hypercube Sampling (LHS) at variable resolutions for enhanced watershed scale Soil Sampling and Digital Soil Mapping.

    NASA Astrophysics Data System (ADS)

    Hamalainen, Sampsa; Geng, Xiaoyuan; He, Juanxia

    2017-04-01

    The Latin Hypercube Sampling (LHS) approach to assist with Digital Soil Mapping has been developed for some time now; the purpose of this work was to complement LHS with the use of multiple spatial resolutions of covariate datasets and variability in the range of sampling points produced. This allowed specific sets of LHS points to be produced to fulfil the needs of various partners from multiple projects working in the Ontario and Prince Edward Island provinces of Canada. Secondary soil and environmental attributes are critical inputs required in the development of sampling points by LHS. These include a required Digital Elevation Model (DEM) and subsequent covariate datasets produced by a digital terrain analysis performed on the DEM. These additional covariates often include, but are not limited to, Topographic Wetness Index (TWI), Length-Slope (LS) Factor, and slope, which are continuous data. The number of points created in LHS ranged from 50 to 200, depending on the size of the watershed and, more importantly, the number of soil types found within it. The spatial resolution of the covariates ranged from 5 to 30 m. The iterations within the LHS sampling were run at an optimal level, so that the LHS model provided a good spatial representation of the environmental attributes within the watershed. Additional covariates that are categorical in nature, such as external surficial geology data, were also included in the LHS approach. Initial results include the use of 1000 iterations within the LHS model; 1000 iterations was consistently a reasonable value for producing sampling points that represented the environmental attributes well. When working at the same spatial resolution for covariates but modifying only the desired number of sampling points, the change in point location showed a strong geospatial relationship when using continuous data. Access to agricultural fields and adjacent land uses is often "pinned" as the greatest deterrent to performing soil sampling for both soil survey and soil attribute validation work. The lack of access can be a result of poor road access and/or geographical conditions that are difficult for field crews to navigate. This is a simple yet persistent issue for the scientific community and, in particular, soils professionals. Assisting with ease of access to sampling points will be a future contribution to the LHS approach: by removing inaccessible locations from the DEM at the outset, the LHS model can be restricted to locations with access from an adjacent road or trail. To further the approach, a road network geospatial dataset can be included within Geographic Information Systems (GIS) applications to reach already-produced points using a shortest-distance network method.
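
    A highly simplified sketch of the sampling idea: choose field sites from raster covariates so that each covariate's distribution is covered stratum by stratum. Production conditioned-LHS implementations use simulated annealing; the crude random search and random covariates below are assumptions made to keep the example short. Restricting the candidate cells to those within reach of a road, as the record proposes, amounts to masking the covariate array before the search.

```python
# Crude Latin-hypercube-style site selection over raster covariates
# (stand-ins for TWI, LS factor, and slope). A candidate design is scored
# by how many quantile strata of each covariate it occupies.
import numpy as np

rng = np.random.default_rng(4)
n_cells, n_pts = 50_000, 50
covariates = rng.standard_normal((n_cells, 3))   # placeholder covariate rasters

# Quantile-bin edges: n_pts strata per covariate.
edges = [np.quantile(covariates[:, j], np.linspace(0, 1, n_pts + 1))
         for j in range(covariates.shape[1])]

def score(idx):
    """Distinct strata occupied, summed over covariates (maximum 3 * n_pts)."""
    s = 0
    for j, e in enumerate(edges):
        bins = np.clip(np.digitize(covariates[idx, j], e) - 1, 0, n_pts - 1)
        s += np.unique(bins).size
    return s

best_idx, best_s = None, -1
for _ in range(2000):                            # crude random search
    idx = rng.choice(n_cells, n_pts, replace=False)
    s = score(idx)
    if s > best_s:
        best_idx, best_s = idx, s
print(f"best design occupies {best_s} of {3 * n_pts} strata")
```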

  12. Water potential in excised leaf tissue: comparison of a commercial dew point hygrometer and a thermocouple psychrometer on soybean, wheat, and barley.

    PubMed

    Nelsen, C E; Safir, G R; Hanson, A D

    1978-01-01

    Leaf water potential (Psi(leaf)) determinations were made on excised leaf samples using a commercial dew point hygrometer (Wescor Inc., Logan, Utah) and a thermocouple psychrometer operated in the isopiestic mode. With soybean leaves (Glycine max L.), there was good agreement between instruments; equilibration times were 2 to 3 hours. With cereals (Triticum aestivum L. and Hordeum vulgare L.), agreement between instruments was poor for moderately wilted leaves when 7-mm-diameter punches were used in the hygrometer and 20-mm slices were used in the psychrometer, because the Psi(leaf) values from the dew point hygrometer were too high. Agreement was improved by replacing the 7-mm punch samples in the hygrometer by 13-mm slices, which had a lower cut edge to volume ratio. Equilibration times for cereals were normally 6 to 8 hours. Spuriously high Psi(leaf) values obtained with 7-mm leaf punches may be associated with the ion release and reabsorption that occur upon tissue excision; such errors evidently depend both on the species and on tissue water status.

  13. Non-target time trend screening: a data reduction strategy for detecting emerging contaminants in biological samples.

    PubMed

    Plassmann, Merle M; Tengstrand, Erik; Åberg, K Magnus; Benskin, Jonathan P

    2016-06-01

    Non-targeted mass spectrometry-based approaches for detecting novel xenobiotics in biological samples are hampered by the occurrence of naturally fluctuating endogenous substances, which are difficult to distinguish from environmental contaminants. Here, we investigate a data reduction strategy for datasets derived from a biological time series. The objective is to flag reoccurring peaks in the time series based on increasing peak intensities, thereby reducing peak lists to only those which may be associated with emerging bioaccumulative contaminants. As a result, compounds with increasing concentrations are flagged, while compounds displaying random, decreasing, or steady-state time trends are removed. As an initial proof of concept, we created artificial time trends by fortifying human whole blood samples with isotopically labelled standards. Different scenarios were investigated: eight model compounds had a continuously increasing trend over the last two to nine time points, and four model compounds had a trend that reached steady state after an initial increase. Each time series was investigated at three fortification levels plus one unfortified series. Following extraction, analysis by ultra-performance liquid chromatography-high-resolution mass spectrometry, and data processing, a total of 21,700 aligned peaks were obtained. Peaks displaying an increasing trend were filtered from randomly fluctuating peaks using time trend ratios and Spearman's rank correlation coefficients. The first approach was successful in flagging model compounds spiked at only two to three time points, while the latter approach resulted in all model compounds ranking in the top 11% of the peak lists. Compared with the initial peak lists, a combination of both approaches reduced the size of the datasets by 80-85%. Overall, non-target time trend screening represents a promising data reduction strategy for identifying emerging bioaccumulative contaminants in biological samples.
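
    The Spearman-based filter is straightforward to sketch: compute, for each aligned peak, the rank correlation of intensity with time and keep strongly increasing peaks. The peak table below is synthetic and the cutoff is an illustrative assumption.

```python
# Flag peaks whose intensities increase monotonically over the time series
# using Spearman's rank correlation (synthetic peak table).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_peaks, n_times = 1000, 10
peaks = rng.lognormal(size=(n_peaks, n_times))       # fluctuating background
peaks[:8] *= np.linspace(1.0, 5.0, n_times)          # 8 "emerging" compounds

t = np.arange(n_times)
rho = np.array([spearmanr(t, row)[0] for row in peaks])

flagged = np.where(rho > 0.8)[0]                     # illustrative cutoff
print(f"flagged {flagged.size} of {n_peaks} peaks; first hits: {flagged[:10]}")
```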

  14. Using blue mussels (Mytilus spp.) as biosentinels of Cryptosporidium spp. and Toxoplasma gondii contamination in marine aquatic environments

    USDA-ARS?s Scientific Manuscript database

    Methods to monitor microbial contamination typically involve collecting discrete samples at specific time-points and analyzing for a single contaminant. While informative, many of these methods suffer from poor recovery rates and only provide a snapshot of the microbial load at the time of collectio...

  15. Real-time solution of linear computational problems using databases of parametric reduced-order models with arbitrary underlying meshes

    NASA Astrophysics Data System (ADS)

    Amsallem, David; Tezaur, Radek; Farhat, Charbel

    2016-12-01

    A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
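
    The offline/online split can be conveyed with a deliberately simplified sketch: reduced operators precomputed at sampled parameter points, and a ROM operator at a queried point obtained by interpolation. The paper interpolates on matrix manifolds after consistency-enforcing transformations; plain entrywise Lagrange interpolation below is a stand-in for that step, and all matrices are random placeholders.

```python
# Offline: precompute reduced operators at sampled parameter points.
# Online: interpolate them (here, entrywise) at an unsampled point.
import numpy as np
from scipy.interpolate import lagrange   # adequate for a few sample points

rng = np.random.default_rng(6)
k = 4                                    # reduced dimension
params = np.array([0.0, 0.5, 1.0])       # sampled parameter points (offline)
ops = [np.eye(k) + 0.1 * p * rng.standard_normal((k, k)) for p in params]

def rom_operator_at(p):
    """Online step: entrywise polynomial interpolation of the reduced operator."""
    A = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            A[i, j] = lagrange(params, [op[i, j] for op in ops])(p)
    return A

print(rom_operator_at(0.3))
```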

  16. Development of a magnetic lab-on-a-chip for point-of-care sepsis diagnosis

    NASA Astrophysics Data System (ADS)

    Schotter, Joerg; Shoshi, Astrit; Brueckl, Hubert

    2009-05-01

    We present design criteria, operation principles and experimental examples of magnetic marker manipulation for our magnetic lab-on-a-chip prototype. It incorporates both magnetic sample preparation and detection by embedded GMR-type magnetoresistive sensors and is optimized for the automated point-of-care detection of four different sepsis-indicative cytokines directly from about 5 μl of whole blood. The sample volume, magnetic particle size and cytokine concentration determine the microfluidic volume, sensor size and dimensioning of the magnetic gradient field generators. By optimizing these parameters to the specific diagnostic task, best performance is expected with respect to sensitivity, analysis time and reproducibility.

  17. Dual-cloud point extraction coupled to high performance liquid chromatography for simultaneous determination of trace sulfonamide antimicrobials in urine and water samples.

    PubMed

    Nong, Chunyan; Niu, Zongliang; Li, Pengyao; Wang, Chunping; Li, Wanyu; Wen, Yingying

    2017-04-15

    Dual-cloud point extraction (dCPE) was successfully developed for the simultaneous extraction of trace sulfonamides (SAs), including sulfamerazine (SMZ), sulfadoxin (SDX), and sulfathiazole (STZ), from urine and water samples. Several parameters affecting the extraction were optimized, such as sample pH, concentration of Triton X-114, extraction temperature and time, centrifugation rate and time, back-extraction solution pH, back-extraction temperature and time, and back-extraction centrifugation rate and time. High performance liquid chromatography (HPLC) was applied for the SA analysis. Under the optimum extraction and detection conditions, successful separation of the SAs was achieved within 9 min, and excellent analytical performance was attained. Good linear relationships (R² ≥ 0.9990) between peak area and concentration were obtained from 0.02 to 10 μg/mL for SMZ and STZ, and from 0.01 to 10 μg/mL for SDX. Detection limits of 3.0-6.2 ng/mL were achieved. Satisfactory recoveries ranging from 85 to 108% were determined in urine, lake and tap water spiked at 0.2, 0.5 and 1 μg/mL, respectively, with relative standard deviations (RSDs, n=6) of 1.5-7.7%. This method was demonstrated to be convenient, rapid, cost-effective and environmentally benign, and could be used as an alternative to existing methods for analysing trace residues of SAs in urine and water samples.

  18. Reviving common standards in point-count surveys for broad inference across studies

    USGS Publications Warehouse

    Matsuoka, Steven M.; Mahon, C. Lisa; Handel, Colleen M.; Solymos, Peter; Bayne, Erin M.; Fontaine, Patricia C.; Ralph, C.J.

    2014-01-01

    We revisit the common standards recommended by Ralph et al. (1993, 1995a) for conducting point-count surveys to assess the relative abundance of landbirds breeding in North America. The standards originated from discussions among ornithologists in 1991 and were developed so that point-count survey data could be broadly compared and jointly analyzed by national data centers with the goals of monitoring populations and managing habitat. Twenty years later, we revisit these standards because (1) they have not been universally followed and (2) new methods allow estimation of absolute abundance from point counts, but these methods generally require data beyond the original standards to account for imperfect detection. Lack of standardization and the complications it introduces for analysis become apparent from aggregated data. For example, only 3% of 196,000 point counts conducted during the period 1992-2011 across Alaska and Canada followed the standards recommended for the count period and count radius. Ten-minute, unlimited-count-radius surveys increased the number of birds detected by >300% over 3-minute, 50-m-radius surveys. This effect size, which could be eliminated by standardized sampling, was ≥10 times the published effect sizes of observers, time of day, and date of the surveys. We suggest that the recommendations by Ralph et al. (1995a) continue to form the common standards when conducting point counts. This protocol is inexpensive and easy to follow but still allows the surveys to be adjusted for detection probabilities. Investigators might optionally collect additional information so that they can analyze their data with more flexible forms of removal and time-of-detection models, distance sampling, multiple-observer methods, repeated counts, or combinations of these methods. Maintaining the common standards as a base protocol, even as these study-specific modifications are added, will maximize the value of point-count data, allowing compilation and analysis by regional and national data centers.

  19. Exploring the variation of oral microbiota in supragingival plaque during and after head-and-neck radiotherapy using pyrosequencing.

    PubMed

    Gao, Li; Hu, Yuejian; Wang, Yuxia; Jiang, Wenxin; He, Zhiyan; Zhu, Cailian; Ma, Rui; Huang, Zhengwei

    2015-09-01

    The aim of this article was to study the variation in oral microflora of the subgingival plaque during and after radiotherapy. During and after radiotherapy, microbial samples were collected at seven time points (early stage, medium stage, and later stage of radiotherapy, and 1 month, 3 months, 6 months, and 1 year after radiotherapy) in three subjects for a total of 21 samples. Polymerase chain reaction (PCR) amplification was carried out on the 16S rDNA hypervariable V1-V3 region, and then the PCR products were determined by high-throughput pyrosequencing. The rarefaction curve indicating the richness of the microflora demonstrated that the number of operational taxonomic units (OTUs) was in decline from the early stage of radiotherapy to the time point 1 month after radiotherapy and then trended upward. The Shannon diversity index declined during radiotherapy (ranging from 4.59 to 3.73), and generally rose after radiotherapy, with the lowest value of 3.5 (1 month after radiotherapy) and highest value of 4.75 (6 months after radiotherapy). A total of 120 genera were found; five genera (Actinomyces, Veillonella, Prevotella, Streptococcus, Campylobacter) were found in all subjects across all time points. The richness and diversity of oral ecology decreased with increased radiation dose, and it was gradually restored with time.

  20. Serving Real-Time Point Observation Data in netCDF using Climate and Forecasting Discrete Sampling Geometry Conventions

    NASA Astrophysics Data System (ADS)

    Ward-Garrison, C.; May, R.; Davis, E.; Arms, S. C.

    2016-12-01

    NetCDF is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. The Climate and Forecasting (CF) metadata conventions for netCDF foster the ability to work with netCDF files in general and useful ways. These conventions include metadata attributes for physical units, standard names, and spatial coordinate systems. While these conventions have been successful in easing the use of working with netCDF-formatted output from climate and forecast models, their use for point-based observation data has been less so. Unidata has prototyped using the discrete sampling geometry (DSG) CF conventions to serve, using the THREDDS Data Server, the real-time point observation data flowing across the Internet Data Distribution (IDD). These data originate in text format reports for individual stations (e.g. METAR surface data or TEMP upper air data) and are converted and stored in netCDF files in real-time. This work discusses the experiences and challenges of using the current CF DSG conventions for storing such real-time data. We also test how parts of netCDF's extended data model can address these challenges, in order to inform decisions for a future version of CF (CF 2.0) that would take advantage of features of the netCDF enhanced data model.
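
    A minimal sketch of writing point observations under the CF discrete-sampling-geometry conventions with the netCDF4 Python bindings; the file name, variable set, and values are assumptions for illustration. The unlimited obs dimension is what allows real-time reports to be appended as they arrive.

```python
# Write point observations to netCDF following CF DSG (featureType "point").
from netCDF4 import Dataset

ds = Dataset("surface_obs.nc", "w")
ds.Conventions = "CF-1.8"
ds.featureType = "point"

ds.createDimension("obs", None)                 # unlimited: real-time appends
t = ds.createVariable("time", "f8", ("obs",))
t.units = "seconds since 1970-01-01 00:00:00"
t.standard_name = "time"
lat = ds.createVariable("lat", "f4", ("obs",))
lat.units = "degrees_north"
lon = ds.createVariable("lon", "f4", ("obs",))
lon.units = "degrees_east"
temp = ds.createVariable("air_temperature", "f4", ("obs",))
temp.units = "K"
temp.coordinates = "time lat lon"

# Append a decoded METAR-like report as it arrives (illustrative values).
i = len(ds.dimensions["obs"])
t[i], lat[i], lon[i], temp[i] = 1.7e9, 40.0, -105.0, 283.2
ds.close()
```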

  1. Differential Detection of Enterovirus and Herpes Simplex Virus in Cerebrospinal Fluid by Real-Time RT-PCR.

    PubMed

    Sarquiz-Martínez, Brenda; González-Bonilla, César R; Santacruz-Tinoco, Clara Esperanza; Muñoz-Medina, José E; Pardavé-Alejandre, Héctor D; Barbosa-Cabrera, Elizabeth; Ramírez-González, José Ernesto; Díaz-Quiñonez, José Alberto

    2017-01-01

    Enterovirus (EV) and herpes simplex virus 1 and 2 (HSV1 and HSV2) are the main etiologic agents of central nervous system infections. Early laboratory confirmation of these infections is performed by viral culture of the cerebrospinal fluid (CSF), or the detection of specific antibodies in serum (e.g., HSV). The sensitivity of viral culture ranges from 65 to 75%, with a recovery time varying from 3 to 10 days. Serological tests are faster and easy to carry out, but they exhibit cross-reactivity between HSV1 and HSV2. Although molecular techniques are more sensitive (sensitivity >95%), they are more expensive and highly susceptible to cross-contamination. A real-time RT-PCR for the detection of EV, HSV1, and HSV2 was compared with end-point nested PCR. We tested 87 CSF samples of patients with a clinical diagnosis of viral meningitis or encephalitis. Fourteen samples were found to be positive by RT-PCR, but only 8 were positive by end-point PCR. The RT-PCR showed a specificity range of 94-100%, the negative predictive value was 100%, and the positive predictive value was 62, 100, and 28% for HSV1, HSV2, and EV, respectively. Real-time RT-PCR detected EV, HSV1, and HSV2 with a higher sensitivity and specificity than end-point nested RT-PCR.

  2. Lipidic cubic phase injector is a viable crystal delivery system for time-resolved serial crystallography

    DOE PAGES

    Nogly, Przemyslaw; Panneels, Valerie; Nelson, Garrett; ...

    2016-08-22

    Serial femtosecond crystallography (SFX) using X-ray free-electron laser sources is an emerging method with considerable potential for time-resolved pump-probe experiments. Here we present a lipidic cubic phase SFX structure of the light-driven proton pump bacteriorhodopsin (bR) to 2.3 Å resolution and a method to investigate protein dynamics with modest sample requirement. Time-resolved SFX (TR-SFX) with a pump-probe delay of 1 ms yields difference Fourier maps compatible with the dark to M state transition of bR. Importantly, the method is very sample efficient and reduces sample consumption to about 1 mg per collected time point. Accumulation of the M intermediate within the crystal lattice is confirmed by time-resolved visible absorption spectroscopy. Furthermore, this study provides an important step towards characterizing the complete photocycle dynamics of retinal proteins and demonstrates the feasibility of a sample efficient viscous medium jet for TR-SFX.

  3. Lipidic cubic phase injector is a viable crystal delivery system for time-resolved serial crystallography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nogly, Przemyslaw; Panneels, Valerie; Nelson, Garrett

    Serial femtosecond crystallography (SFX) using X-ray free-electron laser sources is an emerging method with considerable potential for time-resolved pump-probe experiments. Here we present a lipidic cubic phase SFX structure of the light-driven proton pump bacteriorhodopsin (bR) to 2.3 Å resolution and a method to investigate protein dynamics with modest sample requirement. Time-resolved SFX (TR-SFX) with a pump-probe delay of 1 ms yields difference Fourier maps compatible with the dark to M state transition of bR. Importantly, the method is very sample efficient and reduces sample consumption to about 1 mg per collected time point. Accumulation of the M intermediate within the crystal lattice is confirmed by time-resolved visible absorption spectroscopy. Furthermore, this study provides an important step towards characterizing the complete photocycle dynamics of retinal proteins and demonstrates the feasibility of a sample efficient viscous medium jet for TR-SFX.

  4. Lipidic cubic phase injector is a viable crystal delivery system for time-resolved serial crystallography

    PubMed Central

    Nogly, Przemyslaw; Panneels, Valerie; Nelson, Garrett; Gati, Cornelius; Kimura, Tetsunari; Milne, Christopher; Milathianaki, Despina; Kubo, Minoru; Wu, Wenting; Conrad, Chelsie; Coe, Jesse; Bean, Richard; Zhao, Yun; Båth, Petra; Dods, Robert; Harimoorthy, Rajiv; Beyerlein, Kenneth R.; Rheinberger, Jan; James, Daniel; DePonte, Daniel; Li, Chufeng; Sala, Leonardo; Williams, Garth J.; Hunter, Mark S.; Koglin, Jason E.; Berntsen, Peter; Nango, Eriko; Iwata, So; Chapman, Henry N.; Fromme, Petra; Frank, Matthias; Abela, Rafael; Boutet, Sébastien; Barty, Anton; White, Thomas A.; Weierstall, Uwe; Spence, John; Neutze, Richard; Schertler, Gebhard; Standfuss, Jörg

    2016-01-01

    Serial femtosecond crystallography (SFX) using X-ray free-electron laser sources is an emerging method with considerable potential for time-resolved pump-probe experiments. Here we present a lipidic cubic phase SFX structure of the light-driven proton pump bacteriorhodopsin (bR) to 2.3 Å resolution and a method to investigate protein dynamics with modest sample requirement. Time-resolved SFX (TR-SFX) with a pump-probe delay of 1 ms yields difference Fourier maps compatible with the dark to M state transition of bR. Importantly, the method is very sample efficient and reduces sample consumption to about 1 mg per collected time point. Accumulation of M intermediate within the crystal lattice is confirmed by time-resolved visible absorption spectroscopy. This study provides an important step towards characterizing the complete photocycle dynamics of retinal proteins and demonstrates the feasibility of a sample efficient viscous medium jet for TR-SFX. PMID:27545823

  5. Comparison of sampling procedures and microbiological and non-microbiological parameters to evaluate cleaning and disinfection in broiler houses.

    PubMed

    Luyckx, K; Dewulf, J; Van Weyenberg, S; Herman, L; Zoons, J; Vervaet, E; Heyndrickx, M; De Reu, K

    2015-04-01

    Cleaning and disinfection of the broiler stable environment is an essential part of farm hygiene management and is essential for the prevention and control of animal diseases and zoonoses. The goal of this study was to shed light on the dynamics of microbiological and non-microbiological parameters during the successive steps of cleaning and disinfection, and to select the most suitable sampling methods and parameters for evaluating cleaning and disinfection in broiler houses. The effectiveness of cleaning and disinfection protocols was measured in six broiler houses on two farms through visual inspection, adenosine triphosphate hygiene monitoring, and microbiological analyses. Samples were taken at three time points: 1) before cleaning, 2) after cleaning, and 3) after disinfection. Before cleaning and after disinfection, air samples were taken in addition to agar contact plates and swab samples from various sampling points for enumeration of total aerobic flora, Enterococcus spp., and Escherichia coli and for the detection of E. coli and Salmonella. After cleaning, air samples, swab samples, and adenosine triphosphate swabs were taken, and a visual score was assigned for each sampling point. The mean total aerobic flora determined by swab samples decreased from 7.7±1.4 to 5.7±1.2 log CFU/625 cm² after cleaning and to 4.2±1.6 log CFU/625 cm² after disinfection. Agar contact plates were used as the standard for evaluating cleaning and disinfection, but in this study they were found to be less suitable than swabs for enumeration. In addition to total aerobic flora, Enterococcus spp. seemed to be a better hygiene indicator for evaluating cleaning and disinfection protocols than E. coli. All stables were Salmonella negative, but the detection of its indicator organism E. coli provided additional information for evaluating cleaning and disinfection protocols. Adenosine triphosphate analyses gave additional information about the hygiene level of the different sampling points.

  6. Effects of sampling strategy, detection probability, and independence of counts on the use of point counts

    USGS Publications Warehouse

    Pendleton, G.W.; Ralph, C. John; Sauer, John R.; Droege, Sam

    1995-01-01

    Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling; each might be most useful for different objectives or field situations. Variation in detection probabilities and lack of independence among sample points can bias estimates and measures of precision. All of these factors should be considered when using point count methods.
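
    The core problem, that incomplete counts confound detectability with abundance, fits in a few lines. Counts are modeled here as Binomial(N, p), with all numbers illustrative.

```python
# Why varying detection probability p biases point counts: the same true
# abundance N yields very different naive estimates unless p is corrected.
import numpy as np

rng = np.random.default_rng(8)
N = 40                                        # true birds present at each point
for p in (0.3, 0.5, 0.8):                     # habitat- or observer-driven p
    counts = rng.binomial(N, p, size=1000)    # observed counts at 1000 points
    print(f"p={p}: naive N-hat {counts.mean():.1f}, "
          f"corrected N-hat {counts.mean() / p:.1f} (true N = {N})")
```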

  7. Estimating subsurface water volumes and transit times in Hokkaido river catchments, Japan, using high-accuracy tritium analysis

    NASA Astrophysics Data System (ADS)

    Gusyev, Maksym; Yamazaki, Yusuke; Morgenstern, Uwe; Stewart, Mike; Kashiwaya, Kazuhisa; Hirai, Yasuyuki; Kuribayashi, Daisuke; Sawano, Hisaya

    2015-04-01

    The goal of this study is to estimate subsurface water transit times and volumes in headwater catchments of Hokkaido, Japan, using the New Zealand high-accuracy tritium analysis technique. Transit time provides insights into subsurface water storage and therefore offers a robust and quick approach to quantifying subsurface groundwater volume. Our method is based on tritium measurements in river water. Tritium is a component of meteoric water, decays with a half-life of 12.32 years, and is inert in the subsurface after the water enters the groundwater system. Tritium is therefore ideally suited to characterizing a catchment's responses and can provide information on mean water transit times of up to 200 years. Only in recent years has it become possible to use tritium for dating stream and river water, owing to the fading impact of bomb tritium from thermonuclear weapons testing and to improved measurement accuracy for the extremely low natural tritium concentrations. The transit time of the water discharge is one of the most crucial parameters for understanding the response of catchments and estimating subsurface water volume. While many tritium transit time studies have been conducted in New Zealand, only a limited number have been conducted in Japan. In addition, the meteorological, orographic and geological conditions of Hokkaido Island are similar to those in parts of New Zealand, allowing comparison between the regions. In 2014, three field trips were conducted in Hokkaido, in June, July and October, to sample river water at river gauging stations operated by the Ministry of Land, Infrastructure, Transport and Tourism (MLIT). These stations have altitudes between 36 and 860 m MSL and drainage areas between 45 and 377 km². Each sampled point is located upstream of MLIT dams, with hourly measurements of precipitation and river water levels enabling us to distinguish between the snowmelt and baseflow contributions to the river discharge. For the June sampling, the tritium and stable isotope results indicate below-normal river discharges, a strong contribution of snowmelt at some sampling points, and relatively short groundwater transit times. The tritium concentrations are used to interpret mean transit times (MTTs) for each sampling point using a tritium input curve constructed from historical International Atomic Energy Agency and available Japanese data, and subsurface volumes are estimated from the MTTs and measured river discharges.
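
    A sketch of the MTT interpretation step: convolve a tritium input series with an exponential transit-time distribution plus radioactive decay, and scan for the mean transit time that reproduces the measured river concentration. The input curve and measured value below are placeholders, and in practice the bomb-peak years can make the solution non-unique.

```python
# Exponential-model interpretation of a river tritium measurement.
import numpy as np

HALF_LIFE = 12.32                         # tritium half-life, years
LAM = np.log(2) / HALF_LIFE
years = np.arange(1960, 2015)
# Placeholder input series: a bomb peak over a natural background (TU).
tritium_in = 5.0 + 200.0 * np.exp(-((years - 1963.0) ** 2) / 20.0)

def predicted(mtt, sample_year=2014.0):
    """c_out = sum over ages of g(a) * c_in(t - a) * exp(-lam * a)."""
    ages = sample_year - years
    g = np.exp(-ages / mtt) / mtt         # exponential TTD, mean = mtt
    return np.sum(g * np.exp(-LAM * ages) * tritium_in) / np.sum(g)

measured = 1.8                            # hypothetical river value, TU
mtts = np.linspace(1.0, 200.0, 400)
best = mtts[np.argmin([abs(predicted(m) - measured) for m in mtts])]
print(f"best-fit mean transit time ~ {best:.0f} years")
# The subsurface water volume then follows as V = MTT x mean discharge.
```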

  8. Cloud point extraction and flame atomic absorption spectrometric determination of cadmium and nickel in drinking and wastewater samples.

    PubMed

    Naeemullah; Kazi, Tasneem G; Shah, Faheem; Afridi, Hassan I; Baig, Jameel Ahmed; Soomro, Abdul Sattar

    2013-01-01

    A simple method for the preconcentration of cadmium (Cd) and nickel (Ni) in drinking and wastewater samples was developed. Cloud point extraction has been used for the preconcentration of both metals, after formation of complexes with 8-hydroxyquinoline (8-HQ) and extraction with the surfactant octylphenoxypolyethoxyethanol (Triton X-114). Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the Cd and Ni contents were measured by flame atomic absorption spectrometry. The experimental variables, such as pH, amounts of reagents (8-HQ and Triton X-114), temperature, incubation time, and sample volume, were optimized. After optimization of the complexation and extraction conditions, enhancement factors of 80 and 61, with LOD values of 0.22 and 0.52 μg/L, were obtained for Cd and Ni, respectively. The proposed method was applied satisfactorily for the determination of both elements in drinking and wastewater samples.

  9. Device for modular input high-speed multi-channel digitizing of electrical data

    DOEpatents

    VanDeusen, A.L.; Crist, C.E.

    1995-09-26

    A multi-channel high-speed digitizer module converts a plurality of analog signals to digital signals (digitizing) and stores the signals in a memory device. The analog input channels are digitized simultaneously at high speed with a relatively large number of on-board memory data points per channel. The module provides an automated calibration based upon a single voltage reference source. Low signal noise at such a high density and sample rate is accomplished by ensuring the A/D converters are clocked at the same point in the noise cycle each time so that synchronous noise sampling occurs. This sampling process, in conjunction with an automated calibration, yields signal noise levels well below the noise level present on the analog reference voltages.

  10. Eating Problems and Their Risk Factors: A 7-Year Longitudinal Study of a Population Sample of Norwegian Adolescent Girls

    ERIC Educational Resources Information Center

    Kansi, Juliska; Wichstrom, Lars; Bergman, Lars R.

    2005-01-01

    The longitudinal stability of eating problems and their relationships to risk factors were investigated in a representative population sample of 623 Norwegian girls aged 13-14 followed over 7 years (3 time points). Three eating problem symptoms were measured: Restriction, Bulimia-food preoccupation, and Diet, all taken from the 12-item Eating…

  11. Continuity of Functional-Somatic Symptoms from Late Childhood to Young Adulthood in a Community Sample

    ERIC Educational Resources Information Center

    Steinhausen, Hans-Christoph; Metzke, Christa Winkler

    2007-01-01

    Background: The goal of this study was to assess the course of functional-somatic symptoms from late childhood to young adulthood and the associations of these symptoms with young adult psychopathology. Methods: Data were collected in a large community sample at three different points in time (1994, 1997, and 2001). Functional-somatic symptoms…

  12. Stress, Social Support, and Outcomes in Two Probability Samples of Homeless Adults

    ERIC Educational Resources Information Center

    Toro, Paul A.; Tulloch, Elizabeth; Ouellette, Nicole

    2008-01-01

    This study investigated the main effects of social support measures and their stress-buffering effects in two samples of homeless adults (Ns =249 and 219) obtained in the same large county (surrounding Detroit) at different points in time over an 8-year period (1992-1994 and 2000-2002). The findings suggest that the construct of social support,…

  13. Two-dimensional T2 distribution mapping in rock core plugs with optimal k-space sampling.

    PubMed

    Xiao, Dan; Balcom, Bruce J

    2012-07-01

    Spin-echo single point imaging has been employed for 1D T(2) distribution mapping, but a simple extension to 2D is challenging since the acquisition time increases n-fold, where n is the number of pixels in the second dimension. Nevertheless, 2D T(2) mapping in fluid-saturated rock core plugs is highly desirable because the bedding plane structure in rocks often results in different pore properties within the sample. The acquisition time can be improved by undersampling k-space. The cylindrical shape of rock core plugs yields well-defined intensity distributions in k-space that may be efficiently determined by the new k-space sampling patterns developed in this work. These patterns acquire 22.2% and 11.7% of the k-space data points. Companion density images may be employed, in a keyhole imaging sense, to improve image quality. T(2)-weighted images are fit to extract T(2) distributions, pixel by pixel, employing an inverse Laplace transform. Images reconstructed with compressed sensing, with similar acceleration factors, are also presented. The results show that restricted k-space sampling, in this application, provides high quality results.
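
    The pixel-by-pixel inverse Laplace step can be sketched as a nonnegative least-squares fit of the echo decay to a grid of candidate T(2) values. Regularization, which a production fit would include, is omitted, and the two-component pixel is synthetic.

```python
# Per-pixel T2 distribution via a nonnegative least-squares inverse
# Laplace transform of multi-exponential decay data (synthetic pixel).
import numpy as np
from scipy.optimize import nnls

te = np.linspace(0.002, 0.4, 32)              # echo times, s
T2_grid = np.logspace(-3, 0, 60)              # candidate T2 values, s
K = np.exp(-te[:, None] / T2_grid[None, :])   # kernel matrix

# Synthetic pixel: two pore populations (T2 = 30 ms and 200 ms) plus noise.
rng = np.random.default_rng(7)
signal = (0.7 * np.exp(-te / 0.03) + 0.3 * np.exp(-te / 0.2)
          + rng.normal(0, 0.005, te.size))

amplitudes, _ = nnls(K, signal)               # T2 distribution for this pixel
peaks = T2_grid[amplitudes > 0.05]
print(f"recovered T2 components near: {np.round(peaks, 3)} s")
```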

  14. A new method for estimating the demographic history from DNA sequences: an importance sampling approach

    PubMed Central

    Ait Kaci Azzou, Sadoune; Larribe, Fabrice; Froda, Sorana

    2015-01-01

    The effective population size over time (demographic history) can be retraced from a sample of contemporary DNA sequences. In this paper, we propose a novel methodology based on importance sampling (IS) for exploring such demographic histories. Our starting point is the generalized skyline plot, the main difference being that our procedure, the skywis plot, uses a large number of genealogies. The information provided by these genealogies is combined according to the IS weights. Thus, we compute a weighted average of the effective population sizes on specific time intervals (epochs), where the genealogies that agree more with the data are given more weight. We illustrate by a simulation study that the skywis plot correctly reconstructs the recent demographic history under the scenarios most commonly considered in the literature. In particular, our method can capture a change point in the effective population size, and its overall performance is comparable with that of the Bayesian skyline plot. We also introduce the case of serially sampled sequences and illustrate that it is possible to improve the performance of the skywis plot in the case of an exponential expansion of the effective population size. PMID:26300910

  15. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions. II. A simplified implementation.

    PubMed

    Tao, Guohua; Miller, William H

    2012-09-28

    An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points and their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor, which is computationally expensive especially for large systems, is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H(2) system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.

  16. The acute effects of 3,4-methylenedioxymethamphetamine and d-methamphetamine on human cognitive functioning.

    PubMed

    Stough, Con; King, Rebecca; Papafotiou, Katherine; Swann, Phillip; Ogden, Edward; Wesnes, Keith; Downey, Luke A

    2012-04-01

    This study investigated the acute (3-h) and 24-h post-dose cognitive effects of oral 3,4-methylenedioxymethamphetamine (MDMA), d-methamphetamine, and placebo in a within-subject, double-blind, laboratory-based study, in order to compare the effects of these two commonly used illicit drugs in a large number of recreational drug users. Sixty-one abstinent recreational users of illicit drugs comprised the participant sample (33 females and 28 males; mean age 25.45 years). The three testing sessions involved oral consumption of 100 mg MDMA, 0.42 mg/kg d-methamphetamine, or a matching placebo. Drug administration was counterbalanced, double-blind, and medically supervised. Cognitive performance was assessed at drug peak (3 h) and at 24 h post-dosing. Blood samples were also taken to quantify the levels of drug present at the cognitive testing time points. Blood concentrations of both methamphetamine and MDMA at drug peak were consistent with levels observed in previous studies. The major findings concern poorer performance in the MDMA condition at peak concentration for the trail-making measures and an index of working memory (trend level), and more accurate performance on a choice reaction task in the methamphetamine condition. Most of the differences in performance between the MDMA, methamphetamine, and placebo treatments had diminished by the 24-h testing time point, although some performance improvements persisted for choice reaction time in the methamphetamine condition. Further research into the acute effects of amphetamine preparations is necessary to further quantify the acute disruption of aspects of human functioning crucial to complex activities such as attention, selective memory, and psychomotor performance.

  17. Preclinical evaluation of spatial frequency domain-enabled wide-field quantitative imaging for enhanced glioma resection

    NASA Astrophysics Data System (ADS)

    Sibai, Mira; Fisher, Carl; Veilleux, Israel; Elliott, Jonathan T.; Leblond, Frederic; Roberts, David W.; Wilson, Brian C.

    2017-07-01

    5-Aminolevulinic acid-induced protoporphyrin IX (PpIX) fluorescence-guided resection (FGR) enables maximum safe resection of glioma by providing real-time tumor contrast. However, subjective visual assessment and the variable intrinsic optical attenuation of tissue limit this technique to reliably delineating only high-grade tumors that display strong fluorescence. We have previously shown, using a fiber-optic probe, that quantitative assessment based on noninvasive point spectroscopic measurements of the absolute PpIX concentration in tissue further improves the accuracy of FGR, extending it to surgically curable low-grade glioma. More recently, we have shown that implementing spatial frequency domain imaging with a fluorescent-light transport model enables recovery of two-dimensional images of [PpIX], alleviating the need for time-consuming point sampling of the brain surface. We present first results of this technique modified for in vivo imaging in an RG2 rat brain tumor model. Despite moderate errors of 14% and 19%, respectively, in retrieving the absorption and reduced scattering coefficients in the subdiffusive regime, the recovered [PpIX] maps agree within 10% with the point [PpIX] values measured by the fiber-optic probe, validating the method's potential as an extension of, or an alternative to, point sampling during glioma resection.

  18. Sensor-triggered sampling to determine instantaneous airborne vapor exposure concentrations.

    PubMed

    Smith, Philip A; Simmons, Michael K; Toone, Phillip

    2018-06-01

    It is difficult to measure transient airborne exposure peaks by means of integrated sampling for organic chemical vapors, even with very short-duration sampling. Selecting an appropriate time to measure an exposure peak through integrated sampling is problematic, and short-duration time-weighted average (TWA) values obtained with integrated sampling are not likely to capture the actual peak concentrations attained when concentrations fluctuate rapidly. Laboratory analysis of integrated exposure samples is preferred from a certainty standpoint over results derived in the field from a sensor, as a sensor user typically must overcome specificity issues and a number of potential interfering factors to obtain similarly reliable data. However, sensors are currently needed to measure intra-exposure-period concentration variations (i.e., exposure peaks). In this article, the digitized signal from a photoionization detector (PID) sensor triggered the collection of whole-air samples when toluene or trichloroethylene vapors attained pre-determined levels in a laboratory atmosphere generation system. Analysis by gas chromatography-mass spectrometry of whole-air samples (at both 37 and 80% relative humidity) collected using the triggering mechanism with rapidly increasing vapor concentrations showed good agreement with the triggering set point values. Whole-air samples (80% relative humidity) in canisters demonstrated acceptable 17-day storage recoveries, and acceptable precision and bias were obtained. The ability to determine exceedance of a ceiling or peak exposure standard by laboratory analysis of an instantaneously collected sample, and to simultaneously provide a calibration point to verify the correct operation of a sensor, was demonstrated. This latter detail may increase confidence in the reliability of sensor data obtained across an entire exposure period.
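
    The triggering logic itself is simple to sketch: watch the digitized PID stream and fire the canister valve the first time the reading crosses the set point. The signal shape, set point, and messages below are illustrative assumptions, not the authors' hardware interface.

```python
# Threshold-triggered whole-air sampling from a digitized sensor stream.
import numpy as np

rng = np.random.default_rng(9)
t = np.arange(0, 60, 0.1)                          # seconds
ppm = 2 + 48 / (1 + np.exp(-(t - 30)))             # rapidly rising vapor peak
ppm += rng.normal(0, 0.5, t.size)                  # sensor noise

SET_POINT = 25.0                                   # ppm trigger level
armed = True
for ti, ci in zip(t, ppm):
    if armed and ci >= SET_POINT:
        print(f"trigger at t={ti:.1f} s, reading {ci:.1f} ppm -> open canister valve")
        armed = False                              # fire once per exposure event
```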

  19. Replica approach to mean-variance portfolio optimization

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix, subject to the budget constraint and the constraint on the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.
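
    The 1/(1 - r) blow-up is easy to reproduce numerically: estimate a minimum-variance portfolio from T observations of N uncorrelated assets and compare the in-sample variance with the portfolio's true variance as r = N/T grows. A minimal sketch, assuming iid unit-variance assets so that the true covariance is the identity:

```python
# In-sample vs. true variance of an estimated minimum variance portfolio.
import numpy as np

rng = np.random.default_rng(10)
N = 50
for T in (500, 100, 60):                  # r = 0.10, 0.50, 0.83
    X = rng.standard_normal((T, N))       # returns; true covariance = I
    C = np.cov(X, rowvar=False)           # sample covariance estimate
    w = np.linalg.solve(C, np.ones(N))
    w /= w.sum()                          # budget constraint: weights sum to 1
    in_sample = w @ C @ w                 # what the optimizer "sees"
    true_var = w @ w                      # actual variance under identity cov
    r = N / T
    print(f"r={r:.2f}: in-sample {in_sample:.4f}, true {true_var:.4f}, "
          f"theory true/ideal = {1 / (1 - r):.2f} (ideal = 1/N = {1 / N:.4f})")
```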

  20. In-Situ Real-Time Focus Detection during Laser Processing Using Double-Hole Masks and Advanced Image Sensor Software

    PubMed Central

    Hoang, Phuong Le; Ahn, Sanghoon; Kim, Jeng-o; Kang, Heeshin; Noh, Jiwhan

    2017-01-01

    In modern high-intensity ultrafast laser processing, detecting the focal position of the working laser beam, at which the intensity is the highest and the beam diameter is the lowest, and immediately locating the target sample at that point are challenging tasks. A system that allows in-situ real-time focus determination and fabrication using a high-power laser has been in high demand among both engineers and scientists. Conventional techniques require the complicated mathematical theory of wave optics, employing interference as well as diffraction phenomena to detect the focal position; however, these methods are ineffective and expensive for industrial application. Moreover, these techniques could not perform detection and fabrication simultaneously. In this paper, we propose an optical design capable of detecting the focal point and fabricating complex patterns on a planar sample surface simultaneously. In-situ real-time focus detection is performed using a bandpass filter, which only allows for the detection of laser transmission. The technique enables rapid, non-destructive, and precise detection of the focal point. Furthermore, it is sufficiently simple for application in both science and industry for mass production, and it is expected to contribute to the next generation of laser equipment, which can be used to fabricate micro-patterns with high complexity. PMID:28671566

  1. Caesium-137 and strontium-90 temporal series in the Tagus River: experimental results and a modelling study.

    PubMed

    Miró, Conrado; Baeza, Antonio; Madruga, María J; Periañez, Raul

    2012-11-01

    The objective of this work consisted of analysing the spatial and temporal evolution of two radionuclide concentrations in the Tagus River. Time-series analysis techniques and numerical modelling have been used in this study. (137)Cs and (90)Sr concentrations have been measured from 1994 to 1999 at several sampling points in Spain and Portugal. These radionuclides have been introduced into the river by the liquid releases from several nuclear power plants in Spain, as well as from global fallout. Time-series analysis techniques have allowed the determination of radionuclide transit times along the river, and have also pointed out the existence of temporal cycles of radionuclide concentrations at some sampling points, which are attributed to water management in the reservoirs placed along the Tagus River. A stochastic dispersion model, in which transport with water, radioactive decay and water-sediment interactions are solved through Monte Carlo methods, has been developed. Model results are, in general, in reasonable agreement with measurements. The model has finally been applied to the calculation of mean ages of radioactive content in water and sediments in each reservoir. This kind of model can be a very useful tool to support the decision-making process after an eventual emergency situation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Resilience and distress: Israelis respond to the disengagement from Gaza and the second Lebanese war.

    PubMed

    Ben-Zur, Hasida; Gilbar, Ora

    2011-10-01

    Resilience and distress in Israeli society were assessed at three points in time: before and after the Israeli disengagement from Gaza, and after the second Lebanese war. A random sample of 366 Israelis was assessed for nation-related anxiety and hostility, personal resources and post-traumatic symptoms. The lowest levels of anxiety were observed at the second time point, after the disengagement. Respondents with high-resilience profiles showed lower levels of post-traumatic symptoms and higher levels of personal resources. The findings underscore Israelis' resilience and the importance of personal resources in ongoing nationally stressful situations.

  3. Attosecond transient absorption instrumentation for thin film materials: Phase transitions, heat dissipation, signal stabilization, timing correction, and rapid sample rotation.

    PubMed

    Jager, Marieke F; Ott, Christian; Kaplan, Christopher J; Kraus, Peter M; Neumark, Daniel M; Leone, Stephen R

    2018-01-01

    We present an extreme ultraviolet (XUV) transient absorption apparatus tailored to attosecond and femtosecond measurements on bulk solid-state thin-film samples, specifically when the sample dynamics are sensitive to heating effects. The setup combines methodology for stabilizing sub-femtosecond time-resolution measurements over 48 h and techniques for mitigating heat buildup in temperature-dependent samples. Single-point beam stabilization in pump and probe arms and periodic time-zero reference measurements are described for accurate timing and stabilization. A hollow-shaft motor configuration for rapid sample rotation, raster scanning capability, and additional diagnostics are described for heat mitigation. Heat transfer simulations performed using a finite element analysis allow comparison of sample rotation and traditional raster scanning techniques for 100 Hz pulsed laser measurements on vanadium dioxide, a material that undergoes an insulator-to-metal transition at a modest temperature of 340 K. Experimental results are presented confirming that the vanadium dioxide (VO2) sample cannot cool below its phase transition temperature between laser pulses without rapid rotation, in agreement with the simulations. The findings indicate the stringent conditions required to perform rigorous broadband XUV time-resolved absorption measurements on bulk solid-state samples, particularly those with temperature sensitivity, and elucidate a clear methodology to perform them.

  4. Attosecond transient absorption instrumentation for thin film materials: Phase transitions, heat dissipation, signal stabilization, timing correction, and rapid sample rotation

    NASA Astrophysics Data System (ADS)

    Jager, Marieke F.; Ott, Christian; Kaplan, Christopher J.; Kraus, Peter M.; Neumark, Daniel M.; Leone, Stephen R.

    2018-01-01

    We present an extreme ultraviolet (XUV) transient absorption apparatus tailored to attosecond and femtosecond measurements on bulk solid-state thin-film samples, specifically when the sample dynamics are sensitive to heating effects. The setup combines methodology for stabilizing sub-femtosecond time-resolution measurements over 48 h and techniques for mitigating heat buildup in temperature-dependent samples. Single-point beam stabilization in pump and probe arms and periodic time-zero reference measurements are described for accurate timing and stabilization. A hollow-shaft motor configuration for rapid sample rotation, raster scanning capability, and additional diagnostics are described for heat mitigation. Heat transfer simulations performed using a finite element analysis allow comparison of sample rotation and traditional raster scanning techniques for 100 Hz pulsed laser measurements on vanadium dioxide, a material that undergoes an insulator-to-metal transition at a modest temperature of 340 K. Experimental results are presented confirming that the vanadium dioxide (VO2) sample cannot cool below its phase transition temperature between laser pulses without rapid rotation, in agreement with the simulations. The findings indicate the stringent conditions required to perform rigorous broadband XUV time-resolved absorption measurements on bulk solid-state samples, particularly those with temperature sensitivity, and elucidate a clear methodology to perform them.

  5. Trace-metal contamination in the glacierized Rio Santa watershed, Peru.

    PubMed

    Guittard, Alexandre; Baraer, Michel; McKenzie, Jeffrey M; Mark, Bryan G; Wigmore, Oliver; Fernandez, Alfonso; Rapre, Alejo C; Walsh, Elizabeth; Bury, Jeffrey; Carey, Mark; French, Adam; Young, Kenneth R

    2017-11-25

    The objective of this research is to characterize the variability of trace metals in the Rio Santa watershed based on synoptic sampling applied at a large scale. To that end, we propose a combination of methods based on the collection of water, suspended sediments, and riverbed sediments at different points of the watershed within a very limited period. Forty points within the Rio Santa watershed were sampled between June 21 and July 8, 2013. Forty water samples, 36 suspended sediment samples, and 34 riverbed sediment samples were analyzed for seven trace metals. The results, which were normalized using the USEPA guideline for water and sediments, show that the Rio Santa water exhibits Mn concentrations higher than the guideline at more than 50% of the sampling points. Arsenic (As) is the second most prevalent contaminating element in the water, with approximately 10% of the samples containing concentrations above the guideline. Sediments collected in the Rio Santa riverbed were heavily contaminated by at least four of the tested elements at nearly 85% of the sample points, with As presenting the highest normalized concentration, at more than ten times the guideline. As, Cd, Fe, Pb, and Zn present similar concentration trends in the sediment all along the Rio Santa. The findings indicate that care should be taken in using the Rio Santa water and sediments for purposes that could affect the health of humans or the ecosystem. The situation is worse in some tributaries in the southern part of the watershed that host both active and abandoned mines and ore-processing plants.

  6. Dynamics of salivary proteins and metabolites during extreme endurance sports - a case study.

    PubMed

    Zauber, Henrik; Mosler, Stephan; von Heßberg, Andreas; Schulze, Waltraud X

    2012-07-01

    As a noninvasively accessible body fluid, saliva is of growing interest in diagnostics. To exemplify the diagnostic potential of saliva, we used a mass spectrometry-based approach to gain insights into adaptive physiological processes underlying long-lasting endurance work load in a case study. Saliva was collected from a male and a female athlete at four diurnal time points throughout a 1060 km nonstop cycling event. Total sampling time covered 180 h, comprising 62 h of endurance cycling as well as reference samples taken over 3 days before the event and over 2 days after. Altogether, 1405 proteins and 62 metabolites were identified in these saliva samples, of which 203 could be quantified across the majority of the sampling time points. Many proteins show clear diurnal abundance patterns in saliva. In many cases, these patterns were disturbed and altered by the long-term endurance stress. During the stress phase, metabolites of energy mobilization, such as creatinine and glucose, were highly abundant, as were metabolites with antioxidant functions. Lysozyme, amylase, and proteins with redox-regulatory function showed a significant increase in average abundance during the work phase compared to the rest and recovery phases. The recovery phase was characterized by an increased abundance of immunoglobulins. Our work exemplifies the application of high-throughput technologies to understand adaptive processes in human physiology. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Microwave absorbing properties and enhanced infrared reflectance of Fe/Cu composites prepared by chemical plating

    NASA Astrophysics Data System (ADS)

    Li, Xiaoguang; Ji, Guangbin; Lv, Hualiang; Wang, Min; Du, Youwei

    2014-04-01

    Fe/Cu composite samples with Cu particles deposited on carbonyl iron sheets were prepared by chemical plating. The Cu additions were uniformly distributed on the grain boundaries of the flaky carbonyl iron while keeping the internal structure of the iron intact. Meanwhile, we found that the chemical plating time plays a key role in both the microwave absorbing properties and the infrared emissivity. As the chemical plating time increases, the reflection loss decreases linearly and the infrared emissivity is reduced with an exponentially decaying tendency. When the plating time is less than 30 min, the reflection loss of the samples remains above -20 dB; moreover, when the plating time exceeds 30 min, the infrared emissivity of the samples is reduced to 0.50 or less. It can be concluded that both the microwave absorbing and infrared properties are excellent at the optimal plating time of 30 min.

  8. Emotion Regulation Profiles, Temperament, and Adjustment Problems in Preadolescents

    PubMed Central

    Zalewski, Maureen; Lengua, Liliana J.; Trancik, Anika; Wilson, Anna C.; Bazinet, Alissa

    2014-01-01

    The longitudinal relations of emotion regulation profiles to temperament and adjustment in a community sample of preadolescents (N = 196, 8–11 years at Time 1) were investigated using person-oriented latent profile analysis (LPA). Temperament, emotion regulation, and adjustment were measured at 3 different time points, with each time point occurring 1 year apart. LPA identified 5 frustration and 4 anxiety regulation profiles based on children’s physiological, behavioral, and self-reported reactions to emotion-eliciting tasks. The relation of effortful control to conduct problems was mediated by frustration regulation profiles, as was the relation of effortful control to depression. Anxiety regulation profiles did not mediate relations between temperament and adjustment. PMID:21413935

  9. Euthanasia Method for Mice in Rapid Time-Course Pulmonary Pharmacokinetic Studies

    PubMed Central

    Schoell, Adam R; Heyde, Bruce R; Weir, Dana E; Chiang, Po-Chang; Hu, Yiding; Tung, David K

    2009-01-01

    To develop a means of euthanasia to support rapid time-course pharmacokinetic studies in mice, we compared retroorbital and intravenous lateral tail vein injection of ketamine–xylazine with regard to preparation time, utility, tissue distribution, and time to onset of euthanasia. Tissue distribution and time to onset of euthanasia did not differ between administration methods. However, retroorbital injection could be performed more rapidly than intravenous injection and was considered to be a technically simple and superior alternative for mouse euthanasia. Retroorbital ketamine–xylazine, CO2 gas, and intraperitoneal pentobarbital then were compared as euthanasia agents in a rapid time-point pharmacokinetic study. Retroorbital ketamine–xylazine was the most efficient and consistent of the 3 methods, with an average time to death of approximately 5 s after injection. In addition, euthanasia by retroorbital ketamine–xylazine enabled accurate sample collection at closely spaced time points and satisfied established criteria for acceptable euthanasia technique. PMID:19807971

  10. Euthanasia method for mice in rapid time-course pulmonary pharmacokinetic studies.

    PubMed

    Schoell, Adam R; Heyde, Bruce R; Weir, Dana E; Chiang, Po-Chang; Hu, Yiding; Tung, David K

    2009-09-01

    To develop a means of euthanasia to support rapid time-course pharmacokinetic studies in mice, we compared retroorbital and intravenous lateral tail vein injection of ketamine-xylazine with regard to preparation time, utility, tissue distribution, and time to onset of euthanasia. Tissue distribution and time to onset of euthanasia did not differ between administration methods. However, retroorbital injection could be performed more rapidly than intravenous injection and was considered to be a technically simple and superior alternative for mouse euthanasia. Retroorbital ketamine-xylazine, CO(2) gas, and intraperitoneal pentobarbital then were compared as euthanasia agents in a rapid time-point pharmacokinetic study. Retroorbital ketamine-xylazine was the most efficient and consistent of the 3 methods, with an average time to death of approximately 5 s after injection. In addition, euthanasia by retroorbital ketamine-xylazine enabled accurate sample collection at closely spaced time points and satisfied established criteria for acceptable euthanasia technique.

  11. Breaking through the bandwidth barrier in distributed fiber vibration sensing by sub-Nyquist randomized sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdong; Zhu, Tao; Zheng, Hua; Kuang, Yang; Liu, Min; Huang, Wei

    2017-04-01

    The round-trip time of the light pulse limits the maximum detectable frequency response range of vibration in phase-sensitive optical time domain reflectometry (φ-OTDR). We propose a method to break the frequency response range restriction of the φ-OTDR system by modulating the light pulse interval randomly, which enables random sampling for every vibration point along a long sensing fiber. This sub-Nyquist randomized sampling method is suited to detecting sparse-in-frequency wideband vibration signals. Resonance vibration signals up to the MHz range with dozens of frequency components, as well as a 1.153 MHz single-frequency vibration signal, are clearly identified over a 9.6 km sensing range with a 10 kHz maximum sampling rate.
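    The alias-breaking effect of randomized pulse intervals can be illustrated with a nonuniform discrete Fourier transform. The sketch below uses invented parameters rather than the authors' φ-OTDR processing chain: sampling instants jittered around a 10 kHz mean rate, a 1.153 MHz test tone, and a brute-force spectrum over a candidate grid far above the 5 kHz uniform-sampling Nyquist limit.

      import numpy as np

      rng = np.random.default_rng(1)
      f_true = 1.153e6                 # Hz, far above the 5 kHz uniform Nyquist limit
      mean_rate = 1.0e4                # 10 kHz average sampling rate
      n = 2048

      # Randomized sampling instants: intervals jittered around 1/mean_rate
      t = np.cumsum(rng.uniform(0.5, 1.5, n)) / mean_rate
      x = np.sin(2 * np.pi * f_true * t)

      # Nonuniform DFT over a grid beyond the uniform-sampling Nyquist frequency;
      # random intervals decorrelate the aliases that regular sub-Nyquist sampling produces
      freqs = np.linspace(1.0e6, 1.3e6, 1501)
      spectrum = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ x)
      print(f"peak at {freqs[np.argmax(spectrum)] / 1e6:.4f} MHz (true {f_true / 1e6:.4f} MHz)")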

  12. VizieR Online Data Catalog: ChaMP X-ray point source catalog (Kim+, 2007)

    NASA Astrophysics Data System (ADS)

    Kim, M.; Kim, D.-W.; Wilkes, B. J.; Green, P. J.; Kim, E.; Anderson, C. S.; Barkhouse, W. A.; Evans, N. R.; Ivezic, Z.; Karovska, M.; Kashyap, V. L.; Lee, M. G.; Maksym, P.; Mossman, A. E.; Silverman, J. D.; Tananbaum, H. D.

    2009-01-01

    We present the Chandra Multiwavelength Project (ChaMP) X-ray point source catalog with ~6800 X-ray sources detected in 149 Chandra observations covering ~10deg2. The full ChaMP catalog sample is 7 times larger than the initial published ChaMP catalog. The exposure time of the fields in our sample ranges from 0.9 to 124ks, corresponding to a deepest X-ray flux limit of f0.5-8.0=9x10-16ergs/cm2/s. The ChaMP X-ray data have been uniformly reduced and analyzed with ChaMP-specific pipelines and then carefully validated by visual inspection. The ChaMP catalog includes X-ray photometric data in eight different energy bands as well as X-ray spectral hardness ratios and colors. To best utilize the ChaMP catalog, we also present the source reliability, detection probability, and positional uncertainty. (10 data files).

  13. Pulse-echo ultrasonic imaging method for eliminating sample thickness variation effects

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1995-01-01

    A pulse-echo, immersion method for ultrasonic evaluation of a material is discussed. The method accounts for and eliminates nonlevelness in the equipment set-up and sample thickness variation effects; it employs a single transducer, automatic scanning, and digital imaging to obtain an image of a property of the material, such as pore fraction. The nonlevelness and thickness variation effects are accounted for by pre-scan adjustments of the time window to ensure that the echoes received at each scan point are gated in the center of the window. This information is input into the scan file so that, during the automatic scanning for the material evaluation, each received echo is centered in its time window. A cross-correlation function calculates the velocity at each scan point, which is then mapped proportionally to a color or grey scale and displayed on a video screen.

  14. On the precision of automated activation time estimation

    NASA Technical Reports Server (NTRS)

    Kaplan, D. T.; Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.

    1988-01-01

    We examined how the assignment of local activation times in epicardial and endocardial electrograms is affected by sampling rate, ambient signal-to-noise ratio, and sin(x)/x waveform interpolation. Algorithms used for the estimation of fiducial point locations included maximum dV/dt and a matched-filter detection algorithm. Test signals included epicardial and endocardial electrograms overlying both normal and infarcted regions of dog myocardium. Signal-to-noise levels were adjusted by combining known data sets with white noise "colored" to match the spectral characteristics of experimentally recorded noise. For typical signal-to-noise ratios and sampling rates, the template-matching algorithm provided the greatest precision in reproducibly estimating fiducial point location, and sin(x)/x interpolation allowed an additional significant improvement. With few restrictions, combining these two techniques may allow the use of digitization rates below the Nyquist rate without significant loss of precision.
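    As a rough illustration of the sin(x)/x interpolation step, the sketch below applies Whittaker-Shannon reconstruction to a band-limited test deflection so that the peak (a stand-in fiducial point) can be located to better than the sampling interval. The waveform and rates are invented for the example; the dV/dt and matched-filter detectors themselves are omitted.

      import numpy as np

      def sinc_interp(x, dt, t_fine):
          """Whittaker-Shannon sin(x)/x reconstruction of a uniformly sampled signal."""
          n = np.arange(len(x))
          return np.array([np.sum(x * np.sinc(tf / dt - n)) for tf in t_fine])

      # Band-limited test deflection sampled at 1 kHz, peaking between sample points
      dt = 1e-3
      t = np.arange(64) * dt
      x = np.exp(-((t - 0.0317) / 0.004) ** 2)

      t_fine = np.arange(0.025, 0.040, 1e-5)        # 10 us grid around the peak
      y = sinc_interp(x, dt, t_fine)
      print(f"coarse peak: {t[np.argmax(x)] * 1e3:.1f} ms, "
            f"interpolated peak: {t_fine[np.argmax(y)] * 1e3:.2f} ms (true 31.7 ms)")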

  15. Real-time photonic sampling with improved signal-to-noise and distortion ratio using polarization-dependent modulators

    NASA Astrophysics Data System (ADS)

    Liang, Dong; Zhang, Zhiyao; Liu, Yong; Li, Xiaojun; Jiang, Wei; Tan, Qinggui

    2018-04-01

    A real-time photonic sampling structure with effective nonlinearity suppression and excellent signal-to-noise ratio (SNR) performance is proposed. The key elements of this scheme are the polarization-dependent modulators (P-DMZMs) and the Sagnac loop structure. Thanks to the polarization-sensitive characteristic of P-DMZMs, the differences between the transfer functions of the fundamental signal and the distortion become visible. Meanwhile, the selection of specific bias points in the P-DMZMs helps achieve a preferable linearized performance with a low noise level for real-time photonic sampling. Compared with the quadrature-biased scheme, the proposed scheme is capable of effective nonlinearity suppression and provides better SNR performance even over a large frequency range. The proposed scheme is shown to be effective and easily implemented for real-time photonic applications.

  16. Accuracy and optimal timing of activity measurements in estimating the absorbed dose of radioiodine in the treatment of Graves' disease

    NASA Astrophysics Data System (ADS)

    Merrill, S.; Horowitz, J.; Traino, A. C.; Chipkin, S. R.; Hollot, C. V.; Chait, Y.

    2011-02-01

    Calculation of the therapeutic activity of radioiodine 131I for individualized dosimetry in the treatment of Graves' disease requires an accurate estimate of the thyroid absorbed radiation dose based on a tracer activity administration of 131I. Common approaches (Marinelli-Quimby formula, MIRD algorithm) use, respectively, the effective half-life of radioiodine in the thyroid and the time-integrated activity. Many physicians perform one, two, or at most three tracer dose activity measurements at various times and calculate the required therapeutic activity by ad hoc methods. In this paper, we study the accuracy of estimates of four 'target variables': time-integrated activity coefficient, time of maximum activity, maximum activity, and effective half-life in the gland. Clinical data from 41 patients who underwent 131I therapy for Graves' disease at the University Hospital in Pisa, Italy, are used for analysis. The radioiodine kinetics are described using a nonlinear mixed-effects model. The distributions of the target variables in the patient population are characterized. Using minimum root mean squared error as the criterion, optimal 1-, 2-, and 3-point sampling schedules are determined for estimation of the target variables, and probabilistic bounds are given for the errors under the optimal times. An algorithm is developed for computing the optimal 1-, 2-, and 3-point sampling schedules for the target variables. This algorithm is implemented in a freely available software tool. Taking into consideration 131I effective half-life in the thyroid and measurement noise, the optimal 1-point time for time-integrated activity coefficient is a measurement 1 week following the tracer dose. Additional measurements give only a slight improvement in accuracy.
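    The brute-force flavor of such a schedule search can be sketched as follows. This is a deliberately simplified stand-in for the authors' nonlinear mixed-effects model and software: bi-exponential uptake/clearance kinetics with invented population parameters, a single noisy measurement scaled by the population-mean curve, and a grid over candidate measurement days for estimating the time-integrated activity coefficient.

      import numpy as np

      rng = np.random.default_rng(2)
      n_pat = 2000
      T_eff = rng.normal(6.0, 1.0, n_pat)     # effective half-life in the gland (days), invented
      T_up = rng.normal(0.25, 0.05, n_pat)    # uptake half-time (days), invented
      lam_e, lam_u = np.log(2) / T_eff, np.log(2) / T_up

      def frac(t, le, lu):
          """Fractional thyroid activity under bi-exponential uptake/clearance kinetics."""
          return lu / (lu - le) * (np.exp(-le * t) - np.exp(-lu * t))

      tia_true = 1.0 / lam_e                  # closed-form integral of frac(t) over (0, inf)

      le0, lu0 = np.log(2) / 6.0, np.log(2) / 0.25   # population-mean kinetics
      for day in (1, 2, 4, 7, 10, 14):               # candidate single-measurement times
          meas = frac(day, lam_e, lam_u) * (1 + 0.05 * rng.standard_normal(n_pat))
          tia_hat = meas * (1.0 / le0) / frac(day, le0, lu0)   # rescale population-mean TIA
          rel_rmse = np.sqrt(np.mean(((tia_hat - tia_true) / tia_true) ** 2))
          print(f"day {day:2d}: relative RMSE = {rel_rmse:.3f}")

    With these invented kinetics the error is smallest for measurements around one week to ten days after the tracer dose, qualitatively echoing the paper's one-week optimum; the exact minimum shifts with the assumed parameters.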

  17. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    NASA Astrophysics Data System (ADS)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
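    The images-versus-points trade-off can be mimicked with a toy Monte Carlo in the spirit of the authors' simulations (the cover parameters below are invented): because between-image variation in true cover dominates within-image binomial noise, adding images shrinks the standard error faster than adding points within images.

      import numpy as np

      rng = np.random.default_rng(5)

      def survey_se(n_images, n_points, n_sims=2000):
          """Standard error of a percent-cover estimate from point-scored images."""
          cover = np.clip(rng.normal(0.2, 0.1, (n_sims, n_images)), 0.0, 1.0)
          hits = rng.binomial(n_points, cover)        # random points landing on target biota
          return (hits.mean(axis=1) / n_points).std()

      for n_img, n_pts in ((20, 25), (20, 100), (80, 25)):
          print(f"{n_img} images x {n_pts} points: SE = {survey_se(n_img, n_pts):.4f}")

    Quadrupling the points per image barely moves the standard error here, while quadrupling the number of images roughly halves it.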

  18. Hepatitis E Virus in Pork Production Chain in Czech Republic, Italy, and Spain, 2010

    PubMed Central

    Di Bartolo, Ilaria; Diez-Valcarce, Marta; Vasickova, Petra; Kralik, Petr; Hernandez, Marta; Angeloni, Giorgia; Ostanello, Fabio; Bouwknegt, Martijn; Rodríguez-Lázaro, David; Pavlik, Ivo

    2012-01-01

    We evaluated the prevalence of hepatitis E virus (HEV) in the pork production chain in the Czech Republic, Italy, and Spain during 2010. A total of 337 fecal, liver, and meat samples from animals at slaughterhouses were tested for HEV by real-time quantitative PCR. Overall, HEV prevalence was higher in Italy (53%) and Spain (39%) than in the Czech Republic (7.5%). HEV was detected most frequently in feces in Italy (41%) and Spain (39%) and in liver (5%) and meat (2.5%) in the Czech Republic. Of 313 sausages sampled at processing and point of sale, HEV was detected only in Spain (6%). HEV sequencing confirmed only genotype 3 (g3) HEV strains. The indicator virus (porcine adenovirus) was ubiquitous in fecal samples, absent in liver samples, and detected in one slaughterhouse meat sample. At point of sale, we found porcine adenovirus in sausages (1%-2%). The possible dissemination of HEV and other fecal viruses through pork production demands containment measures. PMID:22840221

  19. Sampled control stability of the ESA instrument pointing system

    NASA Astrophysics Data System (ADS)

    Thieme, G.; Rogers, P.; Sciacovelli, D.

    Stability analysis and simulation results are presented for the ESA Instrument Pointing System (IPS) that is to be used in Spacelab's second launch. Of the two IPS plant dynamic models used in the ESA and NASA activities, one is based on six interconnected rigid bodies that represent the IPS and its payload, while the other follows the NASA practice of defining an IPS-Spacelab 2 plant configuration through a structural finite element model, which is then used to generate modal data for various pointing directions. In both cases, the IPS dynamic plant model is truncated, then discretized at the sampling frequency and interfaced to a PID-based control law. A stability analysis has been carried out in the discrete domain for various instrument pointing directions, taking into account suitable parameter variation ranges. A number of time simulations are presented.

  20. Calculation of power spectrums from digital time series with missing data points

    NASA Technical Reports Server (NTRS)

    Murray, C. W., Jr.

    1980-01-01

    Two algorithms are developed for calculating power spectrums from the autocorrelation function when there are missing data points in the time series. Both methods use an average sampling interval to compute lagged products. One method, the correlation function power spectrum, takes the discrete Fourier transform of the lagged products directly to obtain the spectrum, while the other, the modified Blackman-Tukey power spectrum, takes the Fourier transform of the mean lagged products. Both techniques require fewer calculations than other procedures since only 50% to 80% of the maximum lags need be calculated. The algorithms are compared with the Fourier transform power spectrum and two least squares procedures (all for an arbitrary data spacing). Examples are given showing recovery of frequency components from simulated periodic data where portions of the time series are missing and random noise has been added to both the time points and to values of the function. In addition the methods are compared using real data. All procedures performed equally well in detecting periodicities in the data.
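    A minimal sketch of the modified Blackman-Tukey route, under the simplifying assumption of a uniform grid with dropouts (the paper's average-sampling-interval treatment handles arbitrary spacing): mean lagged products are formed from whichever sample pairs survive, windowed, and Fourier transformed.

      import numpy as np

      rng = np.random.default_rng(3)
      n, dt = 1024, 1.0
      t = np.arange(n) * dt
      x = np.sin(2 * np.pi * 0.05 * t) + 0.5 * rng.standard_normal(n)
      x[rng.random(n) < 0.3] = np.nan           # remove ~30% of the points

      max_lag = n // 2                          # only half the maximum lags are needed
      acf = np.empty(max_lag)
      for k in range(max_lag):
          a, b = x[:n - k], x[k:]
          ok = ~np.isnan(a) & ~np.isnan(b)
          acf[k] = np.mean(a[ok] * b[ok])       # mean lagged product over available pairs

      window = np.hanning(2 * max_lag)[max_lag:]    # lag window tapering to zero
      freqs = np.fft.rfftfreq(max_lag, dt)
      spec = np.abs(np.fft.rfft(acf * window))
      print(f"spectral peak at {freqs[np.argmax(spec)]:.4f} cycles/unit time (true 0.05)")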

  1. The coalescent of a sample from a binary branching process.

    PubMed

    Lambert, Amaury

    2018-04-25

    At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables: first the random sampling probability Y, and then the k-1 node depths, which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.

  2. Lab-on-chip systems for integrated bioanalyses

    PubMed Central

    Madaboosi, Narayanan; Soares, Ruben R.G.; Fernandes, João Tiago S.; Novo, Pedro; Moulas, Geraud; Chu, Virginia

    2016-01-01

    Biomolecular detection systems based on microfluidics are often called lab-on-chip systems. To fully benefit from the miniaturization resulting from microfluidics, one aims to develop ‘from sample-to-answer’ analytical systems, in which the input is a raw or minimally processed biological, food/feed or environmental sample and the output is a quantitative or qualitative assessment of one or more analytes of interest. In general, such systems will require the integration of several steps or operations to perform their function. This review will discuss these stages of operation, including fluidic handling, which assures that the desired fluid arrives at a specific location at the right time and under the appropriate flow conditions; molecular recognition, which allows the capture of specific analytes at precise locations on the chip; transduction of the molecular recognition event into a measurable signal; sample preparation upstream from analyte capture; and signal amplification procedures to increase sensitivity. Seamless integration of the different stages is required to achieve a point-of-care/point-of-use lab-on-chip device that allows analyte detection at the relevant sensitivity ranges, with a competitive analysis time and cost. PMID:27365042

  3. User's manual for SYNC: A FORTRAN program for merging and time-synchronizing data

    NASA Technical Reports Server (NTRS)

    Maine, R. E.

    1981-01-01

    The FORTRAN 77 computer program SYNC for merging and time synchronizing data is described. The program SYNC reads one or more input files which contain either synchronous data frames or time-tagged data points, which can be compressed. The program decompresses and time synchronizes the data, correcting for any channel time skews. Interpolation and hold last value synchronization algorithms are available. The output from SYNC is a file of time synchronized data frames at any requested sample rate.
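    The core merge-and-synchronize pass can be sketched in a few lines. This is a loose modern paraphrase of what such a tool does, not the FORTRAN source: each channel's time tags are corrected for its skew and the values are interpolated onto a common output grid (linear interpolation standing in for SYNC's interpolation option; the hold-last-value option would use a step lookup instead).

      import numpy as np

      def synchronize(channels, out_rate, skews=None):
          """Merge asynchronously sampled channels onto a common time base.

          channels: dict of name -> (times, values); skews: per-channel tag offsets (s)."""
          skews = skews or {}
          t0 = max(t[0] for t, _ in channels.values())      # overlap interval only
          t1 = min(t[-1] for t, _ in channels.values())
          grid = np.arange(t0, t1, 1.0 / out_rate)          # requested output sample rate
          return grid, {name: np.interp(grid, t - skews.get(name, 0.0), v)
                        for name, (t, v) in channels.items()}

      # Two channels recorded at different rates, one with a 5 ms time-tag skew
      t_a = np.linspace(0.0, 1.0, 101)
      t_b = np.linspace(0.0, 1.0, 57)
      grid, frames = synchronize({"alt": (t_a, np.sin(t_a)), "vel": (t_b, np.cos(t_b))},
                                 out_rate=50, skews={"vel": 0.005})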

  4. Comparison of point counts and territory mapping for detecting effects of forest management on songbirds

    USGS Publications Warehouse

    Newell, Felicity L.; Sheehan, James; Wood, Petra Bohall; Rodewald, Amanda D.; Buehler, David A.; Keyser, Patrick D.; Larkin, Jeffrey L.; Beachy, Tiffany A.; Bakermans, Marja H.; Boves, Than J.; Evans, Andrea; George, Gregory A.; McDermott, Molly E.; Perkins, Kelly A.; White, Matthew; Wigley, T. Bently

    2013-01-01

    Point counts are commonly used to assess changes in bird abundance, including analytical approaches such as distance sampling that estimate density. Point-count methods have come under increasing scrutiny because effects of detection probability and field error are difficult to quantify. For seven forest songbirds, we compared fixed-radii counts (50 m and 100 m) and density estimates obtained from distance sampling to known numbers of birds determined by territory mapping. We applied point-count analytic approaches to a typical forest management question and compared results to those obtained by territory mapping. We used a before–after control impact (BACI) analysis with a data set collected across seven study areas in the central Appalachians from 2006 to 2010. Using a 50-m fixed radius, variance in error was at least 1.5 times that of the other methods, whereas a 100-m fixed radius underestimated actual density by >3 territories per 10 ha for the most abundant species. Distance sampling improved accuracy and precision compared to fixed-radius counts, although estimates were affected by birds counted outside 10-ha units. In the BACI analysis, territory mapping detected an overall treatment effect for five of the seven species, and effects were generally consistent each year. In contrast, all point-count methods failed to detect two treatment effects due to variance and error in annual estimates. Overall, our results highlight the need for adequate sample sizes to reduce variance, and skilled observers to reduce the level of error in point-count data. Ultimately, the advantages and disadvantages of different survey methods should be considered in the context of overall study design and objectives, allowing for trade-offs among effort, accuracy, and power to detect treatment effects.

  5. Point Counts of Birds in Bottomland Hardwood Forests of the Mississippi Alluvial Valley: Duration, Minimum Sample Size, and Points Versus Visits

    Treesearch

    Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper

    1993-01-01

    To compare efficacy of point count sampling in bottomland hardwood forests, duration of point count, number of point counts, number of visits to each point during a breeding season, and minimum sample size are examined.

  6. A Factorial Data Rate and Dwell Time Experiment in the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    DeLoach, R.

    2000-01-01

    This report is an introductory tutorial on the application of formal experiment design methods to wind tunnel testing, for the benefit of aeronautical engineers with little formal experiment design training. It also describes the results of a study to determine whether increases in the sample rate and dwell time of the National Transonic Facility data system would result in significant changes in force and moment data. Increases in sample rate from 10 samples per second to 50 samples per second were examined, as were changes in dwell time from one second per data point to two seconds. These changes were examined for a representative aircraft model over a range of tunnel operating conditions defined by angles of attack from 0 deg to 3.8 deg, total pressures from 15.0 psi to 24.1 psi, and Mach numbers from 0.52 to 0.82. No statistically significant effect was associated with the change in sample rate. The change in dwell time from one second to two seconds affected axial force measurements, and to a lesser degree normal force measurements. This dwell effect comprises a "rectification error" caused by incomplete cancellation of the positive and negative elements of certain low-frequency dynamic components that are not rejected by the one-Hz low-pass filters of the data system. These low-frequency effects may be due to tunnel circuit phenomena and other sources. The magnitude of the dwell effect depends on dynamic pressure, with angle of attack and Mach number influencing the strength of this dependence. An analysis is presented which suggests that the magnitude of the rectification error depends on the ratio of the measurement dwell time to the period of the low-frequency dynamics, as well as on the amplitude of the dynamics. The essential conclusion of this analysis is that extending the dwell time (or, equivalently, replicating short-dwell data points) reduces the rectification error.
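    The stated dependence can be checked with a one-line integral: the average of a sinusoid of frequency f over a dwell window of length T leaves a residual [cos(φ) - cos(2πfT + φ)]/(2πfT), whose worst case over phase is |sin(πfT)/(πfT)| of the amplitude. A small numeric confirmation with assumed values:

      import numpy as np

      f = 0.1                                   # Hz; below the 1-Hz low-pass cutoff
      phis = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
      for T in (1.0, 2.0, 4.0):                 # dwell times in seconds
          resid = np.abs((np.cos(phis) - np.cos(2 * np.pi * f * T + phis))
                         / (2 * np.pi * f * T))
          print(f"T = {T:.0f} s: worst-case residual = {resid.max():.3f} of amplitude, "
                f"|sinc(fT)| = {abs(np.sinc(f * T)):.3f}")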

  7. The fourth dimension in FIA

    Treesearch

    Francis A. Roesch

    2012-01-01

    In the past, the goal of forest inventory was to determine the extent of the timber resource. Predictions of how the resource was changing were made by comparing differences between successive inventories. The general view of the associated sample design included selection probabilities based on land area observed at a discrete point in time. That is, time was not...

  8. Water content measurement in forest soils and decayed wood using time domain reflectometry

    Treesearch

    Andrew Gray; Thomas Spies

    1995-01-01

    The use of time domain reflectometry to measure moisture content in forest soils and woody debris was evaluated. Calibrations were developed on undisturbed soil cores from four forest stands and on point samples from decayed logs. An algorithm for interpreting irregularly shaped traces generated by the reflectometer was also developed. Two different calibration...

  9. An adaptive clustering algorithm for image matching based on corner feature

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-04-01

    Traditional image matching algorithms struggle to balance real-time performance and accuracy. To address this problem, an adaptive clustering algorithm for image matching based on corner features is proposed in this paper. The method performs adaptive clustering on the matching point pairs based on the similarity of the vectors they form. Harris corner detection is carried out first to extract the feature points of the reference image and the perceived image, and the feature points of the two images are initially matched using the Normalized Cross Correlation (NCC) function. Then, using the improved algorithm proposed in this paper, the matching results are clustered to reduce ineffective operations and improve the matching speed and robustness. Finally, the Random Sample Consensus (RANSAC) algorithm is applied to the matching points after clustering. The experimental results show that the proposed algorithm effectively eliminates most wrong matching points while retaining the correct ones, improving the accuracy of RANSAC matching while reducing the computational load of the whole matching process.

  10. Interpreting change on the neurobehavioral symptom inventory and the PTSD checklist in military personnel.

    PubMed

    Belanger, Heather G; Lange, Rael T; Bailie, Jason; Iverson, Grant L; Arrieux, Jacques P; Ivins, Brian J; Cole, Wesley R

    2016-10-01

    The purpose of this study was to examine the prevalence and stability of symptom reporting in a healthy military sample and to develop reliable change indices for two commonly used self-report measures in the military health care system. Participants were 215 U.S. active duty service members recruited from Fort Bragg, NC as normal controls as part of a larger study. Participants completed the Neurobehavioral Symptom Inventory (NSI) and Posttraumatic Checklist (PCL) twice, separated by approximately 30 days. Depending on the endorsement level used (i.e. ratings of 'mild' or greater vs. ratings of 'moderate' or greater), approximately 2-15% of this sample met DSM-IV symptom criteria for Postconcussional Disorder across time points, while 1-6% met DSM-IV symptom criteria for Posttraumatic Stress Disorder. Effect sizes for change from Time 1 to Time 2 on individual symptoms were small (Cohen's d = .01 to .13). The test-retest reliability for the NSI total score was r = .78 and the PCL score was r = .70. An eight-point change in symptom reporting represented reliable change on the NSI total score, with a seven-point change needed on the PCL. Postconcussion-like symptoms are not unique to mild TBI and are commonly reported in a healthy soldier sample. It is important for clinicians to use normative data when evaluating a service member or veteran and when evaluating the likelihood that a change in symptom reporting is reliable and clinically meaningful.
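    For readers wanting the mechanics, a Jacobson-Truax-style reliable change computation is sketched below. The retest correlations are those reported above, but the baseline standard deviations are hypothetical placeholders chosen so the outputs land near the paper's cutoffs; the study derives its indices from its own normative sample.

      import math

      def reliable_change_threshold(sd_baseline: float, retest_r: float, z: float = 1.96) -> float:
          """Smallest score change that exceeds measurement error at the given z level."""
          sem = sd_baseline * math.sqrt(1.0 - retest_r)   # standard error of measurement
          se_diff = math.sqrt(2.0) * sem                  # standard error of the difference
          return z * se_diff

      print(f"NSI: {reliable_change_threshold(6.0, 0.78):.1f} points")   # sd is a placeholder
      print(f"PCL: {reliable_change_threshold(4.5, 0.70):.1f} points")   # sd is a placeholder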

  11. Leaps and lulls in the developmental transcriptome of Dictyostelium discoideum.

    PubMed

    Rosengarten, Rafael David; Santhanam, Balaji; Fuller, Danny; Katoh-Kurasawa, Mariko; Loomis, William F; Zupan, Blaz; Shaulsky, Gad

    2015-04-13

    Development of the soil amoeba Dictyostelium discoideum is triggered by starvation. When placed on a solid substrate, the starving solitary amoebae cease growth, communicate via extracellular cAMP, aggregate by tens of thousands and develop into multicellular organisms. Early phases of the developmental program are often studied in cells starved in suspension while cAMP is provided exogenously. Previous studies revealed massive shifts in the transcriptome under both developmental conditions and a close relationship between gene expression and morphogenesis, but were limited by the sampling frequency and the resolution of the methods. Here, we combine the superior depth and specificity of RNA-seq-based analysis of mRNA abundance with high-frequency sampling during filter development and cAMP pulsing in suspension. We found that the developmental transcriptome exhibits mostly gradual changes interspersed with a few instances of large shifts. For each time point we treated the entire transcriptome as a single phenotype, and were able to characterize development as groups of similar time points separated by gaps. The grouped time points represented gradual changes in mRNA abundance, or molecular phenotype, and the gaps represented times during which many genes are differentially expressed rapidly, and thus the phenotype changes dramatically. Comparing developmental experiments revealed that gene expression in filter-developed cells lagged behind that of cells treated with exogenous cAMP in suspension. The high sampling frequency revealed many genes whose regulation is reproducibly more complex than indicated by previous studies. Gene Ontology enrichment analysis suggested that the transition to multicellularity coincided with rapid accumulation of transcripts associated with DNA processes and mitosis. Later development included the up-regulation of organic signaling molecules and co-factor biosynthesis. Our analysis also demonstrated a high level of synchrony among the developing structures throughout development. Our data describe D. discoideum development as a series of coordinated cellular and multicellular activities. Coordination occurred within fields of aggregating cells and among multicellular bodies, such as mounds or migratory slugs, that experience both cell-cell contact and various soluble signaling regimes. These time courses, sampled at the highest temporal resolution to date in this system, provide a comprehensive resource for studies of developmental gene expression.

  12. Physiological and Pathological Impact of Blood Sampling by Retro-Bulbar Sinus Puncture and Facial Vein Phlebotomy in Laboratory Mice

    PubMed Central

    Holst, Birgitte; Hau, Jann; Rozell, Björn; Abelson, Klas Stig Peter

    2014-01-01

    Retro-bulbar sinus puncture and facial vein phlebotomy are two widely used methods for blood sampling in laboratory mice. However, the animal welfare implications associated with these techniques are currently debated, and the possible physiological and pathological implications of blood sampling using these methods have been sparsely investigated. Therefore, this study was conducted to assess and compare the impacts of blood sampling by retro-bulbar sinus puncture and facial vein phlebotomy. Blood was obtained from either the retro-bulbar sinus or the facial vein from male C57BL/6J mice at two time points, and the samples were analyzed for plasma corticosterone. Body weights were measured at the day of blood sampling and the day after blood sampling, and the food consumption was recorded automatically during the 24 hours post-procedure. At the end of study, cheeks and orbital regions were collected for histopathological analysis to assess the degree of tissue trauma. Mice subjected to facial vein phlebotomy had significantly elevated plasma corticosterone levels at both time points in contrast to mice subjected to retro-bulbar sinus puncture, which did not. Both groups of sampled mice lost weight following blood sampling, but the body weight loss was higher in mice subjected to facial vein phlebotomy. The food consumption was not significantly different between the two groups. At gross necropsy, subcutaneous hematomas were found in both groups and the histopathological analyses revealed extensive tissue trauma after both facial vein phlebotomy and retro-bulbar sinus puncture. This study demonstrates that both blood sampling methods have a considerable impact on the animals' physiological condition, which should be considered whenever blood samples are obtained. PMID:25426941

  13. Results of Performance Tests Performed on the John Watts WW Casing Connection on 7" Pipe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Watts

    2000-02-01

    Stress Engineering Services (SES) was contracted by Mr. John Watts to test his "WW" threaded connection developed for oilfield oil and gas service. This work was a continuation of testing performed by SES as reported in August of 1999. The connection design tested was identified as "WW". The samples were all integral (no coupled connections) and contained a wedge thread form with 90° flank angles relative to the pipe centerline. The wedge thread form is a variable width thread that primarily engages on the flanks. This thread form provides very high torque capacity and good stabbing ability and makeup. The test procedure selected for one of the samples was the newly written ISO 13679 procedure for full scale testing of casing and tubing connections, which is currently going through the ISO acceptance process. The ISO procedure requires a variety of tests that includes makeup/breakout testing, internal gas sealability/external water sealability testing with axial tension, axial compression, bending, internal gas thermal cycle tests and limit load (failure) tests. This test procedure was performed with one sample. Four samples were tested to failure. Table 1 contains a summary of the tasks performed by SES. The project started with the delivery of test samples by Mr. Watts. Pipe from the previous round of tests was used for the new samples. Figure 1 shows the structural and sealing results relative to the pipe body. Sample 1 was used to determine the torque capacity of the connection. Torque was applied to the capacity of SES's equipment which was 28,424 ft-lbs. From this, an initial recommended torque range of 7,200 to 8,800 ft-lbs was selected. The sample was disassembled and while there was no galling observed in the threads, the end of the pin had collapsed inward. Sample 2 received three makeups. Breakouts 1 and 2 also had collapsing of the pin end, with no thread galling. From these make/breaks, it was decided to reduce the amount of lubricant applied to the connection by applying it to the box or pin only and reducing the amount applied. Samples 3 and 4 received one makeup only. Sample 5 initially received two make/breaks to test for galling resistance before final makeup; no galling was observed. Later, three additional make/breaks were performed with no pin end collapse and galling over 1/2 a thread occurring on one of the breakouts. During the make/break tests, the stabbing and hand-tight makeup of the WW connection was found to be very easy and trouble free. There was no tendency to cross-thread, even when stabbed at an angle, and it screwed together very smoothly up to hand tight. During power-tight makeup, there was no heat generated in the box (as checked by hand contact) and no jerkiness associated with any of the makeups or breakouts. Sample 2 was tested in pure compression. The maximum load obtained was 1,051 kips and the connection was beginning to significantly deform as the sample buckled. Actual pipe yield was 1,226 kips. Sample 3 was capped-end pressure tested to failure. The capped-end yield pressure of the pipe was 16,572 psi and the sample began to leak at 12,000 psi. Sample 4 was tested in pure tension. The maximum load obtained was 978 kips and the connection failed by fracture at the pin critical section. Actual pipe yield was 1,226 kips. Sample 5 was tested in combined tension/compression and internal gas pressure. The sample was assembled, set up, and tested four times.
The first time was with a torque of 7,298 ft-lbs and the connection leaked halfway to ISO Load Point 2 with loads of 693 kips and 4,312 psi. The second time the torque was increased to 14,488 ft-lbs and a leak occurred at 849 kips and 9,400 psi, which was ISO Load Point 2. The third time the makeup torque was again increased, to 20,456 ft-lbs, and a leak occurred at 716 kips and 11,342 psi, ISO Load Point 4. The fourth test was with the same torque as before, 20,617 ft-lbs, and the connection successfully tested up to load step 56, ISO Load Point 6 (second round), before leaking at 354 kips and 11,876 psi. At this point, time and funds prevented additional testing from being performed.

  14. Effects of major depression on moment-in-time work performance.

    PubMed

    Wang, Philip S; Beck, Arne L; Berglund, Pat; McKenas, David K; Pronk, Nicolaas P; Simon, Gregory E; Kessler, Ronald C

    2004-10-01

    Although major depression is thought to have substantial negative effects on work performance, the possibility of recall bias limits self-report studies of these effects. The authors used the experience sampling method to address this problem by collecting comparative data on moment-in-time work performance among service workers who were depressed and those who were not depressed. The group studied included 105 airline reservation agents and 181 telephone customer service representatives selected from a larger baseline sample; depressed workers were deliberately oversampled. Respondents were given pagers and experience sampling method diaries for each day of the study. A computerized autodialer paged respondents at random time points. When paged, respondents reported on their work performance in the diary. Moment-in-time work performance was assessed at five random times each day over a 7-day data collection period (35 data points for each respondent). Seven conditions (allergies, arthritis, back pain, headaches, high blood pressure, asthma, and major depression) occurred often enough in this group of respondents to be studied. Major depression was the only condition significantly related to decrements in both of the dimensions of work performance assessed in the diaries: task focus and productivity. These effects were equivalent to approximately 2.3 days absent because of sickness per depressed worker per month of being depressed. Previous studies based on days missed from work significantly underestimate the adverse economic effects associated with depression. Productivity losses related to depression appear to exceed the costs of effective treatment.

  15. GC/MS analysis of pesticides in the Ferrara area (Italy) surface water: a chemometric study.

    PubMed

    Pasti, Luisa; Nava, Elisabetta; Morelli, Marco; Bignami, Silvia; Dondi, Francesco

    2007-01-01

    The development of a network to monitor surface waters is a critical element in the assessment, restoration and protection of water quality. In this study, concentrations of 42 pesticides--determined by GC-MS on samples from 11 points along the Ferrara area rivers--have been analyzed by chemometric tools. The data were collected over a three-year period (2002-2004). Principal component analysis of the detected pesticides was carried out in order to define the best spatial locations for the sampling points. The results obtained have been interpreted in view of agricultural land use. Time-series data on pesticide contents in surface waters have been analyzed using the autocorrelation function. This chemometric tool reveals seasonal trends and makes it possible to optimize the sampling frequency in order to detect the effective maximum pesticide content.
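    The autocorrelation step is easy to reproduce. The sketch below computes a sample ACF and applies it to a synthetic monthly series with an annual cycle (invented numbers, not the Ferrara data); a strong positive coefficient at lag 12 is the seasonality signature the authors exploit.

      import numpy as np

      def sample_acf(x, max_lag):
          """Sample autocorrelation function, normalized by the lag-0 variance."""
          x = x - x.mean()
          c0 = np.dot(x, x) / len(x)
          return np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) / c0
                           for k in range(max_lag + 1)])

      rng = np.random.default_rng(4)
      months = np.arange(36)                    # a three-year record, as in the study
      conc = 1.0 + 0.6 * np.sin(2 * np.pi * months / 12) + 0.2 * rng.standard_normal(36)
      acf = sample_acf(conc, 15)
      print(f"ACF at lag 12 (one year): {acf[12]:.2f}")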

  16. A timber inventory based upon manual and automated analysis of ERTS-1 and supporting aircraft data using multistage probability sampling. [Plumas National Forest, California

    NASA Technical Reports Server (NTRS)

    Nichols, J. D.; Gialdini, M.; Jaakkola, S.

    1974-01-01

    A quasi-operational study demonstrated that a timber inventory based on manual and automated analysis of ERTS-1 data, supporting aircraft data, and ground data could be made using multistage sampling techniques. The inventory proved to be a timely, cost-effective alternative to conventional timber inventory techniques. The timber volume on the Quincy Ranger District of the Plumas National Forest was estimated to be 2.44 billion board feet with a sampling error of 8.2 percent. At 1.1 cents/acre, the cost of the inventory procedure compared favorably with the cost of a conventional inventory at 25 cents/acre. A point-by-point comparison of CALSCAN-classified ERTS data with human-interpreted low-altitude photo plots indicated no significant differences in the overall classification accuracies.

  17. The Relations of Preschool Children's Emotion Knowledge and Socially Appropriate Behaviors to Peer Likability

    ERIC Educational Resources Information Center

    Sette, Stefania; Spinrad, Tracy L.; Baumgartner, Emma

    2017-01-01

    The purpose of the present study was to examine the relations of children's emotion knowledge (and its components) and socially appropriate behavior to peer likability in a sample of Italian preschool children at two time-points. At both Time 1 (T1; n = 46 boys, 42 girls) and a year later at Time 2 (T2; n = 26 boys, 22 girls), children's emotion…

  18. Metabolic changes in serum steroids induced by total-body irradiation of female C57B/6 mice.

    PubMed

    Moon, Ju-Yeon; Shin, Hee-June; Son, Hyun-Hwa; Lee, Jeongae; Jung, Uhee; Jo, Sung-Kee; Kim, Hyun Sik; Kwon, Kyung-Hoon; Park, Kyu Hwan; Chung, Bong Chul; Choi, Man Ho

    2014-05-01

    The short- and long-term effects of a single exposure to gamma radiation on steroid metabolism were investigated in mice. Gas chromatography-mass spectrometry was used to generate quantitative profiles of serum steroid levels in mice that had undergone total-body irradiation (TBI) at doses of 0 Gy, 1 Gy, and 4 Gy. Following TBI, serum samples were collected at the pre-dose time point and 1, 3, 6, and 9 months after TBI. Serum levels of the progestins progesterone, 5β-DHP, 5α-DHP, and 20α-DHP showed a significant down-regulation following short-term exposure to 4 Gy, with the exception of 20α-DHP, which was significantly decreased at each of the time points measured. The corticosteroids 5α-THDOC and 5α-DHB were significantly elevated at each of the time points measured after exposure to either 1 or 4 Gy. Among the sterols, 24S-OH-cholesterol showed a dose-related elevation after irradiation that reached significance in the high-dose group at the 6- and 9-month time points. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Scalable boson sampling with time-bin encoding using a loop-based architecture.

    PubMed

    Motes, Keith R; Gilchrist, Alexei; Dowling, Jonathan P; Rohde, Peter P

    2014-09-19

    We present an architecture for arbitrarily scalable boson sampling using two nested fiber loops. The architecture has fixed experimental complexity, irrespective of the size of the desired interferometer, whose scale is limited only by fiber and switch loss rates. The architecture employs time-bin encoding, whereby the incident photons form a pulse train, which enters the loops. Dynamically controlled loop coupling ratios allow the construction of the arbitrary linear optics interferometers required for boson sampling. The architecture employs only a single point of interference and may thus be easier to stabilize than other approaches. The scheme has polynomial complexity and could be realized using demonstrated present-day technologies.

  20. Experimental results for the rapid determination of the freezing point of fuels

    NASA Technical Reports Server (NTRS)

    Mathiprakasam, B.

    1984-01-01

    Two methods for the rapid determination of the freezing point of fuels were investigated: an optical method, which detected the change in light transmission upon the disappearance of solid particles in the melted fuel; and a differential thermal analysis (DTA) method, which sensed the latent heat of fusion. A laboratory apparatus was fabricated to test the two methods. Cooling was provided by thermoelectric modules using an ice-water bath as a heat sink. The DTA method was later modified to eliminate the reference fuel. The data from the sample were digitized, and a point of inflection, which corresponds to the ASTM D-2386 freezing point (final melting point), was identified from the derivative. The apparatus was modified to cool the fuel to -60 C, and controls were added for maintaining constant cooling rate, rewarming rate, and hold time at minimum temperature. A parametric series of tests was run for twelve fuels with freezing points from -10 C to -50 C, varying cooling rate, rewarming rate, and hold time. Based on the results, an optimum test procedure was established. The results showed good agreement with ASTM D-2386 freezing point and differential scanning calorimetry results.
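    A minimal sketch of the inflection-point step, assuming the digitized rewarming curve is available as temperature samples at known times; taking the inflection as the interior maximum of the warming rate is an illustrative simplification of the derivative analysis described above:

      import numpy as np

      def freezing_point(time_s, temp_c):
          """Estimate the final melting point as the inflection of the
          rewarming curve, i.e. where dT/dt has its interior extremum."""
          dTdt = np.gradient(temp_c, time_s)
          i = np.argmax(dTdt)           # inflection: extremum of the derivative
          return temp_c[i]

      # Illustrative rewarming trace: a plateau while the last solids melt,
      # then a steeper rise once melting is complete.
      t = np.linspace(0, 300, 601)
      T = -45.0 + 0.04 * t + 6.0 * np.tanh((t - 150.0) / 20.0)
      print(freezing_point(t, T))       # about -39 C for this synthetic curve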

  1. Stability of BDNF in Human Samples Stored Up to 6 Months and Correlations of Serum and EDTA-Plasma Concentrations.

    PubMed

    Polyakova, Maryna; Schlögl, Haiko; Sacher, Julia; Schmidt-Kassow, Maren; Kaiser, Jochen; Stumvoll, Michael; Kratzsch, Jürgen; Schroeter, Matthias L

    2017-06-03

    Brain-derived neurotrophic factor (BDNF), an important neural growth factor, has gained growing interest in neuroscience, but many influencing physiological and analytical aspects still remain unclear. In this study we assessed the impact of storage time at room temperature, repeated freeze/thaw cycles, and storage at -80 °C for up to 6 months on serum and ethylenediaminetetraacetic acid (EDTA)-plasma BDNF. Furthermore, we assessed correlations of serum and plasma BDNF concentrations in two independent sets of samples. Coefficients of variation (CVs) for serum BDNF concentrations were significantly lower than CVs of plasma concentrations (n = 245, p = 0.006). Mean serum and plasma concentrations at all analyzed time points remained within the acceptable change limit of the inter-assay precision as declared by the manufacturer. Serum and plasma BDNF concentrations correlated positively in both sets of samples and at all analyzed time points of the stability assessment (r = 0.455 to rs = 0.596; p < 0.004). In summary, when considering the acceptable change limit, BDNF was stable in serum and in EDTA-plasma for up to 6 months. Due to its higher reliability, we suggest favoring serum over EDTA-plasma for future experiments assessing peripheral BDNF concentrations.

  2. Evaluation of microRNA Stability in Plasma and Serum from Healthy Dogs.

    PubMed

    Enelund, Lars; Nielsen, Lise N; Cirera, Susanna

    2017-01-01

    Early and specific detection of cancer is of great importance for successful treatment of the disease. New biomarkers, such as microRNAs, could improve treatment efficiency and survival. In human medicine, deregulation of circulating microRNA profiles has shown great potential as a new type of biomarker for cancer diagnostics. There are, however, few studies of circulating microRNAs in dogs. Extracellular circulating microRNAs have shown a high level of stability in human blood and other body fluids. Nevertheless, important issues remain to be solved before microRNAs can be applied routinely as a clinical tool, one of them being their stability over time in media commonly used for blood sampling. The objective was to evaluate the stability of microRNA levels in plasma and serum from healthy dogs stored at room temperature for different lengths of time before processing. The levels of four microRNAs (cfa-let-7a, cfa-miR-16, cfa-miR-23a and cfa-miR-26a), known from other canine studies to be stably expressed, were measured by quantitative real-time PCR (qPCR). MicroRNA levels were found sufficiently stable for profiling in serum and plasma stored at room temperature for 1 hour, but not in samples stored at room temperature for 24 hours. Storage at room temperature of serum and plasma samples intended for microRNA profiling should therefore be kept to a minimum before proceeding with RNA isolation. For the four microRNAs investigated in the present study, we did not find significant differences in microRNA levels between serum and plasma samples from the same time point.

  3. Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.

    PubMed

    Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian

    2014-01-01

    In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
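    A minimal sketch of the greedy incremental idea for the discrete, single-shell case, assuming a pool of candidate unit vectors from which a subset with large minimal angular separation is selected; the farthest-point strategy and names below are illustrative, not the authors' implementation:

      import numpy as np

      def greedy_spherical_code(cands, k):
          """Greedily choose k directions maximizing the minimal pairwise
          angle; |dot| treats antipodal directions as equivalent (dMRI)."""
          chosen = [0]                               # arbitrary starting direction
          worst = np.abs(cands @ cands[0])           # max |dot| to the chosen set
          while len(chosen) < k:
              nxt = int(np.argmin(worst))            # farthest from current set
              chosen.append(nxt)
              worst = np.maximum(worst, np.abs(cands @ cands[nxt]))
          return np.array(chosen)

      # Illustrative candidate pool: 500 random unit vectors.
      rng = np.random.default_rng(1)
      c = rng.standard_normal((500, 3))
      c /= np.linalg.norm(c, axis=1, keepdims=True)
      subset = greedy_spherical_code(c, 30)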

  4. Improvement on Timing Accuracy of LIDAR for Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.

    2018-05-01

    The timing discrimination technique traditionally used for laser rangefinding in remote sensing offers limited measurement performance and a comparatively large timing error, and cannot meet the demands of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement in timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, repeated amplification of the received signal moves the timing point corresponding to a fixed threshold forward. Timing information is then sampled, and the timing points are fitted with algorithms implemented in MATLAB. Finally, the minimum timing error is calculated from the fitting function. In this way, the timing error of the received lidar signal is reduced and the quality of the lidar data is improved. Experiments show that the timing error can be significantly reduced by repeatedly amplifying the received signal and fitting the parameters, and a timing accuracy of 4.63 ps is achieved.

  5. Limited sampling strategy for determining metformin area under the plasma concentration–time curve

    PubMed Central

    Santoro, Ana Beatriz; Stage, Tore Bjerregaard; Struchiner, Claudio José; Christensen, Mette Marie Hougaard; Brosen, Kim

    2016-01-01

    Aim: The aim was to develop and validate limited sampling strategy (LSS) models to predict the area under the plasma concentration-time curve (AUC) for metformin. Methods: Metformin plasma concentrations (n = 627) at 0-24 h after a single 500 mg dose were used for LSS development, based on all-subsets linear regression analysis. The LSS-derived AUC(0,24 h) was compared with the parameter 'best estimate' obtained by non-compartmental analysis using all plasma concentration data points. The correlation between the LSS-derived and the best-estimated AUC(0,24 h) (r2), and the bias and precision of the LSS estimates, were quantified. The LSS models were validated in independent cohorts. Results: A two-point (3 h and 10 h) regression equation with no intercept accurately estimated the individual AUC(0,24 h) in the development cohort: r2 = 0.927, bias (mean, 95% CI) -0.5, -2.7-1.8% and precision 6.3, 4.9-7.7%. The accuracy of the two-point LSS model was verified in study cohorts of individuals receiving single 500 or 1000 mg doses (r2 = 0.933-0.934) or seven 1000 mg daily doses (r2 = 0.918), as well as using data from 16 published studies covering a wide range of metformin doses, demographics, and clinical and experimental conditions (r2 = 0.976). The LSS model reproduced previously reported results for the effects of polymorphisms in the OCT2 and MATE1 genes on the AUC(0,24 h) and renal clearance of metformin. Conclusions: The two-point LSS algorithm may be used to assess systemic exposure to metformin under diverse conditions, with reduced costs of sampling and analysis, saving time for both subjects and investigators. PMID:27324407
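    A minimal sketch of how such a two-point LSS model is derived, assuming measured concentrations at 3 h and 10 h and reference AUC(0,24 h) values computed from full profiles; all numbers and names are illustrative:

      import numpy as np

      # Illustrative training data: concentrations (mg/L) at 3 h and 10 h and
      # the 'best estimate' AUC(0,24 h) from the full concentration profiles.
      C3  = np.array([1.8, 2.1, 1.5, 2.4, 1.9])
      C10 = np.array([0.9, 1.2, 0.7, 1.3, 1.0])
      auc = np.array([18.2, 22.5, 14.9, 25.1, 19.6])

      # No-intercept model AUC ~ a*C3 + b*C10, fitted by least squares.
      X = np.column_stack([C3, C10])
      coef, *_ = np.linalg.lstsq(X, auc, rcond=None)
      pred = X @ coef

      r2 = 1 - np.sum((auc - pred) ** 2) / np.sum((auc - auc.mean()) ** 2)
      bias = 100 * np.mean((pred - auc) / auc)       # mean prediction error, %
      print(coef, r2, bias)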

  6. Potential Information Loss Due to Categorization of Minimum Inhibitory Concentration Frequency Distributions.

    PubMed

    Mazloom, Reza; Jaberi-Douraki, Majid; Comer, Jeffrey R; Volkova, Victoriya

    2018-01-01

    A bacterial isolate's susceptibility to an antimicrobial is expressed as the lowest drug concentration inhibiting its visible growth, termed the minimum inhibitory concentration (MIC). The susceptibilities of isolates from a host population at a particular time vary, with isolates with specific MICs present at different frequencies. Currently, for either clinical or monitoring purposes, an isolate is most often categorized as Susceptible, Intermediate, or Resistant to the antimicrobial by comparing its MIC to a breakpoint value. Such categorization of the data is known in statistics to cause information loss compared to analyzing the underlying frequency distributions. The U.S. National Antimicrobial Resistance Monitoring System (NARMS) samples foodborne bacteria at the food animal processing and retail product points. The breakpoints used to interpret the MIC values for foodborne bacteria are those relevant to clinical treatment with the antimicrobials in the humans in whom the isolates might cause infection. However, conceptually different objectives arise when inference is sought concerning changes in susceptibility/resistance across isolates of a bacterial species in host populations between sampling points or times. For the NARMS 1996-2013 data for animal processing and retail, considering susceptibility/resistance to 44 antimicrobial drugs of twelve classes, we determined the fraction of comparisons of a bacterial species in a given animal host or product population in which there was a significant change in the MIC frequency distribution between consecutive years, or between the two sampling points, while the categorization-based analyses concluded no change. The categorization-based analyses missed significant changes in 54% of the year-to-year comparisons and in 71% of the slaughter-to-retail within-year comparisons. Hence, analyses using breakpoint-based categorizations of MIC data may miss significant developments in the resistance distributions between sampling points or times. Methods considering the MIC frequency distributions in their entirety may be superior for epidemiological analyses of resistance dynamics in populations.
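    A minimal sketch contrasting the two kinds of comparison on hypothetical MIC data from two consecutive years; the counts, the breakpoint, and the choice of a chi-square test and Fisher's exact test are illustrative, not the paper's exact methodology:

      import numpy as np
      from scipy import stats

      mic_levels = [0.25, 0.5, 1, 2, 4, 8]           # doubling dilutions (ug/mL)
      year1 = np.array([30, 40, 20, 5, 3, 2])        # isolate counts per level
      year2 = np.array([10, 30, 35, 15, 6, 4])       # distribution shifted upward

      # Distribution-based comparison of the full MIC frequency distributions.
      chi2, p_dist, dof, _ = stats.chi2_contingency(np.vstack([year1, year2]))

      # Categorization-based comparison: collapse to resistant vs. susceptible
      # at an illustrative breakpoint of >= 4 ug/mL.
      r1, s1 = year1[4:].sum(), year1[:4].sum()
      r2, s2 = year2[4:].sum(), year2[:4].sum()
      _, p_cat = stats.fisher_exact([[r1, s1], [r2, s2]])

      # The categorized test can miss a shift the full distributions reveal.
      print(p_dist, p_cat)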

  7. Development and Validation of Limited-Sampling Strategies for Predicting Amoxicillin Pharmacokinetic and Pharmacodynamic Parameters

    PubMed Central

    Suarez-Kurtz, Guilherme; Ribeiro, Frederico Mota; Vicente, Flávio L.; Struchiner, Claudio J.

    2001-01-01

    Amoxicillin plasma concentrations (n = 1,152) obtained from 48 healthy subjects in two bioequivalence studies were used to develop limited-sampling strategy (LSS) models for estimating the area under the concentration-time curve (AUC), the maximum concentration of drug in plasma (Cmax), and the time interval of concentration above MIC susceptibility breakpoints in plasma (T>MIC). Each subject received 500-mg amoxicillin, as reference and test capsules or suspensions, and plasma concentrations were measured by a validated microbiological assay. Linear regression analysis and a “jack-knife” procedure revealed that three-point LSS models accurately estimated (R2, 0.92; precision, <5.8%) the AUC from 0 h to infinity (AUC0-∞) of amoxicillin for the four formulations tested. Validation tests indicated that a three-point LSS model (1, 2, and 5 h) developed for the reference capsule formulation predicts the following accurately (R2, 0.94 to 0.99): (i) the individual AUC0-∞ for the test capsule formulation in the same subjects, (ii) the individual AUC0-∞ for both reference and test suspensions in 24 other subjects, and (iii) the average AUC0-∞ following single oral doses (250 to 1,000 mg) of various amoxicillin formulations in 11 previously published studies. A linear regression equation was derived, using the same sampling time points of the LSS model for the AUC0-∞, but using different coefficients and intercept, for estimating Cmax. Bioequivalence assessments based on LSS-derived AUC0-∞'s and Cmax's provided results similar to those obtained using the original values for these parameters. Finally, two-point LSS models (R2 = 0.86 to 0.95) were developed for T>MICs of 0.25 or 2.0 μg/ml, which are representative of microorganisms susceptible and resistant to amoxicillin. PMID:11600352

  8. Fire Effects on Soil and Dissolved Organic Matter in a Southern Appalachian Hardwood Forest: Movement of Fire-Altered Organic Matter Across the Terrestrial-Aquatic Interface Following the Great Smoky Mountains National Park Fire of 2016

    NASA Astrophysics Data System (ADS)

    Matosziuk, L.; Gallo, A.; Hatten, J. A.; Heckman, K. A.; Nave, L. E.; Sanclements, M.; Strahm, B. D.; Weiglein, T.

    2017-12-01

    Wildfire can dramatically affect the quantity and quality of soil organic matter (SOM), producing thermally altered organic material such as pyrogenic carbon (PyC) and polyaromatic hydrocarbons (PAHs). The movement of this thermally altered material through terrestrial and aquatic ecosystems can differ from that of unburned SOM, with far-reaching consequences for soil carbon cycling and water quality. Unfortunately, due to the rapid ecological changes following fire and the lack of robust pre-fire controls, the cycling of fire-altered carbon is still poorly understood. In December 2016, the Chimney Tops 2 fire in Great Smoky Mountains National Park burned over co-located terrestrial and aquatic NEON sites. We have leveraged the wealth of pre-fire data at these sites (chemical, physical, and microbial characterization of soils, continuous measurements of both soil and stream samples, and five soil cores up to 110 cm in depth) to conduct a thorough study of the movement of fire-altered organic matter through terrestrial and aquatic ecosystems. Stream samples have been collected weekly beginning 5 weeks post-fire. Grab samples of soil were taken at discrete time points in the first two months after the fire. Eight weeks post-fire, a second set of cores was taken and resin lysimeters installed at three different depths. A third set of cores and grab samples will be taken 8-12 months after the fire. In addition to routine soil characterization techniques, solid samples from cores and grab samples at all time points will be analyzed for PyC and PAHs. To determine the effect of fire on dissolved organic matter (DOM), hot water extracts of these soil samples, as well as the stream samples and lysimeter samples, will also be analyzed for PyC and PAHs. Selected samples will be analyzed by 1D- and 2D-NMR to further characterize the chemical composition of DOM. This extensive investigation of the quantity and quality of fire-altered organic material at discrete time points will provide insight into the production and cycling of thermally-altered SOM and DOM. We hypothesize that PyC will be an important source of SOM to surface mineral soil horizons, and that the quantity of DOM will increase after fire, providing a rapid pulse of C to deep soils and aquatic systems.

  9. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. L. Hoskinson; R C. Rope; L G. Blackwood

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil's characteristics. Most often, spatial variability in the soil's fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on the existing soil fertility at the site, the predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil's fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences accounted for almost 87% of the cost difference. The sum of these differences could result in a $34 per acre cost difference for the fertilization. Because of these differences, better analysis or better sampling methods may need to be used, or more samples collected, to ensure that the soil measurements are truly representative of the field's spatial variability.

  10. Sampling and detection of airborne influenza virus towards point-of-care applications.

    PubMed

    Ladhani, Laila; Pardon, Gaspard; Meeuws, Hanne; van Wesenbeeck, Liesbeth; Schmidt, Kristiane; Stuyver, Lieven; van der Wijngaart, Wouter

    2017-01-01

    Airborne transmission of the influenza virus contributes significantly to the spread of this infectious pathogen, particularly over large distances when carried by aerosol droplets with long survival times. Efficient sampling of virus-loaded aerosol in combination with a low limit of detection of the collected virus could enable rapid and early detection of airborne influenza virus at the point-of-care setting. Here, we demonstrate a successful sampling and detection of airborne influenza virus using a system specifically developed for such applications. Our system consists of a custom-made electrostatic precipitation (ESP)-based bioaerosol sampler that is coupled with downstream quantitative polymerase chain reaction (qPCR) analysis. Aerosolized viruses are sampled directly into a miniaturized collector with liquid volume of 150 μL, which constitutes a simple and direct interface with subsequent biological assays. This approach reduces sample dilution by at least one order of magnitude when compared to other liquid-based aerosol bio-samplers. Performance of our ESP-based sampler was evaluated using influenza virus-loaded sub-micron aerosols generated from both cultured and clinical samples. Despite the miniaturized collection volume, we demonstrate a collection efficiency of at least 10% and sensitive detection of a minimum of 3721 RNA copies. Furthermore, we show that an improved extraction protocol can allow viral recovery of down to 303 RNA copies and a maximum sampler collection efficiency of 47%. A device with such a performance would reduce sampling times dramatically, from a few hours with current sampling methods down to a couple of minutes with our ESP-based bioaerosol sampler.

  11. Phosphorus and nitrogen concentrations and loads at Illinois River south of Siloam Springs, Arkansas, 1997-1999

    USGS Publications Warehouse

    Green, W. Reed; Haggard, Brian E.

    2001-01-01

    Water-quality sampling consisting of routine sampling every other month (bimonthly) and storm-event sampling (six storms annually) is used to estimate annual phosphorus and nitrogen loads at Illinois River south of Siloam Springs, Arkansas. Hydrograph separation allowed assessment of base-flow and surface-runoff nutrient relations and yields. Discharge and nutrient relations indicate that water quality at Illinois River south of Siloam Springs, Arkansas, is affected by both point and nonpoint sources of contamination. Base-flow phosphorus concentrations decreased with increasing base-flow discharge, indicating dilution of phosphorus in water from point sources. Nitrogen concentrations increased with increasing base-flow discharge, indicating a predominant ground-water source. Nitrogen concentrations at higher base-flow discharges often were greater than median concentrations reported for ground water (from wells and springs) in the Springfield Plateau aquifer. Total estimated phosphorus and nitrogen annual loads for calendar years 1997-1999 using the regression techniques presented in this paper (35 samples) were similar to estimated loads derived from integration techniques (1,033 samples). Flow-weighted nutrient concentrations and nutrient yields at the Illinois River site were about 10 to 100 times greater than national averages for undeveloped basins and at North Sylamore Creek and Cossatot River (considered to be undeveloped basins in Arkansas). Total phosphorus and soluble reactive phosphorus were greater than 10 times, and total nitrogen and dissolved nitrite plus nitrate greater than 10 to 100 times, the national and regional averages for undeveloped basins. These results demonstrate the utility of a strategy whereby samples are collected every other month and during selected storm events annually, with regression models used to estimate nutrient loads. Annual loads of phosphorus and nitrogen estimated using regression techniques can provide results similar to estimates using integration techniques, with much less investment.
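    A minimal sketch of the regression (rating-curve) approach to load estimation, assuming paired discharge-concentration samples and a continuous daily discharge record; the log-log model form, numbers and names are illustrative, not the study's actual regression models:

      import numpy as np

      # Illustrative paired samples: discharge Q (m3/s), total P (mg/L).
      Q_s = np.array([2.0, 3.5, 5.0, 12.0, 30.0, 55.0])
      C_s = np.array([0.10, 0.12, 0.15, 0.22, 0.35, 0.50])

      # Log-log rating curve ln(C) = b0 + b1*ln(Q) fitted to the samples.
      b1, b0 = np.polyfit(np.log(Q_s), np.log(C_s), 1)

      # Apply to a daily discharge record to estimate the annual load.
      # (A retransformation bias correction is often applied in practice.)
      Q_daily = np.random.default_rng(2).uniform(1.0, 60.0, 365)
      C_daily = np.exp(b0 + b1 * np.log(Q_daily))    # mg/L
      load_kg = np.sum(C_daily * Q_daily) * 86.4     # mg/L * m3/s over 1 day -> kg
      print(load_kg)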

  12. Reliable noninvasive prenatal testing by massively parallel sequencing of circulating cell-free DNA from maternal plasma processed up to 24h after venipuncture.

    PubMed

    Buysse, Karen; Beulen, Lean; Gomes, Ingrid; Gilissen, Christian; Keesmaat, Chantal; Janssen, Irene M; Derks-Willemen, Judith J H T; de Ligt, Joep; Feenstra, Ilse; Bekker, Mireille N; van Vugt, John M G; Geurts van Kessel, Ad; Vissers, Lisenka E L M; Faas, Brigitte H W

    2013-12-01

    Circulating cell-free fetal DNA (ccffDNA) in maternal plasma is an attractive source for noninvasive prenatal testing (NIPT). The amount of total cell-free DNA significantly increases 24 h after venipuncture, leading to a relative decrease of the ccffDNA fraction in the blood sample. In this study, we evaluated the downstream effects of extended processing times on the reliability of aneuploidy detection by massively parallel sequencing (MPS). Whole blood from pregnant women carrying normal and trisomy 21 (T21) fetuses was collected in regular EDTA anti-coagulated tubes and processed within 6 h, 24 h and 48 h after venipuncture. Samples from all three time points were further analyzed by MPS using Z-score calculation and the percentage of ccffDNA based on X-chromosome reads. Both T21 samples were correctly identified as such at all time points. However, after 48 h, a higher deviation in Z-scores was noticed. Even though the percentage of ccffDNA in a plasma sample has previously been shown to decrease significantly 24 h after venipuncture, the percentages based on MPS results did not show a significant decrease after 6, 24 or 48 h. The quality and quantity of ccffDNA extracted from plasma samples processed up to 24 h after venipuncture are sufficiently high for reliable downstream NIPT analysis by MPS. Furthermore, we show that it is important to determine the percentage of ccffDNA in the fraction of the sample that is actually used for NIPT, as downstream procedures might influence the fetal or maternal fraction.
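    A minimal sketch of the Z-score calculation commonly used for MPS-based aneuploidy detection, assuming per-sample fractions of reads mapping to chromosome 21 and a set of euploid reference samples; all numbers are illustrative:

      import numpy as np

      # Fractions of mapped reads on chr21 in euploid reference pregnancies.
      ref = np.array([0.01300, 0.01310, 0.01295, 0.01305, 0.01298])

      def z_score(sample_frac, ref_fracs):
          """Z-score of a test sample against the euploid reference set."""
          return (sample_frac - ref_fracs.mean()) / ref_fracs.std(ddof=1)

      # A T21 sample shows an excess of chr21 reads; Z > 3 is a commonly
      # used calling threshold.
      print(z_score(0.01360, ref))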

  13. Comment on "The optimal timing of post-treatment sampling for the assessment of anthelminthic drug efficacy against Ascaris infections in humans".

    PubMed

    Krücken, Jürgen; Fraundorfer, Kira; Mugisha, Jean Claude; Ramünke, Sabrina; Sifft, Kevin C; Geus, Dominik; Habarugira, Felix; Ndoli, Jules; Sendegeya, Augustin; Mukampunga, Caritas; Aebischer, Toni; McKay-Demeler, Janina; Gahutu, Jean Bosco; Mockenhaupt, Frank P; von Samson-Himmelstjerna, Georg

    2018-05-18

    A recent publication by Levecke et al. (Int. J. Parasitol., 2018, 8, 67-69) provides important insights into the kinetics of worm expulsion from humans following treatment with albendazole. This is an important aspect of determining the optimal time point for post-treatment sampling to examine anthelmintic drug efficacy. The authors conclude that, for the determination of drug efficacy against Ascaris, samples should be taken no earlier than day 14, and they recommend a period between days 14 and 21. Using this recommendation, they conclude that previous data (Krücken et al., 2017; Int. J. Parasitol., 7, 262-271) showing a reduction of egg shedding by 75.4% in schoolchildren in Rwanda, and our conclusions from these data, should be interpreted with caution. In reply, we would like to point out that the very low efficacy of 0% in one school and 52-56% in three other schools, while the drug was fully efficacious in other schools, cannot simply be explained by the time point of sampling. Moreover, there was no correlation between the sampling day and albendazole efficacy. We would also like to emphasize that we interpreted our data very carefully and, for example, nowhere claimed to have found anthelmintic resistance. Rather, we stated that our data indicated that benzimidazole resistance may be suspected in the study population. We strongly agree that the data presented by Levecke et al. suggest that recommendations for efficacy testing of anthelmintic drugs should be revised.

  14. Numerical study and ex vivo assessment of HIFU treatment time reduction through optimization of focal point trajectory

    NASA Astrophysics Data System (ADS)

    Grisey, A.; Yon, S.; Pechoux, T.; Letort, V.; Lafitte, P.

    2017-03-01

    Treatment time reduction is a key issue in expanding the use of high intensity focused ultrasound (HIFU) surgery, especially for benign pathologies. This study aims at quantitatively assessing the potential reduction of treatment time arising from moving the focal point during long pulses. In this context, optimization of the focal point trajectory is crucial to achieve a uniform thermal dose distribution and avoid boiling. First, a numerical optimization algorithm was used to generate efficient trajectories. Thermal conduction was simulated in 3D with a finite difference code, and damage to the tissue was modeled using the thermal dose formula. Given an initial trajectory, the thermal dose field was first computed; then, making use of Pontryagin's maximum principle, the trajectory was iteratively refined. Several initial trajectories were tested. An ex vivo study was then conducted in order to validate the efficiency of the resulting optimized strategies. Single pulses were performed at 3 MHz on fresh veal liver samples with an Echopulse, and the size of each unitary lesion was assessed by cutting each sample along three orthogonal planes and measuring the dimensions of the whitened area on photographs. We propose a promising approach to significantly shorten HIFU treatment time: the numerical optimization algorithm was shown to provide reliable insight into trajectories that can improve treatment strategies. The model must now be improved to take in vivo conditions into account and be extensively validated.
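    A minimal sketch of the thermal dose formula mentioned above (Sapareto-Dewey cumulative equivalent minutes at 43 C, CEM43), applied to an illustrative simulated temperature history; the 240 CEM43 necrosis threshold is a commonly used value, not necessarily the one adopted in this study:

      import numpy as np

      def cem43(temp_c, dt_s):
          """Cumulative equivalent minutes at 43 C: sum of R**(43 - T) * dt,
          with R = 0.5 at or above 43 C and 0.25 below."""
          temp_c = np.asarray(temp_c, dtype=float)
          R = np.where(temp_c >= 43.0, 0.5, 0.25)
          return np.sum(R ** (43.0 - temp_c)) * dt_s / 60.0

      # Illustrative 10-s pulse heating tissue from 37 C to ~55 C and back,
      # sampled every 0.1 s.
      t = np.arange(0.0, 10.0, 0.1)
      T = 37.0 + 18.0 * np.sin(np.pi * t / 10.0)

      dose = cem43(T, 0.1)
      print(dose, dose >= 240.0)    # lesion expected above ~240 CEM43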

  15. spsann - optimization of sample patterns using spatial simulated annealing

    NASA Astrophysics Data System (ADS)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method in widespread use for solving optimization problems in the soil and geosciences, mainly due to its robustness against local optima and its ease of implementation. spsann offers many optimization criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when the model of spatial variation is unknown. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance decreases linearly with the number of iterations, and the acceptance probability decreases exponentially with the number of iterations. R is memory-hungry and spatial simulated annealing is a computationally intensive method. As such, several strategies were used to reduce computation time and memory usage: (a) bottlenecks were implemented in C++, (b) a finite set of candidate locations is used for perturbing the sample points, and (c) data matrices are computed only once and then updated at each iteration instead of being recomputed. spsann is available on GitHub under the GPL Version 2.0 licence and will be further developed to: (a) allow the use of a cost surface, (b) implement other sensitive parts of the source code in C++, (c) implement other optimization criteria, and (d) allow points to be added to or deleted from an existing point pattern.
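    A minimal sketch of spatial simulated annealing for the MSSD criterion, assuming a unit-square study area and a fixed number of sample points; spsann itself is an R package with C++ bottlenecks, so this Python version, with its cooling schedule and parameters, is purely illustrative:

      import numpy as np

      rng = np.random.default_rng(3)
      grid = np.stack(np.meshgrid(np.linspace(0, 1, 30),
                                  np.linspace(0, 1, 30)), -1).reshape(-1, 2)

      def mssd(pts):
          """Mean squared shortest distance from grid nodes to sample points."""
          d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
          return d2.min(axis=1).mean()

      pts = rng.uniform(0, 1, (20, 2))               # initial random pattern
      energy, temp = mssd(pts), 1e-3
      for _ in range(2000):
          cand = pts.copy()
          i = rng.integers(len(pts))                 # perturb one point
          cand[i] = np.clip(cand[i] + rng.normal(0, 0.05, 2), 0, 1)
          e_new = mssd(cand)
          # Metropolis rule: accept improvements, sometimes accept worse moves.
          if e_new < energy or rng.random() < np.exp((energy - e_new) / temp):
              pts, energy = cand, e_new
          temp *= 0.999                              # exponential cooling
      print(energy)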

  16. Ratio-based estimators for a change point in persistence.

    PubMed

    Halunga, Andreea G; Osborn, Denise R

    2012-11-01

    We study estimation of the date of change in persistence, from I(0) to I(1) or vice versa. Contrary to statements in the original papers, our analytical results establish that the ratio-based break point estimators of Kim [Kim, J.Y., 2000. Detection of change in persistence of a linear time series. Journal of Econometrics 95, 97-116], Kim et al. [Kim, J.Y., Belaire-Franch, J., Badillo Amador, R., 2002. Corrigendum to "Detection of change in persistence of a linear time series". Journal of Econometrics 109, 389-392] and Busetti and Taylor [Busetti, F., Taylor, A.M.R., 2004. Tests of stationarity against a change in persistence. Journal of Econometrics 123, 33-66] are inconsistent when a mean (or other deterministic component) is estimated for the process. In such cases, the estimators converge to random variables with upper bound given by the true break date when persistence changes from I(0) to I(1). A Monte Carlo study confirms the large-sample downward bias and also finds substantial biases in moderately sized samples, partly due to properties at the end points of the search interval.

  17. TREFEX: Trend Estimation and Change Detection in the Response of MOX Gas Sensors

    PubMed Central

    Pashami, Sepideh; Lilienthal, Achim J.; Schaffernicht, Erik; Trincavelli, Marco

    2013-01-01

    Many applications of metal oxide gas sensors can benefit from reliable algorithms to detect significant changes in the sensor response. Significant changes indicate a change in the emission modality of a distant gas source and occur due to a sudden change of concentration or exposure to a different compound. As a consequence of turbulent gas transport and the relatively slow response and recovery times of metal oxide sensors, their response in open sampling configuration exhibits strong fluctuations that interfere with the changes of interest. In this paper we introduce TREFEX, a novel change point detection algorithm, especially designed for metal oxide gas sensors in an open sampling system. TREFEX models the response of MOX sensors as a piecewise exponential signal and considers the junctions between consecutive exponentials as change points. We formulate non-linear trend filtering and change point detection as a parameter-free convex optimization problem for single sensors and sensor arrays. We evaluate the performance of the TREFEX algorithm experimentally for different metal oxide sensors and several gas emission profiles. A comparison with the previously proposed GLR method shows a clearly superior performance of the TREFEX algorithm both in detection performance and in estimating the change time. PMID:23736853
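    A minimal sketch of the underlying idea: exponential segments are linear in log space, so abrupt changes in the local slope of the log response mark candidate change points. The sliding-window heuristic below is an illustrative stand-in; TREFEX itself solves a parameter-free convex trend-filtering problem:

      import numpy as np

      def slope_change_points(y, win=25, thresh=0.02):
          """Flag samples where the local slope of log(y) changes abruptly;
          returns a cluster of indices around each change point."""
          logy = np.log(np.clip(y, 1e-9, None))
          t = np.arange(len(y))
          slopes = np.array([np.polyfit(t[i:i + win], logy[i:i + win], 1)[0]
                             for i in range(len(y) - win)])
          jump = np.abs(slopes[win:] - slopes[:-win])
          return np.where(jump > thresh)[0] + win

      # Illustrative MOX-like response: exponential decay, then a new
      # exposure causing an exponential rise at sample 150.
      t = np.arange(300)
      y = np.where(t < 150, 2.0 * np.exp(-0.01 * t),
                   2.0 * np.exp(-1.5) * np.exp(0.02 * (t - 150)))
      print(slope_change_points(y))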

  18. Water potential in excised leaf tissue. Comparison of a commercial dew point hygrometer and a thermocouple psychrometer on soybean, wheat, and barley

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelsen, C.E.; Safir, G.R.; Hanson, A.D.

    1978-01-01

    Leaf water potential (Ψleaf) determinations were made on excised leaf samples using a commercial dew point hygrometer (Wescor Inc., Logan, Utah) and a thermocouple psychrometer operated in the isopiestic mode. With soybean leaves (Glycine max L.), there was good agreement between instruments; equilibration times were 2 to 3 hours. With cereals (Triticum aestivum L. and Hordeum vulgare L.), agreement between instruments was poor for moderately wilted leaves when 7-mm-diameter punches were used in the hygrometer and 20-mm slices were used in the psychrometer, because the Ψleaf values from the dew point hygrometer were too high. Agreement was improved by replacing the 7-mm punch samples in the hygrometer with 13-mm slices, which had a lower cut-edge-to-volume ratio. Equilibration times for cereals were normally 6 to 8 hours. Spuriously high Ψleaf values obtained with 7-mm leaf punches may be associated with the ion release and reabsorption that occur upon tissue excision; such errors evidently depend both on the species and on tissue water status.

  19. Novel pH sensing semiconductor for point-of-care detection of HIV-1 viremia

    PubMed Central

    Gurrala, R.; Lang, Z.; Shepherd, L.; Davidson, D.; Harrison, E.; McClure, M.; Kaye, S.; Toumazou, C.; Cooke, G. S.

    2016-01-01

    The timely detection of viremia in HIV-infected patients receiving antiviral treatment is key to ensuring effective therapy and preventing the emergence of drug resistance. In high HIV burden settings, the cost and complexity of diagnostics limit their availability. We have developed a novel complementary metal-oxide semiconductor (CMOS) chip-based, pH-mediated, point-of-care HIV-1 viral load monitoring assay that simultaneously amplifies and detects HIV-1 RNA. A novel low-buffer HIV-1 pH-LAMP (loop-mediated isothermal amplification) assay was optimised and incorporated into a pH sensitive CMOS chip. Screening of 991 clinical samples (164 on the chip) yielded a sensitivity of 95% (in vitro) and 88.8% (on-chip) at >1000 RNA copies/reaction across a broad spectrum of HIV-1 viral clades. Median time to detection was 20.8 minutes in samples with >1000 copies RNA. The sensitivity, specificity and reproducibility are close to that required to produce a point-of-care device which would be of benefit in resource poor regions, and could be performed on a USB stick or similar low power device. PMID:27829667

  20. Study of the model of calibrating differences of brightness temperature from geostationary satellite generated by time zone differences

    NASA Astrophysics Data System (ADS)

    Li, Weidong; Shan, Xinjian; Qu, Chunyan

    2010-11-01

    In comparison with polar-orbiting satellites, geostationary satellites have a higher time resolution and a wider field of view, covering eleven time zones (an image covers about one third of the Earth's surface). In a geostationary satellite image acquired at a single point in time, the brightness temperature of different time zones cannot represent the thermal radiation of the surface at the same local point in time, because of differences in incident solar radiation. It is therefore necessary to calibrate the brightness temperature of the different zones with respect to the same point in time. A model for calibrating the differences in geostationary satellite brightness temperature generated by time zone differences is suggested in this study. A total of 16 curves, for four positions in four different stages, were derived through sample statistics of the brightness temperature of 5-day synthetic data from four different time zones (time zones 4, 6, 8, and 9). The four stages span January-March (winter), April-June (spring), July-September (summer), and October-December (autumn). Three kinds of correction situations, with correction formulas based on the curve changes, are able to better eliminate the rise or drop in brightness temperature caused by time zone differences.

  1. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about the factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of density estimates, the detection probability of items, and the time costs. When items were distributed randomly rather than clumped, bias decreased and precision increased with increasing sample size, and increased slightly with increasing core sampler size. Bias and precision were affected by benthic item density only at very low values (500-1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
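    A minimal sketch of the simulation idea for the randomly distributed case, where core contents follow a Poisson law; the densities, core area and replicate count are illustrative (clumped items would need, e.g., a negative binomial model and give lower detection):

      import numpy as np

      rng = np.random.default_rng(4)

      def detection_probability(density_m2, core_area_cm2, n_cores=10000):
          """P(core captures >= 1 item) for randomly distributed items."""
          lam = density_m2 * core_area_cm2 / 1e4     # expected items per core
          return (rng.poisson(lam, n_cores) >= 1).mean()

      # Detection probability rises with item density and core sampler size.
      for dens in (500, 1000, 5000):
          print(dens, detection_probability(dens, core_area_cm2=45.6))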

  2. Dynamic measurements of CO diffusing capacity using discrete samples of alveolar gas.

    PubMed

    Graham, B L; Mink, J T; Cotton, D J

    1983-01-01

    It has been shown that measurements of the diffusing capacity of the lung for CO made during a slow exhalation [DLCO(exhaled)] yield information about the distribution of the diffusing capacity in the lung that is not available from the commonly measured single-breath diffusing capacity [DLCO(SB)]. Current techniques of measuring DLCO(exhaled) require the use of a rapid-responding (less than 240 ms, 10-90%) CO meter to measure the CO concentration in the exhaled gas continuously during exhalation. DLCO(exhaled) is then calculated using two sample points in the CO signal. Because DLCO(exhaled) calculations are highly affected by small amounts of noise in the CO signal, filtering techniques have been used to reduce noise. However, these techniques reduce the response time of the system and may introduce other errors into the signal. We have developed an alternate technique in which DLCO(exhaled) can be calculated using the concentration of CO in large discrete samples of the exhaled gas, thus eliminating the requirement of a rapid response time in the CO analyzer. We show theoretically that this method is as accurate as other DLCO(exhaled) methods but is less affected by noise. These findings are verified in comparisons of the discrete-sample method of calculating DLCO(exhaled) to point-sample methods in normal subjects, patients with emphysema, and patients with asthma.

  3. Longitudinal Study of Performance of Students Entering Harper College, Years 1967-1975. Vol. IX, No. 6.

    ERIC Educational Resources Information Center

    Lucas, John A.

    Analysis of the transcripts of 200 full-time and 200 part-time beginning traditional credit students randomly sampled from the population of students entering each fall from 1967 to 1975 at William Rainey Harper College indicated that: (1) overall student grade point average rose in direct relationship to changes in grading policy; (2) the grade…

  4. Observing Anthropometric and Acanthosis Nigrican Changes among Children Over Time

    ERIC Educational Resources Information Center

    Law, Jennifer; Northrup, Karen; Wittberg, Richard; Lilly, Christa; Cottrell, Lesley

    2013-01-01

    This study assessed the anthropometrics and acanthosis nigricans (AN) in a sample of 7,337 children at two assessments. Four groups of children were identified based on the presence of AN at both time points: those who never had the marker, those who gained the marker, those who lost the marker, and those who maintained the marker. Group…

  5. Time Diary and Questionnaire Assessment of Factors Associated with Academic and Personal Success among University Undergraduates

    ERIC Educational Resources Information Center

    George, Darren; Dixon, Sinikka; Stansal, Emory; Gelb, Shannon Lund; Pheri, Tabitha

    2008-01-01

    Objective and Participants: A sample of 231 students attending a private liberal arts university in central Alberta, Canada, completed a 5-day time diary and a 71-item questionnaire assessing the influence of personal, cognitive, and attitudinal factors on success. Methods: The authors used 3 success measures: cumulative grade point average (GPA),…

  6. The effects of mating status and time since mating on female sex pheromone levels in the rice leaf bug, Trigonotylus caelestialium

    NASA Astrophysics Data System (ADS)

    Yamane, Takashi; Yasuda, Tetsuya

    2014-02-01

    Although mating status affects future mating opportunities, the biochemical changes that occur in response to mating are not well understood. This study investigated the effects of mating status on the quantities of sex pheromone components found in whole-body extracts and volatile emissions of females of the rice leaf bug, Trigonotylus caelestialium. When sampled at one of four time points within a 4-day postmating period, females that had copulated with a male had greater whole-body quantities of sex pheromone components than those of virgin females sampled at the same times. The quantities of sex pheromone components emitted by virgin females over a 24-h period were initially high but then steadily decreased, whereas 24-h emissions were persistently low among mated females when measured at three time points within the 4 days after mating. As a result, soon after mating, the mated females emitted less sex pheromones than virgin females, but there were no significant differences between mated and virgin females at the end of the experiment. Thus, postmating reduction in the rate of emission of sex pheromones could explain previously observed changes in female attractiveness to male T. caelestialium.

  7. Development of a digital microfluidic platform for point of care testing

    PubMed Central

    Sista, Ramakrishna; Hua, Zhishan; Thwar, Prasanna; Sudarsan, Arjun; Srinivasan, Vijay; Eckhardt, Allen; Pollack, Michael; Pamula, Vamsee

    2009-01-01

    Point of care testing is playing an increasingly important role in improving the clinical outcome in health care management. The salient features of a point of care device are quick results, integrated sample preparation and processing, small sample volumes, portability, multifunctionality and low cost. In this paper, we demonstrate some of these salient features utilizing an electrowetting-based Digital Microfluidic platform. We demonstrate the performance of magnetic bead-based immunoassays (cardiac troponin I) on a digital microfluidic cartridge in less than 8 minutes using whole blood samples. Using the same microfluidic cartridge, a 40-cycle real-time polymerase chain reaction was performed within 12 minutes by shuttling a droplet between two thermal zones. We further demonstrate, on the same cartridge, the capability to perform sample preparation for bacterial and fungal infectious disease pathogens (methicillin-resistant Staphylococcus aureus and Candida albicans) and for human genomic DNA using magnetic beads. In addition to rapid results and integrated sample preparation, electrowetting-based digital microfluidic instruments are highly portable because fluid pumping is performed electronically. All the digital microfluidic chips presented here were fabricated on printed circuit boards utilizing mass production techniques that keep the cost of the chip low. Due to the modularity and scalability afforded by digital microfluidics, multifunctional testing capability, such as combinations within and between immunoassays, DNA amplification, and enzymatic assays, can be brought to the point of care at a relatively low cost because a single chip can be configured in software for different assays required along the path of care. PMID:19023472

  8. Use of the Fakopp TreeSonic acoustic device to estimate wood quality characteristics in loblolly pine trees planted at different densities

    Treesearch

    Ralph L. Amateis; Harold E. Burkhart

    2015-01-01

    A Fakopp TreeSonic acoustic device was used to measure time of flight (TOF) impulses through sample trees prior to felling from 27-year-old loblolly pine (Pinus taeda L.) plantations established at different planting densities. After felling, the sample trees were sawn into lumber and the boards subjected to edgewise bending under 2-point loading. Bending properties...

  9. Simulation of a Geiger-Mode Imaging LADAR System for Performance Assessment

    PubMed Central

    Kim, Seongjoon; Lee, Impyeong; Kwon, Yong Joon

    2013-01-01

    As applications of LADAR systems gradually become more diverse, new types of systems are being developed. When developing new systems, simulation studies are an essential prerequisite. A simulator enables performance prediction and selection of optimal system parameters at the design level, as well as providing sample data for developing and validating application algorithms. The purpose of this study is to propose a method for simulating a Geiger-mode imaging LADAR system. We developed simulation software to assess system performance and generate sample data for the applications. The simulation is based on three aspects of modeling: geometry, radiometry and detection. The geometric model computes the ranges to the reflection points of the laser pulses. The radiometric model generates the return signals, including the noise. The detection model determines the flight times of the laser pulses based on the nature of the Geiger-mode detector. We generated sample data using the simulator with the system parameters and analyzed the detection performance by comparing the simulated points to the reference points. The proportion of outliers in the simulated points reached 25.53%, indicating the need for efficient outlier elimination algorithms. In addition, the false alarm rate and dropout rate of the designed system were computed as 1.76% and 1.06%, respectively. PMID:23823970
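    A minimal sketch of the detection-model component for a Geiger-mode detector, assuming Poisson photon statistics and a detector that fires on the first photon event in the range gate; the gate length, rates and counts are illustrative:

      import numpy as np

      rng = np.random.default_rng(5)

      def simulate_ranges(range_m, n_pulses=10000, noise_rate=1e5, n_sig=0.5):
          """First-photon detection times with uniform background noise."""
          c = 3.0e8
          t_true = 2.0 * range_m / c                 # two-way time of flight
          gate = 2.0e-6                              # 2-us range gate (~300 m)
          hits = []
          for _ in range(n_pulses):
              times = rng.uniform(0, gate, rng.poisson(noise_rate * gate)).tolist()
              if rng.random() < 1.0 - np.exp(-n_sig):    # signal photon detected?
                  times.append(t_true)
              if times:
                  hits.append(min(times))            # Geiger mode: first event fires
          return np.array(hits) * c / 2.0

      r = simulate_ranges(150.0)
      # Outliers are noise-triggered events far from the true 150-m return.
      print(r.size, np.mean(np.abs(r - 150.0) > 1.5))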

  10. Where Do I Start (Beginning the Investigation)?

    NASA Astrophysics Data System (ADS)

    Kornacki, Jeffrey L.

    No doubt some will open directly to this chapter, because your product is contaminated with an undesirable microbe, or perhaps you have been asked to conduct such an investigation for another company's facility not previously observed by you, and naturally you want tips on how to find where the contaminant is entering the product stream. This chapter takes the reader through the process of beginning the investigation, including understanding the process and production schedule and critically reviewing previously generated laboratory data. Understanding the critical control points and the validity of their critical limits is also important. Scoping the extent of the problem is next. It is always a good idea for the factory to have a rigorously validated cleaning and sanitation procedure that provides a documented "sanitation breakpoint," which can be useful in the scoping process, although some contamination events may extend past these breakpoints. Touring the facility comes next, wherein areas can be preliminarily selected for future sampling. Operational samples and observations in non-food-contact areas can be taken at this time. The operations personnel then need to be consulted and plans made for an appropriate amount of time to observe equipment breakdown for "post-operational" sampling and "pre-operational" investigational sampling. Hence, the chapter further discusses preparing operations personnel for the disruptions that accompany these investigations and assembling the sampling team. The chapter concludes with a discussion of post-startup observations after an investigation and sampling.

  11. Time-Resolved Transposon Insertion Sequencing Reveals Genome-Wide Fitness Dynamics during Infection.

    PubMed

    Yang, Guanhua; Billings, Gabriel; Hubbard, Troy P; Park, Joseph S; Yin Leung, Ka; Liu, Qin; Davis, Brigid M; Zhang, Yuanxing; Wang, Qiyao; Waldor, Matthew K

    2017-10-03

    Transposon insertion sequencing (TIS) is a powerful high-throughput genetic technique that is transforming functional genomics in prokaryotes, because it enables genome-wide mapping of the determinants of fitness. However, current approaches for analyzing TIS data assume that selective pressures are constant over time and thus do not yield information regarding changes in the genetic requirements for growth in dynamic environments (e.g., during infection). Here, we describe structured analysis of TIS data collected as a time series, termed pattern analysis of conditional essentiality (PACE). From a temporal series of TIS data, PACE derives a quantitative assessment of each mutant's fitness over the course of an experiment and identifies mutants with related fitness profiles. In so doing, PACE circumvents major limitations of existing methodologies, specifically the need for artificial effect size thresholds and enumeration of bacterial population expansion. We used PACE to analyze TIS samples of Edwardsiella piscicida (a fish pathogen) collected over a 2-week infection period from a natural host (the flatfish turbot). PACE uncovered more genes that affect E. piscicida's fitness in vivo than were detected using a cutoff at a terminal sampling point, and it identified subpopulations of mutants with distinct fitness profiles, one of which informed the design of new live vaccine candidates. Overall, PACE enables efficient mining of time series TIS data and enhances the power and sensitivity of TIS-based analyses. IMPORTANCE Transposon insertion sequencing (TIS) enables genome-wide mapping of the genetic determinants of fitness, typically based on observations at a single sampling point. Here, we move beyond analysis of endpoint TIS data to create a framework for analysis of time series TIS data, termed pattern analysis of conditional essentiality (PACE). We applied PACE to identify genes that contribute to colonization of a natural host by the fish pathogen Edwardsiella piscicida. PACE uncovered more genes that affect E. piscicida's fitness in vivo than were detected using a terminal sampling point, and its clustering of mutants with related fitness profiles informed the design of new live vaccine candidates. PACE yields insights into patterns of fitness dynamics and circumvents major limitations of existing methodologies. Finally, the PACE method should be applicable to additional "omic" time series data, including screens based on clustered regularly interspaced short palindromic repeats with Cas9 (CRISPR/Cas9).
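    A minimal sketch of the time-series idea behind PACE-style analysis, assuming read counts per transposon mutant at an input time point and three in vivo time points; summarizing fitness as log2 fold-change trajectories and clustering them with k-means is an illustrative stand-in for PACE's actual model:

      import numpy as np
      from scipy.cluster.vq import kmeans2

      rng = np.random.default_rng(6)

      # Illustrative counts: 200 mutants x 4 samples (input, days 3, 7, 14).
      counts = rng.poisson(100, (200, 4)).astype(float)
      counts[:40, 1:] *= [0.5, 0.2, 0.05]            # steadily attenuated mutants
      counts[40:60, 1:] *= [1.0, 0.5, 0.2]           # late-attenuated mutants

      # Normalize to library size, then log2 fold-change versus the input.
      freq = counts / counts.sum(axis=0)
      lfc = np.log2((freq[:, 1:] + 1e-9) / (freq[:, [0]] + 1e-9))

      # Group mutants with related fitness trajectories.
      centroids, labels = kmeans2(lfc, 3, minit="++", seed=1)
      for k in range(3):
          print(k, int((labels == k).sum()), centroids[k].round(2))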

  12. 3D sensitivity encoded ellipsoidal MR spectroscopic imaging of gliomas at 3T

    PubMed Central

    Ozturk-Isik, Esin; Chen, Albert P.; Crane, Jason C.; Bian, Wei; Xu, Duan; Han, Eric T.; Chang, Susan M.; Vigneron, Daniel B.; Nelson, Sarah J.

    2010-01-01

    Purpose: The goal of this study was to implement time-efficient data acquisition and reconstruction methods for 3D magnetic resonance spectroscopic imaging (MRSI) of gliomas at a field strength of 3T using parallel imaging techniques. Methods: The point spread functions, signal to noise ratio (SNR), spatial resolution, metabolite intensity distributions and Cho:NAA ratio of 3D ellipsoidal, 3D sensitivity encoding (SENSE) and 3D combined ellipsoidal and SENSE (e-SENSE) k-space sampling schemes were compared with conventional k-space data acquisition methods. Results: The 3D SENSE and e-SENSE methods resulted in spectral patterns similar to those of the conventional MRSI methods. The Cho:NAA ratios were highly correlated (P<.05 for SENSE and P<.001 for e-SENSE) with the ellipsoidal method, and all methods exhibited significantly different spectral patterns in tumor regions compared to normal-appearing white matter. The geometry factors ranged between 1.2 and 1.3 for both the SENSE and e-SENSE spectra. When corrected for these factors and for differences in data acquisition times, the empirical SNRs were similar to the values expected on theoretical grounds. The effective spatial resolution of the SENSE spectra was estimated to be the same as for the corresponding fully sampled k-space data, while the spectra acquired with ellipsoidal and e-SENSE k-space samplings were estimated to have a 2.36-2.47-fold loss in spatial resolution due to the differences in their point spread functions. Conclusion: The 3D SENSE method retained the same spatial resolution as full k-space sampling but with a 4-fold reduction in scan time and an acquisition time of 9.28 min. The 3D e-SENSE method had a similar spatial resolution as the corresponding ellipsoidal sampling with a scan time of 4:36 min. Both parallel imaging methods provided clinically interpretable spectra with volumetric coverage and adequate SNR for evaluating Cho, Cr and NAA. PMID:19766422
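    A small worked example of the SNR bookkeeping used in such comparisons: with reduction factor R and geometry factor g, parallel-imaging SNR scales as SNR_full / (g * sqrt(R)); the numbers below are illustrative, with g in the reported 1.2-1.3 range:

      import numpy as np

      def sense_snr(snr_full, g, R):
          """SNR after SENSE acceleration relative to full k-space sampling."""
          return snr_full / (g * np.sqrt(R))

      print(sense_snr(20.0, 1.25, 4))    # 4-fold faster scan, SNR 20 -> 8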

  13. The microbiology of the peri-implant sulcus following successful implantation of oral prosthetic treatments.

    PubMed

    Asadzadeh, Nafiseh; Naderynasab, Mahbobeh; Fard, Fojhan Ghorbanian; Rohi, Ali; Haghi, Hamidreza Rajati

    2012-01-01

    Oral implants are widely used in partially and fully edentulous patients; however, the integration of an implant can be endangered by factors such as intraoral bacteria or inflammatory reactions. The purpose of this study was to evaluate the microbial flora present in the sulcus around dental implants and to assess the relationship between gingival health and the microbial flora present. Twenty patients who had received oral implants with no complications were followed for a period of 9 months. Assessment of probing depth, the presence of bleeding on probing and microbial sampling from the peri-implant sulcus were performed at three different time points: 4 weeks after surgery, and 1 month and 6 months after loading. The samples were taken with paper points and transferred to the microbiology lab in thioglycollate cultures. In order to perform colony counts and isolate the aerobic, capnophilic and anaerobic bacteria, the samples were cultured and incubated on laboratory media. The colonies were also identified using various diagnostic tests. Alterations in the presence of various bacterial species over time and gum health were tested using analysis of variance (ANOVA) with Tukey's post hoc test. The average pocket depth for each patient ranged from 1.37 ± 0.39 mm to 2.55 ± 0.72 mm. The bacteria isolated from the cultured samples included aerobic, facultative anaerobic, obligate anaerobic and capnophilic bacteria. The anaerobic conditions created in the peri-implant sulcus might with time enhance the number of anaerobic bacteria present following dental implant loading.

  14. Small Demodex populations colonize most parts of the skin of healthy dogs.

    PubMed

    Ravera, Iván; Altet, Laura; Francino, Olga; Sánchez, Armand; Roldán, Wendy; Villanueva, Sergio; Bardagí, Mar; Ferrer, Lluís

    2013-02-01

    It is unproven that all dogs harbour Demodex mites in their skin; in fact, several microscopic studies have failed to demonstrate mites in healthy dogs. The hypothesis that Demodex canis is a normal inhabitant of the skin of most, if not all, dogs was tested using a sensitive real-time PCR to detect Demodex DNA in the skin of dogs. The study included one hundred dogs living in a humane society shelter, 20 privately owned healthy dogs and eight dogs receiving immunosuppressive or antineoplastic therapy. Hair samples (250-300 hairs with their hair bulbs) were taken from five or 20 skin locations. A real-time PCR that amplifies a 166 bp sequence of the D. canis chitin synthase gene was used. The percentage of positive dogs increased with the number of sampling points. When a large canine population was sampled at five cutaneous locations, 18% of dogs were positive for Demodex DNA. When 20 skin locations were sampled, all dogs tested positive for mite DNA. Our study indicates that Demodex colonization of the skin is present in all dogs, independent of age, sex, breed or coat. Nevertheless, the population of mites in a healthy dog appears to be small. Demodex DNA was amplified from all 20 cutaneous points investigated, without statistically significant differences. Using a real-time PCR technique, Demodex mites, albeit in very low numbers, were found to be normal inhabitants of haired areas of the skin of healthy dogs. © 2013 The Authors. Veterinary Dermatology © 2013 ESVD and ACVD.
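
    The rise in positivity with the number of sampling points is what a simple independence model predicts: if each site detects mite DNA with probability p, sampling n sites yields at least one positive with probability 1 − (1 − p)^n. A toy calculation (p is an assumed illustrative value, not a figure from the study; real skin sites are unlikely to be fully independent):

    ```python
    # Toy independence model: if each sampled site detects mite DNA with
    # probability p, at least one of n sites is positive with probability
    # 1 - (1 - p)^n. The value of p below is illustrative, not from the study.
    p = 0.04
    for n in (1, 5, 10, 20):
        print(f"n = {n:2d} sites -> P(at least one positive) = {1 - (1 - p) ** n:.2f}")
    ```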

  15. Influence of respiratory tract disease and mode of inhalation on detectability of budesonide in equine urine and plasma.

    PubMed

    Barton, Ann Kristin; Heinemann, Henrike; Schenk, Ina; Machnik, Marc; Gehlen, Heidrun

    2017-02-01

    OBJECTIVE To evaluate the influence of respiratory tract disease (ie, recurrent airway obstruction [RAO]) and mode of inhalation on detectability of inhaled budesonide in equine plasma and urine samples. ANIMALS 16 horses (8 healthy control horses and 8 horses affected by RAO, as determined by results of clinical examination, blood gas analysis, bronchoscopy, and cytologic examination of bronchoalveolar lavage fluid). PROCEDURES 4 horses of each group inhaled budesonide (3 μg/kg) twice daily for 10 days while at rest, and the remaining 4 horses of each group inhaled budesonide during lunging exercise. Plasma and urine samples were obtained 4 to 96 hours after inhalation and evaluated for budesonide and, in urine samples, the metabolites 6β-hydroxybudesonide and 16α-hydroxyprednisolone. RESULTS Detected concentrations of budesonide were significantly higher at all time points for RAO-affected horses, compared with concentrations for the control horses. All samples of RAO-affected horses contained budesonide concentrations above the limit of detection at 96 hours after inhalation, whereas this was found for only 2 control horses. Detected concentrations of budesonide were higher, but not significantly so, at all time points in horses that inhaled budesonide during exercise, compared with concentrations for inhalation at rest. CONCLUSIONS AND CLINICAL RELEVANCE Results of this study indicated that the time interval between inhalation of a glucocorticoid and participation in sporting events should be increased when inhalation treatment is administered during exercise to horses affected by respiratory tract disease.

  16. Multiobjective sampling design for parameter estimation and model discrimination in groundwater solute transport

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1989-01-01

    Sampling design for site characterization studies of solute transport in porous media is formulated as a multiobjective problem. Optimal design of a sampling network is a sequential process in which the next phase of sampling is designed on the basis of all available physical knowledge of the system. Three objectives are considered: model discrimination, parameter estimation, and cost minimization. For the first two objectives, physically based measures of the value of information obtained from a set of observations are specified. In model discrimination, value of information of an observation point is measured in terms of the difference in solute concentration predicted by hypothesized models of transport. Points of greatest difference in predictions can contribute the most information to the discriminatory power of a sampling design. Sensitivity of solute concentration to a change in a parameter contributes information on the relative variance of a parameter estimate. Inclusion of points in a sampling design with high sensitivities to parameters tends to reduce variance in parameter estimates. Cost minimization accounts for both the capital cost of well installation and the operating costs of collection and analysis of field samples. Sensitivities, discrimination information, and well installation and sampling costs are used to form coefficients in the multiobjective problem in which the decision variables are binary (zero/one), each corresponding to the selection of an observation point in time and space. The solution to the multiobjective problem is a noninferior set of designs. To gain insight into effective design strategies, a one-dimensional solute transport problem is hypothesized. Then, an approximation of the noninferior set is found by enumerating 120 designs and evaluating objective functions for each of the designs. Trade-offs between pairs of objectives are demonstrated among the models. The value of an objective function for a given design is shown to correspond to the ability of a design to actually meet an objective.
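
    The screening step (evaluating objective functions for each enumerated design and keeping the noninferior ones) can be sketched generically. The sketch below uses random objective values purely for illustration; in the paper the columns would be the physically based measures described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Rows = candidate sampling designs; columns = objectives to MINIMIZE
    # (e.g., parameter-estimate variance, negative discrimination information,
    # cost). Random values stand in for the physically based measures.
    scores = rng.random((120, 3))

    def noninferior(points):
        """Indices of designs that no other design dominates on all objectives."""
        keep = []
        for i, p in enumerate(points):
            dominated = np.any(np.all(points <= p, axis=1) &
                               np.any(points < p, axis=1))
            if not dominated:
                keep.append(i)
        return keep

    front = noninferior(scores)
    print(f"{len(front)} noninferior designs out of {len(scores)}")
    ```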

  17. Longitudinal assessment of local and global functional connectivity following sports-related concussion.

    PubMed

    Meier, Timothy B; Bellgowan, Patrick S F; Mayer, Andrew R

    2017-02-01

    Growing evidence suggests that sports-related concussions (SRC) may lead to acute changes in intrinsic functional connectivity, although most studies to date have been cross-sectional in nature with relatively modest sample sizes. We longitudinally assessed changes in local and global resting state functional connectivity using metrics that do not require a priori seed or network selection (regional homogeneity [ReHo] and global brain connectivity [GBC], respectively). A large sample of collegiate athletes (N = 43) was assessed approximately one day (1.74 days post-injury, N = 34), one week (8.44 days, N = 34), and one month post-concussion (32.47 days, N = 30). Healthy contact-sport athletes served as controls (N = 51). Concussed athletes showed improvement in mood symptoms at each time point (p's < 0.05), but had significantly higher mood scores than healthy athletes at every time point (p's < 0.05). In contrast, self-reported symptoms and cognitive deficits improved over time following concussion (p's < 0.001), returning to healthy levels by one week post-concussion. ReHo in sensorimotor, visual, and temporal cortices increased over time post-concussion, and was greatest at one month post-injury. Conversely, ReHo in the frontal cortex decreased over time following SRC, with the greatest decrease evident at one month post-concussion. Differences in ReHo relative to healthy athletes were primarily observed at one month post-concussion rather than the more acute time points. Contrary to our hypothesis, no significant cross-sectional or longitudinal differences in GBC were observed. These results are suggestive of a delayed onset of local connectivity changes following SRC.
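
    ReHo, mentioned above, is conventionally computed as Kendall's coefficient of concordance (W) over the time series of a voxel and its immediate neighbors. A minimal sketch on synthetic data (the neighborhood size and noise levels are arbitrary choices, not values from the study):

    ```python
    import numpy as np
    from scipy.stats import rankdata

    def kendalls_w(ts):
        """Kendall's W for a (k voxels x n time points) block; this is the value
        ReHo assigns to the center voxel of the neighborhood."""
        k, n = ts.shape
        ranks = np.vstack([rankdata(row) for row in ts])  # rank each voxel's series
        R = ranks.sum(axis=0)                             # rank sums per time point
        S = np.sum((R - R.mean()) ** 2)
        return 12.0 * S / (k ** 2 * (n ** 3 - n))

    rng = np.random.default_rng(5)
    shared = rng.standard_normal(100)
    coherent = shared + 0.3 * rng.standard_normal((27, 100))  # 3x3x3 neighborhood
    incoherent = rng.standard_normal((27, 100))
    print("ReHo, locally coherent signal :", round(kendalls_w(coherent), 3))
    print("ReHo, independent noise       :", round(kendalls_w(incoherent), 3))
    ```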

  18. Cooperative processing in primary somatosensory cortex and posterior parietal cortex during tactile working memory.

    PubMed

    Ku, Yixuan; Zhao, Di; Bodner, Mark; Zhou, Yong-Di

    2015-08-01

    In the present study, causal roles of both the primary somatosensory cortex (SI) and the posterior parietal cortex (PPC) were investigated in a tactile unimodal working memory (WM) task. Individual magnetic resonance imaging-based single-pulse transcranial magnetic stimulation (spTMS) was applied, respectively, to the left SI (ipsilateral to tactile stimuli), right SI (contralateral to tactile stimuli) and right PPC (contralateral to tactile stimuli), while human participants were performing a tactile-tactile unimodal delayed matching-to-sample task. The time points of spTMS were 300, 600 and 900 ms after the onset of the tactile sample stimulus (duration: 200 ms). Compared with ipsilateral SI, application of spTMS over either contralateral SI or contralateral PPC at those time points significantly impaired the accuracy of task performance. Meanwhile, the deterioration in accuracy did not vary with the stimulating time points. Together, these results indicate that the tactile information is processed cooperatively by SI and PPC in the same hemisphere, starting from the early delay of the tactile unimodal WM task. This pattern of processing of tactile information is different from the pattern in tactile-visual cross-modal WM. In a tactile-visual cross-modal WM task, SI and PPC contribute to the processing sequentially, suggesting a process of sensory information transfer during the early delay between modalities. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  19. Behavior of sensitivities in the one-dimensional advection-dispersion equation: Implications for parameter estimation and sampling design

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1987-01-01

    The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.
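
    For a step input at the upstream boundary, the leading term of the classical solution is c(x,t) = (c0/2) erfc[(x − vt)/(2√(Dt))], and the sensitivities ∂c/∂v and ∂c/∂D can be probed numerically. A sketch with assumed parameter values, illustrating principle (4), in which velocity sensitivity dwarfs dispersion sensitivity:

    ```python
    import numpy as np
    from scipy.special import erfc

    def conc(x, t, v, D, c0=1.0):
        """Leading term of the 1D advection-dispersion solution, step input."""
        return 0.5 * c0 * erfc((x - v * t) / (2.0 * np.sqrt(D * t)))

    x, v, D = 50.0, 1.0, 0.5          # assumed illustrative values, consistent units
    t = np.linspace(1.0, 100.0, 200)

    # Central finite-difference sensitivities at each observation time.
    dv = dD = 1e-4
    s_v = (conc(x, t, v + dv, D) - conc(x, t, v - dv, D)) / (2 * dv)
    s_D = (conc(x, t, v, D + dD) - conc(x, t, v, D - dD)) / (2 * dD)

    i_v, i_D = np.argmax(np.abs(s_v)), np.argmax(np.abs(s_D))
    print(f"max |dc/dv| = {abs(s_v[i_v]):7.3f} at t = {t[i_v]:5.1f}")
    print(f"max |dc/dD| = {abs(s_D[i_D]):7.3f} at t = {t[i_D]:5.1f}")
    ```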

  20. Effects of the Safe Drinking Water Act Amendments of 1986 on Army Fixed Installation Water Treatment Plants

    DTIC Science & Technology

    1992-06-01

    Parathion degradation product, chloramines, bromacil (4-Nitrophenol), chlorate, cyanazine, prometon, chlorine, cyromazine, 2,4,5-T, chlorine dioxide, DCPA (and ... value is the residual disinfectant concentration. T is the disinfectant contact time. Explanations of how C and T are calculated are included in Appendix ... each chlorine residual disinfectant concentration sampling point. c) Disinfectant Contact Time. The disinfectant contact time (T) must be determined ...

  1. Progress, Potential and Pitfalls in the Use of Bomb 14C to Constrain Soil Carbon Dynamics

    NASA Astrophysics Data System (ADS)

    Baisden, W. T.

    2007-12-01

    Forty-four years have passed since atmospheric testing of thermonuclear weapons injected a major 14C spike into the atmosphere-biosphere-hydrosphere system. The use of bomb 14C, in combination with millennial decay of 14C, remains the most effective empirical tool for constraining rates of carbon (C) cycling in soils at timescales beyond experimental manipulations (>5 years). In the last 20 years, accelerator mass spectrometry has greatly increased the potential and throughput of soil 14C studies. At present, atmospheric Δ14C appears to be stabilizing at more constant values as a result of reinjection of bomb 14C from decadal storage in forests and soils. This means that current and future studies using bomb 14C have different sensitivities and uncertainties compared to those carried out during periods of rapid Δ14C decline such as the 1970s, 80s and 90s. Bomb 14C proves most effective when archived soil samples are available: simply using bulk Δ14C from samples collected at two or more times can surpass single time point Δ14C from soil fractions in providing robust C cycling rates. Of course, measurement of Δ14C in soil fractions from time series samples can significantly improve estimates of C cycling parameters. Samples collected between ca. 1965 and 1995 have now greatly surpassed pre-bomb samples in utility, although pre-bomb samples retain considerable usefulness for estimating the size of inert (millennial) C pools. Major pitfalls in the use of bomb 14C, particularly for single time point samples and fractions, are mainly associated with model assumptions. For example, calculated residence times can be highly sensitive to a minor component of old C (<10% of total C). Similarly, calculated residence times are also highly dependent upon rates of soil C accumulation or loss. A final key source of error is lag times between C fixation from atmospheric CO2 and incorporation in the measured soil C pool, either due to long-lived plant tissue, or residence times in other soil pools/horizons. All work using Δ14C should consider sensitivity and uncertainty related to these issues. Major potential exists in the use of Δ14C to constrain soil C dynamics as a function of soil depth, in relation to major unexplained losses of soil C, and to probe the mechanisms and rates of soil organic matter stabilization. These areas of major potential all lie outside conventional use of Δ14C to calculate simple residence times.
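
    The residence-time estimates discussed here typically come from fitting a one-pool, first-order turnover model in which soil carbon tracks the atmospheric Δ14C record. A much-simplified sketch with a synthetic stand-in for the bomb curve (the real atmospheric record would be substituted), showing how archived samples from two or more times constrain the turnover time tau:

    ```python
    import numpy as np

    # One-pool, first-order turnover model: each year a fraction k = 1/tau of the
    # soil pool is replaced by carbon carrying the current atmospheric Delta-14C.
    # Radioactive decay (half-life 5730 yr) is ignored; it matters only for
    # millennial pools. The atmospheric curve below is a synthetic stand-in.
    years = np.arange(1900, 2011)
    atm = np.zeros(years.size)
    rise = (years >= 1955) & (years <= 1964)
    atm[rise] = np.linspace(0.0, 700.0, rise.sum())           # ramp to the 1964 peak
    post = years > 1964
    atm[post] = 700.0 * np.exp(-(years[post] - 1964) / 16.0)  # rough decline

    for tau in (10, 50, 500):                 # candidate turnover times, years
        k, soil = 1.0 / tau, 0.0
        archive = {}
        for y, a in zip(years, atm):
            soil += k * (a - soil)
            if y in (1965, 1995, 2010):
                archive[y] = round(soil)
        print(f"tau = {tau:3d} yr -> soil Delta-14C (permil) at sample years: {archive}")
    ```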

  2. The Multigroup Multilevel Categorical Latent Growth Curve Models

    ERIC Educational Resources Information Center

    Hung, Lai-Fa

    2010-01-01

    Longitudinal data describe developmental patterns and enable predictions of individual changes beyond sampled time points. Major methodological issues in longitudinal data include modeling random effects, subject effects, growth curve parameters, and autoregressive residuals. This study embedded the longitudinal model within a multigroup…

  3. Validation of a quantitative Eimeria spp. PCR for fresh droppings of broiler chickens.

    PubMed

    Peek, H W; Ter Veen, C; Dijkman, R; Landman, W J M

    2017-12-01

    A quantitative Polymerase Chain Reaction (qPCR) for the seven chicken Eimeria spp. was modified and validated for direct use on fresh droppings. The analytical specificity of the qPCR on droppings was 100%. Its analytical sensitivity (non-sporulated oocysts/g droppings) was 41 for E. acervulina, ≤2900 for E. brunetti, 710 for E. praecox, 1500 for E. necatrix, 190 for E. tenella, 640 for E. maxima, and 1100 for E. mitis. Field validation of the qPCR was done using droppings with non-sporulated oocysts from 19 broiler flocks. To reduce the number of qPCR tests, five grams of each pooled sample (consisting of ten fresh droppings) per time point were blended into one mixed sample. Comparison of the oocysts per gram (OPG)-counting method with the qPCR using pooled samples (n = 1180) yielded a Pearson's correlation coefficient of 0.78 (95% CI: 0.76-0.80) and a Pearson's correlation coefficient of 0.76 (95% CI: 0.70-0.81) using mixed samples (n = 236). Comparison of the average of the OPG-counts of the five pooled samples with the mixed sample per time point (n = 236) showed a Pearson's correlation coefficient (R) of 0.94 (95% CI: 0.92-0.95) for the OPG-counting method and 0.87 (95% CI: 0.84-0.90) for the qPCR. This indicates that mixed samples are practically equivalent to the mean of five pooled samples. The good correlation between the OPG-counting method and the qPCR was further confirmed by the visual agreement between the total oocyst/g shedding patterns measured with both techniques in the 19 broiler flocks using the mixed samples.

  4. On the relation between correlation dimension, approximate entropy and sample entropy parameters, and a fast algorithm for their calculation

    NASA Astrophysics Data System (ADS)

    Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw

    2012-12-01

    We explore the relation between correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided and the consequences for the interpretation of sample entropy, its relative consistency and some of the algorithms for parameter selection for this quantity are discussed. To get an exact algorithmic relation between the three parameters we construct a very fast algorithm for simultaneous calculations of the above, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 10^4 points within minutes with the use of an average notebook computer.
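
    For reference, the plain quadratic-time definition of sample entropy that such fast algorithms accelerate is compact: count template matches of length m and m+1 within tolerance r, then take −ln of their ratio. A minimal sketch (one common convention; this is not the paper's algorithm):

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r_frac=0.2):
        """Plain O(N^2) sample entropy: -ln(A/B), where B counts template matches
        of length m and A of length m+1, within tolerance r (Chebyshev distance),
        excluding self-matches. One common convention; not the fast algorithm."""
        x = np.asarray(x, dtype=float)
        r = r_frac * x.std()

        def count(mm):
            t = np.lib.stride_tricks.sliding_window_view(x, mm)
            c = 0
            for i in range(len(t) - 1):
                d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)  # Chebyshev distance
                c += int(np.sum(d <= r))
            return c

        B, A = count(m), count(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    rng = np.random.default_rng(0)
    print("white noise:", round(sample_entropy(rng.standard_normal(1000)), 3))
    print("sine wave  :", round(sample_entropy(np.sin(np.linspace(0, 60, 1000))), 3))
    ```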

  5. Effects of Sampling Strategy, Detection Probability, and Independence of Counts on the Use of Point Counts

    Treesearch

    Grey W. Pendleton

    1995-01-01

    Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation...

  6. Multivariate survivorship analysis using two cross-sectional samples.

    PubMed

    Hill, M E

    1999-11-01

    As an alternative to survival analysis with longitudinal data, I introduce a method that can be applied when one observes the same cohort in two cross-sectional samples collected at different points in time. The method allows for the estimation of log-probability survivorship models that estimate the influence of multiple time-invariant factors on survival over a time interval separating two samples. This approach can be used whenever the survival process can be adequately conceptualized as an irreversible single-decrement process (e.g., mortality, the transition to first marriage among a cohort of never-married individuals). Using data from the Integrated Public Use Microdata Series (Ruggles and Sobek 1997), I illustrate the multivariate method through an investigation of the effects of race, parity, and educational attainment on the survival of older women in the United States.

  7. Intensity of Territorial Marking Predicts Wolf Reproduction: Implications for Wolf Monitoring

    PubMed Central

    García, Emilio J.

    2014-01-01

    Background The implementation of intensive and complex approaches to monitor large carnivores is resource demanding, restricted to endangered species, small populations, or small distribution ranges. Wolf monitoring over large spatial scales is difficult, but the management of such contentious species requires regular estimations of abundance to guide decision-makers. The integration of wolf marking behaviour with simple sign counts may offer a cost-effective alternative to monitor the status of wolf populations over large spatial scales. Methodology/Principal Findings We used a multi-sampling approach, based on the collection of visual and scent wolf marks (faeces and ground scratching) and the assessment of wolf reproduction using howling and observation points, to test whether the intensity of marking behaviour around the pup-rearing period (summer-autumn) could reflect wolf reproduction. Between 1994 and 2007 we collected 1,964 wolf marks in a total of 1,877 km surveyed and we searched for the pups' presence (1,497 howling and 307 observation points) in 42 sampling sites with a regular presence of wolves (120 sampling sites/year). The number of wolf marks was ca. 3 times higher in sites with a confirmed presence of pups (20.3 vs. 7.2 marks). We found a significant relationship between the number of wolf marks (mean and maximum relative abundance index) and the probability of wolf reproduction. Conclusions/Significance This research establishes a real-time relationship between the intensity of wolf marking behaviour and wolf reproduction. We suggest a conservative cut-off point of 0.60 for the probability of wolf reproduction to monitor wolves on a regional scale combined with the use of the mean relative abundance index of wolf marks in a given area. We show how the integration of wolf behaviour with simple sampling procedures permits rapid, real-time, and cost-effective assessments of the breeding status of wolf packs, with substantial implications for monitoring wolves at large spatial scales. PMID:24663068

  8. N leaching to groundwater from dairy production involving grazing over the winter on a clay-loam soil.

    PubMed

    Necpalova, M; Fenton, O; Casey, I; Humphreys, J

    2012-08-15

    This study investigated concentrations of various N species in shallow groundwater (<2.2 m below ground level) and N losses from dairy production involving grazing over the winter period on a clay loam soil with a high natural attenuation capacity in southern Ireland (52°51'N, 08°21'W) over a 2-year period. A dense network of shallow groundwater piezometers was installed to determine groundwater flow direction and N spatial and temporal variation. Estimated vertical travel times through the unsaturated zone (<0.5 yr time lag) allowed the correlation of management with groundwater N within a short space of time. There was a two-way interaction of system and sampling date (P<0.05) on concentrations of DON, oxidised N and NO3⁻-N. In contrast, concentrations of NH4⁺-N and NO2⁻-N were unaffected by the dairy system. Grazing over the winter had no effect on N losses to groundwater. Mean concentrations of DON, NH4⁺-N, NO2⁻-N and NO3⁻-N were 2.16, 0.35, 0.01 and 0.37 mg L⁻¹, respectively. Soil attenuation processes such as denitrification and DNRA resulted in increased NH4⁺-N levels. For this reason, DON and NH4⁺-N represented the highest proportion of N losses from the site. Some of the spatial and temporal variation in N concentrations was explained by correlations with selected chemical and hydro-topographical parameters (NO3⁻-N/Cl⁻ ratio, distance of the sampling point from the closest receptor, water table depth, depth of sampling piezometer, DOC concentration). The high explanatory power of the NO3⁻-N/Cl⁻ ratio and the distance of the sampling point from the closest receptor indicated the influence of point sources and groundwater-surface water interactions. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Validation of a modification to Performance-Tested Method 010403: microwell DNA hybridization assay for detection of Listeria spp. in selected foods and selected environmental surfaces.

    PubMed

    Alles, Susan; Peng, Linda X; Mozola, Mark A

    2009-01-01

    A modification to Performance-Tested Method 010403, GeneQuence Listeria Test (DNAH method), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C, and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there were statistically significant differences in method performance between the DNAH method and reference culture procedures for only 2 foods (pasteurized crab meat and lettuce) at the 27 h enrichment time point and for only a single food (pasteurized crab meat) in one trial at the 30 h enrichment time point. Independent laboratory testing with 3 foods showed statistical equivalence between the methods for all foods, and results support the findings of the internal trials. Overall, considering both internal and independent laboratory trials, sensitivity of the DNAH method relative to the reference culture procedures was 90.5%. Results of testing 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the DNAH method was more productive than the reference U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the DNAH method at the 24 h time point. Overall, sensitivity of the DNAH method at 24 h relative to that of the USDA-FSIS method was 152%. The DNAH method exhibited extremely high specificity, with only 1% false-positive reactions overall.

  10. Time dependent reduction in platelet aggregation using the multiplate analyser and hirudin blood due to platelet clumping.

    PubMed

    Chapman, Kent; Favaloro, Emmanuel J

    2018-05-01

    The Multiplate is a popular instrument that measures platelet function using whole blood. Potentially considered a point-of-care instrument, it is also used by hemostasis laboratories. The instrument is usually utilized to assess antiplatelet medication or as a screen of platelet function. According to the manufacturer, testing should be performed within 0.5-3 hours of blood collection, and preferably using manufacturer-provided hirudin tubes. We report a time-associated reduction in platelet aggregation using the Multiplate and hirudin blood collection tubes, for all the major employed agonists. Blood for Multiplate analysis was collected into manufacturer-supplied hirudin tubes, and 21 consecutive samples were assessed using manufacturer-supplied agonists (ADP, arachidonic acid, TRAP, collagen and ristocetin) at several time points post-sample collection within the recommended test time period. Blood was also collected into EDTA as a reference method for platelet counts, with samples collected into sodium citrate and hirudin used for comparative counts. All platelet agonists showed a diminution of response with time. Depending on the agonist, the reduction caused 5-20% and 22-47% of responses initially in the normal reference range to fall below the reference range at 120 min and 180 min, respectively. Considering any agonist, 35% and 67% of initially "normal" responses became "abnormal" at 120 min and 180 min, respectively. Platelet counts showed generally minimal changes in EDTA blood, but were markedly reduced over time in both citrate and hirudin blood, with up to 40% and 60% reduction, respectively, at 240 min. The presence of platelet clumping (micro-aggregate formation) was also observed in a time-dependent manner, especially for hirudin. In conclusion, considering any platelet agonist, around two-thirds of samples can, within the recommended 0.5-3 hour testing window post-blood collection, yield a reduction in platelet aggregation that may lead to a change in interpretation (i.e., normal to reduced). Thus, the stability of Multiplate testing can more realistically be considered as being between 30 and 120 min of blood collection for samples collected into hirudin.

  11. Non-Aqueous Titration Method for Determining Suppressor Concentration in the MCU Next Generation Solvent (NGS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor-Pashow, Kathryn M. L.; Jones, Daniel H.

    A non-aqueous titration method has been used for quantifying the suppressor concentration in the MCU solvent hold tank (SHT) monthly samples since the Next Generation Solvent (NGS) was implemented in 2013. The titration method measures the concentration of the NGS suppressor (TiDG) as well as the residual tri-n-octylamine (TOA) that is a carryover from the previous solvent. As the TOA concentration has decreased over time, it has become difficult to resolve the TiDG equivalence point as the TOA equivalence point has moved closer. In recent samples, the TiDG equivalence point could not be resolved, and therefore, the TiDG concentration was determined by subtracting the TOA concentration as measured by semi-volatile organic analysis (SVOA) from the total base concentration as measured by titration. In order to improve the titration method so that the TiDG concentration can be measured directly, without the need for the SVOA data, a new method has been developed that involves spiking the sample with additional TOA to further separate the two equivalence points in the titration. This method has been demonstrated on four recent SHT samples, and comparison to results obtained using the SVOA TOA subtraction method shows good agreement. Therefore, it is recommended that the titration procedure be revised to include the TOA spike addition, and that this become the primary method for quantifying TiDG.
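
    The arithmetic behind both approaches is simple bookkeeping; a sketch with made-up concentrations (none of these values come from the report):

    ```python
    # Bookkeeping behind the two approaches; all values are made up for
    # illustration and are not sample data (units arbitrary, e.g., mM).
    total_base = 12.0        # total amine from the non-aqueous titration
    toa_by_svoa = 3.5        # residual TOA measured independently by SVOA

    # Subtraction method (used when the TiDG equivalence point is unresolved):
    print(f"TiDG by subtraction: {total_base - toa_by_svoa:.1f}")

    # Spike method: a known TOA addition separates the equivalence points so
    # TiDG is read directly; native TOA is recovered by removing the spike.
    toa_spike = 10.0
    toa_step_measured = 13.4  # TOA equivalence step observed after spiking
    print(f"native TOA recovered: {toa_step_measured - toa_spike:.1f}")
    ```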

  12. Critical fluid light scattering

    NASA Technical Reports Server (NTRS)

    Gammon, Robert W.

    1988-01-01

    The objective is to measure the decay rates of critical density fluctuations in a simple fluid (xenon) very near its liquid-vapor critical point using laser light scattering and photon correlation spectroscopy. Such experiments were severely limited on Earth by the presence of gravity, which causes large density gradients in the sample when the compressibility diverges approaching the critical point. The goal is to measure fluctuation decay rates at least two decades closer to the critical point than is possible on Earth, with a resolution of 3 µK. This will require loading the sample to 0.1 percent of the critical density and taking data as close as 100 µK to the critical temperature. The minimum mission time of 100 hours will allow a complete range of temperature points to be covered, limited by the thermal response of the sample. Other technical problems have to be addressed, such as multiple scattering and the effect of wetting layers. The experiment entails measurement of the scattering intensity fluctuation decay rate at two angles for each temperature while simultaneously recording the scattering intensities and sample turbidity (from the transmission). The analyzed intensity and turbidity data give the correlation length at each temperature and locate the critical temperature. The fluctuation decay rate data from these measurements will provide a severe test of the generalized hydrodynamic theories of transport coefficients in the critical regions. When compared to equivalent data from binary liquid critical mixtures they will test the universality of critical dynamics.

  13. The Living Dead: Bacterial Community Structure of a Cadaver at the Onset and End of the Bloat Stage of Decomposition

    PubMed Central

    Hyde, Embriette R.; Haarmann, Daniel P.; Lynne, Aaron M.; Bucheli, Sibyl R.; Petrosino, Joseph F.

    2013-01-01

    Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition. PMID:24204941

  14. Methods for Assessment of Species Richness and Occupancy Across Space, Time, Taxonomic Groups, and Ecoregions

    DTIC Science & Technology

    2017-03-26

    ... logistic constraints and associated travel time between points in the central and western Great Basin. The geographic and temporal breadth of our ... surveys (MacKenzie and Royle 2005). In most cases, less time is spent traveling between sites on a given day when the single-day design is implemented ... with the single-day design (110 hr). These estimates did not include return-travel time, which did not limit sampling effort. As a result, we could ...

  15. Mathematical modeling and assessment of microbial migration during the sprouting of alfalfa in trays in a nonuniformly contaminated seed batch using Enterobacter aerogenes as a surrogate for Salmonella Stanley.

    PubMed

    Liu, Bin; Schaffner, Donald W

    2007-11-01

    Raw seed sprouts have been implicated in several food poisoning outbreaks in the past 10 years. The U.S. Food and Drug Administration recommends that sprout growers use interventions (such as testing of spent irrigation water) to control the presence of pathogens in the finished product. During the sprouting process, initially low concentrations of pathogen may increase, and contamination may spread within a batch of sprouting seeds. A model of pathogen growth as a function of time and distance from the contamination spot during the sprouting of alfalfa in trays has been developed with Enterobacter aerogenes. The probability of detecting contamination was assessed by logistic regression at various time points and distances by sampling from sprouts or irrigation water. Our results demonstrate that microbial populations and the possibility of detection were greatly reduced at distances of ≥20 cm from the point of contamination in a seed batch during tray sprouting; however, the probability of detecting microbial contamination at distances less than 10 cm from the point of inoculation was almost 100% at the end of the sprouting process. Our results also show that sampling irrigation water, especially large volumes of water, is highly effective at detecting contamination: by collecting 100 ml of irrigation water for membrane filtration, the probability of detection was increased by three to four times during the first 6 h of seed germination. Our findings have quantified the degree to which a small level of contamination will spread throughout a tray of sprouting alfalfa seeds and subsequently be detected by either sprout or irrigation water sampling.
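
    The logistic-regression step can be sketched generically: model detection (0/1) as a function of distance from the inoculation point and elapsed sprouting time. The data and coefficients below are synthetic, not the study's:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    # Synthetic surrogate data: detection becomes less likely with distance (cm)
    # from the inoculation point and more likely with sprouting time (h).
    n = 400
    dist = rng.uniform(0, 30, n)
    time_h = rng.uniform(0, 48, n)
    logit = 2.0 - 0.25 * dist + 0.08 * time_h
    detected = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    model = LogisticRegression().fit(np.column_stack([dist, time_h]), detected)
    p = model.predict_proba([[5.0, 24.0]])[0, 1]
    print(f"P(detect | 5 cm, 24 h) = {p:.2f}")
    ```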

  16. Transtheoretical Model Constructs for Physical Activity Behavior are Invariant across Time among Ethnically Diverse Adults in Hawaii

    PubMed Central

    Nigg, Claudio R; Motl, Robert W; Horwath, Caroline; Dishman, Rod K

    2012-01-01

    Objectives Physical activity (PA) research applying the Transtheoretical Model (TTM) to examine group differences and/or change over time requires preliminary evidence of factorial validity and invariance. The current study examined the factorial validity and longitudinal invariance of TTM constructs recently revised for PA. Method Participants from an ethnically diverse sample in Hawaii (N=700) completed questionnaires capturing each TTM construct. Results Factorial validity was confirmed for each construct using confirmatory factor analysis with full-information maximum likelihood. Longitudinal invariance was evidenced across a shorter (3-month) and longer (6-month) time period via nested model comparisons. Conclusions The questionnaires for each validated TTM construct are provided, and can now be generalized across similar subgroups and time points. Further validation of the provided measures is suggested in additional populations and across extended time points. PMID:22778669

  17. A Semiparametric Change-Point Regression Model for Longitudinal Observations.

    PubMed

    Xing, Haipeng; Ying, Zhiliang

    2012-12-01

    Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate the covariate effects when experimental environment undergoes abrupt changes or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change-points in regression coefficients. In this connection, we propose a semiparametric change-point regression model, in which the error process (stochastic component) is nonparametric and the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations and magnitudes of change-points are unknown and need to be estimated. We further develop an estimation procedure which combines the recent advance in semiparametric analysis based on counting process argument and multiple change-points inference, and discuss its large sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.
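
    A much-simplified illustration of the change-point idea: when a regression coefficient jumps at an unknown time, profiling the residual sum of squares over candidate split points recovers the change location. (The paper's estimator is far more general, handling multiple change points, nonparametric errors, and subject-specific observation times.)

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, tau_true = 200, 120
    x = rng.standard_normal(n)                           # a single time-varying covariate
    beta = np.where(np.arange(n) < tau_true, 1.0, -0.5)  # coefficient jumps at tau_true
    y = beta * x + 0.5 * rng.standard_normal(n)

    def sse(xx, yy):
        b = (xx @ yy) / (xx @ xx)                        # one-covariate least squares
        return np.sum((yy - b * xx) ** 2)

    # Profile the total residual sum of squares over candidate change points.
    cands = list(range(10, n - 10))
    total = [sse(x[:c], y[:c]) + sse(x[c:], y[c:]) for c in cands]
    print(f"estimated change point: {cands[int(np.argmin(total))]} (true: {tau_true})")
    ```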

  18. Monte Carlo approaches to sampling forested tracts with lines or points

    Treesearch

    Harry T. Valentine; Jeffrey H. Gove; Timothy G. Gregoire

    2001-01-01

    Several line- and point-based sampling methods can be employed to estimate the aggregate dimensions of trees standing on a forested tract or pieces of coarse woody debris lying on the forest floor. Line methods include line intersect sampling, horizontal line sampling, and transect relascope sampling; point methods include variable- and fixed-radius plot sampling, and...

  19. Determination of trace inorganic mercury species in water samples by cloud point extraction and UV-vis spectrophotometry.

    PubMed

    Ulusoy, Halil Ibrahim

    2014-01-01

    A new micelle-mediated extraction method was developed for preconcentration of ultratrace Hg(II) ions prior to spectrophotometric determination. 2-(2'-Thiazolylazo)-p-cresol (TAC) and Ponpe 7.5 were used as the chelating agent and nonionic surfactant, respectively. Hg(II) ions form a hydrophobic complex with TAC in a micelle medium. The main factors affecting cloud point extraction efficiency, such as pH of the medium, concentrations of TAC and Ponpe 7.5, and equilibration temperature and time, were investigated in detail. An overall preconcentration factor of 33.3 was obtained upon preconcentration of a 50 mL sample. The LOD obtained under the optimal conditions was 0.86 µg/L, and the RSD for five replicate measurements of 100 µg/L Hg(II) was 3.12%. The method was successfully applied to the determination of Hg in environmental water samples.
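
    The preconcentration factor is just the ratio of initial sample volume to final surfactant-rich phase volume, so the reported value implies a final phase volume of roughly 1.5 mL (inferred here; the abstract does not state it):

    ```python
    # Preconcentration factor for cloud point extraction: PF = V_sample / V_final.
    v_sample = 50.0          # mL, stated in the abstract
    pf = 33.3                # stated overall preconcentration factor
    print(f"implied surfactant-rich phase volume: {v_sample / pf:.2f} mL")
    ```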

  1. Temporal variation in phenotypic and genotypic traits in two sockeye salmon populations, Tustumena Lake, Alaska

    USGS Publications Warehouse

    Woody, Carol Ann; Olsen, Jeffrey B.; Reynolds, Joel H.; Bentzen, Paul

    2000-01-01

    Sockeye salmon Oncorhynchus nerka in two tributary streams (about 20 km apart) of the same lake were compared for temporal variation in phenotypic (length, depth adjusted for length) and genotypic (six microsatellite loci) traits. Peak run time (July 16 versus August 11) and run duration (43 versus 26 d) differed between streams. Populations were sampled twice, including an overlapping point in time. Divergence at microsatellite loci followed a temporal cline: population sample groups collected at the same time were not different (F_ST = 0), whereas those most separated in time were different (F_ST = 0.011, P = 0.001). Although contemporaneous sample groups did not differ significantly in microsatellite genotypes (F_ST = 0), phenotypic traits did differ significantly (MANOVA, P < 0.001). Fish from the larger stream were larger; fish from the smaller stream were smaller, suggesting differential fitness related to size. Results indicate run time differences among and within sockeye salmon populations may strongly influence levels of gene flow.

  2. From samples to populations in retinex models

    NASA Astrophysics Data System (ADS)

    Gianini, Gabriele

    2017-05-01

    Some spatial color algorithms, such as Brownian Milano retinex (MI-retinex) and random spray retinex (RSR), are based on sampling. In Brownian MI-retinex, memoryless random walks (MRWs) explore the neighborhood of a pixel and are then used to compute its output. Considering the relative redundancy and inefficiency of MRW exploration, the algorithm RSR replaced the walks by samples of points (the sprays). Recent works point to the fact that a mapping from the sampling formulation to the probabilistic formulation of the corresponding sampling process can offer useful insights into the models, at the same time featuring intrinsically noise-free outputs. The paper continues the development of this concept and shows that the population-based versions of RSR and Brownian MI-retinex can be used to obtain analytical expressions for the outputs of some test images. The comparison of the two analytic expressions from RSR and from Brownian MI-retinex demonstrates not only that the two outputs are, in general, different but also that they depend in a qualitatively different way upon the features of the image.
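
    The sampling formulation referred to here is easy to state for RSR: a pixel's output is its intensity divided by the maximum intensity found in a random "spray" of points around it, averaged over several sprays. A single-channel sketch (the spray geometry is simplified to one shared offset pattern per spray, and this is not the authors' code):

    ```python
    import numpy as np

    def rsr_channel(img, n_sprays=10, n_points=30, radius=60, seed=0):
        """Random spray retinex, one channel: out(p) = I(p) / max over a spray
        around p, averaged over sprays. Simplified: one shared offset pattern
        per spray instead of fully independent per-pixel sprays."""
        rng = np.random.default_rng(seed)
        h, w = img.shape
        ys, xs = np.mgrid[0:h, 0:w]
        out = np.zeros((h, w))
        for _ in range(n_sprays):
            ang = rng.uniform(0.0, 2.0 * np.pi, n_points)
            rad = radius * np.sqrt(rng.uniform(0.0, 1.0, n_points))  # uniform in disc
            best = np.zeros((h, w))
            for a, r in zip(ang, rad):
                sy = np.clip(ys + int(r * np.sin(a)), 0, h - 1)
                sx = np.clip(xs + int(r * np.cos(a)), 0, w - 1)
                best = np.maximum(best, img[sy, sx])
            out += img / np.maximum(best, img)   # the pixel itself is in the spray
        return out / n_sprays

    img = np.random.default_rng(1).uniform(0.1, 1.0, (64, 64))
    res = rsr_channel(img)
    print(f"output range: {res.min():.3f} .. {res.max():.3f}")
    ```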

  3. TH-CD-209-10: Scanning Proton Arc Therapy (SPArc) - The First Robust and Delivery-Efficient Spot Scanning Proton Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, X; Li, X; Zhang, J

    Purpose: To develop a delivery-efficient proton spot-scanning arc therapy technique with robust plan quality. Methods: We developed a Scanning Proton Arc (SPArc) optimization algorithm integrated with (1) control point re-sampling by splitting control points into adjacent sub-control points; (2) energy layer re-distribution by assigning the original energy layers to the new sub-control points; (3) energy layer filtration by deleting low-MU-weighting energy layers; and (4) energy layer re-sampling by sampling additional layers to ensure the optimal solution. A bilateral head and neck oropharynx case and a non-mobile lung target case were tested. Plan quality and total estimated delivery time were compared to the original robust optimized multi-field step-and-shoot arc plan without SPArc optimization (Arc_multi-field) and standard robust optimized Intensity Modulated Proton Therapy (IMPT) plans. Dose-Volume Histograms (DVH) of target and Organs-at-Risk (OARs) were analyzed along with all worst-case scenarios. Total delivery time was calculated based on the assumption of a 360 degree gantry room with 1 RPM rotation speed, 2 ms spot switching time, beam current 1 nA, minimum spot weighting 0.01 MU, and energy-layer-switching-time (ELST) from 0.5 to 4 s. Results: Compared to IMPT, SPArc delivered less integral dose (−14% lung and −8% oropharynx). For the lung case, SPArc reduced skin max dose by 60%, rib max dose by 35% and lung mean dose by 15%. The Conformity Index improved from 7.6 (IMPT) to 4.0 (SPArc). Compared to Arc_multi-field, SPArc reduced the number of energy layers by 61% (276 layers in lung) and 80% (1008 layers in oropharynx) while keeping the same robust plan quality. With ELST from 0.5 s to 4 s, it reduced 55%–60% of the Arc_multi-field delivery time for the lung case and 56%–67% for the oropharynx case. Conclusion: SPArc is the first robust and delivery-efficient proton spot-scanning arc therapy technique which could be implemented in routine clinical practice. For a modern proton machine with ELST close to 0.5 s, SPArc would be a popular treatment option for both single- and multi-room centers.

  4. An Indoor Positioning Technique Based on a Feed-Forward Artificial Neural Network Using Levenberg-Marquardt Learning Method

    NASA Astrophysics Data System (ADS)

    Pahlavani, P.; Gholami, A.; Azimi, S.

    2017-09-01

    This paper presents an indoor positioning technique based on a multi-layer feed-forward (MLFF) artificial neural network (ANN). Most indoor received signal strength (RSS)-based WLAN positioning systems use the fingerprinting technique, which can be divided into two phases: the offline (calibration) phase and the online (estimation) phase. In this paper, RSSs were collected for all reference points in four directions and two periods of time (morning and evening). Hence, RSS readings were sampled at a regular time interval and specific orientation at each reference point. The proposed ANN-based model used the Levenberg-Marquardt algorithm for learning and fitting the network to the training data. The RSS readings at all reference points, together with the known positions of these reference points, were prepared for the training phase of the proposed MLFF neural network. Eventually, the average positioning error for this network using 30% check and validation data was computed to be approximately 2.20 m.
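
    Levenberg-Marquardt training minimizes the sum of squared residuals between network outputs and known reference positions; one way to sketch it is to flatten a one-hidden-layer network's weights and hand the residual function to an LM least-squares solver. Everything below (network size, synthetic RSS data) is an illustrative assumption, not the paper's configuration:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(4)
    n_ap, hidden = 6, 8                        # 6 access points, 8 hidden units (assumed)
    X = rng.uniform(-90, -30, (300, n_ap))     # synthetic RSS fingerprints (dBm)
    W_true = rng.standard_normal((n_ap, 2))
    Y = np.tanh(X / 50.0) @ W_true + 0.05 * rng.standard_normal((300, 2))  # positions

    def unpack(p):
        i = n_ap * hidden
        W1 = p[:i].reshape(n_ap, hidden)
        b1 = p[i:i + hidden]
        W2 = p[i + hidden:i + hidden + hidden * 2].reshape(hidden, 2)
        return W1, b1, W2, p[-2:]

    def residuals(p):
        W1, b1, W2, b2 = unpack(p)
        pred = np.tanh(X @ W1 + b1) @ W2 + b2  # one hidden layer, tanh activation
        return (pred - Y).ravel()

    n_par = n_ap * hidden + hidden + hidden * 2 + 2
    fit = least_squares(residuals, 0.1 * rng.standard_normal(n_par), method="lm")
    print(f"training RMSE: {np.sqrt(np.mean(fit.fun ** 2)):.3f} (position units)")
    ```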

  5. Development and Characterization of a Laser-Induced Acoustic Desorption Source.

    PubMed

    Huang, Zhipeng; Ossenbrüggen, Tim; Rubinsky, Igor; Schust, Matthias; Horke, Daniel A; Küpper, Jochen

    2018-03-20

    A laser-induced acoustic desorption source, developed for use at central facilities, such as free-electron lasers, is presented. It features prolonged measurement times and a fixed interaction point. A novel sample deposition method using aerosol spraying provides a uniform sample coverage and hence stable signal intensity. Utilizing strong-field ionization as a universal detection scheme, the produced molecular plume is characterized in terms of number density, spatial extent, fragmentation, temporal distribution, translational velocity, and translational temperature. The effect of desorption laser intensity on these plume properties is evaluated. While the translational velocity is invariant for different desorption laser intensities, pointing to a nonthermal desorption mechanism, the translational temperature increases significantly and higher fragmentation is observed with increased desorption laser fluence.

  6. Use of microwaves to improve nutritional value of soybeans for future space inhabitants

    NASA Technical Reports Server (NTRS)

    Singh, G.

    1983-01-01

    Whole soybeans from four different varieties at different moisture contents were microwaved for varying times to determine the conditions for maximum destruction of trypsin inhibitor and lipoxygenase activities, and optimal growth of chicks. Microwaving 150 g samples of soybeans (at 14 to 28% moisture) for 1.5 min was found optimal for reduction of trypsin inhibitor and lipoxygenase activities. Microwaving 1 kg samples of soybeans for 9 minutes destroyed 82% of the trypsin inhibitor activity and gave optimal chick growth. It should be pointed out that the microwaving time would vary according to the weight of the sample and the power of the microwave oven. The microwave oven used in the above experiments was rated at 650 watts and 2450 MHz.

  7. Colorimetric assay for urinary track infection disease diagnostic on flexible substrate

    NASA Astrophysics Data System (ADS)

    Safavieh, Mohammadali; Ahmed, Minhaz Uddin; Zourob, Mohammed

    2012-10-01

    We present a cassette as a novel point-of-care diagnostic device. The device is easy to use, low-cost to prepare, high-throughput, and can analyze several samples at the same time. We first demonstrate the preparation method of the device. Then, the fabrication of the flexible substrate is presented. The device has been used for the detection of real samples of E. coli bacteria, followed by colorimetric detection. We have shown that we could detect 30 cfu/ml of bacteria and 100 fg/μl of Staphylococcus aureus DNA in 1 hr using the LAMP amplification technique. This device will be helpful in hospitals and doctors' offices for the analysis of several patients' samples at the same time.

  8. Multiscale study on stochastic reconstructions of shale samples

    NASA Astrophysics Data System (ADS)

    Lili, J.; Lin, M.; Jiang, W. B.

    2016-12-01

    Shales are known to have multiscale pore systems, composed of macroscale fractures, micropores, and nanoscale pores within the gas- or oil-producing organic material. Shales are also fissile and laminated, and their heterogeneity in the horizontal direction is quite different from that in the vertical direction. Stochastic reconstructions are extremely useful in situations where three-dimensional information is costly and time consuming to obtain. Thus the purpose of our paper is to stochastically reconstruct equiprobable 3D models containing information from several scales. In this paper, macroscale and microscale images of shale structure in the Lower Silurian Longmaxi are obtained by X-ray microtomography and nanoscale images are obtained by scanning electron microscopy. Each image is representative for all given scales and phases. In particular, the macroscale is four times coarser than the microscale, which in turn is four times lower in resolution than the nanoscale image. Secondly, the cross correlation-based simulation method (CCSIM) and the three-step sampling method are combined together to generate stochastic reconstructions for each scale. It is important to point out that the boundary points of pore and matrix are selected based on a multiple-point connectivity function in the sampling process, and thus the characteristics of the reconstructed image can be controlled indirectly. Thirdly, all images with the same resolution are developed through downscaling and upscaling by interpolation, and then we merge the multiscale categorical spatial data into a single 3D image with predefined resolution (the microscale image). Thirty realizations are generated using the given images and the proposed method. The result reveals that the proposed method is capable of preserving the multiscale pore structure, both vertically and horizontally, which is necessary for accurate permeability prediction. The variogram curves and pore-size distributions for both the original 3D sample and the generated 3D realizations are compared. The result indicates that the agreement between the original 3D sample and the generated stochastic realizations is excellent. This work is supported by the "973" Program (2014CB239004), the Key Instrument Developing Project of the CAS (ZDYZ2012-1-08-02) and the National Natural Science Foundation of China (Grant No. 41574129).

  9. Cumulative Effect of Racial Discrimination on the Mental Health of Ethnic Minorities in the United Kingdom.

    PubMed

    Wallace, Stephanie; Nazroo, James; Bécares, Laia

    2016-07-01

    To examine the longitudinal association between cumulative exposure to racial discrimination and changes in the mental health of ethnic minority people. We used data from 4 waves (2009-2013) of the UK Household Longitudinal Study, a longitudinal household panel survey of approximately 40 000 households, including an ethnic minority boost sample of approximately 4000 households. Ethnic minority people who reported exposure to racial discrimination at 1 time point had 12-Item Short Form Health Survey (SF-12) mental component scores 1.93 (95% confidence interval [CI] = -3.31, -0.56) points lower than did those who reported no exposure to racial discrimination, whereas those who had been exposed to 2 or more domains of racial discrimination, at 2 different time points, had SF-12 mental component scores 8.26 (95% CI = -13.33, -3.18) points lower than did those who reported no experiences of racial discrimination. Controlling for racial discrimination and other socioeconomic factors reduced ethnic inequalities in mental health. Cumulative exposure to racial discrimination has incremental negative long-term effects on the mental health of ethnic minority people in the United Kingdom. Studies that examine exposure to racial discrimination at 1 point in time may underestimate the contribution of racism to poor health.

  10. Which skills and factors better predict winning and losing in high-level men's volleyball?

    PubMed

    Peña, Javier; Rodríguez-Guerra, Jorge; Buscà, Bernat; Serra, Núria

    2013-09-01

    The aim of this study was to determine which skills and factors better predicted the outcomes of regular season volleyball matches in the Spanish "Superliga" and were significant for obtaining positive results in the game. The study sample consisted of 125 matches played during the 2010-11 Spanish men's first division volleyball championship. Matches were played by 12 teams composed of 148 players from 17 different nations from October 2010 to March 2011. The variables analyzed were the result of the game, team category, home/away court factors, points obtained in the break point phase, number of service errors, number of service aces, number of reception errors, percentage of positive receptions, percentage of perfect receptions, reception efficiency, number of attack errors, number of blocked attacks, attack points, percentage of attack points, attack efficiency, and number of blocks performed by both teams participating in the match. The results showed that the variables of team category, points obtained in the break point phase, number of reception errors, and number of blocked attacks by the opponent were significant predictors of winning or losing the matches. Odds ratios indicated that the odds of winning a volleyball match were 6.7 times greater for teams belonging to higher rankings and that every additional point in Complex II increased the odds of winning a match by 1.5 times. Every reception error and every blocked attack multiplied the odds of winning by 0.6 and 0.7, respectively.
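
    The reported odds ratios are what a binary logistic regression of match outcome on the candidate predictors produces. The sketch below runs that analysis on synthetic match data; the predictor set, coefficients, and counts are invented for illustration and are not the study's data.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 125  # hypothetical number of matches
    X = np.column_stack([
        rng.integers(0, 2, n),   # higher-ranked team (0/1)
        rng.poisson(20, n),      # points won in the break point phase (Complex II)
        rng.poisson(8, n),       # reception errors
    ]).astype(float)
    logits = 1.5 * X[:, 0] + 0.2 * (X[:, 1] - 20) - 0.3 * (X[:, 2] - 8)
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)  # win = 1

    model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    print(np.exp(model.params))  # exponentiated coefficients = odds ratios
    ```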

  11. Ice Wedge Polygon Bromide Tracer Experiment in Subsurface Flow, Barrow, Alaska, 2015-2016

    DOE Data Explorer

    Nathan Wales

    2018-02-15

    Time series of bromide tracer concentrations at several points within a low-centered polygon and a high-centered polygon. Concentration values were obtained from the analysis of water samples via ion chromatography with an accuracy of 0.01 mg/l.

  12. Interpolation Approach To Computer-Generated Holograms

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko

    1983-10-01

    A computer-generated hologram (CGH) for reconstructing independent NxN resolution points would actually require a hologram made up of NxN sampling cells. For dependent sampling points in Fourier-transform CGHs, an interpolation technique applied to the reconstructed image points can reduce the memory required for computation. We have made a mosaic hologram consisting of K x K subholograms, each with N x N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK x NK resolution points. The main advantage of the present algorithm is that a sufficiently large hologram of NK x NK sample points is synthesized from K x K subholograms, which are successively calculated from the data of N x N sample points and also successively plotted.

  13. System design of the annular suspension and pointing system /ASPS/

    NASA Technical Reports Server (NTRS)

    Cunningham, D. C.; Gismondi, T. P.; Wilson, G. W.

    1978-01-01

    This paper presents the control system design for the Annular Suspension and Pointing System. Actuator sizing and configuration of the system are explained, and the control laws developed for linearizing and compensating the magnetic bearings, roll induction motor and gimbal torquers are given. Decoupling, feedforward and error compensation for the vernier and gimbal controllers are developed. The algorithm for computing the strapdown attitude reference is derived, and the allowable sampling rates, time delays and quantization of control signals are specified.

  14. A Longitudinal Study of Household Water, Sanitation, and Hygiene Characteristics and Environmental Enteropathy Markers in Children Less than 24 Months in Iquitos, Peru

    PubMed Central

    Exum, Natalie G.; Lee, Gwenyth O.; Olórtegui, Maribel Paredes; Yori, Pablo Peñataro; Salas, Mery Siguas; Trigoso, Dixner Rengifo; Colston, Josh M.; Schwab, Kellogg J.; McCormick, Benjamin J. J.; Kosek, Margaret N.

    2018-01-01

    Poor child gut health, resulting from a lack of access to an improved toilet or clean water, has been proposed as a biological mechanism underlying child stunting and oral vaccine failure. Characteristics related to household sanitation, water use, and hygiene were measured among a birth cohort of 270 children from peri-urban Iquitos, Peru. These children had monthly stool samples, urine samples at four time points, and serum samples at two to four time points analyzed for biomarkers related to intestinal inflammation and permeability. We found that less storage of fecal matter near the household along with a reliable water connection were associated with reduced inflammation, most prominently the fecal biomarker myeloperoxidase (MPO) (no sanitation facility compared with those with an onsite toilet had −0.43 log MPO, 95% confidence interval [CI]: −0.74, −0.13; and households with an intermittent connection versus those with a continuous supply had +0.36 log MPO, 95% CI: 0.08, 0.63). These results provide preliminary evidence for the hypothesis that children less than 24 months of age living in unsanitary conditions will have elevated gut inflammation. PMID:29436350

  15. The Use of Geostatistics in the Study of Floral Phenology of Vulpia geniculata (L.) Link

    PubMed Central

    León Ruiz, Eduardo J.; García Mozo, Herminia; Domínguez Vilches, Eugenio; Galán, Carmen

    2012-01-01

    Traditionally phenology studies have been focused on changes through time, but there exist many instances in ecological research where it is necessary to interpolate among spatially stratified samples. The combined use of Geographical Information Systems (GIS) and Geostatistics can be an essential tool for spatial analysis in phenological studies. Geostatistics are a family of statistics that describe correlations through space/time and they can be used for both quantifying spatial correlation and interpolating unsampled points. In the present work, estimations based upon Geostatistics and GIS mapping have enabled the construction of spatial models that reflect phenological evolution of Vulpia geniculata (L.) Link throughout the study area during sampling season. Ten sampling points, scattered throughout the city and low mountains in the “Sierra de Córdoba” were chosen to carry out the weekly phenological monitoring during flowering season. The phenological data were interpolated by applying the traditional geostatistical method of Kriging, which was used to elaborate weekly estimations of V. geniculata phenology in unsampled areas. Finally, the application of Geostatistics and GIS to create phenological maps could be an essential complement in pollen aerobiological studies, given the increased interest in obtaining automatic aerobiological forecasting maps. PMID:22629169

  16. The use of geostatistics in the study of floral phenology of Vulpia geniculata (L.) link.

    PubMed

    León Ruiz, Eduardo J; García Mozo, Herminia; Domínguez Vilches, Eugenio; Galán, Carmen

    2012-01-01

    Traditionally phenology studies have been focused on changes through time, but there exist many instances in ecological research where it is necessary to interpolate among spatially stratified samples. The combined use of Geographical Information Systems (GIS) and Geostatistics can be an essential tool for spatial analysis in phenological studies. Geostatistics are a family of statistics that describe correlations through space/time and they can be used for both quantifying spatial correlation and interpolating unsampled points. In the present work, estimations based upon Geostatistics and GIS mapping have enabled the construction of spatial models that reflect phenological evolution of Vulpia geniculata (L.) Link throughout the study area during sampling season. Ten sampling points, scattered throughout the city and low mountains in the "Sierra de Córdoba" were chosen to carry out the weekly phenological monitoring during flowering season. The phenological data were interpolated by applying the traditional geostatistical method of Kriging, which was used to elaborate weekly estimations of V. geniculata phenology in unsampled areas. Finally, the application of Geostatistics and GIS to create phenological maps could be an essential complement in pollen aerobiological studies, given the increased interest in obtaining automatic aerobiological forecasting maps.
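
    As a concrete illustration of the interpolation step, here is a minimal ordinary-kriging sketch with an assumed exponential variogram model; the site coordinates and phenological index values are synthetic stand-ins, and a real analysis would first fit the variogram to the ten monitoring points.

    ```python
    import numpy as np

    def gamma(h, sill=1.0, rang=30.0, nugget=0.0):
        # Assumed exponential variogram model
        return nugget + sill * (1.0 - np.exp(-h / rang))

    def ordinary_kriging(xy, z, xy0):
        """Ordinary kriging estimate at xy0 from samples (xy, z)."""
        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = gamma(d)
        A[n, n] = 0.0                      # Lagrange-multiplier row/column
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy - xy0, axis=1))
        w = np.linalg.solve(A, b)          # n weights plus the multiplier
        return w[:n] @ z

    rng = np.random.default_rng(2)
    sites = rng.random((10, 2)) * 20.0     # ten hypothetical sampling points (km)
    index = rng.random(10)                 # e.g. fraction of plants in flower
    print(ordinary_kriging(sites, index, np.array([10.0, 10.0])))
    ```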

  17. A Longitudinal Study of Household Water, Sanitation, and Hygiene Characteristics and Environmental Enteropathy Markers in Children Less than 24 Months in Iquitos, Peru.

    PubMed

    Exum, Natalie G; Lee, Gwenyth O; Olórtegui, Maribel Paredes; Yori, Pablo Peñataro; Salas, Mery Siguas; Trigoso, Dixner Rengifo; Colston, Josh M; Schwab, Kellogg J; McCormick, Benjamin J J; Kosek, Margaret N

    2018-04-01

    Poor child gut health, resulting from a lack of access to an improved toilet or clean water, has been proposed as a biological mechanism underlying child stunting and oral vaccine failure. Characteristics related to household sanitation, water use, and hygiene were measured among a birth cohort of 270 children from peri-urban Iquitos, Peru. These children had monthly stool samples, urine samples at four time points, and serum samples at two to four time points analyzed for biomarkers related to intestinal inflammation and permeability. We found that less storage of fecal matter near the household along with a reliable water connection were associated with reduced inflammation, most prominently the fecal biomarker myeloperoxidase (MPO) (no sanitation facility compared with those with an onsite toilet had -0.43 log MPO, 95% confidence interval [CI]: -0.74, -0.13; and households with an intermittent connection versus those with a continuous supply had +0.36 log MPO, 95% CI: 0.08, 0.63). These results provide preliminary evidence for the hypothesis that children less than 24 months of age living in unsanitary conditions will have elevated gut inflammation.

  18. Evaluation of left ventricular scar identification from contrast enhanced magnetic resonance imaging for guidance of ventricular catheter ablation therapy

    NASA Astrophysics Data System (ADS)

    Rettmann, M. E.; Lehmann, H. I.; Johnson, S. B.; Packer, D. L.

    2016-03-01

    Patients with ventricular arrhythmias typically exhibit myocardial scarring, which is believed to be an important anatomic substrate for reentrant circuits, thereby making these regions a key target in catheter ablation therapy. In ablation therapy, a catheter is guided into the left ventricle and radiofrequency energy is delivered into the tissue to interrupt arrhythmic electrical pathways. Low bipolar voltage regions are typically localized during the procedure through point-by-point construction of an electroanatomic map by sampling the endocardial surface with the ablation catheter and are used as a surrogate for myocardial scar. This process is time consuming, requires significant skill, and has the potential to miss low voltage sites. This has led to efforts to quantify myocardial scar preoperatively using delayed, contrast-enhanced MRI. In this paper, we evaluate the utility of left ventricular scar identification from delayed contrast enhanced magnetic resonance imaging for guidance of catheter ablation of ventricular arrhythmias. Myocardial infarcts were created in three canines followed by a delayed, contrast enhanced MRI scan and electroanatomic mapping. The left ventricle and myocardial scar are segmented from preoperative MRI images, and sampled points from the procedural electroanatomical map are registered to the segmented endocardial surface. Sampled points with low bipolar voltage visually align with the segmented scar regions. This work demonstrates the potential utility of using preoperative delayed, enhanced MRI to identify myocardial scarring for guidance of ventricular catheter ablation therapy.

  19. Critical fluid thermal equilibration experiment (19-IML-1)

    NASA Technical Reports Server (NTRS)

    Wilkinson, R. Allen

    1992-01-01

    Gravity sometimes blocks all experimental techniques for making a desired measurement. Any pure fluid possesses a liquid-vapor critical point, defined by a temperature, pressure, and density state in thermodynamics. The critical issue that this experiment attempts to understand is the time it takes for a sample to reach temperature and density equilibrium as the critical point is approached: is it infinite due to mass and thermal diffusion, or do pressure waves speed up energy transport while mass is still under diffusion control? The objectives are to observe: (1) large phase domain homogenization without and with stirring; (2) time evolution of heat and mass after a temperature step is applied to a one phase equilibrium sample; (3) phase evolution and configuration upon going two phase from a one phase equilibrium state; (4) effects of stirring on a low g two phase configuration; (5) two phase to one phase healing dynamics starting from a two phase low g configuration; and (6) effects of shuttle acceleration events on spatially and temporally varying compressible critical fluid dynamics.

  20. Water Potential in Excised Leaf Tissue

    PubMed Central

    Nelsen, Charles E.; Safir, Gene R.; Hanson, Andrew D.

    1978-01-01

    Leaf water potential (Ψleaf) determinations were made on excised leaf samples using a commercial dew point hygrometer (Wescor Inc., Logan, Utah) and a thermocouple psychrometer operated in the isopiestic mode. With soybean leaves (Glycine max L.), there was good agreement between instruments; equilibration times were 2 to 3 hours. With cereals (Triticum aestivum L. and Hordeum vulgare L.), agreement between instruments was poor for moderately wilted leaves when 7-mm-diameter punches were used in the hygrometer and 20-mm slices were used in the psychrometer, because the Ψleaf values from the dew point hygrometer were too high. Agreement was improved by replacing the 7-mm punch samples in the hygrometer by 13-mm slices, which had a lower cut edge to volume ratio. Equilibration times for cereals were normally 6 to 8 hours. Spuriously high Ψleaf values obtained with 7-mm leaf punches may be associated with the ion release and reabsorption that occur upon tissue excision; such errors evidently depend both on the species and on tissue water status. PMID:16660227

  1. Quadtree of TIN: a new algorithm of dynamic LOD

    NASA Astrophysics Data System (ADS)

    Zhang, Junfeng; Fei, Lifan; Chen, Zhen

    2009-10-01

    Currently, real-time visualization of large-scale digital elevation models mainly employs either regular GRID structures based on quadtrees or triangle-simplification methods based on the triangulated irregular network (TIN). Compared with GRID, TIN is a refined means of representing the terrain surface in computer graphics, but its data structure is complex and it is difficult to realize view-dependent level-of-detail (LOD) representation quickly. GRID is a simple way to realize terrain LOD, but it produces a higher triangle count. A new algorithm that takes full advantage of the merits of both methods is presented in this paper. The algorithm combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, and it controls detail through the viewpoint distance and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieves dynamic, visual multi-resolution performance for large-scale terrain in real time.
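
    A minimal sketch of the view-dependent refinement idea: a quadtree node is split while its geometric error divided by the viewpoint distance exceeds a tolerance. The elevation function, error measure, and thresholds below are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def terrain(x, y):
        # Stand-in elevation function for the sampled terrain
        return 50.0 * np.sin(0.05 * x) * np.cos(0.05 * y)

    def geometric_error(x0, y0, size):
        # Deviation of the patch centre from the mean of its corners
        corners = [terrain(x0, y0), terrain(x0 + size, y0),
                   terrain(x0, y0 + size), terrain(x0 + size, y0 + size)]
        return abs(terrain(x0 + size / 2, y0 + size / 2) - sum(corners) / 4.0)

    def refine(x0, y0, size, viewpoint, tau=0.5, min_size=4.0, leaves=None):
        """View-dependent quadtree refinement over a square domain."""
        if leaves is None:
            leaves = []
        cx, cy = x0 + size / 2, y0 + size / 2
        dist = np.hypot(cx - viewpoint[0], cy - viewpoint[1]) + 1e-9
        if size > min_size and geometric_error(x0, y0, size) / dist > tau:
            half = size / 2
            for dx in (0.0, half):
                for dy in (0.0, half):
                    refine(x0 + dx, y0 + dy, half, viewpoint, tau, min_size, leaves)
        else:
            leaves.append((x0, y0, size))  # leaf patch -> triangulate locally
        return leaves

    print(len(refine(0.0, 0.0, 256.0, viewpoint=(32.0, 32.0))))
    ```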

  2. Paper-based sample-to-answer molecular diagnostic platform for point-of-care diagnostics.

    PubMed

    Choi, Jane Ru; Tang, Ruihua; Wang, ShuQi; Wan Abas, Wan Abu Bakar; Pingguan-Murphy, Belinda; Xu, Feng

    2015-12-15

    Nucleic acid testing (NAT), as a molecular diagnostic technique, including nucleic acid extraction, amplification and detection, plays a fundamental role in medical diagnosis for timely medical treatment. However, current NAT technologies require relatively high-end instrumentation, skilled personnel, and are time-consuming. These drawbacks mean conventional NAT becomes impractical in many resource-limited disease-endemic settings, leading to an urgent need to develop a fast and portable NAT diagnostic tool. Paper-based devices are typically robust, cost-effective and user-friendly, holding a great potential for NAT at the point of care. In view of the escalating demand for the low cost diagnostic devices, we highlight the beneficial use of paper as a platform for NAT, the current state of its development, and the existing challenges preventing its widespread use. We suggest a strategy involving integrating all three steps of NAT into one single paper-based sample-to-answer diagnostic device for rapid medical diagnostics in the near future. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Stability and Change in Interests: A Longitudinal Study of Adolescents from Grades 8 through 12

    ERIC Educational Resources Information Center

    Tracey, Terence J. G.; Robbins, Steven B.; Hofsess, Christy D.

    2005-01-01

    The pattern of RIASEC interests and academic skills was assessed longitudinally from a large-scale national database at three time points: eighth grade, 10th grade, and 12th grade. Validation and cross-validation samples of 1000 males and 1000 females in each set were used to test the pattern of these scores over time relative to mean changes,…

  4. Oversampling of digitized images. [effects on interpolation in signal processing

    NASA Technical Reports Server (NTRS)

    Fischel, D.

    1976-01-01

    Oversampling is defined as sampling with a device whose characteristic width is greater than the interval between samples. This paper shows why oversampling should be avoided and discusses the limitations in data processing if circumstances dictate that oversampling cannot be circumvented. Principally, oversampling should not be used to provide interpolating data points. Rather, the time spent oversampling should be used to obtain more signal with less relative error, and the Sampling Theorem should be employed to provide any desired interpolated values. The concepts are applicable to single-element and multielement detectors.
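
    The recommendation to interpolate via the Sampling Theorem rather than by oversampling corresponds to Whittaker-Shannon (sinc) reconstruction. A minimal sketch, assuming a band-limited signal sampled above the Nyquist rate (truncation effects at the ends of the finite record are ignored):

    ```python
    import numpy as np

    def sinc_interp(samples, T, t):
        """Reconstruct x(t) = sum_n x[n] * sinc((t - n*T) / T) from samples x[n]."""
        n = np.arange(len(samples))
        return np.sum(samples[None, :] * np.sinc((t[:, None] - n * T) / T), axis=1)

    T = 0.1                             # sampling interval, above Nyquist for 3 Hz
    tn = np.arange(0, 1, T)
    x = np.sin(2 * np.pi * 3 * tn)      # 3 Hz tone sampled at 10 Hz
    t_fine = np.linspace(0.1, 0.8, 8)   # desired interpolated time points
    print(np.round(sinc_interp(x, T, t_fine), 3))
    print(np.round(np.sin(2 * np.pi * 3 * t_fine), 3))  # ground truth
    ```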

  5. Multichannel infrared fiber optic radiometer for controlled microwave heating

    NASA Astrophysics Data System (ADS)

    Drizlikh, S.; Zur, Albert; Katzir, Abraham

    1990-07-01

    An infrared fiberoptic multichannel radiometer was used for monitoring and controlling the temperature of samples in a microwave heating system. The temperature of water samples was maintained at about 40 °C, with a standard deviation of +/- 0.2 °C and a maximum deviation of +/- 0.5 °C. The temperature was monitored at the same time at several points on the surface and inside the sample. This novel controlled system is reliable and precise. Such a system would be very useful for medical applications such as hypothermia and hyperthermia.

  6. Optical EVPA rotations in blazars: testing a stochastic variability model with RoboPol data

    NASA Astrophysics Data System (ADS)

    Kiehlmann, S.; Blinov, D.; Pearson, T. J.; Liodakis, I.

    2017-12-01

    We identify rotations of the polarization angle in a sample of blazars observed for three seasons with the RoboPol instrument. A simplistic stochastic variability model is tested against this sample of rotation events. The model is capable of producing samples of rotations with parameters similar to the observed ones, but fails to reproduce the polarization fraction at the same time. Even though we can neither accept nor conclusively reject the model, we point out various aspects of the observations that are fully consistent with a random walk process.

  7. Wide-field synovial fluid imaging using polarized lens-free on-chip microscopy for point-of-care diagnostics of gout (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Lee, Seung Yoon; Zhang, Yun; Furst, Daniel; Fitzgerald, John; Ozcan, Aydogan

    2016-03-01

    Gout and pseudogout are forms of crystal arthropathy caused by monosodium urate (MSU) and calcium pyrophosphate dihydrate (CPPD) crystals in the joint, respectively, that can result in painful joints. Detecting the uniquely shaped, birefringent MSU/CPPD crystals in a synovial fluid sample using a compensated polarizing microscope has been the gold standard for diagnosis since the 1960's. However, this can be time-consuming and inaccurate, especially if there are only a few crystals in the fluid. The high cost and bulkiness of conventional microscopes can also be limiting for point-of-care diagnosis. Lens-free on-chip microscopy based on digital holography routinely achieves high-throughput and high-resolution imaging in a cost-effective and field-portable design. Here we demonstrate, for the first time, polarized lens-free on-chip imaging of MSU and CPPD crystals over a wide field-of-view (FOV ~ 20.5 mm2, i.e., ~20-fold larger than a typical 20X objective-lens FOV) for point-of-care diagnostics of gout and pseudogout. Circularly polarized partially-coherent light is used to illuminate the synovial fluid sample on a glass slide, after which a quarter-wave-plate and an angle-mismatched linear polarizer are used to analyze the transmitted light. Two lens-free holograms of the MSU/CPPD sample are taken, with the sample rotated by 90°, to rule out any non-birefringent objects within the specimen. A phase-recovery algorithm is also used to improve the reconstruction quality, and digital pseudo-coloring is utilized to match the color and contrast of the lens-free image to that of a gold-standard microscope image, both to ease examination by a rheumatologist or a laboratory technician and to facilitate computerized analysis.

  8. Recursive algorithms for phylogenetic tree counting.

    PubMed

    Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J

    2013-10-28

    In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree, or data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm that is polynomial in the number of sampled individuals for counting resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
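
    For the easy case noted in the abstract, where all samples are contemporaneous and there are no constraints, the number of fully ranked binary trees follows a simple backward-in-time recursion: with k extant lineages, a coalescence joins one of C(k, 2) pairs, so R(n) = C(n, 2) * R(n - 1) with R(1) = 1. The sketch below covers only this unconstrained case, not the paper's constraint-tree algorithms.

    ```python
    from math import comb

    def ranked_tree_count(n):
        """Number of fully ranked binary trees on n contemporaneous samples."""
        count = 1
        for k in range(2, n + 1):
            count *= comb(k, 2)  # choose which pair of the k lineages coalesces
        return count

    for n in (3, 5, 10):
        print(n, ranked_tree_count(n))  # 3, 180, ...
    ```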

  9. Information content of household-stratified epidemics.

    PubMed

    Kinyanjui, T M; Pellis, L; House, T

    2016-09-01

    Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final size data has been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
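
    The information measure used here, the Shannon entropy of the posterior, can be estimated from Monte Carlo draws. A minimal histogram-based sketch on two synthetic posteriors (this is a differential entropy, so values can be negative; lower entropy means a more informative design):

    ```python
    import numpy as np

    def posterior_entropy(draws, bins=30):
        """Histogram estimate of the Shannon (differential) entropy, in nats."""
        p, edges = np.histogram(draws, bins=bins, density=True)
        widths = np.diff(edges)
        mask = p > 0
        return -np.sum(p[mask] * np.log(p[mask]) * widths[mask])

    rng = np.random.default_rng(3)
    sharp = rng.normal(0.3, 0.02, 10_000)  # intensive sampling -> narrow posterior
    broad = rng.normal(0.3, 0.20, 10_000)  # sparse sampling -> wide posterior
    print(posterior_entropy(sharp), posterior_entropy(broad))
    ```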

  10. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 1: Frequency analysis

    NASA Astrophysics Data System (ADS)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion article (Lenoir and Crucifix, 2018). All the methods presented in this paper are available to the reader in the Python package WAVEPAL.
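
    A plain Lomb-Scargle periodogram of an irregularly sampled series, the starting point that the abstract extends, can be computed with SciPy. The toy sketch below omits the trend terms, WOSA averaging, and CARMA-based significance testing described above.

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(4)
    t = np.sort(rng.uniform(0, 100, 200))    # irregular sampling times
    y = np.sin(2 * np.pi * 0.2 * t) + 0.5 * rng.normal(size=200)
    y -= y.mean()                            # remove the constant component

    freqs = np.linspace(0.01, 0.5, 500)      # trial frequencies (cycles/unit time)
    pgram = lombscargle(t, y, 2 * np.pi * freqs)  # SciPy expects angular frequencies
    print(freqs[np.argmax(pgram)])           # peaks near the true 0.2
    ```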

  11. Surface sampling techniques for 3D object inspection

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong S.; Gerhardt, Lester A.

    1995-03-01

    While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy, one using triangle patches and the other using rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform point sets and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that initial point sets preprocessed by adaptive sampling using triangle patches are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced by the finite element method. The performance of this algorithm was compared to that of adaptive sampling using triangular patches, which was once again shown to be better on different classes of objects.
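
    The recursive subdivision behind the adaptive strategy can be sketched as follows: a triangular patch is split at its edge midpoints whenever the surface deviates from the patch's planar approximation by more than a tolerance, which concentrates points near curved regions, edges, and corners. The surface function, tolerance, and depth cap are illustrative assumptions.

    ```python
    import numpy as np

    def surface(p):
        # Stand-in for the measured part surface z = f(x, y)
        x, y = p
        return np.sin(x) * np.cos(y)

    def adaptive_sample(a, b, c, tol=1e-2, depth=0, out=None):
        """Recursively subdivide triangle (a, b, c), emitting one probe per flat patch."""
        if out is None:
            out = []
        a, b, c = map(np.asarray, (a, b, c))
        centroid = (a + b + c) / 3.0
        planar = (surface(a) + surface(b) + surface(c)) / 3.0  # linear approximation
        if depth < 6 and abs(surface(centroid) - planar) > tol:
            ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
            for tri in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
                adaptive_sample(*tri, tol=tol, depth=depth + 1, out=out)
        else:
            out.append(tuple(centroid))  # flat enough: a single probe point suffices
        return out

    points = adaptive_sample((0.0, 0.0), (3.0, 0.0), (0.0, 3.0))
    print(len(points))  # denser where the surface curves
    ```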

  12. Space infrared telescope pointing control system. Infrared telescope tracking in the presence of target motion

    NASA Technical Reports Server (NTRS)

    Powell, J. D.; Schneider, J. B.

    1986-01-01

    The use of charge-coupled-devices, or CCD's, has been documented by a number of sources as an effective means of providing a measurement of spacecraft attitude with respect to the stars. A method exists of defocussing and interpolation of the resulting shape of a star image over a small subsection of a large CCD array. This yields an increase in the accuracy of the device by better than an order of magnitude over the case when the star image is focussed upon a single CCD pixel. This research examines the effect that image motion has upon the overall precision of this star sensor when applied to an orbiting infrared observatory. While CCD's collect energy within the visible spectrum of light, the targets of scientific interest may well have no appreciable visible emissions. Image motion has the effect of smearing the image of the star in the direction of motion during a particular sampling interval. The presence of image motion is incorporated into a Kalman filter for the system, and it is shown that the addition of a gyro command term is adequate to compensate for the effect of image motion in the measurement. The updated gyro model is included in this analysis, but has natural frequencies faster than the projected star tracker sample rate for dim stars. The system state equations are reduced by modelling gyro drift as a white noise process. There exists a tradeoff in selected star tracker sample time between the CCD, which has improved noise characteristics as sample time increases, and the gyro, which will potentially drift further between long attitude updates. A sample time which minimizes pointing estimation error exists for the random drift gyro model as well as for a random walk gyro model.
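
    The sample-time trade-off can be illustrated with a toy Kalman filter: the attitude estimate is propagated with the gyro between star-tracker samples, and gyro drift is modelled as a random walk on the bias. This is a hedged two-state sketch with invented noise values, not the paper's filter.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    dt = 0.1                          # gyro propagation step
    q_drift, r_tracker = 1e-6, 1e-4   # assumed drift and star-tracker noise variances

    F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [attitude error, gyro bias]
    Q = np.diag([0.0, q_drift * dt])
    H = np.array([[1.0, 0.0]])              # the star tracker senses attitude only

    x_true = np.array([0.0, 5e-4])          # true initial error and bias
    x_est, P = np.zeros(2), np.eye(2) * 1e-2

    for k in range(600):
        x_true = F @ x_true + np.array([0.0, np.sqrt(q_drift * dt) * rng.normal()])
        x_est, P = F @ x_est, F @ P @ F.T + Q        # propagate with the gyro
        if k % 10 == 0:                              # star-tracker sample arrives
            z = x_true[0] + np.sqrt(r_tracker) * rng.normal()
            S = (H @ P @ H.T + r_tracker).item()
            K = P @ H.T / S                          # Kalman gain
            x_est = x_est + (K * (z - H @ x_est)).ravel()
            P = (np.eye(2) - K @ H) @ P

    print(x_true, x_est)   # the filter tracks both attitude error and bias
    ```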

  13. Determining the sources of suspended sediment in a Mediterranean groundwater-dominated river: the Na Borges basin (Mallorca, Spain).

    NASA Astrophysics Data System (ADS)

    Estrany, Joan; Martinez-Carreras, Nuria

    2013-04-01

    Tracers have been acknowledged as a useful tool to identify sediment sources, based upon a variety of techniques and chemical and physical sediment properties. Sediment fingerprinting supports the notion that changes in sedimentation rates are not just related to increased/reduced erosion and transport in the same areas, but also to the establishment of different pathways increasing sediment connectivity. The Na Borges is a Mediterranean lowland agricultural river basin (319 km2) where traditional soil and water conservation practices have been applied over millennia to provide effective protection of cultivated land. During the twentieth century, industrialisation and pressure from tourism activities have increased urbanised surfaces, which have impacts on the processes that control streamflow. Within this context, source material sampling in Na Borges focused on obtaining representative samples from potential sediment sources (comprising topsoil, i.e., 0-2 cm) susceptible to mobilisation by water and subsequent routing to the river channel network, while samples representing channel bank sources were collected from actively eroding channel margins and ditches. Samples of road dust and of solids from sewage treatment plants were also collected. During two hydrological years (2004-2006), representative suspended sediment samples for use in source fingerprinting studies were collected at four flow gauging stations and at eight secondary sampling points using time-integrating samplers. Likewise, representative bed-channel sediment samples were obtained using the resuspension approach at eight sampling points in the main stem of the Na Borges River. These deposits represent the fine sediment temporarily stored in the bed-channel and were also used for tracing source contributions. A total of 102 individual time-integrated sediment samples, 40 bulk samples and 48 bed-sediment samples were collected. Upon return to the laboratory, source material samples were oven-dried at 40 °C, disaggregated using a pestle and mortar, and dry sieved to

  14. Soil moisture optimal sampling strategy for Sentinel 1 validation super-sites in Poland

    NASA Astrophysics Data System (ADS)

    Usowicz, Boguslaw; Lukowski, Mateusz; Marczewski, Wojciech; Lipiec, Jerzy; Usowicz, Jerzy; Rojek, Edyta; Slominska, Ewa; Slominski, Jan

    2014-05-01

    Soil moisture (SM) exhibits a high temporal and spatial variability that depends not only on the rainfall distribution, but also on the topography of the area, the physical properties of the soil and the vegetation characteristics. This large variability does not allow reliable estimation of SM in the surface layer from ground point measurements, especially at large spatial scales. Remote sensing measurements allow the spatial distribution of SM in the surface layer to be estimated better than point measurements do; however, they require validation. This study attempts to characterize the SM distribution by determining its spatial variability in relation to the number and location of ground point measurements. The strategy considers gravimetric and TDR measurements with different sampling steps and different numbers and distributions of measuring points at the scales of an arable field, a wetland and a commune (areas of 0.01, 1 and 140 km2, respectively), taking into account different SM status. Mean values of SM were only weakly sensitive to changes in the number and arrangement of sampling points, whereas parameters describing the dispersion responded more significantly. Spatial analysis showed autocorrelations of the SM whose lengths depended on the number and the distribution of points within the adopted grids. Directional analysis revealed a differentiated anisotropy of SM for different grids and numbers of measuring points. Both the number of samples and their layout over the experimental area were therefore reflected in the parameters characterizing the SM distribution. This suggests the need for at least two sampling variants, differing in the number and positioning of the measurement points, with at least 20 points each; the standard error and the range of spatial variability show little change as the number of samples increases above this figure. The gravimetric method gives a more varied distribution of SM than the TDR measurements. It should be noted that reducing the number of samples in the measuring grid leads to a flattening of the SM distribution for both methods while increasing the estimation error. A grid of sensors at permanent measurement points should include points whose vicinities have similar SM distributions. The number of points, the maximum correlation ranges and the acceptable estimation error should be taken into account when choosing the measurement points. Adoption or possible adjustment of the distribution of the measurement points should be verified by performing additional measuring campaigns during dry and wet periods. The presented approach seems appropriate for creating regional-scale test (super) sites to validate products of satellites equipped with C-band SAR (Synthetic Aperture Radar) with spatial resolution suited to the single-field scale, for example ERS-1, ERS-2, Radarsat and Sentinel-1, which was due for launch within a few months. The work was partially funded by the Government of Poland through an ESA Contract under the PECS ELBARA_PD project No. 4000107897/13/NL/KML.

  15. Sampling and assessment accuracy in mate choice: a random-walk model of information processing in mating decision.

    PubMed

    Castellano, Sergio; Cermelli, Paolo

    2011-04-07

    Mate choice depends on mating preferences and on the manner in which mate-quality information is acquired and used to make decisions. We present a model that describes how these two components of mating decision interact with each other during a comparative evaluation of prospective mates. The model, with its well-explored precedents in psychology and neurophysiology, assumes that decisions are made by the integration over time of noisy information until a stopping-rule criterion is reached. Due to this informational approach, the model builds a coherent theoretical framework for developing an integrated view of functions and mechanisms of mating decisions. From a functional point of view, the model allows us to investigate speed-accuracy tradeoffs in mating decision at both population and individual levels. It shows that, under strong time constraints, decision makers are expected to make fast and frugal decisions and to optimally trade off population-sampling accuracy (i.e. the number of sampled males) against individual-assessment accuracy (i.e. the time spent for evaluating each mate). From the proximate-mechanism point of view, the model makes testable predictions on the interactions of mating preferences and choosiness in different contexts and it might be of compelling empirical utility for a context-independent description of mating preference strength. Copyright © 2011 Elsevier Ltd. All rights reserved.
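
    The decision mechanism described is a bounded accumulation (random-walk) process: noisy evidence about a prospective mate is integrated over time until a stopping criterion is reached. A minimal sketch with invented drift, noise, and threshold values:

    ```python
    import numpy as np

    def assess(quality, noise=1.0, threshold=10.0, dt=1.0, rng=None):
        """Integrate noisy evidence until a boundary is hit.

        Returns (accepted?, time spent). The drift equals the male's perceived
        quality; the threshold sets the speed-accuracy trade-off of assessment."""
        rng = rng if rng is not None else np.random.default_rng()
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += quality * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        return x > 0, t

    rng = np.random.default_rng(6)
    outcomes = [assess(0.3, rng=rng) for _ in range(1000)]
    print(np.mean([ok for ok, _ in outcomes]),   # acceptance rate
          np.mean([t for _, t in outcomes]))     # mean assessment time per male
    ```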

  16. SEM Based CARMA Time Series Modeling for Arbitrary N.

    PubMed

    Oud, Johan H L; Voelkle, Manuel C; Driver, Charles C

    2018-01-01

    This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.

  17. Decrease in Ionized and Total Magnesium Blood Concentrations in Endurance Athletes Following an Exercise Bout Restores within Hours-Potential Consequences for Monitoring and Supplementation.

    PubMed

    Terink, Rieneke; Balvers, Michiel G J; Hopman, Maria T; Witkamp, Renger F; Mensink, Marco; Gunnewiek, Jacqueline M T Klein

    2017-06-01

    Magnesium is essential for optimal sport performance, generating an interest to monitor its status in athletes. However, before measuring magnesium status in blood could become routine, more insight into its diurnal fluctuations and the effects of exercise itself is necessary. Therefore, we measured the effect of an acute bout of exercise on ionized (iMg) and total plasma magnesium (tMg) in blood obtained from 18 healthy well-trained endurance athletes (age, 31.1 ± 8.1 yr; VO2max, 50.9 ± 7.5 ml/kg/min) at multiple time points, and compared this with a resting situation. On both days, 7 blood samples were taken at set time points (8:30 fasted, 11:00, 12:30, 13:30, 15:00, 16:00, 18:30). The control day was included to correct for a putative diurnal fluctuation of magnesium. During the exercise day, athletes performed a 90 min bicycle ergometer test (70% VO2max) between 11:00 and 12:30. Whole blood samples were analyzed for iMg and plasma for tMg concentrations. Both concentrations decreased significantly after exercise (from 0.52 ± 0.04 to 0.45 ± 0.03 mmol/L and from 0.81 ± 0.07 to 0.73 ± 0.06 mmol/L, respectively, p < .001), while no significant decline was observed during that time interval on control days. Both iMg and tMg returned to baseline, on average, 2.5 hr after exercise. These findings suggest that the timing of blood sampling to analyze Mg status is important. Additional research is needed to establish the recovery time after different types of exercise to arrive at general advice regarding the timing of magnesium status assessment in practice.

  18. Changes in the Ginsenoside Content During Fermentation Using an Appliance for the Preparation of Red Ginseng.

    PubMed

    Lee, So Jin; Ha, Na; Kim, Yunjeong; Kim, Min-Gul

    2016-01-01

    The total amount of ginsenoside in fermented red ginseng (FRG) is increased by microbial fermentation. The aim of this study was to evaluate whether fermentation time and temperature affect the ginsenoside content during fermentation using an appliance for the preparation of red ginseng. The FRG and fermented red ginseng extracts (FRG-e) were prepared using an appliance for the preparation of red ginseng. The temperature was recorded and time points for sampling were scheduled at pre-fermentation (0 h) and 18, 36, 48, 60 and 72 h after the addition of the microbial strains. Samples of FRG and FRG-e were collected to identify changes in the ginsenoside contents at each time point during the fermentation process. The ginsenoside content was analyzed using high performance liquid chromatography (HPLC). The levels of ginsenoside Rh1, Rg3, and compound Y, which are known to have effective pharmacological properties, increased more than three-fold in the final FRG products relative to samples prior to fermentation. Although the ginsenoside constituents of FRG-e decreased, or increased and then decreased, during fermentation, the total amount of ginsenoside in FRG-e was even higher than that in FRG; the total amounts of ginsenoside in FRG-e and FRG were 8282.8 and 738.0 mg, respectively. This study examined the changes in ginsenoside composition and suggests a method to manufacture a high content of total ginsenosides according to the fermentation temperature and process time. Reducing the extraction time is expected to counteract the decline of ginsenosides in FRG-e as a function of the fermentation time.

  19. In vitro activity of an ear rinse containing tromethamine, EDTA, benzyl alcohol and 0.1% ketoconazole on Malassezia organisms from dogs with otitis externa.

    PubMed

    Cole, Lynette K; Luu, Dao H; Rajala-Schultz, Paivi J; Meadows, Cheyney; Torres, Audrey H

    2007-04-01

    The purpose of this study was to evaluate the in vitro activity of an ear rinse containing tromethamine, EDTA, benzyl alcohol and 0.1% ketoconazole in purified water on Malassezia organisms from dogs with otitis externa. Malassezia organisms were collected from ear swab samples from the external ear canal of 19 dogs with otitis externa, plus one control strain of Malassezia pachydermatis. Three test solutions were evaluated: ER (EDTA, tromethamine, benzyl alcohol), ER + keto (EDTA, tromethamine, benzyl alcohol, ketoconazole), and H2O (purified water). Ten-millilitre aliquots of each test solution were transferred into 20 tubes, each inoculated with one of the isolates (1 tube per isolate: 19 clinical and 1 control strain). Samples were retrieved from each tube at five time points (0, 15, 30, 45 and 60 min), transferred to Petri dishes, mixed with Sabouraud dextrose agar supplemented with 0.5% Tween 80 and incubated. Following incubation, the plates were examined for growth and colonies counted as colony-forming units per millilitre. The data were analysed using a repeated measures analysis, with pair-wise comparisons of solution-time combinations. There was a significant reduction in Malassezia growth in ER + keto at all time points (P < 0.0001) compared to time zero. Neither ER nor H2O had any effect on the growth of Malassezia. ER + keto was significantly more effective in reducing Malassezia growth (P < 0.0001) at all time points compared to both ER and H2O. ER + keto may be useful in the treatment of Malassezia otitis externa. Future studies should be performed to evaluate the in vivo efficacy of ER + keto as treatment for otic infections caused by Malassezia.

  20. Treatment of hyperthyroidism with radioiodine targeted activity: A comparison between two dosimetric methods.

    PubMed

    Amato, Ernesto; Campennì, Alfredo; Leotta, Salvatore; Ruggeri, Rosaria M; Baldari, Sergio

    2016-06-01

    Radioiodine therapy is an effective and safe treatment of hyperthyroidism due to Graves' disease, toxic adenoma, and toxic multinodular goiter. We compared the outcomes of a traditional calculation method, based on an analytical fit of the uptake curve and subsequent dose calculation with the MIRD approach, and an alternative computation approach based on a formulation implemented in a public-access website, searching for the best timing of radioiodine uptake measurements in pre-therapeutic dosimetry. We report on sixty-nine hyperthyroid patients who were treated after pre-therapeutic dosimetry calculated by fitting a six-point uptake curve (3-168 h). In order to evaluate the results of the radioiodine treatment, patients were followed up to sixty-four months after treatment (mean 47.4 ± 16.9 months). Patient dosimetry was then retrospectively recalculated with the two above-mentioned methods. Several time schedules for uptake measurements were considered, with different timings and total numbers of points. Early time schedules, sampling uptake only up to 48 h, do not allow an accurate treatment plan to be set up, while schedules including the measurement at one week give significantly better results. The analytical fit procedure applied to the three-point time schedule 3(6)-24-168 h gave results significantly more accurate than the website approach exploiting either the same schedule or the single measurement at 168 h. Consequently, the best strategy among those considered is to sample the uptake at 3(6)-24-168 h and carry out an analytical fit of the curve, while extra measurements at 48 and 72 h yield only marginal improvements in the accuracy of the therapeutic activity determination. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
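
    The traditional calculation referred to, fitting an analytical model to the uptake measurements and integrating it for the MIRD dose estimate, can be sketched as follows. The two-exponential model, parameter guesses, and uptake values are illustrative assumptions (the effective rates would fold in the physical decay of 131-I); this is not the authors' computation.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def uptake(t, A, a, b):
        # Assumed two-exponential model: uptake at rate a, clearance at rate b
        return A * (np.exp(-b * t) - np.exp(-a * t))

    t_h = np.array([3.0, 6.0, 24.0, 48.0, 72.0, 168.0])   # six-point schedule (h)
    u = np.array([0.18, 0.28, 0.41, 0.38, 0.33, 0.20])    # hypothetical uptake

    p, _ = curve_fit(uptake, t_h, u, p0=(0.5, 0.2, 0.005))
    A, a, b = p
    residence = A * (1.0 / b - 1.0 / a)   # analytic integral of the fit (hours)
    print(p, residence)                   # the residence time feeds the dose estimate
    ```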

  1. Identification of biogeochemical hot spots using time-lapse hydrogeophysics

    NASA Astrophysics Data System (ADS)

    Franz, T. E.; Loecke, T.; Burgin, A.

    2016-12-01

    The identification and monitoring of biogeochemical hot spots and hot moments is difficult using point based sampling techniques and sensors. Without proper monitoring and accounting of water, energy, and trace gas fluxes it is difficult to assess the environmental footprint of land management practices. One key limitation is optimal placement of sensors/chambers that adequately capture the point scale fluxes and thus a reasonable integration to landscape scale flux. In this work we present time-lapse hydrogeophysical imaging at an old agricultural field converted into a wetland mitigation bank near Dayton, Ohio. While the wetland was previously instrumented with a network of soil sensors and surface chambers to capture a suite of state variables and fluxes, we hypothesize that time-lapse hydrogeophysical imaging is an underutilized and critical reconnaissance tool for effective network design and landscape scaling. Here we combine the time-lapse hydrogeophysical imagery with the multivariate statistical technique of Empirical Orthogonal Functions (EOF) in order to isolate the spatial and temporal components of the imagery. Comparisons of soil core information (e.g. soil texture, soil carbon) from around the study site and organized within like spatial zones reveal statistically different mean values of soil properties. Moreover, the like spatial zones can be used to identify a finite number of future sampling locations, evaluation of the placement of existing sensors/chambers, upscale/downscale observations, all of which are desirable techniques for commercial use in precision agriculture. Finally, we note that combining the EOF analysis with continuous monitoring from point sensors or remote sensing products may provide a robust statistical framework for scaling observations through time as well as provide appropriate datasets for use in landscape biogeochemical models.
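
    EOF analysis of time-lapse imagery amounts to a singular value decomposition of the space-time anomaly matrix: the right singular vectors give the spatial patterns (zones), and the left singular vectors, scaled by the singular values, give their temporal loadings. A minimal sketch on synthetic data with one dominant mode; all shapes and values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_time, n_pix = 40, 500                    # time-lapse frames x image pixels
    pattern = rng.normal(size=n_pix)           # one dominant spatial mode
    amplitude = np.sin(np.linspace(0, 6, n_time))
    data = np.outer(amplitude, pattern) + 0.3 * rng.normal(size=(n_time, n_pix))

    anom = data - data.mean(axis=0)            # remove the temporal mean per pixel
    U, S, Vt = np.linalg.svd(anom, full_matrices=False)
    eofs = Vt                                  # spatial patterns (modes x pixels)
    pcs = U * S                                # temporal expansion coefficients
    print((S**2 / np.sum(S**2))[:3])           # variance fractions: mode 1 dominates
    ```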

  2. Real-time global illumination on mobile device

    NASA Astrophysics Data System (ADS)

    Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.

    2014-02-01

    We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights to represent the effect of indirect illumination. Our rendering process consists of three stages. In the first stage, the primary light generates local illumination with the shadow map on the GPU. The second stage of the global illumination uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. and add the indirect illumination to the local illumination on the GPU. Given the limited computing resources of mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method that treats 3D geometry and attributes simultaneously and reduces the total number of virtual point lights. We also use a hybrid strategy, which collaboratively combines the CPUs and GPUs available in a mobile SoC. Experimental results demonstrate the global illumination performance of the proposed method.

  3. Simple point vortex model for the relaxation of 2D superfluid turbulence in a Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Kim, Joon Hyun; Kwon, Woo Jin; Shin, Yong-Il

    2016-05-01

    In a recent experiment, it was found that the dissipative evolution of a corotating vortex pair in a trapped Bose-Einstein condensate is well described by a point vortex model with longitudinal friction on the vortex motion, and the thermal friction coefficient was determined as a function of sample temperature. In this poster, we present a numerical study of the relaxation of 2D superfluid turbulence based on the dissipative point vortex model. We consider a homogeneous system in a cylindrical trap with randomly distributed vortices and implement vortex-antivortex pair annihilation by removing a pair when its separation becomes smaller than a threshold value. We characterize the relaxation of the turbulent vortex states by the decay time required for the vortex number to fall to a quarter of its initial value. We find the vortex decay time is inversely proportional to the thermal friction coefficient. In particular, the decay times obtained in this work show good quantitative agreement with the experimental results, indicating that, in spite of its simplicity, the point vortex model reasonably captures the physics of the relaxation dynamics of the real system.
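
    A dissipative point-vortex sketch in this spirit is given below: ideal vortex-induced velocities plus a friction term that drags opposite-sign pairs together, with annihilation below a cutoff separation. The friction form and sign convention, all parameter values, and the open-plane geometry (no trap boundary) are simplifying assumptions.

    ```python
    import numpy as np

    GAMMA, ALPHA, R_CUT = 1.0, 0.05, 0.05   # circulation, friction, annihilation radius

    def velocities(pos, sign):
        """Ideal velocities v_i = sum_j (G s_j / 2 pi) z_hat x r_ij / |r_ij|^2."""
        v = np.zeros_like(pos)
        for i in range(len(pos)):
            r = pos[i] - pos                             # r_ij for all j
            r2 = np.einsum('ij,ij->i', r, r)
            r2[i] = np.inf                               # no self-interaction
            perp = np.column_stack([-r[:, 1], r[:, 0]])  # z_hat x r
            v[i] = np.sum((GAMMA * sign / (2 * np.pi * r2))[:, None] * perp, axis=0)
        return v

    rng = np.random.default_rng(8)
    pos = rng.uniform(-1, 1, (40, 2))
    sign = np.array([1, -1] * 20)

    dt, t = 0.002, 0.0
    while len(pos) > 10 and t < 50.0:        # run until a quarter of vortices remain
        v = velocities(pos, sign)
        fric = -ALPHA * sign[:, None] * np.column_stack([-v[:, 1], v[:, 0]])
        pos, t = pos + (v + fric) * dt, t + dt
        keep = np.ones(len(pos), bool)       # annihilate close opposite-sign pairs
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                if keep[i] and keep[j] and sign[i] != sign[j] \
                        and np.linalg.norm(pos[i] - pos[j]) < R_CUT:
                    keep[i] = keep[j] = False
        pos, sign = pos[keep], sign[keep]

    print("quarter-number decay time:", t)
    ```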

  4. A Method for Identifying Pollution Sources of Heavy Metals and PAH for a Risk-Based Management of a Mediterranean Harbour

    PubMed Central

    Moranda, Arianna

    2017-01-01

    A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution can contribute in a very narrow coastal ecosystem, both internal and outside but close to the harbour, and was used to identify the possible point sources of contamination in a Mediterranean Harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected in 81 sampling points during four monitoring campaigns and 28 chemicals were searched for within the collected samples. PCA of total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminals activity. The study is the prosecution of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities. PMID:29270328

  5. A Method for Identifying Pollution Sources of Heavy Metals and PAH for a Risk-Based Management of a Mediterranean Harbour.

    PubMed

    Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi

    2017-01-01

    A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution can contribute in a very narrow coastal ecosystem, both internal and outside but close to the harbour, and was used to identify the possible point sources of contamination in a Mediterranean Harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected in 81 sampling points during four monitoring campaigns and 28 chemicals were searched for within the collected samples. PCA of total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminals activity. The study is the prosecution of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities.
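
    The PCA stage of such a source-apportionment procedure can be sketched on synthetic sediment chemistry as below; the ratio-matching refinement that separates point from nonpoint sources is not reproduced, and the sample counts and profiles are invented.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(9)
    n_samples, n_chem = 235, 28                 # samples x analytes (metals + PAH)
    profile = rng.random(n_chem)                # one dominant contamination profile
    load = rng.exponential(1.0, n_samples)      # how strongly each sample is hit
    X = np.outer(load, profile) + 0.2 * rng.random((n_samples, n_chem))

    Z = StandardScaler().fit_transform(X)       # standardize the analytes
    pca = PCA(n_components=5).fit(Z)
    print(pca.explained_variance_ratio_)        # a few PCs ~ a few main sources
    scores = pca.transform(Z)
    print(int(np.argmax(np.abs(scores[:, 0])))) # sample most extreme on PC1
    ```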

  6. A precision multi-sampler for deep-sea hydrothermal microbial mat studies

    NASA Astrophysics Data System (ADS)

    Breier, J. A.; Gomez-Ibanez, D.; Reddington, E.; Huber, J. A.; Emerson, D.

    2012-12-01

    A new tool was developed for deep-sea microbial mat studies by remotely operated vehicles and was successfully deployed during a cruise to the hydrothermal vent systems of the Mid-Cayman Rise. The Mat Sampler allows discrete, controlled material collection from complex microbial structures, vertical profiling within thick microbial mats, and particulate and fluid sample collection from venting seafloor fluids. It has a reconfigurable and expandable sample capacity based on magazines of 6 syringes, filters, or water bottles. Multiple magazines can be used, such that 12-36 samples can be collected routinely during a single dive, and several times more if the dive is dedicated to this purpose. It is capable of hosting in situ physical, electrochemical, and optical sensors, including temperature and oxygen probes, in order to guide sampling and to record critical environmental parameters at the time and point of sample collection. The precision sampling capability of this instrument will greatly enhance efforts to understand the structured, delicate microbial mat communities that grow in diverse benthic habitats.

  7. On the Traceability of Commercial Saffron Samples Using ¹H-NMR and FT-IR Metabolomics.

    PubMed

    Consonni, Roberto; Ordoudi, Stella A; Cagliani, Laura R; Tsiangali, Maria; Tsimidou, Maria Z

    2016-02-29

    In previous works on authentic saffron samples of known history (harvest and processing year, storage conditions, and length of storage), biomarkers of the product's shelf life were proposed using both FT-IR and NMR metabolomics. This work addresses the difficulty of tracing back the "age" of commercial saffron samples of unknown history, sets a limit value above which these products can be considered substandard, and offers a useful tool to combat saffron mislabeling and fraud involving low-quality saffron material. Investigations of authentic and commercial saffron samples of different origins and harvest years, which had been stored under controlled conditions for different lengths of time, allowed a clear-cut clustering of the samples into two groups according to storage period, irrespective of provenance. In this respect, the four-year cut-off point proposed in our previous work helped to trace back the "age" of unknown samples and to check for possible mislabeling practices.

  8. Optimization of Sample Points for Monitoring Arable Land Quality by Simulated Annealing while Considering Spatial Variations

    PubMed Central

    Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng

    2016-01-01

    With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
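
    A minimal sketch of the idea (not the authors' improved SA algorithm) follows: annealing over subsets of sample points, with the mean distance from every candidate point to its nearest retained point as a stand-in objective for prediction accuracy. The move set, objective, and cooling schedule are all assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      pts = rng.uniform(0, 100, size=(400, 2))   # dense candidate soil points
      k = 80                                     # number of points to retain

      def coverage_cost(keep):
          # mean distance from every candidate point to its nearest kept
          # point: a crude proxy for spatial prediction error
          d = np.linalg.norm(pts[:, None, :] - pts[None, keep, :], axis=2)
          return d.min(axis=1).mean()

      keep = rng.choice(len(pts), k, replace=False)
      cost, T = coverage_cost(keep), 1.0
      for it in range(5000):
          trial = keep.copy()
          trial[rng.integers(k)] = rng.integers(len(pts))  # swap one point
          if len(set(trial)) < k:                # reject duplicated points
              continue
          c = coverage_cost(trial)
          if c < cost or rng.random() < np.exp((cost - c) / T):
              keep, cost = trial, c              # accept better/uphill move
          T *= 0.999                             # geometric cooling
      print(f"final coverage cost: {cost:.3f}")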

  9. Application of Positron Doppler Broadening Spectroscopy to the Measurement of the Uniformity of Composite Materials

    NASA Astrophysics Data System (ADS)

    Quarles, C. A.; Sheffield, Thomas; Stacy, Scott; Yang, Chun

    2009-03-01

    The uniformity of rubber-carbon black composite materials has been investigated with positron Doppler Broadening Spectroscopy (DBS). The number of grams of carbon black (CB) mixed into one hundred grams of rubber, phr, is used to characterize a sample. A typical concentration for rubber in tires is 50 phr. The S parameter measured by DBS has been found to depend on the phr of the sample as well as the type of rubber and carbon black. The variation in carbon black concentration within a surface area of about 5 mm diameter can be measured by moving a standard Na-22 or Ge-68 positron source over an extended sample. The precision of the concentration measurement depends on the dwell time at a point on the sample. The time required to determine uniformity over an extended sample can be reduced by running with much higher counting rate than is typical in DBS and correcting for the systematic variation of S parameter with counting rate. Variation in CB concentration with mixing time at the level of about 0.5% has been observed.

  10. Where We Look When We Drive with or without Active Steering Wheel Control

    PubMed Central

    Mars, Franck; Navarro, Jordan

    2012-01-01

    Current theories on the role of visuomotor coordination in driving agree that active sampling of the road by the driver informs the arm-motor system in charge of performing actions on the steering wheel. Still under debate, however, is the nature of the visual cues and gaze strategies used by drivers. In particular, the tangent point hypothesis, which states that drivers look at a specific point on the inside edge line, has recently become the object of controversy. An alternative hypothesis proposes that drivers orient gaze toward the desired future path, which happens to be often situated in the vicinity of the tangent point. The present study contributed to this debate through analysis of the distribution of gaze orientation with respect to the tangent point. The results revealed that drivers sampled the roadway in the close vicinity of the tangent point rather than the tangent point proper. This supports the idea that drivers look at the boundary of a safe trajectory envelope near the inside edge line. Furthermore, the study investigated for the first time the reciprocal influence of manual control on gaze control in the context of driving. This was achieved through the comparison of gaze behavior when drivers actively steered the vehicle and when steering was performed by an automatic controller. The results showed an increase in look-ahead fixations in the direction of the bend exit and a small but consistent reduction in the time spent looking in the area of the tangent point when steering was passive. This may be the consequence of a change in the balance between cognitive and sensorimotor anticipatory gaze strategies. It might also reflect bidirectional coordination between the eye and arm-motor systems, which goes beyond the common assumption that the eyes lead the hands when driving. PMID:22928043

  11. Effect of different brewing times on antioxidant activity and polyphenol content of loosely packed and bagged black teas (Camellia sinensis L.).

    PubMed

    Nikniaz, Zeinab; Mahdavi, Reza; Ghaemmaghami, Seyed Jamal; Lotfi Yagin, Neda; Nikniaz, Leila

    2016-01-01

    The aim of this study was to determine and compare the effect of infusion time on the antioxidant activity and total polyphenol content of bagged and loosely packed black teas. For twenty loosely packed and eleven bagged tea samples, antioxidant activity and total polyphenol content were analyzed using the FRAP and Folin-Ciocalteau methods, respectively. ANOVA with Tukey's post-hoc test and the independent t-test were used for statistical analysis. The antioxidant activity and polyphenol content of the various brands of tea samples were significantly different. There were significant differences in the antioxidant activity of loosely packed teas between 5 and 15 (p=0.03), 30 (p=0.02), and 60 (p=0.007) minutes of brewing. There was also a significant difference between the antioxidant activity of bagged samples infused for 1 minute and that at the four other infusion time points (p<0.001). Regarding polyphenol content, loosely packed tea samples showed no significant differences between brewing times (p=0.15); in bagged samples, however, the polyphenol content of samples brewed for 1 minute was significantly lower than that of samples brewed for 3, 4, and 5 minutes (p<0.05). The antioxidant activity and polyphenol content of tea bags were significantly higher than those of loosely packed forms of the same brands at 5 min of brewing (p<0.001). The infusion time and the form of tea (loosely packed or bagged) were shown to be important determinants of the antioxidant activity and polyphenol content of black tea infusions, in addition to variety, growing environment, and manufacturing conditions.

  12. Timing Recovery Strategies in Magnetic Recording Systems

    NASA Astrophysics Data System (ADS)

    Kovintavewat, Piya

    At some point in a digital communications receiver, the received analog signal must be sampled, and good performance requires that these samples be taken at the right times. The process of synchronizing the sampler with the received analog waveform is known as timing recovery. Conventional timing recovery techniques perform well only when operating at high signal-to-noise ratio (SNR); iterative error-control codes, however, allow reliable communication at very low SNR, where conventional techniques fail. This paper provides a detailed review of timing recovery strategies based on per-survivor processing (PSP) that are capable of working at low SNR. We also investigate their performance in magnetic recording systems, because magnetic recording is a primary method of storage for a variety of applications, including desktop, mobile, and server systems. Results indicate that the timing recovery strategies based on PSP perform better than the conventional ones and are thus worth employing in magnetic recording systems.

  13. An antithetic variate to facilitate upper-stem height measurements for critical height sampling with importance sampling

    Treesearch

    Thomas B. Lynch; Jeffrey H. Gove

    2013-01-01

    Critical height sampling (CHS) estimates cubic volume per unit area by multiplying the sum of critical heights measured on trees tallied in a horizontal point sample (HPS) by the HPS basal area factor. One of the barriers to practical application of CHS is the fact that trees near the field location of the point-sampling sample point have critical heights that occur...

  14. Challenges in early clinical development of adjuvanted vaccines.

    PubMed

    Della Cioppa, Giovanni; Jonsdottir, Ingileif; Lewis, David

    2015-06-08

    A three-step approach to the early development of adjuvanted vaccine candidates is proposed, the goal of which is to allow ample space for exploratory and hypothesis-generating human experiments and to select dose(s) and dosing schedule(s) to bring into full development. Although the proposed approach is more extensive than the traditional early development program, the authors suggest that by addressing key questions upfront, the overall time, size, and cost of development will be reduced and the probability of public health advancement enhanced. The immunogenicity end-points chosen for early development should be critically selected: an established immunological parameter with a well characterized assay should be selected as the primary end-point for dose and schedule finding, and exploratory information-rich end-points should be limited in number and based on pre-defined hypothesis-generating plans, including systems biology and pathway analyses. Building a pharmacodynamic profile is an important aspect of early development: to this end, multiple early (within 24 h) and late (up to one year) sampling is necessary, which can be accomplished by sampling subgroups of subjects at different time points. In most cases the final target population, even if vulnerable, should be considered for inclusion in early development. In order to obtain the multiple formulations necessary for dose and schedule finding, "bed-side mixing" of the various components of the vaccine is often necessary: this is a complex and underestimated area that deserves serious research and logistical support. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Simulation study of a chaotic cavity transducer based virtual phased array used for focusing in the bulk of a solid material.

    PubMed

    Delrue, Steven; Van Den Abeele, Koen; Bou Matar, Olivier

    2016-04-01

    In acoustic and ultrasonic non-destructive testing techniques, it is sometimes beneficial to concentrate sound energy at a chosen location in space and at a specific instance in time, for example to improve the signal-to-noise ratio or to activate the nonlinearity of damage features. Time Reversal (TR) techniques, taking advantage of the reversible character of the wave equation, are particularly suited to focusing ultrasonic waves in time and space. The characteristics of the energy focusing in solid media using principles of time-reversed acoustics are highly influenced by the nature and dimensions of the medium, the number of transducers, and the length of the received signals. Usually, a large number of transducers enclosing the domain of interest is needed to improve the quality of the focusing. However, in the case of highly reverberant media, the number of transducers can be reduced to only one (single-channel TR). For focusing in a non-reverberant medium, which is impossible when using only one source, an adaptation of the single-channel reciprocal TR procedure has recently been suggested by means of a Chaotic Cavity Transducer (CCT), a single-element transducer glued onto a cavity of chaotic shape. In this paper, a CCT is used to focus elastic energy, at different times, at different points along a predefined line on the upper surface of a thick solid sample. In doing so, all focusing points can act as a virtual phased array transducer, allowing focusing at any point along the depth direction of the sample. This is impossible using conventional reciprocal TR, since one would need access to all points in the bulk of the material to detect the signals used in the TR process. To assess and provide a better understanding of this concept, a numerical study has been developed, allowing verification of the basic concepts of the virtual phased array and illustrating multi-component time-reversal focusing in the bulk of a solid material. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Critical point relascope sampling for unbiased volume estimation of downed coarse woody debris

    Treesearch

    Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey; Mark J. Ducey

    2005-01-01

    Critical point relascope sampling is developed and shown to be design-unbiased for the estimation of log volume when used with point relascope sampling for downed coarse woody debris. The method is closely related to critical height sampling for standing trees when trees are first sampled with a wedge prism. Three alternative protocols for determining the critical...

  17. Variability of Stimulant Levels in Nine Sports Supplements Over a 9-Month Period.

    PubMed

    Attipoe, Selasi; Cohen, Pieter A; Eichner, Amy; Deuster, Patricia A

    2016-10-01

    Many studies have found that some dietary supplement product labels do not accurately reflect the actual ingredients. However, studies have not been performed to determine whether the ingredients in the same dietary supplement product vary over time. The objective of this study was to assess the consistency of stimulant ingredients in popular sports supplements sold in the United States over a 9-month period. Three samples of each of nine popular sports supplements were purchased over the 9-month period. The 27 samples were analyzed for caffeine and several other stimulants (including adulterants). The identity and quantity of stimulants were compared with the stimulants listed on the label and with those found at earlier time points to determine the variability of individual products over the 9-month period. The primary outcome measure was the variability of stimulant amounts in the products examined. Many supplements did not contain the same number and quantity of stimulants at all time points over the 9-month period. Caffeine content varied widely in five of the six caffeinated supplements compared with the initial measurement (-7% to +266%). In addition, the stimulants synephrine, octopamine, cathine, ephedrine, pseudoephedrine, strychnine, and methylephedrine occurred in variable amounts in eight of the nine products. The significance of these findings is uncertain: the sample size was insufficient to support statistical analysis. In our sample of nine popular sports supplements, the presence and quantity of stimulants varied over a 9-month period. However, future studies are warranted to determine whether the variability found is significant and generalizable to other supplements.

  18. Evaluating Mass Analyzers as Candidates for Small, Portable, Rugged Single Point Mass Spectrometers for Analysis of Permanent Gases

    NASA Technical Reports Server (NTRS)

    Arkin, C. Richard; Ottens, Andrew K.; Diaz, Jorge A.; Griffin, Timothy P.; Follestein, Duke; Adams, Fredrick; Steinrock, T. (Technical Monitor)

    2001-01-01

    For Space Shuttle launch safety, there is a need to monitor the concentration of H2, He, O2, and Ar around the launch vehicle. Currently a large mass spectrometry system performs this task, using long transport lines to draw in samples. There is great interest in replacing this stationary system with several miniature, portable, rugged mass spectrometers that act as point sensors placed at the sampling point. Five commercial and two non-commercial analyzers are evaluated. The five commercial systems include the Leybold Inficon XPR-2 linear quadrupole, the Stanford Research (SRS-100) linear quadrupole, the Ferran linear quadrupole array, the ThermoQuest Polaris-Q quadrupole ion trap, and the IonWerks Time-of-Flight (TOF). The non-commercial systems include a compact double focusing sector (CDFMS) developed at the University of Minnesota and a quadrupole ion trap (UF-IT) developed at the University of Florida.

  19. Psychophysics of time perception and intertemporal choice models

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.

    2008-03-01

    Intertemporal choice and the psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting; general hyperbolic discounting (exponential discounting with logarithmic time perception following the Weber-Fechner law; equivalently, a q-exponential discount model based on Tsallis' statistics); simple hyperbolic discounting; and Stevens' power law-exponential discounting (exponential discounting with Stevens' power-law time perception). In order to examine the fitness of these models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small-sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for the biophysical processing underlying temporal discounting and time perception are discussed.
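
    For concreteness, the sketch below fits the four candidate discount functions to hypothetical indifference points at seven delays and ranks them by AICc. The functional forms follow the families named above, but the exact parameterizations and the data are illustrative assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      # subjective value of a delayed reward (normalized to 1 at D = 0)
      def exponential(D, k):            return np.exp(-k * D)
      def simple_hyperbolic(D, k):      return 1.0 / (1.0 + k * D)
      def general_hyperbola(D, k, s):   return (1.0 + k * D) ** (-s)
      def stevens_exponential(D, k, s): return np.exp(-k * D ** s)

      def aicc(rss, n, p):
          # AIC with small-sample correction
          return n * np.log(rss / n) + 2 * p + 2 * p * (p + 1) / (n - p - 1)

      delays = np.array([1, 7, 30, 90, 180, 365, 1825], float)  # days
      indiff = np.array([0.95, 0.85, 0.70, 0.55, 0.45, 0.35, 0.20])

      models = [("exponential", exponential, [0.01]),
                ("simple hyperbolic", simple_hyperbolic, [0.01]),
                ("Weber-Fechner (general hyperbola)", general_hyperbola, [0.01, 1.0]),
                ("Stevens power-exponential", stevens_exponential, [0.01, 1.0])]
      for name, f, p0 in models:
          popt, _ = curve_fit(f, delays, indiff, p0=p0, bounds=(1e-6, np.inf))
          rss = np.sum((indiff - f(delays, *popt)) ** 2)
          print(f"{name:36s} AICc = {aicc(rss, len(delays), len(p0)):8.2f}")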

  20. Ex Vivo Machine Perfusion in CTA with a Novel Oxygen Carrier System to Enhance Graft Preservation and Immunologic Outcomes

    DTIC Science & Technology

    2014-10-01

    rectus abdominal muscle, autotransplantation, heterotopic, superior epigastric vein, cold ischemia time, immunomodulation, transcriptomics... composite flap (muscle, adipose tissue, and skin) from the whole rectus abdominal muscle (RAM). This model was maximized through extensive anatomical... The biopsies included skin, subcutaneous fat, and muscle (9 tissue samples per biopsy time point for each flap). The biopsies were taken by punches

  1. Interferometric Creep Testing.

    DTIC Science & Technology

    1985-03-01

    Figure-list and text fragments: Temperature of Zerodur sample and apparent strain as a function of time with PZT-modulated mirror (point b)... moves vertically if all mirrors are at 45 deg. The lower beam path remains constant if the prism moves up or down or if the Zerodur plate expands... using a 2-in. Zerodur test sample at room temperature and no load except that from the weight of the top steel mirror disk, equivalent to 0.5 psi

  2. Effect of Cell-seeded Hydroxyapatite Scaffolds on Rabbit Radius Bone Regeneration

    DTIC Science & Technology

    2013-06-22

    OK) for 14 d via a tissue processor (Leica TP1020 system; Bannockburn, IL). Samples were then embedded in photocuring resin (Technovit 7200 VLC... Kulzer, Germany) and polymerized under blue light for 24 h. Block samples were adhered to a parallel plexiglass slide using the Exakt 7210 VLC system... induction, choice of evaluation time point, and use of a nonhealing defect. For example, a more challenging radial defect (1.5 cm) in rabbits and the

  3. Optical and Radio Frequency Refractivity Fluctuations from High Resolution Point Sensors: Sea Breezes and Other Observations

    DTIC Science & Technology

    2007-03-01

    velocity and direction along with vertical velocities are derived from the measured time of flight of the ultrasonic signals (manufacturer's... data set. To prevent aliasing, a wave must be sampled at least twice per period, so the Nyquist frequency is f_N = f_s/2. 3. Sampling Requirements... an order of magnitude or more. Refining models or conducting climatological studies of Cn2 requires direct measurements to identify the underlying

  4. Model-Based Design of Long-Distance Tracer Transport Experiments in Plants.

    PubMed

    Bühler, Jonas; von Lieres, Eric; Huber, Gregor J

    2018-01-01

    Studies of long-distance transport of tracer isotopes in plants offer high potential for functional phenotyping, but so far measurement time is a bottleneck, because continuous time series of at least 1 h are required to obtain reliable estimates of transport properties. Hence, usual throughput values are between 0.5 and 1 samples h⁻¹. Here, we propose to increase sample throughput by introducing temporal gaps in the data acquisition for each plant sample and measuring multiple plants one after another in a rotating scheme. In contrast to common time series analysis methods, mechanistic tracer transport models allow the analysis of interrupted time series. The uncertainties of the model parameter estimates are used as a measure of how much information was lost compared to complete time series. A case study was set up to systematically investigate different experimental schedules for throughput scenarios ranging from 1 to 12 samples h⁻¹. Selected designs with only a small number of data points were found to be sufficient for adequate parameter estimation, implying that the presented approach enables a substantial increase in sample throughput. The presented general framework for automated generation and evaluation of experimental schedules allows the determination of a maximal sample throughput and the respective optimal measurement schedule, depending on the required statistical reliability of data acquired in future experiments.
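
    The rotating scheme itself is simple to generate; a round-robin sketch with assumed slot lengths follows (the schedules in the study were evaluated against the transport model, not fixed in advance).

      def rotating_schedule(n_plants, slot_min, total_min):
          # round-robin: each plant is measured for one slot, then the
          # detector moves on, returning after (n_plants - 1) slots
          schedule, t, plant = [], 0, 0
          while t + slot_min <= total_min:
              schedule.append((t, plant))
              t += slot_min
              plant = (plant + 1) % n_plants
          return schedule

      # e.g. 6 plants sharing one detector in 10-minute slots over 3 hours
      for start, plant in rotating_schedule(6, 10, 180)[:8]:
          print(f"t = {start:3d} min -> plant {plant}")

    Each plant's time series then contains gaps of (n_plants - 1) slots, which a mechanistic transport model can bridge while conventional time-series analysis cannot.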

  5. A Modular Low-Complexity ECG Delineation Algorithm for Real-Time Embedded Systems.

    PubMed

    Bote, Jose Manuel; Recas, Joaquin; Rincon, Francisco; Atienza, David; Hermida, Roman

    2018-03-01

    This work presents a new modular and low-complexity algorithm for the delineation of the different ECG waves (QRS, P, and T peaks, onsets, and ends). Involving a reduced number of operations per second and having a small memory footprint, this algorithm is intended to perform real-time delineation on resource-constrained embedded systems. The modular design allows the algorithm to adjust the delineation quality automatically at runtime across a wide range of modes and sampling rates: from an ultralow-power mode when no arrhythmia is detected, in which the ECG is sampled at low frequency, to a complete high-accuracy delineation mode in the case of arrhythmia, in which the ECG is sampled at high frequency and all the ECG fiducial points are detected. The delineation algorithm has been adjusted using the QT database, providing very high sensitivity and positive predictivity, and validated with the MIT database. The errors in the delineation of all the fiducial points are below the tolerances given by the Common Standards for Electrocardiography Committee in the high-accuracy mode, except for the P wave onset, for which the algorithm exceeds the agreed tolerances by only a fraction of the sample duration. The computational load for the ultralow-power 8-MHz TI MSP430 series microcontroller ranges from 0.2% to 8.5% according to the mode used.

  6. Rotating magnetic field experiments in a pure superconducting Pb sphere

    NASA Astrophysics Data System (ADS)

    Vélez, Saül; García-Santiago, Antoni; Hernandez, Joan Manel; Tejada, Javier

    2009-10-01

    The magnetic properties of a sphere of pure type-I superconducting lead (Pb) under rotating magnetic fields have been investigated in different experimental conditions by measuring the voltage generated in a set of detection coils by the response of the sample to the time variation of the magnetic field. The influence of the frequency of rotation of the magnet, the time taken to record each data point, and the temperature of the sample during the measuring process is explored. A strong reduction in the thermodynamic critical field and the onset of hysteretic effects in the magnetic field dependence of the amplitude of the magnetic susceptibility are observed for large frequencies and large values of the recording time. Heating of the sample during the motion of normal zones in the intermediate state, and the dominance of a resistive term in the contribution of Lenz's law to the magnetic susceptibility in the normal state under time-varying magnetic fields, are suggested as possible explanations for these effects.

  7. Stability of Alprostadil in 0.9% Sodium Chloride Stored in Polyvinyl Chloride Containers.

    PubMed

    McCluskey, Susan V; Kirkham, Kylian; Munson, Jessica M

    2017-01-01

    The stability of alprostadil diluted in 0.9% sodium chloride and stored in polyvinyl chloride (VIAFLEX) containers at refrigerated temperature, protected from light, is reported. Five solutions of alprostadil 11 mcg/mL were prepared in 250 mL 0.9% sodium chloride polyvinyl chloride (PL146) containers. The final concentration of alcohol was 2%. Samples were stored under refrigeration (2°C to 8°C) with protection from light. Two containers were submitted for potency testing and analyzed in duplicate with the stability-indicating high-performance liquid chromatography assay at specific time points over 14 days. Three containers were submitted for pH and visual testing at specific time points over 14 days. Stability was defined as retention of 90% to 110% of the initial alprostadil concentration, with maintenance of the original clear, colorless, and visually particulate-free solution. Study results showed retention of 90% to 110% of the initial alprostadil concentration at all time points through day 10. One sample exceeded 110% potency at day 14. pH values did not change appreciably over the 14 days. There were no color changes or particle formation detected in the solutions over the study period. This study concluded that during refrigerated, light-protected storage in polyvinyl chloride (VIAFLEX) containers, a commercial alcohol-containing alprostadil formulation diluted to 11 mcg/mL with 0.9% sodium chloride 250 mL was stable for 10 days. Copyright© by International Journal of Pharmaceutical Compounding, Inc.

  8. Assessment of Biodosimetry Methods for a Mass-Casualty Radiological Incident: Medical Response and Management Considerations

    PubMed Central

    Sullivan, Julie M.; Prasanna, Pataje G. S.; Grace, Marcy B.; Wathen, Lynne; Wallace, Rodney L.; Koerner, John F.; Coleman, C. Norman

    2013-01-01

    Following a mass-casualty nuclear disaster, effective medical triage has the potential to save tens of thousands of lives. In order to best use the available scarce resources, there is an urgent need for biodosimetry tools to determine an individual’s radiation dose. Initial triage for radiation exposure will include location during the incident, symptoms, and physical examination. Stepwise triage will include point of care assessment of less than or greater than 2 Gy, followed by secondary assessment, possibly with high throughput screening, to further define an individual’s dose. Given the multisystem nature of radiation injury, it is unlikely that any single biodosimetry assay can be used as a stand-alone tool to meet the surge in capacity with the timeliness and accuracy needed. As part of the national preparedness and planning for a nuclear or radiological incident, we reviewed the primary literature to determine the capabilities and limitations of a number of biodosimetry assays currently available or under development for use in the initial and secondary triage of patients. Understanding the requirements from a response standpoint and the capability and logistics for the various assays will help inform future biodosimetry technology development and acquisition. Factors considered include: type of sample required, dose detection limit, time interval when the assay is feasible biologically, time for sample preparation and analysis, ease of use, logistical requirements, potential throughput, point-of-care capability, and the ability to support patient diagnosis and treatment within a therapeutically relevant time point. PMID:24162058

  9. Analysis of ground-measured and passive-microwave-derived snow depth variations in midwinter across the Northern Great Plains

    USGS Publications Warehouse

    Chang, A.T.C.; Kelly, R.E.J.; Josberger, E.G.; Armstrong, R.L.; Foster, J.L.; Mognard, N.M.

    2005-01-01

    Accurate estimation of snow mass is important for the characterization of the hydrological cycle at different space and time scales. For effective water resources management, accurate estimation of snow storage is needed. Conventionally, snow depth is measured at a point, and in order to monitor snow depth in a temporally and spatially comprehensive manner, optimum interpolation of the points is undertaken. Yet the spatial representation of point measurements at a basin or on a larger distance scale is uncertain. Spaceborne scanning sensors, which cover a wide swath and can provide rapid repeat global coverage, are ideally suited to augment the global snow information. Satellite-borne passive microwave sensors have been used to derive snow depth (SD) with some success. The uncertainties in point SD and areal SD of natural snowpacks need to be understood if comparisons are to be made between a point SD measurement and satellite SD. In this paper three issues are addressed relating satellite derivation of SD and ground measurements of SD in the northern Great Plains of the United States from 1988 to 1997. First, it is shown that in comparing samples of ground-measured point SD data with satellite-derived 25 × 25 km² pixels of SD from the Defense Meteorological Satellite Program Special Sensor Microwave Imager, there are significant differences in yearly SD values even though the accumulated datasets showed similarities. Second, from variogram analysis, the spatial variability of SD from each dataset was comparable. Third, for a sampling grid cell domain of 1° × 1° in the study terrain, 10 distributed snow depth measurements per cell are required to produce a sampling error of 5 cm or better. This study has important implications for validating SD derivations from satellite microwave observations. © 2005 American Meteorological Society.
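
    The third finding is consistent with the textbook standard-error calculation sketched below; the spatial standard deviation is an assumed value, and spatial correlation between measurements (which the variogram analysis addresses) is ignored.

      import math

      sigma = 15.0      # assumed spatial std. dev. of snow depth in a cell (cm)
      target_se = 5.0   # desired sampling error of the cell mean (cm)

      # SE = sigma / sqrt(n)  =>  n = (sigma / SE)^2 for independent points
      n = math.ceil((sigma / target_se) ** 2)
      print(f"point measurements required per grid cell: {n}")   # -> 9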

  10. Dynamic laser speckle analyzed considering inhomogeneities in the biological sample

    NASA Astrophysics Data System (ADS)

    Braga, Roberto A.; González-Peña, Rolando J.; Viana, Dimitri Campos; Rivera, Fernando Pujaico

    2017-04-01

    The dynamic laser speckle phenomenon allows a contactless and nondestructive way to monitor biological changes, quantified by second-order statistics applied to the images in time using a secondary matrix known as the time history of the speckle pattern (THSP). To save processing time, the traditional way to build the THSP restricts the data to a single line or column of the image. Our hypothesis is that this spatial restriction of the information could compromise the results, particularly when undesirable and unexpected optical inhomogeneities occur, such as in cell culture media. We tested a spatially random approach to collecting the points that form a THSP. Cells in a culture medium and drying paint, representing homogeneous samples at different levels, were tested, and a comparison with the traditional method was carried out. An alternative random selection based on a Gaussian distribution around a desired position was also presented. The results showed that the traditional protocol presented higher variation than the outcomes of the random method. The higher the inhomogeneity of the activity map, the higher the efficiency of the proposed method using random points. The Gaussian distribution proved to be useful when there was a well-defined area to monitor.
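
    A sketch of the two point-selection schemes applied to a speckle image stack follows; the array shapes, point count, and Gaussian spread are assumptions.

      import numpy as np

      def thsp(stack, n_points=256, mode="uniform", center=None, sigma=10, seed=0):
          # build a THSP matrix from a stack of shape (T, H, W):
          # one row per selected pixel, one column per time step
          T, H, W = stack.shape
          rng = np.random.default_rng(seed)
          if mode == "uniform":                  # random points over the image
              ys = rng.integers(0, H, n_points)
              xs = rng.integers(0, W, n_points)
          else:                                  # Gaussian cloud around a spot
              cy, cx = center
              ys = np.clip(rng.normal(cy, sigma, n_points).astype(int), 0, H - 1)
              xs = np.clip(rng.normal(cx, sigma, n_points).astype(int), 0, W - 1)
          return stack[:, ys, xs].T              # shape (n_points, T)

      # hypothetical stack of 128 speckle frames of 200 x 200 pixels
      stack = np.random.default_rng(1).integers(0, 256, (128, 200, 200)).astype(float)
      m_uniform = thsp(stack, mode="uniform")
      m_gauss = thsp(stack, mode="gauss", center=(100, 100), sigma=8)
      print(m_uniform.shape, m_gauss.shape)      # (256, 128) (256, 128)

    The usual second-order statistics (e.g., the inertia moment of the co-occurrence matrix) are then computed on the resulting THSP rows exactly as in the traditional line-based construction.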

  11. Fetal exposures and perinatal influences on the stool microbiota of premature infants

    PubMed Central

    Chernikova, Diana A.; Koestler, Devin C.; Hoen, Anne Gatewood; Housman, Molly L.; Hibberd, Patricia L.; Moore, Jason H.; Morrison, Hilary G.; Sogin, Mitchell L.; Ul-Abideen, Muhammad Zain; Madan, Juliette C.

    2015-01-01

    Objective To test the hypothesis that maternal complications significantly affect gut colonization patterns in very low birth weight infants. Methods 49 serial stool samples were obtained weekly from 9 extremely premature infants enrolled in a prospective longitudinal study. Sequencing of the bacterial 16S rRNA gene from stool samples was performed to approximate the intestinal microbiome. Linear mixed effects models were used to evaluate relationships between perinatal complications and intestinal microbiome development. Results Subjects with prenatal exposure to a non-sterile intrauterine environment, i.e., PPROM and chorioamnionitis, were found to have a relatively higher abundance of potentially pathogenic bacteria in the stool across all time points compared to subjects without those exposures, irrespective of exposure to postnatal antibiotics. Compared with those delivered by Caesarean section, vaginally delivered subjects were found to have significantly lower diversity of stool microbiota across all time points, with lower abundance of many genera, most in the family Enterobacteriaceae. Conclusions We identified persistently increased potential pathogen abundance in the developing stool microbiota of subjects exposed to a non-sterile uterine environment. Maternal complications appear to significantly influence the diversity and bacterial composition of the stool microbiota of premature infants, with findings persisting over time. PMID:25394613

  12. A coupled hidden Markov model for disease interactions

    PubMed Central

    Sherlock, Chris; Xifara, Tatiana; Telfer, Sandra; Begon, Mike

    2013-01-01

    To investigate interactions between parasite species in a host, a population of field voles was studied longitudinally, with presence or absence of six different parasites measured repeatedly. Although trapping sessions were regular, a different set of voles was caught at each session, leading to incomplete profiles for all subjects. We use a discrete time hidden Markov model for each disease with transition probabilities dependent on covariates via a set of logistic regressions. For each disease the hidden states for each of the other diseases at a given time point form part of the covariate set for the Markov transition probabilities from that time point. This allows us to gauge the influence of each parasite species on the transition probabilities for each of the other parasite species. Inference is performed via a Gibbs sampler, which cycles through each of the diseases, first using an adaptive Metropolis–Hastings step to sample from the conditional posterior of the covariate parameters for that particular disease given the hidden states for all other diseases and then sampling from the hidden states for that disease given the parameters. We find evidence for interactions between several pairs of parasites and of an acquired immune response for two of the parasites. PMID:24223436
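
    The coupling structure can be sketched as a forward simulation in a few lines of Python: each disease's transition probability is a logistic regression on its own previous hidden state and the other diseases' hidden states at the previous time point. All coefficients below are hypothetical, and the Gibbs/adaptive Metropolis-Hastings inference machinery of the paper is omitted.

      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      rng = np.random.default_rng(0)
      n_dis, T = 6, 52                     # six parasites, weekly time points
      z = np.zeros((n_dis, T))
      z[:, 0] = rng.integers(0, 2, n_dis)  # initial hidden infection states

      # hypothetical coefficients: beta_other[d, j] couples parasite j into
      # the transition probability of parasite d
      beta0 = np.full(n_dis, -2.5)
      beta_stay = np.full(n_dis, 3.0)      # infection tends to persist
      beta_other = rng.normal(0.0, 0.7, (n_dis, n_dis))
      np.fill_diagonal(beta_other, 0.0)    # own state enters via beta_stay

      for t in range(1, T):
          for d in range(n_dis):
              lin = (beta0[d] + beta_stay[d] * z[d, t - 1]
                     + beta_other[d] @ z[:, t - 1])
              z[d, t] = rng.random() < sigmoid(lin)

      print("simulated prevalence per parasite:", z.mean(axis=1).round(2))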

  13. Experimental investigation of fatigue behavior of carbon fiber composites using fully-reversed four-point bending test

    NASA Astrophysics Data System (ADS)

    Amiri, Ali

    Carbon fiber reinforced polymers (CFRP) have become an increasingly notable material for structural engineering applications. Their advantages include a high strength-to-weight ratio, a high stiffness-to-weight ratio, and good moldability. Prediction of the fatigue life of composite laminates has been the subject of various studies owing to the cyclic loading experienced in many applications. Both theoretical studies and experimental tests have been performed to estimate the endurance limit and fatigue life of composite plates. One of the main methods for predicting fatigue life is the four-point bending test. In most previous works, the tests were performed in one direction only (load ratio R > 0). In the current work, we designed and manufactured a special fixture to perform fully reversed bending tests (R = -1). Static four-point bending tests were carried out on three (0°/90°)₁₅ and three (±45°)₁₅ samples to measure the mechanical properties of the CFRP. Testing was displacement-controlled at a rate of 10 mm/min until failure. All (0°/90°)₁₅ samples failed by cracking/buckling on the compressive side, while in all three (±45°)₁₅ tests no visible fracture or failure of the samples was observed. Stresses 3.4 times higher were reached during the four-point static bending tests of the (0°/90°)₁₅ samples compared with the (±45°)₁₅ samples; the same trend has been reported in the literature for similar tests. Four-point bending fatigue tests were carried out on (0°/90°)₁₅ samples with stress ratio R = -1 and a frequency of 5 Hz. The applied maximum stresses were approximately 45%, 56%, 67%, 72%, and 76% of the measured yield stress of the (0°/90°)₁₅ samples, and visible cracking developed through the thickness of the samples. The expected downward trend in fatigue life with increasing maximum applied stress was observed in the S-N curves. There appears to be a threshold for 'infinite' life, defined as 1.7 million cycles in the current work, at a maximum stress of about 200 MPa. The decay of the flexural modulus of the beam under cyclic loading was calculated, and the flexural modulus was seen to follow an exponential decay, which can be expressed as E = E₀e^(AN). Four-point bending fatigue tests were also carried out on three (±45°)₁₅ samples with stress ratio R = -1 and a frequency of 5 Hz; the maximum applied stress was 85% of the measured yield stress of the (±45°)₁₅ samples. None of the samples failed, nor was any sign of cracking seen; tests were stopped once the number of cycles passed 1.7×10⁶. Overall, the current study provides additional insight into the fatigue and static behavior of polymer composites and the effect of fiber orientation on their mechanical behavior.
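
    Fitting the reported exponential decay law is a one-line log-linear regression; the modulus readings below are hypothetical.

      import numpy as np

      # hypothetical flexural-modulus readings E (GPa) at cycle counts N
      N = np.array([1e3, 1e4, 1e5, 5e5, 1e6, 1.5e6])
      E = np.array([52.0, 50.8, 48.5, 44.1, 40.2, 37.5])

      # fit E = E0 * exp(A * N) via least squares on log(E) = log(E0) + A * N
      A, logE0 = np.polyfit(N, np.log(E), 1)
      print(f"E0 = {np.exp(logE0):.1f} GPa, A = {A:.3e} per cycle (A < 0)")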

  14. Electrochemical Aptamer-Based Sensors for Rapid Point-of-Use Monitoring of the Mycotoxin Ochratoxin A Directly in a Food Stream.

    PubMed

    Somerson, Jacob; Plaxco, Kevin W

    2018-04-15

    The ability to measure the concentration of specific small molecules continuously and in real time in complex sample streams would impact many areas of agriculture, food safety, and food production. Monitoring for mycotoxin taint in real time during food processing, for example, could improve public health. Towards this end, we describe here an inexpensive electrochemical DNA-based sensor that supports real-time monitoring of the mycotoxin ochratoxin A in a flowing stream of foodstuffs.

  15. NEW APPROACHES TO ESTIMATION OF SOLID-WASTE QUANTITY AND COMPOSITION

    EPA Science Inventory

    Efficient and statistically sound sampling protocols for estimating the quantity and composition of solid waste over a stated period of time in a given location, such as a landfill site or at a specific point in an industrial or commercial process, are essential to the design ...

  16. Social Experiences in Infancy and Early Childhood Co-Sleeping

    ERIC Educational Resources Information Center

    Hayes, Marie J.; Fukumizu, Michio; Troese, Marcia; Sallinen, Bethany A.; Gilles, Allyson A.

    2007-01-01

    Infancy and early childhood sleep-wake behaviours from current and retrospective parental reports were used to explore the relationship between sleeping arrangements and parent-child nighttime interactions at both time points. Children (N = 45) from educated, middle-class families, mostly breastfed in infancy, composed a convenience sample that…

  17. Least-mean-square spatial filter for IR sensors.

    PubMed

    Takken, E H; Friedman, D; Milton, A F; Nitzberg, R

    1979-12-15

    A new least-mean-square filter is defined for signal-detection problems. The technique is proposed for scanning IR surveillance systems operating in poorly characterized but primarily low-frequency clutter interference. Near-optimal detection of point-source targets is predicted both for continuous-time and sampled-data systems.
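
    A minimal sketch of an LMS clutter canceller of this general kind follows: an adaptive linear predictor learns the slowly varying clutter, and point-source transients survive in the prediction residual. The filter order, step size, and signals are assumptions, not the filter defined in the paper.

      import numpy as np

      def lms_clutter_filter(x, order=8, mu=0.01):
          # predict each sample from the previous `order` samples; the
          # prediction error passes point-target transients while
          # suppressing the low-frequency clutter
          w = np.zeros(order)
          resid = np.zeros_like(x)
          for n in range(order, len(x)):
              past = x[n - order:n][::-1]     # most recent sample first
              e = x[n] - w @ past             # prediction error (output)
              w += mu * e * past              # LMS weight update
              resid[n] = e
          return resid

      rng = np.random.default_rng(0)
      t = np.arange(2000)
      x = np.sin(2 * np.pi * t / 400) + 0.1 * rng.normal(size=t.size)
      x[1200] += 3.0                          # point-source transient
      out = lms_clutter_filter(x)
      print(f"peak residual at sample {np.argmax(np.abs(out))}")  # near 1200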

  18. Early Handedness in Infancy Predicts Language Ability in Toddlers

    ERIC Educational Resources Information Center

    Nelson, Eliza L.; Campbell, Julie M.; Michel, George F.

    2014-01-01

    Researchers have long been interested in the relationship between handedness and language in development. However, traditional handedness studies using single age groups, small samples, or too few measurement time points have not capitalized on individual variability and may have masked 2 recently identified patterns in infants: those with a…

  19. Preparation of Solid Derivatives by Differential Scanning Calorimetry.

    ERIC Educational Resources Information Center

    Crandall, E. W.; Pennington, Maxine

    1980-01-01

    Describes the preparation of selected aldehydes and ketones, alcohols, amines, phenols, haloalkanes, and tertiaryamines by differential scanning calorimetry. Technique is advantageous because formation of the reaction product occurs and the melting point of the product is obtained on the same sample in a short time with no additional purification…

  20. Interaction of pulsating and spinning waves in condensed phase combustion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booty, M.R.; Margolis, S.B.; Matkowsky, B.J.

    1986-10-01

    The authors employ a nonlinear stability analysis in the neighborhood of a multiple bifurcation point to describe the interaction of pulsating and spinning modes of condensed phase combustion. Such phenomena occur in the synthesis of refractory materials. In particular, they consider the propagation of combustion waves in a long thermally insulated cylindrical sample and show that steady, planar combustion is stable for a modified activation energy/melting parameter less than a critical value. Above this critical value primary bifurcation states, corresponding to time-periodic pulsating and spinning modes of combustion, emanate from the steadily propagating solution. By varying the sample radius, the authors split a multiple bifurcation point to obtain bifurcation diagrams which exhibit secondary, tertiary, and quaternary branching to various types of quasi-periodic combustion waves.

  1. Improved detection limits for electrospray ionization on a magnetic sector mass spectrometer by using an array detector.

    PubMed

    Cody, R B; Tamura, J; Finch, J W; Musselman, B D

    1994-03-01

    Array detection was compared with point detection for solutions of hen egg-white lysozyme, equine myoglobin, and ubiquitin analyzed by electrospray ionization with a magnetic sector mass spectrometer. The detection limits for samples analyzed by using the array detector system were at least 10 times lower than could be achieved by using a point detector on the same mass spectrometer. The minimum detectable quantity of protein corresponded to a signal-to-background ratio of approximately 2:1 for a 500 amol/μL solution of hen egg-white lysozyme. However, the ultimate practical sample concentrations appeared to be in the 10-100 fmol/μL range for the analysis of dilute solutions of relatively pure proteins or simple mixtures.

  2. Human presence impacts fungal diversity of inflated lunar/Mars analog habitat.

    PubMed

    Blachowicz, A; Mayer, T; Bashir, M; Pieber, T R; De León, P; Venkateswaran, K

    2017-07-11

    An inflatable lunar/Mars analog habitat (ILMAH), a simulated closed system isolated by HEPA filtration, mimics International Space Station (ISS) conditions and future human habitation on other planets, except for the exchange of air between outdoor and indoor environments. The ILMAH was primarily commissioned to measure the physiological, psychological, and immunological characteristics of humans living in isolation, but it was also available for other studies, such as examining its microbiological aspects. Characterizing and understanding possible changes and succession of fungal species is of high importance, since fungi are not only hazardous to inhabitants but also deteriorate the habitats. Observing mycobiome changes in the presence of humans will enable the development of appropriate countermeasures for crew health in a future closed habitat. The succession of fungi was characterized utilizing both traditional and state-of-the-art molecular techniques during a 30-day human occupation of the ILMAH. Surface samples were collected at various time points and locations to observe both the total and viable fungal populations of common environmental and opportunistic pathogenic species. To estimate the cultivable fungal population, the potato dextrose agar plate count method was utilized. Internal transcribed spacer region-based iTag Illumina sequencing was employed to measure the community structure and fluctuation of the mycobiome over time in various locations. Treatment of samples with propidium monoazide (PMA; a DNA-intercalating dye for selective detection of viable microbial populations) had a significant effect on the measured microbial diversity compared to non-PMA-treated samples. Statistical analysis confirmed that the viable fungal community structure changed (increase in diversity and decrease in fungal burden) over the occupation time. Samples collected at day 20 showed fungal profiles distinct from samples collected at any other time point (before or after). Viable fungal families such as Davidiellaceae, Teratosphaeriaceae, Pleosporales, and Pleosporaceae were shown to increase during the occupation time. The results of this study revealed that the overall fungal diversity in the closed habitat changed during human presence; it is therefore crucial to properly maintain a closed habitat to preserve it from deterioration and keep it safe for its inhabitants. Differences in community profiles were observed when statistically treated, especially in the mycobiome of samples collected at day 20. At the genus level, Epicoccum, Alternaria, Pleosporales, Davidiella, and Cryptococcus showed increased abundance over the occupation time.

  3. Mass discharge in a tracer plume: Evaluation of the Theissen Polygon Method

    PubMed Central

    Mackay, Douglas M.; Einarson, Murray D.; Kaiser, Phil M.; Nozawa-Inoue, Mamie; Goyal, Sham; Chakraborty, Irina; Rasa, Ehsan; Scow, Kate M.

    2013-01-01

    A tracer plume was created within a thin aquifer by injecting, for 299 days, two adjacent “sub-plumes”, to represent one type of plume heterogeneity encountered in practice. The plume was monitored by snapshot sampling of transects of fully screened wells. The mass injection rate and total mass injected were known. Using all wells in each transect (0.77 m well spacing, 1.4 points/m² sampling density), the Theissen Polygon Method (TPM) yielded apparently accurate mass discharge (Md) estimates at 3 transects for 12 snapshots. When applied to hypothetical sparser transects using subsets of the wells, with average spacing from 1.55 to 5.39 m and sampling density from 0.70 to 0.20 points/m², the accuracy of the TPM depended on the well spacing and on the location of the wells in the hypothesized transect with respect to the sub-plumes. Potential error was relatively low when the well spacing was less than the widths of the sub-plumes (> 0.35 points/m²). Potential error increased for well spacing similar to or greater than the sub-plume widths, or when less than 1% of the plume area was sampled. For low-density sampling of laterally heterogeneous plumes, small changes in groundwater flow direction can lead to wide fluctuations in Md estimates by the TPM. However, sampling conducted when flow is known or likely to be in a preferred direction can potentially allow more useful comparisons of Md over multiyear time frames, such as required for the performance evaluation of natural attenuation or engineered remediation systems. PMID:22324777
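
    The TPM estimate itself is a short computation: each well is assigned a polygon (for a transect, an interval bounded by the midpoints to neighbouring wells), and Md is the sum of concentration x specific discharge x polygon area. The sketch below uses hypothetical concentrations and a uniform Darcy flux.

      import numpy as np

      def mass_discharge_tpm(positions, conc, q, thickness):
          # polygon widths from midpoints between neighbouring wells,
          # extended symmetrically at the transect ends
          x = np.asarray(positions, float)
          mids = (x[:-1] + x[1:]) / 2
          edges = np.concatenate([[2 * x[0] - mids[0]], mids,
                                  [2 * x[-1] - mids[-1]]])
          areas = np.diff(edges) * thickness      # m^2 per well
          return np.sum(np.asarray(conc) * np.asarray(q) * areas)

      # hypothetical transect: 7 wells 0.77 m apart in a 1 m thick aquifer
      xs = np.arange(7) * 0.77
      c = np.array([0.0, 0.2, 2.5, 0.4, 1.8, 0.3, 0.0])  # g/m^3
      q = np.full(7, 0.05)                               # Darcy flux, m/day
      print(f"Md = {mass_discharge_tpm(xs, c, q, 1.0):.3f} g/day")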

  4. Collection, transport and general processing of clinical specimens in Microbiology laboratory.

    PubMed

    Sánchez-Romero, M Isabel; García-Lechuz Moya, Juan Manuel; González López, Juan José; Orta Mira, Nieves

    2018-02-06

    The interpretation and accuracy of microbiological results still depend to a great extent on the quality of the samples and their processing within the Microbiology laboratory. The type of specimen, the appropriate time to obtain the sample, the way of sampling, and the storage and transport are critical points in the diagnostic process. The availability of new laboratory techniques for unusual pathogens makes it necessary to review and update all the steps involved in the processing of the samples. Nowadays, laboratory automation and the availability of rapid techniques provide the precision and turnaround time necessary to help clinicians in decision making. To be efficient, it is very important to obtain clinical information in order to choose the best diagnostic tools. Copyright © 2018 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  5. Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests

    Treesearch

    Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford

    1995-01-01

    To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...

  6. Sample size and classification error for Bayesian change-point models with unlabelled sub-groups and incomplete follow-up.

    PubMed

    White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E

    2018-05-01

    Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another trajectory at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change class and a no-change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
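
    A data-generating sketch of the extended model follows: broken-stick trajectories with an unlabelled 'change' sub-group and random dropout. The parameter values are illustrative assumptions, and the Bayesian fitting step is omitted.

      import numpy as np

      def broken_stick(t, b0, b1, b2, tc):
          # two joined linear segments: slope b1 before the change-point
          # tc, slope b1 + b2 after it
          return b0 + b1 * t + b2 * np.maximum(t - tc, 0.0)

      rng = np.random.default_rng(0)
      n, waves = 500, np.arange(0, 10, 2.0)    # five biennial assessments
      tc, sd = 6.0, 1.0
      change = rng.random(n) < 0.3             # unlabelled 'change' class

      y = np.empty((n, waves.size))
      for i in range(n):
          b2 = -1.5 if change[i] else 0.0      # accelerated decline after tc
          y[i] = broken_stick(waves, 28.0, -0.2, b2, tc) \
                 + rng.normal(0, sd, waves.size)

      # incomplete follow-up: each subject drops out after a random wave
      for i, cut in enumerate(rng.integers(2, waves.size + 1, n)):
          y[i, cut:] = np.nan

      print(f"{change.sum()} decliners; "
            f"{np.isnan(y).mean():.0%} of observations missing")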

  7. A new method of regional CBF measurement using one point arterial sampling based on microsphere model with I-123 IMP SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odano, I.; Takahashi, N.; Ohkubo, M.

    1994-05-01

    We developed a new method for the quantitative measurement of rCBF with iodine-123 IMP based on the microsphere model; the method is accurate, simpler, and less invasive than the continuous withdrawal method. IMP is assumed to behave as a chemical microsphere in the brain, so regional CBF can be measured by continuous withdrawal of arterial blood under the microsphere model as F = Cb(t) / (∫Ca(t)dt × N), where F is rCBF (ml/100 g/min), Cb(t) is the brain activity concentration, ∫Ca(t)dt is the total activity of the arterial whole blood withdrawn, and N is the fraction of ∫Ca(t)dt that is true tracer activity. We analyzed 14 patients. A dose of 222 MBq of IMP was injected i.v. over 1 min, and arterial blood was withdrawn continuously from 0 to 5 min (giving ∫Ca(t)dt), after which single arterial blood samples (one-point Ca(t)) were obtained at 5, 6, 7, 8, 9, and 10 min. The integral ∫Ca(t)dt was then mathematically inferred from the value of the one-point Ca(t). Examining the correlation between ∫Ca(t)dt × N and the one-point Ca(t), and the % error of the one-point estimate relative to ∫Ca(t)dt × N, the minimum % error was 8.1% and the maximum correlation coefficient was 0.943, both obtained at 6 min. We concluded that 6 min is the best time to take the arterial blood sample in the one-point sampling method for estimating ∫Ca(t)dt × N. IMP SPECT studies were performed with a ring-type SPECT scanner. Compared with rCBF measured by the Xe-133 method, a significant correlation was observed for this method (r = 0.773). The one-point Ca(t) method allows easy and rapid measurement of rCBF without inserting catheters and without arterial blood treatment with octanol.
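
    The arithmetic of the one-point method can be sketched directly from the formula; the calibration line relating the 6-min sample to the integrated arterial input, and all numbers below, are hypothetical.

      def rcbf_one_point(cb, ca_6min, slope, intercept, n_frac=0.85):
          # microsphere model: F = Cb / (Int_Ca * N), with the integrated
          # input Int_Ca inferred from the single 6-min sample through a
          # population calibration line Int_Ca = slope * Ca(6 min) + intercept
          int_ca = slope * ca_6min + intercept
          return cb / (int_ca * n_frac)

      # hypothetical values: brain activity 120, 6-min arterial sample 30,
      # calibration slope 4.0 and intercept 5.0 from a reference group
      print(f"F = {rcbf_one_point(120.0, 30.0, 4.0, 5.0):.2f} (arbitrary units)")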

  8. Gas chromatography/principal component similarity system for detection of E. coli and S. aureus contaminating salmon and hamburger.

    PubMed

    Nakai, S; Wang, Z H; Dou, J; Nakamura, S; Ogawa, M; Nakai, E; Vanderstoep, J

    1999-02-01

    Coho, Atlantic, Spring, and Sockeye salmon and five commercial samples of hamburger patties were analyzed by processing gas chromatography (GC) data of volatile compounds using the principal component similarity (PCS) technique. PCS scattergrams of the samples inoculated with Escherichia coli and Staphylococcus aureus and then incubated showed pattern-shift lines moving away from the data point of the uninoculated, unincubated reference samples in different directions with increasing incubation time. When the PCS scattergrams were drawn for samples incubated overnight, the samples inoculated with the two bacterial species and the uninoculated samples appeared as three separate groups. This GC/PCS approach has the potential to ensure sample quality by discriminating good samples from potentially spoiled samples; the latter may require further microbial assays to identify the bacterial species potentially contaminating the foods.

  9. Integrated crystal mounting and alignment system for high-throughput biological crystallography

    DOEpatents

    Nordmeyer, Robert A.; Snell, Gyorgy P.; Cornell, Earl W.; Kolbe, William F.; Yegian, Derek T.; Earnest, Thomas N.; Jaklevic, Joseph M.; Cork, Carl W.; Santarsiero, Bernard D.; Stevens, Raymond C.

    2007-09-25

    A method and apparatus for the transportation, remote and unattended mounting, and visual alignment and monitoring of protein crystals for synchrotron-generated x-ray diffraction analysis. The protein samples are maintained at liquid nitrogen temperatures at all times: during shipment, before and during mounting, alignment, data acquisition, and following removal. The samples must additionally be stably aligned to within a few microns at a point in space. The ability to accurately perform these tasks remotely and automatically leads to a significant increase in sample throughput and reliability for high-volume protein characterization efforts. Since the protein samples are placed in a shipping-compatible layered stack of sample cassettes each holding many samples, a large number of samples can be shipped in a single cryogenic shipping container.

  10. Integrated crystal mounting and alignment system for high-throughput biological crystallography

    DOEpatents

    Nordmeyer, Robert A.; Snell, Gyorgy P.; Cornell, Earl W.; Kolbe, William; Yegian, Derek; Earnest, Thomas N.; Jaklevic, Joseph M.; Cork, Carl W.; Santarsiero, Bernard D.; Stevens, Raymond C.

    2005-07-19

    A method and apparatus for the transportation, remote and unattended mounting, and visual alignment and monitoring of protein crystals for synchrotron-generated x-ray diffraction analysis. The protein samples are maintained at liquid nitrogen temperatures at all times: during shipment, before and during mounting, alignment, data acquisition, and following removal. The samples must additionally be stably aligned to within a few microns at a point in space. The ability to accurately perform these tasks remotely and automatically leads to a significant increase in sample throughput and reliability for high-volume protein characterization efforts. Since the protein samples are placed in a shipping-compatible layered stack of sample cassettes each holding many samples, a large number of samples can be shipped in a single cryogenic shipping container.

  11. Development of a predictive limited sampling strategy for estimation of mycophenolic acid area under the concentration time curve in patients receiving concomitant sirolimus or cyclosporine.

    PubMed

    Figurski, Michal J; Nawrocki, Artur; Pescovitz, Mark D; Bouw, Rene; Shaw, Leslie M

    2008-08-01

    Limited sampling strategies for estimation of the area under the concentration time curve (AUC) for mycophenolic acid (MPA) co-administered with sirolimus (SRL) have not been previously evaluated. The authors developed and validated 68 regression models for estimation of MPA AUC for two groups of patients, one with concomitant SRL (n = 24) and the second with concomitant cyclosporine (n = 14), using various combinations of time points between 0 and 4 hours after drug administration. To provide as robust a model as possible, a dataset-splitting method similar to a bootstrap was used: the dataset was randomly split in half 100 times, and each time one half of the data was used to estimate the equation coefficients while the other half was used to test and validate the models. Final models were obtained by calculating the median values of the coefficients. Substantial differences were found in the pharmacokinetics of MPA between these groups. The mean MPA AUC as well as the standard deviation was much greater in the SRL group, 56.4 +/- 23.5 mg.h/L, compared with 30.4 +/- 11.0 mg.h/L in the cyclosporine group (P < 0.001). Mean maximum concentration was also greater in the SRL group: 16.4 +/- 7.7 mg/L versus 11.7 +/- 7.1 mg/L (P < 0.005). The second absorption peak in the pharmacokinetic profile, presumed to result from enterohepatic recycling of MPA glucuronide, was observed in 70% of the profiles in the SRL group and in 35% of profiles from the cyclosporine group. Substantial differences in the predictive performance of the regression models, based on the same time points, were observed between the two groups. The best model for the SRL group was based on the 0 (trough), 40-minute, and 4-hour time points, with R2, root mean squared error (RMSE), and predictive performance values of 0.82, 10.0, and 78%, respectively. In the cyclosporine group, the best model used the 0, 40-minute, and 2-hour time points, with R2, RMSE, and predictive performance values of 0.86, 4.1, and 83%, respectively. The model with 2 hours as the last time point is also recommended for the SRL group for practical reasons, with the above parameters being 0.77, 11.3, and 69%, respectively.
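
    The dataset-splitting procedure described here is easy to reproduce in outline: split the profiles in half repeatedly, fit an ordinary least-squares model of AUC on the chosen concentration time points, and take the median of the coefficients. A minimal sketch, assuming a concentration matrix C (patients x time points) and measured full AUCs; the function name and interface are illustrative, not from the paper:

        import numpy as np

        def lss_median_model(C, auc, n_splits=100, seed=0):
            # C: (n_patients, n_timepoints) concentrations at the chosen times;
            # auc: full AUC for each patient. Returns median [intercept, coefs...].
            rng = np.random.default_rng(seed)
            n = len(auc)
            X = np.column_stack([np.ones(n), C])
            coefs = []
            for _ in range(n_splits):
                half = rng.permutation(n)[: n // 2]  # estimation half
                beta, *_ = np.linalg.lstsq(X[half], auc[half], rcond=None)
                coefs.append(beta)  # held-out half would score predictive performance
            return np.median(coefs, axis=0)

    Predicted AUC for a new patient is then the intercept plus the dot product of the coefficients with that patient's concentrations at the model's time points.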

  12. Determination of the water retention of peat soils in the range of the permanent wilting point.

    NASA Astrophysics Data System (ADS)

    Nünning, Lena; Bechtold, Michel; Dettmann, Ullrich; Piayda, Arndt; Tiemeyer, Bärbel; Durner, Wolfgang

    2017-04-01

    Global coverage of peatlands is decreasing due to the use of peat for horticulture and the drainage of peatlands for agriculture and forestry. While alternatives to peat in horticulture exist, profitable agriculture on peatlands and climate protection are far more difficult to combine. Controlled water management, optimized to stabilize yields while reducing peat degradation, provides a promising path in this direction. For this goal, profound knowledge of the hydraulic properties of organic soils is essential, yet the literature on them is scarce. This study aimed to compare different methods of determining the water retention of organic soils in the dry range (pF 3 to 4.5). Three common methods were compared: two pressure-based apparatuses (ceramic plate vs. membrane, Eijkelkamp) and a dew point potentiameter (WP4C, Decagon Devices), which is based on the equilibrium between soil water potential and air humidity. Two types of organic soil samples were analyzed: i) samples wet from the field and ii) samples rewetted after oven-drying. Additional WP4C measurements were performed on samples from standard evaporation experiments directly after those experiments had finished. The results were: 1) no systematic differences between the pressure apparatuses and the WP4C measurements; 2) however, high moisture variability among the samples from the pressure apparatuses, as well as high variability of the WP4C measurements on these samples when they were removed from the devices, indicating that the applied pressure did not equilibrate well in all samples; 3) rewetted oven-dried samples showed up to three times lower soil moisture even after long equilibration times, i.e., a strong and long-lasting hysteresis effect that was greatest for less degraded peat samples; and 4) highly consistent WP4C measurements on samples from the end of the evaporation experiments. The results provide useful information for deriving reliable water retention characteristics for organic soils.

  13. Chirp-Z analysis for sol-gel transition monitoring.

    PubMed

    Martinez, Loïc; Caplain, Emmanuel; Serfaty, Stéphane; Griesmar, Pascal; Gouedard, Gérard; Gindre, Marcel

    2004-04-01

    Gelation is a complex reaction that transforms a liquid medium into a solid one: the gel. In the gel state, some gel materials (DMAP) have the singular property of ringing in an audible frequency range when a pulse is applied. Before the gelation point, no transmission of slow waves is observed; after the gelation point, the speed of sound in the gel rapidly increases from 0.1 to 10 m/s. The time evolution of the speed of sound can be measured, in the frequency domain, by following the frequency spacing of the resonance peaks obtained with the synchronous detection (SD) measurement method. Unfortunately, due to the constant frequency sampling rate, the relative error for low speeds (0.1 m/s) is 100%. In order to maintain a low, constant relative error over the whole range of the speed's time evolution, the chirp-Z transform (CZT) is used. This operation transforms a time-variant signal into a time-invariant one using only a time-dependent stretching factor (S). In the frequency domain, the CZT enables each spectrum collected from the time signals to be stretched. Blind identification of the S factor yields the complete time-evolution law of the speed of sound. Moreover, this method shows that the frequency bandwidth follows the same time law. These results point out that the minimum wavelength stays constant and depends only on the gel.
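
    The speed measurement rests on the spacing of resonance peaks: for a resonant cell of length L, successive modes are separated by df = v / (2 L). A minimal sketch of that step only (the CZT stretching itself is omitted); the cell length and the peak-prominence threshold are assumptions, not values from the paper:

        import numpy as np
        from scipy.signal import find_peaks

        def speed_from_resonances(spectrum, freqs, cell_length_m):
            # Successive resonances of a cell of length L are spaced df = v / (2 L),
            # so the speed of sound is v = 2 * L * df.
            peaks, _ = find_peaks(spectrum, prominence=0.1 * spectrum.max())
            df = np.mean(np.diff(freqs[peaks]))
            return 2.0 * cell_length_m * df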

  14. Measurements of tungsten migration in the DIII-D divertor

    NASA Astrophysics Data System (ADS)

    Wampler, W. R.; Rudakov, D. L.; Watkins, J. G.; McLean, A. G.; Unterberg, E. A.; Stangeby, P. C.

    2017-12-01

    An experimental study of migration of tungsten in the DIII-D divertor is described, in which the outer strike point of L-mode plasmas was positioned on a toroidal ring of tungsten-coated metal inserts. Net deposition of tungsten on the divertor just outside the strike point was measured on graphite samples exposed to various plasma durations using the divertor materials evaluation system. Tungsten coverage, measured by Rutherford backscattering spectroscopy (RBS), was found to be low and nearly independent of both radius and exposure time closer to the strike point, whereas farther from the strike point the W coverage was much larger and increased with exposure time. Depth profiles from RBS show this was due to accumulation of thicker mixed-material deposits farther from the strike point where the plasma temperature is lower. These results are consistent with a low near-surface steady-state coverage on graphite undergoing net erosion, and continuing accumulation in regions of net deposition. This experiment provides data needed to validate, and further improve computational simulations of erosion and deposition of material on plasma-facing components and transport of impurities in magnetic fusion devices. Such simulations are underway and will be reported later.

  15. Development of a Rapid Point-of-Use DNA Test for the Screening of Genuity® Roundup Ready 2 Yield® Soybean in Seed Samples.

    PubMed

    Chandu, Dilip; Paul, Sudakshina; Parker, Mathew; Dudin, Yelena; King-Sitzes, Jennifer; Perez, Tim; Mittanck, Don W; Shah, Manali; Glenn, Kevin C; Piepenburg, Olaf

    2016-01-01

    Testing for the presence of genetically modified material in seed samples is of critical importance for all stakeholders in the agricultural industry, including growers, seed manufacturers, and regulatory bodies. While rapid antibody-based testing for the transgenic protein has fulfilled this need in the past, the introduction of new variants of a given transgene demands new diagnostic regimens that allow different traits to be distinguished at the nucleic acid level. Although such molecular tests can be performed by PCR in the laboratory, their requirement for expensive equipment and sophisticated operation has prevented their uptake in point-of-use applications. A recently developed isothermal DNA amplification technique, recombinase polymerase amplification (RPA), combines simple sample preparation and amplification work-flow procedures with the use of minimal detection equipment in real time. Here, we report the development of a highly sensitive and specific RPA-based detection system for Genuity Roundup Ready 2 Yield (RR2Y) material in soybean (Glycine max) seed samples and present the results of studies applying the method in both laboratory and field-type settings.

  16. Dechlorane plus, a chlorinated flame retardant, in the Great Lakes.

    PubMed

    Hoh, Eunha; Zhu, Lingyan; Hites, Ronald A

    2006-02-15

    A highly chlorinated flame retardant, Dechlorane Plus (DP), was detected and identified in ambient air, fish, and sediment samples from the Great Lakes region. The identity of this compound was confirmed by comparing its gas chromatographic retention times and mass spectra with those of authentic material. This compound exists as two gas chromatographically separable stereoisomers (syn and anti), the structures of which were characterized by one- and two-dimensional proton nuclear magnetic resonance. DP was detected in most air samples, even at remote sites. The atmospheric DP concentrations were higher at the eastern Great Lakes sites (Sturgeon Point, NY, and Cleveland, OH) than those at the western Great Lakes sites (Eagle Harbor, MI, Chicago, IL, and Sleeping Bear Dunes, MI). At the Sturgeon Point site, DP concentrations once reached 490 pg/m3. DP atmospheric concentrations were comparable to those of BDE-209 at the eastern Great Lakes sites. DP was also found in sediment cores from Lakes Michigan and Erie. The peak DP concentrations were comparable to BDE-209 concentrations in the sediment core from Lake Erie but were about 30 times lower than BDE-209 concentrations in the core from Lake Michigan. In the sediment cores, the DP concentrations peaked around 1975-1980, and the surficial concentrations were 10-80% of peak concentrations. Higher DP concentrations in air samples from Sturgeon Point, NY, and in the sediment core from Lake Erie suggest that DP's manufacturing facility in Niagara Falls, NY, may be a source. DP was also detected in archived fish (walleye) from Lake Erie, suggesting that this compound is, at least partially, bioavailable.

  17. Optical biosensor technologies for molecular diagnostics at the point-of-care

    NASA Astrophysics Data System (ADS)

    Schotter, Joerg; Schrittwieser, Stefan; Muellner, Paul; Melnik, Eva; Hainberger, Rainer; Koppitsch, Guenther; Schrank, Franz; Soulantika, Katerina; Lentijo-Mozo, Sergio; Pelaz, Beatriz; Parak, Wolfgang; Ludwig, Frank; Dieckhoff, Jan

    2015-05-01

    Label-free optical schemes for molecular biosensing hold a strong promise for point-of-care applications in medical research and diagnostics. Apart from diagnostic requirements in terms of sensitivity, specificity, and multiplexing capability, other aspects such as ease of use and manufacturability also have to be considered in order to pave the way to a practical implementation. We present integrated optical waveguide as well as magnetic nanoparticle based molecular biosensor concepts that address these aspects. The integrated optical waveguide devices are based on low-loss photonic wires made of silicon nitride deposited by a CMOS-compatible plasma-enhanced chemical vapor deposition (PECVD) process that allows for backend integration of waveguides on optoelectronic CMOS chips. The molecular detection principle relies on evanescent wave sensing in the 0.85 μm wavelength regime by means of Mach-Zehnder interferometers, which enables on-chip integration of silicon photodiodes and, thus, the realization of system-on-chip solutions. Our nanoparticle-based approach relies on optical observation of the dynamic response of functionalized magnetic-core/noble-metal-shell nanorods ('nanoprobes') to an externally applied time-varying magnetic field. As target molecules specifically bind to the surface of the nanoprobes, the observed dynamics of the nanoprobes changes, and the concentration of target molecules in the sample solution can be quantified. This approach is suitable for dynamic real-time measurements and requires only minimal sample preparation, thus presenting a highly promising point-of-care diagnostic system. In this paper, we present a prototype of a diagnostic device suitable for highly automated sample analysis by our nanoparticle-based approach.

  18. Different Antibiotic Resistance and Sporulation Properties within Multiclonal Clostridium difficile PCR Ribotypes 078, 126, and 033 in a Single Calf Farm

    PubMed Central

    Zidaric, Valerija; Pardon, Bart; dos Vultos, Tiago; Deprez, Piet; Brouwer, Michael Sebastiaan Maria; Roberts, Adam P.; Henriques, Adriano O.

    2012-01-01

    Clostridium difficile strains were sampled periodically from 50 animals at a single veal calf farm over a period of 6 months. At arrival, 10% of animals were C. difficile positive, and the peak incidence was determined to occur at the age of 18 days (16%). The prevalence then decreased, and at slaughter, C. difficile could not be isolated. Six different PCR ribotypes were detected, and strains within a single PCR ribotype could be differentiated further by pulsed-field gel electrophoresis (PFGE). The PCR ribotype diversity was high up to the animal age of 18 days, but at later sampling points, PCR ribotype 078 and the highly related PCR ribotype 126 predominated. Resistance to tetracycline, doxycycline, and erythromycin was detected, while all strains were susceptible to amoxicillin and metronidazole. Multiple variations of the resistance gene tet(M) were present at the same sampling point, and these changed over time. We have shown that PCR ribotypes often associated with cattle (ribotypes 078, 126, and 033) were not clonal but differed in PFGE type, sporulation properties, antibiotic sensitivities, and tetracycline resistance determinants, suggesting that multiple strains of the same PCR ribotype infected the calves and that calves were likely to be infected prior to arrival at the farm. Importantly, strains isolated at later time points were more likely to be resistant to tetracycline and erythromycin and showed higher early sporulation efficiencies in vitro, suggesting that these two properties converge to promote the persistence of C. difficile in the environment or in hosts. PMID:23001653

  19. Repeat synoptic sampling reveals drivers of change in carbon and nutrient chemistry of Arctic catchments

    NASA Astrophysics Data System (ADS)

    Zarnetske, J. P.; Abbott, B. W.; Bowden, W. B.; Iannucci, F.; Griffin, N.; Parker, S.; Pinay, G.; Aanderud, Z.

    2017-12-01

    Dissolved organic carbon (DOC), nutrients, and other solute concentrations are increasing in rivers across the Arctic. Two hypotheses have been proposed to explain these trends: 1. distributed, top-down permafrost degradation, and 2. discrete, point-source delivery of DOC and nutrients from permafrost collapse features (thermokarst). While long-term monitoring at a single station cannot discriminate between these mechanisms, synoptic sampling of multiple points in the stream network could reveal the spatial structure of solute sources. In this context, we sampled carbon and nutrient chemistry three times over two years in 119 subcatchments of three distinct Arctic catchments (North Slope, Alaska). Subcatchments ranged from 0.1 to 80 km2, and included three distinct types of Arctic landscapes - mountainous, tundra, and glacial-lake catchments. We quantified the stability of spatial patterns in synoptic water chemistry and analyzed high-frequency time series from the catchment outlets across the thaw season to identify source areas for DOC, nutrients, and major ions. We found that variance in solute concentrations between subcatchments collapsed at spatial scales between 1 and 20 km2, indicating a continuum of diffuse- and point-source dynamics, depending on solute and catchment characteristics (e.g. reactivity, topography, vegetation, surficial geology). Spatially-distributed mass balance revealed conservative transport of DOC and nitrogen, and indicated that there may be strong in-stream retention of phosphorus, providing a network-scale confirmation of previous reach-scale studies in these Arctic catchments. Overall, we present new approaches to analyzing synoptic data for change detection and quantification of ecohydrological mechanisms in ecosystems in the Arctic and beyond.

  20. Topochemical Analysis of Cell Wall Components by TOF-SIMS.

    PubMed

    Aoki, Dan; Fukushima, Kazuhiko

    2017-01-01

    Time-of-flight secondary ion mass spectrometry (TOF-SIMS) is a developing analytical tool and a type of imaging mass spectrometry. TOF-SIMS provides mass spectral information with a lateral resolution on the order of submicrons, with widespread applicability. It is sometimes described as a surface analysis method that requires no sample pretreatment; however, several points need to be taken into account to exploit the capabilities of TOF-SIMS fully. In this chapter, we introduce methods for TOF-SIMS sample treatment, as well as basic knowledge of TOF-SIMS spectral and image data analysis for wood samples.

  1. Microorganism Response to Stressed Terrestrial Environments: A Raman Spectroscopic Perspective of Extremophilic Life Strategies

    NASA Astrophysics Data System (ADS)

    Jorge-Villar, Susana E.; Edwards, Howell G. M.

    2013-03-01

    Raman spectroscopy is a valuable analytical technique for the identification of biomolecules and minerals in natural samples, which involves little or minimal sample manipulation. In this paper, we evaluate the advantages and disadvantages of this technique applied to the study of extremophiles. Furthermore, we provide a review of the results published, up to the present point in time, of the bio- and geo-strategies adopted by different types of extremophile colonies of microorganisms. We also show the characteristic Raman signatures for the identification of pigments and minerals, which appear in those complex samples.

  2. Gravitational lensing of quasars as seen by the Hubble Space Telescope Snapshot Survey

    NASA Technical Reports Server (NTRS)

    Maoz, D.; Bahcall, J. N.; Doxsey, R.; Schneider, D. P.; Bahcall, N. A.; Lahav, O.; Yanny, B.

    1992-01-01

    Results from the ongoing HST Snapshot Survey are presented, with emphasis on 152 high-luminosity, z greater than 1 quasars. One quasar among those observed, 1208 + 1011, is a candidate lens system with subarcsecond image separation. Six other quasars have point sources within 6 arcsec. Ground-based observations of five of these cases show that the companion point sources are foreground Galactic stars. The predicted lensing frequency of the sample is calculated for a variety of cosmological models. The effect of uncertainties in some of the observational parameters upon the predictions is discussed. No correlation of the drift rate with time, right ascension, declination, or point error is found.

  3. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System.

    PubMed

    Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai

    2016-01-01

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP).
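
    The inference step of the first-order Takagi-Sugeno rule set can be sketched compactly: Gaussian memberships built around the K-means cluster centers weight per-rule linear models. A minimal sketch, assuming the centers, widths, and rule parameters have already been learned (the weighted recursive least-squares update is omitted); the function name and shapes are illustrative:

        import numpy as np

        def ts_predict(x, centers, sigmas, theta):
            # x: input vector (volume, occupancy, speed, ...);
            # centers, sigmas: (K, d) Gaussian membership parameters per rule;
            # theta: (K, d + 1) linear model [bias, weights] for each rule.
            d2 = (((x - centers) / sigmas) ** 2).sum(axis=1) / 2.0
            mu = np.exp(-d2)              # rule activation degrees
            mu = mu / mu.sum()            # normalized memberships
            xe = np.concatenate(([1.0], x))
            return float(mu @ (theta @ xe))  # membership-weighted sum of rule outputs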

  4. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System

    PubMed Central

    Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai

    2016-01-01

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP). PMID:26829639

  5. User's manual for the Graphical Constituent Loading Analysis System (GCLAS)

    USGS Publications Warehouse

    Koltun, G.F.; Eberle, Michael; Gray, J.R.; Glysson, G.D.

    2006-01-01

    This manual describes the Graphical Constituent Loading Analysis System (GCLAS), an interactive cross-platform program for computing the mass (load) and average concentration of a constituent that is transported in stream water over a period of time. GCLAS computes loads as a function of an equal-interval streamflow time series and an equal- or unequal-interval time series of constituent concentrations. The constituent-concentration time series may be composed of measured concentrations or a combination of measured and estimated concentrations. GCLAS is not intended for use in situations where concentration data (or an appropriate surrogate) are collected infrequently or where an appreciable amount of the concentration values are censored. It is assumed that the constituent-concentration time series used by GCLAS adequately represents the true time-varying concentration. Commonly, measured constituent concentrations are collected at a frequency that is less than ideal (from a load-computation standpoint), so estimated concentrations must be inserted in the time series to better approximate the expected chemograph. GCLAS provides tools to facilitate estimation and entry of instantaneous concentrations for that purpose. Water-quality samples collected for load computation frequently are collected in a single vertical or at a single point in a stream cross section. Several factors, some of which may vary as a function of time and (or) streamflow, can affect whether the sample concentrations are representative of the mean concentration in the cross section. GCLAS provides tools to aid the analyst in assessing whether concentrations in samples collected in a single vertical or at a single point in a stream cross section exhibit systematic bias with respect to the mean concentrations. In cases where bias is evident, the analyst can construct coefficient relations in GCLAS to reduce or eliminate the observed bias. GCLAS can export load and concentration data in formats suitable for entry into the U.S. Geological Survey's National Water Information System. GCLAS can also import and export data in formats that are compatible with various commonly used spreadsheet and statistics programs.
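
    At its core, the load computation is the time integral of streamflow times concentration with a unit conversion. A minimal sketch of that integral, assuming both series share the same time stamps (GCLAS's estimation tools and bias-coefficient machinery are omitted):

        import numpy as np

        def constituent_load_kg(time_s, q_m3_per_s, c_mg_per_L):
            # Instantaneous flux: 1 m3/s * 1 mg/L = 1 g/s; integrate over time
            # with the trapezoidal rule and convert grams to kilograms.
            flux_g_per_s = q_m3_per_s * c_mg_per_L
            return np.trapz(flux_g_per_s, time_s) / 1000.0

        # Unequal-interval concentrations can first be interpolated onto the
        # equal-interval streamflow time stamps, e.g. with np.interp.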

  6. Assessing blood coagulation status with laser speckle rheology

    PubMed Central

    Tripathi, Markandey M.; Hajjarian, Zeinab; Van Cott, Elizabeth M.; Nadkarni, Seemantini K.

    2014-01-01

    We have developed and investigated a novel optical approach, Laser Speckle Rheology (LSR), to evaluate a patient’s coagulation status by measuring the viscoelastic properties of blood during coagulation. In LSR, a blood sample is illuminated with laser light and temporal speckle intensity fluctuations are measured using a high-speed CMOS camera. During blood coagulation, changes in the viscoelastic properties of the clot restrict Brownian displacements of light scattering centers within the sample, altering the rate of speckle intensity fluctuations. As a result, blood coagulation status can be measured by relating the time scale of speckle intensity fluctuations with clinically relevant coagulation metrics including clotting time and fibrinogen content. Our results report a close correlation between coagulation metrics measured using LSR and conventional coagulation results of activated partial thromboplastin time, prothrombin time and functional fibrinogen levels, creating the unique opportunity to evaluate a patient’s coagulation status in real-time at the point of care. PMID:24688816
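
    The speckle time scale in LSR is commonly summarized by the decay of the intensity autocorrelation. A minimal sketch of extracting a 1/e decay time from one pixel's intensity trace; the 1/e criterion is an assumption for illustration, not necessarily the authors' estimator:

        import numpy as np

        def speckle_decay_time(intensity, dt):
            # Autocorrelation of the mean-subtracted intensity, normalized to lag 0;
            # return the first lag where it drops below 1/e (inf if it never does).
            i = intensity - intensity.mean()
            ac = np.correlate(i, i, mode="full")[i.size - 1:]
            ac = ac / ac[0]
            below = np.nonzero(ac < 1.0 / np.e)[0]
            return below[0] * dt if below.size else np.inf

    Stiffer clots restrict Brownian motion, slowing the speckle fluctuations and lengthening this decay time.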

  7. Point-of-care technologies for molecular diagnostics using a drop of blood.

    PubMed

    Song, Yujun; Huang, Yu-Yen; Liu, Xuewu; Zhang, Xiaojing; Ferrari, Mauro; Qin, Lidong

    2014-03-01

    Molecular diagnostics is crucial for prevention, identification, and treatment of disease. Traditional technologies for molecular diagnostics using blood are limited to laboratory use because they rely on sample purification and sophisticated instruments, are labor and time intensive, expensive, and require highly trained operators. This review discusses the frontiers of point-of-care (POC) diagnostic technologies using a drop of blood obtained from a finger prick. These technologies, including emerging biotechnologies, nanotechnologies, and microfluidics, hold the potential for rapid, accurate, and inexpensive disease diagnostics. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Slow updating of the achromatic point after a change in illumination

    PubMed Central

    Lee, R. J.; Dawson, K. A.; Smithson, H. E.

    2015-01-01

    For a colour constant observer, the colour appearance of a surface is independent of the spectral composition of the light illuminating it. We ask how rapidly colour appearance judgements are updated following a change in illumination. We obtained repeated binary colour classifications for a set of stimuli defined by their reflectance functions and rendered under either sunlight or skylight. We used these classifications to derive boundaries in colour space that identify the observer’s achromatic point. In steady-state conditions of illumination, the achromatic point lay close to the illuminant chromaticity. In our experiment the illuminant changed abruptly every 21 seconds (at the onset of every 10th trial), allowing us to track changes in the achromatic point that were caused by the cycle of illuminant changes. In one condition, the test reflectance was embedded in a spatial pattern of reflectance samples under consistent illumination. The achromatic point migrated across colour space between the chromaticities of the steady-state achromatic points. This update took several trials rather than being immediate. To identify the factors that governed perceptual updating of appearance judgements we used two further conditions, one in which the test reflectance was presented in isolation and one in which the surrounding reflectances were rendered under an inconsistent and unchanging illumination. Achromatic settings were not well predicted by the information available from scenes at a single time-point. Instead the achromatic points showed a strong dependence on the history of chromatic samples. The strength of this dependence differed between observers and was modulated by the spatial context. PMID:22275468

  9. φ(2)GFP10, a high-intensity fluorophage, enables detection and rapid drug susceptibility testing of Mycobacterium tuberculosis directly from sputum samples.

    PubMed

    Jain, Paras; Hartman, Travis E; Eisenberg, Nell; O'Donnell, Max R; Kriakov, Jordan; Govender, Karnishree; Makume, Mantha; Thaler, David S; Hatfull, Graham F; Sturm, A Willem; Larsen, Michelle H; Moodley, Preshnie; Jacobs, William R

    2012-04-01

    The difficulty of diagnosing active tuberculosis (TB) and the lack of rapid drug susceptibility testing (DST) at the point of care remain critical obstacles to TB control. This report describes a high-intensity mycobacterium-specific fluorophage (φ(2)GFP10) that for the first time allows direct visualization of Mycobacterium tuberculosis in clinical sputum samples. Engineered features distinguishing φ(2)GFP10 from previous reporter phages include an improved vector backbone with increased cloning capacity and superior expression of fluorescent reporter genes through use of an efficient phage promoter. φ(2)GFP10 produces a 100-fold increase in fluorescence per cell compared to existing reporter phages. DST for isoniazid and ofloxacin, carried out in cultured samples, was complete within 36 h. Use of φ(2)GFP10 detected M. tuberculosis in clinical sputum samples collected from TB patients. DST for rifampin and kanamycin from sputum samples yielded results after 12 h of incubation with φ(2)GFP10. Fluorophage φ(2)GFP10 has potential for clinical development as a rapid, sensitive, and inexpensive point-of-care diagnostic tool for M. tuberculosis infection and for rapid DST.

  10. Myocardial injury in dogs with snake envenomation and its relation to systemic inflammation.

    PubMed

    Langhorn, Rebecca; Persson, Frida; Ablad, Björn; Goddard, Amelia; Schoeman, Johan P; Willesen, Jakob L; Tarnow, Inge; Kjelgaard-Hansen, Mads

    2014-01-01

    Objective: To investigate the presence of myocardial injury in dogs hospitalized for snake envenomation and to examine its relationship with systemic inflammation. Design: Prospective case-control study. Setting: University teaching hospital and small animal referral hospital. Animals: Dogs naturally envenomed by the European viper (Vipera berus; n = 24), African puff adder (Bitis arietans; n = 5), or snouted cobra (Naja annulifera; n = 9). Interventions: Blood was collected from dogs envenomed by V. berus at admission, 12-24 hours postadmission, and 5-10 days postadmission. Blood was collected from dogs envenomed by B. arietans or N. annulifera at admission, and 12, 24, and 36 hours postadmission. Measurements and Main Results: Concentrations of cardiac troponin I (cTnI), a marker of myocardial injury, and C-reactive protein (CRP), a marker of systemic inflammation, were measured in each blood sample. Evidence of myocardial injury was found in 58% of dogs envenomed by V. berus at one or more time points. A significant correlation between cTnI and CRP concentrations was found at all time points. Evidence of myocardial injury was found in 80% of dogs envenomed by B. arietans at one or more time points; however, no correlation was found between cTnI and CRP concentrations. Evidence of myocardial injury was found in 67% of dogs envenomed by N. annulifera at one or more time points. A significant correlation between cTnI and CRP concentrations was found at admission, but not at other time points. Conclusions: Myocardial injury frequently occurred in dogs with snake envenomation. While the degree of systemic inflammation was significantly correlated with degree of myocardial injury in V. berus envenomation at all time points, this was not the case in dogs envenomed by N. annulifera or B. arietans. This could be due to differences in the toxic substances of the snake venoms or to differences in the cytokines induced by the venom toxins. © Veterinary Emergency and Critical Care Society 2013.

  11. A novel approach to quantifying the spatiotemporal behavior of instrumented grey seals used to sample the environment.

    PubMed

    Baker, Laurie L; Mills Flemming, Joanna E; Jonsen, Ian D; Lidgard, Damian C; Iverson, Sara J; Bowen, W Don

    2015-01-01

    Paired with satellite location telemetry, animal-borne instruments can collect spatiotemporal data describing the animal's movement and environment at a scale relevant to its behavior. Ecologists have developed methods for identifying the area(s) used by an animal (e.g., home range) and those used most intensely (utilization distribution) based on location data. However, few have extended these models beyond their traditional roles as descriptive 2D summaries of point data. Here we demonstrate how the home range method, T-LoCoH, can be expanded to quantify collective sampling coverage by multiple instrumented animals using grey seals (Halichoerus grypus) equipped with GPS tags and acoustic transceivers on the Scotian Shelf (Atlantic Canada) as a case study. At the individual level, we illustrate how time and space-use metrics quantifying individual sampling coverage may be used to determine the rate of acoustic transmissions received. Grey seals collectively sampled an area of 11,308 km2 and intensely sampled an area of 31 km2 from June to December. The largest area sampled was in July (2094.56 km2) and the smallest area sampled occurred in August (1259.80 km2), with changes in sampling coverage observed through time. T-LoCoH provides an effective means to quantify changes in collective sampling effort by multiple instrumented animals and to compare these changes across time. We also illustrate how time and space-use metrics of individual instrumented seal movement calculated using T-LoCoH can be used to account for differences in the amount of time a bioprobe (biological sampling platform) spends in an area.

  12. Landscape Change Detected Over A 60 Year Period In The Arctic National Wildlife Refuge, Alaska, Using High Resolution Aerial Photographs And Satellite Images

    NASA Astrophysics Data System (ADS)

    Jorgenson, J. C.; Jorgenson, M. T.; Boldenow, M.; Orndahl, K. M.

    2016-12-01

    We documented landscape change over a 60 year period in the Arctic National Wildlife Refuge in northeastern Alaska using aerial photographs and satellite images. We used a stratified random sample to allow inference to the whole refuge (78,050 km2), with five random sites in each of seven ecoregions. Each site (2 km2) had a systematic grid of 100 points, for a total of 3500 points. We chose study sites in the overlap area covered by acceptable imagery in three time periods: aerial photographs from 1947-1955 and 1978-1988, and QuickBird and IKONOS satellite images from 2000-2007. At each point, a 10 meter radius circle was visually evaluated in ArcMap for each time period for vegetation type, disturbance, presence of ice-wedge polygon microtopography, and surface water. A landscape change category was assigned to each point based on differences detected between the three periods. Change types were assigned for time interval 1, interval 2, and overall. Additional explanatory variables included elevation, slope, aspect, geology, physiography, and temperature. Overall, 23% of points changed over the study period. Fire was the most common change agent, affecting 28% of the Boreal Forest points. The next most common change was degradation of soil ice wedges (thermokarst), detected at 12% of the points on the North Slope Tundra. Other common changes included increases in cover of trees or shrubs (7% of Boreal Forest and Brooks Range points) and erosion or deposition on river floodplains and at the Beaufort Sea coast. Changes on the North Slope Tundra tended to be related to landscape wetting, mainly thermokarst. Changes in the Boreal Forest tended to involve landscape drying, including fire, reduced area of lakes, and tree increase on wet sites. The second time interval coincided with a shift towards a warmer climate and had greater change in several categories, including thermokarst, lake changes, and tree and shrub increase.
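
    With five 100-point sites per ecoregion, the refuge-wide change fraction follows from a standard area-weighted stratified estimator. A minimal sketch with hypothetical per-ecoregion numbers (the true areas and proportions are in the study, not reproduced here):

        import numpy as np

        # Hypothetical illustration of an area-weighted stratified estimate.
        area_km2 = np.array([12000.0, 9000.0, 15000.0, 8000.0, 11000.0, 13000.0, 10050.0])
        p_changed = np.array([0.28, 0.12, 0.07, 0.22, 0.18, 0.30, 0.15])  # per-stratum proportions
        n_points = np.full(7, 500)  # 5 sites x 100 grid points per ecoregion

        w = area_km2 / area_km2.sum()
        p_hat = np.sum(w * p_changed)  # refuge-wide proportion changed
        # Binomial standard error (ignores spatial correlation of points within a site).
        se = np.sqrt(np.sum(w**2 * p_changed * (1.0 - p_changed) / n_points))
        print(f"changed fraction: {p_hat:.3f} +/- {1.96 * se:.3f}")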

  13. Distributed optical fiber temperature sensor (DOFTS) system applied to automatic temperature alarm of coal mine and tunnel

    NASA Astrophysics Data System (ADS)

    Zhang, Zaixuan; Wang, Kequan; Kim, Insoo S.; Wang, Jianfeng; Feng, Haiqi; Guo, Ning; Yu, Xiangdong; Zhou, Bangquan; Wu, Xiaobiao; Kim, Yohee

    2000-05-01

    A DOFTS system has been developed and applied to automatic temperature alarming in coal mines and tunnels. It is a real-time, on-line, multi-point measurement system. The LD wavelength is 1550 nm; along a 6 km optical fiber, temperature signals are sampled at 3000 points whose spatial positions are determined. Temperature measurement range: -50 °C to 100 °C; measurement uncertainty: +/- 3 °C; temperature resolution: 0.1 °C; spatial resolution: <5 cm (optical fiber sensor probe), <8 m (spread optical fiber); measurement time: <70 s. In the paper, the operating principles, underground tests, test content, and practical test results are discussed.
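
    Spatial localization in such distributed sensors follows the optical time-domain reflectometry relation z = c*t / (2*n_g): the round-trip delay of the backscattered light fixes each sampling point's position along the fiber. A small sketch, with an assumed group index:

        # OTDR mapping of round-trip delay to position along the fiber.
        C_VACUUM = 2.998e8   # m/s
        N_GROUP = 1.468      # approximate group index of silica fiber near 1550 nm

        def fiber_position_m(round_trip_delay_s):
            return C_VACUUM * round_trip_delay_s / (2.0 * N_GROUP)

        print(fiber_position_m(58.7e-6))  # ~6 km, the far end of the sensing fiber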

  14. Real-Time Motion Planning and Safe Navigation in Dynamic Multi-Robot Environments

    DTIC Science & Technology

    2006-12-15

    …referee against a robot for pushing or hitting an opponent excessively, as well as for a non-goalie robot entering the team’s own defense area. The DSS… “pulling” a search graph by choosing random samples and then trying to connect a path to those points, some planners “push” samples by first choosing… implement the various roles (attacker, goalie, defender), which in turn build on sub-tactics known as skills [16]. One primitive skill used by almost all…

  15. Elevation-relief ratio, hypsometric integral, and geomorphic area-altitude analysis.

    NASA Technical Reports Server (NTRS)

    Pike, R. J.; Wilson, S. E.

    1971-01-01

    Mathematical proof establishes the identity of the hypsometric integral and the elevation-relief ratio, two quantitative topographic descriptors developed independently of one another for entirely different purposes. Operationally, values of both measures are in excellent agreement for arbitrarily bounded topographic samples, as well as for low-order fluvial watersheds. By using a point-sampling technique rather than planimetry, the elevation-relief ratio (defined as mean elevation minus minimum elevation, divided by relief) can be calculated manually in about a third of the time required for the hypsometric integral.
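
    The identity makes the computation trivial from point-sampled elevations. A minimal sketch with illustrative random elevations:

        import numpy as np

        def elevation_relief_ratio(elevations):
            # E = (mean - min) / (max - min); by the identity above this equals
            # the hypsometric integral of the sampled area.
            z = np.asarray(elevations, dtype=float)
            return (z.mean() - z.min()) / (z.max() - z.min())

        rng = np.random.default_rng(0)
        sample = rng.uniform(200.0, 1400.0, size=100)  # elevations at grid points (m)
        print(elevation_relief_ratio(sample))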

  16. Quantized Synchronization of Chaotic Neural Networks With Scheduled Output Feedback Control.

    PubMed

    Wan, Ying; Cao, Jinde; Wen, Guanghui

    In this paper, the synchronization problem of master-slave chaotic neural networks with remote sensors, quantization processes, and communication time delays is investigated. The information communication channel between the master chaotic neural network and the slave chaotic neural network consists of several remote sensors, with each sensor able to access only partial knowledge of the output information of the master neural network. At each sampling instant, each sensor updates its own measurement, and only one sensor is scheduled to transmit its latest information to the controller's side in order to update the control inputs for the slave neural network. Thus, such a communication process and control strategy are much more energy-saving compared with the traditional point-to-point scheme. Sufficient conditions for the output feedback control gain matrix, allowable length of sampling intervals, and upper bound of network-induced delays are derived to ensure the quantized synchronization of master-slave chaotic neural networks. Lastly, Chua's circuit system and a 4-D Hopfield neural network are simulated to validate the effectiveness of the main results.

  17. Large-Scale Point-Cloud Visualization through Localized Textured Surface Reconstruction.

    PubMed

    Arikan, Murat; Preiner, Reinhold; Scheiblauer, Claus; Jeschke, Stefan; Wimmer, Michael

    2014-09-01

    In this paper, we introduce a novel scene representation for the visualization of large-scale point clouds accompanied by a set of high-resolution photographs. Many real-world applications deal with very densely sampled point-cloud data, which are augmented with photographs that often reveal lighting variations and inaccuracies in registration. Consequently, the high-quality representation of the captured data, i.e., both point clouds and photographs together, is a challenging and time-consuming task. We propose a two-phase approach, in which the first (preprocessing) phase generates multiple overlapping surface patches and handles the problem of seamless texture generation locally for each patch. The second phase stitches these patches at render-time to produce a high-quality visualization of the data. As a result of the proposed localization of the global texturing problem, our algorithm is more than an order of magnitude faster than equivalent mesh-based texturing techniques. Furthermore, since our preprocessing phase requires only a minor fraction of the whole data set at once, we provide maximum flexibility when dealing with growing data sets.

  18. On the design of experiments for determining ternary mixture free energies from static light scattering data using a nonlinear partial differential equation

    PubMed Central

    Wahle, Chris W.; Ross, David S.; Thurston, George M.

    2012-01-01

    We mathematically design sets of static light scattering experiments to provide for model-independent measurements of ternary liquid mixing free energies to a desired level of accuracy. A parabolic partial differential equation (PDE), linearized from the full nonlinear PDE [D. Ross, G. Thurston, and C. Lutzer, J. Chem. Phys. 129, 064106 (2008); doi:10.1063/1.2937902], describes how data noise affects the free energies to be inferred. The linearized PDE creates a net of spacelike characteristic curves and orthogonal, timelike curves in the composition triangle, and this net governs diffusion of information coming from light scattering measurements to the free energy. Free energy perturbations induced by a light scattering perturbation diffuse along the characteristic curves and towards their concave sides, with a diffusivity that is proportional to the local characteristic curvature radius. Consequently, static light scattering can determine mixing free energies in regions with convex characteristic curve boundaries, given suitable boundary data. The dielectric coefficient is a Lyapunov function for the dynamical system whose trajectories are PDE characteristics. Information diffusion is heterogeneous and system-dependent in the composition triangle, since the characteristics depend on molecular interactions and are tangent to liquid-liquid phase separation coexistence loci at critical points. We find scaling relations that link free energy accuracy, total measurement time, the number of samples, and the interpolation method, and identify the key quantitative tradeoffs between devoting time to measuring more samples, or fewer samples more accurately. For each total measurement time there are optimal sample numbers beyond which more will not improve free energy accuracy. We estimate the degree to which many-point interpolation and optimized measurement concentrations can improve accuracy and save time. For a modest light scattering setup, a sample calculation shows that less than two minutes of measurement time is, in principle, sufficient to determine the dimensionless mixing free energy of a non-associating ternary mixture to within an integrated error norm of 0.003. These findings establish a quantitative framework for designing light scattering experiments to determine the Gibbs free energy of ternary liquid mixtures. PMID:22830693

  19. Exploring the temporal structure of heterochronous sequences using TempEst (formerly Path-O-Gen).

    PubMed

    Rambaut, Andrew; Lam, Tommy T; Max Carvalho, Luiz; Pybus, Oliver G

    2016-01-01

    Gene sequences sampled at different points in time can be used to infer molecular phylogenies on a natural timescale of months or years, provided that the sequences in question undergo measurable amounts of evolutionary change between sampling times. Data sets with this property are termed heterochronous and have become increasingly common in several fields of biology, most notably the molecular epidemiology of rapidly evolving viruses. Here we introduce the cross-platform software tool, TempEst (formerly known as Path-O-Gen), for the visualization and analysis of temporally sampled sequence data. Given a molecular phylogeny and the dates of sampling for each sequence, TempEst uses an interactive regression approach to explore the association between genetic divergence through time and sampling dates. TempEst can be used to (1) assess whether there is sufficient temporal signal in the data to proceed with phylogenetic molecular clock analysis, and (2) identify sequences whose genetic divergence and sampling date are incongruent. Examination of the latter can help identify data quality problems, including errors in data annotation, sample contamination, sequence recombination, or alignment error. We recommend that all users of the molecular clock models implemented in BEAST first check their data using TempEst prior to analysis.
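
    The core of TempEst's temporal-signal check is a root-to-tip regression: genetic divergence from the root regressed on sampling date. A minimal, non-interactive sketch of that regression (TempEst itself also optimizes the root placement, which is omitted here):

        import numpy as np

        def root_to_tip_regression(sampling_dates, root_to_tip_divergence):
            # Slope approximates the substitution rate; the x-intercept estimates
            # the date of the root; R^2 summarizes the strength of temporal signal.
            slope, intercept = np.polyfit(sampling_dates, root_to_tip_divergence, 1)
            r = np.corrcoef(sampling_dates, root_to_tip_divergence)[0, 1]
            return slope, -intercept / slope, r**2

    Tips falling far from the fitted line are the incongruent sequences worth checking for annotation, contamination, recombination, or alignment problems.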

  20. Series-nonuniform rational B-spline signal feedback: From chaos to any embedded periodic orbit or target point.

    PubMed

    Shao, Chenxi; Xue, Yong; Fang, Fang; Bai, Fangzhou; Yin, Peifeng; Wang, Binghong

    2015-07-01

    The self-controlling feedback control method requires an external periodic oscillator with special design, which is technically challenging. This paper proposes a chaos control method based on time series non-uniform rational B-splines (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor with the sampled chaotic time series and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to an expected status. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that compared with delayed feedback control, our method takes less time to obtain the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.

  1. Comparative study on novel test systems to determine disintegration time of orodispersible films.

    PubMed

    Preis, Maren; Gronkowsky, Dorothee; Grytzan, Dominik; Breitkreutz, Jörg

    2014-08-01

    Orodispersible films (ODFs) are a promising innovative dosage form enabling drug administration without the need for water and minimizing the danger of aspiration due to their fast disintegration in small amounts of liquid. This study focuses on the development of a disintegration test system for ODFs. Two systems were developed and investigated: one provides an electronic end-point, and the other is a transferable setup of the existing disintegration tester for orodispersible tablets. Different ODF preparations were investigated to determine the suitability of the disintegration test systems. The use of different test media and the impact of different storage conditions of ODFs on their disintegration time were additionally investigated. The experiments showed acceptable reproducibility (low deviations within sample replicates due to a clear determination of the measurement end-point). High temperatures and high humidity affected some of the investigated ODFs, resulting in longer disintegration times or even no disintegration within the tested time period. The methods provided clear end-point detection and were applicable to different types of ODFs. By modifying a conventional test system to enable its application to films, a standard method could be provided to ensure uniformity in current quality control settings. © 2014 Royal Pharmaceutical Society.

  2. Series-nonuniform rational B-spline signal feedback: From chaos to any embedded periodic orbit or target point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Chenxi, E-mail: cxshao@ustc.edu.cn; Xue, Yong; Fang, Fang

    2015-07-15

    The self-controlling feedback control method requires an external periodic oscillator with special design, which is technically challenging. This paper proposes a chaos control method based on time series non-uniform rational B-splines (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor with the sampled chaotic time series and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to an expected status. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that compared with delayed feedback control, our method takes less time to obtain the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.

  3. Cognitive components of self esteem for individuals with severe mental illness.

    PubMed

    Blankertz, L

    2001-10-01

    In a sample of 182 individuals with severe mental illness, the applicability of reflected-appraisals and self-enhancement theories as explanations for global self-esteem was examined at two time points using components of stigma, mastery, overall functioning, education, and job prestige. Path analysis demonstrated that the two theories work independently, and that stigma, mastery, and overall functioning are significant, persist over time, and have an enduring effect on self-esteem.

  4. Reducing seed dependent variability of non-uniformly sampled multidimensional NMR data

    NASA Astrophysics Data System (ADS)

    Mobli, Mehdi

    2015-07-01

    The application of NMR spectroscopy to study the structure, dynamics and function of macromolecules requires the acquisition of several multidimensional spectra. The one-dimensional NMR time-response from the spectrometer is extended to additional dimensions by introducing incremented delays in the experiment that cause oscillation of the signal along "indirect" dimensions. For a given dimension the delay is incremented at twice the rate of the maximum frequency (Nyquist rate). To achieve high-resolution requires acquisition of long data records sampled at the Nyquist rate. This is typically a prohibitive step due to time constraints, resulting in sub-optimal data records to the detriment of subsequent analyses. The multidimensional NMR spectrum itself is typically sparse, and it has been shown that in such cases it is possible to use non-Fourier methods to reconstruct a high-resolution multidimensional spectrum from a random subset of non-uniformly sampled (NUS) data. For a given acquisition time, NUS has the potential to improve the sensitivity and resolution of a multidimensional spectrum, compared to traditional uniform sampling. The improvements in sensitivity and/or resolution achieved by NUS are heavily dependent on the distribution of points in the random subset acquired. Typically, random points are selected from a probability density function (PDF) weighted according to the NMR signal envelope. In extreme cases as little as 1% of the data is subsampled. The heavy under-sampling can result in poor reproducibility, i.e. when two experiments are carried out where the same number of random samples is selected from the same PDF but using different random seeds. Here, a jittered sampling approach is introduced that is shown to improve random seed dependent reproducibility of multidimensional spectra generated from NUS data, compared to commonly applied NUS methods. It is shown that this is achieved due to the low variability of the inherent sensitivity of the random subset chosen from a given PDF. Finally, it is demonstrated that metrics used to find optimal NUS distributions are heavily dependent on the inherent sensitivity of the random subset, and such optimisation is therefore less critical when using the proposed sampling scheme.
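
    One way to realize the jittered scheme described here: divide the cumulative weight of the sampling PDF into equal-probability strata and draw one grid point per stratum, so every random seed covers the signal envelope similarly. A minimal sketch under an assumed exponential (T2-matched) envelope; the function name and defaults are illustrative:

        import numpy as np

        def jittered_nus_schedule(n_grid, n_samples, t2_points=60.0, seed=7):
            # Exponential sampling density matched to the decaying signal envelope.
            grid = np.arange(n_grid)
            pdf = np.exp(-grid / t2_points)
            cdf = np.cumsum(pdf) / pdf.sum()
            # One draw per equal-probability stratum of the CDF ("jittered" sampling)
            # instead of n_samples independent draws from the full PDF.
            rng = np.random.default_rng(seed)
            edges = np.linspace(0.0, 1.0, n_samples + 1)
            picks = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                bucket = grid[(cdf > lo) & (cdf <= hi)]
                if bucket.size:  # a stratum narrower than one grid step may be empty
                    picks.append(rng.choice(bucket))
            return np.unique(np.array(picks))

        print(jittered_nus_schedule(512, 64)[:10])  # first few sampled increments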

  5. Sampling challenges in a study examining refugee resettlement

    PubMed Central

    2011-01-01

    Background As almost half of all refugees currently under United Nations protection are from Afghanistan or Iraq and significant numbers have already been resettled outside the region of origin, it is likely that future research will examine their resettlement needs. A number of methodological challenges confront researchers working with culturally and linguistically diverse groups; however, few detailed articles are available to inform other studies. The aim of this paper is to outline challenges with sampling and recruitment of socially invisible refugee groups, describing the method adopted for a mixed methods exploratory study assessing mental health, subjective wellbeing and resettlement perspectives of Afghan and Kurdish refugees living in New Zealand and Australia. Sampling strategies used in previous studies with similar refugee groups were considered before determining the approach to recruitment. Methods A snowball approach was adopted for the study, with multiple entry points into the communities being used to choose as wide a range of people as possible to provide further contacts and reduce selection bias. Census data was used to assess the representativeness of the sample. Results A sample of 193 former refugee participants was recruited in Christchurch (n = 98) and Perth (n = 95), 47% were of Afghan and 53% Kurdish ethnicity. A good gender balance (males 52%, females 48%) was achieved overall, mainly as a result of the sampling method used. Differences in the demographic composition of groups in each location were observed, especially in relation to the length of time spent in a refugee situation and time since arrival, reflecting variations in national humanitarian quota intakes. Although some measures were problematic, Census data comparison to assess reasonable representativeness of the study sample was generally reassuring. Conclusions Snowball sampling, with multiple initiation points to reduce selection bias, was necessary to locate and identify participants, provide reassurance and break down barriers. Personal contact was critical for both recruitment and data quality, and highlighted the importance of interviewer cultural sensitivity. Cross-national comparative studies, particularly relating to refugee resettlement within different policy environments, also need to take into consideration the differing pre-migration experiences and time since arrival of refugee groups, as these can add additional layers of complexity to study design and interpretation. PMID:21406104

  6. Sampling challenges in a study examining refugee resettlement.

    PubMed

    Sulaiman-Hill, Cheryl Mr; Thompson, Sandra C

    2011-03-15

    As almost half of all refugees currently under United Nations protection are from Afghanistan or Iraq and significant numbers have already been resettled outside the region of origin, it is likely that future research will examine their resettlement needs. A number of methodological challenges confront researchers working with culturally and linguistically diverse groups; however, few detailed articles are available to inform other studies. The aim of this paper is to outline challenges with sampling and recruitment of socially invisible refugee groups, describing the method adopted for a mixed methods exploratory study assessing mental health, subjective wellbeing and resettlement perspectives of Afghan and Kurdish refugees living in New Zealand and Australia. Sampling strategies used in previous studies with similar refugee groups were considered before determining the approach to recruitment. A snowball approach was adopted for the study, with multiple entry points into the communities being used to choose as wide a range of people as possible to provide further contacts and reduce selection bias. Census data was used to assess the representativeness of the sample. A sample of 193 former refugee participants was recruited in Christchurch (n = 98) and Perth (n = 95), 47% were of Afghan and 53% Kurdish ethnicity. A good gender balance (males 52%, females 48%) was achieved overall, mainly as a result of the sampling method used. Differences in the demographic composition of groups in each location were observed, especially in relation to the length of time spent in a refugee situation and time since arrival, reflecting variations in national humanitarian quota intakes. Although some measures were problematic, Census data comparison to assess reasonable representativeness of the study sample was generally reassuring. Snowball sampling, with multiple initiation points to reduce selection bias, was necessary to locate and identify participants, provide reassurance and break down barriers. Personal contact was critical for both recruitment and data quality, and highlighted the importance of interviewer cultural sensitivity. Cross-national comparative studies, particularly relating to refugee resettlement within different policy environments, also need to take into consideration the differing pre-migration experiences and time since arrival of refugee groups, as these can add additional layers of complexity to study design and interpretation.

  7. New clinical insights for transiently evoked otoacoustic emission protocols.

    PubMed

    Hatzopoulos, Stavros; Grzanka, Antoni; Martini, Alessandro; Konopka, Wieslaw

    2009-08-01

    The objective of the study was to optimize the area of a time-frequency (TF) analysis and then investigate any stable patterns in the time-frequency structure of otoacoustic emissions in a population of 152 healthy adults sampled over one year. TEOAE recordings were collected from 302 ears in subjects presenting normal hearing and normal impedance values. The responses were analyzed by the Wigner-Ville distribution (WVD). The TF region of analysis was optimized by examining the energy content of various rectangular and triangular TF regions. The TEOAE components from the initial recordings and from recordings made 12 months later were compared in the optimized TF region. The best region for TF analysis was identified with base point 1 at 2.24 ms and 2466 Hz, base point 2 at 6.72 ms and 2466 Hz, and the top point at 2.24 ms and 5250 Hz. Correlation indices from the optimized TF region were significantly higher than the traditional indices computed in the selected time window. An analysis of the TF data within a 12-month period indicated an 85% TEOAE component similarity in 90% of the tested subjects.
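    For readers unfamiliar with the Wigner-Ville distribution, the sketch below computes a minimal, magnitude-only discrete WVD of a test tone near the reported base-point frequency. The sampling rate and window length are assumptions, and the frequency axis is left unscaled (a careful implementation handles the WVD's factor-of-two frequency scaling).

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Minimal discrete Wigner-Ville distribution (magnitude only)."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        m = min(n, N - 1 - n)                  # largest symmetric lag at time n
        lags = np.arange(-m, m + 1)
        kernel = x[n + lags] * np.conj(x[n - lags])
        W[n] = np.abs(np.fft.fft(kernel, 2 * N)[:N])  # FFT over the lag axis
    return W   # rows: time samples, columns: frequency bins (unscaled)

fs = 25000.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.008, 1 / fs)                # ~8 ms analysis window
x = hilbert(np.sin(2 * np.pi * 2466 * t))      # analytic signal of a test tone
tf_map = wigner_ville(x)
print(tf_map.shape, tf_map.argmax(axis=1)[len(t) // 2])
```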

  8. Corrosion/erosion detection of boiler tubes utilizing pulsed infrared imaging

    NASA Astrophysics Data System (ADS)

    Bales, Maurice J.; Bishop, Chip C.

    1995-05-01

    This paper discusses a new technique for locating and detecting wall thickness reduction in boiler tubes caused by erosion/corrosion. Traditional means for this type of defect detection utilize ultrasonics (UT) to perform point-by-point measurements at given intervals of the tube length, which requires extensive and costly shutdown or "outage" time to complete the inspection and has led to thin areas going undetected simply because they were located between the sampling points. Pulsed infrared imaging (PII) can provide nearly 100% inspection of the tubes in a fraction of the time needed for UT. The IR system and heat source used in this study do not require any special access or fixed scaffolding, and can be remotely operated from a distance of up to 100 feet. This technique has been tried experimentally in a laboratory environment and verified in an actual field application. Since PII is a non-contact technique, considerable time and cost savings should be realized, as well as the ability to predict failures rather than repairing them once they have occurred.

  9. Persistence of Tetracapsuloides bryosalmonae (Myxozoa) in chronically infected brown trout Salmo trutta.

    PubMed

    Abd-Elfattah, Ahmed; Kumar, Gokhlesh; Soliman, Hatem; El-Matbouli, Mansour

    2014-08-21

    Proliferative kidney disease (PKD) is a widespread disease of farmed and wild salmonid populations in Europe and North America, caused by the myxozoan parasite Tetracapsuloides bryosalmonae. Limited studies have been performed on the epidemiological role in spread of the disease played by fish that survive infection with T. bryosalmonae. The aim of the present study was to evaluate the persistence of T. bryosalmonae developmental stages in chronically infected brown trout Salmo trutta up to 2 yr after initial exposure to laboratory-infected colonies of the parasite's alternate host, the bryozoan Fredericella sultana. Kidney, liver, spleen, intestine, brain, gills and blood were sampled 24, 52, 78 and 104 wk post-exposure (wpe) and tested for T. bryosalmonae by PCR and immunohistochemistry (IHC). Cohabitation trials with specific pathogen free (SPF) F. sultana colonies were conducted to test the viability of T. bryosalmonae. PCR detected T. bryosalmonae DNA in all tissue samples collected at the 4 time points. Developmental stages of T. bryosalmonae were demonstrated by IHC in most samples at the 4 time points. Cohabitation of SPF F. sultana with chronically infected brown trout resulted in successful transmission of T. bryosalmonae to the bryozoan. This study verified the persistence of T. bryosalmonae in chronically infected brown trout and their ability to infect the bryozoan F. sultana up to 104 wpe.

  10. Acute Consumption of Flavan-3-ol-Enriched Dark Chocolate Affects Human Endogenous Metabolism.

    PubMed

    Ostertag, Luisa M; Philo, Mark; Colquhoun, Ian J; Tapp, Henri S; Saha, Shikha; Duthie, Garry G; Kemsley, E Kate; de Roos, Baukje; Kroon, Paul A; Le Gall, Gwénaëlle

    2017-07-07

    Flavan-3-ols and methylxanthines have potential beneficial effects on human health including reducing cardiovascular risk. We performed a randomized controlled crossover intervention trial to assess the acute effects of consumption of flavan-3-ol-enriched dark chocolate, compared with standard dark chocolate and white chocolate, on the human metabolome. We assessed the metabolome in urine and blood plasma samples collected before and at 2 and 6 h after consumption of chocolates in 42 healthy volunteers using a nontargeted metabolomics approach. Plasma samples were assessed and showed differentiation between time points with no further separation among the three chocolate treatments. Multivariate statistics applied to urine samples could readily separate the postprandial time points and distinguish between the treatments. Most of the markers responsible for the multivariate discrimination between the chocolates were of dietary origin. Interestingly, small but significant level changes were also observed for a subset of endogenous metabolites. ¹H NMR revealed that flavan-3-ol-enriched dark chocolate and standard dark chocolate reduced urinary levels of creatinine, lactate, some amino acids, and related degradation products and increased the levels of pyruvate and 4-hydroxyphenylacetate, a phenolic compound of bacterial origin. This study demonstrates that an acute chocolate intervention can significantly affect human metabolism.

  11. Evaluation of Alere i RSV for Rapid Detection of Respiratory Syncytial Virus in Children Hospitalized with Acute Respiratory Tract Infection.

    PubMed

    Peters, Rebecca Marie; Schnee, Sarah Valerie; Tabatabai, Julia; Schnitzler, Paul; Pfeil, Johannes

    2017-04-01

    Alere i RSV is a novel rapid test which applies a nicking enzyme amplification reaction to detect respiratory syncytial virus in point-of-care settings. In this study, we evaluated the Alere i RSV assay by using frozen nasopharyngeal swab samples that were collected in viral transport medium from children hospitalized with acute respiratory tract infection during the 2015-2016 winter season. Alere i RSV assay results were compared to those for Altona RealStar RSV real-time reverse transcription-PCR (RT-PCR). We found that the overall sensitivity and specificity of the Alere i RSV test was 100% (95% confidence intervals [CI], 93% to 100%) and 97% (95% CI, 89% to 100%), respectively. Positive samples were identified within 5 to 7 min from sample collection. Overall, the Alere i RSV test performed well compared to the RT-PCR assay and has the potential to facilitate the detection of RSV in point-of-care settings.
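    The reported sensitivity, specificity, and 95% confidence intervals can be reproduced in form (not in the exact counts, which the abstract does not give) with exact Clopper-Pearson binomial intervals. The cell counts below are illustrative assumptions only.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Illustrative counts only (the paper reports percentages, not raw cells):
tp, fn = 51, 0          # true positives, false negatives  (assumed)
tn, fp = 64, 2          # true negatives, false positives  (assumed)

sens = tp / (tp + fn)
spec = tn / (tn + fp)
print(f"sensitivity {sens:.2%}, 95% CI {clopper_pearson(tp, tp + fn)}")
print(f"specificity {spec:.2%}, 95% CI {clopper_pearson(tn, tn + fp)}")
```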

  12. Distribution of trace elements in the coastal sea sediments of Maslinica Bay, Croatia

    NASA Astrophysics Data System (ADS)

    Mikulic, Nenad; Orescanin, Visnja; Elez, Loris; Pavicic, Ljiljana; Pezelj, Durdica; Lovrencic, Ivanka; Lulic, Stipe

    2008-02-01

    Spatial distributions of trace elements in the coastal sea sediments and water of Maslinica Bay (Southern Adriatic), Croatia, and possible changes in marine flora and foraminifera communities due to pollution were investigated. Macro, micro and trace element distributions in five granulometric fractions were determined for each sediment sample. Bulk sediment samples were also subjected to leaching tests. Elemental concentrations in sediments, sediment extracts and seawater were measured by source-excited energy dispersive X-ray fluorescence (EDXRF). Concentrations of the elements Cr, Cu, Zn, and Pb in bulk sediment samples taken in Maslinica Bay were enriched by factors ranging from 2.1 to more than 6 compared with the background level determined for coarse-grained carbonate sediments. The low degree of trace element leaching determined for bulk sediments pointed to strong bonding of trace elements to sediment mineral phases. The analyses of marine flora pointed to increased eutrophication, which disturbs the balance between communities and natural habitats.
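    The enrichment statement amounts to a simple ratio of measured concentration to the carbonate-sediment background. A sketch with placeholder concentrations (the abstract reports only the resulting factors):

```python
# Enrichment factor = measured concentration / local background level.
# All concentrations below are placeholder values (mg/kg), not the study's data.
background = {"Cr": 50.0, "Cu": 15.0, "Zn": 40.0, "Pb": 12.0}
measured   = {"Cr": 105.0, "Cu": 60.0, "Zn": 250.0, "Pb": 75.0}

for element, bg in background.items():
    ef = measured[element] / bg
    print(f"{element}: enrichment factor {ef:.1f}")
```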

  13. Updating a preoperative surface model with information from real-time tracked 2D ultrasound using a Poisson surface reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Deyu; Rettmann, Maryam E.; Holmes, David R.; Linte, Cristian A.; Packer, Douglas; Robb, Richard A.

    2014-03-01

    In this work, we propose a method for intraoperative reconstruction of a left atrial surface model for the application of cardiac ablation therapy. In this approach, the intraoperative point cloud is acquired by a tracked, 2D freehand intra-cardiac echocardiography device, which is registered and merged with a preoperative, high resolution left atrial surface model built from computed tomography data. For the surface reconstruction, we introduce a novel method to estimate the normal vector of the point cloud from the preoperative left atrial model, which is required for the Poisson surface reconstruction algorithm. In the current work, the algorithm is evaluated using a preoperative surface model from patient computed tomography data and simulated intraoperative ultrasound data. Factors such as intraoperative deformation of the left atrium, proportion of the left atrial surface sampled by the ultrasound, sampling resolution, sampling noise, and registration error were considered through a series of simulation experiments.
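    A minimal sketch of the normal-estimation step, assuming a registered point cloud: each intraoperative ultrasound point inherits the normal of its nearest vertex on the preoperative CT surface, since sparse 2D sweeps cannot by themselves supply the oriented normals Poisson reconstruction requires. All arrays below are synthetic stand-ins.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical inputs: vertices and per-vertex normals of the preoperative
# CT-derived surface, and the registered intraoperative ultrasound points.
pre_vertices = np.random.rand(5000, 3)          # stand-in for the CT mesh
pre_normals = np.random.randn(5000, 3)
pre_normals /= np.linalg.norm(pre_normals, axis=1, keepdims=True)
us_points = np.random.rand(800, 3)              # stand-in for ICE samples

# Assign each ultrasound point the normal of its nearest preoperative vertex
_, idx = cKDTree(pre_vertices).query(us_points, k=1)
us_normals = pre_normals[idx]
print(us_normals.shape)                          # (800, 3): one normal per point
```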

  14. A Novel Stimulus Artifact Removal Technique for High-Rate Electrical Stimulation

    PubMed Central

    Heffer, Leon F; Fallon, James B

    2008-01-01

    Electrical stimulus artifacts corrupting electrophysiological recordings often make the subsequent analysis of the underlying neural response difficult. This is particularly evident when investigating short-latency neural activity in response to high-rate electrical stimulation. We developed and evaluated an off-line technique for the removal of stimulus artifact from electrophysiological recordings. Pulsatile electrical stimulation was presented at rates of up to 5000 pulses/s during extracellular recordings of guinea pig auditory nerve fibers. Stimulus artifact was removed by replacing the sample points at each stimulus artifact event with values interpolated along a straight line, computed from neighbouring sample points. This technique required only that artifact events be identifiable and that the artifact duration remained less than both the inter-stimulus interval and the time course of the action potential. We have demonstrated that this computationally efficient sample-and-interpolate technique removes the stimulus artifact with minimal distortion of the action potential waveform. We suggest that this technique may have potential applications in a range of electrophysiological recording systems. PMID:18339428
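    The sample-and-interpolate idea lends itself to a compact sketch: identify each artifact event and replace the affected samples with a straight line drawn between the bordering good samples. The sampling rate, artifact length, and onsets below are assumed for illustration.

```python
import numpy as np

def remove_artifacts(signal, artifact_onsets, artifact_len):
    """Replace samples inside each artifact window with a straight line
    drawn between the samples bordering the window."""
    clean = signal.astype(float)
    n = len(clean)
    for onset in artifact_onsets:
        start, stop = max(onset, 1), min(onset + artifact_len, n - 1)
        # Interpolate between the last good sample before the artifact
        # and the first good sample after it; keep only interior points.
        clean[start:stop] = np.linspace(clean[start - 1], clean[stop],
                                        stop - start + 2)[1:-1]
    return clean

fs = 100000                               # assumed sampling rate (Hz)
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 300 * t)           # stand-in neural recording
onsets = np.arange(50, len(x) - 10, 200)  # pulse every 2 ms (5000 pulses/s)
x[onsets[:, None] + np.arange(5)] += 4.0  # inject 5-sample artifacts
print(remove_artifacts(x, onsets, 5)[:10])
```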

  15. Fast and Robust STEM Reconstruction in Complex Environments Using Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Wang, D.; Hollaus, M.; Puttonen, E.; Pfeifer, N.

    2016-06-01

    Terrestrial Laser Scanning (TLS) is an effective tool in forest research and management. However, accurate estimation of tree parameters still remains challenging in complex forests. In this paper, we present a novel algorithm for stem modeling in complex environments. This method does not require accurate delineation of stem points from the original point cloud. The stem reconstruction features a self-adaptive cylinder growing scheme. The algorithm is tested in a landslide region in the federal state of Vorarlberg, Austria. The algorithm results are compared with field reference data, which show that our algorithm is able to accurately retrieve the diameter at breast height (DBH) with a root mean square error (RMSE) of ~1.9 cm. The algorithm is further accelerated by applying an advanced sampling technique. Different sampling rates are applied and tested. It is found that a sampling rate of 7.5% already retains the stem fitting quality while reducing the computation time significantly, by ~88%.
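    A loose sketch of the sampling aspect, not the paper's self-adaptive cylinder-growing scheme: subsample a synthetic stem cross-section at 7.5% and recover the DBH with an algebraic (Kasa) circle fit. The radius, noise level, and point count are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stem slice: points on a circle of radius 0.15 m (DBH = 30 cm)
theta = rng.uniform(0, 2 * np.pi, 2000)
xy = np.c_[0.15 * np.cos(theta), 0.15 * np.sin(theta)]
xy += rng.normal(0, 0.005, xy.shape)        # ranging noise

# Random subsampling: keep ~7.5% of the points
sub = xy[rng.random(len(xy)) < 0.075]

# Kasa circle fit: solve [2x 2y 1] [a b c]^T = x^2 + y^2 in least squares,
# where (a, b) is the centre and c = r^2 - a^2 - b^2
A = np.c_[2 * sub, np.ones(len(sub))]
rhs = (sub ** 2).sum(axis=1)
(a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
radius = np.sqrt(c + a ** 2 + b ** 2)
print(f"estimated DBH: {2 * radius:.3f} m")
```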

  16. Crossfit analysis: a novel method to characterize the dynamics of induced plant responses.

    PubMed

    Jansen, Jeroen J; van Dam, Nicole M; Hoefsloot, Huub C J; Smilde, Age K

    2009-12-16

    Many plant species show induced responses that protect them against exogenous attacks. These responses involve the production of many different bioactive compounds. Plant species belonging to the Brassicaceae family produce defensive glucosinolates, which may greatly influence their favorable nutritional properties for humans. Each responding compound may have its own dynamic profile and metabolic relationships with other compounds. The chemical background of the induced response is therefore highly complex, and no single model may reveal all the properties of the response. This study aims to describe the dynamics of the glucosinolate response, measured at three time points after induction in a feral Brassica, by a three-faceted approach based on Principal Component Analysis. First the large-scale aspects of the response are described in a 'global model' and then each time point in the experiment is individually described in 'local models' that focus on phenomena that occur at specific moments in time. Although each local model describes the variation among the plants at one time point as well as possible, the response dynamics are lost. Therefore a novel method called the 'Crossfit' is described that links the local models of different time points to each other. Each element of the described analysis approach reveals different aspects of the response. The crossfit shows that smaller dynamic changes may occur in the response that are overlooked by global models, as illustrated by the analysis of a metabolic profiling dataset of the same samples.
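    A hedged sketch of the global/local modeling contrast, on synthetic data: fit one PCA to all time points stacked (global model) and one PCA per time point (local models); a crossfit-style link is indicated by projecting one time point's data onto another's loadings. This illustrates the idea, not the published Crossfit procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_plants, n_compounds, n_times = 20, 12, 3
# Hypothetical glucosinolate profiles: plants x compounds, one block per time
X = [rng.normal(size=(n_plants, n_compounds)) + t for t in range(n_times)]

def pca_loadings(M, k=2):
    """Return the compounds x k loading matrix of a mean-centred PCA."""
    M = M - M.mean(axis=0)
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[:k].T

# Global model: all time points stacked into one matrix
global_P = pca_loadings(np.vstack(X))

# Local models: one PCA per time point
local_P = [pca_loadings(Xt) for Xt in X]

# Crossfit-style link: project time point 1 onto the loadings of time point 0
# to follow how the compound pattern shifts between local models
scores = (X[1] - X[1].mean(axis=0)) @ local_P[0]
print(global_P.shape, scores.shape)
```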

  17. Crossfit analysis: a novel method to characterize the dynamics of induced plant responses

    PubMed Central

    2009-01-01

    Background Many plant species show induced responses that protect them against exogenous attacks. These responses involve the production of many different bioactive compounds. Plant species belonging to the Brassicaceae family produce defensive glucosinolates, which may greatly influence their favorable nutritional properties for humans. Each responding compound may have its own dynamic profile and metabolic relationships with other compounds. The chemical background of the induced response is therefore highly complex, and no single model may reveal all the properties of the response. Results This study aims to describe the dynamics of the glucosinolate response, measured at three time points after induction in a feral Brassica, by a three-faceted approach based on Principal Component Analysis. First the large-scale aspects of the response are described in a 'global model' and then each time point in the experiment is individually described in 'local models' that focus on phenomena that occur at specific moments in time. Although each local model describes the variation among the plants at one time point as well as possible, the response dynamics are lost. Therefore a novel method called the 'Crossfit' is described that links the local models of different time points to each other. Conclusions Each element of the described analysis approach reveals different aspects of the response. The crossfit shows that smaller dynamic changes may occur in the response that are overlooked by global models, as illustrated by the analysis of a metabolic profiling dataset of the same samples. PMID:20015363

  18. BATS: a Bayesian user-friendly software for analyzing time series microarray experiments.

    PubMed

    Angelini, Claudia; Cutillo, Luisa; De Canditiis, Daniela; Mutarelli, Margherita; Pensky, Marianna

    2008-10-06

    Gene expression levels in a given cell can be influenced by different factors, namely pharmacological or medical treatments. The response to a given stimulus is usually different for different genes and may depend on time. One of the goals of modern molecular biology is the high-throughput identification of genes associated with a particular treatment or a biological process of interest. From a methodological and computational point of view, analyzing high-dimensional time-course microarray data requires a very specific set of tools which are usually not included in standard software packages. Recently, the authors of this paper developed a fully Bayesian approach which allows one to identify differentially expressed genes in a 'one-sample' time-course microarray experiment, to rank them and to estimate their expression profiles. The method is based on explicit expressions for calculations and is, hence, very computationally efficient. The software package BATS (Bayesian Analysis of Time Series) presented here implements the methodology described above. It allows a user to automatically identify and rank differentially expressed genes and to estimate their expression profiles when at least 5-6 time points are available. The package has a user-friendly interface. BATS successfully manages various technical difficulties that arise in time-course microarray experiments, such as a small number of observations, non-uniform sampling intervals and replicated or missing data. BATS is free, user-friendly software for the analysis of both simulated and real microarray time course experiments. The software, the user manual and a brief illustrative example are freely available online at the BATS website: http://www.na.iac.cnr.it/bats.

  19. The lead time tradeoff: the case of health states better than dead.

    PubMed

    Pinto-Prades, José Luis; Rodríguez-Míguez, Eva

    2015-04-01

    Lead time tradeoff (L-TTO) is a variant of the time tradeoff (TTO). L-TTO introduces a lead period in full health before illness onset, avoiding the need to use 2 different procedures for states better and worse than dead. To estimate utilities, additive separability is assumed. We tested to what extent violations of this assumption can bias utilities estimated with L-TTO. A sample of 500 members of the Spanish general population evaluated 24 health states, using face-to-face interviews. A total of 188 subjects were interviewed with L-TTO and the rest with TTO. Both samples evaluated the same set of 24 health states, divided into 4 groups with 6 health states per set. Each subject evaluated 1 of the sets. A random effects regression model was fitted to our data. Only health states better than dead were included in the regression since it is in this subset where additive separability can be tested clearly. Utilities were higher in L-TTO in relation to TTO (on average L-TTO adds about 0.2 points to the utility of health states), suggesting that additive separability is violated. The difference between methods increased with the severity of the health state. Thus, L-TTO adds about 0.14 points to the average utility of the less severe states, 0.23 to the intermediate states, and 0.28 points to the more severe states. L-TTO produced higher utilities than TTO. Health problems are perceived as less severe if a lead period in full health is added upfront, implying that there are interactions between disjointed time periods. The advantages of this method have to be compared with the cost of modeling the interaction between periods.
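    The utility arithmetic behind the comparison can be made concrete. Assuming the standard formulations, a conventional TTO indifference point x over horizon t gives U = x/t, while L-TTO with lead period L gives U = (x - L)/t under additive separability. The numbers below are illustrative only.

```python
def tto_utility(x, t):
    """Conventional TTO: indifferent between t years in the health state
    and x years in full health -> U = x / t."""
    return x / t

def lead_tto_utility(x, lead, t):
    """Lead-time TTO: Life A = lead years in full health + t years in the
    state; Life B = x years in full health. Additive separability gives
    lead + t * U = x, so U = (x - lead) / t (may be negative)."""
    return (x - lead) / t

# Illustrative numbers only: 10-year lead period, 10-year disease period
print(tto_utility(6.0, 10.0))               # 0.60
print(lead_tto_utility(18.0, 10.0, 10.0))   # 0.80: lead period inflates utility
```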

  20. A voxelwise approach to determine consensus regions-of-interest for the study of brain network plasticity.

    PubMed

    Rajtmajer, Sarah M; Roy, Arnab; Albert, Reka; Molenaar, Peter C M; Hillary, Frank G

    2015-01-01

    Despite exciting advances in the functional imaging of the brain, it remains a challenge to define regions of interest (ROIs) that do not require investigator supervision and permit examination of change in networks over time (or plasticity). Plasticity is most readily examined by maintaining ROIs constant via seed-based and anatomical-atlas based techniques, but these approaches are not data-driven, requiring definition based on prior experience (e.g., choice of seed-region, anatomical landmarks). These approaches are limiting especially when functional connectivity may evolve over time in areas that are finer than known anatomical landmarks or in areas outside predetermined seeded regions. An ideal method would permit investigators to study network plasticity due to learning, maturation effects, or clinical recovery via multiple time point data that can be compared to one another in the same ROI while also preserving the voxel-level data in those ROIs at each time point. Data-driven approaches (e.g., whole-brain voxelwise approaches) ameliorate concerns regarding investigator bias, but the fundamental problem of comparing the results between distinct data sets remains. In this paper we propose an approach, aggregate-initialized label propagation (AILP), which allows for data at separate time points to be compared for examining developmental processes resulting in network change (plasticity). To do so, we use a whole-brain modularity approach to parcellate the brain into anatomically constrained functional modules at separate time points and then apply the AILP algorithm to form a consensus set of ROIs for examining change over time. To demonstrate its utility, we make use of a known dataset of individuals with traumatic brain injury sampled at two time points during the first year of recovery and show how the AILP procedure can be applied to select regions of interest to be used in a graph theoretical analysis of plasticity.
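    As a generic illustration of label propagation on a voxel graph (not the published AILP algorithm), the sketch below lets each node repeatedly adopt the majority label of its neighbours, starting from noisy seed labels meant to mimic an aggregate initialisation from two time points.

```python
import numpy as np

def propagate_labels(adjacency, labels, n_iter=10):
    """Generic label propagation: each node repeatedly takes the most
    frequent label among its neighbours (a loose stand-in for AILP)."""
    labels = labels.copy()
    for _ in range(n_iter):
        for node in range(len(labels)):
            neigh = np.flatnonzero(adjacency[node])
            if neigh.size:
                vals, counts = np.unique(labels[neigh], return_counts=True)
                labels[node] = vals[np.argmax(counts)]
    return labels

# Toy voxel graph: two fully connected blocks of 5 voxels each; seed labels
# are a noisy mix, standing in for partitions from two time points
A = np.kron(np.eye(2), np.ones((5, 5))) - np.eye(10)
init = np.array([0, 0, 1, 0, 0, 1, 1, 0, 1, 1])
print(propagate_labels(A, init))           # consensus: one label per block
```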
