Automated storm water sampling on small watersheds
Harmel, R.D.; King, K.W.; Slade, R.M.
2003-01-01
Few guidelines are currently available to assist in designing appropriate automated storm water sampling strategies for small watersheds. Therefore, guidance is needed to develop strategies that achieve an appropriate balance between accurate characterization of storm water quality and loads and limitations of budget, equipment, and personnel. In this article, we explore the important sampling strategy components (minimum flow threshold, sampling interval, and discrete versus composite sampling) and project-specific considerations (sampling goal, sampling and analysis resources, and watershed characteristics) based on personal experiences and pertinent field and analytical studies. These components and considerations are important in achieving the balance between sampling goals and limitations because they determine how and when samples are taken and the potential sampling error. Several general recommendations are made, including: setting low minimum flow thresholds, using flow-interval or variable time-interval sampling, and using composite sampling to limit the number of samples collected. Guidelines are presented to aid in selection of an appropriate sampling strategy based on users' project-specific considerations. Our experiences suggest these recommendations should allow implementation of a successful sampling strategy for most small watershed sampling projects with common sampling goals.
Cheung, Chi Yuen; van der Heijden, Jaques; Hoogtanders, Karin; Christiaans, Maarten; Liu, Yan Lun; Chan, Yiu Han; Choi, Koon Shing; van de Plas, Afke; Shek, Chi Chung; Chau, Ka Foon; Li, Chun Sang; van Hooff, Johannes; Stolk, Leo
2008-02-01
Dried blood spot (DBS) sampling and high-performance liquid chromatography tandem-mass spectrometry have been developed for monitoring tacrolimus levels. Our center favors the use of a limited sampling strategy and an abbreviated formula to estimate the area under the concentration-time curve (AUC(0-12)). However, it is inconvenient for patients because they have to wait in the center for blood sampling. We investigated the application of the DBS method in tacrolimus level monitoring using a limited sampling strategy and an abbreviated AUC estimation approach. Duplicate venous samples were obtained at each time point (C(0), C(2), and C(4)). To determine the stability of blood samples, one venous sample was sent to our laboratory immediately. The other duplicate venous samples, together with simultaneous fingerprick blood samples, were sent to the University of Maastricht in the Netherlands. Thirty-six patients were recruited and 108 sets of blood samples were collected. There was a highly significant relationship between AUC(0-12), estimated from venous blood samples, and fingerprick blood samples (r(2) = 0.96, P < 0.0001). Moreover, there was an excellent correlation between whole blood venous tacrolimus levels in the two centers (r(2) = 0.97; P < 0.0001). The blood samples were stable after long-distance transport. DBS sampling can be used in centers using a limited sampling and abbreviated AUC(0-12) strategy for drug monitoring.
Limited-sampling strategies for anti-infective agents: systematic review.
Sprague, Denise A; Ensom, Mary H H
2009-09-01
Area under the concentration-time curve (AUC) is a pharmacokinetic parameter that represents overall exposure to a drug. For selected anti-infective agents, pharmacokinetic-pharmacodynamic parameters, such as AUC/MIC (where MIC is the minimal inhibitory concentration), have been correlated with outcome in a few studies. A limited-sampling strategy may be used to estimate pharmacokinetic parameters such as AUC, without the frequent, costly, and inconvenient blood sampling that would be required to directly calculate the AUC. To discuss, by means of a systematic review, the strengths, limitations, and clinical implications of published studies involving a limited-sampling strategy for anti-infective agents and to propose improvements in methodology for future studies. The PubMed and EMBASE databases were searched using the terms "anti-infective agents", "limited sampling", "optimal sampling", "sparse sampling", "AUC monitoring", "abbreviated AUC", "abbreviated sampling", and "Bayesian". The reference lists of retrieved articles were searched manually. Included studies were classified according to modified criteria from the US Preventive Services Task Force. Twenty studies met the inclusion criteria. Six of the studies (involving didanosine, zidovudine, nevirapine, ciprofloxacin, efavirenz, and nelfinavir) were classified as providing level I evidence, 4 studies (involving vancomycin, didanosine, lamivudine, and lopinavir-ritonavir) provided level II-1 evidence, 2 studies (involving saquinavir and ceftazidime) provided level II-2 evidence, and 8 studies (involving ciprofloxacin, nelfinavir, vancomycin, ceftazidime, ganciclovir, pyrazinamide, meropenem, and alpha interferon) provided level III evidence. All of the studies providing level I evidence used prospectively collected data and proper validation procedures with separate, randomly selected index and validation groups. However, most of the included studies did not provide an adequate description of the methods or the characteristics of included patients, which limited their generalizability. Many limited-sampling strategies have been developed for anti-infective agents that do not have a clearly established link between AUC and clinical outcomes in humans. Future studies should first determine if there is an association between AUC monitoring and clinical outcomes. Thereafter, it may be worthwhile to prospectively develop and validate a limited-sampling strategy for the particular anti-infective agent in a similar population.
LeCuyer, Elizabeth A; Swanson, Dena P; Cole, Robert; Kitzman, Harriet
2011-12-01
The effect of maternal attitudes and limit-setting strategies on children's self-regulation (measured as committed compliance) was compared in 151 African-American (AA) and 108 European-American (EA) mothers and their 3-year-old children. There were no ethnic differences in children's compliance; however, ethnicity moderated the relationship between maternal authoritarian attitudes and children's compliance. Higher authoritarian attitudes predicted lower compliance among children in the EA sample, but greater compliance in the AA sample. Observational limit-setting data revealed that in both ethnic groups, maternal authoritarian attitudes influenced children's self-regulation through maternal use of lower-power (gentle) verbal strategies, fewer physical strategies, and judicious use of higher-power verbal strategies. The findings indicate that the meaning and purpose of authoritarian attitudes vary across these mothers' socio-cultural contexts. Copyright © 2011 Wiley Periodicals, Inc.
The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations
NASA Astrophysics Data System (ADS)
Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.
2017-09-01
We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M_⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.
Williams, Brad J; Ciavarini, Steve J; Devlin, Curt; Cohn, Steven M; Xie, Rong; Vissers, Johannes P C; Martin, LeRoy B; Caswell, Allen; Langridge, James I; Geromanos, Scott J
2016-08-01
In proteomics studies, it is generally accepted that depth of coverage and dynamic range are limited in data-directed acquisitions. The serial nature of the method limits both sensitivity and the number of precursor ions that can be sampled. To that end, a number of data-independent acquisition (DIA) strategies have been introduced; these methods are, for the most part, immune to the sampling issue, although some do have other limitations with respect to sensitivity. The major limitation with DIA approaches is interference, i.e., MS/MS spectra are highly chimeric and often incapable of being identified using conventional database search engines. Utilizing each available dimension of separation prior to ion detection, we present a new multi-mode acquisition (MMA) strategy multiplexing both narrowband and wideband DIA acquisitions in a single analytical workflow. The iterative nature of the MMA workflow limits the adverse effects of interference with minimal loss in sensitivity. Qualitative identification can be performed by selected ion chromatograms or conventional database search strategies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
van Rijn, S P; Zuur, M A; van Altena, R; Akkerman, O W; Proost, J H; de Lange, W C M; Kerstjens, H A M; Touw, D J; van der Werf, T S; Kosterink, J G W; Alffenaar, J W C
2017-04-01
Ertapenem is a broad-spectrum carbapenem antibiotic whose activity against Mycobacterium tuberculosis is being explored. Carbapenems have antibacterial activity when the plasma concentration exceeds the MIC at least 40% of the time (40% TMIC). To assess the 40% TMIC in multidrug-resistant tuberculosis (MDR-TB) patients, a limited sampling strategy was developed using a population pharmacokinetic model based on data for healthy volunteers. A two-compartment population pharmacokinetic model was developed with data for 42 healthy volunteers using an iterative two-stage Bayesian method. External validation was performed by Bayesian fitting of the model developed with data for volunteers to the data for individual MDR-TB patients (in which the fitted values of the area under the concentration-time curve from 0 to 24 h [AUC0-24,fit values] were used) using the population model developed for volunteers as a prior. A Monte Carlo simulation (n = 1,000) was used to evaluate limited sampling strategies. Additionally, the 40% TMIC with the free fraction (f 40% TMIC) of ertapenem in MDR-TB patients was estimated with the population pharmacokinetic model. The population pharmacokinetic model that was developed was shown to overestimate the area under the concentration-time curve from 0 to 24 h (AUC0-24) in MDR-TB patients by 6.8% (range, -17.2 to 30.7%). The best-performing limited sampling strategy, which had a time restriction of 0 to 6 h, was found to be sampling at 1 and 5 h (r2 = 0.78, mean prediction error = -0.33%, root mean square error = 5.5%). Drug exposure was overestimated by a mean percentage of 4.2% (range, -15.2 to 23.6%). When a free fraction of 5% was considered and the MIC was set at 0.5 mg/liter, the minimum f 40% TMIC would have been exceeded in 9 out of 12 patients. A population pharmacokinetic model and limited sampling strategy, developed using data from healthy volunteers, were shown to be adequate to predict ertapenem exposure in MDR-TB patients. Copyright © 2017 American Society for Microbiology.
EFL Teachers' Perceptions of Strategy Deficiency Syndrome: A Grounded Theory Study
ERIC Educational Resources Information Center
Ostovar-Namaghi, Seyyed Ali; Ahmadabadi-Tak, Bahareh
2017-01-01
Strategy-deficient language learners struggle to develop their language proficiency through limiting and inappropriate strategies. This study aims at exploring experienced teachers' perceptions of strategy deficiency syndrome among EFL learners. To this end, the perspectives of a purposive sample of experienced teachers teaching in private…
A novel 3D Cartesian random sampling strategy for Compressive Sensing Magnetic Resonance Imaging.
Valvano, Giuseppe; Martini, Nicola; Santarelli, Maria Filomena; Chiappino, Dante; Landini, Luigi
2015-01-01
In this work we propose a novel acquisition strategy for accelerated 3D Compressive Sensing Magnetic Resonance Imaging (CS-MRI). This strategy is based on a 3D Cartesian sampling with random switching of the frequency encoding direction with other k-space directions. Two 3D sampling strategies are presented. In the first strategy, the frequency encoding direction is randomly switched with one of the two phase encoding directions. In the second strategy, the frequency encoding direction is randomly chosen among all the directions of k-space. These strategies lower the coherence of the acquisition, reducing aliasing artifacts and achieving better image quality after Compressive Sensing (CS) reconstruction. Furthermore, the proposed strategies can reduce the typical smoothing of CS due to the limited sampling of high frequency locations. We demonstrated by means of simulations that the proposed acquisition strategies outperform the standard Compressive Sensing acquisition, resulting in better quality of the reconstructed images and greater achievable acceleration.
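To make the readout-switching idea above concrete, here is a minimal sketch (not the authors' code; the array size, number of acquired lines, and sampling density are arbitrary assumptions) that builds a binary 3D Cartesian mask in which each acquired line is fully sampled along a randomly chosen k-space axis. Restricting the candidate axes roughly mimics the first strategy; allowing all three axes mimics the second.

```python
import numpy as np

def random_readout_mask(shape=(64, 64, 64), n_lines=512, axes=(0, 1, 2), seed=0):
    """Toy 3D Cartesian sampling mask: each acquired line is fully sampled
    along a randomly chosen k-space axis (cf. the second strategy above);
    passing axes=(0, 1) mimics switching the readout with one phase-encoding
    direction only."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(shape, dtype=bool)
    for _ in range(n_lines):
        ax = rng.choice(axes)                   # readout direction for this line
        idx = [rng.integers(s) for s in shape]  # random position in the other dims
        sl = list(idx)
        sl[ax] = slice(None)                    # the full line along the readout axis
        mask[tuple(sl)] = True
    return mask

mask = random_readout_mask()
print("sampled fraction of k-space:", mask.mean())
```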
Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.
Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan
2013-06-01
The aim of this work is to reduce the cost of the sampling required for estimation of the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t). The limited sampling strategy (LSS) models were established and validated by multiple regression using 4 or fewer gliclazide concentration values. Absolute prediction error (APE), root mean square error (RMSE) and visual prediction checks were used as criteria. The results of jack-knife validation showed that 10 (25.0%) of the 40 LSS models based on the regression analysis were not within an APE of 15% using one concentration-time point. 90.2, 91.5 and 92.4% of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were developed and validated for estimating AUC0-60t of gliclazide. This study indicates that the implementation of an 80 mg dosage regimen enabled accurate predictions of AUC0-60t by the LSS model. This study shows that 12, 6, 4, 2 h after administration are the key sampling times. The combination of (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling times for predicting AUC0-60t in practical application according to requirements.
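As an illustration of the regression-based limited sampling approach described above, the following sketch fits an LSS model from two sampling times and reports the APE and RMSE criteria mentioned in the abstract. The concentrations, coefficients, and reference AUC values are synthetic; this is not the published gliclazide model.

```python
# Minimal LSS sketch: predict a full AUC from a few concentration-time points
# and check APE/RMSE against the reference AUC (all data here are synthetic).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 40                                     # hypothetical number of subjects
times = [2, 4, 8, 12]                      # candidate sampling times (h)
conc = rng.lognormal(mean=1.0, sigma=0.3, size=(n, len(times)))
auc = conc @ np.array([4.0, 6.0, 8.0, 10.0]) + rng.normal(0, 2, n)  # toy reference AUC

model = LinearRegression().fit(conc[:, [0, 3]], auc)   # LSS using the 2 h and 12 h samples
pred = model.predict(conc[:, [0, 3]])
ape = np.abs(pred - auc) / auc * 100
rmse = np.sqrt(np.mean((pred - auc) ** 2))
print(f"subjects within 15% APE: {(ape < 15).mean():.0%}, RMSE: {rmse:.2f}")
```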
A multiple-objective optimal exploration strategy
Christakos, G.; Olea, R.A.
1988-01-01
Exploration for natural resources is accomplished through partial sampling of extensive domains. Such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are utilized by multiple objective decision criteria to find the best sampling strategies. The approach is not limited by the geometric nature of the sampling, covers a wide range in spatial continuity and leads to a step-by-step procedure. © 1988.
Mosher, Heather I; Moorthi, Gayatri; Li, JiangHong; Weeks, Margaret R
2015-09-01
This paper examines peer recruitment dynamics through respondent driven sampling (RDS) with a sample of injection drug users in Hartford, CT to understand the strategies participants use to recruit peers into a study and the extent to which these strategies may introduce risks above the ethical limit despite safeguards in RDS. Out of 526 injection drug users who participated in a mixed-method RDS methodology evaluation study, a nested sample of 61 participants completed an in-depth semi-structured interview at a 2-month follow-up to explore their experiences with the recruitment process. Findings revealed that participants used a variety of strategies to recruit peers, ranging from one-time interactions to more persistent strategies to encourage participation (e.g., selecting peers that can easily be found and contacted later, following up with peers to remind them of their appointment, accompanying peers to the study site, etc.). Some participants described the more persistent strategies as helpful, while some others experienced these strategies as minor peer pressure, creating a feeling of obligation to participate. Narratives revealed that overall, the probability of experiencing study-related risks remains relatively low for most participants; however, a disconcerting finding was that higher study-related risks (e.g., relationship conflict, loss of relationship, physical fights, violence) were seen for recruits who participated but switched coupons or for recruits who decided not to participate in the study and did not return the coupon to the recruiter. Findings indicate that peer recruitment practices in RDS generally pose minimal risk, but that peer recruitment may occasionally exceed the ethical limit, and that enhanced safeguards for studies using peer recruitment methods are recommended. Suggestions for possible enhancements are described. Copyright © 2015 Elsevier B.V. All rights reserved.
Besmer, Michael D; Hammes, Frederik; Sigrist, Jürg A; Ort, Christoph
2017-01-01
Monitoring of microbial drinking water quality is a key component for ensuring safety and understanding risk, but conventional monitoring strategies are typically based on low sampling frequencies (e.g., quarterly or monthly). This is of concern because many drinking water sources, such as karstic springs, are often subject to changes in bacterial concentrations on much shorter time scales (e.g., hours to days), for example after precipitation events. Microbial contamination events are crucial from a risk assessment perspective and should therefore be targeted by monitoring strategies to establish both the frequency of their occurrence and the magnitude of bacterial peak concentrations. In this study we used monitoring data from two specific karstic springs. We assessed the performance of conventional monitoring based on historical records and tested a number of alternative strategies based on a high-resolution data set of bacterial concentrations in spring water collected with online flow cytometry (FCM). We quantified the effect of increasing sampling frequency and found that for the specific case studied, at least bi-weekly sampling would be needed to detect precipitation events with a probability of >90%. We then proposed an optimized monitoring strategy with three targeted samples per event, triggered by precipitation measurements. This approach is more effective and efficient than simply increasing overall sampling frequency. It would enable the water utility to (1) analyze any relevant event and (2) limit median underestimation of peak concentrations to approximately 10%. We conclude with a generalized perspective on sampling optimization and argue that the assessment of short-term dynamics causing microbial peak loads initially requires increased sampling/analysis efforts, but can be optimized subsequently to account for limited resources. This offers water utilities and public health authorities systematic ways to evaluate and optimize their current monitoring strategies.
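To illustrate why low fixed-frequency sampling misses short contamination events, here is a rough Monte Carlo sketch. The event frequency, event duration, and one-year horizon are assumed purely for illustration and are not the study's measured values.

```python
# Estimate the probability that fixed-interval sampling catches at least one
# short-lived contamination event, for different sampling intervals (toy model).
import numpy as np

rng = np.random.default_rng(0)

def detection_probability(interval_days, event_duration_days=2.0,
                          events_per_year=10, n_trials=5000):
    days = 365.0
    hits = 0
    for _ in range(n_trials):
        starts = rng.uniform(0, days, events_per_year)           # random event onsets
        sample_times = np.arange(rng.uniform(0, interval_days), days, interval_days)
        detected = any(((sample_times >= s) & (sample_times <= s + event_duration_days)).any()
                       for s in starts)
        hits += detected
    return hits / n_trials

for interval in (90, 30, 14, 7):   # quarterly, monthly, bi-weekly, weekly
    print(f"{interval:>3} d interval: P(detect >= 1 event) = {detection_probability(interval):.2f}")
```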
NASA Astrophysics Data System (ADS)
Lu, Xinguo; Chen, Dan
2017-08-01
Traditional supervised classifiers work only with labeled data and neglect the large amount of data that lacks sufficient follow-up information. Consequently, the small sample size limits the design of an appropriate classifier. In this paper, a transductive learning method that combines a filtering strategy within the transductive framework with a progressive labeling strategy is presented. The progressive labeling strategy does not need to consider the distribution of labeled samples to evaluate the distribution of unlabeled samples, and can effectively solve the problem of evaluating the proportion of positive and negative samples in the working set. Our experimental results demonstrate that the proposed technique has great potential for cancer prediction based on gene expression.
Efficient sampling of complex network with modified random walk strategies
NASA Astrophysics Data System (ADS)
Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei
2018-02-01
We present two novel random walk strategies, the choosing seed node (CSN) random walk and the no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influences of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdős-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions can be reached with these three random walk strategies. Firstly, networks with small scales and simple structures are conducive to the sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend toward the corresponding values of the original networks within a limited number of steps. Thirdly, all the degree distributions of the subnets are slightly biased toward the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, some salient characteristics, such as the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
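A minimal sketch of the no-retracing idea follows; the toy Barabási-Albert graph, the high-degree seed heuristic, and the walk length are assumptions for illustration, not the paper's settings. The walk never steps straight back to the node it just left, and the visited nodes define the sampled subnet.

```python
# No-retracing random walk sampling of a network (illustrative sketch).
import random
import networkx as nx

def no_retracing_walk(G, seed_node, steps):
    """Random walk that never returns directly to the node it just left."""
    walk = [seed_node]
    prev = None
    for _ in range(steps):
        neighbors = [n for n in G.neighbors(walk[-1]) if n != prev]
        if not neighbors:                        # dead end: allow retracing
            neighbors = list(G.neighbors(walk[-1]))
        prev = walk[-1]
        walk.append(random.choice(neighbors))
    return walk

G = nx.barabasi_albert_graph(1000, 3, seed=42)
seed_node = max(G.degree, key=lambda kv: kv[1])[0]   # e.g. choose a high-degree seed
subnet = G.subgraph(no_retracing_walk(G, seed_node, 200))
print(subnet.number_of_nodes(), "nodes sampled; mean degree in subnet:",
      sum(d for _, d in subnet.degree()) / subnet.number_of_nodes())
```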
Nonprobability and probability-based sampling strategies in sexual science.
Catania, Joseph A; Dolcini, M Margaret; Orellana, Roberto; Narayanan, Vasudah
2015-01-01
With few exceptions, much of sexual science builds upon data from opportunistic nonprobability samples of limited generalizability. Although probability-based studies are considered the gold standard in terms of generalizability, they are costly to apply to many of the hard-to-reach populations of interest to sexologists. The present article discusses recent conclusions by sampling experts that have relevance to sexual science that advocates for nonprobability methods. In this regard, we provide an overview of Internet sampling as a useful, cost-efficient, nonprobability sampling method of value to sex researchers conducting modeling work or clinical trials. We also argue that probability-based sampling methods may be more readily applied in sex research with hard-to-reach populations than is typically thought. In this context, we provide three case studies that utilize qualitative and quantitative techniques directed at reducing limitations in applying probability-based sampling to hard-to-reach populations: indigenous Peruvians, African American youth, and urban men who have sex with men (MSM). Recommendations are made with regard to presampling studies, adaptive and disproportionate sampling methods, and strategies that may be utilized in evaluating nonprobability and probability-based sampling methods.
Representativeness-based sampling network design for the State of Alaska
Forrest M. Hoffman; Jitendra Kumar; Richard T. Mills; William W. Hargrove
2013-01-01
Resource and logistical constraints limit the frequency and extent of environmental observations, particularly in the Arctic, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent environmental variability at desired scales. A quantitative methodology for stratifying sampling domains, informing site selection,...
Effects of occupational future time perspective on managing stressful work situations.
Ho, Henry C Y; Yeung, Dannii Y
2016-08-01
According to the socioemotional selectivity theory (SST; Carstensen, 2006), older adults perceive their future time as increasingly limited, which motivates them to focus more on emotional goals and prefer passive emotion-focused strategies. This study aims to investigate the effect of occupational future time perspective (OFTP) on the use of problem-solving strategies in stressful work situations and to examine the effectiveness of these strategies on psychological well-being. A sample of 199 Chinese clerical workers responded to a structured questionnaire on problem-solving strategy use in relation to hypothetical work scenarios. Results revealed that relative to those with limited OFTP, workers with expansive OFTP preferred problem-focused and proactive strategies in both low- and high-emotionally salient scenarios. Workers with limited OFTP consistently preferred passive strategies irrespective of emotional salience. OFTP moderated the effect of problem-focused strategies on psychological distress. In particular, there was a significant negative relationship between problem-focused strategies and psychological distress among workers with expansive OFTP, but such pattern of relationship was not observed among workers with limited OFTP. Findings of this study inform the training strategies employed by practitioners to fit the developmental goals of workers in order to maximise their strengths at work. © 2015 International Union of Psychological Science.
Identification of problems in search strategies in Cochrane Reviews.
Franco, Juan Víctor Ariel; Garrote, Virginia Laura; Escobar Liquitay, Camila Micaela; Vietto, Valeria
2018-05-15
Search strategies are essential for the adequate retrieval of studies in a systematic review (SR). Our objective was to identify problems in the design and reporting of search strategies in a sample of new Cochrane SRs first published in The Cochrane Library in 2015. We took a random sample of 70 new Cochrane SRs of interventions published in 2015. We evaluated their design and reporting of search strategies using the recommendations from the Cochrane Handbook for Systematic Reviews of Interventions, the Methodological Expectations of Cochrane Intervention Reviews, and the Peer Review of Electronic Search Strategies evidence-based guideline. Most reviews complied with the reporting standards in the Cochrane Handbook and the Methodological Expectations of Cochrane Intervention Reviews; however, 8 SRs did not search trials registers, 3 SRs included language restrictions, and there was inconsistent reporting of contact with individuals and searches of the gray literature. We found problems in the design of the search strategies in 73% of reviews (95% CI, 60-84%) and 53% of these contained problems (95% CI, 38-69%) that could limit both the sensitivity and precision of the search strategies. We found limitations in the design and reporting of search strategies. We consider that a greater adherence to the guidelines could improve their quality. Copyright © 2018 John Wiley & Sons, Ltd.
Mohammadpour, A-H; Nazemian, F; Abtahi, B; Naghibi, M; Gholami, K; Rezaee, S; Nazari, M-R A; Rajabi, O
2008-12-01
Area under the concentration curve (AUC) of mycophenolic acid (MPA) could help to optimize therapeutic drug monitoring during the early post-renal transplant period. The aim of this study was to develop a limited sampling strategy to estimate an abbreviated MPA AUC within the first month after renal transplantation. In this study we selected 19 patients in the early posttransplant period with normal renal graft function (glomerular filtration rate > 70 mL/min). Plasma MPA concentrations were measured using reverse-phase high-performance liquid chromatography. MPA AUC(0-12h) was calculated using the linear trapezoidal rule. Multiple stepwise regression analysis was used to determine the minimal and convenient time points of MPA levels that could be used to derive model equations best fitted to MPA AUC(0-12h). The regression equation for AUC estimation that gave the best performance was AUC = 14.46 C(10) + 15.547 (r(2) = .882). Validation of the method was performed using the jackknife method. The mean prediction error of this model was not different from zero (P > .05), but the model had a high root mean square prediction error (8.06). In conclusion, this limited sampling strategy provided an effective approach for therapeutic drug monitoring during the early posttransplant period.
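A worked example of the single-point equation reported above (AUC = 14.46 C(10) + 15.547); the concentration value and the mg/L and mg·h/L units below are illustrative assumptions, since the abstract does not state units.

```python
# Apply the abbreviated-AUC equation from the abstract to a hypothetical C10 value.
def mpa_auc_from_c10(c10_mg_per_l: float) -> float:
    """Estimate MPA AUC(0-12h) from the single 10-h post-dose concentration."""
    return 14.46 * c10_mg_per_l + 15.547

print(mpa_auc_from_c10(2.0))   # hypothetical C10 of 2.0 mg/L -> ~44.5 mg*h/L
```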
NASA Technical Reports Server (NTRS)
Goyet, Catherine; Davis, Daniel; Peltzer, Edward T.; Brewer, Peter G.
1995-01-01
Large-scale ocean observing programs such as the Joint Global Ocean Flux Study (JGOFS) and the World Ocean Circulation Experiment (WOCE) must today face the problem of designing an adequate sampling strategy. For ocean chemical variables, the goals and observing technologies are quite different from those for ocean physical variables (temperature, salinity, pressure). We have recently acquired data on ocean CO2 properties on WOCE cruises P16c and P17c that are sufficiently dense to test for sampling redundancy. We use linear and quadratic interpolation methods on the sampled field to investigate the minimum number of samples required to define the deep ocean total inorganic carbon (TCO2) field within the limits of experimental accuracy (+/- 4 micromol/kg). Within the limits of current measurements, these lines were oversampled in the deep ocean. Should the precision of the measurement be improved, then a denser sampling pattern may be desirable in the future. This approach rationalizes the efficient use of resources for field work and for estimating gridded TCO2 fields needed to constrain geochemical models.
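The subsample-and-interpolate redundancy test described above can be sketched as follows. The smooth synthetic TCO2 profile is a stand-in for the cruise data; only the +/- 4 micromol/kg accuracy threshold is taken from the abstract.

```python
# Decimate a dense profile, rebuild it by linear interpolation, and check
# whether the worst-case error stays within the stated measurement accuracy.
import numpy as np

depth = np.linspace(0, 5000, 201)                       # dense sampling (m)
tco2 = 2000 + 300 * (1 - np.exp(-depth / 800))          # toy deep-ocean TCO2 profile

for step in (2, 4, 8, 16):                              # keep every `step`-th sample
    d_sub, c_sub = depth[::step], tco2[::step]
    reconstructed = np.interp(depth, d_sub, c_sub)      # linear interpolation
    max_err = np.max(np.abs(reconstructed - tco2))
    print(f"keep 1/{step} of samples -> max error {max_err:.2f} umol/kg "
          f"({'OK' if max_err <= 4 else 'too sparse'})")
```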
ERIC Educational Resources Information Center
Lougheed, Jessica P.; Hollenstein, Tom
2012-01-01
The present study was designed to test whether the beneficial effects of emotion regulation (ER) have less to do with the use of singular, "adaptive" strategies and more to do with using a range of strategies. Using a community sample of adolescents (N = 177, M = 13.6 years), groups based on five measures of ER (reappraisal, suppression,…
Yin, Ge; Danielsson, Sara; Dahlberg, Anna-Karin; Zhou, Yihui; Qiu, Yanling; Nyberg, Elisabeth; Bignert, Anders
2017-10-01
Environmental monitoring typically assumes samples and sampling activities to be representative of the population being studied. Given a limited budget, an appropriate sampling strategy is essential to support detecting temporal trends of contaminants. In the present study, based on real chemical analysis data on polybrominated diphenyl ethers in snails collected from five subsites in Tianmu Lake, computer simulation is performed to evaluate three sampling strategies by estimating the sample size required to detect an annual change of 5% with a statistical power of 80% or 90% at a significance level of 5%. The results showed that sampling from an arbitrarily selected sampling spot is the worst strategy, requiring many more individual analyses to achieve the above-mentioned criteria compared with the other two approaches. A fixed sampling site requires the lowest sample size but may not be representative of the intended study object, e.g. a lake, and is also sensitive to changes at that particular sampling site. In contrast, sampling at multiple sites along the shore each year, and using pooled samples when the cost to collect and prepare individual specimens is much lower than the cost of chemical analysis, would be the most robust and cost-efficient strategy in the long run. Using statistical power as a criterion, the results demonstrated quantitatively the consequences of various sampling strategies, and could guide users with respect to required sample sizes depending on sampling design for long-term monitoring programs. Copyright © 2017 Elsevier Ltd. All rights reserved.
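The power-versus-sample-size idea can be sketched with a simple simulation. The log-scale variability, baseline concentration, and 10-year horizon below are assumed values, not the PBDE data from the study.

```python
# Monte Carlo power estimate for detecting a 5% annual decline in log-concentrations
# by linear regression of log(concentration) on year (toy parameters).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def power(n_years=10, n_per_year=5, annual_change=-0.05, sd_log=0.5, n_sim=2000):
    detected = 0
    years = np.repeat(np.arange(n_years), n_per_year)
    for _ in range(n_sim):
        log_conc = np.log(100) + years * np.log(1 + annual_change) \
                   + rng.normal(0, sd_log, years.size)
        slope, _, _, p, _ = stats.linregress(years, log_conc)
        detected += (p < 0.05) and (slope < 0)
    return detected / n_sim

for n in (5, 10, 20):
    print(f"{n} pooled samples/year over 10 years -> power {power(n_per_year=n):.2f}")
```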
Raymond-Flesch, Marissa; Siemons, Rachel; Brindis, Claire D
2016-01-01
Limited research has focused on undocumented immigrants' health and access to care. This paper describes participant engagement strategies used to investigate the health needs of immigrants eligible for Deferred Action for Childhood Arrivals (DACA). Community-based strategies engaged advocates and undocumented Californians in study design and recruitment. Outreach in diverse settings, social media, and participant-driven sampling recruited 61 DACA-eligible focus group participants. Social media, community-based organizations (CBOs), family members, advocacy groups, and participant-driven sampling were the most successful recruitment strategies. Participants felt engaging in research was instrumental for sharing their concerns with health care providers and policymakers, noteworthy in light of their previously identified fears and mistrust of government officials. Using multiple culturally responsive strategies including participant-driven sampling, engagement with CBOs, and use of social media, those eligible for DACA eagerly engage as research participants. Educating researchers and institutional review boards (IRBs) about legal and safety concerns can improve research engagement.
ERIC Educational Resources Information Center
Kosma, Maria; Cardinal, Bradley; McCubbin, Jeffrey A.
2004-01-01
The inclusion of a representative sample of understudied populations (e.g., women, minorities, older adults, youth, and people with disabilities) in physical activity promotion studies is a public health priority. Given the limited empirical evidence of effective recruitment strategies and limitations in research methodology for both over…
The Dark Triad Traits from a Life History Perspective in Six Countries.
Jonason, Peter K; Foster, Joshua D; Egorova, Marina S; Parshikova, Oksana; Csathó, Árpád; Oshio, Atsushi; Gouveia, Valdiney V
2017-01-01
Work on the Dark Triad traits has benefited from the use of a life history framework but it has been limited to primarily Western samples and indirect assessments of life history strategies. Here, we examine how the Dark Triad traits (i.e., psychopathy, Machiavellianism, and narcissism) relate to two measures of individual differences in life history strategies. In Study 1 (N = 937), we replicated prior observed links between life history strategies, as measured by the Mini-K, and the Dark Triad traits using samples recruited from three countries. In Study 2 (N = 1032), we measured life history strategies using the Consideration of Future Consequences Scale and correlated it with the Dark Triad traits in samples recruited from three additional countries. While there was some variability across participants' sex and country, the results were generally consistent in that psychopathy and (to a lesser extent) Machiavellianism were related to faster life history strategies and narcissism was related to slower life history strategies. These results add cross-cultural data and the use of two measures of life history speed to understand the Dark Triad traits from a life history perspective.
Exploratory Disposal and Reuse Feasibility Analysis of Winter Maintenance Wash Water.
Ullinger, Heather L; Kennedy, Marla J; Schneider, William H; Miller, Christopher M
2016-01-01
The Ohio Department of Transportation has more than 60 facilities without sewer access generating approximately 19 million gallons of winter maintenance wash water. Off-site disposal is costly, creating the need for sustainable management strategies. The objective of this study was to conduct an exploratory feasibility analysis to assess wash water disposal and potential reuse as brine. Based on a comprehensive literature review and relevant environmental chemistry, a sampling protocol consisting of 31 water quality constituents was utilized for monthly sampling at three geographically distinct Ohio Department of Transportation garages during the winter of 2012. Results were compared to local disposal and reuse guidance limits. Three constituents, including a maximum copper concentration of 858 ppb, exceeded disposal limits, and many constituents also failed to meet reuse limits. Some concentrations were orders of magnitude higher than reuse limits and suggest pre-treatment would be necessary if wash water were reused as brine. These water quality results, in conjunction with copper chemical equilibrium modeling, show pH and dissolved carbon both significantly impact the total dissolved copper concentration and should be measured to assess reuse potential. The sampling protocol and specific obstacles highlighted in this paper aid in the future development of sustainable wash water management strategies.
Prochwicz, Katarzyna; Kłosowska, Joanna; Kałużna-Wielobób, Alina
2018-06-01
Although skin-picking is a relatively common behaviour, data concerning factors which underlie skin-picking severity and functional impairment caused by picking are still limited. In the present study we examined whether some emotion regulation strategies (cognitive reappraisal and expressive suppression) and personality features influence skin-picking in a university sample; the moderating role of personality traits in the relationship between emotion regulation strategies and skin-picking was also considered. A total sample of 252 adults (mean age 24.03; SD = 5.66) completed the Skin-Picking Scale-Revised, Temperament and Character Inventory and Emotion Regulation Questionnaire. The linear regression analyses indicated that the use of cognitive reappraisal as an emotion regulation strategy decreases skin-picking severity. Reduced novelty seeking and reduced self-directedness were identified as significant predictors of skin-picking severity, whereas decreased self-directedness and elevated cooperativeness predicted functional impairment caused by skin-picking. Further moderation analysis revealed that cognitive reappraisal decreases skin-picking severity only among individuals with low and moderate novelty seeking, but not among those with high novelty seeking. These results indicate that emotion regulation strategies and specific personality traits are important factors associated with skin-picking behaviours in a university sample. Further research is needed to establish the link between emotion regulation strategies and personality traits in clinical samples. Copyright © 2018 Elsevier B.V. All rights reserved.
Strategies to address participant misrepresentation for eligibility in Web-based research.
Kramer, Jessica; Rubin, Amy; Coster, Wendy; Helmuth, Eric; Hermos, John; Rosenbloom, David; Moed, Rich; Dooley, Meghan; Kao, Ying-Chia; Liljenquist, Kendra; Brief, Deborah; Enggasser, Justin; Keane, Terence; Roy, Monica; Lachowicz, Mark
2014-03-01
Emerging methodological research suggests that the World Wide Web ("Web") is an appropriate venue for survey data collection, and a promising area for delivering behavioral intervention. However, the use of the Web for research raises concerns regarding sample validity, particularly when the Web is used for recruitment and enrollment. The purpose of this paper is to describe the challenges experienced in two different Web-based studies in which participant misrepresentation threatened sample validity: a survey study and an online intervention study. The lessons learned from these experiences generated three types of strategies researchers can use to reduce the likelihood of participant misrepresentation for eligibility in Web-based research. Examples of procedural/design strategies, technical/software strategies and data analytic strategies are provided along with the methodological strengths and limitations of specific strategies. The discussion includes a series of considerations to guide researchers in the selection of strategies that may be most appropriate given the aims, resources and target population of their studies. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Qiang, Wei
2011-12-01
We describe a sampling scheme for two-dimensional (2D) solid-state NMR experiments that can be readily applied to sensitivity-limited samples. The sampling scheme utilizes a continuous, non-uniform sampling profile for the indirect dimension, i.e. the acquisition number decreases as a function of the evolution time (t1) in the indirect dimension. For a beta amyloid (Aβ) fibril sample, we observed an overall 40-50% signal enhancement by measuring the cross peak volume, while the cross peak linewidths remained comparable to the linewidths obtained by regular sampling and processing strategies. Both the linear and Gaussian decay functions for the acquisition numbers result in a similar percentage of signal increase. In addition, we demonstrated that this sampling approach can be applied with different dipolar recoupling approaches such as radiofrequency assisted diffusion (RAD) and finite-pulse radio-frequency-driven recoupling (fpRFDR). This sampling scheme is especially suitable for sensitivity-limited samples that require long signal averaging for each t1 point, for instance biological membrane proteins where only a small fraction of the sample is isotopically labeled.
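A small sketch of what such a decaying acquisition profile might look like follows; the increment count, scan budget, and decay constants are arbitrary choices, not the published parameters. The number of co-added scans per t1 increment falls off linearly or as a Gaussian while the total scan budget stays fixed.

```python
# Generate per-increment scan counts for a continuous non-uniform sampling profile.
import numpy as np

n_t1 = 64                    # number of t1 increments
total_scans = 64 * 128       # same budget as 128 scans per increment, uniform

t1_index = np.arange(n_t1)
linear = 1.0 - 0.75 * t1_index / (n_t1 - 1)                 # linear decay to 25%
gaussian = np.exp(-0.5 * (t1_index / (0.6 * n_t1)) ** 2)    # Gaussian decay

for name, profile in (("linear", linear), ("gaussian", gaussian)):
    scans = np.maximum(1, np.round(profile / profile.sum() * total_scans)).astype(int)
    print(f"{name:8s} first/last increment: {scans[0]} / {scans[-1]} scans, "
          f"total {scans.sum()}")
```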
Entropy from State Probabilities: Hydration Entropy of Cations
2013-01-01
Entropy is an important energetic quantity determining the progression of chemical processes. We propose a new approach to obtain hydration entropy directly from probability density functions in state space. We demonstrate the validity of our approach for a series of cations in aqueous solution. Extensive validation of simulation results was performed. Our approach does not make prior assumptions about the shape of the potential energy landscape and is capable of calculating accurate hydration entropy values. Sampling times in the low nanosecond range are sufficient for the investigated ionic systems. Although the presented strategy is at the moment limited to systems for which a scalar order parameter can be derived, this is not a principal limitation of the method. The strategy presented is applicable to any chemical system where sufficient sampling of conformational space is accessible, for example, by computer simulations. PMID:23651109
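As a sketch of computing an entropy directly from state probabilities: the snippet below applies the generic Gibbs/Shannon form to a histogrammed scalar order parameter. This is not necessarily the paper's exact estimator, and the trajectory data are synthetic.

```python
# Entropy estimate from state probabilities of a scalar order parameter.
import numpy as np

k_B = 1.380649e-23                                 # J/K
rng = np.random.default_rng(3)
order_parameter = rng.normal(0.0, 1.0, 100_000)    # stand-in for trajectory data

p, _ = np.histogram(order_parameter, bins=100, density=False)
p = p[p > 0] / p.sum()                             # state probabilities from counts
S = -k_B * np.sum(p * np.log(p))
print(f"entropy estimate: {S:.3e} J/K (bin-width dependent; illustrative only)")
```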
Arayasirikul, Sean; Chen, Yea-Hung; Jin, Harry; Wilson, Erin
2015-01-01
Respondent-driven sampling (RDS) peer referral has been proven to be an effective recruitment method for hard-to-reach populations; however, its application in diverse populations is limited. Recruitment occurred in two phases: RDS-only followed by development and implementation of an online social network strategy in combination with RDS peer referral (RDS+SNS). Compared to RDS-only, RDS+SNS reached a sample that was younger (χ2=9.19, P = .03), more likely to identify with a non-binary gender identity (χ2=10.4247, P = .03), with less housing instability (50.5% vs. 68.6%, χ2=9.0038, P = .002) and less sex work (19.7% vs. 31.4%, χ2=5.0798, P = .02). Additionally, we describe lessons learned as a result of implementing our online social network strategy. Our findings underscore the importance of integrating Internet-driven strategies to meet challenges in sample diversity and recruitment of young transwomen. PMID:26499337
Strategies for the design of bright upconversion nanoparticles for bioanalytical applications
NASA Astrophysics Data System (ADS)
Wiesholler, Lisa M.; Hirsch, Thomas
2018-06-01
In recent years, upconversion nanoparticles (UCNPs) have received great attention because of their outstanding optical properties. Especially in bioanalytical applications this class of materials can overcome limitations of common probes like high background fluorescence or blinking. Nevertheless, the requirements for UCNPs to be applicable in biological samples, e.g. small size, water dispersibility, and excitation at low power density, are in contradiction with the demand for high brightness. Therefore, a lot of attention is paid to the enhancement of the upconversion luminescence. This review discusses the recent trends and strategies to boost the brightness of UCNPs, classified into three main directions: a) improving the efficiency of energy absorption by the sensitizer via coupling to plasmonic or photonic structures or via attachment of ligands for light harvesting; b) minimizing non-radiative deactivation by variations in the architecture of UCNPs; and c) changing the excitation wavelength to get bright particles at low excitation power density for applications in aqueous systems. These strategies are critically reviewed including current limitations as well as future perspectives for the design of efficient UCNPs especially for sensing applications in biological samples or cells.
Carvalho, Joana; Sá, Ana
2017-01-01
Limited interpersonal skills and relationship deficits are recognized as risk factors for sexual aggression as committed by convicted sexual offenders. Yet, less severe forms of sexual aggression are frequently perpetrated by nonforensic samples, including highly educated samples. This study was aimed at characterizing a sample of male college students reporting sexually aggressive strategies as a means to initiate sexual intercourse according to a set of interpersonal relationship factors, thus extending the knowledge on the role of interpersonal dimensions in the distinct contexts of sexual violence. Three hundred eight male college students completed a web survey assessing adult attachment styles, intimacy perception, interpersonal style, and psychosocial adjustment. Findings showed that 162 students (>50%) reported having used some form of sexually aggressive strategy against women to initiate sexual contact. After controlling for the effects of social desirability, participants reporting sexually aggressive strategies presented significantly less confidence trusting others, more lack of perceived personal validation (within relationships), a more aggressive interpersonal style, and higher levels of hostility. Findings suggest that sexual violence, as measured in the context of college samples, may have an interpersonal nature, reflecting deficient social and intimacy skills; preventive programs are thus expected to enhance interpersonal relationship strategies as well as target individuals' perceived interpersonal vulnerability.
Huh, Hyu Jung; Kim, Kyung Hee; Lee, Hee-Kyung; Chae, Jeong-Ho
2017-04-15
Childhood trauma is an important factor in adverse mental health outcomes, including depression and anxiety. The purpose of the present study was to evaluate a hypothesized model describing a pathway of childhood trauma and its influence on psychiatric symptoms in patients with depressive disorder. In this model, childhood trauma was positively associated with current depression and anxiety symptoms, which were mediated by a cognitive emotional regulation strategy. Patients with depressive disorder (n=585, 266 men, 316 women) completed the Beck Depression Inventory (BDI), State-Trait Anxiety Inventory (STAI), Childhood Trauma Questionnaire (CTQ), and Cognitive Emotion Regulation Questionnaire (CERQ). We divided the cognitive emotion regulation strategies into adaptive and maladaptive strategies using a CERQ subscore. We employed structural equation modeling (SEM) and simple/multiple mediation analyses. The indirect effect of maladaptive strategies was significant in the relationship between overall childhood trauma and depression/anxiety severity, whereas the mediation effect of adaptive strategies was limited to depressive symptoms. With respect to specific types of trauma, maladaptive strategies mediated the association between emotional abuse and current depression/anxiety, while the mediation effect of adaptive strategies was limited to emotional neglect. This study's cross-sectional design does not allow establishment of causal relationships. Childhood trauma recall bias may be possible. These findings support the hypothesized model in which childhood trauma is associated with adulthood depression/anxiety symptoms in clinical samples, and mediated by emotion regulation strategies. These results suggest that cognitive emotion dysregulation is an important factor affecting depression/anxiety symptoms in patients with childhood trauma. Copyright © 2017 Elsevier B.V. All rights reserved.
Sampling of temporal networks: Methods and biases
NASA Astrophysics Data System (ADS)
Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter
2017-11-01
Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions on hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
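One of the strategies compared above, uniform sampling of nodes, can be sketched in a few lines; the time-stamped contact list is made up for illustration and the 30% sampling fraction is arbitrary.

```python
# Uniform node sampling of a temporal network stored as time-stamped contacts.
import random

# temporal edge list: (time, node_u, node_v) -- synthetic contacts for illustration
contacts = [(t, random.randrange(100), random.randrange(100)) for t in range(1000)]

def uniform_node_sample(contacts, fraction=0.3, seed=7):
    random.seed(seed)
    nodes = {u for _, u, v in contacts} | {v for _, u, v in contacts}
    kept = set(random.sample(sorted(nodes), int(fraction * len(nodes))))
    return [(t, u, v) for t, u, v in contacts if u in kept and v in kept]

sub = uniform_node_sample(contacts)
print(f"kept {len(sub)} of {len(contacts)} contacts "
      f"({len(sub) / len(contacts):.0%}); expected ~{0.3 ** 2:.0%} for random contacts")
```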
NASA Astrophysics Data System (ADS)
Krell, Mario Michael; Wilshusen, Nils; Seeland, Anett; Kim, Su Kyoung
2017-04-01
Objective. Classifier transfers usually come with dataset shifts. To overcome dataset shifts in practical applications, we consider the limitations in computational resources in this paper for the adaptation of batch learning algorithms, like the support vector machine (SVM). Approach. We focus on data selection strategies which limit the size of the stored training data by different inclusion, exclusion, and further dataset manipulation criteria like handling class imbalance with two new approaches. We provide a comparison of the strategies with linear SVMs on several synthetic datasets with different data shifts as well as on different transfer settings with electroencephalographic (EEG) data. Main results. For the synthetic data, adding only misclassified samples performed astoundingly well. Here, balancing criteria were very important when the other criteria were not well chosen. For the transfer setups, the results show that the best strategy depends on the intensity of the drift during the transfer. For larger drifts, adding all new samples and removing the oldest results in the best performance, whereas for smaller drifts it can be sufficient to only add samples near the decision boundary of the SVM, which reduces processing resources. Significance. For brain-computer interfaces based on EEG data, models trained on data from a calibration session, a previous recording session, or even from a recording session with another subject are used. We show that, by using the right combination of data selection criteria, it is possible to adapt the SVM classifier to overcome the performance drop from the transfer.
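A schematic sketch of two of the data selection criteria described above (adding only misclassified samples, or only samples near the SVM decision boundary) is given below; the function names, margin threshold, and toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

def select_new_samples(model, X_new, y_new, criterion="misclassified", margin=1.0):
    """Pick which incoming samples to add to the stored training set."""
    scores = model.decision_function(X_new)
    if criterion == "misclassified":
        keep = np.sign(scores) != np.where(y_new == 1, 1, -1)
    elif criterion == "near_boundary":
        keep = np.abs(scores) < margin          # inside the SVM margin
    else:                                       # "add all"
        keep = np.ones(len(y_new), dtype=bool)
    return X_new[keep], y_new[keep]

# Toy usage: train on calibration data, filter a new batch, then retrain.
rng = np.random.default_rng(0)
X_old, y_old = rng.normal(size=(100, 2)), rng.integers(0, 2, 100)
X_new, y_new = rng.normal(0.5, 1.0, size=(40, 2)), rng.integers(0, 2, 40)
svm = LinearSVC().fit(X_old, y_old)
X_add, y_add = select_new_samples(svm, X_new, y_new, "near_boundary")
svm = LinearSVC().fit(np.vstack([X_old, X_add]), np.hstack([y_old, y_add]))
```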
Forecasting long-range atmospheric transport episodes of polychlorinated biphenyls using FLEXPART
NASA Astrophysics Data System (ADS)
Halse, Anne Karine; Eckhardt, Sabine; Schlabach, Martin; Stohl, Andreas; Breivik, Knut
2013-06-01
The analysis of concentrations of persistent organic pollutants (POPs) in ambient air is costly and can only be done for a limited number of samples. It is thus beneficial to maximize the information content of the samples analyzed via a targeted observation strategy. Using polychlorinated biphenyls (PCBs) as an example, a forecasting system to predict and evaluate long-range atmospheric transport (LRAT) episodes of POPs at a remote site in southern Norway has been developed. The system uses the Lagrangian particle transport model FLEXPART, and can be used for triggering extra ("targeted") sampling when LRAT episodes are predicted to occur. The system was evaluated by comparing targeted samples collected over 12-25 h during individual LRAT episodes with monitoring samples regularly collected over one day per week throughout a year. Measured concentrations in all targeted samples were above the 75th percentile of the concentrations obtained from the regular monitoring program and included the highest measured values of all samples. This clearly demonstrates the success of the targeted sampling strategy.
Jongenburger, I; Reij, M W; Boer, E P J; Gorris, L G M; Zwietering, M H
2011-11-15
The actual spatial distribution of microorganisms within a batch of food influences the results of sampling for microbiological testing when this distribution is non-homogeneous. In the case of pathogens being non-homogeneously distributed, it markedly influences public health risk. This study investigated the spatial distribution of Cronobacter spp. in powdered infant formula (PIF) at industrial batch scale for both a recalled batch and a reference batch. Additionally, local spatial occurrence of clusters of Cronobacter cells was assessed, as well as the performance of typical sampling strategies to determine the presence of the microorganisms. The concentration of Cronobacter spp. was assessed in the course of the filling time of each batch, by taking samples of 333 g using the most probable number (MPN) enrichment technique. The occurrence of clusters of Cronobacter spp. cells was investigated by plate counting. From the recalled batch, 415 MPN samples were drawn. The expected heterogeneous distribution of Cronobacter spp. could be quantified from these samples, which showed no detectable level (detection limit of -2.52 log CFU/g) in 58% of samples, whilst in the remainder concentrations were found to be between -2.52 and 2.75 log CFU/g. The estimated average concentration in the recalled batch was -2.78 log CFU/g, with a standard deviation of 1.10 log CFU/g. The estimated average concentration in the reference batch was -4.41 log CFU/g, with 99% of the 93 samples being below the detection limit. In the recalled batch, clusters of cells occurred sporadically in 8 out of 2290 samples of 1 g taken. The two largest clusters contained 123 (2.09 log CFU/g) and 560 (2.75 log CFU/g) cells. Various sampling strategies were evaluated for the recalled batch. Taking more and smaller samples while keeping the total sampling weight constant considerably improved the performance of the sampling plans for detecting this type of contaminated batch. Compared to random sampling, stratified random sampling improved the probability of detecting the heterogeneous contamination. Copyright © 2011 Elsevier B.V. All rights reserved.
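A back-of-the-envelope simulation can show why many small samples outperform a few large ones at constant total sampling weight when contamination is clustered; the pocket model and all parameter values below are assumptions chosen for illustration, not the distribution fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_probability(n_samples, grams_per_sample, n_runs=5000):
    """P(>=1 positive sample) for a heterogeneously contaminated batch:
    most sampled locations are clean, ~1% are 'pockets' at about 1 cell/g.
    Total sampled weight is n_samples * grams_per_sample in every scenario."""
    detected = 0
    for _ in range(n_runs):
        conc = np.where(rng.random(n_samples) < 0.01, 1.0, 0.0)  # cells/g per location
        cells = rng.poisson(conc * grams_per_sample)             # cells caught per sample
        detected += cells.max() > 0
    return detected / n_runs

# identical total sampling weight (~10 kg), split into few large or many small samples
print(detection_probability(n_samples=30, grams_per_sample=333))   # roughly 0.26
print(detection_probability(n_samples=333, grams_per_sample=30))   # roughly 0.96
```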
Sampling strategies exploiting multi-pumping flow systems.
Prior, João A V; Santos, João L M; Lima, José L F C
2003-04-01
In this work new strategies were exploited to implement multi-pumping flow systems relying on the utilisation of multiple devices that act simultaneously as sample-insertion, reagent-introduction, and solution-propelling units. The solenoid micro-pumps that were initially used as the only active elements of multi-pumping systems, and which were able to produce pulses of 3 to 25 microL, were replaced by syringe pumps with the aim of producing pulses between 1 and 4 microL. The performance of the developed flow system was assessed by using distinct sample-insertion strategies like single sample volume, merging zones, and binary sampling in the spectrophotometric determination of isoniazid in pharmaceutical formulations upon reaction with 1,2-naphthoquinone-4-sulfonate, in alkaline medium. The results obtained showed that enhanced sample/reagent mixing could be obtained with binary sampling and by using a 1 microL per step pump, even in limited dispersion conditions. Moreover, syringe pumps produce very reproducible flowing streams and are easily manipulated and controlled by a computer program, which is greatly simplified since they are the only active manifold component. Linear calibration plots up to 18.0 microg mL(-1), with a relative standard deviation of less than 1.48% (n=10) and a throughput of about 20 samples per hour, were obtained.
Zhu, Yunzeng; Chen, Yiqi; Meng, Xiangrui; Wang, Jing; Lu, Ying; Xu, Youchun; Cheng, Jing
2017-09-05
Centrifugal microfluidics has been widely applied in the sample-in-answer-out systems for the analyses of nucleic acids, proteins, and small molecules. However, the inherent characteristic of unidirectional fluid propulsion limits the flexibility of these fluidic chips. Providing an extra degree of freedom to allow the unconstrained and reversible pumping of liquid is an effective strategy to address this limitation. In this study, a wirelessly charged centrifugal microfluidic platform with two rotation axes has been constructed and the flow control strategy in such platform with two degrees of freedom was comprehensively studied for the first time. Inductively coupled coils are installed on the platform to achieve wireless power transfer to the spinning stage. A micro servo motor is mounted on both sides of the stage to alter the orientation of the device around a secondary rotation axis on demand during stage rotation. The basic liquid operations on this platform, including directional transport of liquid, valving, metering, and mixing, are comprehensively studied and realized. Finally, a chip for the simultaneous determination of hexavalent chromium [Cr(VI)] and methanal in water samples is designed and tested based on the strategy presented in this paper, demonstrating the potential use of this platform for on-site environmental monitoring, food safety testing, and other life science applications.
Fletcher, Kathryn; Parker, Gordon; Bayes, Adam; Paterson, Amelia; McClure, Georgia
2014-03-01
Bipolar II disorder (BP II) and Borderline Personality Disorder (BPD) share common features and can be difficult to differentiate, contributing to misdiagnosis and inappropriate treatment. Research contrasting phenomenological features of both conditions is limited. The current study sought to identify differences in emotion regulation strategies in BP II and BPD in addition to examining relationships with perceived parental style. Participants were recruited from a variety of outpatient and community settings. Eligible participants required a clinical diagnosis of BP II or BPD, subsequently confirmed via structured diagnostic interviews assessing DSM-IV criteria. Participants completed a series of self-reported questionnaires assessing emotion regulation strategies and perceived parental style. The sample comprised 48 (n=24 BP II and n=24 BPD) age- and gender-matched participants. Those with BPD were significantly more likely to use maladaptive emotion regulation strategies, less likely to use adaptive emotion regulation strategies, and scored significantly higher on the majority of (perceived) dysfunctional parenting sub-scales than participants with BP II. Dysfunctional parenting experiences were related to maladaptive emotion regulation strategies in participants with BP II and BPD; however, differential associations were observed across groups. Limitations include the relatively small sample sizes; lack of a healthy control comparator group; lack of statistical control for differing sociodemographic and clinical characteristics, medication and psychological treatments; no assessment of state or trait anxiety; over-representation of females in both groups, limiting generalisability of results; and reliance on self-report measures. Differences in emotion regulation strategies and perceived parental style provide some support for the validity of distinguishing BP II and BPD. Development of intervention strategies targeting the differing forms of emotion regulatory pathology in these groups may be warranted. Copyright © 2014 Elsevier B.V. All rights reserved.
Modeling Sustainment Investment
2015-05-01
[Figure residue from a system-dynamics briefing on sustainment investment: causal-loop diagram relating requests, strategy, staffing, training and process funding to sustainment capacity and the sustainment performance gap, with loops labelled 'Bandwagon Effect' (R1), 'Limits to Growth' (B1), 'Work Smarter' (B3), and 'Work Bigger' (B2). The briefing notes that the model shows the effects of decisions, suggests how to prevent problems before they become too expensive, and previews a sample piece of the simulation model.]
Determination of trace metals in spirits by total reflection X-ray fluorescence spectrometry
NASA Astrophysics Data System (ADS)
Siviero, G.; Cinosi, A.; Monticelli, D.; Seralessandri, L.
2018-06-01
Eight spirituous samples were analyzed for trace metal content with a Horizon Total Reflection X-Ray Fluorescence (TXRF) spectrometer. The expected single metal amount is at the ng/g level in a mixed aqueous/organic matrix, thus requiring a sample preparation method capable of achieving suitable limits of detection. On-site enrichment and Atmospheric Pressure-Vapor Phase Decomposition allowed the detection of Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Sr and Pb with detection limits ranging from 0.1 ng/g to 4.6 ng/g. These results highlight how the synergy between instrument and sample preparation strategy may foster the use of TXRF as a fast and reliable technique for the determination of trace elements in spirituous samples, either for quality control or risk assessment purposes.
Clinical usefulness of limited sampling strategies for estimating AUC of proton pump inhibitors.
Niioka, Takenori
2011-03-01
Cytochrome P450 (CYP) 2C19 (CYP2C19) genotype is regarded as a useful tool to predict the area under the blood concentration-time curve (AUC) of proton pump inhibitors (PPIs). In our results, however, CYP2C19 genotypes had no influence on the AUC of any PPI during fluvoxamine treatment. These findings suggest that CYP2C19 genotyping is not always a good indicator for estimating the AUC of PPIs. Limited sampling strategies (LSS) were developed to estimate AUC simply and accurately. It is important to minimize the number of blood samples to maintain patient acceptance. This article reviewed the usefulness of LSS for estimating the AUC of three PPIs (omeprazole: OPZ, lansoprazole: LPZ and rabeprazole: RPZ). The best prediction formulas for each PPI were AUC(OPZ) = 9.24 × C(6h) + 2638.03, AUC(LPZ) = 12.32 × C(6h) + 3276.09 and AUC(RPZ) = 1.39 × C(3h) + 7.17 × C(6h) + 344.14, respectively. In order to optimize the sampling strategy for LPZ, we tried to establish an LSS for LPZ using a time point within 3 hours, exploiting the pharmacokinetic properties of its enantiomers. The best prediction formula using the fewest sampling points (one point) was AUC(racemic LPZ) = 6.5 × C(3h) of (R)-LPZ + 13.7 × C(3h) of (S)-LPZ - 9917.3 × G1 - 14387.2 × G2 + 7103.6 (G1: homozygous extensive metabolizer is 1 and the other genotypes are 0; G2: heterozygous extensive metabolizer is 1 and the other genotypes are 0). These strategies, based on plasma concentration monitoring at one or two time points, might be more suitable for AUC estimation than reference to CYP2C19 genotypes, particularly when drugs that modulate CYP activity are coadministered.
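The published prediction equations translate directly into code; the following sketch simply evaluates them (the concentration inputs are illustrative values in the units of the original report, and the genotype encoding follows the G1/G2 definitions above).

```python
def auc_omeprazole(c6h):
    """AUC(OPZ) = 9.24 * C(6h) + 2638.03 (regression from the review)."""
    return 9.24 * c6h + 2638.03

def auc_lansoprazole(c6h):
    """AUC(LPZ) = 12.32 * C(6h) + 3276.09"""
    return 12.32 * c6h + 3276.09

def auc_rabeprazole(c3h, c6h):
    """AUC(RPZ) = 1.39 * C(3h) + 7.17 * C(6h) + 344.14"""
    return 1.39 * c3h + 7.17 * c6h + 344.14

def auc_racemic_lansoprazole(c3h_r, c3h_s, genotype):
    """Enantiomer-based one-point model; genotype is 'hom_EM', 'het_EM', or anything else."""
    g1 = 1.0 if genotype == "hom_EM" else 0.0
    g2 = 1.0 if genotype == "het_EM" else 0.0
    return 6.5 * c3h_r + 13.7 * c3h_s - 9917.3 * g1 - 14387.2 * g2 + 7103.6

# illustrative concentrations only; units follow the original report
print(auc_rabeprazole(c3h=250.0, c6h=180.0))
```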
Arayasirikul, Sean; Chen, Yea-Hung; Jin, Harry; Wilson, Erin
2016-06-01
Respondent-driven sampling (RDS) peer referral has been proven to be an effective recruitment method for hard-to-reach populations; however, its application in diverse populations is limited. Recruitment occurred in two phases: RDS-only followed by development and implementation of an online social network strategy in combination with RDS peer referral (RDS + SNS). Compared to RDS-only, RDS + SNS reached a sample that was younger (χ(2) = 9.19, P = .03), more likely to identify with a non-binary gender identity (χ(2) = 10.4247, P = .03), with less housing instability (50.5 vs. 68.6 %, χ(2) = 9.0038, P = .002) and less sex work (19.7 vs. 31.4 %, χ(2) = 5.0798, P = .02). Additionally, we describe lessons learned as a result of implementing our online social network strategy. Our findings underscore the importance of integrating Internet-driven strategies to meet challenges in sample diversity and recruitment of young transwomen.
Mueller, Silke C; Drewelow, Bernd
2013-05-01
The area under the concentration-time curve (AUC) after oral midazolam administration is commonly used for cytochrome P450 (CYP) 3A phenotyping studies. The aim of this investigation was to evaluate a limited sampling strategy for the prediction of AUC with oral midazolam. A total of 288 concentration-time profiles from 123 healthy volunteers who participated in four previously performed drug interaction studies with intense sampling after a single oral dose of 7.5 mg midazolam were available for evaluation. Of these, 45 profiles served for model building, which was performed by stepwise multiple linear regression, and the remaining 243 datasets served for validation. Mean prediction error (MPE), mean absolute error (MAE) and root mean squared error (RMSE) were calculated to determine bias and precision. The one- to four-sampling point models with the best coefficient of correlation were the one-sampling point model (8 h; r(2) = 0.84), the two-sampling point model (0.5 and 8 h; r(2) = 0.93), the three-sampling point model (0.5, 2, and 8 h; r(2) = 0.96), and the four-sampling point model (0.5, 1, 2, and 8 h; r(2) = 0.97). However, the one- and two-sampling point models were unable to predict the midazolam AUC due to unacceptable bias and precision. Only the four-sampling point model predicted the very low and very high midazolam AUC of the validation dataset with acceptable precision and bias. The four-sampling point model was also able to predict the geometric mean ratios (with 90% confidence intervals) of the treatment phase over baseline for three drug interaction studies, covering strong, moderate, and mild induction as well as no interaction. A four-sampling point limited sampling strategy to predict the oral midazolam AUC for CYP3A phenotyping is proposed. The one-, two- and three-sampling point models were not able to predict midazolam AUC accurately.
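A schematic of the model-building and validation steps described above (least-squares regression of AUC on a few timed concentrations, then bias and precision on a held-out set) might look as follows; the synthetic data, column layout, and coefficient values are placeholders, not the study data.

```python
import numpy as np

def fit_limited_sampling_model(conc, auc):
    """Least-squares fit of AUC ~ intercept + sum(b_i * C(t_i))."""
    X = np.column_stack([np.ones(len(auc)), conc])
    coefs, *_ = np.linalg.lstsq(X, auc, rcond=None)
    return coefs

def bias_precision(pred, obs):
    """Percentage mean prediction error (bias), MAE and RMSE (precision)."""
    err = (pred - obs) / obs * 100.0
    return err.mean(), np.abs(err).mean(), np.sqrt((err ** 2).mean())

# placeholder data: columns = C(0.5h), C(1h), C(2h), C(8h); rows = subjects
rng = np.random.default_rng(0)
true_w = np.array([5.0, 3.0, 2.0, 8.0])
conc_train = rng.lognormal(3, 0.3, (45, 4))
auc_train = conc_train @ true_w + rng.normal(0, 5, 45)
conc_valid = rng.lognormal(3, 0.3, (243, 4))
auc_valid = conc_valid @ true_w + rng.normal(0, 5, 243)

coefs = fit_limited_sampling_model(conc_train, auc_train)
pred = np.column_stack([np.ones(len(auc_valid)), conc_valid]) @ coefs
mpe, mae, rmse = bias_precision(pred, auc_valid)
print(f"MPE {mpe:.2f}%, MAE {mae:.2f}%, RMSE {rmse:.2f}%")
```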
Efficient Simulation of Tropical Cyclone Pathways with Stochastic Perturbations
NASA Astrophysics Data System (ADS)
Webber, R.; Plotkin, D. A.; Abbot, D. S.; Weare, J.
2017-12-01
Global Climate Models (GCMs) are known to statistically underpredict intense tropical cyclones (TCs) because they fail to capture the rapid intensification and high wind speeds characteristic of the most destructive TCs. Stochastic parametrization schemes have the potential to improve the accuracy of GCMs. However, current analysis of these schemes through direct sampling is limited by the computational expense of simulating a rare weather event at fine spatial gridding. The present work introduces a stochastically perturbed parametrization tendency (SPPT) scheme to increase simulated intensity of TCs. We adapt the Weighted Ensemble algorithm to simulate the distribution of TCs at a fraction of the computational effort required in direct sampling. We illustrate the efficiency of the SPPT scheme by comparing simulations at different spatial resolutions and stochastic parameter regimes. Stochastic parametrization and rare event sampling strategies have great potential to improve TC prediction and aid understanding of tropical cyclogenesis. Since rising sea surface temperatures are postulated to increase the intensity of TCs, these strategies can also improve predictions about climate change-related weather patterns. The rare event sampling strategies used in the current work are not only a novel tool for studying TCs, but they may also be applied to sampling any range of extreme weather events.
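The Weighted Ensemble idea referenced above keeps a fixed number of trajectories per bin of a progress coordinate, splitting high-weight walkers and merging low-weight ones so that total probability weight is conserved; the following toy resampler is a generic illustration of that bookkeeping, not the authors' implementation.

```python
import random

def we_resample(walkers, bin_of, target_per_bin=4, seed=0):
    """One Weighted Ensemble resampling step.
    walkers: list of (state, weight); bin_of(state) -> bin index.
    Splits heavy walkers / merges light ones so each occupied bin
    holds target_per_bin walkers while total weight is conserved."""
    rng = random.Random(seed)
    bins = {}
    for state, w in walkers:
        bins.setdefault(bin_of(state), []).append((state, w))
    new_walkers = []
    for members in bins.values():
        while len(members) > target_per_bin:          # merge the two lightest
            members.sort(key=lambda sw: sw[1])
            (s1, w1), (s2, w2) = members.pop(0), members.pop(0)
            keep = s1 if rng.random() < w1 / (w1 + w2) else s2
            members.append((keep, w1 + w2))
        while len(members) < target_per_bin:          # split the heaviest
            members.sort(key=lambda sw: sw[1])
            s, w = members.pop()
            members += [(s, w / 2), (s, w / 2)]
        new_walkers.extend(members)
    return new_walkers

# toy usage: bin a scalar progress coordinate (e.g., a simulated intensity measure)
walkers = [(x, 1 / 8) for x in [10, 12, 13, 30, 31, 55, 56, 80]]
print(we_resample(walkers, bin_of=lambda x: x // 20))
```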
Method and algorithm of automatic estimation of road surface type for variable damping control
NASA Astrophysics Data System (ADS)
Dąbrowski, K.; Ślaski, G.
2016-09-01
In this paper, the authors present an approach to road surface estimation (recognition) based on statistical analysis of suspension dynamic response signals. In a preliminary analysis, the cumulative distribution function (CDF) was used, leading to the observation that different road surfaces produce response values within different limit ranges for the same percentage of samples, or, equivalently, that for the same limits different percentages of samples fall within the range between the limit values. This observation formed the basis of the developed algorithm, which was tested using suspension response signals recorded during road tests over various surfaces. The proposed algorithm can be an essential part of an adaptive damping control algorithm for a vehicle suspension or of an adaptive suspension damping control strategy.
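A minimal sketch of the statistic described above, computing the fraction of suspension-response samples that stay within fixed limits (one point of the empirical CDF) and mapping it to a road class, is given below; the thresholds, class labels, and signal model are invented for illustration.

```python
import numpy as np

def fraction_within_limits(signal, limit):
    """Share of samples whose absolute value stays below `limit`
    (one point of the empirical CDF of the rectified response)."""
    signal = np.asarray(signal)
    return np.mean(np.abs(signal) < limit)

def classify_road(accel_window, limit=1.0, thresholds=(0.95, 0.75)):
    """Illustrative decision rule: smoother roads keep more samples inside the limit."""
    p = fraction_within_limits(accel_window, limit)
    if p >= thresholds[0]:
        return "smooth"
    if p >= thresholds[1]:
        return "medium"
    return "rough"

# toy windows of sprung-mass acceleration (m/s^2)
rng = np.random.default_rng(0)
print(classify_road(rng.normal(0, 0.3, 500)))   # mostly within +/-1 -> smooth
print(classify_road(rng.normal(0, 1.2, 500)))   # many exceedances -> rough
```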
Highly sensitive dendrimer-based nanoplasmonic biosensor for drug allergy diagnosis.
Soler, Maria; Mesa-Antunez, Pablo; Estevez, M-Carmen; Ruiz-Sanchez, Antonio Jesus; Otte, Marinus A; Sepulveda, Borja; Collado, Daniel; Mayorga, Cristobalina; Torres, Maria Jose; Perez-Inestrosa, Ezequiel; Lechuga, Laura M
2015-04-15
A label-free biosensing strategy for amoxicillin (AX) allergy diagnosis based on the combination of novel dendrimer-based conjugates and a recently developed nanoplasmonic sensor technology is reported. Gold nanodisks were functionalized with a custom-designed thiol-ending-polyamido-based dendron (d-BAPAD) peripherally decorated with amoxicilloyl (AXO) groups (d-BAPAD-AXO) in order to detect specific IgE generated in patient's serum against this antibiotic during an allergy outbreak. This innovative strategy, which follows a simple one-step immobilization procedure, shows exceptional results in terms of sensitivity and robustness, leading to a highly-reproducible and long-term stable surface which allows achieving extremely low limits of detection. Moreover, the viability of this biosensor approach to analyze human biological samples has been demonstrated by directly analyzing and quantifying specific anti-AX antibodies in patient's serum without any sample pretreatment. An excellent limit of detection (LoD) of 0.6 ng/mL (i.e. 0.25 kU/L) has been achieved in the evaluation of clinical samples, evidencing the potential of our nanoplasmonic biosensor as an advanced diagnostic tool to quickly identify allergic patients. The results have been compared and validated with a conventional clinical immunofluorescence assay (ImmunoCAP test), confirming an excellent correlation between both techniques. The combination of a novel compact nanoplasmonic platform and a dendrimer-based strategy provides a highly sensitive label-free biosensor approach with over two times better detectability than conventional SPR. Both the biosensor device and the carrier structure hold great potential in clinical diagnosis for biomarker analysis in whole serum samples and other human biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Review of sampling hard-to-reach and hidden populations for HIV surveillance.
Magnani, Robert; Sabin, Keith; Saidel, Tobi; Heckathorn, Douglas
2005-05-01
Adequate surveillance of hard-to-reach and 'hidden' subpopulations is crucial to containing the HIV epidemic in low prevalence settings and in slowing the rate of transmission in high prevalence settings. For a variety of reasons, however, conventional facility and survey-based surveillance data collection strategies are ineffective for a number of key subpopulations, particularly those whose behaviors are illegal or illicit. This paper critically reviews alternative sampling strategies for undertaking behavioral or biological surveillance surveys of such groups. Non-probability sampling approaches such as facility-based sentinel surveillance and snowball sampling are the simplest to carry out, but are subject to a high risk of sampling/selection bias. Most of the probability sampling methods considered are limited in that they are adequate only under certain circumstances and for some groups. One relatively new method, respondent-driven sampling, an adaptation of chain-referral sampling, appears to be the most promising for general applications. However, as its applicability to HIV surveillance in resource-poor settings has yet to be established, further field trials are needed before a firm conclusion can be reached.
Fleming, Denise H; Mathew, Binu S; Prasanna, Samuel; Annapandian, Vellaichamy M; John, George T
2011-04-01
Enteric-coated mycophenolate sodium (EC-MPS) is widely used in renal transplantation. With a delayed absorption profile, it has not been possible to develop limited sampling strategies to estimate area under the curve (mycophenolic acid [MPA] AUC₀₋₁₂), which have limited time points and are completed in 2 hours. We developed and validated simplified strategies to estimate MPA AUC₀₋₁₂ in an Indian renal transplant population prescribed EC-MPS together with prednisolone and tacrolimus. Intensive pharmacokinetic sampling (17 samples each) was performed in 18 patients to measure MPA AUC₀₋₁₂. The profiles at 1 month were used to develop the simplified strategies and those at 5.5 months used for validation. We followed two approaches. In one, the AUC was calculated using the trapezoidal rule with fewer time points followed by an extrapolation. In the second approach, by stepwise multiple regression analysis, models with different time points were identified and linear regression analysis performed. Using the trapezoidal rule, two equations were developed with six time points and sampling to 6 or 8 hours (8hrAUC[₀₋₁₂exp]) after the EC-MPS dose. On validation, the 8hrAUC(₀₋₁₂exp) compared with total measured AUC₀₋₁₂ had a coefficient of correlation (r²) of 0.872 with a bias and precision (95% confidence interval) of 0.54% (-6.07-7.15) and 9.73% (5.37-14.09), respectively. Second, limited sampling strategies were developed with four, five, six, seven, and eight time points and completion within 2 hours, 4 hours, 6 hours, and 8 hours after the EC-MPS dose. On validation, six, seven, and eight time point equations, all with sampling to 8 hours, had an acceptable r with the total measured MPA AUC₀₋₁₂ (0.817-0.927). In the six, seven, and eight time points, the bias (95% confidence interval) was 3.00% (-4.59 to 10.59), 0.29% (-5.4 to 5.97), and -0.72% (-5.34 to 3.89) and the precision (95% confidence interval) was 10.59% (5.06-16.13), 8.33% (4.55-12.1), and 6.92% (3.94-9.90), respectively. Of the eight simplified approaches, inclusion of seven or eight time points improved the accuracy of the predicted AUC compared with the actual and can be advocated based on the priority of the user.
A test of the domain-specific acculturation strategy hypothesis.
Miller, Matthew J; Yang, Minji; Lim, Robert H; Hui, Kayi; Choi, Na-Yeun; Fan, Xiaoyan; Lin, Li-Ling; Grome, Rebekah E; Farrell, Jerome A; Blackmon, Sha'kema
2013-01-01
Acculturation literature has evolved over the past several decades and has highlighted the dynamic ways in which individuals negotiate experiences in multiple cultural contexts. The present study extends this literature by testing M. J. Miller and R. H. Lim's (2010) domain-specific acculturation strategy hypothesis-that individuals might use different acculturation strategies (i.e., assimilated, bicultural, separated, and marginalized strategies; J. W. Berry, 2003) across behavioral and values domains-in 3 independent cluster analyses with Asian American participants. Present findings supported the domain-specific acculturation strategy hypothesis, as 67% to 72% of participants from 3 independent samples used different strategies across behavioral and values domains. Consistent with theory, a number of acculturation strategy cluster group differences emerged across generational status, acculturative stress, mental health symptoms, and attitudes toward seeking professional psychological help. Study limitations and future directions for research are discussed.
Dupuis, L Lee; Sibbald, Cathryn; Schechter, Tal; Ansari, Marc; Gassas, Adam; Théorêt, Yves; Kassir, Nastya; Champagne, Martin A; Doyle, John
2008-05-01
We currently calculate area under the busulfan concentration time curve (AUC) using 7 plasma busulfan concentrations (AUC7) drawn after the first of 16 i.v. busulfan doses given as a 2-hour infusion every 6 hours. The aim of this study was to develop and validate limited sampling strategies (LSSs) using 3 or fewer busulfan concentration values with which to reliably calculate AUC in children undergoing hematopoietic stem cell transplant (HSCT). Children in the development group (44) received i.v. busulfan at Sick Kids; the validation group consisted of 35 children who received care at CHU Ste-Justine. Busulfan doses given and subsequent plasma busulfan concentrations were recorded. LSSs using 1 to 3 concentration-time points were developed using multiple linear regression. LSS were considered to be acceptable when adjusted r(2) > 0.9, mean bias <15% and precision <15%. Extent of agreement between the AUC7 values and the LSS AUC was assessed by the intraclass correlation coefficient (ICC) and Bland-Altman (BA) analysis. Agreement was considered to be excellent when the lower limit of the 95% confidence limit of the ICC exceeded 0.9 and when the limits of agreement in the BA analysis were +/-15% for both AUC and dose. Administration of the theoretic adjusted busulfan doses based on each LSS was simulated and cases where the resulting AUC was >1500 or <900 microM x min were noted. LSSs using 1, 2, or 3 plasma busulfan concentrations were developed that showed excellent agreement with AUC7 and adjusted busulfan doses. In the validation sample, only the 2- and 3-point LSSs demonstrated acceptable precision and lack of bias. LSSs using 2 or 3 plasma busulfan concentrations can be used to reliably estimate busulfan AUC after IV administration in children undergoing HSCT.
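The Bland-Altman agreement check named above is straightforward to reproduce; this sketch computes the mean percentage bias and 95% limits of agreement between a full 7-sample AUC and a limited-sampling estimate, using made-up AUC pairs rather than the study data.

```python
import numpy as np

def bland_altman_percent(auc_full, auc_lss):
    """Bland-Altman analysis on percentage differences between the full
    7-sample AUC and a limited-sampling AUC estimate."""
    auc_full, auc_lss = np.asarray(auc_full), np.asarray(auc_lss)
    pct_diff = (auc_lss - auc_full) / ((auc_lss + auc_full) / 2) * 100
    bias = pct_diff.mean()
    half_width = 1.96 * pct_diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)  # mean bias, 95% limits of agreement

# illustrative values (microM x min); acceptance would require limits within +/-15%
auc7 = [1100, 950, 1320, 1005, 1210, 880, 1440]
auc_lss = [1080, 985, 1290, 1040, 1180, 905, 1400]
print(bland_altman_percent(auc7, auc_lss))
```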
Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V
2018-03-01
Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) was used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC0-t (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC0-t of saroglitazar. Only models with coefficients of determination (R2) >0.90 were screened for further evaluation. The best R2 model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlations between predicted and observed AUC0-t of saroglitazar and verification of precision and bias using Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time points models achieved R2 > 0.90. Among the various 3-concentration-time points models, only 4 equations passed the predefined criterion of R2 > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R2 = 0.9323) and 0.75, 2, and 8 hours (R2 = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC0-t prediction of saroglitazar. The same models, when applied to the AUC0-t prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error <30% and correlation (r) was at least 0.9339 in the same pool of healthy subjects. A 3-concentration-time points limited sampling model predicts the exposure of saroglitazar (ie, AUC0-t) within predefined acceptable bias and imprecision limits. The same model was also used to predict AUC0-∞. The same limited sampling model was found to predict the exposure of saroglitazar sulfoxide within predefined criteria. This model can find utility during late-phase clinical development of saroglitazar in the patient population. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.
Spatial Dependence and Sampling of Phytoseiid Populations on Hass Avocados in Southern California.
Lara, Jesús R; Amrich, Ruth; Saremi, Naseem T; Hoddle, Mark S
2016-04-22
Research on phytoseiid mites has been critical for developing an effective biocontrol strategy for suppressing Oligonychus perseae Tuttle, Baker, and Abatiello (Acari: Tetranychidae) in California avocado orchards. However, basic understanding of the spatial ecology of natural populations of phytoseiids in relation to O. perseae infestations and the validation of research-based strategies for assessing densities of these predators has been limited. To address these shortcomings, cross-sectional and longitudinal observations consisting of >3,000 phytoseiids and 500,000 O. perseae counted on 11,341 leaves were collected across 10 avocado orchards during a 10-yr period. Subsets of these data were analyzed statistically to characterize the spatial distribution of phytoseiids in avocado orchards and to evaluate the merits of developing binomial and enumerative sampling strategies for these predators. Spatial correlation of phytoseiids between trees was detected at one site, and a strong association of phytoseiids with elevated O. perseae densities was detected at four sites. Sampling simulations revealed that enumeration-based sampling performed better than binomial sampling for estimating phytoseiid densities. The ecological implications of these findings and potential for developing a custom sampling plan to estimate densities of phytoseiids inhabiting sampled trees in avocado orchards in California are discussed. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Goede, S Lucas; van Roon, Aafke H C; Reijerink, Jacqueline C I Y; van Vuuren, Anneke J; Lansdorp-Vogelaar, Iris; Habbema, J Dik F; Kuipers, Ernst J; van Leerdam, Monique E; van Ballegooijen, Marjolein
2013-05-01
The sensitivity and specificity of a single faecal immunochemical test (FIT) are limited. The performance of FIT screening can be improved by increasing the screening frequency or by providing more than one sample in each screening round. This study aimed to evaluate if two-sample FIT screening is cost-effective compared with one-sample FIT. The MISCAN-colon microsimulation model was used to estimate costs and benefits of strategies with either one or two-sample FIT screening. The FIT cut-off level varied between 50 and 200 ng haemoglobin/ml, and the screening schedule was varied with respect to age range and interval. In addition, different definitions for positivity of the two-sample FIT were considered: at least one positive sample, two positive samples, or the mean of both samples being positive. Within an exemplary screening strategy, biennial FIT from the age of 55-75 years, one-sample FIT provided 76.0-97.0 life-years gained (LYG) per 1000 individuals, at a cost of € 259,000-264,000 (range reflects different FIT cut-off levels). Two-sample FIT screening with at least one sample being positive provided 7.3-12.4 additional LYG compared with one-sample FIT at an extra cost of € 50,000-59,000. However, when all screening intervals and age ranges were considered, intensifying screening with one-sample FIT provided equal or more LYG at lower costs compared with two-sample FIT. If attendance to screening does not differ between strategies it is recommended to increase the number of screening rounds with one-sample FIT screening, before considering increasing the number of FIT samples provided per screening round.
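The three two-sample positivity definitions compared in the model are simple decision rules; a sketch with an assumed 50 ng haemoglobin/ml cut-off follows.

```python
def fit_two_sample_positive(hb1, hb2, cutoff=50.0, rule="any"):
    """Positivity of a two-sample FIT round (haemoglobin in ng/ml).
    rule: 'any'  -> at least one sample above the cut-off
          'both' -> both samples above the cut-off
          'mean' -> mean of the two samples above the cut-off"""
    if rule == "any":
        return hb1 >= cutoff or hb2 >= cutoff
    if rule == "both":
        return hb1 >= cutoff and hb2 >= cutoff
    if rule == "mean":
        return (hb1 + hb2) / 2 >= cutoff
    raise ValueError(f"unknown rule: {rule}")

print(fit_two_sample_positive(30, 80, rule="any"))    # True
print(fit_two_sample_positive(30, 80, rule="both"))   # False
print(fit_two_sample_positive(30, 80, rule="mean"))   # True (mean = 55)
```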
Two means of sampling sexual minority women: how different are the samples of women?
Boehmer, Ulrike; Clark, Melissa; Timm, Alison; Ozonoff, Al
2008-01-01
We compared 2 sampling approaches of sexual minority women in 1 limited geographic area to better understand the implications of these 2 sampling approaches. Sexual minority women identified through the Census did not differ on average age or the prevalence of raising children from those sampled using nonrandomized methods. Women in the convenience sample were better educated and lived in smaller households. Modeling the likelihood of disability in this population resulted in contradictory parameter estimates by sampling approach. The degree of variation observed both between sampling approaches and between different parameters suggests that the total population of sexual minority women is still unmeasured. Thoroughly constructed convenience samples will continue to be a useful sampling strategy to further research on this population.
Strategies for high-throughput focused-beam ptychography
Jacobsen, Chris; Deng, Junjing; Nashed, Youssef
2017-08-08
X-ray ptychography is being utilized for a wide range of imaging experiments with a resolution beyond the limit of the X-ray optics used. Introducing a parameter for the ptychographic resolution gain Gp (the ratio of the beam size over the achieved pixel size in the reconstructed image), strategies for data sampling and for increasing imaging throughput when the specimen is at the focus of an X-ray beam are considered. As a result, the tradeoffs between large and small illumination spots are examined.
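The resolution gain defined in the abstract is just a ratio; a worked example with assumed beam and pixel sizes is shown below.

```python
def resolution_gain(beam_size_nm, pixel_size_nm):
    """Gp = beam size / reconstructed pixel size (same units for both)."""
    return beam_size_nm / pixel_size_nm

# e.g. a 200 nm focused beam reconstructed at 10 nm pixels (assumed values)
print(resolution_gain(beam_size_nm=200.0, pixel_size_nm=10.0))   # Gp = 20
```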
Hoorelbeke, Kristof; Koster, Ernst H W; Demeyer, Ineke; Loeys, Tom; Vanderhasselt, Marie-Anne
2016-10-01
Cognitive control plays a key role in both adaptive emotion regulation, such as positive reappraisal, and maladaptive emotion regulation, such as rumination, with both strategies playing a major role in resilience and well-being. As a result, cognitive control training (CCT) targeting working memory functioning may have the potential to reduce maladaptive emotion regulation and increase adaptive emotion regulation. The current study explored the effects of CCT on positive reappraisal ability in a lab context, and deployment and efficacy of positive appraisal and rumination in daily life. A sample of undergraduates (n = 83) was allocated to CCT or an active control condition, performing 10 online training sessions over a period of 14 days. Effects on regulation of affective states in daily life were assessed using experience sampling over a 7-day posttraining period. Results revealed a positive association between baseline cognitive control and self-reported use of adaptive emotion regulation strategies, whereas maladaptive emotion regulation strategies showed a negative association. CCT showed transfer to working memory functioning on the dual n-back task. Overall, effects of CCT on emotion regulation were limited to reducing deployment of rumination in low positive affective states. However, we did not find beneficial effects on indicators of adaptive emotion regulation. These findings are in line with previous studies targeting maladaptive emotion regulation but suggest limited use in enhancing adaptive emotion regulation in a healthy sample. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Alm Pfrunder, A; Falk, A; Vicente, V; Lindström, V
2018-04-20
To explore the strategies of nurses working in the ambulance service while caring for patients with limited Swedish-English proficiency. Communication difficulties due to the lack of a mutual language are a challenge for health care systems around the world. Little is known about nurses' strategies when caring for patients with whom they do not share a mutual language in the unstructured, unplanned prehospital emergency environment of the ambulance service. A qualitative study design based on interviews was used, with purposeful and snowball sampling to identify nurses with prehospital emergency experience of caring for patients with limited Swedish-English proficiency. Eleven nurses were interviewed, and the main strategy they used was adapting to the patients' needs and the caring situation. The nurses used their own body and tone of voice to create a sense of trust and security. The nurses also used structured assessment in accordance with medical guidelines. Translation devices and relatives/bystanders were used as interpreters when possible. Another strategy was to transport the patient directly to the emergency department, since the nurses had not found a secure way of assessing and caring for the patients in the ambulance. The nurses used a palette of strategies for assessing and caring for patients when there was no mutual language between the caregiver and the care seeker. This article is protected by copyright. All rights reserved.
Schreiber, P W; Köhler, N; Cervera, R; Hasse, B; Sax, H; Keller, P M
2018-07-01
A growing number of Mycobacterium chimaera infections after cardiosurgery have been reported by several countries. These potentially fatal infections were traced back to contaminated heater-cooler devices (HCDs), which use water as a heat transfer medium. Aerosolization of water contaminated with M. chimaera from HCDs enables airborne transmission to patients undergoing open chest surgery. Infection control teams test HCD water samples for mycobacterial growth to guide preventive measures. The detection limit of M. chimaera in water samples, however, has not previously been investigated. To determine the detection limit of M. chimaera in water samples using laboratory-based serial dilution tests. An M. chimaera strain representative of the international cardiosurgery-associated M. chimaera outbreak was used to generate a logarithmic dilution series. Two different water volumes, 50 and 1000 mL, were inoculated, and, after identical processing (centrifugation, decantation, and decontamination), seeded on mycobacteria growth indicator tube (MGIT) and Middlebrook 7H11 solid media. MGIT consistently showed a lower detection limit than 7H11 solid media, corresponding to a detection limit of ≥1.44 × 10⁴ cfu/mL for 50 mL and ≥2.4 cfu/mL for 1000 mL water samples. Solid media failed to detect M. chimaera in 50 mL water samples. Depending on water volume and culture method, major differences exist in the detection limit of M. chimaera. In terms of sensitivity, 1000 mL water samples in MGIT media performed best. Our results have important implications for infection prevention and control strategies in mitigation of the M. chimaera outbreak and healthcare water safety in general. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Fear of Failure, Self-Handicapping, and Negative Emotions in Response to Failure
ERIC Educational Resources Information Center
Bartels, Jared M.; Herman, William E.
2011-01-01
Research suggests that students who fear failure are likely to utilize cognitive strategies such as self-handicapping that serve to perpetuate failure. Such devastating motivational dispositions clearly limit academic success. The present study examined negative emotional responses to scenarios involving academic failure among a sample of…
Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga
2015-01-01
Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer’s, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively, we discuss how analysis can be repeated from saved sequencing images using the Long Template Protocol to increase accuracy. PMID:25860802
Program Evaluation of Outcomes Based Orthotic and Prosthetic Education
2007-12-01
[Extraction residue from a report documentation page and table of contents. Recoverable content: writing of new Biomechanics and Gait curricula and syllabi; implementation of online learning strategies through the Blackboard online learning platform; subject terms: prosthetics, orthotics, prosthesis, orthosis, amputation, outcomes-based evaluation; appendices include a sample Biomechanics syllabus and a sample final examination.]
Maturation and sexual ontogeny in the spangled emperor Lethrinus nebulosus.
Marriott, R J; Jarvis, N D C; Adams, D J; Gallash, A E; Norriss, J; Newman, S J
2010-04-01
The reproductive development and sexual ontogeny of spangled emperor Lethrinus nebulosus populations in the Ningaloo Marine Park (NMP) were investigated to obtain an improved understanding of its evolved reproductive strategy and data for fisheries management. Evidence derived from (1) analyses of histological data and sampled sex ratios with size and age, (2) the identification of residual previtellogenic oocytes in immature and mature testes sampled during the spawning season and (3) observed changes in testis internal structure with increasing fish size and age, demonstrated a non-functional protogynous hermaphroditic strategy (or functional gonochorism). All the smallest and youngest fish sampled were female until they either changed sex to male at a mean 277.5 mm total length (L(T)) and 2.3 years old or remained female and matured at a larger mean L(T) (392.1 mm) and older age (3.5 years). Gonad masses were similar for males and females over the size range sampled and throughout long reproductive lives (up to a maximum estimated age of c. 31 years), which was another correlate of functional gonochorism. That the mean L(T) at sex change and female maturity were below the current minimum legal size (MLS) limit (410 mm) demonstrated that the current MLS limit is effective for preventing recreational fishers in the NMP retaining at least half of the juvenile males and females in their landed catches.
Genetic variation in avocado stem weevils Copturus aguacatae (Coleoptera: Curculionidae) in Mexico.
Engstrand, Rachel C; Cibrián Tovar, Juan; Cibrián-Jaramillo, Angélica; Kolokotronis, Sergios-Orestis
2010-12-01
The avocado stem weevil Copturus aguacatae is an important pest in avocado plantations. Its presence hinders the production and marketing of avocado in Mexico, the largest avocado producer worldwide. Biological control through pheromone synthesis, a strategy favored over chemical control in crops, is currently limited by difficult field identification of this species. Using DNA barcoding, we examine the patterns of genetic variation of C. aguacatae in avocado trees in Mexico to help facilitate its identification and biological control. We show that there is one single species of avocado stem weevil throughout the sampled sites in Mexico. Overall, haplotype diversity is high, with Oaxaca forming one distinct group and all other sampled populations are admixed irrespective of geographic origin. The results suggest that high gene flow is maintained in this species and that a global strategy for biocontrol can be designed and implemented throughout the sampled range.
Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.
Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping
2016-01-01
Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin layer tissues is limited by bias due to the influence of geometry on measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin layer gelatin-agar phantoms, and compared the results with finite element method and Lamb wave model simulations. The results indicated that the Young's modulus measured by SWE decreased continuously as the sample thickness decreased, and this effect was more pronounced at smaller thicknesses. We proposed a new empirical formula which can conveniently correct the bias without the need for complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and Young's modulus measured by SWE in thin layer samples, and offered a simple and practical correction strategy which is convenient for clinicians to use.
Technologies for imaging neural activity in large volumes
Ji, Na; Freeman, Jeremy; Smith, Spencer L.
2017-01-01
Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Collecting data from individual planes, conventional microscopy cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behaviors. Here, we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits image speed, and aberrations, which restrict the image volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point spread function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for the processing and analysis of volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics, and help elucidate how brain regions work in concert to support behavior. PMID:27571194
Recent approaches in sensitive enantioseparations by CE.
Sánchez-Hernández, Laura; Castro-Puyana, María; Marina, María Luisa; Crego, Antonio L
2012-01-01
The latest strategies and instrumental improvements for enhancing the detection sensitivity in chiral analysis by CE are reviewed in this work. Following the previous reviews by García-Ruiz et al. (Electrophoresis 2006, 27, 195-212) and Sánchez-Hernández et al. (Electrophoresis 2008, 29, 237-251; Electrophoresis 2010, 31, 28-43), this review includes those papers that were published during the period from June 2009 to May 2011. These works describe the use of offline and online sample treatment techniques, online sample preconcentration techniques based on electrophoretic principles, and alternative detection systems to UV-Vis to increase the detection sensitivity. The application of the above-mentioned strategies, either alone or combined, to improve the sensitivity in the enantiomeric analysis of a broad range of samples, such as pharmaceutical, biological, food and environmental samples, makes it possible to decrease the limits of detection to as low as 10⁻¹² M. The use of microchips to achieve sensitive chiral separations is also discussed. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Duffaud, Sylvain; Liébart, Sandra
2014-01-01
There is no consensus on the prescription of medicines in many situations in general medicine. The aim of this study was to identify the strategies used by general practitioners to limit prescriptions in order to make their prescriptions more effective. A sample of general practitioners, mixed in terms of age and type of practice, was interviewed using the focus group method until a sufficient amount of data had been obtained. Fourteen women and ten men aged between 32 and 64 years were interviewed by means of three group interviews. Various strategies were identified: the practitioner's attitude (rapid identification of the patient's needs, listening and evaluation of symptoms, support by physical examination) and the use of resources (reference tools and news) during the consultation; the importance of the conclusion of the consultation (written advice or visit report, review of the previous prescription) and explanation (reasons for limitation, reassurance, arguments, proposal of a follow-up visit). Limitation of prescriptions also depends on the practitioner's own reasons (initial and continued training, motivation and personal objectives, participation in a peer group) but equally on the health care system (institutional, specialist support). The study highlights numerous approaches to facilitating the limitation of prescriptions: training and informing practitioners and patients, consultation management, and promoting communication between all parts of the health care system and policy-makers. Training organizations and health authorities could facilitate these strategies for the benefit of patients.
Clinical trial designs for testing biomarker-based personalized therapies
Lai, Tze Leung; Lavori, Philip W; Shih, Mei-Chiung I; Sikic, Branimir I
2014-01-01
Background Advances in molecular therapeutics in the past decade have opened up new possibilities for treating cancer patients with personalized therapies, using biomarkers to determine which treatments are most likely to benefit them, but there are difficulties and unresolved issues in the development and validation of biomarker-based personalized therapies. We develop a new clinical trial design to address some of these issues. The goal is to capture the strengths of the frequentist and Bayesian approaches to address this problem in the recent literature and to circumvent their limitations. Methods We use generalized likelihood ratio tests of the intersection null and enriched strategy null hypotheses to derive a novel clinical trial design for the problem of advancing promising biomarker-guided strategies toward eventual validation. We also investigate the usefulness of adaptive randomization (AR) and futility stopping proposed in the recent literature. Results Simulation studies demonstrate the advantages of testing both the narrowly focused enriched strategy null hypothesis related to validating a proposed strategy and the intersection null hypothesis that can accommodate to a potentially successful strategy. AR and early termination of ineffective treatments offer increased probability of receiving the preferred treatment and better response rates for patients in the trial, at the expense of more complicated inference under small-to-moderate total sample sizes and some reduction in power. Limitations The binary response used in the development phase may not be a reliable indicator of treatment benefit on long-term clinical outcomes. In the proposed design, the biomarker-guided strategy (BGS) is not compared to ‘standard of care’, such as physician’s choice that may be informed by patient characteristics. Therefore, a positive result does not imply superiority of the BGS to ‘standard of care’. The proposed design and tests are valid asymptotically. Simulations are used to examine small-to-moderate sample properties. Conclusion Innovative clinical trial designs are needed to address the difficulties and issues in the development and validation of biomarker-based personalized therapies. The article shows the advantages of using likelihood inference and interim analysis to meet the challenges in the sample size needed and in the constantly evolving biomarker landscape and genomic and proteomic technologies. PMID:22397801
2009-03-01
[Fragmented report excerpt on contaminated sediment handling. Recoverable content: the plastic limit of the tested sediment samples averaged 43.4 percent and the liquid limit was 77.7 percent; sediment from the drums was to be cleaned of debris such as plastic, clam shells and sticks, and mixed to a density maintained consistently among all samples; although pollutant loading to aquatic ecosystems has decreased in past decades, contaminated sediments have shifted from a sink to a potential source.]
Wildfires and tourist behaviors in Florida
Brijesh Thapa; Ignatius Cahyanto; Stephen M. Holland; James D. Absher
2013-01-01
The impacts of wildfires on tourism have largely been examined with emphasis on economic losses and recovery strategies. Given the limited research from a demand perspective, this study examined tourist risk perceptions and reactionary behaviors toward wildfires in Florida. Data (N = 771) was collected among a U.S. sample of non-resident overnight leisure travelers...
The Positive Event Scale: Measuring Uplift Frequency and Intensity in an Adult Sample
ERIC Educational Resources Information Center
Maybery, D. J.; Jones-Ellis, Jenny; Neale, Jason; Arentz, Alexandra
2006-01-01
Of the limited number of studies employing positive event (uplift) measures, none have published detailed psychometric information about the scale that was used (Maybery and Graham, 2001, Stress and Health 17, pp. 91-104). Building on previous work with university students and employing conceptually distinct measurement strategies (i.e. measuring…
Filippidou, Sevasti; Wunderlin, Tina; Junier, Thomas; Jeanneret, Nicole; Dorador, Cristina; Molina, Veronica; Johnson, David R.; Junier, Pilar
2016-01-01
Environmental conditions unsuitable for microbial growth are the rule rather than the exception in most habitats. In response to this, microorganisms have developed various strategies to withstand environmental conditions that limit active growth. Endospore-forming Firmicutes (EFF) deploy a myriad of survival strategies in order to resist adverse conditions. Like many bacterial groups, they can form biofilms and detect nutrient scarcity through chemotaxis. Moreover, within this paraphyletic group of Firmicutes, ecophysiological optima are diverse. Nonetheless, a response to adversity that delimits this group is the formation of wet-heat resistant spores. These strategies are energetically demanding and therefore might affect the biological success of EFF. Therefore, we hypothesize that abundance and diversity of EFF should be maximized in those environments in which the benefits of these survival strategies offset the energetic cost. In order to address this hypothesis, geothermal and mineral springs and drillings were selected because in these environments of steep physicochemical gradients, diversified survival strategies may become a successful strategy. We collected 71 samples from geothermal and mineral environments characterized by none (null), single or multiple limiting environmental factors (temperature, pH, UV radiation, and specific mineral composition). To measure success, we quantified EFF gene copy numbers (GCN; spo0A gene) in relation to total bacterial GCN (16S rRNA gene), as well as the contribution of EFF to community composition. The quantification showed that relative GCN for EFF reached up to 20% at sites characterized by multiple limiting environmental factors, whereas it corresponded to less than 1% at sites with one or no limiting environmental factor. Pyrosequencing of the 16S rRNA gene supports a higher contribution of EFF at sites with multiple limiting factors. Community composition suggested a combination of phylotypes for which active growth could be expected, and phylotypes that are most likely in the state of endospores, in all the sites. In summary, our results suggest that diversified survival strategies, including sporulation and metabolic adaptations, explain the biological success of EFF in geothermal and natural springs, and that multiple extreme environmental factors favor the prevalence of EFF. PMID:27857706
Srinivas, Nuggehally R
2016-01-01
In the present age of polypharmacy, a limited sampling strategy becomes important for verifying that drug levels remain within prescribed threshold limits from efficacy and safety standpoints. Reliable single-time-point concentration models for predicting exposure are likewise important from cost and time perspectives. A simple unweighted linear regression model was developed to describe the relationship between Cmax and AUC for fexofenadine, losartan, EXP3174, itraconazole and hydroxyitraconazole. The fold difference, defined as the quotient of the observed and predicted AUC values, was evaluated along with statistical comparison of the predicted versus observed values. The correlation between Cmax and AUC was well established for all five drugs, with a correlation coefficient (r) ranging from 0.9130 to 0.9997. The majority of the predicted values across the five drugs (77%) were contained within a narrow boundary of 0.75- to 1.5-fold difference. The r values for observed versus predicted AUC were 0.9653 (n = 145), 0.8342 (n = 76), 0.9524 (n = 88), 0.9339 (n = 89) and 0.9452 (n = 66) for fexofenadine, losartan, EXP3174, itraconazole and hydroxyitraconazole, respectively. Cmax versus AUC relationships were established for all drugs and were amenable to a limited sampling strategy for AUC prediction. However, fexofenadine, EXP3174 and hydroxyitraconazole may be the most suitable for AUC prediction from a single time-point concentration, as judged by the various criteria applied in this study.
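The regress-then-screen approach described above (regressing AUC on a single concentration and flagging predictions by fold difference) can be illustrated with a minimal sketch; the Cmax and AUC values below are invented, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical Cmax (ng/mL) and observed AUC (ng*h/mL) pairs for one drug
cmax = np.array([120.0, 95.0, 210.0, 160.0, 75.0, 140.0])
auc_obs = np.array([980.0, 760.0, 1720.0, 1310.0, 600.0, 1150.0])

# Unweighted linear regression of AUC on Cmax (single-time-point predictor)
slope, intercept, r, p, se = stats.linregress(cmax, auc_obs)
auc_pred = intercept + slope * cmax

# Fold difference = observed / predicted; flag predictions outside the 0.75- to 1.5-fold band
fold = auc_obs / auc_pred
within = (fold >= 0.75) & (fold <= 1.5)
print(f"r = {r:.4f}, {within.mean():.0%} of predictions within 0.75- to 1.5-fold")
```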
Vallecillos, Laura; Pocurull, Eva; Borrull, Francesc
2015-03-01
Musk compounds are widely used as fragrances in personal care products. On account of their widespread use and low biodegradation, they can be found in environmental samples. In our study, two extraction methodologies were compared and different clean-up strategies were studied in order to develop a reliable analytical method, with minimal matrix effect and good detection limits, to determine synthetic musk fragrances (six polycyclic musks, three nitro musks and the degradation product of one polycyclic musk) in fish and mussel samples. The first extraction technique involves a QuEChERS extraction, a consolidated extraction methodology in the field of food analysis that has attracted growing interest over recent years, followed by dispersive solid-phase extraction (dSPE) as the clean-up strategy. The second extraction technique consists of a conventional pressurised liquid extraction (PLE) with dichloromethane and an in-cell clean-up to decrease the matrix effect and remove the undesired components present in PLE extracts. Large volume injection (LVI) followed by gas chromatography-ion trap-tandem mass spectrometry (GC-IT-MS/MS) was chosen as the separation and detection technique. Validation parameters, such as method detection limits and method quantification limits, were found at ng g⁻¹ levels for both fish and mussel matrices. Good intra-day and inter-day repeatability was obtained when analysing fish and mussel samples spiked at 50 ng g⁻¹ (d.w.) (n = 5, RSDs < 17%). The developed PLE/GC-IT-MS/MS method was successfully applied to determine the target musk fragrances in fish and mussel samples from the local market in Tarragona and fish samples from the Ebro River. The results showed the presence of galaxolide (2.97-18.04 ng g⁻¹ (d.w.)) and tonalide (1.17-8.42 ng g⁻¹ (d.w.)) in all the samples analysed, while the remaining polycyclic musks, such as cashmeran, celestolide and phantolide, were only detected in some of the fish samples. None of the samples analysed contained detectable traces of the nitro musks studied.
Zakumumpa, Henry; Taiwo, Modupe Oladunni; Muganzi, Alex; Ssengooba, Freddie
2016-10-19
Human resources for health (HRH) constraints are a major barrier to the sustainability of antiretroviral therapy (ART) scale-up programs in Sub-Saharan Africa. Many prior approaches to HRH constraints have taken a top-down trend of generalized global strategies and policy guidelines. The objective of the study was to examine the human resources for health strategies adopted by front-line providers in Uganda to sustain ART delivery beyond the initial ART scale-up phase between 2004 and 2009. A two-phase mixed-methods approach was adopted. In the first phase, a survey of a nationally representative sample of health facilities (n = 195) across Uganda was conducted. The second phase involved in-depth interviews (n = 36) with ART clinic managers and staff of 6 of the 195 health facilities purposively selected from the first study phase. Quantitative data was analysed based on descriptive statistics, and qualitative data was analysed by coding and thematic analysis. The identified strategies were categorized into five themes: (1) providing monetary and non-monetary incentives to health workers on busy ART clinic days; (2) workload reduction through spacing ART clinic appointments; (3) adopting training workshops in ART management as a motivation strategy for health workers; (4) adopting non-physician-centred staffing models; and (5) devising ART program leadership styles that enhanced health worker commitment. Facility-level strategies for responding to HRH constraints are feasible and can contribute to efforts to increase country ownership of HIV programs in resource-limited settings. Consideration of the human resources for health strategies identified in the study by ART program planners and managers could enhance the long-term sustainment of ART programs by providers in resource-limited settings.
NASA Astrophysics Data System (ADS)
Aberson, Sim David
In 1997, the National Hurricane Center and the Hurricane Research Division began conducting operational synoptic surveillance missions with the Gulfstream IV-SP jet aircraft to improve operational forecast models. During the first two years, twenty-four missions were conducted around tropical cyclones threatening the continental United States, Puerto Rico, and the Virgin Islands. Global Positioning System dropwindsondes were released from the aircraft at 150-200 km intervals along the flight track in the tropical cyclone environment to obtain wind, temperature, and humidity profiles from flight level (around 150 hPa) to the surface. The observations were processed and formatted aboard the aircraft and transmitted to the National Centers for Environmental Prediction (NCEP). There, they were ingested into the Global Data Assimilation System that subsequently provides initial and time-dependent boundary conditions for numerical models that forecast tropical cyclone track and intensity. Three dynamical models were employed in testing the targeting and sampling strategies. With the assimilation into the numerical guidance of all the observations gathered during the surveillance missions, only the 12-h Geophysical Fluid Dynamics Laboratory Hurricane Model forecast showed statistically significant improvement. Neither the forecasts from the Aviation run of the Global Spectral Model nor the shallow-water VICBAR model were improved with the assimilation of the dropwindsonde data. This mediocre result is found to be due mainly to the difficulty in operationally quantifying the storm-motion vector used to create accurate synthetic data to represent the tropical cyclone vortex in the models. A secondary limit on forecast improvements from the surveillance missions is the limited amount of data provided by the one surveillance aircraft in regular missions. The inability of some surveillance missions to surround the tropical cyclone with dropwindsonde observations is a possible third limit, though the results are inconclusive. Due to limited aircraft resources, optimal observing strategies for these missions must be developed. Since observations in areas of decaying error modes are unlikely to have large impact on subsequent forecasts, such strategies should be based on taking observations in those geographic locations corresponding to the most rapidly growing error modes in the numerical models and on known deficiencies in current data assimilation systems. Here, the most rapidly growing modes are represented by areas of large forecast spread in the NCEP bred-mode global ensemble forecasting system. The sampling strategy requires sampling the entire target region at approximately the same resolution as the North American rawinsonde network to limit the possibly spurious spread of information from dropwindsonde observations into data-sparse regions where errors are likely to grow. When only the subset of data in these fully-sampled target regions is assimilated into the numerical models, statistically significant reductions of the track forecast errors of up to 25% within the critical first two days of the forecast are seen. These model improvements are comparable with the cumulative business-as-usual track forecast model improvements expected over eighteen years.
Fisher, Shawn C.; Reilly, Timothy J.; Jones, Daniel K.; Benzel, William M.; Griffin, Dale W.; Loftin, Keith A.; Iwanowicz, Luke R.; Cohl, Jonathan A.
2015-12-17
An understanding of the effects on human and ecological health brought by major coastal storms or flooding events is typically limited because of a lack of regionally consistent baseline and trends data in locations proximal to potential contaminant sources and mitigation activities, sensitive ecosystems, and recreational facilities where exposures are probable. In an attempt to close this gap, the U.S. Geological Survey (USGS) has implemented the Sediment-bound Contaminant Resiliency and Response (SCoRR) strategy pilot study to collect regional sediment-quality data prior to and in response to future coastal storms. The standard operating procedure (SOP) detailed in this document serves as the sample-collection protocol for the SCoRR strategy by providing step-by-step instructions for site preparation, sample collection and processing, and shipping of soil and surficial sediment (for example, bed sediment, marsh sediment, or beach material). The objectives of the SCoRR strategy pilot study are (1) to create a baseline of soil-, sand-, marsh sediment-, and bed-sediment-quality data from sites located in the coastal counties from Maine to Virginia based on their potential risk of being contaminated in the event of a major coastal storm or flooding (defined as Resiliency mode); and (2) respond to major coastal storms and flooding by reoccupying select baseline sites and sampling within days of the event (defined as Response mode). For both modes, samples are collected in a consistent manner to minimize bias and maximize quality control by ensuring that all sampling personnel across the region collect, document, and process soil and sediment samples following the procedures outlined in this SOP. Samples are analyzed using four USGS-developed screening methods—inorganic geochemistry, organic geochemistry, pathogens, and biological assays—which are also outlined in this SOP. Because the SCoRR strategy employs a multi-metric approach for sample analyses, this protocol expands upon and reconciles differences in the sample collection protocols outlined in the USGS “National Field Manual for the Collection of Water-Quality Data,” which should be used in conjunction with this SOP. A new data entry and sample tracking system also is presented to ensure all relevant data and metadata are gathered at the sample locations and in the laboratories.
Monitoring biological diversity: strategies, tools, limitations, and challenges
Beever, E.A.
2006-01-01
Monitoring is an assessment of the spatial and temporal variability in one or more ecosystem properties, and is an essential component of adaptive management. Monitoring can help determine whether mandated environmental standards are being met and can provide an early-warning system of ecological change. Development of a strategy for monitoring biological diversity will likely be most successful when based upon clearly articulated goals and objectives and may be enhanced by including several key steps in the process. Ideally, monitoring of biological diversity will measure not only composition, but also structure and function at the spatial and temporal scales of interest. Although biodiversity monitoring has several key limitations as well as numerous theoretical and practical challenges, many tools and strategies are available to address or overcome such challenges; I summarize several of these. Due to the diversity of spatio-temporal scales and comprehensiveness encompassed by existing definitions of biological diversity, an effective monitoring design will reflect the desired sampling domain of interest and its key stressors, available funding, legal requirements, and organizational goals.
Thirst distress and interdialytic weight gain: how do they relate?
Jacob, Sheena; Locking-Cusolito, Heather
2004-01-01
Thirst is a frequent and stressful symptom experienced by hemodialysis patients. Several studies have noted a positive relationship between thirst and interdialytic weight gain (IDWG). These factors prompted us to consider ways that we could intervene to reduce thirst and IDWG through an educative, supportive nursing intervention. This paper presents the results of a pilot research project, the purpose of which was to examine the relationship between thirst distress (the negative symptoms associated with thirst) and IDWG in a sample of our patients, describe patients' strategies for management of thirst, and establish the necessary sample size for the planned intervention study. The pilot results showed that in a small sample of 20, there was a mildly positive, though not statistically significant, correlation between thirst distress and IDWG (r = 0.117). Subjects shared a wide variety of thirst management strategies, including limiting salt intake, using ice chips, measuring the daily allotment, performing mouth care, eating raw fruits and vegetables, sucking on hard candy and chewing gum. This pilot research project showed that, given an alpha of 0.05 and a power of 80%, we will require a sample of 39 subjects to detect a 20% change in IDWG. We will employ these results to plan our intervention study, first by establishing the appropriate sample size and second by incorporating identified patient strategies into an educational pamphlet that will form the basis of our intervention.
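A power calculation of the kind described (not the authors' exact computation, since their variance and effect-size inputs are not reported) might look like the sketch below, framed as a paired t-test; the standardized effect size is a placeholder assumption chosen only for illustration.

```python
from statsmodels.stats.power import TTestPower

# Placeholder standardized effect size assumed for a 20% change in IDWG
effect_size = 0.46

analysis = TTestPower()  # one-sample / paired t-test power analysis
n_required = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=0.80,
                                  alternative="two-sided")
print(f"Required sample size: {n_required:.1f} subjects")
```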
Field spectroscopy sampling strategies for improved measurement of Earth surface reflectance
NASA Astrophysics Data System (ADS)
Mac Arthur, A.; Alonso, L.; Malthus, T. J.; Moreno, J. F.
2013-12-01
Over the last two decades extensive networks of research sites have been established to measure the flux of carbon compounds and water vapour between the Earth's surface and the atmosphere using eddy covariance (EC) techniques. However, contributing Earth surface components cannot be determined and (as the 'footprints' are spatially constrained) these measurements cannot be extrapolated to regional cover using this technique. At many of these EC sites researchers have been integrating spectral measurements with EC and ancillary data to better understand light use efficiency and carbon dioxide flux. These spectroscopic measurements could also be used to assess contributing components and provide support for imaging spectroscopy, from airborne or satellite platforms, which can provide unconstrained spatial cover. Furthermore, there is an increasing interest in 'smart' database and information retrieval systems such as those proposed by EcoSIS and OPTIMISE to store, analyse, QA and merge spectral and biophysical measurements and provide information to end users. However, as Earth surfaces are spectrally heterogeneous and imaging and field spectrometers sample different spatial extents, appropriate field sampling strategies need to be adopted. To sample Earth surfaces, spectroscopists adopt single, random, regular-grid, transect, or 'swiping' point sampling strategies, although little comparative work has been carried out to determine the most appropriate approach; the work by Goetz (2012) is a limited exception. Mac Arthur et al (2012) demonstrated that, for two full wavelength (400 nm to 2,500 nm) field spectroradiometers, the measurement area sampled is defined by each spectroradiometer/fore optic system's directional response function (DRF) rather than the field-of-view (FOV) specified by instrument manufacturers. Mac Arthur et al (2012) also demonstrated that each reflecting element within the sampled area was not weighted equally in the integrated measurement recorded. There were non-uniformities of spectral response, with the spectral 'weighting' per wavelength interval being positionally dependent and unique to each spectroradiometer/fore optic system investigated. However, Mac Arthur et al (2012) did not provide any advice on how to compensate for these systematic errors or advise on appropriate sampling strategies. The work reported here will provide the first systematic study of the effect of field spectroscopy sampling strategies for a range of different Earth surface types. Synthetic Earth surface hyperspectral data cubes for each surface type were generated and convolved with a range of the spectrometer/fore optic system directional response functions generated by Mac Arthur et al (2013) to simulate spectroscopic measurements of Earth surfaces. This has enabled different field sampling strategies to be directly compared, their suitability for each measurement purpose and surface type to be assessed, and robust field spectroscopy sampling strategy recommendations to be made. This will be of particular interest to the carbon and water vapour flux communities and will assist the development of sampling strategies for field spectroscopy from rotary-wing Unmanned Aerial Vehicles, which will aid the acquisition of measurements in the spatial domain, and generally further the use of field spectroscopy for quantitative Earth observation.
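The idea of simulating a field-spectroscopy sample by weighting a synthetic surface with an instrument's directional response function can be mocked up as a simple weighted average; the Gaussian DRF and two-cover-type reflectance map below are purely illustrative and not the synthetic data cubes or DRFs used in the study.

```python
import numpy as np

def simulate_measurement(reflectance, drf):
    """Simulate one spectroradiometer sample as a DRF-weighted average of the surface."""
    weights = drf / drf.sum()
    return float((reflectance * weights).sum())

# Synthetic heterogeneous surface: random patches of two cover types (one spectral band shown)
rng = np.random.default_rng(0)
surface = np.where(rng.random((200, 200)) > 0.5, 0.45, 0.10)

# Hypothetical non-uniform DRF: an off-centre 2-D Gaussian instead of a uniform field of view
y, x = np.mgrid[0:200, 0:200]
drf = np.exp(-(((x - 90) ** 2) / (2 * 35 ** 2) + ((y - 110) ** 2) / (2 * 25 ** 2)))

print(f"uniform-FOV mean = {surface.mean():.3f}, "
      f"DRF-weighted sample = {simulate_measurement(surface, drf):.3f}")
```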
Pannus, Pieter; Fajardo, Emmanuel; Metcalf, Carol; Coulborn, Rebecca M; Durán, Laura T; Bygrave, Helen; Ellman, Tom; Garone, Daniela; Murowa, Michael; Mwenda, Reuben; Reid, Tony; Preiser, Wolfgang
2013-10-01
Rollout of routine HIV-1 viral load monitoring is hampered by high costs and logistical difficulties associated with sample collection and transport. New strategies are needed to overcome these constraints. Dried blood spots from finger pricks have been shown to be more practical than the use of plasma specimens, and pooling strategies using plasma specimens have been demonstrated to be an efficient method to reduce costs. This study found that combination of finger-prick dried blood spots and a pooling strategy is a feasible and efficient option to reduce costs, while maintaining accuracy in the context of a district hospital in Malawi.
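The cost argument for pooling can be made concrete with the standard two-stage (Dorfman) calculation: test each pool once, then retest every member of a positive pool individually. The prevalence and pool sizes below are illustrative assumptions, not figures from the study.

```python
def expected_tests_per_sample(prevalence, pool_size):
    """Expected number of tests per sample under two-stage (Dorfman) pooling."""
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return 1 / pool_size + p_pool_positive

# Illustrative: 10% of samples have a detectable viral load
for k in (3, 5, 10):
    e = expected_tests_per_sample(0.10, k)
    print(f"pool size {k}: {e:.2f} tests/sample ({1 - e:.0%} fewer tests than individual testing)")
```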
NASA Astrophysics Data System (ADS)
Nischkauer, Winfried; Vanhaecke, Frank; Bernacchi, Sébastien; Herwig, Christoph; Limbeck, Andreas
2014-11-01
Nebulising liquid samples and using the aerosol thus obtained for further analysis is the standard method in many current analytical techniques, also with inductively coupled plasma (ICP)-based devices. With such a set-up, quantification via external calibration is usually straightforward for samples with aqueous or close-to-aqueous matrix composition. However, there is a variety of more complex samples. Such samples can be found in medical, biological, technological and industrial contexts and can range from body fluids, like blood or urine, to fuel additives or fermentation broths. Specialized nebulizer systems or careful digestion and dilution are required to tackle such demanding sample matrices. One alternative approach is to convert the liquid into a dried solid and to use laser ablation for sample introduction. Up to now, this approach required the application of internal standards or matrix-adjusted calibration due to matrix effects. In this contribution, we show a way to circumvent these matrix effects while using simple external calibration for quantification. The principle of representative sampling that we propose uses radial line-scans across the dried residue. This compensates for centro-symmetric inhomogeneities typically observed in dried spots. The effectiveness of the proposed sampling strategy is exemplified via the determination of phosphorus in biochemical fermentation media. However, the universal viability of the presented measurement protocol is postulated. Detection limits using laser ablation-ICP-optical emission spectrometry were in the order of 40 μg mL⁻¹ with a reproducibility of 10% relative standard deviation (n = 4, concentration = 10 times the quantification limit). The reported sensitivity is fit-for-purpose in the biochemical context described here, but could be improved using ICP-mass spectrometry, if future analytical tasks would require it. Trueness of the proposed method was investigated by cross-validation with conventional liquid measurements, and by analyzing IAEA-153 reference material (Trace Elements in Milk Powder); a good agreement with the certified value for phosphorus was obtained.
VizieR Online Data Catalog: XMM-Newton Bright Serendipitous Survey (Della Ceca+, 2004)
NASA Astrophysics Data System (ADS)
Della Ceca, R.; Maccacaro, T.; Caccianiga, A.; Severgnini, P.; Braito, V.; Barcons, X.; Carrera, F. J.; Watson, M. G.; Tedds, J. A.; Brunner, H.; Lehmann, I.; Page, M. J.; Lamer, G.; Schwope, A.
2005-09-01
We present here "The XMM-Newton Bright Serendipitous Survey", composed of two flux-limited samples: the XMM-Newton Bright Source Sample (BSS, hereafter) and the XMM-Newton "Hard" Bright Source Sample (HBSS, hereafter), having a flux limit of fX ~ 7×10⁻¹⁴ erg cm⁻² s⁻¹ in the 0.5-4.5 keV and 4.5-7.5 keV energy bands, respectively. After discussing the main goals of this project and the survey strategy, we present the basic data on a complete sample of 400 X-ray sources (389 of them belong to the BSS, 67 to the HBSS with 56 X-ray sources in common) derived from the analysis of 237 suitable XMM-Newton fields (211 for the HBSS). At the flux limit of the survey we cover a survey area of 28.10 (25.17 for the HBSS) sq. deg. The extragalactic number-flux relationships (in the 0.5-4.5 keV and in the 4.5-7.5 keV energy bands) are in good agreement with previous and new results making us confident about the correctness of data selection and analysis. (5 data files).
Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi
2015-12-01
A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions and antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than those on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. Antibiotics with highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.
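The pKa-versus-extraction-pH reasoning follows from the Henderson-Hasselbalch relation: a compound is extracted more efficiently when it is mostly in its neutral form. The sketch below illustrates this for a hypothetical acidic analyte; the pKa value is a placeholder, not one of the paper's compounds.

```python
def fraction_neutral(pKa, pH, acid=True):
    """Fraction of a monoprotic compound present in its neutral form at a given pH.
    For an acid (HA), the neutral form dominates when pH << pKa;
    for a base (B), the neutral form dominates when pH >> pKa of the conjugate acid."""
    if acid:
        return 1 / (1 + 10 ** (pH - pKa))
    return 1 / (1 + 10 ** (pKa - pH))

# Hypothetical acidic antibiotic with pKa 4.5: mostly neutral (better extracted) at pH 3
for pH in (3.0, 7.0):
    print(f"pH {pH}: neutral fraction = {fraction_neutral(4.5, pH, acid=True):.2f}")
```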
de Jong, Mats; Florea, Anca; Vries, Anne-Mare de; van Nuijs, Alexander L N; Covaci, Adrian; Van Durme, Filip; Martins, José C; Samyn, Nele; De Wael, Karolien
2018-04-17
The present work investigates the electrochemical determination of cocaine in the presence of levamisole, one of the most common adulterants found in cocaine street samples. Levamisole misleads cocaine color tests, giving a blue color (positive test) even in the absence of cocaine. Moreover, the electrochemical detection of cocaine is also affected by the presence of levamisole, with a suppression of the oxidation signal of cocaine. When levamisole is present in the sample in ratios higher than 1:1, the cocaine signal is no longer detected, thus leading to false negative results. Mass spectrometry and nuclear magnetic resonance were used to investigate if the signal suppression is due to the formation of a complex between cocaine and levamisole in bulk solution. Strategies to eliminate this suppressing effect are further suggested in this manuscript. In a first approach, the increase of the pH of the sample solution from pH 7 to pH 12 allowed the voltammetric determination of cocaine in the presence of levamisole in a concentration range from 10 to 5000 μM at nonmodified graphite disposable electrodes with a detection limit of 5 μM. In a second approach, the graphite electrode was cathodically pretreated, resulting in the presence of oxidation peaks of both cocaine and levamisole, with a detection limit for cocaine of 3 μM over the linear range of concentrations from 10 to 2500 μM. Both these strategies have been successfully applied for the simultaneous detection of cocaine and levamisole in three street samples on unmodified graphite disposable electrodes.
Writing for Learning in Science: A Secondary Analysis of Six Studies
ERIC Educational Resources Information Center
Gunel, Murat; Hand, Brian; Prain, Vaughan
2007-01-01
This study is a secondary analysis of six previous studies that formed part of an ongoing research program focused on examining the benefits of using writing-to-learn strategies within science classrooms. The study is an attempt to make broader generalizations than those based on individual studies, given limitations related to sample sizes,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timchalk, Charles; Weber, Thomas J.; Smith, Jordan N.
Advancements in Exposure Science involving the development and deployment of biomarkers of exposure and biological response are anticipated to significantly (and positively) influence health outcomes associated with occupational, environmental and clinical exposure to chemicals/drugs. To achieve this vision, innovative strategies are needed to develop multiplex sensor platforms capable of quantifying individual and mixed exposures (i.e. systemic dose) by measuring biomarkers of dose and biological response in readily obtainable (non-invasive) biofluids. Secondly, the use of saliva (alternative to blood) for biomonitoring coupled with the ability to rapidly analyze multiple samples in real-time offers an innovative opportunity to revolutionize biomonitoring assessments. In this regard, the timing and number of samples taken for biomonitoring will not be limited as is currently the case. In addition, real-time analysis will facilitate identification of work practices or conditions that are contributing to increased exposures and will make possible a more rapid and successful intervention strategy. The initial development and application of computational models for evaluation of saliva/blood analyte concentration at anticipated exposure levels represents an important opportunity to establish the limits of quantification and robustness of multiplex sensor systems by exploiting a unique computational modeling framework. The use of these pharmacokinetic models will also enable prediction of an exposure dose based on the saliva/blood measurement. This novel strategy will result in a more accurate prediction of exposures and, once validated, can be employed to assess dosimetry to a broad range of chemicals in support of biomonitoring and epidemiology studies.
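A minimal version of the kind of saliva/blood pharmacokinetic calculation described (not the authors' actual model) might use a one-compartment oral-absorption model with a fixed saliva-to-plasma partition ratio; every parameter value below is an assumption for illustration only.

```python
import numpy as np

def one_compartment_oral(t, dose, F, ka, ke, Vd):
    """Plasma concentration for a one-compartment model with first-order absorption."""
    return (F * dose * ka) / (Vd * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0, 24, 97)                                        # hours
plasma = one_compartment_oral(t, dose=10_000, F=0.8, ka=1.2, ke=0.15, Vd=40_000)  # µg, 1/h, mL
saliva = 0.1 * plasma                                             # assumed constant saliva:plasma ratio

# Back-calculate an exposure estimate from a single saliva measurement taken at 4 h
c_saliva_4h = saliva[np.argmin(np.abs(t - 4))]
plasma_est = c_saliva_4h / 0.1
print(f"saliva at 4 h = {c_saliva_4h:.3f} µg/mL, inferred plasma = {plasma_est:.3f} µg/mL")
```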
Efficient identification of context dependent subgroups of risk from genome wide association studies
Dyson, Greg; Sing, Charles F.
2014-01-01
We have developed a modified Patient Rule-Induction Method (PRIM) as an alternative strategy for analyzing representative samples of non-experimental human data to estimate and test the role of genomic variations as predictors of disease risk in etiologically heterogeneous sub-samples. A computational limit of the proposed strategy is encountered when the number of genomic variations (predictor variables) under study is large (> 500) because permutations are used to generate a null distribution to test the significance of a term (defined by values of particular variables) that characterizes a sub-sample of individuals through the peeling and pasting processes. As an alternative, in this paper we introduce a theoretical strategy that facilitates the quick calculation of Type I and Type II errors in the evaluation of terms in the peeling and pasting processes carried out in the execution of a PRIM analysis that are underestimated and non-existent, respectively, when a permutation-based hypothesis test is employed. The resultant savings in computational time makes possible the consideration of larger numbers of genomic variations (an example genome wide association study is given) in the selection of statistically significant terms in the formulation of PRIM prediction models. PMID:24570412
Comparative and Quantitative Global Proteomics Approaches: An Overview
Deracinois, Barbara; Flahaut, Christophe; Duban-Deweer, Sophie; Karamanos, Yannis
2013-01-01
Proteomics has become a key tool for the study of biological systems. The comparison between two different physiological states allows unravelling the cellular and molecular mechanisms involved in a biological process. Proteomics can confirm the presence of proteins suggested by their mRNA content and provides a direct measure of the quantity present in a cell. Global and targeted proteomics strategies can be applied. Targeted proteomics strategies limit the number of features that will be monitored and then optimise the methods to obtain the highest sensitivity and throughput for a huge amount of samples. The advantage of global proteomics strategies is that no hypothesis is required, other than a measurable difference in one or more protein species between the samples. Global proteomics methods attempt to separate, quantify and identify all the proteins from a given sample. This review highlights only the different techniques of separation and quantification of proteins and peptides, in view of a comparative and quantitative global proteomics analysis. The in-gel and off-gel quantification of proteins will be discussed as well as the corresponding mass spectrometry technology. The overview is focused on the widespread techniques while keeping in mind that each approach is modular and often recovers the other. PMID:28250403
St. Onge, K. R.; Palmé, A. E.; Wright, S. I.; Lascoux, M.
2012-01-01
Most species have at least some level of genetic structure. Recent simulation studies have shown that it is important to consider population structure when sampling individuals to infer past population history. The relevance of the results of these computer simulations for empirical studies, however, remains unclear. In the present study, we use DNA sequence datasets collected from two closely related species with very different histories, the selfing species Capsella rubella and its outcrossing relative C. grandiflora, to assess the impact of different sampling strategies on summary statistics and the inference of historical demography. Sampling strategy did not strongly influence the mean values of Tajima’s D in either species, but it had some impact on the variance. The general conclusions about demographic history were comparable across sampling schemes even when resampled data were analyzed with approximate Bayesian computation (ABC). We used simulations to explore the effects of sampling scheme under different demographic models. We conclude that when sequences from modest numbers of loci (<60) are analyzed, the sampling strategy is generally of limited importance. The same is true under intermediate or high levels of gene flow (4Nm > 2–10) in models in which global expansion is combined with either local expansion or hierarchical population structure. Although we observe a less severe effect of sampling than predicted under some earlier simulation models, our results should not be seen as an encouragement to neglect this issue. In general, a good coverage of the natural range, both within and between populations, will be needed to obtain a reliable reconstruction of a species’s demographic history, and in fact, the effect of sampling scheme on polymorphism patterns may itself provide important information about demographic history. PMID:22870403
Kushida, Clete A; Nichols, Deborah A; Jadrnicek, Rik; Miller, Ric; Walsh, James K; Griffin, Kara
2012-07-01
De-identification and anonymization are strategies that are used to remove patient identifiers in electronic health record data. The use of these strategies in multicenter research studies is paramount in importance, given the need to share electronic health record data across multiple environments and institutions while safeguarding patient privacy. Systematic literature search using keywords of de-identify, deidentify, de-identification, deidentification, anonymize, anonymization, data scrubbing, and text scrubbing. Search was conducted up to June 30, 2011 and involved 6 different common literature databases. A total of 1798 prospective citations were identified, and 94 full-text articles met the criteria for review and the corresponding articles were obtained. Search results were supplemented by review of 26 additional full-text articles; a total of 120 full-text articles were reviewed. A final sample of 45 articles met inclusion criteria for review and discussion. Articles were grouped into text, images, and biological sample categories. For text-based strategies, the approaches were segregated into heuristic, lexical, and pattern-based systems versus statistical learning-based systems. For images, approaches that de-identified photographic facial images and magnetic resonance image data were described. For biological samples, approaches that managed the identifiers linked with these samples were discussed, particularly with respect to meeting the anonymization requirements needed for Institutional Review Board exemption under the Common Rule. Current de-identification strategies have their limitations, and statistical learning-based systems have distinct advantages over other approaches for the de-identification of free text. True anonymization is challenging, and further work is needed in the areas of de-identification of datasets and protection of genetic information.
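As a toy illustration of the pattern-based (as opposed to statistical learning) approach to free-text de-identification mentioned above, the sketch below masks a few identifier patterns with regular expressions; the patterns, tags, and sample note are illustrative only and fall far short of a complete scrubber.

```python
import re

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US social security numbers
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),     # simple numeric dates
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),    # e-mail addresses
    (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[MRN]"),  # medical record numbers
]

def scrub(text):
    """Replace matched identifier patterns with category tags."""
    for pattern, tag in PATTERNS:
        text = pattern.sub(tag, text)
    return text

note = "Pt seen 03/14/2011, MRN: 448812, contact jane.doe@example.org, SSN 123-45-6789."
print(scrub(note))
```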
Novel optical strategies for biodetection
NASA Astrophysics Data System (ADS)
Sakamuri, Rama M.; Wolfenden, Mark S.; Anderson, Aaron S.; Swanson, Basil I.; Schmidt, Jurgen S.; Mukundan, Harshini
2013-09-01
Although bio-detection strategies have significantly evolved in the past decade, they still suffer from many disadvantages. For one, current approaches still require confirmation of pathogen viability by culture, which is the 'gold-standard' method, and can take several days to return a result. Second, current methods typically target protein and nucleic acid signatures and cannot be applied to other biochemical categories of biomarkers (e.g., lipidated sugars). Lipidated sugars (e.g., lipopolysaccharide, lipoarabinomannan) are bacterial virulence factors that are significant to pathogenicity. Herein, we present two different optical strategies for biodetection to address these two limitations. We have exploited bacterial iron sequestration mechanisms to develop a simple, specific assay for the selective detection of viable bacteria, without the need for culture. We are currently working on the use of this technology for the differential detection of two different bacteria, using siderophores. Second, we have developed a novel strategy termed 'membrane insertion' for the detection of amphiphilic biomarkers (e.g., lipidated glycans) that cannot be detected by conventional approaches. We have extended this technology to the detection of small molecule amphiphilic virulence factors, such as phenolic glycolipid-1 from leprosy, which could not be directly detected before. Together, these strategies address two critical limitations in current biodetection approaches. We are currently working on the optimization of these methods, and their extension to real-world clinical samples.
Baltussen, E; Snijders, H; Janssen, H G; Sandra, P; Cramers, C A
1998-04-10
A recently developed method for the extraction of organic micropollutants from aqueous samples based on sorptive enrichment in columns packed with 100% polydimethylsiloxane (PDMS) particles was coupled on-line with HPLC analysis. The sorptive enrichment procedure originally developed for relatively nonpolar analytes was used to preconcentrate polar phenylurea herbicides from aqueous samples. PDMS extraction columns of 5, 10 and 25 cm were used to extract the herbicides from distilled, tap and river water samples. A model that allows prediction of retention and breakthrough volumes is presented. Despite the essentially apolar nature of the PDMS material, it is possible to concentrate sample volumes up to 10 ml on PDMS cartridges without losses of the most polar analyte under investigation, fenuron. For less polar analytes significantly larger sample volumes can be applied. Since standard UV detection does not provide adequate selectivity for river water samples, an electrospray (ES)-MS instrument was used to determine phenylurea herbicides in a water sample from the river Dommel. Methoxuron was present at a level of 80 ng/l. The detection limit of the current set-up, using 10 ml water samples and ES-MS detection is 10 ng/l in river water samples. Strategies for further improvement of the detection limits are identified.
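One common textbook way to reason about retention and breakthrough volumes on a packed sorption bed, which is not necessarily the specific model used in the paper, treats the breakthrough curve as approximately Gaussian, giving V_B ≈ V_R(1 − z/√N) with V_R = V0(1 + k). The numbers in the sketch below are assumptions chosen only to illustrate the shape of the calculation.

```python
import math

def breakthrough_volume(v0_mL, k, n_plates, z=2.33):
    """Approximate breakthrough volume for a packed sorbent bed.
    V_R = V0 * (1 + k) is the retention volume (k: retention factor on the sorbent);
    assuming a roughly Gaussian breakthrough profile, 1% breakthrough occurs about
    z = 2.33 standard deviations (V_R / sqrt(N)) before V_R."""
    v_r = v0_mL * (1 + k)
    return v_r * (1 - z / math.sqrt(n_plates))

# Hypothetical values for a polar analyte on a short PDMS-packed column
print(f"breakthrough volume ~ {breakthrough_volume(v0_mL=0.5, k=40, n_plates=25):.1f} mL")
```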
NASA Astrophysics Data System (ADS)
Shrivastava, Sajal; Sohn, Il-Yung; Son, Young-Min; Lee, Won-Il; Lee, Nae-Eung
2015-11-01
Although real-time label-free fluorescent aptasensors based on nanomaterials are increasingly recognized as a useful strategy for the detection of target biomolecules with high fidelity, the lack of an imaging-based quantitative measurement platform limits their implementation with biological samples. Here we introduce an ensemble strategy for a real-time label-free fluorescent graphene (Gr) aptasensor platform. This platform employs aptamer length-dependent tunability, thus enabling the reagentless quantitative detection of biomolecules through computational processing coupled with real-time fluorescence imaging data. We demonstrate that this strategy effectively delivers dose-dependent quantitative readouts of adenosine triphosphate (ATP) concentration on chemical vapor deposited (CVD) Gr and reduced graphene oxide (rGO) surfaces, thereby providing cytotoxicity assessment. Compared with conventional fluorescence spectrometry methods, our highly efficient, universally applicable, and rational approach will facilitate broader implementation of imaging-based biosensing platforms for the quantitative evaluation of a range of target molecules.
Nower, Lia; Blaszczynski, Alex
2010-09-01
Studies attempting to identify the specific 'addictive' features of electronic gaming machines (EGMs) have yielded largely inconclusive results, suggesting that it is the interaction between a gambler's cognitions and the machine, rather than the machine itself, which fuels excessive play. Research has reported that machine players with gambling problems adopt a number of erroneous cognitive perceptions regarding the probability of winning and the nature of randomness. What is unknown, however, is whether motivations for gambling and attitudes toward pre-session monetary limit-setting vary across levels of gambling severity, and whether proposed precommitment strategies would be useful in minimizing excessive gambling expenditures. The current study explored these concepts in a sample of 127 adults, ages 18 to 81, attending one of four gambling venues in Queensland, Australia. The study found that problem gamblers were more likely than other gamblers to play machines to earn income or escape their problems rather than for fun and enjoyment. Similarly, they were less likely to endorse any type of monetary limit-setting prior to play. They were also reticent to adopt the use of a 'smart card' or other strategy to limit access to money during a session, though they indicated they lost track of money while gambling and were rarely aware of whether they were winning or losing during play. Implications for precommitment policies and further research are discussed.
NASA Astrophysics Data System (ADS)
Shafer, J. M.; Varljen, M. D.
1990-08-01
A fundamental requirement for geostatistical analyses of spatially correlated environmental data is the estimation of the sample semivariogram to characterize spatial correlation. Selecting an underlying theoretical semivariogram based on the sample semivariogram is an extremely important and difficult task that is subject to a great deal of uncertainty. Current standard practice does not involve consideration of the confidence associated with semivariogram estimates, largely because classical statistical theory does not provide the capability to construct confidence limits from single realizations of correlated data, and multiple realizations of environmental fields are not found in nature. The jackknife method is a nonparametric statistical technique for parameter estimation that may be used to estimate the semivariogram. When used in connection with standard confidence procedures, it allows for the calculation of closely approximate confidence limits on the semivariogram from single realizations of spatially correlated data. The accuracy and validity of this technique was verified using a Monte Carlo simulation approach which enabled confidence limits about the semivariogram estimate to be calculated from many synthetically generated realizations of a random field with a known correlation structure. The synthetically derived confidence limits were then compared to jackknife estimates from single realizations with favorable results. Finally, the methodology for applying the jackknife method to a real-world problem and an example of the utility of semivariogram confidence limits were demonstrated by constructing confidence limits on seasonal sample variograms of nitrate-nitrogen concentrations in shallow groundwater in an approximately 12-mi² (~30 km²) region in northern Illinois. In this application, the confidence limits on sample semivariograms from different time periods were used to evaluate the significance of temporal change in spatial correlation. This capability is quite important as it can indicate when a spatially optimized monitoring network would need to be reevaluated and thus lead to more robust monitoring strategies.
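A minimal delete-one jackknife for a single semivariogram lag bin (a generic sketch of the technique, not the authors' implementation) could look like the following; the coordinates, nitrate values, and lag tolerance are invented.

```python
import numpy as np
from scipy.stats import t as t_dist

def semivariance(coords, values, lag, tol):
    """Classical sample semivariance for point pairs separated by lag +/- tol."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    i, j = np.where((d > lag - tol) & (d <= lag + tol))
    keep = i < j                                        # count each pair once
    diffs = values[i[keep]] - values[j[keep]]
    return 0.5 * np.mean(diffs ** 2)

def jackknife_ci(coords, values, lag, tol, alpha=0.05):
    n = len(values)
    gamma_all = semivariance(coords, values, lag, tol)
    loo = np.array([semivariance(np.delete(coords, k, axis=0), np.delete(values, k), lag, tol)
                    for k in range(n)])
    pseudo = n * gamma_all - (n - 1) * loo              # jackknife pseudo-values
    est = pseudo.mean()
    se = pseudo.std(ddof=1) / np.sqrt(n)
    q = t_dist.ppf(1 - alpha / 2, df=n - 1)
    return est, (est - q * se, est + q * se)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(60, 2))               # hypothetical well locations (km)
values = rng.normal(8.0, 2.0, size=60)                  # hypothetical nitrate-N concentrations (mg/L)
print(jackknife_ci(coords, values, lag=2.0, tol=0.5))
```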
Quantifying and correcting motion artifacts in MRI
NASA Astrophysics Data System (ADS)
Bones, Philip J.; Maclaren, Julian R.; Millane, Rick P.; Watts, Richard
2006-08-01
Patient motion during magnetic resonance imaging (MRI) can produce significant artifacts in a reconstructed image. Since measurements are made in the spatial frequency domain ('k-space'), rigid-body translational motion results in phase errors in the data samples while rotation causes location errors. A method is presented to detect and correct these errors via a modified sampling strategy, thereby achieving more accurate image reconstruction. The strategy involves sampling vertical and horizontal strips alternately in k-space and employs phase correlation within the overlapping segments to estimate translational motion. An extension, also based on correlation, is employed to estimate rotational motion. Results from simulations with computer-generated phantoms suggest that the algorithm is robust up to realistic noise levels. The work is being extended to physical phantoms. Provided that a reference image is available and the object is of limited extent, it is shown that a measure related to the amount of energy outside the support can be used to objectively compare the severity of motion-induced artifacts.
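The link between rigid translation and k-space phase can be demonstrated with a short phase-correlation sketch; this is a generic illustration of the underlying principle, not the paper's modified strip-sampling strategy.

```python
import numpy as np

def estimate_translation(ref, moved):
    """Estimate an integer-pixel translation between two images by phase correlation.
    A rigid shift in image space multiplies the k-space data by a linear phase ramp,
    so the normalized cross-power spectrum inverse-transforms to a single sharp peak
    located at the shift."""
    F_ref, F_mov = np.fft.fft2(ref), np.fft.fft2(moved)
    cross = np.conj(F_ref) * F_mov
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # interpret peaks beyond the half-way point as negative shifts (circular wrap)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic phantom shifted by (5, -3) pixels (a circular shift standing in for patient motion)
rng = np.random.default_rng(1)
phantom = rng.normal(size=(128, 128))
moved = np.roll(phantom, shift=(5, -3), axis=(0, 1))
print(estimate_translation(phantom, moved))   # expected output: (5, -3)
```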
Mind the gap: implementation challenges break the link between HIV/AIDS research and practice.
MacCarthy, Sarah; Reisner, Sari; Hoffmann, Michael; Perez-Brumer, Amaya; Silva-Santisteban, Alfonso; Nunn, Amy; Bastos, Leonardo; Vasconcellos, Mauricio Teixeira Leite de; Kerr, Ligia; Bastos, Francisco Inácio; Dourado, Inês
2016-11-03
Sampling strategies such as respondent-driven sampling (RDS) and time-location sampling (TLS) offer unique opportunities to access key populations such as men who have sex with men (MSM) and transgender women. Limited work has assessed implementation challenges of these methods. Overcoming implementation challenges can improve research quality and increase uptake of HIV services among key populations. Drawing from studies using RDS in Brazil and TLS in Peru, we summarize challenges encountered in the field and potential strategies to address them. In Brazil, study site selection, cash incentives, and seed selection challenged RDS implementation with MSM. In Peru, expansive geography, safety concerns, and time required for study participation complicated TLS implementation with MSM and transgender women. Formative research, meaningful participation of key populations across stages of research, and transparency in study design are needed to link HIV/AIDS research and practice. Addressing implementation challenges can close gaps in accessing services among those most burdened by the epidemic.
Wall, Ben A; Arnold, Mark E; Radia, Devi; Gilbert, Will; Ortiz-Pelaez, Angel; Stärk, Katharina Dc; Van Klink, Ed; Guitian, Javier
2017-08-10
Transmissible spongiform encephalopathies (TSEs) are an important public health concern. Since the emergence of bovine spongiform encephalopathy (BSE) during the 1980s and its link with human Creutzfeldt-Jakob disease, active surveillance has been a key element of the European Union's TSE control strategy. Success of this strategy means that now, very few cases are detected compared with the number of animals tested. Refining surveillance strategies would enable resources to be redirected towards other public health priorities. Cost-effectiveness analysis was performed on several alternative strategies involving reducing the number of animals tested for BSE and scrapie in Great Britain and, for scrapie, varying the ratio of sheep sampled in the abattoir to fallen stock (which died on the farm). The most cost-effective strategy modelled for BSE involved reducing the proportion of fallen stock tested from 100% to 75%, producing a cost saving of ca. GBP 700,000 per annum. If 50% of fallen stock were tested, a saving of ca. GBP 1.4 million per annum could be achieved. However, these reductions are predicted to increase the period before surveillance can detect an outbreak. For scrapie, reducing the proportion of abattoir samples was the most cost-effective strategy modelled, with limited impact on surveillance effectiveness.
NASA Astrophysics Data System (ADS)
Farnoudian-Habibi, Amir; Massoumi, Bakhshali; Jaymand, Mehdi
2016-11-01
A novel and efficient strategy is suggested for the simultaneous determination of two tricyclic antidepressant (TCA) drugs [amitriptyline (AT) and its main metabolite, nortriptyline (NT)] in serum via a combination of magnetic solid phase extraction (MSPE) and spectrophotometric techniques. For this purpose, imidazolium ionic liquid (Imz)-modified Fe3O4@SiO2 nanoparticles (Fe3O4@SiO2-Imz) were employed as the adsorbent for the MSPE. Preconcentration (loading-desorption) studies were performed under optimized conditions including pH, adsorbent amount, contact time, eluent volume, and desorption time. Afterward, each drug was determined by a specific strategy. Acetaldehyde (AC) and 2,3,5,6-tetrachloro-1,4-benzoquinone (chloranil; CL) were used as chemical reagents for reaction with NT, while AT did not react with these reagents. The method is based on the condensation reaction between the secondary amine group of NT and AC to afford an enamine, which subsequently reacts with CL to produce a chlorinated quinone-substituted enamine. The final product exhibited maximum absorption at 556 nm, while AT was determined at 240 nm. The limits of detection (LODs) for NT and AT in serum were 0.19 and 0.90 ng mL⁻¹, respectively. The limits of quantification (LOQs) were 0.63 and 2.93 ng mL⁻¹ for NT and AT, respectively. A linear range of 1 to 5 ng mL⁻¹ was obtained. The results indicated that the suggested method is applicable to the simultaneous determination of NT and AT in serum samples.
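LOD and LOQ figures of this kind are commonly derived from calibration data using the conventions LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the residual standard deviation and S the calibration slope; the paper may have used a different convention, and the calibration points below are invented.

```python
import numpy as np

# Hypothetical calibration of absorbance versus concentration (ng/mL) for one analyte
conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
signal = np.array([0.051, 0.098, 0.152, 0.197, 0.249])

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)          # standard deviation of the regression residuals

lod = 3.3 * sigma / slope              # limit of detection
loq = 10.0 * sigma / slope             # limit of quantification
print(f"LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")
```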
Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter
2015-01-20
While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
Multilayer mounting for long-term light sheet microscopy of zebrafish.
Weber, Michael; Mickoleit, Michaela; Huisken, Jan
2014-02-27
Light sheet microscopy is the ideal imaging technique to study zebrafish embryonic development. Due to minimal photo-toxicity and bleaching, it is particularly suited for long-term time-lapse imaging over many hours up to several days. However, an appropriate sample mounting strategy is needed that offers both confinement and normal development of the sample. Multilayer mounting, a new embedding technique using low-concentration agarose in optically clear tubes, now overcomes this limitation and unleashes the full potential of light sheet microscopy for real-time developmental biology.
Multilayer Mounting for Long-term Light Sheet Microscopy of Zebrafish
Weber, Michael; Mickoleit, Michaela; Huisken, Jan
2014-01-01
Light sheet microscopy is the ideal imaging technique to study zebrafish embryonic development. Due to minimal photo-toxicity and bleaching, it is particularly suited for long-term time-lapse imaging over many hours up to several days. However, an appropriate sample mounting strategy is needed that offers both confinement and normal development of the sample. Multilayer mounting, a new embedding technique using low-concentration agarose in optically clear tubes, now overcomes this limitation and unleashes the full potential of light sheet microscopy for real-time developmental biology. PMID:24637614
ERIC Educational Resources Information Center
Kocman, Andreas; Fischer, Linda; Weber, Germain
2018-01-01
Background: Obtaining employment is among the most important ambitions of people with intellectual disability. Progress towards comprehensive inclusive employment is hampered by numerous barriers. Limited research is available on these barriers and strategies to overcome them. Method: A mixed method approach in a sample of 30 HR-managers was used…
William T. Borrie; James A. Harding
2002-01-01
A four-stage model of decisionmaking was investigated in the context of low-impact practices among rock climbers in the Bitterroot Valley of Montana. Previous research has suggested that knowing what to do to minimize environmental and social impacts may not be the only factor limiting compliance with recommended visitor behaviors. Results from a sample of climbers at...
On-chip dilution in nanoliter droplets.
Thakur, Raviraj; Amin, Ahmed M; Wereley, Steve
2015-09-07
Droplet microfluidics is enabling reactions at nano- and picoliter scale, resulting in faster and cheaper biological and chemical analyses. However, varying concentrations of samples on a drop-to-drop basis is still a challenging task in droplet microfluidics, primarily limited due to lack of control over individual droplets. In this paper, we report an on-chip microfluidic droplet dilution strategy using three-valve peristaltic pumps.
ERIC Educational Resources Information Center
Kafonek, Katherine; Richards, Tara N.
2017-01-01
Although gender-based violence prevention programs at institutions of higher education (IHEs) are mandated by federal legislation, research focusing on the prevalence or content of programming is limited. The present exploratory research examines campus websites for a nationally representative sample of Title IX eligible IHEs that offer at least a…
Bowden, Jack; Relton, Caroline; Davey Smith, George
2016-01-01
Mendelian randomization (MR) is an increasingly important tool for appraising causality in observational epidemiology. The technique exploits the principle that genotypes are not generally susceptible to reverse causation bias and confounding, reflecting their fixed nature and Mendel’s first and second laws of inheritance. The approach is, however, subject to important limitations and assumptions that, if unaddressed or compounded by poor study design, can lead to erroneous conclusions. Nevertheless, the advent of 2-sample approaches (in which exposure and outcome are measured in separate samples) and the increasing availability of open-access data from large consortia of genome-wide association studies and population biobanks mean that the approach is likely to become routine practice in evidence synthesis and causal inference research. In this article we provide an overview of the design, analysis, and interpretation of MR studies, with a special emphasis on assumptions and limitations. We also consider different analytic strategies for strengthening causal inference. Although impossible to prove causality with any single approach, MR is a highly cost-effective strategy for prioritizing intervention targets for disease prevention and for strengthening the evidence base for public health policy. PMID:26961927
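As a concrete illustration of the two-sample analytic strategies the review surveys, a minimal sketch of the standard per-variant Wald ratio combined by fixed-effect inverse-variance weighting (IVW) is given below; this is a generic textbook estimator with made-up numbers, not code or data from the article, and it assumes valid, independent instruments.

```python
import numpy as np

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Combine per-SNP Wald ratios by fixed-effect inverse-variance weighting.

    beta_exposure : SNP-exposure effect estimates (sample 1)
    beta_outcome  : SNP-outcome effect estimates (sample 2)
    se_outcome    : standard errors of the SNP-outcome effects
    """
    beta_exposure = np.asarray(beta_exposure, dtype=float)
    beta_outcome = np.asarray(beta_outcome, dtype=float)
    se_outcome = np.asarray(se_outcome, dtype=float)

    wald = beta_outcome / beta_exposure            # per-variant causal estimates
    se_wald = se_outcome / np.abs(beta_exposure)   # first-order SE approximation
    w = 1.0 / se_wald**2                           # inverse-variance weights

    estimate = np.sum(w * wald) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return estimate, se

# Toy example with three independent genetic instruments (illustrative values only)
est, se = ivw_estimate([0.10, 0.08, 0.12], [0.020, 0.018, 0.030], [0.005, 0.006, 0.007])
print(f"IVW causal estimate: {est:.3f} (SE {se:.3f})")
```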
Michalak, Erin E; Suto, Melinda J; Barnes, Steven J; Hou, Sharon; Lapsley, Sara; Scott, Mike W; Murray, Greg; Austin, Jehannine; Elliott, Nusha Balram; Berk, Lesley; Crest Bd
2016-12-01
Self-management represents an important complement to psychosocial treatments for bipolar disorder (BD), but research is limited. Specifically, little is known about self-management approaches for elevated mood states; this study investigated self-management strategies for: (1) maintaining balance in mood, and (2) stopping progression into hypomania/mania. To identify the common components of BD self-management, Delphi Consensus Consultation methods were combined with a Community-Based Participatory Research (CBPR) approach across five study phases: (1) Qualitative dataset content analysis; (2) Academic/grey literature reviews; (3) Content analysis; (4) Two Delphi rounds (rating strategies on a 5-point Likert scale, Very Unhelpful to Very Helpful); and (5) Quantitative analysis and interpretation. Participants were people with BD and healthcare providers. Phases 1 and 2 identified 262 and 3940 candidate strategies, respectively; 3709 were discarded as duplicates/unintelligible. The remaining 493 were assessed via Delphi methods in Phase 4: 101 people with BD and 52 healthcare providers participated in Round 1; 83 of the BD panel (82%) and 43 of the healthcare provider panel (83%) participated in Round 2, and exploratory factor analysis (EFA) was conducted on the Round 2 results. The EFA was underpowered and the sample was not ethnically diverse, limiting generalizability. High concordance was observed in ratings of strategy effectiveness between the two panels. Future research could usefully investigate the provisional discovery here of underlying factors which link individual strategies. For example, 'maintaining hope' underpinned strategies for maintaining balance, and 'decreasing use of stimulants' underpinned strategies to interrupt hypo/manic ascent. There is merit in combining CBPR and Delphi methods. Copyright © 2016 Elsevier B.V. All rights reserved.
Strategies for Analyzing Sub-Micrometer Features with the FE-EPMA
NASA Astrophysics Data System (ADS)
McSwiggen, P.; Armstrong, J. T.; Nielsen, C.
2013-12-01
Changes in column design and electronics, as well as new types of spectrometers and analyzing crystals, have significantly advanced electron microprobes in terms of stability, reproducibility and detection limits. A major advance in spatial resolution has occurred through the use of the field emission electron gun. The spatial resolution of an analysis is controlled by the diameter of the electron beam and the amount of scatter that takes place within the sample. The beam diameter is controlled by the column and the type of electron gun being used. The accelerating voltage and the average atomic number/density of the sample control the amount of electron scatter within the sample. However, a large electron interaction volume does not necessarily mean a large analytical volume. The beam electrons may spread out within a large volume, but if the electrons lack sufficient energy to produce the X-ray of interest, the analytical volume could be significantly smaller. Therefore there are two competing strategies for creating the smallest analytical volumes. The first strategy is to reduce the accelerating voltage to produce the smallest electron interaction volume. This low-kV analytical approach is ultimately limited by the size of the electron beam itself. With a field emission gun, the smallest analytical area is normally achieved at around 5-7 kV. At lower accelerating voltages, the increase in the beam diameter begins to overshadow the reduction in internal scattering. For tungsten filament guns, the smallest analytical volume is reached at higher accelerating voltages. The second strategy is to minimize the overvoltage during the analysis. If the accelerating voltage is only 1-3 kV greater than the critical ionization energy for the X-ray line of interest, then even if the overall electron interaction volume is large, those electrons quickly lose the energy needed to produce the desired X-rays. The portion of the interaction volume in which the desired X-rays will be produced will be very small and very near the surface. Both strategies have advantages and disadvantages depending on the ultimate goal of the analysis and the elements involved. This work examines a number of considerations when attempting to decide which approach is best for a given analytical situation. These include: (1) the size of the analytical volumes, (2) minimum detection limits, (3) quality of the matrix corrections, (4) secondary fluorescence, and (5) effects of surface contamination, oxide layers, and carbon coatings. This work is based on results largely from the Fe-Ni binary. A simple conclusion cannot be drawn as to which strategy is better overall; the determination is highly system dependent. For many mineral systems, both strategies used in combination will produce the best results. Using multiple accelerating voltages to perform a single analysis allows the analyst to optimize the analytical conditions for each element individually.
Dynamic Event Tree advancements and control logic improvements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego
The RAVEN code has been under development at the Idaho National Laboratory since 2012. Its main goal is to create a multi-purpose platform for deploying all the capabilities needed for probabilistic risk assessment, uncertainty quantification, data mining analysis and optimization studies. RAVEN is currently equipped with three different sampling categories: Forward samplers (Monte Carlo, Latin Hypercube, Stratified, Grid Sampler, Factorials, etc.), Adaptive samplers (Limit Surface search, Adaptive Polynomial Chaos, etc.) and Dynamic Event Tree (DET) samplers (Deterministic and Adaptive Dynamic Event Trees). The main subject of this document is to report the activities that have been done in order to: start the migration of the RAVEN/RELAP-7 control logic system into MOOSE, and develop advanced dynamic sampling capabilities based on the Dynamic Event Tree approach. In order to provide a control logic capability to all MOOSE-based applications, an initial migration activity was started this fiscal year, moving the control logic system designed for RELAP-7 by the RAVEN team into the MOOSE framework. A brief explanation of what has been done is reported in this document. The second and most important subject of this report is the development of a Dynamic Event Tree (DET) sampler named “Hybrid Dynamic Event Tree” (HDET) and its adaptive variant, the “Adaptive Hybrid Dynamic Event Tree” (AHDET). As other authors have already reported, among the different types of uncertainties it is possible to discern two principal types: aleatory and epistemic uncertainties. The classical Dynamic Event Tree is in charge of treating the first class (aleatory) of uncertainties; the dependence of the probabilistic risk assessment and analysis on the epistemic uncertainties is treated by an initial Monte Carlo sampling (MCDET). From each Monte Carlo sample, a DET analysis is run (in total, N trees). The Monte Carlo method employs a pre-sampling of the input space characterized by epistemic uncertainties, and the consequent Dynamic Event Tree performs the exploration of the aleatory space. In the RAVEN code, a more general approach has been developed that does not limit the exploration of the epistemic space to a Monte Carlo method but allows all the forward sampling strategies RAVEN currently employs. The user can combine Latin Hypercube, Grid, Stratified and Monte Carlo sampling in order to explore the epistemic space, without any limitation. From this pre-sampling, the Dynamic Event Tree sampler starts its exploration of the aleatory space. As reported by the authors, the Dynamic Event Tree is a good fit for developing a goal-oriented sampling strategy, and the DET is used to drive a Limit Surface search. The methodology developed by the authors last year performs a Limit Surface search in the aleatory space only. This report documents how this approach has been extended in order to consider the epistemic space interacting with the Hybrid Dynamic Event Tree methodology.
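A minimal sketch of the nested "hybrid" structure described above: an outer forward (here Latin hypercube) pre-sampling of epistemic parameters, with an aleatory exploration run inside each epistemic sample. The parameter names and the crude Monte Carlo stand-in for the dynamic event tree are invented for illustration; this is not RAVEN code.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(42)

# Outer loop: epistemic pre-sampling (here Latin hypercube; a grid, stratified or
# Monte Carlo design could be substituted). Two hypothetical epistemic parameters:
# a failure-rate multiplier and a number of demands on a component.
lhs = qmc.LatinHypercube(d=2, seed=42)
epistemic = qmc.scale(lhs.random(n=8), l_bounds=[0.8, 1.0], u_bounds=[1.2, 3.0])

def aleatory_exploration(failure_rate, n_demands, n_branches=1000):
    """Inner loop: crude Monte Carlo stand-in for the aleatory (dynamic event tree)
    exploration; each branch realizes success/failure at every demand."""
    failures = rng.random((n_branches, int(n_demands))) < failure_rate
    return failures.any(axis=1).mean()   # fraction of branches with at least one failure

for k, (rate_multiplier, n_demands) in enumerate(epistemic):
    p_fail = aleatory_exploration(0.05 * rate_multiplier, n_demands)
    print(f"epistemic sample {k}: multiplier={rate_multiplier:.2f}, "
          f"demands={int(n_demands)}, P(failure)={p_fail:.3f}")
```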
Becheler, Ronan; Cassone, Anne-Laure; Noel, Philippe; Mouchel, Olivier; Morrison, Cheryl L.; Arnaud-Haond, Sophie
2017-01-01
Sampling in the deep sea is a technical challenge, which has hindered the acquisition of robust datasets that are necessary to determine the fine-grained biological patterns and processes that may shape genetic diversity. Estimates of the extent of clonality in deep-sea species, despite the importance of clonality in shaping the local dynamics and evolutionary trajectories, have been largely obscured by such limitations. Cold-water coral reefs along European margins are formed mainly by two reef-building species, Lophelia pertusa and Madrepora oculata. Here we present a fine-grained analysis of the genotypic and genetic composition of reefs occurring in the Bay of Biscay, based on an innovative deep-sea sampling protocol. This strategy was designed to be standardized, random, and allowed the georeferencing of all sampled colonies. Clonal lineages discriminated through their Multi-Locus Genotypes (MLG) at 6–7 microsatellite markers could thus be mapped to assess the level of clonality and the spatial spread of clonal lineages. High values of clonal richness were observed for both species across all sites suggesting a limited occurrence of clonality, which likely originated through fragmentation. Additionally, spatial autocorrelation analysis underlined the possible occurrence of fine-grained genetic structure in several populations of both L. pertusa and M. oculata. The two cold-water coral species examined had contrasting patterns of connectivity among canyons, with among-canyon genetic structuring detected in M. oculata, whereas L. pertusa was panmictic at the canyon scale. This study exemplifies that a standardized, random and georeferenced sampling strategy, while challenging, can be applied in the deep sea, and associated benefits outlined here include improved estimates of fine grained patterns of clonality and dispersal that are comparable across sites and among species.
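For reference, the clonal (genotypic) richness reported in studies of this kind is conventionally defined from the number of distinct multi-locus genotypes G among the N colonies genotyped; the abstract does not state the exact index used, so the formula below is the common convention rather than a quotation from the paper:

```latex
R = \frac{G - 1}{N - 1}
```

R equals 1 when every colony carries a unique MLG (no detectable clonality) and 0 when all colonies are clonal replicates of a single genotype, so the high richness values reported here correspond to G close to N.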
Trutschel, Diana; Palm, Rebecca; Holle, Bernhard; Simon, Michael
2017-11-01
Because not every scientific question on effectiveness can be answered with randomised controlled trials, research methods that minimise bias in observational studies are required. Two major concerns influence the internal validity of effect estimates: selection bias and clustering. Hence, to reduce the bias of the effect estimates, more sophisticated statistical methods are needed. To introduce statistical approaches such as propensity score matching and mixed models into representative real-world analysis, and to present their implementation in the statistical software R so that the results can be reproduced. We perform a two-level analytic strategy to address the problems of bias and clustering: (i) generalised models with different abilities to adjust for dependencies are used to analyse binary data and (ii) the genetic matching and covariate adjustment methods are used to adjust for selection bias. Hence, we analyse the data from two population samples, the sample produced by the matching method and the full sample. The different analysis methods in this article present different results but still point in the same direction. In our example, the estimate of the probability of receiving a case conference is higher in the treatment group than in the control group. Both strategies, genetic matching and covariate adjustment, have their limitations but complement each other to provide the whole picture. The statistical approaches were feasible for reducing bias but were nevertheless limited by the sample used. For each study and obtained sample, the pros and cons of the different methods have to be weighed. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
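For readers unfamiliar with the two-step logic (estimate a propensity score, match on it, then compare outcomes in the matched sample), a minimal sketch on simulated data follows. The article itself uses genetic matching and mixed models implemented in R; the simple logistic-score, nearest-neighbour matching and all variable names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated observational data: covariates X, non-randomly assigned exposure, binary outcome
n = 1000
X = rng.normal(size=(n, 3))
exposed = rng.random(n) < 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 2])))
outcome = (rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 * exposed + X[:, 0])))).astype(int)

# Step 1: estimate propensity scores with a simple logistic model
ps = LogisticRegression(max_iter=1000).fit(X, exposed).predict_proba(X)[:, 1]

# Step 2: 1:1 nearest-neighbour matching on the propensity score (without replacement)
controls = list(np.where(~exposed)[0])
pairs = []
for i in np.where(exposed)[0]:
    if not controls:
        break                                   # stop if the control pool is exhausted
    j = min(controls, key=lambda c: abs(ps[c] - ps[i]))
    pairs.append((i, j))
    controls.remove(j)

# Step 3: compare outcomes within the matched sample
risk_diff = np.mean([outcome[i] - outcome[j] for i, j in pairs])
print(f"matched-sample risk difference: {risk_diff:.3f}")
```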
NASA Astrophysics Data System (ADS)
Becheler, Ronan; Cassone, Anne-Laure; Noël, Philippe; Mouchel, Olivier; Morrison, Cheryl L.; Arnaud-Haond, Sophie
2017-11-01
Sampling in the deep sea is a technical challenge, which has hindered the acquisition of robust datasets that are necessary to determine the fine-grained biological patterns and processes that may shape genetic diversity. Estimates of the extent of clonality in deep-sea species, despite the importance of clonality in shaping the local dynamics and evolutionary trajectories, have been largely obscured by such limitations. Cold-water coral reefs along European margins are formed mainly by two reef-building species, Lophelia pertusa and Madrepora oculata. Here we present a fine-grained analysis of the genotypic and genetic composition of reefs occurring in the Bay of Biscay, based on an innovative deep-sea sampling protocol. This strategy was designed to be standardized, random, and allowed the georeferencing of all sampled colonies. Clonal lineages discriminated through their Multi-Locus Genotypes (MLG) at 6-7 microsatellite markers could thus be mapped to assess the level of clonality and the spatial spread of clonal lineages. High values of clonal richness were observed for both species across all sites suggesting a limited occurrence of clonality, which likely originated through fragmentation. Additionally, spatial autocorrelation analysis underlined the possible occurrence of fine-grained genetic structure in several populations of both L. pertusa and M. oculata. The two cold-water coral species examined had contrasting patterns of connectivity among canyons, with among-canyon genetic structuring detected in M. oculata, whereas L. pertusa was panmictic at the canyon scale. This study exemplifies that a standardized, random and georeferenced sampling strategy, while challenging, can be applied in the deep sea, and associated benefits outlined here include improved estimates of fine grained patterns of clonality and dispersal that are comparable across sites and among species.
Rising rates of labor induction: present concerns and future strategies.
Rayburn, William F; Zhang, Jun
2002-07-01
The rate of labor induction nationwide increased gradually from 9.5% to 19.4% between 1990 and 1998. Reasons for this doubling of inductions relate to the widespread availability of cervical ripening agents, pressure from patients, convenience for physicians, and litigation concerns. The increase in medically indicated inductions was slower than the overall increase, suggesting that induction for marginal or elective reasons has risen more rapidly. Data to support or refute the benefits of marginal or elective inductions are limited. Many trials of inductions for marginal indications are either nonexistent or retrospective with small sample sizes, thereby limiting definitive conclusions. Until prospective clinical trials can better validate reasons for the liberal use of labor induction, it would seem prudent to maintain a cautious approach, especially among nulliparous women. Strategies are proposed for developing evidence-based guidelines to reduce the presumed increase in health care costs, risk of cesarean delivery for nulliparas, and overscheduling in labor and delivery.
Lesion Quantification in Dual-Modality Mammotomography
NASA Astrophysics Data System (ADS)
Li, Heng; Zheng, Yibin; More, Mitali J.; Goodale, Patricia J.; Williams, Mark B.
2007-02-01
This paper describes a novel x-ray/SPECT dual-modality breast imaging system that provides 3D structural and functional information. While only a limited number of views on one side of the breast can be acquired due to mechanical and time constraints, we developed a technique to compensate for the limited-angle artifact in reconstructed images and accurately estimate both the lesion size and radioactivity concentration. Various angular sampling strategies were evaluated using both simulated and experimental data. It was demonstrated that quantification of lesion size to an accuracy of 10% and quantification of radioactivity to an accuracy of 20% are feasible from limited-angle data acquired with clinically practical dosage and acquisition time.
NASA Astrophysics Data System (ADS)
Chen, Daniel; Chen, Damian; Yen, Ray; Cheng, Mingjen; Lan, Andy; Ghaskadvi, Rajesh
2008-11-01
Identifying hotspots--structures that limit the lithography process window--becomes increasingly important as the industry relies heavily on RET to print sub-wavelength designs. KLA-Tencor's patented Process Window Qualification (PWQ) methodology has been used for this purpose in various fabs. The PWQ methodology has three key advantages: (a) PWQ Layout, to obtain the best sensitivity; (b) Design Based Binning, for pattern repeater analysis; and (c) Intelligent Sampling, for the best DOI sampling rate. This paper evaluates two different analysis strategies for SEM review sampling successfully deployed at Inotera Memories, Inc. We propose a new approach combining location repeaters and pattern repeaters. Based on a recent case study, the new sampling flow reduces the data analysis and sampling time from 6 hours to 1.5 hours while maintaining the maximum DOI sample rate.
Limited sampling strategies to predict the area under the concentration-time curve for rifampicin.
Medellín-Garibay, Susanna E; Correa-López, Tania; Romero-Méndez, Carmen; Milán-Segovia, Rosa C; Romano-Moreno, Silvia
2014-12-01
Rifampicin (RMP) is the most effective first-line antituberculosis drug. One of the most critical aspects of using it in fixed-drug combination formulations is to ensure it reaches therapeutic levels in blood. Determination of the area under the concentration-time curve (AUC) and appropriate dose adjustment of this drug may contribute to optimization of therapy. Although the maximal concentration (Cmax) of RMP also predicts its sterilizing effect, the time to reach it (Tmax) ranges from 40 minutes to 6 hours. The aim of this study was to develop a limited sampling strategy (LSS) to assist therapeutic drug monitoring of RMP. Full concentration-time curves were obtained from 58 patients with tuberculosis (TB) after the oral administration of RMP in a fixed-drug combination formulation. A validated high-performance liquid chromatographic method was used. Pharmacokinetic parameters were estimated with a noncompartmental model. Generalized linear models were obtained by forward steps, and bootstrapping was performed to develop LSSs to predict the AUC from time 0 to the last measurement at 24 hours postdose (AUC0-24). The predictive performance of the proposed models was assessed using RMP profiles from 25 other TB patients by comparing predicted and observed AUC0-24. The mean AUC0-24 in the current study was 91.46 ± 36.7 mg·h·L(-1), and the most convenient sampling time points to predict it were 2, 4 and 12 hours postdose (slope [m] = 0.955 ± 0.06; r = 0.92). The mean prediction error was -0.355%, and the root mean square error was 5.6% in the validation group. Alternate LSSs are proposed with 2 of these sampling time points, which also provide good predictions when the 3 most convenient are not feasible. The AUC0-24 for RMP in TB patients can be predicted with acceptable precision through a 2- or 3-point sampling strategy, despite wide interindividual variability. These LSSs could be applied in clinical practice to optimize anti-TB therapy based on therapeutic drug monitoring.
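The form of limited-sampling model described here (a linear equation predicting AUC0-24 from concentrations at 2, 4 and 12 hours, validated by mean prediction error and root-mean-square error in an independent group) can be sketched as follows; the concentration and AUC arrays are placeholders, not the study's data, and the fitted coefficients have no clinical meaning.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder training data: concentrations (mg/L) at 2, 4 and 12 h and observed AUC0-24
C_train = np.array([[8.2, 6.1, 1.3], [10.5, 7.9, 2.0], [6.0, 4.4, 0.9],
                    [12.1, 9.2, 2.5], [9.3, 6.8, 1.6]])
auc_train = np.array([78.0, 105.0, 60.0, 128.0, 92.0])

# AUC0-24 = b0 + b1*C2 + b2*C4 + b3*C12
lss = LinearRegression().fit(C_train, auc_train)

# Validation in an independent group: bias (MPE) and precision (RMSE), both in percent
C_val = np.array([[7.5, 5.6, 1.1], [11.0, 8.3, 2.2]])
auc_val = np.array([72.0, 115.0])
pred = lss.predict(C_val)

mpe = np.mean((pred - auc_val) / auc_val) * 100
rmse = np.sqrt(np.mean(((pred - auc_val) / auc_val) ** 2)) * 100
print(f"MPE = {mpe:.2f}%, RMSE = {rmse:.2f}%")
```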
What Strategies Do Physicians and Patients Discuss to Reduce Out-of-Pocket Costs?
Hunter, Wynn G.; Zhang, Cecilia Z.; Hesson, Ashley; Davis, J. Kelly; Kirby, Christine; Williamson, Lillie D.; Barnett, Jamison A.; Ubel, Peter A.
2015-01-01
Background: More than 1 in 4 Americans report difficulty paying medical bills. Cost-reducing strategies discussed during outpatient physician visits remain poorly characterized. Objective: We sought to determine how often patients and physicians discuss healthcare costs during outpatient visits and what strategies, if any, they discussed to lower patient out-of-pocket costs. Design: Retrospective analysis of dialogue from 1,755 outpatient visits in community-based practices nationwide from 2010–2014. The study population included 677 patients with breast cancer, 422 with depression, and 656 with rheumatoid arthritis visiting 56 oncologists, 36 psychiatrists, and 26 rheumatologists, respectively. Results: Thirty percent of visits contained cost conversations (95% confidence interval [CI], 28 to 32). Forty-four percent of cost conversations involved discussion of cost-saving strategies (95% CI, 40 to 48; median duration, 68 seconds). We identified 4 strategies to lower costs without changing the care plan – in order of overall frequency, (1) Changing logistics of care; (2) Facilitating copay assistance; (3) Providing free samples; (4) Changing/adding insurance plans – and 4 strategies to reduce costs by changing the care plan – (1) Switching to lower-cost alternative therapy/diagnostic; (2) Switching from brand name to generic; (3) Changing dosage/frequency; (4) Stopping/withholding interventions. Strategies were relatively consistent across health conditions, except for switching to a lower-cost alternative (more common in breast oncology) and providing free samples (more common in depression). Limitation: Focus on three conditions with potentially high out-of-pocket costs. Conclusions: Despite price opacity, physicians and patients discuss a variety of out-of-pocket cost reduction strategies during clinic visits. Almost half of cost discussions mention one or more cost-saving strategies, with more frequent mention of those not requiring care-plan changes. PMID:26785714
Ernst, Madeleine; Silva, Denise Brentan; Silva, Ricardo Roberto; Vêncio, Ricardo Z N; Lopes, Norberto Peporine
2014-06-01
Covering: up to 2013. Plant metabolomics is a relatively recent research field that has gained increasing interest in the past few years. Up to the present day, numerous review articles and guide books on the subject have been published. This review article focuses on the current applications and limitations of modern mass spectrometry techniques, especially in combination with electrospray ionisation (ESI), the ionisation method most commonly applied in metabolomics studies. As a possible alternative to ESI, perspectives on matrix-assisted laser desorption/ionisation mass spectrometry (MALDI-MS) in metabolomics studies are introduced, a method which is still not widespread in the field. In metabolomics studies the results must always be interpreted in the context of the applied sampling procedures as well as the data analysis. Different sampling strategies are introduced, and the importance of data analysis is illustrated using the example of metabolic network modelling.
Pan, Jialiang; Hu, Yuling; Liang, Tingan; Li, Gongke
2012-11-02
A novel and simple in-mold coating strategy was proposed for the preparation of uniform solid-phase microextraction (SPME) coatings. Such a strategy is based on the direct synthesis of the polymer coating on the surface of a solid fiber using a glass capillary as the mold. The capillary was removed and the polymer with well-controlled thickness could be coated on the silica fiber reproductively. Following the strategy, a new poly(acrylamide-co-ethylene glycol dimethacrylate) (poly(AM-co-EGDMA)) coating was prepared for the preconcentration of 24-epibrassinolide (24-epiBL) from plant matrix. The coating had the enrichment factor of 32 folds, and the extraction efficiency per unit thickness was 5 times higher than that of the commercial polydimethylsiloxane/divinylbenzene (PDMS/DVB) coating. A novel method based on SPME coupled with derivatization and large volume injection-high performance liquid chromatography (LVI-HPLC) was developed for the analysis of 24-epiBL. The linear range was 0.500-20.0 μg/L with the detection limit of 0.13 μg/L. The amounts of endogenous 24-epiBL in rape and sunflower breaking-wall pollens samples were determined with satisfactory recovery (77.8-104%) and reproducibility (3.9-7.9%). The SPME-DE/LVI-HPLC method is rapid, reliable, convenient and applicable for complicated plant samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Advances in ultrasensitive mass spectrometry of organic molecules.
Kandiah, Mathivathani; Urban, Pawel L
2013-06-21
Ultrasensitive mass spectrometric analysis of organic molecules is important for various branches of chemistry, and other fields including physics, earth and environmental sciences, archaeology, biomedicine, and materials science. It finds applications--as an enabling tool--in systems biology, biological imaging, clinical analysis, and forensics. Although there are a number of technical obstacles associated with the analysis of samples by mass spectrometry at ultratrace level (for example, analyte losses during sample preparation, insufficient sensitivity, and ion suppression), several noteworthy developments have been made over the years. They include: sensitive ion sources, loss-free interfaces, ion optics components, efficient mass analyzers and detectors, as well as "smart" sample preparation strategies. Some of the mass spectrometric methods published to date can achieve sensitivity that is several orders of magnitude higher than that of alternative approaches. Femto- and attomole level limits of detection are nowadays common, while zepto- and yoctomole level limits of detection have also been reported. We envision that ultrasensitive mass spectrometric assays will soon contribute to new discoveries in bioscience and other areas.
Measuring Protective Behavioral Strategies for Marijuana Use Among Young Adults.
Pedersen, Eric R; Hummer, Justin F; Rinker, Dipali Venkataraman; Traylor, Zach K; Neighbors, Clayton
2016-05-01
Marijuana use can result in a variety of negative consequences, yet it remains popular among young adults and the general public at large. Combined with the growing empirical support for the benefits of medicinal marijuana as well as the steady increase in popular opinion regarding its legalization, it is of growing importance to identify strategies that may mitigate the harms related to marijuana use, reduce consumption levels, and limit resulting negative consequences among young adults who use marijuana. The purpose of this study was to develop and conduct initial psychometric analyses on a new scale, which we named the Protective Behavioral Strategies for Marijuana (PBSM) scale. A sample of undergraduate college students who reported past-6-month marijuana use (n = 210) responded to the initial pool of PBSM items and completed measures of marijuana use, consequences from marijuana use, alcohol use, and protective behavioral strategies for alcohol. Results from an iterative principal component analyses process yielded a single-factor structure with 39 items. The PBSM mean composite score negatively associated with marijuana use and consequences, with the strongest correlations evident for past-month users. The PBSM also significantly positively correlated with alcohol protective strategies. Protective behavioral strategies for marijuana appear to be a measurable construct that are related to marijuana frequency and consequences, and thus may be a useful component of intervention and prevention programs with young adults. More work testing the PBSM items with larger and more diverse samples of young adults is encouraged.
Measuring Protective Behavioral Strategies for Marijuana Use Among Young Adults
Pedersen, Eric R.; Hummer, Justin F.; Rinker, Dipali Venkataraman; Traylor, Zach K.; Neighbors, Clayton
2016-01-01
Objective: Marijuana use can result in a variety of negative consequences, yet it remains popular among young adults and the general public at large. Combined with the growing empirical support for the benefits of medicinal marijuana as well as the steady increase in popular opinion regarding its legalization, it is of growing importance to identify strategies that may mitigate the harms related to marijuana use, reduce consumption levels, and limit resulting negative consequences among young adults who use marijuana. The purpose of this study was to develop and conduct initial psychometric analyses on a new scale, which we named the Protective Behavioral Strategies for Marijuana (PBSM) scale. Method: A sample of undergraduate college students who reported past-6-month marijuana use (n = 210) responded to the initial pool of PBSM items and completed measures of marijuana use, consequences from marijuana use, alcohol use, and protective behavioral strategies for alcohol. Results: Results from an iterative principal component analyses process yielded a single-factor structure with 39 items. The PBSM mean composite score negatively associated with marijuana use and consequences, with the strongest correlations evident for past-month users. The PBSM also significantly positively correlated with alcohol protective strategies. Conclusions: Protective behavioral strategies for marijuana appear to be a measurable construct that are related to marijuana frequency and consequences, and thus may be a useful component of intervention and prevention programs with young adults. More work testing the PBSM items with larger and more diverse samples of young adults is encouraged. PMID:27172576
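The scale-development step reported in both records above rests on an iterative principal component analysis of the item pool; a minimal sketch of that style of analysis on simulated Likert-type responses is shown below. The simulated data, the 0.3 loading cut-off and the single-component read-out are illustrative assumptions, not the study's procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Simulated item responses: 210 respondents x 39 retained items, 1-5 Likert-style,
# generated from a single latent "protective strategies" factor plus noise
latent = rng.normal(size=(210, 1))
items = np.clip(np.rint(3 + 0.8 * latent + rng.normal(0, 0.7, size=(210, 39))), 1, 5)

pca = PCA().fit(items)
explained = pca.explained_variance_ratio_
loadings = pca.components_[0] * np.sqrt(pca.explained_variance_[0])

print(f"first component explains {explained[0]:.1%} of the variance")
print("items with weak first-component loadings (|loading| < 0.3):",
      np.where(np.abs(loadings) < 0.3)[0])
```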
Liu, Xiaojuan; Xu, Ningning; Gai, Panpan; Li, Feng
2018-08-01
Since melamine is a strong hazard to human health, the development of new methods for highly sensitive detection of melamine is highly desirable. Herein, a novel fluorescent biosensing strategy was designed for sensitive and selective melamine assay based on the recognition ability of abasic (AP) site in triplex towards melamine and signal amplification by Mg2+-dependent DNAzyme. In this strategy, the melamine-induced formation of triplex DNA was employed to trigger the strand displacement reaction (SDR). The SDR process converted the specific target recognition into the release and activation of Mg2+-dependent DNAzyme, which could catalyze the cleavage of fluorophore/quencher labeled DNA substrate (FQ), resulting in a significantly increased fluorescent signal. Under the optimal conditions, the fluorescent signal has a linear relationship with the logarithm of the melamine concentration in a wide range of 0.005-50 μM. The detection limit was estimated to be 0.9 nM (0.1 ppb), which is sufficiently sensitive for practical application. Furthermore, this strategy exhibits high selectivity against other potential interfering substances, and the practical application of this strategy for milk samples reveals that the proposed strategy works well for melamine assay in real samples. Therefore, this strategy presents a new method for the sensitive melamine assay and holds great promise for sensing applications in the environment and the food safety field. Copyright © 2018 Elsevier B.V. All rights reserved.
Gass, Katherine M; Sime, Heven; Mwingira, Upendo J; Nshala, Andreas; Chikawe, Maria; Pelletreau, Sonia; Barbre, Kira A; Deming, Michael S; Rebollo, Maria P
2017-10-01
Endemicity mapping is required to determine whether a district requires mass drug administration (MDA). Current guidelines for mapping lymphatic filariasis (LF) require that two sites be selected per district and that within each site a convenience sample of 100 adults be tested for antigenemia or microfilaremia. One or more confirmed positive tests in either site is interpreted as an indicator of potential transmission, prompting MDA at the district level. While this mapping strategy has worked well in high-prevalence settings, imperfect diagnostics and the transmission potential of a single positive adult have raised concerns about the strategy's use in low-prevalence settings. In response to these limitations, a statistically rigorous confirmatory mapping strategy was designed as a complement to the current strategy when LF endemicity is uncertain. Under the new strategy, schools are selected by either systematic or cluster sampling, depending on population size, and within each selected school, children 9-14 years are sampled systematically. All selected children are tested and the number of positive results is compared against a critical value to determine, with known probabilities of error, whether the average prevalence of LF infection is likely below a threshold of 2%. This confirmatory mapping strategy was applied to 45 districts in Ethiopia and 10 in Tanzania, where initial mapping results were considered uncertain. In 42 Ethiopian districts and all 10 of the Tanzanian districts, the number of antigenemic children was below the critical cutoff, suggesting that these districts do not require MDA. Only three Ethiopian districts exceeded the critical cutoff of positive results. Whereas the current World Health Organization guidelines would have recommended MDA in all 55 districts, the present results suggest that only three of these districts require MDA. By avoiding unnecessary MDA in 52 districts, the confirmatory mapping strategy is estimated to have saved a total of $9,293,219.
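The pass/fail rule described (test a fixed number of children and compare the count of positives against a pre-computed critical value, with known error probabilities relative to a 2% threshold) is in the spirit of lot-quality-assurance sampling. A hedged sketch of how such a critical value could be derived from a simple binomial model follows; the sample size and error rate are placeholders, and the actual survey design (systematic or cluster sampling with design effects) would modify the calculation.

```python
from scipy.stats import binom

def critical_value(n_children, threshold=0.02, max_error=0.05):
    """Largest count d such that, if the true prevalence equals `threshold`,
    the probability of observing d or fewer positives is at most `max_error`.
    Observing <= d positives then supports prevalence below the threshold."""
    d = 0
    while binom.cdf(d + 1, n_children, threshold) <= max_error:
        d += 1
    return d

# Purely illustrative numbers, not the values used in the study
print(critical_value(1600))
```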
Microbial Groundwater Sampling Protocol for Fecal-Rich Environments
Harter, Thomas; Watanabe, Naoko; Li, Xunde; Atwill, Edward R; Samuels, William
2014-01-01
Inherently, confined animal farming operations (CAFOs) and other intense fecal-rich environments are potential sources of groundwater contamination by enteric pathogens. The ubiquity of microbial matter poses unique technical challenges in addition to economic constraints when sampling wells in such environments. In this paper, we evaluate a groundwater sampling protocol that relies on extended purging with a portable submersible stainless steel pump and Teflon® tubing as an alternative to equipment sterilization. The protocol allows for collecting a large number of samples quickly, relatively inexpensively, and under field conditions with limited access to capacity for sterilizing equipment. The protocol is tested on CAFO monitoring wells and considers three cross-contamination sources: equipment, wellbore, and ambient air. For the assessment, we use Enterococcus, a ubiquitous fecal indicator bacterium (FIB), in laboratory and field tests with spiked and blank samples, and in an extensive, multi-year field sampling campaign on 17 wells within 2 CAFOs. The assessment shows that extended purging can successfully control for equipment cross-contamination, but also controls for significant contamination of the well-head, within the well casing and within the immediate aquifer vicinity of the well-screen. Importantly, our tests further indicate that Enterococcus is frequently entrained in water samples when exposed to ambient air at a CAFO during sample collection. Wellbore and air contamination pose separate challenges in the design of groundwater monitoring strategies on CAFOs that are not addressed by equipment sterilization, but require adequate QA/QC procedures and can be addressed by the proposed sampling strategy. PMID:24903186
Fenske, Richard A.; Bradman, Asa; Whyatt, Robin M.; Wolff, Mary S.; Barr, Dana B.
2005-01-01
In this article we examine sampling strategies and analytical methods used in a series of recent studies of children’s exposure to pesticides that may prove useful in the design and implementation of the National Children’s Study. We focus primarily on the experiences of four of the National Institute of Environmental Health Sciences/U.S. Environmental Protection Agency/ Children’s Centers and include University of Washington studies that predated these centers. These studies have measured maternal exposures, perinatal exposures, infant and toddler exposures, and exposure among young children through biologic monitoring, personal sampling, and environmental monitoring. Biologic monitoring appears to be the best available method for assessment of children’s exposure to pesticides, with some limitations. It is likely that a combination of biomarkers, environmental measurements, and questionnaires will be needed after careful consideration of the specific hypotheses posed by investigators and the limitations of each exposure metric. The value of environmental measurements, such as surface and toy wipes and indoor air or house dust samples, deserves further investigation. Emphasis on personal rather than environmental sampling in conjunction with urine or blood sampling is likely to be most effective at classifying exposure. For infants and young children, ease of urine collection (possible for extended periods of time) may make these samples the best available approach to capturing exposure variability of nonpersistent pesticides; additional validation studies are needed. Saliva measurements of pesticides, if feasible, would overcome the limitations of urinary metabolite-based exposure analysis. Global positioning system technology appears promising in the delineation of children’s time–location patterns. PMID:16203262
Teaching calculus using module based on cooperative learning strategy
NASA Astrophysics Data System (ADS)
Arbin, Norazman; Ghani, Sazelli Abdul; Hamzah, Firdaus Mohamad
2014-06-01
The purpose of the research is to evaluate the effectiveness of a module that utilizes cooperative learning for teaching the Calculus topics of limits, derivatives and integrals. The sample consists of 50 semester 1 students from the Science Programme (AT 16) at Sultan Idris Education University. A set of questions on the related topics (pre- and post-test) was used as the instrument to collect data. The data were analyzed using inferential statistics involving the paired-sample t-test and the independent t-test. The results show that students have a positive inclination towards the module in terms of understanding.
Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z; Schlein, Alexandra N; Hooker, Jonathan C; Fazeli Dehkordy, Soudabeh; Hamilton, Gavin; Reeder, Scott B; Loomba, Rohit; Sirlin, Claude B
2018-04-01
Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. Retrospective secondary analysis of prospectively acquired clinical research data. A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. The study population's mean whole-liver PDFF was 10.1 ± 8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P = 0.452) or lobar (P = 0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995. 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%, including four-ROI strategies with two ROIs from each lobe, which all had LOA <1.5%. 14/36 (39%) of two-ROI strategies and 74/84 (88%) of three-ROI strategies had ICC >0.995, and 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:988-994. © 2017 International Society for Magnetic Resonance in Medicine.
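The subset evaluation described (every combination of ROIs compared against the nine-ROI average, summarized by Bland-Altman limits of agreement) is straightforward to outline; the sketch below uses simulated per-segment PDFF values and the usual mean-difference plus 1.96 SD definition of the limits of agreement, so the numbers it prints are illustrative only.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Placeholder data: per-patient PDFF (%) in the nine hepatic segments
patient_mean = np.clip(rng.normal(10, 9, size=(391, 1)), 0.5, None)
pdff = np.clip(patient_mean + rng.normal(0, 1.0, size=(391, 9)), 0, None)
reference = pdff.mean(axis=1)            # nine-ROI average PDFF per patient

def limits_of_agreement(subset):
    """Half-width of the Bland-Altman 95% limits of agreement between a
    subset-of-ROIs mean and the nine-ROI reference."""
    diff = pdff[:, list(subset)].mean(axis=1) - reference
    return abs(diff.mean()) + 1.96 * diff.std(ddof=1)

# Fraction of the 126 four-ROI strategies whose limits stay within 1.5 PDFF points
four_roi = list(combinations(range(9), 4))
within = sum(limits_of_agreement(s) < 1.5 for s in four_roi)
print(f"{within}/{len(four_roi)} four-ROI strategies with LOA < 1.5 PDFF points")
```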
Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z.; Schlein, Alexandra N.; Hooker, Jonathan C.; Dehkordy, Soudabeh Fazeli; Hamilton, Gavin; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.
2017-01-01
BACKGROUND Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. PURPOSE To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. STUDY TYPE Retrospective secondary analysis of prospectively acquired clinical research data. POPULATION A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. FIELD STRENGTH/SEQUENCE Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. ASSESSMENT An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. STATISTICAL TESTING Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland–Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland–Altman analyses. RESULTS The study population’s mean whole-liver PDFF was 10.1±8.9% (range: 1.1–44.1%). Although there was no significant difference in average segmental (P=0.452) or lobar (P=0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥ 4 ROIs had ICC >0.995. 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%, including four-ROI strategies with two ROIs from each lobe, which all had LOA <1.5%. 14/36 (39%) of two-ROI strategies and 74/84 (88%) of three-ROI strategies had ICC >0.995, and 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. DATA CONCLUSION Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence 3 Technical Efficacy Stage 2 PMID:28842937
NASA Astrophysics Data System (ADS)
Jomaa, Seifeddine; Jiang, Sanyuan; Yang, Xiaoqiang; Rode, Michael
2016-04-01
It is known that a good evaluation and prediction of surface water pollution is mainly limited by the monitoring strategy and the capability of the hydrological water quality model to reproduce the internal processes. To this end, a compromise sampling frequency, which can reflect the dynamical behaviour of leached nutrient fluxes responding to changes in land use, agricultural practices and point sources, and an appropriate process-based water quality model are required. The objective of this study was to test the identification of hydrological water quality model parameters (nitrogen and phosphorus) under two different monitoring strategies: (1) a regular grab-sampling approach and (2) regular grab-sampling with additional monitoring during hydrological events using automatic samplers. First, the semi-distributed hydrological water quality model HYPE (Hydrological Predictions for the Environment) was successfully calibrated (1994-1998) for discharge (NSE = 0.86), nitrate-N (lowest NSE for nitrate-N load = 0.69), particulate phosphorus and soluble phosphorus in the Selke catchment (463 km2, central Germany) using the regular grab-sampling approach (biweekly to monthly for nitrogen and phosphorus concentrations). Second, the model was successfully validated for the period 1999-2010 for discharge, nitrate-N, particulate phosphorus and soluble phosphorus (lowest NSE for soluble phosphorus load = 0.54). Results showed that when additional sampling during events was combined with the regular grab-sampling approach (period 2011-2013), the hydrological model could reproduce only the nitrate-N and soluble phosphorus concentrations reasonably well; when the additional event-based samples were considered, the HYPE model could not represent the measured particulate phosphorus. This reflects the importance of suspended sediment during hydrological events in increasing particulate phosphorus concentrations. The HYPE model could reproduce total phosphorus during the period 2011-2013 only when the sediment transport-related model parameters were re-identified, taking into account the automatic sampling during high-flow conditions.
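For reference, the Nash-Sutcliffe efficiency (NSE) quoted throughout this abstract compares model errors against the variance of the observations; a minimal sketch with made-up discharge values:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is no
    better than simply using the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Toy daily discharge series (m3/s), illustrative only
obs = [2.1, 3.4, 8.9, 6.2, 4.0, 3.1]
sim = [2.3, 3.1, 8.1, 6.6, 4.4, 2.9]
print(f"NSE = {nse(obs, sim):.2f}")
```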
Making health care quality reports easier to use.
Hibbard, J H; Peters, E; Slovic, P; Finucane, M L; Tusler, M
2001-11-01
Although there is evidence that consumers want comparative quality information, most studies indicate that consumers make limited use of the data in decision making. The reasons for the limited use appear to be the complexity of the information and the difficulty of processing and using the amount of information in reports. The purpose of this investigation was to determine whether there are approaches to reporting comparative information that make it easier for consumers to comprehend the information. Further, the degree to which consumers who have a low level of skill can accurately use that information when it is presented in a format that is easier to use was examined. The study used an experimental design to examine how different presentation approaches affect the use of information. Participants were randomly assigned to different conditions and were asked to review information and complete a decision task related to using comparative information and making health plan selections. Two separate convenience samples were used in the study: an elderly Medicare sample (N = 253), and a nonelderly sample (N = 239). The findings indicate that there are data presentation approaches that help consumers who have lower skills use information more accurately. Some of these presentation strategies (for example, relative stars) improve comprehension among the lower skilled, and other strategies (for example, evaluative labels) appear to aid those in the midrange of comprehension skill. Using these approaches in reporting would likely increase the use of the comparative information and increase the efficacy of reporting efforts.
Kadar, Hanane; Veyrand, Bruno; Barbarossa, Andrea; Pagliuca, Giampiero; Legrand, Arnaud; Bosher, Cécile; Boquien, Clair-Yves; Durand, Sophie; Monteau, Fabrice; Antignac, Jean-Philippe; Le Bizec, Bruno
2011-10-01
Perfluorinated compounds (PFCs) are man-made chemicals for which endocrine-disrupting properties and related possible side effects on human health have been reported, particularly in the case of exposure during the early stages of development (notably the perinatal period). Existing analytical methods dedicated to PFC monitoring in food and/or human fluids are currently based on liquid chromatography coupled to tandem mass spectrometry, and were recently demonstrated to present some limitations in terms of sensitivity and/or specificity. An alternative strategy dedicated to the analysis of fourteen PFCs in human breast milk was proposed, based on an effective sample preparation followed by liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). This methodology confirmed the high interest of HRMS after negative ionization for such halogenated substances, and ultimately achieved detection limits around the pg mL(-1) range with an outstanding signal specificity compared to LC-MS/MS. The proposed method was applied to a first set of 30 breast milk samples from French women. The main PFCs detected in all these samples were PFOS and PFOA, with median values of 74 (range 24 to 171) and 57 (range 18 to 102) pg mL(-1), respectively. These exposure data appeared in the same range as other values reported for European countries. Copyright © 2011 Elsevier Ltd. All rights reserved.
Accumulation of polycyclic aromatic hydrocarbons by Neocalanus copepods in Port Valdez, Alaska.
Carls, Mark G; Short, Jeffrey W; Payne, James
2006-11-01
Sampling zooplankton is a useful strategy for observing trace hydrocarbon concentrations in water because samples represent an integrated average over a considerable effective sampling volume and are more representative of the sampled environment than discretely collected water samples. We demonstrate this method in Port Valdez, Alaska, an approximately 100 km(2) basin that receives about 0.5-2.4 kg of polynuclear aromatic hydrocarbons (PAH) per day. Total PAH (TPAH) concentrations (0.61-1.31 microg/g dry weight), composition, and spatial distributions in a lipid-rich copepod, Neocalanus were consistent with the discharge as the source of contamination. Although Neocalanus acquire PAH from water or suspended particulate matter, total PAH concentrations in these compartments were at or below method detection limits, demonstrating plankton can amplify trace concentrations to detectable levels useful for study.
Application of Deep Learning in GLOBELAND30-2010 Product Refinement
NASA Astrophysics Data System (ADS)
Liu, T.; Chen, X.
2018-04-01
GlobeLand30, one of the best Global Land Cover (GLC) products at 30-m resolution, has been widely used in many research fields. Owing to the significant spectral confusion among different land cover types and the limited textural information of Landsat data, the overall accuracy of GlobeLand30 is about 80%. Although such accuracy is much higher than that of most other global land cover products, it cannot satisfy various applications. There is still a great need for an effective method to improve the quality of GlobeLand30. The explosive growth of high-resolution satellite imagery and the remarkable performance of deep learning on image classification provide a new opportunity to refine GlobeLand30. However, the performance of deep learning depends on the quality and quantity of training samples as well as the model training strategy. Therefore, this paper 1) proposes an automatic training sample generation method via Google Earth to build a large training sample set; and 2) explores the best training strategy for land cover classification using GoogleNet (Inception V3), one of the most widely used deep learning networks. The results show that fine-tuning from the first layer of Inception V3 using the rough large sample set is the best strategy. The retrained network was then applied to one selected area of Xi'an city as a case study of GlobeLand30 refinement. The experimental results indicate that the proposed approach combining deep learning and Google Earth imagery is a promising solution for further improving the accuracy of GlobeLand30.
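A minimal sketch of the "fine-tune from the first layer" strategy the paper identifies as best, using the Keras implementation of Inception V3: all layers are left trainable rather than freezing the pretrained backbone. The class count, image directory and training hyper-parameters are placeholders, not the study's configuration.

```python
import tensorflow as tf

NUM_CLASSES = 10          # placeholder number of land-cover classes
IMG_SIZE = (299, 299)     # native Inception V3 input size

# Start from ImageNet weights but keep every layer trainable,
# i.e. fine-tune from the first layer rather than only the top.
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                         input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = True

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Hypothetical directory of labeled image chips harvested from Google Earth imagery
train_ds = tf.keras.utils.image_dataset_from_directory(
    "training_samples/", image_size=IMG_SIZE, batch_size=32)
model.fit(train_ds, epochs=5)
```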
Convenience samples and caregiving research: how generalizable are the findings?
Pruchno, Rachel A; Brill, Jonathan E; Shands, Yvonne; Gordon, Judith R; Genderson, Maureen Wilson; Rose, Miriam; Cartwright, Francine
2008-12-01
We contrast characteristics of respondents recruited using convenience strategies with those of respondents recruited by random digit dial (RDD) methods. We compare sample variances, means, and interrelationships among variables generated from the convenience and RDD samples. Women aged 50 to 64 who work full time and provide care to a community-dwelling older person were recruited using either RDD (N = 55) or convenience methods (N = 87). Telephone interviews were conducted using reliable, valid measures of demographics, characteristics of the care recipient, help provided to the care recipient, evaluations of caregiver-care recipient relationship, and outcomes common to caregiving research. Convenience and RDD samples had similar variances on 68.4% of the examined variables. We found significant mean differences for 63% of the variables examined. Bivariate correlations suggest that one would reach different conclusions using the convenience and RDD sample data sets. Researchers should use convenience samples cautiously, as they may have limited generalizability.
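The comparisons reported above (sample variances, means, and interrelationships across recruitment methods) can be illustrated with a short sketch on synthetic data; the distributions and the test choices (Levene, Welch) are assumptions for illustration, not necessarily the study's exact procedures.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rdd = rng.normal(loc=3.0, scale=1.0, size=55)          # RDD respondents (n = 55)
convenience = rng.normal(loc=3.6, scale=1.1, size=87)  # convenience respondents (n = 87)

# Equality of variances (Levene's test is reasonably robust to non-normality).
lev_stat, lev_p = stats.levene(rdd, convenience)

# Difference in means (Welch's t-test does not assume equal variances).
t_stat, t_p = stats.ttest_ind(rdd, convenience, equal_var=False)

print(f"variance comparison p = {lev_p:.3f}, mean comparison p = {t_p:.3f}")
# Bivariate relationships would be compared by computing correlations within
# each sample (e.g. stats.pearsonr) and contrasting the resulting matrices.
```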
UMTRA Project water sampling and analysis plan, Durango, Colorado. Revision 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-09-01
Planned, routine ground water sampling activities at the US Department of Energy (DOE) Uranium Mill Tailings Remedial Action (UMTRA) Project site in Durango, Colorado, are described in this water sampling and analysis plan. The plan identifies and justifies the sampling locations, analytical parameters, detection limits, and sampling frequency for the routine monitoring stations at the site. The ground water data are used to characterize the site ground water compliance strategies and to monitor contaminants of potential concern identified in the baseline risk assessment (DOE, 1995a). The regulatory basis for routine ground water monitoring at UMTRA Project sites is derived from the US EPA regulations in 40 CFR Part 192 (1994) and EPA standards of 1995 (60 FR 2854). Sampling procedures are guided by the UMTRA Project standard operating procedures (SOP) (JEG, n.d.), the Technical Approach Document (TAD) (DOE, 1989), and the most effective technical approach for the site.
Qiu, Huazhang; Wu, Namei; Zheng, Yanjie; Chen, Min; Weng, Shaohuang; Chen, Yuanzhong; Lin, Xinhua
2015-01-01
A robust and versatile signal-on fluorescence sensing strategy was developed to provide label-free detection of various target analytes. The strategy used SYBR Green I dye and graphene oxide as signal reporter and signal-to-background ratio enhancer, respectively. Multidrug resistance protein 1 (MDR1) gene and mercury ion (Hg2+) were selected as target analytes to investigate the generality of the method. The linear relationship and specificity of the detections showed that the sensitive and selective analyses of target analytes could be achieved by the proposed strategy with low detection limits of 0.5 and 2.2 nM for MDR1 gene and Hg2+, respectively. Moreover, the strategy was used to detect real samples. Analytical results of MDR1 gene in the serum indicated that the developed method is a promising alternative approach for real applications in complex systems. Furthermore, the recovery of the proposed method for Hg2+ detection was acceptable. Thus, the developed label-free signal-on fluorescence sensing strategy exhibited excellent universality, sensitivity, and handling convenience. PMID:25565810
NASA Astrophysics Data System (ADS)
Lima, Pedro; Steger, Stefan; Glade, Thomas
2017-04-01
Landslides can represent a significant threat to people and infrastructure in hilly and mountainous landscapes worldwide. Understanding and predicting these geomorphic processes is crucial to avoid economic losses or even casualties among people and damage to their properties. Statistically based landslide susceptibility models are well known for being highly reliant on the quality, representativeness and availability of input data. In this context, several studies indicate that the landslide inventory represents the most important input data. However, each landslide mapping technique or data collection has its drawbacks. Consequently, biased landslide inventories may commonly be introduced into statistical models, especially at regional or even national scale. It remains for the researcher to be aware of potential limitations and to design strategies that avoid or reduce the propagation of input data errors and biases into the modelling outcomes. Previous studies have shown that such erroneous landslide inventories may lead to unrealistic landslide susceptibility maps. We assume that one possibility to tackle systematic landslide-inventory biases might be a focus on sampling strategies for the non-landslide locations. For this purpose, we test an approach for the Austrian territory that concentrates on a modified non-landslide sampling strategy instead of the traditionally applied random sampling. It is expected that the way non-landslide locations are represented (e.g. equally over the area or only within those areas where mapping campaigns have been conducted) is important to reduce a potential over- or underestimation of landslide susceptibility within specific areas caused by bias, since presumably every landslide inventory is systematically incomplete, especially in areas where no mapping campaign was previously conducted. This also applies to the inventory currently available for the Austrian territory, composed of 14,519 shallow landslides. Within this study, we introduce the following explanatory variables to test the effect of different non-landslide strategies: lithological units, grouped by their geotechnical properties, and topographic parameters such as aspect, elevation, slope gradient and topographic position. Landslide susceptibility maps will be derived by applying logistic regression, while systematic comparisons will be carried out based on models created with different non-landslide sampling strategies. Models generated by conventional random sampling are compared against models based on stratified and clustered sampling strategies. The modelling results will be compared in terms of their prediction performance, measured by the AUROC (Area Under the Receiver Operating Characteristic Curve) obtained by means of a k-fold cross-validation, and by the spatial pattern of the maps. The outcomes of this study are intended to contribute to the understanding of how landslide-inventory biases may be counteracted.
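A minimal sketch of the comparison framework described above, a logistic-regression susceptibility model scored by cross-validated AUROC under two non-landslide sampling strategies, is given below. The terrain variables, sample sizes, and the "mapped area" rule are synthetic stand-ins, not the Austrian inventory.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_cells = 20000
slope = rng.uniform(0, 45, n_cells)          # slope gradient (degrees)
elevation = rng.uniform(0.2, 2.5, n_cells)   # elevation (km)
mapped = slope > 10                          # pretend mapping campaigns covered steeper terrain only
landslide_idx = rng.choice(np.where(mapped)[0], 500, replace=False)

def cv_auc(non_landslide_idx):
    """Cross-validated AUROC of a logistic-regression susceptibility model."""
    idx = np.concatenate([landslide_idx, non_landslide_idx])
    X = np.column_stack([slope[idx], elevation[idx]])
    y = np.isin(idx, landslide_idx).astype(int)
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

# Strategy A: non-landslide points sampled at random over the whole territory.
pool_all = np.setdiff1d(np.arange(n_cells), landslide_idx)
random_idx = rng.choice(pool_all, 500, replace=False)
# Strategy B: non-landslide points restricted to the mapped areas only.
pool_mapped = np.setdiff1d(np.where(mapped)[0], landslide_idx)
mapped_idx = rng.choice(pool_mapped, 500, replace=False)

print("AUROC, random non-landslide sampling     :", round(cv_auc(random_idx), 3))
print("AUROC, mapped-area non-landslide sampling:", round(cv_auc(mapped_idx), 3))
```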
Penetrator role in Mars sample strategy
NASA Technical Reports Server (NTRS)
Boynton, William; Dwornik, Steve; Eckstrom, William; Roalstad, David A.
1988-01-01
The application of the penetrator to a Mars Return Sample Mission (MRSM) has direct advantages for meeting science objectives and mission safety. Based on engineering data and work currently conducted at Ball Aerospace Systems Division, the concept of penetrators as scientific instruments is entirely practical. The primary use of a penetrator for MRSM would be to optimize the selection of the sample site location and to help in selection of the actual sample to be returned to Earth. It is recognized that the amount of sample to be returned is very limited; therefore, the selection of the sample site is critical to the success of the mission. The following mission scenario is proposed. The selection of the site from which a sample will be acquired will be performed by science working groups. A decision will be reached and a set of target priorities established based on data giving geochemical, geophysical and geological information. The first task of a penetrator will be to collect data at up to 4 to 6 possible landing sites. The penetrator can include geophysical, geochemical, geological and engineering instruments to confirm that scientific data requirements at that site will be met. This in situ, near real-time data, collected prior to final targeting of the lander, will ensure that the sample site is both scientifically valuable and reachable within the limits of the lander's capability.
Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR
Mobli, Mehdi; Hoch, Jeffrey C.
2017-01-01
Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. PMID:25456315
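To make the sampling-schedule idea concrete, here is a small sketch that builds a uniform Nyquist grid for an indirect dimension and then retains only a biased random subset of increments, one common flavor of nonuniform sampling. The spectral width, grid size, coverage, and decay constant are illustrative assumptions, not values from the review above.
```python
import numpy as np

rng = np.random.default_rng(7)
sw = 2000.0                      # spectral width (Hz)
dt = 1.0 / sw                    # Nyquist interval: largest spacing that avoids aliasing
n_grid = 256                     # full uniform grid in the indirect dimension
t = np.arange(n_grid) * dt

coverage = 0.25                  # retain 25 % of the Nyquist grid points
t2 = 0.05                        # assumed decay time of the indirect-dimension signal (s)
weights = np.exp(-t / t2)        # sample more densely where signal-to-noise is highest
weights /= weights.sum()

schedule = np.sort(rng.choice(n_grid, size=int(coverage * n_grid),
                              replace=False, p=weights))
print("increments kept:", schedule.size, "of", n_grid)
print("first sampled increments:", schedule[:10])
```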
NASA Astrophysics Data System (ADS)
Reyes, J. J.; Adam, J. C.; Tague, C.
2016-12-01
Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
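A minimal sketch of generating model parameter sets with Latin hypercube sampling, as used for calibration and uncertainty analysis above, is shown below; the parameter names and ranges are invented for illustration and are not RHESSys values.
```python
import numpy as np
from scipy.stats import qmc

param_names = ["max_photosynthesis_rate", "allocation_ratio", "soil_drainage_coef"]
lower = np.array([5.0, 0.1, 0.01])
upper = np.array([25.0, 0.9, 0.50])

sampler = qmc.LatinHypercube(d=len(param_names), seed=1)
unit_samples = sampler.random(n=200)                  # 200 parameter sets in [0, 1)^3
param_sets = qmc.scale(unit_samples, lower, upper)    # rescale to physical ranges

for name, column in zip(param_names, param_sets.T):
    print(f"{name}: min = {column.min():.2f}, max = {column.max():.2f}")
```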
Soil sampling strategies: evaluation of different approaches.
de Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia
2008-11-01
The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns and the type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies in general led to acceptable bias values (D) of less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between the different sampling strategies.
Gu, Hui-Wen; Wu, Hai-Long; Yin, Xiao-Li; Li, Yong; Liu, Ya-Juan; Xia, Hui; Zhang, Shu-Rong; Jin, Yi-Feng; Sun, Xiao-Dong; Yu, Ru-Qin; Yang, Peng-Yuan; Lu, Hao-Jie
2014-10-27
β-blockers are the first-line therapeutic agents for treating cardiovascular diseases and also a class of prohibited substances in athletic competitions. In this work, a smart strategy that combines three-way liquid chromatography-mass spectrometry (LC-MS) data with second-order calibration method based on alternating trilinear decomposition (ATLD) algorithm was developed for simultaneous determination of ten β-blockers in human urine and plasma samples. This flexible strategy proved to be a useful tool to solve the problems of overlapped peaks and uncalibrated interferences encountered in quantitative LC-MS, and made the multi-targeted interference-free qualitative and quantitative analysis of β-blockers in complex matrices possible. The limits of detection were in the range of 2.0×10(-5)-6.2×10(-3) μg mL(-1), and the average recoveries were between 90 and 110% with standard deviations and average relative prediction errors less than 10%, indicating that the strategy could provide satisfactory prediction results for ten β-blockers in human urine and plasma samples only using liquid chromatography hyphenated single-quadrupole mass spectrometer in full scan mode. To further confirm the feasibility and reliability of the proposed method, the same batch samples were analyzed by multiple reaction monitoring (MRM) method. T-test demonstrated that there are no significant differences between the prediction results of the two methods. Considering the advantages of fast, low-cost, high sensitivity, and no need of complicated chromatographic and tandem mass spectrometric conditions optimization, the proposed strategy is expected to be extended as an attractive alternative method to quantify analyte(s) of interest in complex systems such as cells, biological fluids, food, environment, pharmaceuticals and other complex samples. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA GRC and MSFC Space-Plasma Arc Testing Procedures
NASA Technical Reports Server (NTRS)
Ferguson, Dale C.; Vayner, Boris V.; Galofaro, Joel T.; Hillard, G. Barry; Vaughn, Jason; Schneider, Todd
2005-01-01
Tests of arcing and current collection in simulated space plasma conditions have been performed at the NASA Glenn Research Center (GRC) in Cleveland, Ohio, for over 30 years and at the Marshall Space Flight Center (MSFC) in Huntsville, Alabama, for almost as long. During this period, proper test conditions for accurate and meaningful space simulation have been worked out, comparisons with actual space performance in spaceflight tests and with real operational satellites have been made, and NASA has established its own internal standards for test protocols. It is the purpose of this paper to communicate the test conditions, test procedures, and types of analysis used at NASA GRC and MSFC to the space environmental testing community at large, to help with international space-plasma arc-testing standardization. To be discussed are: 1. Neutral pressures, neutral gases, and vacuum chamber sizes. 2. Electron and ion densities, plasma uniformity, sample sizes, and Debye lengths. 3. Biasing samples versus self-generated voltages. Floating samples versus grounded. 4. Power supplies and current limits. Isolation of samples from power supplies during arcs. 5. Arc circuits. Capacitance during biased arc-threshold tests. Capacitance during sustained arcing and damage tests. Arc detection. Preventing sustained discharges during testing. 6. Real array or structure samples versus idealized samples. 7. Validity of LEO tests for GEO samples. 8. Extracting arc threshold information from arc rate versus voltage tests. 9. Snapover and current collection at positive sample bias. Glows at positive bias. Kapton (R) pyrolysis. 10. Trigger arc thresholds. Sustained arc thresholds. Paschen discharge during sustained arcing. 11. Testing for Paschen discharge thresholds. Testing for dielectric breakdown thresholds. Testing for tether arcing. 12. Testing in very dense plasmas (i.e. thruster plumes). 13. Arc mitigation strategies. Charging mitigation strategies. Models. 14. Analysis of test results. Finally, the necessity of testing will be emphasized, not to the exclusion of modeling, but as part of a complete strategy for determining when and if arcs will occur, and preventing them from occurring in space.
Electrochemical Detection in Stacked Paper Networks.
Liu, Xiyuan; Lillehoj, Peter B
2015-08-01
Paper-based electrochemical biosensors are a promising technology that enables rapid, quantitative measurements on an inexpensive platform. However, the control of liquids in paper networks is generally limited to a single sample delivery step. Here, we propose a simple method to automate the loading and delivery of liquid samples to sensing electrodes on paper networks by stacking multiple layers of paper. Using these stacked paper devices (SPDs), we demonstrate a unique strategy to fully immerse planar electrodes by aqueous liquids via capillary flow. Amperometric measurements of xanthine oxidase revealed that electrochemical sensors on four-layer SPDs generated detection signals up to 75% higher compared with those on single-layer paper devices. Furthermore, measurements could be performed with minimal user involvement and completed within 30 min. Due to its simplicity, enhanced automation, and capability for quantitative measurements, stacked paper electrochemical biosensors can be useful tools for point-of-care testing in resource-limited settings. © 2015 Society for Laboratory Automation and Screening.
Ding, Jun; Xiao, Hua-Ming; Liu, Simin; Wang, Chang; Liu, Xin; Feng, Yu-Qi
2018-10-05
Although several methods have realized the analysis of low molecular weight (LMW) compounds using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) by overcoming the problem of interference with MS signals in the low mass region derived from conventional organic matrices, this emerging field still requires strategies to address the issue of analyzing complex samples containing LMW components in addition to the LMW compounds of interest, and to solve the problem of lack of universality. The present study proposes an integrated strategy that combines chemical labeling with the supramolecular chemistry of cucurbit[n]uril (CB[n]) for the MALDI MS analysis of LMW compounds in complex samples. In this strategy, the target LMW compounds are first labeled by introducing a series of bifunctional reagents that selectively react with the target analytes and also form stable inclusion complexes with CB[n]. Then, the labeled products act as guest molecules that readily and selectively form stable inclusion complexes with CB[n]. This strategy relocates the MS signals of the LMW compounds of interest from the low mass region suffering high interference to the high mass region where interference from low mass components is absent. Experimental results demonstrate that a wide range of LMW compounds, including carboxylic acids, aldehydes, amines, thiols, and cis-diols, can be successfully detected using the proposed strategy, with limits of detection in the range of 0.01-1.76 nmol/mL. In addition, the high selectivity of the labeling reagents for the target analytes, in conjunction with the high selectivity of the binding between the labeled products and CB[n], ensures an absence of signal interference from the non-targeted LMW components of complex samples. Finally, the feasibility of the proposed strategy for complex sample analysis is demonstrated by the accurate and rapid quantitative analysis of aldehydes in saliva and herbal medicines. As such, this work not only provides an alternative method for the detection of various LMW compounds using MALDI MS, but can also be applied to the selective and high-throughput analysis of LMW analytes in complex samples. Copyright © 2018 Elsevier B.V. All rights reserved.
Reconstruction of limited-angle dual-energy CT using mutual learning and cross-estimation (MLCE)
NASA Astrophysics Data System (ADS)
Zhang, Huayu; Xing, Yuxiang
2016-03-01
Dual-energy CT (DECT) imaging has gained a lot of attention because of its capability to discriminate materials. We propose a flexible DECT scan strategy which can be realized on a system with general X-ray sources and detectors. In order to lower dose and scanning time, our DECT acquires two projection data sets on two arcs of limited angular coverage (one for each energy). Meanwhile, a certain number of rays from the two data sets form conjugate sampling pairs. Our reconstruction method for such a DECT scan mainly tackles the consequent limited-angle problem. Using the idea of an artificial neural network, we uncover the connection between projections at the two energies by constructing a relationship between the linear attenuation coefficient at the high energy and that at the low one. We use this relationship to cross-estimate missing projections and, for each energy, reconstruct attenuation images from an augmented data set including projections at views covered by that energy itself (projections collected in scanning) and by the other energy (projections estimated). Validated by a numerical experiment on a dental phantom with rather complex structures, our DECT is effective in recovering small structures in severe limited-angle situations. This DECT scanning strategy could considerably broaden DECT design in practice.
On the design of paleoenvironmental data networks for estimating large-scale patterns of climate
NASA Astrophysics Data System (ADS)
Kutzbach, J. E.; Guetter, P. J.
1980-09-01
Guidelines are determined for the spatial density and location of climatic variables (temperature and precipitation) that are appropriate for estimating the continental- to hemispheric-scale pattern of atmospheric circulation (sea-level pressure). Because instrumental records of temperature and precipitation simulate the climatic information that is contained in certain paleoenvironmental records (tree-ring, pollen, and written-documentary records, for example), these guidelines provide useful sampling strategies for reconstructing the pattern of atmospheric circulation from paleoenvironmental records. The statistical analysis uses a multiple linear regression model. The sampling strategies consist of changes in the site density (from 0.5 to 2.5 sites per million square kilometers) and site location (from western North American sites only to sites in Japan, North America, and western Europe) of the climatic data. The results showed that the accuracy of specification of the pattern of sea-level pressure: (1) is improved if sites with climatic records are spread as uniformly as possible over the area of interest; (2) increases with increasing site density, at least up to the maximum site density used in this study; (3) is improved if sites cover an area that extends considerably beyond the limits of the area of interest. The accuracy of specification was lower for independent data than for the data that were used to develop the regression model; some skill was found for almost all sampling strategies.
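The regression design above can be sketched briefly: specify a large-scale target (here a single pressure value) from networks of climate "sites" of increasing density, and compare dependent against independent skill. All data here are synthetic; the actual study used instrumental station records and sea-level pressure fields.
```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_years, n_sites = 80, 40
sites = rng.normal(size=(n_years, n_sites))                        # site climate anomalies
true_weights = rng.normal(size=n_sites) / n_sites
pressure = sites @ true_weights + 0.3 * rng.normal(size=n_years)   # large-scale target

X_dev, X_ind, y_dev, y_ind = train_test_split(sites, pressure,
                                              test_size=0.3, random_state=0)
for n_used in (5, 10, 20, 40):                                     # increasing "site density"
    model = LinearRegression().fit(X_dev[:, :n_used], y_dev)
    dep = model.score(X_dev[:, :n_used], y_dev)                    # dependent-period skill
    ind = model.score(X_ind[:, :n_used], y_ind)                    # independent-period skill
    print(f"{n_used:2d} sites: dependent R^2 = {dep:.2f}, independent R^2 = {ind:.2f}")
```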
Comprehensive benchmarking and ensemble approaches for metagenomic classifiers.
McIntyre, Alexa B R; Ounit, Rachid; Afshinnekoo, Ebrahim; Prill, Robert J; Hénaff, Elizabeth; Alexander, Noah; Minot, Samuel S; Danko, David; Foox, Jonathan; Ahsanuddin, Sofia; Tighe, Scott; Hasan, Nur A; Subramanian, Poorani; Moffat, Kelly; Levy, Shawn; Lonardi, Stefano; Greenfield, Nick; Colwell, Rita R; Rosen, Gail L; Mason, Christopher E
2017-09-21
One of the main challenges in metagenomics is the identification of microorganisms in clinical and environmental samples. While an extensive and heterogeneous set of computational tools is available to classify microorganisms using whole-genome shotgun sequencing data, comprehensive comparisons of these methods are limited. In this study, we use the largest-to-date set of laboratory-generated and simulated controls across 846 species to evaluate the performance of 11 metagenomic classifiers. Tools were characterized on the basis of their ability to identify taxa at the genus, species, and strain levels, quantify relative abundances of taxa, and classify individual reads to the species level. Strikingly, the number of species identified by the 11 tools can differ by over three orders of magnitude on the same datasets. Various strategies can ameliorate taxonomic misclassification, including abundance filtering, ensemble approaches, and tool intersection. Nevertheless, these strategies were often insufficient to completely eliminate false positives from environmental samples, which are especially important where they concern medically relevant species. Overall, pairing tools with different classification strategies (k-mer, alignment, marker) can combine their respective advantages. This study provides positive and negative controls, titrated standards, and a guide for selecting tools for metagenomic analyses by comparing ranges of precision, accuracy, and recall. We show that proper experimental design and analysis parameters can reduce false positives, provide greater resolution of species in complex metagenomic samples, and improve the interpretation of results.
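Two of the mitigation strategies named above, abundance filtering and tool intersection, can be sketched in a few lines of Python; the classifier outputs below are invented for illustration.
```python
calls = {
    "tool_kmer":      {"E. coli": 0.40, "S. aureus": 0.35, "B. subtilis": 0.0004},
    "tool_alignment": {"E. coli": 0.42, "S. aureus": 0.30, "P. aeruginosa": 0.0002},
    "tool_marker":    {"E. coli": 0.39, "S. aureus": 0.33},
}

MIN_ABUNDANCE = 0.001   # abundance filter: drop likely-false trace calls
MIN_TOOLS = 2           # intersection: require agreement of at least two tools

filtered = {tool: {sp for sp, ab in result.items() if ab >= MIN_ABUNDANCE}
            for tool, result in calls.items()}

support = {}
for species_set in filtered.values():
    for sp in species_set:
        support[sp] = support.get(sp, 0) + 1

consensus = sorted(sp for sp, n_tools in support.items() if n_tools >= MIN_TOOLS)
print("consensus species:", consensus)   # ['E. coli', 'S. aureus']
```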
Sells, Sallie M; Held, David W; Enloe, Stephen F; Loewenstein, Nancy J; Eckhardt, Lori G
2015-03-01
Cogongrass (Imperata cylindrica Beav.) is an aggressive, invasive weed with a global distribution. In North America, it threatens the integrity of southeastern pine agroecosystems, including longleaf pine (Pinus palustris Mill.). While studies have examined the impacts of cogongrass and various vegetation management strategies on longleaf pine understory plant communities, little is known about how they impact associated insect communities. To understand the effect of cogongrass management strategies on arthropod natural enemies and bark beetles, a split-plot design was used to test fire (whole-plot) and four subplot treatments (control, herbicide, seeding, and herbicide plus seeding). Arthropods were sampled using pitfall traps and sweep samples. After 2 years of sampling, total natural enemies were not significantly affected by subplot treatment but were affected by burn treatment. Upon subdividing natural enemies into groups, only spiders were significantly affected by subplot treatment, but predatory beetles and ants were significantly affected by burn treatment. The abundance of root-feeding bark beetles (Hylastes spp.) was not significantly affected by subplot or whole-plot treatments. Multiple applications of herbicide remain the most effective way to manage cogongrass in longleaf pine. In this study, we found limited evidence that cogongrass management with herbicides would negatively impact arthropod natural enemies associated with longleaf pine or locally increase root-feeding bark beetles. © 2014 Society of Chemical Industry.
Wu, Chang-Lin; He, Jian-An; Gu, Da-Yong; Shao, Chao-Peng; Zhu, Yi; Dang, Xin-Tang
2018-01-01
This study aimed to establish a novel strategy based on surface plasmon resonance (SPR) technology for platelet compatibility testing. A novel surface matrix was prepared based on poly(OEGMA-co-HEMA) via surface-initiated polymerization as a biosensor surface platform. Type O universal platelets and donor platelets were immobilized on these novel matrices via an amine-coupling reaction and served as capturing ligands for binding platelet antibodies. Antibodies binding to platelets were monitored in real time by injecting the samples into a microfluidic channel. Clinical serum samples (n = 186) from patients with multiple platelet transfusions were assayed for platelet antibodies using the SPR technology and the monoclonal antibody-immobilized platelet antigen (MAIPA) assay. The novel biosensor surface achieved a nonfouling background and high immobilization capacity and showed good repeatability and stability after regeneration. The limit of detection of the SPR biosensor for platelet antibody was estimated to be 50 ng/mL. The sensitivity and specificity were 92% and 98.7%, respectively. It could detect platelet antibodies directly in serum samples, and the results were similar to the MAIPA assay. A novel strategy to facilitate the sensitive and reliable detection of platelet compatibility with an SPR-based biosensor was established in this study. The SPR-based biosensor combined with novel surface chemistry is a promising method for platelet compatibility testing.
Babamoradi, Hamid; van den Berg, Frans; Rinnan, Åsmund
2016-02-18
In Multivariate Statistical Process Control (MSPC), when a fault is expected or detected in the process, contribution plots are essential for operators and optimization engineers in identifying the process variables that were affected by, or might be the cause of, the fault. The traditional way of interpreting a contribution plot is to examine the largest contributing process variables as the most probable faulty ones. This might result in false readings purely due to differences in natural variation, measurement uncertainties, etc. It is more reasonable to compare variable contributions for new process runs with historical results achieved under Normal Operating Conditions, where confidence limits (CLs) for contribution plots estimated from training data are used to judge new production runs. Asymptotic methods cannot provide confidence limits for contribution plots, leaving re-sampling methods as the only option. We suggest bootstrap re-sampling to build confidence limits for all contribution plots in online PCA-based MSPC. The new strategy to estimate CLs is compared to previously reported CLs for contribution plots. An industrial batch process dataset was used to illustrate the concepts. Copyright © 2016 Elsevier B.V. All rights reserved.
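A simplified sketch of bootstrap confidence limits for residual (Q) contribution plots is given below. The particular limit construction (mean of the bootstrapped 95th percentiles of the NOC contributions) and the synthetic data are assumptions for illustration, not necessarily the procedure of the study above.
```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
noc = rng.normal(size=(120, 8))                  # normal-operating-condition runs
pca = PCA(n_components=3).fit(noc)

def q_contributions(X):
    """Per-variable squared residual contributions to the Q statistic."""
    residual = X - pca.inverse_transform(pca.transform(X))
    return residual ** 2

noc_contrib = q_contributions(noc)               # shape: (runs, variables)

n_boot = 2000
percentiles = np.zeros((n_boot, noc.shape[1]))
for b in range(n_boot):
    idx = rng.integers(0, noc.shape[0], noc.shape[0])      # resample NOC runs
    percentiles[b] = np.percentile(noc_contrib[idx], 95, axis=0)
cl = percentiles.mean(axis=0)                    # bootstrap confidence limit per variable

new_run = rng.normal(size=8)
new_run[2] += 4.0                                # inject a fault into variable index 2
flagged = np.where(q_contributions(new_run[None, :])[0] > cl)[0]
print("variables exceeding their contribution limit:", flagged)
```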
Code of Federal Regulations, 2010 CFR
2010-07-01
... strategies are strategies for which adequate procedures to quantify emissions reductions or specify a program... goals. Such programs are categorized into the following three categories: Emission-limiting, market-response, and directionally-sound strategies. Emission-limiting strategies are strategies that directly...
Bindon, James R; Vitzthum, Virginia J
2002-04-01
This study compares findings from research projects involving different genetic, environmental, and cultural contexts: a study of lifestyle and health from American Samoa (ASLS) and the Bolivian project Reproduction and Ecology in Provincia Aroma (REPA). This paper presents analyses of varying economic strategies and their association with nutritional status indicators in each population. The ASLS sample includes 66 Samoan women and the REPA sample includes 210 Aymara women. Principal components analysis of household economic resources within each sample extracted two significant factors: one represents modernizing influences, including education and occupational status, and the other represents ethnographically salient traditional economic behavior. The traditional pattern includes adding household members in Samoa and selling agricultural products in Bolivia. This analysis places each woman along two continua, traditional and modern, based on her household's mobilization of economic resources, permitting an understanding of the patterns underlying household economic behavior that is not possible in univariate analyses of socioeconomic variables. For the Bolivian women, the strategy involving more education and higher occupational status was associated with higher values of several nutritional status indicators, including body mass index, arm muscle area, and peripheral skinfolds. But among the Samoan women, where substantial obesity was the norm, there were no significant differences in anthropometric measurements based on economic strategies. These data argue for the importance of directly measuring the potential consequences of variation in household economic strategies rather than merely inferring them, and of assessing ethnographically relevant aspects of household economic production rather than limiting analyses to non-context-specific economic indicators such as income. This focus on household strategy is likely to be fruitful especially where economic and nutritional conditions are marginal. The findings from Bolivia also support efforts in developing countries to improve girls' education, and thereby occupational prospects, as a means to improve their health status as women.
Pelletreau, Sonia; Barbre, Kira A.; Deming, Michael S.; Rebollo, Maria P.
2017-01-01
Endemicity mapping is required to determine whether a district requires mass drug administration (MDA). Current guidelines for mapping lymphatic filariasis (LF) require that two sites be selected per district and that within each site a convenience sample of 100 adults be tested for antigenemia or microfilaremia. One or more confirmed positive tests in either site is interpreted as an indicator of potential transmission, prompting MDA at the district level. While this mapping strategy has worked well in high-prevalence settings, imperfect diagnostics and the transmission potential of a single positive adult have raised concerns about the strategy's use in low-prevalence settings. In response to these limitations, a statistically rigorous confirmatory mapping strategy was designed as a complement to the current strategy when LF endemicity is uncertain. Under the new strategy, schools are selected by either systematic or cluster sampling, depending on population size, and within each selected school, children aged 9-14 years are sampled systematically. All selected children are tested and the number of positive results is compared against a critical value to determine, with known probabilities of error, whether the average prevalence of LF infection is likely below a threshold of 2%. This confirmatory mapping strategy was applied to 45 districts in Ethiopia and 10 in Tanzania, where initial mapping results were considered uncertain. In 42 Ethiopian districts, and all 10 of the Tanzanian districts, the number of antigenemic children was below the critical cutoff, suggesting that these districts do not require MDA. Only three Ethiopian districts exceeded the critical cutoff of positive results. Whereas the current World Health Organization guidelines would have recommended MDA in all 55 districts, the present results suggest that only three of these districts require MDA. By avoiding unnecessary MDA in 52 districts, the confirmatory mapping strategy is estimated to have saved a total of $9,293,219. PMID:28976981
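One generic way to construct such a critical value is a lot-quality-assurance-style binomial rule, sketched below. The sample size and tolerated error rate are assumptions, and this is not necessarily the exact statistical design used in the study above.
```python
from scipy.stats import binom

n_children = 1500   # assumed number of children tested in a district
threshold = 0.02    # design prevalence threshold (2 %)
beta = 0.05         # tolerated chance of wrongly passing a district truly at 2 %

# Largest cutoff d with P(X <= d | n, p = 0.02) <= beta; a district is then
# classified as "below threshold" (no MDA) only if positives <= d.
d = -1
while binom.cdf(d + 1, n_children, threshold) <= beta:
    d += 1
# d == -1 would mean no cutoff satisfies the error bound (sample too small).

print(f"critical cutoff: {d} positives; "
      f"P(pass | prevalence = 2%) = {binom.cdf(d, n_children, threshold):.3f}")
```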
Clustering analysis of proteins from microbial genomes at multiple levels of resolution.
Zaslavsky, Leonid; Ciufo, Stacy; Fedorov, Boris; Tatusova, Tatiana
2016-08-31
Microbial genomes at the National Center for Biotechnology Information (NCBI) represent a large collection of more than 35,000 assemblies. There are several complexities associated with the data: a great variation in sampling density, since human pathogens are densely sampled while other bacteria are less represented; different protein families occur in annotations with different frequencies; and the quality of genome annotation varies greatly. In order to extract useful information from these sophisticated data, the analysis needs to be performed at multiple levels of phylogenomic resolution and protein similarity, with an adequate sampling strategy. Protein clustering is used to construct meaningful and stable groups of similar proteins to be used for analysis and functional annotation. Our approach is to create protein clusters at three levels. First, tight clusters in groups of closely related genomes (species-level clades) are constructed using a combined approach that takes into account both sequence similarity and genome context. Second, clustroids of conservative in-clade clusters are organized into seed global clusters. Finally, global protein clusters are built around the seed clusters. We propose filtering strategies that allow the protein set included in global clustering to be limited. The in-clade clustering procedure, subsequent selection of clustroids and organization into seed global clusters provide a robust representation and a high rate of compression. Seed protein clusters are further extended by adding related proteins. Extended seed clusters include a significant part of the data and represent all major known cell machinery. The remaining part, coming from either non-conservative (unique) or rapidly evolving proteins, from rare genomes, or from low-quality annotation, does not group together well. Processing these proteins requires significant computational resources and results in a large number of questionable clusters. The developed filtering strategies make it possible to identify and exclude such peripheral proteins, limiting the protein dataset in global clustering. Overall, the proposed methodology allows relevant data at different levels of detail to be obtained and data redundancy to be eliminated while keeping biologically interesting variations.
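A toy sketch of the clustering-with-clustroids idea is given below; it uses a simple string-similarity measure in place of sequence alignment and genome context, so it only illustrates the control flow, not the actual NCBI pipeline.
```python
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

def greedy_cluster(sequences, threshold=0.8):
    """Assign each sequence to the first cluster whose representative is similar enough."""
    clusters = []
    for seq in sequences:
        for cluster in clusters:
            if similarity(seq, cluster[0]) >= threshold:
                cluster.append(seq)
                break
        else:
            clusters.append([seq])
    return clusters

def clustroid(cluster):
    """The member with the highest total similarity to all other members."""
    return max(cluster, key=lambda s: sum(similarity(s, t) for t in cluster))

proteins = ["MKTAYIAKQR", "MKTAYIAKQK", "MKTAYLAKQR", "GSHMLEDPVR", "GSHMLEDPIR"]
clusters = greedy_cluster(proteins)
seeds = [clustroid(c) for c in clusters]   # clustroids seed the coarser, global round
print("in-clade clusters:", clusters)
print("seed sequences:", seeds)
```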
Stress and psychopathology in latin-american immigrants: the role of coping strategies.
Patiño, Camila; Kirchner, Teresa
2010-01-01
Increased migration into Spain requires the development of preventive strategies that help both immigrants and the host society to deal with the associated risk factors and thus avoid the emergence of psychopathology. To determine the level of psychopathology in Latin-American immigrants who reside in Barcelona and its relationship to the coping strategies used to mitigate the effects of the stress linked to migration. The sample comprised 210 Latin-American immigrants over the age of 18. Sampling was based on consecutive cases, and participants were contacted through an NGO. Employment is the stressor that most affects immigrants. Psychopathological symptoms are common among the immigrant population, and there is a relationship between the use of avoidance coping strategies and greater symptomatology. The longer immigrants have been in the host country, the less they make use of approach strategies. The migratory process produces high levels of stress that are linked to psychopathology. Being subjected to a prolonged stressor has a destabilizing effect on both mental and physical health and can lead to a deterioration in social relationships due to more intense feelings of anger and frustration. Coping strategies appear to be more widely used among immigrants than in the indigenous population, and this may indicate the high levels of stress to which the former are subject and the attempts they make to deal with it. The limitations of the study include the source of data collection and the fact that most of the instruments used have not been validated in the participants' countries of origin. Copyright 2009 S. Karger AG, Basel.
Park, Alayna L; Tsai, Katherine H; Guan, Karen; Chorpita, Bruce F
2018-02-14
This study examined patterns of evidence-based treatment (EBT) delivery following a county-wide EBT reform initiative. Data were gathered from 60 youth and their 21 providers, who were instructed to deliver therapy as they normally would under the EBT initiative. Results showed limited applicability of county-supported EBTs to this service sample, and that most youth did not receive traditional delivery of EBTs. Findings suggest that it may be unrealistic to expect providers to deliver EBTs with fidelity with all clients, and that EBT implementation may be best thought of as a strategy for improving mental health services rather than a goal.
Söderström, Hanna; Lindberg, Richard H; Fick, Jerker
2009-01-16
Although polar organic contaminants (POCs) such as pharmaceuticals are considered to be among today's most important emerging contaminants, few of them are regulated or included in ongoing monitoring programs. However, the growing concern among the public and researchers, together with the new legislation within the European Union, the Registration, Evaluation and Authorisation of Chemicals (REACH) system, will increase the future need for simple, low-cost strategies for monitoring and risk assessment of POCs in aquatic environments. In this article, we give an overview of the advantages and shortcomings of traditional and novel sampling techniques available for monitoring emerging POCs in water. The benefits and drawbacks of using active and biological sampling are discussed and the principles of organic passive samplers (PS) presented. A detailed overview of the types of polar organic PS available, their classes of target compounds and fields of application is given, and the considerations involved in using them, such as environmental effects and quality control, are discussed. The usefulness of biological sampling of POCs in water was found to be limited. Polar organic PS were considered to be the only available, but nevertheless efficient, alternative to active water sampling due to their simplicity, low cost, lack of need for a power supply or maintenance, and ability to collect time-integrative samples with one sample collection. However, polar organic PS need to be further developed before they can be used as standard in water quality monitoring programs.
Aerosol dilution as a simple strategy for analysis of complex samples by ICP-MS.
Barros, Ariane I; Pinheiro, Fernanda C; Amaral, Clarice D B; Lorençatto, Rodolfo; Nóbrega, Joaquim A
2018-02-01
This study investigated the capability of the High Matrix Introduction (HMI) strategy for analysis of dialysis solution and urine samples using inductively coupled plasma mass spectrometry. The use of HMI enables the direct introduction of urine samples and dialysis solutions 2-fold diluted with 0.14 mol L(-1) HNO3. Bismuth, Ge, Ir, Li, Pt, Rh, Sc and Tl were evaluated as internal standards for Al, Ag, As, Be, Cd, Cr, Pb, Sb, Se, Tl, and Hg determination in dialysis solution and for As, Cd, Hg and Pb determination in urine samples. Helium collision cell mode (4.5 mL min(-1)) was efficient to overcome polyatomic interferences in As, Se and Cr determinations. Mercury memory effects were evaluated by washing with 0.12 mol L(-1) HCl or an alkaline diluent solution prepared with n-butanol, NH4OH, EDTA, and Triton X-100. The latter solution was efficient in avoiding Hg memory effects over 6 h of analysis. Linear calibration curves were obtained for all analytes and detection limits were lower than the maximum amounts allowed by Brazilian legislation. Recoveries for all analytes in dialysis solutions and urine samples ranged from 82% to 125%, and relative standard deviations for all elements and samples were lower than 7%. Analysis of internal control urine samples was in agreement with certified values at the 95% confidence level (t-test; p < 0.05). Copyright © 2017 Elsevier B.V. All rights reserved.
Ptolemy, Adam S; Britz-McKibbin, Philip
2006-02-17
New strategies for integrating sample pretreatment with chemical analysis in a single format are required for rapid, sensitive and enantioselective analyses of low abundance metabolites in complex biological samples. Capillary electrophoresis (CE) offers a unique environment for controlling analyte/reagent band dispersion and electromigration properties using discontinuous electrolyte systems. Recent work in our laboratory towards developing a high-throughput CE platform for low abundance metabolites via on-line sample preconcentration with chemical derivatization (SPCD) is primarily examined in this review, as surprisingly only a few such strategies have been reported in the literature to date. In-capillary sample preconcentration serves to enhance concentration sensitivity via electrokinetic focusing of long sample injection volumes for lower detection limits, whereas chemical derivatization by zone passing is used to expand detectability and selectivity, notably for enantiomeric resolution of metabolites lacking intrinsic chromophores using nanolitre volumes of reagent. Together, on-line SPCD-CE can provide over a 100-fold improvement in concentration sensitivity, shorter total analysis times, reduced sample handling and improved reliability for a variety of amino acid and amino sugar metabolites, and is also amenable to automated high-throughput screening. This review will highlight basic method development and optimization parameters relevant to SPCD-CE, including applications to bacterial metabolite flux and biomarker analyses. Insight into the mechanism of analyte focusing and labeling by SPCD-CE is also discussed, as well as future directions for continued research.
Efficient mitigation strategies for epidemics in rural regions.
Scoglio, Caterina; Schumm, Walter; Schumm, Phillip; Easton, Todd; Roy Chowdhury, Sohini; Sydney, Ali; Youssef, Mina
2010-07-13
Containing an epidemic at its origin is the most desirable mitigation. Epidemics have often originated in rural areas, with rural communities among the first affected. Disease dynamics in rural regions have received limited attention, and results of general studies cannot be directly applied since population densities and human mobility factors are very different in rural regions from those in cities. We create a network model of a rural community in Kansas, USA, by collecting data on the contact patterns and computing rates of contact among a sampled population. We model the impact of different mitigation strategies detecting closely connected groups of people and frequently visited locations. Within those groups and locations, we compare the effectiveness of random and targeted vaccinations using a Susceptible-Exposed-Infected-Recovered compartmental model on the contact network. Our simulations show that the targeted vaccinations of only 10% of the sampled population reduced the size of the epidemic by 34.5%. Additionally, if 10% of the population visiting one of the most popular locations is randomly vaccinated, the epidemic size is reduced by 19%. Our results suggest a new implementation of a highly effective strategy for targeted vaccinations through the use of popular locations in rural communities.
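The vaccination comparison described above can be sketched with a discrete-time SEIR simulation on a synthetic contact network; the network model, transmission probability, and disease durations below are illustrative assumptions rather than the surveyed Kansas contact data.
```python
import random
import networkx as nx

def seir_final_size(G, vaccinated, beta=0.05, incubation=3, infectious=5,
                    n_seeds=5, rng=None):
    """Discrete-time SEIR outbreak on graph G; vaccinated nodes start as removed."""
    rng = rng or random.Random(0)
    state = {n: ("R" if n in vaccinated else "S") for n in G}
    timer = {}
    for n in rng.sample([m for m in G if state[m] == "S"], n_seeds):
        state[n], timer[n] = "I", infectious
    while any(s in ("E", "I") for s in state.values()):
        newly_exposed = [nb for n, s in state.items() if s == "I"
                         for nb in G.neighbors(n)
                         if state[nb] == "S" and rng.random() < beta]
        for n in list(timer):                        # progress E -> I -> R
            timer[n] -= 1
            if timer[n] == 0:
                del timer[n]
                if state[n] == "E":
                    state[n], timer[n] = "I", infectious
                else:
                    state[n] = "R"
        for n in newly_exposed:
            if state[n] == "S":
                state[n], timer[n] = "E", incubation
    return sum(s == "R" for s in state.values()) - len(vaccinated)

rng = random.Random(1)
G = nx.barabasi_albert_graph(2000, 3, seed=1)        # heavy-tailed contact network
k = int(0.10 * G.number_of_nodes())
random_vax = set(rng.sample(list(G.nodes), k))
targeted_vax = {n for n, _ in sorted(G.degree, key=lambda x: -x[1])[:k]}

print("final outbreak size, random vaccination  :", seir_final_size(G, random_vax))
print("final outbreak size, targeted vaccination:", seir_final_size(G, targeted_vax))
```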
Quandt, Sara A; Doran, Alicia M; Rao, Pamela; Hoppin, Jane A; Snively, Beverly M; Arcury, Thomas A
2004-01-01
The collection of environmental samples presents a responsibility to return information to the affected participants. Explaining complex and often ambiguous scientific information to a lay audience is a challenge. As shown by environmental justice research, this audience frequently has limited formal education, increasing the challenge for researchers to explain the data collected, the risk indicated by the findings, and action the affected community should take. In this study we describe the development and implementation of a risk communication strategy for environmental pesticide samples collected in the homes of Latino/a migrant and seasonal farmworkers in a community-based participatory research project. The communication strategy was developed with community input and was based on face-to-face meetings with members of participating households. Using visual displays of data effectively conveyed information about individual household contamination and placed it in the context of community findings. The lack of national reference data and definitive standards for action necessitated a simplified risk message. We review the strengths and weaknesses of such an approach and suggest areas for future research in risk communication to communities affected by environmental health risks. PMID:15064174
Donoso Brown, Elena V; Fichter, Renae
2017-12-01
Upper extremity hemiparesis is a post-stroke impairment that impacts quality of life. Home programs are an intervention strategy used by many occupational therapists to support continued motor recovery post-stroke, yet little is known about how these programs are designed and implemented. The purpose of this study was to describe how occupational therapy practitioners approach this task, specifically what strategies they use to support adherence and what types of technology are most commonly used. An on-line survey methodology was used. Participants were recruited through multiple sources, including state associations and occupational therapy educational program directors. A total of 73 occupational therapy practitioners submitted complete surveys. The majority of occupational therapy practitioners in the sample (n = 53) reported creating home programs focused on upper extremity motor recovery more than 80% of the time. Range of motion and strengthening were reported as being among the top three most commonly used interventions by more than half the sample; however, incorporating clients' goals and interests was reported most often as a strategy to create meaning in the home program. Respondents also reported limited incorporation of technology and of strategies to support adherence. Personal motivation was reported by occupational therapy practitioners to be a key moderator of adherence to a home program. Occupational therapy practitioners often provide home programs for individuals post-stroke focusing on upper extremity function. Future research that aims to understand stakeholders' perspectives on home programs and determine effective strategies for ensuring adherence is needed.
Effects of organic carbon sequestration strategies on soil enzymatic activities
NASA Astrophysics Data System (ADS)
Puglisi, E.; Suciu, N.; Botteri, L.; Ferrari, T.; Coppolecchia, D.; Trevisan, M.; Piccolo, A.
2009-04-01
Greenhouse gas emissions can be counterbalanced with proper agronomic strategies aimed at sequestering carbon in soils. These strategies must be tested not only for their ability to reduce carbon dioxide emissions, but also for their impact on soil quality: enzymatic activities are related to overall soil ecological quality and can be used as early and sensitive indicators of alteration events. Three different strategies for soil carbon sequestration were studied: minimum tillage, protection of the biodegradable organic fraction by compost amendment, and oxidative polymerization of soil organic matter catalyzed by biomimetic porphyrins. All strategies were compared with a traditional agricultural management based on tillage and mineral fertilization. Experiments were carried out in three Italian soils from different pedo-climatic regions, located in Piacenza, Turin and Naples, respectively, and cultivated with maize or wheat. Soil samples were taken for three consecutive years after harvest and analyzed for phosphatase, β-glucosidase, urease and invertase activities. An alteration index based on these enzymatic activity levels was applied as well. The biomimetic porphyrin application did not cause changes in enzymatic activities compared to the control in any treatment or location. Enzymatic activities were generally higher in the minimum tillage and compost treatments, while differences between locations and dates of sampling were limited. Application of the soil alteration index based on enzymatic activities showed that soils treated with compost or subjected to minimum tillage generally have a higher biological quality. The work confirms the environmental sustainability of the carbon-sequestering agronomic practices studied.
Jaccard distance based weighted sparse representation for coarse-to-fine plant species recognition.
Zhang, Shanwen; Wu, Xiaowei; You, Zhuhong
2017-01-01
Leaf-based plant species recognition plays an important role in ecological protection; however, its application to large and modern leaf databases has been a long-standing obstacle due to the computational cost and feasibility. Recognizing such limitations, we propose a Jaccard distance based sparse representation (JDSR) method which adopts a two-stage, coarse-to-fine strategy for plant species recognition. In the first stage, we use the Jaccard distance between the test sample and each training sample to coarsely determine the candidate classes of the test sample. The second stage includes a Jaccard distance based weighted sparse representation based classification (WSRC), which aims to approximately represent the test sample in the training space and classify it by the approximation residuals. Since the training model of our JDSR method involves far fewer but more informative representatives, this method is expected to overcome the limitation of the high computational and memory costs of traditional sparse representation based classification. Comparative experimental results on a public leaf image database demonstrate that the proposed method outperforms other existing feature extraction and SRC-based plant recognition methods in terms of both accuracy and computational speed.
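A toy sketch of the coarse-to-fine JDSR idea follows: a Jaccard-distance screen selects candidate classes, then a distance-weighted sparse representation (implemented here with a standard lasso via column scaling) classifies by per-class residual. The binary features, sizes, and weighting scheme are illustrative assumptions, not the authors' implementation.
```python
import numpy as np
from sklearn.linear_model import Lasso

def jaccard_distance(a, b):
    union = np.logical_or(a, b).sum()
    return 1.0 - np.logical_and(a, b).sum() / union if union else 1.0

rng = np.random.default_rng(2)
n_classes, per_class, n_features = 6, 8, 60
prototypes = rng.random((n_classes, n_features)) < 0.3          # binary class prototypes
train = np.array([p ^ (rng.random(n_features) < 0.05)
                  for p in prototypes for _ in range(per_class)])
labels = np.repeat(np.arange(n_classes), per_class)
test = prototypes[4] ^ (rng.random(n_features) < 0.05)          # test sample from class 4
y = test.astype(float)

# Stage 1 (coarse): keep the classes closest to the test sample in Jaccard distance.
dist = np.array([jaccard_distance(test, t) for t in train])
class_dist = np.array([dist[labels == c].mean() for c in range(n_classes)])
candidates = np.argsort(class_dist)[:3]

# Stage 2 (fine): distance-weighted sparse representation over candidate samples.
mask = np.isin(labels, candidates)
A = train[mask].astype(float).T                                 # dictionary: features x samples
w = dist[mask] + 1e-6                                           # larger distance -> larger l1 penalty
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(A / w, y)
coef = lasso.coef_ / w                                          # undo the column scaling
cand_labels = labels[mask]
residuals = {c: np.linalg.norm(y - A[:, cand_labels == c] @ coef[cand_labels == c])
             for c in candidates}
print("predicted class:", min(residuals, key=residuals.get))    # true class is 4
```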
A CD45-based barcoding approach to multiplex mass-cytometry (CyTOF).
Lai, Liyun; Ong, Raymond; Li, Juntao; Albani, Salvatore
2015-04-01
CyTOF enables the study of the immune system with a complexity, depth, and multidimensionality never achieved before. However, the full potential of CyTOF can be limited by scarce cell samples. Barcoding strategies developed based on direct labeling of cells using maleimido-monoamide-DOTA (m-DOTA) provide a very useful tool. However, using m-DOTA has some inherent problems, mainly associated with signal intensity. This may be a source of uncertainty when samples are multiplexed. As an alternative or complementary approach to m-DOTA, conjugating an antibody, specific for a membrane protein present on most immune cells, with different isotopes could address the issues of stability and signal intensity needed for effective barcoding. We chose CD45 for this purpose, and designed experiments to address different types of cultures and the ability to detect extra- and intra-cellular targets. We show here that our approach provides a useful alternative to m-DOTA in terms of sensitivity, specificity, flexibility, and user-friendliness. Our manuscript provides details to effectively barcode immune cells, overcoming limitations in current technology and enabling the use of CyTOF with scarce samples (for instance precious clinical samples). © 2015 The Authors. Published by Wiley Periodicals, Inc.
A Strategy for Sensitive, Large Scale Quantitative Metabolomics
Liu, Xiaojing; Ser, Zheng; Cluntun, Ahmad A.; Mentch, Samantha J.; Locasale, Jason W.
2014-01-01
Metabolite profiling has been a valuable asset in the study of metabolism in health and disease. However, current platforms have different limiting factors, such as labor intensive sample preparations, low detection limits, slow scan speeds, intensive method optimization for each metabolite, and the inability to measure both positively and negatively charged ions in single experiments. Therefore, a novel metabolomics protocol could advance metabolomics studies. Amide-based hydrophilic chromatography enables polar metabolite analysis without any chemical derivatization. High resolution MS using the Q-Exactive (QE-MS) has improved ion optics, increased scan speeds (256 msec at resolution 70,000), and has the capability of carrying out positive/negative switching. Using a cold methanol extraction strategy, and coupling an amide column with QE-MS enables robust detection of 168 targeted polar metabolites and thousands of additional features simultaneously. Data processing is carried out with commercially available software in a highly efficient way, and unknown features extracted from the mass spectra can be queried in databases. PMID:24894601
Elimination of Taenia solium Transmission in Northern Peru.
Garcia, Hector H; Gonzalez, Armando E; Tsang, Victor C W; O'Neal, Seth E; Llanos-Zavalaga, Fernando; Gonzalvez, Guillermo; Romero, Jaime; Rodriguez, Silvia; Moyano, Luz M; Ayvar, Viterbo; Diaz, Andre; Hightower, Allen; Craig, Philip S; Lightowlers, Marshall W; Gauci, Charles G; Leontsini, Elli; Gilman, Robert H
2016-06-16
Taeniasis and cysticercosis are major causes of seizures and epilepsy. Infection by the causative parasite Taenia solium requires transmission between humans and pigs. The disease is considered to be eradicable, but data on attempts at regional elimination are lacking. We conducted a three-phase control program in Tumbes, Peru, to determine whether regional elimination would be feasible. We systematically tested and compared elimination strategies to show the feasibility of interrupting the transmission of T. solium infection in a region of highly endemic disease in Peru. In phase 1, we assessed the effectiveness and feasibility of six intervention strategies that involved screening of humans and pigs, antiparasitic treatment, prevention education, and pig replacement in 42 villages. In phase 2, we compared mass treatment with mass screening (each either with or without vaccination of pigs) in 17 villages. In phase 3, we implemented the final strategy of mass treatment of humans along with the mass treatment and vaccination of pigs in the entire rural region of Tumbes (107 villages comprising 81,170 people and 55,638 pigs). The effect of the intervention was measured after phases 2 and 3 with the use of detailed necropsy to detect pigs with live, nondegenerated cysts capable of causing new infection. The necropsy sampling was weighted in that we preferentially included more samples from seropositive pigs than from seronegative pigs. Only two of the strategies implemented in phase 1 resulted in limited control over the transmission of T. solium infection, which highlighted the need to intensify the subsequent strategies. After the strategies in phase 2 were implemented, no cyst that was capable of further transmission of T. solium infection was found among 658 sampled pigs. One year later, without further intervention, 7 of 310 sampled pigs had live, nondegenerated cysts, but no infected pig was found in 11 of 17 villages, including all the villages in which mass antiparasitic treatment plus vaccination was implemented. After the final strategy was implemented in phase 3, a total of 3 of 342 pigs had live, nondegenerated cysts, but no infected pig was found in 105 of 107 villages. We showed that the transmission of T. solium infection was interrupted on a regional scale in a highly endemic region in Peru. (Funded by the Bill and Melinda Gates Foundation and others.).
Palinkas, Lawrence A; Horwitz, Sarah M; Green, Carla A; Wisdom, Jennifer P; Duan, Naihua; Hoagwood, Kimberly
2015-09-01
Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.
Classification of prostate cancer grade using temporal ultrasound: in vivo feasibility study
NASA Astrophysics Data System (ADS)
Ghavidel, Sahar; Imani, Farhad; Khallaghi, Siavash; Gibson, Eli; Khojaste, Amir; Gaed, Mena; Moussa, Madeleine; Gomez, Jose A.; Siemens, D. Robert; Leveridge, Michael; Chang, Silvia; Fenster, Aaron; Ward, Aaron D.; Abolmaesumi, Purang; Mousavi, Parvin
2016-03-01
Temporal ultrasound has been shown to have high classification accuracy in differentiating cancer from benign tissue. In this paper, we extend the temporal ultrasound method to distinguish lower grade prostate cancer (PCa) from all other grades. We use a group of nine patients with mostly lower grade PCa, where cancerous regions are also limited. A critical challenge is to train a classifier with limited aggressive cancerous tissue compared to low grade cancerous tissue. To resolve the problem of imbalanced data, we use the Synthetic Minority Oversampling Technique (SMOTE) to generate synthetic samples for the minority class. We calculate spectral features of temporal ultrasound data and perform feature selection using Random Forests. With a leave-one-patient-out cross-validation strategy, an area under the receiver operating characteristic curve (AUC) of 0.74 is achieved with overall sensitivity and specificity of 70%. Applying an unsupervised learning approach prior to the proposed method improves sensitivity and AUC to 80% and 0.79, respectively. This work represents promising results for classifying lower and higher grade PCa with limited cancerous training samples, using temporal ultrasound.
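The evaluation loop described here (SMOTE oversampling inside a leave-one-patient-out split) can be sketched briefly. The snippet below is a hedged illustration, not the authors' pipeline; it assumes a feature matrix X, binary labels y, and per-sample patient IDs in `groups`.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_auc_score

def lopo_auc(X, y, groups, random_state=0):
    """Leave-one-patient-out AUC with SMOTE applied only to the training fold."""
    scores, truths = [], []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        # Oversample the minority class in the training fold only, so the
        # held-out patient is never contaminated by synthetic samples.
        X_res, y_res = SMOTE(random_state=random_state).fit_resample(
            X[train_idx], y[train_idx])
        clf = RandomForestClassifier(n_estimators=200, random_state=random_state)
        clf.fit(X_res, y_res)
        scores.extend(clf.predict_proba(X[test_idx])[:, 1])
        truths.extend(y[test_idx])
    return roc_auc_score(truths, scores)
```

With very few minority-class samples per fold, SMOTE's `k_neighbors` parameter may need to be lowered; that practical detail is outside the abstract and left to the reader.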
Systematic review of the use of online questionnaires of older adults.
Remillard, Meegan L; Mazor, Kathleen M; Cutrona, Sarah L; Gurwitz, Jerry H; Tjia, Jennifer
2014-04-01
To describe methodological approaches to population targeting and sampling and to summarize limitations of Internet-based questionnaires in older adults. Systematic literature review. Studies using online questionnaires in older adult populations. English-language articles using search terms for geriatric, age 65 and over, Internet survey, online survey, Internet questionnaire, and online questionnaire in PubMed and EBSCO host between 1984 and July 2012. Inclusion criteria were study population mean age 65 and older and use of an online questionnaire for research. Review of 336 abstracts yielded 14 articles for full review by two investigators; 11 articles met inclusion criteria. Articles were extracted for study design and setting, participant characteristics, recruitment strategy, country, and study limitations. Eleven articles were published after 2001. Studies had populations with a mean age of 65 to 78, included descriptive and analytical designs, and were conducted in the United States, Australia, and Japan. Recruiting methods varied widely from paper fliers and personal e-mails to use of consumer marketing panels. Investigator-reported study limitations included the use of small convenience samples and limited generalizability. Online questionnaires are a feasible method of surveying older adults in some geographic regions and for some subsets of older adults, but limited Internet access constrains recruiting methods and often limits study generalizability. © 2014, Copyright the Authors Journal compilation © 2014, The American Geriatrics Society.
Kocman, Andreas; Fischer, Linda; Weber, Germain
2018-01-01
Obtaining employment is among the most important ambitions of people with intellectual disability. Progress towards comprehensive inclusive employment is hampered by numerous barriers. Limited research is available on these barriers and strategies to overcome them. A mixed method approach in a sample of 30 HR-managers was used to assess (i) differences in perceived barriers for employment of people with specific disabilities and mental disorders; (ii) barriers specific to employing people with intellectual disability; (iii) strategies to overcome these barriers. Employers perceive more barriers for hiring people with intellectual disability and mental disorders than for physical disabilities. Employment for this population is hampered by a perceived lack of skills and legal issues. Strategies perceived as beneficial are supplying information, changes in organizational strategies and legal changes. Employers' differentiated expectations and reservations towards hiring individuals with specific disabilities need to be taken into account to increase employment for people with intellectual disability. © 2017 John Wiley & Sons Ltd.
Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin
2017-08-15
Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an 18O-labeled reference sample created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility of protein identification results across samples. In the present study, a method combining the 18O-reference strategy with a quantification and identification decoupled strategy was investigated with proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy has greater accuracy and reliability than other previously used comparison methods based on transferring comparisons or label-free strategies. In the decoupled strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated to identify differentially expressed proteins, according to retention time and accurate mass. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave good reproducibility in protein identification across multiple samples, and allowed peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.
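The matching step of the decoupled strategy, pairing LC-MS quantification features with LC-MS/MS identifications by retention time and accurate mass, can be illustrated simply. The sketch below is an assumption-laden stand-in (tolerances, field names, and the first-match rule are not from the paper).

```python
def match_features(features, identifications, rt_tol=0.5, ppm_tol=10.0):
    """Pair quantification features with identifications by RT and accurate mass.

    features, identifications: lists of dicts with 'mz' (m/z) and 'rt' (minutes) keys.
    rt_tol: allowed retention-time difference; ppm_tol: allowed mass error in ppm.
    """
    matches = []
    for f in features:
        for ident in identifications:
            ppm = abs(f["mz"] - ident["mz"]) / ident["mz"] * 1e6
            if ppm <= ppm_tol and abs(f["rt"] - ident["rt"]) <= rt_tol:
                matches.append((f, ident))
                break  # keep the first acceptable identification per feature
    return matches
```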
Physiologically Relevant Changes in Serotonin Resolved by Fast Microdialysis
2013-01-01
Online microdialysis is a sampling and detection method that enables continuous interrogation of extracellular molecules in freely moving subjects under behaviorally relevant conditions. A majority of recent publications using brain microdialysis in rodents report sample collection times of 20–30 min. These long sampling times are due, in part, to limitations in the detection sensitivity of high performance liquid chromatography (HPLC). By optimizing separation and detection conditions, we decreased the retention time of serotonin to 2.5 min and the detection threshold to 0.8 fmol. Sampling times were consequently reduced from 20 to 3 min per sample for online detection of serotonin (and dopamine) in brain dialysates using a commercial HPLC system. We developed a strategy to collect and to analyze dialysate samples continuously from two animals in tandem using the same instrument. Improvements in temporal resolution enabled elucidation of rapid changes in extracellular serotonin levels associated with mild stress and circadian rhythms. These dynamics would be difficult or impossible to differentiate using conventional microdialysis sampling rates. PMID:23614776
Nanotechnology: moving from microarrays toward nanoarrays.
Chen, Hua; Li, Jun
2007-01-01
Microarrays are important tools for high-throughput analysis of biomolecules. The use of microarrays for parallel screening of nucleic acid and protein profiles has become an industry standard. A few limitations of microarrays are the requirement for relatively large sample volumes and long incubation times, as well as the limit of detection. In addition, traditional microarrays use bulky instrumentation for detection, and sample amplification and labeling are quite laborious, which increases analysis cost and delays results. These problems limit microarray techniques from point-of-care and field applications. One strategy for overcoming these problems is to develop nanoarrays, particularly electronics-based nanoarrays. With further miniaturization, higher sensitivity, and simplified sample preparation, nanoarrays could potentially be employed for biomolecular analysis in personal healthcare and monitoring of trace pathogens. In this chapter, the aim is to introduce the concept and advantages of nanotechnology and then describe current methods and protocols for novel nanoarrays in three aspects: (1) label-free nucleic acid analysis using nanoarrays, (2) nanoarrays for protein detection by conventional optical fluorescence microscopy as well as by novel label-free methods such as atomic force microscopy, and (3) nanoarrays for enzyme-based assays. These nanoarrays will have significant applications in drug discovery, medical diagnosis, genetic testing, environmental monitoring, and food safety inspection.
Marcellino, W L; Salih, D A; Njahira, M N; Ndiwa, N; Araba, A; El Hussein, A M; Seitzer, U; Ahmed, J S; Bishop, R P; Skilton, R A
2017-08-01
A cross-sectional survey was carried out in four counties of Jonglei State, South Sudan, between May and June 2012 to determine the distribution and northern limit of Theileria parva, the causative agent of East Coast fever in cattle, and its tick vector Rhipicephalus appendiculatus, as a prerequisite to the deployment of relevant control strategies. A total of 1636 ticks, 386 serum samples and 399 blood samples were collected from indigenous, apparently healthy, cattle of different age groups. Tick species were identified morphologically, and the identity of R. appendiculatus was confirmed by DNA barcoding. Overall, the T. parva infection rate in R. appendiculatus was 25% as shown by nested PCR. ELISA was used to assess antibodies to T. parva, and the overall seroprevalence was 22.8%. PCR of the blood samples showed 55 (13.8%) were positive for T. parva. This is the first molecular confirmation of T. parva DNA in areas north of Juba, beyond where it was previously known to be established. The northern limit of T. parva was determined as N°06.17.792, about 242 km north of Juba. Implications of this limit for the epidemiology and control of ECF are discussed. © 2016 Blackwell Verlag GmbH.
Estimation of effective temperatures in a quantum annealer: Towards deep learning applications
NASA Astrophysics Data System (ADS)
Realpe-Gómez, John; Benedetti, Marcello; Perdomo-Ortiz, Alejandro
Sampling is at the core of deep learning and more general machine learning applications; an increase in its efficiency would have a significant impact across several domains. Recently, quantum annealers have been proposed as a potential candidate to speed up these tasks, but several limitations still bar them from being used effectively. One of the main limitations, and the focus of this work, is that using the device's experimentally accessible temperature as a reference for sampling purposes leads to very poor correlation with the Boltzmann distribution it is programmed to sample from. Based on quantum dynamical arguments, one can expect that if the device indeed happens to be sampling from a Boltzmann-like distribution, it will correspond to one with an instance-dependent effective temperature. Unless this unknown temperature can be unveiled, it might not be possible to effectively use a quantum annealer for Boltzmann sampling processes. In this work, we propose a strategy to overcome this challenge with a simple effective-temperature estimation algorithm. We provide a systematic study assessing the impact of the effective temperatures in the quantum-assisted training of Boltzmann machines, which can serve as a building block for deep learning architectures. This work was supported by NASA Ames Research Center.
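The abstract does not reproduce the estimation algorithm itself, but the underlying idea can be illustrated with a toy calculation: if the drawn samples really followed a Boltzmann distribution at some unknown effective temperature, the dependence of sample frequency on energy reveals that temperature. The helper below is a simplified, assumption-laden sketch; it ignores the density of states and is not the instance-dependent estimator proposed in the work.

```python
import numpy as np

def effective_temperature(sample_energies, n_bins=20):
    """Toy estimate of T_eff from the energies of drawn samples."""
    counts, edges = np.histogram(sample_energies, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0
    # If p(E) were proportional to exp(-E / T_eff) (density of states ignored),
    # log(counts) would be roughly linear in E with slope -1 / T_eff.
    slope, _ = np.polyfit(centers[keep], np.log(counts[keep]), 1)
    return -1.0 / slope
```

An effective temperature estimated along these lines (or by the more careful estimator the authors describe) would then replace the device's programmed temperature when the samples are used for quantum-assisted training of Boltzmann machines.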
Development of a coping with stress scale for a non-western population of children and adolescents.
Gökler DanIşman, Ilgın; Yıldız, Nejla; Yiğit, İbrahim
2017-11-01
In the related literature numerous instruments have been developed to measure children and adolescents' coping with stress. Considering the cultural differences in individuals' choice for coping strategies, along with the limitations of the existing measures of coping for children and adolescents (e.g., being derived from coping measures developed for adults; unrepresentative samples with limited age range, etc.), the current study aimed to construct a self-report coping scale for a non-western population of children and adolescents. The study design included both qualitative and quantitative methodology. Three consecutive studies were conducted for the development and validation of the Children and Adolescents' Coping with Stress Scale (CACSS), a self-report measure assessing coping strategies of children and adolescents aged from 9 to 18 in response to self-identified stressors. Exploratory and confirmatory factor analyses resulted in a 61-item CACSS with 10 factors. The scale appears to have a clear factor structure; sufficient temporal stability; and good convergent, discriminant, and construct validity. By addressing limitations of existing coping scales, CACSS is believed to contribute to the literature as a developmentally appropriate and multidimensional tool.
NASA Technical Reports Server (NTRS)
Mittlefehldt, David W.; Yingst, R. Aileen; Schmidt, Mariek E.; Herkenhoff, Ken E.
2007-01-01
Examining the his-tory of a rock as the summed history of its constituent grains is a proven and powerful strategy that has been used on Earth to maximize the information that can be gleaned from limited samples. Grain size, sorting, roundness, and texture can be observed at the handlens scale, and may reveal clues to transport regime (e.g. fluvial, glacial, eolian) and transport distance. Diagenetic minerals may be of a form and textural context to allow identification, and to point to dominant diagenetic processes (e.g. evaporitic concentration, intermittent dissolution, early vs. late diagenetic emplacement). Handlens scale features of volcaniclastic particles may be diagnostic of primary vs recycled (by surface processes) grains and may provide information about eruptive patterns and processes. When the study site is truly remote, such as Mars, and when there are severe limitations on sample return or sample analysis with other methods, examination at the hand lens scale becomes critical both for extracting a maximum of information, and for best utilizing finite analytical capabilities.
Li, Sheng; Yao, Xinhua; Fu, Jianzhong
2014-07-16
Thermoelectric energy harvesting is emerging as a promising alternative energy source to drive wireless sensors in mechanical systems. Typically, the waste heat from spindle units in machine tools creates potential for thermoelectric generation. However, the problem of low and fluctuant ambient temperature differences in spindle units limits the application of thermoelectric generation to drive a wireless sensor. This study is devoted to presenting a transformer-based power management system and its associated control strategy to make the wireless sensor work stably at different speeds of the spindle. The charging/discharging time of capacitors is optimized through this energy-harvesting strategy. A rotating spindle platform is set up to test the performance of the power management system at different speeds. The experimental results show that a longer sampling cycle time will increase the stability of the wireless sensor. The experiments also prove that utilizing the optimal time can make the power management system work more effectively compared with other systems using the same sample cycle.
Donovan, Rory M.; Tapia, Jose-Juan; Sullivan, Devin P.; Faeder, James R.; Murphy, Robert F.; Dittrich, Markus; Zuckerman, Daniel M.
2016-01-01
The long-term goal of connecting scales in biological simulation can be facilitated by scale-agnostic methods. We demonstrate that the weighted ensemble (WE) strategy, initially developed for molecular simulations, applies effectively to spatially resolved cell-scale simulations. The WE approach runs an ensemble of parallel trajectories with assigned weights and uses a statistical resampling strategy of replicating and pruning trajectories to focus computational effort on difficult-to-sample regions. The method can also generate unbiased estimates of non-equilibrium and equilibrium observables, sometimes with significantly less aggregate computing time than would be possible using standard parallelization. Here, we use WE to orchestrate particle-based kinetic Monte Carlo simulations, which include spatial geometry (e.g., of organelles, plasma membrane) and biochemical interactions among mobile molecular species. We study a series of models exhibiting spatial, temporal and biochemical complexity and show that although WE has important limitations, it can achieve performance significantly exceeding standard parallel simulation—by orders of magnitude for some observables. PMID:26845334
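As a rough illustration of the resampling idea, replicating and pruning weighted trajectories so effort concentrates on hard-to-sample regions, the sketch below assumes a one-dimensional progress coordinate and fixed bins. It is a simplified stand-in, not the weighted-ensemble implementation used in the study.

```python
import numpy as np

def we_resample(positions, weights, bin_edges, walkers_per_bin=4, rng=None):
    """One resampling step: each occupied bin ends with `walkers_per_bin` walkers,
    drawn proportionally to weight, while the bin's total weight is conserved."""
    rng = rng or np.random.default_rng()
    new_pos, new_w = [], []
    bins = np.digitize(positions, bin_edges)
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        w = weights[idx]
        total = w.sum()
        # "Split"/"merge" in one move: resample walkers within the bin by weight,
        # then give each survivor an equal share of the bin's total weight.
        chosen = rng.choice(idx, size=walkers_per_bin, replace=True, p=w / total)
        new_pos.extend(positions[chosen])
        new_w.extend([total / walkers_per_bin] * walkers_per_bin)
    return np.array(new_pos), np.array(new_w)
```

Because the total weight in each bin is preserved, averages over the ensemble remain unbiased, which is the property that lets WE report non-equilibrium and equilibrium observables.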
Li, Sheng; Yao, Xinhua; Fu, Jianzhong
2014-01-01
Thermoelectric energy harvesting is emerging as a promising alternative energy source to drive wireless sensors in mechanical systems. Typically, the waste heat from spindle units in machine tools creates potential for thermoelectric generation. However, the problem of low and fluctuant ambient temperature differences in spindle units limits the application of thermoelectric generation to drive a wireless sensor. This study is devoted to presenting a transformer-based power management system and its associated control strategy to make the wireless sensor work stably at different speeds of the spindle. The charging/discharging time of capacitors is optimized through this energy-harvesting strategy. A rotating spindle platform is set up to test the performance of the power management system at different speeds. The experimental results show that a longer sampling cycle time will increase the stability of the wireless sensor. The experiments also prove that utilizing the optimal time can make the power management system work more effectively compared with other systems using the same sample cycle. PMID:25033189
Mesoporous structured MIPs@CDs fluorescence sensor for highly sensitive detection of TNT.
Xu, Shoufang; Lu, Hongzhi
2016-11-15
A facile strategy was developed to prepare mesoporous structured molecularly imprinted polymers capped carbon dots (M-MIPs@CDs) fluorescence sensor for highly sensitive and selective determination of TNT. The strategy using amino-CDs directly as "functional monomer" for imprinting simplify the imprinting process and provide well recognition sites accessibility. The as-prepared M-MIPs@CDs sensor, using periodic mesoporous silica as imprinting matrix, and amino-CDs directly as "functional monomer", exhibited excellent selectivity and sensitivity toward TNT with detection limit of 17nM. The recycling process was sustainable for 10 times without obvious efficiency decrease. The feasibility of the developed method in real samples was successfully evaluated through the analysis of TNT in soil and water samples with satisfactory recoveries of 88.6-95.7%. The method proposed in this work was proved to be a convenient and practical way to prepare high sensitive and selective fluorescence MIPs@CDs sensors. Copyright © 2016 Elsevier B.V. All rights reserved.
Picheny, Victor; Trépos, Ronan; Casadebaig, Pierre
2017-01-01
Accounting for the interannual climatic variations is a well-known issue for simulation-based studies of environmental systems. It often requires intensive sampling (e.g., averaging the simulation outputs over many climatic series), which hinders many sequential processes, in particular optimization algorithms. We propose here an approach based on a subset selection in a large basis of climatic series, using an ad-hoc similarity function and clustering. A non-parametric reconstruction technique is introduced to estimate accurately the distribution of the output of interest using only the subset sampling. The proposed strategy is non-intrusive and generic (i.e. transposable to most models with climatic data inputs), and can be combined to most “off-the-shelf” optimization solvers. We apply our approach to sunflower ideotype design using the crop model SUNFLO. The underlying optimization problem is formulated as a multi-objective one to account for risk-aversion. Our approach achieves good performances even for limited computational budgets, outperforming significantly standard strategies. PMID:28542198
Choi, Jane Ru; Yong, Kar Wey; Tang, Ruihua; Gong, Yan; Wen, Ting; Yang, Hui; Li, Ang; Chia, Yook Chin; Pingguan-Murphy, Belinda; Xu, Feng
2017-01-01
Paper-based devices have been broadly used for the point-of-care detection of dengue viral nucleic acids due to their simplicity, cost-effectiveness, and readily observable colorimetric readout. However, their moderate sensitivity and functionality have limited their applications. Despite the above-mentioned advantages, paper substrates are lacking in their ability to control fluid flow, in contrast to the flow control enabled by polymer substrates (e.g., agarose) with readily tunable pore size and porosity. Herein, taking the benefits from both materials, the authors propose a strategy to create a hybrid substrate by incorporating agarose into the test strip to achieve flow control for optimal biomolecule interactions. As compared to the unmodified test strip, this strategy allows sensitive detection of targets with an approximately tenfold signal improvement. Additionally, the authors showcase the potential of functionality improvement by creating multiple test zones for semi-quantification of targets, suggesting that the number of visible test zones is directly proportional to the target concentration. The authors further demonstrate the potential of their proposed strategy for clinical assessment by applying it to their prototype sample-to-result test strip to sensitively and semi-quantitatively detect dengue viral RNA from the clinical blood samples. This proposed strategy holds significant promise for detecting various targets for diverse future applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ceramic components manufacturing by selective laser sintering
NASA Astrophysics Data System (ADS)
Bertrand, Ph.; Bayle, F.; Combe, C.; Goeuriot, P.; Smurov, I.
2007-12-01
In the present paper, technology of selective laser sintering/melting is applied to manufacture net shaped objects from pure yttria-zirconia powders. Experiments are carried out on Phenix Systems PM100 machine with 50 W fibre laser. Powder is spread by a roller over the surface of 100 mm diameter alumina cylinder. Design of experiments is applied to identify influent process parameters (powder characteristics, powder layering and laser manufacturing strategy) to obtain high-quality ceramic components (density and micro-structure). The influence of the yttria-zirconia particle size and morphology onto powder layering process is analysed. The influence of the powder layer thickness on laser sintering/melting is studied for different laser beam velocity V ( V = 1250-2000 mm/s), defocalisation (-6 to 12 mm), distance between two neighbour melted lines (so-called "vectors") (20-40 μm), vector length and temperature in the furnace. The powder bed density before laser sintering/melting also has significant influence on the manufactured samples density. Different manufacturing strategies are applied and compared: (a) different laser beam scanning paths to fill the sliced surfaces of the manufactured object, (b) variation of vector length (c) different strategies of powder layering, (d) temperature in the furnace and (e) post heat treatment in conventional furnace. Performance and limitations of different strategies are analysed applying the following criteria: geometrical accuracy of the manufactured samples, porosity. The process stability is proved by fabrication of 1 cm 3 volume cube.
Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR.
Mobli, Mehdi; Hoch, Jeffrey C
2014-11-01
Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. Copyright © 2014 Elsevier B.V. All rights reserved.
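Many of the nonuniform sampling schedules surveyed in this review bias measurements toward early evolution times, where the decaying signal is strongest. The snippet below is one generic example of such a schedule (exponentially biased random selection of indirect-dimension increments), offered as an illustration rather than any specific published scheme.

```python
import numpy as np

def exponentially_biased_schedule(n_total, n_keep, decay=2.0, rng=None):
    """Pick n_keep of n_total indirect-dimension increments, favoring early times."""
    rng = rng or np.random.default_rng(0)
    t = np.arange(n_total)
    p = np.exp(-decay * t / n_total)   # sampling probability decays with increment index
    p /= p.sum()
    return np.sort(rng.choice(t, size=n_keep, replace=False, p=p))
```

A schedule like this reduces measurement time by the ratio n_keep/n_total; the resulting nonuniform data then require one of the non-Fourier reconstruction methods discussed in the second part of the review.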
Palinkas, Lawrence A.; Horwitz, Sarah M.; Green, Carla A.; Wisdom, Jennifer P.; Duan, Naihua; Hoagwood, Kimberly
2013-01-01
Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research. PMID:24193818
Strategies and limitations for fluorescence detection of XAFS at high flux beamlines
Heald, Steve M.
2015-02-17
The issue of detecting the XAFS signal from dilute samples is discussed in detail with the aim of making best use of high flux beamlines that provide up to 10^13 photons s^-1. Various detection methods are compared, including filters with slits, solid state detectors, crystal analyzers and combinations of these. These comparisons rely on simulations that use experimentally determined parameters. It is found that inelastic scattering places a fundamental limit on detection, and that it is important to take proper account of the polarization dependence of the signals. The combination of a filter–slit system with a solid state detector is a promising approach. With an optimized system good performance can be obtained even if the total count rate is limited to 10^7 Hz. Detection schemes with better energy resolution can help at the largest dilutions if their collection efficiency and count rate limits can be improved.
Strategies and limitations for fluorescence detection of XAFS at high flux beamlines
Heald, Steve M.
2015-01-01
The issue of detecting the XAFS signal from dilute samples is discussed in detail with the aim of making best use of high flux beamlines that provide up to 10^13 photons s^-1. Various detection methods are compared, including filters with slits, solid state detectors, crystal analyzers and combinations of these. These comparisons rely on simulations that use experimentally determined parameters. It is found that inelastic scattering places a fundamental limit on detection, and that it is important to take proper account of the polarization dependence of the signals. The combination of a filter–slit system with a solid state detector is a promising approach. With an optimized system good performance can be obtained even if the total count rate is limited to 10^7 Hz. Detection schemes with better energy resolution can help at the largest dilutions if their collection efficiency and count rate limits can be improved. PMID:25723945
Oliveri, Paolo
2017-08-22
Qualitative data modelling is a fundamental branch of pattern recognition, with many applications in analytical chemistry, and embraces two main families: discriminant and class-modelling methods. The first strategy is appropriate when at least two classes are meaningfully defined in the problem under study, while the second is the right choice when the focus is on a single class. For this reason, class-modelling methods are also referred to as one-class classifiers. Although, in the food analytical field, most issues would be properly addressed by class-modelling strategies, the use of such techniques is rather limited and, in many cases, discriminant methods are forced into service for one-class problems, introducing a bias in the outcomes. Key aspects related to the development, optimisation and validation of suitable class models for the characterisation of food products are critically analysed and discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
Shrivastava, Sajal; Sohn, Il-Yung; Son, Young-Min; Lee, Won-Il; Lee, Nae-Eung
2015-12-14
Although real-time label-free fluorescent aptasensors based on nanomaterials are increasingly recognized as a useful strategy for the detection of target biomolecules with high fidelity, the lack of an imaging-based quantitative measurement platform limits their implementation with biological samples. Here we introduce an ensemble strategy for a real-time label-free fluorescent graphene (Gr) aptasensor platform. This platform employs aptamer length-dependent tunability, thus enabling the reagentless quantitative detection of biomolecules through computational processing coupled with real-time fluorescence imaging data. We demonstrate that this strategy effectively delivers dose-dependent quantitative readouts of adenosine triphosphate (ATP) concentration on chemical vapor deposited (CVD) Gr and reduced graphene oxide (rGO) surfaces, thereby providing cytotoxicity assessment. Compared with conventional fluorescence spectrometry methods, our highly efficient, universally applicable, and rational approach will facilitate broader implementation of imaging-based biosensing platforms for the quantitative evaluation of a range of target molecules.
Zhou, Li; Wang, Kui; Li, Qifu; Nice, Edouard C; Zhang, Haiyuan; Huang, Canhua
2016-01-01
Cancer is a common disease that is a leading cause of death worldwide. Currently, early detection and novel therapeutic strategies are urgently needed for more effective management of cancer. Importantly, protein profiling using clinical proteomic strategies, with spectacular sensitivity and precision, offer excellent promise for the identification of potential biomarkers that would direct the development of targeted therapeutic anticancer drugs for precision medicine. In particular, clinical sample sources, including tumor tissues and body fluids (blood, feces, urine and saliva), have been widely investigated using modern high-throughput mass spectrometry-based proteomic approaches combined with bioinformatic analysis, to pursue the possibilities of precision medicine for targeted cancer therapy. Discussed in this review are the current advantages and limitations of clinical proteomics, the available strategies of clinical proteomics for the management of precision medicine, as well as the challenges and future perspectives of clinical proteomics-driven precision medicine for targeted cancer therapy.
Flynn, Fran; Evanish, Julie Q; Fernald, Josephine M; Hutchinson, Dawn E; Lefaiver, Cheryl
2016-08-01
Because of the high frequency of interruptions during medication administration, the effectiveness of strategies to limit interruptions during medication administration has been evaluated in numerous quality improvement initiatives in an effort to reduce medication administration errors. To evaluate the effectiveness of evidence-based strategies to limit interruptions during scheduled, peak medication administration times in 3 progressive cardiac care units (PCCUs). A secondary aim of the project was to evaluate the impact of limiting interruptions on medication errors. The percentages of interruptions and medication errors before and after implementation of evidence-based strategies to limit interruptions were measured by using direct observations of nurses on 2 PCCUs. Nurses in a third PCCU served as a comparison group. Interruptions (P < .001) and medication errors (P = .02) decreased significantly in 1 PCCU after implementation of evidence-based strategies to limit interruptions. Avoidable interruptions decreased 83% in PCCU1 and 53% in PCCU2 after implementation of the evidence-based strategies. Implementation of evidence-based strategies to limit interruptions in PCCUs decreases avoidable interruptions and promotes patient safety. ©2016 American Association of Critical-Care Nurses.
Hurtado-Sánchez, María Del Carmen; Lozano, Valeria A; Rodríguez-Cáceres, María Isabel; Durán-Merás, Isabel; Escandar, Graciela M
2015-03-01
An eco-friendly strategy for the simultaneous quantification of three emerging pharmaceutical contaminants is presented. The proposed analytical method, which involves photochemically induced fluorescence matrix data combined with second-order chemometric analysis, was used for the determination of carbamazepine, ofloxacin and piroxicam in water samples of different complexity without the need of chromatographic separation. Excitation-emission photoinduced fluorescence matrices were obtained after UV irradiation, and processed with second-order algorithms. Only one of the tested algorithms was able to overcome the strong spectral overlapping among the studied pollutants and allowed their successful quantitation in very interferent media. The method sensitivity in superficial and underground water samples was enhanced by a simple solid-phase extraction with C18 membranes, which was successful for the extraction/preconcentration of the pollutants at trace levels. Detection limits in preconcentrated (1:125) real water samples ranged from 0.04 to 0.3 ng mL(-1). Relative prediction errors around 10% were achieved. The proposed strategy is significantly simpler and greener than liquid chromatography-mass spectrometry methods, without compromising the analytical quality of the results. Copyright © 2014 Elsevier B.V. All rights reserved.
Integrated Sampling Strategy (ISS) Guide
Robert E. Keane; Duncan C. Lutes
2006-01-01
What is an Integrated Sampling Strategy? Simply put, it is the strategy that guides how plots are put on the landscape. FIREMON's Integrated Sampling Strategy assists fire managers as they design their fire monitoring project by answering questions such as: What statistical approach is appropriate for my sample design? How many plots can I afford? How many plots do I...
Examining the validity of self-reports on scales measuring students' strategic processing.
Samuelstuen, Marit S; Bråten, Ivar
2007-06-01
Self-report inventories trying to measure strategic processing at a global level have been much used in both basic and applied research. However, the validity of global strategy scores is open to question because such inventories assess strategy perceptions outside the context of specific task performance. The primary aim was to examine the criterion-related and construct validity of the global strategy data obtained with the Cross-Curricular Competencies (CCC) scale. Additionally, we wanted to compare the validity of these data with the validity of data obtained with a task-specific self-report inventory focusing on the same types of strategies. The sample included 269 10th-grade students from 12 different junior high schools. Global strategy use as assessed with the CCC was compared with task-specific strategy use reported in three different reading situations. Moreover, relationships between scores on the CCC and scores on measures of text comprehension were examined and compared with relationships between scores on the task-specific strategy measure and the same comprehension measures. The comparison between the CCC strategy scores and the task-specific strategy scores suggested only modest criterion-related validity for the data obtained with the global strategy inventory. The CCC strategy scores were also not related to the text comprehension measures, indicating poor construct validity. In contrast, the task-specific strategy scores were positively related to the comprehension measures, indicating good construct validity. Attempts to measure strategic processing at a global level seem to have limited validity and utility.
Sheerin, Christina M.; Kozak, Andrea T.; Hale, Andrew C.; BCBA; Ramesh, Bangalore K.; Spates, C. Richard
2016-01-01
Background: The drug D-Cycloserine (DCS) has been used as an adjunct to increase the pace of symptom reduction during exposure therapy for anxiety disorders. This procedure has met with mixed results and many questions remain. Aims: The findings from two investigations are reported here, highlighting important domains for furthering our understanding of DCS effects. Method: Study 1 (n = 16) treated social anxiety among a sample of emerging adults, and in addition to self-report utilized a behavioral measure of symptom improvement to evaluate outcomes. Study 2 (n = 16), utilizing a similar design, introduced an algorithm-based post-session administration strategy following sessions where anxiety reductions were evident. Both investigations were double-blind, placebo-controlled, randomized trials with participants diagnosed with social anxiety. Treatment was an exposure-based CBT protocol adopted in other investigations that tested DCS. Results: Findings of Study 1 yielded an interaction effect in favor of DCS for self-reported distress ratings (p=.02) and on a behavioral measure of anxiety (p=.01). Findings from Study 2 revealed a significant effect for self-reported subjective distress ratings (p=.002). Conclusions: Although the limitations of small sample size constrain generalization and limit power, the results illustrate some beneficial effects of DCS within the context of exposure-based intervention for social anxiety, and are discussed in the context of statistical vs. clinical significance and the DCS literature as a whole. Present findings highlight the potential usefulness of a post-session administration strategy and the behavioral measure for future efforts, with an eye towards preventing bias through more nuanced and better powered studies. PMID:27990477
NASA Astrophysics Data System (ADS)
Reilly, T. J.; Focazio, M. J.; Murdoch, P. S.; Benzel, W. M.; Fisher, S. C.; Griffin, D. W.; Iwanowicz, L. R.; Jones, D. K.; Loftin, K. A.
2014-12-01
Enhanced dispersion and concentration of contaminants such as trace metals and organic pollutants through storm-induced disturbances and sea level rise (SLR) are major factors that could adversely impact the health and resilience of communities and ecosystems in coming years. As part of the response to Hurricane Sandy, the U.S. Geological Survey collected data on the effects of contaminant source disturbance and dispersion. A major limitation of conducting pre- and post-Sandy comparisons was the lack of baseline data in locations proximal to potential contaminant sources and mitigation activities, sensitive ecosystems, and recreational facilities where human and ecological exposures are probable. To address this limitation, a Sediment-bound Contaminant Resiliency and Response (SCoRR) strategy with two operational modes, Resiliency (baseline) and Response (event-based), has been designed by leveraging existing interagency networks and resources. In Resiliency Mode, sites will be identified and sampled using standardized procedures prioritized to develop baseline data and to define sediment-quality based environmental health metrics. In Response Mode, a subset of sites within the network will be evaluated to ensure that adequate pre-event data exist at priority locations. If deficient, pre-event samples will be collected from priority locations. Crews will be deployed post-event to resample these locations allowing direct evaluation of impacts, as well as redefining baseline conditions for these areas. A tiered analytical and data integration strategy has been developed that will identify vulnerable human and environmental receptors, the sediment-bound contaminants present, and the biological activity and potential effects of exposure to characterized sediments. Communication mechanisms are in development to make resulting data available in a timely fashion and in a suitable format for informing event response and recovery efforts.
Vanhoof, Chris; Corthouts, Valère; Tirez, Kristof
2004-04-01
To determine the heavy metal content of soil samples at contaminated locations, a static and time-consuming procedure is used in most cases: soil samples are collected and analyzed in the laboratory at high quality and high analytical cost. Demand is growing, from government and consultants for a more dynamic approach, and from customers for analyses performed in the field with immediate feedback of the analytical results. Especially during the follow-up of remediation projects or when determining the sampling strategy, field analyses are advisable. For this purpose four types of ED-XRF systems, ranging from portable up to high-performance laboratory systems, were evaluated. The evaluation criteria are based on the performance characteristics of the ED-XRF systems, such as limit of detection, accuracy and measurement uncertainty on the one hand, and the influence of sample pretreatment on the obtained results on the other. The study proved that the field-portable system and the bench-top system, placed in a mobile van, can be applied as field techniques, yielding semi-quantitative analytical results. A limited homogenization of the analyzed sample significantly increases the representativeness of the soil sample. The ED-XRF systems can be differentiated by their limits of detection, which are a factor of 10 to 20 higher for the portable system. The accuracy of the results and the measurement uncertainty also improved using the bench-top system. Therefore, the selection criteria for the applicability of both field systems are based on the required detection level as well as the required accuracy of the results.
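The abstract compares systems by limit of detection without stating the formula used; a common convention, assumed here purely for illustration, is the 3-sigma definition based on replicate blank measurements and the slope of a linear calibration.

```python
import numpy as np

def limit_of_detection(blank_signals, calib_concs, calib_signals):
    """3-sigma LOD: three standard deviations of the blank divided by calibration slope."""
    slope, _ = np.polyfit(calib_concs, calib_signals, 1)   # linear calibration
    return 3.0 * np.std(blank_signals, ddof=1) / slope
```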
Roegner, Amber F.; Schirmer, Macarena Pírez; Puschner, Birgit; Brena, Beatriz; Gonzalez-Sapienza, Gualberto
2014-01-01
The freshwater cyanotoxins, microcystins (MCs), pose a global public health threat as potent hepatotoxins in cyanobacterial blooms; their persistence in drinking and recreational water has been associated with potential chronic effects in addition to acute intoxications. Rapid and accurate detection of the over 80 structural congeners is challenged by the rigorous and time-consuming clean-up required to overcome interference found in raw water samples. MALDI-MS has shown promise for rapid quantification of individual congeners in raw water samples at very low operating cost, but so far limited sensitivity and the lack of available and versatile internal standards (ISs) have limited its use. Two easily synthesized S-hydroxyethyl-Cys(7)-MC-LR and -RR ISs were used to generate linear standard curves on a reflectron MALDI instrument, reproducible across several orders of magnitude for MC-LR, -RR and -YR. Minimum quantification limits in direct water samples, with no clean-up or concentration step involved, were consistently below 7 μg/L, with recoveries from spiked samples between 80 and 119%. This method improves sensitivity 30-fold over previous reports of quantitative MALDI-TOF applications to MCs and provides a salient option for rapid-throughput analysis of multiple MC congeners in untreated raw surface water blooms, as a means to identify public health threats at the source and target intervention strategies within a watershed. As demonstrated by analysis of a set of samples from Uruguay, by utilizing the reaction of different MC congeners with alternate sulfhydryl compounds, the m/z of the IS can be customized to avoid overlap with interfering compounds in local surface water samples. PMID:24388801
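Quantification against an internal standard of the kind described here typically reduces to a linear fit of analyte-to-IS intensity ratios against spiked concentrations. The sketch below shows that calculation under the assumption of a linear response; the function names are hypothetical and not taken from the study.

```python
import numpy as np

def fit_standard_curve(concs, analyte_intensity, is_intensity):
    """Fit ratio = slope * conc + intercept from spiked standards."""
    ratios = analyte_intensity / is_intensity
    slope, intercept = np.polyfit(concs, ratios, 1)
    return slope, intercept

def quantify(analyte_intensity, is_intensity, slope, intercept):
    """Back-calculate the concentration of an unknown from its intensity ratio."""
    return (analyte_intensity / is_intensity - intercept) / slope
```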
Weightman, R M; Huckle, A J; Roques, S E; Ginsburg, D; Dyer, C J
2012-01-01
Wild rocket (Diplotaxis tenuifolia) is a leafy vegetable known for its high tissue nitrate concentration (TNC) which can exceed the limits set in the relevant European legislation designed to protect human health. The aim of this work was to understand the factors influencing TNC and to develop best practice guidelines to growers. Commercial crops of field-grown wild rocket were studied over two seasons. In 2010, ten separate crops were sampled representing a range of soil types and time periods during the summer. Two fields sampled using a 'W'- or 'X'-shaped sampling pattern demonstrated that 10 incremental samples bulked to make 1 kg of fresh material could be used to provide an adequate sample for determination of TNC in the wild rocket crop, as is the case for other leafy vegetables. Of eight commercial crops sampled in 2010 with an average nitrogen (N) fertiliser application of 104 kg N ha(-1), two exceeded the limit of 6000 mg NO3(-) kg(-1) set in the legislation. In 2011, six N response experiments were carried out, and only two sites showed a significant yield response to N fertiliser. The reason for the lack of response at the other sites was principally due to high levels of soil mineral N prior to drilling, meaning the crops' requirement for N was satisfied without additional fertiliser N. In the experimental situation at an N fertiliser application rate of 120 kg N ha(-1), 50% of crops would have exceeded the 6000 mg NO3(-) kg(-1) limit. In both seasons, low radiation levels in the 5 days prior to harvest were shown to increase TNC, although the relationship was also influenced by N supply. Strategies for optimising N nutrition of field-grown wild rocket are discussed.
Connor, Thomas H; Smith, Jerome P
2016-09-01
At the present time, the method of choice to determine surface contamination of the workplace with antineoplastic and other hazardous drugs is surface wipe sampling and subsequent sample analysis with a variety of analytical techniques. The purpose of this article is to review current methodology for determining the level of surface contamination with hazardous drugs in healthcare settings, to discuss recent advances in this area, and to provide some guidance for conducting surface wipe sampling and sample analysis for these drugs in healthcare settings. Published studies on the use of wipe sampling to measure hazardous drugs on surfaces in healthcare settings were reviewed. These studies include the use of well-documented chromatographic techniques for sample analysis in addition to newly evolving technology that provides rapid analysis of specific antineoplastic drugs. Methodology for the analysis of surface wipe samples for hazardous drugs is reviewed, including the purposes, technical factors, sampling strategy, materials required, and limitations. The use of lateral flow immunoassay (LFIA) and fluorescence covalent microbead immunosorbent assay (FCMIA) for surface wipe sample evaluation is also discussed. Current recommendations are that all healthcare settings where antineoplastic and other hazardous drugs are handled include surface wipe sampling as part of a comprehensive hazardous drug safe-handling program. Surface wipe sampling may be used as a method to characterize potential occupational dermal exposure risk and to evaluate the effectiveness of implemented controls and the overall safety program. New technology, although currently limited in scope, may make wipe sampling for hazardous drugs more routine, less costly, and faster to report than the classical analytical techniques now in use.
Kelly, Heath; Riddell, Michaela A; Gidding, Heather F; Nolan, Terry; Gilbert, Gwendolyn L
2002-08-19
We compared estimates of the age-specific population immunity to measles, mumps, rubella, hepatitis B and varicella zoster viruses in Victorian school children obtained by a national sero-survey, using a convenience sample of residual sera from diagnostic laboratories throughout Australia, with those from a three-stage random cluster survey. When grouped according to school age (primary or secondary school) there was no significant difference in the estimates of immunity to measles, mumps, hepatitis B or varicella. Compared with the convenience sample, the random cluster survey estimated higher immunity to rubella in samples from both primary (98.7% versus 93.6%, P = 0.002) and secondary school students (98.4% versus 93.2%, P = 0.03). Despite some limitations, this study suggests that the collection of a convenience sample of sera from diagnostic laboratories is an appropriate sampling strategy to provide population immunity data that will inform Australia's current and future immunisation policies. Copyright 2002 Elsevier Science Ltd.
Desmopressin to Prevent Rapid Sodium Correction in Severe Hyponatremia: A Systematic Review.
MacMillan, Thomas E; Tang, Terence; Cavalcanti, Rodrigo B
2015-12-01
Hyponatremia is common among inpatients and is associated with severe adverse outcomes such as osmotic demyelination syndrome. Current guidelines recommend serum sodium concentration correction targets of no more than 8 mEq/L per day in patients at high risk of osmotic demyelination syndrome. Desmopressin is recommended to control high rates of serum sodium concentration correction in severe hyponatremia. However, recommendations are based on limited data. The objective of this study is to review current strategies for DDAVP use in severe hyponatremia. Systematic literature search of 4 databases of peer-reviewed studies was performed and study quality was appraised. The literature search identified 17 observational studies with 80 patients. We found 3 strategies for desmopressin administration in hyponatremia: 1) proactive, where desmopressin is administered early based on initial serum sodium concentration; 2) reactive, where desmopressin is administered based on changes in serum sodium concentration or urine output; 3) rescue, where desmopressin is administered after serum sodium correction targets are exceeded or when osmotic demyelination appears imminent. A proactive strategy of desmopressin administration with hypertonic saline was associated with lower incidence of exceeding serum sodium concentration correction targets, although this evidence is derived from a small case series. Three distinct strategies for desmopressin administration are described in the literature. Limitations in study design and sample size prevent definitive conclusions about the optimal strategy for desmopressin administration to correct hyponatremia. There is a pressing need for better quality research to guide clinicians in managing severe hyponatremia. Copyright © 2015 Elsevier Inc. All rights reserved.
Shen, You-xin; Liu, Wei-li; Li, Yu-hui; Guan, Hui-lin
2014-01-01
A large number of small-sized samples invariably shows that woody species are absent from forest soil seed banks, leading to a large discrepancy with the seedling bank on the forest floor. We ask: 1) Does this conventional sampling strategy limit the detection of seeds of woody species? 2) Are large sample areas and sample sizes needed for higher recovery of seeds of woody species? We collected 100 samples that were 10 cm (length) × 10 cm (width) × 10 cm (depth), referred to as larger number of small-sized samples (LNSS), in a 1 ha forest plot and placed them to germinate in a greenhouse, and collected 30 samples that were 1 m × 1 m × 10 cm, referred to as small number of large-sized samples (SNLS), and placed them (10 each) in a nearby secondary forest, shrub land and grassland. Only 15.7% of woody plant species of the forest stand were detected by the 100 LNSS, contrasting with 22.9%, 37.3% and 20.5% of woody plant species being detected by SNLS in the secondary forest, shrub land and grassland, respectively. The increase in the number of species vs. sampled area confirmed power-law relationships for the forest stand, the LNSS and the SNLS at all three recipient sites. Our results, although based on one forest, indicate that the conventional LNSS strategy did not yield a high percentage of detection for woody species, but the SNLS strategy yielded a higher percentage of detection for woody species in the seed bank if samples were exposed to a better field germination environment. A 4 m2 minimum sample area derived from power equations is larger than the sampled area in most studies in the literature. An increased sample size also is needed to obtain an increased sample area if the number of samples is to remain relatively low.
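The minimum-area figure above follows from the species-area power law S = c·A^z: once c and z are fitted to cumulative species counts, the curve can be inverted to find the area needed to recover a target richness. The sketch below illustrates that calculation with made-up numbers; the data points, the 30-species target, and the fitted values are assumptions, not figures from the study.

```python
import numpy as np

# Hypothetical cumulative species-area data (area in m^2, species detected);
# these values are illustrative, not taken from the study.
area = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
species = np.array([3, 5, 8, 12, 17, 23])

# Fit the power-law species-area relationship S = c * A**z in log-log space.
z, log_c = np.polyfit(np.log(area), np.log(species), 1)
c = np.exp(log_c)

# Invert the fitted curve to estimate the sample area needed to recover a
# target richness (the 30-species target is also an assumption).
target_species = 30
min_area = (target_species / c) ** (1.0 / z)
print(f"S = {c:.2f} * A^{z:.2f}; estimated minimum sample area ~ {min_area:.1f} m^2")
```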
Gonçalves, Janaina R; Nappo, Solange A
2015-07-25
In Brazil, crack cocaine use remains a healthcare challenge due to the rapid onset of its pleasurable effects, its ability to induce craving and addiction, and the fact that it is easily accessible. Delayed action on the part of the Brazilian Government in addressing the drug problem has led users to develop their own strategies for surviving the effects of crack cocaine use, particularly the drug craving and psychosis. In this context, users have sought the benefits of combining crack cocaine with marijuana. Our aim was to identify the reasons why users combine crack cocaine with marijuana and the health implications of doing so. The present study is a qualitative study, using in-depth interviews and criteria-based sampling, following 27 crack cocaine users who combined its use with marijuana. Participants were recruited using the snowball sampling technique, and the point of theoretical saturation was used to define the sample size. Data were analyzed using the content analysis technique. The interviewees reported that the combination of crack cocaine use with marijuana provided "protection" (reduced undesirable effects, improved sleep and appetite, reduced craving for crack cocaine, and allowed the patients to recover some quality of life). Combined use of cannabis as a strategy to reduce the effects of crack exhibited several significant advantages, particularly an improved quality of life, which "protected" users from the violence typical of the crack culture. Crack use is considered a serious public health problem in Brazil, and there are few solution strategies. Within that limited context, the combination of cannabis and crack deserves more thorough clinical investigation to assess its potential use as a strategy to reduce the damage associated with crack use.
Thiopeptide Antibiotics: Retrospective and Recent Advances
Just-Baringo, Xavier; Albericio, Fernando; Álvarez, Mercedes
2014-01-01
Thiopeptides, or thiazolyl peptides, are a relatively new family of antibiotics that already comprises more than one hundred different entities. Although they are mainly isolated from soil bacteria, during the last decade new members have been isolated from marine samples. Far from being limited to their innate antibacterial activity, thiopeptides have been found to possess a wide range of biological properties, including anticancer, antiplasmodial, immunosuppressive, etc. In spite of their ribosomal origin, these highly posttranslationally processed peptides have posed a fascinating synthetic challenge, prompting the development of various methodologies and strategies. Despite their limited solubility, intensive investigations are bringing thiopeptide derivatives closer to the clinic, where they are likely to show their veritable therapeutic potential. PMID:24445304
Strategies for monitoring the bacteriological quality of water supply in distribution systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geldreich, E.E.; Goodrich, J.A.; Clark, R.M.
1989-01-01
Monitoring strategies for characterizing the bacteriological quality of water in the distribution system require a complete understanding of a variety of interrelated aspects that include treated water quality, water-supply retention in storage and infrastructure deterioration in the distribution system. A study of field data from several water-supply utilities was used to highlight some innovative interpretations of compliance monitoring data. Major perceptions include: The use of a 5% coliform frequency of occurrence limit highlights compliance significance in those situations where there are clusters of positive samples containing less than 4 coliforms per 100 mL. Unfortunately, this presence/absence concept does not provide any indication of the magnitude of a contamination event.
Optimal Keno Strategies and the Central Limit Theorem
ERIC Educational Resources Information Center
Johnson, Roger W.
2006-01-01
For the casino game Keno we determine optimal playing strategies. To decide such optimal strategies, both exact (hypergeometric) and approximate probability calculations are used. The approximate calculations are obtained via the Central Limit Theorem and simulation, and an important lesson about the application of the Central Limit Theorem is…
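The exact probabilities referred to above are hypergeometric: if a player marks k of 80 numbers and the casino draws 20, the chance of matching m of them is C(k,m)·C(80−k,20−m)/C(80,20). A minimal sketch, with an illustrative paytable that is an assumption rather than any casino's actual rules, shows how an expected return per wager can be computed and compared across strategies.

```python
from math import comb

def keno_match_prob(picks: int, matches: int, drawn: int = 20, total: int = 80) -> float:
    """Hypergeometric probability of hitting `matches` of the `picks` marked numbers
    when `drawn` of `total` numbers are drawn."""
    return comb(picks, matches) * comb(total - picks, drawn - matches) / comb(total, drawn)

# Expected return of a 4-spot ticket under an illustrative paytable
# (payout values are assumptions, not taken from the article).
paytable = {2: 1, 3: 5, 4: 120}  # matches -> payout per unit wagered
expected_return = sum(keno_match_prob(4, m) * pay for m, pay in paytable.items())
print(f"Expected return per unit bet: {expected_return:.3f}")
```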
Influence of various water quality sampling strategies on load estimates for small streams
Robertson, Dale M.; Roerish, Eric D.
1999-01-01
Extensive streamflow and water quality data from eight small streams were systematically subsampled to represent various water-quality sampling strategies. The subsampled data were then used to determine the accuracy and precision of annual load estimates generated by means of a regression approach (typically used for big rivers) and to determine the most effective sampling strategy for small streams. Estimation of annual loads by regression with daily average streamflow was imprecise regardless of the sampling strategy used; for the most effective strategy, median absolute errors were ∼30% relative to the load estimated with an integration method and all available data. The most effective sampling strategy depends on the length of the study. For 1-year studies, fixed-period monthly sampling supplemented by storm chasing was the most effective strategy. For studies of 2 or more years, fixed-period semimonthly sampling resulted in not only the least biased but also the most precise loads. Additional high-flow samples, typically collected to help define the relation between high streamflow and high loads, result in imprecise, overestimated annual loads if these samples are consistently collected early in high-flow events.
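A regression (rating-curve) load estimator of the kind evaluated above typically regresses log concentration on log streamflow over the sampled days and then applies the fitted curve to the full daily flow record. The sketch below illustrates that idea with hypothetical flows and concentrations; all numbers are assumptions, and the retransformation bias correction usually applied in practice (e.g., Duan smearing) is omitted for brevity.

```python
import numpy as np

# Hypothetical paired samples of daily mean streamflow (m^3/s) and
# concentration (mg/L) from the sampled days; values are illustrative only.
flow_sampled = np.array([0.5, 1.2, 3.4, 7.8, 15.0, 22.0])
conc_sampled = np.array([2.1, 3.0, 4.8, 7.2, 10.5, 12.0])

# Rating curve: regress log concentration on log flow.
b, log_a = np.polyfit(np.log(flow_sampled), np.log(conc_sampled), 1)

def estimate_annual_load_kg(daily_flow_m3s: np.ndarray) -> float:
    """Apply the fitted rating curve to a full year of daily mean flows."""
    conc_pred = np.exp(log_a) * daily_flow_m3s ** b            # mg/L == g/m^3
    daily_load_kg = conc_pred * daily_flow_m3s * 86400 / 1e3   # g/m^3 * m^3/s * s/day -> kg/day
    return float(daily_load_kg.sum())

# Example with a synthetic year of daily flows.
rng = np.random.default_rng(0)
print(f"Estimated annual load: {estimate_annual_load_kg(rng.lognormal(1.0, 0.8, 365)):.0f} kg")
```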
LeCuyer, Elizabeth A.; Swanson, Dena Phillips
2016-01-01
Research suggests that higher levels of authoritarian parenting exist in African American (AA) families than in European American (EA) families, and that authoritarian attitudes may be associated with more positive outcomes in AA families than EA families. However, less is known about authoritarian attitudes and children's development within AA families. This within-group study of 50 African American mothers and their 3-year-old children examined associations between maternal authoritarian attitudes, observed maternal limit-setting strategies, and children's self-regulation during a limit-setting interaction. The findings indicate that while AA families may hold more authoritarian attitudes than EA families, the direction of effect of authoritarian attitudes on children's outcomes appears to be the same in both ethnic groups. In this sample, when examining AA authoritarian attitudes relative to those of other AA mothers, less or lower authoritarian attitudes were associated with authoritative limit-setting behavior (firm limits within the context of overall warmth and responsiveness) and better children's self-regulation. PMID:28408794
[Use of blood lead data to evaluate and prevent childhood lead poisoning in Latin America].
Romieu, Isabelle
2003-01-01
Exposure to lead is a widespread and serious threat to the health of children in Latin America. Health officials should monitor sources of exposure and health outcomes to design, implement, and evaluate prevention and control activities. To evaluate the magnitude of lead as a public health problem, three key elements must be defined: 1) the potential sources of exposure, 2) the indicators to evaluate health effects and environmental exposure, and 3) the sampling methods for the population at risk. Several strategies can be used to select the study population depending on the study objectives, the time limitations, and the available resources. If the objective is to evaluate the magnitude and sources of the problem, the following sampling methods can be used: 1) population-based random sampling; 2) facility-based random sampling within hospitals, daycare centers, or schools; 3) target sampling of high risk groups; 4) convenience sampling of volunteers; and 5) case reporting (which can lead to the identification of populations at risk and sources of exposures). For all sampling methods, information gathering should include the use of a questionnaire to collect general information on the participants and on potential local sources of exposure, as well as the collection of biological samples. In interpreting data, one should consider the type of sampling used and the non-response rates, as well as factors that might influence blood lead measurements, such as age and seasonal variability. Blood lead measurements should be integrated in an overall strategy to prevent lead toxicity in children. The English version of this paper is available at: http://www.insp.mx/salud/index.html.
Micropollutants and chemical residues in organic and conventional meat.
Dervilly-Pinel, Gaud; Guérin, Thierry; Minvielle, Brice; Travel, Angélique; Normand, Jérôme; Bourin, Marie; Royer, Eric; Dubreil, Estelle; Mompelat, Sophie; Hommet, Frédéric; Nicolas, Marina; Hort, Vincent; Inthavong, Chanthadary; Saint-Hilaire, Mailie; Chafey, Claude; Parinet, Julien; Cariou, Ronan; Marchand, Philippe; Le Bizec, Bruno; Verdon, Eric; Engel, Erwan
2017-10-01
The chemical contamination levels of both conventional and organic meats were assessed. The objective was to provide occurrence data in a context of chronic exposure. Environmental contaminants (17 polychlorinated dibenzodioxins/dibenzofurans, 18 polychlorinated biphenyls (PCBs), 3 hexabromocyclododecane (HBCD) isomers, 6 mycotoxins, 6 inorganic compounds) together with chemical residues arising from production inputs (75 antimicrobials, 10 coccidiostats and 121 pesticides) have been selected as relevant compounds. A dedicated sampling strategy, representative of the French production allowed quantification of a large sample set (n=266) including both conventional (n=139) and organic (n=127) raw meat from three animal species (bovine, porcine, poultry). While contamination levels below regulatory limits were measured in all the samples, significant differences were observed between both species and types of farming. Several environmental contaminants (Dioxins, PCBs, HBCD, Zn, Cu, Cd, Pb, As) were measured at significantly higher levels in organic samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Co-victims of Homicide: A Systematic Review of the Literature.
Connolly, Jennifer; Gordon, Ronit
2015-10-01
This systematic literature review examines the effects of homicide on surviving family members, the "co-victims" or "survivors" of homicide. A content analysis was conducted on 40 articles identified through a search of the literature. The research samples were predominately located in the United States, but included two U.K. samples, one Jamaican sample, and one sample from Israel. All articles were written in English. Three themes were identified. Nineteen articles explored the psychological, academic, social, occupational, and familial effects of homicide. Thirteen articles considered survivors' grieving process and how it was altered by experiences with the criminal justice system as well as coping strategies used by survivors to deal with their grief. Eight articles explored treatment interventions available to help surviving family members deal with their grief. Key findings from the articles, limitations of the current research, and implications for future research, policy, and practice are included. © The Author(s) 2014.
Gaspard, Philippe G; Schwartzbrod, Janine
2003-03-01
The use of sludge in agriculture must be carried out according to many guidelines, especially regarding a precise knowledge of the pathogenic microorganisms it contains. The control of the produced sludge requires a sampling strategy that is representative of the contamination present in the sludge. Thus, we evaluated the distribution of helminth eggs in sludge to determine how to sample and at what frequency. Two plants were studied: first, we studied sludge undergoing biological treatment (anaerobic digestion, prolonged aeration); second, we evaluated the dehydration step (centrifugation and filter press). The helminth egg concentrations were measured over short periods (between 5 minutes and 7 hours) and over periods of more than 24 hours (7 to 28 days). The results showed much homogeneity over periods of less than 7 hours; thus it was advisable to take grab samples. An appropriate sample weight was 30 g dry matter, because this allowed an analysis in triplicate when testing treatment processes according to French standards (less than 3 viable eggs/10 g dry matter). Determination of the egg concentration in the plants over periods of more than 24 hours showed that the parasite flow was stable. In some cases, large variations were due to the treatment processes (storage or thickening, mixing of different sludges). These results have been confirmed with the study of 6 other plants during a one year period. Thus, the recommended sampling frequency can be limited to every 3 to 6 months, by adapting the sampling methods to the characteristics of the plant.
On some genetic consequences of social structure, mating systems, dispersal, and sampling
Parreira, Bárbara R.; Chikhi, Lounès
2015-01-01
Many species are spatially and socially organized, with complex social organizations and dispersal patterns that are increasingly documented. Social species typically consist of small age-structured units, where a limited number of individuals monopolize reproduction and exhibit complex mating strategies. Here, we model social groups as age-structured units and investigate the genetic consequences of social structure under distinct mating strategies commonly found in mammals. Our results show that sociality maximizes genotypic diversity, which contradicts the belief that social groups are necessarily subject to strong genetic drift and at high risk of inbreeding depression. Social structure generates an excess of genotypic diversity. This is commonly observed in ecological studies but rarely reported in population genetic studies that ignore social structure. This heterozygosity excess, when detected, is often interpreted as a consequence of inbreeding avoidance mechanisms, but we show that it can occur even in the absence of such mechanisms. Many seemingly contradictory results from ecology and population genetics can be reconciled by genetic models that include the complexities of social species. We find that such discrepancies can be explained by the intrinsic properties of social groups and by the sampling strategies of real populations. In particular, the number of social groups and the nature of the individuals that compose samples (e.g., nonreproductive and reproductive individuals) are key factors in generating outbreeding signatures. Sociality is an important component of population structure that needs to be revisited by ecologists and population geneticists alike. PMID:26080393
River runoff estimates based on remotely sensed surface velocities
NASA Astrophysics Data System (ADS)
Grünler, Steffen; Stammer, Detlef; Romeiser, Roland
2010-05-01
One promising technique for river runoff estimates from space is the retrieval of surface currents on the basis of synthetic aperture radar along-track interferometry (ATI). The German satellite TerraSAR-X, which was launched in June 2007, will permit ATI measurements in an experimental mode. Based on numerical simulations, we present findings of a research project in which the potential of satellite measurements of various parameters with different temporal and spatial sampling characteristics is evaluated. A sampling strategy for river runoff estimates is developed. We address the achievable accuracy and limitations of such estimates for different local flow conditions at a selected test site. High-resolution three-dimensional current fields in the Elbe river (Germany) from a numerical model are used as the reference data set and as input for simulations of a variety of possible measuring and data interpretation strategies to be evaluated. To address the problem of aliasing, we removed tidal signals from the sampled data. Discharge estimates on the basis of measured surface current fields and river widths from TerraSAR-X are successfully simulated. The differences in the resulting net discharge estimates are between 30% and 55% for a required continuous observation period of one year. We discuss the applicability of the measuring strategies to a number of major rivers. Further, we show results of runoff estimates from surface current fields retrieved from real TerraSAR-X ATI data (AS mode) for the Elbe river study area.
Air sampling workshop: October 24-25, 1978
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-06-01
A two-day workshop was held in October 1978 on air sampling strategies for the occupational environment. Strategies comprise the elements of implementing an air sampling program, including deciding on the extent of sampling, selecting appropriate types of measurement, placing sampling instruments properly, and interpreting sample results correctly. All of these elements are vital in the reliable assessment of occupational exposures, yet their coverage in the industrial hygiene literature is meager. Although keyed to a few introductory topics, the agenda was sufficiently informal to accommodate extemporaneous discussion on any subject related to sampling strategies. Questions raised during the workshop mirror the status of air sampling strategy as much as the factual information that was presented. It may be concluded from the discussion and questions that air sampling strategy is in an elementary state and urgently needs concerted attention from the industrial hygiene profession.
Cognitive niches: an ecological model of strategy selection.
Marewski, Julian N; Schooler, Lael J
2011-07-01
How do people select among different strategies to accomplish a given task? Across disciplines, the strategy selection problem represents a major challenge. We propose a quantitative model that predicts how selection emerges through the interplay among strategies, cognitive capacities, and the environment. This interplay carves out for each strategy a cognitive niche, that is, a limited number of situations in which the strategy can be applied, simplifying strategy selection. To illustrate our proposal, we consider selection in the context of 2 theories: the simple heuristics framework and the ACT-R (adaptive control of thought-rational) architecture of cognition. From the heuristics framework, we adopt the thesis that people make decisions by selecting from a repertoire of simple decision strategies that exploit regularities in the environment and draw on cognitive capacities, such as memory and time perception. ACT-R provides a quantitative theory of how these capacities adapt to the environment. In 14 simulations and 10 experiments, we consider the choice between strategies that operate on the accessibility of memories and those that depend on elaborate knowledge about the world. Based on Internet statistics, our model quantitatively predicts people's familiarity with and knowledge of real-world objects, the distributional characteristics of the associated speed of memory retrieval, and the cognitive niches of classic decision strategies, including those of the fluency, recognition, integration, lexicographic, and sequential-sampling heuristics. In doing so, the model specifies when people will be able to apply different strategies and how accurate, fast, and effortless people's decisions will be.
Yu, Hsing-Yi; Tang, Fu-In; Yeh, Ming-Chen; Kuo, Benjamin Ing-Tiau; Yu, Shu
2011-03-01
Pain is a common problem among the elderly. The entire scope of chronic pain relief strategies used by community elderly is still unclear. A limited number of studies have investigated this issue from diverse cultural perspectives. In the present study, we investigated the use and perceived effectiveness of pain relief strategies adopted by the elderly; gender differences between frequently used relief strategies were also explored. Two hundred nineteen participants living in Taiwan City, Taiwan, were recruited by a random sampling method and interviewed face to face. The prevalence of chronic pain among the elderly was 42.0% (n = 92). The elderly tended to adopt multiple strategies (mean ± SD = 9.08 ± 3.56; range = 2-18) to relieve their chronic pain. In three domains of pain relief strategies, conventional medicine was used more frequently than complementary and alternative medicine and psychologic approaches. Most pain relief strategies were ineffective. Among the 22 strategies used, no strategy was reported as "much improved" by a majority of users. The top five pain relief strategies used by men and women were the same. Elderly women tended to adopt more psychologic approaches, such as acceptance and ignoring, to relieve pain than did men. The findings suggest that nurses should pay more attention to the issue of chronic pain relief and provide the elderly with more effective pain relief strategies. Copyright © 2011 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-04-26
The testimony contains the comments of NIOSH regarding the proposed rule on occupational exposure to asbestos (1332214), tremolite (14567738), anthophyllite (17068789), and actinolite (77536664). The testimony included a memorandum comparing NIOSH Method 7400 to OSHA Method ID 160, an evaluation of glove bag containment in asbestos removal, the limit of detection that was ascertained from the sampling strategies for three sets of samples, effectiveness of periodic cleaning of surfaces with a HEPA vacuum, an update on cancer mortality among chrysotile (12001295) asbestos miners, and the use of hand tools to remove built up roofing.
NASA Astrophysics Data System (ADS)
Pool, Sandra; Viviroli, Daniel; Seibert, Jan
2017-11-01
Applications of runoff models usually rely on long and continuous runoff time series for model calibration. However, many catchments around the world are ungauged and estimating runoff for these catchments is challenging. One approach is to perform a few runoff measurements in a previously fully ungauged catchment and to constrain a runoff model by these measurements. In this study we investigated the value of such individual runoff measurements when taken at strategic points in time for applying a bucket-type runoff model (HBV) in ungauged catchments. Based on the assumption that a limited number of runoff measurements can be taken, we sought the optimal sampling strategy (i.e. when to measure the streamflow) to obtain the most informative data for constraining the runoff model. We used twenty gauged catchments across the eastern US, made the assumption that these catchments were ungauged, and applied different runoff sampling strategies. All tested strategies consisted of twelve runoff measurements within one year and ranged from simply using monthly flow maxima to a more complex selection of observation times. In each case the twelve runoff measurements were used to select 100 best parameter sets using a Monte Carlo calibration approach. Runoff simulations using these 'informed' parameter sets were then evaluated for an independent validation period in terms of the Nash-Sutcliffe efficiency of the hydrograph and the mean absolute relative error of the flow-duration curve. Model performance measures were normalized by relating them to an upper and a lower benchmark representing a well-informed and an uninformed model calibration. The hydrographs were best simulated with strategies including high runoff magnitudes as opposed to the flow-duration curves that were generally better estimated with strategies that captured low and mean flows. The choice of a sampling strategy covering the full range of runoff magnitudes enabled hydrograph and flow-duration curve simulations close to a well-informed model calibration. The differences among such strategies covering the full range of runoff magnitudes were small indicating that the exact choice of a strategy might be less crucial. Our study corroborates the information value of a small number of strategically selected runoff measurements for simulating runoff with a bucket-type runoff model in almost ungauged catchments.
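The constraining step described above amounts to scoring many Monte Carlo parameter sets against only the few sampled runoff values, using the Nash-Sutcliffe efficiency, and retaining the best ones for prediction. The sketch below captures that core idea with synthetic data and a trivial placeholder standing in for the HBV model; the data, the sampling dates, and the candidate ranges are all assumptions.

```python
import numpy as np

def nash_sutcliffe(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the mean of obs."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def select_informed_sets(simulate, param_sets, obs_runoff, sample_days, n_keep=100):
    """Score each candidate parameter set only on the sampled days and keep the
    n_keep best, mimicking calibration to a handful of strategic runoff measurements.
    `simulate(p)` stands in for a full runoff model (e.g. HBV) returning a daily series."""
    scores = np.array([nash_sutcliffe(obs_runoff[sample_days], simulate(p)[sample_days])
                       for p in param_sets])
    keep = np.argsort(scores)[-n_keep:]
    return param_sets[keep], scores[keep]

# Toy usage with synthetic runoff, twelve random sampling dates, and a trivial
# two-parameter "model"; everything here is illustrative.
rng = np.random.default_rng(42)
obs = np.abs(rng.normal(2.0, 1.0, 365))
days = np.sort(rng.choice(365, size=12, replace=False))
candidates = rng.uniform([0.5, -0.5], [1.5, 0.5], size=(5000, 2))
best, nse = select_informed_sets(lambda p: p[0] * obs + p[1], candidates, obs, days)
print(f"Best retained NSE: {nse.max():.3f}")
```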
Sampling strategies for estimating brook trout effective population size
Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher
2012-01-01
The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...
Systematic Review of the Use of Online Questionnaires among the Geriatric Population
Remillard, Meegan L.; Mazor, Kathleen M.; Cutrona, Sarah L.; Gurwitz, Jerry H.; Tjia, Jennifer
2014-01-01
Background/Objectives The use of internet-based questionnaires to collect information from older adults is not well established. This systematic literature review of studies using online questionnaires in older adult populations aims to 1. describe methodologic approaches to population targeting and sampling and 2. summarize limitations of Internet-based questionnaires in geriatric populations. Design, Setting, Participants We identified English language articles using search terms for geriatric, age 65 and over, Internet survey, online survey, Internet questionnaire, and online questionnaire in PubMed and EBSCO host between 1984 and July 2012. Inclusion criteria were: study population mean age ≥65 years old and use of an online questionnaire for research. Review of 336 abstracts yielded 14 articles for full review by 2 investigators; 11 articles met inclusion criteria. Measurements Articles were extracted for study design and setting, patient characteristics, recruitment strategy, country, and study limitations. Results Eleven (11) articles were published after 2001. Studies had populations with a mean age of 65 to 78 years, included descriptive and analytical designs, and were conducted in the United States, Australia, and Japan. Recruiting methods varied widely from paper fliers and personal emails to use of consumer marketing panels. Investigator-reported study limitations included the use of small convenience samples and limited generalizability. Conclusion Online questionnaires are a feasible method of surveying older adults in some geographic regions and for some subsets of older adults, but limited Internet access constrains recruiting methods and often limits study generalizability. PMID:24635138
Levis, Denise M; Westbrook, Kyresa
2013-01-01
Many health organizations and practitioners in the United States promote preconception health (PCH) to consumers. However, summaries and evaluations of PCH promotional activities are limited. We conducted a content analysis of PCH health education materials collected from local-, state-, national-, and federal-level partners by using an existing database of partners, outreach to maternal and child health organizations, and a snowball sampling technique. Not applicable. Not applicable. Thirty-two materials were included for analysis, based on inclusion/exclusion criteria. A codebook guided coding of materials' characteristics (type, authorship, language, cost), use of marketing and behavioral strategies to reach the target population (target audience, message framing, call to action), and inclusion of PCH subject matter (clinical-behavioral components). The self-assessment of PCH behaviors was the most common material (28%) to appear in the sample. Most materials broadly targeted women, and there was a near-equal distribution in targeting by pregnancy planning status segments (planners and nonplanners). "Practicing PCH benefits the baby's health" was the most common message frame used. Materials contained a wide range of clinical-behavioral components. Strategic targeting of subgroups of consumers is an important but overlooked strategy. More research is needed around PCH components, in terms of packaging and increasing motivation, which could guide use and placement of clinical-behavioral components within promotional materials.
Bipyrimidine Signatures as a Photoprotective Genome Strategy in G + C-rich Halophilic Archaea.
Jones, Daniel L; Baxter, Bonnie K
2016-09-02
Halophilic archaea experience high levels of ultraviolet (UV) light in their environments and demonstrate resistance to UV irradiation. DNA repair systems and carotenoids provide UV protection but do not account for the high resistance observed. Herein, we consider genomic signatures as an additional photoprotective strategy. The predominant forms of UV-induced DNA damage are cyclobutane pyrimidine dimers, most notoriously thymine dimers (T^Ts), which form at adjacent Ts. We tested whether the high G + C content seen in halophilic archaea serves a photoprotective function through limiting T nucleotides, and thus T^T lesions. However, this speculation overlooks the other bipyrimidine sequences, all of which are capable of forming photolesions to varying degrees. Therefore, we designed a program to determine the frequencies of the four bipyrimidine pairs (5' to 3': TT, TC, CT, and CC) within genomes of halophilic archaea and four other randomized sample groups for comparison. The outputs for each sampled genome were weighted by the intrinsic photoreactivities of each dinucleotide pair. Statistical methods were employed to investigate intergroup differences. Our findings indicate that the UV-resistance seen in halophilic archaea can be attributed in part to a genomic strategy: high G + C content and the resulting bipyrimidine signature reduce the genomic photoreactivity.
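The counting step described above reduces to tallying overlapping dinucleotides on one strand and weighting the four bipyrimidine pairs by relative photoreactivity. A minimal sketch follows; the weights are placeholders chosen only to reflect that T^T formation dominates, not the values used in the study.

```python
from collections import Counter

# Assumed relative photoreactivities for the four bipyrimidine pairs (5'->3');
# placeholder weights, not the values used in the study.
PHOTOREACTIVITY = {"TT": 1.0, "TC": 0.5, "CT": 0.1, "CC": 0.05}

def bipyrimidine_score(genome: str) -> float:
    """Weighted frequency of bipyrimidine dinucleotides on one strand; lower values
    suggest a genome signature that is less prone to UV photolesions."""
    genome = genome.upper()
    counts = Counter(genome[i:i + 2] for i in range(len(genome) - 1))
    total_pairs = max(len(genome) - 1, 1)
    return sum(weight * counts[pair] / total_pairs
               for pair, weight in PHOTOREACTIVITY.items())

print(bipyrimidine_score("ATGCCGTTAGCCCGTA"))
```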
McIntosh, James; O'Brien, Tommy; McKeganey, Neil
2008-06-01
This paper reports on a qualitative study of the attitudes and risk management strategies of a sample of problem drug users in relation to driving while under the influence of drugs. Interviews were conducted with 26 individuals (21 men and 5 women) all of whom had been addicted to heroin and had admitted to driving while under the influence of illegal drugs. The drug users reported four main strategies for managing the risks associated with drug driving: attempting to limit their drug intake to their tolerance level; delaying driving after taking a drug until they felt safe; stopping driving if they felt unsafe while behind the wheel; and avoiding driving altogether under the influence of certain drugs. However, the interviewees' accounts of their drug driving behaviour suggest that these strategies are not only far from reliable, they may also act to encourage drug driving by creating a false sense of security. The reassurance they provide may also undermine any educational messages targeting drug driving. There was little in the problem users' accounts to suggest that media campaigns or a more effective method of detection would have much influence upon their behaviour. The paper concludes that the most realistic approach to the problem may be to incorporate drug driving interventions within drug treatment programmes.
Digital visual communications using a Perceptual Components Architecture
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1991-01-01
The next era of space exploration will generate extraordinary volumes of image data, and management of this image data is beyond current technical capabilities. We propose a strategy for coding visual information that exploits the known properties of early human vision. This Perceptual Components Architecture codes images and image sequences in terms of discrete samples from limited bands of color, spatial frequency, orientation, and temporal frequency. This spatiotemporal pyramid offers efficiency (low bit rate), variable resolution, device independence, error-tolerance, and extensibility.
Archiving Microgravity Flight Data and Samples
NASA Technical Reports Server (NTRS)
1996-01-01
To obtain help in evaluating its current strategy for archiving data and samples obtained in microgravity research, NASA's Microgravity Science and Applications Division (MSAD) asked the Space Studies Board's Committee on Microgravity Research for guidance on the following questions: What data should be archived and where should it be kept? In what form should the data be maintained (electronic files, photographs, hard copy, samples)? What should the general format of the database be? To what extent should it be universally accessible and through what mechanisms? Should there be a period of time for which principal investigators have proprietary access? If so, how long should proprietary data be stored? What provisions should be made for data obtained from ground-based experiments? What should the deadline be for investigators placing their data in the archive? How long should data be saved? How long should data be easily accessible? As a prelude to making recommendations for optimum selection and storage of microgravity data and samples, the committee in this report briefly describes NASA's past archiving practices and outlines MSAD's current archiving strategy. Although the committee found that only a limited number of experiments have thus far been archived, it concluded that the general archiving strategy, characterized by MSAD as minimalist, appears viable. A central focus of attention is the Experiment Data Management Plan (EDMP), MSAD's recently instituted data management and archiving framework for flight experiments. Many of the report's recommendations are aimed at enhancing the effectiveness of the EDMP approach, which the committee regards as an appropriate data management method for MSAD. Other recommendations provide guidance on broader issues related to the questions listed above. This report does not address statutory or regulatory records retention requirements.
Participant Recruitment for Studies on Disability and Work: Challenges and Solutions.
Lysaght, Rosemary; Kranenburg, Rachelle; Armstrong, Carolyn; Krupa, Terry
2016-06-01
Purpose A number of key issues related to employment of persons with disabilities demand ongoing and effective lines of inquiry. There is evidence, however, that work researchers struggle with recruitment of participants, and that this may limit the types and appropriateness of methods selected. This two phase study sought to identify the nature of recruitment challenges in workplace-based disability research, and to identify strategies for addressing identified barriers. Methods The first phase of this study was a scoping review of the literature to identify the study designs and approaches frequently used in this field of inquiry, and the success of the various recruitment methods in use. In the second phase, we used qualitative methods to explore with employers and other stakeholders in the field their perceived challenges related to participating in disability-related research, and approaches that might address these. Results The most frequently used recruitment methods identified in the literature were non-probability approaches for qualitative studies, and sampling from existing worker databases for survey research. Struggles in participant recruitment were evidenced by the use of multiple recruitment strategies, and heavy reliance on convenience sampling. Employers cited a number of barriers to participation, including time pressures, fear of legal reprisal, and perceived lack of relevance to the organization. Conclusions Participant recruitment in disability-related research is a concern, particularly in studies that require collection of new data from organizations and individuals, and where large probability samples and/or stratified or purposeful samples are desirable. A number of strategies may contribute to improved success, including development of participatory research models that will enhance benefits and perceived benefits of workplace involvement.
Sample allocation balancing overall representativeness and stratum precision.
Diaz-Quijano, Fredi Alexander
2018-05-07
In large-scale surveys, it is often necessary to distribute a preset sample size among a number of strata. Researchers must make a decision between prioritizing overall representativeness or precision of stratum estimates. Hence, I evaluated different sample allocation strategies based on stratum size. The strategies evaluated herein included allocation proportional to stratum population; equal sample for all strata; and allocation proportional to the natural logarithm, cube root, and square root of the stratum population. This study considered the fact that, for a preset sample size, the dispersion index of stratum sampling fractions is correlated with the population estimator error, while the dispersion index of stratum-specific sampling errors measures the inequality in the distribution of precision. Identification of a balanced and efficient strategy was based on comparing both dispersion indices. The balance and efficiency of the strategies changed depending on the overall sample size. As the sample to be distributed increased, the most efficient allocation strategies were, in turn, equal sample for each stratum; proportional to the logarithm, cube root, and square root of the stratum population; and proportional to the stratum population itself. Depending on sample size, each of the strategies evaluated could be considered in optimizing the sample to keep both overall representativeness and stratum-specific precision. Copyright © 2018 Elsevier Inc. All rights reserved.
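The allocation rules compared above differ only in the weight assigned to each stratum before normalizing to the preset total sample. A small sketch, with hypothetical stratum populations and a hypothetical total of 1200, makes the comparison concrete (rounding can shift the total by a unit or two).

```python
import numpy as np

def allocate(total_sample: int, stratum_pops: np.ndarray, rule: str) -> np.ndarray:
    """Distribute a preset total sample across strata; weights follow the rules
    compared in the study and are normalized to sum to the total sample."""
    pops = stratum_pops.astype(float)
    weights = {
        "proportional": pops,
        "equal": np.ones_like(pops),
        "log": np.log(pops),
        "cube_root": pops ** (1 / 3),
        "sqrt": np.sqrt(pops),
    }[rule]
    return np.rint(total_sample * weights / weights.sum()).astype(int)

# Hypothetical stratum populations and a preset sample of 1200.
pops = np.array([120_000, 30_000, 8_000, 1_500])
for rule in ("proportional", "equal", "log", "cube_root", "sqrt"):
    print(f"{rule:>12}: {allocate(1200, pops, rule)}")
```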
Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos
2014-04-01
This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
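At its core, the deviation-from-normality idea can be illustrated with a single PCA model: fit the model to features from healthy subjects and flag a test sample whose reconstruction error falls far outside the normative range. The sketch below shows only that core step on synthetic features; the iterative subspace sampling and target-specific feature selection described in the paper are not reproduced, and all numbers are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic normative features (200 healthy subjects x 50 features) and one test
# sample with a single deviating feature; all values are illustrative.
rng = np.random.default_rng(1)
normative = rng.normal(size=(200, 50))
test = rng.normal(size=50) + np.r_[5.0, np.zeros(49)]

pca = PCA(n_components=10).fit(normative)

def reconstruction_error(x: np.ndarray) -> float:
    """Distance between a sample and its projection onto the normative PCA subspace."""
    coords = pca.transform(x.reshape(1, -1))
    return float(np.linalg.norm(x - pca.inverse_transform(coords)[0]))

normative_errors = np.array([reconstruction_error(x) for x in normative])
z = (reconstruction_error(test) - normative_errors.mean()) / normative_errors.std()
print(f"Test sample deviates {z:.1f} SD above the normative reconstruction error")
```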
Search strategy using LHC pileup interactions as a zero bias sample
NASA Astrophysics Data System (ADS)
Nachman, Benjamin; Rubbo, Francesco
2018-05-01
Due to a limited bandwidth and a large proton-proton interaction cross section relative to the rate of interesting physics processes, most events produced at the Large Hadron Collider (LHC) are discarded in real time. A sophisticated trigger system must quickly decide which events should be kept and is very efficient for a broad range of processes. However, there are many processes that cannot be accommodated by this trigger system. Furthermore, there may be models of physics beyond the standard model (BSM) constructed after data taking that could have been triggered, but no trigger was implemented at run time. Both of these cases can be covered by exploiting pileup interactions as an effective zero bias sample. At the end of high-luminosity LHC operations, this zero bias dataset will have accumulated about 1 fb-1 of data from which a bottom-line cross section limit of O(1) fb can be set for BSM models already in the literature and those yet to come.
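The quoted O(1) fb sensitivity follows from simple Poisson counting: with zero observed events, negligible background, and unit efficiency, the 95% CL upper limit on the expected count is about 3, so the cross-section limit is roughly 3 events divided by the integrated luminosity. The back-of-the-envelope check below uses those simplifying assumptions, which are mine, not the paper's full statistical treatment.

```python
from math import log

# Assumptions: zero observed events, negligible background, unit signal
# efficiency, and 1 fb^-1 of accumulated zero-bias data.
luminosity_fb = 1.0
n_upper_95 = -log(1 - 0.95)          # ~3.0 expected events excluded at 95% CL (Poisson, 0 observed)
sigma_limit_fb = n_upper_95 / luminosity_fb
print(f"Cross-section limit ~ {sigma_limit_fb:.1f} fb")
```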
From orphan virus to pathogen: the path to the clinical lab.
Li, Linlin; Delwart, Eric
2011-10-01
Viral metagenomics has recently yielded numerous previously uncharacterized viral genomes from human and animal samples. We review some of the metagenomics tools and strategies used to determine which orphan viruses are likely pathogens. Disease association studies compare viral prevalence in patients with unexplained symptoms versus healthy individuals but require these case and control groups to be closely matched epidemiologically. The development of an antibody response in convalescent serum can temporally link symptoms with a recent infection. Neutralizing antibody detection often requires difficult cell culture virus amplification. Antibody binding assays require proper antigen synthesis and positive control sera to set assay thresholds. High levels of viral genetic diversity within orphan viral groups, frequent co-infections, low or rare pathogenicity, and chronic virus shedding can all complicate disease association studies. The limited availability of matched case and control sample sets from different age groups and geographic origins is a major block for estimating the pathogenic potential of recently characterized orphan viruses. Current limitations on the practical use of deep sequencing for viral diagnostics are listed.
Cox, Alison D; Dube, Charmayne; Temple, Beverley
2015-03-01
Many individuals with intellectual disability engage in challenging behaviour. This can significantly limit quality of life and also negatively impact caregivers (e.g., direct care staff, family caregivers and teachers). Fortunately, efficacious staff training may alleviate some negative side effects of client challenging behaviour. Currently, a systematic review of studies evaluating whether staff training influences client challenging behaviour has not been conducted. The purpose of this article was to identify emerging patterns, knowledge gaps and make recommendations for future research on this topic. The literature search resulted in a total of 19 studies that met our inclusion criteria. Articles were separated into four staff training categories. Studies varied across sample size, support staff involved in training, study design, training duration and data collection strategy. A small sample size (n = 19) and few replication studies, alongside several other procedural limitations prohibited the identification of a best practice training approach. © The Author(s) 2014.
Portable Infrared Laser Spectroscopy for On-site Mycotoxin Analysis.
Sieger, Markus; Kos, Gregor; Sulyok, Michael; Godejohann, Matthias; Krska, Rudolf; Mizaikoff, Boris
2017-03-09
Mycotoxins are toxic secondary metabolites of fungi that spoil food, and severely impact human health (e.g., causing cancer). Therefore, the rapid determination of mycotoxin contamination including deoxynivalenol and aflatoxin B1 in food and feed samples is of prime interest for commodity importers and processors. While chromatography-based techniques are well established in laboratory environments, only very few (i.e., mostly immunochemical) techniques exist enabling direct on-site analysis for traders and manufacturers. In this study, we present MYCOSPEC - an innovative approach for spectroscopic mycotoxin contamination analysis at EU regulatory limits for the first time utilizing mid-infrared tunable quantum cascade laser (QCL) spectroscopy. This analysis technique facilitates on-site mycotoxin analysis by combining QCL technology with GaAs/AlGaAs thin-film waveguides. Multivariate data mining strategies (i.e., principal component analysis) enabled the classification of deoxynivalenol-contaminated maize and wheat samples, and of aflatoxin B1 affected peanuts at EU regulatory limits of 1250 μg kg-1 and 8 μg kg-1, respectively.
Anand, Prachi; Grigoryan, Alexandre; Bhuiyan, Mohammed H; Ueberheide, Beatrix; Russell, Victoria; Quinoñez, Jose; Moy, Patrick; Chait, Brian T; Poget, Sébastien F; Holford, Mandë
2014-01-01
Disulfide-rich peptide toxins found in the secretions of venomous organisms such as snakes, spiders, scorpions, leeches, and marine snails are highly efficient and effective tools for novel therapeutic drug development. Venom peptide toxins have been used extensively to characterize ion channels in the nervous system and platelet aggregation in haemostatic systems. A significant hurdle in characterizing disulfide-rich peptide toxins from venomous animals is obtaining the significant quantities needed for sequence and structural analyses. Presented here is a strategy for the structural characterization of venom peptide toxins from sample-limited (4 ng) specimens via direct mass spectrometry sequencing, chemical synthesis and NMR structure elucidation. Using this integrated approach, venom peptide Tv1 from Terebra variegata was discovered. Tv1 displays a unique fold not witnessed in prior snail neuropeptides. The novel structural features found for Tv1 suggest that the terebrid pool of peptide toxins may target different neuronal agents with varying specificities compared to previously characterized snail neuropeptides.
Canier, Lydie; Khim, Nimol; Kim, Saorin; Eam, Rotha; Khean, Chanra; Loch, Kaknika; Ken, Malen; Pannus, Pieter; Bosman, Philippe; Stassijns, Jorgen; Nackers, Fabienne; Alipon, SweetC; Char, Meng Chuor; Chea, Nguon; Etienne, William; De Smet, Martin; Kindermans, Jean-Marie; Ménard, Didier
2015-01-01
In the context of malaria elimination, novel strategies for detecting very low malaria parasite densities in asymptomatic individuals are needed. One of the major limitations of the malaria parasite detection methods is the volume of blood samples being analyzed. The objective of the study was to compare the diagnostic accuracy of a malaria polymerase chain reaction assay from dried blood spots (DBS, 5 μL) and different volumes of venous blood (50 μL, 200 μL, and 1 mL). The limit of detection of the polymerase chain reaction assay, determined using calibrated Plasmodium falciparum blood dilutions, showed that venous blood samples (50 μL, 200 μL, 1 mL) combined with Qiagen extraction methods gave a similar threshold of 100 parasites/mL, ∼100-fold lower than the 5 μL DBS/Instagene method. On a set of 521 field samples collected in two different transmission areas in northern Cambodia, no significant difference was found in the proportion of parasite carriers, regardless of the method used. The 5 μL DBS method missed 27% of the samples detected by the 1 mL venous blood method, but most of the missed parasite carriers were infected by Plasmodium vivax (84%). The remaining missed P. falciparum parasite carriers (N = 3) were only detected in high-transmission areas. PMID:25561570
Reconnaissance Strategy for Seep Chemosynthetic Communities in the Gulf of Mexico
NASA Astrophysics Data System (ADS)
MacDonald, I. R.; Roberts, H. H.; Fisher, C. R.; Bernard, B. B.; Joye, S.; Carney, R.; Hunt, J.; Shedd, W.
2007-05-01
The Continental Slope of the Gulf of Mexico hosts diverse chemosynthetic communities at oil and gas seeps. Exploration is needed to extend knowledge of the Gulf of Mexico chemosynthetic ecosystem in the zones anticipated to receive energy exploration and production activities over the coming decades. A nested survey approach can be used to identify representative sampling sites within this vast offshore area. Potential sites where chemosynthetic communities could occur are selected on the basis of geophysical, geochemical, and satellite remote-sensing indicators. Photo-reconnaissance using cost-effective camera systems is then used to confirm the presence or absence of chemosynthetic communities at high-probability sites. Follow-up sampling can then proceed with submersibles or ROVs to acquire tissue and/or geochemical samples. However, because access is limited, submersible dives may not be possible at all sites. Two examples of this approach have recently been applied in the northern and southern Gulf of Mexico, respectively. We compared community characterizations obtained from the initial reconnaissance with more detailed characterizations forthcoming from submersible sampling. Our results show that major differences in community type and geochemical substrata are evident from preliminary reconnaissance, while details of animal densities and species compositions require targeted sampling with submersibles. However, given the limited access to submersibles, cost-effective surveys with deep-sea camera systems would greatly expand understanding of the zoogeography of chemosynthetic fauna in the Gulf of Mexico and Caribbean Sea.
Airborne particulate matter in spacecraft
NASA Technical Reports Server (NTRS)
1988-01-01
Acceptability limits and sampling and monitoring strategies for airborne particles in spacecraft were considered. Based on instances of eye and respiratory tract irritation reported by Shuttle flight crews, the following acceptability limits for airborne particles were recommended: for flights of 1 week or less duration, 1 mg/cu m for particles less than 10 microns in aerodynamic diameter (AD) plus 1 mg/cu m for particles 10 to 100 microns in AD; and for flights greater than 1 week and up to 6 months in duration, 0.2 mg/cu m for particles less than 10 microns in AD plus 0.2 mg/cu m for particles 10 to 100 microns in AD. These numerical limits were recommended to aid in spacecraft atmosphere design, which should aim at particulate levels that are as low as reasonably achievable. Sampling of spacecraft atmospheres for particles should include size-fractionated samples of 0 to 10, 10 to 100, and greater than 100 micron particles for mass concentration measurement and elemental chemical analysis by nondestructive analysis techniques. Morphological and chemical analyses of single particles should also be made to aid in identifying airborne particulate sources. Air cleaning systems based on inertial collection principles and fine particle collection devices based on electrostatic precipitation and filtration should be considered for incorporation into spacecraft air circulation systems. It was also recommended that research be carried out in space in the areas of health effects and particle characterization.
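A small helper illustrating the recommended acceptability limits as a duration-keyed lookup; the limit values are taken from the recommendations above, while the function itself is only an illustrative sketch:

```python
# Sketch: recommended particulate acceptability limits (mg per cubic metre)
# keyed by flight duration, per the recommendations summarised above.
def particulate_limits(flight_days: int) -> dict:
    if flight_days <= 7:
        limit = 1.0    # mg/m^3 for each size fraction (<10 um and 10-100 um AD)
    else:              # greater than 1 week, up to ~6 months
        limit = 0.2
    return {"<10 um AD": limit, "10-100 um AD": limit}

print(particulate_limits(5))    # short Shuttle-type flight
print(particulate_limits(90))   # multi-month mission
```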
Kim, Yong-Hyun; Kim, Ki-Hyun
2012-10-02
To determine the lowest attainable detection range of volatile organic compounds (VOCs) in air, a high-sensitivity analytical system was investigated by coupling the thermal desorption (TD) technique with gas chromatography (GC) and time-of-flight (TOF) mass spectrometry (MS). The performance of the TD-GC/TOF MS system was evaluated using liquid standards of 19 target VOCs prepared in the range of 35 pg to 2.79 ng per μL. Studies were carried out using both total ion chromatogram (TIC) and extracted ion chromatogram (EIC) modes. EIC mode was used for calibration to reduce background and to improve signal-to-noise. The detectability of the 19 target VOCs, assessed in terms of method detection limit (MDL, per US EPA definition) and limit of detection (LOD), averaged 5.90 pg and 0.122 pg, respectively, with a mean coefficient of correlation (R(2)) of 0.9975. The minimum quantifiable mass of target analytes, when determined using real air samples by the TD-GC/TOF MS, is highly comparable to the detection limits determined experimentally with standards. In fact, volumes for the actual detection of the major aromatic VOCs like benzene, toluene, and xylene (BTX) in ambient air samples were as low as 1.0 mL in the 0.11-2.25 ppb range. It was thus possible to demonstrate that most target compounds, including those in low abundance, could be reliably quantified at concentrations down to 0.1 ppb at sample volumes of less than 10 mL. The unique sensitivity of this advanced analytical system can ultimately lead to a shift in field sampling strategy, with smaller air sample volumes facilitating faster, simpler air sampling (e.g., use of gas syringes rather than the relative complexity of pumps or bags/canisters), with greatly reduced risk of analyte breakthrough and minimal interference, e.g., from atmospheric humidity. The improved detection limits offered by this system can also enhance accuracy and measurement precision.
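A minimal sketch of the US EPA-style method detection limit calculation referred to above, assuming a hypothetical set of replicate low-level spike measurements; the replicate values and the use of SciPy are illustrative assumptions, not data from the study:

```python
# Sketch: method detection limit (MDL) per the common US EPA definition,
# MDL = t(n-1, 99%) * s, where s is the standard deviation of replicate
# low-concentration spike analyses (values below are placeholders).
import numpy as np
from scipy import stats

replicates_pg = np.array([5.1, 4.8, 5.6, 5.3, 4.9, 5.4, 5.0])
n = len(replicates_pg)
s = replicates_pg.std(ddof=1)
t99 = stats.t.ppf(0.99, df=n - 1)
mdl = t99 * s
print(f"MDL ~= {mdl:.2f} pg from {n} replicates")
```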
Structural and Psycho-Social Limits to Climate Change Adaptation in the Great Barrier Reef Region
Evans, Louisa S.; Hicks, Christina C.; Adger, W. Neil; Barnett, Jon; Perry, Allison L.; Fidelman, Pedro; Tobin, Renae
2016-01-01
Adaptation, as a strategy to respond to climate change, has limits: there are conditions under which adaptation strategies fail to alleviate impacts from climate change. Research has primarily focused on identifying absolute bio-physical limits. This paper contributes empirical insight to an emerging literature on the social limits to adaptation. Such limits arise from the ways in which societies perceive, experience and respond to climate change. Using qualitative data from multi-stakeholder workshops and key-informant interviews with representatives of the fisheries and tourism sectors of the Great Barrier Reef region, we identify psycho-social and structural limits associated with key adaptation strategies, and examine how these are perceived as more or less absolute across levels of organisation. We find that actors experience social limits to adaptation when: i) the effort of pursuing a strategy exceeds the benefits of desired adaptation outcomes; ii) the particular strategy does not address the actual source of vulnerability, and; iii) the benefits derived from adaptation are undermined by external factors. We also find that social limits are not necessarily more absolute at higher levels of organisation: respondents perceived considerable opportunities to address some psycho-social limits at the national-international interface, while they considered some social limits at the local and regional levels to be effectively absolute. PMID:26960200
Pesticide residues in orange fruit from citrus orchards in Nuevo Leon State, Mexico.
Suárez-Jacobo, Angela; Alcantar-Rosales, Victor Manuel; Alonso-Segura, Diana; Heras-Ramírez, Maria; Elizarragaz-De La Rosa, Dalau; Lugo-Melchor, Ofelia; Gaspar-Ramirez, Octavio
2017-09-01
Some international organisations have established maximum residue limits (MRLs) in food to protect human health. Mexico lacks regulations in this matter, affecting national and international trade from agroindustry. The aim of this study was to assess pesticide residues in oranges from citrus orchards in Nuevo Leon, Mexico. In May 2014, 100 orange fruit samples were taken randomly from orchards and analysed for residues of 93 pesticides by GC-QQQ-MS and LC-Q-TOF-MS. Results showed the presence of 15 pesticide residues in the samples. Comparison of the residue levels found in the orange samples with the MRLs allowed by US, EU and Japanese regulations demonstrated that all samples were below the MRLs issued by the USA and Japan, whereas some orange samples were above the MRLs issued by the EU. This provides a basis to establish strategies to satisfy international standards, protect human health and encourage food safety in Mexico.
Shore, Sabrina; Henderson, Jordana M; Lebedev, Alexandre; Salcedo, Michelle P; Zon, Gerald; McCaffrey, Anton P; Paul, Natasha; Hogrefe, Richard I
2016-01-01
For most sample types, the automation of RNA and DNA sample preparation workflows enables high throughput next-generation sequencing (NGS) library preparation. Greater adoption of small RNA (sRNA) sequencing has been hindered by high sample input requirements and inherent ligation side products formed during library preparation. These side products, known as adapter dimer, are very similar in size to the tagged library. Most sRNA library preparation strategies thus employ a gel purification step to isolate tagged library from adapter dimer contaminants. At very low sample inputs, adapter dimer side products dominate the reaction and limit the sensitivity of this technique. Here we address the need for improved specificity of sRNA library preparation workflows with a novel library preparation approach that uses modified adapters to suppress adapter dimer formation. This workflow allows for lower sample inputs and elimination of the gel purification step, which in turn allows for an automatable sRNA library preparation protocol.
Locks, Lindsey M; Pandey, Pooja R; Osei, Akoto K; Spiro, David S; Adhikari, Debendra P; Haselow, Nancy J; Quinn, Victoria J; Nielsen, Jennifer N
2015-10-01
Global recommendations on strategies to improve infant feeding, care and nutrition are clear; however, there is limited literature that explains methods for tailoring these recommendations to the local context where programmes are implemented. This paper aims to: (1) highlight the individual, cultural and environmental factors revealed by formative research to affect infant and young child feeding and care practices in Baitadi district of Far Western Nepal; and (2) outline how both quantitative and qualitative research methods were used to design a context-specific behaviour change strategy to improve child nutrition. Quantitative data on 750 children aged 12-23 months and their families were collected via surveys administered to mothers. The participants were selected using a multistage cluster sampling technique. The survey asked about knowledge, attitude and behaviours relating to infant and young child feeding. Qualitative data on breastfeeding and complementary feeding beliefs and practices were also collected from a separate sample via focus group discussions with mothers, and key informant interviews with mothers-in-law and husbands. Key findings revealed gaps in knowledge among many informants resulting in suboptimal infant and young child feeding practices - particularly with relation to duration of exclusive breastfeeding and dietary diversity of complementary foods. The findings from this research were then incorporated into a context-specific nutrition behaviour change communication strategy. © 2013 Helen Keller International © 2013 John Wiley & Sons, Ltd.
Visual scanning behavior is related to recognition performance for own- and other-age faces
Proietti, Valentina; Macchi Cassia, Viola; dell’Amore, Francesca; Conte, Stefania; Bricolo, Emanuela
2015-01-01
It is well-established that our recognition ability is enhanced for faces belonging to familiar categories, such as own-race faces and own-age faces. Recent evidence suggests that, for race, the recognition bias is also accompanied by different visual scanning strategies for own- compared to other-race faces. Here, we tested the hypothesis that these differences in visual scanning patterns extend to the comparison between own- and other-age faces and contribute to the own-age recognition advantage. Participants (young adults with limited experience with infants) were tested in an old/new recognition memory task in which they encoded and subsequently recognized a series of adult and infant faces while their eye movements were recorded. Consistent with findings on the other-race bias, we found evidence of an own-age bias in recognition, which was accompanied by differential scanning patterns, and consequently differential encoding strategies, for own- compared to other-age faces. Gaze patterns for own-age faces involved a more dynamic sampling of the internal features and longer viewing time on the eye region compared to the other regions of the face. This latter strategy was extensively employed during learning (vs. recognition) and was positively correlated with discriminability. These results suggest that deeply encoding the eye region is functional for recognition and that the own-age bias is evident not only in differential recognition performance, but also in the employment of different sampling strategies found to be effective for accurate recognition. PMID:26579056
Wang, Chen; Yin, Ying-Hao; Wei, Ying-Jie; Shi, Zi-Qi; Liu, Jian-Qun; Liu, Li-Fang; Xin, Gui-Zhong
2017-09-15
Metabolites derived from herbal compounds are becoming promising sources for discovering new drugs. However, the rapid identification of metabolites from biological matrixes is limited by massive endogenous interference and the low abundance of metabolites. Thus, using zebrafish larvae as the biotransformation system, we herein proposed and validated an integrated strategy for rapid identification of metabolites derived from herbal compounds. Two pivotal steps involved in this strategy are to differentiate metabolites from herbal compounds and to match metabolites with their parent compounds. The differentiation step was achieved by cross orthogonal partial least-squares discriminant analysis. Automatic matching analysis was performed in R based on a self-developed program, in which the number of matched ionic clusters between a metabolite and a parent compound, and the corresponding percentage, were taken into account to assess their similarity. Using this strategy, 46 metabolites screened from incubation water samples of zebrafish treated with total Epimedium flavonoids (EFs) could be matched with their corresponding parent compounds; 37 of them were identified and validated by known metabolic pathways and fragmentation patterns. Finally, 75% of the identified EF metabolites were successfully detected in urine samples of rats treated with EFs. These experimental results indicate that the proposed strategy, using zebrafish larvae as the biotransformation system, will facilitate the rapid identification of metabolites derived from herbal compounds, which shows promising perspectives in providing additional resources for pharmaceutical developments from natural products. Copyright © 2017 Elsevier B.V. All rights reserved.
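A toy sketch of the matching idea described above: count the fragment ions shared, within a mass tolerance, between a candidate metabolite and a parent compound and report the matched fraction. The m/z lists, the tolerance, and the plain-Python implementation are assumptions for illustration; the study's own program ran in R:

```python
# Sketch: fraction of a metabolite's fragment ions that match a parent
# compound's fragments within a tolerance, used as a crude similarity score.
def matched_fraction(metabolite_mz, parent_mz, tol=0.01):
    matched = sum(any(abs(m - p) <= tol for p in parent_mz) for m in metabolite_mz)
    return matched / len(metabolite_mz)

parent = [313.07, 297.08, 185.04, 153.02]      # placeholder fragment m/z values
metabolite = [329.07, 313.07, 185.04, 153.02]  # e.g. a hypothetical hydroxylated product
print(f"matched: {matched_fraction(metabolite, parent):.0%}")
```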
Sampling strategies on Mars: Remote and not-so-remote observations from a surface rover
NASA Technical Reports Server (NTRS)
Singer, R. B.
1988-01-01
The mobility and speed of a semi-autonomous Mars rover are of necessity limited by the need to think and stay out of trouble. This consideration makes it essential that the rover's travels be carefully directed to likely targets of interest for sampling and in situ study. Short range remote sensing conducted from the rover, based on existing technology, can provide significant information about the chemistry and mineralogy of surrounding rocks and soils in support of sampling efforts. These observations are of course of direct scientific importance as well. Because of the small number of samples actually to be returned to Earth, it is also important that candidate samples be analyzed aboard the rover so that diversity can be maximized. It is essential to perform certain types of analyses, such as those involving volatiles, prior to the thermal and physical shocks of the return trip to Earth. In addition, whatever measurements can be made of nonreturned samples will be important to enlarge the context of the detailed analyses to be performed later on the few returned samples. Some considerations related to these objectives are discussed.
Selective Data Acquisition in NMR. The Quantification of Anti-phase Scalar Couplings
NASA Astrophysics Data System (ADS)
Hodgkinson, P.; Holmes, K. J.; Hore, P. J.
Almost all time-domain NMR experiments employ "linear sampling," in which the NMR response is digitized at equally spaced times, with uniform signal averaging. Here, the possibilities of nonlinear sampling are explored using anti-phase doublets in the indirectly detected dimensions of multidimensional COSY-type experiments as an example. The Cramér-Rao lower bounds are used to evaluate and optimize experiments in which the sampling points, or the extent of signal averaging at each point, or both, are varied. The optimal nonlinear sampling for the estimation of the coupling constant J, by model fitting, turns out to involve just a few key time points, for example, at the first node (t = 1/J) of the sin(πJt) modulation. Such sparse sampling patterns can be used to derive more practical strategies, in which the sampling or the signal averaging is distributed around the most significant time points. The improvements in the quantification of NMR parameters can be quite substantial, especially when, as is often the case for indirectly detected dimensions, the total number of samples is limited by the time available.
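A numerical sketch of the Cramér-Rao comparison discussed above, assuming a simple exp(-t/T2)·sin(πJt) anti-phase modulation with white noise and known T2; the parameter values, the time window, and the "all samples at the first node" scheme are illustrative assumptions rather than the paper's optimized patterns:

```python
# Sketch: Cramer-Rao lower bound on var(J) for s(t) = exp(-t/T2) * sin(pi*J*t),
# comparing uniform sampling with signal averaging concentrated at the node t = 1/J.
import numpy as np

def crlb_J(times, J=10.0, T2=0.2, sigma=1.0):
    t = np.asarray(times)
    # derivative of the model with respect to J (other parameters assumed known)
    dsdJ = np.exp(-t / T2) * np.pi * t * np.cos(np.pi * J * t)
    return sigma**2 / np.sum(dsdJ**2)

uniform = np.linspace(0.0, 0.4, 32)     # 32 equally spaced samples
clustered = np.full(32, 1.0 / 10.0)     # 32 repeats (signal averages) at t = 1/J
print("uniform   CRLB:", crlb_J(uniform))
print("clustered CRLB:", crlb_J(clustered))
```

With these illustrative values the clustered scheme roughly halves the bound on var(J); the exact gain depends on T2, the window length, and how the averaging is distributed.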
Li, Xueyan; Kan, Xianwen
2018-04-30
In this study, a ratiometric strategy-based electrochemical sensor was developed by electropolymerization of thionine (THI) and β-cyclodextrin (β-CD) composite films on a glassy carbon electrode surface for imidacloprid (IMI) detection. THI played the role of an inner reference element to provide a built-in correction. In addition, the modified β-CD showed good selective enrichment of IMI to improve the sensitivity and anti-interference ability of the sensor. The current ratio between IMI and THI was used as the detected signal for IMI sensing. Compared with common single-signal sensing, the proposed ratiometric strategy showed a wider linear range of 4.0 × 10(-8) to 1.0 × 10(-5) mol L(-1) and a lower limit of detection of 1.7 × 10(-8) mol L(-1) for IMI detection. Moreover, the ratiometric strategy endowed the sensor with good accuracy, reproducibility, and stability. The sensor was also used for IMI determination in real samples with satisfactory results. The simple, effective, and reliable approach reported in this study can be further used to prepare ratiometric strategy-based electrochemical sensors for the selective and sensitive detection of other compounds with good accuracy and stability.
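A brief sketch of the ratiometric readout and calibration idea: divide the analyte peak current by the reference peak current, fit a calibration line, and estimate a detection limit with the common 3σ/slope convention. The concentrations, currents, blank noise, and the 3σ convention are placeholders and assumptions for illustration, not the study's data:

```python
# Sketch: ratiometric signal (I_IMI / I_THI) fitted against concentration;
# LOD estimated with the common 3 * sigma_blank / slope convention.
import numpy as np

conc = np.array([1e-7, 5e-7, 1e-6, 5e-6, 1e-5])   # mol/L (placeholder standards)
i_imi = np.array([0.12, 0.55, 1.05, 5.10, 10.2])  # analyte peak currents (placeholders)
i_thi = np.array([1.01, 0.99, 1.02, 1.00, 0.98])  # reference peak currents (placeholders)

ratio = i_imi / i_thi                              # built-in correction by the reference
slope, intercept = np.polyfit(conc, ratio, 1)      # linear calibration
sigma_blank = 0.004                                # assumed noise of the blank ratio
print("estimated LOD ~", 3 * sigma_blank / slope, "mol/L")
```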
Mutumba, Massy; Bauermeister, José A; Musiime, Victor; Byaruhanga, Judith; Francis, Kiweewa; Snow, Rachel C; Tsai, Alexander C
2015-02-01
Although more than 90% of youth perinatally infected with HIV live in sub-Saharan Africa, little is known about the psychosocial factors that impact their wellbeing, or how these youth cope with these challenges. The purpose of this study was to identify the psychosocial challenges and coping strategies among perinatal HIV-infected adolescents in Uganda. In-depth interviews were conducted with a purposive sample of 38 HIV-infected adolescents aged 12-19 years at a large HIV treatment center in Kampala. Data were analyzed thematically to identify themes and domains related to stressors and specific coping strategies. Psychosocial challenges included stigma/discrimination, relationship challenges such as HIV status disclosure, and medication difficulties. Coping strategies included medication adherence, concealment or limited disclosure of HIV status, treatment optimism, social support, rationalizing, social comparison, spirituality/religiosity, avoidance, and distraction. Age and gender differences also emerged: younger participants generally lacked specific coping strategies; compared to females, male adolescents reported greater use of avoidance/distraction techniques. Findings underscore the need to address stigma within homes and schools, and to equip adolescents with the comprehensive knowledge and skills to address their varied challenges.
Pearson, Matthew R.; Kite, Benjamin A.; Henson, James M.
2016-01-01
In the present study, we examined whether use of protective behavioral strategies mediated the relationship between self-control constructs and alcohol-related outcomes. According to the two-mode model of self-control, good self-control (planfulness; measured with Future Time Perspective, Problem Solving, and Self-Reinforcement) and poor regulation (impulsivity; measured with Present Time Perspective, Poor Delay of Gratification, Distractibility) are theorized to be relatively independent constructs rather than opposite ends of a single continuum. The analytic sample consisted of 278 college student drinkers (68% women) who responded to a battery of surveys at a single time point. Using a structural equation model based on the two-mode model of self-control, we found that good self-control predicted increased use of three types of protective behavioral strategies (Manner of Drinking, Limiting/Stopping Drinking, and Serious Harm Reduction). Poor regulation was unrelated to use of protective behavioral strategies, but had direct effects on alcohol use and alcohol problems. Further, protective behavioral strategies mediated the relationship between good self-control and alcohol use. The clinical implications of these findings are discussed. PMID:22663345
Hybrid-coded 3D structured illumination imaging with Bayesian estimation (Conference Presentation)
NASA Astrophysics Data System (ADS)
Chen, Hsi-Hsun; Luo, Yuan; Singh, Vijay R.
2016-03-01
Light-induced fluorescence microscopy has long been developed to observe and understand objects at the microscale, such as cellular samples. However, the transfer function of a lens-based imaging system limits the resolution, so that fine and detailed structures of a sample cannot be identified clearly. Resolution-enhancement techniques are therefore attractive for breaking the resolution limit of a given objective. In the past decades, resolution-enhanced imaging has been investigated through a variety of strategies, including photoactivated localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), stimulated emission depletion (STED), and structured illumination microscopy (SIM). Among these methods, only SIM can intrinsically improve the resolution limit of a system without taking the structural properties of the object into account. In this paper, we develop a SIM approach combined with Bayesian estimation and with the optical sectioning capability provided by HiLo processing, yielding high resolution throughout a 3D volume. This 3D SIM provides both optical sectioning and resolution enhancement, and is robust to noise owing to the proposed data-driven Bayesian estimation reconstruction. To validate the 3D SIM, we show simulation results of the algorithm and experimental results demonstrating the 3D resolution enhancement.
Burger, Emily A; Sy, Stephen; Nygård, Mari; Kim, Jane J
2017-01-01
Human papillomavirus (HPV) testing allows women to self-collect cervico-vaginal cells at home (i.e., self-sampling). Using primary data from a randomized pilot study, we evaluated the long-term consequences and cost-effectiveness of using self-sampling to improve participation to routine cervical cancer screening in Norway. We compared a strategy reflecting screening participation (using reminder letters) to strategies that involved mailing self-sampling device kits to women noncompliant to screening within a 5- or 10-year period under two scenarios: (A) self-sampling respondents had moderate under-screening histories, or (B) respondents to self-sampling had moderate and severe under-screening histories. Model outcomes included quality-adjusted life-years (QALY) and lifetime costs. The "most cost-effective" strategy was identified as the strategy just below $100,000 per QALY gained. Mailing self-sampling device kits to all women noncompliant to screening within a 5- or 10-year period can be more effective and less costly than the current reminder letter policy; however, the optimal self-sampling strategy was dependent on the profile of self-sampling respondents. For example, "10-yearly self-sampling" is preferred ($95,500 per QALY gained) if "5-yearly self-sampling" could only attract moderate under-screeners; however, "5-yearly self-sampling" is preferred if this strategy could additionally attract severe under-screeners. Targeted self-sampling of noncompliers likely represents good value-for-money; however, the preferred strategy is contingent on the screening histories and compliance of respondents. The magnitude of the health benefit and optimal self-sampling strategy is dependent on the profile and behavior of respondents. Health authorities should understand these factors prior to selecting and implementing a self-sampling policy. Cancer Epidemiol Biomarkers Prev; 26(1); 95-103. ©2016 AACR. ©2016 American Association for Cancer Research.
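A small sketch of how alternative strategies might be screened against the willingness-to-pay threshold mentioned above, using incremental cost-effectiveness ratios; the cost and QALY figures are placeholders, not results from the study, and the loop omits the usual dominance checks for brevity:

```python
# Sketch: pick the most effective strategy whose incremental cost-effectiveness
# ratio (ICER) versus the current best option stays under the threshold.
strategies = {                      # placeholder (lifetime cost, QALYs) per woman
    "reminder letters":        (1000.0, 20.0000),
    "10-yearly self-sampling": (1040.0, 20.0005),
    "5-yearly self-sampling":  (1120.0, 20.0010),
}
threshold = 100_000.0               # dollars per QALY gained

ordered = sorted(strategies.items(), key=lambda kv: kv[1][1])  # by effectiveness
best = ordered[0]
for name, (cost, qaly) in ordered[1:]:
    icer = (cost - best[1][0]) / (qaly - best[1][1])
    if icer < threshold:
        best = (name, (cost, qaly))
print("preferred strategy:", best[0])
```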
Evaluating data worth for ground-water management under uncertainty
Wagner, B.J.
1999-01-01
A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models - a chance-constrained ground-water management model and an integer-programming sampling network design model - to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) The optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information - i.e., the projected reduction in management costs - with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.
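The four-step loop lends itself to a compact sketch. The three helper functions below (solve_management, design_network, project_uncertainty) are hypothetical stand-ins for the chance-constrained management model, the integer-programming network design model, and the uncertainty projection; their bodies are placeholders, not the paper's formulations:

```python
# Sketch of the data-worth loop described above (all numbers are placeholders).
def solve_management(uncertainty):
    return 1_000_000 * (1 + uncertainty)            # placeholder pumping cost model

def design_network(budget):
    return {"wells": max(1, int(budget / 10_000))}  # placeholder sampling plan

def project_uncertainty(current, plan):
    return current / (1 + 0.1 * plan["wells"])      # placeholder uncertainty reduction

current_uncertainty = 0.5
baseline_cost = solve_management(current_uncertainty)          # step 1

for budget in (20_000, 50_000, 100_000):                       # steps 2-4, repeated
    plan = design_network(budget)                              # step 2
    projected = project_uncertainty(current_uncertainty, plan)
    new_cost = solve_management(projected)                     # step 3
    worth = (baseline_cost - new_cost) - budget                # step 4: net benefit
    print(f"budget {budget}: net worth of sampling ~ {worth:,.0f}")
```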
Heat stress intervention research in construction: gaps and recommendations.
Yang, Yang; Chan, Albert Ping-Chuen
2017-06-08
Developing heat stress interventions for construction workers has received growing attention in recent years. However, limited effort has been devoted to elaborating the rationale, methodology, and practicality of heat stress interventions in the construction industry. This study aims to review previous heat stress intervention research in construction, to identify the major research gaps in methodological issues, and to offer detailed recommendations for future studies. A total of 35 peer-reviewed journal papers were identified that develop administrative, environmental, or personal engineering interventions to safeguard construction workers. It was found that methodological limitations, such as arbitrary sampling methods and unreliable instruments, could be the major obstacle to undertaking heat stress intervention research. To bridge the identified research gaps, this study refined a research framework for conducting heat stress intervention studies in the construction industry. The proposed research strategy provides researchers and practitioners with fresh insights into expanding multidisciplinary research areas and solving practical problems in the management of heat stress. The proposed research framework may foster the development of heat stress intervention research in construction, which further aids researchers, practitioners, and policymakers in formulating proper intervention strategies.
Mechanistic Challenges and Advantages of Biosensor Miniaturization into the Nanoscale.
Soleymani, Leyla; Li, Feng
2017-04-28
Over the past few decades, there has been tremendous interest in developing biosensing systems that combine high sensitivity and specificity with rapid sample-to-answer times, portability, low-cost operation, and ease-of-use. Miniaturizing the biosensor dimensions into the nanoscale has been identified as a strategy for addressing the functional requirements of point-of-care and wearable biosensors. However, it is important to consider that decreasing the critical dimensions of biosensing elements impacts the two most important performance metrics of biosensors: limit-of-detection and response time. Miniaturization into the nanoscale enhances signal-to-noise-ratio by increasing the signal density (signal/geometric surface area) and reducing background signals. However, there is a trade-off between the enhanced signal transduction efficiency and the longer time it takes to collect target analytes on sensor surfaces due to the increase in mass transport times. By carefully considering the signal transduction mechanisms and reaction-transport kinetics governing different classes of biosensors, it is possible to develop structure-level and device-level strategies for leveraging miniaturization toward creating biosensors that combine low limit-of-detection with rapid response times.
Jacobson, Jerry O; Cueto, Carmen; Smith, Jennifer L; Hwang, Jimee; Gosling, Roly; Bennett, Adam
2017-01-18
To eliminate malaria, malaria programmes need to develop new strategies for surveillance and response appropriate for the changing epidemiology that accompanies transmission decline, in which transmission is increasingly driven by population subgroups whose behaviours place them at increased exposure. Conventional tools of malaria surveillance and response are likely not sufficient in many elimination settings for accessing high-risk population subgroups, such as mobile and migrant populations (MMPs), given their greater likelihood of asymptomatic infections, illegal risk behaviours, limited access to public health facilities, and high mobility including extended periods travelling away from home. More adaptive, targeted strategies are needed to monitor transmission and intervention coverage effectively in these groups. Much can be learned from HIV programmes' experience with "second generation surveillance", including how to rapidly adapt surveillance and response strategies to changing transmission patterns, biological and behavioural surveys that utilize targeted sampling methods for specific behavioural subgroups, and methods for population size estimation. This paper reviews the strategies employed effectively for HIV programmes and offers considerations and recommendations for adapting them to the malaria elimination context.
School-age children's perceptions of their PICU hospitalization.
Board, Rhonda
2005-01-01
The aim of this cross-sectional study was to explore the effects of a PICU hospitalization on critically ill school-age children. Few studies have examined the impact of the PICU experience on children themselves. A convenience sample of 21 developmentally appropriate children, aged 7-12 years and never hospitalized before, was recruited. Children were asked open-ended questions related to their PICU experience, the frequency and effectiveness of coping strategies were measured, and drawings were used to evaluate anxiety. Although not detailed, most children did have some recollection of their PICU stay. People in the PICU (i.e., nurses, physicians) were remembered as good, while the feelings the children had (i.e., tired, didn't like it) were described as what was bad about the PICU. Children's coping strategies scores were very low. Most children had an average level of anxiety based on analysis of their drawings. Children's repertoire of coping strategies may be limited by the PICU, especially while intubated. Nurses should never underestimate the effect their behavior and responsiveness has on children. Feasible coping strategies and use of therapeutic play for PICU children should be explored further.
Ruiz, María Angeles; Sanjuan, Pilar; Pérez-García, Ana M; Rueda, Beatriz
2011-05-01
Fifty-two men who had suffered a first episode of ischemic heart disease reported their degree of life satisfaction, the strategies they used to adjust to the illness, and the symptoms of anxiety and depression they felt. The multiple regression analyses carried out indicated that emotional distress was associated with a lower level of life satisfaction. In the analyses of anxiety symptoms, the use of negative adjustment strategies was also a significant predictor. Lastly, a significant Life Satisfaction x Type of Adjustment interaction was obtained: patients who felt more satisfaction with their lives used more positive strategies to adjust to the illness, and fewer negative ones, than the group of patients who were less satisfied. In conclusion, life satisfaction predicts the emotional wellbeing of patients with ischemic heart disease and enhances the implementation of appropriate strategies to cope with the disease. Moreover, although life satisfaction has been considered a stable measure, we suggest it may change as the experience of illness limits individuals' important goals.
Ramírez, Juan Carlos; Cura, Carolina Inés; Moreira, Otacilio da Cruz; Lages-Silva, Eliane; Juiz, Natalia; Velázquez, Elsa; Ramírez, Juan David; Alberti, Anahí; Pavia, Paula; Flores-Chávez, María Delmans; Muñoz-Calderón, Arturo; Pérez-Morales, Deyanira; Santalla, José; Guedes, Paulo Marcos da Matta; Peneau, Julie; Marcet, Paula; Padilla, Carlos; Cruz-Robles, David; Valencia, Edward; Crisante, Gladys Elena; Greif, Gonzalo; Zulantay, Inés; Costales, Jaime Alfredo; Alvarez-Martínez, Miriam; Martínez, Norma Edith; Villarroel, Rodrigo; Villarroel, Sandro; Sánchez, Zunilda; Bisio, Margarita; Parrado, Rudy; Galvão, Lúcia Maria da Cunha; da Câmara, Antonia Cláudia Jácome; Espinoza, Bertha; de Noya, Belkisyole Alarcón; Puerta, Concepción; Riarte, Adelina; Diosque, Patricio; Sosa-Estani, Sergio; Guhl, Felipe; Ribeiro, Isabela; Aznar, Christine; Britto, Constança; Yadón, Zaida Estela; Schijman, Alejandro G.
2015-01-01
An international study was performed by 26 experienced PCR laboratories from 14 countries to assess the performance of duplex quantitative real-time PCR (qPCR) strategies on the basis of TaqMan probes for detection and quantification of parasitic loads in peripheral blood samples from Chagas disease patients. Two methods were studied: Satellite DNA (SatDNA) qPCR and kinetoplastid DNA (kDNA) qPCR. Both methods included an internal amplification control. Reportable range, analytical sensitivity, limits of detection and quantification, and precision were estimated according to international guidelines. In addition, inclusivity and exclusivity were estimated with DNA from stocks representing the different Trypanosoma cruzi discrete typing units and Trypanosoma rangeli and Leishmania spp. Both methods were challenged against 156 blood samples provided by the participant laboratories, including samples from acute and chronic patients with varied clinical findings, infected by oral route or vectorial transmission. kDNA qPCR showed better analytical sensitivity than SatDNA qPCR with limits of detection of 0.23 and 0.70 parasite equivalents/mL, respectively. Analyses of clinical samples revealed a high concordance in terms of sensitivity and parasitic loads determined by both SatDNA and kDNA qPCRs. This effort is a major step toward international validation of qPCR methods for the quantification of T. cruzi DNA in human blood samples, aiming to provide an accurate surrogate biomarker for diagnosis and treatment monitoring for patients with Chagas disease. PMID:26320872
Cisco Networking Academy Program for high school students: Formative & summative evaluation
NASA Astrophysics Data System (ADS)
Cranford-Wesley, Deanne
This study examined the effectiveness of the Cisco Network Technology Program in enhancing students' technology skills as measured by classroom strategies, student motivation, student attitude, and student learning. Qualitative and quantitative methods were utilized to determine the effectiveness of this program. The study focused on two 11th grade classrooms at Hamtramck High School. Hamtramck, an inner-city community located in Detroit, is racially and ethnically diverse. The majority of students speak English as a second language; more than 20 languages are represented in the school district. More than 70% of the students are considered to be economically at risk. Few students have computers at home, and their access to the few computers at school is limited. Purposive sampling was conducted for this study. The sample consisted of 40 students, all of whom were trained in Cisco Networking Technologies. The researcher examined viable learning strategies in teaching a Cisco Networking class that focused on a web-based approach. Findings revealed that the Cisco Networking Academy Program was an excellent vehicle for teaching networking skills and, therefore, helping to enhance computer skills for the participating students. However, only a limited number of students were able to participate in the program, due to limited computer labs and lack of qualified teaching personnel. In addition, the cumbersome technical language posed an obstacle to students' success in networking. Laboratory assignments were preferred by 90% of the students over lecture and PowerPoint presentations. Practical applications, lab projects, interactive assignments, PowerPoint presentations, lectures, discussions, readings, research, and assessment all helped to increase student learning and proficiency and to enrich the classroom experience. Classroom strategies are crucial to student success in the networking program. Equipment must be updated and utilized to ensure that students are applying practical skills to networking concepts. The results also suggested a high level of motivation and retention in student participants. Students in both classes scored 80% proficiency on the Achievement Motivation Profile Assessment. The identified standard proficiency score was 70%, and both classes exceeded the standard.
Lim, Wei Yin; Goh, Boon Tong; Khor, Sook Mei
2017-08-15
Clinicians, working in the health-care diagnostic systems of developing countries, currently face the challenges of rising costs, increased number of patient visits, and limited resources. A significant trend is using low-cost substrates to develop microfluidic devices for diagnostic purposes. Various fabrication techniques, materials, and detection methods have been explored to develop these devices. Microfluidic paper-based analytical devices (μPADs) have gained attention for sensing multiplex analytes, confirming diagnostic test results, rapid sample analysis, and reducing the volume of samples and analytical reagents. μPADs, which can provide accurate and reliable direct measurement without sample pretreatment, can reduce patient medical burden and yield rapid test results, aiding physicians in choosing appropriate treatment. The objectives of this review are to provide an overview of the strategies used for developing paper-based sensors with enhanced analytical performances and to discuss the current challenges, limitations, advantages, disadvantages, and future prospects of paper-based microfluidic platforms in clinical diagnostics. μPADs, with validated and justified analytical performances, can potentially improve the quality of life by providing inexpensive, rapid, portable, biodegradable, and reliable diagnostics. Copyright © 2017 Elsevier B.V. All rights reserved.
Public perspectives on biospecimen procurement: what biorepositories should consider.
L'Heureux, Jamie; Murray, Jeffrey C; Newbury, Elizabeth; Shinkunas, Laura; Simon, Christian M
2013-06-01
Human biospecimens are central to biobanking efforts, yet how members of the public think about biobank procurement strategies is not well understood. This study aimed to explore public perspectives toward the procurement of residual clinical material versus "direct" procurement strategies such as the drawing of blood. Members of the public residing in and beyond the biobank catchment area of the University of Iowa Hospitals and Clinics were randomly selected to participate in focus groups and a telephone survey. The majority of survey participants (75%, n=559) found both residual and direct procurement strategies equally workable. Small proportions preferred either residual (15%; n=117) or direct (5%; n=40) procurement. Focus group participants (n=48) could identify benefits to both procurement strategies, but raised concerns about possible donor inconvenience/discomfort and reduced biospecimen accrual in the case of direct procurement. Residual procurement raised concerns about lower-quality samples being procured without full donor awareness. Biobanks should consider that members of the public in their research programs may be willing to make specimen donations regardless of whether a residual or direct procurement strategy is employed. Limiting patient discomfort and inconvenience may make direct procurement strategies more acceptable to some members of the public. Ensuring donor awareness through effective informed consent may allay public concerns about the indirectness of donating clinical biospecimens.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Guanghua, E-mail: yan@ufl.edu; Li, Jonathan; Huang, Yin
Purpose: To propose a simple model to explain the origin of ghost markers in marker-based optical tracking systems (OTS) and to develop retrospective strategies to detect and eliminate ghost markers. Methods: In marker-based OTS, ghost markers are virtual markers created due to the cross-talk between the two camera sensors, which can lead to system execution failure or inaccuracy in patient tracking. As a result, the users have to limit the number of markers and avoid certain marker configurations to reduce the chances of ghost markers. In this work, the authors propose retrospective strategies to detect and eliminate ghost markers. The two camera sensors were treated as mathematical points in space. The authors identified the coplanar within limit (CWL) condition as the necessary condition for ghost marker occurrence. A simple ghost marker detection method was proposed based on the model. Ghost marker elimination was achieved through pattern matching: a ghost marker-free reference set was matched with the optical marker set observed by the OTS; unmatched optical markers were eliminated as either ghost markers or misplaced markers. The pattern matching problem was formulated as a constraint satisfaction problem (using pairwise distances as constraints) and solved with an iterative backtracking algorithm. Wildcard markers were introduced to address missing or misplaced markers. An experiment was designed to measure the sensor positions and the limit for the CWL condition. The ghost marker detection and elimination algorithms were verified with samples collected from a five-marker jig and a nine-marker anthropomorphic phantom, rotated with the treatment couch from −60° to +60°. The accuracy of the pattern matching algorithm was further validated with marker patterns from 40 patients who underwent stereotactic body radiotherapy (SBRT). For this purpose, a synthetic optical marker pattern was created for each patient by introducing ghost markers, marker position uncertainties, and marker displacement. Results: The sensor positions and the limit for the CWL condition were measured with excellent reproducibility (standard deviation ≤ 0.39 mm). The ghost marker detection algorithm had perfect detection accuracy for both the jig (1544 samples) and the anthropomorphic phantom (2045 samples). Pattern matching was successful for all samples from both phantoms as well as the 40 patient marker patterns. Conclusions: The authors proposed a simple model to explain the origin of ghost markers and identified the CWL condition as the necessary condition for ghost marker occurrence. The retrospective ghost marker detection and elimination algorithms guarantee complete ghost marker elimination while providing the users with maximum flexibility in selecting the number of markers and their configuration to meet their clinic needs.
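A toy sketch of the pairwise-distance pattern matching idea described above: observed markers are assigned to a ghost-free reference pattern by backtracking on pairwise-distance constraints, and whatever remains unassigned is flagged. The coordinates and tolerance are placeholders, and the sketch omits the wildcard-marker handling mentioned in the abstract:

```python
# Sketch: match observed optical markers to a ghost-free reference pattern by
# backtracking on pairwise-distance constraints; unmatched observations are
# treated as ghost (or misplaced) markers.
import numpy as np

def dist(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def match(reference, observed, tol=2.0, assigned=None):
    assigned = assigned or []
    if len(assigned) == len(reference):
        return assigned
    i = len(assigned)
    for j in range(len(observed)):
        if j in assigned:
            continue
        # pairwise distances to already-assigned markers must agree with the reference
        if all(abs(dist(reference[i], reference[k]) -
                   dist(observed[j], observed[assigned[k]])) <= tol
               for k in range(i)):
            result = match(reference, observed, tol, assigned + [j])
            if result is not None:
                return result
    return None

reference = [(0, 0, 0), (50, 0, 0), (0, 80, 0)]                      # placeholder pattern (mm)
observed = [(0.5, 0.2, 0), (25, 40, 10), (49.8, 0.1, 0), (0.3, 79.6, 0.2)]
assignment = match(reference, observed)
ghosts = set(range(len(observed))) - set(assignment or [])
print("assignment:", assignment, "ghost/misplaced indices:", sorted(ghosts))
```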
Sampling in Developmental Science: Situations, Shortcomings, Solutions, and Standards.
Bornstein, Marc H; Jager, Justin; Putnick, Diane L
2013-12-01
Sampling is a key feature of every study in developmental science. Although sampling has far-reaching implications, too little attention is paid to sampling. Here, we describe, discuss, and evaluate four prominent sampling strategies in developmental science: population-based probability sampling, convenience sampling, quota sampling, and homogeneous sampling. We then judge these sampling strategies by five criteria: whether they yield representative and generalizable estimates of a study's target population, whether they yield representative and generalizable estimates of subsamples within a study's target population, the recruitment efforts and costs they entail, whether they yield sufficient power to detect subsample differences, and whether they introduce "noise" related to variation in subsamples and whether that "noise" can be accounted for statistically. We use sample composition of gender, ethnicity, and socioeconomic status to illustrate and assess the four sampling strategies. Finally, we tally the use of the four sampling strategies in five prominent developmental science journals and make recommendations about best practices for sample selection and reporting.
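To make the representativeness criterion concrete, a tiny simulation (entirely illustrative; the subgroup proportions and outcome values are not from the article) contrasts a population-based probability sample with a convenience sample drawn from only one subgroup:

```python
# Sketch: bias of a convenience sample versus a simple probability sample when
# subgroups differ on the outcome (all values are illustrative placeholders).
import numpy as np

rng = np.random.default_rng(1)
# Population: 70% subgroup A (mean 100), 30% subgroup B (mean 90)
group = rng.random(100_000) < 0.7
outcome = np.where(group,
                   rng.normal(100, 15, group.size),
                   rng.normal(90, 15, group.size))

probability_sample = rng.choice(outcome, size=500, replace=False)
convenience_sample = rng.choice(outcome[group], size=500, replace=False)  # recruits only subgroup A

print("population mean:      ", round(outcome.mean(), 2))
print("probability sample:   ", round(probability_sample.mean(), 2))
print("convenience sample:   ", round(convenience_sample.mean(), 2))
```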
Biosignature Discovery for Substance Use Disorders Using Statistical Learning.
Baurley, James W; McMahan, Christopher S; Ervin, Carolyn M; Pardamean, Bens; Bergen, Andrew W
2018-02-01
There are limited biomarkers for substance use disorders (SUDs). Traditional statistical approaches are identifying simple biomarkers in large samples, but clinical use cases are still being established. High-throughput clinical, imaging, and 'omic' technologies are generating data from SUD studies and may lead to more sophisticated and clinically useful models. However, analytic strategies suited for high-dimensional data are not regularly used. We review strategies for identifying biomarkers and biosignatures from high-dimensional data types. Focusing on penalized regression and Bayesian approaches, we address how to leverage evidence from existing studies and knowledge bases, using nicotine metabolism as an example. We posit that big data and machine learning approaches will considerably advance SUD biomarker discovery. However, translation to clinical practice, will require integrated scientific efforts. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lee, Hsienming; Larson, Daniel R.; Lawrence, David S.
2009-01-01
Biological systems are characterized by a level of spatial and temporal organization that often lies beyond the grasp of present day methods. Light-modulated bioreagents, including analogs of low molecular weight compounds, peptides, proteins, and nucleic acids, represent a compelling strategy to probe, perturb, or sample biological phenomena with the requisite control to address many of these organizational complexities. Although this technology has created considerable excitement in the chemical community, its application to biological questions has been relatively limited. We describe the challenges associated with the design, synthesis, and use of light-responsive bioreagents, the scope and limitations associated with the instrumentation required for their application, and recent chemical and biological advances in this field. PMID:19298086
Ntoumanis, Nikos; Taylor, Ian M; Standage, Martyn
2010-12-01
There has been very limited research on the use of self-worth protection strategies in the achievement context of school physical education (PE). Thus the aim of the present study was to examine some antecedents and consequences of defensive pessimism and self-handicapping. The sample comprised 534 British pupils (275 females, 259 males) recruited from two schools who responded to established questionnaires. Results of structural equation modelling analysis indicated that self-handicapping and defensive pessimism were positively predicted by fear of failure and negatively predicted by competence valuation. In addition, defensive pessimism was negatively predicted by physical self-concept. In turn, defensive pessimism negatively predicted enjoyment in PE and intentions to participate in future optional PE programmes. Self-handicapping did not predict enjoyment or intentions. Results from multi-sample structural equation modelling showed the specified model to be largely invariant across males and females. The findings indicate that although both strategies aim to protect one's self-worth, some of their antecedents and consequences in PE may differ.
Oral health and elite sport performance
Needleman, Ian; Ashley, Paul; Fine, Peter; Haddad, Fares; Loosemore, Mike; de Medici, Akbar; Donos, Nikos; Newton, Tim; van Someren, Ken; Moazzez, Rebecca; Jaques, Rod; Hunter, Glenn; Khan, Karim; Shimmin, Mark; Brewer, John; Meehan, Lyndon; Mills, Steve; Porter, Stephen
2015-01-01
While the research base is limited, studies have consistently reported poor oral health in elite athletes since the first report from the 1968 Olympic Games. The finding is consistent both across selected samples attending dental clinics at major competitions and more representative sampling of teams and has led to calls from the International Olympic Committee for more accurate data on oral health. Poor oral health is an important issue directly as it can cause pain, negative effects on appearance and psychosocial effects on confidence and quality of life and may have long-term consequences for treatment burden. Self-reported evidence also suggests an impact on training and performance of athletes. There are many potential challenges to the oral health of athletes including nutritional, oral dehydration, exercise-induced immune suppression, lack of awareness, negative health behaviours and lack of prioritisation. However, in theory, oral diseases are preventable by simple interventions with good evidence of efficacy. The consensus statement aims to raise awareness of the issues of oral health in elite sport and recommends strategies for prevention and health promotion in addition to future research strategies. PMID:25263651
Refining the Measurement of Distress Intolerance
McHugh, R. Kathryn; Otto, Michael W.
2012-01-01
Distress intolerance (DI) is an important transdiagnostic variable that has long been implicated in the development and maintenance of psychological disorders. Self-report measurement strategies for distress intolerance have emerged from several different models of psychopathology, and these measures have been applied inconsistently in the literature in the absence of a clear gold standard. The absence of a consistent assessment strategy has limited the ability to compare across studies and samples, thus hampering the advancement of this research agenda. This study evaluated the latent factor structure of existing measures of DI to examine the degree to which they capture the same construct. Results of confirmatory factor analysis in 3 samples totaling 400 participants provided support for a single-factor latent structure. Individual items of these four scales were then correlated with this factor to identify those that best capture the core construct. Results provided consistent support for 10 items that demonstrated the strongest concordance with this factor. The use of these 10 items as a unifying measure in the study of DI and future directions for the evaluation of its utility are discussed. PMID:22697451
Perspectives on land snails - sampling strategies for isotopic analyses
NASA Astrophysics Data System (ADS)
Kwiecien, Ola; Kalinowski, Annika; Kamp, Jessica; Pellmann, Anna
2017-04-01
Since the seminal work of Goodfriend (1992), several substantial studies have confirmed a relation between the isotopic composition of land snail shells (δ18O, δ13C) and environmental parameters like precipitation amount, moisture source, temperature and vegetation type. This relation, however, is not straightforward and is site dependent. The choice of sampling strategy (discrete or bulk sampling) and cleaning procedure (several methods can be used, but a comparison of their effects on an individual shell has not yet been achieved) further complicate shell analysis. The advantage of using snail shells as an environmental archive lies in the snails' limited mobility, and therefore an intrinsic aptitude for recording local and site-specific conditions. Also, snail shells are often found at dated archaeological sites. An obvious drawback is that shell assemblages rarely make up a continuous record, and a single shell is only a snapshot of the environmental setting at a given time. Shells from archaeological sites might represent a dietary component, and cooking would presumably alter the isotopic signature of the aragonite material. Consequently, a proper sampling strategy is of great importance and should be adjusted to the scientific question. Here, we compare and contrast different sampling approaches using modern shells collected in Morocco, Spain and Germany. The bulk shell approach (fine-ground material) yields information on mean environmental parameters within the life span of the analyzed individuals. However, despite homogenization, replicate measurements of bulk shell material returned results with a variability greater than analytical precision (up to 2‰ for δ18O, and up to 1‰ for δ13C), calling for caution when analyzing only single individuals. Horizontal high-resolution sampling (single drill holes along growth lines) provides insights into the amplitude of seasonal variability, while vertical high-resolution sampling (multiple drill holes along the same growth line) produces replicable results. This reproducibility enables not only sequential testing of isotopic changes in shells exposed to artificially elevated temperatures, but also systematic assessment of different cleaning methods. Goodfriend, 1992. The use of land snail shells in paleoenvironmental reconstruction, EPSL 11, 655-685
What is a representative brain? Neuroscience meets population science.
Falk, Emily B; Hyde, Luke W; Mitchell, Colter; Faul, Jessica; Gonzalez, Richard; Heitzeg, Mary M; Keating, Daniel P; Langa, Kenneth M; Martz, Meghan E; Maslowsky, Julie; Morrison, Frederick J; Noll, Douglas C; Patrick, Megan E; Pfeffer, Fabian T; Reuter-Lorenz, Patricia A; Thomason, Moriah E; Davis-Kean, Pamela; Monk, Christopher S; Schulenberg, John
2013-10-29
The last decades of neuroscience research have produced immense progress in the methods available to understand brain structure and function. Social, cognitive, clinical, affective, economic, communication, and developmental neurosciences have begun to map the relationships between neuro-psychological processes and behavioral outcomes, yielding a new understanding of human behavior and promising interventions. However, a limitation of this fast moving research is that most findings are based on small samples of convenience. Furthermore, our understanding of individual differences may be distorted by unrepresentative samples, undermining findings regarding brain-behavior mechanisms. These limitations are issues that social demographers, epidemiologists, and other population scientists have tackled, with solutions that can be applied to neuroscience. By contrast, nearly all social science disciplines, including social demography, sociology, political science, economics, communication science, and psychology, make assumptions about processes that involve the brain, but have incorporated neural measures to differing, and often limited, degrees; many still treat the brain as a black box. In this article, we describe and promote a perspective--population neuroscience--that leverages interdisciplinary expertise to (i) emphasize the importance of sampling to more clearly define the relevant populations and sampling strategies needed when using neuroscience methods to address such questions; and (ii) deepen understanding of mechanisms within population science by providing insight regarding underlying neural mechanisms. Doing so will increase our confidence in the generalizability of the findings. We provide examples to illustrate the population neuroscience approach for specific types of research questions and discuss the potential for theoretical and applied advances from this approach across areas.
Li, Yuan-Yuan; Chen, Xiao-Huang; Xie, Zhang-Xian; Li, Dong-Xu; Wu, Peng-Fei; Kong, Ling-Fen; Lin, Lin; Kao, Shuh-Ji; Wang, Da-Zhi
2018-01-01
Nitrogen (N) is a primary limiting nutrient for bacterial growth and productivity in the ocean. To better understand the bacterial community and its N utilization strategies in different N regimes of the ocean, we examined bacterial diversity, diazotrophic diversity, and N utilization gene expression in the northwestern Pacific Ocean (NWPO) using a combination of high-throughput sequencing and real-time qPCR methods. A total of 521 and 204 different operational taxonomic units (OTUs) were identified in the 16S rRNA and nifH libraries, respectively, from nine surface samples. Of the 16S rRNA gene OTUs, 11.9% were observed in all samples, while 3.5 and 15.9% were detected only in N-sufficient and N-deficient samples, respectively. Proteobacteria, Cyanobacteria and Bacteroidetes dominated the bacterial community. Prochlorococcus and Pseudoalteromonas were the most abundant at the genus level in N-deficient regimes, while SAR86, Synechococcus and SAR92 were predominant in the Kuroshio-Oyashio confluence region. The distribution of the nifH gene showed great divergence among sampling stations: Cyanobacterium_UCYN-A dominated the N-deficient stations, while clusters related to the Alpha-, Beta-, and Gamma-Proteobacteria were abundant at other stations. Temperature was the main factor determining bacterial community structure and diversity, while the concentration of NOx-N was significantly correlated with the structure and distribution of N2-fixing microorganisms. Expression of the ammonium transporter was much higher than that of urea transporter subunit A (urtA) and ferredoxin-nitrate reductase, while urtA showed increased expression in N-deficient surface water. The predicted ammonium transporter and ammonium assimilation enzymes were most abundant in surface samples, while urease and nitrogenase were more abundant in the N-deficient regions. These findings underscore the fact that marine bacteria have evolved diverse N utilization strategies to adapt to different N habitats, and that urea metabolism is of vital ecological importance in N-deficient regimes.
Autoverification process improvement by Six Sigma approach: Clinical chemistry & immunoassay.
Randell, Edward W; Short, Garry; Lee, Natasha; Beresford, Allison; Spencer, Margaret; Kennell, Marina; Moores, Zoë; Parry, David
2018-05-01
This study examines the effectiveness of a project to enhance an autoverification (AV) system through application of Six Sigma (DMAIC) process improvement strategies. Similar AV systems set up at three sites underwent examination and modification to produce improved systems, while monitoring the proportion of samples autoverified, the time required for manual review and verification, and sample processing time, and while examining the characteristics of tests not autoverified. This information was used to identify areas for improvement and to monitor the impact of changes. Use of reference range-based criteria had the greatest impact on the proportion of tests autoverified. To improve the AV process, reference range-based criteria were replaced with extreme value limits based on a 99.5% test result interval, delta check criteria were broadened, and new specimen consistency rules were implemented. Decision guidance tools were also developed to assist staff using the AV system. The mean proportion of tests and samples autoverified improved from <62% for samples and <80% for tests to >90% for samples and >95% for tests across all three sites. The new AV system significantly decreased turnaround time and total sample review time (to about a third); however, time spent on manual review of held samples almost tripled. There was no evidence of compromise to the quality of the testing process, and <1% of samples held for exceeding delta check or extreme limits required corrective action. The Six Sigma (DMAIC) process improvement methodology was successfully applied to AV systems, resulting in an increase in overall test and sample AV to >90%, improved turnaround time, and reduced time for manual verification, with no obvious compromise to quality or error detection. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
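The abstract above describes an AV rule set built from extreme-value limits and delta checks. The Python sketch below illustrates that kind of rule in minimal form; the function name, thresholds, and delta limit are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of an autoverification rule combining extreme-value limits
# with a delta check, as described above. Thresholds are illustrative
# placeholders, not values from the study.

def autoverify(result, extreme_low, extreme_high, previous=None, delta_limit=None):
    """Return True if a result can be released without manual review."""
    # Hold results outside the extreme-value limits (e.g., a 99.5% result interval).
    if not (extreme_low <= result <= extreme_high):
        return False
    # Hold results that fail a delta check against the patient's previous value.
    if previous is not None and delta_limit is not None:
        if abs(result - previous) > delta_limit:
            return False
    return True

# Example: a sodium result of 141 mmol/L with a prior of 139 is released;
# a result of 170 mmol/L is held for manual review.
print(autoverify(141, 120, 160, previous=139, delta_limit=10))  # True
print(autoverify(170, 120, 160))                                # False
```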
NASA Astrophysics Data System (ADS)
De la Calle, Inmaculada; Menta, Mathieu; Séby, Fabienne
2016-11-01
Due to the increasing use of nanoparticles (NPs) in consumer products, it has become necessary to develop different strategies for their detection, identification, characterization and quantification in a wide variety of samples. Since the analysis of NPs in consumer products and environmental samples is particularly troublesome, a detailed description of challenges and limitations is given here. This review mainly focuses on sample preparation procedures applied for the most commonly used techniques for metallic and metal oxide NP characterization in consumer products, and on the most outstanding publications on biological and environmental samples (from 2006 to 2015). We summarize the procedures applied for total metal content, extraction/separation and/or preconcentration of NPs from the matrix, separation of metallic NPs from their ions or from larger particles, and NP size fractionation. Sample preparation procedures specifically for microscopy are also described. Selected applications in cosmetics, food, other consumer products, biological tissues and environmental samples are presented. Advantages and drawbacks of these procedures are considered. Moreover, selected simplified schemes for NP sample preparation, as well as the techniques usually applied, are included. Finally, promising directions for further investigations are discussed.
Invited commentary: recruiting for epidemiologic studies using social media.
Allsworth, Jenifer E
2015-05-15
Social media-based recruitment for epidemiologic studies has the potential to expand the demographic and geographic reach of investigators and identify potential participants more cost-effectively than traditional approaches. In fact, social media are particularly appealing for their ability to engage traditionally "hard-to-reach" populations, including young adults and low-income populations. Despite their great promise as a tool for epidemiologists, social media-based recruitment approaches do not currently compare favorably with gold-standard probability-based sampling approaches. Sparse data on the demographic characteristics of social media users, patterns of social media use, and appropriate sampling frames limit our ability to implement probability-based sampling strategies. In a well-conducted study, Harris et al. (Am J Epidemiol. 2015;181(10):737-746) examined the cost-effectiveness of social media-based recruitment (advertisements and promotion) in the Contraceptive Use, Pregnancy Intention, and Decisions (CUPID) Study, a cohort study of 3,799 young adult Australian women, and the approximate representativeness of the CUPID cohort. Implications for social media-based recruitment strategies for cohort assembly, data accuracy, implementation, and human subjects concerns are discussed. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Abnous, Khalil; Danesh, Noor Mohammad; Ramezani, Mohammad; Taghdisi, Seyed Mohammad; Emrani, Ahmad Sarreshtehdar
2018-08-22
Herein, a novel colorimetric aptasensor was introduced for the detection of cocaine based on the formation of three-way junction pockets on the surfaces of gold nanoparticles (AuNPs) and on the catalytic activity of the AuNP surfaces. Simplicity and detection of cocaine in a short time (only 35 min) are some of the unique features of the proposed sensing strategy. In the presence of cocaine, the triple-fragment aptamer (TFA) forms on the AuNP surfaces, leading to a significant decrease in the catalytic activity of the AuNPs, and the color of the samples remains yellow. In the absence of the target, the TFA does not form on the AuNP surfaces and 4-nitrophenol, the colorimetric agent, has more access to the AuNP surfaces, resulting in its reduction and a color change of the sample from yellow to colorless. The sensing strategy showed good specificity, a limit of detection (LOD) of 440 pM and a dynamic range of 2-100 nM. The sensing method was also successfully applied to detect cocaine in spiked human serum samples with recoveries of 94.71-98.63%. Copyright © 2018 Elsevier B.V. All rights reserved.
Wu, Ci; Chen, Xi; Liu, Jianhui; Zhang, Xiaolin; Xue, Weifeng; Liang, Zhen; Liu, Mengyao; Cui, Yan; Huang, Daliang; Zhang, Lihua
2017-10-08
A novel method for the simultaneous detection of multiple allergenic proteins in infant food using parallel reaction monitoring (PRM) mode with liquid chromatography-tandem mass spectrometry (LC-MS/MS) was established. In this method, unique peptides with good stability and high sensitivity were used to quantify the corresponding allergenic proteins, allowing multiple allergenic proteins to be monitored simultaneously with high sensitivity. The method was successfully used for the detection of multiple allergenic proteins in infant food. For sample preparation of infant food, the in-situ filter-aided sample pretreatment (i-FASP) method gave higher protein extraction efficiency and better resistance to interference than the traditional acetone precipitation strategy. All allergenic proteins gave a good linear response with correlation coefficients (R2) ≥ 0.99, the widest concentration range spanned four orders of magnitude, and the lowest detection limit was 0.028 mg/L, which was better than previously reported values. Finally, the method was conveniently used to detect allergens in four real samples of imported infant food. All the results demonstrate that this novel strategy is of great significance for providing a rapid and reliable analytical technique for allergen proteomics.
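As a generic illustration of the validation figures quoted above (calibration linearity with R2 ≥ 0.99 and a lowest detection limit), the Python sketch below fits a linear calibration and estimates a detection limit as 3.3·σ/slope; the concentrations, responses, and blank standard deviation are invented placeholders, and this is not the authors' workflow.

```python
# Generic calibration sketch (not the authors' workflow): check linearity (R^2)
# and estimate a detection limit as 3.3*sigma/slope. All numbers are placeholders.
import numpy as np

conc = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])    # spiked concentrations, mg/L
area = np.array([0.9, 10.2, 98.5, 1003.0, 9950.0])  # measured peak areas

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1 - ((area - pred) ** 2).sum() / ((area - area.mean()) ** 2).sum()

sigma_blank = 0.08                                   # assumed sd of blank responses
lod = 3.3 * sigma_blank / slope

print(f"R^2 = {r2:.4f}, LOD = {lod:.3f} mg/L")
```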
Cross-cultural analysis of cognitive attributions of smoking in thai and South Korean adolescents.
Page, Randy M; Park, Sunhee; Suwanteerangkul, Jiraporn; Park, Hyunju; Kemeny, Maria; Philips, Lynn
2012-02-01
Understanding the cognitive attributions of smoking has the potential to advance youth smoking prevention efforts; however, research on this subject is limited in Asian countries. We attempted to determine the degree to which cognitive attributions of smoking differ among adolescents in 2 Asian countries, Thailand and South Korea. We surveyed 10th- to 12th-grade students in Chiang Mai, Thailand (N = 2516) and Seoul, South Korea (N = 1166). Logistic regression was used to determine associations between attributions and current smoking and differences in attributions between Thai and South Korean students. The items with the highest agreement among South Korean students were "helps me to deal with stress" and "helps relax", and among Thai students were "feel like I am making my own decisions" and "keeps from being bored." Significant predictors of current smoking differed between samples; only 1 cognitive attribution predicted current smoking in both samples ("helps me to deal with stress"). The pattern of relevant cognitive attributions of smoking for the 2 samples was distinct, suggesting that cross-cultural differences merit consideration when designing prevention and cessation programs. Health education should strive to dispel the use of smoking as a coping strategy for dealing with stressful situations and distressful feelings and teach adolescents alternative healthy strategies for dealing with stress. © 2012, American School Health Association.
Pearson, Matthew R.; Henson, James M.
2013-01-01
Much research links impulsivity with alcohol use and problems. In two studies, unplanned (or impulsive) drinking is assessed directly to determine whether it has direct effects on alcohol use and alcohol-related problems. In study 1, we examined whether unplanned drinking serves as a proximal mediator of the effects of impulsivity-like traits on alcohol-related outcomes. With a sample of 211 college student drinkers, we found that the Unplanned Drinking Scale was significantly related to alcohol use, and perhaps more importantly, had a direct effect on alcohol-related problems even after controlling for frequency and quantity of alcohol use. Further, unplanned drinking partially mediated the effects of negative urgency on alcohol-related problems. In study 2, we examined whether unplanned drinking accounts for unique variance in alcohol-related outcomes when controlling for use of protective behavioral strategies. With a sample of 170 college students, we replicated the findings of Study 1 in that the Unplanned Drinking Scale had a significant direct effect on alcohol-related problems even after controlling for alcohol use; further, this effect was maintained when controlling for use of protective behavioral strategies. Limitations include the modest sample sizes and the cross-sectional design. Future directions for testing the Model of Unplanned Drinking Behavior are proposed. PMID:23276312
Butler, James; Quinn, Sandra C.; Fryer, Craig S.; Garza, Mary A.; Kim, Kevin H.; Thomas, Stephen B.
2013-01-01
Limited attention has been given to the optimal strategies for retaining racial and ethnic minorities within studies and during the follow-up period. High attrition limits the interpretation of results and reduces the ability to translate findings into successful interventions. This study examined the retention strategies used by researchers when retaining minorities in research studies. From May to August 2010, we conducted an online survey with researchers (principal investigators, research staff, and IRB members) and examined their use of seven commonly used retention strategies. The number and type of retention strategies used, how these strategies differ by researcher type, and other characteristics (e.g., funding) were explored. We identified three clusters of researchers: comprehensive retention strategy researchers - utilized the greatest number of retention strategies; moderate retention strategy researchers - utilized an average number of retention strategies; and limited retention strategy researchers - utilized the least number of retention strategies. The comprehensive and moderate retention strategy researchers were more likely than the limited retention strategy researchers to conduct health outcomes research, work with a community advisory board, hire minority staff, use steps at a higher rate to overcome retention barriers, develop new partnerships with the minority community, modify study materials for the minority population, and allow staff to work flexible schedules. This study is a novel effort to characterize researchers, without implying a value judgment, according to their use of specific retention strategies. It provides critical information for conducting future research to determine the effectiveness of using a combination of retention strategies. PMID:23764697
Pandemic Influenza and Community Preparedness
Ryan, Philip; Roberton, Don; Street, Jackie; Watson, Maureen
2009-01-01
Objectives. We aimed to examine community knowledge about and attitudes toward the threat of pandemic influenza and assess the community acceptability of strategies to reduce its effect. Methods. We conducted computer-aided telephone interviews in 2007 with a cross-sectional sample of rural and metropolitan residents of South Australia. Results. Of 1975 households interviewed, half (50.2%) had never heard of pandemic influenza or were unaware of its meaning. Only 10% of respondents were extremely concerned about the threat of pandemic influenza. Respondents identified children as the highest priority for vaccination, if supplies were limited; they ranked politicians and teachers as the lowest priority. Although only 61.7% of respondents agreed with a policy of home isolation, 98.2% agreed if it was part of a national strategy. Respondents considered television to be the best means of educating the community. Conclusions. Community knowledge about pandemic influenza is poor despite widespread concern. Public education about pandemic influenza is essential if strategies to reduce the impact of the disease are to be effective. PMID:19797751
Satisfaction With Life, Coping, and Spirituality Among Urban Families.
Doolittle, Benjamin; Courtney, Malachi; Jasien, Joan
2015-10-01
Urban families face many challenges that affect life satisfaction, including low income, limited access to resources, and unstable neighborhoods. We aimed to investigate life satisfaction and identify potential mediators: neighborhood stability, emotional coping strategies, religion, and spirituality. A convenience sample of families presenting to an urban primary care clinic for routine care filled out an anonymous, voluntary survey that included demographic data, the Satisfaction with Life Scale (SWLS), the Spiritual Inventory and Beliefs Scale, and an emotional coping inventory. A total of 127 individuals completed the survey. Life satisfaction was high (21.3 ± 9). Families in the lowest quartile of the SWLS were 4.5 times as likely to have a child with a chronic medical illness. SWLS scores correlated with strategy planning (r = 0.24, P < .01), external practices of religion (r = 0.23, P < .01), and humility (r = 0.18, P < .05). Encouraging patients' involvement in religion and certain coping strategies, especially among those families coping with children with special health care needs, may improve life satisfaction. © The Author(s) 2015.
Gosselin, Patrick; Langlois, Frédéric; Freeston, Mark H; Ladouceur, Robert; Laberge, Myriam; Lemay, Dominique
2007-02-01
Studies aiming to better understand worry have neglected children and adolescents. This constitutes an important limitation considering that excessive worry is frequent among adolescents and that patients suffering from excessive worries associate the beginning of their disorder with adolescence. This study evaluates the cognitive variables associated with worry in a sample of 777 adolescents. It attempts to determine whether cognitive avoidance and false beliefs about the usefulness of worries are present and associated with worries in adolescence. The results showed that participants with a high level of worry used more avoidance strategies and held more beliefs about worry. The results also revealed that avoidance of stimuli that trigger unpleasant thoughts and thought substitution were the major avoidance strategies related to worry among adolescents. The belief that worry helps to avoid future negative events was also related to worry. These findings may suggest that adolescents' worries are maintained by processes similar to those observed among adults.
Semi-supervised learning for photometric supernova classification
NASA Astrophysics Data System (ADS)
Richards, Joseph W.; Homrighausen, Darren; Freeman, Peter E.; Schafer, Chad M.; Poznanski, Dovi
2012-01-01
We present a semi-supervised method for photometric supernova typing. Our approach is to first use the non-linear dimension reduction technique diffusion map to detect structure in a database of supernova light curves and subsequently employ random forest classification on a spectroscopically confirmed training set to learn a model that can predict the type of each newly observed supernova. We demonstrate that this is an effective method for supernova typing. As supernova numbers increase, our semi-supervised method efficiently utilizes this information to improve classification, a property not enjoyed by template-based methods. Applied to supernova data simulated by Kessler et al. to mimic those of the Dark Energy Survey, our methods achieve (cross-validated) 95 per cent Type Ia purity and 87 per cent Type Ia efficiency on the spectroscopic sample, but only 50 per cent Type Ia purity and 50 per cent efficiency on the photometric sample due to their spectroscopic follow-up strategy. To improve the performance on the photometric sample, we search for better spectroscopic follow-up procedures by studying the sensitivity of our machine-learned supernova classification on the specific strategy used to obtain training sets. With a fixed amount of spectroscopic follow-up time, we find that, despite collecting data on a smaller number of supernovae, deeper magnitude-limited spectroscopic surveys are better for producing training sets. For supernova Ia (II-P) typing, we obtain a 44 per cent (1 per cent) increase in purity to 72 per cent (87 per cent) and 30 per cent (162 per cent) increase in efficiency to 65 per cent (84 per cent) of the sample using a 25th (24.5th) magnitude-limited survey instead of the shallower spectroscopic sample used in the original simulations. When redshift information is available, we incorporate it into our analysis using a novel method of altering the diffusion map representation of the supernovae. Incorporating host redshifts leads to a 5 per cent improvement in Type Ia purity and 13 per cent improvement in Type Ia efficiency. A web service for the supernova classification method used in this paper is available online.
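The two-stage approach described above (non-linear embedding of all light curves, then a supervised classifier trained on the spectroscopic subset) can be sketched as follows; scikit-learn's SpectralEmbedding is used only as a stand-in for the diffusion map, and the feature vectors and labels are synthetic placeholders.

```python
# Illustrative two-stage sketch: embed all objects (labeled + unlabeled), then
# train a random forest on the spectroscopically confirmed subset only.
# SpectralEmbedding stands in for the diffusion map; data are synthetic.
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_all = rng.normal(size=(500, 20))          # light-curve feature vectors (placeholder)
y_spec = rng.integers(0, 2, size=100)       # spectroscopic labels for the first 100

# Step 1: non-linear embedding computed on the full set, so the unlabeled
# photometric sample still shapes the learned representation.
Z = SpectralEmbedding(n_components=5, random_state=0).fit_transform(X_all)

# Step 2: classifier trained only on the labeled subset, applied to the rest.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Z[:100], y_spec)
print(clf.predict(Z[100:])[:10])            # predicted types for photometric-only objects
```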
Burger, Emily A; Sy, Stephen; Nygård, Mari; Kim, Jane J
2016-01-01
Background Human papillomavirus (HPV) testing allows women to self-collect cervico-vaginal cells at home (i.e., self-sampling). Using primary data from a randomized pilot study, we evaluated the long-term consequences and cost-effectiveness of using self-sampling to improve participation to routine cervical cancer screening in Norway. Methods We compared a strategy reflecting screening participation (using reminder letters) to strategies that involved mailing self-sampling device kits to women non-compliant to screening within a 5-year or 10-year period under two scenarios: A) self-sampling respondents had moderate under-screening histories, or B) respondents to self-sampling had moderate and severe under-screening histories. Model outcomes included quality-adjusted life-years (QALY) and lifetime costs. The ‘most cost-effective’ strategy was identified as the strategy just below $100,000 per QALY gained. Results Mailing self-sampling device kits to all women non-compliant to screening within a 5-year or 10-year period can be more effective and less costly than the current reminder letter policy; however, the optimal self-sampling strategy was dependent on the profile of self-sampling respondents. For example, ‘10-yearly self-sampling’ is preferred ($95,500 per QALY gained) if ‘5-yearly self-sampling’ could only attract moderate under-screeners; however, ‘5-yearly self-sampling’ is preferred if this strategy could additionally attract severe under-screeners. Conclusions Targeted self-sampling of non-compliers likely represents good value-for-money; however, the preferred strategy is contingent on the screening histories and compliance of respondents. Impact The magnitude of the health benefit and optimal self-sampling strategy is dependent on the profile and behavior of respondents. Health authorities should understand these factors prior to selecting and implementing a self-sampling policy. PMID:27624639
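The decision rule implied above (a strategy is adopted if it falls just below $100,000 per QALY gained) reduces to an incremental cost-effectiveness ratio comparison; the sketch below uses invented costs and QALYs purely to show the arithmetic, not figures from the study.

```python
# Toy incremental cost-effectiveness ratio (ICER) check against a willingness-to-pay
# threshold. All costs and QALYs are invented placeholders, not study results.
THRESHOLD = 100_000  # dollars per QALY gained

def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost per incremental QALY of a new strategy vs. a reference."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical comparison: self-sampling kits vs. reminder letters alone.
value = icer(cost_new=1_250.0, qaly_new=24.013, cost_ref=1_100.0, qaly_ref=24.011)
print(f"ICER = ${value:,.0f} per QALY gained; below threshold: {value < THRESHOLD}")
```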
Maximizing the Biochemical Resolving Power of Fluorescence Microscopy
Esposito, Alessandro; Popleteeva, Marina; Venkitaraman, Ashok R.
2013-01-01
Most recent advances in fluorescence microscopy have focused on achieving spatial resolutions below the diffraction limit. However, the inherent capability of fluorescence microscopy to non-invasively resolve different biochemical or physical environments in biological samples has not yet been formally described, because an adequate and general theoretical framework is lacking. Here, we develop a mathematical characterization of the biochemical resolution in fluorescence detection with Fisher information analysis. To improve the precision and the resolution of quantitative imaging methods, we demonstrate strategies for the optimization of fluorescence lifetime, fluorescence anisotropy and hyperspectral detection, as well as different multi-dimensional techniques. We describe optimized imaging protocols, provide optimization algorithms, and characterize precision and resolving power in biochemical imaging through analysis of the general properties of Fisher information in fluorescence detection. These strategies enable optimal use of the information content available within the limited photon budget typically available in fluorescence microscopy. This theoretical foundation leads to a generalized strategy for the optimization of multi-dimensional optical detection, and demonstrates how the parallel detection of all properties of fluorescence can maximize the biochemical resolving power of fluorescence microscopy, an approach we term Hyper Dimensional Imaging Microscopy (HDIM). Our work provides a theoretical framework for the description of the biochemical resolution in fluorescence microscopy, irrespective of spatial resolution, and for the development of a new class of microscopes that exploit multi-parametric detection systems. PMID:24204821
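As a toy illustration of what Fisher information analysis yields for one of the parameters named above, the sketch below checks the Cramér-Rao bound for estimating a fluorescence lifetime from ideal mono-exponential photon arrival times (per-photon information 1/τ²); this is a textbook special case, not the authors' general framework, and the lifetime and photon counts are arbitrary.

```python
# Toy check of the Cramer-Rao bound for lifetime estimation from an ideal
# mono-exponential decay: sd(tau_hat) >= tau / sqrt(N) for N detected photons.
import numpy as np

tau, n_photons, n_trials = 3.0, 1000, 2000
rng = np.random.default_rng(1)

# Maximum-likelihood estimate of tau is the sample mean of the arrival times.
estimates = rng.exponential(tau, size=(n_trials, n_photons)).mean(axis=1)

print("empirical sd of tau_hat:", round(estimates.std(ddof=1), 4))
print("Cramer-Rao lower bound :", round(tau / np.sqrt(n_photons), 4))
```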
Yousefi-Nooraie, Reza; Irani, Shirin; Mortaz-Hedjri, Soroush; Shakiba, Behnam
2013-10-01
The aim of this study was to compare the performance of three search methods in the retrieval of relevant clinical trials from PubMed to answer specific clinical questions. The included studies of a sample of 100 Cochrane reviews that were indexed in PubMed were considered the reference standard. The search queries were formulated based on the systematic review titles. Precision, recall, and the number of retrieved records were compared for limiting the results to the clinical trial publication type and for using the sensitive and specific clinical queries filters. The number of keywords and the presence of specific intervention and syndrome names in the search keywords were used in a model to predict recall and precision. The clinical queries-sensitive search strategy retrieved the largest number of records (33) and had the highest recall (41.6%) and lowest precision (4.8%). The presence of a specific intervention name was the only significant predictor of all recalls and precisions (P = 0.016). The recall and precision of combining simple clinical search queries with methodological search filters to find clinical trials on various subjects were considerably low. The limit-field strategy yielded higher precision, fewer retrieved records, and approximately similar recall compared with the clinical queries-sensitive strategy. The presence of a specific intervention name in the search keywords increased both recall and precision. © 2010 John Wiley & Sons Ltd.
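For reference, the recall and precision figures reported above are computed against the reference standard of trials included in the Cochrane reviews; a minimal sketch of that calculation is shown below, with made-up PMIDs as placeholders.

```python
# Minimal sketch of recall/precision against a reference standard of included
# trials; the PMIDs are made-up placeholders.
reference = {"111", "222", "333", "444"}   # trials included in the Cochrane review
retrieved = {"222", "444", "555", "666"}   # records returned by a search strategy

hits = reference & retrieved
recall = len(hits) / len(reference)        # share of relevant trials that were found
precision = len(hits) / len(retrieved)     # share of retrieved records that are relevant
print(f"recall = {recall:.2f}, precision = {precision:.2f}")
```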
Wang, Tong; Wu, Hai-Long; Xie, Li-Xia; Zhu, Li; Liu, Zhi; Sun, Xiao-Dong; Xiao, Rong; Yu, Ru-Qin
2017-04-01
In this work, a smart chemometrics-enhanced strategy, high-performance liquid chromatography with diode array detection coupled with a second-order calibration method based on the alternating trilinear decomposition algorithm, was proposed to simultaneously quantify 12 polyphenols in different kinds of apple peel and pulp samples. The proposed strategy proved to be a powerful tool for solving the problems of coelution, unknown interferences, and chromatographic shifts in high-performance liquid chromatography analysis, making it possible to determine the 12 polyphenols in complex apple matrices within 10 min under simple elution conditions. The average recoveries with standard deviations, and figures of merit including sensitivity, selectivity, limit of detection, and limit of quantitation, were calculated to validate the accuracy of the proposed method. Compared with the quantitative results from the classic high-performance liquid chromatography method, statistical and graphical analysis showed that our proposed strategy gave more reliable results. All results indicated that the proposed method for the quantitative analysis of apple polyphenols is accurate, fast, universal, simple, and green, and it is expected to be developed as an attractive alternative method for the simultaneous determination of multi-targeted analytes in complex matrices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Santangelo, Barbara; Lapolla, Rosa; Rutigliano, Irene; Pettoello Mantovani, Massimo; Campanozzi, Angelo
2018-06-01
No data are available on caffeine consumption among Italian adolescents. We investigated caffeine intake from coffee, soft drinks and energy drinks in a sample of Italian adolescents and determined if they exceeded the recommended limits. The study comprised 1213 adolescents with a mean age of 15.1 years (range 12-19) from four schools in Foggia, southern Italy. Caffeine intake was assessed using an anonymous self-reported questionnaire during the 2013/2014 school year. We calculated the percentage of daily caffeine consumers, their mean intake of caffeine from beverages and the contribution of each beverage category to the total caffeine intake. Approximately 76% of the sample consumed caffeine every day, amounting to 125.5 ± 69.2 mg/day and 2.1 ± 1.2 mg/kg/day. When we applied the reference values from the Academy of Pediatrics, we found that 46% of the adolescents exceeded the recommended upper limits. Coffee was the most frequently consumed caffeinated drink and the main contributor to daily caffeine intake. More than three quarters (76%) of the Italian adolescents in our study drank coffee on a daily basis and nearly half (46%) exceeded the recommended upper limits. Strategies are needed to reduce caffeine consumption by adolescents. ©2018 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
Seasonal change in the capacity for supercooling by neonatal painted turtles.
Packard, G C; Packard, M J; McDaniel, L L
2001-05-01
Hatchlings of the North American painted turtle (Chrysemys picta) typically spend their first winter of life inside the shallow, subterranean nest where they completed incubation the preceding summer. This facet of their natural history commonly causes neonates in northerly populations to be exposed in mid-winter to ice and cold, which many animals survive by remaining unfrozen and supercooled. We measured the limit of supercooling in samples of turtles taken shortly after hatching and in other samples after 2 months of acclimation (or acclimatization) to a reduced temperature in the laboratory or field. Animals initially had only a limited capacity for supercooling, but they acquired an ability to undergo deeper supercooling during the course of acclimation. The gut of most turtles was packed with particles of soil and eggshell shortly after hatching, but not after acclimation. Thus, the relatively high limit of supercooling for turtles in the days immediately after hatching may have resulted from the ingestion of soil (and associated nucleating agents) by the animals as they were freeing themselves from their eggshell, whereas the relatively low limit of supercooling attained by acclimated turtles may have resulted from their purging their gut of its contents. Parallels may, therefore, exist between the natural-history strategy expressed by hatchling painted turtles and that expressed by numerous terrestrial arthropods that withstand the cold of winter by sustaining a state of supercooling.
NASA Astrophysics Data System (ADS)
Stevens, A. H.; Gentry, D.; Amador, E.; Cable, M. L.; Cantrell, T.; Chaudry, N.; Cullen, T.; Duca, Z.; Jacobsen, M.; Kirby, J.; McCaig, H.; Murukesan, G.; Rader, E.; Rennie, V.; Schwieterman, E.; Sutton, S.; Tan, G.; Yin, C.; Cullen, D.; Geppert, W.; Stockton, A.
2018-04-01
We detail multi-year field investigations in Icelandic Mars analogue environments that have yielded results that can help inform strategies for sample selection and downselection for Mars Sample Return.
What variables affect public perceptions for EMS meeting general community needs?
Blau, Gary; Hochner, Arthur; Portwood, James
2012-01-01
In the fall of 2010, a phone survey of 928 respondents examined two research questions: does the general public perceive Emergency Medical Services (EMS) as meeting their community needs, and what factors or correlates help to explain EMS meeting community needs? To maximize geographical representation across the contiguous United States, a clustered stratified sampling strategy was used based on zip codes across the 48 states. Results showed that the sample strongly perceived EMS as meeting their general community needs. Seventeen percent of the variance in EMS meeting community needs was collectively explained by the demographic and perceptual variables in the regression model. Of the correlates tested, the strongest relationship was found between greater admiration for EMS professionals and higher perception of EMS meeting community needs. Study limitations included sampling households with only landline (no cell) phones, using a simulated emergency situation, and not collecting gender data.
Soil and leaf litter metaproteomics—a brief guideline from sampling to understanding
Keiblinger, Katharina M.; Fuchs, Stephan; Zechmeister-Boltenstern, Sophie; Riedel, Katharina
2016-01-01
The increasing application of soil metaproteomics is providing unprecedented, in-depth characterization of the composition and functionality of in situ microbial communities. Despite recent advances in high-resolution mass spectrometry, soil metaproteomics still suffers from a lack of effective and reproducible protein extraction protocols and standardized data analyses. This review discusses the opportunities and limitations of selected techniques in soil-, and leaf litter metaproteomics, and presents a step-by-step guideline on their application, covering sampling, sample preparation, extraction and data evaluation strategies. In addition, we present recent applications of soil metaproteomics and discuss how such approaches, linking phylogenetics and functionality, can help gain deeper insights into terrestrial microbial ecology. Finally, we strongly recommend that to maximize the insights environmental metaproteomics may provide, such methods should be employed within a holistic experimental approach considering relevant aboveground and belowground ecosystem parameters. PMID:27549116
Deloria-Knoll, Maria; Feikin, Daniel R; Scott, J Anthony G; O'Brien, Katherine L; DeLuca, Andrea N; Driscoll, Amanda J; Levine, Orin S
2012-04-01
Methods for the identification and selection of patients (cases) with severe or very severe pneumonia and controls for the Pneumonia Etiology Research for Child Health (PERCH) project were needed. Issues considered include eligibility criteria and sampling strategies, whether to enroll hospital or community controls, whether to exclude controls with upper respiratory tract infection (URTI) or nonsevere pneumonia, and matching criteria, among others. PERCH ultimately decided to enroll community controls and an additional human immunodeficiency virus (HIV)-infected control group at high HIV-prevalence sites matched on age and enrollment date of cases; controls with symptoms of URTI or nonsevere pneumonia will not be excluded. Systematic sampling of cases (when necessary) and random sampling of controls will be implemented. For each issue, we present the options that were considered, the advantages and disadvantages of each, the rationale for the methods selected for PERCH, and remaining implications and limitations.
ToxCast HTS Assay Development and Retrofitting: Strategies ...
A slide presentation to EC JRC partners on new ToxCast HTS assay methods and strategies to address current limitations to HTS methods.
Kao, Ya-Ting; Zhu, Xinxin; Xu, Fang; Min, Wei
2012-08-01
Probing biological structures and functions deep inside live organisms with light is highly desirable. Among the current optical imaging modalities, multiphoton fluorescence microscopy exhibits the best contrast for imaging scattering samples by employing a spatially confined nonlinear excitation. However, as the incident laser power drops exponentially with imaging depth into the sample due to the scattering loss, the out-of-focus background eventually overwhelms the in-focus signal, which defines a fundamental imaging-depth limit. Herein we significantly improve the image contrast for deep scattering samples by harnessing reversibly switchable fluorescent proteins (RSFPs) which can be cycled between bright and dark states upon light illumination. Two distinct techniques, multiphoton deactivation and imaging (MPDI) and multiphoton activation and imaging (MPAI), are demonstrated on tissue phantoms labeled with Dronpa protein. Such a focal switch approach can generate pseudo background-free images. Conceptually different from wave-based approaches that try to reduce light scattering in turbid samples, our work represents a molecule-based strategy that focused on imaging probes.
HIV-positive MSM's knowledge of HPV and anal cancer self-sampling: A scoping review.
Poon, M K L; Wong, J P H; Li, A T W; Manuba, M; Bisignano, A; Owino, M; Vahabi, M
2018-02-01
Human papillomavirus (HPV) infection is the cause of anal squamous cell cancer (ASCC) in 80% of cases. Available research has also shown a high prevalence of anal HPV infection among men who have sex with men (MSM). However, HPV vaccination is low among MSM in Canada. In light of this information, we conducted a scoping review with the aim of exploring (1) the knowledge of HPV and anal cancer among HIV-positive MSM and (2) the acceptability of HPV and anal cancer self-sampling in this population. In conducting the review, we searched five electronic databases for peer-reviewed articles and abstracts published in English between 2007 and 2017. A total of 803 articles were retrieved; after accounting for duplicates (n = 40) and unmet criteria (n = 754), a total of 794 articles were excluded. A final total of nine articles were used in this review. Results of this review show that HIV-positive MSM have limited knowledge regarding the risks of anal cancer associated with HIV and HPV coinfection. Furthermore, there is limited research on HPV and anal cancer self-sampling in this population. However, the review of available studies suggested that HIV-positive MSM were open to anal cancer self-sampling. It also identified potential barriers to self-sampling. In conclusion, we provide suggestions and future directions for policy-makers and educators to develop inclusive and accessible strategies to reach HIV-positive MSM regarding anal cancer education and self-screening.
Wu, Qihua; Shi, Honglan; Ma, Yinfa; Adams, Craig; Eichholz, Todd; Timmons, Terry; Jiang, Hua
2015-01-01
N-Nitrosamines are potent mutagenic and carcinogenic emerging water disinfection by-products (DBPs). The most effective strategy to control the formation of these DBPs is minimizing their precursors in source water. Secondary and tertiary amines are the dominant precursors of N-nitrosamine formation during the drinking water disinfection process. Therefore, the screening and removal of these amines in source water are essential for preventing the formation of N-nitrosamines. A rapid, simple, and sensitive ultrafast liquid chromatography-tandem mass spectrometry (UFLC-MS/MS) method was developed in this study to determine seven amines, including dimethylamine, ethylmethylamine, diethylamine, dipropylamine, trimethylamine, 3-(dimethylaminomethyl)indole, and 4-dimethylaminoantipyrine, as major precursors of N-nitrosamines in drinking water systems. No sample preparation is needed except a simple filtration. Separation and detection can be achieved in 11 min per sample. The method detection limits of the selected amines range from 0.02 μg/L to 1 μg/L, except for ethylmethylamine (5 μg/L), and good calibration linearity was achieved. The developed method was applied to determine the selected precursors in source water and drinking water samples collected from the Midwest of the United States. In most water samples, the concentrations of the selected N-nitrosamine precursors were below their method detection limits. Dimethylamine was detected in some water samples at concentrations up to 25.4 μg/L. Copyright © 2014 Elsevier B.V. All rights reserved.
Robotics and automation in Mars exploration
NASA Technical Reports Server (NTRS)
Bourke, Roger D.; Sturms, Francis M., Jr.; Golombek, Matthew P.; Gamber, R. T.
1992-01-01
A new approach to the exploration of Mars is examined which relies on the use of smaller and simpler vehicles. The new strategy involves the following principles: limiting science objectives to retrieval of rock samples from several different but geologically homogeneous areas; making use of emerging microspacecraft technologies to significantly reduce the mass of hardware elements; simplifying missions to the absolutely essential elements; and managing risk through the employment of many identical independent pieces some of which may fail. The emerging technologies and their applications to robotic Mars missions are discussed.
El Dib, Regina; Tikkinen, Kari A O; Akl, Elie A; Gomaa, Huda A; Mustafa, Reem A; Agarwal, Arnav; Carpenter, Christopher R; Zhang, Yuchen; Jorge, Eliane C; Almeida, Ricardo A M B; do Nascimento Junior, Paulo; Doles, Joao Vitor P; Mustafa, Ahmad A; Sadeghirad, Behnam; Lopes, Luciane C; Bergamaschi, Cristiane C; Suzumura, Erica A; Cardoso, Marília M A; Corrente, José Eduardo; Stone, Samuel B; Schunemann, Holger J; Guyatt, Gordon H
2017-04-01
To provide a perspective on the current practice of randomized clinical trials (RCTs) of diagnostic strategies focusing on patient-important outcomes. We conducted a comprehensive search of MEDLINE and included RCTs published in full-text reports that evaluated alternative diagnostic strategies. Of 56,912 unique citations, we sampled 7,500 and included 103 eligible RCTs, therefore suggesting that MEDLINE includes approximately 781 diagnostic RCTs. The 103 eligible trials reported on: mortality (n = 41; 39.8%); morbidities (n = 63; 61.2%); symptoms/quality of life/functional status (n = 14; 13.6%); and on composite end points (n = 10; 9.7%). Of the studies that reported statistically significant results (n = 12; 11.6%), we judged 7 (58.3%) as at low risk of bias with respect to missing outcome data and 4 (33.3%) as at low risk of bias regarding blinding. Of the 41 RCTs that reported on mortality, only one (2.4%) reported statistically significant results. Of 63 RCTs addressing morbidity outcomes, 11 (17.5%) reported statistically significant results, all of which reported relative effects of greater than 20%. RCTs of diagnostic tests are not uncommon, and sometimes suggest benefits on patient-important outcomes but often suffer from limitations in sample size and conduct. Copyright © 2017 Elsevier Inc. All rights reserved.
Fossils out of sequence: Computer simulations and strategies for dealing with stratigraphic disorder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cutler, A.H.; Flessa, K.W.
Microstratigraphic resolution is limited by vertical mixing and reworking of fossils. Stratigraphic disorder is the degree to which fossils within a stratigraphic sequence are not in proper chronological order. Stratigraphic disorder arises through in situ vertical mixing of fossils and reworking of older fossils into younger deposits. The authors simulated the effects of mixing and reworking by simple computer models, and measured stratigraphic disorder using rank correlation between age and stratigraphic position (Spearman and Kendall coefficients). Mixing was simulated by randomly transposing pairs of adjacent fossils in a sequence. Reworking was simulated by randomly inserting older fossils into a younger sequence. Mixing is an inefficient means of producing disorder; after 500 mixing steps stratigraphic order is still significant at the 99% to 95% level, depending on the coefficient used. Reworking disorders sequences very efficiently: significant order begins to be lost when reworked shells make up 35% of the sequence. Thus a sequence can be dominated by undisturbed, autochthonous shells and still be disordered. The effects of mixing-produced disorder can be minimized by increasing sample size at each horizon. Increased spacing between samples is of limited utility in dealing with disordered sequences: while widely separated samples are more likely to be stratigraphically ordered, the smaller number of samples makes the detection of trends problematic.
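A minimal sketch of the two simulated processes and the rank-correlation measure of disorder described above is given below; sequence length, number of mixing steps, and the ages are illustrative assumptions, not the authors' model parameters.

```python
# Sketch of disorder by in situ mixing (adjacent transpositions) and by reworking
# (inserting anomalously old fossils), measured with Spearman rank correlation
# between stratigraphic position and age. All parameters are illustrative.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

def mix(ages, steps):
    """In situ mixing: randomly transpose pairs of adjacent fossils."""
    ages = ages.copy()
    for _ in range(steps):
        i = rng.integers(0, len(ages) - 1)
        ages[i], ages[i + 1] = ages[i + 1], ages[i]
    return ages

def rework(ages, n_old):
    """Reworking: insert fossils older than the whole sequence at random positions."""
    ages = list(ages)
    for age in ages[-1] + rng.uniform(100.0, 5000.0, size=n_old):
        ages.insert(int(rng.integers(0, len(ages) + 1)), age)
    return np.asarray(ages)

ages = np.linspace(0.0, 10000.0, 100)  # ordered sequence: young at top, old at bottom
for label, seq in [("mixed", mix(ages, 500)), ("reworked", rework(ages, 50))]:
    rho, p = spearmanr(np.arange(len(seq)), seq)
    print(f"{label:8s} Spearman rho = {rho:.2f} (p = {p:.1e})")
```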
NASA Astrophysics Data System (ADS)
Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro
2016-08-01
An increase in the efficiency of sampling from Boltzmann distributions would have a significant impact on deep learning and other machine-learning applications. Recently, quantum annealers have been proposed as a potential candidate to speed up this task, but several limitations still bar these state-of-the-art technologies from being used effectively. One of the main limitations is that, while the device may indeed sample from a Boltzmann-like distribution, quantum dynamical arguments suggest it will do so with an instance-dependent effective temperature, different from its physical temperature. Unless this unknown temperature can be unveiled, it might not be possible to effectively use a quantum annealer for Boltzmann sampling. In this work, we propose a strategy to overcome this challenge with a simple effective-temperature estimation algorithm. We provide a systematic study assessing the impact of the effective temperatures in the learning of a special class of restricted Boltzmann machine embedded on quantum hardware, which can serve as a building block for deep-learning architectures. We also provide a comparison to k-step contrastive divergence (CD-k) with k up to 100. Although assuming a suitable fixed effective temperature also allows us to outperform one-step contrastive divergence (CD-1), only when using an instance-dependent effective temperature do we find a performance close to that of CD-100 for the case studied here.
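For concreteness, the building block mentioned above (a restricted Boltzmann machine trained by contrastive divergence, with energies rescaled by an effective inverse temperature) can be sketched in a few lines of numpy; the sizes, learning rate, and beta_eff value below are illustrative assumptions, and the sketch performs a single classical CD-1 update rather than the quantum-assisted procedure studied in the paper.

```python
# Minimal CD-1 update for a small restricted Boltzmann machine, with an
# instance-dependent effective inverse temperature beta_eff rescaling the
# energies. All sizes and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_vis, n_hid, lr, beta_eff = 6, 4, 0.05, 0.8

W = 0.01 * rng.normal(size=(n_vis, n_hid))
b_h = np.zeros(n_hid)
b_v = np.zeros(n_vis)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_grad(v0):
    """Gradient estimate for W from one contrastive-divergence step on batch v0."""
    ph0 = sigmoid(beta_eff * (v0 @ W + b_h))           # hidden probabilities given data
    h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sampled hidden states
    pv1 = sigmoid(beta_eff * (h0 @ W.T + b_v))         # one-step reconstruction
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(beta_eff * (v1 @ W + b_h))
    # Positive-phase minus negative-phase correlations give the update direction.
    return (v0.T @ ph0 - v1.T @ ph1) / len(v0)

batch = (rng.random((32, n_vis)) < 0.5).astype(float)  # toy binary "data"
W += lr * cd1_grad(batch)
print(np.round(W, 3))
```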
Wu, Wenhe; Li, Jun; Pan, Dun; Li, Jiang; Song, Shiping; Rong, Mingge; Li, Zixi; Gao, Jimin; Lu, Jianxin
2014-10-08
Enzyme-linked immunosorbent assay (ELISA) provides a convenient means for the detection of Salmonella enterica serovar Typhimurium (STM), which is important for rapid diagnosis of foodborne pathogens. However, conventional ELISA is limited by antibody-antigen immunoreactions and suffers from poor sensitivity and tedious sample pretreatment. Therefore, development of novel ELISA remains challenging. Herein, we designed a comprehensive strategy for rapid, sensitive, and quantitative detection of STM with high specificity by gold nanoparticle-based enzyme-linked antibody-aptamer sandwich (nano-ELAAS) method. STM was captured and preconcentrated from samples with aptamer-modified magnetic particles, followed by binding with detector antibodies. Then nanoprobes carrying a large amount of reporter antibodies and horseradish peroxidase molecules were used for colorimetric signal amplification. Under the optimized reaction conditions, the nano-ELAAS assay had a quantitative detection range from 1 × 10(3) to 1 × 10(8) CFU mL(-1), a limit of detection of 1 × 10(3) CFU mL(-1), and a selectivity of >10-fold for STM in samples containing other bacteria at higher concentration with an assay time less than 3 h. In addition, the developed nanoprobes were improved in terms of detection range and/or sensitivity when compared with two commercial enzyme-labeled antibody signal reporters. Finally, the nano-ELAAS method was demonstrated to work well in milk samples, a common source of STM contamination.
Sampling strategies for estimating acute and chronic exposures of pesticides in streams
Crawford, Charles G.
2004-01-01
The Food Quality Protection Act of 1996 requires that human exposure to pesticides through drinking water be considered when establishing pesticide tolerances in food. Several systematic and seasonally weighted systematic sampling strategies for estimating pesticide concentrations in surface water were evaluated through Monte Carlo simulation, using intensive datasets from four sites in northwestern Ohio. The number of samples for the strategies ranged from 4 to 120 per year. Sampling strategies with a minimal sampling frequency outside the growing season can be used for estimating time-weighted mean and percentile concentrations of pesticides with little loss of accuracy and precision, compared to strategies with the same sampling frequency year round. Less frequent sampling strategies can be used at large sites. A sampling frequency of 10 times monthly during the pesticide runoff period at a 90 km2 basin and four times monthly at a 16,400 km2 basin provided estimates of the time-weighted mean, 90th, 95th, and 99th percentile concentrations that fell within 50 percent of the true value virtually all of the time. By taking into account basin size and the periodic nature of pesticide runoff, costs of obtaining estimates of time-weighted mean and percentile pesticide concentrations can be minimized.
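The Monte Carlo evaluation of candidate sampling schedules can be sketched as follows. The concentration record, schedule definitions, and error summaries below are invented placeholders (the study used intensive field datasets and specific strategies); the sketch only shows the mechanics of repeatedly subsampling a record and scoring each schedule against the "true" time-weighted mean and a high percentile.

```python
import numpy as np

rng = np.random.default_rng(1)

# Surrogate "intensive" record: one value per day for a year, with a runoff-season pulse.
days = np.arange(365)
true_conc = 0.05 + 2.0 * np.exp(-0.5 * ((days - 150) / 20.0) ** 2) * rng.lognormal(0, 0.5, 365)

def evaluate(sample_days, n_rep=1000):
    """Monte Carlo: redraw the schedule with a random start offset and compare the
    estimated mean / 95th percentile with the values from the full record."""
    err_mean, err_p95 = [], []
    for _ in range(n_rep):
        offset = rng.integers(0, 365)
        sub = true_conc[(np.asarray(sample_days) + offset) % 365]
        err_mean.append(abs(sub.mean() - true_conc.mean()) / true_conc.mean())
        err_p95.append(abs(np.percentile(sub, 95) - np.percentile(true_conc, 95))
                       / np.percentile(true_conc, 95))
    return np.median(err_mean), np.median(err_p95)

fixed_monthly = np.arange(0, 365, 30)                   # ~12 samples, year-round
weighted = np.concatenate([np.arange(120, 210, 3),      # intensive during the runoff season
                           np.arange(0, 120, 60), np.arange(210, 365, 60)])
for name, plan in [("monthly, year-round", fixed_monthly), ("seasonally weighted", weighted)]:
    m, p = evaluate(plan)
    print(f"{name:20s}  median relative error: mean {m:.2%}, 95th percentile {p:.2%}")
```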
Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A
2018-05-01
High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m2 by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R2 = 0.98; p < 0.01) with a mean bias of -2.2% and precision of 9.4%. A similar relationship was observed in children (R2 = 0.99; p < 0.01). The developed pharmacokinetic model-based sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
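For orientation, the sketch below reconstructs an area under the curve from the four optimal time points quoted in the abstract and compares it with a densely sampled reference, using an invented biexponential concentration-time profile and a naive trapezoidal rule with back- and terminal extrapolation. The study itself used Bayesian estimation with the population model (NONMEM/PopED) rather than this naive rule, which is part of why sparse sampling performs so well there; all parameter values here are illustrative only.

```python
import numpy as np

# Hypothetical post-infusion biexponential disposition; A, alpha, B, beta are invented.
A, alpha, B, beta = 8.0, 2.0, 1.5, 0.35          # mg/L and 1/h

def conc(t):
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

def auc_0_inf(times):
    """Linear trapezoid over the sampled points, plus crude back-extrapolation to t=0
    and extrapolation of the terminal tail using the terminal rate constant."""
    t = np.asarray(times, dtype=float)
    c = conc(t)
    auc = float(np.sum((c[1:] + c[:-1]) * np.diff(t) / 2.0))
    auc += c[0] * t[0]            # area before the first sample (rough approximation)
    return auc + c[-1] / beta     # area of the terminal tail

dense = np.linspace(0.05, 12.0, 500)
sparse = [0.08, 0.61, 2.0, 4.0]   # midpoints of the optimal windows reported in the abstract
print(f"dense AUC  = {auc_0_inf(dense):6.2f} mg*h/L")
print(f"sparse AUC = {auc_0_inf(sparse):6.2f} mg*h/L   # naive rule; model-based estimation does better")
```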
NASA Astrophysics Data System (ADS)
Skeffington, R. A.; Halliday, S. J.; Wade, A. J.; Bowes, M. J.; Loewenthal, M.
2015-01-01
The EU Water Framework Directive (WFD) requires that the ecological and chemical status of water bodies in Europe should be assessed, and action taken where possible to ensure that at least "good" quality is attained in each case by 2015. This paper is concerned with the accuracy and precision with which chemical status in rivers can be measured given certain sampling strategies, and how this can be improved. High frequency (hourly) chemical data from four rivers in southern England were subsampled to simulate different sampling strategies for four parameters used for WFD classification: dissolved phosphorus, dissolved oxygen, pH and water temperature. These data sub-sets were then used to calculate the WFD classification for each site. Monthly sampling was less precise than weekly sampling, but the effect on WFD classification depended on the closeness of the range of concentrations to the class boundaries. In some cases, monthly sampling for a year could result in the same water body being assigned to one of 3 or 4 WFD classes with 95% confidence, whereas with weekly sampling this was 1 or 2 classes for the same cases. In the most extreme case, random sampling effects could result in the same water body being assigned to any of the 5 WFD quality classes. The width of the weekly sampled confidence intervals was about 33% that of the monthly for P species and pH, about 50% for dissolved oxygen, and about 67% for water temperature. For water temperature, which is assessed as the 98th percentile in the UK, monthly sampling biases the mean downwards by about 1 °C compared to the true value, due to problems of assessing high percentiles with limited data. Confining sampling to the working week compared to all seven days made little difference, but a modest improvement in precision could be obtained by sampling at the same time of day within a 3 h time window, and this is recommended. For parameters with a strong diel variation, such as dissolved oxygen, the value obtained, and thus possibly the WFD classification, can depend markedly on when in the cycle the sample was taken. Specifying this in the sampling regime would be a straightforward way to improve precision, but there needs to be agreement about how best to characterise risk in different types of river. These results suggest that in some cases it will be difficult to assign accurate WFD chemical classes or to detect likely trends using current sampling regimes, even for these largely groundwater-fed rivers. A more critical approach to sampling is needed to ensure that management actions are appropriate and supported by data.
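A minimal sketch of the sub-sampling exercise: take (or simulate) an hourly record for one year, repeatedly thin it to weekly or monthly schedules with random start times, and examine the spread of the resulting annual statistic. The synthetic record below is a placeholder for the real high-frequency sensor data, and no WFD class boundaries are encoded; the point is only how the sampling interval widens the confidence interval of the annual mean.

```python
import numpy as np

rng = np.random.default_rng(7)

# Surrogate hourly record for one year of a determinand (arbitrary units), with seasonal
# and diel cycles plus noise.
hours = np.arange(365 * 24)
signal = (0.08 + 0.03 * np.sin(2 * np.pi * hours / (365 * 24))
          + 0.01 * np.sin(2 * np.pi * hours / 24)
          + rng.normal(0, 0.02, hours.size))

def annual_mean_interval(sample_every_h, n_rep=2000):
    """Spread of the annual mean when sampling every `sample_every_h` hours,
    over many random choices of the first sampling hour."""
    means = [signal[rng.integers(0, sample_every_h)::sample_every_h].mean()
             for _ in range(n_rep)]
    return np.percentile(means, [2.5, 97.5])

for label, interval in [("weekly", 7 * 24), ("monthly", 30 * 24)]:
    lo, hi = annual_mean_interval(interval)
    print(f"{label:8s} sampling: 95% range of the annual mean = [{lo:.3f}, {hi:.3f}]")
```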
Comparison of fuel value and combustion characteristics of two different RDF samples.
Sever Akdağ, A; Atımtay, A; Sanin, F D
2016-01-01
Generation of Municipal Solid Waste (MSW) tends to increase with the growing population and economic development of society; therefore, establishing environmentally sustainable waste management strategies is crucial. In this sense, waste-to-energy strategies have come into prominence since they increase resource efficiency and replace fossil fuels with renewable energy sources by enabling material and energy recovery instead of landfill disposal of waste. Refuse Derived Fuel (RDF), which is an alternative fuel produced from energy-rich MSW materials diverted from landfills, is one of the waste-to-energy strategies gaining more and more attention. This study aims to investigate the thermal characteristics and co-combustion efficiency of two RDF samples in Turkey. Proximate, ultimate and thermogravimetric analyses (TGA) were conducted on these samples. Furthermore, elemental compositions of ash from the RDF samples were determined by X-Ray Fluorescence (XRF) analysis. The RDF samples were combusted alone and co-combusted in mixtures with coal and petroleum coke in a lab-scale reactor at certain percentages on an energy basis (3%, 5%, 10%, 20% and 30%), where the co-combustion processes and efficiencies were investigated. It was found that the calorific values of the RDF samples on a dry basis were close to that of coal and slightly lower than that of the petroleum coke used in this study. Furthermore, the analysis indicated that when RDF in the mixture was higher than 10%, the CO concentration in the flue gas increased and so the combustion efficiency decreased; furthermore, the combustion characteristics changed from char combustion to volatile combustion. However, RDF addition to the fuel mixtures decreased the SO2 emission and did not change the NOx profiles. Also, XRF analysis showed that the slagging and fouling potential of RDF combustion was a function of the RDF portion in the fuel blend. When the RDF was combusted alone, the slagging and fouling indices of its ash were found to be higher than the limit values for slagging and fouling. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua
2017-12-01
In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and correlation coefficients between the SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is global random sampling and the other six are stratified random sampling on the top, middle, toe, top + mid, top + toe and mid + toe slope positions. When each sampling strategy was applied, sample sizes were gradually reduced, and 3000 random replicates were drawn at each sample size. For each sample size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and correlation coefficients between the SWC and soil/terrain properties were calculated to quantify the accuracy and uncertainty. The results showed that the uncertainty of the estimates decreased as the sample size increased. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤10%. However, at least 72 randomly sampled sites were needed to ensure that the estimated correlation coefficients had REs and CVs ≤10%. Compared with the other sampling strategies, reducing the number of sites on the middle slope had the least influence on the estimation of the hillslope mean SWC and correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to ensure that the estimated correlation coefficients had REs and CVs ≤10%. This suggests that when designing SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. Findings of this study will be useful for optimal SWC sampling design.
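The accuracy and uncertainty bookkeeping (relative error and coefficient of variation over 3000 random replicates) can be expressed compactly. The sketch below uses a synthetic 77-site by 43-date SWC matrix in place of the field data and implements only the global random sampling case; the numbers it prints are not the study's results.

```python
import numpy as np

rng = np.random.default_rng(3)

# Surrogate SWC data: 77 sites x 43 sampling dates (the real values came from the field survey).
swc = np.clip(rng.normal(0.30, 0.05, (77, 43)), 0.05, 0.55)
hillslope_mean = swc.mean(axis=0)                     # "true" hillslope mean on each date

def assess(n_sites, n_rep=3000):
    """Mean relative error and coefficient of variation of the hillslope-mean estimate
    when only `n_sites` randomly chosen sites are measured (global random sampling)."""
    estimates = np.empty((n_rep, swc.shape[1]))
    for r in range(n_rep):
        sites = rng.choice(swc.shape[0], size=n_sites, replace=False)
        estimates[r] = swc[sites].mean(axis=0)
    re = np.abs(estimates - hillslope_mean) / hillslope_mean
    cv = estimates.std(axis=0) / estimates.mean(axis=0)
    return re.mean() * 100, cv.mean() * 100

for n in (6, 12, 24, 48):
    re, cv = assess(n)
    print(f"{n:2d} sites: mean RE = {re:4.1f}%, CV = {cv:4.1f}%")
```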
Optimal Inspection of Imports to Prevent Invasive Pest Introduction.
Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G
2018-03-01
The United States imports more than 1 billion live plants annually-an important and growing pathway for introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing cost from accepted defective units. We derive a formula for expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
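The abstract's expected-slippage formula is not reproduced here, but a commonly used approximation makes the structure of the allocation problem clear: a lot escapes rejection only if no sampled unit is both infested and detected, and an accepted lot still carries its unsampled infested units. The sketch below uses that approximation with invented lot data and a greedy marginal-benefit allocation of the inspection budget; the paper's exact formula and optimization may differ.

```python
import numpy as np

def expected_slippage(N, n, p, d):
    """One plausible approximation (not necessarily the paper's exact formula):
    the lot is accepted if no sampled unit is both infested (prob. p) and detected
    (prob. d); if accepted, the unsampled remainder carries ~p*(N - n) infested units."""
    return (1.0 - p * d) ** n * p * (N - n)

def allocate(budget, lot_data, d=0.8):
    """Greedy allocation: repeatedly give the next sample to the lot where it
    reduces expected slippage the most (marginal-benefit heuristic)."""
    n = np.zeros(len(lot_data), dtype=int)
    for _ in range(budget):
        gains = [expected_slippage(N, k, p, d) - expected_slippage(N, k + 1, p, d)
                 if k < N else 0.0
                 for (N, p), k in zip(lot_data, n)]
        n[int(np.argmax(gains))] += 1
    return n

# (lot size, infestation rate) for a handful of heterogeneous lots -- illustrative numbers only.
lots = [(2000, 0.05), (500, 0.02), (5000, 0.10), (300, 0.01)]
print("samples per lot:", allocate(budget=200, lot_data=lots))
```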
Zhao, Yan; Chang, Cheng; Qin, Peibin; Cao, Qichen; Tian, Fang; Jiang, Jing; Li, Xianyu; Yu, Wenfeng; Zhu, Yunping; He, Fuchu; Ying, Wantao; Qian, Xiaohong
2016-01-21
Human plasma is a readily available clinical sample that reflects the status of the body in normal physiological and disease states. Although the wide dynamic range and immense complexity of plasma proteins are obstacles, comprehensive proteomic analysis of human plasma is necessary for biomarker discovery and further verification. Various methods such as immunodepletion, protein equalization and hyper fractionation have been applied to reduce the influence of high-abundance proteins (HAPs) and to reduce the high level of complexity. However, the depth at which the human plasma proteome has been explored in a relatively short time frame has been limited, which impedes the transfer of proteomic techniques to clinical research. Development of an optimal strategy is expected to improve the efficiency of human plasma proteome profiling. Here, five three-dimensional strategies combining HAP depletion (the 1st dimension) and protein fractionation (the 2nd dimension), followed by LC-MS/MS analysis (the 3rd dimension) were developed and compared for human plasma proteome profiling. Pros and cons of the five strategies are discussed for two issues: HAP depletion and complexity reduction. Strategies A and B used proteome equalization and tandem Seppro IgY14 immunodepletion, respectively, as the first dimension. Proteome equalization (strategy A) was biased toward the enrichment of basic and low-molecular weight proteins and had limited ability to enrich low-abundance proteins. By tandem removal of HAPs (strategy B), the efficiency of HAP depletion was significantly increased, whereas more off-target proteins were subtracted simultaneously. In the comparison of complexity reduction, strategy D involved a deglycosylation step before high-pH RPLC separation. However, the increase in sequence coverage did not increase the protein number as expected. Strategy E introduced SDS-PAGE separation of proteins, and the results showed oversampling of HAPs and identification of fewer proteins. Strategy C combined single Seppro IgY14 immunodepletion, high-pH RPLC fractionation and LC-MS/MS analysis. It generated the largest dataset, containing 1544 plasma protein groups and 258 newly identified proteins in a 30-h-machine-time analysis, making it the optimum three-dimensional strategy in our study. Further analysis of the integrated data from the five strategies showed identical distribution patterns in terms of sequence features and GO functional analysis with the 1929-plasma-protein dataset, further supporting the reliability of our plasma protein identifications. The characterization of 20 cytokines in the concentration range from sub-nanograms/milliliter to micrograms/milliliter demonstrated the sensitivity of the current strategies. Copyright © 2015 Elsevier B.V. All rights reserved.
A novel strategy for isolation and determination of sugars and sugar alcohols from conifers.
Sarvin, B A; Seregin, A P; Shpigun, O A; Rodin, I A; Stavrianidi, A N
2018-06-02
The ultrasound-assisted extraction method for isolation of 17 sugars and sugar alcohols from conifers with a subsequent hydrophilic interaction liquid chromatography-tandem mass spectrometry method for their determination is proposed. The optimization of extraction parameters was carried out using a Taguchi L9 (3^4) orthogonal array experimental design for the following parameters: methanol concentration in the extraction solution, extraction time, type of plant sample and extraction temperature. The optimal ultrasound-assisted extraction conditions were: MeOH concentration, 30% (water, 70%); extraction time, 30 min; type of plant sample, II (ground leaves 2-4 mm long); extraction temperature, 60 °C. Pure water and acetonitrile were used as eluents in gradient elution mode to separate the analytes. Direct determination of multiple sugars and sugar alcohols was carried out using a mass spectrometric detector operated in multiple reaction monitoring mode, providing detection limits in the range between 0.1 and 20 ng/mL and good analytical characteristics of the method without derivatization. The developed approach was validated by a multiple successive extraction method, applied to test its performance on a series of 10 samples, i.e. 2 samples from each of 5 genera: Abies, Larix, Picea, Pinus (Pinaceae) and Juniperus (Cupressaceae), widely distributed in the boreal conifer forests of Eurasia. The novel strategy can be used for profiling of sugars and sugar alcohols in a wide range of plant species. Copyright © 2018. Published by Elsevier B.V.
Granzotto, Clara; Sutherland, Ken
2017-03-07
This paper reports an improved method for the identification of Acacia gum in cultural heritage samples using matrix assisted laser desorption ionization-time-of-flight mass spectrometry (MALDI-TOF MS) after enzymatic digestion of the polysaccharide component. The analytical strategy was optimized using a reference Acacia gum (gum arabic, sp. A. senegal) and provided an unambiguous MS profile of the gum, characterized by specific and recognized oligosaccharides, from as little as 0.1 μg of material. The enhanced experimental approach with reduced detection limit was successfully applied to the analysis of naturally aged (∼80 year) gum arabic samples, pure and mixed with lead white pigment, and allowed the detection of gum arabic in samples from a late painting (1949/1954) by Georges Braque in the collection of the Art Institute of Chicago. This first application of the technique to characterize microsamples from a painting, in conjunction with analyses by gas chromatography/mass spectrometry (GC/MS), provided important insights into Braque's unusual mixed paint media that are also helpful to inform appropriate conservation treatments for his works. The robustness of the analytical strategy due to the reproducibility of the gum MS profile, even in the presence of other organic and inorganic components, together with the minimal sample size required, demonstrate the value of this new MALDI-TOF MS method as an analytical tool for the identification of gum arabic in microsamples from museum artifacts.
Allison, Rosalie; Bird, Emma L; McClean, Stuart
2017-01-01
It is estimated that 21% of boys and 16% of girls in England meet recommended physical activity guidelines. Team sport has the potential to increase physical activity levels; however, studies show that gender-based factors can influence girls' participation in team sport. Furthermore, evidence for the effectiveness of interventions promoting team sport among girls is limited. This systematic review aimed to assess the impact of physical activity interventions on secondary school-aged girls' (aged 11-18 years) participation in team sport and to identify potential strategies for increasing participation. Electronic databases and grey literature were systematically searched for studies of interventions targeting team sport participation among girls in the UK. Results were exported to Refworks, duplicates removed and eligible studies identified. Extracted data included participant details (such as sample size and age), components of the intervention, and outcomes assessed; each study was also quality appraised. Due to heterogeneity across studies, results were presented narratively. Four studies sourced from the grey literature met the inclusion criteria. Findings suggest that physical activity interventions can encourage girls to try new sports, but evidence is limited in relation to sustained participation. Potential strategies for promoting participation included: consultation with girls, implementation of appropriate peer-leaders and friendship group strategies, early intervention and consideration of intervention setting. This review highlights the limited availability of evidence on the effectiveness of physical activity interventions for promoting team sport participation among girls in the UK. Findings indicate that future research is needed to improve the methodological quality of complex intervention evaluation. Physical activity interventions may have the potential to encourage girls to try team sport, but their impact on sustained participation, and subsequent physical activity outcomes, is less apparent.
Celler, Katherine; Fujita, Miki; Kawamura, Eiko; Ambrose, Chris; Herburger, Klaus; Wasteneys, Geoffrey O.
2016-01-01
Microtubules are required throughout plant development for a wide variety of processes, and different strategies have evolved to visualize and analyze them. This chapter provides specific methods that can be used to analyze microtubule organization and dynamic properties in plant systems and summarizes the advantages and limitations for each technique. We outline basic methods for preparing samples for immunofluorescence labelling, including an enzyme-based permeabilization method, and a freeze-shattering method, which generates microfractures in the cell wall to provide antibodies access to cells in cuticle-laden aerial organs such as leaves. We discuss current options for live cell imaging of MTs with fluorescently tagged proteins (FPs), and provide chemical fixation, high pressure freezing/freeze substitution, and post-fixation staining protocols for preserving MTs for transmission electron microscopy and tomography. PMID:26498784
Kim, Mhinjine; Budd, Nadine; Batorsky, Benjamin; Krubiner, Carleigh; Manchikanti, Swathi; Waldrop, Greer; Trude, Angela; Gittelsohn, Joel
2017-01-01
Receptivity to strategies to improve the food environment by increasing access to healthier foods in small food stores is underexplored. We conducted 20 in-depth interviews with small storeowners of different ethnic backgrounds as part of a small-store intervention trial. Store owners perceived barriers and facilitators to purchase, stock, and promote healthy foods. Barriers mentioned included customer preferences for higher fat and sweeter taste and for lower prices; lower wholesaler availability of healthy food; and customers' lack of interest in health. Most store owners thought positively of taste tests, free samples, and communication interventions. However, they varied in terms of their expectations of the effect of these strategies on customers' healthy food purchases. The findings reported add to the limited data on motivating and working with small-store owners in low-income urban settings.
Zhao, Hui; Wang, Yong-Sheng; Tang, Xian; Zhou, Bin; Xue, Jin-Hua; Liu, Hui; Liu, Shan-Du; Cao, Jin-Xiu; Li, Ming-Hui; Chen, Si-Han
2015-08-05
We report on an enzyme-free and label-free strategy for the ultrasensitive determination of adenosine. A novel multipurpose adenosine aptamer (MAAP) is designed, which serves as an effective target recognition probe and a capture probe for malachite green. In the presence of adenosine, the conformation of the MAAP is converted from a hairpin structure to a G-quadruplex. Upon addition of malachite green into this solution, a noticeable enhancement of resonance light scattering was observed. The signal response is directly proportional to the concentration of adenosine ranging from 75 pM to 2.2 nM, with a detection limit of 23 pM, which is 100- to 10,000-fold lower than those obtained by previously reported methods. Moreover, this strategy has been applied successfully for detecting adenosine in human urine and blood samples, further proving its reliability. The mechanism by which adenosine induces MAAP to form a G-quadruplex was demonstrated by a series of control experiments. Such a MAAP probe can also be applied in other strategies, such as fluorescence or spectrophotometric ones. We suppose that this strategy can be expanded to develop a universal analytical platform for various target molecules in the biomedical field and clinical diagnosis. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Liao, Kaihua; Zhou, Zhiwen; Lai, Xiaoming; Zhu, Qing; Feng, Huihui
2017-04-01
The identification of representative soil moisture sampling sites is important for the validation of remotely sensed mean soil moisture in a certain area and ground-based soil moisture measurements in catchment or hillslope hydrological studies. Numerous approaches have been developed to identify optimal sites for predicting mean soil moisture. Each method has certain advantages and disadvantages, but they have rarely been evaluated and compared. In our study, surface (0-20 cm) soil moisture data from January 2013 to March 2016 (a total of 43 sampling days) were collected at 77 sampling sites on a mixed land-use (tea and bamboo) hillslope in the hilly area of Taihu Lake Basin, China. A total of 10 methods (temporal stability (TS) analyses based on 2 indices, K-means clustering based on 6 kinds of inputs and 2 random sampling strategies) were evaluated for determining optimal sampling sites for mean soil moisture estimation. They were TS analyses based on the smallest index of temporal stability (ITS, a combination of the mean relative difference and standard deviation of relative difference (SDRD)) and based on the smallest SDRD, K-means clustering based on soil properties and terrain indices (EFs), repeated soil moisture measurements (Theta), EFs plus one-time soil moisture data (EFsTheta), and the principal components derived from EFs (EFs-PCA), Theta (Theta-PCA), and EFsTheta (EFsTheta-PCA), and global and stratified random sampling strategies. Results showed that the TS based on the smallest ITS was better (RMSE = 0.023 m3 m-3) than that based on the smallest SDRD (RMSE = 0.034 m3 m-3). The K-means clustering based on EFsTheta (-PCA) was better (RMSE <0.020 m3 m-3) than those based on EFs (-PCA) and Theta (-PCA). The sampling design stratified by land use was more efficient than the global random method. Forty and 60 sampling sites are needed for stratified sampling and global sampling, respectively, to make their performances comparable to the best K-means method (EFsTheta-PCA). Overall, TS required only one site, but its accuracy was limited. The best K-means method required <8 sites and yielded high accuracy, but extra soil and terrain information is necessary when using this method. The stratified random sampling strategy can be used when no prior knowledge of soil moisture variation is available. This information will help in selecting the optimal methods for estimating the area mean soil moisture.
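For readers unfamiliar with the temporal stability (TS) indices, the sketch below shows the usual computation of the mean relative difference (MRD), its standard deviation (SDRD), and the combined index ITS = sqrt(MRD^2 + SDRD^2), and how the single site with the smallest index is used to predict the areal mean. The SWC matrix is synthetic, and the definitions are the common ones from the TS literature, assumed (but not guaranteed) to match those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
swc = np.clip(rng.normal(0.30, 0.05, (77, 43)), 0.05, 0.55)    # sites x dates (surrogate data)

field_mean = swc.mean(axis=0)                                   # spatial mean on each date
rel_diff = (swc - field_mean) / field_mean                      # relative difference per site/date
mrd = rel_diff.mean(axis=1)                                     # mean relative difference
sdrd = rel_diff.std(axis=1, ddof=1)                             # standard deviation of relative difference
its = np.sqrt(mrd ** 2 + sdrd ** 2)                             # index of temporal stability

for name, site in [("smallest ITS", np.argmin(its)), ("smallest SDRD", np.argmin(sdrd))]:
    pred = swc[site] / (1.0 + mrd[site])                        # bias-corrected prediction of the areal mean
    rmse = np.sqrt(np.mean((pred - field_mean) ** 2))
    print(f"{name:14s}: site {site:2d}, RMSE = {rmse:.3f} m3/m3")
```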
A survey of managed care strategies for pregnant smokers.
Barker, D C; Robinson, L A; Rosenthal, A C
2000-01-01
The purpose of this study was to measure the content and comprehensiveness of pregnancy-specific smoking cessation strategies within managed care organisations (MCOs) responding affirmatively to the national 1997-98 Addressing Tobacco in Managed Care (ATMC) survey. This cross-sectional follow-up study consisted of a fax survey sent to medical directors and a 37-question telephone survey of program overseers about the smoking cessation strategy. 147 MCOs identifying a pregnancy-specific smoking cessation strategy on the 1997-98 ATMC survey served as the initial sample; 88 MCOs of 128 eligible plans completed both components, with a response rate of 69%. Pregnancy-specific smoking cessation strategies varied. 40% of respondents used the Agency for Health Care Policy and Research guidelines for clinical smoking cessation to design their strategy. Strategies included self-help materials, quit classes, telephone support and brief counselling by providers, linkages to quality improvement efforts, and use of patient databases for outreach. Only 42% offered a postpartum relapse prevention element. Lack of patient interest, competing clinic priorities, and the lack of a smoker identification system were the most problematic barriers to implementing strategies, common to at least a quarter of respondents. A majority ranked best practice manuals and web site linkages as the most useful form of technical assistance, followed by peer-to-peer counselling, regional workshops, newsletters, on-site assistance, and national conferences. The survey provides the first profile of prenatal tobacco treatment strategies in managed care. While design limitations prevent generalisation of these results to all MCOs, such information can help guide technical assistance to plans interested in reducing smoking among pregnant women.
Carneiro, Lorena Ribeiro de A; Lima, Albertina P; Machado, Ricardo B; Magnusson, William E
2016-01-01
Species-distribution models (SDM) are tools with potential to inform environmental-impact studies (EIA). However, they are not always appropriate and may result in improper and expensive mitigation and compensation if their limitations are not understood by decision makers. Here, we examine the use of SDM for frogs that were used in impact assessment using data obtained from the EIA of a hydroelectric project located in the Amazon Basin in Brazil. The results show that lack of knowledge of species distributions limits the appropriate use of SDM in the Amazon region for most target species. Because most of these targets are newly described and their distributions poorly known, data about their distributions are insufficient to be effectively used in SDM. Surveys that are mandatory for the EIA are often conducted only near the area under assessment, and so models must extrapolate well beyond the sampled area to inform decisions made at much larger spatial scales, such as defining areas to be used to offset the negative effects of the projects. Using distributions of better-known species in simulations, we show that geographical-extrapolations based on limited information of species ranges often lead to spurious results. We conclude that the use of SDM as evidence to support project-licensing decisions in the Amazon requires much greater area sampling for impact studies, or, alternatively, integrated and comparative survey strategies, to improve biodiversity sampling. When more detailed distribution information is unavailable, SDM will produce results that generate uncertain and untestable decisions regarding impact assessment. In many cases, SDM is unlikely to be better than the use of expert opinion.
Information Foraging for Perceptual Decisions
2016-01-01
We tested an information foraging framework to characterize the mechanisms that drive active (visual) sampling behavior in decision problems that involve multiple sources of information. Experiments 1 through 3 involved participants making an absolute judgment about the direction of motion of a single random dot motion pattern. In Experiment 4, participants made a relative comparison between 2 motion patterns that could only be sampled sequentially. Our results show that: (a) Information (about noisy motion information) grows to an asymptotic level that depends on the quality of the information source; (b) The limited growth is attributable to unequal weighting of the incoming sensory evidence, with early samples being weighted more heavily; (c) Little information is lost once a new source of information is being sampled; and (d) The point at which the observer switches from 1 source to another is governed by online monitoring of his or her degree of (un)certainty about the sampled source. These findings demonstrate that the sampling strategy in perceptual decision-making is under some direct control by ongoing cognitive processing. More specifically, participants are able to track a measure of (un)certainty and use this information to guide their sampling behavior. PMID:27819455
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Ritin; Dill, Brian; Chourey, Karuna
2012-01-01
The expanding use of surfactants for proteome sample preparations has prompted the need to systematically optimize the application and removal of these MS-deleterious agents prior to proteome measurements. Here we compare four different detergent clean-up methods (trichloroacetic acid (TCA) precipitation, chloroform/methanol/water (CMW) extraction, a commercial detergent removal spin column method (DRS) and filter-aided sample preparation (FASP)) with respect to varying amounts of protein biomass in the samples, and provide efficiency benchmarks with respect to protein, peptide, and spectral identifications for each method. Our results show that for protein-limited samples, FASP outperforms the other three clean-up methods, while at high protein amounts all the methods are comparable. This information was used in a dual strategy of comparing molecular-weight-fractionated and unfractionated lysates from three increasingly complex samples (Escherichia coli, a five microbial isolate mixture, and a natural microbial community groundwater sample), which were all lysed with SDS and cleaned up using FASP. The two approaches complemented each other by enhancing the number of protein identifications by 8%-25% across the three samples and provided broad pathway coverage.
Taking Advantage of Selective Change Driven Processing for 3D Scanning
Vegara, Francisco; Zuccarello, Pedro; Boluda, Jose A.; Pardo, Fernando
2013-01-01
This article deals with the application of the principles of SCD (Selective Change Driven) vision to 3D laser scanning. Two experimental sets have been implemented: one with a classical CMOS (Complementary Metal-Oxide Semiconductor) sensor, and the other one with a recently developed CMOS SCD sensor for comparative purposes, both using the technique known as Active Triangulation. An SCD sensor only delivers the pixels that have changed most, ordered by the magnitude of their change since their last readout. The 3D scanning method is based on the systematic search through the entire image to detect pixels that exceed a certain threshold, showing the SCD approach to be ideal for this application. Several experiments for both capturing strategies have been performed to try to find the limitations in high speed acquisition/processing. The classical approach is limited by the sequential array acquisition, as predicted by the Nyquist–Shannon sampling theorem, and this has been experimentally demonstrated in the case of a rotating helix. These limitations are overcome by the SCD 3D scanning prototype achieving a significantly higher performance. The aim of this article is to compare both capturing strategies in terms of performance in the time and frequency domains, so they share all the static characteristics including resolution, 3D scanning method, etc., thus yielding the same 3D reconstruction in static scenes. PMID:24084110
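The SCD readout principle, delivering only the pixels that changed most since their last readout, ordered by the magnitude of the change, can be emulated in a few lines on ordinary frames. The sketch below is a software emulation with invented frame data, not an interface to the actual CMOS SCD sensor or the triangulation pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)
prev = rng.integers(0, 256, (64, 64))                   # frame at the last readout
curr = prev.copy()
curr[20:28, 30:38] += rng.integers(40, 80, (8, 8))      # e.g. the laser line moved across this patch

def scd_readout(previous, current, n_pixels):
    """Deliver only the n_pixels whose intensity changed most since their last
    readout, ordered by the magnitude of that change (largest change first)."""
    change = np.abs(current - previous)
    flat = np.argsort(change, axis=None)[::-1][:n_pixels]
    rows, cols = np.unravel_index(flat, change.shape)
    return [(int(r), int(c), int(current[r, c])) for r, c in zip(rows, cols)]

events = scd_readout(prev, curr, n_pixels=10)
print(events)   # only these pixel events would be passed to the 3D triangulation stage
```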
Barnieh, Lianne; Clement, Fiona; Harris, Anthony; Blom, Marja; Donaldson, Cam; Klarenbach, Scott; Husereau, Don; Lorenzetti, Diane; Manns, Braden
2014-01-01
Publicly-funded drug plans vary in strategies used and policies employed to reduce continually increasing pharmaceutical expenditures. We systematically reviewed the utilization of cost-sharing strategies and physician-directed prescribing regulations in publicly-funded formularies within member nations of the Organization of Economic Cooperation and Development (OECD). Using the OECD nations as the sampling frame, a search for cost-sharing strategies and physician-directed prescribing regulations was done using published and grey literature. Collected data was verified by a system expert within the prescription drug insurance plan in each country, to ensure the accuracy of key data elements across plans. Significant variation in the use of cost-sharing mechanisms was seen. Copayments were the most commonly used cost-containment measure, though their use and amount varied for those with certain conditions, most often chronic diseases (in 17 countries), and by socio-economic status (either income or employment status), or with age (in 15 countries). Caps and deductibles were only used by five systems. Drug cost-containment strategies targeting physicians were also identified in 24 countries, including guideline-based prescribing, prescription monitoring and incentive structures. There was variable use of cost-containment strategies to limit pharmaceutical expenditures in publicly funded formularies within OECD countries. Further research is needed to determine the best approach to constrain costs while maintaining access to pharmaceutical drugs.
Public Perspectives on Biospecimen Procurement: What Biorepositories Should Consider
L'Heureux, Jamie; Murray, Jeffrey C.; Newbury, Elizabeth; Shinkunas, Laura
2013-01-01
Purpose Human biospecimens are central to biobanking efforts, yet how members of the public think about biobank procurement strategies is not well understood. This study aimed to explore public perspectives toward the procurement of residual clinical material versus “direct” procurement strategies such as the drawing of blood. Methods Members of the public residing in and beyond the biobank catchment area of the University of Iowa Hospitals and Clinics were randomly selected to participate in focus groups and a telephone survey. Results The majority of survey participants (75%, n=559) found both residual and direct procurement strategies equally workable. Small proportions preferred either residual (15%; n=117) or direct (5%; n=40) procurement. Focus group participants (n=48) could identify benefits to both procurement strategies, but raised concerns about possible donor inconvenience/discomfort and reduced biospecimen accrual in the case of direct procurement. Residual procurement raised concerns about lower-quality samples being procured without full donor awareness. Conclusion Biobanks should consider that members of the public in their research programs may be willing to make specimen donations regardless of whether a residual or direct procurement strategy is employed. Limiting patient discomfort and inconvenience may make direct procurement strategies more acceptable to some members of the public. Ensuring donor awareness through effective informed consent may allay public concerns about the indirectness of donating clinical biospecimens. PMID:24850089
Coping strategies as mediators in relation to resilience and posttraumatic stress disorder.
Thompson, Nicholas J; Fiorillo, Devika; Rothbaum, Barbara O; Ressler, Kerry J; Michopoulos, Vasiliki
2018-01-01
Resilience has been shown to protect against the development of posttraumatic stress disorder (PTSD) in the aftermath of trauma. However, it remains unclear how coping strategies influence resilience and PTSD development in the acute aftermath of trauma. The current prospective, longitudinal study investigated the relationship between resilience, coping strategies, and the development of chronic PTSD symptoms. A sample of patients was recruited from an emergency department following a Criterion A trauma. Follow-up assessments were completed at 1-, 3-, and 6-months post-trauma to assess PTSD symptom development (N = 164). Resilience at 1-month positively correlated with the majority of active coping strategies (all p < .05) and negatively correlated with the majority of avoidant coping strategies (all p < .05), as well as future PTSD symptoms (p < .001). Additionally, all avoidant coping strategies, including social withdrawal, positively correlated with future PTSD symptoms (all p < .01). After controlling for demographic and clinical variables, social withdrawal at 3-months fully mediated the relationship between resilience at 1-month and PTSD symptoms at 6-months. Limitations include participant drop out and the conceptual overlap between avoidant coping and PTSD. These data suggest that resilience and social withdrawal may be possible therapeutic targets for mitigating the development of chronic PTSD in the aftermath of trauma. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Anagnostakis, Marios J.
2015-11-01
For several decades, natural and artificial radioactivity in the environment have been extensively studied all around the world. Nuclear accidents - mainly that of Chernobyl - have led to the development of the field of radioecology, while detector systems and techniques - most prominently γ-spectrometry - have been continuously developed through the years to meet researchers' needs. The study of natural radionuclides that was originally limited to 226Ra, 232Th and 40K was then extended to include radionuclides such as 234Th, 210Pb, 235U and 7Be, which allowed the study of radioactive equilibrium. Besides their importance from the radiation protection point of view, many radionuclides are also used as tracers of environmental processes, such as studies of aerosols and air-mass transport (7Be, 10Be, 22Na), soil erosion, sedimentation and geochronology (210Pb, 137Cs), marine ecosystem studies and studies related to climate change. All these studies require specialized sampling strategies and sample preparation techniques as well as high-quality measurements, while the improvement of detection limits is often of vital importance. This work is a review of environmental radioactivity measurements and applications, mainly focused on the field of γ-spectrometry, for which difficulties and limitations are presented, together with future trends, new challenges and applications.
Prevalence and counts of Salmonella spp. in minimally processed vegetables in São Paulo, Brazil.
Sant'Ana, Anderson S; Landgraf, Mariza; Destro, Maria Teresa; Franco, Bernadette D G M
2011-09-01
Minimally processed vegetables (MPV) may be important vehicles of Salmonella spp. and cause disease. This study aimed at detecting and enumerating Salmonella spp. in MPV marketed in the city of São Paulo, Brazil. A total of 512 samples of MPV packages collected in retail stores were tested for Salmonella spp. and for total coliforms and Escherichia coli as indicators of the hygienic status. Salmonella spp. was detected in four samples, two using the detection method and two using the counting method, where the results were 8.8 × 10(2) CFU/g and 2.4 × 10(2) CFU/g. The serovars were Salmonella Typhimurium (three samples) and Salmonella enterica subsp. enterica O:47:z4,z23:- (one sample). Fourteen samples (2.7%) presented counts of E. coli above the maximum limit established by the Brazilian regulation for MPV (10(2) CFU/g). Therefore, tightened surveillance and effective intervention strategies are necessary in order to address consumers' and governments' concerns about the safety of MPV. Copyright © 2011 Elsevier Ltd. All rights reserved.
Analysis of biofluids in aqueous environment based on mid-infrared spectroscopy.
Fabian, Heinz; Lasch, Peter; Naumann, Dieter
2005-01-01
In this study we describe a semiautomatic Fourier transform infrared spectroscopic methodology for the analysis of liquid serum samples, which combines simple sample introduction with high sample throughput. The applicability of this new infrared technology to the analysis of liquid serum samples from a cohort of cattle naturally infected with bovine spongiform encephalopathy and from controls was explored in comparison to the conventional approach based on transmission infrared spectroscopy of dried serum films. Artificial neural network analysis of the infrared data was performed to differentiate between bovine spongiform encephalopathy-negative controls and animals in the late stage of the disease. After training of artificial neural network classifiers, infrared spectra of sera from an independent external validation data set were analyzed. In this way, sensitivities between 90 and 96% and specificities between 84 and 92% were achieved, respectively, depending upon the strategy of data collection and data analysis. Based on these results, the advantages and limitations of the liquid sample technique and the dried film approach for routine analysis of biofluids are discussed. 2005 Society of Photo-Optical Instrumentation Engineers.
Mariño-Repizo, Leonardo; Goicoechea, Hector; Raba, Julio; Cerutti, Soledad
2018-06-07
A novel, simple, easy and cheap sample treatment strategy based on salting-out assisted liquid-liquid extraction (SALLE) for ochratoxin A (OTA) ultra-trace analysis in beer samples using ultra-high performance liquid chromatography-tandem mass spectrometry determination was developed. The factors involved in the efficiency of pretreatment were studied employing a factorial design in the screening phase, and the optimal conditions of the significant variables on the analytical response were evaluated using a central composite face-centred design (CCF). Consequently, the amount of salt ((NH4)2SO4), together with the volumes of sample, hydrophilic (acetone) and nonpolar (toluene) solvents, and times of vortexing and centrifugation were optimized. Under optimized conditions, the limits of detection (LOD) and quantification (LOQ) were 0.02 µg l-1 and 0.08 µg l-1, respectively. OTA extraction recovery by SALLE was approximately 90% (0.2 µg l-1). Furthermore, the methodology was in agreement with EU Directive requirements and was successfully applied for the analysis of beer samples.
User-Driven Sampling Strategies in Image Exploitation
Harvey, Neal R.; Porter, Reid B.
2013-12-23
Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. We discovered that user-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. Furthermore, in preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.
User-driven sampling strategies in image exploitation
NASA Astrophysics Data System (ADS)
Harvey, Neal; Porter, Reid
2013-12-01
Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. In preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.
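The distinction between computer-driven (active learning) and user-driven sampling is easy to state in code: the training loop is identical and only the function that picks the next points to label changes. The sketch below uses scikit-learn and synthetic 2-D data; the "user" is simulated by random browsing, which is only a crude stand-in for the human behaviour studied in the paper, and the accuracy numbers printed are not the paper's results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, 2000) > 0).astype(int)

def machine_driven(clf, pool, k):                  # active learning: least-confident examples
    p = clf.predict_proba(X[pool])[:, 1]
    return pool[np.argsort(np.abs(p - 0.5))[:k]]

def user_driven(clf, pool, k):                     # stand-in for a user picking points to label;
    return rng.choice(pool, k, replace=False)      # simulated here as unguided browsing

def train_loop(pick_next, n_iter=20, batch=5):
    labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    for _ in range(n_iter):
        pool = np.setdiff1d(np.arange(len(X)), labeled)
        labeled += list(pick_next(clf, pool, batch))    # the only difference between strategies
        clf = LogisticRegression().fit(X[labeled], y[labeled])
    return clf.score(X, y)

print("machine-driven accuracy:", round(train_loop(machine_driven), 3))
print("user-driven accuracy:   ", round(train_loop(user_driven), 3))
```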
Grey W. Pendleton
1995-01-01
Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation...
Flood, David; Garcia, Pablo; Douglas, Kate; Hawkins, Jessica
2018-01-01
Objective Screening is a key strategy to address the rising burden of chronic kidney disease (CKD) in low-income and middle-income countries. However, there are few reports regarding the implementation of screening programmes in resource-limited settings. The objectives of this study are to (1) to share programmatic experiences implementing CKD screening in a rural, resource-limited setting and (2) to assess the burden of renal disease in a community-based diabetes programme in rural Guatemala. Design Cross-sectional assessment of glomerular filtration rate (GFR) and urine albumin. Setting Central Highlands of Guatemala. Participants We enrolled 144 adults with type 2 diabetes in a community-based CKD screening activity carried out by the sponsoring institution. Outcome measures Prevalence of renal disease and risk of CKD progression using Kidney Disease: Improving Global Outcomes definitions and classifications. Results We found that 57% of the sample met GFR and/or albuminuria criteria suggestive of CKD. Over half of the sample had moderate or greater increased risk for CKD progression, including nearly 20% who were classified as high or very high risk. Hypertension was common in the sample (42%), and glycaemic control was suboptimal (mean haemoglobin A1c 9.4%±2.5% at programme enrolment and 8.6%±2.3% at time of CKD screening). Conclusions The high burden of renal disease in our patient sample suggests an imperative to better understand the burden and risk factors of CKD in Guatemala. The implementation details we share reveal the tension between evidence-based CKD screening versus screening that can feasibly be delivered in resource-limited global settings. PMID:29358450
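The risk classification referred to in the abstract follows the KDIGO grid combining GFR and albuminuria categories. The sketch below encodes the commonly published version of that grid; the thresholds are reproduced from the public guideline from memory and should be checked against the original source before any real use.

```python
def kdigo_risk(egfr, acr):
    """Map eGFR (mL/min/1.73 m2) and urine albumin-to-creatinine ratio (mg/g) to a CKD
    progression risk category, following the commonly published KDIGO 'heat map'
    (reproduced from memory -- verify against the guideline before clinical use)."""
    g = 0 if egfr >= 60 else 1 if egfr >= 45 else 2 if egfr >= 30 else 3   # G1-2, G3a, G3b, G4-5
    a = 0 if acr < 30 else 1 if acr < 300 else 2                           # A1, A2, A3
    matrix = [
        ["low",       "moderate",  "high"],       # eGFR >= 60
        ["moderate",  "high",      "very high"],  # 45-59
        ["high",      "very high", "very high"],  # 30-44
        ["very high", "very high", "very high"],  # < 30
    ]
    return matrix[g][a]

for egfr, acr in [(95, 10), (52, 80), (38, 400), (24, 15)]:
    print(f"eGFR {egfr:>3}, ACR {acr:>3} mg/g -> {kdigo_risk(egfr, acr)} risk of progression")
```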
Cuadros-Inostroza, Alvaro; Caldana, Camila; Redestig, Henning; Kusano, Miyako; Lisec, Jan; Peña-Cortés, Hugo; Willmitzer, Lothar; Hannah, Matthew A
2009-12-16
Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data.
Lindahl, Patric; Keith-Roach, Miranda; Worsfold, Paul; Choi, Min-Seok; Shin, Hyung-Seon; Lee, Sang-Hoon
2010-06-25
Sources of plutonium isotopes to the marine environment are well defined, both spatially and temporally, which makes Pu a potential tracer for oceanic processes. This paper presents the selection, optimisation and validation of a sample preparation method for the ultra-trace determination of Pu isotopes ((240)Pu and (239)Pu) in marine samples by multi-collector (MC) ICP-MS. The method was optimised for the removal of the interference from (238)U and the chemical recovery of Pu. Comparison of various separation strategies using AG1-X8, TEVA, TRU, and UTEVA resins to determine Pu in marine calcium carbonate samples is reported. A combination of anion-exchange (AG1-X8) and extraction chromatography (UTEVA/TRU) was the most suitable, with a radiochemical Pu yield of 87+/-5% and a U decontamination factor of 1.2 x 10(4). Validation of the method was accomplished by determining Pu in various IAEA certified marine reference materials. The estimated MC-ICP-MS instrumental limit of detection for (239)Pu and (240)Pu was 0.02 fg mL(-1), with an absolute limit of quantification of 0.11 fg. The proposed method allows the determination of ultra-trace Pu, at femtogram levels, in small size marine samples (e.g., 0.6-2.0 g coral or 15-20 L seawater). Finally, the analytical method was applied to determining historical records of the Pu signature in coral samples from the tropical Northwest Pacific and (239+240)Pu concentrations and (240)Pu/(239)Pu atom ratios in seawater samples as part of the 2008 GEOTRACES intercalibration exercise. Copyright 2010 Elsevier B.V. All rights reserved.
Sampling Designs in Qualitative Research: Making the Sampling Process More Public
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Leech, Nancy L.
2007-01-01
The purpose of this paper is to provide a typology of sampling designs for qualitative researchers. We introduce the following sampling strategies: (a) parallel sampling designs, which represent a body of sampling strategies that facilitate credible comparisons of two or more different subgroups that are extracted from the same levels of study;…
Accounting for undetected compounds in statistical analyses of mass spectrometry 'omic studies.
Taylor, Sandra L; Leiserowitz, Gary S; Kim, Kyoungmi
2013-12-01
Mass spectrometry is an important high-throughput technique for profiling small molecular compounds in biological samples and is widely used to identify potential diagnostic and prognostic compounds associated with disease. Data generated by mass spectrometry commonly contain many missing values, which arise when a compound is absent from a sample or is present at a concentration below the detection limit. Several strategies are available for statistically analyzing data with missing values. The accelerated failure time (AFT) model assumes all missing values result from censoring below a detection limit. Under a mixture model, missing values can result from a combination of censoring and the absence of a compound. We compare the power and estimation properties of a mixture model with those of an AFT model. Based on simulated data, we found the AFT model to have greater power to detect differences in means and point mass proportions between groups. However, the AFT model yielded biased estimates, with the bias increasing as the proportion of observations in the point mass increased, whereas estimates from the mixture model were unbiased except when all missing observations arose from censoring. These findings suggest using the AFT model for hypothesis testing and the mixture model for estimation. We demonstrated this approach through application to glycomics data of serum samples from women with ovarian cancer and matched controls.
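A small simulation makes the distinction between the two missingness mechanisms concrete. The sketch below is illustrative only; the detection limit, point-mass proportion, and log-normal abundance model are assumptions, not the authors' simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)
n, detection_limit, p_absent = 200, 1.0, 0.3

# Simulate log-normal abundances with a point mass of truly absent compounds,
# then censor every value below the detection limit (it becomes "missing").
absent = rng.random(n) < p_absent
abundance = np.where(absent, 0.0, rng.lognormal(mean=0.5, sigma=1.0, size=n))
observed = np.where(abundance >= detection_limit, abundance, np.nan)

# AFT-style view: every missing value is treated as censored at the limit.
missing_fraction = np.isnan(observed).mean()

# Mixture-style view: missingness = true absence + censoring of present compounds.
true_absence_fraction = absent.mean()
censored_but_present = missing_fraction - true_absence_fraction

print(f"missing: {missing_fraction:.2f}  absent: {true_absence_fraction:.2f}  "
      f"censored-but-present: {censored_but_present:.2f}")
```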
Elskens, Marc; Vloeberghs, Daniel; Van Elsen, Liesbeth; Baeyens, Willy; Goeyens, Leo
2012-09-15
For reasons of food safety, packaging and food contact materials must be submitted to migration tests. Testing of silicone moulds is often very laborious, since three replicate tests are required to decide on their compliance. This paper presents a general modelling framework to predict the sample's compliance or non-compliance using the results of the first two migration tests. It compares the outcomes of models with multiple continuous predictors with a class of models involving latent and dummy variables. The model's prediction ability was tested using cross-validation and external validation, i.e. model revalidation each time a new measurement set became available. At the overall migration limit of 10 mg dm(-2), the relative uncertainty on a prediction was estimated to be ~10%. Taking the default values for α and β equal to 0.05, the maximum value that can be predicted for sample compliance was therefore 7 mg dm(-2). Beyond this limit the risk of false compliant results increases significantly, and a third migration test should be performed. The result of this latter test defines the sample's compliance or non-compliance. Proposals for compliance control inspired by the current dioxin control strategy are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.
Teschke, K; Hertzman, C; Morrison, B
1994-03-01
Personal respirable (N = 230) and total (N = 237) dust measurements were made in two coastal British Columbia sawmills using a sampling strategy that randomly selected workers from all jobs in the mills over two seasons. Information about job title, department, season, weather conditions, location of the job relative to wood-cutting machines, and control measures also was collected at the time of sampling. Only 16 respirable wood dust samples were above the detection limit of 0.08 mg/m3; all 16 had levels < or = 0.20 mg/m3. Total wood dust concentrations were also low (36% less than the detection limit), with a mean of 0.51 mg/m3, and ranging from < 0.08 to 52 mg/m3. Measurements of exposure taken close to chippers, planers, and multiple saws had the highest total wood dust levels. Sawmill department and booth enclosures also were associated with wood dust concentrations, while local exhaust ventilation and weather conditions were not. Wood dust levels in this study were generally lower than in other studies of this industry, but most sawmill investigations report mean wood dust concentrations lower than those measured in the furniture and cabinetmaking industries, where concerns about wood dust exposures initially were raised.
A review of electrochemiluminescence (ECL) in and for microfluidic analytical devices.
Kirschbaum, Stefanie E K; Baeumner, Antje J
2015-05-01
The concept and realization of microfluidic total analysis systems (microTAS) have revolutionized the analytical process by integrating the whole breadth of analytical techniques into miniaturized systems. Paramount for efficient and competitive microTAS are integrated detection strategies, which lead to low limits of detection while reducing the sample volume. The concept of electrochemiluminescence (ECL) has been intriguing ever since its introduction based on Ru(bpy)3 (2+) by Tokel and Bard [1] (J Am Chem Soc 1853:2862-2863, 1972), especially because of its immense sensitivity, nonexistent auto-luminescent background signal, and simplicity in experimental design. Therefore, integrating ECL detection into microTAS is a logical consequence to achieve simple, yet highly sensitive, sensors. However, published microanalytical devices employing ECL detection focus in general on traditional ECL chemistry and have yet to take advantage of advances made in standard bench-top ECL strategies. This review will therefore focus on the most recent advancements in microfluidic ECL approaches, but also evaluate the potential impact of bench-top ECL research progress that would further improve performance and lower limits of detection of micro analytical ECL systems, ensuring their desirability as detection principle for microTAS applications.
Owojuyigbe, Michael; Bolorunduro, Miracle-Eunice; Busari, Dauda
2017-05-01
Disability encompasses the limitations on an individual's basic physical activities, and the consequent social oppressions such individual faces in society. In this regard, the limitation on the use of some parts of the genitals in a patriarchal system is considered a form of disability. This paper describes the perceptions of and the coping mechanisms employed by affected couples dealing with the consequences of female genital mutilation (FGM) as a form of sexual disability. Cultural Libertarianism was employed as a theoretical framework. The paper presents the results of a descriptive cross-sectional study conducted in Akure, Ondo State, Nigeria, with 10 male and 12 female respondents purposively selected through a snowball sampling for in-depth interviews. The findings present the justifications provided for the practice of FGM, and victims' perceptions of how it affects their sexual relations. Furthermore, it highlights coping strategies employed by affected women and their spouses. The study shows that the disabling consequence of FGM is largely sexual in nature, leading to traumatic experiences and negative beliefs about sex, and requiring a myriad of coping strategies employed by the disabled women, and their spouses, which may have its own implications for marital and sexual bliss.
[Perceptions of adolescents in Jalisco regarding breast cancer campaigns].
Tapia-Curiel, Amparo; Nuño-Gutiérrez, Bertha Lidia; Flores-Padilla, Luis; Villaseñor-Farías, Martha; López-López, José Luis; Covarrubias-Bermúdez, María de los Ángeles
2015-01-01
Breast cancer campaigns and awareness strategies have limited evidence of effectiveness among youth. The behavioral model of perception describes how individuals select, organize, and interpret information. This study reports the perceptions of youth from Jalisco regarding breast cancer campaigns. It was a cross-sectional qualitative exploratory study based on constructionist and associationist theories of perception. After informed consent, interviews were conducted with 13 focus groups comprising 129 men and women between 12 and 19 years old, all enrolled students and residents of 6 regions of Jalisco. Sampling used the snowball technique. Interviews were transcribed and processed with Atlas Ti version 4.1 using open coding analysis. Ten campaigns were identified, and perceptions about them showed 3 processes: 1) SELECTION: permeated by the campaign design elements; 2) ORGANIZATION: influenced by pre-structured meanings of the color pink and by the scope and limitations of the campaigns; and 3) INTERPRETATION: judgments about the visibility of breast cancer, accessibility of knowledge and resources, permeability of positive ads and cultural codes, and the failure of coverage to meet expectations. A high awareness of breast cancer among teenagers was evident, as well as an extensive need for information and services. We recommend creating strategies for formal education about breast cancer during adolescence.
Teaching the Concept of Limit by Using Conceptual Conflict Strategy and Desmos Graphing Calculator
ERIC Educational Resources Information Center
Liang, Senfeng
2016-01-01
Although the mathematics community has long accepted the concept of limit as the foundation of modern Calculus, the concept of limit itself has been marginalized in undergraduate Calculus education. In this paper, I analyze the strategy of conceptual conflict to teach the concept of limit with the aid of an online tool--Desmos graphing calculator.…
A dynamic routing strategy with limited buffer on scale-free network
NASA Astrophysics Data System (ADS)
Wang, Yufei; Liu, Feng
2016-04-01
In this paper, we propose an integrated routing strategy based on global static topology information and local dynamic data packet queue lengths to improve the transmission efficiency of scale-free networks. The proposed routing strategy is a combination of a global static routing strategy (based on the shortest path algorithm) and local dynamic queue length management, in which, instead of using an infinite buffer, the queue length of each node i in the proposed routing strategy is limited by a critical queue length Qic. When the network traffic is lower and the queue length of each node i is shorter than its critical queue length Qic, it forwards packets according to the global routing table. With increasing network traffic, when the buffers of the nodes with higher degree are full, they do not receive packets due to their limited buffers and the packets have to be delivered to the nodes with lower degree. The global static routing strategy can shorten the transmission time that it takes a packet to reach its destination, and the local limited queue length can balance the network traffic. The optimal critical queue lengths of nodes have been analysed. Simulation results show that the proposed routing strategy can get better performance than that of the global static strategy based on topology, and almost the same performance as that of the global dynamic routing strategy with less complexity.
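A minimal sketch of the forwarding rule described above is given below, assuming an unweighted graph, a per-node critical queue length, and a simple deflection rule (forward to the least-loaded neighbour with spare buffer) when the preferred next hop is saturated; this deflection rule is our simplification of the paper's degree-based behaviour.

```python
from collections import deque

def next_hop_table(adj, dst):
    """BFS from the destination: for every node, the next hop on a shortest
    path towards dst in an unweighted graph."""
    parent = {dst: None}
    q = deque([dst])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u            # from v, moving towards dst goes via u
                q.append(v)
    return parent

def forward(node, adj, queue_len, q_crit, table):
    """Forwarding rule: use the static shortest-path next hop while its queue
    is below the critical length Qc; otherwise deflect the packet to the
    least-loaded neighbour that still has buffer space."""
    preferred = table.get(node)
    if preferred is not None and queue_len[preferred] < q_crit[preferred]:
        return preferred
    candidates = [v for v in adj[node] if queue_len[v] < q_crit[v]]
    return min(candidates, key=lambda v: queue_len[v]) if candidates else None
```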
77 FR 53941 - Emerging Global Advisors, LLC, et al.;
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-04
... sampling strategy to track its Underlying Index. A Fund using a replication strategy will invest in the.... A Fund using a representative sampling strategy will hold some, but not necessarily all, of the... strategy, a Fund is not expected to track the performance of its Underlying Index with the same degree of...
Kuo, Janice R; Fitzpatrick, Skye; Metcalfe, Rebecca K; McMain, Shelley
2016-03-01
Borderline personality disorder (BPD) is conceptualized as a disorder of heightened emotional reactivity and difficulties with emotion regulation. However, findings regarding emotional reactivity in BPD are mixed and there are limited studies examining emotion regulation capabilities in this population. Twenty-five individuals with BPD and 30 healthy controls (HCs) engaged in a baseline assessment followed by the presentation of neutral and BPD-relevant negative images. Participants were instructed to react as they naturally would to the image, or to use a mindfulness-based or distraction-based strategy to feel less negative. Self-reported and physiological (i.e., heart rate, electrodermal activity, and respiratory sinus arrhythmia) measures were collected. Compared with the HCs, the BPD group exhibited elevated heart rate and reduced respiratory sinus arrhythmia at baseline. However, there were no differences in emotional reactivity in self-report or physiological indices between the two groups. In addition, the BPD group did not exhibit deficits in the ability to implement either emotion regulation strategy, with the exception that the BPD group reported less positive emotions while distracting compared with the HCs. This study is limited by a small sample size and the inclusion of a medicated BPD sample. Emotion dysregulation in BPD might be better accounted for by abnormal baseline emotional functioning rather than heightened emotional reactivity or deficits in emotion regulation. Treatments for BPD might be enhanced by directly targeting resting state emotional functioning rather than emotional reactions or regulatory attempts. Copyright © 2015 Elsevier Ltd. All rights reserved.
Cassaignau, Anaïs M E; Launay, Hélène M M; Karyadi, Maria-Evangelia; Wang, Xiaolin; Waudby, Christopher A; Deckert, Annika; Robertson, Amy L; Christodoulou, John; Cabrita, Lisa D
2016-08-01
During biosynthesis on the ribosome, an elongating nascent polypeptide chain can begin to fold, in a process that is central to all living systems. Detailed structural studies of co-translational protein folding are now beginning to emerge; such studies were previously limited, at least in part, by the inherently dynamic nature of emerging nascent chains, which precluded most structural techniques. NMR spectroscopy is able to provide atomic-resolution information for ribosome-nascent chain complexes (RNCs), but it requires large quantities (≥10 mg) of homogeneous, isotopically labeled RNCs. Further challenges include limited sample working concentration and stability of the RNC sample (which contribute to weak NMR signals) and resonance broadening caused by attachment to the large (2.4-MDa) ribosomal complex. Here, we present a strategy to generate isotopically labeled RNCs in Escherichia coli that are suitable for NMR studies. Uniform translational arrest of the nascent chains is achieved using a stalling motif, and isotopically labeled RNCs are produced at high yield using high-cell-density E. coli growth conditions. Homogeneous RNCs are isolated by combining metal affinity chromatography (to isolate ribosome-bound species) with sucrose density centrifugation (to recover intact 70S monosomes). Sensitivity-optimized NMR spectroscopy is then applied to the RNCs, combined with a suite of parallel NMR and biochemical analyses to cross-validate their integrity, including RNC-optimized NMR diffusion measurements to report on ribosome attachment in situ. Comparative NMR studies of RNCs with the analogous isolated proteins permit a high-resolution description of the structure and dynamics of a nascent chain during its progressive biosynthesis on the ribosome.
Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses
Liu, Ruijie; Holik, Aliaksei Z.; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E.; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.; Ritchie, Matthew E.
2015-01-01
Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean–variance relationship of the log-counts-per-million using ‘voom’. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source ‘limma’ package. PMID:25925576
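Conceptually, the combined strategy reduces to a weighted regression per gene in which each observation's weight is the product of a sample-level weight and an observation-level (mean-variance) weight. The numpy sketch below illustrates this for a single gene; it is not the limma/voom implementation, and the toy numbers are invented.

```python
import numpy as np

def weighted_fit(y, X, sample_w, obs_w):
    """Weighted least squares for one gene: each observation's effective weight
    is the product of its sample-level and observation-level weights."""
    w = sample_w * obs_w
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Toy example: two groups of three samples; the third sample is highly variable
# and gets a small sample-level weight.
y = np.array([5.1, 5.3, 7.0, 6.2, 6.4, 6.1])            # log-CPM for one gene
X = np.column_stack([np.ones(6), [0, 0, 0, 1, 1, 1]])    # intercept + group
sample_w = np.array([1.0, 1.0, 0.2, 1.0, 1.0, 1.0])
obs_w = np.ones(6)                                        # placeholder precision weights
print(weighted_fit(y, X, sample_w, obs_w))                # [intercept, group effect]
```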
Coding Strategies and Implementations of Compressive Sensing
NASA Astrophysics Data System (ADS)
Tsai, Tsung-Han
This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research applies various coding strategies to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract additional bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of information from the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Using engineering efforts to accomplish the same task usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor localizes multiple speakers in both stationary and dynamic auditory scenes and distinguishes mixed conversations from independent sources with a high audio recognition rate.
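The common thread in these coding strategies is that a sparse or compressible signal can be recovered from far fewer multiplexed measurements than conventional sampling would require. The following sketch shows the generic principle with a random sensing matrix and an ISTA solver; it is a toy illustration of compressive sensing, not the dissertation's hardware-specific spectral, temporal, or acoustic coding.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                                  # signal length, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)        # random sensing (coding) matrix
y = Phi @ x_true                                      # compressive measurements

def ista(Phi, y, lam=0.05, n_iter=500):
    """Iterative shrinkage-thresholding for min 0.5*||y - Phi x||^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2                   # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = x - Phi.T @ (Phi @ x - y) / L             # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

x_hat = ista(Phi, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```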
Wang, Bronwen; Owens, Victoria; Bailey, Elizabeth; Lee, Greg
2011-01-01
We report on the chemical analysis of water samples collected from the Taylor Mountains 1:250,000- and Dillingham D-4 1:63,360-scale quadrangles, Alaska. Reported parameters include pH, conductivity, water temperature, major cation and anion concentrations, and trace-element concentrations. We collected the samples as part of a multiyear U.S. Geological Survey project entitled "Geologic and Mineral Deposit Data for Alaskan Economic Development." Data presented here are from samples collected in June and August 2008. Minimal interpretation accompanies this data release. This is the fourth release of aqueous geochemical data from this project; data from samples collected in 2004, 2005, and 2006 were published previously. The data in this report augment but do not duplicate or supersede the previous data releases. Site selection was based on a regional sampling strategy that focused on first- and second-order drainages. Water sample sites were selected on the basis of landscape parameters that included physiography, wetland extent, lithological changes, and a cursory field review of mineralogy from pan concentrates. Stream water in the study area is dominated by bicarbonate (HCO3-), although in a few samples more than 50 percent of the anionic charge can be attributed to sulfate (SO42-). The major-cation chemistry of these samples ranges from Ca2+-Mg2+ dominated to a mix of Ca2+-Mg2+-Na+-K+. In most cases, analysis of duplicate samples showed good agreement for the major cations and major anions, with the exception of the duplicate samples at site 08TA565. At site 08TA565, the duplicate-pair differences for Ca, Mg, Cl, and CaCO3 exceeded 25 percent, and the differences in the concentrations of the trace elements As, Fe, and Mn also exceeded 25 percent. Chloride concentration varied by more than 25 percent in 5 of the 11 duplicated samples. Trace-element concentrations in these samples generally were at or near the detection limit for the method used and, except for Co at site 08TA565, generally good agreement was determined between duplicate samples for elements with detectable concentrations. Major-ion concentrations were below detection limits in all field blanks, and the trace-element concentrations also were generally below detection limits; however, Co, Mn, Na, Zn, Cl, and Hg were detected in one or more field blank samples.
NASA Astrophysics Data System (ADS)
Mota, Mariana F. B.; Gama, Ednilton M.; Rodrigues, Gabrielle de C.; Rodrigues, Guilherme D.; Nascentes, Clésia C.; Costa, Letícia M.
2018-01-01
In this work, a dilute-and-shoot method was developed for Ca, P, S and Zn determination in new and used lubricating oil samples by total reflection X-ray fluorescence (TXRF). The oil samples were diluted with organic solvents followed by addition of yttrium as internal standard and the TXRF measurements were performed after solvent evaporation. The method was optimized using an interlaboratorial reference material. The experimental parameters evaluated were sample volume (50 or 100 μL), measurement time (250 or 500 s) and volume deposited on the quartz glass sample carrier (5 or 10 μL). All of them were evaluated and optimized using xylene, kerosene and hexane. Analytical figures of merit (accuracy, precision, limit of detection and quantification) were used to evaluate the performance of the analytical method for all solvents. The recovery rates varied from 99 to 111% and the relative standard deviation remained between 1.7% and 10% (n = 8). For all elements, the results obtained by applying the new method were in agreement with the certified value. After the validation step, the method was applied for Ca, P, S and Zn quantification in eight new and four used lubricating oil samples, for all solvents. The concentration of the elements in the samples varied in the ranges of 1620-3711 mg L-1 for Ca, 704-1277 mg L-1 for P, 2027-9147 mg L-1 for S, and 898-1593 mg L-1 for Zn. The association of TXRF with a dilute-and-shoot sample preparation strategy was efficient for Ca, P, S and Zn determination in lubricating oils, presenting accurate results. Additionally, the time required for analysis is short, the reagent volumes are low minimizing waste generation, and the technique does not require calibration curves.
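Quantification in TXRF with an internal standard typically rests on the standard relative-sensitivity relation below, given here for context; the form of the equation and the symbols are the generic TXRF expression, not values or notation taken from this paper:

\[
C_x \;=\; \frac{N_x}{N_{\mathrm{IS}}}\cdot\frac{S_{\mathrm{IS}}}{S_x}\cdot C_{\mathrm{IS}},
\]

where \(C\) is concentration, \(N\) the net fluorescence intensity, \(S\) the relative sensitivity factor, and IS denotes the yttrium internal standard added before deposition; this is why no external calibration curves are needed.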
Seifert, Lisa; Harbeck, Michaela; Thomas, Astrid; Hoke, Nadja; Zöller, Lothar; Wiechmann, Ingrid; Grupe, Gisela; Scholz, Holger C.; Riehm, Julia M.
2013-01-01
Yersinia pestis has been identified as the causative agent of the Black Death pandemic in the 14th century. However, retrospective diagnostics in human skeletons after more than 600 years are critical. We describe a strategy following a modern diagnostic algorithm and working under strict ancient DNA regime for the identification of medieval human plague victims. An initial screening and DNA quantification assay detected the Y. pestis specific pla gene of the high copy number plasmid pPCP1. Results were confirmed by conventional PCR and sequence analysis targeting both Y. pestis specific virulence plasmids pPCP1 and pMT1. All assays were meticulously validated according to human clinical diagnostics requirements (ISO 15189) regarding efficiency, sensitivity, specificity, and limit of detection (LOD). Assay specificity was 100% tested on 41 clinically relevant bacteria and 29 Y. pseudotuberculosis strains as well as for DNA of 22 Y. pestis strains and 30 previously confirmed clinical human plague samples. The optimized LOD was down to 4 gene copies. 29 individuals from three different multiple inhumations were initially assessed as possible victims of the Black Death pandemic. 7 samples (24%) were positive in the pPCP1 specific screening assay. Confirmation through second target pMT1 specific PCR was successful for 4 of the positive individuals (14%). A maximum of 700 and 560 copies per µl aDNA were quantified in two of the samples. Those were positive in all assays including all repetitions, and are candidates for future continuative investigations such as whole genome sequencing. We discuss that all precautions taken here for the work with aDNA are sufficient to prevent external sample contamination and fulfill the criteria of authenticity. With regard to retrospective diagnostics of a human pathogen and the uniqueness of ancient material we strongly recommend using a careful strategy and validated assays as presented in our study. PMID:24069445
Horney, Jennifer; Zotti, Marianne E.; Williams, Amy; Hsia, Jason
2015-01-01
Introduction and Background Women of reproductive age, in particular women who are pregnant or fewer than 6 months postpartum, are uniquely vulnerable to the effects of natural disasters, which may create stressors for caregivers, limit access to prenatal/postpartum care, or interrupt contraception. Traditional approaches (e.g., newborn records, community surveys) to survey women of reproductive age about unmet needs may not be practical after disasters. Finding pregnant or postpartum women is especially challenging because fewer than 5% of women of reproductive age are pregnant or postpartum at any time. Methods From 2009 to 2011, we conducted three pilots of a sampling strategy that aimed to increase the proportion of pregnant and postpartum women of reproductive age who were included in postdisaster reproductive health assessments in Johnston County, North Carolina, after tornadoes, Cobb/Douglas Counties, Georgia, after flooding, and Bertie County, North Carolina, after hurricane-related flooding. Results Using this method, the percentage of pregnant and postpartum women interviewed in each pilot increased from 0.06% to 21%, 8% to 19%, and 9% to 17%, respectively. Conclusion and Discussion Two-stage cluster sampling with referral can be used to increase the proportion of pregnant and postpartum women included in a postdisaster assessment. This strategy may be a promising way to assess unmet needs of pregnant and postpartum women in disaster-affected communities. PMID:22365134
Toulza, Eve; Tagliabue, Alessandro; Blain, Stéphane; Piganeau, Gwenael
2012-01-01
Microbial metagenomes are DNA samples of the most abundant, and therefore most successful organisms at the sampling time and location for a given cell size range. The study of microbial communities via their DNA content has revolutionized our understanding of microbial ecology and evolution. Iron availability is a critical resource that limits microbial communities' growth in many oceanic areas. Here, we built a database of 2319 sequences, corresponding to 140 gene families of iron metabolism with a large phylogenetic spread, to explore the microbial strategies of iron acquisition in the ocean's bacterial community. We estimate iron metabolism strategies from metagenome gene content and investigate whether their prevalence varies with dissolved iron concentrations obtained from a biogeochemical model. We show significant quantitative and qualitative variations in iron metabolism pathways, with a higher proportion of iron metabolism genes in low iron environments. We found a striking difference between coastal and open ocean sites regarding Fe(2+) versus Fe(3+) uptake gene prevalence. We also show that non-specific siderophore uptake increases in low iron open ocean environments, suggesting bacteria may acquire iron from natural siderophore-like organic complexes. Despite the lack of knowledge of iron uptake mechanisms in most marine microorganisms, our approach provides insights into how the iron metabolic pathways of microbial communities may vary with seawater iron concentrations.
Thomann, M; Imbert, E; Engstrand, R C; Cheptou, P-O
2015-04-01
Global change is expected to impose new selection pressures on natural populations. Phenotypic responses, such as earlier phenology in response to climate warming, have been repeatedly observed in the field. The recent pollinator decline is also expected to change selection on reproductive traits in flowering plants. However, it remains unclear whether short-term adaptation of plant reproductive strategies occurs in response to global change. In this study, we report the evolution of some important reproductive traits of the annual self-incompatible weed Centaurea cyanus. In a common garden experiment, we germinated stored seeds, sampled 18 years apart from the same location, in a region where warmer springs and indices of pollinator decline have been reported. Compared to the ancestral population (1992), our results showed that plants of the descendant population (2010) flowered earlier and also produced larger capitula with longer receptivity and a larger floral display. QST-FST comparisons indicated that natural selection has likely contributed to the evolution of some of the traits investigated. Lower FST within temporal samples than among spatial samples further suggests a limited role of gene flow from neighbouring populations. We therefore propose that trait shifts could partly be due to adaptation to global change. © 2015 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2015 European Society For Evolutionary Biology.
Ghasemi, Fakhradin; Kalatpour, Omid; Moghimbeigi, Abbas; Mohammadfam, Iraj
2017-03-04
High-risk unsafe behaviors (HRUBs) are known to be the main cause of occupational accidents. Considering the financial and societal costs of accidents and the limitations of available resources, there is an urgent need for managing unsafe behaviors at workplaces. The aim of the present study was to find strategies for decreasing the rate of HRUBs using an integrated approach combining the safety behavior sampling technique and Bayesian network analysis. This was a cross-sectional study. The Bayesian network was constructed using a focus group approach. The required data were collected using safety behavior sampling, and the parameters of the network were estimated using the Expectation-Maximization algorithm. Using sensitivity analysis and belief updating, it was determined which factors had the greatest influence on unsafe behavior. Based on the BN analyses, safety training was the most important factor influencing employees' behavior at the workplace. High-quality safety training courses can reduce the rate of HRUBs by about 10%. Moreover, the rate of HRUBs increased as employee age decreased. The rate of HRUBs was higher in the afternoon and on the last days of the week. Among the investigated variables, training was the most important factor affecting the safety behavior of employees. By holding high-quality safety training courses, companies would be able to reduce the rate of HRUBs significantly.
NASA Astrophysics Data System (ADS)
Ott, L.; Sellers, P. J.; Schimel, D.; Moore, B., III; O'Dell, C.; Crowell, S.; Kawa, S. R.; Pawson, S.; Chatterjee, A.; Baker, D. F.; Schuh, A. E.
2017-12-01
Satellite observations of carbon dioxide (CO2) and methane (CH4) are critically needed to improve understanding of the contemporary carbon budget and carbon-climate feedbacks. Though current carbon observing satellites have provided valuable data in regions not covered by surface in situ measurements, limited sampling of key regions and small but spatially coherent biases have limited the ability to estimate fluxes at the time and space scales needed for improved process-level understanding and informed decision-making. Next generation satellites will improve coverage in data sparse regions, either through use of active remote sensing, a geostationary vantage point, or increased swath width, but all techniques have limitations. The relative strengths and weaknesses of these approaches and their synergism have not previously been examined. To address these needs, a significant subset of the US carbon modeling community has come together with support from NASA to conduct a series of coordinated observing system simulation experiments (OSSEs), with close collaboration in framing the experiments and in analyzing the results. Here, we report on the initial phase of this initiative, which focused on creating realistic, physically consistent synthetic CO2 and CH4 observational datasets for use in inversion and signal detection experiments. These datasets have been created using NASA's Goddard Earth Observing System Model (GEOS) to represent the current state of atmospheric carbon as well as best available estimates of expected flux changes. Scenarios represented include changes in urban emissions, release of permafrost soil carbon, changes in carbon uptake in tropical and mid-latitude forests, changes in the Southern Ocean sink, and changes in both anthropogenic and natural methane emissions. This GEOS carbon `nature run' was sampled by instrument simulators representing the most prominent observing strategies with a focus on consistently representing the impacts of random errors and limitations in viewing due to clouds and aerosols. Statistical analyses of these synthetic datasets provide a simple, objective method for evaluating mission design choices. These datasets will also be made publicly available for use by the international carbon modeling community and in mission planning activities.
Laser micro-machining strategies for transparent brittle materials using ultrashort pulsed lasers
NASA Astrophysics Data System (ADS)
Bernard, Benjamin; Matylitsky, Victor
2017-02-01
Cutting and drilling of transparent materials using short pulsed laser systems are important industrial production processes. Applications ranging from sapphire cutting, hardened glass processing, and flat panel display cutting, to diamond processing are possible. The ablation process using a Gaussian laser beam incident on the topside of a sample with several parallel overlapping lines leads to a V-shaped structured groove. This limits the structuring depth for a given kerf width. The unique possibility for transparent materials to start the ablation process from the backside of the sample is a well-known strategy to improve the aspect ratio of the ablated features. This work compares the achievable groove depth depending on the kerf width for front-side and back-side ablation and presents the best relation between the kerf width and number of overscans. Additionally, the influence of the number of pulses in one burst train on the ablation efficiency is investigated. The experiments were carried out using a Spirit HE laser from Spectra-Physics, featuring adjustable pulse duration from <400 fs to 10 ps, three repetition rates (100 kHz, 200 kHz, and 400 kHz), and average output power of >16 W (at 1040 nm wavelength).
Novel highly-performing immunosensor-based strategy for ochratoxin A detection in wine samples.
Prieto-Simón, Beatriz; Campàs, Mònica; Marty, Jean-Louis; Noguer, Thierry
2008-02-28
The increasing concern about ochratoxin A (OTA) contamination of different food and feedstuffs demands high-performing detection techniques for quality assessment. Two indirect competitive enzyme-linked immunosorbent assay (ELISA) strategies were investigated for the development of OTA electrochemical immunosensors based on different OTA immobilisation procedures. Immunosensors based on avidin/biotin-OTA showed enhanced performance characteristics compared to those based on the adsorption of bovine serum albumin (BSA)-OTA conjugate. Performance of polyclonal (PAb) and monoclonal (MAb) antibodies against OTA was compared, showing at least one-order of magnitude lower IC(50) values when working with MAb. Alkaline phosphatase (ALP)- and horseradish peroxidase (HRP)-labelled secondary antibodies were evaluated. Both conjugates led to similar results when working with OTA standard solutions in buffer. However, whereas electroactive interferences present in spiked wine samples did not affect HRP-labelled immunosensors (4% slope deviation), they were likely oxidised at 0.225 V versus Ag/AgCl, the working potential for ALP-labelled immunosensors (25% slope deviation). Considering 80% of antibody binding as the limit of detection, values of 0.7 and 0.3 ng/mL for HRP- and ALP-labelled immunosensors respectively, validate these immunosensors as useful screening tools to assess OTA levels in wine.
Cai, Ailong; Wang, Linyuan; Zhang, Hanming; Yan, Bin; Li, Lei; Xi, Xiaoqi; Li, Jianxin
2014-01-01
Linear scan computed tomography (CT) is a promising imaging configuration with high scanning efficiency; however, its data sets are under-sampled and angularly limited, which makes high-quality image reconstruction challenging. In this work, an edge-guided total variation minimization reconstruction (EGTVM) algorithm is developed to deal with this problem. The proposed method combines total variation (TV) regularization with an iterative edge detection strategy: edge weights derived from intermediate reconstructions are incorporated into the TV objective function. The optimization is solved efficiently by applying the alternating direction method of multipliers. A prudent and conservative edge detection strategy proposed in this paper can recover the true edges while keeping errors within an acceptable range. Based on comparisons using both simulation studies and real CT data set reconstructions, EGTVM provides comparable or better quality than non-edge-guided reconstruction and the adaptive steepest descent-projection onto convex sets method. By combining weighted alternating direction TV minimization with edge detection, EGTVM achieves fast and robust convergence and reconstructs high-quality images when applied to linear scan CT with under-sampled data sets.
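The edge-guided objective described above can be written as a weighted TV problem; the particular binary weighting shown here is an illustrative choice, since the abstract does not specify the exact weight function:

\[
\min_{x \ge 0} \;\; \tfrac{1}{2}\,\lVert A x - b \rVert_2^2 \;+\; \lambda \sum_{i} w_i \,\lVert (\nabla x)_i \rVert_2 ,
\qquad
w_i =
\begin{cases}
\varepsilon \ll 1, & i \in \mathcal{E} \ \text{(edges detected in the intermediate reconstruction)}\\
1, & \text{otherwise,}
\end{cases}
\]

with \(A\) the linear-scan system matrix and \(b\) the under-sampled projections; the edge set \(\mathcal{E}\) is re-estimated from the current reconstruction at each outer iteration, and the weighted problem is solved with ADMM.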
NASA Astrophysics Data System (ADS)
Liger, Lucie; Margoum, Christelle; Guillemain, Céline; Carluer, Nadia
2014-05-01
The implementation of the WFD (Water Framework Directive) requires European Union member states to achieve good qualitative and quantitative status of all water bodies by 2015. The monitoring of organic micropollutants such as pesticides is an essential step to assess the chemical and biological state of streams, to understand the reasons of degradation and to implement sound mitigation solutions in the watershed. In particular, the water sampling, which can be performed according to several strategies, has to be closely adapted to the experimental goals. In this study, we present and compare 3 different active sampling strategies: grab sampling, time-related and flow-dependent automatic samplings. In addition, the last two can be fractionated (i.e., several samples collected, each one contained in a single bottle) or averaged (i.e., several samples mixed in the same bottle). Time-related samples allow the assessment of average exposure concentrations of organic micropollutants, whereas flow-dependent samples lead to average flux concentrations. The 3 sampling strategies were applied and compared during the monitoring of the pesticide contamination of a river located in a French vineyard watershed (the Morcille River, located 60 km north of Lyon, in the Beaujolais region). Data were collected between 2007 and 2011, during different seasons and for a range of hydrological events. The Morcille watershed is characterized by contrasted hydrological events with a very short-time response due to its small size (5 km²), steep slopes (20 to 28%) and highly permeable sandy soils. These features make it particularly difficult to monitor water quality, due to fast variations of pesticide concentrations depending on rain events. This comparative study is performed in 2 steps. First, we compare the timestamps of each sample composing the weekly-averaged samples and those of the grab samples with hydrological data. This allows us to evaluate the efficiency of these 2 sampling strategies in integrating flow variations, and therefore pesticide concentration variations, during the sampling campaign. In a second step, we use the fractionated sample data during flood events to calculate the concentrations of virtual averaged samples of the events. Different time or flow steps were used for the calculation, to assess their impact on the averaged pesticide concentrations or fluxes. These analyses highlight the benefits and drawbacks of each sampling strategy. They show that the sampling strategy should be carefully chosen and designed depending on the final aim of the study and on the watershed characteristics (in particular its hydrological dynamics). This study may help to design future water quality monitoring. Key Words: sampling strategies, surface water, concentration, flux, pesticides.
Dezutter, Jessie; Dewitte, Laura; Thauvoye, Evalyne; Vanhooren, Siebrecht
2017-02-01
Trying to cope with chronic pain is a highly demanding and challenging task, and pain patients often need to reformulate goals or aspirations due to their pain condition. This goal violation is often associated with experienced distress and requires coping processes in order to decrease the distress and stimulate a healthy adaptation. Some scholars, however, have argued that for so-called unsolvable or irreparable stressors such as chronic pain, conventional coping strategies like problem-focused coping might not be the most adaptive option. In these situations, meaningful coping strategies attempting to transform the meaning of the stressful experience may be more appropriate. In this study, we aim to test whether goal violation triggers meaningful coping strategies over time and whether engagement in these meaningful coping strategies results in improved life satisfaction, as an indicator of adaptation. A longitudinal three-wave study in a sample of pain patients (n = 125) tests whether goal violation triggers positive reappraisal and downward comparison, two possible meaningful coping strategies. The study furthermore tests whether engagement in these strategies results in a better adaptation to the pain condition, reflected in higher life satisfaction. Results partially supported our hypotheses by pointing to the benevolent role of downward comparison for life satisfaction via decreased goal violation in pain patients. Our findings, however, also showed that positive reappraisal predicted lower life satisfaction via increased levels of appraised goal violation, which questions the role of positive reappraisal as a genuinely meaningful coping strategy. Implications and limitations are discussed. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Application experiments to trace N-P interactions in forest ecosystems
NASA Astrophysics Data System (ADS)
Krüger, Jaane; Niederberger, Jörg; Schulz, Stefanie; Lang, Friederike
2017-04-01
Phosphorus is a limited resource, and there is increasing debate regarding the principles of tight P recycling. Forest ecosystems commonly show high P use efficiencies, but the processes behind this phenomenon are still unresolved. Within the priority program "SPP 1685 Ecosystem nutrition - Forest strategies for limited phosphorus resources", around 70 researchers from different disciplines collaborate to unravel these processes. The overall hypothesis to be tested is that the P nutrition strategy of forest ecosystems at sites rich in mineral P is characterized by high P uptake efficiency (acquiring systems). In contrast, the P strategy of forest ecosystems facing low soil P stocks is characterized by highly efficient mechanisms of P recycling. To test this hypothesis, we analyzed five beech forest ecosystems on silicate rock with different parent materials representing a gradient of total P stocks (160-900 g P m-2, down to 1 m soil depth). We found evidence confirming our hypothesis, but the controls and drivers of P strategies are still unknown because other environmental variables also differ among the sites. One of those might be the N content, as organisms strive to reach a specific internal N:P ratio. Thus, an additional application of N might also alter P nutrition. To test this, we established a factorial P x N application experiment at three of the study sites. In our presentation we will introduce this experiment and review published P x N experiments, discussing the advantages and disadvantages of different experimental conditions (e.g. amount and application form, doses, sampling and statistical design, monitoring periods, budget calculation, isotopic tracing). Finally, we want to initiate a common discussion on the standardization of P x N field experiments to enable interdisciplinary and across-compartment comparisons (e.g. different land use, different climate zones, terrestrial and aquatic ecosystems).
NASA Astrophysics Data System (ADS)
Skeffington, R. A.; Halliday, S. J.; Wade, A. J.; Bowes, M. J.; Loewenthal, M.
2015-05-01
The EU Water Framework Directive (WFD) requires that the ecological and chemical status of water bodies in Europe should be assessed, and action taken where possible to ensure that at least "good" quality is attained in each case by 2015. This paper is concerned with the accuracy and precision with which chemical status in rivers can be measured given certain sampling strategies, and how this can be improved. High-frequency (hourly) chemical data from four rivers in southern England were subsampled to simulate different sampling strategies for four parameters used for WFD classification: dissolved phosphorus, dissolved oxygen, pH and water temperature. These data sub-sets were then used to calculate the WFD classification for each site. Monthly sampling was less precise than weekly sampling, but the effect on WFD classification depended on the closeness of the range of concentrations to the class boundaries. In some cases, monthly sampling for a year could result in the same water body being assigned to three or four of the WFD classes with 95% confidence, due to random sampling effects, whereas with weekly sampling this was one or two classes for the same cases. In the most extreme case, the same water body could have been assigned to any of the five WFD quality classes. Weekly sampling considerably reduces the uncertainties compared to monthly sampling. The width of the weekly sampled confidence intervals was about 33% that of the monthly for P species and pH, about 50% for dissolved oxygen, and about 67% for water temperature. For water temperature, which is assessed as the 98th percentile in the UK, monthly sampling biases the mean downwards by about 1 °C compared to the true value, due to problems of assessing high percentiles with limited data. Low-frequency measurements will generally be unsuitable for assessing standards expressed as high percentiles. Confining sampling to the working week compared to all 7 days made little difference, but a modest improvement in precision could be obtained by sampling at the same time of day within a 3 h time window, and this is recommended. For parameters with a strong diel variation, such as dissolved oxygen, the value obtained, and thus possibly the WFD classification, can depend markedly on when in the cycle the sample was taken. Specifying this in the sampling regime would be a straightforward way to improve precision, but there needs to be agreement about how best to characterise risk in different types of river. These results suggest that in some cases it will be difficult to assign accurate WFD chemical classes or to detect likely trends using current sampling regimes, even for these largely groundwater-fed rivers. A more critical approach to sampling is needed to ensure that management actions are appropriate and supported by data.
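The subsampling experiment can be reproduced in outline with a few lines of code. The sketch below uses a synthetic hourly temperature series and simple fixed-spacing subsampling (not the study's data or exact resampling scheme) to show how monthly versus weekly sampling widens the spread of a 98th-percentile estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
hours = 24 * 365
t = np.arange(hours)
# Synthetic hourly water-temperature record with seasonal and diel cycles plus noise.
temp = (12 + 6 * np.sin(2 * np.pi * t / hours)
           + 1.5 * np.sin(2 * np.pi * t / 24)
           + rng.normal(0, 0.8, hours))

def percentile_ci(series, n_samples, n_trials=1000, q=98):
    """Draw fixed-spacing subsamples at random start offsets and return the
    spread of the resulting 98th-percentile estimates."""
    spacing = len(series) // n_samples
    stats = [np.percentile(series[rng.integers(0, spacing)::spacing][:n_samples], q)
             for _ in range(n_trials)]
    return np.percentile(stats, [2.5, 97.5])

print("true 98th percentile:", round(np.percentile(temp, 98), 2))
print("monthly sampling (n=12) 95% range:", np.round(percentile_ci(temp, 12), 2))
print("weekly sampling (n=52) 95% range:", np.round(percentile_ci(temp, 52), 2))
```

As in the study, the monthly estimates are both more variable and biased low relative to the true high percentile, because few samples rarely capture the upper tail.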
Adaptive Sampling of Time Series During Remote Exploration
NASA Technical Reports Server (NTRS)
Thompson, David R.
2012-01-01
This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models are stationary, e.g., the covariance relationships are time-invariant. In such cases, information gain is independent of previously collected data, and the optimal solution can always be computed in advance. Information-optimal sampling of a stationary GP time series thus reduces to even spacing, and such models are not appropriate for tracking localized anomalies. Additionally, GP model inference can be computationally expensive.
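A toy version of the information-driven scheduling loop is sketched below using a stationary RBF Gaussian process from scikit-learn: fit the GP to the samples collected so far and take the next measurement where the predictive standard deviation over the allowed horizon is largest. As the abstract notes, with a stationary covariance this rule degenerates to near-even spacing; tracking the local anomaly would require the nonstationary covariance models the work advocates. The signal, kernel, and budget here are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
signal = lambda t: np.sin(0.3 * t) + 2.0 * (abs(t - 60.0) < 5.0)   # anomaly near t = 60

times, values = [0.0], [float(signal(0.0))]
gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0), alpha=1e-2, optimizer=None)

for _ in range(15):                                    # budget of 15 further samples
    gp.fit(np.array(times).reshape(-1, 1), np.array(values))
    horizon = np.linspace(times[-1] + 1.0, times[-1] + 20.0, 50).reshape(-1, 1)
    _, std = gp.predict(horizon, return_std=True)
    t_next = float(horizon[np.argmax(std), 0])         # sample where uncertainty peaks
    times.append(t_next)
    values.append(float(signal(t_next)) + rng.normal(0.0, 0.05))

print(np.round(times, 1))
```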
ERIC Educational Resources Information Center
Wu, Chia-Pei; Lin, Huey-Ju
2016-01-01
This study utilized the Oxford Strategy Inventory for Language Learning (SILL) and an English writing anxiety scale to examine the relationship between learning strategies and English writing anxiety in 102 university-level English language learners (ELLs) with Limited English Proficiency (LEP) in a university in Taiwan. Kruskal Wallis Test…
Tu, Chengjian; Shen, Shichen; Sheng, Quanhu; Shyr, Yu; Qu, Jun
2017-01-30
Reliable quantification of low-abundance proteins in complex proteomes is challenging, largely owing to the limited number of spectra/peptides identified. In this study we developed a straightforward method to improve the quantitative accuracy and precision of proteins by strategically retrieving the less confident peptides that were previously filtered out using the standard target-decoy search strategy. The filtered-out MS/MS spectra matched to confidently identified proteins were recovered, and the peptide-spectrum-match FDR was re-calculated and controlled at a confident level of FDR ≤ 1%, while the protein FDR was maintained at ~1%. We evaluated the performance of this strategy in both spectral count- and ion current-based methods. A >60% increase in total quantified spectra/peptides was achieved for a spike-in sample set and for a public dataset from CPTAC. Incorporating the peptide retrieval strategy significantly improved the quantitative accuracy and precision, especially for low-abundance proteins (e.g. one-hit proteins). Moreover, the capacity of confidently discovering significantly-altered proteins was also enhanced substantially, as demonstrated with two spike-in datasets. In summary, improved quantitative performance was achieved by this peptide recovery strategy without compromising the confidence of protein identification, and it can be readily implemented in a broad range of quantitative proteomics techniques including label-free or labeling approaches. We hypothesize that more quantifiable spectra and peptides in a protein, even including less confident peptides, could help reduce variations and improve protein quantification. Hence the peptide retrieval strategy was developed and evaluated in two spike-in sample sets with different LC-MS/MS variations using both MS1- and MS2-based quantitative approaches. The list of proteins confidently identified using the standard target-decoy search strategy was fixed, and additional, less confident spectra/peptides matching those proteins were retrieved. However, the total peptide-spectrum-match false discovery rate (PSM FDR) after retrieval analysis was still controlled at a confident level of FDR ≤ 1%. As expected, the penalty for occasionally incorporating incorrect peptide identifications is negligible by comparison with the improvements in quantitative performance. This simple strategy yielded more quantifiable peptides, a lower missing-value rate, and better quantitative accuracy and precision for the same protein identifications. This strategy is theoretically applicable to any quantitative approach in proteomics and thereby provides more quantitative information, especially on low-abundance proteins. Published by Elsevier B.V.
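The retrieval step can be sketched as follows: fix the protein list from the standard 1% FDR search, pool the previously filtered PSMs that map to those proteins, and re-apply the target-decoy estimate within that pool. The code below is an illustrative outline only; the data structures and the simple running FDR estimate are our assumptions, not the authors' implementation.

```python
def target_decoy_fdr(psms):
    """Running target-decoy FDR estimate: FDR ~ (#decoys)/(#targets) above each score."""
    ranked = sorted(psms, key=lambda p: p["score"], reverse=True)
    targets = decoys = 0
    scored = []
    for p in ranked:
        decoys += p["is_decoy"]
        targets += not p["is_decoy"]
        scored.append((p, decoys / max(targets, 1)))
    return scored

def retrieve_peptides(all_psms, confident_proteins, max_fdr=0.01):
    """Retrieval step: restrict to PSMs that map to already-confident proteins,
    then keep everything down to the lowest score at which the recomputed
    FDR is still <= max_fdr."""
    pool = [p for p in all_psms if p["protein"] in confident_proteins]
    scored = target_decoy_fdr(pool)
    cutoff = max((i for i, (_, fdr) in enumerate(scored) if fdr <= max_fdr), default=-1)
    return [p for p, _ in scored[:cutoff + 1]]
```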
Advanced Navigation Strategies for an Asteroid Sample Return Mission
NASA Technical Reports Server (NTRS)
Bauman, J.; Getzandanner, K.; Williams, B.; Williams, K.
2011-01-01
The proximity operations phases of a sample return mission to an asteroid have been analyzed using advanced navigation techniques derived from experience gained in planetary exploration. These techniques rely on tracking types such as Earth-based radio metric Doppler and ranging, spacecraft-based ranging, and optical navigation using images of landmarks on the asteroid surface. Navigation strategies for the orbital phases leading up to sample collection, the touchdown for collecting the sample, and the post-sample-collection phase at the asteroid are included. Options for successfully executing the phases are studied using covariance analysis and Monte Carlo simulations of an example mission to the near-Earth asteroid 4660 Nereus. Two landing options were studied, including trajectories with either one or two burns from orbit to the surface. Additionally, a comparison of post-sample-collection strategies is presented. These strategies include remaining in orbit about the asteroid or standing off at a given distance until departure to Earth.
USDA-ARS?s Scientific Manuscript database
Statistically robust sampling strategies form an integral component of grain storage and handling activities throughout the world. Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult due to species biology and behavioral characteristics. ...
Limited sampling strategy for determining metformin area under the plasma concentration–time curve
Santoro, Ana Beatriz; Stage, Tore Bjerregaard; Struchiner, Claudio José; Christensen, Mette Marie Hougaard; Brosen, Kim
2016-01-01
Aim The aim was to develop and validate limited sampling strategy (LSS) models to predict the area under the plasma concentration–time curve (AUC) for metformin. Methods Metformin plasma concentrations (n = 627) at 0–24 h after a single 500 mg dose were used for LSS development, based on all-subsets linear regression analysis. The LSS-derived AUC(0,24 h) was compared with the parameter 'best estimate' obtained by non-compartmental analysis using all plasma concentration data points. Correlation between the LSS-derived and the best estimated AUC(0,24 h) (r2), bias and precision of the LSS estimates were quantified. The LSS models were validated in independent cohorts. Results A two-point (3 h and 10 h) regression equation with no intercept accurately estimated the individual AUC(0,24 h) in the development cohort: r2 = 0.927, bias (mean, 95% CI) −0.5, −2.7–1.8% and precision 6.3, 4.9–7.7%. The accuracy of the two-point LSS model was verified in study cohorts of individuals receiving single 500 or 1000 mg doses (r2 = 0.933–0.934) or seven 1000 mg daily doses (r2 = 0.918), as well as using data from 16 published studies covering a wide range of metformin doses, demographics, clinical and experimental conditions (r2 = 0.976). The LSS model reproduced previously reported results for effects of polymorphisms in OCT2 and MATE1 genes on AUC(0,24 h) and renal clearance of metformin. Conclusions The two-point LSS algorithm may be used to assess the systemic exposure to metformin under diverse conditions, with reduced costs of sampling and analysis, and saving time for both subjects and investigators. PMID:27324407
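A minimal sketch of how such a two-point, no-intercept LSS model is fitted and applied, assuming NumPy least squares; the concentration and AUC numbers below are hypothetical placeholders, and real coefficients would have to come from the published regression.

import numpy as np

def fit_two_point_lss(c3, c10, auc_best):
    # Least-squares fit of AUC(0,24 h) ~ a*C(3 h) + b*C(10 h) with no intercept,
    # using the development cohort's best-estimate AUCs as the response.
    X = np.column_stack([c3, c10])
    coef, *_ = np.linalg.lstsq(X, auc_best, rcond=None)
    return coef                      # (a, b)

def predict_auc(coef, c3, c10):
    return coef[0] * np.asarray(c3) + coef[1] * np.asarray(c10)

# Hypothetical development data (concentrations in mg/L, AUC in mg*h/L).
c3 = np.array([1.8, 2.1, 1.5, 2.4])
c10 = np.array([0.9, 1.2, 0.7, 1.4])
auc = np.array([18.0, 22.5, 14.8, 25.9])
coef = fit_two_point_lss(c3, c10, auc)
print(predict_auc(coef, [2.0], [1.0]))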
Cui, Liang; Zou, Yuan; Lin, Ninghang; Zhu, Zhi; Jenkins, Gareth; Yang, Chaoyong James
2012-07-03
Fluorescence anisotropy (FA) is a reliable and excellent choice for fluorescence sensing. One of the key factors influencing the FA value for any molecule is the molar mass of the molecule being measured. As a result, the FA method with functional nucleic acid aptamers has been limited to macromolecules such as proteins and is generally not applicable for the analysis of small molecules because their molecular masses are relatively too small to produce observable FA value changes. We report here a molecular mass amplifying strategy to construct anisotropy aptamer probes for small molecules. The probe is designed in such a way that only when a target molecule binds to the probe does it activate its binding ability to an anisotropy amplifier (a high molecular mass molecule such as protein), thus significantly increasing the molecular mass and FA value of the probe/target complex. Specifically, a mass amplifying probe (MAP) consists of a targeting aptamer domain against a target molecule and molecular mass amplifying aptamer domain for the amplifier protein. The probe is initially rendered inactive by a small blocking strand partially complementary to both target aptamer and amplifier protein aptamer so that the mass amplifying aptamer domain would not bind to the amplifier protein unless the probe has been activated by the target. In this way, we prepared two probes that constitute a target (ATP and cocaine respectively) aptamer, a thrombin (as the mass amplifier) aptamer, and a fluorophore. Both probes worked well against their corresponding small molecule targets, and the detection limits for ATP and cocaine were 0.5 μM and 0.8 μM, respectively. More importantly, because FA is less affected by environmental interferences, ATP in cell media and cocaine in urine were directly detected without any tedious sample pretreatment. Our results established that our molecular mass amplifying strategy can be used to design aptamer probes for rapid, sensitive, and selective detection of small molecules by means of FA in complex biological samples.
Suarez-Kurtz, Guilherme; Ribeiro, Frederico Mota; Vicente, Flávio L.; Struchiner, Claudio J.
2001-01-01
Amoxicillin plasma concentrations (n = 1,152) obtained from 48 healthy subjects in two bioequivalence studies were used to develop limited-sampling strategy (LSS) models for estimating the area under the concentration-time curve (AUC), the maximum concentration of drug in plasma (Cmax), and the time interval of concentration above MIC susceptibility breakpoints in plasma (T>MIC). Each subject received 500-mg amoxicillin, as reference and test capsules or suspensions, and plasma concentrations were measured by a validated microbiological assay. Linear regression analysis and a “jack-knife” procedure revealed that three-point LSS models accurately estimated (R2, 0.92; precision, <5.8%) the AUC from 0 h to infinity (AUC0-∞) of amoxicillin for the four formulations tested. Validation tests indicated that a three-point LSS model (1, 2, and 5 h) developed for the reference capsule formulation predicts the following accurately (R2, 0.94 to 0.99): (i) the individual AUC0-∞ for the test capsule formulation in the same subjects, (ii) the individual AUC0-∞ for both reference and test suspensions in 24 other subjects, and (iii) the average AUC0-∞ following single oral doses (250 to 1,000 mg) of various amoxicillin formulations in 11 previously published studies. A linear regression equation was derived, using the same sampling time points of the LSS model for the AUC0-∞, but using different coefficients and intercept, for estimating Cmax. Bioequivalence assessments based on LSS-derived AUC0-∞'s and Cmax's provided results similar to those obtained using the original values for these parameters. Finally, two-point LSS models (R2 = 0.86 to 0.95) were developed for T>MICs of 0.25 or 2.0 μg/ml, which are representative of microorganisms susceptible and resistant to amoxicillin. PMID:11600352
Hao, Chen; Erzheng, Chen; Anwei, Mao; Zhicheng, Yu; Baiyong, Shen; Xiaxing, Deng; Weixia, Zhang; Chenghong, Peng; Hongwei, Li
2007-12-01
Mycophenolate mofetil (MMF) is indicated as immunosuppressive therapy in liver transplantation. The abbreviated models for the estimation of mycophenolic acid (MPA) area under the concentration-time curve (AUC) have been established by limited sampling strategies (LSSs) in adult liver transplant recipients. In the current study, the performance of the abbreviated models to predict MPA exposure was validated in an independent group of patients. A total of 30 MPA pharmacokinetic profiles from 30 liver transplant recipients receiving MMF in combination with tacrolimus were used to compare 8 models' performance with a full 10 time-point MPA-AUC. Linear regression analysis and Bland-Altman analysis were used to compare the estimated MPA-AUC0-12h from each model against the measured MPA-AUC0-12h. A wide range of agreement was shown when estimated MPA-AUC0-12h was compared with measured MPA-AUC0-12h, and the range of coefficient of determination (r2) was from 0.479 to 0.936. The model based on MPA pharmacokinetic parameters C1h, C2h, C6h, and C8h had the best ability to predict measured MPA-AUC0-12h, with the best coefficient of determination (r2=0.936), the excellent prediction bias (2.18%), the best prediction precision (5.11%), and the best prediction variation (2SD=+/-7.88 mg.h/L). However, the model based on MPA pharmacokinetic sampling time points C1h, C2h, and C4h was more suitable when concerned with clinical convenience, which had shorter sampling interval, an excellent coefficient of determination (r2=0.795), an excellent prediction bias (3.48%), an acceptable prediction precision (14.37%), and a good prediction variation (2SD=+/-13.23 mg.h/L). Measured MPA-AUC0-12h could be best predicted by using MPA pharmacokinetic parameters C1h, C2h, C6h, and C8h. The model based on MPA pharmacokinetic parameters C1h, C2h, and C4h was more feasible in clinical application. Copyright (c) 2007 AASLD.
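A minimal sketch of the agreement metrics used above to compare an abbreviated-model estimate with the measured 10 time-point MPA-AUC0-12h (coefficient of determination, percentage prediction bias and precision, and Bland-Altman 2SD limits); the exact metric definitions are assumed for illustration and may differ in detail from the study's analysis.

import numpy as np

def evaluate_abbreviated_model(auc_estimated, auc_measured):
    # Agreement between LSS-estimated and measured MPA-AUC0-12h for one model.
    est = np.asarray(auc_estimated, dtype=float)
    ref = np.asarray(auc_measured, dtype=float)
    pe = 100.0 * (est - ref) / ref                 # percentage prediction error
    r2 = np.corrcoef(est, ref)[0, 1] ** 2          # coefficient of determination
    bias = pe.mean()                               # mean prediction error (%)
    precision = np.abs(pe).mean()                  # mean absolute error (%)
    diff = est - ref
    two_sd = (diff.mean() - 2.0 * diff.std(ddof=1),
              diff.mean() + 2.0 * diff.std(ddof=1))  # Bland-Altman limits (mg.h/L)
    return r2, bias, precision, two_sd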
Reynolds, Michelle H.; Pearce, John M.; Lavretsky, Philip; Peters, Jeffrey L.; Courtot, Karen; Seixas, Pedro P.
2015-01-01
Genetic diversity is assumed to reflect the evolutionary potential and adaptability of populations, and thus quantifying the genetic diversity of endangered species is useful for recovery programs. In particular, if conservation strategies include reintroductions, periodic genetic assessments are useful to evaluate whether management efforts have resulted in the maximization or loss of genetic variation within populations over generations. In this study, we collected blood, feather, and tissue samples during 1999–2009 and quantified genetic diversity for a critically endangered waterfowl species endemic to the Hawaiian archipelago, the Laysan teal or duck (Anas laysanensis; n = 239 individual birds sampled). The last extant population of this species at Laysan Island was sourced in 2004–2005 for a ‘wild to wild’ translocation of 42 individuals for an experimental reintroduction to Midway Atoll. To inform future management strategies, we compared genetic diversity sampled from the source population (n = 133 Laysan birds) including 23 of Midway’s founders and offspring of the translocated population 2–5 years post release (n = 96 Midway birds). We attempted to identify polymorphic markers by screening nuclear microsatellite (N = 83) and intronic loci (N = 19), as well as the mitochondrial control region (mtDNA) for a subset of samples. Among 83 microsatellite loci screened, six were variable. We found low nuclear variation consistent with the species’ historical population bottlenecks and sequence variation was observed at a single intron locus. We detected no variation within the mtDNA. We found limited but similar estimates of allelic richness (2.58 alleles per locus) and heterozygosity within islands. Two rare alleles found in the Laysan Island source population were not present in the Midway translocated group, and a rare allele was discovered in an individual on Midway in 2008. We found similar genetic diversity and low, but statistically significant, levels of differentiation (0.6%) between island populations suggesting that genetic drift (as a result of translocation-induced population bottlenecking) has had a limited effect within five years post-release. Our results have utility for informing translocation and genetic management decisions.
Gary, Faye A; Yarandi, Hossein; Hassan, Mona
2015-08-01
Culture enhances the ability to address the stressors related to ethnicity/race, employment, and lifestyle. From this interaction, two coping patterns emerge: individualist-oriented or collectivist-oriented, of which women prefer the latter. However, there is limited knowledge about the impact of ethnicity/race on the coping strategies of Black working women in the USA. Therefore, the purpose of this cross-sectional survey was to examine the coping strategies of two groups of Black women, those who work in non-professional service-related jobs and those employed as professionals. We explored Black women from two southern states, Florida and Georgia, in their use of coping strategies for everyday stressors. A modified version of Lazarus and Folkman's Transactional Model was used as the framework of this study. The sample for this cross-sectional survey consisted of 313 Black women employed in non-professional service jobs and 343 in professional roles. The thoughts and actions related to coping in everyday stressors were measured with The Ways of Coping Questionnaire.
Guerra, Jaime E.; Cruz-Nieto, Javier; Ortiz-Maciel, Sonia Gabriela; Wright, Timothy F.
2012-01-01
The populations of many species are declining worldwide, and conservation efforts struggle to keep pace with extinction rates. Conservation biologists commonly employ strategies such as translocation and reintroduction, which move individuals of endangered species from one part of their range to another. Because individuals from endangered populations are nonexpendable, identifying any potential barriers to the establishment of viable populations prior to release of individuals should be a priority. This study evaluates the potential for learned communication signals to constrain conservation strategies such as reintroduction in an endangered species, the Thick-billed Parrot (Rhynchopsitta pachyrhyncha). We conducted vocal surveys at three geographically distinct breeding populations in the Sierra Madre Occidental of Chihuahua, Mexico. Acoustic analyses utilizing both spectrogram cross-correlations and parameter measurements from spectrograms revealed no significant differences among the three sites in two common call types. Calls did vary among individuals within a site. The apparent lack of significant geographic variation across sampled sites suggests that differences in learned communication signals are unlikely to pose a barrier to the integration of translocated individuals from different populations into newly established populations. PMID:22307993
African American mothers' self-described discipline strategies with young children.
Lecuyer, Elizabeth A; Christensen, Julie J; Kearney, Margaret H; Kitzman, Harriet J
2011-01-01
African American families have been described as using higher levels of non-abusive physical discipline with their children than European American families. Few, if any, studies have documented African American families' use of disciplinary strategies in their own words, however, or their reasons for using them. In this qualitative study, 51 African American mothers from a 1992 Memphis sample described their disciplinary strategies with their 12- to 19-month-old children. Seventy-seven percent of mothers described using verbal teaching along with non-abusive physical discipline, such as tapping their children's hands. Mothers also expressed concern about being too strict, described awareness of their children's developmental limits, and used non-physical disciplinary methods, consistent with positive, accepting parent-child relationships. These findings are important because the low-income girls who experienced such parenting in the 1990s are now young mothers themselves. Current practice guidelines encourage new mothers to think about how they themselves were parented; the knowledge from this study may assist practitioners to engage present-day African American mothers in supportive discussions about physical and non-physical discipline methods with young children.
Levels-of-processing effect on word recognition in schizophrenia.
Ragland, J Daniel; Moelter, Stephen T; McGrath, Claire; Hill, S Kristian; Gur, Raquel E; Bilker, Warren B; Siegel, Steven J; Gur, Ruben C
2003-12-01
Individuals with schizophrenia have difficulty organizing words semantically to facilitate encoding. This is commonly attributed to organizational rather than semantic processing limitations. By requiring participants to classify and encode words on either a shallow (e.g., uppercase/lowercase) or deep level (e.g., concrete/abstract), the levels-of-processing paradigm eliminates the need to generate organizational strategies. This paradigm was administered to 30 patients with schizophrenia and 30 healthy comparison subjects to test whether providing a strategy would improve patient performance. Word classification during shallow and deep encoding was slower and less accurate in patients. Patients also responded slowly during recognition testing and maintained a more conservative response bias following deep encoding; however, both groups showed a robust levels-of-processing effect on recognition accuracy, with unimpaired patient performance following both shallow and deep encoding. This normal levels-of-processing effect in the patient sample suggests that semantic processing is sufficiently intact for patients to benefit from organizational cues. Memory remediation efforts may therefore be most successful if they focus on teaching patients to form organizational strategies during initial encoding.
Pathways to Male Transactional Sex in Mumbai, India.
Srivastava, Ankur; Goldbach, Jeremy T
2017-10-27
The article explores pathways to male transactional sex, focusing on entry into transactional sex, soliciting practices, the role of pimps, client characteristics, and negotiations. Little scientific literature exists regarding men who engage in transactional sex in India, who represent a high-risk group for HIV infection. Case studies were conducted with 10 men who engage in transactional sex; they were recruited using purposive snowball sampling and interviewed at a social service organization. Participants differed in their sexual identity, sex roles, soliciting practices, and clients. Most of them listed economic crisis as their reason for entering transactional sex. Strategies to find clients included self-solicitation and referrals, while pimps played a major role in solicitation, negotiations, and events of crisis. The relationship among men who engage in transactional sex, pimps, and clients involves points of negotiation, opportunities, and limitations. HIV prevention should focus on identifying and addressing the group's unique needs and working with pimps on risk-reduction strategies and crisis interventions.
Placing Families in Context: Challenges for Cross-National Family Research
Yu, Wei-hsin
2015-01-01
Cross-national comparisons constitute a valuable strategy to assess how broader cultural, political, and institutional contexts shape family outcomes. One typical approach of cross-national family research is to use comparable data from a limited number of countries, fit similar regression models for each country, and compare results across country-specific models. Increasingly, researchers are adopting a second approach, which requires merging data from many more societies and testing multilevel models using the pooled sample. Although the second approach has the advantage of allowing direct estimates of the effects of nation-level characteristics, it is more likely to suffer from the problems of omitted-variable bias, influential cases, and measurement and construct nonequivalence. I discuss ways to improve the first approach's ability to infer macrolevel influences, as well as how to deal with challenges associated with the second one. I also suggest choosing analytical strategies according to whether the data meet multilevel models’ assumptions. PMID:25999603
Nursing faculty preparedness for clinical teaching.
Suplee, Patricia Dunphy; Gardner, Marcia; Jerome-D'Emilia, Bonnie
2014-03-01
Nursing faculty who teach in clinical settings face complex situations requiring evidence-based educational and evaluative strategies, yet many have had limited preparation for these tasks. A convenience sample of 74 nursing faculty participated in a survey about clinical teaching in prelicensure nursing programs. Most faculty developed teaching skills through conferences (57%), orientation at their educational institution (53%), or exposure in graduate school (38%). Thirty-one percent reported having no preparation for clinical teaching. Faculty felt least prepared to manage students with learning, physical, or emotional disabilities and incivility. Twenty-six percent had no preparation for evaluating students in the clinical setting, and only 17% had worked with a faculty mentor. Few evidence-based teaching strategies were used by the faculty. These findings indicate gaps exist in the preparation of clinical faculty. Graduate education, comprehensive orientation programs, and continuing professional development may help to ensure faculty are effective in managing and evaluating student learning. Copyright 2014, SLACK Incorporated.
NASA Astrophysics Data System (ADS)
Zhu, Yu-Feng; Wang, Yong-Sheng; Zhou, Bin; Huang, Yan-Qin; Li, Xue-Jiao; Chen, Si-Han; Wang, Xiao-Feng; Tang, Xian
2018-01-01
We confirmed for the first time that low concentrations of Ag(I) can induce a silver-specific aptamer probe (SAP) to switch from a random-coil form to a G-quadruplex structure. On this basis, a novel, highly sensitive fluorescence strategy for silver(I) assay was established. The designed multifunctional SAP can act as both a recognition element for Ag(I) and a signal reporter. The use of such a SAP allows ultrasensitive and selective detection of Ag(I), giving a detection limit down to 0.64 nM, which is much lower than the limits reported in the related literature. The strategy has been applied successfully for the detection of Ag(I) in real samples, further proving its reliability. Taken together, the designed SAP is not only a useful recognition and signal probe for silver, but also provides a platform to study the interaction of monovalent cations with DNA.
Strategies for the coupling of global and local crystal growth models
NASA Astrophysics Data System (ADS)
Derby, Jeffrey J.; Lun, Lisa; Yeckel, Andrew
2007-05-01
The modular coupling of existing numerical codes to model crystal growth processes will provide for maximum effectiveness, capability, and flexibility. However, significant challenges are posed to make these coupled models mathematically self-consistent and algorithmically robust. This paper presents sample results from a coupling of the CrysVUn code, used here to compute furnace-scale heat transfer, and Cats2D, used to calculate melt fluid dynamics and phase-change phenomena, to form a global model for a Bridgman crystal growth system. However, the strategy used to implement the CrysVUn-Cats2D coupling is unreliable and inefficient. The implementation of under-relaxation within a block Gauss-Seidel iteration is shown to be ineffective for improving the coupling performance in a model one-dimensional problem representative of a melt crystal growth model. Ideas to overcome current convergence limitations using approximations to a full Newton iteration method are discussed.
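A minimal sketch of the block Gauss-Seidel iteration with under-relaxation discussed above, using two scalar stand-in "solvers" rather than CrysVUn and Cats2D; the fixed-point maps, relaxation factor, and tolerance are illustrative assumptions only.

def gauss_seidel_coupling(solve_furnace, solve_melt, x0, relax=0.5,
                          tol=1e-8, max_iter=200):
    # Block Gauss-Seidel iteration between two single-physics modules that
    # exchange a scalar interface value; `relax` < 1 under-relaxes the update.
    x = x0
    for k in range(1, max_iter + 1):
        y = solve_furnace(x)          # e.g. furnace-scale heat transfer
        x_new = solve_melt(y)         # e.g. melt flow / phase change
        x_next = (1.0 - relax) * x + relax * x_new
        if abs(x_next - x) < tol:
            return x_next, k
        x = x_next
    return x, max_iter

# Toy one-dimensional coupling; the true fixed point is x = 2.5 / 0.6.
x_star, iters = gauss_seidel_coupling(lambda x: 0.8 * x + 1.0,
                                      lambda y: 0.5 * y + 2.0, x0=0.0)
print(x_star, iters)

In this toy problem the under-relaxed update merely slows a contraction that already converges; the point made above is that for realistic, stiffer couplings such damping alone may not restore convergence, motivating approximate Newton approaches.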
Rutgers University Subcontract B611610 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soundarajan, Sucheta; Eliassi-Rad, Tina; Gallagher, Brian
Given an incomplete (i.e., partially-observed) network, which nodes should we actively probe in order to achieve the highest accuracy for a given network feature? For example, consider a cyber-network administrator who observes only a portion of the network at time t and wants to accurately identify the most important (e.g., highest PageRank) nodes in the complete network. She has a limited budget for probing the network. Of all the nodes she has observed, which should she probe in order to most accurately identify the important nodes? We propose a novel and scalable algorithm, MaxOutProbe, and evaluate it w.r.t. four network features (largest connected component, PageRank, core-periphery, and community detection), five network sampling strategies, and seven network datasets from different domains. Across a range of conditions, MaxOutProbe demonstrates consistently high performance relative to several baseline strategies.
Strongyle infections and parasitic control strategies in German horses - a risk assessment.
Schneider, Stephanie; Pfister, Kurt; Becher, Anne M; Scheuerle, Miriam C
2014-11-12
As a consequence of the increasing levels of anthelmintic resistance in cyathostomes, new strategies for equine parasite control are being implemented. To assess the potential risks of these, the occurrence of strongyles was evaluated in a group of 1887 horses. The distribution of fecal egg counts (FECs), the frequency of anthelmintic drug use, and the deworming intervals were also analyzed. Between June 2012 and May 2013, 1887 fecal samples from either selectively or strategically dewormed horses were collected at 195 horse farms all over Germany and analyzed quantitatively with a modified McMaster technique. All samples with FEC ≥20 eggs per gram (EPG) were subjected to coproculture to generate third-stage larvae (LIII) for species differentiation. Egg counts were below the limit of detection (20 EPG) in 1046 (55.4%) samples and above it in 841 (44.6%) samples. Strongylus vulgaris larvae were identified in two of the 841 positive samples. Infections with cyathostomes were found on every farm. The most frequently applied anthelmintic was ivermectin (788/50.8%), followed by pyrantel (336/21.6%). The mean time since last treatment was 6.3 months. High-egg-shedding (>500 EPG) strategically dewormed horses (183/1357) were treated, on average, three times/year. The planned treatment date was already exceeded by 72.5% of the high egg-shedders and by 58.1% of the moderate (200-500 EPG) and low egg-shedders (20-199 EPG). S. vulgaris seems to be rare in Germany and no difference in its frequency has yet been found between selectively treated horses and horses receiving treatment in strategic intervals. However, inconsistent parasite control has been observed. Therefore, to minimize the risks for disease, consistent and efficient parasite control should be implemented.
NASA Astrophysics Data System (ADS)
Kwiecien, O.; Breitenbach, S. F. M.
2017-12-01
Since the seminal work of Goodfriend (1992, EPSL 11), several studies have confirmed a relation between the isotopic composition (δ18O, δ13C) of land snail shell carbonate and environmental parameters like precipitation amount, moisture source, temperature and vegetation. This relation, however, is not straightforward and, importantly, is site-dependent. The choice of sampling strategy (discrete or bulk sampling), cleaning procedure, and/or pre-depositional history further complicate the shell analysis. The advantage of using snail shells as an environmental archive lies in their limited mobility, and thus an intrinsic aptitude for recording local and site-specific conditions. However, snail shells found at archaeological sites, even if of local origin, often represent a dietary component, and boiling/roasting could potentially alter the isotopic signature of the aragonite material. While thermal processing affects the clumped isotope composition of carbonates, its influence on traditional isotopes is still debated (Ritter et al. 2017, Sedimentology; Müller et al. 2017, Scientific Reports). Consequently, a proper sampling strategy is of great importance and should be chosen according to the scientific question. Horizontal high-resolution shell sampling (drill holes along the growth axis, across growth lines) provides insights into the amplitude of seasonal variability, while vertical high-resolution sampling (multiple drill holes along the same growth line) produces reproducible results. We took advantage of this reproducibility and, on a yet unprecedented scale, experimentally and sequentially tested the influence of boiling on the δ18O and δ13C signature of shells of modern Helix pomatia. Our results challenge recent reports on alteration due to boiling (Müller et al., 2017, Scientific Reports) and support the uncompromised application of snail shells from archaeological sites for paleoenvironmental reconstructions.
Shimakawa, Ginga; Matsuda, Yusuke; Nakajima, Kensuke; Tamoi, Masahiro; Shigeoka, Shigeru; Miyake, Chikahiro
2017-01-20
Photosynthesis produces chemical energy from photon energy in the photosynthetic electron transport and assimilates CO2 using the chemical energy. Thus, CO2 limitation causes an accumulation of excess energy, resulting in reactive oxygen species (ROS) which can cause oxidative damage to cells. O2 can be used as an alternative energy sink when oxygenic phototrophs are exposed to high light. Here, we examined the responses to CO2 limitation and O2 dependency of two secondary algae, Euglena gracilis and Phaeodactylum tricornutum. In E. gracilis, approximately half of the relative electron transport rate (ETR) of CO2-saturated photosynthesis was maintained and was uncoupled from photosynthesis under CO2 limitation. The ETR showed biphasic dependencies on O2 at high and low O2 concentrations. Conversely, in P. tricornutum, most relative ETR decreased in parallel with the photosynthetic O2 evolution rate in response to CO2 limitation. Instead, non-photochemical quenching was strongly activated under CO2 limitation in P. tricornutum. The results indicate that these secondary algae adopt different strategies to acclimatize to CO2 limitation, and that both strategies differ from those utilized by cyanobacteria and green algae. We summarize the diversity of strategies for prevention of photo-oxidative damage under CO2 limitation in cyanobacterial and algal photosynthesis.
KRAS mutations in blood circulating cell-free DNA: a pancreatic cancer case-control
Le Calvez-Kelm, Florence; Foll, Matthieu; Wozniak, Magdalena B.; Delhomme, Tiffany M.; Durand, Geoffroy; Chopard, Priscilia; Pertesi, Maroulio; Fabianova, Eleonora; Adamcakova, Zora; Holcatova, Ivana; Foretova, Lenka; Janout, Vladimir; Vallee, Maxime P.; Rinaldi, Sabina; Brennan, Paul; McKay, James D.; Byrnes, Graham B.; Scelo, Ghislaine
2016-01-01
The utility of KRAS mutations in plasma circulating cell-free DNA (cfDNA) samples as non-invasive biomarkers for the detection of pancreatic cancer has never been evaluated in a large case-control series. We applied a KRAS amplicon-based deep sequencing strategy combined with an analytical pipeline specifically designed for the detection of low-abundance mutations to screen plasma samples of 437 pancreatic cancer cases, 141 chronic pancreatitis subjects, and 394 healthy controls. We detected mutations in 21.1% (N=92) of cases, of whom 82 (89.1%) carried at least one mutation at hotspot codons 12, 13 or 61, with mutant allelic fractions from 0.08% to 79%. Advanced stages were associated with an increased proportion of detection, with KRAS cfDNA mutations detected in 10.3%, 17.5% and 33.3% of cases with local, regional and systemic stages, respectively. We also detected KRAS cfDNA mutations in 3.7% (N=14) of healthy controls and in 4.3% (N=6) of subjects with chronic pancreatitis, but at significantly lower allelic fractions than in cases. Combining cfDNA KRAS mutations and CA19-9 plasma levels on a limited set of case-control samples did not improve the overall performance of the biomarkers as compared to CA19-9 alone. Whether the limited sensitivity and specificity observed in our series of KRAS mutations in plasma cfDNA as biomarkers for pancreatic cancer detection are attributable to methodological limitations or to the biology of cfDNA should be further assessed in large case-control series. PMID:27705932
Assays for therapeutic drug monitoring of β-lactam antibiotics: A structured review.
Carlier, Mieke; Stove, Veronique; Wallis, Steven C; De Waele, Jan J; Verstraete, Alain G; Lipman, Jeffrey; Roberts, Jason A
2015-10-01
In some patient groups, including critically ill patients, the pharmacokinetics of β-lactam antibiotics may be profoundly disturbed due to pathophysiological changes in distribution and elimination. Therapeutic drug monitoring (TDM) is a strategy that may help to optimise dosing. The aim of this review was to identify and analyse the published literature on the methods used for β-lactam quantification in TDM programmes. Sixteen reports described methods for the simultaneous determination of three or more β-lactam antibiotics in plasma/serum. Measurement of these antibiotics, due to low frequency of usage relative to some other tests, is generally limited to in-house chromatographic methods coupled to ultraviolet or mass spectrometric detection. Although many published methods state they are fit for TDM, they are inconvenient because of intensive sample preparation and/or long run times. Ideally, methods used for routine TDM should have a short turnaround time (fast run-time and fast sample preparation), a low limit of quantification and a sufficiently high upper limit of quantification. The published assays included a median of 6 analytes [interquartile range (IQR) 4-10], with meropenem and piperacillin being the most frequently measured β-lactam antibiotics. The median run time was 8 min (IQR 5.9-21.3 min). There is also a growing number of methods measuring free concentrations. An assay that measures antibiotics without any sample preparation would be the next step towards real-time monitoring; no such method is currently available. Copyright © 2015 Elsevier B.V. and the International Society of Chemotherapy. All rights reserved.
Hunter, Margaret; Dorazio, Robert M.; Butterfield, John S.; Meigs-Friend, Gaia; Nico, Leo; Ferrante, Jason A.
2017-01-01
A set of universal guidelines is needed to determine the limit of detection (LOD) in PCR-based analyses of low concentration DNA. In particular, environmental DNA (eDNA) studies require sensitive and reliable methods to detect rare and cryptic species through shed genetic material in environmental samples. Current strategies for assessing detection limits of eDNA are either too stringent or subjective, possibly resulting in biased estimates of species’ presence. Here, a conservative LOD analysis grounded in analytical chemistry is proposed to correct for overestimated DNA concentrations predominantly caused by the concentration plateau, a nonlinear relationship between expected and measured DNA concentrations. We have used statistical criteria to establish formal mathematical models for both quantitative and droplet digital PCR. To assess the method, a new Grass Carp (Ctenopharyngodon idella) TaqMan assay was developed and tested on both PCR platforms using eDNA in water samples. The LOD adjustment reduced Grass Carp occupancy and detection estimates while increasing uncertainty – indicating that caution needs to be applied to eDNA data without LOD correction. Compared to quantitative PCR, digital PCR had higher occurrence estimates due to increased sensitivity and dilution of inhibitors at low concentrations. Without accurate LOD correction, species occurrence and detection probabilities based on eDNA estimates are prone to a source of bias that cannot be reduced by an increase in sample size or PCR replicates. Other applications also could benefit from a standardized LOD such as GMO food analysis, and forensic and clinical diagnostics.
Kosack, Cara S.; Shanks, Leslie; Beelaert, Greet; Benson, Tumwesigye; Savane, Aboubacar; Ng'ang'a, Anne; Bita, André; Zahinda, Jean-Paul B. N.; Fransen, Katrien
2017-01-01
Our objective was to evaluate the performance of HIV testing algorithms based on WHO recommendations, using data from specimens collected at six HIV testing and counseling sites in sub-Saharan Africa (Conakry, Guinea; Kitgum and Arua, Uganda; Homa Bay, Kenya; Douala, Cameroon; Baraka, Democratic Republic of Congo). A total of 2,780 samples, including 1,306 HIV-positive samples, were included in the analysis. HIV testing algorithms were designed using Determine as a first test. Second and third rapid diagnostic tests (RDTs) were selected based on site-specific performance, adhering where possible to the WHO-recommended minimum requirements of ≥99% sensitivity and specificity. The threshold for specificity was reduced to 98% or 96% if necessary. We also simulated algorithms consisting of one RDT followed by a simple confirmatory assay. The positive predictive values (PPV) of the simulated algorithms ranged from 75.8% to 100% using strategies recommended for high-prevalence settings, 98.7% to 100% using strategies recommended for low-prevalence settings, and 98.1% to 100% using a rapid test followed by a simple confirmatory assay. Although we were able to design algorithms that met the recommended PPV of ≥99% in five of six sites using the applicable high-prevalence strategy, options were often very limited due to suboptimal performance of individual RDTs and to shared falsely reactive results. These results underscore the impact of the sequence of HIV tests and of shared false-reactivity data on algorithm performance. Where it is not possible to identify tests that meet WHO-recommended specifications, the low-prevalence strategy may be more suitable. PMID:28747371
Microfluidic devices for sample preparation and rapid detection of foodborne pathogens.
Kant, Krishna; Shahbazi, Mohammad-Ali; Dave, Vivek Priy; Ngo, Tien Anh; Chidambara, Vinayaka Aaydha; Than, Linh Quyen; Bang, Dang Duong; Wolff, Anders
2018-03-10
Rapid detection of foodborne pathogens at an early stage is imperative for preventing the outbreak of foodborne diseases, known as serious threats to human health. Conventional bacterial culturing methods for foodborne pathogen detection are time-consuming, laborious, and limited in their diagnostic capability. This has prompted researchers to call the current status of detection approaches into question and to leverage new technologies for superior pathogen sensing outcomes. Novel strategies mainly rely on incorporating all the steps from sample preparation to detection in miniaturized devices for online monitoring of pathogens with high accuracy and sensitivity in a time-saving and cost-effective manner. Lab-on-chip is a blooming area in diagnostics, which exploits different mechanical and biological techniques to detect very low concentrations of pathogens in food samples. This is achieved through streamlining the sample handling and concentrating procedures, which will subsequently reduce human errors and enhance the accuracy of the sensing methods. Integration of sample preparation techniques into these devices can effectively minimize the impact of complex food matrices on pathogen diagnosis and improve the limits of detection. Integration of pathogen-capturing bio-receptors on microfluidic devices is a crucial step, which can facilitate recognition abilities in harsh chemical and physical conditions, offering a great commercial benefit to the food-manufacturing sector. This article reviews recent advances in the current state of the art of sample preparation and concentration from food matrices, with a focus on bacterial capturing methods and sensing technologies, along with their advantages and limitations when integrated into microfluidic devices for online rapid detection of pathogens in foods and on the food production line. Copyright © 2018. Published by Elsevier Inc.
Illumina Unamplified Indexed Library Construction: An Automated Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hack, Christopher A.; Sczyrba, Alexander; Cheng, Jan-Fang
Manual library construction is a limiting factor in Illumina sequencing. Constructing libraries by hand is costly, time-consuming, low-throughput, and ergonomically hazardous, and constructing multiple libraries introduces risk of library failure due to pipetting errors. The ability to construct multiple libraries simultaneously in automated fashion represents significant cost and time savings. Here we present a strategy to construct up to 96 unamplified indexed libraries using Illumina TruSeq reagents and a Biomek FX robotic platform. We also present data to indicate that this library construction method has little or no risk of cross-contamination between samples.
Cost-effective Diagnostic Checklists for Meningitis in Resource Limited Settings
Durski, Kara N.; Kuntz, Karen M.; Yasukawa, Kosuke; Virnig, Beth A.; Meya, David B.; Boulware, David R.
2013-01-01
Background Checklists can standardize patient care, reduce errors, and improve health outcomes. For meningitis in resource-limited settings, with high patient loads and limited financial resources, CNS diagnostic algorithms may be useful to guide diagnosis and treatment. However, the cost-effectiveness of such algorithms is unknown. Methods We used decision analysis methodology to evaluate the costs, diagnostic yield, and cost-effectiveness of diagnostic strategies for adults with suspected meningitis in resource limited settings with moderate/high HIV prevalence. We considered three strategies: 1) comprehensive “shotgun” approach of utilizing all routine tests; 2) “stepwise” strategy with tests performed in a specific order with additional TB diagnostics; 3) “minimalist” strategy of sequential ordering of high-yield tests only. Each strategy resulted in one of four meningitis diagnoses: bacterial (4%), cryptococcal (59%), TB (8%), or other (aseptic) meningitis (29%). In model development, we utilized prevalence data from two Ugandan sites and published data on test performance. We validated the strategies with data from Malawi, South Africa, and Zimbabwe. Results The current comprehensive testing strategy resulted in 93.3% correct meningitis diagnoses costing $32.00/patient. A stepwise strategy had 93.8% correct diagnoses costing an average of $9.72/patient, and a minimalist strategy had 91.1% correct diagnoses costing an average of $6.17/patient. The incremental cost effectiveness ratio was $133 per additional correct diagnosis for the stepwise over minimalist strategy. Conclusions Through strategically choosing the order and type of testing coupled with disease prevalence rates, algorithms can deliver more care more efficiently. The algorithms presented herein are generalizable to East Africa and Southern Africa. PMID:23466647
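The incremental cost-effectiveness ratio quoted above follows directly from the per-patient costs and correct-diagnosis rates; a small arithmetic check using the rounded figures reported (the ~$133 presumably reflects unrounded inputs):

# Stepwise vs. minimalist strategy, using the figures quoted above.
cost_stepwise, yield_stepwise = 9.72, 0.938   # $/patient, fraction correct
cost_minimal, yield_minimal = 6.17, 0.911
icer = (cost_stepwise - cost_minimal) / (yield_stepwise - yield_minimal)
print(icer)   # roughly 131 dollars per additional correct diagnosis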
Diversity and Evolutionary History of Iron Metabolism Genes in Diatoms
Groussman, Ryan D.; Parker, Micaela S.; Armbrust, E. Virginia
2015-01-01
Ferroproteins arose early in Earth’s history, prior to the emergence of oxygenic photosynthesis and the subsequent reduction of bioavailable iron. Today, iron availability limits primary productivity in about 30% of the world’s oceans. Diatoms, responsible for nearly half of oceanic primary production, have evolved molecular strategies for coping with variable iron concentrations. Our understanding of the evolutionary breadth of these strategies has been restricted by the limited number of species for which molecular sequence data is available. To uncover the diversity of strategies marine diatoms employ to meet cellular iron demands, we analyzed 367 newly released marine microbial eukaryotic transcriptomes, which include 47 diatom species. We focused on genes encoding proteins previously identified as having a role in iron management: iron uptake (high-affinity ferric reductase, multi-copper oxidase, and Fe(III) permease); iron storage (ferritin); iron-induced protein substitutions (flavodoxin/ferredoxin, and plastocyanin/cytochrome c6) and defense against reactive oxygen species (superoxide dismutases). Homologs encoding the high-affinity iron uptake system components were detected across the four diatom Classes suggesting an ancient origin for this pathway. Ferritin transcripts were also detected in all Classes, revealing a more widespread utilization of ferritin throughout diatoms than previously recognized. Flavodoxin and plastocyanin transcripts indicate possible alternative redox metal strategies. Predicted localization signals for ferredoxin identify multiple examples of gene transfer from the plastid to the nuclear genome. Transcripts encoding four superoxide dismutase metalloforms were detected, including a putative nickel-coordinating isozyme. Taken together, our results suggest that the majority of iron metabolism genes in diatoms appear to be vertically inherited with functional diversity achieved via possible neofunctionalization of paralogs. This refined view of iron use strategies in diatoms elucidates the history of these adaptations, and provides potential molecular markers for determining the iron nutritional status of different diatom species in environmental samples. PMID:26052941
The limits of weak selection and large population size in evolutionary game theory.
Sample, Christine; Allen, Benjamin
2017-11-01
Evolutionary game theory is a mathematical approach to studying how social behaviors evolve. In many recent works, evolutionary competition between strategies is modeled as a stochastic process in a finite population. In this context, two limits are both mathematically convenient and biologically relevant: weak selection and large population size. These limits can be combined in different ways, leading to potentially different results. We consider two orderings: the wN limit, in which weak selection is applied before the large population limit, and the Nw limit, in which the order is reversed. Formal mathematical definitions of the wN and Nw limits are provided. Applying these definitions to the Moran process of evolutionary game theory, we obtain asymptotic expressions for fixation probability and conditions for success in these limits. We find that the asymptotic expressions for fixation probability, and the conditions for a strategy to be favored over a neutral mutation, are different in the wN and Nw limits. However, the ordering of limits does not affect the conditions for one strategy to be favored over another.
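In symbols, with w the selection strength, N the population size, and ρ(w, N) standing for a quantity of interest such as a fixation probability (the ρ notation is assumed here for illustration), the two orderings described above are:

\[
\text{$wN$ limit: } \lim_{N \to \infty}\,\lim_{w \to 0}\, \rho(w, N),
\qquad
\text{$Nw$ limit: } \lim_{w \to 0}\,\lim_{N \to \infty}\, \rho(w, N).
\]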
Barr, Margo L; Ferguson, Raymond A; Hughes, Phil J; Steel, David G
2014-09-04
In 2012 mobile phone numbers were included in the ongoing New South Wales Population Health Survey (NSWPHS) using an overlapping dual-frame design. Previously in the NSWPHS the sample was selected using random digit dialing (RDD) of landline phone numbers. The survey was undertaken using computer assisted telephone interviewing (CATI). The weighting strategy needed to be significantly expanded to manage the differing probabilities of selection by frame, including that of children of mobile-only phone users, and to adjust for the increased chance of selection of dual-phone users. This paper describes the development of the final weighting strategy to properly combine the data from two overlapping sample frames accounting for the fact that population benchmarks for the different sampling frames were not available at the state or regional level. Estimates of the number of phone numbers for the landline and mobile phone frames used to calculate the differing probabilities of selection by frame, for New South Wales (NSW) and by stratum, were obtained by apportioning Australian estimates as none were available for NSW. The weighting strategy was then developed by calculating person selection probabilities, selection weights, applying a constant composite factor to the dual-phone users sample weights, and benchmarking to the latest NSW population by age group, sex and stratum. Data from the NSWPHS for the first quarter of 2012 was used to test the weighting strategy. This consisted of data on 3395 respondents with 2171 (64%) from the landline frame and 1224 (36%) from the mobile frame. However, in order to calculate the weights, data needed to be available for all core weighting variables and so 3378 respondents, 2933 adults and 445 children, had sufficient data to be included. Average person weights were 3.3 times higher for the mobile-only respondents, 1.3 times higher for the landline-only respondents and 1.7 times higher for dual-phone users in the mobile frame compared to the dual-phone users in the landline frame. The overall weight effect for the first quarter of 2012 was 1.93 and the coefficient of variation of the weights was 0.96. The weight effects for 2012 were similar to, and in many cases less than, the effects found in the corresponding quarter of the 2011 NSWPHS when only a landline based sample was used. The inclusion of mobile phone numbers, through an overlapping dual-frame design, improved the coverage of the survey and an appropriate weighting procedure is feasible, although it added substantially to the complexity of the weighting strategy. Access to accurate Australian, State and Territory estimates of the number of landline and mobile phone numbers and type of phone use by at least age group and sex would greatly assist in the weighting of dual-frame surveys in Australia.
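A minimal sketch of the weighting steps described above (per-frame selection weights, a constant composite factor for dual-phone users, then benchmarking to population totals); the function names, the 0.5 composite factor, and any inputs are assumptions for illustration, not the survey's actual values.

from collections import defaultdict

def selection_weight(frame_selection_prob, dual_phone_user, composite=0.5):
    # Inverse selection probability within the respondent's own frame, with a
    # constant composite factor applied to dual-phone users because they could
    # have been sampled from either frame.
    w = 1.0 / frame_selection_prob
    return w * composite if dual_phone_user else w

def benchmark(weights, groups, population_totals):
    # Post-stratify so the summed weights in each (age group, sex, stratum)
    # cell match the latest population benchmark for that cell.
    cell_sums = defaultdict(float)
    for w, g in zip(weights, groups):
        cell_sums[g] += w
    return [w * population_totals[g] / cell_sums[g]
            for w, g in zip(weights, groups)]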
Humanizing birth: a global grassroots movement.
Goer, Henci
2004-12-01
A survey of a convenience sample of 24 grassroots birth activist groups based in several countries revealed remarkable similarities despite differences in culture and maternity care systems. With few exceptions, they began with a few individuals, generally women, who were dissatisfied or angry with an obstetric management system that failed to provide safe, effective, humane maternity care, that suppressed alternative models of care and nonconforming practitioners, or both. Responses indicated that organizational structures tend to fall into a limited number of categories, and strategies intended to accomplish reform overlap considerably. All groups have experienced difficulties resulting from the hegemony of conventional obstetric management and active opposition of practitioners within that model. Most groups are volunteer based, and all struggle under the handicap of limited resources compared with the forces arrayed against them and the scope of what they hope to accomplish.
Distributed event-triggered consensus strategy for multi-agent systems under limited resources
NASA Astrophysics Data System (ADS)
Noorbakhsh, S. Mohammad; Ghaisari, Jafar
2016-01-01
The paper proposes a distributed structure to address an event-triggered consensus problem for multi-agent systems that aims at concurrently reducing inter-agent communication, control input actuation, and energy consumption. Following the proposed approach, asymptotic convergence of all agents to consensus requires that each agent broadcasts its sampled state to its neighbours and updates its control input only at its own triggering instants, unlike existing related works. This decreases network bandwidth usage, sensor energy consumption, computational load, and actuator wear. As a result, it facilitates the implementation of the proposed consensus protocol in real-world applications with limited resources. The stability of the closed-loop system under an event-based protocol is proved analytically. Some numerical results are presented which confirm the analytical discussion on the effectiveness of the proposed design.
Search for heavy long-lived particles that decay to photons at CDF II.
Abulencia, A; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carillo, S; Carlsmith, D; Carosi, R; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Cilijak, M; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Coca, M; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Almenar, C Cuenca; Cuevas, J; Culbertson, R; Cully, J C; Daronco, S; Datta, M; D'Auria, S; Davies, T; Dagenhart, D; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'Orso, M; Delli Paoli, F; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Dörr, C; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garcia, J E; Garberson, F; Garfinkel, A F; Gay, C; Gerberich, H; Gerdes, D; Giagu, S; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, J; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; da Costa, J Guimaraes; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Holloway, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jang, D; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Karchin, P E; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraan, A C; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S 
E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, E; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Margaroli, F; Marginean, R; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Matsunaga, H; Mattson, M E; Mazini, R; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyamoto, A; Moed, S; Moggi, N; Mohr, B; Moon, C S; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savard, P; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; Staveris-Polykalas, A; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tsuno, S; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vazquez, F; Velev, G; Veramendi, G; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vollrath, I; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner, J; Wagner, W; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W 
C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zhou, J; Zucchelli, S
2007-09-21
We present the first search for heavy, long-lived particles that decay to photons at a hadron collider. We use a sample of gamma + jet + missing transverse energy events in proton-antiproton collisions at sqrt(s) = 1.96 TeV taken with the CDF II detector. Candidate events are selected based on the arrival time of the photon at the detector. Using an integrated luminosity of 570 pb(-1) of collision data, we observe 2 events, consistent with the background estimate of 1.3+/-0.7 events. While our search strategy does not rely on model-specific dynamics, we set cross section limits in a supersymmetric model in which the lightest neutralino decays to a photon and a gravitino, and place the world-best 95% C.L. lower limit on the neutralino mass of 101 GeV/c(2) at a neutralino lifetime of 5 ns.
Liu, Xiaofei; Ying, Guangyao; Sun, Chaonan; Yang, Meihua; Zhang, Lei; Zhang, Shanshan; Xing, Xiaoyan; Li, Qian; Kong, Weijun
2018-01-01
The high acidity and complex components of Hibiscus sabdariffa pose major challenges for the sensitive determination of trace aflatoxins. In this study, sample pretreatment of H. sabdariffa was systematically developed for sensitive high performance liquid chromatography-fluorescence detection (HPLC-FLD) after ultrasonication-assisted extraction, immunoaffinity column (IAC) clean-up and on-line post-column photochemical derivatization (PCD). Aflatoxins B1, B2, G1, G2 were extracted from samples by using methanol/water (70:30, v/v) with the addition of NaCl. The solutions were diluted 1:8 with 0.1 M phosphate buffer (pH 8.0) to negate the issues of high acidity and matrix interferences. The established method was validated with satisfactory linearity (R > 0.999), sensitivity (limits of detection (LODs) and limits of quantitation (LOQs) of 0.15–0.65 and 0.53–2.18 μg/kg, respectively), precision (RSD <11%), stability (RSD of 0.2–3.6%), and accuracy (recovery rates of 86.0–102.3%), which all met the stipulated analytical requirements. Analysis of 28 H. sabdariffa samples indicated that one sample incubated with Aspergillus flavus was positive for aflatoxin B1 (AFB1) at 3.11 μg/kg. The strategy developed in this study also has the potential to reliably extract and sensitively detect more mycotoxins in other complex acidic matrices, such as traditional Chinese medicines, foodstuffs, etc. PMID:29681848
Kerekes, Kata; Bonilauri, Paolo; Serraino, Andrea; Giacometti, Federica; Piva, Silvia; Zambrini, Vittorio; Canever, Alessandra; Farkas, Zsuzsa; Ambrus, Árpád
2016-12-01
The study reports the results of testing the sensitivity of an early warning sampling plan for detecting milk batches with high aflatoxin M1 (AFM1) concentrations. The effectiveness of the method was investigated by the analysis of 9017 milk samples collected in Italian milk processing plants that applied control plans with different action limits (AL). For those milk processing plants where a 30 ng kg(-1) AL was applied, the AFM1 contamination was significantly lower at or above the 95th percentile of the milk samples when compared with plants that used a 40 ng kg(-1) AL. The results show that the control plan can be used effectively for early warning of the occurrence of high AFM1 contamination of milk and to carry out pro-active measures to limit the level of contamination. Estimation of dietary exposure was also carried out, based on the aflatoxin M1 content of the milk samples and on Italian food consumption data. Estimated Daily Intakes (EDI) and Hazard Indices (HI) were calculated for different age groups of the population. HIs show that no adverse effects are expected for the adult population, but in the case of children under age three, the approximate HI values were considerably higher. This underlines the importance of the careful monitoring and control of aflatoxin M1 in milk and dairy products.
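As a rough illustration of how an Estimated Daily Intake and Hazard Index can be derived from a milk AFM1 concentration, the Python sketch below uses invented consumption figures, body weights, and an assumed reference intake of 0.2 ng per kg body weight per day; the study's actual consumption data, age groups, and reference value may differ.

```python
def estimated_daily_intake(afm1_ng_per_kg_milk, milk_kg_per_day, body_weight_kg):
    """EDI in ng AFM1 per kg body weight per day."""
    return afm1_ng_per_kg_milk * milk_kg_per_day / body_weight_kg

def hazard_index(edi, reference_intake=0.2):
    """HI = EDI / reference intake; values above 1 flag a potential concern.
    The 0.2 ng/kg bw/day reference value is an assumption for illustration."""
    return edi / reference_intake

# Invented example: a young child vs. an adult drinking milk contaminated at 25 ng/kg AFM1.
for group, milk_kg, bw_kg in (("child <3 y", 0.4, 12.0), ("adult", 0.25, 70.0)):
    edi = estimated_daily_intake(25.0, milk_kg, bw_kg)
    print(f"{group}: EDI = {edi:.2f} ng/kg bw/day, HI = {hazard_index(edi):.2f}")
```

With these made-up inputs the child's HI comes out several times higher than the adult's, which mirrors the qualitative pattern reported in the abstract.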
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
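To make the bundling idea concrete, here is a minimal Python sketch (not the authors' R implementation): candidate parametrizations are grouped into bundles, the expensive forward model is run for only a few representatives per bundle, and the resulting likelihood of replicating the field measurement is shared by all bundle members. The forward model, measurement value, error model, and bundle count are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta):
    # Stand-in for an expensive forward simulation that predicts a measurement.
    return np.sin(theta) + 0.1 * rng.normal()

y_obs = 0.6      # hypothetical field measurement
sigma = 0.15     # assumed measurement-error standard deviation

# Candidate model parametrizations, e.g. drawn from the prior.
thetas = rng.uniform(0.0, np.pi, size=200)

# "Bundle" similar parametrizations by binning the parameter axis.
n_bundles = 10
edges = np.linspace(thetas.min(), thetas.max(), n_bundles + 1)
bundle_id = np.clip(np.digitize(thetas, edges) - 1, 0, n_bundles - 1)

# Run the forward model for only a few representatives per bundle and share
# the resulting likelihood (probability of replicating the measurement)
# among all members of that bundle.
likelihood = np.empty_like(thetas)
for b in range(n_bundles):
    members = np.where(bundle_id == b)[0]
    reps = rng.choice(members, size=min(3, members.size), replace=False)
    sims = np.array([forward_model(thetas[i]) for i in reps])
    likelihood[members] = np.exp(-0.5 * ((y_obs - sims) / sigma) ** 2).mean()

posterior = likelihood / likelihood.sum()
print("posterior mode (theta):", round(float(thetas[np.argmax(posterior)]), 2))
```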
Millard, Pierre; Massou, Stéphane; Portais, Jean-Charles; Létisse, Fabien
2014-10-21
Mass spectrometry (MS) is widely used for isotopic studies of metabolism in which detailed information about biochemical processes is obtained from the analysis of isotope incorporation into metabolites. The biological value of such experiments is dependent on the accuracy of the isotopic measurements. Using MS, isotopologue distributions are measured from the quantitative analysis of isotopic clusters. These measurements are prone to various biases, which can occur during the experimental workflow and/or MS analysis. The lack of relevant standards limits investigations of the quality of the measured isotopologue distributions. To meet that need, we developed a complete theoretical and experimental framework for the biological production of metabolites with fully controlled and predictable labeling patterns. This strategy is valid for different isotopes and different types of metabolisms and organisms, and was applied to two model microorganisms, Pichia augusta and Escherichia coli, cultivated on (13)C-labeled methanol and acetate as sole carbon source, respectively. The isotopic composition of the substrates was designed to obtain samples in which the isotopologue distribution of all the metabolites should give the binomial coefficients found in Pascal's triangle. The strategy was validated on a liquid chromatography-tandem mass spectrometry (LC-MS/MS) platform by quantifying the complete isotopologue distributions of different intracellular metabolites, which were in close agreement with predictions. This strategy can be used to evaluate entire experimental workflows (from sampling to data processing) or different analytical platforms in the context of isotope labeling experiments.
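The expected labeling pattern behind the Pascal's triangle statement can be written down directly: if the carbon source is an equimolar mixture of fully unlabeled and fully 13C-labeled molecules and carbon positions are incorporated independently, each position is labeled with probability 0.5, so the isotopologue fractions of an n-carbon metabolite follow a binomial distribution whose numerators are row n of Pascal's triangle. A minimal sketch (illustrative only, not the authors' workflow):

```python
from math import comb

def expected_isotopologue_distribution(n_carbons, p_label=0.5):
    """Binomial isotopologue distribution (M0..Mn fractions) for a metabolite
    with n carbons when each carbon is 13C with probability p_label."""
    return [comb(n_carbons, k) * p_label**k * (1 - p_label)**(n_carbons - k)
            for k in range(n_carbons + 1)]

# For a 50:50 labeled/unlabeled substrate, a 3-carbon metabolite is expected
# to show fractions proportional to 1:3:3:1 (Pascal's triangle row 3).
print(expected_isotopologue_distribution(3))   # [0.125, 0.375, 0.375, 0.125]
```

Comparing measured LC-MS/MS isotopologue fractions against such predicted values is the kind of check the framework enables.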
Pilon, Alan Cesar; Carnevale Neto, Fausto; Freire, Rafael Teixeira; Cardoso, Patrícia; Carneiro, Renato Lajarim; Da Silva Bolzani, Vanderlan; Castro-Gamboa, Ian
2016-03-01
A major challenge in metabolomic studies is how to extract and analyze an entire metabolome. So far, no single method has been able to clearly complete this task in an efficient and reproducible way. In this work we proposed a sequential strategy for the extraction and chromatographic separation of metabolites from leaves of Jatropha gossypifolia using a design of experiments and a partial least squares model. The effect of 14 different solvents on the extraction process was evaluated and an optimized separation condition on liquid chromatography was estimated considering mobile phase composition and analysis time. The initial conditions of extraction using methanol and separation in 30 min between 5 and 100% water/methanol (1:1 v/v) with 0.1% of acetic acid, 20 μL sample volume, 3.0 mL min(-1) flow rate and 25°C column temperature led to 107 chromatographic peaks. After the optimization strategy using i-propanol/chloroform (1:1 v/v) for extraction, linear gradient elution of 60 min between 5 and 100% water/(acetonitrile/methanol 68:32 v/v with 0.1% of acetic acid), 30 μL sample volume, 2.0 mL min(-1) flow rate, and 30°C column temperature, we detected 140 chromatographic peaks, 30.84% more peaks compared to the initial method. This is a reliable strategy using a limited number of experiments for metabolomics protocols. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Shaffer, Anne; Obradović, Jelena
2017-03-01
Parenting is a cognitive, emotional, and behavioral endeavor, yet limited research investigates parents' executive functions and emotion regulation as predictors of how parents interact with their children. The current study is a multimethod investigation of parental self-regulation in relation to the quality of parenting behavior and parent-child interactions in a diverse sample of parents and kindergarten-age children. Using path analyses, we tested how parent executive functions (inhibitory control) and lack of emotion regulation strategies uniquely relate to both sensitive/responsive behaviors and positive/collaborative behaviors during observed interaction tasks. In our analyses, we accounted for parent education, financial stress, and social support as socioeconomic factors that likely relate to parent executive function and emotion regulation skills. In a diverse sample of primary caregivers (N = 102), we found that direct assessment of parent inhibitory control was positively associated with sensitive/responsive behaviors, whereas parent self-reported difficulties in using emotion regulation strategies were associated with lower levels of positive and collaborative dyadic behaviors. Parent education and financial stress predicted inhibitory control, and social support predicted emotion regulation difficulties; parent education was also a significant predictor of sensitive/responsive behaviors. Greater inhibitory control skills and fewer difficulties identifying effective emotion regulation strategies were not significantly related in our final path model. We discuss our findings in the context of current and emerging parenting interventions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Counterfeit analysis strategy illustrated by a case study.
Dégardin, Klara; Roggo, Yves
2016-01-01
Medicine counterfeiting is a current problem that the whole pharmaceutical field has to deal with. In 2014, counterfeits entered the legitimate supply chain in Europe. Quick and efficient action had to be taken. The aim of this paper is to explain which analytical strategy was chosen to deal with six of the cases concerned and which criteria have to be considered to provide quick and thorough information about the counterfeits. The evaluation of the packaging was performed in a first step, based on a comparison with genuine samples and evaluation of manipulation signs. Chemical methods were then used, consisting of near infrared and infrared spectroscopy, capillary zone electrophoresis and ultraviolet-visible spectrophotometry, in order to authenticate the samples and provide the chemical composition of the confirmed counterfeits. Among the 20 samples analyzed, 17 were confirmed as counterfeits. The counterfeits were the results of the manipulation of genuine samples, and one contained totally counterfeited parts. Several manipulation signs were identified, such as the addition of glue on the boxes and the vials. Genuine stolen goods had been diluted with water, while for an isolated case, a different active ingredient had been introduced in a vial. The analytical data generated were further investigated from a forensic intelligence perspective. Links could be revealed between the analyzed counterfeits, together with some interesting information about the modus operandi of the counterfeiters. The study was performed on a limited number of cases, and therefore encourages chemical and packaging profiling of counterfeits at a bigger scale. Copyright © 2015 John Wiley & Sons, Ltd.
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
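The following Python sketch illustrates the general flavor of multi-point sequential sampling with a metamodel: a Gaussian-process surrogate is refit each iteration and several new infill points are added at once using an expected-improvement criterion. This is a generic heuristic applied to an invented one-dimensional function; it is not the article's objective-oriented criterion, its constrained formulation, or its double-loop robust-design scheme.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def expensive_simulation(x):            # stand-in for an expensive simulation model
    return np.sin(3 * x) + 0.5 * x**2

X = rng.uniform(-2, 2, size=(6, 1))     # small initial design
y = expensive_simulation(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)

for _ in range(5):                      # sequential sampling iterations
    gp.fit(X, y)
    cand = np.linspace(-2, 2, 400).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    f_min = y.min()
    z = (f_min - mu) / np.maximum(sd, 1e-12)
    ei = (f_min - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    new_idx = np.argsort(ei)[-3:]       # add 3 infill points per iteration
    X = np.vstack([X, cand[new_idx]])
    y = np.append(y, expensive_simulation(cand[new_idx]).ravel())

print("best sampled value:", y.min())
```

Picking the top-k points of a single criterion is the simplest multi-point rule; more careful schemes (like the one proposed in the article) also spread the points to avoid redundant simulations.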
Sanna, Daria; Pala, Maria; Cossu, Piero; Dedola, Gian Luca; Melis, Sonia; Fresu, Giovanni; Morelli, Laura; Obinu, Domenica; Tonolo, Giancarlo; Secchi, Giannina; Triunfo, Riccardo; Lorenz, Joseph G.; Scheinfeldt, Laura; Torroni, Antonio; Robledo, Renato; Francalacci, Paolo
2011-01-01
We report a sampling strategy based on Mendelian Breeding Units (MBUs), representing an interbreeding group of individuals sharing a common gene pool. The identification of MBUs is crucial for case-control experimental design in association studies. The aim of this work was to evaluate the possible existence of bias in terms of genetic variability and haplogroup frequencies in the MBU sample, due to severe sample selection. In order to reach this goal, the MBU sampling strategy was compared to a standard selection of individuals according to their surname and place of birth. We analysed mitochondrial DNA variation (first hypervariable segment and coding region) in unrelated healthy subjects from two different areas of Sardinia: the area around the town of Cabras and the western Campidano area. No statistically significant differences were observed when the two sampling methods were compared, indicating that the stringent sample selection needed to establish a MBU does not alter original genetic variability and haplogroup distribution. Therefore, the MBU sampling strategy can be considered a useful tool in association studies of complex traits. PMID:21734814
2011-01-01
Abstract Background The combinatorial library strategy of using multiple candidate ligands in mixtures as library members is ideal in terms of cost and efficiency, but needs special screening methods to estimate the affinities of candidate ligands in such mixtures. Herein, a new method to screen candidate ligands present in unknown molar quantities in mixtures was investigated. Results The proposed method involves preparing a processed-mixture-for-screening (PMFS) with each mixture sample and an exogenous reference ligand, initiating competitive binding among ligands from the PMFS to a target immobilized on magnetic particles, recovering target-ligand complexes in equilibrium by magnetic force, extracting and concentrating bound ligands, and analyzing ligands in the PMFS and the concentrated extract by chromatography. The relative affinity of each candidate ligand to its reference ligand is estimated via an approximation equation assuming (a) the candidate ligand and its reference ligand bind to the same site(s) on the target, (b) their chromatographic peak areas are over five times their intercepts of linear response but within their linear ranges, (c) their binding ratios are below 10%. These prerequisites are met by optimizing primarily the quantity of the target used and the PMFS composition ratio. The new method was tested using the competitive binding of biotin derivatives from mixtures to streptavidin immobilized on magnetic particles as a model. Each mixture sample containing a limited number of candidate biotin derivatives with moderate differences in their molar quantities were prepared via parallel-combinatorial-synthesis (PCS) without purification, or via the pooling of individual compounds. Some purified biotin derivatives were used as reference ligands. This method showed resistance to variations in chromatographic quantification sensitivity and concentration ratios; optimized conditions to validate the approximation equation could be applied to different mixture samples. Relative affinities of candidate biotin derivatives with unknown molar quantities in each mixture sample were consistent with those estimated by a homogenous method using their purified counterparts as samples. Conclusions This new method is robust and effective for each mixture possessing a limited number of candidate ligands whose molar quantities have moderate differences, and its integration with PCS has promise to routinely practice the mixture-based library strategy. PMID:21545719
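The abstract does not reproduce the approximation equation itself; the sketch below shows one plausible form under the stated assumptions (shared binding site, binding ratios below about 10% so the free amount of each ligand is close to its total amount in the PMFS, and peak areas within the linear range), with invented peak areas. It should not be read as the authors' exact formula.

```python
def relative_affinity(area_bound_cand, area_pmfs_cand,
                      area_bound_ref, area_pmfs_ref):
    """Hypothetical competitive-binding approximation: the bound/total ratio of
    a candidate ligand relative to that of the reference ligand approximates the
    ratio of their affinities when binding ratios are low. Chromatographic peak
    areas stand in for amounts because each ligand is compared with itself
    between the PMFS and the concentrated extract of bound ligands."""
    cand_ratio = area_bound_cand / area_pmfs_cand
    ref_ratio = area_bound_ref / area_pmfs_ref
    return cand_ratio / ref_ratio

# Invented example: one candidate biotin derivative vs. the reference ligand.
print(relative_affinity(1200.0, 85000.0, 900.0, 40000.0))  # ~0.63
```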
Barnieh, Lianne; Clement, Fiona; Harris, Anthony; Blom, Marja; Donaldson, Cam; Klarenbach, Scott; Husereau, Don; Lorenzetti, Diane; Manns, Braden
2014-01-01
Background Publicly-funded drug plans vary in strategies used and policies employed to reduce continually increasing pharmaceutical expenditures. We systematically reviewed the utilization of cost-sharing strategies and physician-directed prescribing regulations in publicly-funded formularies within member nations of the Organisation for Economic Co-operation and Development (OECD). Methods & Findings Using the OECD nations as the sampling frame, a search for cost-sharing strategies and physician-directed prescribing regulations was done using published and grey literature. Collected data were verified by a system expert within the prescription drug insurance plan in each country, to ensure the accuracy of key data elements across plans. Significant variation in the use of cost-sharing mechanisms was seen. Copayments were the most commonly used cost-containment measure, though their use and amount varied for those with certain conditions, most often chronic diseases (in 17 countries), and by socio-economic status (either income or employment status), or with age (in 15 countries). Caps and deductibles were only used by five systems. Drug cost-containment strategies targeting physicians were also identified in 24 countries, including guideline-based prescribing, prescription monitoring and incentive structures. Conclusions There was variable use of cost-containment strategies to limit pharmaceutical expenditures in publicly funded formularies within OECD countries. Further research is needed to determine the best approach to constrain costs while maintaining access to pharmaceutical drugs. PMID:24618721
Howard, Andrea L; Pritchard, Tyler R
2017-01-01
This study examined rates of heavy drinking and alcohol problems in relation to drinking motives and protective behavioral strategies in university students with a documented current diagnosis of attention-deficit/hyperactivity disorder (ADHD; n = 31) compared with students with no history of ADHD (n = 146). Participants completed a Web-based questionnaire, and logistic regression models tested interactions between ADHD/comparison group membership and motives and protective strategies. Group differences in rates of heavy drinking and alcohol problems were not statistically significant, but medium-sized risk ratios showed that students without ADHD reported heavy drinking at a rate 1.44 times higher than students with ADHD and met screening criteria for problematic alcohol use at a rate of 1.54 times higher than students with ADHD. Other key findings were, first, that drinking to enhance positive affect (e.g., drinking because it is exciting), but not to cope with negative affect (e.g., drinking to forget your worries), predicted both heavy drinking and alcohol problems. Second, only protective behavioral strategies that emphasize alcohol avoidance predicted both heavy drinking and alcohol problems. Contrary to expectations, we found no ADHD-related moderation of effects of motives or protective strategies on our alcohol outcomes. Results of this study are limited by the small sample of students with ADHD but highlight tentative similarities and differences in effects of motives and strategies on drinking behaviors and alcohol problems reported by students with and without ADHD. PMID:28814878
Li, Huiyu; Zhao, Li; Xu, Yuan; Zhou, Tianyu; Liu, Haochi; Huang, Ning; Ding, Jie; Li, Yi; Ding, Lan
2018-08-01
It is difficult to detect tetracycline (TC) in honey samples by using carbon dots (CDs) because the autofluorescence of the honey matrix overlaps with the fluorescence emission spectrum of the large majority of CDs. Herein, single-hole hollow molecularly imprinted polymer embedded carbon dots (HMIP@CD) were prepared via a microwave-assisted method. TC in a diluted honey sample was adsorbed by the HMIP@CD within 3 min, after which the HMIP@CD adsorbed with TC was separated from the honey sample by centrifugation and redispersed into phosphate buffer solution. The autofluorescence of honey that interferes with the fluorescence signal of HMIP@CD was thereby avoided. The method exhibited excellent linearity within 10-200 μg L(-1) and a low detection limit of 3.1 μg L(-1). At three spiking levels of TC, the recoveries ranged from 93% to 105% with precisions below 1.6%. This method provides an effective strategy for detecting analytes in complex matrices with autofluorescence interference. Copyright © 2018. Published by Elsevier B.V.
Method to determine 226Ra in small sediment samples by ultralow background liquid scintillation.
Sanchez-Cabeza, Joan-Albert; Kwong, Laval Liong Wee; Betti, Maria
2010-08-15
(210)Pb dating of sediment cores is a widely used tool to reconstruct ecosystem evolution and historical pollution during the last century. Although (226)Ra can be determined by gamma spectrometry, this method shows severe limitations which are, among others, sample size requirements and counting times. In this work, we propose a new strategy based on the analysis of (210)Pb through (210)Po in equilibrium by alpha spectrometry, followed by the determination of (226)Ra (base or supported (210)Pb) without any further chemical purification by liquid scintillation and with a higher sample throughput. Although gamma spectrometry might still be required to determine (137)Cs as an independent tracer, the effort can then be focused only on those sections dated around 1963, when maximum activities are expected. In this work, we optimized the counting conditions, calibrated the system for changing quenching, and described the new method to determine (226)Ra in small sediment samples, after (210)Po determination, allowing a more precise determination of excess (210)Pb ((210)Pb(ex)). The method was validated with reference materials IAEA-384, IAEA-385, and IAEA-313.
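To connect the two measurements to the dating application they support, here is a small Python sketch: excess 210Pb is the total 210Pb (measured via 210Po) minus the 226Ra-supported component, and under the simple CF:CS model the age of a layer follows from the decay of the excess activity. The activity values and the choice of dating model below are illustrative assumptions, not results from this work.

```python
import math

PB210_HALF_LIFE_Y = 22.3                      # years
DECAY_CONSTANT = math.log(2) / PB210_HALF_LIFE_Y

def excess_pb210(total_pb210, ra226):
    """Unsupported (excess) 210Pb = total 210Pb minus the 226Ra-supported
    component, both in Bq/kg for the same sediment slice."""
    return total_pb210 - ra226

def cfcs_age(surface_excess, layer_excess):
    """Layer age under the constant flux / constant sedimentation (CF:CS)
    model: t = ln(A0 / Az) / lambda."""
    return math.log(surface_excess / layer_excess) / DECAY_CONSTANT

A0 = excess_pb210(180.0, 30.0)   # surface slice, Bq/kg (invented values)
Az = excess_pb210(55.0, 30.0)    # deeper slice
print(f"estimated age of the deeper slice: {cfcs_age(A0, Az):.0f} years")
```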
Ma, Yafeng; Luk, Alison; Young, Francis P; Lynch, David; Chua, Wei; Balakrishnar, Bavanthi; de Souza, Paul; Becker, Therese M
2016-08-04
Androgen receptor splice variant V7 (AR-V7) was recently identified as a valuable predictive biomarker in metastatic castrate-resistant prostate cancer. Here, we report a new, sensitive and accurate screen for AR-V7 mRNA expression directly from circulating tumor cells (CTCs): We combined EpCAM-based immunomagnetic CTC isolation using the IsoFlux microfluidic platform with droplet digital polymerase chain reaction (ddPCR) to analyze total AR and AR-V7 expression from prostate cancer patients' CTCs. We demonstrate that AR-V7 is reliably detectable in enriched CTC samples with as few as five CTCs, even considering tumor heterogeneity, and confirm detection of AR-V7 in CTC samples from advanced prostate cancer (PCa) patients, with AR-V7 detection limited to castrate-resistant disease status in our sample set. Sensitive molecular analyses of circulating tumor cells (CTCs) or circulating tumor nucleic acids present exciting strategies to detect biomarkers, such as AR-V7, from non-invasive blood samples, so-called blood biopsies.
Multidirectional Image Sensing for Microscopy Based on a Rotatable Robot.
Shen, Yajing; Wan, Wenfeng; Zhang, Lijun; Yong, Li; Lu, Haojian; Ding, Weili
2015-12-15
Image sensing at a small scale is essentially important in many fields, including microsample observation, defect inspection, material characterization and so on. However, nowadays, multi-directional micro object imaging is still very challenging due to the limited field of view (FOV) of microscopes. This paper reports a novel approach for multi-directional image sensing in microscopes by developing a rotatable robot. First, a robot with endless rotation ability is designed and integrated with the microscope. Then, the micro object is aligned to the rotation axis of the robot automatically based on the proposed forward-backward alignment strategy. After that, multi-directional images of the sample can be obtained by rotating the robot within one revolution under the microscope. To demonstrate the versatility of this approach, we view various types of micro samples from multiple directions in both optical microscopy and scanning electron microscopy, and panoramic images of the samples are processed as well. The proposed method paves a new way for the microscopy image sensing, and we believe it could have significant impact in many fields, especially for sample detection, manipulation and characterization at a small scale.
Dobecki, Marek
2012-01-01
This paper reviews the requirements for measurement methods of chemical agents in the air at workstations. European standards, which have the status of Polish standards, comprise requirements and information on sampling strategy, measuring techniques, types of samplers, sampling pumps, and methods of occupational exposure evaluation for a given technological process. Measurement methods, including air sampling and the analytical procedure in the laboratory, should be appropriately validated before intended use. In the validation process, selected methods are tested and an uncertainty budget is established. The validation procedure to be implemented in the laboratory, together with suitable statistical tools and the major components of uncertainty to be taken into consideration, is presented in this paper. Methods of quality control, including sampling and laboratory analyses, are discussed. The relative expanded uncertainty of each measurement, expressed as a percentage, should not exceed the limit values set depending on the type of occupational exposure (short-term or long-term) and the magnitude of exposure to chemical agents in the work environment.
Xiang, Guoqiang; Wang, Yule; Zhang, Heng; Fan, Huanhuan; Fan, Lu; He, Lijun; Jiang, Xiuming; Zhao, Wenjie
2018-09-15
In this work, a simple and effective strategy for designing a ratiometric fluorescent nanosensor was described. A carbon dot (CD) based dual-emission nanosensor for nitrite was prepared by coating the CDs onto dye-doped silica nanoparticles. The fluorescence of the dual-emission silica nanoparticles was quenched in sulfuric acid using potassium bromate (KBrO3). The nitrite present catalyzed the KBrO3 oxidation, resulting in a ratiometric fluorescence response of the dual-emission silica nanoparticles. Several important parameters affecting the performance of the nanosensor were investigated. Under optimized conditions, the limit of detection was 1.0 ng mL(-1) and the linear range was 10-160 ng mL(-1). Furthermore, the sensor was suitable for nitrite determination in different food samples. Copyright © 2018 Elsevier Ltd. All rights reserved.
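For readers unfamiliar with how a detection limit of this kind is typically derived, the sketch below fits a linear calibration and applies the common 3.3·sigma/slope convention. The calibration points, blank standard deviation, and the convention itself are assumptions for illustration and need not match the authors' procedure.

```python
import numpy as np

# Illustrative calibration data (not from the paper): ratiometric response
# versus nitrite concentration in ng/mL.
conc = np.array([10, 20, 40, 80, 120, 160], dtype=float)
resp = np.array([0.021, 0.043, 0.085, 0.168, 0.250, 0.331])

slope, intercept = np.polyfit(conc, resp, 1)
blank_sd = 0.0007                    # assumed standard deviation of blank replicates

lod = 3.3 * blank_sd / slope         # common 3.3*sigma/slope convention
loq = 10 * blank_sd / slope
print(f"slope = {slope:.5f}, LOD ~ {lod:.2f} ng/mL, LOQ ~ {loq:.2f} ng/mL")
```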
Frison, Eline; Eggermont, Steven
2016-03-01
Social networking sites, such as Facebook, offer adolescent users an ideal platform for negative comparison (i.e., experiencing negative feelings from social comparison). Although such negative comparison on Facebook has been associated with users' well-being, the reciprocal relations between the two remain unclear, particularly in an adolescent sample. To examine this reciprocal process, a two-wave study among a representative sample of Flemish adolescents was set up (N Time1 = 1,840). Data were analyzed using structural equation modeling. Cross-lagged analyses indicated that negative comparison on Facebook predicted decreases in life satisfaction over time. Conversely, lower scores on life satisfaction predicted increases in negative comparison on Facebook. The discussion focuses on the understanding of these findings, key limitations, directions for future research, and implications for prevention and intervention strategies.
Pérez, Rocío L; Escandar, Graciela M
2014-07-04
Following the green analytical chemistry principles, an efficient strategy involving second-order data provided by liquid chromatography (LC) with diode array detection (DAD) was applied for the simultaneous determination of estriol, 17β-estradiol, 17α-ethinylestradiol and estrone in natural water samples. After a simple pre-concentration step, LC-DAD matrix data were rapidly obtained (in less than 5 min) with a chromatographic system operating isocratically. Applying a second-order calibration algorithm based on multivariate curve resolution with alternating least-squares (MCR-ALS), successful resolution was achieved in the presence of sample constituents that strongly coelute with the analytes. The flexibility of this multivariate model allowed the quantification of the four estrogens in tap, mineral, underground and river water samples. Limits of detection in the range between 3 and 13 ng L(-1), and relative prediction errors from 2 to 11% were achieved. Copyright © 2014 Elsevier B.V. All rights reserved.
Phillips, Russell E; Stein, Catherine H
2007-06-01
Qualitative research has demonstrated that religious meaning-making coping, defined as attributions of a stressful life event that involve the sacred, is particularly relevant to persons with serious mental illness. However, recent research advances in the study of religious coping have yet to be employed in clinical samples. This longitudinal study examines religious meaning-making coping in a sample of 48 young adults diagnosed with schizophrenia or bipolar disorder over a one-year period. Young adults with mental illness generally reported using religious meaning-making coping in levels comparable to nonpsychiatric samples. Reports of benevolent religious reappraisals were associated with perceptions of positive mental health, whereas punishing God reappraisals and reappraisals of God's power were associated with self-reported distress and personal loss. Religious coping variables accounted for variation in adults' reports of psychiatric symptoms and personal loss one year later over and above demographic and global religious variables. Implications of findings for clinical practice are discussed.
de Oliveira, Fabio Santos; Korn, Mauro
2006-01-15
A sensitive SIA method was developed for sulphate determination in automotive fuel ethanol. This method was based on the reaction of sulphate with barium-dimethylsulphonazo(III), leading to a decrease in the magnitude of the analytical signal monitored at 665 nm. Alcohol fuel samples were previously combusted to avoid matrix effects in the sulphate determinations. Binary sampling and stop-flow strategies were used to increase the sensitivity of the method. The optimization of the analytical parameters was performed by the response surface method using Box-Behnken and central composite designs. The proposed sequential flow procedure permits the determination of up to 10.0 mg SO(4)(2-) l(-1) with R.S.D. <2.5% and a limit of detection of 0.27 mg l(-1). The method has been successfully applied for sulphate determination in automotive fuel alcohol and the results agreed with the reference volumetric method. Under the optimized conditions, the SIA system processed 27 samples per hour.
Interface Prostheses With Classifier-Feedback-Based User Training.
Fang, Yinfeng; Zhou, Dalin; Li, Kairu; Liu, Honghai
2017-11-01
It is evident that user training significantly affects performance of pattern-recognition-based myoelectric prosthetic device control. Despite plausible classification accuracy on offline datasets, online accuracy usually suffers from the changes in physiological conditions and electrode displacement. The user ability in generating consistent electromyographic (EMG) patterns can be enhanced via proper user training strategies in order to improve online performance. This study proposes a clustering-feedback strategy that provides real-time feedback to users by means of a visualized online EMG signal input as well as the centroids of the training samples, whose dimensionality is reduced to minimal number by dimension reduction. Clustering feedback provides a criterion that guides users to adjust motion gestures and muscle contraction forces intentionally. The experiment results have demonstrated that hand motion recognition accuracy increases steadily along the progress of the clustering-feedback-based user training, while conventional classifier-feedback methods, i.e., label feedback, hardly achieve any improvement. The result concludes that the use of proper classifier feedback can accelerate the process of user training, and implies prosperous future for the amputees with limited or no experience in pattern-recognition-based prosthetic device manipulation.
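A minimal Python sketch of the general idea behind clustering feedback (not the paper's implementation): training-sample centroids and the live EMG feature vector are projected into a low-dimensional plane so the user can see how far the current contraction is from each motion class. The feature model, class count, and noise levels are invented, and the dimension-reduction and display choices in the study may differ.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Illustrative training data: EMG feature vectors (e.g., RMS per channel)
# for three hand motions, 40 repetitions each, 8 channels.
features, labels = [], []
for motion, center in enumerate(rng.uniform(0.2, 1.0, size=(3, 8))):
    features.append(center + 0.05 * rng.normal(size=(40, 8)))
    labels.append(np.full(40, motion))
X = np.vstack(features)
y = np.concatenate(labels)

# Reduce the feature space to 2-D so it can be shown to the user as feedback.
pca = PCA(n_components=2).fit(X)
centroids = np.array([pca.transform(X[y == m]).mean(axis=0) for m in range(3)])

# A new online sample is projected into the same plane; its distance to each
# class centroid tells the user how to adjust gesture or contraction force.
online_sample = pca.transform((rng.uniform(0.2, 1.0, size=8) +
                               0.05 * rng.normal(size=8)).reshape(1, -1))
distances = np.linalg.norm(centroids - online_sample, axis=1)
print("closest training centroid:", int(np.argmin(distances)))
```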
Optimal sampling strategies for detecting zoonotic disease epidemics.
Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W
2014-06-01
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.
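A toy Python illustration of the allocation question (not the authors' model): given early-epidemic prevalences in hosts and vectors and a fixed sampling budget, the split that maximizes the chance of at least one positive test can swing entirely to one population, echoing the abrupt switching described above. The growth rates, initial prevalences, test sensitivity, and budget are all invented.

```python
import numpy as np

def detection_probability(n_host, n_vector, prev_host, prev_vector, test_sens=0.9):
    """Probability of at least one positive test when sampling n_host hosts and
    n_vector vectors, given prevalences and test sensitivity."""
    p_miss = ((1 - test_sens * prev_host) ** n_host *
              (1 - test_sens * prev_vector) ** n_vector)
    return 1 - p_miss

# Early-epidemic prevalences grow roughly exponentially after introduction;
# the rates and starting values below are illustrative only.
t = 30                                   # days since (known) introduction
prev_vector = 1e-4 * np.exp(0.12 * t)    # vector prevalence grows faster here
prev_host = 5e-5 * np.exp(0.08 * t)

budget = 200                             # total samples affordable per survey
best = max(range(budget + 1),
           key=lambda n_h: detection_probability(n_h, budget - n_h,
                                                 prev_host, prev_vector))
print(f"optimal allocation: {best} host samples, {budget - best} vector samples")
```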
Sample Strategies Used To Serve Rural Students in the Least Restrictive Environment.
ERIC Educational Resources Information Center
Helge, Doris
This booklet provides sample strategies to ameliorate service delivery problems commonly encountered by rural special educators. Strategies to increase acceptance of disabled students by nondisabled peers include buddy systems and class activities that promote personal interaction, simulation activities, and social and personal skills development.…
Setting Win Limits: An Alternative Approach to "Responsible Gambling"?
Walker, Douglas M; Litvin, Stephen W; Sobel, Russell S; St-Pierre, Renée A
2015-09-01
Social scientists, governments, and the casino industry have all emphasized the need for casino patrons to "gamble responsibly." Strategies for responsible gambling include self-imposed time limits and loss limits on gambling. Such strategies help prevent people from losing more than they can afford and may help prevent excessive gambling behavior. Yet, loss limits also make it more likely that casino patrons leave when they are losing. Oddly, the literature makes no mention of "win limits" as a potential approach to responsible gambling. A win limit would be similar to a loss limit, except the gambler would leave the casino upon reaching a pre-set level of winnings. We anticipate that a self-imposed win limit will reduce the gambler's average loss and, by default, also reduce the casino's profit. We test the effect of a self-imposed win limit by running slot machine simulations in which the treatment group of players has self-imposed and self-enforced win and loss limits, while the control group has a self-imposed loss limit or no limit. We find that the results conform to our expectations: the win limit results in improved player performance and reduced casino profits. Additional research is needed, however, to determine whether win limits could be a useful component of a responsible gambling strategy.
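A minimal Monte Carlo sketch of the comparison described above, written in Python with an invented slot payout model, session length, and limit values; it is not the authors' simulation, only an illustration of why a self-imposed win limit tends to reduce the average loss (and hence the casino's take).

```python
import numpy as np

rng = np.random.default_rng(42)

def play_session(win_limit, loss_limit, max_spins=1000, bet=1.0, rtp=0.9):
    """Simulate one slot session: the player stops at the win limit, the loss
    limit, or after max_spins. Simplified payout model: each spin returns 10x
    the bet with probability rtp/10, otherwise nothing."""
    bankroll = 0.0
    for _ in range(max_spins):
        bankroll -= bet
        if rng.random() < rtp / 10:
            bankroll += 10 * bet
        if bankroll >= win_limit or bankroll <= -loss_limit:
            break
    return bankroll

treatment = [play_session(win_limit=50, loss_limit=50) for _ in range(5000)]
control = [play_session(win_limit=float("inf"), loss_limit=50) for _ in range(5000)]
print(f"mean result with win limit:    {np.mean(treatment):6.2f}")
print(f"mean result without win limit: {np.mean(control):6.2f}")
```

Because the win-limit player banks lucky streaks instead of playing them back into a negative-expectation game, the treatment group's mean loss is smaller, matching the paper's expectation.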
Freight transportation : strategies needed to address planning and financing limitations
DOT National Transportation Integrated Search
2003-12-01
The General Accounting Office (GAO) was asked to address (1) the challenges to freight mobility, (2) the limitations key stakeholders have encountered in addressing these challenges, and (3) strategies that may aid decision makers in enhancing freigh...
Sulfonated chitosan and dopamine based coatings for metallic implants in contact with blood.
Campelo, Clayton S; Chevallier, Pascale; Vaz, Juliana M; Vieira, Rodrigo S; Mantovani, Diego
2017-03-01
Thrombosis and calcification constitute the main clinical problems when blood-interacting devices are implanted in the body. Coatings with thin polymer layers represent an acknowledged strategy to modulate interactions between the material surface and the blood environment. To ensure implant success, in the short term the coating should limit platelet adhesion and delay clot formation, and in the long term it should delay the calcification process. Sulfonated chitosan, compared to native chitosan, shows the unique ability to reduce protein adsorption, decrease thrombogenic properties and limit calcification. In this work, stainless steel surfaces, commonly used for cardiovascular applications, were coated with sulfonated chitosan, using dopamine and PEG as anchors, and the effect of these grafted surfaces on platelet adhesion, clot formation and calcification was investigated. Surface characterization techniques evidenced that the coating formation was successful, and the sulfonated chitosan grafted sample exhibited a higher roughness and hydrophilicity compared to the native chitosan one. Moreover, the sulfonated surface limited platelet activation and the process of clot formation, thus confirming its high biological performance in blood. Calcium deposits were also lower on the sulfonated chitosan sample compared to the chitosan one, showing that calcification was minimal in the presence of sulfonate groups. In conclusion, this sulfonate-modified surface has potential as a blood-interacting material. Copyright © 2016. Published by Elsevier B.V.
Lázaro-Nogal, Ana; Forner, Alicia; Traveset, Anna; Valladares, Fernando
2013-12-01
Plants have evolved different strategies to cope with drought, involving alternative ecophysiologies and different levels of plasticity. These strategies are critical for species of limited distribution, which are especially vulnerable to the current rates of rapid environmental change. The aim of this study was to assess the water strategy of two species with limited distribution, Cneorum tricoccon L. and Rhamnus ludovici-salvatoris Chodat., and evaluate their interpopulation variability along an aridity gradient to estimate their vulnerability to a drier climate. We measured different ecophysiological traits influenced by drought--stomatal conductance, maximum photochemical efficiency of photosynthesis II, carbon isotope ratio and chlorophyll concentration--in two climatically contrasting years, before and during summer drought. Both species were vulnerable to drought at the aridity limit of the gradient, but showed contrasting water strategies: while C. tricoccon was consistent in its water conservation strategy across the aridity gradient, R. ludovici-salvatoris was not, displaying higher and more variable stomatal conductances and being able to increase water-use efficiency at the most xeric sites. Changes in length and intensity of drought events may favor one species' strategy to the detriment of the other: C. tricoccon is more vulnerable to chronic and prolonged droughts, whereas short but acute droughts might have a stronger effect on R. ludovici-salvatoris. In those communities where these two species coexist, such different strategies might lead to changes in community structure under climate change scenarios, with unknown cascade effects on ecosystem functioning.
Sturrock, Hugh J W; Gething, Pete W; Ashton, Ruth A; Kolaczinski, Jan H; Kabatereine, Narcis B; Brooker, Simon
2011-09-01
In schistosomiasis control, there is a need to geographically target treatment to populations at high risk of morbidity. This paper evaluates alternative sampling strategies for surveys of Schistosoma mansoni to target mass drug administration in Kenya and Ethiopia. Two main designs are considered: lot quality assurance sampling (LQAS) of children from all schools; and a geostatistical design that samples a subset of schools and uses semi-variogram analysis and spatial interpolation to predict prevalence in the remaining unsurveyed schools. Computerized simulations are used to investigate the performance of sampling strategies in correctly classifying schools according to treatment needs and their cost-effectiveness in identifying high prevalence schools. LQAS performs better than geostatistical sampling in correctly classifying schools, but at a cost with a higher cost per high prevalence school correctly classified. It is suggested that the optimal surveying strategy for S. mansoni needs to take into account the goals of the control programme and the financial and drug resources available.
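To illustrate the kind of decision rule LQAS uses, the Python sketch below classifies a school from a fixed sample of children and computes the rule's operating characteristic from the binomial distribution. The sample size and decision threshold are invented; the study's actual LQAS parameters may differ.

```python
from math import comb

def classify_school(positives, decision_threshold=7):
    """Illustrative LQAS-style rule: a school is classified as 'high prevalence'
    if more than decision_threshold of the sampled children test positive."""
    return "high" if positives > decision_threshold else "low"

def prob_classified_high(true_prevalence, n_sampled=50, decision_threshold=7):
    """Operating characteristic: probability that a school with the given true
    prevalence is classified as high prevalence under the rule above."""
    return sum(comb(n_sampled, k) * true_prevalence**k *
               (1 - true_prevalence)**(n_sampled - k)
               for k in range(decision_threshold + 1, n_sampled + 1))

for p in (0.05, 0.10, 0.20, 0.30):
    print(f"true prevalence {p:.2f} -> P(classified high) = {prob_classified_high(p):.3f}")
```

Tabulating this probability across prevalences is how one checks that an LQAS design rarely misclassifies schools near the treatment threshold.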
Recruitment of hard-to-reach population subgroups via adaptations of the snowball sampling strategy.
Sadler, Georgia Robins; Lee, Hau-Chen; Lim, Rod Seung-Hwan; Fullerton, Judith
2010-09-01
Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author's program of research are provided to demonstrate how adaptations of snowball sampling can be used effectively in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more-vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or for research studies when the recruitment of a population-based sample is not essential.
Impact of sampling strategy on stream load estimates in till landscape of the Midwest
Vidon, P.; Hubbard, L.E.; Soyeux, E.
2009-01-01
Accurately estimating various solute loads in streams during storms is critical to accurately determine maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three different solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment so error on solute load calculations can be taken into account by landscape managers, and sampling strategies optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.
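A small Python illustration of why sampling frequency matters for load estimates (synthetic data, not the study's): concentration is sampled at different intervals while flow is recorded continuously, and the resulting load is compared with the "true" load computed from the full record.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic hourly record for one month: discharge (m3/s) with a storm pulse,
# and nitrate concentration (mg/L) that rises with flow; values are invented.
hours = np.arange(30 * 24)
flow = 1.0 + 4.0 * np.exp(-((hours - 200) / 40.0) ** 2) + 0.1 * rng.random(hours.size)
conc = 0.5 + 0.3 * flow + 0.05 * rng.normal(size=hours.size)

SECONDS_PER_HOUR = 3600.0
# conc [mg/L] * flow [m3/s] * dt [s] gives grams (1 m3 = 1000 L); /1e6 -> tonnes.
true_load = np.sum(conc * flow) * SECONDS_PER_HOUR / 1e6

def estimated_load(sampling_interval_h):
    """Load estimate when concentration is sampled only every
    sampling_interval_h hours but flow is recorded continuously: each sampled
    concentration is applied to its surrounding block of hours."""
    est = 0.0
    for i in range(0, hours.size, sampling_interval_h):
        block = slice(i, min(i + sampling_interval_h, hours.size))
        est += conc[i] * np.sum(flow[block]) * SECONDS_PER_HOUR
    return est / 1e6

for interval in (6, 24, 72):
    err = 100 * (estimated_load(interval) - true_load) / true_load
    print(f"sampling every {interval:3d} h -> error {err:+.1f}%")
```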
Marcks, Brook A; Woods, Douglas W
2007-11-01
Cognitive-behavioral models of obsessive-compulsive disorder (OCD) assume that obsessions have their origin in normal intrusive thoughts. These models propose that certain beliefs, such as thought-action fusion (TAF) beliefs, combined with the use of ineffective coping strategies, such as thought suppression, lead to the development of OCD. The purpose of the current study was to examine the relationship between these variables in a non-clinical sample in addition to exploring the effects of an alternative, acceptance-based coping strategy. This study explored the relationship between TAF beliefs, thought suppression, and OC-consistent symptoms via mediational analyses. Results showed that thought suppression mediated the relationship between TAF beliefs and OC-consistent symptoms. This study also experimentally examined the effects of various coping strategies (suppression, acceptance, or monitor-only) on the frequency of a distressing intrusion and appraisal ratings (e.g., anxiety, guilt, responsibility) after a TAF induction. Spontaneous suppression in the monitor-only group made comparisons of the experimental data difficult. However, analyses provided preliminary evidence suggesting that thought suppression is related to more intrusions, higher levels of anxiety, and negative appraisals, whereas an acceptance-based approach may be a useful alternative. Additional findings, limitations of the current study, and directions for future research are discussed.
Li, Ben; Li, Yunxiao; Qin, Zhaohui S
2017-06-01
Modern high-throughput biotechnologies such as microarray and next generation sequencing produce a massive amount of information for each sample assayed. However, in a typical high-throughput experiment, only a limited amount of data is observed for each individual feature, thus the classical 'large p, small n' problem. The Bayesian hierarchical model, capable of borrowing strength across features within the same dataset, has been recognized as an effective tool in analyzing such data. However, the shrinkage effect, the most prominent feature of hierarchical models, can lead to undesirable over-correction for some features. In this work, we discuss possible causes of the over-correction problem and propose several alternative solutions. Our strategy is rooted in the fact that in the Big Data era, large amounts of historical data are available which should be taken advantage of. Our strategy presents a new framework to enhance the Bayesian hierarchical model. Through simulation and real data analysis, we demonstrated superior performance of the proposed strategy. Our new strategy also enables borrowing information across different platforms, which could be extremely useful with the emergence of new technologies and the accumulation of data from different platforms in the Big Data era. Our method has been implemented in the R package "adaptiveHM", which is freely available from https://github.com/benliemory/adaptiveHM.
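The over-correction problem can be seen in a toy normal-normal empirical Bayes example written in Python: with only a few replicates per feature, strong true effects are shrunk heavily toward the grand mean. This generic sketch is not the adaptiveHM implementation; it only motivates why informing feature-specific priors with historical data can help.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy 'large p, small n' setting: p features, each with only n = 3 replicates.
p, n = 2000, 3
true_means = rng.normal(0.0, 1.0, size=p)
true_means[:20] += 5.0                        # a few features with real, large effects
data = true_means[:, None] + rng.normal(0.0, 2.0, size=(p, n))

sample_means = data.mean(axis=1)
sigma2 = 4.0 / n                              # known sampling variance of each mean

# Standard empirical-Bayes (normal-normal) shrinkage toward the grand mean.
tau2 = max(sample_means.var() - sigma2, 1e-6)
shrink = tau2 / (tau2 + sigma2)
eb_estimates = shrink * sample_means + (1 - shrink) * sample_means.mean()

# The few strongly affected features are pulled toward zero ('over-correction');
# borrowing information from historical data would relax this shrinkage.
print("true effect ~5, raw mean %.2f, shrunken estimate %.2f"
      % (sample_means[0], eb_estimates[0]))
```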
Evans, Scott R; Rubin, Daniel; Follmann, Dean; Pennello, Gene; Huskins, W Charles; Powers, John H; Schoenfeld, David; Chuang-Stein, Christy; Cosgrove, Sara E; Fowler, Vance G; Lautenbach, Ebbing; Chambers, Henry F
2015-09-01
Clinical trials that compare strategies to optimize antibiotic use are of critical importance but are limited by competing risks that distort outcome interpretation, complexities of noninferiority trials, large sample sizes, and inadequate evaluation of benefits and harms at the patient level. The Antibacterial Resistance Leadership Group strives to overcome these challenges through innovative trial design. Response adjusted for duration of antibiotic risk (RADAR) is a novel methodology utilizing a superiority design and a 2-step process: (1) categorizing patients into an overall clinical outcome (based on benefits and harms), and (2) ranking patients with respect to a desirability of outcome ranking (DOOR). DOORs are constructed by assigning higher ranks to patients with (1) better overall clinical outcomes and (2) shorter durations of antibiotic use for similar overall clinical outcomes. DOOR distributions are compared between antibiotic use strategies. The probability that a randomly selected patient will have a better DOOR if assigned to the new strategy is estimated. DOOR/RADAR represents a new paradigm in assessing the risks and benefits of new strategies to optimize antibiotic use. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
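Although the abstract gives no formulas, the DOOR comparison it describes amounts to a Mann-Whitney-type win probability: rank patients by overall clinical outcome, break ties by antibiotic duration, and estimate the probability that a randomly selected new-strategy patient ranks better than a randomly selected control patient. The sketch below illustrates that calculation on made-up data; the outcome categories and durations are assumptions, not trial data.

```python
import itertools

# Each patient: (overall clinical outcome rank, days of antibiotics).
# Lower outcome rank is better; for equal outcomes, shorter duration is better.
new = [(1, 5), (1, 7), (2, 6), (3, 10)]        # hypothetical new-strategy patients
ctl = [(1, 9), (2, 8), (2, 12), (3, 10)]       # hypothetical control patients

def better(a, b):
    """Return 1.0 if patient a has a more desirable outcome ranking than b,
    0.5 for a tie, and 0.0 otherwise."""
    if a[0] != b[0]:
        return 1.0 if a[0] < b[0] else 0.0
    if a[1] != b[1]:
        return 1.0 if a[1] < b[1] else 0.0
    return 0.5

# Probability that a randomly chosen new-strategy patient has a better DOOR
# than a randomly chosen control patient (a Mann-Whitney-type estimand).
wins = sum(better(a, b) for a, b in itertools.product(new, ctl))
p_better = wins / (len(new) * len(ctl))
print(f"P(new-strategy patient has better DOOR) = {p_better:.2f}")
```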
Hughesman, Curtis B; Lu, X J David; Liu, Kelly Y P; Zhu, Yuqi; Poh, Catherine F; Haynes, Charles
2016-01-01
The ability of droplet digital PCR (ddPCR) to accurately determine the concentrations of amplifiable targets makes it a promising platform for measuring copy number alterations (CNAs) in genomic biomarkers. However, its application to clinical samples, particularly formalin-fixed paraffin-embedded (FFPE) specimens, will require strategies to reliably determine CNAs in DNA of limited quantity and quality. When applied to cancerous tissue, those methods must also account for global genetic instability and the associated probability that the abundance(s) of one or more chosen reference loci do not represent the average ploidy of cells comprising the specimen. Here we present an experimental design strategy and associated data analysis tool that enable accurate determination of CNAs in a panel of biomarkers using multiplexed ddPCR. The method includes strategies to optimize primer and probe design so as to cleanly segregate droplets in the data output from reaction wells amplifying multiple independent templates, and to correct for bias from artifacts such as DNA fragmentation. We demonstrate how a panel of reference loci can be used to determine a stable CNA-neutral benchmark. Taken together, these innovations provide a comprehensive strategy that can be used to reliably detect biomarker CNAs in DNA extracted from either frozen or FFPE tissue biopsies.
Parsons, Jeffrey T.; Vial, Andrea C.; Starks, Tyrel J.; Golub, Sarit A.
2012-01-01
Although online and field-based samples of men who have sex with men (MSM) have been compared on a variety of markers, differences in drug use have not been well examined. In addition, generalization from studies comparing recruitment methods is often limited by a focus on either HIV seropositive or seronegative MSM. We compared two New York City-based samples of MSM recruited simultaneously between July 2009 and January 2010—one sample recruited in the field (n = 2402) and one sample recruited via the Internet (n = 694). All recruitment efforts targeted men without restriction on age or self-reported HIV status. Our results show marked differences in drug and alcohol use between online and field-based samples of MSM. Specifically, men surveyed online were significantly more likely to have tried a variety of drugs, including methamphetamine, cocaine, and ecstasy. Men recruited online were also more likely to report older age, HIV positive serostatus, and “never” using condoms. Internet-based recruitment was found to be more cost-effective in terms of recruitment yield than was field-based recruitment. PMID:22684657
Physiological and Proteomic Analysis of Escherichia coli Iron-Limited Chemostat Growth
Folsom, James Patrick; Parker, Albert E.
2014-01-01
Iron bioavailability is a major limiter of bacterial growth in mammalian host tissue and thus represents an important area of study. Escherichia coli K-12 metabolism was studied at four levels of iron limitation in chemostats using physiological and proteomic analyses. The data documented an E. coli acclimation gradient where progressively more severe iron scarcity resulted in a larger percentage of substrate carbon being directed into an overflow metabolism accompanied by a decrease in biomass yield on glucose. Acetate was the primary secreted organic by-product for moderate levels of iron limitation, but as stress increased, the metabolism shifted to secrete primarily lactate (∼70% of catabolized glucose carbon). Proteomic analysis reinforced the physiological data and quantified relative increases in glycolysis enzyme abundance and decreases in tricarboxylic acid (TCA) cycle enzyme abundance with increasing iron limitation stress. The combined data indicated that E. coli responds to limiting iron by investing the scarce resource in essential enzymes, at the cost of catabolic efficiency (i.e., downregulating high-ATP-yielding pathways containing enzymes with large iron requirements, like the TCA cycle). Acclimation to iron-limited growth was contrasted experimentally with acclimation to glucose-limited growth to identify both general and nutrient-specific acclimation strategies. While the iron-limited cultures maximized biomass yields on iron and increased expression of iron acquisition strategies, the glucose-limited cultures maximized biomass yields on glucose and increased expression of carbon acquisition strategies. This study quantified ecologically competitive acclimations to nutrient limitations, yielding knowledge essential for understanding medically relevant bacterial responses to host and to developing intervention strategies. PMID:24837288
Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses.
Liu, Ruijie; Holik, Aliaksei Z; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E; Asselin-Labat, Marie-Liesse; Smyth, Gordon K; Ritchie, Matthew E
2015-09-03
Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean-variance relationship of the log-counts-per-million using 'voom'. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source 'limma' package. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
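The weighting idea in the preceding abstract can be illustrated outside R (the actual method is implemented in the limma package). The numpy sketch below, under the simplifying assumption of equal observation-level weights, shows the core step: convert estimated sample variance factors into inverse-variance weights, combine them with observation-level weights, and fit a weighted least-squares model so that a noisy sample is down-weighted rather than discarded. The design, variance factors, and effect sizes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: one gene, two groups of 4 samples; sample 7 is low quality (noisier).
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X = np.column_stack([np.ones(8), group])          # design matrix (intercept + group)
sample_sd = np.array([1, 1, 1, 1, 1, 1, 3, 1.0])  # hypothetical sample variability
y = X @ np.array([5.0, 2.0]) + rng.normal(0, sample_sd)

# Sample-level weights: inverse of the estimated sample variance factors.
w_sample = 1.0 / sample_sd**2
# Observation-level (voom-style) precision weights would come from the
# mean-variance trend; here they are set equal for simplicity.
w_obs = np.ones(8)
w = w_sample * w_obs                               # combined weights

# Weighted least-squares fit of the group effect.
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(f"estimated group effect (weighted):   {beta[1]:.2f}")

beta_unw = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"estimated group effect (unweighted): {beta_unw[1]:.2f}")
```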
NASA Astrophysics Data System (ADS)
Becker, Johanna Sabine
2002-12-01
Inductively coupled plasma mass spectrometry (ICP-MS) and laser ablation ICP-MS (LA-ICP-MS) have been applied as the most important inorganic mass spectrometric techniques having multielemental capability for the characterization of solid samples in materials science. ICP-MS is used for the sensitive determination of trace and ultratrace elements in digested solutions of solid samples or of process chemicals (ultrapure water, acids and organic solutions) for the semiconductor industry with detection limits down to sub-picogram per liter levels. Whereas ICP-MS on solid samples (e.g. high-purity ceramics) sometimes requires time-consuming sample preparation for its application in materials science, and the risk of contamination is a serious drawback, a fast, direct determination of trace elements in solid materials without any sample preparation by LA-ICP-MS is possible. The detection limits for the direct analysis of solid samples by LA-ICP-MS have been determined for many elements down to the nanogram per gram range. A deterioration of detection limits was observed for elements where interferences with polyatomic ions occur. The inherent interference problem can often be solved by applying a double-focusing sector field mass spectrometer at higher mass resolution or by collision-induced reactions of polyatomic ions with a collision gas using an ICP-MS fitted with collision cell. The main problem of LA-ICP-MS is quantification if no suitable standard reference materials with a similar matrix composition are available. The calibration problem in LA-ICP-MS can be solved using on-line solution-based calibration, and different procedures, such as external calibration and standard addition, have been discussed with respect to their application in materials science. The application of isotope dilution in solution-based calibration for trace metal determination in small amounts of noble metals has been developed as a new calibration strategy. This review discusses new analytical developments and possible applications of ICP-MS and LA-ICP-MS for the quantitative determination of trace elements and in surface analysis for materials science.
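Of the calibration strategies mentioned above, standard addition is the simplest to illustrate numerically. The sketch below uses made-up signals, not data from the review: it fits the measured intensity against the added analyte concentration and recovers the original sample concentration from the magnitude of the x-axis intercept.

```python
import numpy as np

# Standard addition: measure the sample spiked with increasing amounts of analyte,
# fit signal vs. added concentration, and read the original concentration from
# the x-axis intercept (all numbers below are illustrative).
added = np.array([0.0, 1.0, 2.0, 4.0])        # added concentration, ng/g
signal = np.array([2.1, 3.9, 6.2, 10.1])      # measured intensity, counts/s

slope, intercept = np.polyfit(added, signal, 1)
c_sample = intercept / slope                  # |x-intercept| = original concentration
print(f"estimated concentration in sample: {c_sample:.2f} ng/g")
```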
Combating the illegal trade in African elephant ivory with DNA forensics.
Wasser, Samuel K; Joseph Clark, William; Drori, Ofir; Stephen Kisamo, Emily; Mailand, Celia; Mutayoba, Benezeth; Stephens, Matthew
2008-08-01
International wildlife crime is burgeoning in this climate of global trade. We contend that the most effective way to contain this illegal trade is to determine where the wildlife is being removed. This allows authorities to direct law enforcement to poaching hot spots, potentially stops trade before the wildlife is actually killed, prevents countries from denying their poaching problems at home, and thwarts trade before it enters into an increasingly complex web of international criminal activity. Forensic tools have been limited in their ability to determine product origin because the information they can provide typically begins only at the point of shipment. DNA assignment analyses can determine product origin, but its use has been limited by the inability to assign samples to locations where reference samples do not exist. We applied new DNA assignment methods that can determine the geographic origin(s) of wildlife products from anywhere within its range. We used these methods to examine the geographic origin(s) of 2 strings of seizures involving large volumes of elephant ivory, 1 string seized in Singapore and Malawi and the other in Hong Kong and Cameroon. These ivory traffickers may comprise 2 of the largest poaching rings in Africa. In both cases all ivory seized in the string had common origins, which indicates that crime syndicates are targeting specific populations for intense exploitation. This result contradicts the dominant belief that dealers are using a decentralized plan of procuring ivory stocks as they became available across Africa. Large quantities of ivory were then moved, in multiple shipments, through an intermediate country prior to shipment to Asia, as a risk-reduction strategy that distances the dealer from the poaching locale. These smuggling strategies could not have been detected by forensic information, which typically begins only at the shipping source.
Sugden, Nicole A.; Moulson, Margaret C.
2015-01-01
Psychological and developmental research have been critiqued for the lack of diversity of research samples. Because differences in culture, race, and ethnicity can influence participant behavior, limited diversity limits the generalizability of the findings. These differences may also impact how participants behave in response to recruitment attempts, which suggests that recruitment itself may be leveraged to increase sample diversity. The goal of the current study was to determine what factors, within a recruitment interaction, could be leveraged to increase success and diversity when recruiting families with children for developmental research. Study 1 found three factors influenced success: (1) recruitment was more successful when other potential participants were also interested (i.e., recruiters were busy), (2) recruiters of particular races were more successful than recruiters of other races, and (3) differences in success were related to what the recruiter said to engage the potential participant (i.e., the script). The latter two factors interacted, suggesting some recruiters were using less optimal scripts. To improve success rates, study 2 randomly assigned scripts to recruiters and encouraged them to recruit more vigorously during busy periods. Study 2 found that two factors influenced success: (1) some scripts were more successful than others and (2) we were more successful at recruiting non-White potential participants than White participants. These two interacted, with some scripts being more successful with White and other scripts being more successful with non-White families. This intervention significantly increased recruitment success rate by 8.1% and the overall number of families recruited by 15.3%. These findings reveal that empirically evaluating and tailoring recruitment efforts based on the most successful strategies is effective in boosting diversity through increased participation of children from non-White families. PMID:25972829
Bayes, Adam; Parker, Gordon; McClure, Georgia
2016-11-01
Differentiation of the bipolar disorders (BP) from borderline personality disorder (BPD) can be challenging owing to shared features, of which emotional dysregulation is likely the principal one. We aimed to assess differences in emotion regulation strategies in those with BP alone, BPD alone, and those comorbid for both. We interviewed participants previously receiving a BP or BPD diagnosis, studying those who met DSM criteria for one or both conditions. The sample comprised 83 with bipolar disorder, 53 with BPD and 54 comorbid for both. Analyses established linear trends, with the greatest impairment in emotion regulation strategies in the comorbid group, followed by the BPD group, and the least impairment in the BP group. Specific deficits in the comorbid group included impulsivity, difficulties with goal-directed behaviour, and difficulties accessing strategies. A similar linear profile was quantified for maladaptive cognitive emotion regulation strategies, weighted to catastrophizing and rumination. Adaptive emotion regulation strategies were superior in the bipolar group, without significant differences observed between the comorbid and BPD groups. Limitations include reliance on self-report measures; the combination of BP I and II participants, which limits generalisability of results to each bipolar sub-type; the use of DSM diagnoses, which risks artefactual comorbidity; and an over-representation of females in all groups. Differences in emotion regulation strategies advance differentiation of those with either BP or BPD, while we identify the specificity of differing strategies to each condition and their synergistic effect in those comorbid for both conditions. Study findings should assist the development and application of targeted strategies for those with either or both conditions. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
Watson, Dennis P; Young, Jeani; Ahonen, Emily; Xu, Huiping; Henderson, Macey; Shuman, Valery; Tolliver, Randi
2014-10-17
There is currently a lack of scientifically designed and tested implementation strategies. Such strategies are particularly important for highly complex interventions that require coordination between multiple parts to be successful. This paper presents a protocol for the development and testing of an implementation strategy for a complex intervention known as the Housing First model (HFM). Housing First is an evidence-based practice for chronically homeless individuals demonstrated to significantly improve a number of outcomes. Drawing on practices demonstrated to be useful in implementation and e-learning theory, our team is currently adapting a face-to-face implementation strategy so that it can be delivered over a distance. Research activities will be divided between Chicago and Central Indiana, two areas with significantly different barriers to HFM implementation. Ten housing providers (five from Chicago and five from Indiana) will be recruited to conduct an alpha test of each of four e-learning modules as they are developed. Providers will be requested to keep a detailed log of their experience completing the modules and participate in one of two focus groups. After refining the modules based on alpha test results, we will test the strategy among a sample of four housing organizations (two from Chicago and two from Indiana). We will collect and analyze both qualitative and quantitative data from administration and staff. Measures of interest include causal factors affecting implementation, training outcomes, and implementation outcomes. This project is an important first step in the development of an evidence-based implementation strategy to increase scalability and impact of the HFM. The project also has strong potential to increase limited scientific knowledge regarding implementation strategies in general.
An Immunization Strategy for Hidden Populations.
Chen, Saran; Lu, Xin
2017-06-12
Hidden populations, such as injecting drug users (IDUs), sex workers (SWs) and men who have sex with men (MSM), are considered at high risk of contracting and transmitting infectious diseases such as AIDS, gonorrhea, and syphilis. However, public health interventions targeting such groups are hindered by strong privacy concerns and by the lack of global network information, which traditional strategies such as targeted immunization and acquaintance immunization require. In this study, we introduce an innovative intervention strategy to be used in combination with a sampling approach that is widely used for hidden populations, respondent-driven sampling (RDS). The RDS strategy is implemented in two steps: first, RDS is used to estimate the average degree (personal network size) and the degree distribution of the target population from the sample data; second, a cut-off threshold is calculated and used to screen the respondents to be immunized. Simulations on model networks and real-world networks show that the efficiency of the RDS strategy is close to that of the targeted strategy. Because the new strategy can be implemented within the RDS sampling process, it provides a cost-efficient and feasible approach for disease intervention and control in hidden populations.
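A minimal sketch of the two-step procedure described above follows. It assumes the usual RDS-style correction in which each respondent is weighted by the inverse of his or her reported degree, and it uses an arbitrary top-10% cut-off; the degree data are simulated, and the specific estimator and threshold used by the authors may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical RDS sample: self-reported degrees of recruited respondents.
degrees = rng.pareto(2.0, size=500).astype(int) + 1

# Step 1: estimate the population mean degree. RDS over-samples high-degree
# nodes roughly in proportion to degree, so weight each respondent by 1/degree
# (a standard RDS-type correction, assumed here for illustration).
w = 1.0 / degrees
mean_degree_est = np.sum(w * degrees) / np.sum(w)   # equals n / sum(1/d)
print(f"estimated population mean degree: {mean_degree_est:.2f}")

# Step 2: choose a cut-off threshold from the weighted degree distribution,
# e.g. the degree above which the top ~10% of the population lies, and flag
# sampled respondents whose degree meets or exceeds it for immunization.
order = np.argsort(degrees)
cum_w = np.cumsum(w[order]) / np.sum(w)
threshold = degrees[order][np.searchsorted(cum_w, 0.90)]
to_immunize = degrees >= threshold
print(f"cut-off degree: {threshold}, respondents flagged: {to_immunize.sum()}")
```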
Memarian, Elham; Hosseiny Davarani, Saied Saeed; Nojavan, Saeed; Movahed, Siyavash Kazemi
2016-09-07
In this work, a new solid-phase microextraction fiber was prepared based on nitrogen-doped graphene (N-doped G). Moreover, a new strategy was proposed to solve the problems encountered in the direct coating of N-doped G. For this purpose, graphene oxide (GO) was first coated on a Pt wire by electrophoretic deposition. Chemical reduction of the coated GO to N-doped G was then accomplished with hydrazine and NH3. The prepared fiber showed good mechanical and thermal stability. The fiber was used in two different modes: conventional headspace solid-phase microextraction and cold-fiber headspace solid-phase microextraction (CF-HS-SPME). Both modes were optimized and applied to the extraction of benzene and xylenes from different aqueous samples. All effective parameters, including extraction time, salt content, stirring rate, and desorption time, were optimized. The optimized CF-HS-SPME combined with GC-FID showed good limits of detection (LODs) (0.3-2.3 μg/L), limits of quantification (LOQs) (1.0-7.0 μg/L) and linear ranges (1.0-5000 μg/L). The developed method was applied to the analysis of benzene and xylenes in rainwater and some wastewater samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Yin, Fangyuan; Liu, Junlong; Liu, Aihong; Li, Youquan; Luo, Jianxun; Guan, Guiquan; Yin, Hong
2017-04-15
Rapid and accurate diagnosis of Theileria annulata infection contributes to the formulation of strategies to eradicate this parasite. A simple and efficient diagnostic tool, recombinase polymerase amplification (RPA) combined with a lateral flow (LF) strip, was used for the detection of Theileria, in contrast to other methods that require expensive instruments and skilled personnel. Herein, we established and optimized an LF-RPA method to detect the cytochrome b gene of T. annulata mitochondrial DNA from experimentally infected and field-collected blood samples. The method has several favourable characteristics: it is rapid (clear detection in 5 min at constant temperature), sensitive (a limit of detection as low as 2 pg of genomic DNA), and specific (no cross-reaction with other piroplasms that infect cattle). The LF-RPA assay was evaluated by testing 17 field blood samples and comparing the results with those of a PCR, showing 100% agreement, which demonstrates the ability of the LF-RPA assay to detect T. annulata infections in a small number of samples (n=17). Taken together, the results indicate that this method could be used as an ideal diagnostic tool for detecting T. annulata in endemic regions with limited local resources, and could also be a potential technique for the surveillance and control of blood protozoa. Copyright © 2017 Elsevier B.V. All rights reserved.
Determination of total phthalate in cosmetics using a simple three-phase sample preparation method.
Liu, Laping; Wang, Zhengmeng; Zhao, Sihan; Duan, Jiahui; Tao, Hu; Wang, Wenji; Liu, Shuhui
2018-02-01
A simple sample preparation method requiring minimal organic solvents is proposed for the determination of the total phthalate content in cosmetics by high-performance liquid chromatography-tandem mass spectrometry. The hydrolysis of phthalates and the removal of interfering substances were performed in a three-phase system comprising an upper n-hexane phase, a middle ethanol phase, and a lower aqueous alkali solution. This three-phase system implements an incremental purification strategy: the apolar ingredients were extracted into n-hexane, the polar pigments accumulated in the ethanol phase, and the hydrolysis product, phthalic acid, remained in the hydrolysate. Under the optimized conditions, the correlation coefficients (r) for the calibration curves were 0.998-0.999 in the range 0.60-12 mol L⁻¹. The limit of detection was 5.1 μmol kg⁻¹, and the limit of quantification was 9.2 μmol kg⁻¹. The recoveries varied from 84 to 97% with RSDs equal to or lower than 11%. The intra-day and inter-day repeatability values, expressed as relative standard deviations, were less than 8.7% and 9.8%, respectively. No obvious matrix effect was observed in the different cosmetic matrices. The validated method was applied to the analysis of 57 commercial cosmetic samples. Graphical abstract: Analysis of phthalates in cosmetics using a three-phase preparation method.
Burstyn, I; Kromhout, H; Boffetta, P
2000-01-01
Workers in the road construction industry include asphalt plant, ground construction, and road paving workers. These individuals can be exposed to a wide range of potentially hazardous substances. A summary of levels of exposure to different substances measured during road construction is presented. In modern road paving, workers typically are exposed to 0.1 to 2 mg/m³ of bitumen fume, which includes 10 to 200 ng/m³ of benzo(a)pyrene. Sampling strategies and analytical methods employed in each reviewed survey are described briefly. The published reports provide some insight into the factors that influence exposure to bitumen among road construction workers: the type of work performed, meteorological conditions, and the temperature of the paved asphalt. However, there is a lack of (a) comprehensive and well-designed studies that evaluate determinants of exposure to bitumen in road construction, and (b) standard methods for bitumen sampling and analysis. Information on determinants of other exposures in road construction is either absent or limited. It is concluded that data available through published reports have limited value in assessing historical exposure levels in the road construction industry.
Pendleton, G.W.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation in detection probabilities and lack of independence among sample points can bias estimates and measures of precision. All of these factors should be considered when using point count methods.
Zhang, Yaohai; Jiao, Bining
2013-09-01
A fast and simple technique combining dispersive liquid-liquid microextraction (DLLME) with online preconcentration MEKC and diode array detection was developed for the determination of four phenoxyacetic acids, 2,4,5-trichlorophenoxyacetic acid, 2,4-dichlorophenoxyacetic acid, 2,6-dichlorophenoxyacetic acid, and 4-chlorophenoxyacetic acid, in drinking water. The four phenoxyacetic acids were baseline-separated by reversed-migration MEKC. About 145-fold increases in detection sensitivity were observed with the online concentration strategy, compared with standard hydrodynamic injection (5 s at 25 mbar pressure). LODs ranged from 0.002 to 0.005 mg/L using only the online preconcentration procedures, without any offline concentration of the extract. A DLLME procedure was used in combination with the proposed online preconcentration strategies, enabling determination of the analytes at limits of quantification ranging from 0.2 to 0.5 μg/kg, far below the maximum residue limits established by China. The recoveries obtained by DLLME at two spiking levels ranged from 67.2 to 99.4% with RSD <15%, making the proposed method suitable for the determination of phenoxyacetic acids in water samples. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Catalysis-reduction strategy for sensing inorganic and organic mercury based on gold nanoparticles.
Li, Xiaokun; Zhang, Youlin; Chang, Yulei; Xue, Bin; Kong, Xianggui; Chen, Wei
2017-06-15
In view of the high biotoxicity and trace concentrations of mercury (Hg) in environmental water, developing a simple, ultra-sensitive and highly selective method capable of simultaneously determining the various Hg species has attracted wide attention. Here, we present a novel catalysis-reduction strategy for sensing inorganic and organic mercury in aqueous solution through the cooperative effect of AuNP-catalyzed properties and the formation of gold amalgam. For the first time, a new AuNP-catalyzed organic reaction has been discovered and directly used for sensing Hg²⁺, Hg₂²⁺ and CH₃Hg⁺ according to the change in the amount of the catalytic product induced by the deposition of Hg atoms on the surface of AuNPs. The detection limit for Hg species is 5.0 pM (1 ppt), which is 3 orders of magnitude lower than the U.S. Environmental Protection Agency (EPA) limit for Hg in drinking water (2 ppb). High selectivity is achieved through the specific formation of gold amalgam. Moreover, the application to tap water samples further demonstrates that this AuNP-based assay can be an excellent method for sensing mercury at very low levels in the environment. Copyright © 2016 Elsevier B.V. All rights reserved.
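As a quick plausibility check of the reported figures (not part of the original study), 5.0 pM of mercury corresponds to roughly 1 ng/L, i.e. about 1 ppt by mass, which is indeed about three orders of magnitude below the 2 ppb EPA drinking-water limit:

```python
# Convert the reported 5.0 pM detection limit for Hg to a mass concentration.
M_Hg = 200.59            # g/mol, molar mass of mercury
c_molar = 5.0e-12        # mol/L (5.0 pM)
c_mass = c_molar * M_Hg  # g/L
print(f"{c_mass * 1e9:.1f} ng/L ~ 1 ppt (w/v); the EPA limit of 2 ppb is ~2000x higher")
```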
Shared Reading: assessing the intrinsic value of a literature-based health intervention.
Longden, Eleanor; Davis, Philip; Billington, Josie; Lampropoulou, Sofia; Farrington, Grace; Magee, Fiona; Walsh, Erin; Corcoran, Rhiannon
2015-12-01
Public health strategies have placed increasing emphasis on psychosocial and arts-based strategies for promoting well-being. This study presents preliminary findings for a specific literary-based intervention, Shared Reading, which provides community-based spaces in which individuals can relate with both literature and one another. A 12-week crossover design was conducted with 16 participants to compare benefits associated with six sessions of Shared Reading versus a comparison social activity, Built Environment workshops. Data collected included quantitative self-report measures of psychological well-being, as well as transcript analysis of session recordings and individual video-assisted interviews. Qualitative findings indicated five intrinsic benefits associated with Shared Reading: liveness, creative inarticulacy, the emotional, the personal and the group (or collective identity construction). Quantitative data additionally showed that the intervention is associated with enhancement of a sense of 'Purpose in Life'. Limitations of the study included the small sample size and ceiling effects created by generally high levels of psychological well-being at baseline. The therapeutic potential of reading groups is discussed, including the distinction between instrumental and intrinsic value within arts-and-health interventions. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Adult Coping with Childhood Sexual Abuse: A Theoretical and Empirical Review
Walsh, Kate; Fortier, Michelle A.; DiLillo, David
2009-01-01
Coping has been suggested as an important element in understanding the long-term functioning of individuals with a history of child sexual abuse (CSA). The present review synthesizes the literature on coping with CSA, first by examining theories of coping with trauma, and, second by examining how these theories have been applied to studies of coping in samples of CSA victims. Thirty-nine studies were reviewed, including eleven descriptive studies of the coping strategies employed by individuals with a history of CSA, eighteen correlational studies of the relationship between coping strategies and long-term functioning of CSA victims, and ten investigations in which coping was examined as a mediational factor in relation to long-term outcomes. These studies provide initial information regarding early sexual abuse and subsequent coping processes. However, this literature is limited by several theoretical and methodological issues, including a failure to specify the process of coping as it occurs, a disparity between theory and research, and limited applicability to clinical practice. Future directions of research are discussed and include the need to understand coping as a process, identification of coping in relation to adaptive outcomes, and considerations of more complex mediational and moderational processes in the study of coping with CSA. PMID:20161502
Impact of self-reported multiple chemical sensitivity on everyday life: a qualitative study.
Skovbjerg, Sine; Brorson, Stig; Rasmussen, Alice; Johansen, Jeanne Duus; Elberling, Jesper
2009-08-01
Multiple chemical sensitivity (MCS) is a descriptive term covering symptoms attributed to exposure to common airborne chemicals. There are no internationally accepted criteria, but it has been suggested that MCS is a chronic and disabling condition. However, details of the impact of MCS on everyday life are limited. We aimed to describe the impact of MCS on everyday life, strategies for managing the condition, and experiences with healthcare management. A focus group study was conducted, comprising two interviews with a sample of six women and six men aged 27 to 78 years, each with a duration of MCS of at least 1 year and with differing occupational circumstances. MCS may severely influence different aspects of everyday life, including lifestyle, social relations, and occupational conditions. Avoiding common airborne chemicals was the most prevalent coping strategy, which implied creating a chemical-free living space and limiting social activities. Experiences with healthcare management were overall reported as negative in terms of not receiving acknowledgement of the reported symptoms. MCS may have serious implications for daily functioning. Further research on individual consequences and the social and psychological factors that may be associated with MCS is needed in order to add to our understanding of this condition and to the provision of more satisfactory healthcare.
2014-01-01
Background Anastrepha fraterculus Wiedemann is a horticultural pest which causes significant economic losses in the fruit-producing areas of the American continent and limits the access of products to international markets. The use of environmentally friendly control strategies against this pest is constrained due to the limited knowledge of its population structure. Results We developed microsatellite markers for A. fraterculus from four genomic libraries, which were enriched in CA, CAA, GA and CAT microsatellite motifs. Fifty microsatellite regions were evaluated and 14 loci were selected for population genetics studies. Genotypes of 122 individuals sampled from four A. fraterculus populations were analyzed. The level of polymorphism ranged from three to 13 alleles per locus and the mean expected heterozygosity ranged from 0.60 to 0.64. Comparison between allelic and genotypic frequencies showed significant differences among all pairs of populations. Conclusions This novel set of microsatellite markers provides valuable information for the description of genetic variability and population structure of wild populations and laboratory strains of A. fraterculus. This information will be used to identify and characterize candidate strains suitable to implement effective pest control strategies and might represent a first step towards having a more comprehensive knowledge about the genetics of this pest. PMID:25471285
Armenta, Sergio; Garrigues, Salvador; de la Guardia, Miguel; Brassier, Judit; Alcalà, Manel; Blanco, Marcelo
2015-03-06
We developed and evaluated two different strategies for determining drugs of abuse based on (i) the analysis of saliva by ion mobility spectrometry (IMS) after thermal desorption and (ii) the joint use of IMS and infrared (IR) spectroscopy after liquid-liquid microextraction (LLME) to enable the sensitivity-enhanced detection and double confirmation of ecstasy (MDMA) abuse. Both strategies proved effective for the intended purpose. Analysing saliva by IMS after thermal desorption, which provides a limit of detection (LOD) of 160 μg L⁻¹, requires adding 0.2 M acetic acid to the sample and using the truncated negative second derivative of the ion mobility spectrum. The joint use of IMS and IR spectroscopy after LLME provides an LOD of 11 μg L⁻¹ with the former technique and 800 μg L⁻¹ with the latter, in addition to a limit of confirmation (LOC) of 1.5 mg L⁻¹. Using IMS after thermal desorption simplifies the operational procedure, and using it jointly with IR spectroscopy after LLME allows double confirmation of MDMA abuse with two techniques based on different principles (viz., IMS drift times and IR spectra). It also affords on-site analyses, albeit at a lower throughput. Copyright © 2015 Elsevier B.V. All rights reserved.
Hunt, Anne; Carter, Bernie; Abbott, Janice; Parker, Arija; Spinty, Stefan; deGoede, Christian
2016-07-01
There is limited research exploring the pain experience of boys and young men with Duchenne Muscular Dystrophy. We conducted a mixed-methods pilot study to assess the feasibility of using particular measures of pain, pain coping and quality of life within semi-structured interviews with boys and young men with Duchenne Muscular Dystrophy and a postal survey of their parents. Non-probability, convenience sampling was used. Twelve young men aged 11-21 years (median 15 years), three of whom were still ambulant, and their parents/guardians were recruited. The measures used were acceptable to the young men and demonstrated potential to provide useful data. Two-thirds of young men suffered from significant daily pain which was associated with reduced quality of life. Pain complaints were largely kept within the family. Young men's pain-coping strategies were limited by their restricted physical abilities. Statistical power based on these preliminary results suggests a study of approximately 50 boys/young men which appears feasible. Further study is needed to explore acceptable and effective methods of pain management in this population and ways of enhancing pain-coping strategies. In clinical practice, assessment of pains and discomfort should form part of all routine consultations. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
Mahmoud, Ahmed N; Elgendy, Islam Y; Mansoor, Hend; Wen, Xuerong; Mojadidi, Mohammad K; Bavry, Anthony A; Anderson, R David
2017-03-18
There are limited data on the merits of an early invasive strategy in diabetics with non-ST-elevation acute coronary syndrome, and the influence of this strategy on survival is unclear. The aim of this study was to evaluate the in-hospital survival of diabetics with non-ST-elevation acute coronary syndrome treated with an early invasive strategy compared with an initial conservative strategy. The National Inpatient Sample database, years 2012-2013, was queried for diabetics with a primary diagnosis of non-ST-elevation acute coronary syndrome, defined as either non-ST-elevation myocardial infarction or unstable angina. An early invasive strategy was defined as coronary angiography ± revascularization within 48 hours of admission. Propensity scores were used to assemble a cohort managed with either an early invasive or an initial conservative strategy, balanced on >50 baseline characteristics and hospital presentations. The incidence of in-hospital mortality was compared between the two groups. In a cohort of 363 500 diabetics with non-ST-elevation acute coronary syndrome, 164 740 (45.3%) were treated with an early invasive strategy. Propensity scoring matched 21 681 diabetics in both arms. The incidence of in-hospital mortality was lower with an early invasive strategy in both the unadjusted (2.0% vs 4.8%; odds ratio [OR], 0.41; 95% CI, 0.39-0.42; P<0.0001) and propensity-matched models (2.2% vs 3.8%; OR, 0.57; 95% CI, 0.50-0.63; P<0.0001). The benefit was observed across various subgroups, except for patients with unstable angina (P for interaction = 0.02). An early invasive strategy may be associated with a lower incidence of in-hospital mortality in patients with diabetes. The benefit of this strategy appears to be greater in patients presenting with non-ST-elevation myocardial infarction than in those with unstable angina. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
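The propensity-score step described above can be sketched in general terms. The code below is not the study's analysis (which matched on more than 50 characteristics in the National Inpatient Sample); it simulates a small cohort, fits a logistic model for receiving the early invasive strategy, performs an approximate 1:1 nearest-neighbour match on the propensity score, and compares in-hospital mortality in the matched groups. All covariates, coefficients, and outcomes are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Simulated cohort: two baseline covariates; treatment = early invasive strategy.
n = 20000
X = rng.normal(size=(n, 2))
p_treat = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.7 * X[:, 1])))
treated = rng.random(n) < p_treat
p_death = 1 / (1 + np.exp(-(-3.5 + 0.8 * X[:, 0] + 0.6 * X[:, 1] - 0.5 * treated)))
died = rng.random(n) < p_death

# Step 1: propensity score = P(early invasive | baseline covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: approximate 1:1 nearest-neighbour matching on the propensity score
# (sorted-search with replacement, for brevity; the study matched differently).
ctl_idx = np.flatnonzero(~treated)
ctl_sorted = ctl_idx[np.argsort(ps[ctl_idx])]
pos = np.searchsorted(ps[ctl_sorted], ps[treated])
pos = np.clip(pos, 0, len(ctl_sorted) - 1)
matched_ctl = ctl_sorted[pos]

# Step 3: compare in-hospital mortality in the matched cohort.
print(f"mortality, early invasive:       {died[treated].mean():.3%}")
print(f"mortality, matched conservative: {died[matched_ctl].mean():.3%}")
```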
ERIC Educational Resources Information Center
Huang, Jiuhuan; Newbern, Claudia
2012-01-01
This article reports on a study that examines the effects of metacognitive reading strategy instruction on reading performance of adult ESL learners with limited English and literacy skills. The strategy instruction was implemented over a period of four months with a group of 18 learners who were enrolled in a high beginning literacy course in an…
Assessment of medical students’ learning and study strategies in self-regulated learning
JOUHARI, ZAHRA; HAGHANI, FARIBA; CHANGIZ, TAHEREH
2016-01-01
Introduction Research on medical students shows that adopting self-regulation of effort, time, and study strategies can positively influence academic achievement. The purpose of the current study was to assess learning and study strategies in medical students. Methods This cross-sectional study was carried out in 2014-2015 at Isfahan University of Medical Sciences. The sample size was determined to be 360 students based on the results of a pilot study on 30 members of the study population. Medical students in the first to fourth year of their studies were selected through simple random sampling. A valid and reliable Persian translation of the Learning and Study Strategies Inventory (LASSI) questionnaire was completed by the students. It measures three latent factors of self-regulated learning: Skill, Will, and Self-Regulation. It consists of 80 questions in ten scales (each scale comprising eight questions, with scale scores ranging from 8 to 40). Data were analyzed using t-tests, correlation analysis, and ANOVA. Results Across the ten LASSI scales, the highest mean score belonged to test strategies (28.67±4.44) and the lowest to self-testing (21.91±4.91). The results showed statistically significant differences between male and female students in selecting the main idea, attitude, and self-testing. ANOVA and post hoc Tukey tests showed significant differences between the mean scores of the LASSI areas among students with different grade point averages (GPA). In all areas except study aids, the mean scores of students with a GPA higher than 17.5 were significantly higher than those of students with a GPA lower than 14.5. Conclusion The results showed that students need help and consultation in most areas of learning and study strategies. Using the ten areas of LASSI can reveal the strengths and weaknesses of students in various areas. Knowing their own limitations, students will be able to improve their study habits. Hence, it is suggested that students be evaluated when enrolling at universities and that educational programs be designed based on the students' characteristics. PMID:27104201
Monolithic graphene fibers for solid-phase microextraction.
Fan, Jing; Dong, Zelin; Qi, Meiling; Fu, Ruonong; Qu, Liangti
2013-12-13
Monolithic graphene fibers for solid-phase microextraction (SPME) were fabricated through a dimensionally confined hydrothermal strategy and their extraction performance was evaluated. For the fiber fabrication, a glass pipeline was used as the hydrothermal reactor instead of a Teflon-lined autoclave. Compared with conventional methods for preparing SPME fibers, the proposed strategy can fabricate a uniform graphene fiber several meters long or more at a time. Coupled to capillary gas chromatography (GC), the monolithic graphene fibers in direct-immersion (DI) mode achieved higher extraction efficiencies for aromatics than for n-alkanes, especially for polycyclic aromatic hydrocarbons (PAHs), owing to π-π stacking interactions and the hydrophobic effect. Additionally, the fibers exhibited excellent durability and could be reused more than 160 times without significant loss of extraction performance. An optimum extraction condition of 40 °C for 50 min with 20% NaCl (w/w) was used for SPME of PAHs in aqueous samples. For the determination of PAHs in water samples, the proposed DI-SPME-GC method exhibited a linear range of 0.05-200 μg/L, limits of detection (LODs) of 4.0-50 ng/L, relative standard deviations (RSDs) of less than 9.4% and 12.1% for a single fiber and between different fibers, respectively, and recoveries of 78.9-115.9%. The proposed method can be used for the analysis of PAHs in environmental water samples. Copyright © 2013 Elsevier B.V. All rights reserved.