Measuring the effect of inter-study variability on estimating prediction error.
Ma, Shuyi; Sung, Jaeyun; Magis, Andrew T; Wang, Yuliang; Geman, Donald; Price, Nathan D
2014-01-01
The biomarker discovery field is replete with molecular signatures that have not translated into the clinic despite ostensibly promising performance in predicting disease phenotypes. One widely cited reason is lack of classification consistency, largely due to failure to maintain performance from study to study. This failure is widely attributed to variability in data collected for the same phenotype among disparate studies, due to technical factors unrelated to phenotypes (e.g., laboratory settings resulting in "batch-effects") and non-phenotype-associated biological variation in the underlying populations. These sources of variability persist in new data collection technologies. Here we quantify the impact of these combined "study-effects" on a disease signature's predictive performance by comparing two types of validation methods: ordinary randomized cross-validation (RCV), which extracts random subsets of samples for testing, and inter-study validation (ISV), which excludes an entire study for testing. Whereas RCV hardwires an assumption of training and testing on identically distributed data, this key property is lost in ISV, yielding systematic decreases in performance estimates relative to RCV. Measuring the RCV-ISV difference as a function of number of studies quantifies influence of study-effects on performance. As a case study, we gathered publicly available gene expression data from 1,470 microarray samples of 6 lung phenotypes from 26 independent experimental studies and 769 RNA-seq samples of 2 lung phenotypes from 4 independent studies. We find that the RCV-ISV performance discrepancy is greater in phenotypes with few studies, and that the ISV performance converges toward RCV performance as data from additional studies are incorporated into classification. 
We show that by examining how fast ISV performance approaches RCV as the number of studies is increased, one can estimate when "sufficient" diversity has been achieved for learning a molecular signature likely to translate without significant loss of accuracy to new clinical settings.
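The RCV-versus-ISV comparison described above can be sketched with scikit-learn's `KFold` and `LeaveOneGroupOut` splitters. Everything below is synthetic and illustrative: the data generator, the batch-effect strength, and the classifier are assumptions, not the study's actual pipeline.

```python
# Sketch: randomized cross-validation (RCV) vs inter-study validation (ISV).
# Synthetic expression data with a per-study "batch effect" added to every gene.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_studies, per_study, n_genes = 6, 40, 20
X, y, study = [], [], []
for s in range(n_studies):
    batch = rng.normal(0, 2, n_genes)            # study-specific offset (assumed)
    for _ in range(per_study):
        label = rng.integers(0, 2)
        X.append(rng.normal(0, 1, n_genes) + batch + 0.8 * label)
        y.append(label)
        study.append(s)
X, y, study = np.array(X), np.array(y), np.array(study)

clf = LogisticRegression(max_iter=1000)
# RCV: random subsets, so train and test share the same study distribution.
rcv = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0))
# ISV: hold out an entire study, so its batch effect is never seen in training.
isv = cross_val_score(clf, X, y, groups=study, cv=LeaveOneGroupOut())
print(f"RCV accuracy: {rcv.mean():.2f}  ISV accuracy: {isv.mean():.2f}")
```

Repeating the ISV run while varying `n_studies` mimics the paper's convergence analysis: the RCV-ISV gap should shrink as more study diversity enters training.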
Image updating for brain deformation compensation in tumor resection
NASA Astrophysics Data System (ADS)
Fan, Xiaoyao; Ji, Songbai; Olson, Jonathan D.; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.
2016-03-01
Preoperative magnetic resonance images (pMR) are typically used for intraoperative guidance in image-guided neurosurgery, the accuracy of which can be significantly compromised by brain deformation. Biomechanical finite element models (FEM) have been developed to estimate whole-brain deformation and produce model-updated MR (uMR) that compensates for brain deformation at different surgical stages. Early stages of surgery, such as after craniotomy and after dural opening, have been well studied, whereas later stages after tumor resection begins remain challenging. In this paper, we present a method to simulate tumor resection by incorporating data from intraoperative stereovision (iSV). The amount of tissue resection was estimated from iSV using a "trial-and-error" approach, and the cortical shift was measured from iSV through a surface registration method using projected images and an optical flow (OF) motion tracking algorithm. The measured displacements were employed to drive the biomechanical brain deformation model, and the estimated whole-brain deformation was subsequently used to deform pMR and produce uMR. We illustrate the method using one patient example. The results show that the uMR aligned well with iSV and the overall misfit between model estimates and measured displacements was 1.46 mm. The overall computational time was ~5 min, including iSV image acquisition after resection, surface registration, modeling, and image warping, with minimal interruption to the surgical flow. Furthermore, we compare uMR against intraoperative MR (iMR) that was acquired following iSV acquisition.
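The "overall misfit" of 1.46 mm quoted above can be illustrated as a mean Euclidean distance between model-estimated and iSV-measured displacement vectors. The exact metric used by the authors is not stated here, so this definition and the sample values are assumptions for illustration only.

```python
# Hypothetical misfit metric: mean Euclidean distance between paired
# displacement vectors (model estimate vs iSV measurement), in mm.
import numpy as np

def mean_misfit(model_disp: np.ndarray, measured_disp: np.ndarray) -> float:
    """Both arrays: N x 3 displacement vectors in mm."""
    return float(np.linalg.norm(model_disp - measured_disp, axis=1).mean())

measured = np.array([[2.0, 1.0, 0.5], [3.5, -0.5, 1.0]])  # fabricated values
model = np.array([[2.5, 1.0, 0.5], [3.5, 0.5, 1.0]])
print(mean_misfit(model, measured))   # mean of 0.5 and 1.0 -> 0.75
```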
Mostafa, T; Rashed, L A; Zeidan, A S; Hosni, A
2015-02-01
This study aimed to assess the relationship between the glutathione-S-transferase (GST) enzyme and oxidative stress (OS) in the internal spermatic vein (ISV) of infertile men with varicocele (Vx). Ninety-five infertile oligoasthenoteratozoospermic (OAT) men with Vx underwent history taking, clinical examination and semen analysis. During inguinal varicocelectomy, GST, malondialdehyde (MDA) and glutathione peroxidase (GPx) were estimated in blood samples drawn from the ISV and median cubital veins. The mean levels of GST and GPx were significantly decreased, and the mean level of MDA was significantly increased, in the ISV compared with the peripheral blood. The mean levels of GST and GPx in the ISV were significantly decreased, and the mean level of MDA was significantly increased, in Vx grade III compared with Vx grade II cases. There was no significant difference in the mean level of GST in the ISV between unilateral and bilateral Vx cases. There was a significant positive correlation of GST with sperm count, sperm motility and GPx, and a significant negative correlation with abnormal sperm forms and MDA. It is concluded that the ISV of infertile men with Vx has decreased levels of GST compared with the peripheral venous circulation, and that this decrease is correlated with both OS and Vx grade. © 2014 Blackwell Verlag GmbH.
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Waas, Anthony M.
2011-01-01
A thermodynamically-based work potential theory for modeling progressive damage and failure in fiber-reinforced laminates is presented. The current, multiple-internal state variable (ISV) formulation, enhanced Schapery theory (EST), utilizes separate ISVs for modeling the effects of damage and failure. Damage is considered to be the effect of any structural changes in a material that manifest as pre-peak non-linearity in the stress versus strain response. Conversely, failure is taken to be the effect of the evolution of any mechanisms that result in post-peak strain softening. It is assumed that matrix microdamage is the dominant damage mechanism in continuous fiber-reinforced polymer matrix laminates, and its evolution is controlled with a single ISV. Three additional ISVs are introduced to account for failure due to mode I transverse cracking, mode II transverse cracking, and mode I axial failure. Typically, failure evolution (i.e., post-peak strain softening) results in pathologically mesh-dependent solutions within a finite element method (FEM) setting. Therefore, consistent characteristic element lengths are introduced into the formulation of the evolution of the three failure ISVs. Using the stationarity of the total work potential with respect to each ISV, a set of thermodynamically consistent evolution equations for the ISVs is derived. The theory is implemented into commercial FEM software. Objectivity of the total energy dissipated during the failure process, with regard to refinements in the FEM mesh, is demonstrated. The model is also verified against experimental results from two laminated, T800/3900-2 panels containing a central notch and different fiber-orientation stacking sequences. Global load versus displacement, global load versus local strain gage data, and macroscopic failure paths obtained from the models are compared to the experiments.
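The role of the characteristic element length can be illustrated with a crack-band-style sketch: scaling the post-peak softening strain by the element length keeps the energy dissipated per unit crack area fixed across mesh refinements. The linear softening law and the numerical values below are assumptions for illustration, not EST's actual evolution equations.

```python
# Crack-band-style sketch of mesh objectivity (assumed values, linear softening).
# Scaling the failure strain by element length l_e makes each element dissipate
# G_f / l_e per unit volume, i.e. exactly G_f per unit crack area.
G_f = 0.5        # fracture energy per unit area, N/mm (assumed)
sigma_c = 50.0   # peak stress, MPa (assumed)

def dissipated_energy(l_e: float) -> float:
    """Energy dissipated by one element of length l_e (unit cross-section)."""
    eps_f = 2.0 * G_f / (sigma_c * l_e)   # failure strain scaled by l_e
    u_vol = 0.5 * sigma_c * eps_f         # area under the softening triangle
    return u_vol * l_e                    # element volume = l_e * 1 * 1

for l_e in (0.1, 1.0, 10.0):
    print(l_e, dissipated_energy(l_e))    # identical for every mesh size
```

Without the `l_e` scaling, refining the mesh would shrink the dissipated energy toward zero, which is the pathological mesh dependence the abstract refers to.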
Saville, Christopher W N; Feige, Bernd; Kluckert, Christian; Bender, Stephan; Biscaldi, Monica; Berger, Andrea; Fleischhaker, Christian; Henighausen, Klaus; Klein, Christoph
2015-07-01
Increased intra-subject variability (ISV) in reaction times (RTs) is a promising endophenotype for attention-deficit hyperactivity disorder (ADHD) and among the most robust hallmarks of the disorder. ISV has been assumed to represent an attentional deficit, either reflecting lapses in attention or increased neural noise. Here, we use an innovative single-trial event-related potential approach to assess whether the increased ISV associated with ADHD is indeed attributable to attention, or whether it is related to response-related processing. We measured electroencephalographic responses to working memory oddball tasks in patients with ADHD (N = 20, aged 11.3 ± 1.1) and healthy controls (N = 25, aged 11.7 ± 1.1), and analysed these data with a recently developed method of single-trial event-related potential analysis. Estimates of component latency variability were computed for the stimulus-locked and response-locked forms of the P3b and the lateralised readiness potential (LRP). ADHD patients showed significantly increased behavioural ISV. This increased ISV was paralleled by an increase in variability in response-locked event-related potential latencies, while variability in stimulus-locked latencies was equivalent between groups. This result held across the P3b and LRP. Latency of all components predicted RTs on a single-trial basis, confirming that all were relevant for speed of processing. These data suggest that the increased ISV found in ADHD could be associated with response-end, rather than stimulus-end processes, in contrast to prevailing conceptions about the endophenotype. This mental chronometric approach may also be useful for exploring whether the existing lack of specificity of ISV to particular psychiatric conditions can be improved upon. © 2014 Association for Child and Adolescent Mental Health.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, J.C.; Thomas, B.L.; Pool, K.H.
1996-07-01
This report describes the results of vapor samples obtained to compare vapor sampling of the tank headspace using the Vapor Sampling System (VSS) and the In Situ Vapor Sampling System (ISVS), with and without particulate prefiltration. Samples were collected from the headspace of waste storage tank 241-S-102 (Tank S-102) at the Hanford Site in Washington State. Pacific Northwest National Laboratory (PNNL) was contracted by Westinghouse Hanford Company (WHC) to provide sampling devices and analyze samples for water, ammonia, permanent gases, total nonmethane hydrocarbons (TNMHCs, also known as TO-12), and organic analytes in samples collected in SUMMA™ canisters and on triple sorbent traps (TSTs) from the tank headspace. The analytical work was performed by the PNNL Vapor Analytical Laboratory (VAL) under the Tank Vapor Characterization Project. Work performed was based on a sampling and analysis plan (SAP) prepared by WHC. The SAP provided job-specific instructions for samples, analyses, and reporting. The SAP for this sample job was "Sampling and Analysis Plan for Tank Vapor Sampling Comparison Test", and the sample jobs were designated S6007, S6008, and S6009. Samples were collected by WHC on January 26, 1996, using the VSS, a truck-based sampling method using a heated probe, and the ISVS with and without particulate prefiltration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pool, K.H.; Evans, J.C.; Thomas, B.L.
1996-07-01
This report describes the results of vapor samples obtained to compare vapor sampling of the tank headspace using the Vapor Sampling System (VSS) and the In Situ Vapor Sampling System (ISVS), with and without particulate prefiltration. Samples were collected from the headspace of waste storage tank 241-BY-108 (Tank BY-108) at the Hanford Site in Washington State. Pacific Northwest National Laboratory (PNNL) was contracted by Westinghouse Hanford Company (WHC) to provide sampling devices and analyze samples for water, ammonia, permanent gases, total nonmethane hydrocarbons (TNMHCs, also known as TO-12), and organic analytes in samples collected in SUMMA™ canisters and on triple sorbent traps (TSTs) from the tank headspace. The analytical work was performed by the PNNL Vapor Analytical Laboratory (VAL) under the Tank Vapor Characterization Project. Work performed was based on a sampling and analysis plan (SAP) prepared by WHC. The SAP provided job-specific instructions for samples, analyses, and reporting. The SAP for this sample job was "Sampling and Analysis Plan for Tank Vapor Sampling Comparison Test", and the sample jobs were designated S6004, S6005, and S6006. Samples were collected by WHC on January 23, 1996, using the VSS, a truck-based sampling method using a heated probe, and the ISVS with and without particulate prefiltration.
Tribology and Friction of Soft Materials: Mississippi State Case Study
2010-03-18
Materials of interest include elastomers, foams, and fabrics: natural rubber, Santoprene (a vulcanized elastomer), and styrene butadiene rubber (SBR); polypropylene and polyurethane foams; and Kevlar. One stated objective is to develop an internal state variable (ISV) material model, calibrated using the materials database and verified against experiments (SEM and optical methods), with the ISV model incorporating void nucleation and implemented numerically in FEM codes, including an axially symmetric disk model.
Simulation of the Intraseasonal Variability over the Eastern Pacific ITCZ in Climate Models
NASA Technical Reports Server (NTRS)
Jiang, Xianan; Waliser, Duane E.; Kim, Daehyun; Zhao, Ming; Sperber, Kenneth R.; Stern, W. F.; Schubert, Siegfried D.; Zhang, Guang J.; Wang, Wanqiu; Khairoutdinov, Marat;
2012-01-01
During boreal summer, convective activity over the eastern Pacific (EPAC) inter-tropical convergence zone (ITCZ) exhibits vigorous intraseasonal variability (ISV). Previous observational studies identified two dominant ISV modes over the EPAC: a 40-day mode and a quasi-biweekly mode (QBM). The 40-day ISV mode is generally considered a local expression of the Madden-Julian Oscillation. However, in addition to the eastward propagation, northward propagation of the 40-day mode is also evident. The QBM mode bears a smaller spatial scale than the 40-day mode and is largely characterized by northward propagation. While the ISV over the EPAC exerts significant influence on regional climate/weather systems, investigation of contemporary model capabilities in representing these ISV modes over the EPAC is limited. In this study, model fidelity in representing these two dominant ISV modes over the EPAC is assessed by analyzing six atmospheric and three coupled general circulation models (GCMs), including one super-parameterized GCM (SPCAM) and one recently developed high-resolution GCM (GFDL HIRAM) with a horizontal resolution of about 50 km. While it remains challenging for GCMs to faithfully represent these two ISV modes, including their amplitude, evolution patterns, and periodicities, encouraging simulations are also noted. In general, SPCAM and HIRAM exhibit relatively superior skill in representing the two ISV modes over the EPAC. While the advantage of SPCAM is achieved through explicit representation of cumulus processes by the embedded 2-D cloud-resolving models, the improved representation in HIRAM can be ascribed to its strongly entraining plume cumulus scheme, which inhibits deep convection and thus effectively enhances stratiform rainfall. Sensitivity tests based on HIRAM also suggest that fine horizontal resolution can be conducive to realistically capturing the ISV over the EPAC, particularly the QBM mode.
Further analysis illustrates that the observed 40-day ISV mode over the EPAC is closely linked to the eastward propagating ISV signals from the Indian Ocean/Western Pacific, which is in agreement with the general impression that the 40-day ISV mode over the EPAC could be a local expression of the global Madden-Julian Oscillation (MJO). In contrast, the convective signals associated with the 40-day mode over the EPAC in most of the GCM simulations tend to originate between 150°E and 150°W, suggesting the 40-day ISV mode over the EPAC might be sustained without the forcing by the eastward propagating MJO. Further investigation is warranted towards improved understanding of the origin of the ISV over the EPAC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horstemeyer, Mark R.; Chaudhuri, Santanu
2015-09-30
A multiscale Internal State Variable (ISV) constitutive model was developed that captures the fundamental structure-property relationships. The macroscale ISV model used lower-length-scale simulations (Butler-Volmer kinetics and electronic structure results) to inform the ISVs at the macroscale. The chemomechanical ISV model was calibrated and validated against experiments with magnesium (Mg) alloys investigated under corrosive environments, coupled with experimental electrochemical studies. Because the ISV chemomechanical model is physically based, it can be used for other material systems to predict corrosion behavior. As such, others can use the chemomechanical model for analyzing corrosion effects on their designs.
King, Gillian; Orchard, Carole; Khalili, Hossein; Avery, Lisa
2016-01-01
Measures of interprofessional (IP) socialization are needed to capture the role of interprofessional education in preparing students and health practitioners to function as part of IP health care teams. The aims of this study were to refine a previously published version of the Interprofessional Socialization and Valuing Scale (the ISVS-24) and create two shorter equivalent forms to be used in pre-post studies. A graded response model was used to identify ISVS items in a practitioner data set (n = 345), with validation (measure invariance) conducted using a separate student sample (n = 341). Analyses indicated a unidimensional 21-item version with excellent measurement properties, Cronbach alpha of 0.988, 95% confidence interval (CI) 0.985-0.991. There was evidence of measure invariance, as there was excellent agreement of the factor scores for the practitioner and student data, intraclass correlation coefficient = 0.993, 95% CI 0.991-0.994. This indicates that the ISVS-21 measures IP socialization consistently across groups. Two 9-item equivalent versions for pre-post use were developed, with excellent agreement between the two forms. The student score agreement for the two item sets was excellent: intraclass correlation coefficient = 0.970, 95% CI 0.963-0.976. The ISVS-21 is a refined measure to assess existing levels of IP socialization in practitioners and students, and relate IP socialization to other important constructs such as IP collaboration and the development of an IP identity. The equivalent versions can be used to assess change in IP socialization as a result of interprofessional education.
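As a reminder of what the reported reliability statistic measures, here is a standard computation of Cronbach's alpha on synthetic 21-item data. The responses are fabricated, so the value will not reproduce the published 0.988; only the formula is standard.

```python
# Cronbach's alpha on a respondents-x-items score matrix (synthetic data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(0, 1, (200, 1))                   # shared latent factor
items = trait + 0.3 * rng.normal(0, 1, (200, 21))    # 21 correlated items
print(round(cronbach_alpha(items), 3))
```

Because every synthetic item loads on one latent trait with small noise, the resulting alpha is high, mirroring the unidimensional structure reported for the ISVS-21.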
Adamo, Nicoletta; Huo, Lan; Adelsberg, Samantha; Petkova, Eva; Castellanos, F. Xavier
2013-01-01
Despite the common co-occurrence of symptoms of attention deficit hyperactivity disorder (ADHD) in individuals with autism spectrum disorders (ASD), the underlying mechanisms are under-explored. A potential candidate for investigation is response time intra-subject variability (RT-ISV), a hypothesized marker of attentional lapses. Direct comparisons of RT-ISV in ASD versus ADHD are limited and contradictory. We aimed to examine whether distinct fluctuations in RT-ISV characterize children with ASD and with ADHD relative to typically developing children (TDC). We applied both a priori-based and data-driven strategies to RT performance of 46 children with ASD, 46 with ADHD, and 36 TDC (aged 7–11.9 years). Specifically, we contrasted groups relative to the amplitude of four preselected frequency bands as well as to 400 frequency bins from 0.006 to 0.345 Hz. In secondary analyses, we divided the ASD group into children with and without substantial ADHD symptoms (ASD+ and ASD−, respectively). Regardless of the strategy employed, RT-ISV fluctuations at frequencies between 0.20 and 0.345 Hz distinguished children with ADHD, but not children with ASD, from TDC. Children with ASD+ and those with ADHD shared elevated amplitudes of RT-ISV fluctuations in frequencies between 0.18 and 0.345 Hz relative to TDC. In contrast, the ASD− subgroup did not differ from TDC in RT-ISV frequency fluctuations. RT-ISV fluctuations in frequencies 0.18–0.345 Hz (i.e., periods between 3 and 5 s) are associated with ADHD symptoms regardless of categorical diagnosis and may represent a biomarker. These results suggest that children with ADHD and those with ASD+ share common underlying pathophysiological mechanisms of RT-ISV. PMID:23716135
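The frequency-bin analysis described above amounts to transforming a trial-by-trial RT series into an amplitude spectrum and summing amplitude within bands. The sketch below uses synthetic RTs with an injected slow oscillation; the trial rate and all data values are assumptions, chosen only so the band edges match the 0.006-0.345 Hz range in the abstract.

```python
# Band-limited amplitude of a reaction-time (RT) series via the FFT.
import numpy as np

rng = np.random.default_rng(0)
n_trials, trial_rate = 512, 0.75          # ~0.75 trials/s -> Nyquist 0.375 Hz
t = np.arange(n_trials) / trial_rate
# Synthetic RTs (ms): baseline + slow 0.05 Hz fluctuation + trial noise
rt = 450 + 40 * np.sin(2 * np.pi * 0.05 * t) + rng.normal(0, 20, n_trials)

freqs = np.fft.rfftfreq(n_trials, d=1 / trial_rate)
amp = np.abs(np.fft.rfft(rt - rt.mean())) / n_trials

def band_amplitude(lo: float, hi: float) -> float:
    """Total spectral amplitude of the RT series between lo and hi (Hz)."""
    mask = (freqs >= lo) & (freqs < hi)
    return float(amp[mask].sum())

print("0.006-0.20 Hz :", band_amplitude(0.006, 0.20))
print("0.20-0.345 Hz :", band_amplitude(0.20, 0.345))
```

The injected 0.05 Hz component inflates the low band, analogous to how elevated amplitude in the 0.18-0.345 Hz band distinguished the ADHD and ASD+ groups in the study.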
NASA Technical Reports Server (NTRS)
Pineda, Evan Jorge; Waas, Anthony M.
2013-01-01
A thermodynamically-based work potential theory for modeling progressive damage and failure in fiber-reinforced laminates is presented. The current, multiple-internal state variable (ISV) formulation, referred to as enhanced Schapery theory (EST), utilizes separate ISVs for modeling the effects of damage and failure. Consistent characteristic lengths are introduced into the formulation to govern the evolution of the failure ISVs. Using the stationarity of the total work potential with respect to each ISV, a set of thermodynamically consistent evolution equations for the ISVs are derived. The theory is implemented into a commercial finite element code. The model is verified against experimental results from two laminated, T800/3900-2 panels containing a central notch and different fiber-orientation stacking sequences. Global load versus displacement, global load versus local strain gage data, and macroscopic failure paths obtained from the models are compared against the experimental results.
Isosteviol Has Beneficial Effects on Palmitate-Induced α-Cell Dysfunction and Gene Expression
Chen, Xiaoping; Hermansen, Kjeld; Xiao, Jianzhong; Bystrup, Sara Kjaergaard; O'Driscoll, Lorraine; Jeppesen, Per Bendix
2012-01-01
Background Long-term exposure to high levels of fatty acids impairs insulin secretion and exaggerates glucagon secretion. The aim of this study was to explore whether the antihyperglycemic agent isosteviol (ISV) is able to counteract palmitate-induced α-cell dysfunction and to influence α-cell gene expression. Methodology/Principal Findings Long-term incubation studies with clonal α-TC1-6 cells were performed in the presence of 0.5 mM palmitate with or without ISV. We investigated effects on glucagon secretion, glucagon content, cellular triglyceride (TG) content, cell proliferation, and expression of genes involved in controlling glucagon synthesis, fatty acid metabolism, and insulin signal transduction. Furthermore, we studied effects of ISV on palmitate-induced glucagon secretion from isolated mouse islets. Culturing α-cells for 72 h with 0.5 mM palmitate in the presence of 18 mM glucose resulted in a 56% (p<0.01) increase in glucagon secretion. Concomitantly, the TG content of α-cells increased by 78% (p<0.01) and cell proliferation decreased by 19% (p<0.05). At 18 mM glucose, ISV (10⁻⁸ and 10⁻⁶ M) reduced palmitate-stimulated glucagon release by 27% (p<0.05) and 27% (p<0.05), respectively. ISV (10⁻⁶ M) also counteracted the palmitate-induced hypersecretion of glucagon in mouse islets. ISV (10⁻⁶ M) reduced the α-TC1-6 cell proliferation rate by 25% (p<0.05), but ISV (10⁻⁸ and 10⁻⁶ M) had no effect on TG content in the presence of palmitate. Palmitate (0.5 mM) increased Pcsk2 (p<0.001), Irs2 (p<0.001), Fasn (p<0.001), Srebf2 (p<0.001), Acaca (p<0.01), Pax6 (p<0.05) and Gcg mRNA expression (p<0.05). ISV significantly (p<0.05) up-regulated Insr, Irs1, Irs2, Pik3r1 and Akt1 gene expression in the presence of palmitate. Conclusions/Significance ISV counteracts α-cell hypersecretion and apparently contributes to changes in expression of key genes resulting from long-term exposure to palmitate.
ISV apparently acts as a glucagonostatic drug with potential as a new anti-diabetic drug for the treatment of type 2 diabetes. PMID:22479612
Shaidakov, E V; Rosukhovsky, D A; Grigoryan, A G; Bulatov, V L; Ilyukhin, E A
2016-01-01
In the intersaphenous vein (ISV) there may occur so-called "antegrade" or "paradoxical" reflux. This type of blood flow is revealed in a subset of patients during muscular diastole and is a link in the pathogenesis of varicose disease, but, unlike "classical" reflux, it has an antegrade direction. An incompetent saphenopopliteal junction (SPJ) is a source of the antegrade diastolic blood flow (ADBF) through the ISV. Descriptions of possible variants of impaired blood flow through the ISV are fragmentary and their interpretations are controversial. The prevalence and pathogenesis of these impairments have not yet been studied. A cross-sectional study: over 4 years, three centres examined a total of 1,413 patients diagnosed with class C2-C6 varicose veins according to the CEAP classification. All patients underwent ultrasound duplex scanning of lower-limb veins. ADBF was defined as a unidirectional antegrade blood flow with a duration of not more than 0.5 second, observed after the calf was relieved of compression (in diastole). From among the patients included in the study who had no varicose veins on the contralateral extremity and in whom the ISV could be identified, we sequentially selected 40 subjects as a control group for analysis of blood flow and ISV diameter in health. Impairments of blood flow in the ISV were revealed in 61 (4.8%) of 1,265 extremities included in the study: "classical" reflux in 9 (14.8%) limbs, ADBF in 37 (60.7%) limbs, and a combination of "classical" reflux and ADBF in 15 (24.6%) limbs. Hence, the patients were subdivided into three groups. Study of the nature of blood flow through the ISV in the control group of 40 lower limbs revealed no blood flow disorders. The mean ISV diameter in controls was 1.68 mm (Me = 1 mm). The ISV diameter was considerably larger in all studied groups than in the control group (p<0.0001).
The diameter of the ISV in its proximal portion averaged 4.48 mm (SD 1.337 mm, SE 0.171 mm); in the distal portion it averaged 5.39 mm (SD 1.725 mm, SE 0.221 mm).
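The flow categories used in this study can be formalized as a small helper function. The function name, arguments, and return labels are hypothetical; the antegrade/diastolic/duration criteria follow the definition of ADBF given in the abstract.

```python
# Hypothetical classifier for ISV flow patterns per the abstract's definitions:
# "classical" reflux = retrograde flow; ADBF = unidirectional antegrade flow
# of not more than 0.5 s observed in diastole after calf decompression.
def classify_isv_flow(direction: str, duration_s: float, in_diastole: bool) -> str:
    if direction == "retrograde":
        return "classical reflux"
    if direction == "antegrade" and in_diastole and duration_s <= 0.5:
        return "ADBF"
    return "normal/indeterminate"

print(classify_isv_flow("antegrade", 0.4, True))    # ADBF
print(classify_isv_flow("retrograde", 1.2, False))  # classical reflux
```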
Automatic intraoperative fiducial-less patient registration using cortical surface
NASA Astrophysics Data System (ADS)
Fan, Xiaoyao; Roberts, David W.; Olson, Jonathan D.; Ji, Songbai; Paulsen, Keith D.
2017-03-01
In image-guided neurosurgery, patient registration is typically performed in the operating room (OR) at the beginning of the procedure to establish the patient-to-image transformation. The accuracy and efficiency of patient registration are crucial as they are associated with surgical outcome, workflow, and healthcare costs. In this paper, we present an automatic fiducial-less patient registration (FLR) by directly registering cortical surface acquired from intraoperative stereovision (iSV) with preoperative MR (pMR) images without incorporating any prior information, and illustrate the method using one patient example. T1-weighted MR images were acquired prior to surgery and the brain was segmented. After dural opening, an image pair of the exposed cortical surface was acquired using an intraoperative stereovision (iSV) system, and a three-dimensional (3D) texture-encoded profile of the cortical surface was reconstructed. The 3D surface was registered with pMR using a multi-start binary registration method to determine the location and orientation of the iSV patch with respect to the segmented brain. A final transformation was calculated to establish the patient-to-MR relationship. The total computational time was 30 min, and can be significantly improved through code optimization, parallel computing, and/or graphical processing unit (GPU) acceleration. The results show that the iSV texture map aligned well with pMR using the FLR transformation, while misalignment was evident with fiducial-based registration (FBR). The difference between FLR and FBR was calculated at the center of craniotomy and the resulting distance was 4.34 mm. The results presented in this paper suggest potential for clinical application in the future.
Jegaskanda, Sinthujan; Mason, Rosemarie D; Andrews, Sarah F; Wheatley, Adam K; Zhang, Ruijun; Reynoso, Glennys V; Ambrozak, David R; Santos, Celia P; Luke, Catherine J; Matsuoka, Yumiko; Brenchley, Jason M; Hickman, Heather D; Talaat, Kawsar R; Permar, Sallie R; Liao, Hua-Xin; Yewdell, Jonathan W; Koup, Richard A; Roederer, Mario; McDermott, Adrian B; Subbarao, Kanta
2018-05-01
Pandemic live attenuated influenza vaccines (pLAIV) prime subjects for a robust neutralizing antibody response upon subsequent administration of a pandemic inactivated subunit vaccine (pISV). However, a difference was not detected in H5-specific memory B cells in the peripheral blood between pLAIV-primed and unprimed subjects prior to pISV boost. To investigate the mechanism underlying pLAIV priming, we vaccinated groups of 12 African green monkeys (AGMs) with H5N1 pISV or pLAIV alone or H5N1 pLAIV followed by pISV and examined immunity systemically and in local draining lymph nodes (LN). The AGM model recapitulated the serologic observations from clinical studies. Interestingly, H5N1 pLAIV induced robust germinal center B cell responses in the mediastinal LN (MLN). Subsequent boosting with H5N1 pISV drove increases in H5-specific B cells in the axillary LN, spleen, and circulation in H5N1 pLAIV-primed animals. Thus, H5N1 pLAIV primes localized B cell responses in the MLN that are recalled systemically following pISV boost. These data provide mechanistic insights for the generation of robust humoral responses via prime-boost vaccination. IMPORTANCE We have previously shown that pandemic live attenuated influenza vaccines (pLAIV) prime for a rapid and robust antibody response on subsequent administration of inactivated subunit vaccine (pISV). This is observed even in individuals who had undetectable antibody (Ab) responses following the initial vaccination. To define the mechanistic basis of pLAIV priming, we turned to a nonhuman primate model and performed a detailed analysis of B cell responses in systemic and local lymphoid tissues following prime-boost vaccination with pLAIV and pISV. We show that the nonhuman primate model recapitulates the serologic observations from clinical studies. Further, we found that pLAIVs induced robust germinal center B cell responses in the mediastinal lymph node. 
Subsequent boosting with pISV in pLAIV-primed animals resulted in detection of B cells in the axillary lymph nodes, spleen, and peripheral blood. We demonstrate that intranasally administered pLAIV elicits a highly localized germinal center B cell response in the mediastinal lymph node that is rapidly recalled following pISV boost into germinal center reactions at numerous distant immune sites. Copyright © 2018 American Society for Microbiology.
Probe for optically monitoring progress of in-situ vitrification of soil
Timmerman, Craig L.; Oma, Kenton H.; Davis, Karl C.
1988-01-01
A detector system for sensing the progress of an ISV process along an expected path comprises multiple sensors each having an input port. The input ports are distributed along the expected path of the ISV process between a starting location and an expected ending location. Each sensor generates an electrical signal representative of the temperature in the vicinity of its input port. A signal processor is coupled to the sensors to receive an electrical signal generated by a sensor, and generate a signal which is encoded with information which identifies the sensor and whether the ISV process has reached the sensor's input port. A transmitter propagates the encoded signal. The signal processor and the transmitter are below ground at a location beyond the expected ending location of the ISV process in the direction from the starting location to the expected ending location. A signal receiver and a decoder are located above ground for receiving the encoded signal propagated by the transmitter, decoding the encoded signal and providing a human-perceptible indication of the progress of the ISV process.
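The encoded progress signal described above can be illustrated with a minimal sketch. Nothing below comes from the patent itself; the single-byte packing scheme, field widths, and function names are assumptions chosen purely to show one way a sensor's identity and a reached/not-reached flag could share a single encoded value.

```python
# Hypothetical encoding sketch: pack a 7-bit sensor ID and a 1-bit flag
# ("has the ISV melt front reached this sensor's input port?") into one byte.

def encode_reading(sensor_id: int, front_reached: bool) -> int:
    """Pack a 7-bit sensor ID and a 1-bit status flag into one byte."""
    if not 0 <= sensor_id < 128:
        raise ValueError("sensor_id must fit in 7 bits")
    return (sensor_id << 1) | int(front_reached)

def decode_reading(byte: int) -> tuple[int, bool]:
    """Recover (sensor_id, front_reached) from an encoded byte."""
    return byte >> 1, bool(byte & 1)
```

A receiver/decoder stage as in the abstract would simply call `decode_reading` on each received byte to display which sensors the melt has passed.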
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This Quality Assurance Project Plan (QAPjP) establishes the quality assurance procedures and requirements to be implemented for the control of quality-related activities for Phase 3 of the Treatability Study (TS) of In Situ Vitrification (ISV) of Seepage Pit 1, ORNL Waste Area Grouping 7. This QAPjP supplements the Quality Assurance Plan for Oak Ridge National Laboratory Environmental Restoration Program by providing information specific to the ISV-TS. Phase 3 of the TS involves the actual ISV melt operations and posttest monitoring of Pit 1 and vicinity. Previously, Phase 1 activities were completed, which involved determining the boundaries of Pit 1 using driven rods and pipes and mapping the distribution of radioactivity using logging tools within the pipes. Phase 2 involved sampling the contents, both liquid and solid, in and around Seepage Pit 1 to determine their chemical and radionuclide composition and the spatial distribution of these attributes. A separate QAPjP was developed for each phase of the project. A readiness review of the Phase 3 activities presented in this QAPjP will be conducted prior to initiating field activities, and an Operational Acceptance Test (OAT) will also be conducted with no contamination involved. After the OAT is complete, the ISV process will be restarted, and the melt will be allowed to increase in depth and incorporate the radionuclide contamination at the bottom of Pit 1. Upon completion of melt 1, the equipment will be shut down and mobilized to an adjacent location at which melt 2 will commence.
Hwang-Gu, Shoou-Lian; Lin, Hsiang-Yuan; Chen, Yu-Chi; Tseng, Yu-Han; Hsu, Wen-Yau; Chou, Miao-Chun; Chou, Wen-Jun; Wu, Yu-Yu; Gau, Susan Shur-Fen
2018-05-30
Increased intrasubject variability in reaction times (RT-ISV) is frequently found in individuals with autism spectrum disorder (ASD). However, how dimensional attention-deficit/hyperactivity disorder (ADHD) symptoms impact RT-ISV in individuals with ASD remains elusive. We assessed 97 high-functioning youths with co-occurring ASD and ADHD (ASD+ADHD), 124 high-functioning youths with ASD only, 98 youths with ADHD only, and 249 typically developing youths, 8-18 years of age, using the Conners Continuous Performance Test (CCPT). We compared the conventional CCPT parameters (omission errors, commission errors, mean RT, and RT standard error (RTSE)) as well as the ex-Gaussian parameters of RT (mu, sigma, and tau) across the four groups. We also conducted regression analyses to assess the relationships between RT indices and symptoms of ADHD and ASD in the ASD group (i.e., the ASD+ADHD and ASD-only groups). The ASD+ADHD and ADHD-only groups had higher RT-ISV than the other two groups. RT-ISV, specifically RTSE and tau, was significantly associated with ADHD symptoms rather than autistic traits in the ASD group. Regression models also revealed that sex partly accounted for RT-ISV variance in the ASD group. A post hoc analysis showed that girls with ASD had higher tau and RTSE values than their male counterparts. Our results suggest that RT-ISV is primarily associated with co-occurring ADHD symptoms/diagnosis in children and adolescents with ASD. These results do not support the hypothesis of response variability as a transdiagnostic phenotype for ASD and ADHD and warrant further validation at a neural level.
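The ex-Gaussian decomposition of RT distributions used above (mu, sigma, tau) can be sketched with SciPy's `exponnorm` distribution, whose shape parameter K corresponds to tau/sigma. This is a minimal illustration on simulated reaction times, not the authors' CCPT analysis pipeline; the parameter values are assumed.

```python
# Illustrative ex-Gaussian fit to simulated reaction times (ms).
# scipy.stats.exponnorm is Normal(loc, scale) + Exponential(mean = K * scale),
# so mu = loc, sigma = scale, tau = K * scale.
import numpy as np
from scipy import stats

mu, sigma, tau = 400.0, 50.0, 150.0        # assumed illustrative values, ms
rts = stats.exponnorm.rvs(tau / sigma, loc=mu, scale=sigma,
                          size=5000, random_state=0)

K_hat, mu_hat, sigma_hat = stats.exponnorm.fit(rts)
tau_hat = K_hat * sigma_hat                # larger tau = heavier slow-RT tail
print(mu_hat, sigma_hat, tau_hat)
```

In studies like the one above, a higher fitted tau is the ex-Gaussian marker of occasional very slow responses, distinct from overall slowing (mu) or Gaussian spread (sigma).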
GEOSAFE CORPORATION IN SITU VITRIFICATION: INNOVATIVE TECHNOLOGY EVALUATION REPORT
This report summarizes the findings associated with a Demonstration of the Geosafe Corporation (Geosafe) In Situ Vitrification (ISV) Process. The Geosafe ISV Technology was evaluated under the EPA Superfund Innovative Technology Evaluation (SITE) Program in conjunction with remedi...
Operability test report for the in situ vapor sampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corbett, J.E., Westinghouse Hanford
1996-05-31
This report documents the successful completion of testing for the In Situ Vapor Sampling (ISVS) system. The report includes the test procedure (WHC-SD-WM-OTP-196, Rev 0A), data sheets, exception resolutions, and a test report summary. This report conforms to the guidelines established in WHC-IP-1026, "Engineering Practice Guidelines," Appendix L, "Operability Test Procedures and Reports."
ERIC Educational Resources Information Center
Brotman, Melissa A.; Rooney, Melissa H.; Skup, Martha; Pine, Daniel S.; Leibenluft, Ellen
2009-01-01
Intrasubject variability in response time (ISV-RT) was higher in youths with bipolar disorder (BD) and those with first-degree relatives with BD compared to youths without BD. ISV-RT may be a risk marker for BD.
Hall, Roy A.; Bielefeldt-Ohmann, Helle; McLean, Breeanna J.; O’Brien, Caitlin A.; Colmant, Agathe M.G.; Piyasena, Thisun B.H.; Harrison, Jessica J.; Newton, Natalee D.; Barnard, Ross T.; Prow, Natalie A.; Deerain, Joshua M.; Mah, Marcus G.K.Y.; Hobson-Peters, Jody
2016-01-01
Recent advances in virus detection strategies and deep sequencing technologies have enabled the identification of a multitude of new viruses that persistently infect mosquitoes but do not infect vertebrates. These are usually referred to as insect-specific viruses (ISVs). These novel viruses have generated considerable interest in their modes of transmission, persistence in mosquito populations, the mechanisms that restrict their host range to mosquitoes, and their interactions with pathogens transmissible by the same mosquito. In this article, we discuss studies in our laboratory and others that demonstrate that many ISVs are efficiently transmitted directly from the female mosquito to their progeny via infected eggs, and, moreover, that persistent infection of mosquito cell cultures or whole mosquitoes with ISVs can restrict subsequent infection, replication, and transmission of some mosquito-borne viral pathogens. This suggests that some ISVs may act as natural regulators of arboviral transmission. We also discuss viral and host factors that may be responsible for their host restriction. PMID:28096646
Rosch, Keri S.; Dirlikov, Benjamin; Mostofsky, Stewart H.
2015-01-01
This study examined the impact of motivational contingencies (reinforcement and punishment) on Go/No-Go (GNG) task performance in girls and boys with ADHD relative to typically developing (TD) children and associations with prefrontal anatomy. Children ages 8–12 with ADHD (n=107, 36 girls) and TD controls (n=95, 34 girls) completed a standard and a motivational GNG task and associations with prefrontal cortex (PFC) surface area were examined. Intrasubject variability (ISV) was lower during the motivational compared to the standard GNG among TD girls and boys, and boys with ADHD, but not among girls with ADHD. A greater reduction in ISV was associated with greater PFC surface area among children with ADHD. This novel demonstration of improvement in ISV with motivational contingencies for boys, but not girls, with ADHD and associations with PFC anatomy informs our understanding of sex differences and motivational factors contributing to ISV in children with ADHD. PMID:26141238
Romero, Adolfo; Cobos, Andrés; Gómez, Juan; Muñoz, Manuel
2012-01-18
The presence of pre-analytical errors (PE) is a usual contingency in laboratories. The incidence may increase where it is difficult to control that period, as is the case with samples sent from primary care (PC) to a clinical reference laboratory. Detection of a large number of PE in PC samples in our institution led to the development and implementation of preventive strategies. The first of these was a cycle of educational sessions for PC nurses, followed by the evaluation of their impact on PE numbers. The incidence of PE was assessed in two periods, before (October-November 2007) and after (October-November 2009) the implementation of the educational sessions. Eleven PC centers in the urban area and 17 in the rural area participated. In the urban area, samples were withdrawn by any PC nurse; in the rural area, samples were obtained by the patient's reference nurse. The types of PE analyzed included missed sample (MS), hemolyzed sample (HS), coagulated sample (CS), incorrect sample (ISV) and others (OPE), such as lipemic or icteric serum or plasma. In the former period, we received 52,669 blood samples and 18,852 urine samples, detecting 3,885 (7.5%) and 1,567 (8.3%) PEs, respectively. After the educational intervention, there were 52,659 and 19,048 samples with 5,057 (9.6%) and 1,256 (6.5%) PEs, respectively (p<0.001). According to the type of PE, the incidences before and after were: MS, 4.8% vs. 3.8%, p<0.001; HS, 1.97% vs. 3.9%, p<0.001; CS, 0.54% vs. 0.25%, p<0.001; ISV, 0.15% vs. 0.19%, p=0.08; and OPE, 0.3% vs. 0.42%, p<0.001. Surprisingly, the PE incidence increased after the educational intervention, although it should be noted that this was primarily due to the increase in HS, as the other PE incidences decreased (MS and CS) or remained unchanged (ISV). 
This seems to indicate the need for a comprehensive approach to reduce the incidence of errors in the pre-analytical period, as one stage interventions do not seem to be effective enough. Copyright © 2011 Elsevier B.V. All rights reserved.
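The before/after comparison of overall blood-sample PE incidence reported above can be reproduced in outline with a chi-square test on the counts given in the abstract; this is a sketch, and the original analysis may have used a different test.

```python
# Blood-sample counts from the abstract: 3,885 PEs of 52,669 samples before
# the educational sessions vs. 5,057 of 52,659 after.
from scipy.stats import chi2_contingency

before = [3885, 52669 - 3885]    # [samples with PE, samples without PE]
after = [5057, 52659 - 5057]
chi2, p, dof, _ = chi2_contingency([before, after])
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")   # p far below 0.001, as reported
```

The same two-by-two layout applies to each individual error type (MS, HS, CS, ISV, OPE) by substituting its counts.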
Matrix cracking in laminated composites under monotonic and cyclic loadings
NASA Technical Reports Server (NTRS)
Allen, David H.; Lee, Jong-Won
1991-01-01
An analytical model based on the internal state variable (ISV) concept and the strain energy method is proposed for characterizing the monotonic and cyclic response of laminated composites containing matrix cracks. A modified constitutive relation is formulated for angle-ply laminates under general in-plane mechanical loading and constant temperature change. A monotonic matrix cracking criterion is developed for predicting the crack density in cross-ply laminates as a function of the applied laminate axial stress. An initial formulation for a cyclic matrix cracking criterion for cross-ply laminates is also discussed. For the monotonic loading case, a number of experimental data sets and well-known models are compared with the present study to validate the practical applicability of the ISV approach.
Linearization improves the repeatability of quantitative dynamic contrast-enhanced MRI.
Jones, Kyle M; Pagel, Mark D; Cárdenas-Rodríguez, Julio
2018-04-01
The purpose of this study was to compare the repeatabilities of the linear and nonlinear Tofts and reference region models (RRM) for dynamic contrast-enhanced MRI (DCE-MRI). Simulated and experimental DCE-MRI data from 12 rats with a flank tumor of C6 glioma acquired over three consecutive days were analyzed using four quantitative and semi-quantitative DCE-MRI metrics. The quantitative methods used were: 1) linear Tofts model (LTM), 2) non-linear Tofts model (NTM), 3) linear RRM (LRRM), and 4) non-linear RRM (NRRM). The following semi-quantitative metrics were used: 1) maximum enhancement ratio (MER), 2) time to peak (TTP), 3) initial area under the curve (iauc64), and 4) slope. LTM and NTM were used to estimate Ktrans, while LRRM and NRRM were used to estimate Ktrans relative to muscle (RKtrans). Repeatability was assessed by calculating the within-subject coefficient of variation (wSCV) and the percent intra-subject variation (iSV) determined with the Gage R&R analysis. The iSV for RKtrans using LRRM was two-fold lower compared to NRRM under all simulated and experimental conditions. A similar trend was observed for the Tofts model, where LTM was at least 50% more repeatable than NTM under all experimental and simulated conditions. The semi-quantitative metrics iauc64 and MER were as repeatable as Ktrans and RKtrans estimated by LTM and LRRM, respectively. The iSV for iauc64 and MER was significantly lower than the iSV for slope and TTP. In simulations and experimental results, linearization improves the repeatability of quantitative DCE-MRI by at least 30%, making it as repeatable as semi-quantitative metrics. Copyright © 2017 Elsevier Inc. All rights reserved.
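The within-subject coefficient of variation used above as a repeatability index can be sketched as follows. This is one standard formulation (root-mean within-subject variance over the grand mean), assumed rather than taken from the paper, and the hypothetical array mirrors subjects scanned on repeated days.

```python
import numpy as np

def wscv(measurements: np.ndarray) -> float:
    """Within-subject coefficient of variation for a (subjects x repeats)
    array: root-mean within-subject variance divided by the grand mean."""
    within_var = measurements.var(axis=1, ddof=1)   # variance across repeat scans
    return np.sqrt(within_var.mean()) / measurements.mean()

# Hypothetical Ktrans estimates (min^-1) for 3 subjects over 3 days:
ktrans = np.array([[0.10, 0.11, 0.09],
                   [0.20, 0.22, 0.21],
                   [0.15, 0.14, 0.16]])
print(f"wSCV = {wscv(ktrans):.1%}")
```

A lower wSCV means the fitted parameter fluctuates less across repeat scans of the same animal, which is the sense in which the linear models above are "more repeatable."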
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, H.D.
1991-11-01
Several of the technologies being evaluated for the treatment of waste material involve chemical reactions. Our example is the in situ vitrification (ISV) process, where electrical energy is used to melt soil and waste into a "glass-like" material that immobilizes and encapsulates any residual waste. During the ISV process, various chemical reactions may occur that produce significant amounts of products which must be contained and treated. The APOLLO program was developed to assist in predicting the composition of the gases that are formed. Although the development of this program was directed toward ISV applications, it should be applicable to other technologies where chemical reactions are of interest. This document presents the mathematical methodology of the APOLLO computer code. APOLLO is a computer code that calculates the products of both equilibrium and kinetic chemical reactions. The current version, written in FORTRAN, is readily adaptable to existing transport programs designed for the analysis of chemically reacting flow systems. Separate subroutines, EQREACT and KIREACT, for equilibrium and kinetic chemistry, respectively, have been developed. A full detailed description of the numerical techniques used, which include both Lagrange multipliers and a third-order integrating scheme, is presented. Sample test problems are presented and the results are in excellent agreement with those reported in the literature.
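The kinetic subroutine described above (KIREACT) integrates rate equations with a third-order scheme. As a self-contained illustration of that idea only, not APOLLO's actual algorithm, the sketch below applies Kutta's classical third-order Runge-Kutta method to a single first-order reaction A -> B and checks it against the exact exponential decay; the rate constant is arbitrary.

```python
import math

def rk3_step(f, t, y, h):
    """One step of Kutta's classical third-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h * (k1 + 4 * k2 + k3) / 6

k = 2.0                          # assumed rate constant, 1/s
f = lambda t, a: -k * a          # dA/dt = -k * A  (first-order decay)

a, t, h = 1.0, 0.0, 0.01
for _ in range(100):             # integrate to t = 1 s
    a = rk3_step(f, t, a, h)
    t += h
print(a, math.exp(-k))           # numerical vs. exact A(1) = exp(-k)
```

A real kinetics module would integrate a coupled system of such rate equations for all species simultaneously, but the per-step structure is the same.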
Promoting collaborative dementia care via online interprofessional education.
Cartwright, Jade; Franklin, Diane; Forman, Dawn; Freegard, Heather
2015-06-01
This study aimed to develop, implement and evaluate an online interprofessional education (IPE) dementia case study for health science students. The IPE initiative aimed to develop collaborative interprofessional capabilities and client-centred mindsets that underpin high-quality dementia care. A mixed methods research design was used to assess students' values, attitudes and learning outcomes using the Interprofessional Socialization and Valuing Scale (ISVS), completed before and after the online case study, and via thematic analysis of free-text responses. Students' ISVS scores improved significantly following online participation, and the qualitative results support a shift towards interprofessional collaboration and client-centred care. This online IPE case study was successful in developing the collaborative mindsets and interprofessional capabilities required by a future workforce to meet the complex, client-centred needs of people living with dementia. © 2013 ACOTA.
NASA Astrophysics Data System (ADS)
Liu, Xiangwen; Wu, Tongwen; Yang, Song; Li, Qiaoping; Cheng, Yanjie; Liang, Xiaoyun; Fang, Yongjie; Jie, Weihua; Nie, Suping
2014-09-01
Using hindcasts of the Beijing Climate Center Climate System Model, the relationships between interannual variability (IAV) and intraseasonal variability (ISV) of the Asian-western Pacific summer monsoon are diagnosed. Predictions show reasonable skill with respect to some basic characteristics of the ISV and IAV of the western North Pacific summer monsoon (WNPSM) and the Indian summer monsoon (ISM). However, the links between the seasonally averaged ISV (SAISV) and seasonal mean of ISM are overestimated by the model. This deficiency may be partially attributable to the overestimated frequency of long breaks and underestimated frequency of long active spells of ISV in normal ISM years, although the model is capable of capturing the impact of ISV on the seasonal mean by its shift in the probability of phases. Furthermore, the interannual relationships of seasonal mean, SAISV, and seasonally averaged long-wave variability (SALWV; i.e., the part with periods longer than the intraseasonal scale) of the WNPSM and ISM with SST and low-level circulation are examined. The observed seasonal mean, SAISV, and SALWV show similar correlation patterns with SST and atmospheric circulation, but with different details. However, the model presents these correlation distributions with unrealistically small differences among different scales, and it somewhat overestimates the teleconnection between monsoon and tropical central-eastern Pacific SST for the ISM, but underestimates it for the WNPSM, the latter of which is partially related to the too-rapid decrease in the impact of El Niño-Southern Oscillation with forecast time in the model.
NASA Astrophysics Data System (ADS)
Fu, Xiouhua; Hsu, Pang-chi
2011-08-01
A conventional atmosphere-ocean coupled system initialized with NCEP FNL analysis has successfully predicted a tropical cyclogenesis event in the northern Indian Ocean with a lead time of two weeks. The coupled forecasting system reproduces the westerly wind bursts in the equatorial Indian Ocean associated with an eastward-propagating Madden-Julian Oscillation (MJO) event as well as the accompanying northward-propagating westerly and convective disturbances. After reaching the Bay of Bengal, this northward-propagating Intra-Seasonal Variability (ISV) fosters the tropical cyclogenesis. The present finding demonstrates that a realistic MJO/ISV prediction will make the extended-range forecasting of tropical cyclogenesis possible and also calls for improved representation of the MJO/ISV in contemporary weather and climate forecast models.
Modeling of Non-isothermal Austenite Formation in Spring Steel
NASA Astrophysics Data System (ADS)
Huang, He; Wang, Baoyu; Tang, Xuefeng; Li, Junling
2017-12-01
The austenitization kinetics description of spring steel 60Si2CrA plays an important role in providing guidelines for industrial production. The dilatometric curves of 60Si2CrA steel were measured using a DIL805A dilatometer at heating rates of 0.3 K/s to 50 K/s (0.3 °C/s to 50 °C/s). Based on the dilatometric curves, a unified kinetics model using the internal state variable (ISV) method was derived to describe the non-isothermal austenitization kinetics of 60Si2CrA; the model covers both the incubation and transition periods. The material constants in the model were determined using a genetic algorithm-based optimization technique. Good agreement between predicted and experimental volume fractions of transformed austenite was obtained, indicating that the model is effective for describing the austenitization kinetics of 60Si2CrA steel. Compared with other modeling approaches for austenitization kinetics, this ISV-based model has advantages such as a simple formulation and explicit physical meaning, and can probably be used in engineering practice.
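As an illustration of the fitting step described above, the sketch below recovers the constants of a JMAK (Avrami) transformation law, f = 1 - exp(-k t^n), from synthetic fraction-transformed data using SciPy's differential evolution, an evolutionary optimizer in the spirit of (but not identical to) the genetic algorithm the authors used. The kinetic form and constants here are assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic "measured" transformed fraction from an assumed Avrami law.
t = np.linspace(0.1, 10.0, 50)            # time, arbitrary units
k_true, n_true = 0.05, 2.0                # assumed illustrative constants
f_obs = 1.0 - np.exp(-k_true * t ** n_true)

def sse(params):
    """Sum of squared errors between model and observed fractions."""
    k, n = params
    return float(np.sum((1.0 - np.exp(-k * t ** n) - f_obs) ** 2))

result = differential_evolution(sse, bounds=[(1e-4, 1.0), (0.5, 4.0)], seed=1)
k_fit, n_fit = result.x
print(k_fit, n_fit)
```

With real dilatometric data one would replace `f_obs` by fractions extracted from the dilatation curves at each heating rate, and the incubation period would add further parameters to the objective.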
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
At about 6:12 p.m. EDT on April 21, 1996, steam and molten material were expelled from Pit 1 of the in situ vitrification (ISV) project at the Oak Ridge National Laboratory (ORNL). At the request of the director of the Environmental Restoration (ER) Division, Department of Energy Oak Ridge Operations (DOE ORO), an independent investigation team was established on April 26, 1996. This team was tasked to determine the facts related to the ORNL Pit 1 melt expulsion event (MEE) in the areas of environment, safety, and health concerns, such as the adequacy of the ISV safety systems; operational control restrictions; emergency response planning/execution; and readiness review, and to report the investigation team findings within 45 days from the date of the incident. These requirements were stated in the letter of appointment presented in Appendix A of this report. This investigation did not address the physical causes of the MEE. A separate investigation was conducted by ISV project personnel to determine the causes of the melt expulsion and the extent of the effects of this phenomenon. In response to this event, occurrence report ORO-LMES-X10ENVRES-1996-0006 (Appendix B) was filed. The investigation team did not address the occurrence reporting or event notification process. The project personnel (project team) examined the physical evidence at the Pit 1 ISV site (e.g., the ejected melt material and the ISV hood), reviewed documents such as the site-specific health and safety plan (HASP), and interviewed personnel involved in the event and/or the project. A listing of the personnel interviewed and evidence reviewed is provided in Appendix C.
King, Gillian; Shaw, Lynn; Orchard, Carole A; Miller, Stacy
2010-01-01
There is a need for tools by which to evaluate the beliefs, behaviors, and attitudes that underlie interprofessional socialization and collaborative practice in health care settings. This paper introduces the Interprofessional Socialization and Valuing Scale (ISVS), a 24-item self-report measure based on concepts in the interprofessional literature concerning shifts in beliefs, behaviors, and attitudes that underlie interprofessional socialization. The ISVS was designed to measure the degree to which transformative learning takes place, as evidenced by changed assumptions and worldviews, enhanced knowledge and skills concerning interprofessional collaborative teamwork, and shifts in values and identities. The scales of the ISVS were determined using principal components analysis. The principal components analysis revealed three scales accounting for approximately 49% of the variance in responses: (a) Self-Perceived Ability to Work with Others, (b) Value in Working with Others, and (c) Comfort in Working with Others. These empirically derived scales showed good fit with the conceptual basis of the measure. The ISVS provides insight into the abilities, values, and beliefs underlying socio-cultural aspects of collaborative and authentic interprofessional care in the workplace, and can be used to evaluate the impact of interprofessional education efforts, in-house team training, and workshops.
Enhanced Schapery Theory Software Development for Modeling Failure of Fiber-Reinforced Laminates
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Waas, Anthony M.
2013-01-01
Progressive damage and failure analysis (PDFA) tools are needed to predict the nonlinear response of advanced fiber-reinforced composite structures. Predictive tools should incorporate the underlying physics of the damage and failure mechanisms observed in the composite, and should utilize as few input parameters as possible. The purpose of the Enhanced Schapery Theory (EST) was to create a PDFA tool that operates in conjunction with a commercially available finite element (FE) code (Abaqus). The tool captures the physics of the damage and failure mechanisms that result in the nonlinear behavior of the material, and the failure methodology employed yields numerical results that are relatively insensitive to changes in the FE mesh. The EST code is written in Fortran and compiled into a static library that is linked to Abaqus. A Fortran Abaqus UMAT material subroutine is used to facilitate the communication between Abaqus and EST. A clear distinction between damage and failure is imposed. Damage mechanisms result in pre-peak nonlinearity in the stress-strain curve. Four internal state variables (ISVs) are utilized to control the damage and failure degradation. All damage is said to result from matrix microdamage, and a single ISV tracks the microdamage evolution and is used to degrade the transverse and shear moduli of the lamina using a set of experimentally obtainable matrix microdamage functions. Three separate failure ISVs are used to incorporate failure due to fiber breakage, mode I matrix cracking, and mode II matrix cracking. Failure initiation is determined using a failure criterion, and the evolution of these ISVs is controlled by a set of traction-separation laws. The traction-separation laws are postulated such that the area under the curves is equal to the fracture toughness of the material associated with the corresponding failure mechanism. 
A characteristic finite element length is used to transform the traction-separation laws into stress-strain laws. The ISV evolution equations are derived in a thermodynamically consistent manner by invoking the stationary principle on the total work of the system with respect to each ISV. A novel feature is the inclusion of both pre-peak damage and appropriately scaled, post-peak strain softening failure. Also, the characteristic elements used in the failure degradation scheme are calculated using the element nodal coordinates, rather than simply the square root of the area of the element.
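The statement that the area under each traction-separation law equals the associated fracture toughness, and the characteristic-length conversion of separations to strains, can be sketched numerically. The triangular (linear-softening) law and the peak traction, toughness, and element length below are illustrative assumptions, not EST's calibrated laws.

```python
import numpy as np

sigma_max = 60.0e6                 # assumed peak traction, Pa
Gc = 500.0                         # assumed fracture toughness, J/m^2
delta_f = 2.0 * Gc / sigma_max     # final separation of a triangular law

delta = np.linspace(0.0, delta_f, 201)
traction = sigma_max * (1.0 - delta / delta_f)   # linear softening branch

# Trapezoidal area under the law; for a triangular law this recovers Gc.
area = float(np.sum((traction[1:] + traction[:-1]) * np.diff(delta)) / 2.0)

le = 5.0e-3                        # assumed characteristic element length, m
strain = delta / le                # separation -> smeared strain for the element
print(area)                        # ~= Gc
```

Dividing separations by the characteristic element length, as in the last line, is what makes the dissipated energy per unit crack area independent of mesh size, which is the mesh-insensitivity property claimed above.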
NASA Astrophysics Data System (ADS)
Yokoi, S.
2013-12-01
The Japan Meteorological Agency (JMA) recently released a new reanalysis dataset, JRA-55, produced with a JMA operational prediction model and 4D-Var data assimilation. To evaluate the merit of utilizing the JRA-55 dataset to investigate the dynamics of tropical intraseasonal variability (ISV), including the Madden-Julian Oscillation (MJO), this study examines the ISV-scale precipitable water vapor (PWV) budget over the period 1989-2012. The ISV-scale PWV anomaly related to the boreal-winter MJO propagates eastward along with precipitation, consistent with the SSM/I PWV product. Decomposition of the PWV tendency into the part simulated by the model and the analysis increment estimated by the data assimilation reveals that the model makes the PWV anomaly move eastward. On the other hand, the analysis increment exhibits positive values over the area where the PWV anomaly is positive, indicating that the model tends to damp the MJO signal. Note that the analysis increment over the Maritime Continent has a magnitude comparable to the model tendency. The positive analysis increment may mainly be caused by an excess of the precipitation anomaly with respect to the magnitude of the PWV anomaly. In addition to the boreal-winter MJO, this study also examines the PWV budget associated with northward-propagating ISV during the boreal summer and finds a similar relationship between the PWV anomaly and the analysis increment.
Dual Insect specific virus infection limits Arbovirus replication in Aedes mosquito cells.
Schultz, Michaela J; Frydman, Horacio M; Connor, John H
2018-05-01
Aedes mosquitoes are vectors for many pathogenic viruses. Cell culture systems facilitate the investigation of virus growth in the mosquito vector. We found Zika virus (ZIKV) growth to be consistent in A. albopictus cells but hypervariable in A. aegypti cell lines. As a potential explanation of this variability, we tested the hypothesis that our cells harbored opportunistic viruses. We screened Aedes cell lines for the presence of insect specific viruses (ISVs), Cell-fusing agent virus (CFAV) and Phasi charoen-like virus (PCLV). PCLV was present in the ZIKV-growth-variable A. aegypti cell lines but absent in A. albopictus lines, suggesting that these ISVs may interfere with ZIKV growth. In support of this hypothesis, PCLV infection of CFAV-positive A. albopictus cells inhibited the growth of ZIKV, dengue virus and La Crosse virus. These data suggest ISV infection of cell lines can impact arbovirus growth leading to significant changes in cell permissivity to arbovirus infection. Copyright © 2018 Elsevier Inc. All rights reserved.
Junglen, Sandra; Korries, Marvin; Grasse, Wolfgang; Wieseler, Janett; Kopp, Anne; Hermanns, Kyra; León-Juárez, Moises; Drosten, Christian; Kümmerer, Beate Mareike
2017-01-01
The genus Flavivirus contains emerging arthropod-borne viruses (arboviruses) infecting vertebrates, as well as insect-specific viruses (ISVs), i.e., viruses whose host range is restricted to insects. ISVs are evolutionary precursors to arboviruses. Knowledge of the nature of the ISV infection block in vertebrates could identify functions necessary for the expansion of the host range toward vertebrates. Mapping of host restrictions by complementation of ISV and arbovirus genome functions could generate knowledge critical to predicting arbovirus emergence. Here we isolated a novel flavivirus, termed Niénokoué virus (NIEV), from mosquitoes sampled in Côte d'Ivoire. NIEV groups with the insect-specific flaviviruses (ISFs) in phylogeny and grows in insect cells but not in vertebrate cells. We generated an infectious NIEV cDNA clone and a NIEV reporter replicon to study the growth restrictions of NIEV in comparison to yellow fever virus (YFV), for which the same tools are available. Efficient RNA replication of the NIEV reporter replicon was observed in insect cells but not in vertebrate cells. Initial translation of the input replicon RNA in vertebrate cells was functional, but RNA replication did not occur. Chimeric YFV carrying the envelope proteins of NIEV was recovered via electroporation in C6/36 insect cells but did not infect vertebrate cells, indicating a block at the level of entry. Since the YF/NIEV chimera readily produced infectious particles in insect cells but not in vertebrate cells despite efficient RNA replication, restriction is also determined at the level of assembly/release. Taken together, the results show that the ability of ISFs to infect vertebrates is blocked at several levels, including attachment/entry and RNA replication as well as assembly/release. IMPORTANCE Most viruses of the genus Flavivirus, e.g., YFV and dengue virus, are mosquito borne and transmitted to vertebrates during blood feeding of mosquitoes.
Within the last decade, an increasing number of viruses with a host range exclusively restricted to insects in close relationship to the vertebrate-pathogenic flaviviruses were discovered in mosquitoes. To identify barriers that could block the arboviral vertebrate tropism, we set out to identify the steps at which the ISF replication cycle fails in vertebrates. Our studies revealed blocks at several levels, suggesting that flavivirus host range expansion from insects to vertebrates was a complex process that involved overcoming multiple barriers.
Libration-point staging concepts for Earth-Mars transportation
NASA Technical Reports Server (NTRS)
Farquhar, Robert; Dunham, David
1986-01-01
The use of libration points as transfer nodes for an Earth-Mars transportation system is briefly described. It is assumed that a reusable Interplanetary Shuttle Vehicle (ISV) operates between the libration point and Mars orbit. Propellant for the round-trip journey to Mars and other supplies would be carried from low Earth orbit (LEO) to the ISV by additional shuttle vehicles. Different types of trajectories between LEO and libration points are presented, and approximate delta-V estimates for these transfers are given. The possible use of lunar gravity-assist maneuvers is also discussed.
3D Viewer Platform of Cloud Clustering Management System: Google Map 3D
NASA Astrophysics Data System (ADS)
Choi, Sung-Ja; Lee, Gang-Soo
A new framework for cloud-environment management systems is needed as computing platforms converge. It is difficult for ISVs and small businesses to adopt the platform management systems offered by large vendors. This article proposes a clustering management system for cloud computing environments aimed at ISVs and small-business enterprises. It applies a 3D viewer adapted from Google Map 3D and Google Earth, and is called 3DV_CCMS, an extension of the CCMS [1].
NASA Astrophysics Data System (ADS)
Cho, H. E.; Horstemeyer, M. F.; Baumgardner, J. R.
2017-12-01
In this study, we present an internal state variable (ISV) constitutive model developed to model static and dynamic recrystallization and grain size progression in a unified manner. This method accurately captures the effects of temperature, pressure, and strain rate on recrystallization and grain size. Because this ISV approach treats dislocation density, recrystallized volume fraction, and grain size as internal variables, the model can simultaneously track their history during deformation with unprecedented realism. Based on this deformation history, the method can capture realistic mechanical properties, such as stress-strain behavior, in the microstructure-mechanical property relationship. Also, both the transient grain size during deformation and the steady-state grain size of dynamic recrystallization can be predicted from the history variable of recrystallization volume fraction. Furthermore, because the model simultaneously handles plasticity and creep behaviors (unified creep-plasticity), the mechanisms related to dislocation dynamics (static recovery or diffusion creep, dynamic recovery or dislocation creep, and hardening) can also be captured. To model these comprehensive mechanical behaviors, the mathematical formulation includes elasticity to evaluate yield stress, work hardening to treat plasticity, creep, and the unified recrystallization and grain size progression. Because pressure sensitivity is especially important for mantle minerals, we developed a yield function combining Drucker-Prager shear failure and von Mises yield surfaces to model the pressure-dependent yield stress, while using pressure-dependent work hardening and creep terms. Using these formulations, we calibrated the model against experimental data for the minerals acquired from the literature. Additionally, we calibrated against experimental data for metals to show the general applicability of the model.
Understanding of realistic mantle dynamics can only be acquired once the various deformation regimes and mechanisms are comprehensively modeled. The results of this study demonstrate that this ISV model is a good modeling candidate to help reveal the realistic dynamics of the Earth's mantle.
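As a hedged illustration of the history-variable idea (not the authors' actual formulation), classical JMAK-type recrystallization kinetics and a rule-of-mixtures grain size can be sketched as:

```python
# Illustrative only: track a recrystallized volume fraction X(t) with
# Avrami/JMAK kinetics and mix deformed/recrystallized grain sizes
# through that history variable. Rate constants are placeholders.
import numpy as np

def jmak_fraction(t, k=0.05, n=2.0):
    """Avrami/JMAK recrystallized fraction X(t) = 1 - exp(-k t^n)."""
    return 1.0 - np.exp(-k * t**n)

def mean_grain_size(X, d_deformed, d_rex):
    """Rule-of-mixtures mean grain size from recrystallized fraction."""
    return (1.0 - X) * d_deformed + X * d_rex

t = np.linspace(0.0, 20.0, 101)                # normalized time
X = jmak_fraction(t)                           # 0 -> 1 during deformation
d = mean_grain_size(X, d_deformed=50.0, d_rex=10.0)  # micrometres
```

The ISV model above couples such evolution equations to dislocation density and stress, so the fraction and grain size feed back on the flow stress rather than following a fixed kinetic law.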
Biscaldi, M; Bednorz, N; Weissbrodt, K; Saville, C W N; Feige, B; Bender, S; Klein, C
2016-07-01
Autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD) have previously been studied mainly in isolation from each other. However, the two conditions may be aetiologically related and thus show overlap in aetiologically relevant functions. To address this question of potential aetiological overlap between ADHD and ASD, the present study investigated putative endophenotypes of ADHD in N=33 typically developing (TD) children and N=28 patients with ASD who were (ASD+) or were not (ASD-) comorbid for ADHD. With regard to both the cognitive endophenotype candidates (working memory, inhibition, temporal processing) and intra-subject variability (ISV), the pattern of abnormalities was inconsistent. Furthermore, the overall profile of ASD-TD differences was extremely similar to the pattern of differences between the ASD+ and ASD- sub-groups, suggesting that any abnormalities found were due to the comorbid (ASD+) subgroup. This held in particular for ISV, which did not show, in patients with ASD, the task-general increase that is common in ADHD samples. Altogether, the present results do not support the hypothesis of aetiological overlap between ASD and ADHD. Copyright © 2016 Elsevier B.V. All rights reserved.
Howard, Sean M A; Cumming, Sean P; Atkinson, Mark; Malina, Robert M
2016-11-01
The study aimed to evaluate the mediating effect of biological maturation on anthropometrical measurements, performance indicators and subsequent selection in a group of academy rugby union players. Fifty-one male players 14-17 years of age were assessed for height, weight and BMI, and percentage of predicted mature status attained at the time of observation was used as an indicator of maturity status. Following this, initial sprint velocity (ISV), Wattbike peak power output (PPO) and initial sprint momentum (ISM) were assessed. A bias towards on-time (n = 44) and early (n = 7) maturers was evident in the total sample and magnified with age cohort. Relative to UK reference values, weight and height were above the 90th and 75th centiles, respectively. Significant (p ≤ .01) correlations were observed between maturity status and BMI (r = .48), weight (r = .63) and height (r = .48). Regression analysis (controlling for age) revealed that maturity status and height explained 68% of ISM variance; however, including BMI in the model attenuated the influence of maturity status below statistical significance (p = .72). Height and BMI explained 51% of PPO variance, while no initial significant predictors were identified for ISV. The sample consisted of players who were on-time and early in maturation with no late maturers represented. This was attributable, in part, to the mediating effect of maturation on body size, which, in turn, predicted performance variables.
Mazzaferro, P K; Repasky, E A; Black, J; Kubo, R T; Bankert, R B
1987-01-01
In the companion paper, it was established that a secretory form of immunoglobulin, sIg, is present at or near the cell surface. This unexpected occurrence of sIg was postulated to be due to the labelling of sIg which remains temporarily associated with the cell packaged in a vesicle which appears to bud from the plasma membrane at a single pole of the cell. The question that is addressed in this report is whether or not this polar accumulation of sIg represents a common pathway for the exit of this protein which is used by antibody-producing cells. This question is important since, in spite of the fact that the intracellular events associated with immunoglobulin synthesis (processing and movement between subcellular compartments) have been defined, very little data exists on how or where immunoglobulin finally leaves the plasma cell. This question was approached here by first demonstrating that the polar immunoglobulin secretory vesicles (ISV) are associated with several sIg-producing cells including other hybridomas, B-cell lines, and mitogen-activated spleen cells. The second approach was to characterize the ISV on the cell ultrastructurally and to establish that these vesicles are released from the cell carrying with them sIg. Isolated vesicles released from biosynthetically labeled Ig-producing cells were analyzed by SDS-PAGE in order to confirm the presence of sIg and to determine the number of other proteins associated with the ISV, their molecular weights, and the degree of disulfide crosslinking of the molecules comprising this structure. Finally, the kinetics of sIg release was established by a pulse chase protocol for biosynthetically labeled cells, and by monitoring the release of radioactive Ig from surface iodinated cells. As was predicted from our biochemical studies of the ISV, we observed a very slow phase of sIg release as well as a rapid release phase. 
Our studies have established that at least one of the pathways for the release of Ig from hybridomas, B-cell lines, and normal splenic B-cells is via a polar multivesiculated structure that we have termed ISV, and that the sIg can be released either as a free form of the protein or packaged within a satellite vesicle which may release the sIg later and perhaps at considerable distance from the cell that produced it.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowery, P.S.; Lessor, D.L.
Waste glass melter and in situ vitrification (ISV) processes represent the combination of electrical, thermal, and fluid flow phenomena to produce a stable waste-form product. Computational modeling of the thermal and fluid flow aspects of these processes provides a useful tool for assessing the potential performance of proposed system designs. These computations can be performed at a fraction of the cost of experiment. Consequently, computational modeling of vitrification systems can also provide an economical means for assessing the suitability of a proposed process application. The computational model described in this paper employs finite difference representations of the basic continuum conservation laws governing the thermal, fluid flow, and electrical aspects of the vitrification process, i.e., conservation of mass, momentum, energy, and electrical charge. The resulting code is a member of the TEMPEST family of codes developed at the Pacific Northwest Laboratory (operated by Battelle for the US Department of Energy). This paper provides an overview of the numerical approach employed in TEMPEST. In addition, results from several TEMPEST simulations of sample waste glass melter and ISV processes are provided to illustrate the insights to be gained from computational modeling of these processes. 3 refs., 13 figs.
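The coupled electrical-thermal step such a melter model takes can be reduced, for illustration, to one dimension: explicit finite-difference heat conduction with a Joule heating source q = sigma*E^2. TEMPEST itself solves the full 3-D coupled conservation equations, so everything below (grid, material values) is a placeholder sketch.

```python
# 1-D FTCS sketch of Joule-heated conduction:
#   dT/dt = alpha * d2T/dx2 + q / (rho * cp)
import numpy as np

def step_temperature(T, dx, dt, alpha, q_over_rho_cp):
    """One explicit step; fixed-temperature (Dirichlet) boundaries."""
    Tn = T.copy()
    Tn[1:-1] += dt * (alpha * (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2
                      + q_over_rho_cp[1:-1])
    return Tn

nx, dx, dt = 51, 0.01, 0.05            # grid spacing (m), time step (s)
T = np.full(nx, 300.0)                 # initial temperature, K
sigma, E = 10.0, 100.0                 # conductivity (S/m), field (V/m)
q = sigma * E**2 * np.ones(nx)         # Joule heating, W/m^3
rho_cp = 2.5e6                         # volumetric heat capacity, J/(m^3 K)
for _ in range(200):
    T = step_temperature(T, dx, dt, alpha=1e-6, q_over_rho_cp=q/rho_cp)
```

In the real problem, sigma and the field depend strongly on temperature, which is exactly the coupling the full code must iterate on.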
Karpinski, Aryn C.
2016-01-01
Objective. To examine racial differences in communication apprehension and interprofessional socialization in fourth-year PharmD students and to investigate the relationship between the two constructs. Methods. Two measures with reliability and validity psychometric evidence were administered to fourth-year pharmacy students at a single historically black university with a large racial minority population. The Personal Report of Communication Apprehension (PRCA-24) measures the level of fear or anxiety associated with communication. The Interprofessional Socialization and Valuing Scale (ISVS) measures beliefs, attitudes, and behaviors towards interprofessional collaborative practice. Results. One hundred fourteen students completed the survey, producing a 77.4% response rate; 45.6% of the participants were African American. There were significant differences between races (ie, White, African-American, and Asian) on both measures. The PRCA-24 and ISVS were significantly correlated in each racial group. Conclusion. As pharmacy education moves to more interprofessional collaborations, these racial differences need to be considered and further explored. Pharmacy curricula can be structured to promote students' comfort when communicating interprofessionally across racial groups. Understanding of culture and early education in cultural competence may need to be emphasized to navigate racial or cultural differences. PMID:26941434
The 2014 China meeting of the International Society for Vascular Surgery.
Dardik, Alan; Ouriel, Kenneth; Wang, JinSong; Liapis, Christos
2014-10-01
The 2014 meeting of the International Society for Vascular Surgery (ISVS) was held in Guangzhou, China, in conjunction with the fifth annual Wang Zhong-Gao's Vascular Forum, the eighth annual China Southern Endovascular Congress, and the third annual Straits Vascular Forum. Keynote addresses were given by Professors Christos Liapis, Wang Zhong-Gao, and Wang Shen-Ming. President Liapis presented the first ISVS Lifetime Achievement Award to Professor Wang Zhong-Gao for his multi-decade accomplishments establishing Vascular Surgery as a specialty in China. Faculty presentations were made in plenary sessions that focused on diseases relevant to the patterns of vascular disease prevalent in China. Thirty-one abstracts were presented by vascular surgeons from around the globe, and the top 10 presentations were recognized. Thirteen countries were represented in the meeting. The 2014 ISVS meeting was a success. Partnership of this meeting with host Chinese Vascular Surgery societies was of mutual benefit, bringing vascular surgeons of international reputation to the local area for academic and intellectual exchange and formation of collaborations; integration of the meetings allows easier logistics to facilitate meeting organization and optimization of time for both faculty and attendees. This integrated model may serve as an optimal model for future meetings. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
NASA Astrophysics Data System (ADS)
Possamai, Bianca; Vieira, João P.; Grimm, Alice M.; Garcia, Alexandre M.
2018-03-01
Global climatic phenomena like El Niño events are known to alter hydrological cycles and local abiotic conditions, leading to changes in the structure and dynamics of terrestrial and aquatic biological communities worldwide. Based on a long-term (19 years) standardized sampling of shallow-water estuarine fishes, this study investigated the temporal variability in composition and dominance patterns of trophic guilds in a subtropical estuary (Patos Lagoon estuary, Southern Brazil) and their relationship with local and regional driving forces associated with moderate (2002-2003 and 2009-2010) and very strong (1997-1998 and 2015-2016) El Niño events. Fish species were classified into eight trophic guilds (DTV detritivore, HVP herbivore-phytoplankton, HVM macroalgae herbivore, ISV insectivore, OMN omnivore, PSV piscivore, ZBV zoobenthivore and ZPL zooplanktivore) and their abundances were correlated with environmental factors. Canonical correspondence analysis revealed that less dominant (those comprising < 10% of total abundance) trophic guilds, such as HVP, HVM, ISV and PSV, increased their relative abundance in the estuary during higher rainfall and lower salinity conditions associated with moderate and very strong El Niño events. An opposite pattern was observed for the dominant trophic fish guilds like OMN and, to a lesser extent, DTV and ZPL, which had greater association with higher values of salinity and water transparency occurring mostly during non-El Niño conditions. In contrast, ZBV's abundance was not correlated with contrasting environmental conditions but rather showed higher association with samples characterized by intermediate environmental values. Overall, these findings show that moderate and very strong El Niño events did not substantially disrupt the dominance patterns among trophic fish guilds in the estuary. Rather, they increased trophic estuarine diversity by flushing freshwater fishes with distinct feeding habits into the estuary.
Induction salt bath for electrolytic boronizing
NASA Astrophysics Data System (ADS)
Simonenko, A. N.
1983-08-01
The induction salt bath ISV-ÉB is intended for electrolytic and nonelectrolytic boronizing and for heating steel parts to be hardened in toolrooms of engineering plants equipped with high-frequency installations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, B.L.; Pool, K.H.; Evans, J.C.
1997-01-01
This report describes the analytical results of vapor samples taken from the headspace of waste storage tank 241-BY-108 (Tank BY-108) at the Hanford Site in Washington State. This report is the second in a series comparing vapor sampling of the tank headspace using the Vapor Sampling System (VSS) and the In Situ Vapor Sampling (ISVS) system without high-efficiency particulate air (HEPA) prefiltration. The results include air concentrations of water (H{sub 2}O) and ammonia (NH{sub 3}), permanent gases, total non-methane organic compounds (TO-12), and individual organic analytes collected in SUMMA{trademark} canisters and on triple sorbent traps (TSTs). Samples were collected by Westinghouse Hanford Company (WHC) and analyzed by Pacific Northwest National Laboratory (PNNL). Analyses were performed by the Vapor Analytical Laboratory (VAL) at PNNL. Analyte concentrations were based on analytical results and, where appropriate, sample volume measurements provided by WHC.
Prabhu, Rajkumar; Whittington, Wilburn R; Patnaik, Sourav S; Mao, Yuxiong; Begonia, Mark T; Williams, Lakiesha N; Liao, Jun; Horstemeyer, M F
2015-05-18
This study offers a combined experimental and finite element (FE) simulation approach for examining the mechanical behavior of soft biomaterials (e.g. brain, liver, tendon, fat, etc.) when exposed to high strain rates. This study utilized a Split-Hopkinson Pressure Bar (SHPB) to generate strain rates of 100-1,500 sec(-1). The SHPB employed a striker bar consisting of a viscoelastic material (polycarbonate). A sample of the biomaterial was obtained shortly postmortem and prepared for SHPB testing. The specimen was interposed between the incident and transmitted bars, and the pneumatic components of the SHPB were activated to drive the striker bar toward the incident bar. The resulting impact generated a compressive stress wave (i.e. incident wave) that traveled through the incident bar. When the compressive stress wave reached the end of the incident bar, a portion continued forward through the sample and transmitted bar (i.e. transmitted wave) while another portion reversed through the incident bar as a tensile wave (i.e. reflected wave). These waves were measured using strain gages mounted on the incident and transmitted bars. The true stress-strain behavior of the sample was determined from equations based on wave propagation and dynamic force equilibrium. The experimental stress-strain response was three dimensional in nature because the specimen bulged. As such, the hydrostatic stress (first invariant) was used to generate the stress-strain response. In order to extract the uniaxial (one-dimensional) mechanical response of the tissue, an iterative coupled optimization was performed using experimental results and Finite Element Analysis (FEA), which contained an Internal State Variable (ISV) material model used for the tissue. The ISV material model used in the FE simulations of the experimental setup was iteratively calibrated (i.e. 
optimized) to the experimental data such that the experiment and FEA strain gage values and first invariant of stresses were in good agreement.
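The data reduction implied above follows the classical one-wave SHPB analysis: specimen stress from the transmitted wave, strain rate from the reflected wave, and strain by time integration. A sketch with placeholder bar properties, not the authors' polycarbonate setup:

```python
# One-wave SHPB reduction (illustrative constants, synthetic gauge data):
#   stress      = E_bar * (A_bar / A_spec) * eps_t
#   strain_rate = -2 * c_bar / L_spec * eps_r
import numpy as np

def shpb_reduce(eps_r, eps_t, dt, E_bar, A_bar, A_spec, c_bar, L_spec):
    """Return (strain_rate, strain, stress) histories for the specimen."""
    stress = E_bar * (A_bar / A_spec) * eps_t      # from transmitted wave
    strain_rate = -2.0 * c_bar / L_spec * eps_r    # from reflected wave
    strain = np.cumsum(strain_rate) * dt           # time integration
    return strain_rate, strain, stress

# Toy gauge signals (dimensionless strain), 1 MHz sampling.
t = np.arange(0, 200) * 1e-6
eps_r = -1e-3 * np.ones_like(t)                    # reflected pulse
eps_t = 5e-4 * np.ones_like(t)                     # transmitted pulse
rate, strain, stress = shpb_reduce(eps_r, eps_t, 1e-6,
                                   E_bar=2.4e9, A_bar=2.85e-4,
                                   A_spec=1.27e-4, c_bar=1500.0,
                                   L_spec=0.003)
```

Because the soft specimen bulges, the study uses the first stress invariant and an ISV-model FE optimization rather than taking this one-dimensional reduction at face value.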
Cognitive Load Differentially Impacts Response Control in Girls and Boys with ADHD
Mostofsky, Stewart H.; Rosch, Keri S.
2015-01-01
Children with attention-deficit hyperactivity disorder (ADHD) consistently show impaired response control, including deficits in response inhibition and increased intrasubject variability (ISV) compared to typically-developing (TD) children. However, significantly less research has examined factors that may influence response control in individuals with ADHD, such as task or participant characteristics. The current study extends the literature by examining the impact of increasing cognitive demands on response control in a large sample of 81 children with ADHD (40 girls) and 100 TD children (47 girls), ages 8–12 years. Participants completed a simple Go/No-Go (GNG) task with minimal cognitive demands, and a complex GNG task with increased cognitive load. Results showed that increasing cognitive load differentially impacted response control (commission error rate and tau, an ex-Gaussian measure of ISV) for girls, but not boys, with ADHD compared to same-sex TD children. Specifically, a sexually dimorphic pattern emerged such that boys with ADHD demonstrated higher commission error rate and tau on both the simple and complex GNG tasks as compared to TD boys, whereas girls with ADHD did not differ from TD girls on the simple GNG task, but showed higher commission error rate and tau on the complex GNG task. These findings suggest that task complexity influences response control in children with ADHD in a sexually dimorphic manner. The findings have substantive implications for the pathophysiology of ADHD in boys versus girls with ADHD. PMID:25624066
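The ex-Gaussian tau used here as an ISV measure can be estimated with SciPy's exponentially modified normal distribution (tau = K * scale in SciPy's parameterization). The reaction times below are synthetic, for illustration only:

```python
# Fit an ex-Gaussian to reaction-time data and recover tau, the
# exponential-tail parameter often used to index ISV.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic RTs (ms): Gaussian component + exponential tail, tau = 150.
rt = rng.normal(400.0, 50.0, 2000) + rng.exponential(150.0, 2000)
K, loc, scale = stats.exponnorm.fit(rt)
tau = K * scale    # larger tau = heavier slow-response tail = more ISV
```

A group comparison like the one in the study would then contrast fitted tau values between ADHD and TD samples per task.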
Construction of an integrated social vulnerability index in urban areas prone to flash flooding
NASA Astrophysics Data System (ADS)
Aroca-Jimenez, Estefania; Bodoque, Jose Maria; Garcia, Juan Antonio; Diez-Herrero, Andres
2017-09-01
Among the natural hazards, flash flooding is the leading cause of weather-related deaths. Flood risk management (FRM) in this context requires a comprehensive assessment of the social risk component. In this regard, integrated social vulnerability (ISV) can incorporate spatial distribution and contribution and the combined effect of exposure, sensitivity and resilience to total vulnerability, although these components are often disregarded. ISV is defined by the demographic and socio-economic characteristics that condition a population's capacity to cope with, resist and recover from risk and can be expressed as the integrated social vulnerability index (ISVI). This study describes a methodological approach towards constructing the ISVI in urban areas prone to flash flooding in Castilla y León (Castile and León, northern central Spain, 94 223 km2, 2 478 376 inhabitants). A hierarchical segmentation analysis (HSA) was performed prior to the principal components analysis (PCA), which helped to overcome the sample size limitation inherent in PCA. ISVI was obtained from weighting vulnerability factors based on the tolerance statistic. In addition, latent class cluster analysis (LCCA) was carried out to identify spatial patterns of vulnerability within the study area. Our results show that the ISVI has high spatial variability. Moreover, the source of vulnerability in each urban area cluster can be identified from LCCA. These findings make it possible to design tailor-made strategies for FRM, thereby increasing the efficiency of plans and policies and helping to reduce the cost of mitigation measures.
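An ISVI-style aggregation can be sketched as standardizing the indicators, extracting principal components, and weighting the component scores. The weights below are illustrative placeholders for the tolerance-statistic-derived weights the study actually uses:

```python
# Sketch of a composite social-vulnerability index: z-score the
# indicators, run PCA via SVD, and take a weighted sum of the leading
# component scores. Data and weights are synthetic.
import numpy as np

def isvi(indicators, weights):
    """indicators: (n_areas, n_vars); weights: one per leading component."""
    X = (indicators - indicators.mean(0)) / indicators.std(0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)  # PCA loadings in Vt
    scores = X @ Vt.T                                 # component scores
    k = len(weights)
    return scores[:, :k] @ np.asarray(weights)

rng = np.random.default_rng(0)
data = rng.normal(size=(30, 6))           # 30 urban areas, 6 indicators
index = isvi(data, weights=[0.5, 0.3, 0.2])
```

The paper's extra steps (hierarchical segmentation before PCA, latent class clustering afterwards) refine this basic pipeline rather than replace it.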
Giant magneto-spin-Seebeck effect and magnon transfer torques in insulating spin valves
NASA Astrophysics Data System (ADS)
Cheng, Yihong; Chen, Kai; Zhang, Shufeng
2018-01-01
We theoretically study magnon transport in an insulating spin valve (ISV) made of an antiferromagnetic insulator sandwiched between two ferromagnetic insulator (FI) layers. In the conventional metal-based spin valve, the electron spins propagate between two metallic ferromagnetic layers, giving rise to giant magnetoresistance and spin transfer torque. Here, the incoherent magnons in the ISV serve as angular momentum carriers and are responsible for the angular momentum transport between two FI layers across the antiferromagnetic spacer. We predict two transport phenomena in the presence of the temperature gradient: a giant magneto-spin-Seebeck effect in which the output voltage signal is controlled by the relative orientation of the two FI layers and magnon transfer torque that can be used for switching the magnetization of the FI layers with a temperature gradient of the order of 0.1 Kelvin per nanometer.
Zhang, Hui; Wang, Yan
2017-09-22
To investigate dry eye after small incision lenticule extraction (SMILE) and explore the correlations between changes in tear film stability, tear secretion and corneal surface regularity. Sixty-two eyes of 22 men and 13 women who underwent SMILE were included in this study. Corneal topography was measured to assess the index of surface variance (ISV) and the index of vertical asymmetry (IVA). Dry eye tests, including a subjective symptom questionnaire, tear breakup time (TBUT), corneal fluorescein staining and Schirmer's test (ST), were evaluated before and at 1 and 6 months postoperatively. TBUT was found to be significantly decreased from 9.8 ± 3.4 s preoperatively to 7.4 ± 3.8 s at 1 month and 6.5 ± 3.6 s at 6 months (both P < 0.001). There was a significant decrease in ST at 1 month postoperatively (P = 0.012); however, ST returned to baseline by 6 months (P = 0.522). Both ISV and IVA significantly increased after the surgery (all P < 0.001). In addition, the changes in TBUT were negatively correlated with the increases in ISV and IVA (r = -0.343, P = 0.006 and r = -0.311, P = 0.014, respectively). Patients undergoing SMILE might develop a short-TBUT type of dry eye. Corneal surface regularity indices might be helpful in the assessment of tear film stability following the SMILE procedure.
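The reported association (changes in TBUT negatively correlated with increases in ISV) amounts to a Pearson correlation between paired per-eye changes. A sketch on synthetic data, not the study's measurements:

```python
# Pearson r between Delta-TBUT and Delta-ISV on synthetic paired data.
import numpy as np

rng = np.random.default_rng(1)
delta_isv = rng.normal(5.0, 2.0, 62)                    # ISV increase
delta_tbut = -0.3 * delta_isv + rng.normal(0, 1.0, 62)  # TBUT drop
r = np.corrcoef(delta_isv, delta_tbut)[0, 1]            # expect r < 0
```

The study pairs this with a p-value per coefficient; with n = 62 eyes, r around -0.3 reaches significance, as reported.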
NASA Astrophysics Data System (ADS)
Meissner, Robert; Sugden, Wade W.; Siekmann, Arndt F.; Denz, Cornelia
2018-02-01
All higher developed organisms contain complex hierarchical networks of arteries, veins and capillaries. These constitute the cardiovascular system, responsible for nutrient supply, gas exchange and waste removal. Diseases related to the cardiovascular system are among the main causes of death worldwide. In order to understand the processes leading to arteriovenous malformation, we studied hereditary hemorrhagic telangiectasia (HHT), which has a prevalence of 1:5000 worldwide and causes internal bleeding. In zebrafish, HHT is induced by mutation of the endoglin gene and is observed to reduce red blood cell (RBC) flow to intersegmental vessels (ISVs) in the tail due to malformations of the dorsal aorta (DA) and posterior cardinal vein (PCV). However, these capillaries are still functional. Changes in the blood flow pattern are observed in vivo in zebrafish embryos through particle image velocimetry (PIV). Wall shear rates (WSRs) and blood flow velocities are obtained non-invasively with millisecond resolution. We observe significant increases of blood flow velocity in the DA for endoglin-deficient zebrafish embryos (mutants) at 3 days post fertilization. In the PCV, this increase is even more pronounced. We identified an increased similarity between the DA and the PCV of mutant fish compared to siblings, i.e., unaffected fish. To counteract the reduced RBC flow to ISVs, we implement optical tweezers (OT). RBCs are steered into previously unperfused ISVs, yielding a significant increase in RBC count per minute. We discuss limitations with respect to biocompatibility of optical tweezers in vivo and determination of in vivo wall shear stress (WSS) in normal and endoglin-deficient zebrafish embryos.
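A wall shear rate of the kind measured here is the velocity gradient evaluated at the vessel wall. The sketch below estimates it from a discretized velocity profile, assuming an idealized parabolic (Poiseuille) flow with made-up dimensions rather than the paper's in vivo PIV data:

```python
import numpy as np

radius_um = 10.0                      # assumed vessel radius (um)
v_max = 2000.0                        # assumed centerline velocity (um/s)
y = np.linspace(0.0, radius_um, 101)  # distance from the wall (um)
# Parabolic (Poiseuille) profile: zero at the wall, v_max at the centerline
u = v_max * (1.0 - ((radius_um - y) / radius_um) ** 2)

# Wall shear rate = du/dy evaluated at the wall (units 1/s)
wsr = np.gradient(u, y)[0]
print(f"WSR ~ {wsr:.0f} 1/s")  # analytic value: 2 * v_max / radius = 400 1/s
```
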
ENGINEERING BULLETIN: IN SITU VITRIFICATION TREATMENT
In situ vitrification (ISV) uses electrical power to heat and melt soil, sludge, mine tailings, buried wastes, and sediments contaminated with organic, inorganic, and metal-bearing hazardous wastes. The molten material cools to form a hard, monolithic, chemically inert, stable...
DEMONSTRATION BULLETIN: IN SITU VITRIFICATION - GEOSAFE CORPORATION
In situ vitrification (ISV) is designed to treat soils, sludges, sediments, and mine tailings contaminated with organic and inorganic compounds. The process uses electrical current to heat (melt) and vitrify the soil in place. Organic contaminants are decomposed by the extreme h...
SITE TECHNOLOGY CAPSULE: GEOSAFE CORPORATION IN SITU VITRIFICATION TECHNOLOGY
The Geosafe In Situ Vitrification (ISV) Technology is designed to treat soils, sludges, sediments, and mine tailings contaminated with organic, inorganic, and radioactive compounds. The organic compounds are pyrolyzed and reduced to simple gases which are collected under a treatm...
NASA Astrophysics Data System (ADS)
Xu, Tengfei; Li, Shujiang; Hamzah, Faisal; Setiawan, Agus; Susanto, R. Dwi; Cao, Guojiao; Wei, Zexun
2018-06-01
Sunda Strait is the outflow strait of the South China Sea branch of the Pacific to Indian Ocean Throughflow. The annual mean volume transport through the Sunda Strait is around 0.25 Sv from the Java Sea to the eastern Indian Ocean, only 2.5% of the Indonesian Throughflow, and thus has been ignored by previous investigations. However, nutrient concentrations in the Sunda Strait and its vicinity are found to be highly related to the water transport through the Sunda Strait. In particular, our observations show significant intraseasonal variability (ISV) of currents at periods of around 25-45 days in the Sunda Strait. Both remote and local wind forcing contribute to the ISVs in the Sunda Strait. The intraseasonal oscillation of sea surface wind in the central Indian Ocean drives upwelling/downwelling equatorial Kelvin waves that propagate along the equator and subsequently along the Sumatra-Java coasts, resulting in negative/positive sea level anomalies south of the Sunda Strait. The local intraseasonal sea surface wind anomalies also tend to induce negative/positive sea level anomalies south of the Sunda Strait through offshore/onshore Ekman transport during upwelling/downwelling events. The ensuing sea level gradient associated with the sea level anomalies south of the Sunda Strait induces intraseasonal outflow (from the Java Sea to the Indian Ocean) and inflow (from the Indian Ocean to the Java Sea) through the strait. Analyses also show that the chlorophyll-a concentrations south of the Sunda Strait are lower/higher during the inflow/outflow periods of the ISV events in March through May. This is attributed both to the nutrient-rich water transported by the intraseasonal flow in the Sunda Strait and to the upwelling and Ekman transport driven by the local sea surface wind anomalies.
Project SQUID: The Foundations of Nonequilibrium Statistical Mechanics. Volume 1
1963-06-01
equations available (Boltzmann, Landau, Bogolubov-Balescu-Lenard) are essentially exact and cannot be improved. That is, for kinetic gases (those...effects) as well as to the newly obtained kinetic equation for plasmas (8) (Bogolubov-Balescu-Lenard equation). The hope of obtaining correctly
Jitter Reduces Response-Time Variability in ADHD: An Ex-Gaussian Analysis.
Lee, Ryan W Y; Jacobson, Lisa A; Pritchard, Alison E; Ryan, Matthew S; Yu, Qilu; Denckla, Martha B; Mostofsky, Stewart; Mahone, E Mark
2015-09-01
"Jitter" involves randomization of intervals between stimulus events. Compared with controls, individuals with ADHD demonstrate greater intrasubject variability (ISV) when performing tasks with fixed interstimulus intervals (ISIs). Because Gaussian curves mask the effect of extremely slow or fast response times (RTs), ex-Gaussian approaches have been applied to study ISV. This study applied ex-Gaussian analysis to examine the effects of jitter on RT variability in children with and without ADHD. A total of 75 children, aged 9 to 14 years (44 ADHD, 31 controls), completed a go/no-go test with two conditions: fixed ISI and jittered ISI. Children with ADHD showed greater variability, driven by elevations in the exponential (tau), but not the normal (sigma), component of the RT distribution. Jitter decreased tau in ADHD to levels not statistically different from controls, reducing the lapses in performance characteristic of impaired response control. Jitter may provide a nonpharmacologic mechanism to facilitate readiness to respond and reduce lapses from sustained (controlled) performance.
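An ex-Gaussian decomposition models each RT as a normal component (mu, sigma) plus an exponential tail (tau). A minimal sketch with simulated RTs, using scipy's exponnorm distribution, whose shape parameter K equals tau/sigma with loc = mu and scale = sigma; the parameter values and sample size are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(42)
mu_true, sigma_true, tau_true = 450.0, 50.0, 150.0   # ms; plausible go/no-go values
# Ex-Gaussian RTs: normal(mu, sigma) + exponential(tau)
rts = rng.normal(mu_true, sigma_true, 2000) + rng.exponential(tau_true, 2000)

# Maximum-likelihood fit; starting guesses help convergence.
# exponnorm parameterization: K = tau/sigma, loc = mu, scale = sigma
K, loc, scale = exponnorm.fit(rts, 2.0, loc=400.0, scale=60.0)
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
print(f"mu={mu_hat:.0f} ms, sigma={sigma_hat:.0f} ms, tau={tau_hat:.0f} ms")
```

Elevated tau with normal sigma, as reported for the ADHD group, would show up here as a heavier exponential tail at comparable Gaussian spread.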
Delov, Vera; Muth-Köhne, Elke; Schäfers, Christoph; Fenske, Martina
2014-05-01
The fish embryo toxicity test (FET) is currently one of the most advocated animal-alternative tests in ecotoxicology. To date, the application of the FET with zebrafish (zFET) has focused on acute toxicity assessment, where only lethal morphological effects are accounted for. An application of the zFET beyond acute toxicity, however, necessitates the establishment of more refined and quantifiable toxicological endpoints. A valuable tool in this context is the use of gene expression-dependent fluorescent markers that can even be measured in vivo. We investigated the application of embryos of Tg(fli1:EGFP)(y1) for the identification of vasotoxic substances within the zFET. Tg(fli1:EGFP)(y1) fish express enhanced GFP in the entire vasculature under the control of the fli1 promoter, and thus enable the visualization of vascular defects in live zebrafish embryos. We assessed the fli1-driven EGFP expression in the intersegmental blood vessels (ISVs) qualitatively and quantitatively, and found an exposure-concentration-related increase in vascular damage for chemicals such as triclosan, cartap and genistein. The fluorescence endpoint ISV length allowed an earlier and more sensitive detection of vasotoxins than the bright-field assessment method. In combination with the standard bright-field morphological effect assessment, an increase in the significance and value of the zFET for mechanism-specific toxicity evaluation was achieved. This study highlights the benefits of using transgenic zebrafish as convenient tools for identifying toxicity in vivo and for increasing the sensitivity and specificity of the zFET.
Increased Intrasubject Variability in Boys with ADHD across Tests of Motor and Cognitive Control
ERIC Educational Resources Information Center
Rosch, Keri Shiels; Dirlikov, Benjamin; Mostofsky, Stewart H.
2013-01-01
Increased intrasubject variability (ISV), or short-term, within-person fluctuations in behavioral performance is consistently found in Attention-Deficit/Hyperactivity Disorder (ADHD). ADHD is also associated with impairments in motor control, particularly in boys. The results of the few studies that have examined variability in self-generated…
Variations of Luzon Undercurrent from observations and numerical model simulations
NASA Astrophysics Data System (ADS)
Wang, Qingye; Zhai, Fangguo; Hu, Dunxin
2014-06-01
Significant intraseasonal variability (ISV) of about 45-80 days and seasonal variation of the Luzon Undercurrent (LUC) at 18°N are studied using direct current measurements and a high-resolution global Hybrid Coordinate Ocean Model. The variations of the LUC are vertically coherent with those of Kuroshio Current both on intraseasonal and seasonal time scales. The ISV of the LUC is dominated by eddies with diameters of about 200-300 km and extending from sea surface to intermediate layer east of Luzon Island. The LUC becomes strong (weak) when cyclonic (anticyclonic) eddies occur. The eddies east of Luzon Island mainly originate from the bifurcation point (˜13°N) of the North Equatorial Current. These eddies propagate northwestward at a typical propagation speed of about 0.16 m s-1 along the east coast of Philippines, gradually strengthen and pass the Luzon coast, and continue northward to Luzon strait. On seasonal time scale, the LUC is strong (weak) in boreal winter (summer), and this variation is related to the seasonal evolution of large-scale ocean circulation east of Philippines mainly controlled by local wind forcing.
Li, Yuk Mun; Srinivasan, Divya; Vaidya, Parth; Gu, Yibei; Wiesner, Ulrich
2016-10-01
Departing from the traditional formation of block copolymer-derived isoporous membranes from a single block copolymer chemistry, here asymmetric membranes with an isoporous surface structure are derived from two chemically distinct block copolymers blended during standard membrane fabrication. As a first proof of principle, the fabrication of asymmetric membranes is reported, blended from two chemically distinct triblock terpolymers, poly(isoprene-b-styrene-b-(4-vinyl)pyridine) (ISV) and poly(isoprene-b-styrene-b-(dimethylamino)ethyl methacrylate) (ISA), which differ in their pH-responsive hydrophilic segment. Using block copolymer self-assembly and the nonsolvent-induced phase separation process, pure and blended membranes are prepared by varying the weight ratio of ISV to ISA. Pure and blended membranes exhibit a thin, selective layer of pores above a macroporous substructure. The observed permeabilities of blended membranes at varying pH values depend on the relative triblock terpolymer composition. These results open a new direction for membrane fabrication through the use of mixtures of chemically distinct block copolymers, enabling the tailoring of membrane surface chemistries and functionalities.
2017-01-01
Hematopoietic stem cells (HSCs) are the therapeutic component of bone marrow transplants, but finding immune-compatible donors limits treatment availability and efficacy. Recapitulation of endogenous specification during development is a promising approach to directing HSC specification in vitro, but current protocols are not capable of generating authentic HSCs with high efficiency. Across phyla, HSCs arise from hemogenic endothelium in the ventral floor of the dorsal aorta, concurrent with arteriovenous specification and intersegmental vessel (ISV) sprouting, processes regulated by Notch and Wnt. We hypothesized that coordination of HSC specification with vessel patterning might involve modulatory regulatory factors such as R-spondin 1 (Rspo1), an extracellular protein that enhances β-catenin-dependent Wnt signaling and has previously been shown to regulate ISV patterning. We find that Rspo1 is required for HSC specification through control of two parallel signaling pathways: Wnt16/DeltaC/DeltaD and Vegfa/Tgfβ1. Our results define Rspo1 as a key upstream regulator of two crucial pathways necessary for HSC specification. PMID:28087636
ERIC Educational Resources Information Center
Spinelli, Simona; Vasa, Roma A.; Joel, Suresh; Nelson, Tess E.; Pekar, James J.; Mostofsky, Stewart H.
2011-01-01
Background: Error processing is reflected, behaviorally, by slower reaction times (RT) on trials immediately following an error (post-error). Children with attention-deficit hyperactivity disorder (ADHD) fail to show RT slowing and demonstrate increased intra-subject variability (ISV) on post-error trials. The neural correlates of these behavioral…
Evidence of an Intrinsic Intraseasonal Oscillation over Tropical South America During Austral Summer
NASA Technical Reports Server (NTRS)
Zhou, Jiayu; Lau, William K.-M.
2002-01-01
The intraseasonal variation (ISV) in the 30-60 day band, also known as the Madden-Julian oscillation (MJO), has been studied for decades. Madden and Julian showed that the oscillation originated from the western Indian Ocean, propagated eastward, was enhanced over the maritime continent and weakened after passing the dateline. Composite studies showed evidence of a signal in upper- and lower-level zonal wind propagating around the globe during an oscillation. Theoretical studies pointed out that the interaction with the warm ocean surface and the coupling with convective and radiative processes in the atmosphere could manifest the oscillation, which propagates eastward via mutual feedbacks between the wave motions and the cumulus heating. Over tropical South America, no independent 30-60 day oscillation has been reported so far, even though the Amazon is the most distinct tropical convection center in the western hemisphere and the surface fluxes from its tropical rainforests are close to those from the warm tropical ocean. Liebmann et al. showed a distinct spectral peak of a 40-50 day oscillation in outgoing longwave radiation (OLR) over tropical South America and considered it a manifestation of MJO propagation. Nogues-Paegle et al. (2000) focused on a dipole pattern of the OLR anomaly with centers of action over the South Atlantic Convergence Zone (SACZ) and the subtropical plain. They used regional 10-90 day filtered data and demonstrated that this pattern could be represented by the fifth mode of the rotated empirical orthogonal function. Its principal component was further analyzed using singular spectrum analysis.
Their result showed two oscillatory modes with periods of 36-40 days and 22-28 days, of which the former was related to the MJO influence and the latter to remote forcing southwest of Australia, which produced a wave train propagating southeastward, rounding the southern tip of South America and returning toward the northeast. The 22-28 day mode has a distinct impact on the SACZ and is responsible for the regional seesaw pattern of alternating dry and wet conditions. In this study we will focus on the 30-60-day spectral band and investigate whether an independent oscillation source exists over tropical South America. First, we will show the seasonal dependence of the tropical South American ISV in Section 3. Then, the leading principal modes of 30-60 day bandpass-filtered 850-hPa velocity potential (VP850) will be computed to distinguish the stationary ISV over tropical South America (SISA) from the propagating MJO in austral summer in Section 4. The importance of SISA in representing the regional ISV over South America will be discussed. In Section 5, we will demonstrate the mass oscillation regime of SISA, which is well separated from that of the MJO by the Andes, and its convective coupling with rainfall. The dynamical response of SISA and its impact on the South American summer monsoon (SASM) will be presented. Finally, we will give concluding remarks.
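The 30-60 day bandpass step described above can be sketched with a standard digital filter. The filter choice (a Butterworth filter via scipy) and the synthetic daily series are assumptions, since the text does not specify an implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1.0                                  # one sample per day
low, high = 1.0 / 60.0, 1.0 / 30.0        # band edges in cycles per day
b, a = butter(3, [low, high], btype="band", fs=fs)

t = np.arange(720)                        # two years of synthetic daily data
series = (np.sin(2 * np.pi * t / 45)      # 45-day intraseasonal signal
          + np.sin(2 * np.pi * t / 10)    # synoptic-scale noise
          + np.sin(2 * np.pi * t / 365))  # annual cycle

# Zero-phase filtering keeps the intraseasonal signal unshifted in time
filtered = filtfilt(b, a, series)
print(f"std raw={series.std():.2f}, filtered={filtered.std():.2f}")
```

The 45-day component survives nearly intact while the 10-day and annual components are strongly damped, which is exactly the isolation the EOF analysis above requires.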
Intraseasonal Variability in the Atmosphere-Ocean Climate System. Second Edition
NASA Technical Reports Server (NTRS)
Lau, William K. M.; Waliser, Duane E.
2011-01-01
Understanding and predicting the intraseasonal variability (ISV) of the ocean and atmosphere is crucial to improving long-range environmental forecasts and the reliability of climate change projections through climate models. This updated, comprehensive and authoritative second edition has a balance of observation, theory and modeling and provides a single source of reference for all those interested in this important multi-faceted natural phenomenon and its relation to major short-term climatic variations.
Mechanical Behavior and Fatigue Studies of Rubber Components in Army Tracked Vehicles
2010-08-13
strategy moved to glassy polymers (Bouvard et al., 2010) – Current efforts to apply ISV modeling strategy to elastomers • Fatigue approach – Researchers...metals at CAVS – Researchers have typically only investigated long cracks for elastomers (Mars and Fatemi, 2003; Busfield et al., 2002; Chou et al., 2007) – Current efforts are to add MSC/PSC, INC to fatigue modeling of elastomers and incorporate microstructure
NASA Astrophysics Data System (ADS)
Liang, Zhanlin; Xie, Qiang; Zeng, Lili; Wang, Dongxiao
2018-03-01
In addition to widely discussed seasonal variability, the barrier layer (BL) of the South China Sea (SCS) also exhibits significant intraseasonal variability (ISV) and plays an important role in the upper-ocean heat and salt balances. The characteristics and mechanisms of spatiotemporal variations in the BL are investigated using output from an eddy-resolving ocean model, OFES (OGCM for the Earth Simulator), together with related atmospheric and oceanic processes. The active intraseasonal BL variability in the SCS occurs mainly during late summer/autumn and winter and exhibits remarkable differences between these two periods. The BL ISV in late summer/autumn occurs in the southern basin, while in winter it is limited to the northwestern basin. To further discuss the evolution and driving thermodynamic mechanisms, we quantify the processes that control the intraseasonal BL variability. Different mechanisms for the intraseasonal BL variability in these two active periods are investigated based on case studies and composite analysis. During late summer/autumn, the active BL in the southern basin is generated by advected and local freshwater, and then decays rapidly with the enhanced wind. In winter, anticyclonic eddy activity is associated with the evolution of the BL through its effect on thermocline and halocline variations, while wind stress and wind stress curl have no obvious influence on the BL.
The role of SST variability in the simulation of the MJO
NASA Astrophysics Data System (ADS)
Stan, Cristiana
2017-12-01
The sensitivity of the Madden-Julian Oscillation to high-frequency variability (period 1-5 days) of sea surface temperature (SST) is investigated using numerical experiments with the super-parameterized Community Climate System Model. The findings of this study emphasize the importance of air-sea interactions in the simulation of the MJO, and stress the necessity of an accurate representation of ocean variability on short time scales. Eliminating 1-5-day variability of surface boundary forcing reduces the intraseasonal variability (ISV) of the tropics during the boreal winter. The ISV spectrum becomes close to the red noise background spectrum. The variability of atmospheric circulation shifts to longer time scales. In the absence of high-frequency variability of SST the MJO power gets confined to wavenumbers 1-2 and the magnitude of westward power associated with Rossby waves increases. The MJO convective activity propagating eastward from the Indian Ocean does not cross the Maritime Continent, and convection in the western Pacific Ocean is locally generated. In the Indian Ocean convection tends to follow the meridional propagation of SST anomalies. The response of the MJO to 1-5-day variability in the SST is through the charging and discharging mechanisms contributing to the atmospheric column moist static energy before and after peak MJO convection. Horizontal advection and surface fluxes show the largest sensitivity to SST perturbations.
Application and evaluation of ISVR method in QuickBird image fusion
NASA Astrophysics Data System (ADS)
Cheng, Bo; Song, Xiaolu
2014-05-01
QuickBird satellite images are widely used in many fields, and applications have put forward high requirements for the integration of the spatial and spectral information of the imagery. A fusion method for high-resolution remote sensing images based on ISVR is presented in this study. The core principle of ISVR is to take advantage of radiometric calibration to remove the effects of the differing gains and errors of the satellite's sensors. After transformation from DN to radiance, the energy of the multispectral image is used to simulate the panchromatic band. Linear regression analysis is carried out in the simulation process to find a new synthetic panchromatic image that is highly linearly correlated with the original panchromatic image. In order to evaluate, test and compare the algorithm results, this paper applied ISVR and two other fusion methods in a comparative study of spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method can significantly improve the quality of the fused image, especially in preserving spectral information, maximizing the spectral information of the original multispectral images while maintaining abundant spatial information.
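The regression step at the heart of the method, fitting the panchromatic band as a linear combination of the multispectral bands, can be sketched as follows. The array shapes, band weights, and noise level are illustrative assumptions, not QuickBird calibration values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix = 10_000
ms = rng.random((n_pix, 4))                     # 4 multispectral bands (radiance, flattened pixels)
true_w = np.array([0.25, 0.35, 0.30, 0.10])     # hypothetical sensor weighting
pan = ms @ true_w + rng.normal(0, 0.01, n_pix)  # observed panchromatic band

# Least-squares fit of pan ~ ms yields the synthetic pan image used in fusion
w, *_ = np.linalg.lstsq(ms, pan, rcond=None)
pan_sim = ms @ w
r = np.corrcoef(pan, pan_sim)[0, 1]
print(f"weights={np.round(w, 2)}, corr={r:.3f}")
```

A correlation coefficient near 1 between `pan_sim` and `pan` is what the text means by a "highly linearly correlated" synthetic panchromatic image.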
NASA Astrophysics Data System (ADS)
Zhou, Lei; Murtugudde, Raghu; Neale, Richard B.; Jochum, Markus
2018-01-01
The simulation of the Indian summer monsoon and its pronounced intraseasonal component in a modern climate model remains a significant challenge. Recently, using observations and reanalysis products, the central Indian Ocean (CIO) mode was found to be a natural mode in the ocean-atmosphere coupled system and also shown to have a close mechanistic connection with the monsoon intraseasonal oscillation (MISO). In this study, the simulation of the actual CIO mode in historical Community Earth System Model (CESM) outputs is assessed by comparing with observations and reanalysis products. The simulation of the Madden-Julian Oscillation, a major component of tropical intraseasonal variabilities (ISVs), is satisfactory. However, the CIO mode is not well captured in any of the CESM simulations considered here. The force and response relationship between the atmosphere and the ocean associated with the CIO mode in CESM is opposite to that in nature. The simulated meridional gradient of large-scale zonal winds is too weak, which precludes the necessary energy conversion from the mean state to the ISVs and cuts off the energy source to MISO in CESM. The inability of CESM to reproduce the CIO mode seen clearly in nature highlights the CIO mode as a new dynamical framework for diagnosing the deficiencies in Indian summer monsoon simulation in climate models. The CIO mode is a coupled metric for evaluating climate models and may be a better indicator of a model's skill to accurately capture the tropical multiscale interactions over subseasonal to interannual timescales.
1992-01-01
Brote de Fiebre Paratifoidea Entre Personal de la Marina del Peru [Outbreak of Paratyphoid Fever Among Personnel of the Peruvian Navy]. Pazzaglia G; Wignall FS. (Abstract fragment garbled by OCR; the recoverable Spanish text refers to classic typhoid and paratyphoid fever among human infections.)
NASA Astrophysics Data System (ADS)
Basel, Tek Prasad
We studied the optical, electrical, and magnetic field responses of films and devices based on organic semiconductors that are used for organic light-emitting diode (OLED) and organic photovoltaic (OPV) solar cell applications. Our studies show that hyperfine interaction (HFI)-mediated spin mixing is the key process underlying various magnetic field effects (MFE) and spin transport in aluminum tris(8-hydroxyquinoline) [Alq3]-based OLEDs and organic spin valves (OSVs). Conductivity-detected magnetic resonance in OLEDs and magneto-resistance (MR) in OSVs show substantial isotope dependence. In contrast, the isotope-insensitive behavior of the magneto-conductance (MC) of the same devices is explained by the collision of spin-½ carriers with triplet polaron pairs. We used steady-state optical spectroscopy to study the energy transfer dynamics in films and OLEDs based on host-guest blends of a fluorescent polymer and a phosphorescent molecule. We have also studied magnetic-field-controlled color manipulation in these devices, which provides strong proof of the `polaron-pair' mechanism underlying the MFE in organic devices. The critical issue that hampers organic spintronics device applications is achieving significant magneto-electroluminescence (MEL) at room temperature (RT). Whereas inorganic spin valves (ISVs) show RT magneto-resistance of MR > 80%, they do not exhibit electroluminescence (EL). In contrast, OLEDs show substantive EL emission and are particularly attractive because of their flexibility, low cost, and potential for multicolor displays. We report a conceptually novel hybrid organic/inorganic spintronics device (h-OLED), in which we employ both an ISV with large MR at RT and an OLED with efficient EL emission. We investigated the charge transfer process in an OPV solar cell through optical, electrical, and magnetic field measurements of thin films and devices based on a low-bandgap polymer, PTB7 (fluorinated poly-thienothiophene-benzodithiophene).
We found that one of the major losses that limit the power conversion efficiency of OPV devices is the formation of triplet excitons in the polymer through recombination of charge-transfer (CT) excitons at the interface, and presented a method to suppress the dissociation of CT states by incorporating the spin ½ additive, galvinoxyl in the bulk heterojunction architecture of the active organic blend layer.
Tekin, Kemal; Sonmez, Kenan; Inanc, Merve; Ozdemir, Kubra; Goker, Yasin Sakir; Yilmazbas, Pelin
2018-04-01
To evaluate the corneal topographic changes and postvitrectomy astigmatism after 27-gauge (g) microincision vitrectomy surgery (MIVS) by using the Pentacam HR Scheimpflug imaging system. This prospective descriptive study included 30 eyes of 30 patients who underwent 27-g MIVS. All eyes underwent a Pentacam HR examination preoperatively and at the first week, first month and third month postoperatively. The power of the corneal astigmatism, mean keratometry (Km), K1 and K2 values and corneal asphericity (Q value) for both the front and back surfaces of the cornea, index of surface variance (ISV), index of vertical asymmetry (IVA), index of height asymmetry (IHA), index of height decentration (IHD) and higher-order aberrations including coma, trefoil, spherical aberration, higher-order root-mean-square (RMS) and total RMS were recorded. Additionally, the mean induced astigmatism was estimated by vector analysis. No statistically significant changes were observed in the mean power of corneal astigmatism, mean keratometry, K1 and K2 values, corneal asphericity values, ISV, IVA, IHA, IHD or higher-order aberrations at the first week, first month or third month after the operation. The mean surgically induced astigmatism was calculated as 0.23 ± 0.11 D at the first week, 0.19 ± 0.10 D at the first month and 0.19 ± 0.08 D at the third month postoperatively. Minor corneal surface and induced astigmatic changes are expected to result in rapid visual rehabilitation after pars plana vitrectomy with the 27-g MIVS system.
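Surgically induced astigmatism (SIA) estimated "by vector analysis" is commonly computed with double-angle vectors, since astigmatism axes wrap at 180 degrees. The paper does not give its formulas, so the sketch below is one standard variant with illustrative refraction values:

```python
import numpy as np

def to_vec(cyl, axis_deg):
    """Cylinder magnitude + axis -> double-angle Cartesian components."""
    a = np.deg2rad(2.0 * axis_deg)
    return np.array([cyl * np.cos(a), cyl * np.sin(a)])

def sia(pre_cyl, pre_axis, post_cyl, post_axis):
    """Magnitude of the induced-astigmatism vector, in diopters."""
    return float(np.linalg.norm(to_vec(post_cyl, post_axis) - to_vec(pre_cyl, pre_axis)))

# Hypothetical eye: 0.50 D @ 90 preop, 0.60 D @ 95 postop
print(f"SIA = {sia(0.50, 90, 0.60, 95):.2f} D")
```

Averaging this per-eye magnitude over the cohort yields values of the form reported above (e.g., 0.23 ± 0.11 D at one week).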
Chung, Shiu-Dong; Wu, Chia-Chang; Lin, Victor Chia-Hsiang; Ho, Chen-Hsun; Yang, Stephen Shei Dei; Tsai, Yao-Chou
2011-08-01
In this study we present our experience using minilaparoscopic intracorporeal knot tying to ligate internal spermatic veins (ISV) while sparing the spermatic artery and lymphatics. Minilaparoscopic varicocelectomies were performed in 87 patients between January 2004 and January 2009. All varicoceles were detected clinically according to the World Health Organization (WHO) classification and confirmed by scrotal color Doppler ultrasonography. The surgical indications were scrotal symptoms in 71, infertility in 16, and both conditions in 2. Three 3.5 mm minilaparoscopic ports were used for the operation. The ISVs were dissected and then ligated with intracorporeal knot-tying. The testicular artery and lymphatic vessels were carefully preserved to minimize procedure-related complications. Unilateral laparoscopic varicocelectomy was performed in 21 (24.2%) patients and bilateral in 66 (75.8%). Mean operative time was 71.1 ± 29.2 and 46.8 ± 12.6 min for bilateral and unilateral varicocelectomies, respectively. All patients were discharged within 24 h after surgery. Neither immediate major nor late procedure-related complications were noted. Of the 71 patients with scrotal symptoms, the symptoms completely subsided in 55 (77.5%) and partially subsided in 10 (14.1%). Only one (1.2%) recurrent varicocele was detected within a mean follow-up of 21 months (range = 3-42). Neither hydrocele formation nor testicular atrophy was found during the follow-up period. Our 5-year experience revealed that minilaparoscopic varicocelectomy with sparing of artery and lymphatic vessels could safely and effectively ligate all spermatic veins and preserve spermatic arteries and lymphatic channels without leading to a high varicocele persistence or recurrence.
2013-02-15
red fuming nitric acid (RFNA), which is composed of nitric acid (HNO3, 85 wt%) and NO2 (8-15 wt%). Recently the impinging stream vortex engine (ISVE...nitric acid [51]. As a result, growth of the particles is favored over H-abstraction reactions at the low temperatures of our experiments. As the...followed by proton transfer from the N-H bond to NO3 to form nitric acid, as shown in Scheme 3. Although it is very easy to form nitric acid (enthalpic
Retractor-induced brain shift compensation in image-guided neurosurgery
NASA Astrophysics Data System (ADS)
Fan, Xiaoyao; Ji, Songbai; Hartov, Alex; Roberts, David; Paulsen, Keith
2013-03-01
In image-guided neurosurgery, intraoperative brain shift significantly degrades the accuracy of neuronavigation that is based solely on preoperative magnetic resonance images (pMR). To compensate for brain deformation and to maintain the accuracy in image guidance achieved at the start of surgery, biomechanical models have been developed to simulate brain deformation and to produce model-updated MR images (uMR) that compensate for brain shift. To date, most studies have focused on shift compensation at early stages of surgery (i.e., updated images are only produced after craniotomy and durotomy). Simulating surgical events at later stages, such as retraction and tissue resection, is perhaps clinically more relevant because of the typically much larger magnitudes of brain deformation. However, these surgical events are substantially more complex in nature, thereby posing significant challenges for model-based brain shift compensation strategies. In this study, we present results from an initial investigation of simulating retractor-induced brain deformation through a biomechanical finite element (FE) model, where whole-brain deformation assimilated from intraoperative data was used to produce uMR for improved accuracy in image guidance. Specifically, intensity-encoded 3D surface profiles at the exposed cortical area were reconstructed from intraoperative stereovision (iSV) images before and after tissue retraction. Retractor-induced surface displacements were then derived by coregistering the surfaces and served as sparse displacement data to drive the FE model. With one patient case, we show that our technique is able to produce uMR that agrees well with the reconstructed iSV surface after retraction. The computational cost of simulating retractor-induced brain deformation was approximately 10 min. In addition, our approach introduces minimal interruption to the surgical workflow, suggesting the potential for its clinical application.
2014-01-01
Background Mutations in the cyclin-dependent kinase-like 5 (CDKL5) (NM_003159.2) gene have been associated with early-onset epileptic encephalopathies or Hanefeld variants of RTT (Rett syndrome). To clarify the CDKL5 genotype-phenotype correlations in Chinese patients, CDKL5 mutational screening was performed in cases with early-onset epileptic encephalopathies and RTT without MECP2 mutation. Methods Detailed clinical information, including clinical manifestations, electroencephalogram (EEG), magnetic resonance imaging (MRI), and blood and urine amino acid and organic acid screening, was collected for 102 Chinese patients with early-onset epileptic encephalopathies and RTT. CDKL5 gene mutations were analyzed by PCR, direct sequencing and multiplex ligation-dependent probe amplification (MLPA). The patterns of X-chromosome inactivation (XCI) were studied in the female patients with CDKL5 gene mutations. Results De novo CDKL5 gene mutations were found in ten patients, including one previously reported missense mutation (c.533G > A, p.R178Q), two splicing mutations (IVS6 + 1A > G, IVS13 + 1A > G), three micro-deletions (c.1111delC, c.2360delA, c.234delA), two insertions (c.1791insG, c.891_892insTT in a pair of twins) and one nonsense mutation (c.1375C > T, p.Q459X). Of the ten patients, nine were female (seven with Hanefeld variants of RTT and two with early-onset epileptic encephalopathy) and one was a male with infantile spasms. The common features of all female patients with CDKL5 gene mutations included refractory seizures starting before 4 months of age, severe psychomotor retardation, Rett-like features such as hand stereotypies, deceleration of head growth after birth and poor prognosis. In contrast, the only male patient with a CDKL5 mutation showed none of the obvious Rett-like features seen in the females in our cohort. The X-chromosome inactivation patterns of all the female patients were random. 
Conclusions Mutations in the CDKL5 gene accounted for seven cases of Hanefeld variant RTT and two cases of early-onset epileptic encephalopathy among 71 girls, as well as one case of infantile spasms among 31 boys. There are some differences in phenotype between genders with CDKL5 gene mutations, and CDKL5 mutation analysis should be considered in both genders. PMID:24564546
NASA Technical Reports Server (NTRS)
Saleeb, Atef F.; Vaidyanathan, Raj
2016-01-01
The report summarizes the accomplishments made during the 4-year duration of the project. Here, the major emphasis is placed on the different tasks performed by the two research teams; i.e., the modeling activities by the University of Akron (UA) team and the experimental and neutron diffraction studies conducted by the University of Central Florida (UCF) team, during this 4-year period. Further technical details are given in the upcoming sections by UA and UCF for each of the milestones/years (together with the corresponding figures and captions). The project mainly involved the development, validation, and application of a general theoretical model that is capable of capturing the nonlinear hysteretic responses of shape memory alloys, including pseudoelasticity, the shape memory effect, rate-dependency, multi-axiality, and asymmetry in tension versus compression. Among the targeted goals for the SMA model was its ability to account for the evolutionary character of the response (including transient and long-term behavior under sustained cycles) for both conventional and high temperature (HT) SMAs, as well as to simulate some of the devices which exploit these unique material systems. This required extensive (uniaxial and multi-axial) experiments to guide us in calibrating and characterizing the model. Moreover, since the model is formulated on the theoretical notion of internal state variables (ISVs), neutron diffraction experiments were needed to establish the linkage between the micromechanical changes and these ISVs. In addition, the design of the model should allow easy implementation in large-scale finite element applications to study the behavior of devices making use of these SMA materials under different loading controls. A summary of the activities and progress/achievements made during this period is given below in detail for the University of Akron (Section 2.0) and the University of Central Florida (Section 3.0).
Connectivity supporting attention in children with attention deficit hyperactivity disorder
Barber, Anita D.; Jacobson, Lisa A.; Wexler, Joanna L.; Nebel, Mary Beth; Caffo, Brian S.; Pekar, James J.; Mostofsky, Stewart H.
2014-01-01
Intra-subject variability (ISV) is the most consistent behavioral deficit in Attention Deficit Hyperactivity Disorder (ADHD). ISV may be associated with networks involved in sustaining task control (cingulo-opercular network: CON) and self-reflective lapses of attention (default mode network: DMN). The current study examined whether connectivity supporting attentional control is atypical in children with ADHD. Group differences in full-brain connection strength and brain–behavior associations with attentional control measures were examined for the late-developing CON and DMN in 50 children with ADHD and 50 typically-developing (TD) controls (ages 8–12 years). Children with ADHD had hyper-connectivity both within the CON and within the DMN. Full-brain behavioral associations were found for a number of between-network connections. Across both groups, more anti-correlation between DMN and occipital cortex supported better attentional control. However, in the TD group, this brain–behavior association was stronger and occurred for a more extensive set of DMN–occipital connections. Differential support for attentional control between the two groups occurred with a number of CON–DMN connections: for all CON–DMN connections identified, increased between-network anti-correlation was associated with better attentional control in the ADHD group but worse attentional control in the TD group. A number of between-network connections with the medial frontal cortex, in particular, showed this relationship. Follow-up analyses revealed that these associations were specific to attentional control and were not due to individual differences in working memory, IQ, motor control, age, or scan motion. In summary, greater CON–DMN anti-correlation supported better attentional control in children with ADHD but worse attentional control in TD children, whereas greater DMN–occipital anti-correlation supported better attentional control in TD children. PMID:25610768
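The connectivity measure underlying these findings, the correlation (or anti-correlation) between region-averaged BOLD time series, can be sketched with synthetic data (a minimal illustration with invented series and parameters, not the study's actual pipeline):

```python
import numpy as np

# Hypothetical ROI-averaged BOLD time series, one per network node.
rng = np.random.default_rng(2)
t = np.arange(200)
con = np.sin(t / 10) + 0.5 * rng.standard_normal(200)   # CON node signal
dmn = -np.sin(t / 10) + 0.5 * rng.standard_normal(200)  # DMN node, anti-phase

# Functional connectivity as the Pearson correlation of the two series;
# a negative value is the "anti-correlation" discussed in the abstract.
fc = np.corrcoef(con, dmn)[0, 1]
print(f"CON-DMN connectivity: {fc:.2f}")
```

A negative coefficient of this kind is what the abstract refers to as CON–DMN or DMN–occipital anti-correlation.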
Zhou, Tian; Dong, Qinglei; Shen, Yang; Wu, Wei; Wu, Haide; Luo, Xianglin; Liao, Xiaoling; Wang, Guixue
2016-01-01
Micro/nanoparticles can cause adverse effects on the cardiovascular system and increase the risk for cardiovascular disease-related events. Nanoparticles prepared from poly(ethylene glycol) (PEG)-b-poly(ε-caprolactone) (PCL), namely PEG-b-PCL, a widely studied biodegradable copolymer, are promising carriers for drug delivery systems. However, it is unknown whether polymeric PEG-b-PCL nano-micelles give rise to potential complications of the cardiovascular system. Zebrafish were used as an in vivo model to evaluate the effects of PEG-b-PCL nano-micelle on cardiovascular development. The results showed that PEG-b-PCL nano-micelle caused embryo mortality as well as embryonic and larval malformations in a dose-dependent manner. To determine PEG-b-PCL nano-micelle effects on embryonic angiogenesis, a critical process in zebrafish cardiovascular development, growth of intersegmental vessels (ISVs) and caudal vessels (CVs) in flk1-GFP transgenic zebrafish embryos was examined using fluorescent stereomicroscopy. The expression of fetal liver kinase 1 (flk1), an angiogenic factor, was also analyzed by real-time quantitative polymerase chain reaction (qPCR) and in situ whole-mount hybridization. PEG-b-PCL nano-micelle decreased growth of ISVs and CVs, as well as reduced flk1 expression, in a concentration-dependent manner. Parallel to the inhibitory effects on angiogenesis, PEG-b-PCL nano-micelle exposure upregulated the p53 pro-apoptotic pathway and induced cellular apoptosis in angiogenic regions, as shown by qPCR and terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) apoptosis assays. This study further showed that inhibiting p53 activity, either by pharmacological inhibitor or RNA interference, could abrogate the apoptosis and angiogenic defects caused by PEG-b-PCL nano-micelles, indicating that PEG-b-PCL nano-micelle inhibits angiogenesis by activating p53-mediated apoptosis. 
This study indicates that polymeric PEG-b-PCL nano-micelle could pose potential hazards to cardiovascular development. PMID:27980407
Mechanical and Fatigue Properties of Additively Manufactured Metallic Materials
NASA Astrophysics Data System (ADS)
Yadollahi, Aref
This study aims to investigate the mechanical and fatigue behavior of additively manufactured metallic materials. Several challenges associated with different metal additive manufacturing (AM) techniques (i.e. laser-powder bed fusion and direct laser deposition) have been addressed experimentally and numerically. Experiments have been carried out to study the effects of the inter-layer time interval--i.e. building the samples either one-at-a-time or multi-at-a-time (in parallel)--on the microstructural features and mechanical properties of 316L stainless steel samples fabricated via direct laser deposition (DLD). Next, the effect of building orientation--i.e. the orientation in which AM parts are built--on the microstructure, tensile, and fatigue behaviors of 17-4 PH stainless steel fabricated via a laser-powder bed fusion (L-PBF) method was investigated. Afterwards, the effect of surface finish--here, as-built versus machined--on the uniaxial fatigue behavior and failure mechanisms of Inconel 718 fabricated via an L-PBF technique was examined. The numerical studies, as part of this dissertation, aimed to model the mechanical behavior of AM materials under monotonic and cyclic loading, based on the observations and findings from the experiments. Despite significant research efforts toward optimizing process parameters, achieving a homogeneous, defect-free AM product--immediately after fabrication--has not yet been fully demonstrated. Thus, one solution for ensuring the adoption of AM materials should center on predicting the variations in mechanical behavior of AM parts based on their resultant microstructure. In this regard, an internal state variable (ISV) plasticity-damage model was employed to quantify the damage evolution in DLD 316L SS under tensile loading, using the microstructural features associated with the manufacturing process. Finally, the fatigue behavior of AM parts has been modeled based on the crack-growth concept. 
Using the FASTRAN code, the fatigue-life of L-PBF Inconel 718 was accurately calculated using the size and shape of process-induced voids in the material. In addition, the maximum valley depth of the surface profile was found to be an appropriate representative of the initial surface flaw for fatigue-life prediction of AM materials in an as-built surface condition.
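The crack-growth concept for fatigue-life prediction can be illustrated with a generic Paris-law integration from an initial void-sized flaw to a critical crack length (a simplified sketch, not the FASTRAN model; the Paris constants, geometry factor, and stress range below are assumed purely for illustration):

```python
import numpy as np

# Generic Paris-law crack-growth sketch; all values are illustrative.
C, m = 1.0e-11, 3.0        # assumed Paris constants, m/cycle per (MPa*sqrt(m))^m
Y = 1.12                   # assumed geometry factor for a surface flaw
stress_range = 400.0       # assumed applied stress range, MPa
a, a_crit = 50e-6, 5e-3    # initial (void-sized) flaw and critical crack length, m

cycles = 0.0
da = 1e-6                  # crack-length integration step, m
while a < a_crit:
    # Stress-intensity factor range at the current crack length, MPa*sqrt(m).
    dK = Y * stress_range * np.sqrt(np.pi * a)
    # Cycles needed to grow the crack by da at the Paris-law rate da/dN = C*dK^m.
    cycles += da / (C * dK**m)
    a += da

print(f"estimated fatigue life: {cycles:.2e} cycles")
```

Starting the integration from a larger initial flaw (e.g. a bigger process-induced void, or a deep as-built surface valley) shortens the predicted life, which is the qualitative trend the dissertation exploits.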
Development and Implementation of the Casting of Rods Made of Refractory Cast Alloys
NASA Astrophysics Data System (ADS)
Kabanov, I. V.; Urin, S. L.; Ivanyuk, A. S.; Nesterov, A. N.; Bogdanov, S. V.
2017-12-01
The problems of the production of a so-called cast rod blank made of a refractory casting alloy in the vacuum induction furnaces of AO Metallurgical Plant Electrostal are considered. A unique technology for casting and subsequent treatment of as-cast rod blanks made of refractory alloys has been developed, tested, and optimized. As a result of the developed and performed measures for the production of metal products in the Consarc furnace, the ingot-to-product yield increases by 15% as compared to metal casting in an ISV-1.0 furnace. As a result, we have widened the range of cast alloy grades and are going to cast metals for the manufacture of blanks of other sizes and a wider range of alloy and steel grades.
Comparing and Contrasting Detectors: JWST NIR vs HST WFC3
NASA Technical Reports Server (NTRS)
Rauscher, Bernard J.
2015-01-01
In many ways, WFC3's IR channel is a good indicator of what to expect with JWST. There are some differences, most of which should be beneficial for JWST: JWST's lower operating temperature will freeze out charge traps that affect WFC3, with expected benefits including lower dark current, lower persistence, and better reciprocity; JWST's more recent HgCdTe process has lower defect density, with similar benefits; and JWST uses better indium barriers, which should result in fewer RC-type pixels. One area where more study might be beneficial is stability. The detector electronics play a significant role in determining how stable a detector system is (vs. bias drifts and photometry), and JWST's SIDECARs are completely different from WFC3's Ball electronics. Studies comparing the bias and photometric stability of WFC3 and JWST might be useful for informing data acquisition and calibration strategies for JWST.
Zhao, Ying; Zhang, Xiaoying; Bao, Xinhua; Zhang, Qingping; Zhang, Jingjing; Cao, Guangna; Zhang, Jie; Li, Jiarui; Wei, Liping; Pan, Hong; Wu, Xiru
2014-02-25
Mutations in the cyclin-dependent kinase-like 5 (CDKL5) (NM_003159.2) gene have been associated with early-onset epileptic encephalopathies or Hanefeld variants of RTT (Rett syndrome). To clarify the CDKL5 genotype-phenotype correlations in Chinese patients, CDKL5 mutational screening was performed in cases with early-onset epileptic encephalopathies and RTT without MECP2 mutation. Detailed clinical information, including clinical manifestations, electroencephalogram (EEG), magnetic resonance imaging (MRI), and blood and urine amino acid and organic acid screening, was collected for 102 Chinese patients with early-onset epileptic encephalopathies and RTT. CDKL5 gene mutations were analyzed by PCR, direct sequencing and multiplex ligation-dependent probe amplification (MLPA). The patterns of X-chromosome inactivation (XCI) were studied in the female patients with CDKL5 gene mutations. De novo CDKL5 gene mutations were found in ten patients, including one previously reported missense mutation (c.533G > A, p.R178Q), two splicing mutations (IVS6 + 1A > G, IVS13 + 1A > G), three micro-deletions (c.1111delC, c.2360delA, c.234delA), two insertions (c.1791insG, c.891_892insTT in a pair of twins) and one nonsense mutation (c.1375C > T, p.Q459X). Of the ten patients, nine were female (seven with Hanefeld variants of RTT and two with early-onset epileptic encephalopathy) and one was a male with infantile spasms. The common features of all female patients with CDKL5 gene mutations included refractory seizures starting before 4 months of age, severe psychomotor retardation, Rett-like features such as hand stereotypies, deceleration of head growth after birth and poor prognosis. In contrast, the only male patient with a CDKL5 mutation showed none of the obvious Rett-like features seen in the females in our cohort. The X-chromosome inactivation patterns of all the female patients were random. 
Mutations in the CDKL5 gene accounted for seven cases of Hanefeld variant RTT and two cases of early-onset epileptic encephalopathy among 71 girls, as well as one case of infantile spasms among 31 boys. There are some differences in phenotype between genders with CDKL5 gene mutations, and CDKL5 mutation analysis should be considered in both genders.
NASA Astrophysics Data System (ADS)
Mani, N. J.; Waliser, D. E.; Jiang, X.
2014-12-01
While the boreal summer monsoon intraseasonal variability (BSISV) exerts a profound influence on the south Asian monsoon, the capability of present-day dynamical models in simulating and predicting the BSISV is still limited. The global model evaluation project on the vertical structure and diabatic processes of the Madden-Julian Oscillation (MJO) is a joint venture, coordinated by the Working Group on Numerical Experimentation (WGNE) MJO Task Force and the GEWEX Atmospheric System Study (GASS) program, for assessing model deficiencies in simulating the ISV and for improving our understanding of the underlying processes. In this study the simulation of the northward-propagating BSISV is investigated in 26 climate models, with special focus on the vertical diabatic heating structure and clouds. Following the same lines of inquiry that the MJO Task Force has pursued for the eastward-propagating MJO, we utilize previously proposed and newly developed model performance metrics and process diagnostics and apply them to the global climate model simulations of the BSISV.
NASA Astrophysics Data System (ADS)
Rivera Almeyda, Oscar G.
In this investigation, the processing-structure-property relations are correlated for solid state additively manufactured (SSAM) Inconel 625 (IN625) and a SSAM aluminum alloy 2219 (AA2219). This is the first study of these materials processed by a new SSAM method called additive friction stir (AFS). The AFS process produces a refined grain structure by extruding a solid rod through a rotating tool, generating heat and severe plastic deformation. IN625 is known for its oxidation resistance and high-temperature mechanical stability, including strength and ductility. This study is the first to investigate the beneficial grain refinement and densification produced by AFS in IN625, which result in advantageous mechanical properties (YS, UTS, εf) at both quasi-static and high strain rates. Electron backscatter diffraction (EBSD) revealed dynamic recrystallization (DRX) and grain refinement during layer deposition in the AFS specimens, identifying fine equiaxed grain structures formed by DRX, with even finer grain structures forming at the layer interfaces. EBSD quantified grains as fine as 0.27 microns in these interface regions, while the average grain size was approximately 1 micron. Additionally, this is the first study to report on the strain rate dependence of AFS IN625 through quasi-static (QS) (0.001/s) and high strain rate (HR) (1500/s) tensile experiments, using a servo-hydraulic frame and a direct tension Kolsky bar, respectively, which captured both yield and ultimate tensile strengths increasing with strain rate. Fractography showed a ductile fracture surface at both QS and HR. The other AFS material system investigated in this study, AA2219, is mostly used for aerospace applications, specifically rocket fuel tanks. 
EBSD was performed on the cross-section of the AA2219, which also exhibited DRX, with an equiaxed microstructure in all three directions and an average grain size of 2.5 microns. EBSD pole figures showed that the material has a strong torsional fiber A texture at the top of the build, and this texture weakens in the middle and bottom sections. TEM showed that there are no θ' precipitates in the as-deposited cross-section; therefore, no precipitation strengthening should be expected. Strain rate and stress state dependence was studied: in both tension and compression, with an increase in strain rate, the YS increased and the UTS decreased. Ductile fracture surfaces were observed on specimens tested to failure at both QS and HR. The AFS AA2219 processing-structure-property relations are studied in this investigation to address the stress-state and strain rate dependence of AFS AA2219 with an internal state variable (ISV) plasticity-damage model that captures the different yield stress, work hardening, and failure strain in the AFS AA2219 for high-fidelity modeling of AFS components. The ISV plasticity model successfully captured the material behavior in tension, compression, tension-followed-by-compression and compression-followed-by-tension experiments. Furthermore, the damage parameters of the model were calibrated using the final void density measured from the fracture surfaces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spalding, B.P.; Naney, M.T.
1995-06-01
This plan is to be implemented for Phase III ISV operations and post-operations sampling. Two previous project phases involving site characterization have been completed and required their own site-specific health and safety plans. Project activities will take place at Seepage Pit 1 in Waste Area Grouping 7 at ORNL, Oak Ridge, Tennessee. The purpose of this document is to establish standard health and safety procedures for ORNL project personnel and contractor employees in the performance of this work. Site activities shall be performed in accordance with Energy Systems safety and health policies and procedures; DOE orders; Occupational Safety and Health Administration Standards 29 CFR Parts 1910 and 1926; applicable United States Environmental Protection Agency requirements; and consensus standards. Where the word "shall" is used, the provisions of this plan are mandatory. Specific requirements of regulations and orders have been incorporated into this plan in accordance with applicability. Included from 29 CFR are 1910.120, Hazardous Waste Operations and Emergency Response; 1910.146, Permit Required - Confined Space; and 1910.1200, Hazard Communication; DOE Order requirements include 5480.4, Environmental Protection, Safety and Health Protection Standards; 5480.11, Radiation Protection; and N5480.6, Radiological Control Manual. In addition, guidance and policy will be followed as described in the Environmental Restoration Program Health and Safety Plan. The levels of personal protection and the procedures specified in this plan are based on the best information available from reference documents and site characterization data. Therefore, these recommendations represent the minimum health and safety requirements to be observed by all personnel engaged in this project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovesdi, C.; Spielman, Z.; LeBlanc, K.
An important element of human factors engineering (HFE) pertains to measurement and evaluation (M&E). The role of HFE M&E should be integrated throughout the entire control room modernization (CRM) process and be used for human-system performance evaluation and for diagnosing and resolving potential human engineering deficiencies (HEDs) and other human-machine interface (HMI) design issues. NUREG-0711 describes how HFE in CRM should employ a hierarchical set of measures, particularly during integrated system validation (ISV), including plant performance, personnel task performance, situation awareness, cognitive workload, and anthropometric/physiological factors. Historically, subjective measures have been used primarily because they are easier to collect and do not require specialized equipment. However, relying solely on subjective measures in M&E has pitfalls that negatively impact reliability, sensitivity, and objectivity. As part of comprehensively capturing a diverse set of measures that strengthen findings and inferences about the benefits of emerging technologies like advanced displays, this paper discusses the value of using eye tracking as an objective method in M&E. A brief description of eye tracking technology and relevant eye tracking measures is provided. Additionally, technical considerations and the unique challenges of using eye tracking in full-scale simulations are addressed. Finally, this paper shares preliminary findings regarding the use of a wearable eye tracking system in a full-scale simulator study. These findings should help guide future full-scale simulator studies using eye tracking as a methodology to evaluate human-system performance.
Localizing and tracking electrodes using stereovision in epilepsy cases
NASA Astrophysics Data System (ADS)
Fan, Xiaoyao; Ji, Songbai; Roberts, David W.; Paulsen, Keith D.
2015-03-01
In epilepsy cases, subdural electrodes are often implanted to acquire intracranial EEG (iEEG) for seizure localization and resection planning. However, the electrodes may shift significantly between implantation and resection, during the time that the patient is monitored for iEEG recording. As a result, the accuracy of surgical planning based on electrode locations at the time of resection can be compromised. Previous studies have only quantified the electrode shift with respect to the skull, but not with respect to the cortical surface, because tracking cortical shift between surgeries is challenging. In this study, we use an intraoperative stereovision (iSV) system to visualize and localize the cortical surface as well as electrodes, record three-dimensional (3D) locations of the electrodes in MR space at the time of implantation and resection, respectively, and quantify the raw displacements, i.e., with respect to the skull. Furthermore, we track the cortical surface and quantify the shift between surgeries using an optical flow (OF) based motion-tracking algorithm. Finally, we compute the electrode shift with respect to the cortical surface by subtracting the cortical shift from raw measured displacements. We illustrate the method using one patient example. In this particular patient case, the results show that the electrodes not only shifted significantly with respect to the skull (8.79 +/- 3.00 mm in the lateral direction, ranging from 2.88 mm to 12.87 mm), but also with respect to the cortical surface (7.20 +/- 3.58 mm), whereas the cortical surface did not shift significantly in the lateral direction between surgeries (2.23 +/- 0.76 mm).
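The final step described, electrode shift with respect to the cortex computed as raw (skull-referenced) displacement minus cortical displacement, reduces to a per-electrode vector subtraction. A minimal sketch with hypothetical coordinates (the values are invented; they are not from the patient case):

```python
import numpy as np

# Hypothetical 3D electrode positions (mm, MR space) at implantation and resection.
electrodes_implant = np.array([[10.0, 20.0, 30.0],
                               [15.0, 25.0, 35.0]])
electrodes_resect = np.array([[18.0, 21.0, 31.0],
                              [24.0, 26.0, 36.0]])

# Cortical surface displacement at each electrode site, e.g. obtained from
# optical-flow tracking of the iSV surfaces (hypothetical values).
cortical_shift = np.array([[2.0, 0.5, 0.5],
                           [2.5, 0.5, 0.5]])

# Raw shift: electrode motion with respect to the skull (fixed MR frame).
raw_shift = electrodes_resect - electrodes_implant

# Shift with respect to the cortex: subtract the cortical motion.
shift_vs_cortex = raw_shift - cortical_shift

raw_mag = np.linalg.norm(raw_shift, axis=1)
cortex_mag = np.linalg.norm(shift_vs_cortex, axis=1)
print(raw_mag, cortex_mag)
```

The per-electrode magnitudes of `raw_shift` and `shift_vs_cortex` correspond to the two kinds of displacement statistics reported in the abstract.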
NASA Astrophysics Data System (ADS)
Di Sante, Fabio; Coppola, Erika; Farneti, Riccardo; Giorgi, Filippo
2017-04-01
The South Asia climate is dominated by the monsoon precipitation that divides the climate in two different seasons, the wet and dry seasons, and it influences the lives of billions of peoples. The Indian Summer Monsoon (ISM) has different temporal and spatial scales of variability and it is mainly driven by strong air sea interactions. The monsoon interannual variability (IAV) and the intraseasonal variability (ISV) of daily rainfall are the two most important scale of analysis of this phenomenon. In this work, the Regional Earth System Model (RegCM-ES) (Sitz et al, 2016) is used to simulate the South Asia climate. Several model settings are experimented to assess the sensitivity of the monsoon system like for example two different cumulous convection schemes (Tidtke, 1989 and Emanuel, 1991), two different lateral boundary conditions in the regional ocean model (NOAA/Geophysical 5 Fluid Dynamics Laboratory MOM run, Danabasoglu et al 2014; and ORAP reanalysis, Zuo et Al 2015) and two different hydrological models (Cetemps Hydrological Model, Coppola et al, 2007; Max-Planck's HD model, Hagemann and Dümenil, 1998) for a total of 5 coupled and uncoupled simulations all covering the period from 1979 to 2008. One of the main results of the analysis of the mini RegCM-ES ensemble shows that a better representation of the IAV and of the ENSO-monsoon relationship is present in the coupled simulations. Moreover a source of monsoon predictability has been found in the one-year-lag correlation between JJAS India precipitation and ENSO, this is only evident in the coupled system where the one-year-lagged correlation coefficient between the Niño-3.4 and the ISM rainfall is much higher respect to the uncoupled one and similar to values observed between the observations and the Niño-3.4. 
On the subseasonal time scale, RegCM-ES performs better than the standalone version of RegCM4 (Giorgi et al., 2012) in reproducing the "active" and "break" spells that characterize the ISV of the monsoon system. This is probably due to the air-sea interactions over the Bay of Bengal (BoB), which are mostly driven by large heat flux variations at the surface. To further assess this hypothesis, the northward migration of the boreal summer intraseasonal oscillation was investigated. The coupled system shows a phase lag of about 10 days between SST and convection, in agreement with observations, whereas no phase lag is observed in the standalone version of RegCM4.
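The one-year-lag correlation diagnostic used above amounts to a plain Pearson correlation between one annual series and the other shifted by one year. A minimal sketch on synthetic series (not RegCM-ES output; the numbers are invented):

```python
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def lagged_corr(nino34, rainfall, lag=1):
    # Correlate Niño-3.4 in year t with JJAS rainfall in year t + lag.
    return pearson(nino34[:-lag], rainfall[lag:])

# Synthetic annual anomalies: rainfall here is built to depend on the
# previous year's ENSO state, so the one-year-lag correlation is strong.
nino = [0.5, -1.2, 0.3, 1.8, -0.7, 0.1, -1.5, 2.1, -0.2, 0.9]
rain = [0.0] + [-0.8 * x for x in nino[:-1]]
```

With these constructed series, `lagged_corr(nino, rain, 1)` is exactly -1; real series would of course give a weaker value.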
[The social vulnerability index regarding Medellín's disabled population].
Cardona-Arango, Doris; Agudelo-Martínez, Alejandra; Restrepo-Molina, Lucas; Segura-Cardona, Angela M
2014-01-01
A social vulnerability index (SVI) was constructed for Medellín's disabled population in 2008 to identify areas that reduce this population's opportunities to use its tangible and intangible assets, thereby impairing quality of life. This descriptive cross-sectional study drew on a secondary source: records of people having some kind of limitation in the 2008 Quality of Life Survey. Physical, human and social variables were grouped when constructing the SVI; principal component analysis was used to determine degree of vulnerability, defined by the number of negative factors identified (high category = 4 or 5, medium = 2 or 3, low = 1 or none). This classification led to identifying non-causal relationships with demographic variables through Mann-Whitney, Chi-square and Kruskal-Wallis tests (5.0% statistical significance level); multinomial logistic regression was used to calculate adjusted epidemiological measures, such as odds ratios and confidence intervals. Medium vulnerability predominated among disabled people living in Medellín (60.3%), followed by low (28.7%) and high vulnerability (11.0%). The proposed SVI classified the city's communes as having high, medium or low vulnerability, supported by the use of statistical and spatial location techniques.
Polysaccharides from astragali radix restore chemical-induced blood vessel loss in zebrafish
2012-01-01
Background Astragali Radix has been used widely for the treatment of cardiovascular and cerebrovascular diseases, and to enhance endurance and stamina, in traditional Chinese medicine (TCM) for over 2000 years. The polysaccharide constituents of Astragali Radix (ARPs) are considered one of the major constituents contributing to the multiple pharmacological effects of this medicinal plant. The purpose of this study was to evaluate the vascular regenerative activities of ARPs in a chemically induced blood vessel loss model in zebrafish. Methods Blood vessel loss was induced in both Tg(fli-1a:EGFP)y1 and Tg(fli-1a:nEGFP)y7 embryos by administration of 300 nM VEGFR tyrosine kinase inhibitor II (VRI) for 3 h at 24 hpf (hours post-fertilization). The blood-vessel-damaged zebrafish were then treated with ARPs for 21 h and 45 h after VRI withdrawal. Morphological changes in intersegmental vessels (ISVs) of zebrafish larvae were observed under the fluorescence microscope and measured quantitatively. The rescue effect of ARPs in the zebrafish models was validated by measuring the relative mRNA expression of Kdrl, Kdr and Flt-1 using real-time PCR. Results Two polysaccharide fractions, P4 (molecular weight > 50,000 Da, diameter < 0.1 μm) and P5 (molecular diameter > 0.1 μm), isolated from Astragali Radix by ultrafiltration, produced a significant and dose-dependent recovery of VRI-induced blood vessel loss in zebrafish. Furthermore, the down-regulation of Flk-1 and Flt-1 mRNA expression induced by VRI was reversed by treatment with P4. Conclusion The present study demonstrates that P4 isolated from Astragali Radix reduces VRI-induced blood vessel loss in zebrafish. These findings support the hypothesis that polysaccharides are among the active constituents of Astragali Radix, contributing to its beneficial effect in treating diseases associated with a deficiency in angiogenesis. PMID:22357377
NASA Astrophysics Data System (ADS)
Gasmi, Sonia; Bernard, Ismaël; Pouvreau, Stéphane; Maurer, Danièle; Schaal, Gauthier; Ganthy, Florian; Cominassi, Louise; Allain, Gwenhael; Sautour, Benoit; David, Valérie
2017-01-01
In macrotidal coastal ecosystems, spatial heterogeneity of water column properties is induced by both oceanic and continental influences. Hydrodynamic processes generate a land-sea gradient of environmental conditions, affecting the biological performance of sedentary organisms. The aim of the present study was to establish an extensive spatial assessment of the reproductive investment of the wild Pacific oyster Crassostrea gigas in Arcachon Bay, by relating the Lawrence and Scott condition index (LSCI) to two tidal processes: the immersion level (IL) and the local oceanic flushing time (LoFt). The LSCI of C. gigas was assessed, just before gamete release, at 68 sampling stations in Arcachon Bay. Oyster performance was overall low and spatially variable, with significant differences in LSCI between the outer and inner bay. Oyster reefs located toward the mouth of the bay exhibited high LSCI (between 9 and 11), while reefs in the inner bay, especially the south-eastern part around the Eyre River, had low LSCI (below 6). Linear modelling highlighted significant effects of both tidal processes, IL and LoFt, on the LSCI gradient: together they explained 33% of the spatial variability in LSCI (IL: 3%; LoFt: 17%; IL and LoFt jointly: 13%), while 6% was attributed to intra-station variation (ISv). Thus, high IL and rapid LoFt favor better development of somatic-gonadal volume, probably because of longer feeding time and greater supply of food from the ocean by tidal flows. Disentangling the effects of IL and LoFt on LSCI also allowed description of the spatial pattern in the 61% of variability not explained by the two tidal factors: a residual gradient directed southeast-northwest indicates that other factors, independent of IL and LoFt, seem to hamper the reproductive performance of inner-bay oysters.
Consequently, investigating the ecological functioning (Eyre River influence), trophic potential and anthropogenic pressures of this zone seems crucial to understanding the reproductive pattern of C. gigas in Arcachon Bay.
Neonatal screening of cystic fibrosis: diagnostic problems with CFTR mild mutations.
Roussey, M; Le Bihannic, A; Scotet, V; Audrezet, M P; Blayau, M; Dagorne, M; David, V; Deneuville, E; Giniès, J L; Laurans, M; Moisan-Petit, V; Rault, G; Vigneron, P; Férec, C
2007-08-01
Newborn screening (NBS) for cystic fibrosis (CF) was implemented throughout the whole of France in 2002, but had been established earlier in three western French regions. It can reveal atypical CF with one or two known mild CFTR mutations and an uncertain clinical course; the sweat test can be normal or borderline. In Brittany, from 1989 to 2004, 196 CF cases were diagnosed (1/2885 births). The incidence of atypical CF diagnosed by NBS is 9.7% (19 of 196). The outcome of 17 of these patients (2 lost to follow-up) has been studied, together with 9 other atypical CF cases diagnosed by NBS in two other regions. The follow-up period extends from 0.25 to 19.8 years (NBS was implemented in Normandy in 1980), with a mean age of 4.6 years. The most frequent mild mutation is R117H IVS8-7T (50%). At the time of the last visit, nutritional status was normal, and all these CF patients were pancreatic sufficient. Only one patient exhibits persistent respiratory infections, whereas 7 others have them intermittently; two of them had intermittent Pseudomonas aeruginosa colonization at 2.8 and 6.5 years. The mean Shwachman score is 96.7 and the mean Brasfield score 22.8. Eight children have had lung function tests (mean follow-up of 10 years): mean FVC was 99% of predicted and mean FEV1 101%, but one patient has an FEV1 of 48%. Predicting the phenotype of these atypical CF patients remains difficult, thus complicating genetic counselling. Regular clinical evaluation is necessary, if possible by a CF unit, because CF symptoms may appear later.
Improved regeneration and transformation protocols for three strawberry cultivars
Zakaria, Hossam; Hussein, Gihan M; Abdel-Hadi, Abdel-Hadi A; Abdallah, Naglaa A
2014-01-01
Strawberry (Fragaria × ananassa) is an economically important soft fruit crop whose polyploid genome makes the breeding of new cultivars difficult. A simple and efficient regeneration and transformation method is required for cultivar improvement in strawberry. In the present study, adventitious shoot regeneration via direct organogenesis was investigated in three strawberry cultivars, Festival, Sweet Charly and Florida, using in vitro juvenile leaves as explants. Explants were collected after sub-culturing on a propagation medium composed of MS supplemented with 0.5 mg/l BA, 0.1 mg/l GA3 and 0.1 mg/l IBA. To select suitable conditions for organogenesis, explants of the three cultivars were cultured on MS medium supplemented with different concentrations of TDZ (1, 2, 3 and 4 mg/l), then incubated at 22 ± 2 °C. Medium containing 2 mg/l TDZ gave the best regeneration efficiency for all three cultivars (72% for Festival, and 73% for Sweet Charly and Florida). After 4 weeks, the produced shoots were cultured on MS medium with different concentrations of BA and Kin to enhance shoot elongation. The medium containing 1.5 mg/l BA and 0.5 mg/l Kin gave the highest elongation efficiency for Festival and Sweet Charly (88% and 94%, respectively), whereas medium containing 1.5 mg/l BA and 0.1 mg/l Kin gave the highest elongation efficiency (90%) for Florida. Elongated shoots were successfully rooted on MS medium containing 1.5 mg/l NAA. Furthermore, transformation of two cultivars, Festival and Sweet Charly, was established via Agrobacterium strain LBA4404 carrying the plasmid pISV2678 with gus-intron and bar genes. Three days post co-cultivation, GUS activity was screened using the histochemical assay; 16% and 18% of the tested plant materials stained blue for Festival and Sweet Charly, respectively.
Out of 120 explants, only 13 shoots per cultivar developed on bialaphos-containing medium, representing 10.8% bialaphos-resistant strawberry shoots. The presence of both the bar and uidA genes was confirmed by PCR and Northern blot analysis, giving a transformation efficiency of 5%. PMID:24322545
Microstructure-sensitive plasticity and fatigue modeling of extruded 6061 aluminum alloys
NASA Astrophysics Data System (ADS)
McCullough, Robert Ross
In this study, the development of fatigue failure and stress anisotropy in lightweight ductile metal alloys, specifically Al-Mg-Si aluminum alloys, was investigated. The experiments were carried out on an extruded 6061 aluminum alloy. Reverse loading experiments were performed up to a prestrain of 5% in both tension-followed-by-compression and compression-followed-by-tension. The development of isotropic and kinematic hardening and subsequent anisotropy was indicated by observation of the Bauschinger effect. Experimental results show that the 6061 aluminum alloy exhibited a slight increase in kinematic hardening with applied prestrain; however, the ratio of kinematic-to-isotropic hardening remained near unity. An internal state variable (ISV) plasticity and damage model was used to capture the evolution of the anisotropy for the as-received T6 and partially annealed conditions. Following the stress anisotropy experiments, the same extruded 6061 aluminum alloy was tested under fully reversed, strain-controlled low cycle fatigue at strain amplitudes of up to 2.5% and two heat treatment conditions. Observations were made of the development of striation fields up to the point of nucleation at cracked and clustered precipitates and at free surfaces through localized precipitate slip band development. A finite-element-enabled micromechanics study of fatigue damage development of the local strain field in the presence of hard phases was conducted. Both the FEA and experimental data sets were used to implement a multi-stage fatigue model to predict the microstructure response, including the contributions of fatigue nucleation and propagation to the total fatigue life in AA6061. Good correlation between experimental and predicted numbers of cycles to final failure was observed. The AA6061 material maintained relatively consistent low cycle fatigue performance despite the two different heat treatments.
Methodology Series Module 5: Sampling Strategies.
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers or flipping a coin); and 2) non-probability sampling, based on the researcher's choice among populations that are accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling and quota sampling. Random sampling methods (such as simple random sampling or stratified random sampling) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results; in such a scenario, purposive sampling may be appropriate.
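The two probability designs named above, simple random and stratified random sampling, can be sketched as follows. The population, the "clinic" stratum variable and the proportional allocation rule are invented for illustration:

```python
import random

# Hypothetical sampling frame: 1000 participants across two clinics.
population = [{"id": i, "clinic": "A" if i < 600 else "B"} for i in range(1000)]

def simple_random_sample(pop, n, rng):
    """Every subset of size n is equally likely."""
    return rng.sample(pop, n)

def stratified_random_sample(pop, n, key, rng):
    """Allocate the sample proportionally to stratum size,
    then draw a simple random sample within each stratum."""
    strata = {}
    for p in pop:
        strata.setdefault(key(p), []).append(p)
    out = []
    for members in strata.values():
        k = round(n * len(members) / len(pop))
        out.extend(rng.sample(members, k))
    return out

rng = random.Random(42)
srs = simple_random_sample(population, 100, rng)
strat = stratified_random_sample(population, 100, lambda p: p["clinic"], rng)
```

Under proportional allocation, the stratified draw here always contains exactly 60 clinic-A and 40 clinic-B participants, whereas the simple random sample only matches those proportions on average.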
Toward cost-efficient sampling methods
NASA Astrophysics Data System (ADS)
Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie
2015-09-01
The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small number of high-degree vertices can carry most of the structural information of a complex network. The two proposed methods are efficient at sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new method builds on the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new methods, we compare them with existing sampling methods on three commonly used simulated networks, namely scale-free, random and small-world networks, as well as on two real networks. The experimental results illustrate that the two proposed methods perform much better than existing sampling methods at recovering true network structure characteristics as reflected by the clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.
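As a rough illustration of the degree-biased idea behind these methods (not the paper's exact algorithms), a snowball-style sampler that always expands the highest-degree frontier node first might look like this; the toy network is invented:

```python
def degree_biased_snowball(adj, seed, budget):
    """adj: dict mapping node -> set of neighbours.
    Grows a sample from `seed`, preferring high-degree frontier nodes,
    until `budget` nodes are collected or the frontier is exhausted."""
    sampled = {seed}
    frontier = set(adj[seed])
    while frontier and len(sampled) < budget:
        # expand the unsampled frontier node with the largest degree first
        nxt = max(frontier, key=lambda v: len(adj[v]))
        frontier.discard(nxt)
        sampled.add(nxt)
        frontier |= adj[nxt] - sampled
    return sampled

# Toy hub-and-spoke network: node 0 is the high-degree hub.
adj = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1}, 3: {0}, 4: {0}}
sample = degree_biased_snowball(adj, seed=3, budget=3)
```

Starting from a peripheral node, the sampler reaches the hub on its first expansion, which is exactly the behaviour that makes degree-biased sampling informative at low sampling rates.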
40 CFR Appendix I to Part 261 - Representative Sampling Methods
Code of Federal Regulations, 2010 CFR
2010-07-01
Appendix I to Part 261—Representative Sampling Methods: The methods and equipment used for sampling waste materials will vary with the form and consistency of the waste materials to be sampled. Samples collected using the sampling...
A new sampling method for fibre length measurement
NASA Astrophysics Data System (ADS)
Wu, Hongyan; Li, Xianghong; Zhang, Junying
2018-06-01
This paper presents a new sampling method for fibre length measurement. The new method meets the three features of an effective sampling method, and it produces a beard with two symmetrical ends that can be scanned from the holding line to obtain two full fibrograms for each sample. The methodology is introduced, and experiments were performed to investigate the effectiveness of the new method. The results show that the new sampling method is effective.
Log sampling methods and software for stand and landscape analyses.
Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough
2008-01-01
We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...
Archfield, Stacey A.; LeBlanc, Denis R.
2005-01-01
To evaluate diffusion sampling as an alternative method to monitor volatile organic compound (VOC) concentrations in ground water, concentrations in samples collected by traditional pumped-sampling methods were compared to concentrations in samples collected by diffusion-sampling methods for 89 monitoring wells at or near the Massachusetts Military Reservation, Cape Cod. Samples were analyzed for 36 VOCs. There was no substantial difference between the utility of diffusion and pumped samples to detect the presence or absence of a VOC. In wells where VOCs were detected, diffusion-sample concentrations of tetrachloroethene (PCE) and trichloroethene (TCE) were significantly lower than pumped-sample concentrations. Because PCE and TCE concentrations detected in the wells dominated the calculation of many of the total VOC concentrations, when VOC concentrations were summed and compared by sampling method, visual inspection also showed a downward concentration bias in the diffusion-sample concentration. The degree to which pumped- and diffusion-sample concentrations agreed was not a result of variability inherent within the sampling methods or the diffusion process itself. A comparison of the degree of agreement in the results from the two methods to 13 quantifiable characteristics external to the sampling methods offered only well-screen length as being related to the degree of agreement between the methods; however, there is also evidence to indicate that the flushing rate of water through the well screen affected the agreement between the sampling methods. Despite poor agreement between the concentrations obtained by the two methods at some wells, the degree to which the concentrations agree at a given well is repeatable. A one-time, well-by-well comparison between diffusion- and pumped-sampling methods could determine which wells are good candidates for the use of diffusion samplers.
For wells with good method agreement, the diffusion-sampling method is a time-saving and cost-effective alternative to pumped-sampling methods in a long-term monitoring program, such as at the Massachusetts Military Reservation.
Wroble, Julie; Frederick, Timothy; Frame, Alicia; Vallero, Daniel
2017-01-01
Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. Environmental Protection Agency (EPA)'s Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than other methods. Samples were collected using both traditional discrete ("grab") samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than discrete samples.
Modified electrokinetic sample injection method in chromatography and electrophoresis analysis
Davidson, J. Courtney; Balch, Joseph W.
2001-01-01
A sample injection method for horizontally configured multiple chromatography or electrophoresis units, each containing a number of separation/analysis channels, that enables efficient introduction of analyte samples. This loading method, when taken in conjunction with horizontal microchannels, allows much reduced sample volumes and provides a means of sample stacking to greatly increase the concentration of the sample. This reduction in the amount of sample can lead to great cost savings in sample preparation, particularly in massively parallel applications such as DNA sequencing. The essence of this method is in the preparation of the input of the separation channel, the physical sample introduction, and the subsequent removal of excess material. By this method, sample volumes of 100 nanoliters to 2 microliters have been used successfully, compared to the typical 5 microliters of sample required by the prior separation/analysis method.
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency and economy of the snail survey, a 50 m × 50 m experimental field was selected in Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were then applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for the simple random, systematic and stratified random sampling methods were 300, 300 and 225, respectively; the relative sampling errors of the three methods were all less than 15%, and the absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is thus an efficient approach of lower cost and higher precision for the snail survey.
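The comparison above can be mimicked on a toy grid: give snail density an altitude-like gradient across rows, then compare mean-density estimates from simple random sampling and from stratification by altitude band. All numbers, band boundaries and densities below are synthetic, not the Poyang Lake data:

```python
import random
import statistics

rng = random.Random(1)

# Toy 50 x 50 field: density rises with "altitude" band (rows 0-16, 17-33, 34-49).
rows, cols = 50, 50
field = [[rng.gauss(row // 17, 1.0) for _ in range(cols)] for row in range(rows)]
cells = [field[r][c] for r in range(rows) for c in range(cols)]
true_mean = statistics.fmean(cells)

def srs_mean(n):
    """Simple random sample of n cells; unweighted mean."""
    return statistics.fmean(rng.sample(cells, n))

def stratified_mean(n_per_stratum):
    """Sample each altitude band separately; weight by stratum share."""
    bands = [[field[r][c] for r in rs for c in range(cols)]
             for rs in (range(0, 17), range(17, 34), range(34, 50))]
    return sum(statistics.fmean(rng.sample(b, n_per_stratum)) * len(b) / len(cells)
               for b in bands)

est_srs = srs_mean(225)        # 225 cells, echoing the stratified minimum above
est_strat = stratified_mean(75)  # 3 strata x 75 cells
```

Because the strata absorb the between-band variation, the stratified estimator has a smaller sampling error at the same total sample size, which is the effect the study reports.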
Methods for purifying carbon materials
Dailly, Anne [Pasadena, CA; Ahn, Channing [Pasadena, CA; Yazami, Rachid [Los Angeles, CA; Fultz, Brent T [Pasadena, CA
2009-05-26
Methods of purifying samples are provided that are capable of removing carbonaceous and noncarbonaceous impurities from a sample containing a carbon material having a selected structure. Purification methods are provided for removing residual metal catalyst particles enclosed in multilayer carbonaceous impurities in samples generated by catalytic synthesis methods. Purification methods are provided wherein carbonaceous impurities in a sample are at least partially exfoliated, thereby facilitating subsequent removal of carbonaceous and noncarbonaceous impurities from the sample. Methods of purifying carbon nanotube-containing samples are provided wherein an intercalant is added to the sample and subsequently reacted with an exfoliation initiator to achieve exfoliation of carbonaceous impurities.
Sampling techniques for thrips (Thysanoptera: Thripidae) in preflowering tomato.
Joost, P Houston; Riley, David G
2004-08-01
Sampling techniques for thrips (Thysanoptera: Thripidae) were compared in preflowering tomato plants at the Coastal Plain Experiment Station in Tifton, GA, in 2000 and 2003, to determine the most effective method for estimating thrips abundance on tomato foliage early in the growing season. Three relative sampling techniques, a standard insect aspirator, a 946-ml beat cup, and an insect vacuum device, were compared with an absolute method for accuracy, and with one another for precision and sampling efficiency. Thrips counts of all relative sampling methods were highly correlated (R > 0.92) with the absolute method. The aspirator method was the most accurate compared with the absolute sample according to regression analysis in 2000. In 2003, all sampling methods were considered accurate according to Dunnett's test, but thrips numbers were lower and sample variation was greater than in 2000. In 2000, the beat cup method had the lowest relative variation (RV), i.e., the best precision, at 1 and 8 d after transplant (DAT); only the beat cup method had RV values <25 for all sampling dates. In 2003, the beat cup method had the lowest RV value at 15 and 21 DAT. The beat cup method was also the most efficient method on all sample dates in both years. Frankliniella fusca (Pergande) was the most abundant thrips species on the foliage of preflowering tomato in both years of the study at this location. Overall, the best thrips sampling technique tested was the beat cup method in terms of precision and sampling efficiency.
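Relative variation, the precision measure used above, is conventionally computed as RV = 100 × SEM / mean. A small sketch with invented counts; the "<25 is adequate" threshold follows the usage in the abstract:

```python
import statistics

def relative_variation(counts):
    """RV = 100 * standard error of the mean / mean.

    Lower RV means better precision; values under about 25 are
    conventionally treated as adequate for pest-sampling programs
    (threshold assumed here for illustration).
    """
    m = statistics.fmean(counts)
    sem = statistics.stdev(counts) / len(counts) ** 0.5
    return 100.0 * sem / m

# Invented thrips counts from two hypothetical techniques:
beat_cup = [12, 14, 11, 13, 12, 14]   # consistent counts -> low RV
aspirator = [3, 9, 1, 12, 2, 8]       # erratic counts -> high RV
```

Here `relative_variation(beat_cup)` comes out under 25 while `relative_variation(aspirator)` does not, mirroring how the beat cup method won on precision.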
Chao, Li-Wei; Szrek, Helena; Peltzer, Karl; Ramlagan, Shandir; Fleming, Peter; Leite, Rui; Magerman, Jesswill; Ngwenya, Godfrey B.; Pereira, Nuno Sousa; Behrman, Jere
2011-01-01
Finding an efficient method for sampling micro- and small-enterprises (MSEs) for research and statistical reporting purposes is a challenge in developing countries, where registries of MSEs are often nonexistent or outdated. This lack of a sampling frame creates an obstacle in finding a representative sample of MSEs. This study uses computer simulations to draw samples from a census of businesses and non-businesses in the Tshwane Municipality of South Africa, using three different sampling methods: the traditional probability sampling method, the compact segment sampling method, and the World Health Organization’s Expanded Programme on Immunization (EPI) sampling method. Three mechanisms by which the methods could differ are tested, the proximity selection of respondents, the at-home selection of respondents, and the use of inaccurate probability weights. The results highlight the importance of revisits and accurate probability weights, but the lesser effect of proximity selection on the samples’ statistical properties. PMID:22582004
Evaluating fungal contamination indoors is complicated because of the many different sampling methods utilized. In this study, fungal contamination was evaluated using five sampling methods and four matrices for results. The five sampling methods were a 48 hour indoor air sample ...
Evaluation of seven aquatic sampling methods for amphibians and other aquatic fauna
Gunzburger, M.S.
2007-01-01
To design effective and efficient research and monitoring programs, researchers must have a thorough understanding of the capabilities and limitations of their sampling methods. Few direct comparative studies exist for aquatic sampling methods for amphibians. The objective of this study was to simultaneously employ seven aquatic sampling methods in 10 wetlands to compare the amphibian species richness and number of individuals detected with each method. Four sampling methods allowed counts of individuals (metal dipnet, D-frame dipnet, box trap, crayfish trap), whereas the other three allowed detection of species (visual encounter, aural, and froglogger). Amphibian species richness was greatest with froglogger, box trap, and aural samples. For anuran species, the sampling methods by which each life stage was detected were related to the relative lengths of the larval and breeding periods and to tadpole size. Detection probability of amphibians varied across sampling methods. Box trap sampling gave the most precise amphibian count, but the precision of all four count-based methods was low (coefficient of variation > 145 for all methods). The efficacy of the four count sampling methods at sampling fish and aquatic invertebrates was also analyzed, because these predatory taxa are known to be important predictors of amphibian habitat distribution. Species richness and counts were similar for fish across the four methods, whereas invertebrate species richness and counts were greatest in box traps. An effective wetland amphibian monitoring program in the southeastern United States should include multiple sampling methods to obtain the most accurate assessment of species community composition at each site. The combined use of frogloggers, crayfish traps, and dipnets may be the most efficient and effective amphibian monitoring protocol. © 2007 Brill Academic Publishers.
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is also essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides, via simulation, the sample size required to achieve 80% power under various sizes of the mediation effect, within-subject correlations, and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method, and the bootstrap method. Among the three methods of testing mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by their relatively smaller required sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are also provided for convenient use. Extensive simulation studies showed that the distribution of the product method and the bootstrap method have superior performance to Sobel's method; the distribution of the product method is recommended in practice because it requires less computation time than bootstrapping. An R package has been developed for sample size determination by the product method in longitudinal mediation study designs.
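The three mediation tests the abstract compares can be grounded with a small sketch of the simplest one. Sobel's method tests the mediated effect a·b (path X→M times path M→Y) against its delta-method standard error, and power at a given design is the fraction of simulated replicates whose |z| exceeds 1.96. The coefficients, standard errors, and replicate counts below are hypothetical illustration values, not figures from the paper:

```python
import math
import random

def sobel_z(a, b, se_a, se_b):
    """Sobel z-statistic for the mediated effect a*b, using the
    first-order delta-method standard error sqrt(b^2*se_a^2 + a^2*se_b^2)."""
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Hypothetical path estimates: X -> M coefficient a, M -> Y coefficient b.
print(round(sobel_z(a=0.30, b=0.25, se_a=0.08, se_b=0.07), 2))  # -> 2.59

# Crude power check: resample the path estimates around their assumed true
# values and count how often the Sobel test rejects at the 5% level.
random.seed(1)
n_sim = 2000
rejections = sum(
    abs(sobel_z(random.gauss(0.30, 0.08), random.gauss(0.25, 0.07), 0.08, 0.07)) > 1.96
    for _ in range(n_sim)
)
print(f"approximate power: {rejections / n_sim:.2f}")
```

In a simulation study of this kind, the required sample size is the smallest n at which such a rejection rate reaches 80%; the bootstrap and distribution-of-the-product tests replace the normal approximation above with resampling or with the exact distribution of a product of two normals, which is why they reject more often at the same n.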
Monitoring benthic algal communities: A comparison of targeted and coefficient sampling methods
Edwards, Matthew S.; Tinker, M. Tim
2009-01-01
Choosing an appropriate sample unit is a fundamental decision in the design of ecological studies. While numerous methods have been developed to estimate organism abundance, they differ in cost, accuracy, and precision. Using both field data and computer simulation modeling, we evaluated the costs and benefits associated with two methods commonly used to sample benthic organisms in temperate kelp forests. One of these methods, the Targeted Sampling method, relies on different sample units, each "targeted" for a specific species or group of species, while the other method relies on coefficients that represent ranges of bottom cover obtained from visual estimates within standardized sample units. Both the field data and the computer simulations suggest that the two methods yield remarkably similar estimates of organism abundance and among-site variability, although the Coefficient method slightly underestimates variability among sample units when abundances are low. In contrast, the two methods differ considerably in the effort needed to sample these communities; Targeted Sampling requires more time and twice the personnel to complete. We conclude that the Coefficient Sampling method may be better for environmental monitoring programs where changes in mean abundance are of central concern and resources are limiting, but that the Targeted Sampling method may be better for ecological studies where quantitative relationships among species and small-scale variability in abundance are of central concern.
Perpendicular distance sampling: an alternative method for sampling downed coarse woody debris
Michael S. Williams; Jeffrey H. Gove
2003-01-01
Coarse woody debris (CWD) plays an important role in many forest ecosystem processes. In recent years, a number of new methods have been proposed to sample CWD. These methods select individual logs into the sample using some form of unequal probability sampling. One concern with most of these methods is the difficulty in estimating the volume of each log. A new method...
Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W
2015-06-01
Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods (quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards) to determine the most practical methods for sampling the three most prominent species: Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time-by-sampling-method interaction, indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggest that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods: sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications when efficiency is not paramount. © The Authors 2015.
Published by Oxford University Press on behalf of the Entomological Society of America. All rights reserved.
Comparability of river suspended-sediment sampling and laboratory analysis methods
Groten, Joel T.; Johnson, Gregory D.
2018-03-06
Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both the field sampling and the laboratory analysis methods contributed to this low bias, with the laboratory analysis methods contributing slightly more than the field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods: grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream, whereas differences among methods are smaller for the fine fraction of SSC. Even though differences are present, the strong correlations between SSC and TSS concentrations provide the opportunity to develop site-specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.
Some connections between importance sampling and enhanced sampling methods in molecular dynamics.
Lie, H C; Quer, J
2017-11-21
In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
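The connection the authors draw can be made concrete with a textbook importance-sampling sketch (generic Monte Carlo, not the Hartmann-Schütte or Valsson-Parrinello machinery): to estimate a rare-event probability under a target density p, sample from a biased proposal q that visits the rare region more often and reweight each draw by p/q, just as enhanced-sampling methods reweight observables collected under a bias potential. All distributions and parameters below are illustrative choices:

```python
import math
import random

random.seed(0)

def importance_estimate(f, n=100_000):
    """Estimate E_p[f(X)] for the target p = N(0,1) by sampling from a
    shifted proposal q = N(2,1) and reweighting each draw by w = p(x)/q(x).

    Shifting the proposal toward the rare-event region (here x > 3) mirrors
    how an enhanced-sampling bias tilts dynamics toward rare configurations
    before observables are reweighted back to the unbiased ensemble."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(2.0, 1.0)                 # draw from biased proposal q
        log_w = -0.5 * x**2 + 0.5 * (x - 2.0)**2   # log p(x) - log q(x); normalizers cancel
        total += f(x) * math.exp(log_w)
    return total / n

# Probability of the rare event X > 3 under N(0,1).
p_hat = importance_estimate(lambda x: 1.0 if x > 3.0 else 0.0)
print(p_hat)
```

The exact value is 1 − Φ(3) ≈ 1.35 × 10⁻³; naive sampling from N(0,1) itself would see the event only about once per 740 draws, so the shifted proposal achieves the same accuracy with far fewer samples, which is precisely the variance-reduction argument the Monte Carlo literature offers to enhanced-sampling design.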
Rapid detection of Salmonella spp. in food by use of the ISO-GRID hydrophobic grid membrane filter.
Entis, P; Brodsky, M H; Sharpe, A N; Jarvis, G A
1982-01-01
A rapid hydrophobic grid-membrane filter (HGMF) method was developed and compared with the Health Protection Branch cultural method for the detection of Salmonella spp. in 798 spiked samples and 265 naturally contaminated samples of food. With the HGMF method, Salmonella spp. were isolated from 618 of the spiked samples and 190 of the naturally contaminated samples. The conventional method recovered Salmonella spp. from 622 spiked samples and 204 unspiked samples. The isolation rates from Salmonella-positive samples for the two methods were not significantly different (94.6% overall for the HGMF method and 96.7% for the conventional approach), but the HGMF results were available in only 2 to 3 days after sample receipt compared with 3 to 4 days by the conventional method. PMID:7059168
Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei
2015-01-01
A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs of different shape and size by four sampling methods. Gray correlation analysis was adopted to make a comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate in a row and column was infinity, relative accuracy was 99.50-99.89%, reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method was easy to operate, and the selected samples were distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility and can be put into use in drug analysis.
Coes, Alissa L.; Paretti, Nicholas V.; Foreman, William T.; Iverson, Jana L.; Alvarez, David A.
2014-01-01
A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler, that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19–23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method.
Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal
2017-11-24
Lipidomic analysis of biological samples in clinical research represents a challenging task for analytical methods, given the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography-mass spectrometry (MS) lipidomic approaches, represented by three analytical methods, in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. The methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. An ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and a DI-MS method are used for this comparison as the most widely used methods for lipidomic analysis, together with an ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomic analyses. Nontargeted analysis of pooled samples is performed using all tested methods, and 610 lipid species within 23 lipid classes are identified. The DI method provides the most comprehensive results due to the identification of some polar lipid classes that are not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides excellent sensitivity for less polar lipid classes and the highest sample throughput, within a 10 min method time. The sample consumption of the DI method is 125 times higher than that of the other methods, while only 40 μL of organic solvent is used for one sample analysis compared to 3.5 mL and 4.9 mL in the case of the UHPLC and UHPSFC methods, respectively. The methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. The results show the applicability of all tested methods for the lipidomic analysis of biological samples, depending on the analysis requirements.
Copyright © 2017 Elsevier B.V. All rights reserved.
Prevalence of Mixed-Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Collins, Kathleen M. T.
2006-01-01
The purpose of this mixed-methods study was to document the prevalence of sampling designs utilised in mixed-methods research and to examine the interpretive consistency between interpretations made in mixed-methods studies and the sampling design used. Classification of studies was based on a two-dimensional mixed-methods sampling model. This…
Che, W W; Frey, H Christopher; Lau, Alexis K H
2014-12-01
Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m³ for simulated children using the stratified population sampling method, and 12.2 μg/m³ using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m³ due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In case studies evaluated here, the MCC method led to 10% higher estimation of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.
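The two population sampling schemes compared in this abstract can be sketched generically (the tracts, sizes, and ages below are invented, not the Wake County inputs): stratified-random sampling fixes each census tract's share of the simulated sample to its population share, while random-random sampling leaves both tract and age to chance. Ages remain random within tracts under both schemes, which is consistent with the study's finding of only minor differences between the two:

```python
import random
from collections import Counter

random.seed(42)

# Hypothetical population: (census_tract, age) records; tract sizes differ.
population = [(tract, random.randint(0, 18))
              for tract, size in [("A", 500), ("B", 300), ("C", 200)]
              for _ in range(size)]

def random_random(pop, n):
    """Random-random scheme: a simple random sample of n simulated individuals."""
    return random.sample(pop, n)

def stratified_random(pop, n):
    """Stratified-random scheme: allocate the sample across tracts in
    proportion to tract size, then sample at random within each tract
    (ages stay random, the residual source of between-method differences)."""
    by_tract = {}
    for rec in pop:
        by_tract.setdefault(rec[0], []).append(rec)
    out = []
    for tract, members in by_tract.items():
        k = round(n * len(members) / len(pop))
        out.extend(random.sample(members, k))
    return out

sample = stratified_random(population, 100)
print(Counter(t for t, _ in sample))  # tract counts exactly track population shares
```

With tract shares of 50/30/20%, the stratified sample always contains 50, 30, and 20 individuals per tract, whereas repeated random-random draws fluctuate around those counts; a diary sampling method would then attach a daily activity pattern to each sampled individual.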
40 CFR 80.8 - Sampling methods for gasoline, diesel fuel, fuel additives, and renewable fuels.
Code of Federal Regulations, 2014 CFR
2014-07-01
Title 40, Protection of Environment (2014-07-01 edition), General Provisions, § 80.8 Sampling methods for gasoline, diesel fuel, fuel additives, and renewable fuels. The sampling methods specified in this section shall be used to collect samples of gasoline, diesel fuel...
Evaluation of respondent-driven sampling.
McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G
2012-01-01
Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available on a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. 
Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling method, and caution is required when interpreting findings based on the sampling method.
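One widely used RDS inference method of the kind evaluated in this study is the Volz-Heckathorn (RDS-II) estimator, which downweights each recruit in proportion to their reported network degree to correct for the higher inclusion probability of well-connected individuals. The recruits below are toy values for illustration, not data from the Ugandan cohort:

```python
def rds_ii_proportion(samples):
    """Volz-Heckathorn (RDS-II) estimate of a population proportion:
    weight each recruit by the inverse of their reported network degree.

    `samples` is a list of (has_trait: bool, degree: int) pairs."""
    inv = [1.0 / d for _, d in samples]
    num = sum(w for (trait, _), w in zip(samples, inv) if trait)
    return num / sum(inv)

# Toy data (hypothetical): high-degree members over-represent the trait,
# so the degree-weighted estimate falls below the raw sample proportion.
recruits = [(True, 10), (True, 10), (True, 5), (False, 2), (False, 2)]
raw = sum(t for t, _ in recruits) / len(recruits)   # 3/5 = 0.6
adj = rds_ii_proportion(recruits)                   # below raw
print(raw, round(adj, 3))
```

This is exactly the kind of adjustment the authors found failing to reduce bias in practice: when reported degrees are noisy or recruitment is not the assumed random walk on the network, the reweighting can leave the estimate no closer to the true proportion than the unadjusted sample.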
NASA Technical Reports Server (NTRS)
Kim, Hyun Jung; Choi, Sang H.; Bae, Hyung-Bin; Lee, Tae Woo
2012-01-01
The National Aeronautics and Space Administration-invented X-ray diffraction (XRD) methods, including the total defect density measurement method and the spatial wafer mapping method, have confirmed super hetero-epitaxial growth of rhombohedral single-crystalline silicon germanium (Si1-xGex) on a c-plane sapphire substrate. However, the XRD method cannot observe surface morphology or roughness because of the method's limited resolution. Therefore the authors used transmission electron microscopy (TEM), with samples prepared in two ways, the focused ion beam (FIB) method and the tripod method, to study the structure between Si1-xGex and the sapphire substrate and within the Si1-xGex itself. Sample preparation for TEM should be as fast as possible so that the sample contains few or no artifacts induced by the preparation. The standard sample preparation method of mechanical polishing often requires a relatively long ion milling time (several hours), which increases the probability of inducing defects in the sample. TEM sampling of Si1-xGex on sapphire is also difficult because of sapphire's high hardness and mechanical instability. The FIB method and the tripod method eliminate both problems when performing cross-section TEM sampling of Si1-xGex on c-plane sapphire, which reveals the surface morphology, the interface between film and substrate, and the crystal structure of the film. This paper explains the FIB sampling method and the tripod sampling method, and why TEM sampling of Si1-xGex on a sapphire substrate is necessary.
[Sampling methods for PM2.5 from stationary sources: a review].
Jiang, Jing-Kun; Deng, Jian-Guo; Li, Zhen; Li, Xing-Hua; Duan, Lei; Hao, Ji-Ming
2014-05-01
The new China national ambient air quality standard was published in 2012 and will be implemented in 2016. To meet the requirements of this new standard, monitoring and controlling PM2.5 emissions from stationary sources are very important. However, so far there is no national standard method for sampling PM2.5 from stationary sources. Different sampling methods for PM2.5 from stationary sources and the relevant international standards were reviewed in this study, including methods for PM2.5 sampling in flue gas and methods for PM2.5 sampling after dilution. The advantages and disadvantages of these sampling methods were discussed. For environmental management, a method for PM2.5 sampling in flue gas, such as the impactor or virtual impactor, was suggested as a standard to determine filterable PM2.5. To evaluate the environmental and health effects of PM2.5 from stationary sources, a standard dilution method for sampling of total PM2.5 should be established.
Comparison of methods for sampling plant bugs on cotton in South Texas (2010)
USDA-ARS?s Scientific Manuscript database
A total of 26 cotton fields were sampled by experienced and inexperienced samplers at 3 growth stages using 5 methods to compare the most efficient and accurate method for sampling plant bugs in cotton. Each of the 5 methods had its own distinct advantages and disadvantages as a sampling method (too...
Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi
2016-02-01
Several estimation methods for 24-h sodium excretion using spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion given different numbers of available spot urine samples. A total of 370 participants from throughout Japan collected multiple 24-h urine and spot urine samples independently. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using the two 24-h sodium excretion samples as reference: the 'simple mean method', which multiplies the sodium-creatinine ratio by predicted 24-h creatinine excretion, and the 'regression method', which employs linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. Mean sodium excretion by the simple mean method with three spot urine samples was closest to that by 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with the number of spot urine samples: 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. This method with three spot urine samples yielded a higher CCC than the regression method (0.40). When only one spot urine sample was available for each study participant, CCC was higher with the regression method (0.36). The simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion; when only one spot urine sample was available, the regression method was preferable.
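The 'simple mean method' described above can be sketched as follows. The creatinine-prediction formula here is a hypothetical placeholder (the abstract does not specify the equation used, and published prediction equations such as Tanaka's differ by sex and population), and all sample values are invented:

```python
def predicted_24h_creatinine_mg(weight_kg, age_yr, sex):
    """Predicted 24-h urinary creatinine excretion (mg/day).

    Placeholder linear formula for illustration only; real applications
    use a published, population-specific prediction equation."""
    base = 15.0 if sex == "M" else 12.0          # hypothetical mg/kg/day
    return weight_kg * base - 2.0 * max(age_yr - 40, 0)

def simple_mean_sodium_mmol(spot_samples, pred_cr_mg):
    """Simple mean method: average the spot-urine sodium-to-creatinine
    ratios across samples, then scale by predicted 24-h creatinine.

    Each spot sample is (sodium mmol/L, creatinine mg/dL)."""
    # convert creatinine mg/dL -> mg/L so the ratio is mmol Na per mg Cr
    ratios = [na / (cr * 10.0) for na, cr in spot_samples]
    return (sum(ratios) / len(ratios)) * pred_cr_mg

# Three hypothetical spot samples for a 60 kg, 50-year-old woman.
est = simple_mean_sodium_mmol([(150, 100), (120, 80), (100, 50)],
                              predicted_24h_creatinine_mg(60, 50, "F"))
print(round(est, 1), "mmol/day")
```

Averaging the ratio over three spot samples smooths the within-day variation in urine concentration, which is consistent with the study's finding that the concordance with true 24-h excretion rises as more spot samples are averaged.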
Gutiérrez-Fonseca, Pablo E; Lorion, Christopher M
2014-04-01
The use of aquatic macroinvertebrates as bio-indicators in water quality studies has increased considerably over the last decade in Costa Rica, and standard biomonitoring methods have now been formulated at the national level. Nevertheless, questions remain about the effectiveness of different methods of sampling freshwater benthic assemblages, and how sampling intensity may influence biomonitoring results. In this study, we compared the results of qualitative sampling using commonly applied methods with a more intensive quantitative approach at 12 sites in small, lowland streams on the southern Caribbean slope of Costa Rica. Qualitative samples were collected following the official protocol using a strainer during a set time period and macroinvertebrates were field-picked. Quantitative sampling involved collecting ten replicate Surber samples and picking out macroinvertebrates in the laboratory with a stereomicroscope. The strainer sampling method consistently yielded fewer individuals and families than quantitative samples. As a result, site scores calculated using the Biological Monitoring Working Party-Costa Rica (BMWP-CR) biotic index often differed greatly depending on the sampling method. Site water quality classifications using the BMWP-CR index differed between the two sampling methods for 11 of the 12 sites in 2005, and for 9 of the 12 sites in 2006. Sampling intensity clearly had a strong influence on BMWP-CR index scores, as well as perceived differences between reference and impacted sites. Achieving reliable and consistent biomonitoring results for lowland Costa Rican streams may demand intensive sampling and requires careful consideration of sampling methods.
Maes, Sharon; Huu, Son Nguyen; Heyndrickx, Marc; Weyenberg, Stephanie van; Steenackers, Hans; Verplaetse, Alex; Vackier, Thijs; Sampers, Imca; Raes, Katleen; Reu, Koen De
2017-12-01
Biofilms are an important source of contamination in food companies, yet the composition of biofilms in practice is still mostly unknown. The chemical and microbiological characterization of surface samples taken after cleaning and disinfection is very important to distinguish free-living bacteria from the attached bacteria in biofilms. In this study, sampling methods that are potentially useful for both chemical and microbiological analyses of surface samples were evaluated. In the manufacturing facilities of eight Belgian food companies, surfaces were sampled after cleaning and disinfection using two sampling methods: the scraper-flocked swab method and the sponge stick method. Microbiological and chemical analyses were performed on these samples to evaluate the suitability of the sampling methods for the quantification of extracellular polymeric substance components and microorganisms originating from biofilms in these facilities. The scraper-flocked swab method was most suitable for chemical analyses of the samples because the material in these swabs did not interfere with determination of the chemical components. For microbiological enumerations, the sponge stick method was slightly but not significantly more effective than the scraper-flocked swab method. In all but one of the facilities, at least 20% of the sampled surfaces had more than 10² CFU/100 cm². Proteins were found in 20% of the chemically analyzed surface samples, and carbohydrates and uronic acids were found in 15% and 8% of the samples, respectively. When chemical and microbiological results were combined, 17% of the sampled surfaces were contaminated with both microorganisms and at least one of the analyzed chemical components; thus, these surfaces were characterized as carrying biofilm. Overall, microbiological contamination in the food industry is highly variable by food sector and even within a facility at various sampling points and sampling times.
Code of Federal Regulations, 2010 CFR
2010-07-01
... will consider a sample obtained using any of the applicable sampling methods specified in appendix I to... appendix I sampling methods are not being formally adopted by the Administrator, a person who desires to employ an alternative sampling method is not required to demonstrate the equivalency of his method under...
A new design of groundwater sampling device and its application.
Tsai, Yih-jin; Kuo, Ming-ching T
2005-01-01
Compounds in the atmosphere can contaminate samples of groundwater. An inexpensive and simple method for collecting groundwater samples was developed to prevent contamination when the background concentration of contaminants is high. This new design of groundwater sampling device involves a glass sampling bottle with a Teflon-lined valve at each end. A cleaned and dried sampling bottle was connected to a low flow-rate peristaltic pump with Teflon tubing and was filled with water. No headspace volume remained in the sampling bottle. The sample bottle was then packed in a PVC bag to prevent the target component from infiltrating the water sample through the valves. In this study, groundwater was sampled at six wells using both the conventional method and the improved method. Analysis of trichlorofluoromethane (CFC-11) concentrations at these six wells indicates that all the groundwater samples obtained by the conventional sampling method were contaminated by CFC-11 from the atmosphere. The improved sampling method largely eliminated the problems of contamination, preservation, and quantitative analysis of natural water.
Comparison of Techniques for Sampling Adult Necrophilous Insects From Pig Carcasses.
Cruise, Angela; Hatano, Eduardo; Watson, David W; Schal, Coby
2018-02-06
Studies of the pre-colonization interval and mechanisms driving necrophilous insect ecological succession depend on effective sampling of adult insects and knowledge of their diel and successional activity patterns. The number of insects trapped, their diversity, and diel periodicity were compared among four sampling methods on neonate pigs. Sampling method, time of day, and decomposition age of the pigs significantly affected the number of insects sampled from pigs. We also found significant interactions between sampling method and decomposition day, and between time of sampling and decomposition day. No single method was superior to the other methods during all three decomposition days. Sampling times after noon yielded the largest samples during the first 2 d of decomposition. On day 3 of decomposition, however, all sampling times were equally effective. Therefore, to maximize insect collections from neonate pigs, the method used to sample must vary by decomposition day. The suction trap collected the most species-rich samples, but sticky trap samples were the most diverse, when both species richness and evenness were factored into a Shannon diversity index. Repeated sampling during the noon to 18:00 hours period was most effective to obtain the maximum diversity of trapped insects. The integration of multiple sampling techniques would most effectively sample the necrophilous insect community. However, because all four tested methods were deficient at sampling beetle species, future work should focus on optimizing the most promising methods, alone or in combinations, and incorporate hand-collections of beetles. © The Author(s) 2018. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
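The Shannon diversity index used above combines species richness with evenness: H' = -Σ pᵢ ln pᵢ over the proportional abundances pᵢ. A small illustrative computation (the counts are made up, not the study's data):

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over species counts.
    Higher values reflect more species and/or more even abundances."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Two hypothetical samples with equal richness (3 species) but different evenness:
even = shannon_index([10, 10, 10])    # maximal evenness: H' = ln(3) ≈ 1.099
uneven = shannon_index([28, 1, 1])    # dominated by one species: lower H'
```

This is why a sticky-trap sample can out-score a more species-rich suction-trap sample: a richer but strongly dominated catch can carry a lower H' than a smaller, more even one.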
40 CFR Appendix I to Part 261 - Representative Sampling Methods
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 27 2013-07-01 2013-07-01 false Representative Sampling Methods I Appendix I to Part 261 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES...—Representative Sampling Methods The methods and equipment used for sampling waste materials will vary with the...
Monte Carlo approaches to sampling forested tracts with lines or points
Harry T. Valentine; Jeffrey H. Gove; Timothy G. Gregoire
2001-01-01
Several line- and point-based sampling methods can be employed to estimate the aggregate dimensions of trees standing on a forested tract or pieces of coarse woody debris lying on the forest floor. Line methods include line intersect sampling, horizontal line sampling, and transect relascope sampling; point methods include variable- and fixed-radius plot sampling, and...
This Paper covers the basics of passive sampler design, compares passive samplers to conventional methods of air sampling, and discusses considerations when implementing a passive sampling program. The Paper also discusses field sampling and sample analysis considerations to ensu...
Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.
Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J
2015-06-15
Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected through both a double-crossed W-transect with samples taken every 10 steps (method 1) and four random located plots of 0.16 m(2) with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444)L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. 
The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance component = 6.2), rather than to pasture (variance component = 0.55) or season (variance component = 0.15). Using the observed distribution of L3, the required sample size (i.e. number of plots per pasture) for sampling a pasture through random plots with a particular precision was simulated. For the same sample size, a higher relative precision was achieved when estimating pasture larval contamination on pastures with a high larval contamination and a low level of aggregation than on pastures with a low larval contamination. Herbage sampling through random plots across the pasture (method 2) seems a promising method to develop further, as counts did not differ significantly between the methods and this method was less time consuming. Copyright © 2015 Elsevier B.V. All rights reserved.
Comparison of preprocessing methods and storage times for touch DNA samples
Dong, Hui; Wang, Jing; Zhang, Tao; Ge, Jian-ye; Dong, Ying-qiang; Sun, Qi-fan; Liu, Chao; Li, Cai-xia
2017-01-01
Aim: To select appropriate preprocessing methods for different substrates by comparing the effects of four different preprocessing methods on touch DNA samples, and to determine the effect of various storage times on the results of touch DNA sample analysis. Method: Hand touch DNA samples were used to investigate the detection and inspection results of DNA on different substrates. Four preprocessing methods, including the direct cutting method, stubbing procedure, double swab technique, and vacuum cleaner method, were used in this study. DNA was extracted from mock samples with the four preprocessing methods. The best preprocessing protocol determined from the study was then used to compare performance after various storage times. DNA extracted from all samples was quantified and amplified using standard procedures. Results: The amounts of DNA and the numbers of alleles detected on porous substrates were greater than those on non-porous substrates. The performance of the four preprocessing methods varied with the substrate. The direct cutting method displayed advantages for porous substrates, and the vacuum cleaner method was advantageous for non-porous substrates. No significant degradation trend was observed as storage time increased. Conclusion: Different substrates require different preprocessing methods in order to obtain the highest DNA amount and allele number from touch DNA samples. This study provides a theoretical basis for explorations of touch DNA samples and may be used as a reference when dealing with touch DNA samples in casework. PMID:28252870
Amonette, James E.; Autrey, S. Thomas; Foster-Mills, Nancy S.
2006-02-14
Methods and apparatus for simultaneous or sequential, rapid analysis of multiple samples by photoacoustic spectroscopy are disclosed. Particularly, a photoacoustic spectroscopy sample array vessel including a vessel body having multiple sample cells connected thereto is disclosed. At least one acoustic detector is acoustically positioned near the sample cells. Methods for analyzing the multiple samples in the sample array vessels using photoacoustic spectroscopy are provided.
7 CFR 58.812 - Methods of sample analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Methods of sample analysis. 58.812 Section 58.812... Procedures § 58.812 Methods of sample analysis. Samples shall be tested according to the applicable methods of laboratory analysis contained in either DA Instruction 918-RL, as issued by the USDA, Agricultural...
7 CFR 58.245 - Method of sample analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Method of sample analysis. 58.245 Section 58.245... Procedures § 58.245 Method of sample analysis. Samples shall be tested according to the applicable methods of laboratory analysis contained in either DA Instruction 918-RL as issued by the USDA, Agricultural Marketing...
This is a sampling and analysis method for the determination of asbestos in air. Samples are analyzed by transmission electron microscopy (TEM). Although a small subset of samples are to be prepared using a direct procedure, the majority of samples analyzed using this method wil...
An improved sampling method of complex network
NASA Astrophysics Data System (ADS)
Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing
2014-12-01
Sampling a subnet is an important topic of complex network research. Sampling methods influence the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method preserves the similarity between the sampled subnet and the original network in degree distribution, connectivity rate, and average shortest path. The method is applicable when prior knowledge about the degree distribution of the original network is insufficient.
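The abstract does not specify the RMSC procedure itself, but the two ingredients it combines are standard. The sketch below shows generic multi-seed snowball sampling on an adjacency-dict graph: seeds are drawn uniformly at random (the "random sampling" stage), then BFS waves expand them (the "snowball" stage). Function name, seeding scheme, and stopping rule are assumptions, not the authors' algorithm:

```python
import random

def snowball_sample(adj, n_target, n_seeds=3, rng=None):
    """Multi-seed snowball sampling sketch: start from several randomly chosen
    nodes and repeatedly expand to unvisited neighbours until the sampled
    subnet reaches the target size (or the reachable set is exhausted)."""
    rng = rng or random.Random(0)
    frontier = rng.sample(list(adj), n_seeds)   # random-sampling stage: seeds
    sampled = set(frontier)
    while frontier and len(sampled) < n_target:
        nxt = []
        for u in frontier:                      # snowball stage: expand one wave
            for v in adj[u]:
                if v not in sampled and len(sampled) < n_target:
                    sampled.add(v)
                    nxt.append(v)
        frontier = nxt
    return sampled
```

Multiple random seeds are what give the method its global reach; a single-seed snowball only explores one connected neighbourhood and can badly misrepresent the degree distribution.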
Equilibrium Molecular Thermodynamics from Kirkwood Sampling
2015-01-01
We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys. 2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, where Kirkwood sampling is used for generating trial Monte Carlo moves. Using this method, equilibrium distributions corresponding to different temperatures and potential energy functions can be generated from a given set of low-order correlations. Since Kirkwood samples are generated independently, this method is ideally suited for massively parallel distributed computing. The second approach is a variant of reservoir replica exchange, where Kirkwood sampling is used to construct a reservoir of conformations, which exchanges conformations with the replicas performing equilibrium sampling corresponding to different thermodynamic states. Coupling with the Kirkwood reservoir enhances sampling by facilitating global jumps in the conformational space. The efficiency of both methods depends on the overlap of the Kirkwood distribution with the target equilibrium distribution. We present proof-of-concept results for a model nine-atom linear molecule and alanine dipeptide. PMID:25915525
NASA Astrophysics Data System (ADS)
Liu, Xiaodong
2017-08-01
A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very easy and simple to implement. With the help of the factorization of the far-field operator, we establish an inf-criterion for characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude; this further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with multiple, multiscale scatterers, even when the different components are close to each other.
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P
1995-01-01
This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of the mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long-term follow-up. The first sampling method was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides sampling method, sample size had an impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, likely due to (unconscious) exclusion of small and large nuclei. Testing the prognostic value of a series of cut-off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides better prognostic value in patients with invasive breast cancer.
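Systematic random sampling, as contrasted with 'at convenience' picking above, has a simple general form: choose a random starting offset, then take every k-th item. A minimal sketch (the function name is illustrative; applying it to nuclei would mean sampling from an enumerated list of all nuclei in the section):

```python
import random

def systematic_random_sample(items, n, rng=None):
    """Systematic random sampling: random start, then every k-th item.
    Covers the whole list evenly, avoiding the selection bias of
    'at convenience' picking."""
    rng = rng or random.Random()
    k = len(items) // n              # sampling interval
    start = rng.randrange(k)         # random offset in the first interval
    return [items[start + i * k] for i in range(n)]
```

Because every item has the same inclusion probability and the picks are spread across the whole population, extreme items (here, unusually small or large nuclei) are represented in proportion to their frequency, which is exactly what the ACS method failed to do.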
Comparability among four invertebrate sampling methods, Fountain Creek Basin, Colorado, 2010-2012
Zuellig, Robert E.; Bruce, James F.; Stogner, Sr., Robert W.; Brown, Krystal D.
2014-01-01
The U.S. Geological Survey, in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, designed a study to determine if sampling method and sample timing resulted in comparable samples and assessments of biological condition. To accomplish this task, annual invertebrate samples were collected concurrently using four sampling methods at 15 U.S. Geological Survey streamflow gages in the Fountain Creek basin from 2010 to 2012. Collectively, the four methods are used by local (U.S. Geological Survey cooperative monitoring program) and State monitoring programs (Colorado Department of Public Health and Environment) in the Fountain Creek basin to produce two distinct sample types for each program that target single and multiple habitats. This study found distinguishable differences between single- and multi-habitat sample types using both community similarities and multi-metric index values, while methods from each program within a sample type were comparable. This indicates that the Colorado Department of Public Health and Environment methods were compatible with the cooperative monitoring program methods within multi- and single-habitat sample types. Comparisons between September and October samples found distinguishable differences based on community similarities for both sample types, whereas differences were found only for single-habitat samples when multi-metric index values were considered. At one site, differences between September and October index values from single-habitat samples resulted in opposing assessments of biological condition. Direct application of the results to inform the revision of the existing Fountain Creek basin U.S. Geological Survey cooperative monitoring program is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, for two common statistically based environmental sampling approaches, the report includes the methods and formulas for calculating the
• number of samples required to achieve a specified confidence in characterization and clearance decisions
• confidence in making characterization and clearance decisions for a specified number of samples.
In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1.
qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0; 2. qualitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0; 3. quantitative data (e.g., contaminant concentrations expressed as CFU/cm²) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0; 4. quantitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0. For Situation 2, the hotspot sampling approach provides for stating with Z% confidence that a hotspot of specified shape and size with detectable contamination will be found. Also for Situation 2, the CJR approach provides for stating with X% confidence that at least Y% of the decision area does not contain detectable contamination. Forms of these statements for the other three situations are discussed in Section 2.2. Statistical methods that account for FNR > 0 currently exist only for the hotspot sampling approach with qualitative data (or quantitative data converted to qualitative data). This report documents the current status of methods and formulas for the hotspot and CJR sampling approaches. Limitations of these methods are identified. Extensions of the methods that are applicable when FNR = 0 to account for FNR > 0, or to address other limitations, will be documented in future revisions of this report if future funding supports the development of such extensions. For quantitative data, this report also presents statistical methods and formulas for 1. quantifying the uncertainty in measured sample results; 2. estimating the true surface concentration corresponding to a surface sample; 3. quantifying the uncertainty of the estimate of the true surface concentration. All of the methods and formulas discussed in the report were applied to example situations to illustrate application of the methods and interpretation of the results.
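Statements of the form "with X% confidence, at least Y% of the decision area is free of detectable contamination, given all-negative samples" follow from a textbook compliance-sampling calculation when FNR = 0 and samples are simple random. This is a sketch of that standard formula, not necessarily the report's exact CJR computation:

```python
import math

def n_samples_for_clearance(confidence, clean_fraction):
    """Number of all-negative simple random samples needed to state, with the
    given confidence, that at least `clean_fraction` of the decision area has
    no detectable contamination. Assumes FNR = 0: if more than
    (1 - clean_fraction) of the area were contaminated, the chance of n
    all-negative samples would be below clean_fraction**n."""
    return math.ceil(math.log(1 - confidence) / math.log(clean_fraction))

# e.g. 95% confidence that at least 99% of the area is uncontaminated
n = n_samples_for_clearance(0.95, 0.99)   # -> 299 samples
```

The steep cost of tight clearance statements is visible here: pushing the clean fraction from 99% to 99.9% roughly multiplies the required sample count by ten, which is why the report's combined judgment-plus-random designs are attractive.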
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extrema points of the metamodels and minimum points of the density function; this procedure yields progressively more accurate metamodels. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
Validated Test Method 5030C: Purge-and-Trap for Aqueous Samples
This method describes a purge-and-trap procedure for the analysis of volatile organic compounds in aqueous samples and water-miscible liquid samples. It also describes the analysis of high-concentration soil and waste sample extracts prepared in Method 5035.
Amonette, James E.; Autrey, S. Thomas; Foster-Mills, Nancy S.; Green, David
2005-03-29
Methods and apparatus for analysis of multiple samples by photoacoustic spectroscopy are disclosed. Particularly, a photoacoustic spectroscopy sample array vessel including a vessel body having multiple sample cells connected thereto is disclosed. At least one acoustic detector is acoustically coupled with the vessel body. Methods for analyzing the multiple samples in the sample array vessels using photoacoustic spectroscopy are provided.
Popic, Tony J; Davila, Yvonne C; Wardle, Glenda M
2013-01-01
Methods for sampling ecological assemblages strive to be efficient, repeatable, and representative. Unknowingly, common methods may be limited in terms of revealing species function and so of less value for comparative studies. The global decline in pollination services has stimulated surveys of flower-visiting invertebrates, using pan traps and net sampling. We explore the relative merits of these two methods in terms of species discovery, quantifying abundance, function, and composition, and responses of species to changing floral resources. Using a spatially-nested design we sampled across a 5000 km² area of arid grasslands, including 432 hours of net sampling and 1296 pan trap-days, between June 2010 and July 2011. Net sampling yielded 22% more species and 30% higher abundance than pan traps, and better reflected the spatio-temporal variation of floral resources. Species composition differed significantly between methods; from 436 total species, 25% were sampled by both methods, 50% only by nets, and the remaining 25% only by pans. Apart from being less comprehensive, if pan traps do not sample flower-visitors, the link to pollination is questionable. By contrast, net sampling functionally linked species to pollination through behavioural observations of flower-visitation interaction frequency. Netted specimens are also necessary for evidence of pollen transport. Benefits of net-based sampling outweighed minor differences in overall sampling effort. As pan traps and net sampling methods are not equivalent for sampling invertebrate-flower interactions, we recommend net sampling of invertebrate pollinator assemblages, especially if datasets are intended to document declines in pollination and guide measures to retain this important ecosystem service.
A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing
2015-09-01
The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
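The 1-D building block of CTP sampling is the set of zeros of the Chebyshev polynomial of the first kind, x_k = cos((2k - 1)π/(2n)) for k = 1..n; a tensor product then pairs these nodes across dimensions. A minimal sketch (function names are illustrative, and the full CTP/CCM construction in the article involves more than the node layout):

```python
import itertools
import math

def chebyshev_nodes(n):
    """Zeros of the degree-n Chebyshev polynomial of the first kind on [-1, 1]:
    x_k = cos((2k - 1) * pi / (2n)), k = 1..n. The nodes cluster near the
    interval ends, which suppresses the Runge effect in polynomial fitting."""
    return [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]

def ctp_samples(n, d):
    """Tensor-product layout of the 1-D nodes in d dimensions: n**d points,
    sketching the CTP sampling grid."""
    return list(itertools.product(chebyshev_nodes(n), repeat=d))
```

The n**d growth of the tensor product is the cost the CCM variant addresses by drawing a random subset of the CTP points, much as the Smolyak sparse grid thins a full tensor grid.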
Zhang, Heng; Lan, Fang; Shi, Yupeng; Wan, Zhi-Gang; Yue, Zhen-Feng; Fan, Fang; Lin, Yan-Kui; Tang, Mu-Jin; Lv, Jing-Zhang; Xiao, Tan; Yi, Changqing
2014-06-15
VitaFast(®) test kits designed for the microbiological assay in microtiter plate format can be applied to the quantitative determination of B-group water-soluble vitamins such as vitamin B12, folic acid, and biotin. Compared to traditional microbiological methods, VitaFast(®) kits significantly reduce sample processing time and provide greater reliability, higher productivity, and better accuracy. Simultaneous determination of vitamin B12, folic acid, and biotin in one sample has recently become an urgent requirement when evaluating the quality of infant formulae in our practical work. However, the present sample preparation protocols, which were developed for individual test systems, are incompatible with simultaneous determination of several analytes. To solve this problem, a novel "three-in-one" sample preparation method is herein developed for simultaneous determination of B-group water-soluble vitamins using VitaFast(®) kits. The performance of this novel "three-in-one" sample preparation method was systematically evaluated by comparison with the individual sample preparation protocols. The experimental results of the assays employing the "three-in-one" sample preparation method were in good agreement with those obtained from the conventional VitaFast(®) extraction methods, indicating that the proposed "three-in-one" sample preparation method is applicable to the present three VitaFast(®) vitamin test systems, thus offering a promising alternative to the three independent sample preparation methods. The proposed new sample preparation method will significantly improve the efficiency of infant formulae inspection. Copyright © 2013 Elsevier Ltd. All rights reserved.
Surveying immigrants without sampling frames - evaluating the success of alternative field methods.
Reichel, David; Morales, Laura
2017-01-01
This paper evaluates the sampling methods of an international survey, the Immigrant Citizens Survey, which aimed at surveying immigrants from outside the European Union (EU) in 15 cities in seven EU countries. In five countries, no sample frame was available for the target population. Consequently, alternative ways to obtain a representative sample had to be found. In three countries 'location sampling' was employed, while in two countries traditional methods were used with adaptations to reach the target population. The paper assesses the main methodological challenges of carrying out a survey among a group of immigrants for whom no sampling frame exists. The samples of the survey in these five countries are compared to results of official statistics in order to assess the accuracy of the samples obtained through the different sampling methods. It can be shown that alternative sampling methods can provide meaningful results in terms of core demographic characteristics although some estimates differ to some extent from the census results.
Chen, Meilian; Lee, Jong-Hyeon; Hur, Jin
2015-10-01
Despite literature evidence suggesting the importance of sampling methods for the properties of sediment pore waters, their effects on the dissolved organic matter of pore waters (PW-DOM) have been unexplored to date. Here, we compared the effects of two commonly used sampling methods (i.e., centrifuge and Rhizon sampler) on the characteristics of PW-DOM for the first time. The bulk dissolved organic carbon (DOC), ultraviolet-visible (UV-Vis) absorption, and excitation-emission matrices coupled with parallel factor analysis (EEM-PARAFAC) of the PW-DOM samples were compared for the two sampling methods with sediments from minimally to severely contaminated sites. The centrifuged samples were found to have higher average values of DOC, UV absorption, and protein-like EEM-PARAFAC components. The samples collected with the Rhizon sampler, however, exhibited generally more humified characteristics than the centrifuged ones, implying a preferential collection of PW-DOM depending on the sampling method. Furthermore, the differences between the two sampling methods seem more pronounced in relatively more polluted sites. Our observations are possibly explained by either the filtration effect resulting from the smaller pore size of the Rhizon sampler or the desorption of DOM molecules loosely bound to minerals during centrifugation, or both. Our study suggests that consistent use of one sampling method is crucial for PW-DOM studies and also that caution should be taken in the comparison of data collected with different sampling methods.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-23
... Alimentarius Commission: Meeting of the Codex Committee on Methods of Analysis and Sampling AGENCY: Office of... discussed at the 33rd Session of the Codex Committee on Methods of Analysis and Sampling (CCMAS) of the... the criteria appropriate to Codex Methods of Analysis and Sampling; serving as a coordinating body for...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
... Alimentarius Commission: Meeting of the Codex Committee on Methods of Analysis and Sampling AGENCY: Office of... discussed at the 32nd session of the Codex Committee on Methods of Analysis and Sampling (CCMAS) of the... appropriate to Codex Methods of Analysis and Sampling; serving as a coordinating body for Codex with other...
Intra prediction using face continuity in 360-degree video coding
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; He, Yuwen; Ye, Yan
2017-09-01
This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.
7 CFR 29.110 - Method of sampling.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Method of sampling. 29.110 Section 29.110 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Regulations Inspectors, Samplers, and Weighers § 29.110 Method of sampling. In sampling tobacco...
7 CFR 29.110 - Method of sampling.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Method of sampling. 29.110 Section 29.110 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Regulations Inspectors, Samplers, and Weighers § 29.110 Method of sampling. In sampling tobacco...
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicates that the use of samples no larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects because their sample sizes are smaller than six. For data collected from biomedical experiments, it is also often unknown whether the mathematical requirements built into sample-comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, the distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8, the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard-error-of-the-mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
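The kind of computer-simulated experiment described here can be sketched in a few lines: repeatedly draw two groups from normal populations, apply a pooled t-test, and count rejections. This is an illustrative reconstruction, not the authors' code; the critical value 2.120 is the standard two-sided 5% point for 16 degrees of freedom (two groups of nine), and all names are made up:

```python
import math
import random
import statistics

def t_statistic(a, b):
    """Pooled two-sample t statistic."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

def rejection_rate(n, effect, trials=20000, t_crit=2.120, seed=0):
    """Fraction of simulated two-group experiments (group size n, true mean
    difference `effect`, unit variance) in which |t| exceeds t_crit.
    With effect = 0 this estimates the Type I error; with effect > 0 it
    estimates power (1 - Type II error). The default t_crit applies to
    df = 16, i.e. two groups of n = 9."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(effect, 1.0) for _ in range(n)]
        if abs(t_statistic(a, b)) > t_crit:
            hits += 1
    return hits / trials
```

At n = 9 and effect = 0 the rejection rate converges to roughly 0.05, consistent with the abstract's claim that this sample size keeps the error near the nominal level.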
Comparison of individual and pooled sampling methods for detecting bacterial pathogens of fish
Mumford, Sonia; Patterson, Chris; Evered, J.; Brunson, Ray; Levine, J.; Winton, J.
2005-01-01
Examination of finfish populations for viral and bacterial pathogens is an important component of fish disease control programs worldwide. Two methods are commonly used for collecting tissue samples for bacteriological culture, the currently accepted standards for detection of bacterial fish pathogens. The method specified in the Office International des Epizooties Manual of Diagnostic Tests for Aquatic Animals permits combining renal and splenic tissues from as many as 5 fish into pooled samples. The American Fisheries Society (AFS) Blue Book/US Fish and Wildlife Service (USFWS) Inspection Manual specifies the use of a bacteriological loop for collecting samples from the kidney of individual fish. An alternative would be to more fully utilize the pooled samples taken for virology. If implemented, this approach would provide substantial savings in labor and materials. To compare the relative performance of the AFS/USFWS method and this alternative approach, cultures of Yersinia ruckeri were used to establish low-level infections in groups of rainbow trout (Oncorhynchus mykiss) that were sampled by both methods. Yersinia ruckeri was cultured from 22 of 37 groups by at least 1 method. The loop method yielded 18 positive groups, with 1 group positive in the loop samples but negative in the pooled samples. The pooled samples produced 21 positive groups, with 4 groups positive in the pooled samples but negative in the loop samples. There was statistically significant agreement (Spearman coefficient 0.80, P < 0.001) in the relative ability of the 2 sampling methods to permit detection of low-level bacterial infections of rainbow trout.
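The arithmetic behind pooled sampling (standard pooled-testing theory, not taken from this paper) is simple: if each fish is infected with prevalence p, a pool of tissues from k fish is positive whenever at least one contributing fish is infected, and the relation can be inverted to estimate prevalence from the observed pool positivity rate. A minimal sketch:

```python
def pool_positive_prob(p, k):
    """Probability that a pool of tissues from k fish contains at least one
    infected fish, given per-fish infection prevalence p (assumes the
    culture detects any positive pool)."""
    return 1.0 - (1.0 - p) ** k

def prevalence_from_pools(q, k):
    """Invert the relation above: estimate per-fish prevalence from the
    observed fraction q of positive k-fish pools."""
    return 1.0 - (1.0 - q) ** (1.0 / k)
```

For example, at 10% prevalence a 5-fish pool is positive about 41% of the time, which is why pooling can detect low-level infections with far fewer cultures.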
Neutron activation analysis of certified samples by the absolute method
NASA Astrophysics Data System (ADS)
Kadem, F.; Belouadah, N.; Idiri, Z.
2015-07-01
Nuclear reaction analysis is mainly based on the relative method or on the use of activation cross sections. In order to validate nuclear data for the calculated cross sections evaluated from systematic studies, we used the neutron activation analysis technique (NAA) to determine the concentrations of the various constituents of certified samples of animal blood, milk, and hay. In this analysis, the absolute method is used. The neutron activation technique involves irradiating the sample and subsequently measuring its activity. The fundamental activation equation connects several physical parameters, including the cross section, which is essential for the quantitative determination of the different elements composing the sample without resorting to a standard sample. Called the absolute method, it allows measurements as accurate as those of the relative method. The results showed that the absolute method yields values as precise as the relative method, which requires a standard sample for each element to be quantified.
Application of work sampling technique to analyze logging operations.
Edwin S. Miyata; Helmuth M. Steinhilb; Sharon A. Winsauer
1981-01-01
Discusses the advantages and disadvantages of various time study methods for determining efficiency and productivity in logging. The work sampling method is compared with the continuous time-study method. Gives the feasibility, capability, and limitation of the work sampling method.
Methods of sampling airborne fungi in working environments of waste treatment facilities.
Černá, Kristýna; Wittlingerová, Zdeňka; Zimová, Magdaléna; Janovský, Zdeněk
2016-01-01
The objective of the present study was to evaluate and compare the efficiency of a filter-based sampling method and a high-volume sampling method for sampling airborne culturable fungi present in waste sorting facilities. The membrane filters (MF) method was compared with the surface air system (SAS) method. The selected sampling methods were modified and tested in 2 plastic waste sorting facilities. The total number of colony-forming units (CFU)/m^3 of airborne fungi was dependent on the type of sampling device, on the time of sampling, which was carried out every hour from the beginning of the work shift, and on the type of cultivation medium (p < 0.001). Detected concentrations of airborne fungi ranged 2×10^2-1.7×10^6 CFU/m^3 when using the MF method, and 3×10^2-6.4×10^4 CFU/m^3 when using the SAS method. Both methods showed comparable sensitivity to fluctuations in the concentrations of airborne fungi during the work shifts. The SAS method is adequate for a fast, indicative determination of the concentration of airborne fungi. The MF method is suitable for thorough assessment of working-environment contamination by airborne fungi. We therefore recommend the MF method for the implementation of a uniform standard methodology of airborne fungi sampling in working environments of waste treatment facilities. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
Xun-Ping, W; An, Z
2017-07-27
Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency, and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored in an experimental study in a 50 m×50 m plot in a marshland in the Poyang Lake region. First, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required number of optimal sampling points for each layer was calculated through the Hammond McCullagh equation; third, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA were compared. Results The method (SOPA) proposed in this study had the minimal absolute error, 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study achieves higher estimation accuracy than the other four methods.
Extending the solvent-free MALDI sample preparation method.
Hanton, Scott D; Parees, David M
2005-01-01
Matrix-assisted laser desorption/ionization (MALDI) mass spectrometry is an important technique to characterize many different materials, including synthetic polymers. MALDI mass spectral data can be used to determine the polymer average molecular weights, repeat units, and end groups. One of the key issues in traditional MALDI sample preparation is making good solutions of the analyte and the matrix. Solvent-free sample preparation methods have been developed to address these issues. Previous results of solvent-free or dry prepared samples show some advantages over traditional wet sample preparation methods. Although the results of the published solvent-free sample preparation methods produced excellent mass spectra, we found the method to be very time-consuming, with significant tool cleaning, which presents a significant possibility of cross contamination. To address these issues, we developed an extension of the solvent-free method that replaces the mortar and pestle grinding with ball milling the sample in a glass vial with two small steel balls. This new method generates mass spectra with equal quality of the previous methods, but has significant advantages in productivity, eliminates cross contamination, and is applicable to liquid and soft or waxy analytes.
Configurations and calibration methods for passive sampling techniques.
Ouyang, Gangfeng; Pawliszyn, Janusz
2007-10-19
Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.
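The three calibration regimes the review names fall out of one textbook first-order uptake model: accumulated analyte approaches an equilibrium level exponentially, so early on uptake is linear in time (linear-uptake calibration), and at long times it plateaus (equilibrium extraction). A minimal sketch with generic symbols, not notation from the review:

```python
import math

def sampler_mass(c_water, K, ke, t):
    """Analyte amount accumulated by a passive sampler under a first-order
    kinetic model: the equilibrium level K * c_water is approached with
    elimination rate constant ke. For small ke*t the uptake is linear in
    time (linear-uptake regime); for large ke*t it plateaus at equilibrium."""
    return K * c_water * (1.0 - math.exp(-ke * t))
```

Kinetic calibration works in between these two limits, where neither approximation holds and ke must be determined experimentally.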
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, William H.
2017-09-15
The objectives for this presentation are to describe the method that the IAEA uses to determine a sampling plan for nuclear material measurements; describe the terms detection probability and significant quantity; list the three nuclear materials measurement types; describe the sampling method applied to an item facility; and describe multiple method sampling.
Surface sampling techniques for 3D object inspection
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong S.; Gerhardt, Lester A.
1995-03-01
While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy. One uses triangle patches while the other uses rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform point sets and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that the initial point sets, when preprocessed by adaptive sampling using triangle patches, are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced from the finite element method. The performance of this algorithm was compared to that of adaptive sampling using triangular patches. Adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
Methods of analyzing crude oil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooks, Robert Graham; Jjunju, Fred Paul Mark; Li, Anyin
The invention generally relates to methods of analyzing crude oil. In certain embodiments, methods of the invention involve obtaining a crude oil sample, and subjecting the crude oil sample to mass spectrometry analysis. In certain embodiments, the method is performed without any sample pre-purification steps.
Volatile organic compounds: sampling methods and their worldwide profile in ambient air.
Kumar, Anuj; Víden, Ivan
2007-08-01
The atmosphere is a particularly difficult analytical system because of the very low levels of the substances to be analysed, sharp variations in pollutant levels with time and location, and differences in wind, temperature, and humidity. This makes the selection of an efficient sampling technique a key step toward reliable air analysis. Generally, methods for sampling volatile organic compounds involve either collection of whole air or preconcentration of samples on adsorbents. The methods vary from each other in sampling technique, type of sorbent, method of extraction, and identification technique. In this review paper, we discuss various important aspects of sampling volatile organic compounds by the widely used and advanced sampling methods. Characteristics of the various adsorbents used for VOC sampling are also described. Furthermore, this paper makes an effort to comprehensively review the concentration levels of volatile organic compounds, along with the methodology used for analysis, in major cities of the world.
40 CFR 60.496 - Test methods and procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Surface Coating Industry § 60.496 Test methods and procedures. (a) The reference methods in appendix A to...) Method 4 for stack gas moisture. (b) For Method 24, the coating sample must be a 1-litre sample collected... volume must be 0.003 dscm except that shorter sampling times or smaller volumes, when necessitated by...
40 CFR 60.496 - Test methods and procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Surface Coating Industry § 60.496 Test methods and procedures. (a) The reference methods in appendix A to...) Method 4 for stack gas moisture. (b) For Method 24, the coating sample must be a 1-litre sample collected... volume must be 0.003 dscm except that shorter sampling times or smaller volumes, when necessitated by...
40 CFR 60.496 - Test methods and procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Surface Coating Industry § 60.496 Test methods and procedures. (a) The reference methods in appendix A to...) Method 4 for stack gas moisture. (b) For Method 24, the coating sample must be a 1-litre sample collected... volume must be 0.003 dscm except that shorter sampling times or smaller volumes, when necessitated by...
40 CFR 60.496 - Test methods and procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Surface Coating Industry § 60.496 Test methods and procedures. (a) The reference methods in appendix A to...) Method 4 for stack gas moisture. (b) For Method 24, the coating sample must be a 1-litre sample collected... volume must be 0.003 dscm except that shorter sampling times or smaller volumes, when necessitated by...
40 CFR 60.496 - Test methods and procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Surface Coating Industry § 60.496 Test methods and procedures. (a) The reference methods in appendix A to...) Method 4 for stack gas moisture. (b) For Method 24, the coating sample must be a 1-litre sample collected... volume must be 0.003 dscm except that shorter sampling times or smaller volumes, when necessitated by...
[Comparison of the Conventional Centrifuged and Filtrated Preparations in Urine Cytology].
Sekita, Nobuyuki; Shimosakai, Hirofumi; Nishikawa, Rika; Sato, Hiroaki; Kouno, Hiroyoshi; Fujimura, Masaaki; Mikami, Kazuo
2016-03-01
The urine cytology test is one of the most important tools for the diagnosis of malignant urinary tract tumors. This test is also of great value for predicting malignancy. However, the sensitivity of this test is not high enough to screen for malignant cells. In our laboratory, we were able to attain a higher sensitivity in urine cytology tests after changing the preparation method for urine samples. The differences in cytodiagnosis between the two methods are discussed here. From January 2012 to June 2013, 2,031 urine samples were prepared using the conventional centrifuge method (C method); and from September 2013 to March 2015, 2,453 urine samples were prepared using the filtration method (F method) for the cytology test. When samples in category 4 or 5 were defined as cytologically positive, the sensitivity of the test with samples prepared using the F method was significantly higher than with samples prepared using the C method (72% vs 28%, p<0.001). The number of cells on the glass slides prepared by the F method was significantly higher than that of the samples prepared by the C method (p<0.001). After introduction of the F method, the number of false negative cases decreased in the urine cytology test, because a larger number of cells was seen and more easily detected as atypical or malignant epithelial cells. Therefore, this method has a higher sensitivity than the conventional C method, as the sensitivity of urine cytology tests relies partially on the number of cells visualized in the prepared samples.
Flow Cytometric Human Leukocyte Antigen-B27 Typing with Stored Samples for Batch Testing
Seo, Bo Young
2013-01-01
Background Flow cytometry (FC) HLA-B27 typing is still used extensively for the diagnosis of spondyloarthropathies. If patient blood samples are stored for a prolonged duration, this testing can be performed in a batch manner, and in-house cellular controls could easily be procured. In this study, we investigated various methods of storing patient blood samples. Methods We compared four storage methods: three methods of analyzing lymphocytes (whole blood stored at room temperature, frozen mononuclear cells, and frozen white blood cells [WBCs] after lysing red blood cells [RBCs]), and one method using frozen platelets (FPLT). We used three ratios associated with mean fluorescence intensities (MFI) for HLA-B27 assignment: the B27 MFI ratio (sample/control) for HLA-B27 fluorescein-5-isothiocyanate (FITC); the B7 MFI ratio for HLA-B7 phycoerythrin (PE); and the ratio of these two ratios, the B7/B27 ratio. Results Comparing the B27 MFI ratios of each storage method for the HLA-B27+ samples and the B7/B27 ratios for the HLA-B7+ samples revealed that FPLT was the best of the four methods. FPLT had a sensitivity of 100% and a specificity of 99.3% for HLA-B27 assignment in DNA-typed samples (N=164) when the two criteria, namely, B27 MFI ratio >4.0 and B7/B27 ratio <1.5, were used. Conclusions The FPLT method was found to offer a simple, economical, and accurate method of FC HLA-B27 typing using stored patient samples. If stored samples are used, this method has the potential to replace the standard FC typing method when used in combination with a complementary DNA-based method. PMID:23667843
Sampling Methods in Cardiovascular Nursing Research: An Overview.
Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie
2014-01-01
Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling, including simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion into the research study, and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
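Three of the probability designs named in this overview can be shown in a few lines of illustrative code (a sketch of the textbook definitions, not material from the article; equal allocation across strata is assumed for simplicity):

```python
import random

def simple_random_sample(population, n, rng):
    """Simple random sampling: every element has an equal, independent
    chance of inclusion (drawn without replacement)."""
    return rng.sample(population, n)

def systematic_sample(population, n, rng):
    """Systematic sampling: every k-th element after a random start."""
    k = len(population) // n
    start = rng.randrange(k)
    return population[start::k][:n]

def stratified_sample(strata, n_per_stratum, rng):
    """Stratified sampling: an independent random draw within each stratum
    (equal allocation, for simplicity)."""
    return [x for s in strata for x in rng.sample(s, n_per_stratum)]
```

Cluster and multi-stage sampling follow the same pattern, with whole groups (and then units within selected groups) drawn at random.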
Face recognition based on symmetrical virtual image and original training image
NASA Astrophysics Data System (ADS)
Ke, Jingcheng; Peng, Yali; Liu, Shigang; Li, Jun; Pei, Zhao
2018-02-01
In face representation-based classification methods, we are able to obtain a high recognition rate if a face has enough available training samples. However, in practical applications, we only have limited training samples to use. In order to obtain enough training samples, many methods simultaneously use the original training samples and corresponding virtual samples to strengthen the ability to represent the test sample. One approach directly uses the original training samples and corresponding mirror samples to recognize the test sample. However, when the test sample is nearly symmetrical while the original training samples are not, the integration of the original training and mirror samples might not represent the test sample well. To tackle the above-mentioned problem, in this paper, we propose a novel method to obtain a kind of virtual sample generated by averaging the original training samples and corresponding mirror samples. Then, the original training samples and the virtual samples are integrated to recognize the test sample. Experimental results on five face databases show that the proposed method is able to partly overcome the challenges of the various poses, facial expressions, and illuminations of the original face images.
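The virtual-sample construction described here is easy to sketch: mirror each training image left-right, then average it pixel-wise with the original, which yields an exactly symmetric image. A minimal illustration on plain nested lists (function names are ours, not the paper's):

```python
def mirror(image):
    """Left-right mirror of an image stored as a list of pixel rows."""
    return [row[::-1] for row in image]

def averaged_virtual_sample(image):
    """Virtual training sample in the spirit of the paper: the pixel-wise
    average of an image and its mirror, which is left-right symmetric
    by construction."""
    return [[(a + b) / 2 for a, b in zip(row, row[::-1])] for row in image]
```

Because the result is symmetric, it can better match nearly symmetric test faces that the asymmetric originals represent poorly.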
19 CFR 151.83 - Method of sampling.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Method of sampling. 151.83 Section 151.83 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Cotton § 151.83 Method of sampling. For...
Comparing three sampling techniques for estimating fine woody down dead biomass
Robert E. Keane; Kathy Gray
2013-01-01
Designing woody fuel sampling methods that quickly, accurately and efficiently assess biomass at relevant spatial scales requires extensive knowledge of each sampling method's strengths, weaknesses and tradeoffs. In this study, we compared various modifications of three common sampling methods (planar intercept, fixed-area microplot and photoload) for estimating...
Model-based inference for small area estimation with sampling weights
Vandendijck, Y.; Faes, C.; Kirby, R.S.; Lawson, A.; Hens, N.
2017-01-01
Obtaining reliable estimates about health outcomes for areas or domains where only few to no samples are available is the goal of small area estimation (SAE). Often, we rely on health surveys to obtain information about health outcomes. Such surveys are often characterised by a complex design, stratification, and unequal sampling weights as common features. Hierarchical Bayesian models are well recognised in SAE as a spatial smoothing method, but often ignore the sampling weights that reflect the complex sampling design. In this paper, we focus on data obtained from a health survey where the sampling weights of the sampled individuals are the only information available about the design. We develop a predictive model-based approach to estimate the prevalence of a binary outcome for both the sampled and non-sampled individuals, using hierarchical Bayesian models that take into account the sampling weights. A simulation study is carried out to compare the performance of our proposed method with other established methods. The results indicate that our proposed method achieves great reductions in mean squared error when compared with standard approaches. It performs equally well or better when compared with more elaborate methods when there is a relationship between the responses and the sampling weights. The proposed method is applied to estimate asthma prevalence across districts. PMID:28989860
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-09-01
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
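The benefit of stratifying on a model-based auxiliary variable, as described above, can be sketched with a small Monte Carlo comparison (a hypothetical illustration: the synthetic auxiliary variable below stands in for the paper's zero-inflated Poisson gene-flow model, and all parameters are invented):

```python
# Hypothetical sketch: stratifying a field on an auxiliary variable that
# predicts transgene presence reduces the mean squared error of the
# estimated presence rate relative to simple random sampling.
import random

random.seed(2)

N = 10000
aux = [random.random() ** 3 for _ in range(N)]        # predicted cross-pollination
y = [1 if random.random() < a else 0 for a in aux]    # true transgene presence
true_rate = sum(y) / N

order = sorted(range(N), key=lambda i: aux[i])
strata = [order[k * 2500:(k + 1) * 2500] for k in range(4)]  # quartiles of aux
n = 200

def srs():
    # simple random sample of n grains from the whole field
    return sum(y[i] for i in random.sample(range(N), n)) / n

def stratified():
    # proportional allocation: n/4 grains from each auxiliary-variable stratum
    return sum(sum(y[i] for i in random.sample(s, n // 4)) / (n // 4)
               for s in strata) / 4

reps = 400
srs_mse = sum((srs() - true_rate) ** 2 for _ in range(reps)) / reps
strat_mse = sum((stratified() - true_rate) ** 2 for _ in range(reps)) / reps
# Stratification on the auxiliary variable yields a lower mean squared error.
```

Both estimators are unbiased here; stratification wins by removing the between-stratum component of the variance, which is the same mechanism that lets the authors use substantially smaller samples.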
Sommer, D; Enderlein, D; Antakli, A; Schönenbrücher, H; Slaghuis, J; Redmann, T; Lierz, M
2012-01-01
The efficiency of two commercial real-time PCR methods, the foodproof® Salmonella detection system and the BAX® PCR Assay Salmonella system, was compared to standardized culture methods (EN ISO 6579:2002 - Annex D) for the detection of Salmonella spp. in poultry samples. Four sample matrices (feed, dust, boot swabs, feces) obtained directly from poultry flocks, as well as artificially spiked samples of the same matrices, were used. All samples were first tested for Salmonella spp. using culture methods as the gold standard. In addition, samples spiked with Salmonella Enteritidis were tested to evaluate the sensitivity of both PCR methods. Furthermore, all methods were evaluated in an annual ring-trial of the National Salmonella Reference Laboratory of Germany. Salmonella detection in the feed, dust and boot-swab matrices was comparable between the two PCR systems, whereas the results for feces differed markedly. The quality, especially the freshness, of the fecal samples influenced the sensitivity of the real-time PCR and the results of the culture methods. In fresh fecal samples, an initial spiking level of 100 cfu/25 g Salmonella Enteritidis was detected. Fecal samples dried for two days allowed the detection of 14 cfu/25 g. Both real-time PCR protocols appear to be suitable for the detection of Salmonella spp. in all four matrices. The foodproof® system detected eight more samples as positive than the BAX® system, but had a potential false-positive result in one case. In samples dried for seven days, none of the methods was able to detect Salmonella, likely because of lethal cell damage. In general, the advantage of PCR analysis over the culture method is the reduction of working time from 4-5 days to only 2 days. However, especially for the analysis of fecal samples, official validation should be conducted according to the requirements of EN ISO 6579:2002 - Annex D.
Gyawali, P; Ahmed, W; Jagals, P; Sidhu, J P S; Toze, S
2015-12-01
Hookworm infection contributes around 700 million infections worldwide especially in developing nations due to increased use of wastewater for crop production. The effective recovery of hookworm ova from wastewater matrices is difficult due to their low concentrations and heterogeneous distribution. In this study, we compared the recovery rates of (i) four rapid hookworm ova concentration methods from municipal wastewater, and (ii) two concentration methods from sludge samples. Ancylostoma caninum ova were used as surrogate for human hookworm (Ancylostoma duodenale and Necator americanus). Known concentration of A. caninum hookworm ova were seeded into wastewater (treated and raw) and sludge samples collected from two wastewater treatment plants (WWTPs) in Brisbane and Perth, Australia. The A. caninum ova were concentrated from treated and raw wastewater samples using centrifugation (Method A), hollow fiber ultrafiltration (HFUF) (Method B), filtration (Method C) and flotation (Method D) methods. For sludge samples, flotation (Method E) and direct DNA extraction (Method F) methods were used. Among the four methods tested, filtration (Method C) method was able to recover higher concentrations of A. caninum ova consistently from treated wastewater (39-50%) and raw wastewater (7.1-12%) samples collected from both WWTPs. The remaining methods (Methods A, B and D) yielded variable recovery rate ranging from 0.2 to 40% for treated and raw wastewater samples. The recovery rates for sludge samples were poor (0.02-4.7), although, Method F (direct DNA extraction) provided 1-2 orders of magnitude higher recovery rate than Method E (flotation). Based on our results it can be concluded that the recovery rates of hookworm ova from wastewater matrices, especially sludge samples, can be poor and highly variable. Therefore, choice of concentration method is vital for the sensitive detection of hookworm ova in wastewater matrices. Crown Copyright © 2015. Published by Elsevier Inc. 
All rights reserved.
Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.
2016-12-01
The impact of climate change has been observed throughout the globe, with ecosystems experiencing rapid changes such as vegetation shifts and species extinctions. In this context, the Species Distribution Model (SDM) is a popular method for projecting the impact of climate change on ecosystems. SDMs are based on the niche of a given species, which means that presence point data are essential to characterise that niche. Running SDMs for plants requires particular consideration of the characteristics of vegetation. Vegetation data over large areas are normally produced using remote sensing techniques; as a result, the exact locations of presence data carry high uncertainty, because presence points are selected from polygon and raster datasets. Sampling methods for generating vegetation presence data should therefore be chosen carefully. In this study, we used three different sampling methods to select vegetation presence data: random sampling, stratified sampling and site-index-based sampling. We used the R package BIOMOD2 to assess uncertainty from modeling, and included BioCLIM variables and other environmental variables as input data. Despite differences among the 10 SDMs, the sampling methods differed in ROC values: random sampling showed the lowest ROC value, while site-index-based sampling showed the highest. This study thus quantifies the uncertainties arising from presence data sampling methods and SDMs.
Rothrock, Michael J.; Hiett, Kelli L.; Gamble, John; Caudill, Andrew C.; Cicconi-Hogan, Kellie M.; Caporaso, J. Gregory
2014-01-01
The efficacy of DNA extraction protocols can be highly dependent upon both the type of sample being investigated and the types of downstream analyses performed. Considering that the use of new bacterial community analysis techniques (e.g., microbiomics, metagenomics) is becoming more prevalent in the agricultural and environmental sciences and many environmental samples within these disciplines can be physiochemically and microbiologically unique (e.g., fecal and litter/bedding samples from the poultry production spectrum), appropriate and effective DNA extraction methods need to be carefully chosen. Therefore, a novel semi-automated hybrid DNA extraction method was developed specifically for use with environmental poultry production samples. This method is a combination of the two major types of DNA extraction: mechanical and enzymatic. A two-step intense mechanical homogenization step (using bead-beating specifically formulated for environmental samples) was added to the beginning of the “gold standard” enzymatic DNA extraction method for fecal samples to enhance the removal of bacteria and DNA from the sample matrix and improve the recovery of Gram-positive bacterial community members. Once the enzymatic extraction portion of the hybrid method was initiated, the remaining purification process was automated using a robotic workstation to increase sample throughput and decrease sample processing error. In comparison to the strict mechanical and enzymatic DNA extraction methods, this novel hybrid method provided the best overall combined performance when considering quantitative (using 16S rRNA qPCR) and qualitative (using microbiomics) estimates of the total bacterial communities when processing poultry feces and litter samples. PMID:25548939
Improving the analysis of composite endpoints in rare disease trials.
McMenamin, Martina; Berglind, Anna; Wason, James M S
2018-05-22
Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they take the form of responder indices that contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus using only the dichotomisations of the continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated that the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small-sample-adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect, the power of the augmented binary method is 20-55%, compared with 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. Estimates of the difference in response probabilities exhibit similar power, but both unadjusted methods demonstrate type I error rates of 6-8%. The small-sample-corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints.
We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.
Swezey, Robert; Shinn, Walter; Green, Carol; Drover, David R.; Hammer, Gregory B.; Schulman, Scott R.; Zajicek, Anne; Jett, David A.; Boss, Gerry R.
2013-01-01
Most hospital laboratories do not measure blood cyanide concentrations, and samples must be sent to reference laboratories. A simple method is needed for measuring cyanide in hospitals. The authors previously developed a method to quantify cyanide based on the high binding affinity of the vitamin B12 analog, cobinamide, for cyanide and a major spectral change observed for cyanide-bound cobinamide. This method is now validated in human blood, and the findings include a mean inter-assay accuracy of 99.1%, precision of 8.75% and a lower limit of quantification of 3.27 µM cyanide. The method was applied to blood samples from children treated with sodium nitroprusside and it yielded measurable results in 88 of 172 samples (51%), whereas the reference laboratory yielded results in only 19 samples (11%). In all 19 samples, the cobinamide-based method also yielded measurable results. The two methods showed reasonable agreement when analyzed by linear regression, but not when analyzed by a standard error of the estimate or paired t-test. Differences in results between the two methods may be because samples were assayed at different times on different sample types. The cobinamide-based method is applicable to human blood, and can be used in hospital laboratories and emergency rooms. PMID:23653045
Molecular cancer classification using a meta-sample-based regularized robust coding method.
Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen
2014-01-01
Previous studies have demonstrated that machine-learning-based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently, the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present meta-sample-based regularized robust coding classification (MRRCC), a novel and effective cancer classification technique that combines the meta-sample-based clustering idea with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are each independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient, while its prediction accuracy is equivalent to that of existing MSRC-based methods and better than that of other state-of-the-art dimension-reduction-based methods.
Liu, Gui-Long; Huang, Shi-Hong; Shi, Che-Si; Zeng, Bin; Zhang, Ke-Shi; Zhong, Xian-Ci
2018-02-10
Using copper thin-walled tubular specimens, the subsequent yield surfaces under pre-tension, pre-torsion and pre-combined tension-torsion are measured, where the single-sample and multi-sample methods are applied respectively to determine the yield stresses at a specified offset strain. The rule and characteristics of the evolution of the subsequent yield surface are investigated. Under different pre-strain conditions, the influence of the number of test points, the test sequence and the specified offset strain on the measurement of the subsequent yield surface, as well as the concave appearance of the measured yield surface, are studied. Moreover, the feasibility and validity of the two methods are compared. The main conclusions are as follows: (1) for either the single-sample or multi-sample method, the measured subsequent yield surfaces differ markedly from the cylindrical yield surfaces proposed by classical plasticity theory; (2) there are apparent differences between the results of the two methods: the multi-sample method is not influenced by the number of test points, the test order or the cumulative effect of residual plastic strain from other test points, whereas these factors strongly influence the single-sample method; and (3) the measured subsequent yield surface may appear concave; for the single-sample method it can be made convex by changing the test sequence, whereas for the multi-sample method the concavity disappears when a larger offset strain is specified.
Arnold, Mark E; Mueller-Doblies, Doris; Gosling, Rebecca J; Martelli, Francesca; Davies, Robert H
2015-01-01
Reports of Salmonella in ducks in the UK currently rely upon voluntary submissions from the industry, and as there is no harmonized statutory monitoring and control programme, it is difficult to compare data from different years in order to evaluate any trends in Salmonella prevalence in relation to sampling methodology. Therefore, the aim of this project was to assess the sensitivity of a selection of environmental sampling methods, including the sampling of faeces, dust and water troughs or bowls for the detection of Salmonella in duck flocks, and a range of sampling methods were applied to 67 duck flocks. Bayesian methods in the absence of a gold standard were used to provide estimates of the sensitivity of each of the sampling methods relative to the within-flock prevalence. There was a large influence of the within-flock prevalence on the sensitivity of all sample types, with sensitivity reducing as the within-flock prevalence reduced. Boot swabs (individual and pool of four), swabs of faecally contaminated areas and whole house hand-held fabric swabs showed the overall highest sensitivity for low-prevalence flocks and are recommended for use to detect Salmonella in duck flocks. The sample type with the highest proportion positive was a pool of four hair nets used as boot swabs, but this was not the most sensitive sample for low-prevalence flocks. All the environmental sampling types (faeces swabs, litter pinches, drag swabs, water trough samples and dust) had higher sensitivity than individual faeces sampling. None of the methods consistently identified all the positive flocks, and at least 10 samples would be required for even the most sensitive method (pool of four boot swabs) to detect a 5% prevalence. The sampling of dust had a low sensitivity and is not recommended for ducks.
Improved radiation dose efficiency in solution SAXS using a sheath flow sample environment
Kirby, Nigel; Cowieson, Nathan; Hawley, Adrian M.; Mudie, Stephen T.; McGillivray, Duncan J.; Kusel, Michael; Samardzic-Boban, Vesna; Ryan, Timothy M.
2016-01-01
Radiation damage is a major limitation to synchrotron small-angle X-ray scattering analysis of biomacromolecules. Flowing the sample during exposure helps to reduce the problem, but its effectiveness in the laminar-flow regime is limited by slow flow velocity at the walls of sample cells. To overcome this limitation, the coflow method was developed, where the sample flows through the centre of its cell surrounded by a flow of matched buffer. The method permits an order-of-magnitude increase of X-ray incident flux before sample damage, improves measurement statistics and maintains low sample concentration limits. The method also efficiently handles sample volumes of a few microlitres, can increase sample throughput, is intrinsically resistant to capillary fouling by sample and is suited to static samples and size-exclusion chromatography applications. The method unlocks further potential of third-generation synchrotron beamlines to facilitate new and challenging applications in solution scattering. PMID:27917826
Ha, Ji Won; Hahn, Jong Hoon
2017-02-01
Acupuncture sample injection is a simple method to deliver well-defined nanoliter-scale sample plugs in PDMS microfluidic channels. This acupuncture injection method in microchip CE has several advantages, including minimal sample consumption, the capability of serial injections of different sample solutions into the same microchannel, and the capability of injecting sample plugs at any desired position of a microchannel. Herein, we demonstrate that the simple and cost-effective acupuncture sample injection method can be used for PDMS microchip-based field-amplified sample stacking in a maximally simplified straight channel by applying a single potential. We achieved increased electropherogram signals in the sample-stacking case. Furthermore, we show that microchip CGE of a ΦX174 DNA-HaeIII digest can be performed with the acupuncture injection method on a glass microchip while minimizing sample loss and voltage-control hardware. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Method for chromium analysis and speciation
Aiken, Abigail M.; Peyton, Brent M.; Apel, William A.; Petersen, James N.
2004-11-02
A method of detecting a metal in a sample comprising a plurality of metals is disclosed. The method comprises providing the sample comprising a metal to be detected. The sample is added to a reagent solution comprising an enzyme and a substrate, where the enzyme is inhibited by the metal to be detected. An array of chelating agents is used to eliminate the inhibitory effects of additional metals in the sample. An enzymatic activity in the sample is determined and compared to an enzymatic activity in a control solution to detect the metal to be detected. A method of determining a concentration of the metal in the sample is also disclosed, as is a method of detecting a valence state of a metal.
Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E
2014-06-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered, unequal-probability-of-selection sample designs.
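The "inverting the design" idea above can be sketched crudely (this deterministic weight-expansion is a hypothetical simplification, not the paper's finite population Bayesian bootstrap, and the values and weights below are invented): expand each sampled unit by its weight into a synthetic population, after which simple random samples behave as if the design had never been complex.

```python
# Crude sketch of undoing a complex design: replicate each sampled unit by its
# sampling weight to form a synthetic population, then analyze simple random
# samples from it as IID data.
import random

random.seed(3)

# Sampled units as (value, sampling weight): low values were oversampled,
# so the raw sample mean (0.40) understates the weighted mean (160/220 ~ 0.727).
sample = [(0, 1.0)] * 60 + [(1, 4.0)] * 40

synthetic = []
for y, w in sample:
    synthetic.extend([y] * int(w))   # replicate each unit w_i times

# A simple random sample from the synthetic population estimates the
# design-corrected mean without any further weighting.
srs = random.sample(synthetic, 100)
est = sum(srs) / len(srs)
```

The Bayesian bootstrap version replaces the deterministic replication with draws from a posterior predictive distribution, so that synthetic populations also reflect sampling uncertainty.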
Myatt, Mark; Mai, Nguyen Phuong; Quynh, Nguyen Quang; Nga, Nguyen Huy; Tai, Ha Huy; Long, Nguyen Hung; Minh, Tran Hung; Limburg, Hans
2005-01-01
OBJECTIVE: To report on the use of lot quality-assurance sampling (LQAS) surveys undertaken within an area-sampling framework to identify priority areas for intervention with trachoma control activities in Viet Nam. METHODS: The LQAS survey method for the rapid assessment of the prevalence of active trachoma was adapted for use in Viet Nam with the aim of classifying individual communes by the prevalence of active trachoma among children in primary school. School-based sampling was used; school sites to be sampled were selected using an area-sampling approach. A total of 719 communes in 41 districts in 18 provinces were surveyed. FINDINGS: Survey staff found the LQAS survey method both simple and rapid to use after initial problems with area-sampling methods were identified and remedied. The method yielded a finer spatial resolution of prevalence than had been previously achieved in Viet Nam using semiquantitative rapid assessment surveys and multistage cluster-sampled surveys. CONCLUSION: When used with area-sampling techniques, the LQAS survey method has the potential to form the basis of survey instruments that can be used to efficiently target resources for interventions against active trachoma. With additional work, such methods could provide a generally applicable tool for effective programme planning and for the certification of the elimination of trachoma as a blinding disease. PMID:16283052
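The LQAS classification logic described above reduces to a simple binomial decision rule, sketched below. The sample size, decision threshold, and prevalence values are illustrative assumptions, not the parameters used in the Viet Nam surveys.

```python
# Hypothetical LQAS decision rule: examine n children per commune and flag the
# commune as "high prevalence" if more than d are found with active trachoma.
from math import comb

def prob_flagged(prev, n=50, d=7):
    """P(more than d positives among n children) under binomial sampling."""
    return 1.0 - sum(comb(n, k) * prev**k * (1 - prev)**(n - k)
                     for k in range(d + 1))

# Operating characteristic of the rule: communes with low prevalence are
# rarely flagged, communes with high prevalence almost always are.
low = prob_flagged(0.05)    # e.g. 5% prevalence commune
high = prob_flagged(0.30)   # e.g. 30% prevalence commune
```

Choosing n and d amounts to trading off these two error probabilities, which is what makes LQAS surveys both rapid and decision-oriented.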
Geldsetzer, Pascal; Fink, Günther; Vaikath, Maria; Bärnighausen, Till
2018-02-01
(1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method. Literature review, mathematical derivation, and Monte Carlo simulations. Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings. Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
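The length bias described above can be reproduced in a few lines of Monte Carlo (distributions and timings are illustrative assumptions, not the paper's simulation setup): an interviewer who becomes free at a random moment and waits for the next patient to exit intercepts the consultation in progress at that moment, which is more likely to be a long one.

```python
# Monte Carlo sketch of exit-interview length bias: sampling the consultation
# in progress at a random time overrepresents long consultations.
import bisect
import random

random.seed(4)

# One clinician sees patients back to back; consultation lengths vary.
durations = [random.expovariate(1 / 10.0) for _ in range(20000)]  # mean ~10 min
starts, t = [], 0.0
for d in durations:
    starts.append(t)
    t += d
total = t

sampled = []
for _ in range(2000):
    u = random.uniform(0, total)
    i = bisect.bisect_right(starts, u) - 1   # consultation in progress at time u
    sampled.append(durations[i])

pop_mean = sum(durations) / len(durations)
sampled_mean = sum(sampled) / len(sampled)
# For exponential lengths the intercepted consultation averages about twice
# the population mean; selecting the next patient to *enter* avoids this.
```

Switching the rule to "next patient entering the consultation room" samples patients independently of their consultation length, which is why it is unbiased under the paper's assumption.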
Parajulee, M N; Shrestha, R B; Leser, J F
2006-04-01
A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave results similar to the visual method in detecting adult thrips, but detected a significantly higher number of thrips larvae than visual sampling. Visual sampling detected the highest number of fleahoppers, followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between the vacuum and sweep net methods. However, based on fixed-precision cost reliability, sweep net sampling was the most cost-effective method, followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's power law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decisions based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with increasing fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
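The density-dependent minimum sample sizes described above follow from the standard fixed-precision formula built on Taylor's power law, variance = a * mean**b, which gives n = a * m**(b-2) / D**2. The coefficients a and b below are invented for illustration, not the fitted values from this study.

```python
# Sketch of the fixed-precision sample size formula underlying sequential
# sampling plans based on Taylor's power law (variance = a * mean**b).
from math import ceil

def min_sample_size(mean_density, a=2.0, b=1.5, precision=0.25):
    """n = a * m**(b - 2) / D**2, rounded up to whole plants."""
    return ceil(a * mean_density ** (b - 2) / precision ** 2)

# With b < 2 (aggregated but not extremely so), the required sample size
# shrinks as pest density rises, matching the pattern reported above.
n_low = min_sample_size(1.0)    # density: 1 insect per plant  -> 32 plants
n_high = min_sample_size(10.0)  # density: 10 insects per plant -> 11 plants
```

This is why the reported minimum sample sizes drop from 15-16 plants at one insect per plant to 4-9 plants at ten insects per plant: the exponent b - 2 is negative whenever dispersion grows more slowly than the square of the mean.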
Performance of Traditional and Molecular Methods for Detecting Biological Agents in Drinking Water
Francy, Donna S.; Bushon, Rebecca N.; Brady, Amie M.G.; Bertke, Erin E.; Kephart, Christopher M.; Likirdopulos, Christina A.; Mailot, Brian E.; Schaefer, Frank W.; Lindquist, H.D. Alan
2009-01-01
To reduce the impact from a possible bioterrorist attack on drinking-water supplies, analytical methods are needed to rapidly detect the presence of biological agents in water. To this end, 13 drinking-water samples were collected at 9 water-treatment plants in Ohio to assess the performance of a molecular method in comparison to traditional analytical methods that take longer to perform. Two 100-liter samples were collected at each site during each sampling event; one was seeded in the laboratory with six biological agents: Bacillus anthracis (B. anthracis), Burkholderia cepacia (as a surrogate for Bu. pseudomallei), Francisella tularensis (F. tularensis), Salmonella Typhi (S. Typhi), Vibrio cholerae (V. cholerae), and Cryptosporidium parvum (C. parvum). The seeded and unseeded samples were processed by ultrafiltration and analyzed by use of quantitative polymerase chain reaction (qPCR), a molecular method, and culture methods for bacterial agents or the immunomagnetic separation/fluorescent antibody (IMS/FA) method for C. parvum as traditional methods. Six replicate seeded samples were also processed and analyzed. For traditional methods, recoveries were highly variable between samples and even between some replicate samples, ranging from below detection to greater than 100 percent. Recoveries were significantly related to water pH, specific conductance, and dissolved organic carbon (DOC) for all bacteria combined by culture methods, but none of the water-quality characteristics tested were related to recoveries of C. parvum by IMS/FA. Recoveries were not determined by qPCR because of problems in quantifying organisms by qPCR in the composite seed. Instead, qPCR results were reported as detected, not detected (no qPCR signal), or +/- detected (Cycle Threshold or 'Ct' values were greater than 40). Several sample results by qPCR were omitted from the dataset because of possible problems with qPCR reagents, primers, and probes.
For the remaining 14 qPCR results (including some replicate samples), F. tularensis and V. cholerae were detected in all samples after ultrafiltration, B. anthracis was detected in 13 and +/- detected in 1 sample, and C. parvum was detected in 9 and +/- detected in 4 samples. Bu. cepacia was detected in nine samples, +/- detected in two samples, and not detected in three samples (for two of the three samples not detected, a different strain was used). The qPCR assay for V. cholerae provided two false-positive (but late) signals in one unseeded sample. Numbers found by qPCR after ultrafiltration were significantly or nearly significantly related to those found by traditional methods for B. anthracis, F. tularensis, and V. cholerae, but not for Bu. cepacia and C. parvum. A qPCR assay for S. Typhi was not available. The qPCR method can be used to rapidly detect B. anthracis, F. tularensis, and V. cholerae with some certainty in drinking-water samples, but additional work would be needed to optimize and test qPCR for Bu. cepacia and C. parvum and to establish relations to traditional methods. The specificity of the V. cholerae assay needs to be further investigated. Evidence is provided that ultrafiltration and qPCR are promising methods to rapidly detect biological agents in the Nation's drinking-water supplies and thus reduce the impact and consequences of intentional bioterrorist events. To our knowledge, this is the first study to compare the use of traditional and qPCR methods to detect biological agents in large-volume drinking-water samples.
Brady, Amie M.G.; Bushon, Rebecca N.; Bertke, Erin E.
2009-01-01
Water quality at beaches is monitored for fecal indicator bacteria by traditional, culture-based methods that can take 18 to 24 hours to obtain results. A rapid detection method that provides estimated concentrations of fecal indicator bacteria within 1 hour from the start of sample processing would allow beach managers to post advisories or close the beach when the conditions are actually considered unsafe instead of a day later, when conditions may have changed. A rapid method that couples immunomagnetic separation with adenosine triphosphate detection (IMS/ATP rapid method) was evaluated through monitoring of Escherichia coli (E. coli) at three Lake Erie beaches in Ohio (Edgewater and Villa Angela in Cleveland and Huntington in Bay Village). Beach water samples were collected between 4 and 5 days per week during the recreational seasons (May through September) of 2006 and 2007. Composite samples were created in the lab from two point samples collected at each beach and were shown to be comparable substitutes for analysis of two individual samples. E. coli concentrations in composite samples, as determined by the culture-based method, ranged from 4 to 24,000 colony-forming units per 100 milliliters during this study across all beaches. Turbidity also was measured for each sample and ranged from 0.8 to 260 nephelometric turbidity ratio units. Environmental variables were noted at the time of sampling, including number of birds at the beach and wave height. Rainfall amounts were measured at National Weather Service stations at local airports. Turbidity, rainfall, and wave height were significantly related to the culture-based method results each year and for both years combined at each beach. The number of birds at the beach was significantly related to the culture-based method results only at Edgewater during 2006 and during both years combined. Results of the IMS/ATP method were compared to results of the culture-based method for samples by year for each beach.
The IMS/ATP method underwent several changes and refinements during the first year, including changes in reagents and antibodies and alterations to the method protocol. Because of the changes in the method, results from the two years of study could not be combined. Kendall's tau correlation coefficients for relations between the IMS/ATP and culture-based methods were significant except for samples collected during 2006 at Edgewater and for samples collected during 2007 at Villa Angela. Further, relations were stronger for samples collected in 2006 than for those collected in 2007, except at Edgewater, where the reverse was observed. The 2007 dataset was examined to identify possible reasons for the observed difference in significance of relations by year. By dividing the 2007 dataset into groups as a function of sampling date, relations (Kendall's tau) between methods were observed to be stronger for samples collected earlier in the season than for those collected later in the season. At Edgewater and Villa Angela, there were more birds at the beach at time of sampling later in the season compared to earlier in the season. (The number of birds was not examined at Huntington.) Also, more wet days (when rainfall during the 24 hours prior to sampling was greater than 0.05 inch) were sampled later in the season compared to earlier in the season. Differences in the dominant fecal source may explain the change in the relations between the culture-based and IMS/ATP methods.
Method and apparatus for data sampling
Odell, Daniel M. C.
1994-01-01
A method and apparatus for sampling radiation detector outputs and determining event data from the collected samples. The method uses high-speed sampling of the detector output, the conversion of the samples to digital values, and the discrimination of the digital values so that digital values representing detected events are determined. The high-speed sampling and digital conversion are performed by an A/D sampler that samples the detector output at a rate high enough to produce numerous digital samples for each detected event. The digital discrimination identifies those digital samples that are not representative of detected events. The sampling and discrimination also provide for temporary or permanent storage, either serially or in parallel, to a digital storage medium.
40 CFR 80.8 - Sampling methods for gasoline and diesel fuel.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Sampling methods for gasoline and... gasoline and diesel fuel. The sampling methods specified in this section shall be used to collect samples of gasoline and diesel fuel for purposes of determining compliance with the requirements of this part...
40 CFR 80.8 - Sampling methods for gasoline and diesel fuel.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Sampling methods for gasoline and... gasoline and diesel fuel. The sampling methods specified in this section shall be used to collect samples of gasoline and diesel fuel for purposes of determining compliance with the requirements of this part...
40 CFR 80.8 - Sampling methods for gasoline and diesel fuel.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Sampling methods for gasoline and... gasoline and diesel fuel. The sampling methods specified in this section shall be used to collect samples of gasoline and diesel fuel for purposes of determining compliance with the requirements of this part...
Unconstrained Enhanced Sampling for Free Energy Calculations of Biomolecules: A Review
Miao, Yinglong; McCammon, J. Andrew
2016-01-01
Free energy calculations are central to understanding the structure, dynamics and function of biomolecules. Yet insufficient sampling of biomolecular configurations is often regarded as one of the main sources of error. Many enhanced sampling techniques have been developed to address this issue. Notably, enhanced sampling methods based on biasing collective variables (CVs), including the widely used umbrella sampling, adaptive biasing force and metadynamics, have been discussed in a recent excellent review (Abrams and Bussi, Entropy, 2014). Here, we aim to review enhanced sampling methods that do not require predefined system-dependent CVs for biomolecular simulations and as such do not suffer from the hidden energy barrier problem as encountered in the CV-biasing methods. These methods include, but are not limited to, replica exchange/parallel tempering, self-guided molecular/Langevin dynamics, essential energy space random walk and accelerated molecular dynamics. While it is overwhelming to describe all details of each method, we provide a summary of the methods along with the applications and offer our perspectives. We conclude with challenges and prospects of the unconstrained enhanced sampling methods for accurate biomolecular free energy calculations. PMID:27453631
Crepeau, Kathryn L.; Fram, Miranda S.; Bush, Noel
2004-01-01
An analytical method for the determination of the trihalomethane formation potential of water samples has been developed. The trihalomethane formation potential is measured by dosing samples with chlorine under specified conditions of pH, temperature, incubation time, darkness, and residual-free chlorine, and then analyzing the resulting trihalomethanes by purge and trap/gas chromatography equipped with an electron capture detector. Detailed explanations of the method and quality-control practices are provided. Method validation experiments showed that the trihalomethane formation potential varies as a function of time between sample collection and analysis, residual-free chlorine concentration, method of sample dilution, and the concentration of bromide in the sample.
Mechanisms of fracture of ring samples made of FCC metals on loading with magnetic-pulse method
NASA Astrophysics Data System (ADS)
Morozov, Viktor; Kats, Victor; Savenkov, Georgiy; Lukin, Anton
2018-05-01
Results of a study of the deformation and fracture of ring-shaped samples made of thin strips of copper, aluminum, and steel over a wide range of loading velocities are presented. Three schemes of the magnetic-pulse method, developed by the authors, are used to load the samples. A method for fracturing samples with high electrical resistance (e.g., steel) is proposed. The crack velocity at sample fracture is estimated, and the fracture surfaces are inspected. Mechanisms of dynamic fracture of the samples are discussed.
Evaluating performance of stormwater sampling approaches using a dynamic watershed model.
Ackerman, Drew; Stein, Eric D; Ritter, Kerry J
2011-09-01
Accurate quantification of stormwater pollutant levels is essential for estimating overall contaminant discharge to receiving waters. Numerous sampling approaches exist that attempt to balance accuracy against the costs associated with the sampling method. This study employs a novel and practical approach of evaluating the accuracy of different stormwater monitoring methodologies using stormflows and constituent concentrations produced by a fully validated continuous-simulation watershed model. A major advantage of using a watershed model to simulate pollutant concentrations is that a large number of storms representing a broad range of conditions can be applied in testing the various sampling approaches. Seventy-eight distinct methodologies were evaluated by "virtual sampling" of 166 simulated storms of varying size, intensity, and duration, representing 14 years of storms in Ballona Creek near Los Angeles, California. The 78 methods can be grouped into four general strategies: volume-paced compositing, time-paced compositing, pollutograph sampling, and microsampling. The performance of each sampling strategy was evaluated by comparing (1) the median relative error between the virtually sampled and the true modeled event mean concentration (EMC) of each storm (accuracy), (2) the median absolute deviation about the median ("MAD") of the relative error (precision), and (3) the percentage of storms for which sampling methods were within 10% of the true EMC (a combined measure of accuracy and precision). Finally, costs associated with site setup, sampling, and laboratory analysis were estimated for each method. Pollutograph sampling consistently outperformed the other three methods in terms of both accuracy and precision, but was the most costly method evaluated. Time-paced sampling consistently underestimated, while volume-paced sampling overestimated, the storm EMCs. Microsampling performance approached that of pollutograph sampling at a substantial cost savings.
The most efficient method for routine stormwater monitoring in terms of a balance between performance and cost was volume-paced microsampling, with variable sample pacing to ensure that the entirety of the storm was captured. Pollutograph sampling is recommended if the data are to be used for detailed analysis of runoff dynamics.
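The three performance measures used above translate directly into code. A minimal sketch in Python, with made-up EMC values standing in for the watershed model's output:

```python
import statistics

def sampling_performance(true_emcs, sampled_emcs):
    """Summarize a virtual sampling method against true event mean
    concentrations (EMCs): median relative error (accuracy), MAD of
    the relative error (precision), and fraction of storms sampled
    to within 10% of the true EMC."""
    rel_err = [(s - t) / t for t, s in zip(true_emcs, sampled_emcs)]
    med = statistics.median(rel_err)
    mad = statistics.median([abs(e - med) for e in rel_err])
    within10 = sum(abs(e) <= 0.10 for e in rel_err) / len(rel_err)
    return med, mad, within10
```

A median relative error near zero with a small MAD would correspond to a method that is both accurate and precise in the study's terms; a nonzero median signals the systematic under- or over-estimation reported for time-paced and volume-paced sampling.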
Myatt, Mark; Mai, Nguyen Phuong; Quynh, Nguyen Quang; Nga, Nguyen Huy; Tai, Ha Huy; Long, Nguyen Hung; Minh, Tran Hung; Limburg, Hans
2005-10-01
To report on the use of lot quality-assurance sampling (LQAS) surveys undertaken within an area-sampling framework to identify priority areas for intervention with trachoma control activities in Viet Nam. The LQAS survey method for the rapid assessment of the prevalence of active trachoma was adapted for use in Viet Nam with the aim of classifying individual communes by the prevalence of active trachoma among children in primary school. School-based sampling was used; school sites to be sampled were selected using an area-sampling approach. A total of 719 communes in 41 districts in 18 provinces were surveyed. Survey staff found the LQAS survey method both simple and rapid to use after initial problems with area-sampling methods were identified and remedied. The method yielded a finer spatial resolution of prevalence than had been previously achieved in Viet Nam using semiquantitative rapid assessment surveys and multistage cluster-sampled surveys. When used with area-sampling techniques, the LQAS survey method has the potential to form the basis of survey instruments that can be used to efficiently target resources for interventions against active trachoma. With additional work, such methods could provide a generally applicable tool for effective programme planning and for the certification of the elimination of trachoma as a blinding disease.
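An LQAS classification rule of the kind described above examines a small sample from each commune and flags the commune as high prevalence when the count of positive children reaches a decision threshold; its error rates follow from the binomial distribution. A minimal sketch (the sample size, threshold, and prevalence levels below are illustrative, not those of the Viet Nam survey):

```python
from math import comb

def prob_flagged(n, d, prevalence):
    """P(at least d positives among n sampled children) when each
    child is positive with probability `prevalence` -- i.e., the
    probability the LQAS rule classifies the commune as high
    prevalence."""
    return sum(comb(n, k) * prevalence**k * (1 - prevalence)**(n - k)
               for k in range(d, n + 1))
```

With, say, n = 30 and threshold d = 5, a truly high-prevalence commune (30%) is flagged with probability above 0.95, while a low-prevalence commune (5%) is flagged with probability below 0.05, which is the kind of trade-off an LQAS design balances.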
DuPont Qualicon BAX System polymerase chain reaction assay. Performance Tested Method 100201.
Tice, George; Andaloro, Bridget; Fallon, Dawn; Wallace, F Morgan
2009-01-01
A recent outbreak of Salmonella in peanut butter has highlighted the need for validation of rapid detection methods. A multilaboratory study for detecting Salmonella in peanut butter was conducted as part of the AOAC Research Institute Emergency Response Validation program for methods that detect outbreak threats to food safety. Three sites tested spiked samples from the same master mix according to the U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA-BAM) method and the BAX System method. Salmonella Typhimurium (ATCC 14028) was grown in brain heart infusion for 24 h at 37 degrees C, then diluted to appropriate levels for sample inoculation. Master samples of peanut butter were spiked at high and low target levels, mixed, and allowed to equilibrate at room temperature for 2 weeks. Spike levels were low [1.08 most probable number (MPN)/25 g], high (11.5 MPN/25 g), and unspiked to serve as negative controls. Each master sample was divided into 25 g portions and coded to blind the samples. Twenty portions of each spiked master sample and five portions of the unspiked sample were tested at each site. At each testing site, samples were blended in 25 g portions with 225 mL prewarmed lactose broth until thoroughly homogenized, then allowed to remain at room temperature for 55-65 min. Samples were adjusted to a pH of 6.8 +/- 0.2, if necessary, and incubated for 22-26 h at 35 degrees C. Across the three reporting laboratories, the BAX System detected Salmonella in 10/60 low-spike samples and 58/60 high-spike samples. The reference FDA-BAM method yielded positive results for 11/60 low-spike and 58/60 high-spike samples. Neither method demonstrated positive results for any of the 15 unspiked samples.
Goeman, Valerie R; Tinkler, Stacy H; Hammac, G Kenitra; Ruple, Audrey
2018-04-01
Environmental surveillance for Salmonella enterica can be used for early detection of contamination; thus routine sampling is an integral component of infection control programs in hospital environments. At the Purdue University Veterinary Teaching Hospital (PUVTH), the technique regularly employed in the large animal hospital for sample collection uses sterile gauze sponges for environmental sampling, which has proven labor-intensive and time-consuming. Alternative sampling methods use Swiffer brand electrostatic wipes for environmental sample collection, which are reportedly effective and efficient. It was hypothesized that use of Swiffer wipes for sample collection would be more efficient and less costly than the use of gauze sponges. A head-to-head comparison between the 2 sampling methods was conducted in the PUVTH large animal hospital and relative agreement, cost-effectiveness, and sampling efficiency were compared. There was fair agreement in culture results between the 2 sampling methods, but Swiffer wipes required less time and less physical effort to collect samples and were more cost-effective.
Duyvejonck, Hans; Cools, Piet; Decruyenaere, Johan; Roelens, Kristien; Noens, Lucien; Vermeulen, Stefan; Claeys, Geert; Decat, Ellen; Van Mechelen, Els; Vaneechoutte, Mario
2015-01-01
Candida species are known as opportunistic pathogens and a possible cause of invasive infections. Because of their species-specific antimycotic resistance patterns, reliable techniques for their detection, quantification, and identification are needed. We validated a DNA amplification method for direct detection of Candida spp. from clinical samples, namely ITS2-High Resolution Melting Analysis (direct method), by comparing it with a culture and MALDI-TOF mass spectrometry based method (indirect method) to establish the presence of Candida species in three different types of clinical samples. A total of 347 clinical samples, i.e., throat swabs, rectal swabs, and vaginal swabs, were collected from the gynaecology/obstetrics, intensive care, and haematology wards at the Ghent University Hospital, Belgium. For the direct method, ITS2-HRM was preceded by NucliSENS easyMAG DNA extraction, directly on the clinical samples. For the indirect method, clinical samples were cultured on Candida ID and individual colonies were identified by MALDI-TOF. For 83.9% of the samples there was complete concordance between both techniques: the same Candida species were detected in 31.1% of the samples, or no Candida species were detected in 52.8% of the samples. In 16.1% of the clinical samples, discrepant results were obtained, of which only 6.01% were considered major discrepancies. Discrepancies occurred mostly when overall numbers of Candida cells in the samples were low and/or when multiple species were present in the sample. Most of the discrepancies could be decided in favor of the direct method: in some samples no yeast could be cultured although low amounts were detected by the direct method, and in others high quantities of Candida robusta according to ITS2-HRM were missed by culture on Candida ID agar. It remains to be decided whether the diagnostic advantages of the direct method compensate for its disadvantages.
Methyl-CpG island-associated genome signature tags
Dunn, John J
2014-05-20
Disclosed is a method for analyzing the organismic complexity of a sample through analysis of the nucleic acid in the sample. In the disclosed method, through a series of steps, including digestion with a type II restriction enzyme, ligation of capture adapters and linkers and digestion with a type IIS restriction enzyme, genome signature tags are produced. The sequences of a statistically significant number of the signature tags are determined and the sequences are used to identify and quantify the organisms in the sample. Various embodiments of the invention described herein include methods for using single point genome signature tags to analyze the related families present in a sample, methods for analyzing sequences associated with hyper- and hypo-methylated CpG islands, methods for visualizing organismic complexity change in a sampling location over time and methods for generating the genome signature tag profile of a sample of fragmented DNA.
Methods for Determining Particle Size Distributions from Nuclear Detonations.
1987-03-01
[Abstract not available; only table-of-contents fragments of this report survive, referencing a summary of the sample preparation method, set parameters for PCS, analysis by vendors, Brookhaven analyses using the method of cumulants, histogram-method analyses of samples R-3 and R-8, and TEM particle measurements.]
NASA Astrophysics Data System (ADS)
Peselnick, L.
1982-08-01
An ultrasonic method is presented which combines features of the differential path and the phase comparison methods. The proposed differential path phase comparison method, referred to as the "hybrid" method for brevity, eliminates errors resulting from phase changes in the bond between the sample and buffer rod. Define r(P) [and R(P)] as the square of the normalized frequency for cancellation of sample waves for shear [and for compressional] waves. Define N as the number of wavelengths in twice the sample length. The pressure derivatives r'(P) and R'(P) for samples of Alcoa 2024-T4 aluminum were obtained by using the phase comparison and the hybrid methods. The values of the pressure derivatives obtained by using the phase comparison method show variations by as much as 40% for small values of N (N < 50). The pressure derivatives as determined from the hybrid method are reproducible to within ±2%, independent of N. The values of the pressure derivatives determined by the phase comparison method for large N are the same as those determined by the hybrid method. Advantages of the hybrid method are (1) no pressure-dependent phase shift at the buffer-sample interface, (2) elimination of deviatoric stress in the sample portion of the sample assembly with application of hydrostatic pressure, and (3) operation at lower ultrasonic frequencies (for comparable sample lengths), which eliminates detrimental high-frequency ultrasonic problems. A reduction of the uncertainties of the pressure derivatives of single crystals and of low-porosity polycrystals permits extrapolation of such experimental data to deeper mantle depths.
Luo, Yong; Wu, Dapeng; Zeng, Shaojiang; Gai, Hongwei; Long, Zhicheng; Shen, Zheng; Dai, Zhongpeng; Qin, Jianhua; Lin, Bingcheng
2006-09-01
A novel sample injection method for chip CE was presented. This injection method uses hydrostatic pressure, generated by emptying the sample waste reservoir, for sample loading and electrokinetic force for dispensing. The injection was performed on a double-cross microchip. One cross, created by the sample and separation channels, is used for formation of a sample plug. Another cross, formed by the sample and controlling channels, is used for plug control. By varying the electric field in the controlling channel, the sample plug volume can be linearly adjusted. Hydrostatic pressure takes advantage of its ease of generation on a microfluidic chip, without any electrode or external pressure pump, thus allowing a sample injection with a minimum number of electrodes. The potential of this injection method was demonstrated by a four-separation-channel chip CE system. In this system, parallel sample separation can be achieved with only two electrodes, which is otherwise impossible with conventional injection methods. Hydrostatic pressure maintains the sample composition during the sample loading, allowing the injection to be free of injection bias.
Yi, Ming; Stephens, Robert M.
2008-01-01
Analysis of microarray and other high-throughput (HTP) data often involves identification of genes consistently up- or down-regulated across samples as the first step in extraction of biological meaning. This gene-level paradigm can be limited as a result of valid sample fluctuations and biological complexities. In this report, we describe a novel method, SLEPR, which eliminates this limitation by relying on pathway-level consistencies. Our method first selects the sample-level differentiated genes from each individual sample, capturing genes missed by other analysis methods, ascertains the enrichment levels of associated pathways from each of those lists, and then ranks annotated pathways based on the consistency of enrichment levels of individual samples from both sample classes. As a proof of concept, we have used this method to analyze three public microarray datasets with a direct comparison with the GSEA method, one of the most popular pathway-level analysis methods in the field. We found that our method was able to reproduce the earlier observations with significant improvements in depth of coverage for validated or expected biological themes, but also produced additional insights that make biological sense. This new method extends existing analysis approaches and facilitates integration of different types of HTP data. PMID:18818771
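The abstract does not spell out SLEPR's enrichment step, but pathway-enrichment analyses of this kind commonly rest on a hypergeometric test: given a list of differentiated genes, how surprising is its overlap with a pathway's gene set? A minimal, generic sketch of that test (not the SLEPR algorithm itself):

```python
from math import comb

def enrichment_pvalue(n_genome, n_pathway, n_selected, n_overlap):
    """Hypergeometric upper-tail P(overlap >= n_overlap): the chance
    of seeing at least this many pathway genes among the selected
    genes if the selection were random draws from the genome."""
    total = comb(n_genome, n_selected)
    return sum(comb(n_pathway, k) *
               comb(n_genome - n_pathway, n_selected - k)
               for k in range(n_overlap,
                              min(n_pathway, n_selected) + 1)) / total
```

Ranking pathways by such per-sample enrichment scores, rather than by a single gene list, is the pathway-level consistency idea the abstract describes.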
Flotemersch, Joseph E; North, Sheila; Blocksom, Karen A
2014-02-01
Benthic macroinvertebrates are sampled in streams and rivers as one of the assessment elements of the US Environmental Protection Agency's National Rivers and Streams Assessment. In a 2006 report, the recommendation was made that different yet comparable methods be evaluated for different types of streams (e.g., low gradient vs. high gradient). Consequently, a research element was added to the 2008-2009 National Rivers and Streams Assessment to conduct a side-by-side comparison of the standard macroinvertebrate sampling method with an alternate method specifically designed for low-gradient wadeable streams and rivers that focused more on stream edge habitat. Samples were collected using each method at 525 sites in five of nine aggregate ecoregions located in the conterminous USA. Methods were compared using the benthic macroinvertebrate multimetric index developed for the 2006 Wadeable Streams Assessment. Statistical analysis did not reveal any trends that would suggest the overall assessment of low-gradient streams on a regional or national scale would change if the alternate method was used rather than the standard sampling method, regardless of the gradient cutoff used to define low-gradient streams. Based on these results, the National Rivers and Streams Survey should continue to use the standard field method for sampling all streams.
Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat
2018-03-01
To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method) or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size, reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the number of patients needed to provide 80% power at α = 0.05 to reject the null hypothesis H0: ES = 0 versus the alternative hypotheses H1: ES = ES^, ES = ES^L, and ES = ES^U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in our study's ES estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using the ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample sizes for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
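The sample sizes above come from a one-sample t-test power calculation at 80% power and α = 0.05. A rough normal-approximation version of that calculation can be sketched as follows (the t-based calculation used in the study gives slightly larger n, and the effect size in the example is illustrative, not taken from the study):

```python
from math import ceil
from statistics import NormalDist

def approx_sample_size(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation n for a two-sided one-sample test of
    H0: ES = 0 against H1: ES = effect_size at the given alpha
    and power: n = ((z_{1-alpha/2} + z_{power}) / ES)**2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return ceil(((z_alpha + z_power) / effect_size) ** 2)
```

Because n scales with 1/ES^2, the wide CIs on ES^ in a small pilot such as this one propagate into very wide CIs on the projected sample size, which is exactly the 22 (10-245) pattern reported.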
Ng, Ding-Quan; Liu, Shu-Wei; Lin, Yi-Pin
2018-09-15
In this study, a sampling campaign with a total of nine sampling events investigating lead in drinking water was conducted at 7 sampling locations in an old building with lead pipes in service in part of the building on the National Taiwan University campus. This study aims to assess the effectiveness of four different sampling methods, namely first draw sampling, sequential sampling, random daytime sampling, and flush sampling, in lead contamination detection. In 3 of the 7 sampling locations without lead pipes, lead could not be detected (<1.1 μg/L) in most samples regardless of the sampling method. On the other hand, in the 4 sampling locations where lead pipes still existed, total lead concentrations >10 μg/L were consistently observed in 3 locations using any of the four sampling methods, while the remaining location was identified as contaminated using sequential sampling. High lead levels were consistently measured by the four sampling methods in the 3 locations in which particulate lead was either predominant or comparable to soluble lead. Compared to first draw and random daytime sampling, although flush sampling had a high tendency to reduce total lead in samples at lead-contaminated sites, the extent of lead reduction was location-dependent and not dependent on flush durations between 5 and 10 min. Overall, first draw sampling and random daytime sampling were reliable and effective in determining lead contamination in this study. Flush sampling could reveal the contamination if the extent is severe but tends to underestimate lead exposure risk. Copyright © 2018 Elsevier B.V. All rights reserved.
Krempa, Heather M.
2015-10-29
Relative percent differences between methods were greater than 10 percent for most analyzed trace elements. Barium, cobalt, manganese, and boron had concentrations that were significantly different between sampling methods. Barium, molybdenum, boron, and uranium concentrations indicate a close association between pump and grab samples based on bivariate plots and simple linear regressions. Grab-sample concentrations were generally larger than pump concentrations for these elements, which may be due to the use of a larger pore-size filter for grab samples. Analysis of zinc blank samples suggests zinc contamination in filtered grab samples. Variations of analyzed trace elements between pump and grab samples could reduce the ability to monitor temporal changes and potential groundwater contamination threats. The degree of precision necessary for monitoring potential groundwater threats, along with the application objectives, needs to be considered when determining acceptable amounts of variation.
Representativeness of direct observations selected using a work-sampling equation.
Sharp, Rebecca A; Mudford, Oliver C; Elliffe, Douglas
2015-01-01
Deciding on appropriate sampling to obtain representative samples of behavior is important but not straightforward, because the relative duration of the target behavior may affect its observation in a given sampling interval. Work-sampling methods, which offer a way to adjust the frequency of sampling according to a priori or ongoing estimates of the behavior to achieve a preselected level of representativeness, may provide a solution. Full-week observations of 7 behaviors were conducted for 3 students with autism spectrum disorder and intellectual disabilities. Work-sampling methods were used to select momentary time samples from the full time-of-interest, which produced representative samples. However, work sampling required impractically high numbers of time samples to obtain representative samples. More practical momentary time samples produced less representative samples, particularly for low-duration behaviors. The utility and limits of work-sampling methods for applied behavior analysis are discussed. © Society for the Experimental Analysis of Behavior.
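The abstract above does not reproduce the work-sampling equation itself; the conventional form estimates the number of momentary time samples needed to observe a behavior's proportion within a chosen absolute error at a chosen confidence level. The sketch below assumes that standard formula (n = z² p(1−p)/e²); the specific inputs are illustrative, not the study's parameters.

```python
import math

def work_sample_size(p: float, error: float, z: float = 1.96) -> int:
    """Number of momentary time samples needed so the observed proportion p
    is estimated within +/- `error` (absolute) at the confidence level
    implied by z (1.96 corresponds to ~95%).
    Standard work-sampling formula: n = z^2 * p * (1 - p) / error^2."""
    return math.ceil(z ** 2 * p * (1.0 - p) / error ** 2)

# Worst-case variance (p = 0.5) with a 5-point error tolerance
print(work_sample_size(p=0.50, error=0.05))
# A low-duration behavior (p = 0.05) with a tight 1-point tolerance
# requires far more samples, matching the impracticality noted above
print(work_sample_size(p=0.05, error=0.01))
```

The second case illustrates why low-duration behaviors drove the impractically high sample counts reported in the study: tightening the absolute error for a rare behavior inflates n rapidly.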
Evaluating Composite Sampling Methods of Bacillus Spores at Low Concentrations
Hess, Becky M.; Amidan, Brett G.; Anderson, Kevin K.; Hutchison, Janine R.
2016-01-01
Restoring all facility operations after the 2001 Amerithrax attacks took years to complete, highlighting the need to reduce remediation time. Some of the most time-intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite (SM-SPC): a single cellulose sponge samples multiple coupons with a single pass across each coupon; 2) single medium multi-pass composite (SM-MPC): a single cellulose sponge samples multiple coupons with multiple passes across each coupon; and 3) multi-medium post-sample composite (MM-MPC): a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm2). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted dry wallboard) and three grime-coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p < 0.0001) and coupon material (p = 0.0006). Recovery efficiency (RE) was higher overall using the MM-MPC method compared to the SM-SPC and SM-MPC methods. RE with the MM-MPC method for the concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, dry wall, and stainless steel for clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but lower for ceramic tile.
These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event of clean or dirty surfaces. PMID:27736999
SnagPRO: snag and tree sampling and analysis methods for wildlife
Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough
2008-01-01
We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...
ERIC Educational Resources Information Center
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.
2007-01-01
A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…
An evaluation of methods for estimating decadal stream loads
NASA Astrophysics Data System (ADS)
Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-11-01
Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, as well as measurements of specific conductance, which was used as a surrogate for dissolved-solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen; lower accuracy is observed for the estimation of nitrate, total phosphorus, and suspended-sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias.
Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
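Beale's ratio estimator, singled out above for its accuracy and low bias, scales the mean flow of the whole estimation period by the load-to-flow ratio observed on sampled days, with a bias-correction factor. The sketch below follows the textbook form of the estimator; the variable names and toy numbers are illustrative, not data from the study.

```python
from statistics import mean

def beale_load(sample_loads, sample_flows, mean_flow_all_days):
    """Bias-corrected Beale ratio estimate of mean daily constituent load.

    sample_loads       : loads on the n sampled days (e.g. kg/day)
    sample_flows       : flows on the same sampled days
    mean_flow_all_days : mean flow over every day in the estimation period
    """
    n = len(sample_loads)
    l_bar, q_bar = mean(sample_loads), mean(sample_flows)
    # Sample covariance of load and flow, and variance of flow (n - 1 divisor)
    s_lq = sum((l - l_bar) * (q - q_bar)
               for l, q in zip(sample_loads, sample_flows)) / (n - 1)
    s_qq = sum((q - q_bar) ** 2 for q in sample_flows) / (n - 1)
    # Bias-correction factor approaches 1 as n grows
    correction = (1 + s_lq / (n * l_bar * q_bar)) / (1 + s_qq / (n * q_bar ** 2))
    return mean_flow_all_days * (l_bar / q_bar) * correction

# Toy example: loads exactly proportional to flow, so the estimate is simply
# mean_flow_all_days times the load/flow ratio
print(beale_load([2.0, 4.0, 6.0], [1.0, 2.0, 3.0], mean_flow_all_days=2.5))
```

Multiplying the returned mean daily load by the number of days in the period gives the total load; the flow-ratio structure is what lets the estimator exploit continuous discharge records between water-quality samples.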
Alum, Absar; Rock, Channah; Abbaszadegan, Morteza
2014-01-01
For land application, biosolids are classified as Class A or Class B based on the levels of bacterial, viral, and helminth pathogens in residual biosolids. The current EPA methods for the detection of these groups of pathogens in biosolids involve discrete steps; a separate sample is processed independently to quantify the number of each group of pathogens. The aim of the study was to develop a unified method for simultaneous processing of a single biosolids sample to recover bacterial, viral, and helminth pathogens. In the first stage of developing a simultaneous method, nine eluents were compared for their efficiency in recovering viruses from a 100 g spiked biosolids sample. In the second stage, the three top-performing eluents were thoroughly evaluated for the recovery of bacteria, viruses, and helminths. For all three groups of pathogens, the glycine-based eluent provided higher recovery than the beef extract-based eluent. Additional experiments were performed to optimize performance of the glycine-based eluent under various procedural factors, such as solids-to-eluent ratio, stir time, and centrifugation conditions. Finally, the new method was directly compared with the EPA methods for the recovery of the three groups of pathogens spiked in duplicate samples of biosolids collected from different sources. For viruses, the new method yielded up to 10% higher recoveries than the EPA method. For bacteria and helminths, recoveries were 74% and 83% by the new method compared to 34% and 68% by the EPA method, respectively. The unified sample processing method significantly reduces the time required for processing biosolids samples for different groups of pathogens; it is less impacted by the intrinsic variability of samples, while providing higher yields (P = 0.05) and greater consistency than the current EPA methods.
A STANDARDIZED ASSESSMENT METHOD (SAM) FOR RIVERINE MACROINVERTEBRATES
A macroinvertebrate sampling method for large rivers based on desirable characteristics of existing nonwadeable methods was developed and tested. Six sites each were sampled on the Great Miami and Kentucky Rivers, reflecting a human disturbance gradient. Samples were collected ...
CTEPP STANDARD OPERATING PROCEDURE FOR PACKING AND SHIPPING STUDY SAMPLES (SOP-3.11)
This SOP describes the methods for packing and shipping study samples. These methods are for packing and shipping biological and environmental samples. The methods have been tested and used in the previous pilot studies.
Nonprobability and probability-based sampling strategies in sexual science.
Catania, Joseph A; Dolcini, M Margaret; Orellana, Roberto; Narayanan, Vasudah
2015-01-01
With few exceptions, much of sexual science builds upon data from opportunistic nonprobability samples of limited generalizability. Although probability-based studies are considered the gold standard in terms of generalizability, they are costly to apply to many of the hard-to-reach populations of interest to sexologists. The present article discusses recent conclusions by sampling experts that are relevant to sexual science's reliance on nonprobability methods. In this regard, we provide an overview of Internet sampling as a useful, cost-efficient, nonprobability sampling method of value to sex researchers conducting modeling work or clinical trials. We also argue that probability-based sampling methods may be more readily applied in sex research with hard-to-reach populations than is typically thought. In this context, we provide three case studies that utilize qualitative and quantitative techniques directed at reducing limitations in applying probability-based sampling to hard-to-reach populations: indigenous Peruvians, African American youth, and urban men who have sex with men (MSM). Recommendations are made with regard to presampling studies, adaptive and disproportionate sampling methods, and strategies that may be utilized in evaluating nonprobability and probability-based sampling methods.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-06
... Analytic Methods and Sampling Procedures for the United States National Residue Program for Meat, Poultry... implementing several multi-residue methods for analyzing samples of meat, poultry, and egg products for animal.... These modern, high-efficiency methods will conserve resources and provide useful and reliable results...
USDA-ARS?s Scientific Manuscript database
A sample preparation method was evaluated for the determination of polybrominated diphenyl ethers (PBDEs) in mussel samples, by using colorimetric and electrochemical immunoassay-based screening methods. A simple sample preparation in conjunction with a rapid screening method possesses the desired c...
An Improved Manual Method for NOx Emission Measurement.
ERIC Educational Resources Information Center
Dee, L. A.; And Others
The current manual NOx sampling and analysis method was evaluated. Improved time-integrated sampling and rapid analysis methods were developed. In the new method, the sample gas is drawn through a heated bed of uniquely active, crystalline PbO2, where NOx is quantitatively absorbed. Nitrate ion is later extracted with water and the…
Comparing two sampling methods to engage hard-to-reach communities in research priority setting.
Valerio, Melissa A; Rodriguez, Natalia; Winkler, Paula; Lopez, Jaime; Dennison, Meagen; Liang, Yuanyuan; Turner, Barbara J
2016-10-28
Effective community-partnered and patient-centered outcomes research needs to address community priorities. However, optimal sampling methods to engage stakeholders from hard-to-reach, vulnerable communities to generate research priorities have not been identified. In two similar rural, largely Hispanic communities, a community advisory board guided recruitment of stakeholders affected by chronic pain using a different method in each community: 1) snowball sampling, a chain-referral method, or 2) purposive sampling to recruit diverse stakeholders. In both communities, three groups of stakeholders attended a series of three facilitated meetings to orient, brainstorm, and prioritize ideas (9 meetings/community). Using mixed methods analysis, we compared stakeholder recruitment and retention, as well as priorities from both communities' stakeholders, based on mean ratings of their ideas for importance and feasibility for implementation in their community. Of 65 eligible stakeholders in one community recruited by snowball sampling, 55 (85%) consented, 52 (95%) attended the first meeting, and 36 (65%) attended all 3 meetings. In the second community, the purposive sampling method was supplemented by convenience sampling to increase recruitment. Of 69 stakeholders recruited by this combined strategy, 62 (90%) consented, 36 (58%) attended the first meeting, and 26 (42%) attended all 3 meetings. Snowball sampling recruited more Hispanics and disabled persons (all P < 0.05). Despite differing recruitment strategies, stakeholders from the two communities identified largely similar ideas for research, focusing on non-pharmacologic interventions for management of chronic pain.
Ratings on importance and feasibility for community implementation differed only on the importance of massage services (P = 0.045), which was rated higher by the purposive/convenience sampling group, and of city improvements/transportation services (P = 0.004), which was rated higher by the snowball sampling group. In each of the two similar hard-to-reach communities, a community advisory board partnered with researchers to implement a different sampling method to recruit stakeholders. The snowball sampling method achieved greater participation and recruited more Hispanics and more individuals with disabilities than the purposive-convenience sampling method. However, priorities for research on chronic pain from both stakeholder groups were similar. Although the snowball sampling method appears to be superior, further research is needed on implementation costs and resources.
Ruple-Czerniak, A; Bolte, D S; Burgess, B A; Morley, P S
2014-07-01
Nosocomial salmonellosis is an important problem in veterinary hospitals that treat horses and other large animals. Detection and mitigation of outbreaks and prevention of healthcare-associated infections often require detection of Salmonella enterica in the hospital environment. The objective of this study was to compare 2 previously published methods for detecting environmental contamination with S. enterica in a large animal veterinary teaching hospital, in a hospital-based comparison of environmental sampling techniques. A total of 100 pairs of environmental samples were collected from stalls used to house large animal cases (horses, cows or New World camelids) that were confirmed to be shedding S. enterica by faecal culture. Stalls were cleaned and disinfected prior to sampling, and the same areas within each stall were sampled for the paired samples. One method of detection used sterile, premoistened sponges that were cultured using thioglycolate enrichment before plating on XLT-4 agar. The other method used electrostatic wipes that were cultured using buffered peptone water, tetrathionate and Rappaport-Vassiliadis R10 broths before plating on XLT-4 agar. Salmonella enterica was recovered from 14% of samples processed using the electrostatic wipe sampling and culture procedure, whereas S. enterica was recovered from only 4% of samples processed using the sponge sampling and culture procedure. There was test agreement for 85 pairs of culture-negative samples and 3 pairs of culture-positive samples. However, the remaining 12 pairs of samples with discordant results created significant disagreement between the 2 detection methods (P < 0.01). Persistence of Salmonella in the environment of veterinary hospitals can occur even with rigorous cleaning and disinfection. Use of sensitive methods for detection of environmental contamination is critical when detecting and mitigating this problem in veterinary hospitals.
These results suggest that the electrostatic wipe sampling and culture method was more sensitive than the sponge sampling and culture method. © 2013 EVJ Ltd.
Filter forensics: microbiota recovery from residential HVAC filters.
Maestre, Juan P; Jennings, Wiley; Wylie, Dennis; Horner, Sharon D; Siegel, Jeffrey; Kinney, Kerry A
2018-01-30
Establishing reliable methods for assessing the microbiome within the built environment is critical for understanding the impact of biological exposures on human health. High-throughput DNA sequencing of dust samples provides valuable insights into the microbiome present in human-occupied spaces. However, the effect that different sampling methods have on the microbial community recovered from dust samples is not well understood across sample types. Heating, ventilation, and air conditioning (HVAC) filters hold promise as long-term, spatially integrated, high volume samplers to characterize the airborne microbiome in homes and other climate-controlled spaces. In this study, the effect that dust recovery method (i.e., cut and elution, swabbing, or vacuuming) has on the microbial community structure, membership, and repeatability inferred by Illumina sequencing was evaluated. The results indicate that vacuum samples captured higher quantities of total, bacterial, and fungal DNA than swab or cut samples. Repeated swab and vacuum samples collected from the same filter were less variable than cut samples with respect to both quantitative DNA recovery and bacterial community structure. Vacuum samples captured substantially greater bacterial diversity than the other methods, whereas fungal diversity was similar across all three methods. Vacuum and swab samples of HVAC filter dust were repeatable and generally superior to cut samples. Nevertheless, the contribution of environmental and human sources to the bacterial and fungal communities recovered via each sampling method was generally consistent across the methods investigated. Dust recovery methodologies have been shown to affect the recovery, repeatability, structure, and membership of microbial communities recovered from dust samples in the built environment. The results of this study are directly applicable to indoor microbiota studies utilizing the filter forensics approach. 
More broadly, this study provides a better understanding of the microbial community variability attributable to sampling methodology and helps inform interpretation of data collected from other types of dust samples collected from indoor environments.
A cryopreservation method for Pasteurella multocida from wetland samples
Moore, Melody K.; Shadduck, D.J.; Goldberg, Diana R.; Samuel, M.D.
1998-01-01
A cryopreservation method and improved isolation techniques for detection of Pasteurella multocida from wetland samples were developed. Wetland water samples were collected in the field, diluted in dimethyl sulfoxide (DMSO, final concentration 10%), and frozen at -180 °C in a liquid nitrogen vapor shipper. Frozen samples were transported to the laboratory where they were subsequently thawed and processed in Pasteurella multocida selective broth (PMSB) to isolate P. multocida. This method allowed for consistent isolation of 2 to 18 organisms/ml from water seeded with known concentrations of P. multocida. The method compared favorably with the standard mouse inoculation method and allowed for preservation of the samples until they could be processed in the laboratory.
Method and apparatus for data sampling
Odell, D.M.C.
1994-04-19
A method and apparatus for sampling radiation detector outputs and determining event data from the collected samples are described. The method uses high-speed sampling of the detector output, conversion of the samples to digital values, and discrimination of the digital values so that digital values representing detected events are determined. The high-speed sampling and digital conversion are performed by an A/D sampler that samples the detector output at a rate high enough to produce numerous digital samples for each detected event. The digital discrimination identifies those digital samples that are not representative of detected events. The sampling and discrimination also provide for temporary or permanent storage, either serially or in parallel, to a digital storage medium. 6 figures.
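The discrimination step described above can be illustrated with a toy threshold discriminator: digitize the detector waveform at a high rate, then keep only the samples judged to belong to a detected event. This is a generic sketch of the idea, not the patented apparatus; the waveform and threshold are invented for illustration.

```python
def discriminate(samples, threshold):
    """Return (index, value) pairs for digital samples above threshold,
    i.e. samples judged to represent detected events; all other samples
    are discarded as baseline."""
    return [(i, v) for i, v in enumerate(samples) if v > threshold]

# Toy digitized detector output: baseline noise with one event pulse
waveform = [2, 3, 2, 40, 55, 31, 4, 2, 3, 1]
events = discriminate(waveform, threshold=10)
print(events)  # [(3, 40), (4, 55), (5, 31)]
```

Because many digital samples span each pulse, downstream processing can reconstruct event properties (amplitude, timing, area) from the retained samples alone, which is what makes the high sampling rate essential.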
Meyer, M.T.; Lee, E.A.; Ferrell, G.M.; Bumgarner, J.E.; Varns, Jerry
2007-01-01
This report describes the performance of an offline tandem solid-phase extraction (SPE) method and an online SPE method that use liquid chromatography/mass spectrometry for the analysis of 23 and 35 antibiotics, respectively, as used in several water-quality surveys conducted since 1999. In the offline tandem SPE method, normalized concentrations for the quinolone, macrolide, and sulfonamide antibiotics in spiked environmental samples averaged from 81 to 139 percent of the expected spiked concentrations. A modified standard-addition technique was developed to improve the quantitation of the tetracycline antibiotics, which had 'apparent' concentrations that ranged from 185 to 1,200 percent of their expected spiked concentrations in matrix-spiked samples. In the online SPE method, normalized concentrations for the quinolone, macrolide, sulfonamide, and tetracycline antibiotics in matrix-spiked samples averaged from 51 to 142 percent of their expected spiked concentrations, and the beta-lactam antibiotics in matrix-spiked samples averaged from 22 to 76 percent of their expected spiked concentration. Comparison of 44 samples analyzed by both the offline tandem SPE and online SPE methods showed 50 to 100 percent agreement in sample detection for overlapping analytes and 68 to 100 percent agreement in a presence-absence comparison for all analytes. The offline tandem and online SPE methods were compared to an independent method that contains two overlapping antibiotic compounds, sulfamethoxazole and trimethoprim, for 96 and 44 environmental samples, respectively. The offline tandem SPE showed 86 and 92 percent agreement in sample detection and 96 and 98 percent agreement in a presence-absence comparison for sulfamethoxazole and trimethoprim, respectively. The online SPE method showed 57 and 56 percent agreement in sample detection and 72 and 91 percent agreement in presence-absence comparison for sulfamethoxazole and trimethoprim, respectively. 
A linear regression with an R2 of 0.91 was obtained for trimethoprim concentrations, and an R2 of 0.35 was obtained for sulfamethoxazole concentrations determined from samples analyzed by the offline tandem SPE and online SPE methods. Linear regressions of trimethoprim and sulfamethoxazole concentrations determined from samples analyzed by the offline tandem SPE method and the independent M3 pharmaceutical method yielded R2 of 0.95 and 0.87, respectively. Regressed comparison of the offline tandem SPE method to the online SPE and M3 methods showed that the online SPE method gave higher concentrations for sulfamethoxazole and trimethoprim than were obtained from the offline tandem SPE or M3 methods.
NASA Astrophysics Data System (ADS)
Roether, Wolfgang; Vogt, Martin; Vogel, Sandra; Sültenfuß, Jürgen
2013-06-01
We present a new method to obtain samples for the measurement of helium isotopes and neon in water, to replace the classical sampling procedure using clamped-off Cu tubing containers that we have been using so far. The new method eliminates the gas extraction step prior to admission to the mass spectrometer, which the classical method requires. Water is drawn into evacuated glass ampoules with subsequent flame sealing. Approximately 50% headspace is left, from which admission into the mass spectrometer occurs without further treatment. Extensive testing has shown that, with due care and with small corrections applied, the samples represent the gas concentrations in the water within ±0.07% (95% confidence level; ±0.05% with special handling). Fast evacuation is achieved by pumping on a small charge of water placed in the ampoule. The new method was successfully tested at sea in comparison with Cu-tubing sampling. We found that the ampoule samples were superior in data precision and that a lower percentage of samples were lost prior to measurement. Further measurements revealed agreement between the two methods in helium, 3He and neon within ±0.1%. The new method facilitates handling of large sample sets and minimizes the delay between sampling and measurement. The method is also applicable to gases other than helium and neon.
Single-view phase retrieval of an extended sample by exploiting edge detection and sparsity
Tripathi, Ashish; McNulty, Ian; Munson, Todd; ...
2016-10-14
We propose a new approach to robustly retrieve the exit wave of an extended sample from its coherent diffraction pattern by exploiting sparsity of the sample's edges. This approach enables imaging of an extended sample with a single view, without ptychography. We introduce nonlinear optimization methods that promote sparsity, and we derive update rules to robustly recover the sample's exit wave. We test these methods on simulated samples by varying the sparsity of the edge-detected representation of the exit wave. Finally, our tests illustrate the strengths and limitations of the proposed method in imaging extended samples.
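Sparsity-promoting updates of the kind referenced above are commonly realized with a soft-thresholding step (the proximal operator of the L1 norm) applied to the sparse representation, here the edge-detected exit wave. The sketch below shows that generic operator only; it is not the authors' actual update rule, and the coefficients are invented for illustration.

```python
import math

def soft_threshold(x, t):
    """Proximal operator of t*|x|: shrink each coefficient toward zero by t,
    zeroing any coefficient smaller than t in magnitude. This is the step
    that promotes sparsity in iterative reconstruction."""
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in x]

# Toy edge-map coefficients: small values (noise) are zeroed, large values
# (true edges) are kept with reduced magnitude
coeffs = [0.05, -0.8, 0.3, -0.02, 1.5]
print(soft_threshold(coeffs, 0.1))
```

In a reconstruction loop this operator would alternate with a data-fidelity update enforcing consistency with the measured diffraction pattern, which is the general structure of proximal methods for sparse phase retrieval.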
Boiano, J M; Wallace, M E; Sieber, W K; Groff, J H; Wang, J; Ashley, K
2000-08-01
A field study was conducted with the goal of comparing the performance of three recently developed or modified sampling and analytical methods for the determination of airborne hexavalent chromium (Cr(VI)). The study was carried out in a hard chrome electroplating facility and in a jet engine manufacturing facility where airborne Cr(VI) was expected to be present. The analytical methods evaluated included two laboratory-based procedures (OSHA Method ID-215 and NIOSH Method 7605) and a field-portable method (NIOSH Method 7703). These three methods employ an identical sampling methodology: collection of Cr(VI)-containing aerosol on a polyvinyl chloride (PVC) filter housed in a sampling cassette, which is connected to a personal sampling pump calibrated at an appropriate flow rate. The analysis in all three methods involves extraction of the PVC filter in alkaline buffer solution, chemical isolation of the Cr(VI) ion, complexation of the Cr(VI) ion with 1,5-diphenylcarbazide, and spectrometric measurement of the violet chromium diphenylcarbazone complex at 540 nm. However, there are notable specific differences within the sample preparation procedures used in the three methods. To assess the comparability of the three measurement protocols, a total of 20 side-by-side air samples were collected, equally divided between a chromic acid electroplating operation and a spray paint operation where water soluble forms of Cr(VI) were used. A range of Cr(VI) concentrations from 0.6 to 960 microg m(-3), with Cr(VI) mass loadings ranging from 0.4 to 32 microg, was measured at the two operations. The equivalence of the means of the log-transformed Cr(VI) concentrations obtained from the different analytical methods was compared. Based on analysis of variance (ANOVA) results, no statistically significant differences were observed between mean values measured using each of the three methods.
Small but statistically significant differences were observed between results obtained from performance evaluation samples for the NIOSH field method and the OSHA laboratory method.
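The equivalence test described above amounts to a one-way ANOVA on log-transformed concentrations. A minimal stdlib-Python sketch follows; the side-by-side concentration values are hypothetical illustrations, not the study's data.

```python
import math

def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic for a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (n - k degrees of freedom)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical side-by-side Cr(VI) concentrations (microg/m3) by method
osha = [0.8, 12.0, 150.0, 900.0]
niosh_lab = [0.7, 11.0, 160.0, 940.0]
niosh_field = [0.6, 13.0, 140.0, 960.0]
f_stat = one_way_anova_f([[math.log(x) for x in g]
                          for g in (osha, niosh_lab, niosh_field)])
```

A small F statistic (relative to the F distribution's critical value for the given degrees of freedom) corresponds to the "no significant difference" finding reported above.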
[Determination of ethylene glycol in biological fluids--propylene glycol interferences].
Gomółka, Ewa; Cudzich-Czop, Sylwia; Sulka, Adrianna
2013-01-01
Many laboratories in Poland do not use gas chromatography (GC) for the determination of ethylene glycol (EG) and methanol in the blood of poisoned patients; instead, they use nonspecific spectrophotometric methods. One of the interfering substances is propylene glycol (PG), a compound present in many medical and cosmetic products: drops, air fresheners, disinfectants, electronic cigarettes, and others. In the Laboratory of Analytical Toxicology and Drug Monitoring in Krakow, EG is determined by a GC method that resolves EG and PG in biological samples. In the years 2011-2012, PG was present in several serum samples from diagnosed patients at concentrations ranging from a few mg/dL to more than 100 mg/dL. The aim of the study was to estimate PG interference in serum EG determination by spectrophotometry. Serum samples containing PG and EG were used in the study and were analyzed by two methods: GC and spectrophotometry. Serum samples spiked with PG but containing no EG gave false-positive results by spectrophotometry, and these results correlated with the PG concentration in the samples. The calculated cross-reactivity of PG in the method was 42%. Positive EG results obtained by spectrophotometry must be confirmed by the reference GC method, and spectrophotometry should not be used for the diagnosis and monitoring of patients poisoned by EG.
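Cross-reactivity of this kind is conventionally expressed as the apparent analyte reading produced by the interferent alone, as a percentage of the interferent concentration. A minimal sketch, with illustrative numbers chosen to reproduce the 42% figure reported above:

```python
def cross_reactivity_percent(apparent_eg_mg_dl, pg_mg_dl):
    """Apparent EG reported by the nonspecific assay for an EG-free sample,
    expressed as a percentage of the PG actually present."""
    return 100.0 * apparent_eg_mg_dl / pg_mg_dl

# Illustrative values: a PG-spiked, EG-free serum reading 42 mg/dL apparent EG
# against 100 mg/dL PG gives 42% cross-reactivity.
cr = cross_reactivity_percent(42.0, 100.0)
```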
Bruce, James F.; Roberts, James J.; Zuellig, Robert E.
2018-05-24
The U.S. Geological Survey (USGS), in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, analyzed previously collected invertebrate data to determine the comparability among four sampling methods and two versions (2010 and 2017) of the Colorado Benthic Macroinvertebrate Multimetric Index (MMI). For this study, annual macroinvertebrate samples were collected concurrently (in space and time) at 15 USGS surface-water gaging stations in the Fountain Creek Basin from 2010 to 2012 using four sampling methods. The USGS monitoring project in the basin uses two of the methods and the Colorado Department of Public Health and Environment recommends the other two. These methods belong to two distinct sample types, one that targets single habitats and one that targets multiple habitats. The study results indicate that there are significant differences in MMI values obtained from the single-habitat and multihabitat sample types but methods from each program within each sample type produced comparable values. This study also determined that MMI values calculated by different versions of the Colorado Benthic Macroinvertebrate MMI are indistinguishable. This indicates that the Colorado Department of Public Health and Environment methods are comparable with the USGS monitoring project methods for single-habitat and multihabitat sample types. This report discusses the direct application of the study results to inform the revision of the existing USGS monitoring project in the Fountain Creek Basin.
Towards robust and repeatable sampling methods in eDNA based studies.
Dickie, Ian A; Boyer, Stephane; Buckley, Hannah; Duncan, Richard P; Gardner, Paul; Hogg, Ian D; Holdaway, Robert J; Lear, Gavin; Makiola, Andreas; Morales, Sergio E; Powell, Jeff R; Weaver, Louise
2018-05-26
DNA based techniques are increasingly used for measuring the biodiversity (species presence, identity, abundance and community composition) of terrestrial and aquatic ecosystems. While there are numerous reviews of molecular methods and bioinformatic steps, there has been little consideration of the methods used to collect samples upon which these later steps are based. This represents a critical knowledge gap, as methodologically sound field sampling is the foundation for subsequent analyses. We reviewed field sampling methods used for metabarcoding studies of both terrestrial and freshwater ecosystem biodiversity over a nearly three-year period (n = 75). We found that 95% (n = 71) of these studies used subjective sampling methods, inappropriate field methods, and/or failed to provide critical methodological information. It would be possible for researchers to replicate only 5% of the metabarcoding studies in our sample, a poorer level of reproducibility than for ecological studies in general. Our findings suggest greater attention to field sampling methods and reporting is necessary in eDNA-based studies of biodiversity to ensure robust outcomes and future reproducibility. Methods must be fully and accurately reported, and protocols developed that minimise subjectivity. Standardisation of sampling protocols would be one way to help to improve reproducibility, and have additional benefits in allowing compilation and comparison of data from across studies. This article is protected by copyright. All rights reserved.
Comparison of extraction methods for quantifying vitamin E from animal tissues.
Xu, Zhimin
2008-12-01
Four extraction methods: (1) solvent (SOL), (2) ultrasound assisted solvent (UA), (3) saponification and solvent (SP), and (4) saponification and ultrasound assisted solvent (SP-UA), were used in sample preparation for quantifying vitamin E (tocopherols) in chicken liver and plasma samples. The extraction yields of the SOL, UA, SP, and SP-UA methods, obtained by adding delta-tocopherol as an internal reference, were 95%, 104%, 65%, and 62% for liver and 98%, 103%, 97%, and 94% for plasma, respectively. The methods with saponification significantly affected the stability of tocopherols in liver samples. The measured values of alpha- and gamma-tocopherols using the solvent-only extraction (SOL) method were much lower than those obtained using any of the other extraction methods, indicating that a smaller fraction of the tocopherols in those samples was in a form that could be extracted directly by solvent. The measured value of alpha-tocopherol in the liver sample using the ultrasound assisted solvent (UA) method was 1.5-2.5 times that obtained from the saponification and solvent (SP) method. The differences in measured values of tocopherols in the plasma samples between the two methods were not significant. However, the measured value of the saponification and ultrasound assisted solvent (SP-UA) method was lower than that of either the saponification and solvent (SP) or the ultrasound assisted solvent (UA) method. Also, the reproducibility of the ultrasound assisted solvent (UA) method was greater than that of any of the saponification methods. Compared with the traditional saponification method, the ultrasound assisted solvent method could effectively extract tocopherols from the sample matrix without chemical degradation reactions, especially for complex animal tissues such as liver.
Appearance-based representative samples refining method for palmprint recognition
NASA Astrophysics Data System (ADS)
Wen, Jiajun; Chen, Yan
2012-07-01
Sparse representation can mitigate the shortage-of-samples problem because it utilizes all of the training samples. However, its discrimination ability degrades when more training samples are used for representation. We propose a novel appearance-based palmprint recognition method that seeks a compromise between discrimination ability and the shortage-of-samples problem so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select representative training samples according to their contributions. We then refine the training samples by an iterative procedure that, at each step, excludes the training sample contributing least to representing the test sample. Experiments on the PolyU multispectral palmprint database and the two-dimensional and three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we explore the principles governing the key parameters of the proposed algorithm, which facilitates achieving high recognition accuracy.
THE SCREENING AND RANKING ALGORITHM FOR CHANGE-POINTS DETECTION IN MULTIPLE SAMPLES
Song, Chi; Min, Xiaoyi; Zhang, Heping
2016-01-01
Chromosomal copy number variation (CNV) is the deviation of genomic regions from their normal copy number states and may be associated with many human diseases. Current genetic studies usually collect hundreds to thousands of samples to study the association between CNV and diseases. CNVs can be called by detecting the change-points in mean for sequences of array-based intensity measurements. Although multiple samples are of interest, the majority of the available CNV calling methods are single sample based. Only a few multiple sample methods have been proposed, and these use scan statistics that are computationally intensive and designed to detect either common or rare change-points. In this paper, we propose a novel multiple sample method that adaptively combines the scan statistic of the screening and ranking algorithm (SaRa), which is computationally efficient and is able to detect both common and rare change-points. We prove that asymptotically this method can find the true change-points with almost certainty and show in theory that multiple sample methods are superior to single sample methods when shared change-points are of interest. Additionally, we report extensive simulation studies to examine the performance of our proposed method. Finally, using our proposed method as well as two competing approaches, we attempt to detect CNVs in the data from the Primary Open-Angle Glaucoma Genes and Environment study, and conclude that our method is faster and requires less information, while its ability to detect CNVs is comparable or better. PMID:28090239
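The local scan statistic underlying SaRa compares the means of two adjacent windows at each position; change-points show up as local maxima exceeding a threshold. The sketch below covers only this single-sample diagnostic (the paper's adaptive multi-sample combination is omitted), and the step signal is a synthetic illustration.

```python
def sara_diagnostic(y, h):
    """Local diagnostic D(i) = |mean of right window - mean of left window|.
    Candidate change-points are local maxima of D above a threshold."""
    d = [0.0] * len(y)
    for i in range(h, len(y) - h):
        left = sum(y[i - h:i]) / h
        right = sum(y[i:i + h]) / h
        d[i] = abs(right - left)
    return d

# Noise-free step signal with a single change-point at index 50
signal = [0.0] * 50 + [3.0] * 50
d = sara_diagnostic(signal, h=10)
change_point = max(range(len(d)), key=d.__getitem__)
```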
Efficient global biopolymer sampling with end-transfer configurational bias Monte Carlo
NASA Astrophysics Data System (ADS)
Arya, Gaurav; Schlick, Tamar
2007-01-01
We develop an "end-transfer configurational bias Monte Carlo" method for efficient thermodynamic sampling of complex biopolymers and assess its performance on a mesoscale model of chromatin (oligonucleosome) at different salt conditions compared to other Monte Carlo moves. Our method extends traditional configurational bias by deleting a repeating motif (monomer) from one end of the biopolymer and regrowing it at the opposite end using the standard Rosenbluth scheme. The method's sampling efficiency compared to local moves, pivot rotations, and standard configurational bias is assessed by parameters relating to translational, rotational, and internal degrees of freedom of the oligonucleosome. Our results show that the end-transfer method is superior to the other methods in sampling every degree of freedom of the oligonucleosomes at high salt concentrations (weak electrostatics) but worse than pivot rotations at sampling internal and rotational degrees of freedom at low-to-moderate salt concentrations (strong electrostatics). Under all conditions investigated, however, the end-transfer method is several orders of magnitude more efficient than the standard configurational bias approach. This is because the characteristic sampling time of the innermost oligonucleosome motif scales quadratically with the length of the oligonucleosome for the end-transfer method, while it scales exponentially for the traditional configurational bias method. Thus, the method we propose can significantly improve performance for global biomolecular applications, especially in condensed systems with weak nonbonded interactions, and may be combined with local enhancements to improve local sampling.
Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...
2014-01-01
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly.
These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
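Several of the sampling techniques named above (LHS, OALH) are stratified designs. A minimal Latin hypercube sampler is sketched below as a generic illustration of the technique, not PSUADE's implementation: each dimension's [0, 1) range is split into equal strata, each stratum is sampled exactly once, and the per-dimension draws are shuffled independently.

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Return n_samples points in [0, 1)^n_dims forming one LHS design."""
    rng = rng or random.Random(0)
    columns = []
    for _ in range(n_dims):
        # One jittered draw per stratum, then shuffle stratum order
        points = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(points)
        columns.append(points)
    # Transpose the per-dimension columns into a list of sample points
    return list(zip(*columns))

samples = latin_hypercube(10, 2)
```

The defining LHS property is that projecting the design onto any single dimension hits every stratum exactly once.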
Won, Jonghun; Lee, Gyu Rie; Park, Hahnbeom; Seok, Chaok
2018-06-07
The second extracellular loops (ECL2s) of G-protein-coupled receptors (GPCRs) are often involved in GPCR functions, and their structures have important implications in drug discovery. However, structure prediction of ECL2 is difficult because of its long length and the structural diversity among different GPCRs. In this study, a new ECL2 conformational sampling method involving both template-based and ab initio sampling was developed. Inspired by the observation of similar ECL2 structures of closely related GPCRs, a template-based sampling method employing loop structure templates selected from the structure database was developed. A new metric for evaluating similarity of the target loop to templates was introduced for template selection. An ab initio loop sampling method was also developed to treat cases without highly similar templates. The ab initio method is based on the previously developed fragment assembly and loop closure method. A new sampling component that takes advantage of secondary structure prediction was added. In addition, a conserved disulfide bridge restraining ECL2 conformation was predicted and analytically incorporated into sampling, reducing the effective dimension of the conformational search space. The sampling method was combined with an existing energy function for comparison with previously reported loop structure prediction methods, and the benchmark test demonstrated outstanding performance.
Conceptual data sampling for breast cancer histology image classification.
Rezk, Eman; Awan, Zainab; Islam, Fahad; Jaoua, Ali; Al Maadeed, Somaya; Zhang, Nan; Das, Gautam; Rajpoot, Nasir
2017-10-01
Data analytics have become increasingly complicated as the amount of data has increased. One technique that is used to enable data analytics in large datasets is data sampling, in which a portion of the data is selected to preserve the data characteristics for use in data analytics. In this paper, we introduce a novel data sampling technique that is rooted in formal concept analysis theory. This technique is used to create samples that reflect the data distribution across a set of binary patterns. The proposed sampling technique is applied to classifying the regions of breast cancer histology images as malignant or benign. The performance of our method is compared to other classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It is also competitive with other sampling methods in terms of sample size and sample quality, as reflected in classification accuracy and F1 measure. Copyright © 2017 Elsevier Ltd. All rights reserved.
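The paper's formal-concept-analysis sampler is not reproduced here; as a rough sketch of the underlying idea, a sample can be drawn so that every distinct binary attribute pattern in the data remains represented. The toy dataset below is an assumption for illustration.

```python
from collections import defaultdict

def pattern_stratified_sample(rows, per_pattern=1):
    """Group binary rows by their exact attribute pattern and keep up to
    per_pattern representatives of each, preserving pattern coverage."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row)].append(row)
    sample = []
    for members in groups.values():
        sample.extend(members[:per_pattern])
    return sample

# Five rows over three binary attributes, with two duplicated patterns
data = [(1, 0, 1), (1, 0, 1), (0, 1, 1), (0, 1, 1), (1, 1, 0)]
sample = pattern_stratified_sample(data)
```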
NASA Astrophysics Data System (ADS)
Sydoff, Marie; Stenström, Kristina
2010-04-01
The Department of Physics at Lund University is participating in a European Union project called EUMAPP (European Union Microdose AMS Partnership Programme), in which sample preparation and accelerator mass spectrometry (AMS) measurements of biological samples from microdosing studies have been made. This paper describes a simplified method of converting biological samples to solid graphite for 14C analysis with AMS. The method is based on online combustion of the samples, and reduction of CO2 in septa-sealed vials. The septa-sealed vials and disposable materials are used to eliminate sample cross-contamination. Measurements of ANU and Ox I standards show deviations of 2% and 3%, respectively, relative to reference values. This level of accuracy is sufficient for biological samples from microdosing studies. Since the method has very few handling steps from sample to graphite, the risk of failure during the sample preparation process is minimized, making the method easy to use in routine preparation of samples.
Zainathan, S C; Carson, J; Crane, M St J; Nowak, B F
2013-04-01
The use of swabs relative to organs as a sample collection method for the detection of Tasmanian salmon reovirus (TSRV) in farmed Tasmanian Atlantic salmon, Salmo salar L., was evaluated by RT-qPCR. Individual and pooled sample collection (organs vs swabs) was evaluated to determine the sensitivity of the collection methods and the effect of pooling of samples on the detection of TSRV. Detection of TSRV in individual samples was equally sensitive for organs and swabs, while in pooled samples, organs demonstrated a sensitivity one 10-fold dilution higher than pooled swabs. Storage of swabs at 4 °C for t = 24 h gave results similar to those at t = 0. The advantages of using swabs as the preferred sample collection method for the detection of TSRV, compared to organ samples, are evident from these experimental trials. © 2012 Blackwell Publishing Ltd.
Apparatus and method for handheld sampling
Staab, Torsten A.
2005-09-20
The present invention includes an apparatus, and corresponding method, for taking a sample. The apparatus is built around a frame designed to be held in at least one hand. A sample media is used to secure the sample. A sample media adapter for securing the sample media is operated by a trigger mechanism connectively attached within the frame to the sample media adapter.
A METHODS COMPARISON FOR COLLECTING MACROINVERTEBRATES IN THE OHIO RIVER
Collection of representative benthic macroinvertebrate samples from large rivers has challenged researchers for many years. The objective of our study was to develop an appropriate method(s) for sampling macroinvertebrates from the Ohio River. Four existing sampling metho...
Comparison of four sampling methods for the detection of Salmonella in broiler litter.
Buhr, R J; Richardson, L J; Cason, J A; Cox, N A; Fairchild, B D
2007-01-01
Experiments were conducted to compare litter sampling methods for the detection of Salmonella. In experiment 1, chicks were challenged orally with a suspension of nalidixic acid-resistant Salmonella and wing banded, and additional nonchallenged chicks were placed into each of 2 challenge pens. Nonchallenged chicks were placed into each nonchallenge pen located adjacent to the challenge pens. At 7, 8, 10, and 11 wk of age the litter was sampled using 4 methods: fecal droppings, litter grab, drag swab, and sock. For the challenge pens, Salmonella-positive samples were detected in 3 of 16 fecal samples, 6 of 16 litter grab samples, 7 of 16 drag swab samples, and 7 of 16 sock samples. Samples from the nonchallenge pens were Salmonella positive in 2 of 16 litter grab samples, 9 of 16 drag swab samples, and 9 of 16 sock samples. In experiment 2, chicks were challenged with Salmonella, and the litter in the challenge and adjacent nonchallenge pens was sampled at 4, 6, and 8 wk of age with broilers remaining in all pens. For the challenge pens, Salmonella was detected in 10 of 36 fecal samples, 20 of 36 litter grab samples, 14 of 36 drag swab samples, and 26 of 36 sock samples. Samples from the adjacent nonchallenge pens were positive for Salmonella in 6 of 36 fecal droppings samples, 4 of 36 litter grab samples, 7 of 36 drag swab samples, and 19 of 36 sock samples. Sock samples had the highest rates of Salmonella detection. In experiment 3, the litter from a Salmonella-challenged flock was sampled at 7, 8, and 9 wk by socks and drag swabs. In addition, comparisons with drag swabs that were stepped on during sampling were made. Both socks (24 of 36, 67%) and drag swabs that were stepped on (25 of 36, 69%) showed significantly more Salmonella-positive samples than the traditional drag swab method (16 of 36, 44%). Drag swabs that were stepped on had a Salmonella detection level comparable to that of socks.
Litter sampling methods that incorporate stepping on the sample material while in contact with the litter appear to detect Salmonella in greater incidence than traditional sampling methods of dragging swabs over the litter surface.
Frison, Severine; Kerac, Marko; Checchi, Francesco; Nicholas, Jennifer
2017-01-01
The assessment of the prevalence of acute malnutrition in children under five is widely used for the detection of emergencies, planning interventions, advocacy, and monitoring and evaluation. This study examined PROBIT methods, which convert the parameters (mean and standard deviation (SD)) of a normally distributed variable into a cumulative probability below any cut-off, to estimate acute malnutrition in children under five using mid-upper arm circumference (MUAC). We assessed the performance of PROBIT Method I, with mean MUAC from the survey sample and MUAC SD from a database of previous surveys, and PROBIT Method II, with mean and SD of MUAC observed in the survey sample. Specifically, we generated sub-samples from 852 survey datasets, simulating 100 surveys for each of eight sample sizes; overall, the methods were tested on 681 600 simulated surveys. PROBIT methods relying on sample sizes as small as 50 performed better than the classic method for estimating and classifying the prevalence of acute malnutrition. They had better precision in the estimation of acute malnutrition for all sample sizes and better coverage for smaller sample sizes, while having relatively little bias, and they classified situations accurately for a threshold of 5% acute malnutrition. Both PROBIT methods had similar outcomes. PROBIT methods have a clear advantage over the classic method in the assessment of acute malnutrition prevalence based on MUAC. Their use would require much lower sample sizes, enabling great time and resource savings and permitting timely and/or locally relevant prevalence estimates of acute malnutrition for a swift and well-targeted response.
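The PROBIT conversion itself is a one-liner: the prevalence estimate is the normal cumulative probability below the cut-off, here the standard 125 mm MUAC threshold for global acute malnutrition. The mean and SD below are illustrative assumptions, not values from the study.

```python
import math

def probit_prevalence(mean_muac_mm, sd_muac_mm, cutoff_mm=125.0):
    """PROBIT estimate: cumulative normal probability below the MUAC cutoff."""
    z = (cutoff_mm - mean_muac_mm) / sd_muac_mm
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative survey parameters: mean MUAC 145 mm, SD 12 mm
prevalence = probit_prevalence(145.0, 12.0)
```

Method I would plug in an SD taken from a database of previous surveys; Method II uses the SD observed in the survey sample itself.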
Approximation of the exponential integral (well function) using sampling methods
NASA Astrophysics Data System (ADS)
Baalousha, Husam Musa
2015-04-01
Exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral, based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate of the three. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
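A sampling-based approximation of this kind can be sketched as follows. The substitution t = x/u maps E1(x) = ∫ₓ^∞ e⁻ᵗ/t dt onto the unit interval as ∫₀¹ e^(-x/u)/u du, which a stratified (LHS-style) average then estimates; this is a generic illustration of the approach, not the paper's exact scheme.

```python
import math
import random

def exp_integral_lhs(x, n=20000, rng=None):
    """Stratified (1-D Latin hypercube) estimate of
    E1(x) = integral over (0, 1] of exp(-x/u)/u du."""
    rng = rng or random.Random(42)
    total = 0.0
    for i in range(n):
        u = (i + rng.random()) / n  # one jittered draw per stratum of (0, 1)
        total += math.exp(-x / u) / u
    return total / n

e1_of_1 = exp_integral_lhs(1.0)  # reference value: E1(1) ≈ 0.219384
```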
Cox, Jennie; Indugula, Reshmi; Vesper, Stephen; Zhu, Zheng; Jandarov, Roman; Reponen, Tiina
2017-10-18
Evaluating fungal contamination indoors is complicated because of the many different sampling methods utilized. In this study, fungal contamination was evaluated using five sampling methods and four metrics for results. The five sampling methods were a 48 hour indoor air sample collected with a Button™ inhalable aerosol sampler and four types of dust samples: a vacuumed floor dust sample, newly settled dust collected for four weeks onto two types of electrostatic dust cloths (EDCs) in trays, and a wipe sample of dust from above-floor surfaces. The samples were obtained in the bedrooms of asthmatic children (n = 14). Quantitative polymerase chain reaction (qPCR) was used to analyze the dust and air samples for the 36 fungal species that make up the Environmental Relative Moldiness Index (ERMI). The results from the samples were compared by four metrics: total concentration of fungal cells, concentration of fungal species associated with indoor environments, concentration of fungal species associated with outdoor environments, and ERMI values (or ERMI-like values for air samples). The ERMI values for the dust samples and the ERMI-like values for the 48 hour air samples were not significantly different. The total cell concentrations of the 36 species obtained with the four dust collection methods correlated significantly (r = 0.64-0.79, p < 0.05), with the exception of the vacuumed floor dust and newly settled dust. In addition, fungal cell concentrations of indoor-associated species correlated well between all four dust sampling methods (r = 0.68-0.86, p < 0.01). No correlation was found between the fungal concentrations in the air and dust samples, primarily because of differences in concentrations of Cladosporium cladosporioides Type 1 and Epicoccum nigrum. A representative type of dust sample and a 48 hour air sample might both provide useful information about fungal exposures.
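The method-agreement results above rest on pairwise correlation coefficients. A minimal Pearson r in stdlib Python, applied to hypothetical paired concentrations (not the study's measurements):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired fungal cell concentrations from two dust methods
floor_dust = [120.0, 340.0, 80.0, 560.0, 210.0]
wipe_dust = [100.0, 310.0, 95.0, 500.0, 230.0]
r = pearson_r(floor_dust, wipe_dust)
```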
Combinatorial Screening Of Inorganic And Organometallic Materials
Li, Yi , Li, Jing , Britton, Ted W.
2002-06-25
A method for differentiating and enumerating nucleated red blood cells in a blood sample is described. The method includes the steps of lysing red blood cells of a blood sample with a lytic reagent, measuring nucleated blood cells by DC impedance measurement in a non-focused flow aperture, differentiating nucleated red blood cells from other cell types, and reporting nucleated red blood cells in the blood sample. The method further includes subtracting nucleated red blood cells and other interference materials from the count of remaining blood cells, and reporting a corrected white blood cell count of the blood sample. Additionally, the method further includes measuring spectrophotometric absorbance of the sample mixture at a predetermined wavelength of a hemoglobin chromogen formed upon lysing the blood sample, and reporting hemoglobin concentration of the blood sample.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathi, Ashish; McNulty, Ian; Munson, Todd
We propose a new approach to robustly retrieve the exit wave of an extended sample from its coherent diffraction pattern by exploiting sparsity of the sample's edges. This approach enables imaging of an extended sample with a single view, without ptychography. We introduce nonlinear optimization methods that promote sparsity, and we derive update rules to robustly recover the sample's exit wave. We test these methods on simulated samples by varying the sparsity of the edge-detected representation of the exit wave. Finally, our tests illustrate the strengths and limitations of the proposed method in imaging extended samples.
Henderson, Gemma; Cox, Faith; Kittelmann, Sandra; Miri, Vahideh Heidarian; Zethof, Michael; Noel, Samantha J.; Waghorn, Garry C.; Janssen, Peter H.
2013-01-01
Molecular microbial ecology techniques are widely used to study the composition of the rumen microbiota and to increase understanding of the roles they play. Therefore, sampling and DNA extraction methods that result in adequate yields of microbial DNA that also accurately represents the microbial community are crucial. Fifteen different methods were used to extract DNA from cow and sheep rumen samples. The DNA yield and quality, and its suitability for downstream PCR amplifications varied considerably, depending on the DNA extraction method used. DNA extracts from nine extraction methods that passed these first quality criteria were evaluated further by quantitative PCR enumeration of microbial marker loci. Absolute microbial numbers, determined on the same rumen samples, differed by more than 100-fold, depending on the DNA extraction method used. The apparent compositions of the archaeal, bacterial, ciliate protozoal, and fungal communities in identical rumen samples were assessed using 454 Titanium pyrosequencing. Significant differences in microbial community composition were observed between extraction methods, for example in the relative abundances of members of the phyla Bacteroidetes and Firmicutes. Microbial communities in parallel samples collected from cows by oral stomach-tubing or through a rumen fistula, and in liquid and solid rumen digesta fractions, were compared using one of the DNA extraction methods. Community representations were generally similar, regardless of the rumen sampling technique used, but significant differences in the abundances of some microbial taxa such as the Clostridiales and the Methanobrevibacter ruminantium clade were observed. The apparent microbial community composition differed between rumen sample fractions, and Prevotellaceae were most abundant in the liquid fraction. DNA extraction methods that involved phenol-chloroform extraction and mechanical lysis steps tended to be more comparable. 
However, comparison of data from studies in which different sampling techniques, different rumen sample fractions or different DNA extraction methods were used should be avoided. PMID:24040342
Carter, Melissa D.; Crow, Brian S.; Pantazides, Brooke G.; Watson, Caroline M.; deCastro, B. Rey; Thomas, Jerry D.; Blake, Thomas A.; Johnson, Rudolph C.
2017-01-01
A high-throughput prioritization method was developed for use with a validated confirmatory method detecting organophosphorus nerve agent exposure by immunomagnetic separation-HPLC-MS/MS. A ballistic gradient was incorporated into this analytical method in order to profile unadducted butyrylcholinesterase (BChE) in clinical samples. With a Z′-factor of 0.88 ± 0.01 (SD) for the control analytes and a Z-factor of 0.25 ± 0.06 (SD) for serum samples, the assay is rated an "excellent assay" (per the criteria of Zhang et al., 1999) for the synthetic peptide controls used and a "doable assay" when used to prioritize clinical samples. Hits, defined as samples containing BChE Ser-198 adducts or no BChE present, were analyzed in a confirmatory method for identification and quantitation of the BChE adduct, if present. The ability to prioritize samples by highest exposure for confirmatory analysis is of particular importance in an exposure to cholinesterase inhibitors, such as organophosphorus nerve agents, where a large number of clinical samples may be collected. In an initial blind screen, 67 of 70 samples were accurately identified, giving an assay accuracy of 96% with no false negatives. The method is the first to provide a high-throughput prioritization assay for profiling adduction of Ser-198 BChE in clinical samples. PMID:23954929
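The Z′-factor cited above has a standard closed form (Zhang et al., 1999): Z′ = 1 − 3(σ₊ + σ₋)/|μ₊ − μ₋|, where values above 0.5 are rated "excellent" and values between 0 and 0.5 "doable". A minimal sketch in Python, using hypothetical control readings rather than data from this study:

```python
from statistics import mean, stdev

def z_prime(pos, neg):
    """Z'-factor of Zhang et al. (1999): 1 - 3*(sd_pos + sd_neg)/|mean_pos - mean_neg|.
    >0.5 rates an 'excellent assay'; 0-0.5 a 'doable assay'."""
    return 1 - 3 * (stdev(pos) + stdev(neg)) / abs(mean(pos) - mean(neg))

# hypothetical control signals (not measurements from the study)
pos = [100, 102, 98, 100]   # positive controls
neg = [10, 12, 8, 10]       # negative controls
print(round(z_prime(pos, neg), 3))  # -> 0.891, i.e. 'excellent assay' band
```

The same function applied to noisier clinical-sample distributions would yield the lower Z-factor the abstract reports.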
Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L
2013-08-01
Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.
Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco
2016-01-01
Sampling biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now no study has evaluated how efficient the sampling methods commonly used in biodiversity surveys are at estimating the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample sex ratio. Thus, prior knowledge of movement patterns and detectability for a species is important information to guide field studies aiming to understand sex-ratio-related patterns.
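The detectability bias described here can be reproduced with a toy simulation; the population sizes and sex-specific detection probabilities below are hypothetical illustrations, not parameters from the study:

```python
import random

def sample_sex_ratio(n_males, n_females, p_detect_m, p_detect_f, rng):
    """Simulate a capture survey where each individual is detected
    independently with a sex-specific probability, and return the apparent
    proportion of males among detected animals."""
    males = sum(rng.random() < p_detect_m for _ in range(n_males))
    females = sum(rng.random() < p_detect_f for _ in range(n_females))
    return males / (males + females)

rng = random.Random(0)
# true sex ratio is 1:1, but males are twice as detectable (hypothetical values)
apparent = sample_sex_ratio(5000, 5000, 0.8, 0.4, rng)
print(round(apparent, 2))  # apparent male proportion ~0.67, not the true 0.5
```

Increasing effort (more sampling occasions) narrows the variance of the estimate but does not remove the bias unless detectability differences are modeled.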
[Free crystalline silica: a comparison of methods for its determination in total dust].
Maciejewska, Aleksandra; Szadkowska-Stańczyk, Irena; Kondratowicz, Grzegorz
2005-01-01
The major objective of the study was to compare and investigate the usefulness of quantitative analyses of free crystalline silica (FCS) in the assessment of dust exposure in samples of total dust of varied composition, using three methods: the chemical method in common use in Poland; infrared spectrometry; and X-ray powder diffraction. Mineral composition and FCS contents were investigated in 9 laboratory samples of raw materials, materials, and industrial wastes, containing from about 2% to over 80% of crystalline silica and reduced to particles of a size corresponding with that of total dust. Sample components were identified using XRD and FT-IR methods. Ten independent determinations of FCS with each of the three study methods were performed on the dust samples. An analysis of linear correlation was applied to investigate the interrelationship between mean FCS determinations. In the analyzed dust samples, numerous minerals that interfere with silica during quantitative analysis were present along with the silica dust. Comparison of mean results of FCS determinations showed that the results obtained using the FT-IR method were 12-13% lower than those obtained with the two other methods. However, the differences observed were within the limits of variability associated with the precision of the results and their dependence on the reference materials used. Assessment of occupational exposure to dusts containing crystalline silica can be performed on the basis of quantitative analysis of FCS in total dusts using any of the compared methods. The FT-IR method is most appropriate for FCS determination in samples with a small amount of silica or collected at low dust concentrations; the XRD method for the analysis of multicomponent samples; and the chemical method in the case of medium and high FCS contents in samples or high concentrations of dusts in the work environment.
Feng, Shu; Gale, Michael J.; Fay, Jonathan D.; Faridi, Ambar; Titus, Hope E.; Garg, Anupam K.; Michaels, Keith V.; Erker, Laura R.; Peters, Dawn; Smith, Travis B.; Pennesi, Mark E.
2015-01-01
Purpose: To describe a standardized flood-illuminated adaptive optics (AO) imaging protocol suitable for the clinical setting and to assess sampling methods for measuring cone density. Methods: Cone density was calculated following three measurement protocols: 50 × 50-μm sampling window values every 0.5° along the horizontal and vertical meridians (fixed-interval method), the mean density of expanding 0.5°-wide arcuate areas in the nasal, temporal, superior, and inferior quadrants (arcuate mean method), and the peak cone density of a 50 × 50-μm sampling window within expanding arcuate areas near the meridian (peak density method). Repeated imaging was performed in nine subjects to determine intersession repeatability of cone density. Results: Cone density montages could be created for 67 of the 74 subjects. Image quality was determined to be adequate for automated cone counting for 35 (52%) of the 67 subjects. We found that cone density varied with different sampling methods and regions tested. In the nasal and temporal quadrants, peak density most closely resembled histological data, whereas the arcuate mean and fixed-interval methods tended to underestimate the density compared with histological data. However, in the inferior and superior quadrants, arcuate mean and fixed-interval methods most closely matched histological data, whereas the peak density method overestimated cone density compared with histological data. Intersession repeatability testing showed that repeatability was greatest when sampling by arcuate mean and lowest when sampling by fixed interval. Conclusions: We show that different methods of sampling can significantly affect cone density measurements. Therefore, care must be taken when interpreting cone density results, even in a normal population. PMID:26325414
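As a unit check on the sampling windows used above: a 50 × 50-μm window covers 0.0025 mm², so a raw cone count converts to density as count divided by window area. A small helper; the count of 60 is purely illustrative:

```python
def cone_density_per_mm2(count, window_um=50):
    """Convert a cone count in a square sampling window (side length in
    micrometers) to a density in cones/mm^2."""
    area_mm2 = (window_um / 1000.0) ** 2  # 50 um -> 0.0025 mm^2
    return count / area_mm2

# hypothetical count for one 50 x 50-um window
print(round(cone_density_per_mm2(60)))  # -> 24000 cones/mm^2
```

The fixed-interval, arcuate mean, and peak density protocols differ only in where such windows are placed and how the resulting densities are aggregated.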
Comparing the NIOSH Method 5040 to a Diesel Particulate Matter Meter for Elemental Carbon
NASA Astrophysics Data System (ADS)
Ayers, David Matthew
Introduction: The sampling of elemental carbon has been associated with monitoring exposures in the trucking and mining industries. Recently, in the field of engineered nanomaterials, single-wall and multi-wall carbon nanotubes (MWCNTs) are being produced in ever increasing quantities. The only approved atmospheric sampling method for multi-wall carbon nanotubes is NIOSH Method 5040. Its results are accurate, but it can take up to 30 days for sample results to be received. Objectives: Compare the results of elemental carbon sampling from the NIOSH Method 5040 to a Diesel Particulate Matter (DPM) meter. Methods: MWCNTs were transferred and weighed between several trays placed on a scale. The NIOSH Method 5040 and DPM sampling train was hung 6 inches above the receiving tray. The transferring and weighing of the MWCNTs created an aerosol containing elemental carbon. Twenty-one total samples using both meter types were collected. Results: The assumptions for a two-way ANOVA were violated; therefore, Mann-Whitney U tests and a Kruskal-Wallis test were performed. The null hypotheses for both research questions were rejected. There was a significant difference in the EC concentrations obtained by the NIOSH Method 5040 and the DPM meter. There were also significant differences in elemental carbon concentrations when sampled using a DPM meter versus a sampling pump, based upon the three concentration levels (low, medium, and high). Conclusions: The differences in the EC concentrations were statistically significant; therefore, the two methods (NIOSH Method 5040 and DPM) are not equivalent. The NIOSH Method 5040 should continue to be the only authorized method of establishing an EC concentration for MWCNTs until a MWCNT-specific method or an instantaneous meter is developed.
Methods for estimating the amount of vernal pool habitat in the northeastern United States
Van Meter, R.; Bailey, L.L.; Grant, E.H.C.
2008-01-01
The loss of small, seasonal wetlands is a major concern for a variety of state, local, and federal organizations in the northeastern U.S. Identifying and estimating the number of vernal pools within a given region is critical to developing long-term conservation and management strategies for these unique habitats and their faunal communities. We use three probabilistic sampling methods (simple random sampling, adaptive cluster sampling, and the dual frame method) to estimate the number of vernal pools on protected, forested lands. Overall, these methods yielded similar values of vernal pool abundance for each study area, and suggest that photographic interpretation alone may grossly underestimate the number of vernal pools in forested habitats. We compare the relative efficiency of each method and discuss ways of improving precision. Acknowledging that the objectives of a study or monitoring program ultimately determine which sampling designs are most appropriate, we recommend that some type of probabilistic sampling method be applied. We view the dual-frame method as an especially useful way of combining incomplete remote sensing methods, such as aerial photograph interpretation, with a probabilistic sample of the entire area of interest to provide more robust estimates of the number of vernal pools and a more representative sample of existing vernal pool habitats.
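Of the three designs, simple random sampling has the most transparent estimator: scale the mean pool count per surveyed plot up to the whole sampling frame. A sketch with hypothetical plot counts (the adaptive cluster and dual-frame estimators are more involved and not shown):

```python
def srs_abundance_estimate(counts, n_plots_total):
    """Simple-random-sampling estimator of abundance: mean pools per
    surveyed plot, scaled up to the total number of plots in the frame."""
    mean_count = sum(counts) / len(counts)
    return n_plots_total * mean_count

# hypothetical survey: 10 of 200 equal-area plots searched for vernal pools
counts = [0, 1, 0, 2, 0, 0, 1, 0, 3, 1]
print(round(srs_abundance_estimate(counts, 200), 1))  # -> 160.0 pools
```

Precision depends on the between-plot variance of the counts, which is why the abstract compares the relative efficiency of the designs rather than point estimates alone.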
Method and apparatus for measuring nuclear magnetic properties
Weitekamp, D.P.; Bielecki, A.; Zax, D.B.; Zilm, K.W.; Pines, A.
1987-12-01
A method for studying the chemical and structural characteristics of materials is disclosed. The method includes placement of a sample material in a high strength polarizing magnetic field to order the sample nuclei. The condition used to order the sample is then removed abruptly and the ordering of the sample allowed to evolve for a time interval. At the end of the time interval, the ordering of the sample is measured by conventional nuclear magnetic resonance techniques. 5 figs.
A LITERATURE REVIEW OF WIPE SAMPLING METHODS ...
Wipe sampling is an important technique for the estimation of contaminant deposition in buildings, homes, or outdoor surfaces as a source of possible human exposure. Numerous methods of wipe sampling exist, and each method has its own specification for the type of wipe, wetting solvent, and determinative step to be used, depending upon the contaminant of concern. The objective of this report is to concisely summarize the findings of a literature review that was conducted to identify the state-of-the-art wipe sampling techniques for a target list of compounds. This report describes the methods used to perform the literature review; a brief review of wipe sampling techniques in general; an analysis of physical and chemical properties of each target analyte; an analysis of wipe sampling techniques for the target analyte list; and a summary of the wipe sampling techniques for the target analyte list, including existing data gaps. In general, no overwhelming consensus can be drawn from the current literature on how to collect a wipe sample for the chemical warfare agents, organophosphate pesticides, and other toxic industrial chemicals of interest to this study. Different methods, media, and wetting solvents have been recommended and used by various groups and different studies. For many of the compounds of interest, no specific wipe sampling methodology has been established for their collection. Before a wipe sampling method (or methods) can be established for the co
NASA Astrophysics Data System (ADS)
Aspinall, M. D.; Joyce, M. J.; Mackin, R. O.; Jarrah, Z.; Boston, A. J.; Nolan, P. J.; Peyton, A. J.; Hawkes, N. P.
2009-01-01
A unique, digital time pick-off method, known as sample-interpolation timing (SIT), is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight compared with digital replica-analogue time pick-off methods for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa/s. Events arising from the ⁷Li(p, n)⁷Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals compared with analogue time pick-off methods replicated digitally, especially for fast signals that are sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential.
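The core idea of interpolating between samples to refine a time pick-off can be sketched as follows. This linear-interpolation threshold crossing illustrates the general principle only; it is not the authors' SIT algorithm, and the pulse values are hypothetical:

```python
def interpolated_crossing_time(samples, threshold, dt):
    """Estimate when a sampled rising pulse crosses `threshold` by linearly
    interpolating between the two samples that bracket the crossing.
    `dt` is the sampling interval; the result is in the same units as dt."""
    for i in range(1, len(samples)):
        lo, hi = samples[i - 1], samples[i]
        if lo < threshold <= hi:
            frac = (threshold - lo) / (hi - lo)  # fractional position in the gap
            return (i - 1 + frac) * dt
    raise ValueError("threshold never crossed")

# hypothetical rising pulse sampled at dt = 0.125 ns (i.e. 8 GSa/s)
pulse = [0.0, 0.1, 0.4, 0.9, 1.0]
print(round(interpolated_crossing_time(pulse, 0.5, 0.125), 3))  # -> 0.275 (ns)
```

The sub-sample resolution comes from the fractional term: the estimate is no longer quantized to multiples of the sampling interval.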
Detection and monitoring of invasive exotic plants: a comparison of four sampling methods
Cynthia D. Huebner
2007-01-01
The ability to detect and monitor exotic invasive plants is likely to vary depending on the sampling method employed. Methods with strong qualitative thoroughness for species detection often lack the intensity necessary to monitor vegetation change. Four sampling methods (systematic plot, stratified-random plot, modified Whittaker, and timed meander) in hemlock and red...
Zhang, L; Liu, X J
2016-06-03
With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, existing expression estimation methods usually process each RNA-seq sample individually, ignoring that the read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parametric model to capture the general tendency of non-uniform read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse specificity between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples and produced more accurate isoform expression estimates, and thus more meaningful biological interpretations.
Taylor, Vivien F; Toms, Andrew; Longerich, Henry P
2002-01-01
The application of open vessel focused microwave acid digestion is described for the preparation of geological and environmental samples for analysis using inductively coupled plasma-mass spectrometry (ICP-MS). The method is compared to conventional closed-vessel high pressure methods which are limited in the use of HF to break down silicates. Open-vessel acid digestion more conveniently enables the use of HF to remove Si from geological and plant samples as volatile SiF4, as well as evaporation-to-dryness and sequential acid addition during the procedure. Rock reference materials (G-2 granite, MRG-1 gabbros, SY-2 syenite, JA-1 andesite, and JB-2 and SRM-688 basalts) and plant reference materials (BCR and IAEA lichens, peach leaves, apple leaves, Durham wheat flour, and pine needles) were digested with results comparable to conventional hotplate digestion. The microwave digestion method gave poor results for granitic samples containing refractory minerals, however fusion was the preferred method of preparation for these samples. Sample preparation time was reduced from several days, using conventional hotplate digestion method, to one hour per sample using our microwave method.
Ikebe, Jinzen; Umezawa, Koji; Higo, Junichi
2016-03-01
Molecular dynamics (MD) simulations using all-atom and explicit solvent models provide valuable information on the detailed behavior of protein-partner substrate binding at the atomic level. As the power of computational resources increase, MD simulations are being used more widely and easily. However, it is still difficult to investigate the thermodynamic properties of protein-partner substrate binding and protein folding with conventional MD simulations. Enhanced sampling methods have been developed to sample conformations that reflect equilibrium conditions in a more efficient manner than conventional MD simulations, thereby allowing the construction of accurate free-energy landscapes. In this review, we discuss these enhanced sampling methods using a series of case-by-case examples. In particular, we review enhanced sampling methods conforming to trivial trajectory parallelization, virtual-system coupled multicanonical MD, and adaptive lambda square dynamics. These methods have been recently developed based on the existing method of multicanonical MD simulation. Their applications are reviewed with an emphasis on describing their practical implementation. In our concluding remarks we explore extensions of the enhanced sampling methods that may allow for even more efficient sampling.
A sequential bioequivalence design with a potential ethical advantage.
Fuglsang, Anders
2014-07-01
This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.
Blekhman, Ran; Tang, Karen; Archie, Elizabeth A; Barreiro, Luis B; Johnson, Zachary P; Wilson, Mark E; Kohn, Jordan; Yuan, Michael L; Gesquiere, Laurence; Grieneisen, Laura E; Tung, Jenny
2016-08-16
Field studies of wild vertebrates are frequently associated with extensive collections of banked fecal samples-unique resources for understanding ecological, behavioral, and phylogenetic effects on the gut microbiome. However, we do not understand whether sample storage methods confound the ability to investigate interindividual variation in gut microbiome profiles. Here, we extend previous work on storage methods for gut microbiome samples by comparing immediate freezing, the gold standard of preservation, to three methods commonly used in vertebrate field studies: lyophilization, storage in ethanol, and storage in RNAlater. We found that the signature of individual identity consistently outweighed storage effects: alpha diversity and beta diversity measures were significantly correlated across methods, and while samples often clustered by donor, they never clustered by storage method. Provided that all analyzed samples are stored the same way, banked fecal samples therefore appear highly suitable for investigating variation in gut microbiota. Our results open the door to a much-expanded perspective on variation in the gut microbiome across species and ecological contexts.
Rocha, C F D; Van Sluys, M; Hatano, F H; Boquimpani-Freitas, L; Marra, R V; Marques, R V
2004-11-01
Studies on anurans in restinga habitats are few and, as a result, there is little information on which methods are more efficient for sampling them in this environment. Ten methods are usually used for sampling anuran communities in tropical and sub-tropical areas. In this study we evaluate which methods are more appropriate for this purpose in the restinga environment of Parque Nacional da Restinga de Jurubatiba. We analyzed six methods among those usually used for anuran sampling. For each method, we recorded the total amount of time spent (in min.), the number of researchers involved, and the number of species captured. We calculated a capture efficiency index (time necessary for a researcher to capture an individual frog) in order to make the data obtained comparable. Of the methods analyzed, the species inventory (9.7 min/searcher/ind. - MSI; richness = 6; abundance = 23) and the breeding site survey (9.5 MSI; richness = 4; abundance = 22) were the most efficient. The visual encounter inventory (45.0 MSI) and patch sampling (65.0 MSI) methods were of comparatively lower efficiency in the restinga, whereas the plot sampling and pit-fall traps with drift-fence methods resulted in no frog captures. We conclude that there is a considerable difference in the efficiency of the methods used in the restinga environment, and that the complete species inventory method is highly efficient for sampling frogs in the restinga studied and may be so in other restinga environments. Methods that are usually efficient in forested areas seem to be of little value in open restinga habitats.
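One plausible reading of the capture efficiency index is person-minutes of search effort per captured frog, where lower values indicate a more efficient method. A sketch with hypothetical survey numbers, not values from the study:

```python
def capture_efficiency_index(total_minutes, n_researchers, n_individuals):
    """Person-minutes of search effort per captured frog (one plausible
    reading of the MSI index); lower is more efficient."""
    return (total_minutes * n_researchers) / n_individuals

# hypothetical survey: 2 researchers searching for 60 minutes, 12 frogs caught
print(capture_efficiency_index(60, 2, 12))  # -> 10.0 min/searcher/ind.
```

Normalizing by effort in this way is what makes methods with very different time and personnel costs directly comparable.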
Rapid method for sampling metals for materials identification
NASA Technical Reports Server (NTRS)
Higgins, L. E.
1971-01-01
Nondamaging process similar to electrochemical machining is useful in obtaining metal samples from places inaccessible to conventional sampling methods, or where such methods would be hazardous or contaminating to specimens. Process applies to industries where metals or metal alloys play a vital role.
Methods for making nucleotide probes for sequencing and synthesis
Church, George M; Zhang, Kun; Chou, Joseph
2014-07-08
Compositions and methods for making a plurality of probes for analyzing a plurality of nucleic acid samples are provided. Compositions and methods for analyzing a plurality of nucleic acid samples to obtain sequence information in each nucleic acid sample are also provided.
Systematic Evaluation of Aggressive Air Sampling for Bacillus ...
Report: The primary objectives of this project were to evaluate the Aggressive Air Sampling (AAS) method compared to currently used surface sampling methods and to determine if AAS is a viable option for sampling Bacillus anthracis spores.
Evaluation of Surface Sampling for Bacillus Spores Using ...
Report: The primary objectives of this project were to evaluate the Aggressive Air Sampling (AAS) method compared to currently used surface sampling methods and to determine if AAS is a viable option for sampling Bacillus anthracis spores.
Effects of Sample Preparation on the Infrared Reflectance Spectra of Powders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brauer, Carolyn S.; Johnson, Timothy J.; Myers, Tanya L.
2015-05-22
While reflectance spectroscopy is a useful tool in identifying molecular compounds, laboratory measurement of solid (particularly powder) samples often is confounded by sample preparation methods. For example, both the packing density and surface roughness can have an effect on the quantitative reflectance spectra of powdered samples. Recent efforts in our group have focused on developing standard methods for measuring reflectance spectra that account for sample preparation, as well as other factors such as particle size and provenance. In this work, the effect of preparation method on sample reflectivity was investigated by measuring the directional-hemispherical spectra of samples that were hand-packed as well as pressed into pellets, using an integrating sphere attached to a Fourier transform infrared spectrometer. The results show that the methods used to prepare the sample have a substantial effect on the measured reflectance spectra, as do other factors such as particle size.
Effects of sample preparation on the infrared reflectance spectra of powders
NASA Astrophysics Data System (ADS)
Brauer, Carolyn S.; Johnson, Timothy J.; Myers, Tanya L.; Su, Yin-Fong; Blake, Thomas A.; Forland, Brenda M.
2015-05-01
While reflectance spectroscopy is a useful tool for identifying molecular compounds, laboratory measurement of solid (particularly powder) samples often is confounded by sample preparation methods. For example, both the packing density and surface roughness can have an effect on the quantitative reflectance spectra of powdered samples. Recent efforts in our group have focused on developing standard methods for measuring reflectance spectra that account for sample preparation, as well as other factors such as particle size and provenance. In this work, the effect of preparation method on sample reflectivity was investigated by measuring the directional-hemispherical spectra of samples that were hand-loaded as well as pressed into pellets, using an integrating sphere attached to a Fourier transform infrared spectrometer. The results show that the methods used to prepare the sample can have a substantial effect on the measured reflectance spectra, as do other factors such as particle size.
Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean
2014-01-01
MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. However, this method seems to be the most efficient in correcting sampling bias and should be advised in most cases.
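The "systematic sampling of records" that performed best here is often implemented as spatial filtering: retain at most one occurrence record per grid cell, which evens out clustered sampling effort. A minimal sketch; the grid size and coordinates are hypothetical:

```python
def systematic_filter(records, cell_size):
    """Keep at most one occurrence record per square grid cell of side
    `cell_size` (in degrees) -- one common implementation of systematic
    spatial filtering for sampling-bias correction."""
    kept, seen = [], set()
    for lon, lat in records:
        cell = (int(lon // cell_size), int(lat // cell_size))
        if cell not in seen:
            seen.add(cell)
            kept.append((lon, lat))
    return kept

# hypothetical records, heavily clustered near (0.1, 0.1)
recs = [(0.10, 0.10), (0.12, 0.11), (0.13, 0.09), (2.50, 1.40)]
print(systematic_filter(recs, 0.5))  # -> [(0.1, 0.1), (2.5, 1.4)]
```

The filtered set is then used to train the distribution model, so oversampled regions no longer dominate the fitted response.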
Tulipan, Rachel J; Phillips, Heidi; Garrett, Laura D; Dirikolu, Levent; Mitchell, Mark A
2017-05-01
OBJECTIVE To characterize long-term elution of platinum from carboplatin-impregnated calcium sulfate hemihydrate (CI-CSH) beads in vitro by comparing 2 distinct sample collection methods designed to mimic 2 in vivo environments. SAMPLES 162 CI-CSH beads containing 4.6 mg of carboplatin (2.4 mg of platinum/bead). PROCEDURES For method 1, which mimicked an in vivo environment with rapid and complete fluid exchange, each of 3 plastic 10-mL conical tubes contained 3 CI-CSH beads and 5 mL of PBS solution. Eluent samples were obtained by evacuation of all fluid at 1, 2, 3, 6, 9, and 12 hours and 1, 2, 3, 6, 9, 12, 15, 18, 22, 26, and 30 days. Five milliliters of fresh PBS solution was then added to each tube. For method 2, which mimicked an in vivo environment with no fluid exchange, each of 51 tubes (ie, 3 tubes/17 sample collection times) contained 3 CI-CSH beads and 5 mL of PBS solution. Eluent samples were obtained from the assigned tubes for each time point. All samples were analyzed for platinum content by inductively coupled plasma-mass spectrometry. RESULTS Platinum was released from CI-CSH beads for 22 to 30 days. Significant differences were found in platinum concentration and percentage of platinum eluted from CI-CSH beads over time for each method. Platinum concentrations and elution percentages in method 2 samples were significantly higher than those of method 1 samples, except for the first hour measurements. CONCLUSIONS AND CLINICAL RELEVANCE Sample collection methods 1 and 2 may provide estimates of the minimum and maximum platinum release, respectively, from CI-CSH beads in vivo.
Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks
NASA Astrophysics Data System (ADS)
Sun, Wei; Chang, K. C.
2005-05-01
Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under a time constraint. Several simulation methods are currently available, including logic sampling (the first stochastic method proposed for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods; we then propose an improved importance sampling algorithm, linear Gaussian importance sampling (LGIS), for general hybrid models. LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function with additive Gaussian noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
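The core importance-sampling idea behind LGIS (draw from a tractable Gaussian proposal, then reweight by the target density) can be illustrated on a one-dimensional toy problem. This is a generic, non-adaptive sketch under assumed names, not the LGIS algorithm itself.

```python
import math
import random

def importance_mean(target_logpdf, mu, sigma, n=50000, seed=1):
    """Self-normalized importance sampling estimate of E[x] under an
    unnormalized target, using a N(mu, sigma) Gaussian proposal."""
    rng = random.Random(seed)
    log_norm = math.log(sigma * math.sqrt(2 * math.pi))
    num = den = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        log_q = -0.5 * ((x - mu) / sigma) ** 2 - log_norm  # proposal log-density
        w = math.exp(target_logpdf(x) - log_q)             # importance weight
        num += w * x
        den += w
    return num / den

# Target: N(2, 1), unnormalized; proposal deliberately mismatched
est = importance_mean(lambda x: -0.5 * (x - 2.0) ** 2, mu=0.0, sigma=2.0)
# est is close to 2.0 despite sampling from N(0, 2)
```

LGIS goes further by adapting the proposal (the "importance function") from previous samples; a fixed proposal, as here, is the non-adaptive baseline such schemes improve on.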
de Souza, Marjorie M A; Hartel, Gunter; Whiteman, David C; Antonsson, Annika
2018-04-01
Very little is known about the natural history of oral HPV infection. Several different methods exist to collect oral specimens and detect HPV, but their respective performance characteristics are unknown. We compared two different methods for oral specimen collection (oral saline rinse and commercial saliva kit) in 96 individuals, and then analyzed the samples for HPV by two different PCR detection methods (single GP5+/6+ PCR and nested MY09/11 and GP5+/6+ PCR). For the oral rinse samples, the oral HPV prevalence was 10.4% (GP+ PCR; 10% repeatability) vs 11.5% (nested PCR method; 100% repeatability). For the commercial saliva kit samples, the prevalences were 3.1% vs 16.7% with the GP+ PCR vs the nested PCR method (repeatability 100% for both detection methods). Overall, the agreement between samples and methods was fair or poor (kappa 0.06-0.36). Standardizing methods of oral sample collection and HPV detection would ensure comparability between future oral HPV studies.
Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters
Calfee, M. Worth; Rose, Laura J.; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal
2016-01-01
The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37 mm cassette filter) with regard to their ability to recover Bacillus atrophaeus spores (surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters that are typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50% of their total holding capacity. The results were analyzed by one-way ANOVA across material types, presence or absence of dust, and sampling device. The extraction method gave higher relative recoveries than the two vacuum methods evaluated (p ≤ 0.001). On average, recoveries obtained by the vacuum methods were about 30% of those achieved by the extraction method. Relative recoveries between the two vacuum methods were not significantly different (p > 0.05). Although extraction methods yielded higher recoveries than vacuum methods, either HVAC filter sampling approach may provide a rapid and inexpensive mechanism for understanding the extent of contamination following a wide-area biological release incident. PMID:24184312
An evaluation of flow-stratified sampling for estimating suspended sediment loads
Robert B. Thomas; Jack Lewis
1995-01-01
Flow-stratified sampling is a new method for sampling water quality constituents such as suspended sediment to estimate loads. As with selection-at-list-time (SALT) and time-stratified sampling, flow-stratified sampling is a statistical method requiring random sampling, and yielding unbiased estimates of load and variance. It can be used to estimate event...
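The general form of a stratified load estimator can be sketched as follows. This is a textbook stratified-sampling sketch under assumed names, not necessarily the paper's exact estimator: within each flow stratum a simple random sample is taken, and stratum totals and variances are combined with finite-population corrections.

```python
import statistics

def stratified_total(strata):
    """Unbiased stratified estimate of a total and its variance.

    strata: list of (N_h, samples_h), where N_h is the number of sampling
    units in stratum h and samples_h are the loads measured on a simple
    random sample drawn from that stratum.
    """
    total, var = 0.0, 0.0
    for N, xs in strata:
        n = len(xs)
        mean = statistics.fmean(xs)
        total += N * mean
        if n > 1:
            s2 = statistics.variance(xs)         # sample variance within stratum
            var += N * N * (1 - n / N) * s2 / n  # finite-population correction
    return total, var

# Two flow strata: low flow (many units, small loads), high flow (few, large)
est, v = stratified_total([(100, [1.0, 1.2, 0.8, 1.1]), (10, [20.0, 24.0, 22.0])])
# est = 100 * 1.025 + 10 * 22.0 = 322.5
```

Stratifying on flow concentrates sampling effort where loads (and their variances) are largest, which is what makes flow-stratified designs efficient for suspended sediment.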
Drummond, A; Rodrigo, A G
2000-12-01
Reconstruction of evolutionary relationships from noncontemporaneous molecular samples provides a new challenge for phylogenetic reconstruction methods. With recent biotechnological advances there has been an increase in molecular sequencing throughput, and the potential to obtain serial samples of sequences from populations, including rapidly evolving pathogens, is fast being realized. A new method called the serial-sample unweighted pair grouping method with arithmetic means (sUPGMA) is presented that reconstructs a genealogy or phylogeny of sequences sampled serially in time using a matrix of pairwise distances. The resulting tree depicts the terminal lineages of each sample ending at a different level consistent with the sample's temporal order. Since sUPGMA is a variant of UPGMA, it will perform best when sequences have evolved at a constant rate (i.e., according to a molecular clock). On simulated data, this new method performs better than standard cluster analysis under a variety of longitudinal sampling strategies. Serial-sample UPGMA is particularly useful for analysis of longitudinal samples of viruses and bacteria, as well as ancient DNA samples, with the minimal requirement that samples of sequences be ordered in time.
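The clustering step that sUPGMA builds on is plain UPGMA: repeatedly merge the closest pair of nodes and replace their distances to the rest with a size-weighted arithmetic mean. The toy below is generic UPGMA only, not the serial-sample variant (which additionally corrects the distances for sampling times before clustering).

```python
def upgma(dist, labels):
    """Plain UPGMA from a dict of pairwise distances; returns a nested tuple.

    dist: {(a, b): d} for each pair of leaf names; labels: list of leaves.
    """
    d = {frozenset(k): v for k, v in dist.items()}
    size = {l: 1 for l in labels}
    nodes = list(labels)
    while len(nodes) > 1:
        # Find the closest pair of current nodes
        a, b = min(((x, y) for i, x in enumerate(nodes) for y in nodes[i + 1:]),
                   key=lambda p: d[frozenset(p)])
        merged = (a, b)
        size[merged] = size[a] + size[b]
        for c in nodes:
            if c in (a, b):
                continue
            # Size-weighted average: the "arithmetic mean" in UPGMA
            d[frozenset((merged, c))] = (size[a] * d[frozenset((a, c))] +
                                         size[b] * d[frozenset((b, c))]) / size[merged]
        nodes = [c for c in nodes if c not in (a, b)] + [merged]
    return nodes[0]

tree = upgma({("A", "B"): 2.0, ("A", "C"): 6.0, ("B", "C"): 6.0}, ["A", "B", "C"])
# A and B (distance 2) merge first; C joins at the root
```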
A Novel Method to Handle the Effect of Uneven Sampling Effort in Biodiversity Databases
Pardo, Iker; Pata, María P.; Gómez, Daniel; García, María B.
2013-01-01
How reliable are results on spatial distribution of biodiversity based on databases? Many studies have evidenced the uncertainty related to this kind of analysis due to sampling effort bias and the need for its quantification. Although a number of methods are available for this purpose, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a proposed new one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent discrimination capability to distinguish between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in a simulated scenario, whereas sampling completeness estimated with other methods showed no relationship due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness, and therefore, it can be efficiently applied to most databases in order to enhance the reliability of biodiversity analyses. PMID:23326357
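The species accumulation curves that FIDEGAM and the other compared methods build on can be computed with a simple permutation sketch (generic, not the FIDEGAM algorithm; names are illustrative): average the cumulative richness over random orderings of the samples.

```python
import random

def accumulation_curve(sample_species, n_perm=200, seed=0):
    """Expected species accumulation curve: mean cumulative richness after
    1..n samples, averaged over random orderings of the samples.

    sample_species: list of sets, one set of observed species per sample.
    """
    rng = random.Random(seed)
    n = len(sample_species)
    totals = [0.0] * n
    for _ in range(n_perm):
        order = sample_species[:]
        rng.shuffle(order)
        seen = set()
        for i, sp in enumerate(order):
            seen |= sp                 # accumulate newly observed species
            totals[i] += len(seen)
    return [t / n_perm for t in totals]

curve = accumulation_curve([{"a", "b"}, {"b", "c"}, {"a"}, {"c", "d"}])
# The curve is non-decreasing and ends at the total richness (4 species);
# a curve still rising steeply at its end signals an under-sampled area
```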
Viability qPCR, a new tool for Legionella risk management.
Lizana, X; López, A; Benito, S; Agustí, G; Ríos, M; Piqué, N; Marqués, A M; Codony, F
2017-11-01
Viability quantitative polymerase chain reaction (v-qPCR) is a recent analytical approach for detecting only live microorganisms by DNA amplification-based methods. The approach is based on the use of a reagent that irreversibly fixes the DNA of dead cells. In this study, we evaluate the utility of v-qPCR versus the culture method for Legionellosis risk management. The present study was performed using 116 real samples. Water samples were simultaneously analysed by culture, v-qPCR and qPCR methods. Results were compared by means of a non-parametric test. In 11.6% of samples, both methods (culture and v-qPCR) gave positive results; in 50.0% of samples, both methods gave negative results. As expected, equivalence between methods was not observed in all cases: in 32.1% of samples, positive results were obtained by v-qPCR while culture was negative. Only in 6.3% of samples, with very low Legionella levels, was culture positive and v-qPCR negative. In 3.5% of samples, overgrowth of other bacteria prevented culture. When comparing both methods, significant differences between culture and v-qPCR were found in the samples belonging to the cooling towers-evaporative condensers group, where the v-qPCR method detected greater presence and higher concentrations of Legionella spp. (p<0.001). No significant differences between methods were found in the remaining groups. The v-qPCR method can be used as a quick tool to evaluate Legionellosis risk, especially in cooling towers-evaporative condensers, where this technique can detect higher levels than culture. The combined interpretation of PCR results along with the ratio of live cells is proposed as a tool for understanding the sample context and estimating the Legionellosis risk potential according to 4 levels of hierarchy.
A Typology of Mixed Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.
2007-01-01
This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…
Sampling bee communities using pan traps: alternative methods increase sample size
USDA-ARS?s Scientific Manuscript database
Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...
Universal nucleic acids sample preparation method for cells, spores and their mixture
Bavykin, Sergei [Darien, IL
2011-01-18
The present invention relates to a method for extracting nucleic acids from biological samples. More specifically, the invention relates to a universal method for extracting nucleic acids from unidentified biological samples. An advantage of the presently invented method is its ability to effectively and efficiently extract nucleic acids from a variety of different cell types, including but not limited to prokaryotic or eukaryotic cells and/or recalcitrant organisms (i.e., spores). Unlike prior art methods, which focus on extracting nucleic acids from either vegetative cells or spores, the present invention effectively extracts nucleic acids from spores, multiple cell types, or mixtures thereof using a single method. Importantly, the invented method has demonstrated the ability to extract nucleic acids from spores and vegetative bacterial cells with similar levels of effectiveness. The invented method employs a multi-step protocol which erodes the cell structure of the biological sample; isolates, labels, and fragments the nucleic acids; and purifies the labeled samples from excess dye.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
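The three interval sampling methods differ only in how each interval is scored against the true event record, which makes their characteristic biases easy to demonstrate. The sketch below is a hypothetical minimal version of such a simulation, not the paper's program.

```python
def score_intervals(events, obs_len, interval):
    """Estimate the fraction of time a behavior occurs, under momentary time
    sampling (MTS), partial-interval (PIR) and whole-interval (WIR) recording.

    events: list of (start, end) times the behavior occurred.
    Returns (mts, pir, wir, truth) as proportions.
    """
    def occurring(t):
        return any(s <= t < e for s, e in events)

    n = int(obs_len / interval)
    mts = pir = wir = 0
    for i in range(n):
        lo, hi = i * interval, (i + 1) * interval
        overlap = sum(max(0.0, min(e, hi) - max(s, lo)) for s, e in events)
        mts += occurring(hi - 1e-9)            # sample at the interval's end
        pir += overlap > 0                     # any occurrence scores the interval
        wir += abs(overlap - interval) < 1e-9  # behavior must fill the interval
    truth = sum(e - s for s, e in events) / obs_len
    return mts / n, pir / n, wir / n, truth

# One event from t=2.5 to t=7.2 in a 10 s observation, with 1 s intervals
mts, pir, wir, truth = score_intervals([(2.5, 7.2)], obs_len=10.0, interval=1.0)
# PIR (0.6) over-estimates and WIR (0.4) under-estimates the true 0.47
```

Even this toy case reproduces the well-known pattern that partial-interval recording inflates, and whole-interval recording deflates, the true occurrence proportion.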
Preparation of bone samples in the Gliwice Radiocarbon Laboratory for AMS radiocarbon dating.
Piotrowska, N; Goslar, T
2002-12-01
In the Gliwice Radiocarbon Laboratory, a system for preparation of samples for AMS dating has been built. At first it was used to produce graphite targets from plant macrofossils and sediments. In this study we extended its capabilities to the preparation of bones. We evaluated 3 methods: the first was the classical Longin method of collagen extraction; the second included additional treatment of powdered bone in alkali solution; in the third, carboxyl carbon was separated from amino acids obtained after hydrolysis of protein. The suitability of the methods was tested on 2 bone samples. Most of our samples gave ages > 40 kyr BP, suggesting good performance of the adapted methods, except for one sample prepared with the simple Longin method. For routine preparation of bones we chose the Longin method with additional alkali treatment.
Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli
2017-07-01
As a mineral, the traditional Chinese medicine calamine has a similar shape to many other minerals. Investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated; therefore, given the large number of calamine samples, a rapid identification method is needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples, including crude products, counterfeits and processed products, were collected and correctly identified using physicochemical and powder X-ray diffraction methods. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back-propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on NIR and MRCC methods was 85%; in addition, the model, which takes multiple factors into consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients of multiple references as the spectral feature data of samples into BP-ANN, a BP-ANN model of qualitative identification was established, whose accuracy rate increased to 95%. The MRCC method can thus serve as an NIR-based feature-extraction step in the process of BP-ANN modeling.
Application of a Permethrin Immunosorbent Assay Method to Residential Soil and Dust Samples
A low-cost, high throughput bioanalytical screening method was developed for monitoring cis/trans-permethrin in dust and soil samples. The method consisted of a simple sample preparation procedure [sonication with dichloromethane followed by a solvent exchange into methanol:wate...
Rapid fusion method for the determination of Pu, Np, and Am in large soil samples
Maxwell, Sherrod L.; Culligan, Brian; Hutchison, Jay B.; ...
2015-02-14
A new rapid sodium hydroxide fusion method for the preparation of 10-20 g soil samples has been developed by the Savannah River National Laboratory (SRNL). The method enables lower detection limits for plutonium, neptunium, and americium in environmental soil samples. The method also significantly reduces sample processing time and acid fume generation compared to traditional soil digestion techniques using hydrofluoric acid. Ten gram soil aliquots can be ashed and fused using the new method in 1-2 hours, completely dissolving samples, including refractory particles. Pu, Np and Am are separated using stacked 2 mL cartridges of TEVA and DGA Resin and measured using alpha spectrometry. The method can be adapted for measurement by inductively-coupled plasma mass spectrometry (ICP-MS). Two 10 g aliquots of fused soil may be combined prior to chromatographic separations to further improve detection limits. Total sample preparation time, including chromatographic separations and alpha spectrometry source preparation, is less than 8 hours.
Local Feature Selection for Data Classification.
Armanfard, Narges; Reilly, James P; Komeili, Majid
2016-06-01
Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.
Resampling methods in Microsoft Excel® for estimating reference intervals
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions, which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5 and 97.5 percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to using Microsoft Excel® 2010 for estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distributions of observations in the reference samples are Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and when the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples. PMID:26527366
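The percentile-bootstrap procedure described here translates directly from spreadsheet formulas into a few lines of code. The sketch below uses Python rather than Excel, with illustrative names; the resampling logic (draw with replacement, take the 2.5th and 97.5th percentiles of each resample, average) is the same.

```python
import random
import statistics

def bootstrap_reference_interval(values, n_boot=1000, seed=0):
    """Bootstrap the 2.5th and 97.5th percentile reference limits by averaging
    the percentiles of resamples drawn with replacement."""
    rng = random.Random(seed)
    lows, highs = [], []
    for _ in range(n_boot):
        resample = rng.choices(values, k=len(values))  # sample with replacement
        q = statistics.quantiles(resample, n=40, method="inclusive")
        lows.append(q[0])    # 2.5th percentile of this resample
        highs.append(q[-1])  # 97.5th percentile of this resample
    return statistics.fmean(lows), statistics.fmean(highs)

# 60 reference results, roughly Gaussian around 100 with SD 10
rng = random.Random(42)
ref = [rng.gauss(100, 10) for _ in range(60)]
lo, hi = bootstrap_reference_interval(ref)
# lo and hi bracket roughly the central 95% of the reference population
```

With only ~40-120 reference samples the resampled percentiles are noisy, which is why the abstract recommends at least 500-1000 resamples.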
Uran, Harun; Gokoglu, Nalan
2014-04-01
The aim of this study was to determine the nutritional and quality characteristics of anchovy after cooking. The fish were cooked by different methods (frying, baking and grilling) at two different temperatures (160 °C, 180 °C). Crude ash, crude protein and crude fat contents of cooked fish increased due to the rise in dry matter content. While cooking method affected the mineral content of anchovy, cooking temperature did not. The highest values of monounsaturated fatty acids were found in baked samples. Polyunsaturated fatty acid levels in baked samples were also high, and similar to those in fried samples. Fried samples, which were the most preferred, lost their nutritional characteristics more than baked and grilled samples did. Grilled and baked fish samples can be recommended for healthy consumption. However, grilled samples had a harder texture due to greater moisture loss than with the other methods. Therefore, it is concluded that baking is the best cooking method for anchovy.
Cruz, Mutya; Wang, Miao; Frisch-Daiello, Jessica; Han, Xianlin
2016-07-01
Extraction of lipids from biological samples is a critical step in lipidomics, especially for shotgun lipidomics where lipid extracts are directly infused into a mass spectrometer. The butanol-methanol (BUME) extraction method was originally developed to extract lipids from plasma samples with 1 % acetic acid. Considering some lipids are sensitive to acidic environments, we modified this protocol by replacing acetic acid with lithium chloride solution and extended the modified extraction to tissue samples. Although no significant reduction of plasmalogen levels in the acidic BUME extracts of rat heart samples was found, the modified method was established to extract various tissue samples, including rat liver, heart, and plasma. Essentially identical profiles of the majority of lipid classes were obtained from the extracts of the modified BUME and traditional Bligh-Dyer methods. However, it was found that neither the original, nor the modified BUME method was suitable for 4-hydroxyalkenal species measurement in biological samples.
Jiang, Jia-Jia; Duan, Fa-Jie; Li, Yan-Chao; Hua, Xiang-Ning
2014-03-01
Synchronization sampling is very important in underwater towed array systems, where every acquisition node (AN) samples analog signals with its own analog-to-digital converter (ADC). In this paper, a simple and effective synchronization sampling method is proposed to ensure synchronized operation among the different ANs of an underwater towed array system. We first present a master-slave synchronization sampling model, and then design a high-accuracy phase-locked loop to synchronize all delta-sigma ADCs to a reference clock. However, when the master-slave synchronization sampling model is used, both the time-delay (TD) of messages traveling along the wired transmission medium and the jitter of the clocks introduce synchronization sampling error (SSE). Therefore, a simple method is proposed to estimate and compensate for the TD of message transmission, and another effective method is presented to overcome the SSE caused by clock jitter. An experimental system with three ANs is set up, and the related experimental results verify the validity of the proposed synchronization sampling method.
Rapid fusion method for the determination of refractory thorium and uranium isotopes in soil samples
Maxwell, Sherrod L.; Hutchison, Jay B.; McAlister, Daniel R.
2015-02-14
Recently, approximately 80% of participating laboratories failed to accurately determine uranium isotopes in soil samples in the U.S. Department of Energy Mixed Analyte Performance Evaluation Program (MAPEP) Session 30, due to incomplete dissolution of refractory particles in the samples. The failing laboratories employed acid dissolution methods, including hydrofluoric acid, to recover uranium from the soil matrix. These failures illustrate the importance of rugged soil dissolution methods for the accurate measurement of analytes in the sample matrix. A new rapid fusion method has been developed by the Savannah River National Laboratory (SRNL) to prepare 1-2 g soil sample aliquots very quickly, with total dissolution of refractory particles. Soil samples are fused with sodium hydroxide at 600 °C in zirconium crucibles to enable complete dissolution of the sample. Uranium and thorium are separated on stacked TEVA and TRU extraction chromatographic resin cartridges, prior to isotopic measurements by alpha spectrometry on cerium fluoride microprecipitation sources. Plutonium can also be separated and measured using this method. Batches of 12 samples can be prepared for measurement in less than 5 hours.
A Comparison of Two Sampling Strategies to Assess Discomycete Diversity in Wet Tropical Forests
SHARON A. CANTRELL
2004-01-01
Most fungal diversity studies that have used a systematic collecting scheme have not included the discomycetes, so optimal sampling methods are not available for this group. In this study, I tested two sampling methods at sites in the Caribbean National Forest, Puerto Rico, and the Ebano Verde Reserve, Dominican Republic. For a plot-based sampling method, 10 …
Lungu, Bwalya; Waltman, W Douglas; Berghaus, Roy D; Hofacre, Charles L
2012-04-01
Conventional culture methods have traditionally been considered the "gold standard" for the isolation and identification of foodborne bacterial pathogens. However, culture methods are labor-intensive and time-consuming. A Salmonella enterica serotype Enteritidis-specific real-time PCR assay that recently received interim approval by the National Poultry Improvement Plan for the detection of Salmonella Enteritidis was evaluated against a culture method that had also received interim National Poultry Improvement Plan approval for the analysis of environmental samples from integrated poultry houses. The method was validated with 422 field samples collected by either the boot sock or drag swab method. The samples were cultured by selective enrichment in tetrathionate broth followed by transfer onto a modified semisolid Rappaport-Vassiliadis medium and then plating onto brilliant green with novobiocin and xylose lysine brilliant Tergitol 4 plates. One-milliliter aliquots of the selective enrichment broths from each sample were collected for DNA extraction by the commercial PrepSEQ nucleic acid extraction assay and analysis by the Salmonella Enteritidis-specific real-time PCR assay. The real-time PCR assay detected no significant differences between the boot sock and drag swab samples. In contrast, the culture method detected a significantly higher number of positive samples from boot socks. The diagnostic sensitivity of the real-time PCR assay for the field samples was significantly higher than that of the culture method. The kappa value obtained was 0.46, indicating moderate agreement between the real-time PCR assay and the culture method. In addition, the real-time PCR method had a turnaround time of 2 days compared with 4 to 8 days for the culture method. 
The higher sensitivity as well as the reduction in time and labor makes this real-time PCR assay an excellent alternative to conventional culture methods for diagnostic purposes, surveillance, and research studies to improve food safety.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxwell, S.; Jones, V.
2009-05-27
A new rapid separation method that allows separation and preconcentration of actinides in urine samples was developed for the measurement of longer lived actinides by inductively coupled plasma mass spectrometry (ICP-MS) and short-lived actinides by alpha spectrometry; a hybrid approach. This method uses stacked extraction chromatography cartridges and vacuum box technology to facilitate rapid separations. Preconcentration, if required, is performed using a streamlined calcium phosphate precipitation. Similar technology has been applied to separate actinides prior to measurement by alpha spectrometry, but this new method has been developed with elution reagents now compatible with ICP-MS as well. Purified solutions are split between ICP-MS and alpha spectrometry so that long- and short-lived actinide isotopes can be measured successfully. The method allows for simultaneous extraction of 24 samples (including QC samples) in less than 3 h. Simultaneous sample preparation can offer significant time savings over sequential sample preparation. For example, sequential sample preparation of 24 samples taking just 15 min each requires 6 h to complete. The simplicity and speed of this new method makes it attractive for radiological emergency response. If preconcentration is applied, the method is applicable to larger sample aliquots for occupational exposures as well. The chemical recoveries are typically greater than 90%, in contrast to other reported methods using flow injection separation techniques for urine samples where plutonium yields were 70-80%. This method allows measurement of both long-lived and short-lived actinide isotopes. 239Pu, 242Pu, 237Np, 243Am, 234U, 235U and 238U were measured by ICP-MS, while 236Pu, 238Pu, 239Pu, 241Am, 243Am and 244Cm were measured by alpha spectrometry.
The method can also be adapted so that the separation of uranium isotopes for assay is not required, if uranium assay by direct dilution of the urine sample is preferred instead. Multiple vacuum box locations may be set up to supply several ICP-MS units with purified sample fractions, such that a high sample throughput may be achieved while still allowing for rapid measurement of short-lived actinides by alpha spectrometry.
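The batch-time arithmetic above (24 sequential preparations at 15 min each versus one simultaneous batch) can be checked directly; the 3-h simultaneous figure is taken from the text:

```python
# Time comparison from the abstract: sequential preparation of 24 samples
# at 15 min each, versus one simultaneous vacuum-box batch of 24 samples
# completed in under 3 h (the abstract's figure).
n_samples = 24
minutes_per_sample = 15

sequential_h = n_samples * minutes_per_sample / 60  # one-at-a-time, in hours
simultaneous_h = 3                                  # batch time, from the text
saving_h = sequential_h - simultaneous_h            # hours saved per batch
```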
Subrandom methods for multidimensional nonuniform sampling.
Worley, Bradley
2016-08-01
Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics. Copyright © 2016 Elsevier Inc. All rights reserved.
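Seed-independent subrandom scheduling can be illustrated with the simplest low-discrepancy construction, the golden-ratio additive recurrence. This is a generic sketch of the idea, not the paper's exact schedule generator, and the uniform (unweighted) grid is an assumption:

```python
# Sketch: pick n_points distinct grid indices from a Nyquist grid using the
# additive recurrence x_k = frac(k / phi). Unlike a pseudorandom generator,
# this sequence needs no seed, so the schedule is fully reproducible.
import math

def subrandom_schedule(n_points, grid_size):
    """Return n_points distinct, sorted indices in [0, grid_size)."""
    phi = (1 + math.sqrt(5)) / 2
    picked, seen, k = [], set(), 1
    while len(picked) < n_points:
        idx = int((k / phi % 1.0) * grid_size)  # frac(k/phi) scaled to grid
        if idx not in seen:                     # skip grid collisions
            seen.add(idx)
            picked.append(idx)
        k += 1
    return sorted(picked)

sched = subrandom_schedule(16, 128)
```

Because the recurrence is deterministic, two runs with the same arguments always produce the same schedule, which is exactly the seed-independence property the abstract motivates.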
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and nonlinearity between the property and spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and the unknown samples; the Euclidean distance between the net analyte signal of an unknown sample and that of each calibration sample was then calculated and used as a similarity index. According to this similarity index, a local calibration set was individually selected for each unknown sample. Finally, a local PLS regression model was built on the local calibration set of each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and of a conventional local regression algorithm based on spectral Euclidean distance.
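The selection step can be sketched as follows. This is a simplified illustration: the distance is computed on raw feature vectors rather than on the net analyte signal, and a local mean stands in for the local PLS model, so only the neighbor-selection logic mirrors the algorithm described above:

```python
# Sketch of local calibration-set selection by Euclidean distance.
# Stand-ins: raw vectors instead of NAS, local mean instead of local PLS.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def local_predict(X_cal, y_cal, x_new, k=5):
    # similarity index: distance from the unknown to every calibration sample
    order = sorted(range(len(X_cal)), key=lambda i: euclidean(X_cal[i], x_new))
    local_set = order[:k]          # the k most similar calibration samples
    # stand-in local model: mean property value of the local calibration set
    return sum(y_cal[i] for i in local_set) / k

# Toy "spectra" (2 features) with a property that varies smoothly with them.
X = [[float(i), float(i) ** 2] for i in range(20)]
y = [2.0 * i for i in range(20)]
pred = local_predict(X, y, [7.2, 7.2 ** 2], k=3)
```

In a real implementation the last step would fit a PLS model on `X_cal[local_set]`, e.g. with scikit-learn's `PLSRegression`, instead of averaging.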
Mavridou, A; Smeti, E; Mandilara, G; Mandilara, G; Boufa, P; Vagiona-Arvanitidou, M; Vantarakis, A; Vassilandonopoulou, G; Pappa, O; Roussia, V; Tzouanopoulos, A; Livadara, M; Aisopou, I; Maraka, V; Nikolaou, E; Mandilara, G
2010-01-01
In this study, ten laboratories in Greece compared the performance of the reference method, TTC Tergitol 7 agar (with the additional test of beta-glucuronidase production), with five alternative methods for detecting E. coli in water, in line with European Water Directive recommendations. The samples were prepared by spiking drinking water with sewage effluent following a standard protocol; both chlorinated and non-chlorinated samples were used. The statistical analysis was based on the mean relative difference of confirmed counts and was performed in line with ISO 17994. The results showed that, in total, three of the alternative methods (Chromocult Coliform agar, Membrane Lauryl Sulfate agar, and Tryptone Bile X-glucuronidase (TBX) medium) did not differ from TTC Tergitol 7 agar (TTC Tergitol 7 agar vs Chromocult Coliform agar, 294 samples, mean RD% 5.55; vs MLSA, 302 samples, mean RD% 1; vs TBX, 297 samples, mean RD% -2.78). The other two alternative methods (Membrane Faecal coliform medium and Colilert-18/Quanti-Tray) gave significantly higher counts than TTC Tergitol 7 agar (TTC Tergitol 7 agar vs MFc, 303 samples, mean RD% 8.81; vs Colilert-18/Quanti-Tray, 76 samples, mean RD% 18.91). In other words, the alternative methods performed as reliably as, or better than, the reference method. This study will help laboratories in Greece overcome culture and counting problems deriving from the EU reference method for E. coli counts in water samples.
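A common formulation of the per-sample relative difference used in ISO 17994-style comparisons is 100 times the natural log of the count ratio. The sketch below uses that assumed formulation with illustrative counts, and ignores the standard's handling of zero counts and its uncertainty limits:

```python
# Sketch of a mean relative difference (RD%) between paired confirmed counts
# from an alternative method (a) and a reference method (b), using the
# log-ratio form RD = 100 * ln(a/b). Counts are illustrative, not the study's.
import math

def mean_relative_difference(counts_a, counts_b):
    rds = [100.0 * math.log(a / b) for a, b in zip(counts_a, counts_b)]
    return sum(rds) / len(rds)

alt = [105, 98, 120, 99]    # alternative-method confirmed counts (illustrative)
ref = [100, 100, 100, 100]  # reference-method confirmed counts (illustrative)
mrd = mean_relative_difference(alt, ref)
```

A positive mean RD% means the alternative method recovers higher counts on average, as reported above for MFc and Colilert-18/Quanti-Tray.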
Evaluation on determination of iodine in coal by energy dispersive X-ray fluorescence
Wang, B.; Jackson, J.C.; Palmer, C.; Zheng, B.; Finkelman, R.B.
2005-01-01
A quick and inexpensive method for determining relatively high iodine concentrations in coal samples was evaluated. Energy dispersive X-ray fluorescence (EDXRF) provided a detection limit of about 14 ppm (3 times the standard deviation of the blank sample), without any complex sample preparation. An analytical relative standard deviation of 16% was readily attainable for coal samples. Under optimum conditions, coal samples with iodine concentrations higher than 5 ppm can be determined using this EDXRF method. For the time being, because the iodine concentrations of most coal samples are below 5 ppm, except for some high-iodine coals, this method cannot yet be used effectively for routine iodine determination. More work is needed before this method can meet the requirements of determining iodine in coal samples. Copyright © 2005 by The Geochemical Society of Japan.
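The "3 times the standard deviation of the blank" detection limit mentioned above is a standard calculation. The sketch below uses invented blank readings and an assumed calibration slope, chosen only to illustrate the formula:

```python
# Sketch: limit of detection (LOD) from replicate blank measurements,
# LOD = 3 * s_blank / slope, where s_blank is the standard deviation of the
# blank signal and slope converts signal to concentration. All numbers are
# illustrative, not the paper's data.
import statistics

def detection_limit(blank_signals, slope):
    """Return the 3-sigma detection limit in concentration units."""
    return 3.0 * statistics.stdev(blank_signals) / slope

blanks = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]  # blank signal (counts)
lod = detection_limit(blanks, slope=0.045)      # counts per ppm (assumed)
```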
Galea, Karen S; McGonagle, Carolyn; Sleeuwenhoek, Anne; Todd, David; Jiménez, Araceli Sánchez
2014-06-01
Dermal exposure to drilling fluids and crude oil is an exposure route of concern. However, there have been no published studies describing sampling methods or reporting dermal exposure measurements. We describe a study that aimed to evaluate a wipe sampling method to assess dermal exposure to an oil-based drilling fluid and crude oil, as well as to investigate the feasibility of using an interception cotton glove sampler for exposure on the hands/wrists. A direct comparison of the wipe and interception methods was also completed using pigs' trotters as a surrogate for human skin and a direct surface contact exposure scenario. Overall, acceptable recovery and sampling efficiencies were reported for both methods, and both methods had satisfactory storage stability at 1 and 7 days, although there appeared to be some loss over 14 days. The methods' comparison study revealed significantly higher removal of both fluids from the metal surface with the glove samples compared with the wipe samples (on average 2.5 times higher). Both evaluated sampling methods were found to be suitable for assessing dermal exposure to oil-based drilling fluids and crude oil; however, the comparison study clearly illustrates that glove samplers may overestimate the amount of fluid transferred to the skin. Further comparison of the two dermal sampling methods using additional exposure situations such as immersion or deposition, as well as a field evaluation, is warranted to confirm their appropriateness and suitability in the working environment. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Le Boedec, Kevin
2016-12-01
According to international guidelines, parametric methods must be chosen for reference interval (RI) construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample sizes. The purpose of this study was to evaluate the performance of normality tests in properly identifying samples extracted from a Gaussian population at small sample sizes, and to assess the consequences for RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. The Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample sizes, normality tests may lead to erroneous use of parametric methods to build RIs. Using nonparametric methods (or alternatively a Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests according to sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
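The failure mode described above can be demonstrated with a small simulation: on lognormal data, the parametric RI (mean ± 1.96 SD) can produce a clinically impossible negative lower bound, while the nonparametric percentile interval cannot. This is a generic illustration of the principle, not a reproduction of the study's 6 RI methods:

```python
# Sketch: parametric vs nonparametric reference intervals on skewed data.
# Parametric RI assumes Gaussian data; applied to a lognormal sample its
# lower limit goes negative, which is impossible for the measured quantity.
import math
import random
import statistics

random.seed(1)
sample = [math.exp(random.gauss(0.0, 0.6)) for _ in range(10000)]  # lognormal

m, s = statistics.mean(sample), statistics.stdev(sample)
ri_param = (m - 1.96 * s, m + 1.96 * s)       # parametric (Gaussian) RI

ordered = sorted(sample)
lo = ordered[int(0.025 * len(ordered))]       # 2.5th percentile
hi = ordered[int(0.975 * len(ordered)) - 1]   # 97.5th percentile
ri_nonparam = (lo, hi)                        # nonparametric RI
```

For a lognormal(0, 0.6) population the true central 95% interval is roughly (0.31, 3.24); the percentile interval lands near that, while the parametric lower limit is negative.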
Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin
2017-08-15
Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an 18O-labeled reference sample created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility of protein identification results across samples. In the present study, a method combining the 18O-reference strategy with a strategy that decouples quantification from identification was investigated using proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy had greater accuracy and reliability than previously used comparison methods based on transferred comparisons or label-free strategies. In the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated, according to retention time and accurate mass, to identify differentially expressed proteins. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave good reproducibility in protein identification across multiple samples, while allowing peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.
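The pooled-reference idea reduces an n-sample comparison to n sample-vs-reference ratios, and any two samples can then be compared through the reference. A minimal numeric sketch, with invented intensities:

```python
# Sketch of the pooled-reference strategy: each sample's intensity is
# ratioed against a common reference pooled from equal amounts of all
# samples, so cross-sample ratios chain through the reference.
# Intensities are illustrative, not experimental data.
samples = {"s1": 200.0, "s2": 300.0, "s3": 100.0}

reference = sum(samples.values()) / len(samples)   # equal-amount pool
ratios = {name: inten / reference for name, inten in samples.items()}
```

The ratio of any two samples' reference-normalized values recovers their direct ratio (here s2/s1 = 1.5), which is why only one labeled reference is needed regardless of sample count.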
Shear Strength of Remoulding Clay Samples Using Different Methods of Moulding
NASA Astrophysics Data System (ADS)
Norhaliza, W.; Ismail, B.; Azhar, A. T. S.; Nurul, N. J.
2016-07-01
Shear strength is required to determine the stability of clay soil. Clay is known as a soil with complex natural formations, and it is very difficult to obtain undisturbed samples at a site. The aim of this paper was to determine the unconfined shear strength of clay remoulded by three different methods: proctor compaction, a hand-operated soil compacter, and a miniature mould. All the samples were remoulded with the same optimum moisture content (OMC) of 18% and density of 1880 kg/m³. The unconfined shear strength of the remoulded clay was 289.56 kPa at 4.8% strain for the proctor compaction method, 261.66 kPa at 4.4% strain for the hand-operated method, and 247.52 kPa at 3.9% strain for the miniature mould method. Relative to the proctor compaction method, the reduction in unconfined shear strength was 9.66% for the hand-operated method and 14.52% for the miniature mould method. Because there was no significant difference in the reduction of unconfined shear strength between the three methods, remoulding clay by the hand-operated method and the miniature mould method is acceptable, and these methods are suggested to future researchers preparing remoulded clay samples. For comparison, the hand-operated method was more suitable for forming remoulded clay samples for unconfined shear strength determination in terms of ease, time savings, and lower energy.
NASA Astrophysics Data System (ADS)
Nasir, N. F.; Mirus, M. F.; Ismail, M.
2017-09-01
Crude glycerol produced from the transesterification reaction has limited usage unless it undergoes purification, as it contains excess methanol, catalyst, and soap. Conventionally, purification of crude glycerol involves high costs and complex processes. This study aimed to determine the effects of two purification methods: a direct method (comprising ion exchange and methanol removal steps) and a multistep method (comprising neutralization, filtration, ion exchange, and methanol removal steps). Two crude glycerol samples were investigated: a sample self-produced through the transesterification of palm oil, and a sample obtained from a biodiesel plant. Samples were analysed using Fourier transform infrared spectroscopy, gas chromatography, and high-performance liquid chromatography. For both samples, the results after purification showed that pure glycerol was successfully produced and fatty acid salts were eliminated. The results also indicated the absence of methanol in both samples after purification. In short, the combination of four purification steps contributed to a higher-quality glycerol. The multistep purification method gave better results than the direct method, as the neutralization and filtration steps helped remove most of the excess salt, fatty acid, and catalyst.
Novel methodology to isolate microplastics from vegetal-rich samples.
Herrera, Alicia; Garrido-Amador, Paloma; Martínez, Ico; Samper, María Dolores; López-Martínez, Juan; Gómez, May; Packard, Theodore T
2018-04-01
Microplastics are small plastic particles distributed globally throughout the oceans. To study them properly, all methodologies for their sampling, extraction, and measurement should be standardized. For heterogeneous samples containing sediments, animal tissues, and zooplankton, several procedures have been described. However, definitive methodologies for samples rich in algae and plant material have not yet been developed. The aim of this study was to find the best extraction protocol for vegetal-rich samples by comparing the efficacies of five previously described digestion methods and a novel density separation method. A protocol using 96% ethanol for density separation was better than the five digestion methods tested, even better than H2O2 digestion. As it was the most efficient, simple, safe, and inexpensive method for isolating microplastics from vegetal-rich samples, we recommend it as a standard separation method. Copyright © 2018 Elsevier Ltd. All rights reserved.
Method and system for laser-based formation of micro-shapes in surfaces of optical elements
Bass, Isaac Louis; Guss, Gabriel Mark
2013-03-05
A method of forming a surface feature extending into a sample includes providing a laser operable to emit an output beam and modulating the output beam to form a pulse train having a plurality of pulses. The method also includes a) directing the pulse train along an optical path intersecting an exposed portion of the sample at a position i and b) focusing a first portion of the plurality of pulses to impinge on the sample at the position i. Each of the plurality of pulses is characterized by a spot size at the sample. The method further includes c) ablating at least a portion of the sample at the position i to form a portion of the surface feature and d) incrementing counter i. The method includes e) repeating steps a) through d) to form the surface feature. The sample is free of a rim surrounding the surface feature.
Fortes, Esther D; David, John; Koeritzer, Bob; Wiedmann, Martin
2013-05-01
There is a continued need to develop improved rapid methods for detection of foodborne pathogens. The aim of this project was to evaluate the 3M Molecular Detection System (3M MDS), which uses isothermal DNA amplification, and the 3M Molecular Detection Assay Listeria, using environmental samples obtained from retail delicatessens and meat, seafood, and dairy processing plants. Environmental sponge samples were tested for Listeria with the 3M MDS after 22 and 48 h of enrichment in 3M Modified Listeria Recovery Broth (3M mLRB); enrichments were also used for cultural detection of Listeria spp. Among 391 samples tested for Listeria, 74 were positive by both the 3M MDS and the cultural method, 310 were negative by both methods, 2 were positive by the 3M MDS and negative by the cultural method, and one sample was negative by the 3M MDS and positive by the cultural method. Four samples were removed from the sample set prior to statistical analyses, due to potential cross-contamination during testing. Listeria isolates from positive samples represented L. monocytogenes, L. innocua, L. welshimeri, and L. seeligeri. Overall, the 3M MDS and culture-based detection after enrichment in 3M mLRB did not differ significantly (at the 0.05 level) in the number of positive samples when chi-square analyses were performed for (i) the number of positive samples after 22 h, (ii) the number of positive samples after 48 h, and (iii) the number of positive samples after 22 and/or 48 h of enrichment in 3M mLRB. Among 288 sampling sites that were tested with duplicate sponges, 67 each tested positive with the 3M MDS and the traditional U.S. Food and Drug Administration Bacteriological Analytical Manual method, further supporting that the 3M MDS performs equivalently to traditional methods when used with environmental sponge samples.
Evaluating Composite Sampling Methods of Bacillus spores at Low Concentrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hess, Becky M.; Amidan, Brett G.; Anderson, Kevin K.
Restoring facility operations after the 2001 Amerithrax attacks took over three months to complete, highlighting the need to reduce remediation time. The most time-intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single-medium single-pass composite: a single cellulose sponge samples multiple coupons; 2) single-medium multi-pass composite: a single cellulose sponge samples multiple coupons with multiple passes; and 3) multi-medium post-sample composite: a single cellulose sponge samples a single surface, and multiple sponges are then combined during sample extraction. Five concentrations of Bacillus atrophaeus Nakamura spores were tested, ranging from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm², respectively). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted wallboard) and three grime-coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p-value < 0.0001) and coupon material (p-value = 0.0008). Recovery efficiency (RE) was higher overall using the post-sample composite (PSC) method than single-medium compositing, for both clean and grime-coated materials. With the PSC method, RE at the concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, painted wallboard, and stainless steel among clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed that RE was significantly higher for vinyl and stainless steel materials, but significantly lower for ceramic tile.
These results suggest that post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event on clean or dirty surfaces.
Feng, Shu; Gale, Michael J; Fay, Jonathan D; Faridi, Ambar; Titus, Hope E; Garg, Anupam K; Michaels, Keith V; Erker, Laura R; Peters, Dawn; Smith, Travis B; Pennesi, Mark E
2015-09-01
To describe a standardized flood-illuminated adaptive optics (AO) imaging protocol suitable for the clinical setting and to assess sampling methods for measuring cone density. Cone density was calculated following three measurement protocols: 50 × 50-μm sampling window values every 0.5° along the horizontal and vertical meridians (fixed-interval method), the mean density of expanding 0.5°-wide arcuate areas in the nasal, temporal, superior, and inferior quadrants (arcuate mean method), and the peak cone density of a 50 × 50-μm sampling window within expanding arcuate areas near the meridian (peak density method). Repeated imaging was performed in nine subjects to determine intersession repeatability of cone density. Cone density montages could be created for 67 of the 74 subjects. Image quality was determined to be adequate for automated cone counting for 35 (52%) of the 67 subjects. We found that cone density varied with different sampling methods and regions tested. In the nasal and temporal quadrants, peak density most closely resembled histological data, whereas the arcuate mean and fixed-interval methods tended to underestimate the density compared with histological data. However, in the inferior and superior quadrants, arcuate mean and fixed-interval methods most closely matched histological data, whereas the peak density method overestimated cone density compared with histological data. Intersession repeatability testing showed that repeatability was greatest when sampling by arcuate mean and lowest when sampling by fixed interval. We show that different methods of sampling can significantly affect cone density measurements. Therefore, care must be taken when interpreting cone density results, even in a normal population.
Yong, Dongeun; Ki, Chang-Seok; Kim, Jae-Seok; Seong, Moon-Woo; Lee, Hyukmin
2016-01-01
Background: Real-time reverse transcription PCR (rRT-PCR) of sputum samples is commonly used to diagnose Middle East respiratory syndrome coronavirus (MERS-CoV) infection. Owing to the difficulty of extracting RNA from sputum containing mucus, sputum homogenization is desirable prior to nucleic acid isolation. We determined optimal homogenization methods for isolating viral nucleic acids from sputum. Methods: We evaluated the following three sputum-homogenization methods: proteinase K and DNase I (PK-DNase) treatment, phosphate-buffered saline (PBS) treatment, and N-acetyl-L-cysteine and sodium citrate (NALC) treatment. Sputum samples were spiked with inactivated MERS-CoV culture isolates. RNA was extracted from pretreated, spiked samples using the easyMAG system (bioMérieux, France). Extracted RNAs were then subjected to rRT-PCR for MERS-CoV diagnosis (DiaPlex Q MERS-coronavirus, SolGent, Korea). Results: While analyzing 15 spiked sputum samples prepared in technical duplicate, false-negative results were obtained with five samples (16.7%) and four samples (13.3%) by the PBS and NALC methods, respectively. The range of threshold cycle (Ct) values observed when detecting upE in sputum samples was 31.1–35.4 with the PK-DNase method, 34.7–39.0 with the PBS method, and 33.9–38.6 with the NALC method. Compared with the control, which was prepared by adding a one-tenth volume of 1:1,000-diluted viral culture to PBS solution, the ranges of Ct values obtained by the PBS and NALC methods differed significantly from the mean control Ct of 33.2 (both P < 0.0001). Conclusions: The PK-DNase method is suitable for homogenizing sputum samples prior to RNA extraction. PMID:27374711
GROUND WATER PURGING AND SAMPLING METHODS: HISTORY VS. HYSTERIA
It has been over 10 years since the low-flow ground water purging and sampling method was initially reported in the literature. The method grew from the recognition that well purging was necessary to collect representative samples, bailers could not achieve well purging, and high...
THE INFLUENCE OF PHYSICAL FACTORS ON COMPARATIVE PERFORMANCE OF SAMPLING METHODS IN LARGE RIVERS
In 1999, we compared five existing benthic macroinvertebrate sampling methods used in boatable rivers. Each sampling protocol was performed at each of 60 sites distributed among four rivers in the Ohio River drainage basin. Initial comparison of methods using key macroinvertebr...
Herrington, Jason S; Fan, Zhi-Hua Tina; Lioy, Paul J; Zhang, Junfeng Jim
2007-01-15
Airborne aldehyde and ketone (carbonyl) sampling methodologies based on derivatization with 2,4-dinitrophenylhydrazine (DNPH)-coated solid sorbents could unequivocally be considered the "gold" standard. Originally developed in the late 1970s, these methods have been extensively evaluated and developed up to the present day. However, these methods have been inadequately evaluated for the long-term (i.e., 24 h or greater) sampling collection efficiency (CE) of carbonyls other than formaldehyde. The current body of literature fails to demonstrate that DNPH-coated solid sorbent sampling methods have acceptable CEs for the long-term sampling of carbonyls other than formaldehyde. Despite this, such methods are widely used to report the concentrations of multiple carbonyls from long-term sampling, assuming approximately 100% CEs. Laboratory experiments were conducted in this study to evaluate the long-term formaldehyde and acetaldehyde sampling CEs for several commonly used DNPH-coated solid sorbents. Results from sampling known concentrations of formaldehyde and acetaldehyde generated in a dynamic atmosphere generation system demonstrate that the 24-hour formaldehyde sampling CEs ranged from 83 to 133%, confirming the findings made in previous studies. However, the 24-hour acetaldehyde sampling CEs ranged from 1 to 62%. Attempts to increase the acetaldehyde CEs by adding acid to the samples post-sampling were unsuccessful. These results indicate that assuming approximately 100% CEs for 24-hour acetaldehyde sampling, as commonly done with DNPH-coated solid sorbent methods, would substantially underestimate acetaldehyde concentrations.
[Respondent-Driven Sampling: a new sampling method to study visible and hidden populations].
Mantecón, Alejandro; Juan, Montse; Calafat, Amador; Becoña, Elisardo; Román, Encarna
2008-01-01
The paper introduces a variant of chain-referral sampling: respondent-driven sampling (RDS). This sampling method combines network-based recruitment with the statistical validity of standard probability sampling methods. In this sense, RDS appears to be a mathematical improvement of snowball sampling oriented to the study of hidden populations. However, we try to prove its validity with populations that are not within a sampling frame but can nonetheless be contacted without difficulty. The basics of RDS are explained through our research on young people (aged 14 to 25) who go clubbing, consume alcohol and other drugs, and have sex. Fieldwork was carried out between May and July 2007 in three Spanish regions: Baleares, Galicia and Comunidad Valenciana. The presentation of the study shows the utility of this type of sampling when the population is accessible but lacks a sampling frame. However, the sample obtained is not a statistically representative random sample of the target population. It must be acknowledged that the final sample is representative of a 'pseudo-population' that approximates the target population but is not identical to it.
Systems and methods for separating particles and/or substances from a sample fluid
Mariella, Jr., Raymond P.; Dougherty, George M.; Dzenitis, John M.; Miles, Robin R.; Clague, David S.
2016-11-01
Systems and methods for separating particles and/or toxins from a sample fluid. A method according to one embodiment comprises simultaneously passing a sample fluid and a buffer fluid through a chamber such that a fluidic interface is formed between the sample fluid and the buffer fluid as the fluids pass through the chamber, the sample fluid having particles of interest therein; applying a force to the fluids for urging the particles of interest to pass through the interface into the buffer fluid; and substantially separating the buffer fluid from the sample fluid.
Detecting the sampling rate through observations
NASA Astrophysics Data System (ADS)
Shoji, Isao
2018-09-01
This paper proposes a method to detect the sampling rate of discrete time series of diffusion processes. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rates from data, and their results show good performance in the detection. In addition, the method is applied to a financial time series sampled on a daily basis and shows that the detected sampling rate differs from the conventional rate.
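The paper's criterion is based on the Kullback-Leibler divergence; as a simplified, hypothetical illustration of the same idea (recovering the sampling interval from fitted discrete-time dynamics), the sketch below assumes an Ornstein-Uhlenbeck process with known mean-reversion rate θ and backs out the interval Δ from the fitted AR(1) coefficient e^(-θΔ):

```python
import numpy as np

def simulate_ou(theta, sigma, dt, n, seed=0):
    """Simulate an Ornstein-Uhlenbeck process by exact discretization."""
    rng = np.random.default_rng(seed)
    a = np.exp(-theta * dt)                        # AR(1) coefficient
    s = sigma * np.sqrt((1 - a**2) / (2 * theta))  # innovation std. dev.
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + s * rng.standard_normal()
    return x

def estimate_dt(x, theta):
    """Back out the sampling interval from the fitted AR(1) coefficient."""
    a_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])  # OLS slope
    return -np.log(a_hat) / theta

theta, sigma, true_dt = 1.0, 0.5, 0.1
x = simulate_ou(theta, sigma, true_dt, 50_000)
dt_hat = estimate_dt(x, theta)
```

With a long enough series, the recovered interval is close to the true one; the paper's method instead compares candidate rates via a KL-divergence criterion rather than assuming θ known.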
Probe Heating Method for the Analysis of Solid Samples Using a Portable Mass Spectrometer
Kumano, Shun; Sugiyama, Masuyuki; Yamada, Masuyoshi; Nishimura, Kazushige; Hasegawa, Hideki; Morokuma, Hidetoshi; Inoue, Hiroyuki; Hashimoto, Yuichiro
2015-01-01
We previously reported on the development of a portable mass spectrometer for the onsite screening of illicit drugs, but our previous sampling system could only be used for liquid samples. In this study, we report on an attempt to develop a probe heating method that also permits solid samples to be analyzed using a portable mass spectrometer. An aluminum rod is used as the sampling probe. The powdered sample is affixed to the sampling probe or a droplet of sample solution is placed on the tip of the probe and dried. The probe is then placed on a heater to vaporize the sample. The vapor is then introduced into the portable mass spectrometer and analyzed. With the heater temperature set to 130°C, the developed system detected 1 ng of methamphetamine, 1 ng of amphetamine, 3 ng of 3,4-methylenedioxymethamphetamine, 1 ng of 3,4-methylenedioxyamphetamine, and 0.3 ng of cocaine. Even from mixtures consisting of clove powder and methamphetamine powder, methamphetamine ions were detected by tandem mass spectrometry. The developed probe heating method provides a simple method for the analysis of solid samples. A portable mass spectrometer incorporating this method would thus be useful for the onsite screening of illicit drugs. PMID:26819909
Filla, Robert T; Schrell, Adrian M; Coulton, John B; Edwards, James L; Roper, Michael G
2018-02-20
A method for multiplexed sample analysis by mass spectrometry without the need for chemical tagging is presented. In this new method, each sample is pulsed at a unique frequency, mixed, and delivered to the mass spectrometer while maintaining a constant total flow rate. Reconstructed ion currents are then a time-dependent signal consisting of the sum of the ion currents from the various samples. Spectral deconvolution of each reconstructed ion current reveals the identity of each sample, encoded by its unique frequency, and its concentration, encoded by the peak height in the frequency domain. This technique is different from other approaches that have been described, which have used modulation techniques to increase the signal-to-noise ratio of a single sample. As proof of concept of this new method, two samples containing up to 9 analytes were multiplexed. The linear dynamic range of the calibration curve was increased with extended acquisition times of the experiment and longer oscillation periods of the samples. Because of the combination of the samples, salt had little effect on the ability of this method to achieve relative quantitation. Continued development of this method is expected to allow for increased numbers of samples that can be multiplexed.
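The frequency-encoding idea can be sketched with synthetic data: two "ion currents" oscillate at their assigned frequencies, are summed, and the amplitude spectrum recovers each sample's contribution at its own frequency (all numbers are illustrative, not from the paper):

```python
import numpy as np

fs, T = 100.0, 200.0                      # sampling rate (Hz) and duration (s)
t = np.arange(0, T, 1 / fs)
f1, f2 = 0.5, 1.3                         # unique pulsing frequency per sample
c1, c2 = 3.0, 7.0                         # relative concentrations (amplitudes)

# Summed ion current: each sample contributes at its own frequency
signal = c1 * np.sin(2 * np.pi * f1 * t) + c2 * np.sin(2 * np.pi * f2 * t)

# One-sided amplitude spectrum; peak heights encode the concentrations
spec = np.abs(np.fft.rfft(signal)) * 2 / len(t)
freqs = np.fft.rfftfreq(len(t), d=1 / fs)

a1 = spec[np.argmin(np.abs(freqs - f1))]
a2 = spec[np.argmin(np.abs(freqs - f2))]
```

Because each frequency falls on an exact FFT bin here, the recovered peak heights equal the original amplitudes; in practice windowing and noise broaden the peaks.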
Zheng, Lu; Gao, Naiyun; Deng, Yang
2012-01-01
It is difficult to isolate DNA from biological activated carbon (BAC) samples used in water treatment plants, owing to the scarcity of microorganisms in BAC samples. The aim of this study was to identify DNA extraction methods suitable for a long-term, comprehensive ecological analysis of BAC microbial communities. To identify a procedure that can produce high molecular weight DNA, maximizes detectable diversity and is relatively free from contaminants, the microwave extraction method, the cetyltrimethylammonium bromide (CTAB) extraction method, a commercial DNA extraction kit, and the ultrasonic extraction method were used for the extraction of DNA from BAC samples. Spectrophotometry, agarose gel electrophoresis and polymerase chain reaction (PCR)-restriction fragment length polymorphisms (RFLP) analysis were conducted to compare the yield and quality of DNA obtained using these methods. The results showed that the CTAB method produce the highest yield and genetic diversity of DNA from BAC samples, but DNA purity was slightly less than that obtained with the DNA extraction-kit method. This study provides a theoretical basis for establishing and selecting DNA extraction methods for BAC samples.
NASA Astrophysics Data System (ADS)
Lusiana, Evellin Dewi
2017-12-01
The parameters of a binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or more independent variables exactly group the categories of the binary response. As a result, the MLE estimators fail to converge and cannot be used in modeling. One approach to resolving separation is to use Firth's method instead. This research has two aims: first, to compare the chance of separation occurring in a binary probit regression model between the MLE method and Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined by simulation under different sample sizes. The results showed that the chance of separation occurring under the MLE method is higher than under Firth's approach for small sample sizes; for larger sample sizes, the probability decreases and is nearly identical between the two methods. Meanwhile, Firth's estimators have smaller RMSEs than the MLEs, especially for smaller sample sizes, while for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
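Firth's approach maximizes the likelihood penalized by the Jeffreys prior, ℓ(β) + ½ log det I(β), which keeps the estimate finite under separation. A minimal numerical sketch for probit on a toy, perfectly separated dataset (not the paper's simulation design) is:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy data with complete separation: y is 1 exactly when x > 0,
# so the ordinary probit MLE diverges (|beta| -> infinity).
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])
X = np.column_stack([np.ones_like(x), x])   # intercept + slope

def neg_penalized_loglik(beta):
    eta = X @ beta
    p = np.clip(norm.cdf(eta), 1e-10, 1 - 1e-10)
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    # Probit Fisher-information weights: phi(eta)^2 / (Phi * (1 - Phi))
    w = np.clip(norm.pdf(eta) ** 2 / (p * (1 - p)), 1e-10, None)
    # Jeffreys penalty: +0.5 * log det of the Fisher information X' W X
    sign, logdet = np.linalg.slogdet(X.T @ (w[:, None] * X))
    return -(loglik + 0.5 * logdet)

fit = minimize(neg_penalized_loglik, x0=np.zeros(2), method="Nelder-Mead")
intercept, slope = fit.x
```

Unlike the MLE, whose objective keeps improving as the slope grows without bound, the penalized objective has a finite maximizer, so the fitted slope stays moderate.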
Valid statistical inference methods for a case-control study with missing data.
Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun
2018-04-01
The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random, by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion of the Wald test for testing independence under the two existing sampling distributions could be completely different (even contradictory) from that of the Wald test for testing the equality of the success probabilities in control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.
Krämer, Nadine; Löfström, Charlotta; Vigre, Håkan; Hoorfar, Jeffrey; Bunge, Cornelia; Malorny, Burkhard
2011-03-01
Salmonella is a major zoonotic pathogen which causes outbreaks and sporadic cases of gastroenteritis in humans worldwide. The primary sources for Salmonella are food-producing animals such as pigs and poultry. For risk assessment and hazard analysis and critical control point (HACCP) concepts, it is essential to produce large amounts of quantitative data, which is currently not achievable with the standard culture-based methods for enumeration of Salmonella. This study presents the development of a novel strategy to enumerate low numbers of Salmonella in cork borer samples taken from pig carcasses, as a first concept and proof of principle for a new sensitive and rapid quantification method based on combined enrichment and real-time PCR. The novelty of the approach lies in the short pre-enrichment step, during which most bacteria are in log-phase growth. The method consists of an 8 h pre-enrichment of the cork borer sample diluted 1:10 in non-selective buffered peptone water, followed by DNA extraction, and Salmonella detection and quantification by real-time PCR. The limit of quantification was 1.4 colony forming units (CFU)/20 cm(2) (approximately 10 g) of artificially contaminated sample, with a 95% confidence interval of ± 0.7 log CFU/sample. The precision was similar to that of the standard reference most probable number (MPN) method. A screening of 200 potentially naturally contaminated cork borer samples obtained over seven weeks in a slaughterhouse resulted in 25 Salmonella-positive samples. The analysis of salmonellae within these samples showed that the PCR method had a higher sensitivity for samples with a low contamination level (<6.7 CFU/sample): 15 of the samples negative by the MPN method were detected by the PCR method, and 5 were found to be negative by both methods. For the samples with a higher contamination level (6.7-310 CFU/sample), good agreement between the results obtained with the PCR and MPN methods was obtained.
The quantitative real-time PCR method can easily be applied to other food and environmental matrices by adaptation of the pre-enrichment time and media. Copyright © 2010 Elsevier B.V. All rights reserved.
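Quantification from a combined enrichment/real-time PCR assay typically passes through a log-linear standard curve relating the threshold cycle (Ct) to the starting amount; the sketch below uses hypothetical curve values, not the paper's calibration:

```python
import numpy as np

# Hypothetical standard curve from spiked samples of known counts:
# Ct falls linearly with log10(CFU); a ~3.3-cycle drop per 10-fold
# increase corresponds to near-100% PCR efficiency.
log10_cfu_std = np.array([0.5, 1.5, 2.5, 3.5])
ct_std = np.array([38.1, 34.8, 31.5, 28.2])

slope, intercept = np.polyfit(log10_cfu_std, ct_std, 1)

def cfu_from_ct(ct):
    """Invert the standard curve: estimated CFU per sample from a Ct value."""
    return 10 ** ((ct - intercept) / slope)
```

For example, a sample measured at Ct = 31.5 maps back to 10^2.5 ≈ 316 CFU under this hypothetical curve.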
Comparisons of discrete and integrative sampling accuracy in estimating pulsed aquatic exposures.
Morrison, Shane A; Luttbeg, Barney; Belden, Jason B
2016-11-01
Most current-use pesticides have short half-lives in the water column, and thus the most relevant exposure scenarios for many aquatic organisms are pulsed exposures. Quantifying exposure using discrete water samples may not be accurate, as few studies are able to sample frequently enough to accurately determine time-weighted average (TWA) concentrations of short aquatic exposures. Integrative sampling methods that continuously sample freely dissolved contaminants over time intervals (such as integrative passive samplers) have been demonstrated to be a promising measurement technique. We conducted several modeling scenarios to test the assumption that integrative methods may require far fewer samples for accurate estimation of peak 96-h TWA concentrations. We compared the accuracies of discrete point samples and integrative samples while varying sampling frequencies across a range of contaminant water half-lives (t50 = 0.5, 2, and 8 d). Differences in the predictive accuracy of discrete point samples and integrative samples were greatest at low sampling frequencies. For example, when the half-life was 0.5 d, discrete point samples required 7 sampling events to ensure median values > 50% and no sampling events reporting highly inaccurate results (defined as < 10% of the true 96-h TWA). Across all water half-lives investigated, integrative sampling required only two samples to prevent highly inaccurate results and to yield median values > 50% of the true concentration. Regardless, the need for integrative sampling diminished as water half-life increased. For an 8-d water half-life, two discrete samples produced accurate estimates and median values greater than those obtained for two integrative samples. Overall, integrative methods are the more accurate method for monitoring contaminants with short water half-lives due to the reduced frequency of extreme values, especially with uncertainties around the timing of pulsed events.
However, the acceptability of discrete sampling methods for providing accurate concentration measurements increases with increasing aquatic half-lives. Copyright © 2016 Elsevier Ltd. All rights reserved.
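The contrast between discrete and integrative estimates of a 96-h TWA can be sketched for a first-order decay pulse (illustrative numbers, not the paper's model):

```python
import numpy as np

half_life_d = 0.5
k = np.log(2) / half_life_d                  # first-order decay rate (1/day)
t = np.linspace(0.0, 4.0, 10_001)            # 96 h = 4 days, fine grid
conc = 100.0 * np.exp(-k * t)                # pulse decaying from 100 units

true_twa = conc.mean()                       # "true" 96-h time-weighted average

# Discrete sampling: three point measurements, simply averaged.
# These miss the early peak, so the estimate is biased low.
discrete_est = np.mean(100.0 * np.exp(-k * np.array([1.0, 2.0, 3.0])))

# Integrative sampling: continuous averages over two back-to-back
# 2-day deployments, then averaged together.
integrative_est = 0.5 * (conc[t <= 2].mean() + conc[t >= 2].mean())
```

The two integrative deployments together cover the whole window, so their average essentially reproduces the true TWA, whereas the three discrete samples substantially underestimate it for this short half-life.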
Inoue, Hiroaki; Takama, Tomoko; Yoshizaki, Miwa; Agata, Kunio
2015-01-01
We detected Legionella species in 111 bath water samples and 95 cooling tower water samples by using a combination of conventional plate culture, quantitative polymerase chain reaction (qPCR) and qPCR combined with ethidium monoazide treatment (EMA-qPCR) methods. In the case of bath water samples, Legionella spp. were detected in 30 samples by plate culture, in 85 samples by qPCR, and in 49 samples by EMA-qPCR. Of 81 samples determined to be Legionella-negative by plate culture, 56 and 23 samples were positive by qPCR and EMA-qPCR, respectively. Therefore, EMA treatment decreased the number of Legionella-positive bath water samples detected by qPCR. In contrast, EMA treatment had no effect on cooling tower water samples. We therefore expect that EMA-qPCR is a useful method for the rapid detection of viable Legionella spp. from bath water samples.
Rosing, H.; Hillebrand, M. J. X.; Blesson, S.; Mengesha, B.; Diro, E.; Hailu, A.; Schellens, J. H. M.; Beijnen, J. H.
2016-01-01
To facilitate future pharmacokinetic studies of combination treatments against leishmaniasis in remote regions in which the disease is endemic, a simple cheap sampling method is required for miltefosine quantification. The aims of this study were to validate a liquid chromatography-tandem mass spectrometry method to quantify miltefosine in dried blood spot (DBS) samples and to validate its use with Ethiopian patients with visceral leishmaniasis (VL). Since hematocrit (Ht) levels are typically severely decreased in VL patients, returning to normal during treatment, the method was evaluated over a range of clinically relevant Ht values. Miltefosine was extracted from DBS samples using a simple method of pretreatment with methanol, resulting in >97% recovery. The method was validated over a calibration range of 10 to 2,000 ng/ml, and accuracy and precision were within ±11.2% and ≤7.0% (≤19.1% at the lower limit of quantification), respectively. The method was accurate and precise for blood spot volumes between 10 and 30 μl and for Ht levels of 20 to 35%, although a linear effect of Ht levels on miltefosine quantification was observed in the bioanalytical validation. DBS samples were stable for at least 162 days at 37°C. Clinical validation of the method using paired DBS and plasma samples from 16 VL patients showed a median observed DBS/plasma miltefosine concentration ratio of 0.99, with good correlation (Pearson's r = 0.946). Correcting for patient-specific Ht levels did not further improve the concordance between the sampling methods. This successfully validated method to quantify miltefosine in DBS samples was demonstrated to be a valid and practical alternative to venous blood sampling that can be applied in future miltefosine pharmacokinetic studies with leishmaniasis patients, without Ht correction. PMID:26787691
Methods for sample size determination in cluster randomized trials
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-01-01
Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
Improved Sampling Method Reduces Isokinetic Sampling Errors.
ERIC Educational Resources Information Center
Karels, Gale G.
The particulate sampling system currently in use by the Bay Area Air Pollution Control District, San Francisco, California is described in this presentation for the 12th Conference on Methods in Air Pollution and Industrial Hygiene Studies, University of Southern California, April, 1971. The method represents a practical, inexpensive tool that can…
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2011 CFR
2011-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
Investigations at hazardous waste sites and sites of chemical spills often require on-site measurements and sampling activities to assess the type and extent of contamination. This document is a compilation of sampling methods and materials suitable to address most needs that ari...
Absolute method of measuring magnetic susceptibility
Thorpe, A.; Senftle, F.E.
1959-01-01
An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. Because the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample, the technique offers a simple way of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
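The stated relation, susceptibility as a function of the area under the displacement-versus-distance curve, can be sketched numerically; the curve shape and calibration constant below are hypothetical, standing in for the method's actual theory:

```python
import numpy as np

# Hypothetical measurement: sample displacement recorded as the magnet
# is moved away from the sample.
distance_mm = np.linspace(0.0, 10.0, 101)
displacement_um = 50.0 * np.exp(-0.5 * distance_mm)   # illustrative shape

# Trapezoidal area under the displacement-vs-distance curve
area = np.sum((displacement_um[1:] + displacement_um[:-1]) / 2.0
              * np.diff(distance_mm))

K = 2.0e-8   # hypothetical calibration constant from the method's theory
chi = K * area
```

The point of the sketch is only the numerical integration step; the actual proportionality constant follows from the Faraday-method force balance in the paper.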
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
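A sketch of the standard asymptotic sample-size formula for paired proportions (pairs needed for McNemar's test, given assumed discordant-cell probabilities p10 and p01; the inputs here are illustrative, not the paper's examples):

```python
import math
from scipy.stats import norm

def mcnemar_pairs(p10, p01, alpha=0.05, power=0.80):
    """Asymptotic sample size in pairs for McNemar's test, given the
    two discordant cell probabilities of the 2 x 2 table."""
    psi = p10 + p01                 # total discordant probability
    delta = p10 - p01               # difference in marginal proportions
    za = norm.ppf(1 - alpha / 2)
    zb = norm.ppf(power)
    n = (za * math.sqrt(psi) + zb * math.sqrt(psi - delta ** 2)) ** 2 / delta ** 2
    return math.ceil(n)

n_pairs = mcnemar_pairs(p10=0.25, p01=0.10)
```

For p10 = 0.25, p01 = 0.10 at two-sided α = 0.05 and 80% power, this gives 120 pairs.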
Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella
2016-12-09
Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset consisting of four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features, reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to address the problem of imbalanced datasets. An ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps. The first step (sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlapping that is produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results proved that the proposed model performed well in classifying unknown samples according to all toxic effects in the imbalanced datasets.
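Random Over-Sampling, one of the baselines compared in the study (ITS itself is the authors' method and is not reproduced here), can be sketched as:

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows at random until classes are balanced."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    n_extra = counts.max() - counts.min()
    idx = np.flatnonzero(y == minority)
    extra = rng.choice(idx, size=n_extra, replace=True)
    keep = np.concatenate([np.arange(len(y)), extra])
    return X[keep], y[keep]

X = np.arange(20, dtype=float).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)            # imbalanced: 8 vs 2
Xb, yb = random_oversample(X, y)
```

Duplicating minority rows balances the class prior but can promote overfitting to the repeated points, which is one of the limitations ITS is designed to avoid.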
[Recent advances in sample preparation methods of plant hormones].
Wu, Qian; Wang, Lus; Wu, Dapeng; Duan, Chunfeng; Guan, Yafeng
2014-04-01
Plant hormones are a group of naturally occurring trace substances that play a crucial role in controlling plant development, growth and environmental response. With the development of chromatography and mass spectrometry techniques, chromatographic analysis has become a widely used approach for plant hormone analysis. Among the steps of chromatographic analysis, sample preparation is undoubtedly the most vital one. Thus, a highly selective and efficient sample preparation method is critical for accurate identification and quantification of phytohormones. For the three major kinds of plant hormones, including acidic and basic plant hormones, brassinosteroids, and plant polypeptides, sample preparation methods are reviewed in sequence, with emphasis on recently developed methods. The review covers novel methods, devices, extraction materials and derivatization reagents for sample preparation in phytohormone analysis, including some related work from our group. Finally, future developments in this field are discussed.
[Standard sample preparation method for quick determination of trace elements in plastic].
Yao, Wen-Qing; Zong, Rui-Long; Zhu, Yong-Fa
2011-08-01
A reference sample was prepared by the masterbatch method, containing heavy metals at known concentrations representative of electronic information products (plastic); its repeatability and precision were determined, and reference-sample preparation procedures were established. X-ray fluorescence spectroscopy (XRF) analysis was used to determine the repeatability and uncertainty in the analysis of heavy metals and bromine in the sample. A working curve and measurement methods for the reference sample were established. The results showed that the method, in the 200-2000 mg x kg(-1) concentration range for Hg, Pb, Cr and Br, and in the 20-200 mg x kg(-1) range for Cd, exhibited a very good linear relationship, and the repeatability of the analysis over six replicates was good. In testing the circuit boards ICB288G and ICB288 from Mitsubishi Heavy Industries, the results agreed with the recommended values.
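Repeatability over replicate XRF readings is commonly summarized as the relative standard deviation; a sketch with illustrative numbers (not the paper's data):

```python
import numpy as np

# Six replicate XRF readings of the same reference sample (illustrative,
# in mg/kg); repeatability is reported as the relative standard deviation.
replicates = np.array([498.0, 502.0, 495.0, 505.0, 500.0, 501.0])

mean = replicates.mean()
sd = replicates.std(ddof=1)       # sample standard deviation
rsd_percent = 100.0 * sd / mean
```

An RSD well under a few percent across replicates is the kind of result that supports calling the method's repeatability "good".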
A study of active learning methods for named entity recognition in clinical text.
Chen, Yukun; Lasko, Thomas A; Mei, Qiaozhu; Denny, Joshua C; Xu, Hua
2015-12-01
Named entity recognition (NER), a sequential labeling task, is one of the fundamental tasks for building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance, but they often require large amounts of annotated samples, which are expensive to build due to the requirement of domain experts in annotation. Active learning (AL), a sample selection approach integrated with supervised ML, aims to minimize the annotation cost while maximizing the performance of ML-based models. In this study, our goal was to develop and evaluate both existing and new AL methods for a clinical NER task to identify concepts of medical problems, treatments, and lab tests from the clinical notes. Using the annotated NER corpus from the 2010 i2b2/VA NLP challenge that contained 349 clinical documents with 20,423 unique sentences, we simulated AL experiments using a number of existing and novel algorithms in three different categories including uncertainty-based, diversity-based, and baseline sampling strategies. They were compared with passive learning, which uses random sampling. Learning curves that plot performance of the NER model against the estimated annotation cost (based on number of sentences or words in the training set) were generated to evaluate the different active learning methods and the passive learning method, and the area under the learning curve (ALC) score was computed. Based on the learning curves of F-measure vs. number of sentences, uncertainty sampling algorithms outperformed all other methods in ALC. Most diversity-based methods also performed better than random sampling in ALC. To achieve an F-measure of 0.80, the best method based on uncertainty sampling could save 66% annotations in sentences, as compared to random sampling. For the learning curves of F-measure vs. number of words, uncertainty sampling methods again outperformed all other methods in ALC.
To achieve an F-measure of 0.80, the best uncertainty-based method saved 42% of annotations in words compared to random sampling, whereas the best diversity-based method reduced annotation effort by only 7%. In the simulated setting, AL methods, particularly uncertainty-sampling-based approaches, appeared to substantially reduce annotation cost for the clinical NER task. The actual benefit of active learning in clinical NER should be further evaluated in a real-time setting. Copyright © 2015 Elsevier Inc. All rights reserved.
Observational studies of patients in the emergency department: a comparison of 4 sampling methods.
Valley, Morgan A; Heard, Kennon J; Ginde, Adit A; Lezotte, Dennis C; Lowenstein, Steven R
2012-08-01
We evaluate the ability of 4 sampling methods to generate representative samples of the emergency department (ED) population. We analyzed the electronic records of 21,662 consecutive patient visits at an urban, academic ED. From this population, we simulated different models of study recruitment in the ED by using 2 sample sizes (n=200 and n=400) and 4 sampling methods: true random, random 4-hour time blocks by exact sample size, random 4-hour time blocks by a predetermined number of blocks, and convenience or "business hours." For each method and sample size, we obtained 1,000 samples from the population. Using χ² tests, we measured the number of statistically significant differences between the sample and the population for 8 variables (age, sex, race/ethnicity, language, triage acuity, arrival mode, disposition, and payer source). Then, for each variable, method, and sample size, we compared the proportion of the 1,000 samples that differed from the overall ED population to the expected proportion (5%). Only the true random samples represented the population with respect to sex, race/ethnicity, triage acuity, mode of arrival, language, and payer source in at least 95% of the samples. Patient samples obtained using random 4-hour time blocks and business hours sampling systematically differed from the overall ED patient population for several important demographic and clinical variables. However, the magnitude of these differences was not large. Common sampling strategies selected for ED-based studies may affect parameter estimates for several representative population variables. However, the potential for bias for these variables appears small. Copyright © 2012. Published by Mosby, Inc.
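The time-block bias the study describes can be sketched with a toy simulation. Everything below is hypothetical (synthetic arrival hours, a made-up acuity rate that is higher overnight, and the helper names `chi2_stat`/`counts`); it only illustrates why a "business hours" convenience sample drifts from the population while a true random sample does not:

```python
import random

def chi2_stat(observed, expected_props):
    """Pearson chi-square statistic of observed counts vs. expected proportions."""
    n = sum(observed)
    return sum((o - n * p) ** 2 / (n * p) for o, p in zip(observed, expected_props))

rng = random.Random(1)

# Synthetic ED population: arrival hour, plus a binary "high acuity" flag
# that is (hypothetically) more common overnight than during business hours.
pop = []
for _ in range(20000):
    hour = rng.randrange(24)
    high = rng.random() < (0.5 if hour < 8 or hour >= 18 else 0.2)
    pop.append((hour, high))

p_high = sum(h for _, h in pop) / len(pop)
props = [p_high, 1.0 - p_high]

def counts(sample):
    """Counts of high- and low-acuity visits in a sample."""
    k = sum(h for _, h in sample)
    return [k, len(sample) - k]

# True random sample vs. convenience ("business hours", 08:00-18:00) sample.
random_sample = rng.sample(pop, 400)
business_sample = rng.sample([rec for rec in pop if 8 <= rec[0] < 18], 400)

stat_random = chi2_stat(counts(random_sample), props)
stat_business = chi2_stat(counts(business_sample), props)
# stat_random hovers near the df=1 expectation, while stat_business lands far
# above the 5% critical value of 3.84, flagging the systematic acuity bias.
```

The same comparison repeated over many drawn samples is essentially what the study's "proportion of 1,000 samples differing from the population" measure captures.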
Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min
2016-07-22
One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including mercury intrusion, helium injection, and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods differ in theoretical basis, means of measurement, and sources of measurement error. Here, we present a set of porosities measured in Berea Sandstone samples by multiple methods, in particular with the adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlation among the different methods, suggesting that they are mutually compatible. The new method of reference-sample-guided computed tomography is more effective than the previous methods when accompanying merits such as experimental convenience are taken into account.
An Overview of Conventional and Emerging Analytical Methods for the Determination of Mycotoxins
Cigić, Irena Kralj; Prosen, Helena
2009-01-01
Mycotoxins are a group of compounds produced by various fungi and excreted into the matrices on which they grow, often food intended for human consumption or animal feed. The high toxicity and carcinogenicity of these compounds and their ability to cause various pathological conditions has led to widespread screening of foods and feeds potentially polluted with them. Maximum permissible levels in different matrices have also been established for some toxins. As these are quite low, analytical methods for determination of mycotoxins have to be both sensitive and specific. In addition, an appropriate sample preparation and pre-concentration method is needed to isolate analytes from rather complicated samples. In this article, an overview of methods for analysis and sample preparation published in the last ten years is given for the most often encountered mycotoxins in different samples, mainly in food. Special emphasis is on liquid chromatography with fluorescence and mass spectrometric detection, while in the field of sample preparation various solid-phase extraction approaches are discussed. However, an overview of other analytical and sample preparation methods less often used is also given. Finally, different matrices where mycotoxins have to be determined are discussed with the emphasis on their specific characteristics important for the analysis (human food and beverages, animal feed, biological samples, environmental samples). Various issues important for accurate qualitative and quantitative analyses are critically discussed: sampling and choice of representative sample, sample preparation and possible bias associated with it, specificity of the analytical method and critical evaluation of results.
Usefulness of in-house PCR methods for hepatitis B virus DNA detection.
Portilho, Moyra Machado; Baptista, Marcia Leite; da Silva, Messias; de Sousa, Paulo Sérgio Fonseca; Lewis-Ximenez, Lia Laura; Lampe, Elisabeth; Villar, Livia Melo
2015-10-01
The aim of the present study was to evaluate the performance of three in-house PCR techniques for HBV DNA detection and compare it with commercial quantitative methods to evaluate the usefulness of in-house methods for HBV diagnosis. Three panels of HBsAg reactive sera samples were evaluated: (i) 50 samples were examined using three methods for in-house qualitative PCR and the Cobas Amplicor HBV Monitor Assay; (ii) 87 samples were assayed using in-house semi-nested PCR and the Cobas TaqMan HBV test; (iii) 11 serial samples obtained from 2 HBV-infected individuals were assayed using the Cobas Amplicor HBV test and semi-nested PCR. In panel I, HBV DNA was detected in 44 samples using the Cobas Amplicor HBV test, 42 samples using semi-nested PCR (90% concordance with Cobas Amplicor), 22 samples using PCR for the core gene (63.6% concordance) and 29 samples using single-round PCR for the pre-S/S gene (75% concordance). In panel II, HBV DNA was quantified in 78 of the 87 HBsAg reactive samples using Cobas TaqMan but 52 samples using semi-nested PCR (67.8% concordance). HBV DNA was detected in serial samples until the 17th and 26th week after first donation using in-house semi-nested PCR and the Cobas Amplicor HBV test, respectively. In-house semi-nested PCR presented adequate concordance with commercial methods as an alternative method for HBV molecular diagnosis in low-resource settings. Copyright © 2015 Elsevier B.V. All rights reserved.
Rosenblum, Michael A; Laan, Mark J van der
2009-01-07
The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
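A Bernstein-type interval of the kind described above can be sketched as follows. This is a minimal illustration, not the authors' exact construction: it rescales observations to [0, 1], uses the worst-case variance bound v = 1/4, and inverts the two-sided Bernstein tail bound for the half-width; the function name and defaults are my own.

```python
import math

def bernstein_ci(sample, lo=0.0, hi=1.0, alpha=0.05):
    """Finite-sample confidence interval for the mean of a variable bounded
    in [lo, hi], valid for every n: invert the two-sided Bernstein bound
    P(|mean - mu| >= t) <= 2 exp(-n t^2 / (2 v + 2 t / 3)) with the
    worst-case variance v = 1/4 after rescaling observations to [0, 1]."""
    n = len(sample)
    xs = [(x - lo) / (hi - lo) for x in sample]
    mean = sum(xs) / n
    v = 0.25
    big_l = math.log(2.0 / alpha)
    # Setting the bound equal to alpha gives n t^2 - (2 L / 3) t - 2 v L = 0,
    # a quadratic in the half-width t; take the positive root.
    t = ((2 * big_l / 3)
         + math.sqrt((2 * big_l / 3) ** 2 + 8 * n * v * big_l)) / (2 * n)
    return lo + (mean - t) * (hi - lo), lo + (mean + t) * (hi - lo)

# 200 bounded observations with mean 0.5: the interval has guaranteed coverage
# for any n, but is noticeably wider than a normal-approximation interval,
# which is exactly the drawback the abstract notes.
ci = bernstein_ci([0.2, 0.4, 0.6, 0.8] * 50)
```

The guaranteed coverage comes purely from boundedness, with no central limit theorem appeal, which is why it remains valid at the small sample sizes the paper targets.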
NASA Astrophysics Data System (ADS)
Šantić, Branko; Gracin, Davor
2017-12-01
A new, simple Monte Carlo method is introduced for the study of electrostatic screening by surrounding ions. The proposed method is not based on the generally used Markov chain approach to sample generation: each sample is generated from scratch, with no correlation with other samples. As the main novelty, pairs of ions are gradually added to a sample provided that the energy of each ion stays within boundaries determined by the temperature and the size of the ions. The proposed method provides reliable results, as demonstrated for the screening of an ion in plasma and in water.
Preliminary evaluation of a gel tube agglutination major cross-match method in dogs.
Villarnovo, Dania; Burton, Shelley A; Horney, Barbara S; MacKenzie, Allan L; Vanderstichel, Raphaël
2016-09-01
A major cross-match gel tube test is available for use in dogs yet has not been clinically evaluated. This study compared cross-match results obtained using the gel tube and the standard tube methods for canine samples. Study 1 included 107 canine sample donor-recipient pairings cross-match tested with the RapidVet-H method gel tube test and compared results with the standard tube method. Additionally, 120 pairings using pooled sera containing anti-canine erythrocyte antibody at various concentrations were tested with leftover blood from a hospital population to assess sensitivity and specificity of the gel tube method in comparison with the standard method. The gel tube method had a good relative specificity of 96.1% in detecting lack of agglutination (compatibility) compared to the standard tube method. Agreement between the 2 methods was moderate. Nine of 107 pairings showed agglutination/incompatibility on either test, too few to allow reliable calculation of relative sensitivity. Fifty percent of the gel tube method results were difficult to interpret due to sample spreading in the reaction and/or negative control tubes. The RapidVet-H method agreed with the standard cross-match method on compatible samples, but detected incompatibility in some sample pairs that were compatible with the standard method. Evaluation using larger numbers of incompatible pairings is needed to assess diagnostic utility. The gel tube method results were difficult to categorize due to sample spreading. Weak agglutination reactions or other factors such as centrifuge model may be responsible. © 2016 American Society for Veterinary Clinical Pathology.
NASA Astrophysics Data System (ADS)
Kuusimäki, Leea; Peltonen, Kimmo; Vainiotalo, Sinikka
A previously introduced method for monitoring environmental tobacco smoke (ETS) was further validated. The method is based on diffusive sampling of a vapour-phase marker, 3-ethenylpyridine (3-EP), with 3M passive monitors (type 3500). Experiments were done in a dynamic chamber to assess diffusive sampling in comparison with active sampling in charcoal tubes or XAD-4 tubes. The sampling rate for 3-EP collected on the diffusive sampler was 23.1±0.6 mL min⁻¹. The relative standard deviation for parallel samples (n=6) ranged from 4% to 14% among experiments (n=9). No marked reverse diffusion of 3-EP was detected, nor any significant effect of relative humidity at 20%, 50% or 80%. The diffusive sampling of 3-EP was validated in field measurements in 15 restaurants in comparison with 3-EP and nicotine measurements using active sampling. The 3-EP concentration in restaurants ranged from 0.01 to 9.8 μg m⁻³, and the uptake rate for 3-EP based on 92 parallel samples was 24.0±0.4 mL min⁻¹. A linear correlation (r=0.98) was observed between 3-EP and nicotine concentrations, the average ratio of 3-EP to nicotine being 1:8. Active sampling of 3-EP and nicotine in charcoal tubes provided more reliable results than sampling in XAD-4 tubes. All samples were analysed using gas chromatography-mass spectrometry after elution with a 15% solution of pyridine in toluene. For nicotine, the limit of quantification of the charcoal tube method was 4 ng per sample, corresponding to 0.04 μg m⁻³ for an air sample of 96 L. For 3-EP, the limit of quantification of the diffusive method was 0.5-1.0 ng per sample, corresponding to 0.04-0.09 μg m⁻³ for 8 h sampling. The diffusive method proved suitable for ETS monitoring, even at low levels of ETS.
Feasibility of zero tolerance for Salmonella on raw poultry
USDA-ARS's Scientific Manuscript database
Ideally, poultry producing countries around the globe should use internationally standardized sampling methods for Salmonella. It is difficult to compare prevalence data from country-to-country when sample plan, sample type, sample frequency and laboratory media along with methods differ. The Europe...
Flexible sampling large-scale social networks by self-adjustable random walk
NASA Astrophysics Data System (ADS)
Xu, Xiao-Ke; Zhu, Jonathan J. H.
2016-12-01
Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity, and often unavailability, of OSN population data. Sampling thus becomes perhaps the only feasible solution. How to draw samples that represent the underlying OSNs remains a formidable task for a number of conceptual and methodological reasons. In particular, most empirically driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real, complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods: uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix both induced-edge and external-edge information of sampled nodes together in the same sampling process. Our results show that the SARW sampling method is able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a highly needed sampling tool, for the methodological development of large-scale network sampling through comparative evaluation of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge/assumptions and large-scale real OSN data.
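SARW itself is not specified in the abstract, but the MHRW baseline it is compared against is a standard algorithm and easy to sketch. The toy graph and function names below are hypothetical; the walk reweights a plain random walk so that its stationary distribution is uniform over nodes rather than proportional to degree:

```python
import random

def mhrw_sample(adj, start, steps, rng):
    """Metropolis-Hastings random walk (MHRW): from node u, propose a uniform
    neighbour v and accept with probability min(1, deg(u)/deg(v)). Detailed
    balance then makes the stationary distribution uniform over nodes,
    removing the degree bias of a plain random walk."""
    u = start
    visits = []
    for _ in range(steps):
        v = rng.choice(adj[u])
        if rng.random() < min(1.0, len(adj[u]) / len(adj[v])):
            u = v
        visits.append(u)  # rejected proposals repeat the current node
    return visits

# Toy hub-and-ring graph: a plain random walk oversamples hub node 0,
# whereas MHRW visits all five nodes roughly equally often.
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [0, 3]}
walk = mhrw_sample(adj, 0, 50000, random.Random(0))
freq = {node: walk.count(node) / len(walk) for node in adj}
```

On real OSNs the same correction lets node-level statistics be estimated without the degree bias that BFS and plain RW crawls introduce.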
Bouchard, Daniel; Wanner, Philipp; Luo, Hong; McLoughlin, Patrick W; Henderson, James K; Pirkle, Robert J; Hunkeler, Daniel
2017-10-20
The methodology of the solvent-based dissolution method used to sample gas-phase volatile organic compounds (VOC) for compound-specific isotope analysis (CSIA) was optimized to lower the method detection limits for TCE and benzene. The sampling methodology previously evaluated by [1] consists of pulling the air through a solvent to dissolve and accumulate the gaseous VOC. After the sampling process, the solvent can then be treated similarly to groundwater samples to perform routine CSIA by diluting an aliquot of the solvent into water to reach the required concentration of the targeted contaminant. Among the solvents tested, tetraethylene glycol dimethyl ether (TGDE) showed the best aptitude for the method. TGDE has a great affinity with TCE and benzene, hence efficiently dissolving the compounds during their transition through the solvent. The method detection limit for TCE (5±1 μg/m³) and benzene (1.7±0.5 μg/m³) is lower when using TGDE compared to methanol, which was previously used (385 μg/m³ for TCE and 130 μg/m³ for benzene) [2]. The method detection limit refers to the minimal gas-phase concentration in ambient air required to load sufficient VOC mass into TGDE to perform δ¹³C analysis. Due to a different analytical procedure, the method detection limit associated with δ³⁷Cl analysis was found to be 156±6 μg/m³ for TCE. Furthermore, the experimental results validated the relationship between the gas-phase TCE and the progressive accumulation of dissolved TCE in the solvent during the sampling process. Accordingly, based on the air-solvent partitioning coefficient, the sampling methodology (e.g. sampling rate, sampling duration, amount of solvent) and the final TCE concentration in the solvent, the concentration of TCE in the gas phase prevailing during the sampling event can be determined.
Moreover, the possibility to analyse for TCE concentration in the solvent after sampling (or other targeted VOCs) allows the field deployment of the sampling method without the need to determine the initial gas phase TCE concentration. The simplified field deployment approach of the solvent-based dissolution method combined with the conventional analytical procedure used for groundwater samples substantially facilitates the application of CSIA to gas phase studies. Copyright © 2017 Elsevier B.V. All rights reserved.
Eggenkamp, H G M; Louvat, P
2018-04-30
In natural samples bromine is present in trace amounts, and measurement of stable Br isotopes necessitates its separation from the matrix. Most methods described previously need large samples or samples with high Br/Cl ratios. The use of metals as reagents, proposed in previous Br distillation methods, must be avoided for multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) analyses, because of risk of cross-contamination, since the instrument is also used to measure stable isotopes of metals. Dedicated to water and evaporite samples with low Br/Cl ratios, the proposed method is a simple distillation that separates bromide from chloride for isotopic analyses by MC-ICP-MS. It is based on the difference in oxidation potential between chloride and bromide in the presence of nitric acid. The sample is mixed with dilute (1:5) nitric acid in a distillation flask and heated over a candle flame for 10 min. The distillate (bromine) is trapped in an ammonia solution and reduced to bromide. Chloride is only distilled to a very small extent. The obtained solution can be measured directly by MC-ICP-MS for stable Br isotopes. The method was tested for a variety of volumes, ammonia concentrations, pH values and distillation times and compared with the classic ion-exchange chromatography method. The method more efficiently separates Br from Cl, so that samples with lower Br/Cl ratios can be analysed, with Br isotope data in agreement with those obtained by previous methods. Unlike other Br extraction methods based on oxidation, the distillation method presented here does not use any metallic ion for redox reactions that could contaminate the mass spectrometer. It is efficient in separating Br from samples with low Br/Cl ratios. The method ensures reproducible recovery yields and a long-term reproducibility of ±0.11‰ (1 standard deviation). The distillation method was successfully applied to samples with low Br/Cl ratios and low Br amounts (down to 20 μg). 
Copyright © 2018 John Wiley & Sons, Ltd.
Jha, Virendra K.; Wydoski, Duane S.
2002-01-01
A method for the isolation of 20 parent organophosphate pesticides and 5 pesticide degradates from filtered natural-water samples is described. Seven of these compounds are reported permanently with an estimated concentration because of performance issues. Water samples are filtered to remove suspended particulate matter, and then 1 liter of filtrate is pumped through disposable solid-phase extraction columns that contain octadecyl-bonded porous silica to extract the compounds. The C-18 columns are dried with nitrogen gas, and method compounds are eluted from the columns with ethyl acetate. The extract is analyzed by dual capillary-column gas chromatography with flame photometric detection. Single-operator method detection limits in all three water-matrix samples ranged from 0.004 to 0.012 microgram per liter. Method performance was validated by spiking all compounds into three different matrices at three different concentrations. Eight replicates were analyzed at each concentration level in each matrix. Mean recoveries of method compounds spiked in surface-water samples ranged from 39 to 149 percent and those in ground-water samples ranged from 40 to 124 percent for all pesticides except dimethoate. Mean recoveries of method compounds spiked in reagent-water samples ranged from 41 to 119 percent for all pesticides except dimethoate. Dimethoate exhibited reduced recoveries (mean of 43 percent in low- and medium-concentration level spiked samples and 20 percent in high-concentration level spiked samples) in all matrices because of incomplete collection on the C-18 column. As a result, concentrations of dimethoate and six other compounds (based on performance issues) in samples are reported in this method with an estimated remark code.
A passive guard for low thermal conductivity measurement of small samples by the hot plate method
NASA Astrophysics Data System (ADS)
Jannot, Yves; Degiovanni, Alain; Grigorova-Moutiers, Veneta; Godefroy, Justine
2017-01-01
Hot plate methods under steady-state conditions are based on a 1D model to estimate the thermal conductivity, using measurements of the temperatures T₀ and T₁ of the two sides of the sample and of the heat flux crossing it. To be consistent with the hypothesis of a 1D heat flux, either a guarded hot plate apparatus is used, or the temperature is measured at the centre of the sample. The latter method can be used only if the thickness/width ratio of the sample is sufficiently low, while the guarded hot plate method requires large-width samples (typical cross-section of 0.6 × 0.6 m²); neither method can therefore be used for small-width samples. The method presented in this paper is based on an optimal choice of the temperatures T₀ and T₁ relative to the ambient temperature Tₐ, enabling the estimation of the thermal conductivity with a centered hot plate method by applying the 1D heat flux model. It is shown that these optimal values do not depend on the size or the thermal conductivity of the samples (in the range 0.015-0.2 W m⁻¹ K⁻¹), but only on Tₐ. The experimental results obtained validate the method for several reference samples for values of the thickness/width ratio up to 0.3, thus enabling the measurement of the thermal conductivity of samples having a small cross-section, down to 0.045 × 0.045 m².
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
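The balance-heuristic combination of a stratified/uniform technique with an importance-sampling technique can be sketched in one dimension. The integrand, proposal densities, and names below are hypothetical, not taken from the paper; the point is how the balance weights w_i = n_i·p_i / Σ_j n_j·p_j collapse the combined estimator into a single sum of f(x) / Σ_j n_j·p_j(x):

```python
import math
import random

def mis_balance(f, pdfs, samplers, counts, rng):
    """Multiple importance sampling with the balance heuristic.
    Each technique i draws counts[i] samples from samplers[i] with density
    pdfs[i]; the weighted terms (1/n_i) * w_i(x) * f(x)/p_i(x) simplify to
    f(x) / sum_j n_j p_j(x), summed over every drawn sample."""
    est = 0.0
    for sampler, n in zip(samplers, counts):
        for _ in range(n):
            x = sampler(rng)
            est += f(x) / sum(nj * p(x) for nj, p in zip(counts, pdfs))
    return est

# Integrate f(x) = 3 x^2 on [0, 1] (true value 1) by combining a uniform
# technique with an importance technique of density p(x) = 2x
# (sampled by inverse CDF: x = sqrt(u)).
f = lambda x: 3.0 * x * x
pdfs = [lambda x: 1.0, lambda x: 2.0 * x]
samplers = [lambda r: r.random(), lambda r: math.sqrt(r.random())]
est = mis_balance(f, pdfs, samplers, [2000, 2000], random.Random(0))
```

The same weighting idea is what lets an SSAO-style renderer blend samples from different strategies without double-counting, while keeping variance low wherever either technique is a good match for the integrand.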
Shao, Jingyuan; Cao, Wen; Qu, Haibin; Pan, Jianyang; Gong, Xingchu
2018-01-01
The aim of this study was to present a novel analytical quality by design (AQbD) approach for developing an HPLC method to analyze herbal extracts. In this approach, critical method attributes (CMAs) and critical method parameters (CMPs) of the analytical method were determined using the same data collected from screening experiments. The HPLC-ELSD method for separation and quantification of sugars in Codonopsis Radix extract (CRE) samples and Astragali Radix extract (ARE) samples was developed as an example method with a novel AQbD approach. Potential CMAs and potential CMPs were found with Analytical Target Profile. After the screening experiments, the retention time of the D-glucose peak of CRE samples, the signal-to-noise ratio of the D-glucose peak of CRE samples, and retention time of the sucrose peak in ARE samples were considered CMAs. The initial and final composition of the mobile phase, flow rate, and column temperature were found to be CMPs using a standard partial regression coefficient method. The probability-based design space was calculated using a Monte-Carlo simulation method and verified by experiments. The optimized method was validated to be accurate and precise, and then it was applied in the analysis of CRE and ARE samples. The present AQbD approach is efficient and suitable for analysis objects with complex compositions.
Hu, Zheng; Lin, Jun; Chen, Zhong-Sheng; Yang, Yong-Min; Li, Xue-Jun
2015-01-22
High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damage to the blade. Blade tip-timing (BTT) methods have become a promising way to monitor blade vibrations. However, synchronous vibrations are poorly captured by uniform BTT sampling, so non-equally mounted probes have been used, which results in a non-uniform sampling signal. Since under-sampling is an intrinsic drawback of BTT methods, analyzing non-uniformly under-sampled BTT signals is a major challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. First, a mathematical model of the non-uniform BTT sampling process is built; it can be treated as the sum of several uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Second, simultaneous equations for all interpolating functions in each sub-band are built, and the corresponding solutions are derived to remove unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. Finally, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate that the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency and bandwidth, the probe static offset, and the number of samples. In practice, both types of blade vibration signal can be reconstructed from non-uniform BTT data acquired from only two probes.
Sampling Methods for Detection and Monitoring of the Asian Citrus Psyllid (Hemiptera: Psyllidae).
Monzo, C; Arevalo, H A; Jones, M M; Vanaclocha, P; Croxton, S D; Qureshi, J A; Stansly, P A
2015-06-01
The Asian citrus psyllid (ACP), Diaphorina citri Kuwayama is a key pest of citrus due to its role as vector of citrus greening disease or "huanglongbing." ACP monitoring is considered an indispensable tool for management of vector and disease. In the present study, datasets collected between 2009 and 2013 from 245 citrus blocks were used to evaluate precision, sensitivity for detection, and efficiency of five sampling methods. The number of samples needed to reach a 0.25 standard error-mean ratio was estimated using Taylor's power law and used to compare precision among sampling methods. Comparison of detection sensitivity and time expenditure (cost) between stem-tap and other sampling methodologies conducted consecutively at the same location were also assessed. Stem-tap sampling was the most efficient sampling method when ACP densities were moderate to high and served as the basis for comparison with all other methods. Protocols that grouped trees near randomly selected locations across the block were more efficient than sampling trees at random across the block. Sweep net sampling was similar to stem-taps in number of captures per sampled unit, but less precise at any ACP density. Yellow sticky traps were 14 times more sensitive than stem-taps but much more time consuming and thus less efficient except at very low population densities. Visual sampling was efficient for detecting and monitoring ACP at low densities. Suction sampling was time consuming and taxing but the most sensitive of all methods for detection of sparse populations. This information can be used to optimize ACP monitoring efforts. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
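The precision criterion used in the study (the sample size needed to reach a 0.25 SE-to-mean ratio via Taylor's power law, s² = a·mᵇ) can be sketched as follows; the fitting helper and the noise-free example numbers are hypothetical, chosen only to make the arithmetic checkable:

```python
import math

def fit_taylor(means, variances):
    """Least-squares fit of Taylor's power law: log s^2 = log a + b log m."""
    xs = [math.log(m) for m in means]
    ys = [math.log(v) for v in variances]
    k = len(xs)
    xbar, ybar = sum(xs) / k, sum(ys) / k
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b

def samples_for_precision(a, b, mean_density, ratio=0.25):
    """Sample size giving SE/mean = ratio:
    s^2 / (n m^2) = ratio^2  =>  n = a m^(b-2) / ratio^2."""
    return a * mean_density ** (b - 2) / ratio ** 2

# Noise-free check with hypothetical stem-tap counts following a = 2, b = 1.5.
means = [0.5, 1.0, 2.0, 4.0, 8.0]
variances = [2.0 * m ** 1.5 for m in means]
a, b = fit_taylor(means, variances)
n_needed = samples_for_precision(a, b, 4.0)  # 2 * 4^-0.5 / 0.0625 = 16
```

Because b < 2 here, the required sample size falls as density rises, which matches the study's observation that more samples are needed to characterize sparse ACP populations precisely.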
Capillary microextraction: A new method for sampling methamphetamine vapour.
Nair, M V; Miskelly, G M
2016-11-01
Clandestine laboratories pose a serious health risk to first responders, investigators, decontamination companies, and the public who may be inadvertently exposed to methamphetamine and other chemicals used in its manufacture. Therefore there is an urgent need for reliable methods to detect and measure methamphetamine at such sites. The most common method for determining methamphetamine contamination at former clandestine laboratory sites is selected surface wipe sampling, followed by analysis with gas chromatography-mass spectrometry (GC-MS). We are investigating the use of sampling for methamphetamine vapour to complement such wipe sampling. In this study, we report the use of capillary microextraction (CME) devices for sampling airborne methamphetamine, and compare their sampling efficiency with a previously reported dynamic SPME method. The CME devices consisted of PDMS-coated glass filter strips inside a glass tube. The devices were used to dynamically sample methamphetamine vapour in the range of 0.42-4.2 μg m⁻³, generated by a custom-built vapour dosing system, for 1-15 min, and methamphetamine was analysed using a GC-MS fitted with a ChromatoProbe thermal desorption unit. The devices showed good reproducibility (RSD<15%), and a curvilinear pre-equilibrium relationship between sampling times and peak area, which can be utilised for calibration. Under identical sampling conditions, the CME devices were approximately 30 times more sensitive than the dynamic SPME method. The CME devices could be stored for up to 3 days after sampling prior to analysis. Consecutive sampling of methamphetamine and its isotopic substitute, d-9 methamphetamine, showed no competitive displacement. This suggests that CME devices, pre-loaded with an internal standard, could be a feasible method for sampling airborne methamphetamine at former clandestine laboratories. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Hu, Zheng; Lin, Jun; Chen, Zhong-Sheng; Yang, Yong-Min; Li, Xue-Jun
2015-01-01
High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damage to the blade. Blade tip-timing (BTT) methods have become a promising way to monitor blade vibrations. However, synchronous vibrations cannot be adequately monitored by uniform BTT sampling, so non-equally mounted probes have been used, which results in a non-uniformly sampled signal. Since under-sampling is an intrinsic drawback of BTT methods, analyzing non-uniformly under-sampled BTT signals is a major challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. Firstly, a mathematical model of the non-uniform BTT sampling process is built; it can be treated as the sum of certain uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Secondly, simultaneous equations of all interpolating functions in each sub-band are built, and the corresponding solutions are derived to remove unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. Finally, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate that the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signals can be reconstructed from non-uniform BTT data acquired from only two probes. PMID:25621612
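The idea that a synchronous vibration remains recoverable from a few non-uniformly placed probes can be illustrated with a toy least-squares sine fit. This is not the paper's sub-band interpolation scheme, only the simplest form of the underlying principle, with a hypothetical engine order and probe layout:

```python
import numpy as np

EO = 4                                   # assumed known engine order
probes = np.deg2rad([0.0, 23.0, 71.0])   # hypothetical non-uniform probe angles
revs = np.arange(20)                     # 20 rotor revolutions

# Tip deflection seen by each probe on each revolution (synchronous vibration:
# same pattern every revolution; here A=1.0 mm, B=-0.5 mm, C=0.1 mm offset)
theta = (probes[None, :] + 2 * np.pi * revs[:, None]).ravel()
x = 1.0 * np.sin(EO * theta) - 0.5 * np.cos(EO * theta) + 0.1

# Least-squares recovery of the vibration parameters from the under-sampled data
M = np.column_stack([np.sin(EO * theta), np.cos(EO * theta), np.ones_like(theta)])
A, B, C = np.linalg.lstsq(M, x, rcond=None)[0]
```

Because the vibration is synchronous, each probe sees the same deflection every revolution, so the probe angles must be chosen to make the design matrix well conditioned, which mirrors why probe placement matters in real BTT systems.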
Recommendations for level-determined sampling in wells
NASA Astrophysics Data System (ADS)
Lerner, David N.; Teutsch, Georg
1995-10-01
Level-determined samples of groundwater are increasingly important for hydrogeological studies. The techniques for collecting them range from the use of purpose drilled wells, sometimes with sophisticated dedicated multi-level samplers in them, to a variety of methods used in open wells. Open, often existing, wells are frequently used on cost grounds, but there are risks of obtaining poor and unrepresentative samples. Alternative approaches to level-determined sampling incorporate seven concepts: depth sampling; packer systems; individual wells; dedicated multi-level systems; separation pumping; baffle systems; multi-port sock samplers. These are outlined and evaluated in terms of the environment to be sampled, and the features and performance of the methods. Recommendations are offered to match methods to sampling problems.
Compendium of selected methods for sampling and analysis at geothermal facilities
NASA Astrophysics Data System (ADS)
Kindle, C. H.; Pool, K. H.; Ludwick, J. D.; Robertson, D. E.
1984-06-01
An independent study of the field has resulted in a compilation of the best methods for sampling, preservation and analysis of potential pollutants from geothermally fueled electric power plants. These methods are selected as the most usable over the range of application commonly experienced at the various geothermal plant sample locations. In addition to plant and well piping, techniques for sampling cooling towers, ambient gases, solids, surface and subsurface waters are described. Emphasis is placed on the use of sampling probes to extract samples from heterogeneous flows. Certain sampling points, constituents and phases of plant operation are more amenable than others to quality assurance improvement in the emission measurements, and are so identified.
Tack, Pieter; Vekemans, Bart; Laforce, Brecht; Rudloff-Grund, Jennifer; Hernández, Willinton Y; Garrevoet, Jan; Falkenberg, Gerald; Brenker, Frank; Van Der Voort, Pascal; Vincze, Laszlo
2017-02-07
Using X-ray absorption near edge structure (XANES) spectroscopy, information on the local chemical structure and oxidation state of an element of interest can be acquired. Conventionally, this information is obtained in a spatially resolved manner by scanning a sample through a focused X-ray beam. Recently, full-field methods have been developed to obtain direct 2D chemical state information by imaging a large sample area. These methods usually operate in transmission mode, restricting their use to thin, transmitting samples. Here, a fluorescence-mode method is presented using an energy-dispersive pnCCD detector, the SLcam, characterized by measurement times far shorter than generally achievable. Additionally, this method operates in confocal mode, thus providing direct 3D spatially resolved chemical state information from a selected subvolume of a sample, without the need to rotate the sample. The method is applied to two samples: a gold-supported magnesia catalyst (Au/MgO) and a natural diamond containing Fe-rich inclusions. Both samples provide XANES spectra that can be matched against reference XANES spectra, allowing the method to be used for fingerprinting and linear combination analysis of known XANES reference compounds.
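Linear combination analysis against reference XANES spectra reduces to a constrained least-squares fit. A minimal sketch using non-negative least squares, with synthetic arctan edge shapes standing in for real reference spectra (energies and compounds are hypothetical):

```python
import numpy as np
from scipy.optimize import nnls

E = np.linspace(7100.0, 7200.0, 200)     # energy grid (eV), hypothetical Fe K-edge

def edge(e0):
    # Crude arctan edge-step stand-in for a reference XANES spectrum
    return 0.5 + np.arctan((E - e0) / 2.0) / np.pi

refs = np.column_stack([edge(7120.0), edge(7126.0)])   # two reference compounds
measured = 0.7 * refs[:, 0] + 0.3 * refs[:, 1]         # unknown = 70/30 mixture

fracs, resid = nnls(refs, measured)      # non-negative component weights
fracs = fracs / fracs.sum()              # normalise to fractions
```

In practice the measured spectrum carries noise and the references are experimental, so the residual norm is used to judge how well the chosen reference set explains the data.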
Rapid method to determine 226Ra in steel samples
Maxwell, Sherrod L.; Culligan, Brian; Hutchison, Jay B.; ...
2017-09-22
The rapid measurement of 226Ra in steel samples is very important in the event of a radiological emergency. 226Ra (T 1/2 = 1600 y) is a natural radionuclide present in the environment and a highly toxic alpha-emitter. Due to its long half-life and tendency to concentrate in bones, 226Ra ingestion or inhalation can lead to significant committed dose to individuals. A new method for the determination of 226Ra in steel samples has been developed at the Savannah River Environmental Laboratory. The new method employs a rugged acid digestion that includes hydrofluoric acid, followed by a single precipitation step to rapidly preconcentrate the radium and remove most of the dissolved steel sample matrix. Radium is then separated using a combination of cation exchange and extraction chromatography, and 226Ra is measured by alpha spectrometry. This approach has a sample preparation time of ~8 h for steel samples, has a very high tracer yield (>88%), and removes interferences effectively. A 133Ba yield tracer is used so that samples can be counted immediately following the separation method, avoiding lengthy ingrowth times that are required in other methods.
Oral sampling methods are associated with differences in immune marker concentrations.
Fakhry, Carole; Qeadan, Fares; Gilman, Robert H; Yori, Pablo; Kosek, Margaret; Patterson, Nicole; Eisele, David W; Gourin, Christine G; Chitguppi, Chandala; Marks, Morgan; Gravitt, Patti
2018-06-01
To determine whether the concentration and distribution of immune markers in paired oral samples were similar. Clinical research. Cross-sectional study. Paired saliva and oral secretions (OS) samples were collected. The concentration of immune markers was estimated using Luminex multiplex assay (Thermo Fisher Scientific, Waltham, MA). For each sample, the concentration of respective immune markers was normalized to total protein present and log-transformed. Median concentrations of immune markers were compared between both types of samples. Intermarker correlation in each sampling method and across sampling methods was evaluated. There were 90 study participants. Concentrations of immune markers in saliva samples were significantly different from concentrations in OS samples. Oral secretions samples showed higher concentrations of immunoregulatory markers, whereas the saliva samples contained proinflammatory markers in higher concentration. The immune marker profile in saliva samples is distinct from the immune marker profile in paired OS samples. 2b. Laryngoscope, 128:E214-E221, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
NASA Astrophysics Data System (ADS)
Almirall, Jose R.; Trejos, Tatiana; Hobbs, Andria; Furton, Kenneth G.
2003-09-01
The importance of small amounts of glass and paint evidence as a means to associate a crime event to a suspect or a suspect to another individual has been demonstrated in many cases. Glass is a fragile material that is often found at the scenes of crimes such as burglaries, hit-and-run accidents and violent crime offenses. Previous work has demonstrated the utility of elemental analysis by solution ICP-MS of small amounts of glass for the comparison between a fragment found at a crime scene to a possible source of the glass. The multi-element capability and the sensitivity of ICP-MS combined with the simplified sample introduction of laser ablation prior to ion detection provides for an excellent and relatively non-destructive technique for elemental analysis of glass fragments. The direct solid sample introduction technique of laser ablation (LA) is reported as an alternative to the solution method. Direct solid sampling provides several advantages over solution methods and shows great potential for a number of solid sample analyses in forensic science. The advantages of laser ablation include the simplification of sample preparation, thereby reducing the time and complexity of the analysis, the elimination of handling acid dissolution reagents such as HF and the reduction of sources of interferences in the ionization plasma. Direct sampling also provides for essentially "non-destructive" sampling due to the removal of very small amounts of sample needed for analysis. The discrimination potential of LA-ICP-MS is compared with previously reported solution ICP-MS methods using external calibration with internal standardization and a newly reported solution isotope dilution (ID) method. A total of ninety-one different glass samples were used for the comparison study using the techniques mentioned. One set consisted of forty-five headlamps taken from a variety of automobiles representing a range of twenty years of manufacturing dates. 
A second set, consisting of forty-six automotive glasses (side windows and windshields) representing casework glass from different vehicle manufacturers over several years, was also characterized by RI and elemental composition analysis. The solution sample introduction techniques (external calibration and isotope dilution) provide excellent sensitivity and precision but have the disadvantages of destroying the sample and requiring complex sample preparation. The laser ablation method was simpler, faster and produced discrimination comparable to the EC-ICP-MS and ID-ICP-MS methods. LA-ICP-MS can provide an excellent alternative to solution analysis of glass in forensic casework samples. Paints and coatings are frequently encountered as trace evidence samples submitted to forensic science laboratories. A LA-ICP-MS method has been developed to complement the techniques commonly used in forensic laboratories in order to better characterize these samples for forensic purposes. Time-resolved plots of each sample can be compared to associate samples with each other or to discriminate between samples. Additionally, the concentration of lead and the ratios of other elements have been determined in various automotive paints by the reported method. A sample set of eighteen (18) survey automotive paint samples has been analyzed with the developed method in order to determine the utility of LA-ICP-MS and to compare the method with the more commonly used scanning electron microscopy (SEM) method for elemental characterization of paint layers in forensic casework.
Martínez-Mier, E. Angeles; Soto-Rojas, Armando E.; Buckley, Christine M.; Margineda, Jorge; Zero, Domenick T.
2010-01-01
Objective: The aim of this study was to assess methods currently used for analyzing fluoridated salt in order to identify the most useful method for this type of analysis. Basic research design: Seventy-five fluoridated salt samples were obtained. Samples were analyzed for fluoride content, with and without pretreatment, using direct and diffusion methods. Element analysis was also conducted in selected samples. Fluoride was added to ultrapure NaCl and non-fluoridated commercial salt samples, and Ca and Mg were added to fluoride samples, in order to assess fluoride recoveries using modifications to the methods. Results: Larger amounts of fluoride were found and recovered using diffusion than direct methods (96%-100% for diffusion vs. 67%-90% for direct). Statistically significant differences were obtained between direct and diffusion methods using different ion strength adjusters. Pretreatment methods reduced the amount of recovered fluoride. Determination of fluoride content was influenced both by the presence of NaCl and other ions in the salt. Conclusion: Direct and diffusion techniques for analysis of fluoridated salt are suitable methods for fluoride analysis. The choice of method should depend on the purpose of the analysis. PMID:20088217
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective is to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
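The estimator itself is simple: N = M/P. A delta-method sketch of its standard error under an assumed design effect follows; the paper's variance treatment is more detailed, and all numbers here are hypothetical:

```python
import math

def multiplier_estimate(M, p_hat, n, design_effect=2.0):
    """Population size N = M / P with a delta-method standard error.

    M: count of unique objects distributed (treated as known exactly here)
    p_hat: proportion in the RDS survey reporting receipt
    n: survey sample size; design_effect: assumed RDS design effect
    """
    N = M / p_hat
    var_p = design_effect * p_hat * (1 - p_hat) / n
    se_N = M * math.sqrt(var_p) / p_hat**2   # delta method: dN/dP = -M/P^2
    return N, se_N

N, se = multiplier_estimate(M=3000, p_hat=0.25, n=400, design_effect=2.0)
```

The P⁻² factor in the standard error makes concrete the paper's advice: a larger P (more objects distributed, longer reference period) shrinks the random error around the size estimate.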
Efficient free energy calculations by combining two complementary tempering sampling methods.
Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun
2017-01-14
Although energy barriers can be efficiently crossed in reaction coordinate (RC) guided sampling, this type of method suffers from difficulty in identifying the correct RCs, or requires high dimensionality of the defined RCs for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height may exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause insufficient sampling. To address sampling in this so-called hidden barrier situation, here we propose an effective approach combining temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD, and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD applied to three systems with hidden-barrier processes has been examined. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five times even in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work suggests more potential applications of the ITS-TAMD method as an efficient and powerful tool for the investigation of a broad range of interesting cases.
NASA Astrophysics Data System (ADS)
Yang, Linlin; Sun, Hai; Fu, Xudong; Wang, Suli; Jiang, Luhua; Sun, Gongquan
2014-07-01
A novel method for measuring the effective diffusion coefficient of porous materials is developed. The oxygen concentration gradient is established by an air-breathing proton exchange membrane fuel cell (PEMFC). The porous sample is set in a sample holder located in the cathode plate of the PEMFC. At a given oxygen flux, the effective diffusion coefficient is related to the oxygen concentration difference across the sample, which can be correlated with the difference in the output voltage of the PEMFC with and without the sample inserted in the cathode plate. Compared to the conventional electrical conductivity method, this method is more reliable for measuring non-wetting samples.
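As a back-of-envelope sketch of the measurement principle: if the cathode is assumed to respond Nernstianly to the oxygen concentration at its surface, the voltage difference with and without the sample gives the concentration drop across it, and Fick's first law then yields D_eff = N·L/Δc. This is a simplified model with hypothetical numbers, not the paper's exact working:

```python
import math

R, F, T = 8.314, 96485.0, 298.15   # gas constant, Faraday constant, temperature

def effective_diffusivity(flux, thickness, c_bulk, delta_E):
    """D_eff (m^2/s) from oxygen flux N (mol m^-2 s^-1), sample thickness L (m),
    bulk O2 concentration (mol m^-3) and the PEMFC voltage difference (V),
    assuming a Nernstian 4-electron oxygen response (a modelling assumption)."""
    c_surface = c_bulk * math.exp(-4 * F * delta_E / (R * T))
    return flux * thickness / (c_bulk - c_surface)

# Hypothetical operating point: 1 mmol m^-2 s^-1 flux, 2 mm sample, air-level
# O2 concentration, 5 mV output-voltage difference
D = effective_diffusivity(flux=1e-3, thickness=2e-3, c_bulk=8.6, delta_E=0.005)
```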
Rapid method for the determination of 226Ra in hydraulic fracturing wastewater samples
Maxwell, Sherrod L.; Culligan, Brian K.; Warren, Richard A.; ...
2016-03-24
A new method that rapidly preconcentrates and measures 226Ra from hydraulic fracturing wastewater samples was developed in the Savannah River Environmental Laboratory. The method improves the quality of 226Ra measurements using gamma spectrometry by providing up to 100x preconcentration of 226Ra from this difficult sample matrix, which contains very high levels of calcium, barium, strontium, magnesium and sodium. The high chemical yield, typically 80-90%, facilitates a low detection limit, important for lower level samples, and indicates method ruggedness. Ba-133 tracer is used to determine chemical yield and correct for geometry-related counting issues. The 226Ra sample preparation takes < 2 hours.
Carter, James L.; Resh, Vincent H.
2001-01-01
A survey of methods used by US state agencies for collecting and processing benthic macroinvertebrate samples from streams was conducted by questionnaire; 90 responses were received and used to describe trends in methods. The responses represented an estimated 13,000-15,000 samples collected and processed per year. Kicknet devices were used in 64.5% of the methods; other sampling devices included fixed-area samplers (Surber and Hess), artificial substrates (Hester-Dendy and rock baskets), grabs, and dipnets. Regional differences existed, e.g., the 1-m kicknet was used more often in the eastern US than in the western US. Mesh sizes varied among programs but 80.2% of the methods used a mesh size between 500 and 600 µm. Mesh size variations within US Environmental Protection Agency regions were large, with size differences ranging from 100 to 700 µm. Most samples collected were composites; the mean area sampled was 1.7 m². Samples rarely were collected using a random method (4.7%); most samples (70.6%) were collected using "expert opinion", which may make data obtained operator-specific. Only 26.3% of the methods sorted all the organisms from a sample; the remainder subsampled in the laboratory. The most common method of subsampling was to remove 100 organisms (range = 100-550). The magnification used for sorting ranged from 1 (sorting by eye) to 30x, which results in inconsistent separation of macroinvertebrates from detritus. In addition to subsampling, 53% of the methods sorted large/rare organisms from a sample. The taxonomic level used for identifying organisms varied among taxa; Ephemeroptera, Plecoptera, and Trichoptera were generally identified to a finer taxonomic resolution (genus and species) than other taxa. Because there currently exists a large range of field and laboratory methods used by state programs, calibration among all programs to increase data comparability would be exceptionally challenging.
However, because many techniques are shared among methods, limited testing could be designed to evaluate whether procedural differences affect the ability to determine levels of environmental impairment using benthic macroinvertebrate communities.
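For the fixed-count subsampling most programs reported, whole-sample abundances are typically estimated by scaling subsample counts by the inverse of the fraction sorted, with large/rare organisms from the whole-sample scan added unscaled. A sketch under that standard convention (taxa and counts hypothetical):

```python
def estimate_totals(subsample_counts, fraction_sorted, large_rare=None):
    """Scale fixed-count subsample counts up to whole-sample estimates.

    subsample_counts: taxon -> count found in the sorted fraction
    fraction_sorted: proportion of the sample actually sorted (e.g. 0.25)
    large_rare: taxon -> count from the whole-sample large/rare scan,
                added unscaled because those were searched exhaustively
    """
    totals = {t: c / fraction_sorted for t, c in subsample_counts.items()}
    for t, c in (large_rare or {}).items():
        totals[t] = totals.get(t, 0) + c
    return totals

totals = estimate_totals({"Baetis": 40, "Chironomidae": 55}, 0.25,
                         large_rare={"Pteronarcys": 2})
```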
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
2017-10-26
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen-Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
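The dense KL route that the authors replace can be sketched on a small 1-D grid: eigendecompose an exponential covariance and combine the modes with i.i.d. standard normals. The O(n³) eigensolve in the middle is exactly what becomes infeasible at large scale (grid, correlation length, and covariance choice are illustrative):

```python
import numpy as np

n = 200                                  # grid points (dense eigensolve is O(n^3))
x = np.linspace(0.0, 1.0, n)
corr_len = 0.2

# Exponential covariance C(s, t) = exp(-|s - t| / corr_len)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# KL decomposition: sample = sum_k sqrt(lambda_k) * phi_k * xi_k, xi_k ~ N(0, 1)
lam, phi = np.linalg.eigh(C)
lam = np.clip(lam, 0.0, None)            # guard against tiny negative eigenvalues
rng = np.random.default_rng(0)
sample = phi @ (np.sqrt(lam) * rng.standard_normal(n))
```

The SPDE/multilevel approach of the paper sidesteps this eigensolve by sampling through sparse finite element solves instead.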
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for the least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order PCEs are employed and/or the oversampling ratio is low.
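The least-squares PCE setup being reviewed can be sketched in a few lines: draw Monte Carlo samples of the input, evaluate the model, and regress onto an orthogonal polynomial basis. For ξ ~ U(-1, 1) the natural basis is Legendre, and for the toy model u(ξ) = ξ² the exact expansion is (1/3)P0 + (2/3)P2:

```python
import numpy as np

rng = np.random.default_rng(1)
xi = rng.uniform(-1.0, 1.0, 500)        # Monte Carlo sample of the random input
u = xi**2                               # model response to be expanded

order = 4
Psi = np.polynomial.legendre.legvander(xi, order)   # Legendre design matrix
coeffs = np.linalg.lstsq(Psi, u, rcond=None)[0]
# Exact: xi^2 = (1/3) P0 + (2/3) P2, so coeffs ~ [1/3, 0, 2/3, 0, 0]
```

The sampling strategies the review compares differ only in how the points xi are chosen; better-chosen points condition the design matrix Psi and reduce the number of model evaluations needed for a given accuracy.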
Densitometry By Acoustic Levitation
NASA Technical Reports Server (NTRS)
Trinh, Eugene H.
1989-01-01
"Static" and "dynamic" methods developed for measuring mass density of acoustically levitated solid particle or liquid drop. "Static" method, unknown density of sample found by comparison with another sample of known density. "Dynamic" method practiced with or without gravitational field. Advantages over conventional density-measuring techniques: sample does not have to make contact with container or other solid surface, size and shape of samples do not affect measurement significantly, sound field does not have to be know in detail, and sample can be smaller than microliter. Detailed knowledge of acoustic field not necessary.
Mixture and method for simulating soiling and weathering of surfaces
Sleiman, Mohamad; Kirchstetter, Thomas; Destaillats, Hugo; Levinson, Ronnen; Berdahl, Paul; Akbari, Hashem
2018-01-02
This disclosure provides systems, methods, and apparatus related to simulated soiling and weathering of materials. In one aspect, a soiling mixture may include an aqueous suspension of various amounts of salt, soot, dust, and humic acid. In another aspect, a method may include weathering a sample of material in a first exposure of the sample to ultraviolet light, water vapor, and elevated temperatures, depositing a soiling mixture on the sample, and weathering the sample in a second exposure of the sample to ultraviolet light, water vapor, and elevated temperatures.
NASA Astrophysics Data System (ADS)
Cheng, T.; Zhou, X.; Jia, Y.; Yang, G.; Bai, J.
2018-04-01
In China's First National Geographic Conditions Census project, millions of samples were collected across the country to support land cover interpretation from remote sensing images; the collection exceeds 12,000,000 data files and has continued to grow during the follow-on National Geographic Conditions Monitoring project. Storing such big data in a database such as Oracle is currently the most effective approach, but a practical method for managing and applying the sample data is even more significant. This paper studies a database construction method based on a relational database combined with a distributed file system, in which the vector data and the file data are stored in different physical locations. The key issues and their solutions are discussed. On this basis, we study how the sample data can be applied and analyze several use cases, laying a foundation for broader application of the sample data. In particular, sample data located in Shaanxi province are selected to verify the method. Taking the 10 first-level classes defined in the land cover classification system as an example, we analyze the spatial distribution and density characteristics of each kind of sample data. The results verify that the database construction method based on a relational database with a distributed file system is useful and applicable for searching, analyzing, and promoting the application of sample data. Furthermore, the sample data collected in China's First National Geographic Conditions Census could be useful for Earth observation and land cover quality assessment.
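The storage pattern described (attributes in the relational database, bulk files on the distributed file system referenced by URI) can be sketched with sqlite standing in for the relational store; the schema, class names, and the dfs:// URI are illustrative assumptions, not the paper's actual design:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE sample (
    id INTEGER PRIMARY KEY,
    province TEXT, land_cover_class TEXT,
    lon REAL, lat REAL,
    photo_uri TEXT      -- file payload lives on the distributed file system
)""")
con.execute("INSERT INTO sample VALUES (?,?,?,?,?,?)",
            (1, "Shaanxi", "cultivated land", 108.95, 34.27,
             "dfs://samples/shaanxi/000001.jpg"))

# Spatial/attribute queries run in the relational store; only the matching
# file URIs are then fetched from the distributed file system
rows = con.execute(
    "SELECT id, photo_uri FROM sample WHERE province = 'Shaanxi'").fetchall()
```

Keeping the multi-megabyte photo and document payloads out of the relational tables is what lets the database scale to tens of millions of sample records.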
LeBouf, Ryan F; Virji, Mohammed Abbas; Ranpara, Anand; Stefaniak, Aleksandr B
2017-07-01
This method was designed for sampling select quaternary ammonium (quat) compounds in air or on surfaces followed by analysis using ultraperformance liquid chromatography tandem mass spectrometry. Target quats were benzethonium chloride, didecyldimethylammonium bromide, benzyldimethyldodecylammonium chloride, benzyldimethyltetradecylammonium chloride, and benzyldimethylhexadecylammonium chloride. For air sampling, polytetrafluoroethylene (PTFE) filters are recommended for 15-min to 24-hour sampling. For surface sampling, Pro-wipe® 880 (PW) media was chosen. Samples were extracted in 60:40 acetonitrile:0.1% formic acid for 1 hour on an orbital shaker. Method detection limits range from 0.3 to 2 ng/ml depending on media and analyte. Matrix effects of media are minimized through the use of multiple reaction monitoring versus selected ion recording. Upper confidence limits on accuracy meet the National Institute for Occupational Safety and Health 25% criterion for PTFE and PW media for all analytes. Using PTFE and PW analyzed with multiple reaction monitoring, the method quantifies levels among the different quats compounds with high precision (<10% relative standard deviation) and low bias (<11%). The method is sensitive enough with very low method detection limits to capture quats on air sampling filters with only a 15-min sample duration with a maximum assessed storage time of 103 days before sample extraction. This method will support future exposure assessment and quantitative epidemiologic studies to explore exposure-response relationships and establish levels of quats exposures associated with adverse health effects. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Wen, Tingxi; Zhang, Zhongnan; Qiu, Ming; Zeng, Ming; Luo, Weizhen
2017-01-01
The computer mouse is an important human-computer interaction device, but patients with physical finger disabilities are unable to operate it. Surface EMG (sEMG), which can be monitored by electrodes on the skin surface, reflects the underlying neuromuscular activity; sEMG classification can therefore be used to control auxiliary limb equipment and help physically disabled patients operate the mouse. The objective was to develop a new method to extract the sEMG generated by finger motion and to apply novel features to classify it. A window-based data acquisition method was presented to extract signal samples from sEMG electrodes. Afterwards, a two-dimensional matrix-image-based feature extraction method, which differs from the classical methods based on the time or frequency domain, was employed to transform signal samples into feature maps used for classification. In the experiments, sEMG data samples produced by the index and middle fingers at the click of a mouse button were acquired separately. Then, characteristics of the samples were analyzed to generate a feature map for each sample. Finally, machine learning classification algorithms (SVM, KNN, RBF-NN) were employed to classify these feature maps on a GPU. The study demonstrated that all classifiers can identify and classify sEMG samples effectively; in particular, the accuracy of the SVM classifier reached 100%. The signal separation method is a convenient, efficient and quick method, which can effectively extract the sEMG samples produced by fingers. In addition, unlike the classical methods, the new method extracts features by appropriately enlarging the energy of the sample signals. The classical machine learning classifiers all performed well using these features.
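As a rough illustration of the window-based acquisition and feature-extraction idea summarized above (not the authors' pipeline), the sketch below slices synthetic two-class signals into overlapping windows, computes two classical time-domain sEMG features (mean absolute value and RMS), and classifies with a nearest-centroid stand-in for the SVM/KNN/RBF-NN classifiers. All signal parameters, window sizes, and class labels are invented for the example.

```python
import random

def window_samples(signal, width, step):
    """Slice a 1-D signal into fixed-width overlapping windows."""
    return [signal[i:i + width] for i in range(0, len(signal) - width + 1, step)]

def features(window):
    """Two classical time-domain sEMG features: mean absolute value and RMS."""
    mav = sum(abs(x) for x in window) / len(window)
    rms = (sum(x * x for x in window) / len(window)) ** 0.5
    return (mav, rms)

def fit_centroids(feature_vecs, labels):
    """Per-class mean feature vector (nearest-centroid stand-in for SVM/KNN)."""
    sums, counts = {}, {}
    for f, y in zip(feature_vecs, labels):
        s = sums.setdefault(y, [0.0] * len(f))
        for k, v in enumerate(f):
            s[k] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: tuple(v / counts[y] for v in s) for y, s in sums.items()}

def predict(centroids, f):
    """Assign to the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda y: sum((a - b) ** 2 for a, b in zip(f, centroids[y])))

rng = random.Random(0)
# Synthetic "index" vs "middle" finger bursts: weak vs strong noise amplitude.
index_sig  = [rng.gauss(0, 0.2) for _ in range(256)]
middle_sig = [rng.gauss(0, 1.0) for _ in range(256)]

X, y = [], []
for sig, label in ((index_sig, "index"), (middle_sig, "middle")):
    for w in window_samples(sig, 64, 32):
        X.append(features(w))
        y.append(label)

centroids = fit_centroids(X, y)
accuracy = sum(predict(centroids, f) == t for f, t in zip(X, y)) / len(X)
```

With two well-separated amplitude classes the training accuracy is perfect, mirroring (in a toy setting) how cleanly separable windowed sEMG features can be.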
Sung, Heungsup; Yong, Dongeun; Ki, Chang Seok; Kim, Jae Seok; Seong, Moon Woo; Lee, Hyukmin; Kim, Mi Na
2016-09-01
Real-time reverse transcription PCR (rRT-PCR) of sputum samples is commonly used to diagnose Middle East respiratory syndrome coronavirus (MERS-CoV) infection. Owing to the difficulty of extracting RNA from sputum containing mucus, sputum homogenization is desirable prior to nucleic acid isolation. We determined optimal homogenization methods for isolating viral nucleic acids from sputum. We evaluated the following three sputum-homogenization methods: proteinase K and DNase I (PK-DNase) treatment, phosphate-buffered saline (PBS) treatment, and N-acetyl-L-cysteine and sodium citrate (NALC) treatment. Sputum samples were spiked with inactivated MERS-CoV culture isolates. RNA was extracted from pretreated, spiked samples using the easyMAG system (bioMérieux, France). Extracted RNAs were then subjected to rRT-PCR for MERS-CoV diagnosis (DiaPlex Q MERS-coronavirus, SolGent, Korea). While analyzing 15 spiked sputum samples prepared in technical duplicate, false-negative results were obtained with five (16.7%) and four samples (13.3%), respectively, by using the PBS and NALC methods. The range of threshold cycle (Ct) values observed when detecting upE in sputum samples was 31.1-35.4 with the PK-DNase method, 34.7-39.0 with the PBS method, and 33.9-38.6 with the NALC method. Compared with the controls, which were prepared by adding a one-tenth volume of 1:1,000 diluted viral culture to PBS solution, the ranges of Ct values obtained by the PBS and NALC methods differed significantly from the mean control Ct of 33.2 (both P<0.0001). The PK-DNase method is suitable for homogenizing sputum samples prior to RNA extraction.
Satzke, Catherine; Dunne, Eileen M.; Porter, Barbara D.; Klugman, Keith P.; Mulholland, E. Kim
2015-01-01
Background The pneumococcus is a diverse pathogen whose primary niche is the nasopharynx. Over 90 different serotypes exist, and nasopharyngeal carriage of multiple serotypes is common. Understanding pneumococcal carriage is essential for evaluating the impact of pneumococcal vaccines. Traditional serotyping methods are cumbersome and insufficient for detecting multiple serotype carriage, and there are few data comparing the new methods that have been developed over the past decade. We established the PneuCarriage project, a large, international multi-centre study dedicated to the identification of the best pneumococcal serotyping methods for carriage studies. Methods and Findings Reference sample sets were distributed to 15 research groups for blinded testing. Twenty pneumococcal serotyping methods were used to test 81 laboratory-prepared (spiked) samples. The five top-performing methods were used to test 260 nasopharyngeal (field) samples collected from children in six high-burden countries. Sensitivity and positive predictive value (PPV) were determined for the test methods and the reference method (traditional serotyping of >100 colonies from each sample). For the alternate serotyping methods, the overall sensitivity ranged from 1% to 99% (reference method 98%), and PPV from 8% to 100% (reference method 100%), when testing the spiked samples. Fifteen methods had ≥70% sensitivity to detect the dominant (major) serotype, whilst only eight methods had ≥70% sensitivity to detect minor serotypes. For the field samples, the overall sensitivity ranged from 74.2% to 95.8% (reference method 93.8%), and PPV from 82.2% to 96.4% (reference method 99.6%). The microarray had the highest sensitivity (95.8%) and high PPV (93.7%). The major limitation of this study is that not all of the available alternative serotyping methods were included. 
Conclusions Most methods were able to detect the dominant serotype in a sample, but many performed poorly in detecting the minor serotype populations. Microarray with a culture amplification step was the top-performing method. Results from this comprehensive evaluation will inform future vaccine evaluation and impact studies, particularly in low-income settings, where pneumococcal disease burden remains high. PMID:26575033
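Sensitivity and positive predictive value, the two metrics reported throughout the PneuCarriage evaluation above, are computed from confusion-matrix counts; the sketch below uses hypothetical counts, not the study's data.

```python
def sensitivity_ppv(tp, fn, fp):
    """Sensitivity = TP / (TP + FN); positive predictive value = TP / (TP + FP)."""
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical counts for a serotyping method tested on spiked samples:
# 95 serotypes correctly detected, 5 missed, 7 falsely reported.
sens, ppv = sensitivity_ppv(tp=95, fn=5, fp=7)
```

Note the asymmetry the study highlights: a method can have high sensitivity for dominant serotypes yet a much lower sensitivity for minor serotypes, because the minor-serotype true positives are a separate (and harder) count.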
Field efficiency and bias of snag inventory methods
Robert S. Kenning; Mark J. Ducey; John C. Brissette; Jeffery H. Gove
2005-01-01
Snags and cavity trees are important components of forests, but can be difficult to inventory precisely and are not always included in inventories because of limited resources. We tested the application of N-tree distance sampling as a time-saving snag sampling method and compared N-tree distance sampling to fixed-area sampling and modified horizontal line sampling in...
40 CFR 761.292 - Chemical extraction and analysis of individual samples and composite samples.
Code of Federal Regulations, 2011 CFR
2011-07-01
... individual samples and composite samples. 761.292 Section 761.292 Protection of Environment ENVIRONMENTAL... Cleanup and On-Site Disposal of Bulk PCB Remediation Waste and Porous Surfaces in Accordance With § 761... individual and composite samples of PCB remediation waste. Use Method 8082 from SW-846, or a method validated...
Sampling Operations on Big Data
2015-11-29
…categories. These include edge sampling methods, where edges are selected by a predetermined criterion, and snowball sampling methods, where algorithms start… Sampling Operations on Big Data. Vijay Gadepally, Taylor Herr, Luke Johnson, Lauren Milechin, Maja Milosavljevic, Benjamin A. Miller. Lincoln… process and disseminate information for discovery and exploration under real-time constraints. Common signal processing operations such as sampling and…
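The snippet above is truncated, but a canonical streaming sampling primitive in this big-data setting is reservoir sampling (Algorithm R), which draws a uniform sample from a stream of unknown length in one pass. This sketch is illustrative and is not taken from the Lincoln Laboratory report.

```python
import random

def reservoir_sample(stream, k, rng):
    """Algorithm R: uniform random sample of k items from a stream of
    unknown length, in one pass and O(k) memory."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)           # position among the first i+1 items
            if j < k:
                reservoir[j] = item         # replace with probability k/(i+1)
    return reservoir

rng = random.Random(42)
sample = reservoir_sample(range(1_000_000), 10, rng)
```

Every element of the stream ends up in the final reservoir with probability exactly k/n, which is what makes the method suitable for real-time constraints where n is not known in advance.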
Improved sampling and analysis of images in corneal confocal microscopy.
Schaldemose, E L; Fontain, F I; Karlsson, P; Nyengaard, J R
2017-10-01
Corneal confocal microscopy (CCM) is a noninvasive clinical method to analyse and quantify corneal nerve fibres in vivo. Although the CCM technique is in constant progress, there are methodological limitations in terms of sampling of images and objectivity of the nerve quantification. The aim of this study was to present a randomized sampling method of the CCM images and to develop an adjusted area-dependent image analysis. Furthermore, a manual nerve fibre analysis method was compared to a fully automated method. Twenty-three idiopathic small-fibre neuropathy patients were investigated using CCM. Corneal nerve fibre length density (CNFL) and corneal nerve fibre branch density (CNBD) were determined in both a manual and automatic manner. Differences in CNFL and CNBD between (1) the randomized and the most common sampling method, (2) the adjusted and the unadjusted area and (3) the manual and automated quantification method were investigated. The CNFL values were significantly lower when using the randomized sampling method compared to the most common method (p = 0.01). There was no statistically significant difference in the CNBD values between the randomized and the most common sampling method (p = 0.85). CNFL and CNBD values were increased when using the adjusted area compared to the standard area. Additionally, the study found a significant increase in the CNFL and CNBD values when using the manual method compared to the automatic method (p ≤ 0.001). The study demonstrated a significant difference in the CNFL values between the randomized and common sampling method, indicating the importance of clear guidelines for the image sampling. The increase in CNFL and CNBD values when using the adjusted cornea area is not surprising. The observed increases in both CNFL and CNBD values when using the manual method of nerve quantification compared to the automatic method are consistent with earlier findings.
This study underlines the importance of improving the analysis of the CCM images in order to obtain more objective corneal nerve fibre measurements. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
Brorby, G P; Sheehan, P J; Berman, D W; Bogen, K T; Holm, S E
2011-05-01
Airborne samples collected in the 1970s for drywall workers using asbestos-containing joint compounds were likely prepared and analyzed according to National Institute of Occupational Safety and Health Method P&CAM 239, the historical precursor to current Method 7400. Experimentation with a re-created, chrysotile-containing, carbonate-based joint compound suggested that analysis following sample preparation by the historical vs. current method produces different fiber counts, likely because of an interaction between the different clearing and mounting chemicals used and the carbonate-based joint compound matrix. Differences were also observed during analysis using Method 7402, depending on whether acetic acid/dimethylformamide or acetone was used during preparation to collapse the filter. Specifically, air samples of sanded chrysotile-containing joint compound prepared by the historical method yielded fiber counts significantly greater (average of 1.7-fold, 95% confidence interval: 1.5- to 2.0-fold) than those obtained by the current method. In addition, air samples prepared by Method 7402 using acetic acid/dimethylformamide yielded fiber counts that were greater (2.8-fold, 95% confidence interval: 2.5- to 3.2-fold) than those prepared by this method using acetone. These results indicated (1) there is an interaction between Method P&CAM 239 preparation chemicals and the carbonate-based joint compound matrix that reveals fibers that were previously bound in the matrix, and (2) the same appeared to be true for Method 7402 preparation chemicals acetic acid/dimethylformamide. This difference in fiber counts is the opposite of what has been reported historically for samples of relatively pure chrysotile dusts prepared using the same chemicals. This preparation artifact should be considered when interpreting historical air samples for drywall workers prepared by Method P&CAM 239. Copyright © 2011 JOEH, LLC
Shakeri, Heman; Volkova, Victoriya; Wen, Xuesong; Deters, Andrea; Cull, Charley; Drouillard, James; Müller, Christian; Moradijamei, Behnaz; Jaberi-Douraki, Majid
2018-05-01
To assess phenotypic bacterial antimicrobial resistance (AMR) in different strata (e.g., host populations, environmental areas, manure, or sewage effluents) for epidemiological purposes, isolates of target bacteria can be obtained from a stratum using various sample types. Also, different sample processing methods can be applied. The MIC of each target antimicrobial drug for each isolate is measured. Statistical equivalence testing of the MIC data for the isolates allows evaluation of whether different sample types or sample processing methods yield equivalent estimates of the bacterial antimicrobial susceptibility in the stratum. We demonstrate this approach on the antimicrobial susceptibility estimates for (i) nontyphoidal Salmonella spp. from ground or trimmed meat versus cecal content samples of cattle in processing plants in 2013-2014 and (ii) nontyphoidal Salmonella spp. from urine, fecal, and blood human samples in 2015 (U.S. National Antimicrobial Resistance Monitoring System data). We found that the sample types for cattle yielded nonequivalent susceptibility estimates for several antimicrobial drug classes and thus may gauge distinct subpopulations of salmonellae. The quinolone and fluoroquinolone susceptibility estimates for nontyphoidal salmonellae from human blood are nonequivalent to those from urine or feces, conjecturally due to the fluoroquinolone (ciprofloxacin) use to treat infections caused by nontyphoidal salmonellae. We also demonstrate statistical equivalence testing for comparing sample processing methods for fecal samples (culturing one versus multiple aliquots per sample) to assess AMR in fecal Escherichia coli. These methods yield equivalent results, except for tetracyclines. Importantly, statistical equivalence testing provides the MIC difference at which the data from two sample types or sample processing methods differ statistically.
Data users (e.g., microbiologists and epidemiologists) may then interpret practical relevance of the difference. IMPORTANCE Bacterial antimicrobial resistance (AMR) needs to be assessed in different populations or strata for the purposes of surveillance and determination of the efficacy of interventions to halt AMR dissemination. To assess phenotypic antimicrobial susceptibility, isolates of target bacteria can be obtained from a stratum using different sample types or employing different sample processing methods in the laboratory. The MIC of each target antimicrobial drug for each of the isolates is measured, yielding the MIC distribution across the isolates from each sample type or sample processing method. We describe statistical equivalence testing for the MIC data for evaluating whether two sample types or sample processing methods yield equivalent estimates of the bacterial phenotypic antimicrobial susceptibility in the stratum. This includes estimating the MIC difference at which the data from the two approaches differ statistically. Data users (e.g., microbiologists, epidemiologists, and public health professionals) can then interpret whether that present difference is practically relevant. Copyright © 2018 Shakeri et al.
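A minimal sketch of the equivalence-testing idea on the log2(MIC) scale: a normal-approximation TOST that declares two sample types equivalent when the 90% confidence interval of their mean difference lies entirely within the equivalence margin. The data, the one-doubling-dilution margin, and the z-approximation are assumptions for illustration, not the authors' procedure.

```python
import math
from statistics import NormalDist, mean, stdev

def tost_equivalent(x, y, margin):
    """Normal-approximation TOST: equivalent if the 90% CI of the mean
    difference lies entirely within (-margin, +margin)."""
    d = mean(x) - mean(y)
    se = math.sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    z = NormalDist().inv_cdf(0.95)           # ~1.645: one-sided 5% per bound
    lo, hi = d - z * se, d + z * se
    return -margin < lo and hi < margin

# Hypothetical log2(MIC) readings (dilution steps) from two sample types,
# with a margin of one doubling dilution (1.0 on the log2 scale).
type_a = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
type_b = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]   # same susceptibility distribution
type_c = [2, 2, 3, 2, 2, 3, 2, 2, 3, 2]   # shifted by roughly two dilutions
```

Because MICs are measured on a doubling-dilution scale, framing the margin in log2 units makes "the MIC difference at which the data differ statistically" directly interpretable as a number of dilution steps.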
Pontoni, Ludovico; Panico, Antonio; Matanò, Alessia; van Hullebusch, Eric D; Fabbricino, Massimiliano; Esposito, Giovanni; Pirozzi, Francesco
2017-12-06
A novel modification of the sample preparation procedure for the Folin-Ciocalteu colorimetric assay for the determination of total phenolic compounds in natural solid and semisolid organic materials (e.g., foods, organic solid waste, soils, plant tissues, agricultural residues, manure) is proposed. In this method, the sample is prepared by adding sodium sulfate as a solid diluting agent before homogenization. The method allows for the determination of total phenols (TP) in samples with high solids contents, and it provides good accuracy and reproducibility. Additionally, this method permits analyses of significant amounts of sample, which reduces problems related to heterogeneity. We applied this method to phenols-rich lignocellulosic and humic-like solids and semisolid samples, including rice straw (RS), peat-rich soil (PS), and food waste (FW). The TP concentrations measured with the solid dilution (SD) preparation were substantially higher (increases of 41.4%, 15.5%, and 59.4% in RS, PS and FW, respectively) than those obtained with the traditional method (solids suspended in water). These results showed that the traditional method underestimates the phenolic contents in the studied solids.
Pipes, W O; Minnigh, H A; Moyer, B; Troy, M A
1986-01-01
A total of 2,601 water samples from six different water systems were tested for coliform bacteria by Clark's presence-absence (P-A) test and by the membrane filter (MF) method. There was no significant difference in the fraction of samples positive for coliform bacteria for any of the systems tested. It was concluded that the two tests are equivalent for monitoring purposes. However, 152 samples were positive for coliform bacteria by the MF method but negative by the P-A test, and 132 samples were positive by the P-A test but negative by the MF method. Many of these differences for individual samples can be explained by random dispersion of bacteria in subsamples when the coliform density is low. However, 15 samples had MF counts greater than 3 and gave negative P-A results. The only apparent explanation for most of these results is that coliform bacteria were present in the P-A test bottles but did not produce acid and gas. Two other studies have reported more samples positive by Clark's P-A test than by the MF method. PMID:3532953
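The random-dispersion explanation above for discordant subsamples can be made concrete with a Poisson model (an assumption here, not a calculation from the paper): if each test portion contains a Poisson(λ) number of coliforms, the chance that one bottle tests positive while the other tests negative is substantial whenever the density is low.

```python
import math

def split_discordance(lam):
    """P(two independent Poisson(lam) subsamples disagree on presence:
    one contains at least one organism, the other contains none)."""
    p0 = math.exp(-lam)              # P(no coliform in a subsample)
    return 2 * (1 - p0) * p0

# Discordance peaks at 50% when lam = ln 2 (~0.69 organisms per subsample).
peak = split_discordance(math.log(2))
low_density = split_discordance(1.0)   # ~46.5% at one organism per subsample
```

This is consistent with the paper's observation that many P-A/MF disagreements on individual samples need no explanation beyond chance partitioning of a few cells between subsamples.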
Investigating Test Equating Methods in Small Samples through Various Factors
ERIC Educational Resources Information Center
Asiret, Semih; Sünbül, Seçil Ömür
2016-01-01
In this study, the aim was to compare equating methods for the random groups design in small samples across factors such as sample size, difference in difficulty between forms, and the guessing parameter, and to investigate which method gives better results under which conditions. In this study, 5,000 dichotomous simulated data…
Biomass Compositional Analysis Laboratory Procedures | Bioenergy | NREL
Compositional Analysis: This procedure describes methods for sample drying and size reduction, for obtaining samples, and methods used to determine the amount of solids or moisture present in a solid or slurry biomass sample… We have found that neutral detergent fiber (NDF) and acid detergent fiber (ADF) methods report…
Sampling Methods and the Accredited Population in Athletic Training Education Research
ERIC Educational Resources Information Center
Carr, W. David; Volberding, Jennifer
2009-01-01
Context: We describe methods of sampling the widely-studied, yet poorly defined, population of accredited athletic training education programs (ATEPs). Objective: There are two purposes to this study; first to describe the incidence and types of sampling methods used in athletic training education research, and second to clearly define the…
Wipe sampling is an important technique for the estimation of contaminant deposition in buildings, homes, or outdoor surfaces as a source of possible human exposure. Numerous methods of wipe sampling exist, and each method has its own specification for the type of wipe, we…
Savoie, Jennifer G.; LeBlanc, Denis R.
2012-01-01
Field tests were conducted near the Impact Area at Camp Edwards on the Massachusetts Military Reservation, Cape Cod, Massachusetts, to determine the utility of no-purge groundwater sampling for monitoring concentrations of ordnance-related explosive compounds and perchlorate in the sand and gravel aquifer. The no-purge methods included (1) a diffusion sampler constructed of rigid porous polyethylene, (2) a diffusion sampler constructed of regenerated-cellulose membrane, and (3) a tubular grab sampler (bailer) constructed of polyethylene film. In samples from 36 monitoring wells, concentrations of perchlorate (ClO4-), hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX), and octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX), the major contaminants of concern in the Impact Area, in the no-purge samples were compared to concentrations of these compounds in samples collected by low-flow pumped sampling with dedicated bladder pumps. The monitoring wells are constructed of 2- and 2.5-inch-diameter polyvinyl chloride pipe and have approximately 5- to 10-foot-long slotted screens. The no-purge samplers were left in place for 13-64 days to ensure that ambient groundwater flow had flushed the well screen and concentrations in the screen represented water in the adjacent formation. The sampling methods were compared first in six monitoring wells. Concentrations of ClO4-, RDX, and HMX in water samples collected by the three no-purge sampling methods and low-flow pumped sampling were in close agreement for all six monitoring wells. There is no evidence of a systematic bias in the concentration differences among the methods on the basis of type of sampling device, type of contaminant, or order in which the no-purge samplers were tested. 
A subsequent examination of vertical variations in concentrations of ClO4- in the 10-foot-long screens of six wells by using rigid porous polyethylene diffusion samplers indicated that concentrations in a given well varied by less than 15 percent and the small variations were unlikely to affect the utility of the various sampling methods. The grab sampler was selected for additional tests in 29 of the 36 monitoring wells used during the study. Concentrations of ClO4-, RDX, HMX, and other minor explosive compounds in water samples collected by using a 1-liter grab sampler and low-flow pumped sampling were in close agreement in field tests in the 29 wells. A statistical analysis based on the sign test indicated that there was no bias in the concentration differences between the methods. There also was no evidence for a systematic bias in concentration differences between the methods related to location of the monitoring wells laterally or vertically in the groundwater-flow system. Field tests in five wells also demonstrated that sample collection by using a 2-liter grab sampler and sequential bailing with the 1-liter grab sampler were options for obtaining sufficient sample volume for replicate and spiked quality assurance and control samples. The evidence from the field tests supports the conclusion that diffusion sampling with the rigid porous polyethylene and regenerated-cellulose membranes and grab sampling with the polyethylene-film samplers provide comparable data on the concentrations of ordnance-related compounds in groundwater at the MMR to that obtained by low-flow pumped sampling. These sampling methods are useful methods for monitoring these compounds at the MMR and in similar hydrogeologic environments.
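The report's bias check used a sign test on paired concentration differences between methods. The sketch below is a generic exact two-sided sign test (ties dropped), with invented example numbers standing in for paired (no-purge minus pumped) differences.

```python
from math import comb

def sign_test_p(diffs):
    """Exact two-sided sign test on paired differences; zero differences
    are dropped, and the p-value is 2 * (smaller binomial tail), capped at 1."""
    nonzero = [d for d in diffs if d != 0]
    n = len(nonzero)
    k = sum(d > 0 for d in nonzero)          # count of positive differences
    tail = min(k, n - k)
    p = 2 * sum(comb(n, i) for i in range(tail + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical paired concentration differences (ug/L), grab minus pumped.
balanced  = [0.1, -0.2, 0.05, -0.1, 0.2, -0.05, 0.15, -0.15]   # no bias
one_sided = [0.3, 0.2, 0.4, 0.1, 0.5, 0.2, 0.3, 0.1]           # systematic
```

A balanced split of signs gives no evidence of bias, while eight positive differences out of eight yields a small p-value, which is the logic behind the report's "no systematic bias" conclusion.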
Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael
2014-01-01
Objective Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144
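One simple form of the reweighting idea described above is post-stratification: weight the full-sample records so that their mix over a confounder stratum matches the validation sample's mix. The sketch below is a toy version with an invented binary confounder ("frail" vs "robust"), not the Group Health analysis.

```python
from collections import Counter

def poststrat_weights(full, validation, stratum):
    """Weights that make the full sample's confounder-stratum distribution
    match the validation sample's (simple post-stratification)."""
    f = Counter(stratum(r) for r in full)
    v = Counter(stratum(r) for r in validation)
    nf, nv = len(full), len(validation)
    # Each record is weighted by (validation share) / (full-sample share).
    return [(v[stratum(r)] / nv) / (f[stratum(r)] / nf) for r in full]

# Hypothetical records keyed by a frailty indicator measured only in the
# validation cohort; the full administrative sample under-represents it.
full_sample = [("frail",)] * 20 + [("robust",)] * 80
validation  = [("frail",)] * 50 + [("robust",)] * 50
w = poststrat_weights(full_sample, validation, lambda r: r[0])
```

After weighting, the frail stratum contributes half the effective sample, matching the validation cohort; in a real analysis the weights would feed into a weighted outcome model. As the abstract notes, this only helps when the validation sample is large, representative, and comparable to the full sample.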
Sparse feature learning for instrument identification: Effects of sampling and pooling methods.
Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu
2016-05-01
Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both proposed sampling methods. For summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47,000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are experimented with, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
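The three pooling strategies compared in the paper (max, average, and standard deviation pooling) all reduce a frames-by-features activation matrix to a single vector per clip. This is a generic sketch of that aggregation step; the use of the population standard deviation is an assumption here, not a detail from the paper.

```python
from statistics import mean, pstdev

def pool(frames, method):
    """Aggregate per-frame feature activations across time into one vector.
    frames: list of equal-length activation vectors (one per analysis frame)."""
    cols = list(zip(*frames))                 # one tuple per feature dimension
    if method == "max":
        return [max(c) for c in cols]
    if method == "avg":
        return [mean(c) for c in cols]
    if method == "std":                        # the std-pooling variant
        return [pstdev(c) for c in cols]
    raise ValueError(f"unknown pooling method: {method}")

frames = [[1, 0], [3, 2], [2, 4]]             # 3 frames x 2 learned features
```

Std pooling captures how much an activation fluctuates over time rather than its level, which is one plausible reason it pairs well with proportional frame sampling in the reported results.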
Hughes, Sarah A; Huang, Rongfu; Mahaffey, Ashley; Chelme-Ayala, Pamela; Klamerth, Nikolaus; Meshref, Mohamed N A; Ibrahim, Mohamed D; Brown, Christine; Peru, Kerry M; Headley, John V; Gamal El-Din, Mohamed
2017-11-01
There are several established methods for the determination of naphthenic acids (NAs) in waters associated with oil sands mining operations. Due to their highly complex nature, the measured concentration and composition of NAs vary depending on the method used. This study compared different common sample preparation techniques, analytical instrument methods, and analytical standards to measure NAs in groundwater and process water samples collected from an active oil sands operation. In general, the high- and ultrahigh-resolution methods, namely ultra performance liquid chromatography time-of-flight mass spectrometry (UPLC-TOF-MS) and Orbitrap mass spectrometry (Orbitrap-MS), were within an order of magnitude of the Fourier transform infrared spectroscopy (FTIR) methods. The gas chromatography mass spectrometry (GC-MS) methods consistently had the highest NA concentrations and greatest standard error. Total NAs concentration was not statistically different between sample preparation by solid phase extraction and liquid-liquid extraction. Calibration standards influenced quantitation results. This work provided a comprehensive understanding of the inherent differences in the various techniques available to measure NAs and hence the potential differences in measured amounts of NAs in samples. Results from this study will contribute to analytical method standardization for NA analysis in oil sands related water samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
An integrated bioanalytical method development and validation approach: case studies.
Xue, Y-J; Melo, Brian; Vallejo, Martha; Zhao, Yuwen; Tang, Lina; Chen, Yuan-Shek; Keller, Karin M
2012-10-01
We proposed an integrated bioanalytical method development and validation approach: (1) method screening based on analyte's physicochemical properties and metabolism information to determine the most appropriate extraction/analysis conditions; (2) preliminary stability evaluation using both quality control and incurred samples to establish sample collection, storage and processing conditions; (3) mock validation to examine method accuracy and precision and incurred sample reproducibility; and (4) method validation to confirm the results obtained during method development. This integrated approach was applied to the determination of compound I in rat plasma and compound II in rat and dog plasma. The effectiveness of the approach was demonstrated by the superior quality of three method validations: (1) a zero run failure rate; (2) >93% of quality control results within 10% of nominal values; and (3) 99% incurred sample within 9.2% of the original values. In addition, rat and dog plasma methods for compound II were successfully applied to analyze more than 900 plasma samples obtained from Investigational New Drug (IND) toxicology studies in rats and dogs with near perfect results: (1) a zero run failure rate; (2) excellent accuracy and precision for standards and quality controls; and (3) 98% incurred samples within 15% of the original values. Copyright © 2011 John Wiley & Sons, Ltd.
Drinking water test methods in crisis-afflicted areas: comparison of methods under field conditions.
Merle, Roswitha; Bleul, Ingo; Schulenburg, Jörg; Kreienbrock, Lothar; Klein, Günter
2011-11-01
To simplify the testing of drinking water in crisis-afflicted areas (as in Kosovo in 2007), rapid test methods were compared with the standard test. For Escherichia coli and coliform pathogens, the following rapid tests were made available: Colilert®-18, the P/A test with 4-methylumbelliferyl-β-D-glucuronide, and m-Endo Broth. Biochemical differentiation was carried out by Enterotube™ II. Enterococci were determined following the standard ISO test and by means of Enterolert™. Four hundred ninety-nine water samples were tested for E. coli and coliforms using four methods. Following the standard method, 20.8% (n=104) of the samples contained E. coli, whereas the rapid tests detected between 19.6% (m-Endo Broth, 92.0% concordance) and 20.0% (concordance: 93.6% Colilert-18 and 94.8% P/A test) positive samples. Regarding coliforms, the percentage of concordant results ranged from 98.4% (P/A test) to 99.0% (Colilert-18). Colilert-18 and m-Endo Broth detected even more positive samples than the standard method did. Enterococci were detected in 93 of 573 samples by the standard method, but in 92 samples by Enterolert (concordance: 99.5%). Considering the high-quality equipment and time requirements of the standard method, the use of rapid tests in crisis-afflicted areas is sufficiently reliable.
Voelz, David G; Roggemann, Michael C
2009-11-10
Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
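The transfer-function (angular spectrum) approach discussed above can be sketched as follows. This is a generic illustration, not the authors' code: the function names are invented, the constant phase factor exp(jkz) is dropped, and the ideal-sampling spacing shown for the single-FFT chirp, dx = sqrt(λz/N), is stated here as an assumption:

```python
import numpy as np

def fresnel_tf_propagate(u1, wavelength, z, dx):
    """Propagate a sampled field u1 (N x N, sample spacing dx) over a
    distance z using the Fresnel transfer-function method: multiply the
    field's spectrum by the Fresnel kernel H and transform back."""
    n = u1.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial-frequency grid (cycles/m)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u1) * H)

def ideal_dx(wavelength, z, n):
    """Critical ('ideal') sample spacing for the single-FFT chirp,
    where the chirp and its FFT both match the analytic expressions."""
    return np.sqrt(wavelength * z / n)
```

Because |H| = 1, the method conserves total power; choosing dx away from the ideal spacing trades off source-plane support, source bandwidth, or observation-plane support, as the abstract describes.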
A fast learning method for large scale and multi-class samples of SVM
NASA Astrophysics Data System (ADS)
Fan, Yu; Guo, Huiming
2017-06-01
A fast learning method for multi-class SVM (Support Vector Machine) classification, based on a binary tree, is presented to address SVM's low learning efficiency when processing large-scale multi-class samples. This paper adopts a bottom-up method to build the binary tree hierarchy; according to the resulting hierarchy, the sub-classifier at each node learns from that node's corresponding samples. During learning, several class clusters are generated by a first clustering of the training samples. Central points are extracted from those clusters that contain only one type of sample. For clusters that contain two types of samples, the cluster numbers of their positive and negative samples are set according to their degree of mixture, a secondary clustering is undertaken, and central points are then extracted from the resulting sub-clusters. Sub-classifiers are obtained by learning from the reduced sample set formed by integrating the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, can guarantee higher classification accuracy, greatly reduce sample numbers, and effectively improve learning efficiency.
Evaluation of direct saponification method for determination of cholesterol in meats.
Adams, M L; Sullivan, D M; Smith, R L; Richter, E F
1986-01-01
A gas chromatographic (GC) method has been developed for determination of cholesterol in meats. The method involves ethanolic KOH saponification of the sample material, homogeneous-phase toluene extraction of the unsaponifiables, derivatization of cholesterol to its trimethylsilylether, and quantitation by GC-flame ionization detection using 5-alpha-cholestane as internal standard. This direct saponification method is compared with the current AOAC official method for determination of cholesterol in 20 different meat products. The direct saponification method eliminates the need for initial lipid extraction, thus offering a 30% savings in labor, and requires fewer solvents than the AOAC method. It produced comparable or slightly higher cholesterol results than the AOAC method in all meat samples examined. Precision, determined by assaying a turkey meat sample 16 times over 4 days, was excellent (CV = 1.74%). Average recovery of cholesterol added to meat samples was 99.8%.
Mixed Methods Sampling: A Typology with Examples
ERIC Educational Resources Information Center
Teddlie, Charles; Yu, Fen
2007-01-01
This article presents a discussion of mixed methods (MM) sampling techniques. MM sampling involves combining well-established qualitative and quantitative techniques in creative ways to answer research questions posed by MM research designs. Several issues germane to MM sampling are presented including the differences between probability and…
40 CFR 761.272 - Chemical extraction and analysis of samples.
Code of Federal Regulations, 2012 CFR
2012-07-01
... samples. 761.272 Section 761.272 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... COMMERCE, AND USE PROHIBITIONS Cleanup Site Characterization Sampling for PCB Remediation Waste in... composite samples of PCB remediation waste. Use Method 8082 from SW-846, or a method validated under subpart...
Satzke, Catherine; Dunne, Eileen M; Porter, Barbara D; Klugman, Keith P; Mulholland, E Kim
2015-11-01
The pneumococcus is a diverse pathogen whose primary niche is the nasopharynx. Over 90 different serotypes exist, and nasopharyngeal carriage of multiple serotypes is common. Understanding pneumococcal carriage is essential for evaluating the impact of pneumococcal vaccines. Traditional serotyping methods are cumbersome and insufficient for detecting multiple serotype carriage, and there are few data comparing the new methods that have been developed over the past decade. We established the PneuCarriage project, a large, international multi-centre study dedicated to the identification of the best pneumococcal serotyping methods for carriage studies. Reference sample sets were distributed to 15 research groups for blinded testing. Twenty pneumococcal serotyping methods were used to test 81 laboratory-prepared (spiked) samples. The five top-performing methods were used to test 260 nasopharyngeal (field) samples collected from children in six high-burden countries. Sensitivity and positive predictive value (PPV) were determined for the test methods and the reference method (traditional serotyping of >100 colonies from each sample). For the alternate serotyping methods, the overall sensitivity ranged from 1% to 99% (reference method 98%), and PPV from 8% to 100% (reference method 100%), when testing the spiked samples. Fifteen methods had ≥70% sensitivity to detect the dominant (major) serotype, whilst only eight methods had ≥70% sensitivity to detect minor serotypes. For the field samples, the overall sensitivity ranged from 74.2% to 95.8% (reference method 93.8%), and PPV from 82.2% to 96.4% (reference method 99.6%). The microarray had the highest sensitivity (95.8%) and high PPV (93.7%). The major limitation of this study is that not all of the available alternative serotyping methods were included. Most methods were able to detect the dominant serotype in a sample, but many performed poorly in detecting the minor serotype populations. 
Microarray with a culture amplification step was the top-performing method. Results from this comprehensive evaluation will inform future vaccine evaluation and impact studies, particularly in low-income settings, where pneumococcal disease burden remains high.
Comparative evaluation of two methods of enumerating enterococci in foods: collaborative study.
Peterz, M; Steneryd, A C
1993-05-01
Two methods of enumerating enterococci in foods were compared in a collaborative study. Thirteen laboratories tested four blind duplicate samples containing different levels of enterococci and two negative control samples. Freeze-dried mixtures of bacteria were used as simulated food samples. The freeze-dried samples were reconstituted and either spread directly on the surface of Slanetz and Bartley medium (SB) and incubated at 44 degrees C for 48 h, or preincubated in tryptone soya agar at 37 degrees C for 2 h before being overlaid with SB and incubated at 37 degrees C for a further 46 h. The numbers of enterococci (CFU) recovered by the two methods were not significantly different, except for one sample where the 37 degrees C method gave a somewhat higher recovery. The 44 degrees C method was less time-consuming and less laborious.
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, report results using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method that incorporates the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios. For the first two scenarios, our method greatly improves on existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods for estimating the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. 
We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as comprehensive guidance for performing meta-analysis in different situations.
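For the scenario where a trial reports only the minimum, median, maximum, and sample size, the estimators discussed above can be sketched as follows; this is a sketch assuming the Hozo et al. mean formula and the size-aware standard-deviation formula of Wan et al., with the function name invented for illustration:

```python
from statistics import NormalDist

def mean_sd_from_min_med_max(a, m, b, n):
    """Estimate the sample mean and standard deviation from the
    minimum (a), median (m), maximum (b), and sample size n.

    Mean: (a + 2m + b)/4 (Hozo et al., 2005).
    SD:   (b - a) / (2 * Phi^{-1}((n - 0.375)/(n + 0.25))), the
          size-aware estimator of Wan et al. (2014), which divides the
          range by the expected range of n standard-normal draws."""
    mean = (a + 2.0 * m + b) / 4.0
    xi = 2.0 * NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    sd = (b - a) / xi
    return mean, sd
```

Unlike the original range/4 or range/6 rules, the denominator here grows slowly with n, reflecting that the observed range of a larger sample spans more standard deviations.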
Durbin, Gregory W; Salter, Robert
2006-01-01
The Ecolite High Volume Juice (HVJ) presence-absence method for a 10-ml juice sample was compared with the U.S. Food and Drug Administration Bacteriological Analytical Manual most-probable-number (MPN) method for analysis of artificially contaminated orange juices. Samples were added to Ecolite-HVJ medium and incubated at 35 degrees C for 24 to 48 h. Fluorescent blue results were positive for glucuronidase- and galactosidase-producing microorganisms, specifically indicative of about 94% of Escherichia coli strains. Four strains of E. coli were added to juices at concentrations of 0.21 to 6.8 CFU/ml. Mixtures of enteric bacteria (Enterobacter plus Klebsiella, Citrobacter plus Proteus, or Hafnia plus Citrobacter plus Enterobacter) were added to simulate background flora. Three orange juice types were evaluated (n = 10) with and without the addition of the E. coli strains. Ecolite-HVJ produced 90 of 90 (10 of 10 samples of three juice types, each inoculated with three different E. coli strains) positive (blue-fluorescent) results with artificially contaminated E. coli at MPN concentrations of <0.3 to 9.3 MPN/ml. Ten of 30 E. coli ATCC 11229 samples with MPN concentrations of <0.3 MPN/ml were identified as positive with Ecolite-HVJ. Isolated colonies recovered from positive Ecolite-HVJ samples were confirmed biochemically as E. coli. Thirty (10 samples each of three juice types) negative (not fluorescent) results were obtained for samples contaminated with only enteric bacteria and for uninoculated control samples. A juice manufacturer evaluated citrus juice production with both the Ecolite-HVJ and Colicomplete methods and recorded identical negative results for 95 20-ml samples and identical positive results for 5 20-ml samples artificially contaminated with E. coli. The Ecolite-HVJ method requires no preenrichment and subsequent transfer steps, which makes it a simple and easy method for use by juice producers.
Espino, L; Way, M O; Wilson, L T
2008-02-01
Commercial rice, Oryza sativa L., fields in southeastern Texas were sampled during 2003 and 2004, and visual samples were compared with sweep net samples. Fields were sampled at different stages of panicle development, at different times of day, and by different operators. Significant differences were found between perimeter and within-field sweep net samples, indicating that samples taken 9 m from the field margin overestimate within-field Oebalus pugnax (F.) (Hemiptera: Pentatomidae) populations. Time of day did not significantly affect the number of O. pugnax caught with the sweep net; however, there was a trend to capture more insects during the morning than the afternoon. For all sampling methods evaluated during this study, O. pugnax was found to have an aggregated spatial pattern at most densities. When comparing sweep net with visual sampling methods, one sweep of the "long stick" and two sweeps of the "sweep stick" correlated well with the sweep net (r² = 0.639 and r² = 0.815, respectively). This relationship was not affected by time of day of sampling, stage of panicle development, type of planting, or operator. Relative cost-reliability, which incorporates probability of adoption, indicates the visual methods are more cost-reliable than the sweep net for sampling O. pugnax.
Hoffman, G.L.; Fishman, M. J.; Garbarino, J.R.
1996-01-01
Water samples for trace-metal determinations routinely have been prepared in open laboratories. For example, the U.S. Geological Survey method I-3485-85 (Extraction Procedure for Water-Suspended Sediment) is performed in a laboratory hood on a laboratory bench without any special precautions to control airborne contamination. This method tends to be contamination prone for several trace metals, primarily because the samples are transferred, acidified, digested, and filtered in an open laboratory environment. To reduce trace-metal contamination of digested water samples, procedures were established that rely on minimizing sample-transfer steps and using a class-100 clean bench during sample filtration. This new procedure involves the following steps: 1. The sample is acidified with HCl directly in the original water-sample bottle. 2. The water-sample bottle with the cap secured is heated in a laboratory oven. 3. The digestate is filtered in a class-100 laminar-flow clean bench. The exact conditions used (that is, oven temperature, time of heating, and filtration methods) for this digestion procedure are described. Comparisons between the previous U.S. Geological Survey open-beaker method I-3485-85 and the new in-bottle procedure for synthetic and field-collected water samples are given. When the new procedure is used, blank concentrations for most trace metals determined are reduced significantly.
Mellerup, Anders; Ståhl, Marie
2015-01-01
The aim of this article was to define the sampling level and method combination that captures antibiotic resistance at pig herd level utilizing qPCR antibiotic resistance gene quantification and culture-based quantification of antibiotic resistant coliform indicator bacteria. Fourteen qPCR assays for commonly detected antibiotic resistance genes were developed, and used to quantify antibiotic resistance genes in total DNA from swine fecal samples that were obtained using different sampling and pooling methods. In parallel, the number of antibiotic resistant coliform indicator bacteria was determined in the same swine fecal samples. The results showed that the qPCR assays were capable of detecting differences in antibiotic resistance levels in individual animals that the coliform bacteria colony forming units (CFU) could not. Also, the qPCR assays more accurately quantified antibiotic resistance genes when comparing individual sampling and pooling methods. qPCR on pooled samples was found to be a good representative for the general resistance level in a pig herd compared to the coliform CFU counts. It had significantly reduced relative standard deviations compared to coliform CFU counts in the same samples, and therefore differences in antibiotic resistance levels between samples were more readily detected. To our knowledge, this is the first study to describe sampling and pooling methods for qPCR quantification of antibiotic resistance genes in total DNA extracted from swine feces. PMID:26114765
Chuang, Jane C; Emon, Jeanette M Van; Durnford, Joyce; Thomas, Kent
2005-09-15
An enzyme-linked immunosorbent assay (ELISA) method was developed to quantitatively measure 2,4-dichlorophenoxyacetic acid (2,4-D) in human urine. Samples were diluted (1:5) with phosphate-buffered saline containing 0.05% Tween and 0.02% sodium azide, with analysis in a 96-microwell plate immunoassay format. No clean-up was required, as the dilution step minimized sample interferences. Fifty urine samples were received without identifiers from a subset of pesticide applicators and their spouses in an EPA pesticide exposure study (PES) and analyzed by the ELISA method and a conventional gas chromatography/mass spectrometry (GC/MS) procedure. For the GC/MS analysis, urine samples were extracted with acidic dichloromethane (DCM), methylated by diazomethane, and fractionated on a Florisil solid phase extraction (SPE) column prior to GC/MS detection. The percent relative standard deviation (%R.S.D.) of the 96-microwell plate triplicate assays ranged from 1.2 to 22% for the urine samples. Day-to-day variation of the assay results was within +/-20%. Quantitative recoveries (>70%) of 2,4-D were obtained for the spiked urine samples by the ELISA method. Quantitative recoveries (>80%) of 2,4-D were also obtained for these samples by the GC/MS procedure. The overall method precision for these samples was within +/-20% for both the ELISA and GC/MS methods. The estimated quantification limit for 2,4-D in urine was 30 ng/mL by ELISA and 0.2 ng/mL by GC/MS. The higher quantification limit for the ELISA method is partly due to the required 1:5 dilution to remove the urine sample matrix effect. The GC/MS method can accommodate a 10:1 concentration factor (10 mL of urine converted into 1 mL of organic solvent for analysis) but requires extraction, methylation, and clean-up on a solid phase column. The immunoassay and GC/MS data were highly correlated, with a correlation coefficient of 0.94 and a slope of 1.00. 
Favorable results between the two methods were achieved despite the vast differences in sample preparation. Results indicated that the ELISA method could be used as a high throughput, quantitative monitoring tool for human urine samples to identify individuals with exposure to 2,4-D above the typical background levels.
The efficacy of respondent-driven sampling for the health assessment of minority populations.
Badowski, Grazyna; Somera, Lilnabeth P; Simsiman, Brayan; Lee, Hye-Ryeon; Cassel, Kevin; Yamanaka, Alisha; Ren, JunHao
2017-10-01
Respondent-driven sampling (RDS) is a relatively new network sampling technique typically employed for hard-to-reach populations. Like snowball sampling, initial respondents or "seeds" recruit additional respondents from their network of friends. Under certain assumptions, the method promises to produce a sample independent of the biases that may have been introduced by the non-random choice of "seeds." We conducted a survey on health communication in Guam's general population using the RDS method, the first survey to utilize this methodology in Guam. It was conducted in hopes of identifying a cost-efficient non-probability sampling strategy that could generate reasonable population estimates for both minority and general populations. RDS data were collected in Guam in 2013 (n=511), and population estimates were compared with 2012 BRFSS data (n=2031) and 2010 census data. The estimates were calculated using the unweighted RDS sample and the weighted sample using RDS inference methods, and compared with known population characteristics. The sample size was reached in 23 days, providing evidence that the RDS method is a viable, cost-effective data collection method that can provide reasonable population estimates. However, the results also suggest that the RDS inference methods used to reduce bias, based on self-reported estimates of network sizes, may not always work. Caution is needed when interpreting RDS study findings. For a more diverse sample, data collection should not be conducted in just one location. Fewer questions about network estimates should be asked, and more careful consideration should be given to the kind of incentives offered to participants. Copyright © 2017. Published by Elsevier Ltd.
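The degree-based weighting that RDS inference relies on (and that the authors caution depends on self-reported network sizes) can be sketched with the standard RDS-II (Volz-Heckathorn) point estimator; this is a generic sketch, not necessarily the exact estimator used in the study:

```python
def rds_ii_estimate(values, degrees):
    """RDS-II (Volz-Heckathorn) estimate of a population mean: weight
    each respondent by the inverse of their self-reported network size
    (degree), since high-degree people are more likely to be recruited."""
    weights = [1.0 / d for d in degrees]          # inverse-degree weights
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

# Example: a binary health indicator; respondents with larger networks
# are down-weighted relative to the unweighted sample proportion.
prevalence = rds_ii_estimate([1.0, 0.0, 1.0, 0.0], [20, 5, 15, 4])
```

If the self-reported degrees are inaccurate, the weights are wrong in proportion, which is one mechanism behind the paper's finding that RDS inference "may not always work."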
An atomic-absorption method for the determination of gold in large samples of geologic materials
VanSickle, Gordon H.; Lakin, Hubert William
1968-01-01
A laboratory method for the determination of gold in large (100-gram) samples has been developed for use in the study of the gold content of placer deposits and of trace amounts of gold in other geologic materials. In this method the sample is digested with bromine and ethyl ether, the gold is extracted into methyl isobutyl ketone, and the determination is made by atomic-absorption spectrophotometry. The lower limit of detection is 0.005 part per million in the sample. The few data obtained so far by this method agree favorably with those obtained by assay and by other atomic-absorption methods. About 25 determinations can be made per man-day.
Multi-laboratory survey of qPCR enterococci analysis method performance
Quantitative polymerase chain reaction (qPCR) has become a frequently used technique for quantifying enterococci in recreational surface waters, but there are several methodological options. Here we evaluated how three method permutations, type of mastermix, sample extract dilution, and use of controls in results calculation, affect method reliability among multiple laboratories with respect to sample interference. Multiple samples from each of 22 sites representing an array of habitat types were analyzed using EPA Method 1611 and 1609 reagents with full-strength and five-fold diluted extracts. The presence of interference was assessed three ways: using sample processing and PCR amplification controls; consistency of results across extract dilutions; and relative recovery of target genes from spiked enterococci in water samples compared to control matrices, with acceptable recovery defined as 50 to 200%. Method 1609, which is based on an environmental mastermix, was found to be superior to Method 1611, which is based on a universal mastermix. Method 1611 had over a 40% control assay failure rate with undiluted extracts and a 6% failure rate with diluted extracts. Method 1609 failed in only 11% and 3% of undiluted and diluted extract analyses. Use of sample processing control assay results in the delta-delta Ct method for calculating relative target gene recoveries increased the number of acceptable recovery results. Delta-delta tended to bias recoveries fr
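The delta-delta Ct calculation of relative target-gene recovery mentioned above can be sketched as follows; the function name, argument names, and the default amplification efficiency of 2 (perfect doubling per cycle) are illustrative assumptions:

```python
def delta_delta_ct_recovery(ct_target_sample, ct_spc_sample,
                            ct_target_control, ct_spc_control,
                            efficiency=2.0):
    """Relative target-gene recovery (percent) by the delta-delta Ct
    method: normalize the target assay against the sample processing
    control (SPC) assay in both the water sample and the control
    matrix, then compare the two normalized values."""
    d_sample = ct_target_sample - ct_spc_sample      # delta Ct, sample
    d_control = ct_target_control - ct_spc_control   # delta Ct, control
    return 100.0 * efficiency ** -(d_sample - d_control)
```

Under the study's criterion, a computed recovery between 50% and 200% (i.e., within one cycle of the control at perfect efficiency) would count as acceptable.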
Houts, Carrie R; Edwards, Michael C; Wirth, R J; Deal, Linda S
2016-11-01
There has been a notable increase in the advocacy of using small-sample designs as an initial quantitative assessment of item and scale performance during the scale development process. This is particularly true in the development of clinical outcome assessments (COAs), where Rasch analysis has been advanced as an appropriate statistical tool for evaluating the developing COAs using a small sample. We review the benefits such methods are purported to offer from both a practical and statistical standpoint and detail several problematic areas, including both practical and statistical theory concerns, with respect to the use of quantitative methods, including Rasch-consistent methods, with small samples. The feasibility of obtaining accurate information and the potential negative impacts of misusing large-sample statistical methods with small samples during COA development are discussed.
Zonta, Marco Antonio; Velame, Fernanda; Gema, Samara; Filassi, Jose Roberto; Longatto-Filho, Adhemar
2014-01-01
Background Breast cancer is the second leading cause of death in women worldwide. Spontaneous breast nipple discharge may contain cells that can be analyzed for malignancy. The Halo® Mamo Cyto Test (HMCT) was recently developed as an automated system indicated to aspirate cells from the breast ducts. The objective of this study was to standardize the methodology of sampling and sample preparation of nipple discharge obtained by the automated Halo breast test and to perform cytological evaluation on samples preserved in liquid medium (SurePath™). Methods We analyzed 564 nipple fluid samples, from women between 20 and 85 years old, without history of breast disease or neoplasia, no pregnancy, and without gynecologic medical history, collected by the HMCT method and preserved in two different vials with solutions for transport. Results Of the 306 nipple fluid samples from method 1, 199 (65%) were classified as unsatisfactory (class 0), 104 (34%) were classified as benign findings (class II), and three (1%) were classified as undetermined for neoplastic cells (class III). Of the 258 samples analyzed in method 2, 127 (49%) were classified as class 0, 124 (48%) as class II, and seven (2%) as class III. Conclusion Our study suggests an improvement in the quality and quantity of cellular samples when the two methodologies are combined: the Halo breast test and the liquid-medium method. PMID:29147397
La2-xSrxCuO4-δ superconducting samples prepared by the wet-chemical method
NASA Astrophysics Data System (ADS)
Loose, A.; Gonzalez, J. L.; Lopez, A.; Borges, H. A.; Baggio-Saitovitch, E.
2009-10-01
In this work, we report on the physical properties of good-quality polycrystalline superconducting samples of La2-xSrxCu1-yZnyO4-δ (y = 0, 0.02) prepared by a wet-chemical method, focusing on the temperature dependence of the critical current. Using the wet-chemical method, we were able to produce samples with improved homogeneity compared to the solid-state method. A complete set of samples with several carrier concentrations, ranging from the underdoped (strontium concentration x ≈ 0.05) to the highly overdoped (x ≈ 0.25) region, was prepared and investigated. The X-ray diffraction analysis, zero-field-cooling magnetization, and electrical resistivity measurements were reported on earlier. The structural parameters of the prepared samples seem to be slightly modified by the preparation method, and their critical temperatures were lower than reported in the literature. The temperature dependence of the critical current was explained by a theoretical model that took the granular structure of the samples into account.
Correcting for Sample Contamination in Genotype Calling of DNA Sequence Data
Flickinger, Matthew; Jun, Goo; Abecasis, Gonçalo R.; Boehnke, Michael; Kang, Hyun Min
2015-01-01
DNA sample contamination is a frequent problem in DNA sequencing studies and can result in genotyping errors and reduced power for association testing. We recently described methods to identify within-species DNA sample contamination based on sequencing read data, showed that our methods can reliably detect and estimate contamination levels as low as 1%, and suggested strategies to identify and remove contaminated samples from sequencing studies. Here we propose methods to model contamination during genotype calling as an alternative to removal of contaminated samples from further analyses. We compare our contamination-adjusted calls to calls that ignore contamination and to calls based on uncontaminated data. We demonstrate that, for moderate contamination levels (5%–20%), contamination-adjusted calls eliminate 48%–77% of the genotyping errors. For lower levels of contamination, our contamination correction methods produce genotypes nearly as accurate as those based on uncontaminated data. Our contamination correction methods are useful generally, but are particularly helpful for sample contamination levels from 2% to 20%. PMID:26235984
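The core of contamination-aware genotype calling is a mixture likelihood: each read comes from the true sample with probability 1 − α and from a contaminating individual with probability α. The sketch below illustrates that generic mixture idea only; the function and parameter names are ours, not the paper's notation or implementation.

```python
def contaminated_read_likelihood(p_base_given_geno, p_base_given_pop, alpha):
    """Per-read base likelihood under a simple contamination mixture:
    with probability 1 - alpha the read derives from the sample's true
    genotype, with probability alpha from a contaminant whose base
    distribution follows population allele frequencies.
    Illustrative sketch, not the paper's actual model."""
    return (1.0 - alpha) * p_base_given_geno + alpha * p_base_given_pop

# With no contamination (alpha = 0) the likelihood reduces to the
# ordinary genotype likelihood.
lik = contaminated_read_likelihood(0.9, 0.5, 0.1)
```

Genotype calling would then multiply such per-read terms across reads and maximize over genotypes, with α estimated separately as in the authors' earlier contamination-detection work.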
Flagging versus dragging as sampling methods for nymphal Ixodes scapularis (Acari: Ixodidae)
Rulison, Eric L.; Kuczaj, Isis; Pang, Genevieve; Hickling, Graham J.; Tsao, Jean I.; Ginsberg, Howard S.
2013-01-01
The nymphal stage of the blacklegged tick, Ixodes scapularis (Acari: Ixodidae), is responsible for most transmission of Borrelia burgdorferi, the etiologic agent of Lyme disease, to humans in North America. From 2010 to fall of 2012, we compared two commonly used techniques, flagging and dragging, as sampling methods for nymphal I. scapularis at three sites, each with multiple sampling arrays (grids), in the eastern and central United States. Flagging and dragging collected comparable numbers of nymphs, with no consistent differences between methods. Dragging collected more nymphs than flagging in some samples, but these differences were not consistent among sites or sampling years. The ratio of nymphs collected by flagging vs dragging was not significantly related to shrub density, so habitat type did not have a strong effect on the relative efficacy of these methods. Therefore, although dragging collected more ticks in a few cases, the numbers collected by each method were so variable that neither technique had a clear advantage for sampling nymphal I. scapularis.
Fluidics platform and method for sample preparation
Benner, Henry W.; Dzenitis, John M.
2016-06-21
Provided herein are fluidics platforms and related methods for performing integrated sample collection and solid-phase extraction of a target component of the sample all in one tube. The fluidics platform comprises a pump, particles for solid-phase extraction and a particle-holding means. The method comprises contacting the sample with one or more reagents in a pump, coupling a particle-holding means to the pump and expelling the waste out of the pump while the particle-holding means retains the particles inside the pump. The fluidics platform and methods herein described allow solid-phase extraction without pipetting and centrifugation.
Sampling Based Influence Maximization on Linear Threshold Model
NASA Astrophysics Data System (ADS)
Jia, Su; Chen, Ling
2018-04-01
A sampling-based method for influence maximization on the linear threshold (LT) model is presented. The method samples routes in the possible worlds of the social network and uses the Chernoff bound to estimate the number of samples required so that the error is constrained within a given bound. The activation probabilities of the routes in the possible worlds are then calculated and used to compute the influence spread of each node in the network. Our experimental results show that the method effectively selects seed node sets that spread larger influence than other similar methods.
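The sample-count step described in the abstract can be sketched with the standard Chernoff/Hoeffding bound for a mean of [0, 1]-bounded indicators: n ≥ ln(2/δ) / (2ε²) samples keep the additive error below ε with probability at least 1 − δ. This is a generic sketch of that bound, not the paper's exact derivation.

```python
import math

def chernoff_sample_count(eps, delta):
    """Number of Monte Carlo samples so that an estimated mean of
    [0, 1]-bounded variables (e.g. route-activation indicators used to
    estimate influence spread) deviates from its true value by more
    than eps with probability at most delta, via the Hoeffding/Chernoff
    bound n >= ln(2/delta) / (2 * eps^2)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

# e.g. 2% additive error at 95% confidence
n = chernoff_sample_count(0.02, 0.05)
```

Note how the cost grows quadratically as the error tolerance shrinks, which is why such methods fix ε first and then sample exactly as many possible worlds as the bound requires.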
Reflexion on linear regression trip production modelling method for ensuring good model quality
NASA Astrophysics Data System (ADS)
Suprayitno, Hitapriya; Ratnasari, Vita
2017-11-01
Transport modelling is important. For certain cases, the conventional model still has to be used, for which having a good trip production model is essential. A good model can only be obtained from a good sample. Two basic principles of good sampling are that the sample must be capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. These principles do not yet seem to be well understood or applied in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method that ensures model quality. The results of this research are as follows. Statistics provides a method to calculate the span of predicted values at a certain confidence level for linear regression, called the Confidence Interval of the Predicted Value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that sample composition can significantly change the model. Hence, a good R2 value does not, in fact, always mean good model quality. These observations lead to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good Confidence Interval of the Predicted Value. The calculation procedure must incorporate the statistical calculation method and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
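The Confidence Interval of the Predicted Value that the abstract proposes as a quality measure is the standard prediction interval for simple linear regression. A minimal sketch, assuming a single predictor and a caller-supplied Student-t quantile for n − 2 degrees of freedom (the function name is ours, for illustration):

```python
import math

def prediction_interval(xs, ys, x0, t_crit):
    """Prediction interval for a new observation at x0 under simple
    linear regression y = a + b*x.  t_crit is the Student-t quantile
    for n - 2 degrees of freedom at the desired confidence level,
    supplied by the caller (e.g. from statistical tables)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    a = ybar - b * xbar
    resid_ss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    s = math.sqrt(resid_ss / (n - 2))  # residual standard error
    half = t_crit * s * math.sqrt(1.0 + 1.0 / n + (x0 - xbar) ** 2 / sxx)
    yhat = a + b * x0
    return yhat - half, yhat + half
```

The interval widens both with the residual scatter s and with the distance of x0 from the sample mean, which is exactly why a model with an excellent R2 fitted to a small or badly composed sample can still carry a wide, poor-quality prediction span.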
A simple modification of the Baermann method for diagnosis of strongyloidiasis.
Hernández-Chavarría, F; Avendaño, L
2001-08-01
The diagnosis of Strongyloides stercoralis infections is routinely made by microscopic observation of larvae in stool samples, a low-sensitivity method, or by more effective methods such as the Baermann or agar plate culture methods. We propose in this paper a practical modification of the Baermann method. One hundred and six stool samples from alcoholic patients were analyzed using the direct smear test, the agar plate culture method, the standard Baermann method, and its proposed modification. For this modification, the funnel used in the original version of the method is replaced by a test tube with a rubber stopper, perforated to allow insertion of a pipette tip. The tube with a fecal suspension is inverted over another tube containing 6 ml of saline solution and incubated at 37 degrees C for at least 2 h. The saline solution from the second tube is centrifuged and the pellet is examined microscopically. Larvae of S. stercoralis were detected in six samples (5.7%) by both versions of the Baermann method. Five samples were positive using the agar plate culture method, and larvae were observed by direct microscopic observation of fecal smears in only two samples. Cysts of Endolimax nana and Entamoeba histolytica/dispar were also detected with the modified Baermann method. The data obtained suggest that this modification concentrates larvae of S. stercoralis as efficiently as the original method.
Phytoforensics—Using trees to find contamination
Wilson, Jordan L.
2017-09-28
The water we drink, air we breathe, and soil we come into contact with have the potential to adversely affect our health because of contaminants in the environment. Environmental samples can characterize the extent of potential contamination, but traditional methods for collecting water, air, and soil samples below the ground (for example, well drilling or direct-push soil sampling) are expensive and time consuming. Trees are closely connected to the subsurface and sampling tree trunks can indicate subsurface pollutants, a process called phytoforensics. Scientists at the Missouri Water Science Center were among the first to use phytoforensics to screen sites for contamination before using traditional sampling methods, to guide additional sampling, and to show the large cost savings associated with tree sampling compared to traditional methods.
Exploring high dimensional free energy landscapes: Temperature accelerated sliced sampling
NASA Astrophysics Data System (ADS)
Awasthi, Shalini; Nair, Nisanth N.
2017-03-01
Biased sampling of collective variables is widely used to accelerate rare events in molecular simulations and to explore free energy surfaces. However, computational efficiency of these methods decreases with increasing number of collective variables, which severely limits the predictive power of the enhanced sampling approaches. Here we propose a method called Temperature Accelerated Sliced Sampling (TASS) that combines temperature accelerated molecular dynamics with umbrella sampling and metadynamics to sample the collective variable space in an efficient manner. The presented method can sample a large number of collective variables and is advantageous for controlled exploration of broad and unbound free energy basins. TASS is also shown to achieve quick free energy convergence and is practically usable with ab initio molecular dynamics techniques.
Quantitative Evaluation of Hard X-ray Damage to Biological Samples using EUV Ptychography
NASA Astrophysics Data System (ADS)
Baksh, Peter; Odstrcil, Michal; Parsons, Aaron; Bailey, Jo; Deinhardt, Katrin; Chad, John E.; Brocklesby, William S.; Frey, Jeremy G.
2017-06-01
Coherent diffractive imaging (CDI) has become a standard method on a variety of synchrotron beamlines. The high-brilliance, short-wavelength radiation from these sources can be used to reconstruct the attenuation and relative phase of a sample with nanometre resolution via CDI methods. However, the interaction between the sample and high-energy ionising radiation can degrade the sample structure. We demonstrate, using a laboratory-based extreme ultraviolet (EUV) source based on high harmonic generation (HHG), imaging of a sample of hippocampal neurons by the ptychography method. The significantly increased contrast of the sample in EUV light allows identification of damage induced by exposure to 7.3 keV photons, without causing any further damage to the sample itself.
NASA Technical Reports Server (NTRS)
Carson, John M., III; Bayard, David S.
2006-01-01
G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
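The estimator at the heart of G-SAMPLE infers mass from force and motion data. A deliberately simplified one-dimensional sketch of that idea, fitting total mass as the least-squares slope of F = m·a over paired thruster-force and acceleration measurements (the real method uses a full spacecraft dynamics model and a maximum-likelihood formulation; the names here are ours):

```python
def estimate_sample_mass(forces, accels, dry_mass):
    """Least-squares estimate of collected sample mass from paired
    thrust-force and acceleration measurements, assuming F = m * a.
    total mass m minimizes sum((F_i - m * a_i)^2), giving
    m = sum(F_i * a_i) / sum(a_i^2); the sample mass is the excess
    over the known spacecraft dry mass.  A simplified 1-D sketch of
    the G-SAMPLE idea, not the mission algorithm itself."""
    num = sum(f * a for f, a in zip(forces, accels))
    den = sum(a * a for a in accels)
    total_mass = num / den  # least-squares slope of F versus a
    return total_mass - dry_mass
```

In the flight setting, noise in the force sensors and thrust-profile errors propagate into this slope, which is why the paper reports a confidence bound (e.g. ±110 g at 95%) rather than a point estimate alone.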
Jeffrey H. Gove
2003-01-01
Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
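Because size-biased sampling draws units with probability proportional to their size, naive averages overestimate the population mean; weighting each observation by the inverse of its size undoes the bias. A minimal sketch of that standard correction (the harmonic-mean form for length-biased data), offered as a generic illustration rather than any specific forestry estimator:

```python
def size_biased_mean(sizes):
    """Estimate the true population mean size from a size-biased
    sample in which each unit's inclusion probability is proportional
    to its size.  Weighting each unit by 1/size and normalizing gives
    the harmonic-mean correction n / sum(1/x), which undoes the
    length bias.  Generic sketch of inverse-probability weighting."""
    n = len(sizes)
    return n / sum(1.0 / x for x in sizes)
```

The same inverse-probability weighting generalizes to area-biased sampling (weights proportional to 1/x²) and underlies design-unbiased estimation in probability-proportional-to-size forest inventories.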
Wells, Beth; Shaw, Hannah; Innocent, Giles; Guido, Stefano; Hotchkiss, Emily; Parigi, Maria; Opsteegh, Marieke; Green, James; Gillespie, Simon; Innes, Elisabeth A; Katzer, Frank
2015-12-15
Waterborne transmission of Toxoplasma gondii is a potential public health risk, and there are currently no agreed optimised methods for the recovery, processing and detection of T. gondii oocysts in water samples. In this study, modified methods of T. gondii oocyst recovery and DNA extraction were applied to 1427 samples collected from 147 public water supplies throughout Scotland. T. gondii DNA was detected, using real-time PCR (qPCR) targeting the 529 bp repeat element, in 8.79% of interpretable samples (124 out of 1411 samples). The samples positive for T. gondii DNA originated from a third of the sampled water sources. The samples positive by qPCR, together with some of the negative samples, were reanalysed using ITS1 nested PCR (nPCR) and the results compared. The 529 bp qPCR was the more sensitive technique, and a full analysis of assay performance, by Bayesian analysis using a Markov chain Monte Carlo method, demonstrated the efficacy of this method for the detection of T. gondii in water samples.
Multilattice sampling strategies for region of interest dynamic MRI.
Rilling, Gabriel; Tao, Yuehui; Marshall, Ian; Davies, Mike E
2013-08-01
A multilattice sampling approach is proposed for dynamic MRI with Cartesian trajectories. It relies on the use of sampling patterns composed of several different lattices and exploits an image model where only some parts of the image are dynamic, whereas the rest is assumed static. Given the parameters of such an image model, the methodology followed for the design of a multilattice sampling pattern adapted to the model is described. The multilattice approach is compared to single-lattice sampling, as used by traditional acceleration methods such as UNFOLD (UNaliasing by Fourier-Encoding the Overlaps using the temporal Dimension) or k-t BLAST, and random sampling used by modern compressed sensing-based methods. On the considered image model, it allows more flexibility and higher accelerations than lattice sampling and better performance than random sampling. The method is illustrated on a phase-contrast carotid blood velocity mapping MR experiment. Combining the multilattice approach with the KEYHOLE technique allows up to 12× acceleration factors. Simulation and in vivo undersampling results validate the method. Compared to lattice and random sampling, multilattice sampling provides significant gains at high acceleration factors.
Sampling strategies for estimating brook trout effective population size
Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher
2012-01-01
The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...