Amonette, James E.; Autrey, S. Thomas; Foster-Mills, Nancy S.
2006-02-14
Methods and apparatus for simultaneous or sequential, rapid analysis of multiple samples by photoacoustic spectroscopy are disclosed. Particularly, a photoacoustic spectroscopy sample array vessel including a vessel body having multiple sample cells connected thereto is disclosed. At least one acoustic detector is acoustically positioned near the sample cells. Methods for analyzing the multiple samples in the sample array vessels using photoacoustic spectroscopy are provided.
Amonette, James E.; Autrey, S. Thomas; Foster-Mills, Nancy S.; Green, David
2005-03-29
Methods and apparatus for analysis of multiple samples by photoacoustic spectroscopy are disclosed. Particularly, a photoacoustic spectroscopy sample array vessel including a vessel body having multiple sample cells connected thereto is disclosed. At least one acoustic detector is acoustically coupled with the vessel body. Methods for analyzing the multiple samples in the sample array vessels using photoacoustic spectroscopy are provided.
Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong
2015-12-26
This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB. PMID:26712765
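The 1/√N noise reduction this abstract relies on is the generic statistics of averaging N independent samples, not anything specific to the sensor. A stdlib-only sketch with illustrative numbers (the Gaussian noise model and values are assumptions, not the paper's circuit):

```python
import random
import statistics

def rms_noise(n_samples, n_trials=20000, sigma=1.0):
    """RMS error of the mean of n_samples i.i.d. Gaussian readings."""
    rng = random.Random(42)
    errors = [
        statistics.fmean(rng.gauss(0.0, sigma) for _ in range(n_samples))
        for _ in range(n_trials)
    ]
    return statistics.pstdev(errors)

single = rms_noise(1)
averaged = rms_noise(16)
# Averaging 16 samples should cut RMS noise by about sqrt(16) = 4x.
print(single / averaged)
```

The measured 848.3 μV → 270.4 μV reduction in the paper corresponds to a factor of about 3.1, consistent with this scaling for roughly ten effective samplings.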
THE SCREENING AND RANKING ALGORITHM FOR CHANGE-POINTS DETECTION IN MULTIPLE SAMPLES
Song, Chi; Min, Xiaoyi; Zhang, Heping
2016-01-01
The chromosome copy number variation (CNV) is the deviation of genomic regions from their normal copy number states, which may be associated with many human diseases. Current genetic studies usually collect hundreds to thousands of samples to study the association between CNV and diseases. CNVs can be called by detecting the change-points in mean for sequences of array-based intensity measurements. Although multiple samples are of interest, the majority of the available CNV calling methods are single-sample based. Only a few multiple-sample methods have been proposed, using scan statistics that are computationally intensive and designed toward either common or rare change-point detection. In this paper, we propose a novel multiple-sample method by adaptively combining the scan statistic of the screening and ranking algorithm (SaRa), which is computationally efficient and is able to detect both common and rare change-points. We prove that asymptotically this method can find the true change-points with almost certainty and show in theory that multiple-sample methods are superior to single-sample methods when shared change-points are of interest. Additionally, we report extensive simulation studies to examine the performance of our proposed method. Finally, using our proposed method as well as two competing approaches, we attempt to detect CNVs in the data from the Primary Open-Angle Glaucoma Genes and Environment study, and conclude that our method is faster and requires less information while our ability to detect the CNVs is comparable or better. PMID:28090239
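The screening step behind SaRa rests on a local diagnostic: score each position by the difference of sample means in windows on either side, then look for maxima. A toy single-sample illustration of that idea (a sketch of the general principle only, not the authors' implementation, which adds ranking and multi-sample combination):

```python
def local_diagnostic(y, h):
    """D(x) = |mean of the h points after x - mean of the h points before x|."""
    return {
        x: abs(sum(y[x:x + h]) / h - sum(y[x - h:x]) / h)
        for x in range(h, len(y) - h + 1)
    }

# Toy intensity sequence with a single mean shift at index 50.
y = [0.0] * 50 + [2.0] * 50
scores = local_diagnostic(y, h=10)
change_point = max(scores, key=scores.get)
print(change_point)  # → 50
```

Because each position needs only two window sums, a full scan is linear in the sequence length, which is the computational advantage the abstract highlights over heavier scan statistics.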
Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.
2008-01-01
Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species’ distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species’ use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.
Synchronizing data from irregularly sampled sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uluyol, Onder
A system and method include receiving a set of sampled measurements for each of multiple sensors, wherein the sampled measurements are at irregular intervals or different rates, re-sampling the sampled measurements of each of the multiple sensors at a higher rate than one of the sensor's set of sampled measurements, and synchronizing the sampled measurements of each of the multiple sensors.
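The claimed method amounts to interpolating each sensor's irregular samples onto a common, faster timebase. A minimal linear-interpolation sketch (the sensor values and grid are made up for illustration; this is not the patented system):

```python
def resample_linear(times, values, new_times):
    """Linearly interpolate an irregularly sampled signal onto new_times."""
    out = []
    for t in new_times:
        if t <= times[0]:
            out.append(values[0])      # hold first value before the record
        elif t >= times[-1]:
            out.append(values[-1])     # hold last value after the record
        else:
            i = next(k for k in range(1, len(times)) if times[k] >= t)
            t0, t1 = times[i - 1], times[i]
            v0, v1 = values[i - 1], values[i]
            out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
    return out

# Two sensors sampled at irregular, different instants (values chosen so
# sensor A equals t and sensor B equals 2*t, making the result checkable).
a_t, a_v = [0.0, 0.7, 1.9, 3.0], [0.0, 0.7, 1.9, 3.0]
b_t, b_v = [0.0, 1.2, 2.5, 3.0], [0.0, 2.4, 5.0, 6.0]
# Common timebase faster than either sensor's native rate:
grid = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
a_sync = resample_linear(a_t, a_v, grid)
b_sync = resample_linear(b_t, b_v, grid)
print(a_sync)
print(b_sync)
```

After resampling, both sensors share one time axis, so per-instant comparison or fusion becomes a simple element-wise operation.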
Yeung, Edward S.; Gong, Xiaoyi
2004-09-07
The present invention provides a method of analyzing multiple samples simultaneously by absorption detection. The method comprises: (i) providing a planar array of multiple containers, each of which contains a sample comprising at least one absorbing species, (ii) irradiating the planar array of multiple containers with a light source and (iii) detecting absorption of light with a detection means that is in line with the light source at a distance of at least about 10 times a cross-sectional distance of a container in the planar array of multiple containers. The absorption of light by a sample indicates the presence of an absorbing species in it. The method can further comprise: (iv) measuring the amount of absorption of light detected in (iii), indicating the amount of the absorbing species in the sample. Also provided by the present invention is a system for use in the above method. The system comprises: (i) a light source comprising or consisting essentially of at least one wavelength of light, the absorption of which is to be detected, (ii) a planar array of multiple containers, and (iii) a detection means that is in line with the light source and is positioned in line with and parallel to the planar array of multiple containers at a distance of at least about 10 times a cross-sectional distance of a container.
NASA Technical Reports Server (NTRS)
Deepak, A.; Fluellen, A.
1978-01-01
An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
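The Monte Carlo side of the comparison is easy to reproduce for the two-variable example: average the integrand at uniformly random points in the unit square. A generic sketch (random sampling only; the systematic Conroy point set is not reproduced here):

```python
import random

def mc_integrate_2d(f, n, seed=0):
    """Monte Carlo estimate of the integral of f over the unit square."""
    rng = random.Random(seed)
    return sum(f(rng.random(), rng.random()) for _ in range(n)) / n

# The integral of x*y over [0,1]^2 is exactly 1/4; the Monte Carlo
# estimate converges to it at the stochastic rate of 1/sqrt(n).
estimate = mc_integrate_2d(lambda x, y: x * y, 100_000)
print(estimate)
```

Replacing the random points with a systematic, symmetric point set is precisely the change the Conroy method makes; the evaluation loop stays the same.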
Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min
2016-07-22
One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including methods of mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods differ in theoretical bases, means of measurement, and causes of measurement errors. Here, we present a set of porosities measured in Berea Sandstone samples by multiple methods, in particular with the adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlation among the different methods, suggesting that these methods are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when accompanying merits such as experimental convenience are taken into account. PMID:27445105
Estimating the mass variance in neutron multiplicity counting-A comparison of approaches
NASA Astrophysics Data System (ADS)
Dubi, C.; Croft, S.; Favalli, A.; Ocherashvili, A.; Pedersen, B.
2017-12-01
In the standard practice of neutron multiplicity counting (NMC), the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
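Of the three uncertainty methods compared, the bootstrap is the most generic: resample the measured cycles with replacement and take the spread of the re-computed estimate as the standard error. A schematic stdlib example with a toy estimator (the sample mean and the synthetic "cycle" values are assumptions for illustration, not the NMC mass equations):

```python
import random
import statistics

def bootstrap_std(data, estimator, n_boot=2000, seed=1):
    """Bootstrap estimate of the standard error of estimator(data)."""
    rng = random.Random(seed)
    replicates = [
        estimator([rng.choice(data) for _ in data])  # resample with replacement
        for _ in range(n_boot)
    ]
    return statistics.pstdev(replicates)

# Toy "cycle data": per-cycle counts; the estimator here is the sample mean.
gen = random.Random(7)
cycles = [gen.gauss(100.0, 10.0) for _ in range(200)]
se = bootstrap_std(cycles, statistics.fmean)
# The classical standard error sigma/sqrt(n) = 10/sqrt(200) is about 0.71;
# the bootstrap value should land near it.
print(se)
```

For a nonlinear estimator such as the mass derived from factorial moments, the same resampling loop applies unchanged, which is why the bootstrap is a useful cross-check on the analytic variance propagation.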
Characterizing lentic freshwater fish assemblages using multiple sampling methods
Fischer, Jesse R.; Quist, Michael C.
2014-01-01
Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research focused on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on optimal seasons to use gears when targeting multiple species. The evaluation of species richness and the number of individuals sampled using multiple gear combinations demonstrated no appreciable benefit over relatively few gears (e.g., to four) used in optimal seasons. Specifically, over 90% of the species encountered with all gear types and season combinations (N = 19) from six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative to the monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.
Zhang, L; Liu, X J
2016-06-03
With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, the existing expression estimation methods usually deal with each RNA-seq sample individually, ignoring the fact that read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parametric model to capture the general tendency of non-uniform read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse specificity between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples and produced more accurate isoform expression estimations, and thus more meaningful biological interpretations.
NASA Astrophysics Data System (ADS)
Liu, Xiaodong
2017-08-01
A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very easy and simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with the multiple, multiscale case, even when the different components are close to each other.
Two enzyme-linked immunosorbent assay (ELISA) methods were evaluated for the determination of 3,5,6-trichloro-2-pyridinol (3,5,6-TCP) in multiple sample media (dust, soil, food, and urine). The dust and soil samples were analyzed by the RaPID (TM) commercial immunoassay testing ...
The multiple imputation method: a case study involving secondary data analysis.
Walani, Salimah R; Cleland, Charles M
2015-05-01
To illustrate, with the example of a secondary data analysis study, the use of the multiple imputation method to replace missing data. Most large public datasets have missing data, which need to be handled by researchers conducting secondary data analysis studies. Multiple imputation is a technique widely used to replace missing values while preserving the sample size and sampling variability of the data. The 2004 National Sample Survey of Registered Nurses. The authors created a model to impute missing values using the chained equation method. They used imputation diagnostics procedures and conducted regression analysis of imputed data to determine the differences between the log hourly wages of internationally educated and US-educated registered nurses. The authors used multiple imputation procedures to replace missing values in a large dataset with 29,059 observations. Five multiply imputed datasets were created. Imputation diagnostics using time series and density plots showed that imputation was successful. The authors also present an example of the use of multiply imputed datasets to conduct regression analysis to answer a substantive research question. Multiple imputation is a powerful technique for imputing missing values in large datasets while preserving the sample size and variance of the data. Even though the chained equation method involves complex statistical computations, recent innovations in software and computation have made it possible for researchers to conduct this technique on large datasets. The authors recommend that nurse researchers use multiple imputation methods for handling missing data to improve the statistical power and external validity of their studies.
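The step such analyses leave implicit is how results from the M imputed datasets are pooled; the standard recipe is Rubin's rules, which combine within- and between-imputation variance. A generic sketch with made-up estimates (the numbers are illustrative, not from the survey; this is not the authors' chained-equation model):

```python
import statistics

def rubins_rules(estimates, variances):
    """Pool M point estimates and their variances across imputed datasets."""
    m = len(estimates)
    pooled = statistics.fmean(estimates)        # Q-bar: average estimate
    within = statistics.fmean(variances)        # W-bar: average within variance
    between = statistics.variance(estimates)    # B: variance across imputations
    total_var = within + (1 + 1 / m) * between  # T: total pooled variance
    return pooled, total_var

# Suppose five imputed datasets gave these (estimate, variance) pairs:
ests = [1.02, 0.98, 1.05, 1.00, 0.95]
vars_ = [0.04, 0.05, 0.04, 0.05, 0.04]
pooled, total = rubins_rules(ests, vars_)
print(pooled, total)
```

The (1 + 1/m) factor inflates the between-imputation component to account for using a finite number of imputations, which is why the pooled variance exceeds a naive average.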
Data-optimized source modeling with the Backwards Liouville Test–Kinetic method
Woodroffe, J. R.; Brito, T. V.; Jordanova, V. K.; ...
2017-09-14
Sample size determination for logistic regression on a logit-normal distribution.
Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance
2017-06-01
Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution, which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
Effects of electrofishing gear type on spatial and temporal variability in fish community sampling
Meador, M.R.; McIntyre, J.P.
2003-01-01
Fish community data collected from 24 major river basins between 1993 and 1998 as part of the U.S. Geological Survey's National Water-Quality Assessment Program were analyzed to assess multiple-reach (three consecutive reaches) and multiple-year (three consecutive years) variability in samples collected at a site. Variability was assessed using the coefficient of variation (CV; SD/mean) of species richness, the Jaccard index (JI), and the percent similarity index (PSI). Data were categorized by three electrofishing sample collection methods: backpack, towed barge, and boat. Overall, multiple-reach CV values were significantly lower than those for multiple years, whereas multiple-reach JI and PSI values were significantly greater than those for multiple years. Multiple-reach and multiple-year CV values did not vary significantly among electrofishing methods, although JI and PSI values were significantly greatest for backpack electrofishing across multiple reaches and multiple years. The absolute difference between mean species richness for multiple-reach samples and mean species richness for multiple-year samples was 0.8 species (9.5% of total species richness) for backpack samples, 1.7 species (10.1%) for towed-barge samples, and 4.5 species (24.4%) for boat-collected samples. Review of boat-collected fish samples indicated that representatives of four taxonomic families - Catostomidae, Centrarchidae, Cyprinidae, and Ictaluridae - were collected at all sites. Of these, catostomids exhibited greater interannual variability than centrarchids, cyprinids, or ictalurids. Caution should be exercised when combining boat-collected fish community data from different years because of relatively high interannual variability, which is primarily due to certain relatively mobile species. Such variability may obscure longer-term trends.
Evaluating Composite Sampling Methods of Bacillus Spores at Low Concentrations
Hess, Becky M.; Amidan, Brett G.; Anderson, Kevin K.; Hutchison, Janine R.
2016-01-01
Restoring all facility operations after the 2001 Amerithrax attacks took years to complete, highlighting the need to reduce remediation time. Some of the most time-intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite (SM-SPC): a single cellulose sponge samples multiple coupons with a single pass across each coupon; 2) single medium multi-pass composite (SM-MPC): a single cellulose sponge samples multiple coupons with multiple passes across each coupon; and 3) multi-medium post-sample composite (MM-MPC): a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm²). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted dry wallboard) and three grime-coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p < 0.0001) and coupon material (p = 0.0006). Recovery efficiency (RE) was higher overall using the MM-MPC method compared to the SM-SPC and SM-MPC methods. RE with the MM-MPC method for the concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, dry wall, and stainless steel for clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but lower for ceramic tile. These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event on clean or dirty surfaces. PMID:27736999
Evaluating Composite Sampling Methods of Bacillus Spores at Low Concentrations.
Hess, Becky M; Amidan, Brett G; Anderson, Kevin K; Hutchison, Janine R
2016-01-01
Restoring all facility operations after the 2001 Amerithrax attacks took years to complete, highlighting the need to reduce remediation time. Some of the most time intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite (SM-SPC): a single cellulose sponge samples multiple coupons with a single pass across each coupon; 2) single medium multi-pass composite: a single cellulose sponge samples multiple coupons with multiple passes across each coupon (SM-MPC); and 3) multi-medium post-sample composite (MM-MPC): a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm2). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted dry wallboard) and three grime coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p< 0.0001) and coupon material (p = 0.0006). Recovery efficiency (RE) was higher overall using the MM-MPC method compared to the SM-SPC and SM-MPC methods. RE with the MM-MPC method for concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, dry wall, and stainless steel for clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but lower for ceramic tile. 
These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event of clean or dirty surfaces.
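For concreteness, recovery efficiency in spore-recovery studies of this kind is conventionally the percentage of inoculated CFU that are recovered; a minimal sketch (function names are illustrative, not from the paper):

```python
# Hypothetical helper: percent recovery efficiency (RE), recovered CFU
# relative to inoculated CFU.
def recovery_efficiency(recovered_cfu, inoculated_cfu):
    if inoculated_cfu <= 0:
        raise ValueError("inoculated CFU must be positive")
    return 100.0 * recovered_cfu / inoculated_cfu

# For a composite, the total recovered count is compared against the total
# inoculated across every coupon contributing to the composite.
def composite_re(recovered_total, cfu_per_coupon, n_coupons):
    return recovery_efficiency(recovered_total, cfu_per_coupon * n_coupons)
```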
Two enzyme-linked immunosorbent assay (ELISA) methods were evaluated for the determination of 3,5,6-trichloro-2-pyridinol (3,5,6-TCP) in multiple sample media (dust, soil, food, and urine). The dust and soil samples were analyzed by a commercial RaPID immunoassay testing kit. ...
Brown, Angus M
2010-04-01
The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns F, the test statistic. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. 2009 Elsevier Ireland Ltd. All rights reserved.
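As an illustration of the computations such a template automates (written from the standard textbook formulas, not the paper's spreadsheet), a one-way ANOVA F statistic and pooled-variance pairwise differences can be computed as:

```python
from itertools import combinations

# Illustrative one-way ANOVA: F is the ratio of between- to within-group
# mean squares, and MSW doubles as the pooled variance for pairwise work.
def one_way_anova_F(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k)), ssw / (n - k)

# Pairwise mean differences with pooled standard errors; the 95% interval
# is difference +/- critical value * SE, where the critical value comes
# from the chosen multiple-comparison method (Tukey, Bonferroni-t, etc.).
def pairwise_differences(groups):
    _, msw = one_way_anova_F(groups)
    means = [sum(g) / len(g) for g in groups]
    return [(i, j, means[i] - means[j],
             (msw * (1 / len(groups[i]) + 1 / len(groups[j]))) ** 0.5)
            for i, j in combinations(range(len(groups)), 2)]
```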
Generating Multiple Imputations for Matrix Sampling Data Analyzed with Item Response Models.
ERIC Educational Resources Information Center
Thomas, Neal; Gan, Nianci
1997-01-01
Describes and assesses missing data methods currently used to analyze data from matrix sampling designs implemented by the National Assessment of Educational Progress. Several improved methods are developed, and these models are evaluated using an EM algorithm to obtain maximum likelihood estimates followed by multiple imputation of complete data…
Herath, Damayanthi; Tang, Sen-Lin; Tandon, Kshitij; Ackland, David; Halgamuge, Saman Kumara
2017-12-28
In metagenomics, the separation of nucleotide sequences belonging to an individual or closely matched populations is termed binning. Binning helps the evaluation of underlying microbial population structure as well as the recovery of individual genomes from a sample of uncultivable microbial organisms. Both supervised and unsupervised learning methods have been employed in binning; however, characterizing a metagenomic sample containing multiple strains remains a significant challenge. In this study, we designed and implemented a new workflow, Coverage and composition based binning of Metagenomes (CoMet), for binning contigs in a single metagenomic sample. CoMet utilizes coverage values and the compositional features of metagenomic contigs. The binning strategy in CoMet includes the initial grouping of contigs in guanine-cytosine (GC) content-coverage space and refinement of bins in tetranucleotide frequency space in a purely unsupervised manner. With CoMet, the clustering algorithm DBSCAN is employed for binning contigs. The performance of CoMet was compared against four existing approaches for binning a single metagenomic sample, MaxBin, Metawatt, MyCC (default), and MyCC (coverage), using multiple datasets including a sample comprised of multiple strains. Binning methods based on both compositional features and coverages of contigs performed better than the method based only on compositional features of contigs. CoMet yielded higher or comparable precision in comparison to the existing binning methods on benchmark datasets of varying complexities. MyCC (coverage) ranked highest in F1-score; however, CoMet outperformed MyCC (coverage) on the dataset containing multiple strains. Furthermore, CoMet recovered contigs of more species and was 18 to 39% higher in precision than the compared existing methods in discriminating species from the sample of multiple strains.
CoMet also resulted in higher precision than MyCC (default) and MyCC (coverage) on a real metagenome. The approach proposed with CoMet improves the precision of binning while characterizing more species in a single metagenomic sample and in a sample containing multiple strains. The F1-scores obtained from different binning strategies vary with different datasets; however, CoMet yields the highest F1-score with a sample comprised of multiple strains.
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
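The multiple importance sampling combination referenced here can be sketched generically (an illustration of the balance heuristic, not the paper's renderer; it assumes one sample per strategy):

```python
# Generic multiple importance sampling (MIS) sketch: each sample carries the
# integrand value, the densities of every sampling strategy at that point,
# and the index of the strategy that drew it.  With one sample per strategy,
# the balance heuristic weight reduces to p_i / sum_j p_j.
def balance_heuristic(pdfs, i):
    return pdfs[i] / sum(pdfs)

def mis_estimate(samples):
    total = 0.0
    for f_value, pdfs, i in samples:
        w = balance_heuristic(pdfs, i)   # down-weights poorly matched strategies
        total += w * f_value / pdfs[i]
    return total
```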
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, William H.
2017-09-15
The objectives of this presentation are to describe the method that the IAEA uses to determine a sampling plan for nuclear material measurements; define the terms detection probability and significant quantity; list the three nuclear material measurement types; describe the sampling method applied to an item facility; and describe multiple-method sampling.
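A standard attributes-sampling relation often used in safeguards planning (an assumption on our part; the presentation's exact formulas are not reproduced) links sample size to detection probability:

```python
import math

# Drawing n of N items detects at least one of d falsified items with
# probability 1 - C(N-d, n) / C(N, n); math.comb returns 0 when k > n,
# so the edge cases fall out correctly.
def detection_probability(N, d, n):
    return 1.0 - math.comb(N - d, n) / math.comb(N, n)

def required_sample_size(N, d, goal):
    # Smallest n whose detection probability reaches the stated goal.
    for n in range(1, N + 1):
        if detection_probability(N, d, n) >= goal:
            return n
    return N
```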
Salmonella testing of pooled pre-enrichment broth cultures for screening multiple food samples.
Price, W R; Olsen, R A; Hunter, J E
1972-04-01
A method has been described for testing multiple food samples for Salmonella without loss in sensitivity. The method pools multiple pre-enrichment broth cultures into single enrichment broths. The subsequent stages of the Salmonella analysis are not altered. The method was found applicable to several dry food materials including nonfat dry milk, dried egg albumin, cocoa, cottonseed flour, wheat flour, and shredded coconut. As many as 25 pre-enrichment broth cultures were pooled without apparent loss in the sensitivity of Salmonella detection as compared to individual sample analysis. The procedure offers a simple, yet effective, way to increase sample capacity in the Salmonella testing of foods, particularly where a large proportion of samples ordinarily is negative. It also permits small portions of pre-enrichment broth cultures to be retained for subsequent individual analysis if positive tests are found. Salmonella testing of pooled pre-enrichment broths provides increased consumer protection for a given amount of analytical effort as compared to individual sample analysis.
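The capacity gain the abstract describes can be shown with back-of-the-envelope arithmetic (ours, not the paper's): with prevalence p and pools of k samples, one pooled analysis covers k samples, and only a positive pool triggers k individual retests from the retained portions.

```python
# Dorfman-style pooled screening: expected number of analyses per sample.
def expected_tests_per_sample(p, k):
    p_pool_positive = 1.0 - (1.0 - p) ** k
    return 1.0 / k + p_pool_positive
```

At a 1% positive rate, pools of 25 need roughly a quarter of an analysis per sample instead of one, which is why pooling pays off when most samples are negative.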
Experiential sampling in the study of multiple personality disorder.
Loewenstein, R J; Hamilton, J; Alagna, S; Reid, N; deVries, M
1987-01-01
The authors describe the application of experiential sampling, a new time-sampling method, to the assessment of rapid state changes in a woman with multiple personality disorder. She was signaled at random intervals during study periods and asked to provide information on alternate personality switches, amnesia, and mood state. The alternates displayed some characteristics that were as different as those occurring between separate individuals studied previously with this method. There were notable discrepancies between the self-report study data and information reported during therapy hours. The authors conclude that the phenomenology of multiple personality disorder is frequently more complex than is suspected early in the course of treatment.
Sampling and estimating recreational use.
Timothy G. Gregoire; Gregory J. Buhyoff
1999-01-01
Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.
Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin
2017-08-15
Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced: comparing each sample against an ¹⁸O-labeled reference sample created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility of protein identification results across samples. In the present study, a method combining the ¹⁸O-reference strategy with a strategy that decouples quantitation from identification was investigated with proportion-known protein mixtures. The results clearly demonstrated that the ¹⁸O-reference strategy had greater accuracy and reliability than previously used comparison methods based on transferred comparisons or label-free strategies. In the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated by retention time and accurate mass to identify differentially expressed proteins. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave good reproducibility in protein identification across multiple samples, and allowed peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.
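The matching step of the decoupling strategy can be sketched as follows (tolerance values and data layout are assumed for illustration, not taken from the paper):

```python
# LC-MS quantification features are matched to LC-MS/MS identifications
# by retention-time and accurate-mass (ppm) tolerance windows.
def match_features(features, identifications, rt_tol=0.5, ppm_tol=10.0):
    # features / identifications: lists of (retention time in min, m/z) pairs
    pairs = []
    for fi, (f_rt, f_mz) in enumerate(features):
        for ii, (i_rt, i_mz) in enumerate(identifications):
            ppm = abs(f_mz - i_mz) / i_mz * 1e6
            if abs(f_rt - i_rt) <= rt_tol and ppm <= ppm_tol:
                pairs.append((fi, ii))
    return pairs
```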
Sample size determination for equivalence assessment with multiple endpoints.
Sun, Anna; Dong, Xiaoyu; Tsong, Yi
2014-01-01
Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from the joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest sample size required for each endpoint. However, such a method ignores the correlation among endpoints. With the objective to reject all endpoints, and when the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted methods, and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
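The product rule for uncorrelated endpoints can be made concrete with a simplified power approximation (a textbook normal approximation under a zero true difference and known sigma, not the exact power function derived in the article):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Approximate TOST power for one endpoint with symmetric margin.
def tost_power(n, sigma, margin):
    z_alpha = 1.6449                    # upper 5% normal quantile
    se = sigma * math.sqrt(2.0 / n)     # SE of a two-group mean difference
    return max(0.0, 2.0 * norm_cdf(margin / se - z_alpha) - 1.0)

# Joint power over uncorrelated endpoints is the product of per-endpoint
# powers, so the required n exceeds the naive largest per-endpoint n.
def sample_size_multiple_endpoints(sigmas, margins, target=0.8):
    n = 2
    while True:
        joint = 1.0
        for s, m in zip(sigmas, margins):
            joint *= tost_power(n, s, m)
        if joint >= target:
            return n
        n += 1
```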
Psychological traits underlying different killing methods among Malaysian male murderers.
Kamaluddin, Mohammad Rahim; Shariff, Nadiah Syariani; Nurfarliza, Siti; Othman, Azizah; Ismail, Khaidzir H; Mat Saat, Geshina Ayu
2014-04-01
Murder is the most notorious crime, violating religious, social, and cultural norms. Examining the types and number of different killing methods used is pivotal in a murder case. However, the psychological traits underlying specific and multiple killing methods are still understudied. The present study attempts to fill this gap in knowledge by identifying the underlying psychological traits of different killing methods among Malaysian murderers. The study adopted an observational cross-sectional methodology using a guided self-administered questionnaire for data collection. The sampling frame consisted of 71 Malaysian male murderers from 11 Malaysian prisons who were selected using a purposive sampling method. The participants were asked to provide the types and number of different killing methods used to kill their respective victims. An independent-sample t-test was performed to establish the mean score difference in psychological traits between murderers who used single and multiple types of killing methods. Kruskal-Wallis tests were carried out to ascertain the psychological trait differences between specific types of killing methods. The results suggest that specific psychological traits underlie the type and number of different killing methods used during murder. The majority (88.7%) of murderers used a single method of killing. Multiple methods of killing were evident in 'premeditated' murder compared to 'passion' murder, and revenge was a common motive. Examples of multiple methods are combinations of stabbing and strangulation or slashing and physical force. An exception was premeditated murder committed with shooting, which was usually a single method, attributed to the high lethality of firearms. Shooting was also notable when the motive was financial gain or related to drug dealing. Murderers who used multiple killing methods were more aggressive and sadistic than those who used a single killing method.
Those who used multiple methods or slashing also displayed a higher level of minimisation traits. Despite its limitations, this study has shed some light on the psychological traits underlying different killing methods, which is useful in the field of criminology.
Field evaluation of personal sampling methods for multiple bioaerosols.
Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine
2015-01-01
Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.
Restricted random search method based on taboo search in the multiple minima problem
NASA Astrophysics Data System (ADS)
Hong, Seung Do; Jhon, Mu Shik
1997-03-01
The restricted random search method is proposed as a simple Monte Carlo sampling method for rapidly locating minima in the multiple minima problem. The method is based on taboo search, recently applied to continuous test functions. The concept of a taboo region is used instead of a taboo list, so that sampling of the region near an old configuration is restricted. The method is applied to 2-dimensional test functions and to argon clusters, and is found to be a practical and efficient way to locate near-global configurations of the test functions and the argon clusters.
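The taboo-region idea can be sketched on a 2-D multiple-minima test function (details such as the radius, trial count, and test function are assumed for illustration, not taken from the paper):

```python
import math
import random

# Rastrigin-style 2-D test function with many local minima (global minimum
# of 0 at the origin); an illustrative stand-in for the paper's functions.
def rastrigin(x):
    return 20.0 + sum(c * c - 10.0 * math.cos(2.0 * math.pi * c) for c in x)

# Restricted random search: candidate points falling inside a taboo ball
# around any previously kept configuration are rejected, steering the
# sampling toward regions not yet explored.
def restricted_random_search(f, bounds, n_trials=2000, r_taboo=0.3, seed=1):
    rng = random.Random(seed)
    kept, best = [], None
    for _ in range(n_trials):
        x = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        if any(sum((a - b) ** 2 for a, b in zip(x, t)) < r_taboo ** 2
               for t in kept):
            continue                    # inside a taboo region: reject
        kept.append(x)
        if best is None or f(x) < f(best):
            best = x
    return best
```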
Two-dimensional imaging via a narrowband MIMO radar system with two perpendicular linear arrays.
Wang, Dang-wei; Ma, Xiao-yan; Su, Yi
2010-05-01
This paper presents a system model and method for 2-D imaging via a narrowband multiple-input multiple-output (MIMO) radar system with two perpendicular linear arrays. The imaging formulation for our method is developed through Fourier integral processing, and the antenna array parameters, including the cross-range resolution, required size, and sampling interval, are also examined. Unlike the spatially sequential procedure of inverse synthetic aperture radar (ISAR) imaging, which samples the scattered echoes over multiple snapshot illuminations, the proposed method uses a spatially parallel procedure to sample the scattered echoes during a single snapshot illumination. Consequently, the complex motion compensation of ISAR imaging can be avoided. Moreover, in our array configuration, multiple narrowband spectrum-shared waveforms coded with orthogonal polyphase sequences are employed. The mainlobes of the compressed echoes from the different filter bands can be located in the same range bin, and thus the range alignment of classical ISAR imaging is not necessary. Numerical simulations based on synthetic data are provided to test the proposed method.
Evaluating Composite Sampling Methods of Bacillus spores at Low Concentrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hess, Becky M.; Amidan, Brett G.; Anderson, Kevin K.
Restoring facility operations after the 2001 Amerithrax attacks took over three months to complete, highlighting the need to reduce remediation time. The most time intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite: a single cellulose sponge samples multiple coupons; 2) single medium multi-pass composite: a single cellulose sponge is used to sample multiple coupons; and 3) multi-medium post-sample composite: a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm², respectively). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted wallboard) and three grime coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p-value < 0.0001) and coupon material (p-value = 0.0008). Recovery efficiency (RE) was higher overall using the post-sample composite (PSC) method compared to single medium composite from both clean and grime coated materials. RE with the PSC method for concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, painted wall board, and stainless steel for clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but significantly lower for ceramic tile.
These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event of clean or dirty surfaces.
Statistical inference from multiple iTRAQ experiments without using common reference standards.
Herbrich, Shelley M; Cole, Robert N; West, Keith P; Schulze, Kerry; Yager, James D; Groopman, John D; Christian, Parul; Wu, Lee; O'Meally, Robert N; May, Damon H; McIntosh, Martin W; Ruczinski, Ingo
2013-02-01
Isobaric tags for relative and absolute quantitation (iTRAQ) is a prominent mass spectrometry technology for protein identification and quantification that is capable of analyzing multiple samples in a single experiment. Frequently, iTRAQ experiments are carried out using an aliquot from a pool of all samples, or "masterpool", in one of the channels as a reference sample standard to estimate protein relative abundances in the biological samples and to combine abundance estimates from multiple experiments. In this manuscript, we show that using a masterpool is counterproductive. We obtain more precise estimates of protein relative abundance by using the available biological data instead of the masterpool and do not need to occupy a channel that could otherwise be used for another biological sample. In addition, we introduce a simple statistical method to associate proteomic data from multiple iTRAQ experiments with a numeric response and show that this approach is more powerful than the conventionally employed masterpool-based approach. We illustrate our methods using data from four replicate iTRAQ experiments on aliquots of the same pool of plasma samples and from a 406-sample project designed to identify plasma proteins that covary with nutrient concentrations in chronically undernourished children from South Asia.
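The variance argument against the masterpool can be illustrated with a toy simulation (assumptions ours, not the paper's data): routing a two-sample contrast through noisy masterpool channels in two experiments doubles the measurement variance relative to a direct within-experiment comparison.

```python
import random
import statistics

def simulate(n=20000, sigma=0.2, seed=7):
    rng = random.Random(seed)
    direct, via_pool = [], []
    for _ in range(n):
        a = rng.gauss(0.0, sigma)    # sample A channel noise
        b = rng.gauss(0.0, sigma)    # sample B channel noise
        m1 = rng.gauss(0.0, sigma)   # masterpool channel, experiment 1
        m2 = rng.gauss(0.0, sigma)   # masterpool channel, experiment 2
        direct.append(a - b)
        via_pool.append((a - m1) - (b - m2))
    return statistics.variance(direct), statistics.variance(via_pool)
```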
On Two-Stage Multiple Comparison Procedures When There Are Unequal Sample Sizes in the First Stage.
ERIC Educational Resources Information Center
Wilcox, Rand R.
1984-01-01
Two-stage multiple-comparison procedures give an exact solution to problems of power and Type I errors, but require equal sample sizes in the first stage. This paper suggests a method of evaluating the experimentwise Type I error probability when the first stage has unequal sample sizes. (Author/BW)
NASA Technical Reports Server (NTRS)
Pearson, Richard (Inventor); Lynch, Dana H. (Inventor); Gunter, William D. (Inventor)
1995-01-01
A method and apparatus for passing light bundles through a multiple pass sampling cell is disclosed. The multiple pass sampling cell includes a sampling chamber having first and second ends positioned along a longitudinal axis of the sampling cell. The sampling cell further includes an entrance opening, located adjacent the first end of the sampling cell at a first azimuthal angular position. The entrance opening permits a light bundle to pass into the sampling cell. The sampling cell also includes an exit opening at a second azimuthal angular position. The exit opening permits a light bundle to pass out of the sampling cell after the light bundle has followed a predetermined path.
Fluid sampling apparatus and method
Yeamans, David R.
1998-01-01
Incorporation of a bellows in a sampling syringe eliminates ingress of contaminants, permits replication of amounts and compression of multiple sample injections, and enables remote sampling for off-site analysis.
Andrade, G C R M; Monteiro, S H; Francisco, J G; Figueiredo, L A; Botelho, R G; Tornisielo, V L
2015-05-15
A quick and sensitive liquid chromatography-electrospray ionization tandem mass spectrometry method, using dynamic multiple reaction monitoring and a 1.8-μm particle size analytical column, was developed to determine 57 pesticides in tomato in a 13-min run. The QuEChERS (quick, easy, cheap, effective, rugged, and safe) method was used for sample preparation, and validation was carried out in compliance with EU SANCO guidelines. The method was applied to 58 tomato samples. More than 84% of the compounds investigated showed limits of detection equal to or lower than 5 mg kg⁻¹. A mild (<20%), medium (20-50%), and strong (>50%) matrix effect was observed for 72%, 25%, and 3% of the pesticides studied, respectively. Eighty-one percent of the pesticides showed recoveries between 70% and 120%. Twelve pesticides were detected in 35 samples, all below the maximum residue levels permitted by Brazilian legislation; however, 15 samples exceeded the maximum residue levels established by EU legislation for methamidophos, 10 for acephate, and four for bromuconazole. Copyright © 2014 Elsevier Ltd. All rights reserved.
Fluid sampling apparatus and method
Yeamans, D.R.
1998-02-03
Incorporation of a bellows in a sampling syringe eliminates ingress of contaminants, permits replication of amounts and compression of multiple sample injections, and enables remote sampling for off-site analysis. 3 figs.
Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.
2017-01-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. 
Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
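The cost of assuming methodological independence can be shown with a toy simulation (not the authors' occupancy or capture-recapture model): paired methods that share a latent availability event, such as the animal passing the survey station, jointly detect less often than independence would predict.

```python
import random

def joint_detection(p_shared, p1, p2, n=100000, seed=3):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        available = rng.random() < p_shared    # shared latent event
        d1 = available and rng.random() < p1   # e.g. camera detection
        d2 = available and rng.random() < p2   # e.g. snow-track detection
        hits += 1 if (d1 or d2) else 0
    return hits / n
```

With p_shared = 0.5 and p1 = p2 = 0.8, independence predicts 1 - (1 - 0.4)(1 - 0.4) = 0.64, but the simulated joint probability is about 0.48; this is the covariance the reformulated models are designed to absorb.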
A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.
Liu, Zhi; Xu, Shuqiong; Zhang, Yun; Chen, Chun Lung Philip
2014-11-01
This technical correspondence presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve more reliable and robust segmentation performance for a humanoid robot. The pixel-wise intensity, gradient, and C1 SMF features are extracted via the local homogeneity model and Gabor filter, and are used as inputs to the MFMK-SVM model. This provides multiple features of the samples for easier implementation and efficient computation of the MFMK-SVM model. A new clustering method, called the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) clustering algorithm, is proposed by integrating a type-2 fuzzy criterion into the clustering optimization process to improve the robustness and reliability of clustering results through iterative optimization. Furthermore, clustering validity is employed to select the training samples for learning the MFMK-SVM model. The MFMK-SVM scene segmentation method is able to take full advantage of the multiple features of the scene image and the ability of multiple kernels. Experiments on the BSDS dataset and real natural scene images demonstrate the superior performance of the proposed method.
ERIC Educational Resources Information Center
Urdan, Tim; Munoz, Chantico
2012-01-01
Multiple methods were used to examine the academic motivation and cultural identity of a sample of college undergraduates. The children of immigrant parents (CIPs, n = 52) and the children of non-immigrant parents (non-CIPs, n = 42) completed surveys assessing core cultural identity, valuing of cultural accomplishments, academic self-concept,…
Method and apparatus for determining nutrient stimulation of biological processes
Colwell, F.S.; Geesey, G.G.; Gillis, R.J.; Lehman, R.M.
1997-11-11
A method and apparatus are described for determining the nutrients to stimulate microorganisms in a particular environment. A representative sample of microorganisms from a particular environment are contacted with multiple support means wherein each support means has intimately associated with the surface of the support means a different nutrient composition for said microorganisms in said sample. The multiple support means is allowed to remain in contact with the microorganisms in the sample for a time period sufficient to measure differences in microorganism effects for the multiple support means. Microorganism effects for the multiple support means are then measured and compared. The invention is particularly adaptable to being conducted in situ. The additional steps of regulating nutrients added to the particular environment of microorganisms can enhance the desired results. Biological systems particularly suitable for this invention are bioremediation, biologically enhanced oil recovery, biological leaching of metals, and agricultural bioprocesses. 5 figs.
Method and apparatus for determining nutrient stimulation of biological processes
Colwell, Frederick S.; Geesey, Gill G.; Gillis, Richard J.; Lehman, R. Michael
1999-01-01
A method and apparatus for determining the nutrients to stimulate microorganisms in a particular environment. A representative sample of microorganisms from a particular environment are contacted with multiple support means wherein each support means has intimately associated with the surface of the support means a different nutrient composition for said microorganisms in said sample. The multiple support means is allowed to remain in contact with the microorganisms in the sample for a time period sufficient to measure differences in microorganism effects for the multiple support means. Microorganism effects for the multiple support means are then measured and compared. The invention is particularly adaptable to being conducted in situ. The additional steps of regulating nutrients added to the particular environment of microorganisms can enhance the desired results. Biological systems particularly suitable for this invention are bioremediation, biologically enhanced oil recovery, biological leaching of metals, and agricultural bioprocesses.
Method and apparatus for determining nutrient stimulation of biological processes
Colwell, F.S.; Geesey, G.G.; Gillis, R.J.; Lehman, R.M.
1999-07-13
A method and apparatus are disclosed for determining the nutrients that stimulate microorganisms in a particular environment. A representative sample of microorganisms from a particular environment is contacted with multiple support means, wherein each support means has intimately associated with its surface a different nutrient composition for the microorganisms in the sample. The multiple support means are allowed to remain in contact with the microorganisms in the sample for a time period sufficient to measure differences in microorganism effects among the multiple support means. Microorganism effects for the multiple support means are then measured and compared. The invention is particularly adaptable to being conducted in situ. The additional step of regulating nutrients added to the particular environment of the microorganisms can enhance the desired results. Biological systems particularly suitable for this invention are bioremediation, biologically enhanced oil recovery, biological leaching of metals, and agricultural bioprocesses. 5 figs.
Method and apparatus for determining nutrient stimulation of biological processes
Colwell, Frederick S.; Geesey, Gill G.; Gillis, Richard J.; Lehman, R. Michael
1997-01-01
A method and apparatus are disclosed for determining the nutrients that stimulate microorganisms in a particular environment. A representative sample of microorganisms from a particular environment is contacted with multiple support means, wherein each support means has intimately associated with its surface a different nutrient composition for said microorganisms in said sample. The multiple support means are allowed to remain in contact with the microorganisms in the sample for a time period sufficient to measure differences in microorganism effects among the multiple support means. Microorganism effects for the multiple support means are then measured and compared. The invention is particularly adaptable to being conducted in situ. The additional step of regulating nutrients added to the particular environment of the microorganisms can enhance the desired results. Biological systems particularly suitable for this invention are bioremediation, biologically enhanced oil recovery, biological leaching of metals, and agricultural bioprocesses.
Wu, Ci; Chen, Xi; Liu, Jianhui; Zhang, Xiaolin; Xue, Weifeng; Liang, Zhen; Liu, Mengyao; Cui, Yan; Huang, Daliang; Zhang, Lihua
2017-10-08
A novel method for the simultaneous detection of multiple kinds of allergenic proteins in infant food, using parallel reaction monitoring (PRM) mode with liquid chromatography-tandem mass spectrometry (LC-MS/MS), was established. In this method, unique peptides with good stability and high sensitivity were used to quantify the corresponding allergenic proteins, allowing multiple kinds of allergenic proteins to be inspected simultaneously with high sensitivity. The method was successfully applied to the detection of multiple allergenic proteins in infant food. For sample preparation, the in-situ filter-aided sample pretreatment (i-FASP) method gave higher protein extraction efficiency and better resistance to interference than the traditional acetone precipitation strategy. All allergenic proteins gave a good linear response with correlation coefficients (R²) ≥ 0.99, the widest concentration range spanned four orders of magnitude, and the lowest detection limit was 0.028 mg/L, better than previously reported values. Finally, the method was used to detect allergens in four imported infant food samples. These results demonstrate that this strategy provides a rapid and reliable analytical technique for allergen proteomics.
ERIC Educational Resources Information Center
Li, Yuan H.; Yang, Yu N.; Tompkins, Leroy J.; Modarresi, Shahpar
2005-01-01
The statistical technique, "Zero-One Linear Programming," that has successfully been used to create multiple tests with similar characteristics (e.g., item difficulties, test information and test specifications) in the area of educational measurement, was deemed to be a suitable method for creating multiple sets of matched samples to be…
Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi
2016-02-01
Several estimation methods for 24-h sodium excretion using spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion with different numbers of available spot urine samples. A total of 370 participants from throughout Japan independently collected multiple 24-h urine and spot urine samples. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using the two 24-h sodium excretion samples as reference: the 'simple mean method' estimated excretion by multiplying the sodium-creatinine ratio by predicted 24-h creatinine excretion, whereas the 'regression method' employed linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. Mean sodium excretion by the simple mean method with three spot urine samples was closest to that measured by 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with the number of spot urine samples, at 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. With three spot urine samples, this method yielded a higher CCC than the regression method (0.40). When only one spot urine sample was available for each participant, CCC was higher with the regression method (0.36). Thus, the simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion; when only one spot urine sample was available, the regression method was preferable.
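The 'simple mean method' and the concordance correlation coefficient used to compare the estimators are both straightforward to compute. A minimal sketch follows; function and variable names are my own, not the study's, and the CCC follows Lin's standard formula.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between estimates x and
    reference values y, using population (ddof=0) moments."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sx, sy = x.var(), y.var()
    sxy = ((x - mx) * (y - my)).mean()  # population covariance
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

def simple_mean_estimate(na_cr_ratios, pred_cr_24h):
    """'Simple mean method': average the spot-urine sodium/creatinine
    ratios, then scale by the predicted 24-h creatinine excretion."""
    return np.mean(na_cr_ratios) * pred_cr_24h
```

CCC equals 1 only for perfect agreement (identical means and values), which is why it is stricter than Pearson correlation for validating an estimator against 24-h collection.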
Perthold, Jan Walther; Oostenbrink, Chris
2018-05-17
Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure to smoothen energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves local energy minima of the combined end-states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune EDS Hamiltonian parameters. We demonstrate the proposed method with established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method to compute free-energy differences based on solid statistical mechanics.
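EDS reference states are commonly built as a log-sum-exp envelope of the end-state Hamiltonians. The sketch below assumes the standard form E_ref = -1/(s*beta) * ln(sum_i exp(-s*beta*(E_i - E_i^R))), with smoothness parameter s and per-state energy offsets E_i^R; the accelerated variant proposed in the paper modifies this construction, so this is only the baseline idea.

```python
import math

def eds_reference_energy(energies, offsets, beta=1.0, s=1.0):
    """Baseline EDS reference-state energy (log-sum-exp envelope of the
    offset end-state energies). Symbol names are assumptions, not the
    authors' code. Uses the log-sum-exp trick for numerical stability."""
    a = [-s * beta * (e - o) for e, o in zip(energies, offsets)]
    m = max(a)
    return -(m + math.log(sum(math.exp(x - m) for x in a))) / (s * beta)
```

By construction the envelope lies at or below the lowest offset end-state energy, which is what lets a single simulation visit all end-states.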
Sun, Zong-ke; Wu, Rong; Ding, Pei; Xue, Jin-Rong
2006-07-01
This study compared the rapid enzyme substrate technique with the multiple-tube fermentation technique for detecting coliform bacteria in water. Inoculated and real water samples were used to compare the equivalence and false-positive rates of the two methods. The results demonstrate that the enzyme substrate technique is equivalent to the multiple-tube fermentation technique (P = 0.059), and the false-positive rates of the two methods show no statistically significant difference. The enzyme substrate technique can therefore be used as a standard method for evaluating the microbiological safety of water.
Buican, T.N.
1993-05-04
An apparatus and method are described for measuring intensities at a plurality of wavelengths and lifetimes. A source of multiple-wavelength electromagnetic radiation is passed through a first interferometer modulated at a first frequency, the output thereof being directed into a sample to be investigated. The light emitted from the sample as a result of its interaction with the excitation radiation is directed into a second interferometer modulated at a second frequency, and the output is detected and analyzed. In this manner, excitation, emission, and lifetime information may be obtained for a multiplicity of fluorochromes in the sample.
Krishan, Kewal; Kanchan, Tanuj; Sharma, Abhilasha
2012-05-01
Estimation of stature is an important parameter in the identification of human remains in forensic examinations. The present study aimed to compare the reliability and accuracy of stature estimation, and to demonstrate the variability between estimated and actual stature, using the multiplication factor and regression analysis methods. The study is based on a sample of 246 subjects (123 males and 123 females) from North India aged between 17 and 20 years. Four anthropometric measurements (hand length, hand breadth, foot length and foot breadth), taken on the left side of each subject, were included in the study. Stature was measured using standard anthropometric techniques. Multiplication factors were calculated and linear regression models were derived for the estimation of stature from hand and foot dimensions. The derived multiplication factors and regression formulae were then applied to the hand and foot measurements in the study sample, and the estimated stature was compared with the actual stature to find the estimation error. The results indicate that the range of error in stature estimation by the regression analysis method is smaller than that of the multiplication factor method, confirming that regression analysis is the better of the two methods for stature estimation. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
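The two estimators being compared are easy to demonstrate. The sketch below uses synthetic measurements (the study's anthropometric data are not reproduced; the foot-length range and slope are hypothetical); it shows why least-squares regression cannot do worse in squared error than a multiplication factor, since the factor is just one particular linear fit.

```python
import numpy as np

# Hypothetical data: foot length in cm and a noisy linear stature relation.
rng = np.random.default_rng(0)
foot_len = rng.uniform(22.0, 28.0, 100)
stature = 6.0 * foot_len + rng.normal(0.0, 2.0, 100)

# Multiplication factor method: a single mean stature/measurement ratio.
mf = np.mean(stature / foot_len)
mf_est = mf * foot_len

# Regression method: least-squares fit stature = a * measurement + b.
a, b = np.polyfit(foot_len, stature, 1)
reg_est = a * foot_len + b

mf_rmse = np.sqrt(np.mean((mf_est - stature) ** 2))
reg_rmse = np.sqrt(np.mean((reg_est - stature) ** 2))
```

Because least squares minimizes the sum of squared residuals over all lines, including the multiplication-factor line through the origin, reg_rmse can never exceed mf_rmse, matching the study's conclusion in expectation.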
Eckner, Karl F.
1998-01-01
A total of 338 water samples, 261 drinking water samples and 77 bathing water samples, obtained for routine testing were analyzed in duplicate by Swedish standard methods using multiple-tube fermentation or membrane filtration and by the Colilert and/or Enterolert methods. Water samples came from a wide variety of sources in southern Sweden (Skåne). The Colilert method was found to be more sensitive than Swedish standard methods for detecting coliform bacteria and of equal sensitivity for detecting Escherichia coli when all drinking water samples were grouped together. Based on these results, Swedac, the Swedish laboratory accreditation body, approved for the first time in Sweden use of the Colilert method at this laboratory for the analysis of all water sources not falling under public water regulations (A-krav). The coliform detection study of bathing water yielded anomalous results due to confirmation difficulties. E. coli detection in bathing water was similar by both the Colilert and Swedish standard methods as was fecal streptococcus and enterococcus detection by both the Enterolert and Swedish standard methods. PMID:9687478
USDA-ARS?s Scientific Manuscript database
This study compared the utility of three sampling methods for ecological monitoring based on: interchangeability of data (rank correlations), precision (coefficient of variation), cost (minutes/transect), and potential of each method to generate multiple indicators. Species richness and foliar cover...
Qin, Zifei; Lin, Pei; Dai, Yi; Yao, Zhihong; Wang, Li; Yao, Xinsheng; Liu, Liyin; Chen, Haifeng
2016-05-01
Allii Macrostemonis Bulbus (named Xiebai in China) is a folk medicine used for the treatment of thoracic obstruction and cardialgia, as well as a food additive. However, there is no quantitative standard for Allii Macrostemonis Bulbus in the current edition of the Chinese Pharmacopeia; hence, a simultaneous assay of multiple components is urgently needed. In this study, chemometric methods were first applied to discover the components with significant fluctuation among multiple Allii Macrostemonis Bulbus samples based on optimized fingerprints. Meanwhile, the major components and the main absorbed components in rats were selected as representative components. Subsequently, a sensitive method was established for the simultaneous determination of 54 components (15 for quantification and 39 for semiquantification) by ultra high performance liquid chromatography coupled with quadrupole time-of-flight tandem mass spectrometry. Moreover, the validated method was successfully applied to evaluate the quality of multiple samples on the market, which revealed that Allii Macrostemonis Bulbus samples varied significantly and showed poor consistency. This work illustrates that the proposed approach can improve the quality control of Allii Macrostemonis Bulbus, and it also provides a feasible method for the quality evaluation of other traditional Chinese medicines. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Consensus Classification Using Non-Optimized Classifiers.
Brownfield, Brett; Lemos, Tony; Kalivas, John H
2018-04-03
Classifying samples into categories is a common problem in analytical chemistry and other fields. Classification is usually based on only one method, but numerous classifiers are available, some complex, such as neural networks, and others simple, such as k-nearest neighbors. Regardless, most classification schemes require optimization of one or more tuning parameters for the best classification accuracy, sensitivity, and specificity. A process not requiring exact selection of tuning parameter values would be useful. To improve classification, several ensemble approaches have been used in past work to combine classification results from multiple optimized single classifiers. The collection of classifications for a particular sample is then combined by a fusion process, such as majority vote, to form the final classification. Presented in this Article is a method to classify a sample by combining multiple classification methods without specifically classifying the sample by each method; that is, the classification methods are not optimized. The approach is demonstrated on three analytical data sets. The first is a beer authentication set with samples measured on five instruments, allowing fusion of multiple instruments in three ways. The second data set is composed of textile samples from three classes based on Raman spectra. This data set is used to demonstrate the ability to classify simultaneously with different data preprocessing strategies, thereby reducing the need to determine the ideal preprocessing method, a common prerequisite for accurate classification. The third data set contains three wine cultivars (three classes) measured on 13 unique chemical and physical variables. In all cases, fusion of nonoptimized classifiers improves classification. Also presented are atypical uses of Procrustes analysis and extended inverted signal correction (EISC) for distinguishing sample similarities to respective classes.
SAMPLING LARGE RIVERS FOR ALGAE, BENTHIC MACROINVERTEBRATES AND FISH
Multiple projects are currently underway to increase our understanding of the effects of different sampling methods and designs used for the biological assessment and monitoring of large (boatable) rivers. Studies include methods used to assess fish, benthic macroinvertebrates, ...
Li, Xiaofei; Wu, Yuhua; Li, Jun; Li, Yunjing; Long, Likun; Li, Feiwu; Wu, Gang
2015-01-05
The rapid increase in the number of genetically modified (GM) varieties has led to a demand for high-throughput methods to detect genetically modified organisms (GMOs). We describe a new dynamic array-based high throughput method to simultaneously detect 48 targets in 48 samples on a Fludigm system. The test targets included species-specific genes, common screening elements, most of the Chinese-approved GM events, and several unapproved events. The 48 TaqMan assays successfully amplified products from both single-event samples and complex samples with a GMO DNA amount of 0.05 ng, and displayed high specificity. To improve the sensitivity of detection, a preamplification step for 48 pooled targets was added to enrich the amount of template before performing dynamic chip assays. This dynamic chip-based method allowed the synchronous high-throughput detection of multiple targets in multiple samples. Thus, it represents an efficient, qualitative method for GMO multi-detection.
Li, Xiaofei; Wu, Yuhua; Li, Jun; Li, Yunjing; Long, Likun; Li, Feiwu; Wu, Gang
2015-01-01
The rapid increase in the number of genetically modified (GM) varieties has led to a demand for high-throughput methods to detect genetically modified organisms (GMOs). We describe a new dynamic array-based high throughput method to simultaneously detect 48 targets in 48 samples on a Fludigm system. The test targets included species-specific genes, common screening elements, most of the Chinese-approved GM events, and several unapproved events. The 48 TaqMan assays successfully amplified products from both single-event samples and complex samples with a GMO DNA amount of 0.05 ng, and displayed high specificity. To improve the sensitivity of detection, a preamplification step for 48 pooled targets was added to enrich the amount of template before performing dynamic chip assays. This dynamic chip-based method allowed the synchronous high-throughput detection of multiple targets in multiple samples. Thus, it represents an efficient, qualitative method for GMO multi-detection. PMID:25556930
An improved sampling method of complex network
NASA Astrophysics Data System (ADS)
Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing
2014-12-01
Subnet sampling is an important topic in complex network research, as the sampling method influences the structure and characteristics of the resulting subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method preserves the similarity between the sampled subnet and the original network in degree distribution, connectivity rate and average shortest path. The method is applicable in situations where prior knowledge about the degree distribution of the original network is insufficient.
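One ingredient of the RMSC idea, snowball expansion from randomly chosen seeds, can be sketched compactly. The function below is a plain snowball sampler on an adjacency dictionary; it is illustrative only and omits the Cohen-process refinements the paper combines it with, and all names are my own.

```python
import random
from collections import deque

def snowball_sample(adj, n_seeds=2, k=2, max_nodes=20, seed=0):
    """Snowball sampling: pick n_seeds nodes uniformly at random, then
    repeatedly expand each frontier node by up to k unsampled neighbours
    until max_nodes nodes are collected or the frontier is exhausted."""
    rng = random.Random(seed)
    sampled = set(rng.sample(list(adj), n_seeds))
    frontier = deque(sampled)
    while frontier and len(sampled) < max_nodes:
        v = frontier.popleft()
        nbrs = [u for u in adj[v] if u not in sampled]
        for u in rng.sample(nbrs, min(k, len(nbrs))):
            if len(sampled) >= max_nodes:
                break
            sampled.add(u)
            frontier.append(u)
    return sampled
```

Pure snowball sampling over-represents high-degree neighbourhoods, which is the bias that mixing in fresh random seeds (the "random multiple" part of RMSC) is meant to counter.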
Molecular dynamics based enhanced sampling of collective variables with very large time steps.
Chen, Pei-Yang; Tuckerman, Mark E
2018-01-14
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
Molecular dynamics based enhanced sampling of collective variables with very large time steps
NASA Astrophysics Data System (ADS)
Chen, Pei-Yang; Tuckerman, Mark E.
2018-01-01
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
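The standard multiple time-stepping the abstract contrasts against splits the forces by time scale. A minimal textbook r-RESPA step is sketched below: this is the conventional scheme whose step size resonance limits, not the resonance-free isokinetic method the paper builds on, and the function names are my own.

```python
def respa_step(x, v, dt, n_inner, f_slow, f_fast, m=1.0):
    """One reference-system propagator (r-RESPA) step in 1D: the slow
    (expensive) force is applied as half-kicks at the outer step dt,
    while the fast force is integrated with n_inner velocity-Verlet
    substeps of size dt/n_inner."""
    v += 0.5 * dt * f_slow(x) / m          # outer half-kick (slow force)
    h = dt / n_inner
    for _ in range(n_inner):               # inner velocity Verlet (fast force)
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m          # outer half-kick (slow force)
    return x, v
```

The outer step dt can be made much larger than a single-step integrator would allow, but only up to the resonance limit tied to the fast period, which is precisely the barrier the isokinetic constraint schemes remove.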
Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S
2017-10-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could substantially bias distribution estimates if not corrected for.
Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.
Huffman, Raegan L.
2002-01-01
Ground-water samples were collected in April 1999 at Naval Air Station Whidbey Island, Washington, with passive diffusion samplers and a submersible pump to compare concentrations of volatile organic compounds (VOCs) in water samples collected using the two sampling methods. Single diffusion samplers were installed in wells with 10-foot screened intervals, and multiple diffusion samplers were installed in wells with 20- to 40-foot screened intervals. The diffusion samplers were recovered after 20 days and the wells were then sampled using a submersible pump. VOC concentrations in the 10-foot screened wells in water samples collected with diffusion samplers closely matched concentrations in samples collected with the submersible pump. Analysis of VOC concentrations in samples collected from the 20- to 40-foot screened wells with multiple diffusion samplers indicated vertical concentration variation within the screened interval, whereas the analysis of VOC concentrations in samples collected with the submersible pump indicated mixing during pumping. The results obtained using the two sampling methods indicate that the samples collected with the diffusion samplers were comparable with and can be considerably less expensive than samples collected using a submersible pump.
Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E
2016-06-01
Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.
Zhou, Hanzhi; Elliott, Michael R.; Raghunathan, Trivellore E.
2017-01-01
Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in “Delta-V,” a key crash severity measure. PMID:29226161
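The core of the weighted finite-population Bayesian bootstrap is a weighted Pólya-urn draw that expands a weighted sample into a synthetic population. The sketch below is a deliberately simplified version (initial urn mass w_i - 1 per unit, incremented by one on each redraw, weights assumed > 1); the article's procedure uses a more refined increment and two-stage structure, and all names here are my own.

```python
import random

def weighted_fpbb_population(sample, weights, N, seed=0):
    """Simplified weighted Polya-urn expansion of a weighted sample into
    a synthetic population of size N. Each sampled unit starts in the
    population once; redraw probabilities start proportional to w_i - 1
    (assumes all weights > 1) and grow by 1 per redraw."""
    rng = random.Random(seed)
    mass = [w - 1.0 for w in weights]
    pop = list(sample)                  # the sampled units themselves
    while len(pop) < N:
        r, acc = rng.random() * sum(mass), 0.0
        for i, m in enumerate(mass):
            acc += m
            if r <= acc:
                pop.append(sample[i])
                mass[i] += 1.0          # Polya reinforcement
                break
    return pop
```

The payoff described in the abstract is that each synthetic population generated this way can be treated as a simple random sample at the imputation stage, so the imputation model never has to encode weights or clustering directly.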
Samusik, Nikolay; Wang, Xiaowei; Guan, Leying; Nolan, Garry P.
2017-01-01
Mass cytometry (CyTOF) has greatly expanded the capability of cytometry. It is now easy to generate multiple CyTOF samples in a single study, with each sample containing single-cell measurements of 50 markers for hundreds of thousands of cells. Current methods do not adequately address the issues involved in combining multiple samples for subpopulation discovery, and these issues can be quickly and dramatically amplified with an increasing number of samples. To overcome this limitation, we developed Partition-Assisted Clustering and Multiple Alignments of Networks (PAC-MAN) for the fast automatic identification of cell populations in CyTOF data, closely matching expert manual discovery, and for alignments between subpopulations across samples to define dataset-level cellular states. PAC-MAN is computationally efficient, allowing the management of very large CyTOF datasets, which are increasingly common in clinical and cancer studies that monitor various tissue samples for each subject. PMID:29281633
Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli
2017-07-01
The traditional Chinese medicine calamine is a mineral with an appearance similar to many other minerals, and investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated, so given the large number of calamine samples, a rapid identification method is needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for the rapid identification of various calamine samples, large quantities of samples, including crude products, counterfeits and processed products, were collected and correctly identified using physicochemical and powder X-ray diffraction methods. The NIR spectra of these samples were then analyzed by combining the multi-reference correlation coefficient (MRCC) method with the error back-propagation artificial neural network (BP-ANN) algorithm, so as to achieve qualitative identification of calamine samples. The accuracy rate of the model based on the NIR and MRCC methods was 85%; this model, which takes multiple factors into consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients against multiple references as the spectral feature data of the samples into a BP-ANN, a qualitative identification model was established whose accuracy rate increased to 95%. The MRCC method can thus be used as a NIR-based feature-extraction step in BP-ANN modeling.
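The MRCC step described above amounts to computing a correlation coefficient between a sample spectrum and each of several reference spectra and using those coefficients as the feature vector for a classifier. A minimal sketch follows; the function name and the use of Pearson correlation are assumptions, not the paper's code.

```python
import numpy as np

def mrcc_features(spectrum, references):
    """Multi-reference correlation coefficients: Pearson r between one
    sample spectrum and each reference spectrum. The resulting vector
    can be fed to a classifier such as a BP neural network."""
    return np.array([np.corrcoef(spectrum, ref)[0, 1] for ref in references])
```

This compresses a full spectrum into as many features as there are references, which is why pairing it with a small BP-ANN is practical even for large sample sets.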
Koren, Lee; Ng, Ella S M; Soma, Kiran K; Wynne-Edwards, Katherine E
2012-01-01
Blood samples from wild mammals and birds are often limited in volume, allowing researchers to quantify only one or two steroids from a single sample by immunoassays. In addition, wildlife serum or plasma samples are often lipemic, necessitating stringent sample preparation. Here, we validated sample preparation for simultaneous liquid chromatography--tandem mass spectrometry (LC-MS/MS) quantitation of cortisol, corticosterone, 11-deoxycortisol, dehydroepiandrosterone (DHEA), 17β-estradiol, progesterone, 17α-hydroxyprogesterone and testosterone from diverse mammalian (7 species) and avian (5 species) samples. Using 100 µL of serum or plasma, we quantified (signal-to-noise (S/N) ratio ≥ 10) 4-7 steroids depending on the species and sample, without derivatization. Steroids were extracted from serum or plasma using automated solid-phase extraction where samples were loaded onto C18 columns, washed with water and hexane, and then eluted with ethyl acetate. Quantitation by LC-MS/MS was done in positive ion, multiple reaction-monitoring (MRM) mode with an atmospheric pressure chemical ionization (APCI) source and heated nebulizer (500°C). Deuterated steroids served as internal standards and run time was 15 minutes. Extraction recoveries were 87-101% for the 8 analytes, and all intra- and inter-run CVs were ≤ 8.25%. This quantitation method yields good recoveries with variable lipid-content samples, avoids antibody cross-reactivity issues, and delivers results for multiple steroids. Thus, this method can enrich datasets by providing simultaneous quantitation of multiple steroids, and allow researchers to reimagine the hypotheses that could be tested with their volume-limited, lipemic, wildlife samples.
Multi-laboratory survey of qPCR enterococci analysis method performance
Quantitative polymerase chain reaction (qPCR) has become a frequently used technique for quantifying enterococci in recreational surface waters, but there are several methodological options. Here we evaluated how three method permutations (type of mastermix, sample extract dilution, and use of controls in results calculation) affect method reliability among multiple laboratories with respect to sample interference. Multiple samples from each of 22 sites representing an array of habitat types were analyzed using EPA Method 1611 and 1609 reagents with full-strength and five-fold diluted extracts. The presence of interference was assessed in three ways: using sample processing and PCR amplification controls; checking the consistency of results across extract dilutions; and measuring the relative recovery of target genes from enterococci spiked into water samples compared to control matrices, with acceptable recovery defined as 50 to 200%. Method 1609, which is based on an environmental mastermix, was found to be superior to Method 1611, which is based on a universal mastermix. Method 1611 had over a 40% control assay failure rate with undiluted extracts and a 6% failure rate with diluted extracts. Method 1609 failed in only 11% and 3% of undiluted and diluted extract analyses, respectively. Use of sample processing control assay results in the delta-delta Ct method for calculating relative target gene recoveries increased the number of acceptable recovery results. Delta-delta tended to bias recoveries fr
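As background, the delta-delta Ct recovery calculation referenced above can be sketched as follows; the function name, the Ct values, and the assumption of perfect amplification efficiency (2.0) are illustrative, not the EPA methods' exact implementation.

```python
import math

def ddct_recovery(ct_target_sample, ct_spc_sample,
                  ct_target_control, ct_spc_control,
                  efficiency=2.0):
    """Relative target-gene recovery (%) by the delta-delta Ct method.

    Ct values are qPCR cycle thresholds; "spc" is the sample
    processing control assay used to normalize each matrix. An
    amplification efficiency of 2.0 assumes perfect doubling per
    cycle (an illustrative simplification).
    """
    delta_sample = ct_target_sample - ct_spc_sample
    delta_control = ct_target_control - ct_spc_control
    return efficiency ** -(delta_sample - delta_control) * 100.0

# A spiked water sample whose target Ct lags the control matrix by
# one cycle (after SPC normalization) shows ~50% recovery.
print(round(ddct_recovery(24.0, 20.0, 23.0, 20.0), 1))  # 50.0
```

With the 50 to 200% acceptance window above, this example recovery of 50% would sit exactly on the lower acceptance boundary.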
Misiewicz-Krzeminska, Irena; Corchete, Luis Antonio; Rojas, Elizabeta A.; Martínez-López, Joaquín; García-Sanz, Ramón; Oriol, Albert; Bladé, Joan; Lahuerta, Juan-José; Miguel, Jesús San; Mateos, María-Victoria; Gutiérrez, Norma C.
2018-01-01
Protein analysis in bone marrow samples from patients with multiple myeloma has been limited by the low concentration of proteins obtained after CD138+ cell selection. A novel approach based on capillary nano-immunoassay could make it possible to quantify dozens of proteins from each myeloma sample in an automated manner. Here we present a method for the accurate and robust quantification of the expression of multiple proteins extracted from CD138-purified multiple myeloma samples frozen in RLT Plus buffer, which is commonly used for nucleic acid preservation and isolation. Additionally, the biological and clinical value of this analysis for a panel of 12 proteins essential to the pathogenesis of multiple myeloma was evaluated in 63 patients with newly diagnosed multiple myeloma. The analysis of the prognostic impact of CRBN/Cereblon and IKZF1/Ikaros mRNA/protein showed that only the protein levels were able to predict progression-free survival of patients; mRNA levels were not associated with prognosis. Interestingly, high levels of Cereblon and Ikaros proteins were associated with longer progression-free survival only in patients who received immunomodulatory drugs and not in those treated with other drugs. In conclusion, the capillary nano-immunoassay platform provides a novel opportunity for automated quantification of the expression of more than 20 proteins in CD138+ primary multiple myeloma samples. PMID:29545347
Misiewicz-Krzeminska, Irena; Corchete, Luis Antonio; Rojas, Elizabeta A; Martínez-López, Joaquín; García-Sanz, Ramón; Oriol, Albert; Bladé, Joan; Lahuerta, Juan-José; Miguel, Jesús San; Mateos, María-Victoria; Gutiérrez, Norma C
2018-05-01
Protein analysis in bone marrow samples from patients with multiple myeloma has been limited by the low concentration of proteins obtained after CD138+ cell selection. A novel approach based on capillary nano-immunoassay could make it possible to quantify dozens of proteins from each myeloma sample in an automated manner. Here we present a method for the accurate and robust quantification of the expression of multiple proteins extracted from CD138-purified multiple myeloma samples frozen in RLT Plus buffer, which is commonly used for nucleic acid preservation and isolation. Additionally, the biological and clinical value of this analysis for a panel of 12 proteins essential to the pathogenesis of multiple myeloma was evaluated in 63 patients with newly diagnosed multiple myeloma. The analysis of the prognostic impact of CRBN/Cereblon and IKZF1/Ikaros mRNA/protein showed that only the protein levels were able to predict progression-free survival of patients; mRNA levels were not associated with prognosis. Interestingly, high levels of Cereblon and Ikaros proteins were associated with longer progression-free survival only in patients who received immunomodulatory drugs and not in those treated with other drugs. In conclusion, the capillary nano-immunoassay platform provides a novel opportunity for automated quantification of the expression of more than 20 proteins in CD138+ primary multiple myeloma samples. Copyright © 2018 Ferrata Storti Foundation.
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
ERIC Educational Resources Information Center
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
Towards establishing a human fecal contamination index in microbial source tracking
There have been significant advances in development of PCR-based methods to detect source associated DNA sequences (markers), but method evaluation has focused on performance with individual challenge samples. Little attention has been given to integration of multiple samples fro...
NASA Astrophysics Data System (ADS)
Tang, Gao; Jiang, FanHuag; Li, JunFeng
2015-11-01
Near-Earth asteroids have attracted considerable interest, and developments in low-thrust propulsion technology make complex deep-space exploration missions possible. A mission from low-Earth orbit using a low-thrust electric propulsion system to rendezvous with a near-Earth asteroid and bring a sample back is investigated. By dividing the mission into five segments, the complex mission is solved separately, and different methods are used to find optimal trajectories for each segment. Multiple revolutions around the Earth and multiple Moon gravity assists are used to decrease the fuel consumption needed to escape from the Earth. To avoid possible numerical difficulties of indirect methods, a direct method that parameterizes the switching moment and direction of the thrust vector is proposed. To maximize the mass of the sample, optimal control theory and a homotopic approach are applied to find the optimal trajectory. Direct methods for finding the proper time to brake the spacecraft using a Moon gravity assist are also proposed. Practical techniques, including both direct and indirect methods, are investigated to optimize trajectories for the different segments, and they can be easily extended to other missions and more precise dynamic models.
Modified electrokinetic sample injection method in chromatography and electrophoresis analysis
Davidson, J. Courtney; Balch, Joseph W.
2001-01-01
A sample injection method for horizontally configured multiple chromatography or electrophoresis units, each containing a number of separation/analysis channels, that enables efficient introduction of analyte samples. This loading method, used in conjunction with horizontal microchannels, allows much reduced sample volumes and provides a means of sample stacking to greatly reduce the concentration of the sample. This reduction in the amount of sample can lead to great cost savings in sample preparation, particularly in massively parallel applications such as DNA sequencing. The essence of the method lies in the preparation of the input of the separation channel, the physical sample introduction, and the subsequent removal of excess material. By this method, sample volumes of 100 nanoliters to 2 microliters have been used successfully, compared to the typical 5 microliters of sample required by the prior separation/analysis method.
John F. Caratti
2006-01-01
The FIREMON Line Intercept (LI) method is used to assess changes in plant species cover for a macroplot. This method uses multiple line transects to sample within plot variation and quantify statistically valid changes in plant species cover and height over time. This method is suited for most forest and rangeland communities, but is especially useful for sampling...
NASA Astrophysics Data System (ADS)
Jain, Pranay; Sarma, Sanjay E.
2015-05-01
Milk is an emulsion of fat globules and casein micelles dispersed in an aqueous medium with dissolved lactose, whey proteins and minerals. Quantification of constituents in milk is important at various stages of the dairy supply chain for proper process control and quality assurance. In field-level applications, spectrophotometric analysis is an economical option due to the low cost of silicon photodetectors, which are sensitive to UV/Vis radiation at wavelengths between 300 and 1100 nm. Both absorption and scattering occur as incident UV/Vis radiation interacts with dissolved and dispersed constituents in milk. These effects can in turn be used to characterize the chemical and physical composition of a milk sample. However, in order to simplify analysis, most existing instruments require dilution of samples to avoid the effects of multiple scattering. The sample preparation steps are usually expensive, prone to human error and unsuitable for field-level and online analysis. This paper introduces a novel digital-imaging-based method for online spectrophotometric measurements on raw milk without any sample preparation. Multiple LEDs with different emission spectra are used as discrete light sources and a digital CMOS camera is used as an image sensor. The extinction characteristic of samples is derived from captured images. The dependence of multiple scattering on the power of incident radiation is exploited to quantify scattering. The method has been validated with experiments for its response to varying fat concentrations and fat globule sizes. Despite the presence of multiple scattering, the method is able to unequivocally quantify the extinction of incident radiation and relate it to the fat concentrations and globule sizes of samples.
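The extinction derived from captured images follows the familiar Beer-Lambert form; the sketch below, with illustrative mean pixel intensities, shows the basic calculation and is an assumption about the general approach rather than the authors' exact pipeline.

```python
import math

def extinction_from_images(sample_intensity, reference_intensity):
    """Extinction from mean image intensities: -log10 of the ratio of
    transmitted power (sample image) to incident power (reference
    image), i.e. the Beer-Lambert form. The intensities here are
    illustrative mean pixel values from a camera with and without
    the milk sample in the light path."""
    return -math.log10(sample_intensity / reference_intensity)

# 10% transmitted light corresponds to an extinction of 1.0
e = extinction_from_images(25.5, 255.0)
```

In practice the same calculation would be repeated per LED wavelength, and the scattering contribution separated out using its dependence on incident power as described above.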
Imaging complex objects using learning tomography
NASA Astrophysics Data System (ADS)
Lim, JooWon; Goy, Alexandre; Shoreh, Morteza Hasani; Unser, Michael; Psaltis, Demetri
2018-02-01
Optical diffraction tomography (ODT) can be described via the scattering process through an inhomogeneous medium. An inherent nonlinearity relates the scattering medium and the scattered field due to multiple scattering. Multiple scattering is often assumed to be negligible in weakly scattering media; this assumption becomes invalid as the sample gets more complex, resulting in distorted image reconstructions. Multiple scattering can be simulated using the beam propagation method (BPM) as the forward model of ODT, combined with an iterative reconstruction scheme. The iterative error-reduction scheme and the multi-layer structure of BPM are similar to neural networks; therefore we refer to our imaging method as learning tomography (LT). To fairly assess the performance of LT in imaging complex samples, we compared LT with the conventional iterative linear scheme using Mie theory, which provides the ground truth. We also demonstrate the capacity of LT to image complex samples using experimental data from a biological cell.
Embellishment of Student Leadership in Learning Multiplication at Primary Level
ERIC Educational Resources Information Center
Singaravelu, G.
2006-01-01
The present study examines the efficacy of the Student Leadership method in learning multiplication in mathematics at the primary level. A single-group experimental method was adopted for the study. Forty learners studying in Standard III at Panchayat Union Primary School, Muthupettai, in South Tamil Nadu, India, were selected as the sample for the study.…
Lin, Yu-Pin; Chu, Hone-Jay; Huang, Yu-Long; Tang, Chia-Hsi; Rouhani, Shahrokh
2011-06-01
This study develops a stratified conditional Latin hypercube sampling (scLHS) approach for multiple, remotely sensed, normalized difference vegetation index (NDVI) images. The objective is to sample, monitor, and delineate spatiotemporal landscape changes, including spatial heterogeneity and variability, in a given area. The scLHS approach, which is based on the variance quadtree technique (VQT) and the conditional Latin hypercube sampling (cLHS) method, selects samples in order to delineate landscape changes from multiple NDVI images. The images are then mapped for calibration and validation by using sequential Gaussian simulation (SGS) with the scLHS selected samples. Spatial statistical results indicate that in terms of their statistical distribution, spatial distribution, and spatial variation, the statistics and variograms of the scLHS samples resemble those of multiple NDVI images more closely than those of cLHS and VQT samples. Moreover, the accuracy of simulated NDVI images based on SGS with scLHS samples is significantly better than that of simulated NDVI images based on SGS with cLHS samples and VQT samples, respectively. The proposed approach efficiently monitors the spatial characteristics of landscape changes, including the statistics, spatial variability, and heterogeneity of NDVI images. In addition, SGS with the scLHS samples effectively reproduces spatial patterns and landscape changes in multiple NDVI images.
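As background for the sampling designs above, a plain (unconditioned) Latin hypercube sample on the unit hypercube can be sketched as below; scLHS additionally conditions on the NDVI data and stratifies via the variance quadtree, which this minimal sketch omits.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, seed=None):
    """Plain Latin hypercube sample on the unit hypercube: each
    variable's range is split into n_samples equal strata, one point
    is drawn per stratum, and the strata are permuted independently
    per variable so every one-dimensional margin is fully covered."""
    rng = np.random.default_rng(seed)
    # one jittered point per stratum, for every variable
    u = (np.arange(n_samples)[:, None]
         + rng.random((n_samples, n_vars))) / n_samples
    for j in range(n_vars):  # decouple the variables
        u[:, j] = u[rng.permutation(n_samples), j]
    return u

pts = latin_hypercube(10, 3, seed=0)
# each of the 3 columns has exactly one point in each decile
```

The conditioned variants (cLHS, scLHS) keep this marginal-coverage property while choosing which permutations best match the empirical distribution of the ancillary images.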
Yang, Yang; DeGruttola, Victor
2016-01-01
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients. PMID:22740584
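The standardization step described above (centering by group sample means, then whitening by group sample covariances) can be sketched as follows; the eigendecomposition-based inverse square root is one common choice, and the names and data are illustrative.

```python
import numpy as np

def standardized_residuals(group):
    """Center one group's data by its sample mean and whiten by the
    inverse square root of its sample covariance, so residuals from
    different groups share second moments when resampled jointly."""
    centered = group - group.mean(axis=0)
    cov = np.cov(group, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)  # cov is symmetric PSD
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    return centered @ inv_sqrt

rng = np.random.default_rng(0)
raw = rng.normal(size=(200, 3)) @ np.diag([1.0, 2.0, 3.0])
z = standardized_residuals(raw)
# z now has sample covariance equal to the identity matrix
```

Because every group's standardized residuals have identity sample covariance, they can be pooled and resampled even when the null hypothesis of equal covariances is false; the robust variant replaces the sample mean and covariance with robust estimates.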
Yang, Yang; DeGruttola, Victor
2012-06-22
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
ERIC Educational Resources Information Center
Bishop, Malachy; Chan, Fong; Rumrill, Phillip D., Jr.; Frain, Michael P.; Tansey, Timothy N.; Chiu, Chung-Yi; Strauser, David; Umeasiegbu, Veronica I.
2015-01-01
Purpose: To examine demographic, functional, and clinical multiple sclerosis (MS) variables affecting employment status in a national sample of adults with MS in the United States. Method: The sample included 4,142 working-age (20-65 years) Americans with MS (79.1% female) who participated in a national survey. The mean age of participants was…
Meng, Yuguang; Lei, Hao
2010-06-01
An efficient iterative gridding reconstruction method with correction of off-resonance artifacts was developed, especially tailored for multiple-shot non-Cartesian imaging. The novelty of the method lies in that the transformation matrix for gridding (T) is constructed as the convolution of two sparse matrices, of which the former is determined by the sampling interval and the spatial distribution of the off-resonance frequencies, and the latter by the sampling trajectory and the target grid in Cartesian space. The resulting T matrix is also sparse and can be solved efficiently with the iterative conjugate gradient algorithm. It was shown that, with the proposed method, the reconstruction speed in multiple-shot non-Cartesian imaging can be improved significantly while retaining high reconstruction fidelity. More importantly, the proposed method allows a tradeoff between the accuracy and the computation time of reconstruction, making it possible to customize the method for different applications. The performance of the proposed method was demonstrated by numerical simulation and multiple-shot spiral imaging of rat brain at 4.7 T. (c) 2010 Wiley-Liss, Inc.
Liu, Tian; Spincemaille, Pascal; de Rochefort, Ludovic; Kressler, Bryan; Wang, Yi
2009-01-01
Magnetic susceptibility differs among tissues based on their contents of iron, calcium, contrast agent, and other molecular compositions. Susceptibility modifies the magnetic field detected in the MR signal phase. The determination of an arbitrary susceptibility distribution from the induced field shifts is a challenging, ill-posed inverse problem. A method called "calculation of susceptibility through multiple orientation sampling" (COSMOS) is proposed to stabilize this inverse problem. The field created by the susceptibility distribution is sampled at multiple orientations with respect to the polarization field, B(0), and the susceptibility map is reconstructed by weighted linear least squares to account for field noise and the signal void region. Numerical simulations and phantom and in vitro imaging validations demonstrated that COSMOS is a stable and precise approach to quantify a susceptibility distribution using MRI.
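The multiple-orientation weighted least-squares step can be illustrated generically: stack the per-orientation linear systems, scale rows by the square roots of the weights, and solve. The toy matrices below are illustrative stand-ins for the actual dipole-kernel operators of COSMOS.

```python
import numpy as np

def weighted_lls(A_list, b_list, w_list):
    """Stack per-orientation linear systems A_k x = b_k and solve for
    x by weighted linear least squares; the weights w_k downweight
    noisy or signal-void measurements (each row is scaled by the
    square root of its weight)."""
    rows, rhs = [], []
    for A, b, w in zip(A_list, b_list, w_list):
        sw = np.sqrt(w)
        rows.append(A * sw[:, None])
        rhs.append(b * sw)
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs),
                            rcond=None)
    return x

# toy demo: two 'orientations' with consistent data and unit weights
rng = np.random.default_rng(1)
A1, A2 = rng.normal(size=(5, 3)), rng.normal(size=(5, 3))
x_true = np.array([1.0, -2.0, 0.5])
x_hat = weighted_lls([A1, A2], [A1 @ x_true, A2 @ x_true],
                     [np.ones(5), np.ones(5)])
```

The point of sampling at multiple orientations is that the stacked system becomes well conditioned where any single orientation's system is not, which is what stabilizes the inverse problem.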
The Beginner's Guide to the Bootstrap Method of Resampling.
ERIC Educational Resources Information Center
Lane, Ginny G.
The bootstrap method of resampling can be useful in estimating the replicability of study results. The bootstrap procedure creates a mock population from a given sample of data from which multiple samples are then drawn. The method extends the usefulness of the jackknife procedure as it allows for computation of a given statistic across a maximal…
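A minimal sketch of the bootstrap procedure described above, resampling with replacement and recomputing the statistic each time; the data and the percentile interval are illustrative.

```python
import random
import statistics

def bootstrap_stat(sample, stat=statistics.mean, n_boot=2000, seed=1):
    """Draw n_boot resamples with replacement (each the size of the
    original sample) and return the statistic computed on each one,
    giving an empirical sampling distribution of that statistic."""
    rng = random.Random(seed)
    n = len(sample)
    return [stat([rng.choice(sample) for _ in range(n)])
            for _ in range(n_boot)]

data = [2.1, 2.4, 1.9, 2.8, 2.2, 2.5, 2.0, 2.6]
boots = sorted(bootstrap_stat(data))
ci = (boots[49], boots[1949])  # ~95% percentile interval for the mean
```

The spread of the bootstrap distribution is what gives the replicability estimate the guide refers to: a statistic whose bootstrap values vary little is more likely to replicate.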
A New Sample Size Formula for Regression.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.
The focus of this research was to determine the efficacy of a new method of selecting sample sizes for multiple linear regression. A Monte Carlo simulation was used to study both empirical predictive power rates and empirical statistical power rates of the new method and seven other methods: those of C. N. Park and A. L. Dudycha (1974); J. Cohen…
Kaus, Joseph W; Harder, Edward; Lin, Teng; Abel, Robert; McCammon, J Andrew; Wang, Lingle
2015-06-09
Recent advances in improved force fields and sampling methods have made it possible for the accurate calculation of protein–ligand binding free energies. Alchemical free energy perturbation (FEP) using an explicit solvent model is one of the most rigorous methods to calculate relative binding free energies. However, for cases where there are high energy barriers separating the relevant conformations that are important for ligand binding, the calculated free energy may depend on the initial conformation used in the simulation due to the lack of complete sampling of all the important regions in phase space. This is particularly true for ligands with multiple possible binding modes separated by high energy barriers, making it difficult to sample all relevant binding modes even with modern enhanced sampling methods. In this paper, we apply a previously developed method that provides a corrected binding free energy for ligands with multiple binding modes by combining the free energy results from multiple alchemical FEP calculations starting from all enumerated poses, and the results are compared with Glide docking and MM-GBSA calculations. From these calculations, the dominant ligand binding mode can also be predicted. We apply this method to a series of ligands that bind to c-Jun N-terminal kinase-1 (JNK1) and obtain improved free energy results. The dominant ligand binding modes predicted by this method agree with the available crystallography, while both Glide docking and MM-GBSA calculations incorrectly predict the binding modes for some ligands. The method also helps separate the force field error from the ligand sampling error, such that deviations in the predicted binding free energy from the experimental values likely indicate possible inaccuracies in the force field. 
An error in the force field for a subset of the ligands studied was identified using this method, and improved free energy results were obtained by correcting the partial charges assigned to the ligands. This improved the root-mean-square error (RMSE) for the predicted binding free energy from 1.9 kcal/mol with the original partial charges to 1.3 kcal/mol with the corrected partial charges.
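One common way to combine per-mode binding free energies into a single corrected value, consistent with the multiple-binding-mode correction described above, is a Boltzmann-weighted sum; the RT value and the example numbers below are illustrative assumptions, not values from the paper.

```python
import math

RT = 0.593  # kcal/mol at roughly 298 K (illustrative)

def combined_binding_free_energy(mode_dGs, rt=RT):
    """Boltzmann-weighted combination of per-mode binding free
    energies into one value; the lowest-dG (dominant) binding mode
    carries most of the statistical weight in the sum."""
    return -rt * math.log(sum(math.exp(-dg / rt) for dg in mode_dGs))

dGs = [-8.0, -6.5, -5.0]  # per-mode FEP results, kcal/mol
total = combined_binding_free_energy(dGs)
dominant = min(dGs)       # predicted dominant binding mode
# total is slightly more favorable than the best single mode
```

Because the weights decay exponentially, modes a few kcal/mol above the dominant one contribute little, which is why the dominant-mode prediction falls out of the same calculation.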
2016-01-01
Recent advances in improved force fields and sampling methods have made it possible for the accurate calculation of protein–ligand binding free energies. Alchemical free energy perturbation (FEP) using an explicit solvent model is one of the most rigorous methods to calculate relative binding free energies. However, for cases where there are high energy barriers separating the relevant conformations that are important for ligand binding, the calculated free energy may depend on the initial conformation used in the simulation due to the lack of complete sampling of all the important regions in phase space. This is particularly true for ligands with multiple possible binding modes separated by high energy barriers, making it difficult to sample all relevant binding modes even with modern enhanced sampling methods. In this paper, we apply a previously developed method that provides a corrected binding free energy for ligands with multiple binding modes by combining the free energy results from multiple alchemical FEP calculations starting from all enumerated poses, and the results are compared with Glide docking and MM-GBSA calculations. From these calculations, the dominant ligand binding mode can also be predicted. We apply this method to a series of ligands that bind to c-Jun N-terminal kinase-1 (JNK1) and obtain improved free energy results. The dominant ligand binding modes predicted by this method agree with the available crystallography, while both Glide docking and MM-GBSA calculations incorrectly predict the binding modes for some ligands. The method also helps separate the force field error from the ligand sampling error, such that deviations in the predicted binding free energy from the experimental values likely indicate possible inaccuracies in the force field. 
An error in the force field for a subset of the ligands studied was identified using this method, and improved free energy results were obtained by correcting the partial charges assigned to the ligands. This improved the root-mean-square error (RMSE) for the predicted binding free energy from 1.9 kcal/mol with the original partial charges to 1.3 kcal/mol with the corrected partial charges. PMID:26085821
The Community as Classroom: Multiple Perspectives on Student Learning.
ERIC Educational Resources Information Center
Kerrigan, Seanna; Gelmon, Sherrill; Spring, Amy
2003-01-01
Reports on the multiple perspectives of students, community members, and faculty to document the effect of student participation in service-learning courses. The study examined in this article used a large sample size and multiple qualitative and quantitative methods over several years. The results indicate that service learning affects students…
LeBouf, Ryan F; Virji, Mohammed Abbas; Ranpara, Anand; Stefaniak, Aleksandr B
2017-07-01
This method was designed for sampling select quaternary ammonium (quat) compounds in air or on surfaces followed by analysis using ultraperformance liquid chromatography tandem mass spectrometry. Target quats were benzethonium chloride, didecyldimethylammonium bromide, benzyldimethyldodecylammonium chloride, benzyldimethyltetradecylammonium chloride, and benzyldimethylhexadecylammonium chloride. For air sampling, polytetrafluoroethylene (PTFE) filters are recommended for 15-min to 24-hour sampling. For surface sampling, Pro-wipe® 880 (PW) media was chosen. Samples were extracted in 60:40 acetonitrile:0.1% formic acid for 1 hour on an orbital shaker. Method detection limits range from 0.3 to 2 ng/ml depending on media and analyte. Matrix effects of media are minimized through the use of multiple reaction monitoring versus selected ion recording. Upper confidence limits on accuracy meet the National Institute for Occupational Safety and Health 25% criterion for PTFE and PW media for all analytes. Using PTFE and PW analyzed with multiple reaction monitoring, the method quantifies levels among the different quats compounds with high precision (<10% relative standard deviation) and low bias (<11%). The method is sensitive enough with very low method detection limits to capture quats on air sampling filters with only a 15-min sample duration with a maximum assessed storage time of 103 days before sample extraction. This method will support future exposure assessment and quantitative epidemiologic studies to explore exposure-response relationships and establish levels of quats exposures associated with adverse health effects. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Shah, Iltaf; Petroczi, Andrea; Uvacsek, Martina; Ránky, Márta; Naughton, Declan P
2014-01-01
Considerable efforts are being expended to develop more effective methods to detect drugs in forensic science, for applications such as preventing doping in sport. The aim of this study was to develop a sensitive and accurate method for analytes of a forensic and toxicological nature in human hair at sub-pg levels. The hair test covers a range of different classes of drugs and metabolites of forensic and toxicological interest, including selected anabolic steroids, cocaine, amphetamines, cannabinoids, opiates, bronchodilators, phencyclidine and ketamine. For extraction purposes, the hair samples were decontaminated using dichloromethane, ground, treated with 1 M sodium hydroxide, neutralised with hydrochloric acid and phosphate buffer, and the homogenate was then extracted with hexane using liquid-liquid extraction (LLE). Following extraction from hair samples, drug screening employed liquid chromatography coupled to tandem mass spectrometric (LC-MS/MS) analysis using a dynamic multiple reaction monitoring (DYN-MRM) method with proprietary software. The screening method (for > 200 drugs/metabolites) was calibrated with a tailored drug mixture and was validated for 20 selected drugs for this study. Using standard additions to hair sample extracts, validation was in line with FDA guidance. A Zorbax Eclipse Plus C18 (2.1 mm internal diameter × 100 mm length × 1.8 μm particle size) column was used for analysis. Total instrument run time was 8 minutes with no noted matrix interferences. The LODs of the compounds ranged from 0.05 to 0.5 pg/mg of hair. In total, 233 human hair samples were screened using this new method, and samples were confirmed positive for 20 different drugs, mainly steroids and drugs of abuse. This is the first report of the application of this proprietary system to investigate the presence of drugs in human hair samples. 
The method is selective, sensitive and robust for the screening and confirmation of multiple drugs in a single analysis and has potential as a very useful tool for the analysis of large array of controlled substances and drugs of abuse.
Al, Kait F; Bisanz, Jordan E; Gloor, Gregory B; Reid, Gregor; Burton, Jeremy P
2018-01-01
The increasing interest in the impact of the gut microbiota on health and disease has resulted in the emergence of multiple human microbiome-related studies. However, multiple sampling methods are being used, making cross-comparison of results difficult. To avoid additional clinic visits and increase patient recruitment to these studies, there is the potential to utilize at-home stool sampling. The aim of this pilot study was to compare simple self-sampling collection and storage methods. To simulate storage conditions, stool samples from three volunteers were freshly collected, placed on toilet tissue, and stored at four temperatures (-80, 7, 22 and 37°C), either dry or in the presence of a stabilization agent (RNAlater®), for 3 or 7 days. Using 16S rRNA gene sequencing by Illumina, the effect of storage variations for each sample was compared to a reference community from fresh, unstored counterparts. Fastq files may be accessed in the NCBI Sequence Read Archive: Bioproject ID PRJNA418287. Microbial diversity and composition were not significantly altered by any storage method. Samples were always separable based on participant, regardless of storage method, suggesting there was no need for sample preservation by a stabilization agent. In summary, if immediate sample processing is not feasible, short-term storage of unpreserved stool samples on toilet paper offers a reliable way to assess the microbiota composition by 16S rRNA gene sequencing. Copyright © 2017 Elsevier B.V. All rights reserved.
Mauchline, T H; Mohan, S; Davies, K G; Schaff, J E; Opperman, C H; Kerry, B R; Hirsch, P R
2010-05-01
To establish a reliable protocol to extract DNA from Pasteuria penetrans endospores for use as template in multiple strand amplification, thus providing sufficient material for genetic analyses, and to develop a highly sensitive PCR-based diagnostic tool for P. penetrans. An optimized method to decontaminate endospores and to release and purify DNA enabled multiple strand amplification. DNA purity was assessed by cloning and sequencing gyrB and 16S rRNA gene fragments obtained from PCR using generic primers. Samples indicated to be 100% P. penetrans by the gyrB assay were estimated at 46% using the 16S rRNA gene. No bias was detected on cloning and sequencing 12 housekeeping and sporulation gene fragments from amplified DNA. The detection limit by PCR with Pasteuria-specific 16S rRNA gene primers following multiple strand amplification of DNA extracted using this method was a single endospore. Generation of large quantities of DNA will facilitate genomic sequencing of P. penetrans. Apparent differences in sample purity are explained by variations in 16S rRNA gene copy number in Eubacteria, leading to exaggerated estimates of sample contamination. Detection of single endospores will facilitate investigations of P. penetrans molecular ecology. These methods will advance studies on P. penetrans and facilitate research on other obligate and fastidious micro-organisms for which it is currently impractical to obtain DNA in sufficient quantity and quality.
Capers, Patrice L.; Brown, Andrew W.; Dawson, John A.; Allison, David B.
2015-01-01
Background: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. Creation of high-throughput methods (e.g., search heuristics, crowdsourcing) has improved the feasibility of large meta-research questions, but possibly at the cost of accuracy. Objective: To evaluate the use of double sampling combined with multiple imputation (DS + MI) to address meta-research questions, using as an example the adherence of PubMed entries to two simple Consolidated Standards of Reporting Trials (CONSORT) guidelines for titles and abstracts. Methods: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT, human, abstract available, and English language (n = 322,107). For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower-rigor, higher-throughput (RLOTHI) method using search heuristics, while the subsample was evaluated using a higher-rigor, lower-throughput (RHITLO) human rating method. Multiple imputation of the missing-completely-at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. Results: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title = 1.00, abstract = 0.92). Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS + MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by year: subsample RHITLO 1.050–1.174 vs. DS + MI 1.082–1.151). As evidence of improved accuracy, DS + MI coefficient estimates were closer to RHITLO than the large-sample RLOTHI estimates. Conclusion: Our results support our hypothesis that DS + MI would result in improved precision and accuracy. 
This method is flexible and may provide a practical way to examine large corpora of literature. PMID:25988135
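The DS + MI mechanics can be sketched in miniature. Everything below is a toy stand-in, not the paper's model: the imputation there was also informed by RCT status, country, and publication year, which this sketch omits, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: low-rigor labels (RLOTHI) exist for every entry in the large
# sample; high-rigor labels (RHITLO) are observed only for a random subsample.
n_large, n_sub, true_p = 10_000, 500, 0.7
rhitlo = rng.random(n_large) < true_p                 # gold standard, mostly hidden
rlothi = np.where(rng.random(n_large) < 0.9,          # low-rigor agrees 90% (assumed)
                  rhitlo, ~rhitlo)

sub = rng.choice(n_large, n_sub, replace=False)
observed = np.zeros(n_large, bool)
observed[sub] = True

# From the subsample, estimate P(RHITLO = 1 | RLOTHI = v)
p_given = {v: rhitlo[sub][rlothi[sub] == v].mean() for v in (0, 1)}

# Multiple imputation: complete the large sample m times, pool the estimates
m, estimates = 20, []
for _ in range(m):
    imp = rhitlo.astype(float)                        # copy; hidden entries overwritten
    for v in (0, 1):
        mask = ~observed & (rlothi == v)
        imp[mask] = rng.random(mask.sum()) < p_given[v]
    estimates.append(imp.mean())
pooled = float(np.mean(estimates))    # pooled compliance estimate, near true_p
```

Rubin's rules would additionally combine within- and between-imputation variance for interval estimates; only the point estimate is pooled here.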
John F. Caratti
2006-01-01
The FIREMON Density (DE) method is used to assess changes in plant species density and height for a macroplot. This method uses multiple quadrats and belt transects (transects having a width) to sample within plot variation and quantify statistically valid changes in plant species density and height over time. Herbaceous plant species are sampled with quadrats while...
Radiation pattern synthesis of planar antennas using the iterative sampling method
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Coffey, E. L.
1975-01-01
A synthesis method is presented for determining an excitation of an arbitrary (but fixed) planar source configuration. The desired radiation pattern is specified over all or part of the visible region. It may have multiple and/or shaped main beams with low sidelobes. The iterative sampling method is used to find an excitation of the source which yields a radiation pattern that approximates the desired pattern to within a specified tolerance. In this paper the method is used to calculate excitations for line sources, linear arrays (equally and unequally spaced), rectangular apertures, rectangular arrays (arbitrary spacing grid), and circular apertures. Examples using these sources to form patterns with shaped main beams, multiple main beams, shaped sidelobe levels, and combinations thereof are given.
Monitoring multiple components in vinegar fermentation using Raman spectroscopy.
Uysal, Reyhan Selin; Soykut, Esra Acar; Boyaci, Ismail Hakki; Topcu, Ali
2013-12-15
In this study, the utility of Raman spectroscopy (RS) with chemometric methods for quantification of multiple components in the fermentation process was investigated. Vinegar, the product of a two stage fermentation, was used as a model and glucose and fructose consumption, ethanol production and consumption and acetic acid production were followed using RS and the partial least squares (PLS) method. Calibration of the PLS method was performed using model solutions. The prediction capability of the method was then investigated with both model and real samples. HPLC was used as a reference method. The results from comparing RS-PLS and HPLC with each other showed good correlations were obtained between predicted and actual sample values for glucose (R(2)=0.973), fructose (R(2)=0.988), ethanol (R(2)=0.996) and acetic acid (R(2)=0.983). In conclusion, a combination of RS with chemometric methods can be applied to monitor multiple components of the fermentation process from start to finish with a single measurement in a short time. Copyright © 2013 Elsevier Ltd. All rights reserved.
40 CFR Appendix C to Subpart Nnn... - Method for the Determination of Product Density
Code of Federal Regulations, 2013 CFR
2013-07-01
... One square foot (12 in. by 12 in.) template, or templates that are multiples of one square foot, for... to the plant's written procedure for the designated product. 3.2 Cut samples using one square foot (or multiples of one square foot) template. 3.3 Weigh product and obtain area weight (lb/ft2). 3.4 Measure sample...
40 CFR Appendix C to Subpart Nnn... - Method for the Determination of Product Density
Code of Federal Regulations, 2014 CFR
2014-07-01
... One square foot (12 in. by 12 in.) template, or templates that are multiples of one square foot, for... to the plant's written procedure for the designated product. 3.2 Cut samples using one square foot (or multiples of one square foot) template. 3.3 Weigh product and obtain area weight (lb/ft2). 3.4 Measure sample...
FISHtrees 3.0: Tumor Phylogenetics Using a Ploidy Probe.
Gertz, E Michael; Chowdhury, Salim Akhter; Lee, Woei-Jyh; Wangsa, Darawalee; Heselmeyer-Haddad, Kerstin; Ried, Thomas; Schwartz, Russell; Schäffer, Alejandro A
2016-01-01
Advances in fluorescence in situ hybridization (FISH) make it feasible to detect multiple copy-number changes in hundreds of cells of solid tumors. Studies using FISH, sequencing, and other technologies have revealed substantial intra-tumor heterogeneity. The evolution of subclones in tumors may be modeled by phylogenies. Tumors often harbor aneuploid or polyploid cell populations. Using a FISH probe to estimate changes in ploidy can guide the creation of trees that model changes in ploidy and individual gene copy-number variations. We present FISHtrees 3.0, which implements a ploidy-based tree building method based on mixed integer linear programming (MILP). The ploidy-based modeling in FISHtrees includes a new formulation of the problem of merging trees for changes of a single gene into trees modeling changes in multiple genes and the ploidy. When multiple samples are collected from each patient, varying over time or tumor regions, it is useful to evaluate similarities in tumor progression among the samples. Therefore, we further implemented in FISHtrees 3.0 a new method to build consensus graphs for multiple samples. We validate FISHtrees 3.0 on a simulated data and on FISH data from paired cases of cervical primary and metastatic tumors and on paired breast ductal carcinoma in situ (DCIS) and invasive ductal carcinoma (IDC). Tests on simulated data show improved accuracy of the ploidy-based approach relative to prior ploidyless methods. Tests on real data further demonstrate novel insights these methods offer into tumor progression processes. Trees for DCIS samples are significantly less complex than trees for paired IDC samples. Consensus graphs show substantial divergence among most paired samples from both sets. Low consensus between DCIS and IDC trees may help explain the difficulty in finding biomarkers that predict which DCIS cases are at most risk to progress to IDC. The FISHtrees software is available at ftp://ftp.ncbi.nih.gov/pub/FISHtrees.
FISHtrees 3.0: Tumor Phylogenetics Using a Ploidy Probe
Chowdhury, Salim Akhter; Lee, Woei-Jyh; Wangsa, Darawalee; Heselmeyer-Haddad, Kerstin; Ried, Thomas; Schwartz, Russell; Schäffer, Alejandro A.
2016-01-01
Advances in fluorescence in situ hybridization (FISH) make it feasible to detect multiple copy-number changes in hundreds of cells of solid tumors. Studies using FISH, sequencing, and other technologies have revealed substantial intra-tumor heterogeneity. The evolution of subclones in tumors may be modeled by phylogenies. Tumors often harbor aneuploid or polyploid cell populations. Using a FISH probe to estimate changes in ploidy can guide the creation of trees that model changes in ploidy and individual gene copy-number variations. We present FISHtrees 3.0, which implements a ploidy-based tree building method based on mixed integer linear programming (MILP). The ploidy-based modeling in FISHtrees includes a new formulation of the problem of merging trees for changes of a single gene into trees modeling changes in multiple genes and the ploidy. When multiple samples are collected from each patient, varying over time or tumor regions, it is useful to evaluate similarities in tumor progression among the samples. Therefore, we further implemented in FISHtrees 3.0 a new method to build consensus graphs for multiple samples. We validate FISHtrees 3.0 on a simulated data and on FISH data from paired cases of cervical primary and metastatic tumors and on paired breast ductal carcinoma in situ (DCIS) and invasive ductal carcinoma (IDC). Tests on simulated data show improved accuracy of the ploidy-based approach relative to prior ploidyless methods. Tests on real data further demonstrate novel insights these methods offer into tumor progression processes. Trees for DCIS samples are significantly less complex than trees for paired IDC samples. Consensus graphs show substantial divergence among most paired samples from both sets. Low consensus between DCIS and IDC trees may help explain the difficulty in finding biomarkers that predict which DCIS cases are at most risk to progress to IDC. The FISHtrees software is available at ftp://ftp.ncbi.nih.gov/pub/FISHtrees. 
PMID:27362268
Method Development and Monitoring of Cyanotoxins in Water ...
Increasing occurrence of cyanobacterial harmful algal blooms (HABs) in ambient waters has become a worldwide concern. Numerous cyanotoxins, which are toxic to animals and humans, can be produced during HAB events. Validated standardized methods that are rugged, selective and sensitive are needed for these cyanotoxins in drinking and ambient waters. EPA Drinking Water Methods 544 (six microcystins [MCs] and nodularin) and 545 (cylindrospermopsin [CYL] and anatoxin-a [ANA]) have been developed using liquid chromatography/tandem mass spectrometry (LC/MS/MS). This presentation will describe the adaptation of Methods 544 and 545 to ambient waters and the application of these ambient water methods to seven bodies of water across the country with visible cyanobacterial blooms. Several changes were made to Method 544 to accommodate the increased complexity of ambient water. The major changes were to reduce the sample volume from 500 to 100 mL for ambient water analyses and to incorporate seven additional MCs in an effort to capture data for more MC congeners in ambient waters. The major change to Method 545 for ambient water analyses was the addition of secondary ion transitions for each of the target analytes for confirmation purposes. Both methods have been ruggedly tested in bloom samples from multiple bodies of water, some with multiple sample locations and sampling days. For ambient water bloom samples spiked with MCs (>800 congener measurements), 97% of the measurements
Le Pichon, Céline; Tales, Évelyne; Belliard, Jérôme; Torgersen, Christian E.
2017-01-01
Spatially intensive sampling by electrofishing is proposed as a method for quantifying spatial variation in fish assemblages at multiple scales along extensive stream sections in headwater catchments. We used this method to sample fish species at 10-m2 points spaced every 20 m throughout 5 km of a headwater stream in France. The spatially intensive sampling design provided information at a spatial resolution and extent that enabled exploration of spatial heterogeneity in fish assemblage structure and aquatic habitat at multiple scales with empirical variograms and wavelet analysis. These analyses were effective for detecting scales of periodicity, trends, and discontinuities in the distribution of species in relation to tributary junctions and obstacles to fish movement. This approach to sampling riverine fishes may be useful in fisheries research and management for evaluating stream fish responses to natural and altered habitats and for identifying sites for potential restoration.
ERIC Educational Resources Information Center
Baylor, Carolyn; Yorkston, Kathryn; Bamer, Alyssa; Britton, Deanna; Amtmann, Dagmar
2010-01-01
Purpose: To explore variables associated with self-reported communicative participation in a sample (n = 498) of community-dwelling adults with multiple sclerosis (MS). Method: A battery of questionnaires was administered online or on paper per participant preference. Data were analyzed using multiple linear backward stepwise regression. The…
Doll, Charles G.; Wright, Cherylyn W.; Morley, Shannon M.; ...
2017-02-01
In this paper, a modified version of the Direct LSC method to correct for quenching effect was investigated for the determination of bio-originated fuel content in fuel samples produced from multiple biological starting materials. The modified method was found to be accurate in determining the percent bio-originated fuel to within 5% of the actual value for samples with quenching effects ≤43%. Finally, analysis of highly quenched samples was possible when diluted with the exception of one sample with a 100% quenching effect.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doll, Charles G.; Wright, Cherylyn W.; Morley, Shannon M.
A modified version of the Direct LSC method to correct for quenching effect was investigated for the determination of bio-originated fuel content in fuel samples produced from multiple biological starting materials. The modified method was found to be accurate in determining the percent bio-originated fuel to within 5% of the actual value for samples with quenching effects ≤43%. Analysis of highly quenched samples was possible when diluted with the exception of one sample with a 100% quenching effect.
Composite analysis for Escherichia coli at coastal beaches
Bertke, E.E.
2007-01-01
At some coastal beaches, concentrations of fecal-indicator bacteria can differ substantially between multiple points at the same beach at the same time. Because of this spatial variability, the recreational water quality at beaches is sometimes determined by stratifying a beach into several areas and collecting a sample from each area to analyze for the concentration of fecal-indicator bacteria. The average concentration of bacteria from those points is often compared to the recreational standard for advisory postings. Alternatively, if funds are limited, a single sample is collected to represent the beach. Compositing the samples collected from each section of the beach may yield data as accurate as averaging concentrations from multiple points, at a reduced cost. In the study described herein, water samples were collected at multiple points from three Lake Erie beaches and analyzed for Escherichia coli on modified mTEC agar (EPA Method 1603). From the multiple-point samples, a composite sample (n = 116) was formed at each beach by combining equal aliquots of well-mixed water from each point. Results from this study indicate that E. coli concentrations from the arithmetic average of multiple-point samples and from composited samples are not significantly different (t = 1.59, p = 0.1139) and yield similar measures of recreational water quality; additionally, composite samples could result in significant cost savings.
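The mass-balance reason a composite of equal aliquots should track the arithmetic average can be checked directly. The simulated concentrations below are illustrative; real samples add mixing and analytical error, which is what the study's paired comparison tested:

```python
import numpy as np

rng = np.random.default_rng(1)

# 116 sampling events at a beach stratified into 3 areas (synthetic data)
points = 10 ** rng.normal(2.0, 0.4, size=(116, 3))   # E. coli, CFU/100 mL

# Route 1: analyze each point separately and average the concentrations
avg = points.mean(axis=1)

# Route 2: combine equal aliquots of volume v from each point, analyze once;
# concentration = total bacteria / total volume
v = 50.0                                             # mL per aliquot (arbitrary)
composite = (points * v).sum(axis=1) / (3 * v)

# In the ideal, error-free case the two routes are algebraically identical
```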
Application of permanents of square matrices for DNA identification in multiple-fatality cases
2013-01-01
Background: DNA profiling is essential for individual identification. In forensic medicine, the likelihood ratio (LR) is commonly used to identify individuals. The LR is calculated by comparing two hypotheses for the sample DNA: that the sample DNA is identical or related to a reference DNA, and that it is randomly sampled from a population. For multiple-fatality cases, however, identification should be considered as an assignment problem, and a particular sample and reference pair should therefore be compared with other possibilities conditional on the entire dataset. Results: We developed a new method to compute the probability via permanents of square matrices of nonnegative entries. As the exact permanent is known as a #P-complete problem, we applied the Huber–Law algorithm to approximate the permanents. We performed a computer simulation to evaluate the performance of our method via receiver operating characteristic curve analysis compared with LR under the assumption of a closed incident. Differences between the two methods were well demonstrated when references provided neither obligate alleles nor impossible alleles. The new method exhibited higher sensitivity (0.188 vs. 0.055) at a threshold value of 0.999, at which specificity was 1, and it exhibited higher area under a receiver operating characteristic curve (0.990 vs. 0.959, P = 9.6E-15). Conclusions: Our method therefore offers a solution for a computationally intensive assignment problem and may be a viable alternative to LR-based identification for closed-incident multiple-fatality cases. PMID:23962363
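Because computing a permanent exactly is #P-complete, the paper approximates it with the Huber–Law sampler; for the small matrices of a small closed incident, Ryser's exact inclusion-exclusion formula (O(2^n·n^2)) is a useful reference point. A minimal sketch, not the paper's implementation:

```python
from itertools import combinations

def permanent(a):
    """Exact permanent of an n x n matrix via Ryser's formula:
    per(A) = (-1)^n * sum over nonempty column subsets S of
    (-1)^|S| * prod_i (sum_{j in S} a[i][j]). Practical only for small n."""
    n = len(a)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

# In the identification setting, a[i][j] would hold the match likelihood of
# sample i against reference j; per(A) sums over all complete assignments.
```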
Lee, Minjung; Dignam, James J.; Han, Junhee
2014-01-01
We propose a nonparametric approach for cumulative incidence estimation when causes of failure are unknown or missing for some subjects. Under the missing at random assumption, we estimate the cumulative incidence function using multiple imputation methods. We develop asymptotic theory for the cumulative incidence estimators obtained from multiple imputation methods. We also discuss how to construct confidence intervals for the cumulative incidence function and perform a test for comparing the cumulative incidence functions in two samples with missing cause of failure. Through simulation studies, we show that the proposed methods perform well. The methods are illustrated with data from a randomized clinical trial in early stage breast cancer. PMID:25043107
NASA Astrophysics Data System (ADS)
Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.
2017-08-01
Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine-contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.
Brooks, M.H.; Schroder, L.J.; Malo, B.A.
1985-01-01
Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory has been indicated when the laboratory mean for that analyte is shown to be significantly different from the mean for the most-probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate have been compared with the colorimetric methods that were also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory has been estimated by calculating a pooled variance for each analyte. Analyte estimated precisions have been compared using F-tests and differences in analyte precisions for laboratory pairs have been reported. (USGS)
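The interlaboratory pattern (one-way ANOVA, followed by a multiple range test to isolate the offending laboratory) can be sketched with synthetic data. The lab names, analyte, and bias below are invented, and SciPy's `f_oneway` stands in for the full USGS workflow:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Four labs analyzing identical simulated-precipitation samples (synthetic)
labs = {
    "lab_A": rng.normal(2.00, 0.05, 30),   # e.g. sulfate, mg/L
    "lab_B": rng.normal(2.00, 0.05, 30),
    "lab_C": rng.normal(2.15, 0.05, 30),   # deliberately biased high
    "lab_D": rng.normal(2.00, 0.05, 30),
}

f_stat, p_value = stats.f_oneway(*labs.values())
# A small p-value says only that *some* lab differs; a multiple range test
# such as Duncan's (or Tukey's HSD) would then identify lab_C specifically.
```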
Creating ensembles of decision trees through sampling
Kamath, Chandrika; Cantu-Paz, Erick
2005-08-30
A system for decision tree ensembles that includes a module to read the data, a module to sort the data, a module to evaluate a potential split of the data according to some criterion using a random sample of the data, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method is based on statistical sampling techniques and includes the steps of reading the data; sorting the data; evaluating a potential split according to some criterion using a random sample of the data, splitting the data, and combining multiple decision trees in ensembles.
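The patent's core idea (evaluating candidate splits on a random sample of the data, then combining many trees in an ensemble) can be sketched with one-level trees. Everything here is a toy stand-in, not the patented implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_stump(x, y, frac=0.3):
    """One-level tree: score candidate thresholds by Gini impurity on a
    random subsample only, then label the two leaves by majority vote."""
    sub = rng.choice(len(x), max(2, int(frac * len(x))), replace=False)
    xs, ys = x[sub], y[sub]
    best_t, best_g = xs.min(), np.inf
    for t in np.unique(xs)[:-1]:              # keep the right leaf non-empty
        left, right = ys[xs <= t], ys[xs > t]
        g = sum(len(s) * (1.0 - s.mean() ** 2 - (1.0 - s.mean()) ** 2)
                for s in (left, right)) / len(ys)
        if g < best_g:
            best_t, best_g = t, g
    return (best_t,
            int(round(y[x <= best_t].mean())),    # left-leaf label
            int(round(y[x > best_t].mean())))     # right-leaf label

def fit_ensemble(x, y, n_trees=25):
    """Bagging: each stump sees a bootstrap resample of the data."""
    boots = (rng.integers(0, len(x), len(x)) for _ in range(n_trees))
    return [fit_stump(x[b], y[b]) for b in boots]

def predict(models, x0):
    votes = [r if x0 > t else l for t, l, r in models]
    return int(np.mean(votes) >= 0.5)

x = rng.random(400)
y = (x > 0.5).astype(int)                     # simple 1-D classification task
ens = fit_ensemble(x, y)
```

Subsampling the split evaluation trades a little split quality for speed; bagging then averages away the extra variance this introduces.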
40 CFR Appendix C to Subpart Nnn... - Method for the Determination of Product Density
Code of Federal Regulations, 2011 CFR
2011-07-01
... insulation. The method is applicable to all cured board and blanket products. 2. Equipment One square foot (12 in. by 12 in.) template, or templates that are multiples of one square foot, for use in cutting... procedure for the designated product. 3.2 Cut samples using one square foot (or multiples of one square foot...
40 CFR Appendix C to Subpart Nnn... - Method for the Determination of Product Density
Code of Federal Regulations, 2012 CFR
2012-07-01
.... The method is applicable to all cured board and blanket products. 2. Equipment One square foot (12 in. by 12 in.) template, or templates that are multiples of one square foot, for use in cutting... procedure for the designated product. 3.2 Cut samples using one square foot (or multiples of one square foot...
40 CFR Appendix C to Subpart Nnn... - Method for the Determination of Product Density
Code of Federal Regulations, 2010 CFR
2010-07-01
.... The method is applicable to all cured board and blanket products. 2. Equipment One square foot (12 in. by 12 in.) template, or templates that are multiples of one square foot, for use in cutting... procedure for the designated product. 3.2 Cut samples using one square foot (or multiples of one square foot...
Efficient computation of the joint sample frequency spectra for multiple populations.
Kamm, John A; Terhorst, Jonathan; Song, Yun S
2017-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
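momi computes the expected joint SFS under complex multi-population demographies; the single-population, constant-size baseline it generalizes is simple enough to state directly. This is the textbook neutral-coalescent result, not momi's algorithm:

```python
import numpy as np

def expected_sfs(n, theta=1.0):
    """E[xi_i] = theta / i for i = 1..n-1: the expected number of sites at
    which the mutant allele appears i times in a sample of n sequences,
    under the standard neutral coalescent with constant population size."""
    i = np.arange(1, n)
    return theta / i

sfs = expected_sfs(10)
# Normalizing gives the probability that a segregating site has derived
# allele count i; demographic events (growth, splits, migration) distort
# this baseline shape, which is what SFS-based inference exploits.
probs = sfs / sfs.sum()
```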
Efficient computation of the joint sample frequency spectra for multiple populations
Kamm, John A.; Terhorst, Jonathan; Song, Yun S.
2016-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity. PMID:28239248
NASA Astrophysics Data System (ADS)
Popescu, Dan P.; Hewko, Mark D.; Sowa, Michael G.
2007-01-01
This study demonstrates a simple method for attenuating the speckle noise generated by coherent multiple-scattered photons in optical-coherence tomography images. The method could be included among the space-diversity techniques used for speckle reduction. It relies on displacing the sample along a weakly focused beam in the sample arm of the interferometer, acquiring a coherent image for each sample position and adding the individual images to form a compounded image. It is proven that the compounded image displays a reduction in the speckle noise generated by multiple scattered photons and an enhancement in the intensity signal caused by single-backscattered photons. To evaluate its potential biomedical applications, the method is used to investigate in vitro a caries lesion affecting the enamel layer of a wisdom tooth. Because of the uncorrelated nature of the speckle noise the compounded image provides a better mapping of the lesion compared to a single (coherent) image.
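The statistical core of the compounding step (averaging N images with uncorrelated speckle lowers the noise standard deviation by about √N) can be checked numerically. The additive-noise model and numbers below are illustrative only, not an OCT simulation:

```python
import numpy as np

rng = np.random.default_rng(4)

n_imgs, n_pix = 16, 50_000
signal = 1.0                                   # single-backscatter level (assumed static)
# uncorrelated speckle noise in each acquisition (toy additive model)
frames = signal + rng.normal(0.0, 0.2, (n_imgs, n_pix))
compound = frames.mean(axis=0)                 # compounded image

print(frames[0].std())    # noise std of one coherent image, ~0.2
print(compound.std())     # reduced by ~sqrt(16) = 4
```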
An integrate-over-temperature approach for enhanced sampling.
Gao, Yi Qin
2008-02-14
A simple method is introduced to achieve efficient random walking in the energy space in molecular dynamics simulations which thus enhances the sampling over a large energy range. The approach is closely related to multicanonical and replica exchange simulation methods in that it allows configurations of the system to be sampled in a wide energy range by making use of Boltzmann distribution functions at multiple temperatures. A biased potential is quickly generated using this method and is then used in accelerated molecular dynamics simulations.
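The flavor of the approach, summing Boltzmann factors over a ladder of temperatures so that high-energy regions are flattened, can be sketched as follows. The weight choice and the way the bias enters the dynamics are simplified relative to the actual method, and all parameters are invented:

```python
import numpy as np

def effective_potential(u, betas, weights, beta0):
    """Integrated-tempering-style bias (sketch): replace exp(-beta0*U) by
    sum_k w_k * exp(-beta_k*U), giving U_eff = -(1/beta0) * ln(sum ...),
    which rises more slowly than U at high energies."""
    u = np.asarray(u, float)
    # log-sum-exp over the temperature ladder, for numerical stability
    log_terms = np.log(weights)[:, None] - np.outer(betas, u)
    m = log_terms.max(axis=0)
    return -(m + np.log(np.exp(log_terms - m).sum(axis=0))) / beta0

u = np.linspace(0.0, 20.0, 5)                  # bare potential energies
betas = np.linspace(0.2, 1.0, 9)               # 1/kT ladder (hypothetical)
ueff = effective_potential(u, betas, np.ones(9) / 9, beta0=1.0)
# ueff increases with u but much more slowly at high u, so barriers shrink
```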
Xie, Wei-Qi; Chai, Xin-Sheng
2016-04-22
This paper describes a new method for the rapid determination of the moisture content in paper materials. The method is based on multiple headspace extraction gas chromatography (MHE-GC) at a temperature above the boiling point of water, from which an integrated water loss from the tested sample due to evaporation can be measured and from which the moisture content in the sample can be determined. The results show that the new method has a good precision (with the relative standard deviation <0.96%), high sensitivity (the limit of quantitation=0.005%) and good accuracy (the relative differences <1.4%). Therefore, the method is quite suitable for many uses in research and industrial applications. Copyright © 2016 Elsevier B.V. All rights reserved.
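The quantitative principle behind multiple headspace extraction is that each successive extraction of the sealed vial removes a constant fraction of the remaining water, so peak areas decay geometrically and their infinite sum is finite. A sketch with synthetic areas (the decay ratio and any calibration to % moisture are invented, not the paper's values):

```python
import numpy as np

def total_analyte_area(areas):
    """MHE principle (sketch): A_i = A_1 * q**(i-1), so the total area is
    A_1 / (1 - q); estimate q and A_1 from a log-linear fit of the series."""
    areas = np.asarray(areas, float)
    i = np.arange(len(areas))
    slope, intercept = np.polyfit(i, np.log(areas), 1)
    q = np.exp(slope)                 # per-extraction retention fraction
    a1 = np.exp(intercept)            # first-extraction area
    return a1 / (1.0 - q)

# synthetic series with q = 0.6 and A_1 = 100 -> total = 100 / 0.4 = 250
areas = 100 * 0.6 ** np.arange(5)
total = total_analyte_area(areas)     # calibrate this against a water standard
```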
Mollah, Mohammad Manir Hossain; Jamal, Rahman; Mokhtar, Norfilza Mohd; Harun, Roslan; Mollah, Md. Nurul Haque
2015-01-01
Background: Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method to overcome some problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. Results: The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Conclusion: Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and proposed) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. 
In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large-sample cases in the presence of more than 50% outlying genes. The proposed method also exhibited better performance than the other methods for m > 2 conditions with multiple patterns of expression, whereas the BetaEB method has not been extended to this condition. Therefore, the proposed approach would be more suitable and reliable on average for the identification of DE genes between two or more conditions with multiple patterns of expression. PMID:26413858
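The β-weight function described above can be illustrated with a one-dimensional Gaussian sketch; the paper's multivariate form differs, and the function and values here are illustrative only:

```python
import math

def beta_weight(x, mu, sigma, beta=0.2):
    """Beta-weight in (0, 1]: near 1 for typical values, near 0 for outliers.

    One-dimensional Gaussian sketch of the minimum beta-divergence weight,
    w(x) proportional to phi(x; mu, sigma)**beta, normalized so w(mu) = 1.
    """
    z = (x - mu) / sigma
    return math.exp(-0.5 * beta * z * z)

typical = beta_weight(10.2, mu=10.0, sigma=1.0)   # close to 1
outlier = beta_weight(25.0, mu=10.0, sigma=1.0)   # close to 0
```

Thresholding these weights against a cut-off derived from their distribution is what flags an expression as outlying while leaving typical expressions nearly fully weighted in the ANOVA estimation.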
Multiple immunofluorescence labelling of formalin-fixed paraffin-embedded (FFPE) tissue
Robertson, David; Savage, Kay; Reis-Filho, Jorge S; Isacke, Clare M
2008-01-01
Background Investigating the expression of candidate genes in tissue samples usually involves either immunohistochemical labelling of formalin-fixed paraffin-embedded (FFPE) sections or immunofluorescence labelling of cryosections. Although both of these methods provide essential data, both have important limitations as research tools. Consequently, there is a demand in the research community to be able to perform routine, high quality immunofluorescence labelling of FFPE tissues. Results We present here a robust optimised method for high resolution immunofluorescence labelling of FFPE tissues, which involves the combination of antigen retrieval, indirect immunofluorescence and confocal laser scanning microscopy. We demonstrate the utility of this method with examples of immunofluorescence labelling of human kidney, human breast and a tissue microarray of invasive human breast cancers. Finally, we demonstrate that stained slides can be stored in the short term at 4°C or in the longer term at -20°C prior to images being collected. This approach has the potential to unlock a large in vivo database for immunofluorescence investigations and has the major advantages over immunohistochemistry in that it provides higher resolution imaging of antigen localization and the ability to label multiple antigens simultaneously. Conclusion This method provides a link between the cell biology and pathology communities. For the cell biologist, it will enable them to utilise the vast archive of pathology specimens to advance their in vitro data into in vivo samples, in particular archival material and tissue microarrays. For the pathologist, it will enable them to utilise multiple antibodies on a single section to characterise particular cell populations or to test multiple biomarkers in limited samples and define with greater accuracy cellular heterogeneity in tissue samples. PMID:18366689
Furlong, Edward T.; Noriega, Mary C.; Kanagy, Christopher J.; Kanagy, Leslie K.; Coffey, Laura J.; Burkhardt, Mark R.
2014-01-01
This report describes a method for the determination of 110 human-use pharmaceuticals using a 100-microliter aliquot of a filtered water sample directly injected into a high-performance liquid chromatograph coupled to a triple-quadrupole tandem mass spectrometer using an electrospray ionization source operated in the positive ion mode. The pharmaceuticals were separated by using a reversed-phase gradient of formic acid/ammonium formate-modified water and methanol. Multiple reaction monitoring of two fragmentations of the protonated molecular ion of each pharmaceutical to two unique product ions was used to identify each pharmaceutical qualitatively. The primary multiple reaction monitoring precursor-product ion transition was quantified for each pharmaceutical relative to the primary multiple reaction monitoring precursor-product transition of one of 19 isotope-dilution standard pharmaceuticals or the pesticide atrazine, using an exact stable isotope analogue where possible. Each isotope-dilution standard was selected, when possible, for its chemical similarity to the unlabeled pharmaceutical of interest, and added to the sample after filtration but prior to analysis. Method performance for each pharmaceutical was determined for reagent water, groundwater, treated drinking water, surface water, treated wastewater effluent, and wastewater influent sample matrixes that this method will likely be applied to. Each matrix was evaluated in order of increasing complexity to demonstrate (1) the sensitivity of the method in different water matrixes and (2) the effect of sample matrix, particularly matrix enhancement or suppression of the precursor ion signal, on the quantitative determination of pharmaceutical concentrations. Recovery of water samples spiked (fortified) with the suite of pharmaceuticals determined by this method typically was greater than 90 percent in reagent water, groundwater, drinking water, and surface water. 
Correction for ambient environmental concentrations of pharmaceuticals hampered the determination of absolute recoveries and method sensitivity of some compounds in some water types, particularly for wastewater effluent and influent samples. The method detection limit of each pharmaceutical was determined from analysis of pharmaceuticals fortified at multiple concentrations in reagent water. The calibration range for each compound typically spanned three orders of magnitude of concentration. Absolute sensitivity for some compounds, using isotope-dilution quantitation, ranged from 0.45 to 94.1 nanograms per liter, primarily as a result of the inherent ionization efficiency of each pharmaceutical in the electrospray ionization process. Holding-time studies indicate that acceptable recoveries of pharmaceuticals can be obtained from filtered water samples held at 4 °C for as long as 9 days after sample collection. Freezing samples to provide for storage for longer periods currently (2014) is under evaluation by the National Water Quality Laboratory.
Tanaka, Yukari; Yoshikawa, Yutaka; Yasui, Hiroyuki
2012-01-01
An ultra-high-sensitivity method for quantifying fexofenadine concentration in rat plasma samples by a multiple injection method (MIM) was developed for a microdose study. In this study, MIM involved continuous injections of multiple samples containing a single compound into the column of an ultra-HPLC (UHPLC) system, with temporary trapping of the analyte at the column head, followed by elution of the compound from the column and detection by a mass spectrometer. Fexofenadine, used as a model compound in this study, was extracted from the plasma samples by a protein precipitation method. Chromatographic separation was achieved on a reversed-phase C18 column by using a gradient method with 0.1% formic acid and 0.1% formic acid in acetonitrile as the mobile phases. The analyte was quantified in the positive-ion electrospray ionization mode using selected reaction monitoring. In this study, the analytical time per fexofenadine sample was approximately 2 min with the UHPLC system. The method exhibited a linear dynamic range of 5-5000 pg/mL for fexofenadine in rat plasma. The intra-day precisions ranged from 3.2 to 8.7% and the accuracies from 95.2 to 99.3%. The inter-day precisions and accuracies ranged from 3.5 to 8.4% and from 98.6 to 102.6%, respectively. The validated MIM was successfully applied to a microdose study in rats that received oral administration of 100 µg/kg fexofenadine. We suggest that this method might be beneficial for the quantification of fexofenadine concentrations in a microdose clinical study.
A single-sampling hair trap for mesocarnivores
Jonathan N. Pauli; Matthew B. Hamilton; Edward B. Crain; Steven W. Buskirk
2007-01-01
Although techniques to analyze and quantify DNA-based data have progressed, methods to noninvasively collect samples lag behind. Samples are generally collected from devices that permit coincident sampling of multiple individuals. Because of cross-contamination, substantive genotyping errors can arise. We developed a cost-effective (US$4.60/trap) single-capture hair...
Few studies have addressed the efficacy of composite sampling for measurement of indicator bacteria by QPCR. In this study, composite results were compared to single sample results for culture- and QPCR-based water quality monitoring. Composite results for both methods ...
Xin, Li-Ping; Chai, Xin-Sheng; Hu, Hui-Chao; Barnes, Donald G
2014-09-05
This work demonstrates a novel method for rapid determination of total solid content in viscous liquid (polymer-enriched) samples. The method is based on multiple headspace extraction gas chromatography (MHE-GC) from a headspace vial at a temperature above the boiling point of water, so that the trend of water loss from the tested liquid due to evaporation can be followed. With limited MHE-GC testing (e.g., 5 extractions) and a one-point calibration procedure (i.e., recording the weight difference before and after analysis), the total amount of water in the sample can be determined, from which the total solid content in the liquid can be calculated. A number of black liquors were analyzed by the new method, which yielded results that closely matched those of the reference method; i.e., the results of the two methods differed by no more than 2.3%. Compared with the reference method, the MHE-GC method is much simpler and more practical. Therefore, it is suitable for the rapid determination of the solid content in many polymer-containing liquid samples. Copyright © 2014 Elsevier B.V. All rights reserved.
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Yankov, Artem; Collins, Benjamin; Klein, Markus; ...
2012-01-01
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
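Stochastic sampling approaches such as XSUSA propagate input (cross-section) uncertainties by repeatedly perturbing the inputs and re-running the model; the spread of the outputs then estimates the output uncertainty. A toy sketch with a made-up one-group k-infinity balance; all cross-section values and uncertainties are illustrative, not from the benchmark:

```python
import random
import statistics

def k_inf(nu_sigma_f, sigma_a):
    # Toy one-group balance: k_inf = neutron production / absorption.
    return nu_sigma_f / sigma_a

random.seed(0)
samples = []
for _ in range(1000):
    # Perturb nominal cross sections with 2% relative 1-sigma uncertainty.
    nsf = 0.025 * random.gauss(1.0, 0.02)
    sa = 0.020 * random.gauss(1.0, 0.02)
    samples.append(k_inf(nsf, sa))

k_mean = statistics.mean(samples)   # near the nominal 0.025 / 0.020 = 1.25
k_unc = statistics.stdev(samples)   # propagated uncertainty in k_inf
```

The same loop structure scales to a real core simulator: each sample is one perturbed cross-section library and one full forward run, which is what makes the approach expensive but model-agnostic.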
NASA Astrophysics Data System (ADS)
Drukker, Karen; Hammes-Schiffer, Sharon
1997-07-01
This paper presents an analytical derivation of a multiconfigurational self-consistent-field (MC-SCF) solution of the time-independent Schrödinger equation for nuclear motion (i.e. vibrational modes). This variational MC-SCF method is designed for the mixed quantum/classical molecular dynamics simulation of multiple proton transfer reactions, where the transferring protons are treated quantum mechanically while the remaining degrees of freedom are treated classically. This paper presents a proof that the Hellmann-Feynman forces on the classical degrees of freedom are identical to the exact forces (i.e. the Pulay corrections vanish) when this MC-SCF method is used with an appropriate choice of basis functions. This new MC-SCF method is applied to multiple proton transfer in a protonated chain of three hydrogen-bonded water molecules. The ground state and the first three excited state energies and the ground state forces agree well with full configuration interaction calculations. Sample trajectories are obtained using adiabatic molecular dynamics methods, and nonadiabatic effects are found to be insignificant for these sample trajectories. The accuracy of the excited states will enable this MC-SCF method to be used in conjunction with nonadiabatic molecular dynamics methods. This application differs from previous work in that it is a real-time quantum dynamical nonequilibrium simulation of multiple proton transfer in a chain of water molecules.
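The Hellmann-Feynman result referenced above can be stated compactly; in the notation below, R is a classical coordinate, E(R) the quantum-subsystem energy, and Ψ the variationally optimized MC-SCF wavefunction (symbols chosen here for illustration):

```latex
% Force on a classical degree of freedom R from the quantum subsystem:
F_R \;=\; -\frac{\partial E}{\partial R}
    \;=\; -\left\langle \Psi \,\middle|\, \frac{\partial \hat{H}}{\partial R} \,\middle|\, \Psi \right\rangle
% The Pulay (wavefunction-response) corrections,
% -2\,\mathrm{Re}\left\langle \partial_R \Psi \,\middle|\, \hat{H} - E \,\middle|\, \Psi \right\rangle,
% vanish when the wavefunction is fully variational and the basis
% functions do not depend on R, which is the condition the paper proves.
```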
A Multiple-Tracer Approach for Identifying Sewage Sources to an Urban Stream System
Hyer, Kenneth Edward
2007-01-01
The presence of human-derived fecal coliform bacteria (sewage) in streams and rivers is recognized as a human health hazard. The source of these human-derived bacteria, however, is often difficult to identify and eliminate, because sewage can be delivered to streams through a variety of mechanisms, such as leaking sanitary sewers or private lateral lines, cross-connected pipes, straight pipes, sewer-line overflows, illicit dumping of septic waste, and vagrancy. A multiple-tracer study was conducted to identify site-specific sources of sewage in Accotink Creek, an urban stream in Fairfax County, Virginia, that is listed on the Commonwealth's priority list of impaired streams for violations of the fecal coliform bacteria standard. Beyond developing this multiple-tracer approach for locating sources of sewage inputs to Accotink Creek, the second objective of the study was to demonstrate how the multiple-tracer approach can be applied to other streams affected by sewage sources. The tracers used in this study were separated into indicator tracers, which are relatively simple and inexpensive to apply, and confirmatory tracers, which are relatively difficult and expensive to analyze. Indicator tracers include fecal coliform bacteria, surfactants, boron, chloride, chloride/bromide ratio, specific conductance, dissolved oxygen, turbidity, and water temperature. Confirmatory tracers include 13 organic compounds that are associated with human waste, including caffeine, cotinine, triclosan, a number of detergent metabolites, several fragrances, and several plasticizers. To identify sources of sewage to Accotink Creek, a detailed investigation of the Accotink Creek main channel, tributaries, and flowing storm drains was undertaken from 2001 to 2004. Sampling was conducted in a series of eight synoptic sampling events, each of which began at the most downstream site and extended upstream through the watershed and into the headwaters of each tributary. 
Using the synoptic sampling approach, 149 sites were sampled at least one time for indicator tracers; 52 of these sites also were sampled for confirmatory tracers at least one time. Through the analysis of multiple-tracer levels in the synoptic samples, three major sewage sources to the Accotink Creek stream network were identified, and several other minor sewage sources to the Accotink Creek system likely deserve additional investigation. Near the end of the synoptic sampling activities, three additional sampling methods were used to gain better understanding of the potential for sewage sources to the watershed. These additional sampling methods included optical brightener monitoring, intensive stream sampling using automated samplers, and additional sampling of several storm-drain networks. The samples obtained by these methods provided further understanding of possible sewage sources to the streams and a better understanding of the variability in the tracer concentrations at a given sampling site. Collectively, these additional sampling methods were a valuable complement to the synoptic sampling approach that was used for the bulk of this study. The study results provide an approach for local authorities to use in applying a relatively simple and inexpensive collection of tracers to locate sewage sources to streams. Although this multiple-tracer approach is effective in detecting sewage sources to streams, additional research is needed to better detect extremely low-volume sewage sources and better enable local authorities to identify the specific sources of the sewage once it is detected in a stream reach.
Guo, Ying; Little, Roderick J; McConnell, Daniel S
2012-01-01
Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.
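The attenuation that the naive analysis suffers, and the correction that external calibration data enable, can be seen in a small simulation. This sketches only the classical regression-calibration comparator named above, not the authors' multiple-imputation procedure (which also conditions on Y and Z and propagates imputation uncertainty via the multiple imputation combining rules); all variable names and numbers are illustrative:

```python
import random

random.seed(1)

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    return sum((a - xb) * (b - yb) for a, b in zip(x, y)) / \
           sum((a - xb) ** 2 for a in x)

# Main sample: Y and the error-prone W are observed; the true X is not.
n = 20000
X = [random.gauss(0, 1) for _ in range(n)]
W = [x + random.gauss(0, 0.5) for x in X]        # W measures X with error
Y = [1 + 2 * x + random.gauss(0, 1) for x in X]  # true coefficient is 2

# External calibration sample: X and W observed (no Y).
Xc = [random.gauss(0, 1) for _ in range(5000)]
Wc = [x + random.gauss(0, 0.5) for x in Xc]

beta_naive = ols_slope(W, Y)   # attenuated toward zero (about 1.6 here)

# Regression calibration: replace W by E[X | W] fitted on the calibration data.
lam = ols_slope(Wc, Xc)        # calibration slope, about 0.8 here
X_hat = [lam * w for w in W]
beta_rc = ols_slope(X_hat, Y)  # recovers the true coefficient, about 2
```

In this simple linear setting regression calibration removes the bias, but it does not account for the uncertainty in the calibration step; that is the gap the proposed imputation-based method addresses.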
Doll, Charles G; Wright, Cherylyn W; Morley, Shannon M; Wright, Bob W
2017-04-01
A modified version of the Direct LSC method to correct for quenching effect was investigated for the determination of bio-originated fuel content in fuel samples produced from multiple biological starting materials. The modified method was found to be accurate in determining the percent bio-originated fuel to within 5% of the actual value for samples with quenching effects ≤43%. Analysis of highly quenched samples was possible when diluted with the exception of one sample with a 100% quenching effect. Copyright © 2017. Published by Elsevier Ltd.
Yuan, Xiangjuan; Qiang, Zhimin; Ben, Weiwei; Zhu, Bing; Liu, Junxin
2014-09-01
This work described the development, optimization and validation of an analytical method for rapid detection of multiple-class pharmaceuticals in both municipal wastewater and sludge samples based on ultrasonic solvent extraction, solid-phase extraction, and ultra high performance liquid chromatography-tandem mass spectrometry quantification. The results indicated that the developed method could effectively extract all the target pharmaceuticals (25) in a single process and analyze them within 24 min. The recoveries of the target pharmaceuticals were in the range of 69%-131% for wastewater and 54%-130% for sludge at different spiked concentration levels. The method quantification limits in wastewater and sludge ranged from 0.02 to 0.73 ng/L and from 0.02 to 1.00 μg/kg, respectively. Subsequently, this method was validated and applied for residual pharmaceutical analysis in a wastewater treatment plant located in Beijing, China. All the target pharmaceuticals were detected in the influent samples with concentrations varying from 0.09 ng/L (tiamulin) to 15.24 μg/L (caffeine); meanwhile, up to 23 pharmaceuticals were detected in sludge samples with concentrations varying from 60 ng/kg (sulfamethizole) to 8.55 mg/kg (ofloxacin). The developed method demonstrated its selectivity, sensitivity, and reliability for detecting multiple-class pharmaceuticals in complex matrices such as municipal wastewater and sludge. Copyright © 2014. Published by Elsevier B.V.
Liu, Tian; Liu, Jing; de Rochefort, Ludovic; Spincemaille, Pascal; Khalidov, Ildar; Ledoux, James Robert; Wang, Yi
2011-09-01
Magnetic susceptibility varies among brain structures and provides insights into the chemical and molecular composition of brain tissues. However, the determination of an arbitrary susceptibility distribution from the measured MR signal phase is a challenging, ill-conditioned inverse problem. Although a previous method named calculation of susceptibility through multiple orientation sampling (COSMOS) has solved this inverse problem both theoretically and experimentally using multiple angle acquisitions, it is often impractical to carry out on human subjects. Recently, the feasibility of calculating the brain susceptibility distribution from a single-angle acquisition was demonstrated using morphology enabled dipole inversion (MEDI). In this study, we further improved the original MEDI method by sparsifying the edges in the quantitative susceptibility map that do not have a corresponding edge in the magnitude image. Quantitative susceptibility maps generated by the improved MEDI were compared qualitatively and quantitatively with those generated by calculation of susceptibility through multiple orientation sampling. The results show a high degree of agreement between MEDI and calculation of susceptibility through multiple orientation sampling, and the practicality of MEDI allows many potential clinical applications. Copyright © 2011 Wiley-Liss, Inc.
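The inverse problem is ill-conditioned because the unit dipole kernel in k-space vanishes on a conical surface (the magic angle), so field-to-susceptibility deconvolution divides by values near zero there. A sketch of the standard kernel (a well-known expression; the grid-free form and sample points below are illustrative):

```python
import math

def dipole_kernel(kx, ky, kz):
    """Unit dipole kernel D(k) = 1/3 - kz^2 / |k|^2, with B0 along z."""
    k2 = kx * kx + ky * ky + kz * kz
    if k2 == 0.0:
        return 0.0                       # common convention for the k = 0 term
    return 1.0 / 3.0 - (kz * kz) / k2

on_axis = dipole_kernel(0.0, 0.0, 1.0)           # -2/3, strongest response
in_plane = dipole_kernel(1.0, 0.0, 0.0)          # +1/3
# The kernel vanishes at the magic angle, where cos^2(theta) = 1/3:
magic = dipole_kernel(math.sqrt(2.0), 0.0, 1.0)  # exactly 0
```

Multiple-orientation sampling (COSMOS) fills in the missing cone by rotating the object so different regions of k-space pass near the zero surface; MEDI instead regularizes the single-orientation inversion using edge information from the magnitude image.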
Method for improving accuracy in full evaporation headspace analysis.
Xie, Wei-Qi; Chai, Xin-Sheng
2017-05-01
We report a new headspace analytical method in which multiple headspace extraction is incorporated with the full evaporation technique. The pressure uncertainty caused by the change in solid content of the samples has a great impact on the measurement accuracy of conventional full evaporation headspace analysis. The results (using ethanol solution as the model sample) showed that the present technique effectively minimizes this problem. The proposed full evaporation multiple headspace extraction technique is also automated and practical, and could greatly broaden the applications of full-evaporation-based headspace analysis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Foam generation and sample composition optimization for the FOAM-C experiment of the ISS
NASA Astrophysics Data System (ADS)
Carpy, R.; Picker, G.; Amann, B.; Ranebo, H.; Vincent-Bonnieu, S.; Minster, O.; Winter, J.; Dettmann, J.; Castiglione, L.; Höhler, R.; Langevin, D.
2011-12-01
At the end of 2009 and in early 2010, a sealed cell for foam generation and observation was designed and manufactured at the Astrium Friedrichshafen facilities. With this cell, different sample compositions of "wet foams" have been optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development is performed in the frame of the breadboarding development activities of the Experiment Container FOAM-C for operation in the Fluid Science Laboratory of the International Space Station (ISS). The sample cell supports multiple observation methods, such as Diffusing-Wave and Diffuse Transmission Spectrometry, Time-Resolved Correlation Spectroscopy [1] and microscope observation; all of these methods are applied in the cell with a relatively small experiment volume (<3 cm³). These units will be on-orbit replaceable sets that will allow the processing of multiple sample compositions (>40).
Toroid cavity/coil NMR multi-detector
Gerald, II, Rex E.; Meadows, Alexander D.; Gregar, Joseph S.; Rathke, Jerome W.
2007-09-18
An analytical device for rapid, non-invasive nuclear magnetic resonance (NMR) spectroscopy of multiple samples using a single spectrometer is provided. A modified toroid cavity/coil detector (TCD), and methods for conducting the simultaneous acquisition of NMR data for multiple samples including a protocol for testing NMR multi-detectors are provided. One embodiment includes a plurality of LC resonant circuits including spatially separated toroid coil inductors, each toroid coil inductor enveloping its corresponding sample volume, and tuned to resonate at a predefined frequency using a variable capacitor. The toroid coil is formed into a loop, where both ends of the toroid coil are brought into coincidence. Another embodiment includes multiple micro Helmholtz coils arranged on a circular perimeter concentric with a central conductor of the toroid cavity.
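Each toroid coil and its variable capacitor in the multi-detector form an LC resonant circuit tuned to a predefined frequency; the tuning relation is the standard one for an LC resonator. The component values below are illustrative, not taken from the patent:

```python
import math

def resonant_frequency(L_henry, C_farad):
    """Resonant frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

def capacitance_for(f_hz, L_henry):
    """Capacitor value that tunes an inductor L to resonate at frequency f."""
    return 1.0 / (L_henry * (2.0 * math.pi * f_hz) ** 2)

# Example: tune a hypothetical 100 nH toroid coil to 400 MHz:
C = capacitance_for(400e6, 100e-9)   # a few picofarads
f = resonant_frequency(100e-9, C)    # recovers 400 MHz
```

Giving each spatially separated coil its own tuning capacitor is what lets one spectrometer address several sample volumes, each resonating at its assigned frequency.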
MORRISON, LIAM J.; McCORMACK, GILLIAN; SWEENEY, LINDSAY; LIKEUFACK, ANNE C. L.; TRUC, PHILIPPE; TURNER, C. MICHAEL; TAIT, ANDY; MacLEOD, ANNETTE
2007-01-01
Whole genome amplification methods are a recently developed tool for amplifying DNA from limited template. We report its application in trypanosome infections, characterised by low parasitaemias. Multiple Displacement Amplification (MDA) amplifies DNA with a simple in vitro step, and was evaluated on mouse blood samples on FTA filter cards with known numbers of Trypanosoma brucei parasites. The data showed a twenty-fold increase in the number of PCRs possible per sample, using primers diagnostic for the multi-copy ribosomal ITS region or 177 bp repeats, and a twenty-fold increase in sensitivity over nested PCR against a single copy microsatellite. Using MDA for microsatellite genotyping caused allele dropout at low DNA concentrations, which was overcome by pooling multiple MDA reactions. The validity of using MDA was established with samples from Human African Trypanosomiasis patients. The use of MDA allows maximal use of finite DNA samples and may prove a valuable tool in studies where multiple reactions are necessary, such as population genetic analyses. PMID:17556624
Coagulation dynamics of a blood sample by multiple scattering analysis
NASA Astrophysics Data System (ADS)
Faivre, Magalie; Peltié, Philippe; Planat-Chrétien, Anne; Cosnier, Marie-Line; Cubizolles, Myriam; Nougier, Christophe; Négrier, Claude; Pouteau, Patrick
2011-05-01
We report a new technique to measure coagulation dynamics on whole-blood samples. The method relies on the analysis of the speckle figure resulting from a whole-blood sample mixed with coagulation reagent and introduced in a thin chamber illuminated with a coherent light. A dynamic study of the speckle reveals a typical behavior due to coagulation. We compare our measured coagulation times to a reference method obtained in a medical laboratory.
A novel strategy for isolation and determination of sugars and sugar alcohols from conifers.
Sarvin, B A; Seregin, A P; Shpigun, O A; Rodin, I A; Stavrianidi, A N
2018-06-02
The ultrasound-assisted extraction method for isolation of 17 sugars and sugar alcohols from conifers, with a subsequent hydrophilic interaction liquid chromatography-tandem mass spectrometry method for their determination, is proposed. The optimization of extraction parameters was carried out using a Taguchi L₉(3⁴) orthogonal array experimental design for the following parameters: methanol concentration in the extraction solution, extraction time, type of plant sample, and extraction temperature. The optimal ultrasound-assisted extraction conditions were: MeOH concentration, 30% (water, 70%); extraction time, 30 min; type of plant sample, II (ground leaves 2-4 mm long); extraction temperature, 60 °C. Pure water and acetonitrile were used as eluents in gradient elution mode to separate the analytes. Direct determination of multiple sugars and sugar alcohols was carried out using a mass spectrometric detector operated in a multiple reaction monitoring mode, providing detection limits in the range between 0.1 and 20 ng/mL and good analytical characteristics of the method without derivatization. The developed approach was validated by a multiple successive extraction method applied to test its performance on a series of 10 samples, i.e. 2 samples per each of 5 genera: Abies, Larix, Picea, Pinus (Pinaceae) and Juniperus (Cupressaceae), widely distributed in the boreal conifer forests of Eurasia. The novel strategy can be used for profiling of sugars and sugar alcohols in a wide range of plant species. Copyright © 2018. Published by Elsevier B.V.
Precision Efficacy Analysis for Regression.
ERIC Educational Resources Information Center
Brooks, Gordon P.
When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross-validity approach to select sample sizes…
LARGE RIVER ASSESSMENT METHODS FOR BENTHIC MACROINVERTEBRATES AND FISH
Multiple projects are currently underway to increase our understanding of the varying results of different sampling methods and designs used for the biological assessment and monitoring of large (boatable) rivers. Studies include methods used to assess fish, benthic macroinverte...
Guðmundsdóttir, S.; Applegate, Lynn M.; Arnason, I.O.; Kristmundsson, A.; Purcell, Maureen K.; Elliott, Diane G.
2017-01-01
Renibacterium salmoninarum, the causative agent of salmonid bacterial kidney disease (BKD), is endemic in many wild trout species in northerly regions. The aim of the present study was to determine the optimal R. salmoninarum sampling/testing strategy for wild brown trout (Salmo trutta L.) populations in Iceland. Fish were netted in a lake and multiple organs—kidney, spleen, gills, oesophagus and mid-gut—were sampled and subjected to five detection tests i.e. culture, polyclonal enzyme-linked immunosorbent assay (pELISA) and three different PCR tests. The results showed that each fish had encountered R. salmoninarum but there were marked differences between results obtained depending on organ and test. The bacterium was not cultured from any kidney sample while all kidney samples were positive by pELISA. At least one organ from 92.9% of the fish tested positive by PCR. The results demonstrated that the choice of tissue and diagnostic method can dramatically influence the outcome of R. salmoninarum surveys.
Minimal-assumption inference from population-genomic data
NASA Astrophysics Data System (ADS)
Weissman, Daniel; Hallatschek, Oskar
Samples of multiple complete genome sequences contain vast amounts of information about the evolutionary history of populations, much of it in the associations among polymorphisms at different loci. Current methods that take advantage of this linkage information rely on models of recombination and coalescence, limiting the sample sizes and populations that they can analyze. We introduce a method, Minimal-Assumption Genomic Inference of Coalescence (MAGIC), that reconstructs key features of the evolutionary history, including the distribution of coalescence times, by integrating information across genomic length scales without using an explicit model of recombination, demography or selection. Using simulated data, we show that MAGIC's performance is comparable to PSMC' on single diploid samples generated with standard coalescent and recombination models. More importantly, MAGIC can also analyze arbitrarily large samples and is robust to changes in the coalescent and recombination processes. Using MAGIC, we show that the inferred coalescence time histories of samples of multiple human genomes exhibit inconsistencies with a description in terms of an effective population size based on single-genome data.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
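The kind of simulation described can be sketched briefly: events are placed on a timeline, then momentary time sampling (MTS), partial-interval recording (PIR), and whole-interval recording (WIR) are scored against the true cumulative duration. All parameters below (event count, durations, interval length) are illustrative:

```python
import random

random.seed(42)

# Build a boolean timeline (1 s resolution) with randomly placed events.
T = 3600                        # observation period, seconds
timeline = [False] * T
for _ in range(40):             # 40 events of 20 s each
    start = random.randrange(T - 20)
    for t in range(start, start + 20):
        timeline[t] = True

true_frac = sum(timeline) / T   # true proportion of time the event occurred

interval = 30                   # score in 30-s intervals
n_int = T // interval
mts = pir = wir = 0
for i in range(n_int):
    chunk = timeline[i * interval:(i + 1) * interval]
    if chunk[-1]:               # MTS: score only the instant the interval ends
        mts += 1
    if any(chunk):              # PIR: score if the event occurred at all
        pir += 1
    if all(chunk):              # WIR: score only if the event filled the interval
        wir += 1

mts_frac, pir_frac, wir_frac = mts / n_int, pir / n_int, wir / n_int
# Characteristic biases: PIR overestimates and WIR underestimates cumulative
# duration, while MTS is approximately unbiased on average.
```

Repeating this over many random timelines and many combinations of interval, event, and observation durations yields the kind of error tables the study reports.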
Mahon, Michael B; Campbell, Kaitlin U; Crist, Thomas O
2017-06-01
Selection of proper sampling methods for measuring a community of interest is essential whether the study goals are to conduct a species inventory, environmental monitoring, or a manipulative experiment. Insect diversity studies often employ multiple collection methods at the expense of researcher time and funding. Ants (Formicidae) are widely used in environmental monitoring owing to their sensitivity to ecosystem changes. When sampling ant communities, two passive techniques are recommended in combination: pitfall traps and Winkler litter extraction. These recommendations are often based on studies from highly diverse tropical regions or when a species inventory is the goal. Studies in temperate regions often focus on measuring consistent community response along gradients of disturbance or among management regimes; therefore, multiple sampling methods may be unnecessary. We compared the effectiveness of pitfalls and Winkler litter extraction in an eastern temperate forest for measuring ant species richness, composition, and occurrence of ant functional groups in response to experimental manipulations of two key forest ecosystem drivers, white-tailed deer and an invasive shrub (Amur honeysuckle). We found no significant effect of sampling method on the outcome of the ecological experiment; however, we found differences between the two sampling methods in the resulting ant species richness and functional group occurrence. Litter samples approximated the overall combined species richness and composition, but pitfalls were better at sampling large-bodied (Camponotus) species. We conclude that employing both methods is essential only for species inventories or monitoring ants in the Cold-climate Specialists functional group. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Dynamics Sampling in Transition Pathway Space.
Zhou, Hongyu; Tao, Peng
2018-01-09
The minimum energy pathway contains important information describing the transition between two states on a potential energy surface (PES). Chain-of-states methods were developed to efficiently calculate minimum energy pathways connecting two stable states. In the chain-of-states framework, a series of structures are generated and optimized to represent the minimum energy pathway connecting two states. However, multiple pathways may exist between two states and should be identified to obtain a full view of the transitions. Therefore, we developed an enhanced sampling method, named the direct pathway dynamics sampling (DPDS) method, to facilitate exploration of a PES for multiple pathways connecting two stable states, as well as for additional minima and their associated transition pathways. In DPDS, molecular dynamics simulations are carried out on the target PES within a chain-of-states framework to directly sample the transition pathway space. The simulations are regulated by two parameters controlling the distance among states along the pathway and the smoothness of the pathway. One advantage of the chain-of-states framework is that no specific reaction coordinates are necessary to generate the reaction pathway, because such information is implicitly represented by the structures along the pathway. The chain-of-states setup in DPDS greatly enhances sampling of the high-energy space between the two end states, such as transition states. By removing the constraint on the end states of the pathway, DPDS will also sample pathways connecting minima on a PES in addition to the end points of the starting pathway. This feature makes DPDS an ideal method to directly explore transition pathway space. Three examples demonstrate the efficiency of DPDS in sampling the high-energy regions important for reactions on the PES.
Iachan, Ronaldo; H. Johnson, Christopher; L. Harding, Richard; Kyle, Tonja; Saavedra, Pedro; L. Frazier, Emma; Beer, Linda; L. Mattson, Christine; Skarbinski, Jacek
2016-01-01
Background: Health surveys of the general US population are inadequate for monitoring human immunodeficiency virus (HIV) infection because the relatively low prevalence of the disease (<0.5%) leads to small subpopulation sample sizes. Objective: To collect a nationally and locally representative probability sample of HIV-infected adults receiving medical care to monitor clinical and behavioral outcomes, supplementing the data in the National HIV Surveillance System. This paper describes the sample design and weighting methods for the Medical Monitoring Project (MMP) and provides estimates of the size and characteristics of this population. Methods: To develop a method for obtaining valid, representative estimates of the in-care population, we implemented a cross-sectional, three-stage design that sampled 23 jurisdictions, then 691 facilities, then 9,344 HIV patients receiving medical care, using probability-proportional-to-size methods. The data weighting process followed standard methods, accounting for the probabilities of selection at each stage and adjusting for nonresponse and multiplicity. Nonresponse adjustments accounted for differing response at both facility and patient levels. Multiplicity adjustments accounted for visits to more than one HIV care facility. Results: MMP used a multistage stratified probability sampling design that was approximately self-weighting in each of the 23 project areas and nationally. The probability sample represents the estimated 421,186 HIV-infected adults receiving medical care during January through April 2009. Methods were efficient (i.e., they induced small unequal-weighting effects and small standard errors for a range of weighted estimates). Conclusion: The information collected through MMP allows monitoring trends in clinical and behavioral outcomes and informs resource allocation for treatment and prevention activities. PMID:27651851
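The probability-proportional-to-size selection used at each stage of such a design can be sketched as follows. This is a generic systematic PPS routine with hypothetical names, not MMP's actual sampling code, and it assumes no unit is large enough to be selected with certainty:

```python
import random

def pps_sample(sizes, n, seed=0):
    """Systematic probability-proportional-to-size (PPS) sample of n units.

    Returns the selected indices and their base weights (the inverse of
    the inclusion probability n * size / total), the starting point for
    the kind of weighting described above before nonresponse and
    multiplicity adjustments.
    """
    total = sum(sizes)
    step = total / n
    start = random.Random(seed).uniform(0, step)
    points = [start + k * step for k in range(n)]  # systematic selection points
    selected, weights = [], []
    cum, i = 0.0, 0
    for idx, s in enumerate(sizes):
        cum += s
        while i < n and points[i] <= cum:  # point falls inside this unit
            selected.append(idx)
            weights.append(total / (n * s))  # 1 / inclusion probability
            i += 1
    return selected, weights
```

The returned base weights are what subsequent nonresponse and multiplicity adjustments would then modify.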
Hotspot and sampling analysis for effective maintenance and performance monitoring.
DOT National Transportation Integrated Search
2017-05-01
In this project, we propose two sampling methods addressing how much and where the agencies need to collect infrastructure condition data for accurate Level-of-Maintenance (LOM) estimation in maintenance network with single type or multiple ty...
Integrative Exploratory Analysis of Two or More Genomic Datasets.
Meng, Chen; Culhane, Aedin
2016-01-01
Exploratory analysis is an essential step in the analysis of high throughput data. Multivariate approaches such as correspondence analysis (CA), principal component analysis, and multidimensional scaling are widely used in the exploratory analysis of a single dataset. Modern biological studies often assay multiple types of biological molecules (e.g., mRNA, protein, phosphoproteins) on the same set of biological samples, thereby creating multiple different types of omics or multiassay data. Integrative exploratory analysis of these multiple omics datasets is required to leverage the potential of multiple omics studies. In this chapter, we describe the application of co-inertia analysis (CIA; for analyzing two datasets) and multiple co-inertia analysis (MCIA; for three or more datasets) to address this problem. These methods are powerful yet simple multivariate approaches that represent samples using a lower number of variables, allowing easier identification of the correlated structure in and between multiple high-dimensional datasets. Graphical representations can be employed for this purpose. In addition, the methods simultaneously project samples and variables (genes, proteins) onto the same lower-dimensional space, so the most variant variables from each dataset can be selected and associated with samples, which can further facilitate biological interpretation and pathway analysis. We applied CIA to explore the concordance between mRNA and protein expression in a panel of 60 tumor cell lines from the National Cancer Institute. In the same 60 cell lines, we used MCIA to perform a cross-platform comparison of mRNA gene expression profiles obtained on four different microarray platforms. Last, as an example of integrative analysis of multiassay or multi-omics data, we analyzed transcriptomic, proteomic, and phosphoproteomic data from induced pluripotent stem (iPS) and embryonic stem (ES) cell lines.
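As a rough illustration of the co-inertia idea for two datasets, the numpy sketch below (hypothetical names; not the implementation described in the chapter) takes the SVD of the cross-covariance between two column-centred matrices measured on the same samples and returns paired sample scores:

```python
import numpy as np

def co_inertia(X, Y, n_axes=2):
    """Bare-bones co-inertia analysis of two matrices with matched rows
    (samples): the SVD of the cross-covariance of the column-centred
    data yields paired axes that maximise the covariance between the
    two sets of sample scores.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    cross = Xc.T @ Yc / (len(X) - 1)   # p_x-by-p_y cross-covariance
    U, s, Vt = np.linalg.svd(cross, full_matrices=False)
    scores_x = Xc @ U[:, :n_axes]      # sample scores from dataset X
    scores_y = Yc @ Vt.T[:, :n_axes]   # sample scores from dataset Y
    return scores_x, scores_y, s[:n_axes]
```

Plotting `scores_x` against `scores_y` for each sample is the usual way to visualise the concordance between the two assays.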
Mark J. Ducey; Jeffrey H. Gove; Harry T. Valentine
2008-01-01
Perpendicular distance sampling (PDS) is a fast probability-proportional-to-size method for inventory of downed wood. However, previous development of PDS had limited the method to estimating only one variable (such as volume per hectare, or surface area per hectare) at a time. Here, we develop a general design-unbiased estimator for PDS. We then show how that...
USDA-ARS?s Scientific Manuscript database
To determine the genetic diversity within the baculovirus species Autographa calfornica multiple nucleopolyhedrovirus (AcMNPV; Baculoviridae: Alphabaculovirus), a PCR-based method was used to identify and classify baculoviruses found in virus samples from the lepidopteran host species A. californi...
Unsupervised Ensemble Anomaly Detection Using Time-Periodic Packet Sampling
NASA Astrophysics Data System (ADS)
Uchida, Masato; Nawata, Shuichi; Gu, Yu; Tsuru, Masato; Oie, Yuji
We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is used as the basis for comparison with the audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is used in a manner that differs from its intended purpose — the lossy nature of packet sampling is used to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that the proposed method has false positive and false negative rates in the detection of anomalies regarding TCP SYN packets comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by using ensemble anomaly detection that collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by using maximum- and minimum-based anomaly detection that effectively take advantage of the performance variations among the multiple baseline models. Testing using actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.
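The maximum-/minimum-based combination of multiple baseline models can be illustrated with a toy score-combining function; the scalar-score interface and names are assumptions for illustration, not the authors' code:

```python
def ensemble_alarm(scores, threshold, mode="max"):
    """Combine per-model anomaly scores for one time bin.

    `scores` holds the anomaly score from each baseline model trained on
    a different sampled sub-trace. "max" flags an anomaly if any model
    exceeds the threshold (sensitive); "min" flags one only if all
    models do (conservative), mirroring the maximum-/minimum-based
    detection described above.
    """
    combined = max(scores) if mode == "max" else min(scores)
    return combined > threshold
```

Switching between the two modes is how alarm sensitivity would be adjusted for the intended use.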
Satzke, Catherine; Dunne, Eileen M.; Porter, Barbara D.; Klugman, Keith P.; Mulholland, E. Kim
2015-01-01
Background: The pneumococcus is a diverse pathogen whose primary niche is the nasopharynx. Over 90 different serotypes exist, and nasopharyngeal carriage of multiple serotypes is common. Understanding pneumococcal carriage is essential for evaluating the impact of pneumococcal vaccines. Traditional serotyping methods are cumbersome and insufficient for detecting multiple serotype carriage, and there are few data comparing the new methods that have been developed over the past decade. We established the PneuCarriage project, a large, international multi-centre study dedicated to the identification of the best pneumococcal serotyping methods for carriage studies. Methods and Findings: Reference sample sets were distributed to 15 research groups for blinded testing. Twenty pneumococcal serotyping methods were used to test 81 laboratory-prepared (spiked) samples. The five top-performing methods were used to test 260 nasopharyngeal (field) samples collected from children in six high-burden countries. Sensitivity and positive predictive value (PPV) were determined for the test methods and the reference method (traditional serotyping of >100 colonies from each sample). For the alternate serotyping methods, the overall sensitivity ranged from 1% to 99% (reference method 98%), and PPV from 8% to 100% (reference method 100%), when testing the spiked samples. Fifteen methods had ≥70% sensitivity to detect the dominant (major) serotype, whilst only eight methods had ≥70% sensitivity to detect minor serotypes. For the field samples, the overall sensitivity ranged from 74.2% to 95.8% (reference method 93.8%), and PPV from 82.2% to 96.4% (reference method 99.6%). The microarray had the highest sensitivity (95.8%) and high PPV (93.7%). The major limitation of this study is that not all of the available alternative serotyping methods were included.
Conclusions: Most methods were able to detect the dominant serotype in a sample, but many performed poorly in detecting the minor serotype populations. Microarray with a culture amplification step was the top-performing method. Results from this comprehensive evaluation will inform future vaccine evaluation and impact studies, particularly in low-income settings, where pneumococcal disease burden remains high. PMID:26575033
Resolving occlusion and segmentation errors in multiple video object tracking
NASA Astrophysics Data System (ADS)
Cheng, Hsu-Yung; Hwang, Jenq-Neng
2009-02-01
In this work, we propose a method that integrates the Kalman filter with adaptive particle sampling for multiple video object tracking. The proposed framework is able to detect occlusion and segmentation error cases and perform adaptive particle sampling for accurate measurement selection. Compared with traditional particle filter based tracking methods, the proposed method generates particles only when necessary. With the concept of adaptive particle sampling, we can avoid the degeneracy problem, because the sampling position and range are dynamically determined by parameters that are updated by Kalman filters. There is no need to spend time processing particles with very small weights. The adaptive appearance model for an occluded object uses the prediction results of the Kalman filters to determine the region that should be updated, avoiding the problem of updating the appearance with inadequate information under occlusion. The experimental results show that a small number of particles is sufficient to achieve high positioning and scaling accuracy. Also, the use of adaptive appearance substantially improves the positioning and scaling accuracy of the tracking results.
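For illustration, a single predict/update cycle of a linear Kalman filter, whose predicted state and covariance could then centre and bound the adaptive particle sampling, might look like this (a generic numpy sketch with caller-supplied matrices, not the authors' implementation):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    In a tracker like the one described above, the predicted state and
    covariance can set the centre and spread for adaptive particle
    sampling, so particles are drawn only where the filter expects the
    object to be.
    """
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with measurement z
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

After the update, the posterior covariance `P` shrinks relative to the prior, which is what lets the particle-sampling range tighten as the track becomes more certain.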
Goodman, Laura B; McDonough, Patrick L; Anderson, Renee R; Franklin-Guild, Rebecca J; Ryan, James R; Perkins, Gillian A; Thachil, Anil J; Glaser, Amy L; Thompson, Belinda S
2017-11-01
Rapid screening for enteric bacterial pathogens in clinical environments is essential for biosecurity. Salmonella found in veterinary hospitals, particularly Salmonella enterica serovar Dublin, can pose unique challenges for culture and testing because of its poor growth. Multiple Salmonella serovars including Dublin are emerging threats to public health given increasing prevalence and antimicrobial resistance. We adapted an automated food testing method to veterinary samples and evaluated the performance of the method in a variety of matrices including environmental samples ( n = 81), tissues ( n = 52), feces ( n = 148), and feed ( n = 29). A commercial kit was chosen as the basis for this approach in view of extensive performance characterizations published by multiple independent organizations. A workflow was established for efficiently and accurately testing veterinary matrices and environmental samples by use of real-time PCR after selective enrichment in Rappaport-Vassiliadis soya (RVS) medium. Using this method, the detection limit for S. Dublin improved by 100-fold over subculture on selective agars (eosin-methylene blue, brilliant green, and xylose-lysine-deoxycholate). Overall, the procedure was effective in detecting Salmonella spp. and provided next-day results.
NASA Astrophysics Data System (ADS)
Sjöberg, Daniel; Larsson, Christer
2015-06-01
We present a method aimed at reducing uncertainties and instabilities when characterizing materials in waveguide setups. The method is based on measuring the S parameters for three different orientations of a rectangular sample block in a rectangular waveguide. The corresponding geometries are modeled in a commercial full-wave simulation program, taking any material parameters as input. The material parameters of the sample are found by minimizing the squared distance between measured and calculated S parameters. The information added by the different sample orientations is quantified using the Cramér-Rao lower bound. The flexibility of the method allows the determination of material parameters of an arbitrarily shaped sample that fits in the waveguide.
Protocol for Detection of Yersinia pestis in Environmental ...
Methods Report: This is the first open-access, detailed protocol available to all government departments and agencies, and their contractors, to detect Yersinia pestis, the pathogen that causes plague, from multiple environmental sample types including water. Each analytical method includes a step-by-step sample-processing procedure for each sample type. The protocol includes real-time PCR, traditional microbiological culture, and the Rapid Viability PCR (RV-PCR) analytical methods. For large-volume water samples it also includes an ultrafiltration-based sample-concentration procedure. Because the protocol is available without restriction to all government departments and agencies, and their contractors, the nation will now have increased laboratory capacity to analyze a large number of samples during a wide-area plague incident.
Shi, Xiaohu; Zhang, Jingfen; He, Zhiquan; Shang, Yi; Xu, Dong
2011-09-01
One of the major challenges in protein tertiary structure prediction is structure quality assessment. In many cases, protein structure prediction tools generate good structural models but fail to select the best models from a huge number of candidates as the final output. In this study, we developed a sampling-based machine-learning method to rank protein structural models by integrating multiple scores and features. First, features such as predicted secondary structure, solvent accessibility, and residue-residue contact information are integrated by two Radial Basis Function (RBF) models trained on different datasets. Then, the two RBF scores and five selected scoring functions developed by others, i.e., Opus-CA, Opus-PSP, DFIRE, RAPDF, and Cheng Score, are synthesized by a sampling method. Finally, another integrated RBF model ranks the structural models according to features of the sampling distribution. We tested the proposed method on two different datasets: the CASP server prediction models of all CASP8 targets and a set of models generated by our in-house software MUFOLD. The test results show that our method outperforms any individual scoring function in both best-model selection and overall correlation between the predicted and actual rankings of structural quality.
Ensemble-Biased Metadynamics: A Molecular Simulation Method to Sample Experimental Distributions
Marinelli, Fabrizio; Faraldo-Gómez, José D.
2015-01-01
We introduce an enhanced-sampling method for molecular dynamics (MD) simulations referred to as ensemble-biased metadynamics (EBMetaD). The method biases a conventional MD simulation to sample a molecular ensemble that is consistent with one or more probability distributions known a priori, e.g., experimental intramolecular distance distributions obtained by double electron-electron resonance or other spectroscopic techniques. To this end, EBMetaD adds an adaptive biasing potential throughout the simulation that discourages sampling of configurations inconsistent with the target probability distributions. The bias introduced is the minimum necessary to fulfill the target distributions, i.e., EBMetaD satisfies the maximum-entropy principle. Unlike other methods, EBMetaD does not require multiple simulation replicas or the introduction of Lagrange multipliers, and is therefore computationally efficient and straightforward in practice. We demonstrate the performance and accuracy of the method for a model system as well as for spin-labeled T4 lysozyme in explicit water, and show how EBMetaD reproduces three double electron-electron resonance distance distributions concurrently within a few tens of nanoseconds of simulation time. EBMetaD is integrated in the open-source PLUMED plug-in (www.plumed-code.org), and can therefore be readily used with multiple MD engines. PMID:26083917
Huang, Yunda; Huang, Ying; Moodie, Zoe; Li, Sue; Self, Steve
2014-01-01
Summary: In biomedical research, such as the development of vaccines for infectious diseases or cancer, measures from the same assay are often collected from multiple sources or laboratories. Measurement error that may vary between laboratories needs to be adjusted for when combining samples across laboratories. We incorporate such adjustment when comparing and combining independent samples from different labs via integration of external data collected on paired samples from the same two laboratories. We propose: 1) normalization of individual-level data from two laboratories to the same scale via the expectation of true measurements conditional on the observed; 2) comparison of mean assay values between two independent samples in the Main study, accounting for inter-source measurement error; and 3) sample size calculations for the paired-sample study so that hypothesis-testing error rates are appropriately controlled in the Main study comparison. Because the goal is not to estimate the true underlying measurements but to combine data on the same scale, our proposed methods do not require that the true values of the error-prone measurements be known in the external data. Simulation results under a variety of scenarios demonstrate satisfactory finite-sample performance of our proposed methods when measurement errors vary. We illustrate our methods using real ELISpot assay data generated by two HIV vaccine laboratories. PMID:22764070
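A much-simplified stand-in for step 1), putting one laboratory's measurements on another's scale using external paired data, is an ordinary least-squares calibration line; the function below is illustrative only and omits the conditional-expectation measurement-error modelling the authors actually propose:

```python
def calibrate(paired_a, paired_b):
    """Fit a linear map from lab A's scale to lab B's using external
    paired samples measured in both labs; returns a function that
    rescales new lab-A values. A naive sketch of cross-lab
    normalization, not the proposed estimator.
    """
    n = len(paired_a)
    ma = sum(paired_a) / n
    mb = sum(paired_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(paired_a, paired_b))
    var = sum((a - ma) ** 2 for a in paired_a)
    slope = cov / var
    intercept = mb - slope * ma
    return lambda x: intercept + slope * x
```

With both samples mapped onto one scale, the downstream mean comparison can proceed, though the authors' method additionally propagates the calibration uncertainty into the test.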
Taguchi, Y-h; Iwadate, Mitsuo; Umeyama, Hideaki
2015-04-30
Feature extraction (FE) is difficult, particularly if there are more features than samples, as small sample numbers often result in biased outcomes or overfitting. Furthermore, multiple sample classes often complicate FE because evaluating performance, as is usual in supervised FE, is generally harder than in the two-class problem. Developing unsupervised methods that are independent of sample classification would solve many of these problems. Two principal component analysis (PCA)-based FE methods were tested as sample-classification-independent unsupervised FE: variational Bayes PCA (VBPCA), which we extended to perform unsupervised FE, and conventional PCA (CPCA)-based unsupervised FE. VBPCA- and CPCA-based unsupervised FE both performed well when applied to simulated data and to a posttraumatic stress disorder (PTSD)-mediated heart disease data set that had multiple categorical class observations in mRNA/microRNA expression of stressed mouse heart. A critical set of PTSD miRNAs/mRNAs was identified that shows aberrant expression between treatment and control samples and significant negative correlation with one another. Moreover, greater stability and biological feasibility than conventional supervised FE were also demonstrated. Based on the results obtained, in silico drug discovery was performed as translational validation of the methods. Our two proposed unsupervised FE methods (CPCA- and VBPCA-based) worked well on simulated data and outperformed two conventional supervised FE methods on a real data set. Thus, the two methods appear equally well suited to FE on categorical multiclass data sets, with potential translational utility for in silico drug discovery.
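The label-free flavour of PCA-based FE can be sketched as follows: features are embedded into principal-component space and ranked by distance from the origin, with no sample classes used anywhere. The ranking score below is a simplified stand-in for the authors' actual selection criterion, and the names are illustrative:

```python
import numpy as np

def pca_feature_scores(X, n_pc=2):
    """Unsupervised feature scoring with PCA: rows of X are features,
    columns are samples. Each feature is embedded into the space of the
    leading principal components and scored by its distance from the
    origin, so features that vary strongly across samples score highest.
    """
    Xc = X - X.mean(axis=1, keepdims=True)   # centre each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    loadings = U[:, :n_pc] * s[:n_pc]        # feature coordinates in PC space
    return np.linalg.norm(loadings, axis=1)  # outlyingness score per feature
```

Selecting the top-scoring features then proceeds without ever consulting class labels, which is what makes the approach sample-classification independent.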
Investigating the Stability of Four Methods for Estimating Item Bias.
ERIC Educational Resources Information Center
Perlman, Carole L.; And Others
The reliability of item bias estimates was studied for four methods: (1) the transformed delta method; (2) Shepard's modified delta method; (3) Rasch's one-parameter residual analysis; and (4) the Mantel-Haenszel procedure. Bias statistics were computed for each sample using all methods. Data were from administration of multiple-choice items from…
Remily-Wood, Elizabeth R.; Benson, Kaaron; Baz, Rachid C.; Chen, Y. Ann; Hussein, Mohamad; Hartley-Brown, Monique A.; Sprung, Robert W.; Perez, Brianna; Liu, Richard Z.; Yoder, Sean; Teer, Jamie; Eschrich, Steven A.; Koomen, John M.
2014-01-01
Purpose: Quantitative mass spectrometry assays for immunoglobulins (Igs) are compared with existing clinical methods in samples from patients with plasma cell dyscrasias, e.g. multiple myeloma. Experimental design: Using LC-MS/MS data, Ig constant region peptides and transitions were selected for liquid chromatography-multiple reaction monitoring mass spectrometry (LC-MRM). Quantitative assays were used to assess Igs in serum from 83 patients. Results: LC-MRM assays quantify serum levels of Igs and their isoforms (IgG1–4, IgA1–2, IgM, IgD, and IgE, as well as kappa(κ) and lambda(λ) light chains). LC-MRM quantification has been applied to single samples from a patient cohort and a longitudinal study of an IgE patient undergoing treatment, to enable comparison with existing clinical methods. Proof-of-concept data for defining and monitoring variable region peptides are provided using the H929 multiple myeloma cell line and two MM patients. Conclusions and clinical relevance: LC-MRM assays targeting constant region peptides determine the type and isoform of the involved immunoglobulin and quantify its expression; the LC-MRM approach has improved sensitivity compared with the current clinical method, but slightly higher interassay variability. Detection of variable region peptides is a promising way to improve Ig quantification, which could produce a dramatic increase in sensitivity over existing methods, and could further complement current clinical techniques. PMID:24723328
Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.
Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian
2014-01-01
In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
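The "fast greedy incremental solution" for discrete SC can be illustrated in a few lines: starting from one candidate direction, repeatedly add whichever remaining candidate maximizes the minimal angular separation (with antipodal symmetry, as in dMRI) to the already-chosen set. This sketches only the greedy idea, not the MILP formulation, and the names are illustrative:

```python
import math

def greedy_spherical_code(candidates, k):
    """Greedy incremental discrete spherical code: from a list of unit
    vectors (tuples), repeatedly add the point whose smallest angular
    distance to the chosen set is largest. Antipodal points are treated
    as equivalent via the absolute value of the dot product.
    """
    def angle(u, v):
        dot = abs(sum(a * b for a, b in zip(u, v)))  # antipodal symmetry
        return math.acos(min(1.0, dot))

    chosen = [candidates[0]]
    while len(chosen) < k:
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: min(angle(c, p) for p in chosen))
        chosen.append(best)
    return chosen
```

Given three orthogonal axes and a diagonal direction as candidates, the greedy rule prefers the mutually orthogonal axes, since they maximize the minimal pairwise angle.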
Zulfiqar, Adnan; Morgan, Geraint; Turner, Nicholas W
2014-10-07
A method capable of screening for multiple steroids in urine has been developed, using a series of twelve structurally similar, commercially relevant compounds as target analytes. A molecularly imprinted solid-phase extraction clean-up step was used to make the sample suitable for injection onto a GC×GC-MS setup, with significant improvements observed compared to a commercially available C-18 material. Each individual steroid could be separated and identified using both the retention profile and the diagnostic fragment-ion monitoring abilities of the comprehensive chromatography-mass spectrometry method. Effective LODs of between 11.7 and 27.0 pg were calculated for individual steroids, equivalent to concentrations of between 0.234 and 0.540 ng mL(-1) in urine, while screening for multiple steroids was demonstrated using a 10 ng mL(-1) mixed sample. The method also removes the need for sample derivatisation, which speeds up the screening process.
Avelar, Daniel M; Linardi, Pedro M
2010-09-15
The recently developed Multiple Displacement Amplification technique (MDA) allows for the production of a large quantity of high quality genomic DNA from low amounts of the original DNA. The goal of this study was to evaluate the performance of the MDA technique to amplify genomic DNA of siphonapterids that have been stored for long periods in 70% ethanol at room temperature. We subjected each DNA sample to two different methodologies: (1) amplification of mitochondrial 16S sequences without MDA; (2) amplification of 16S after MDA. All the samples obtained from these procedures were then sequenced. Only 4 samples (15.4%) subjected to method 1 showed amplification. In contrast, the application of MDA (method 2) improved performance substantially, with 24 samples (92.3%) showing amplification, a statistically significant difference. Interestingly, one of the samples successfully amplified with this method was originally collected in 1909. All of the sequenced samples displayed satisfactory results in quality evaluations (Phred ≥ 20) and good similarities, as identified with the BLASTn tool. Our results demonstrate that the use of MDA may be an effective tool in molecular studies involving specimens of fleas that have traditionally been considered inadequately preserved for such purposes.
Hirose, Makoto; Shimomura, Kei; Suzuki, Akihiro; Burdet, Nicolas; Takahashi, Yukio
2016-05-30
The sample size must be less than the diffraction-limited focal spot size of the incident beam in single-shot coherent X-ray diffraction imaging (CXDI) based on a diffract-before-destruction scheme using X-ray free electron lasers (XFELs). This is currently a major limitation preventing its wider applications. We here propose multiple defocused CXDI, in which isolated objects are sequentially illuminated with a divergent beam larger than the objects and the coherent diffraction pattern of each object is recorded. This method can simultaneously reconstruct both objects and a probe from the coherent X-ray diffraction patterns without any a priori knowledge. We performed a computer simulation of the proposed method and then successfully demonstrated it in a proof-of-principle experiment at SPring-8. The proposed method allows us to not only observe broad samples but also characterize focused XFEL beams.
Multiple site receptor modeling with a minimal spanning tree combined with a Kohonen neural network
NASA Astrophysics Data System (ADS)
Hopke, Philip K.
1999-12-01
A combination of two pattern recognition methods has been developed that allows the generation of geographical emission maps from multivariate environmental data. In such a projection into a visually interpretable subspace by a Kohonen Self-Organizing Feature Map, the topology of the higher-dimensional variable space can be preserved, but part of the information about the correct neighborhood among the sample vectors will be lost. This can partly be compensated for by an additional projection of Prim's Minimal Spanning Tree into the trained neural network. This new environmental receptor modeling technique has been adapted for multiple sampling sites. The behavior of the method has been studied using simulated data. Subsequently, the method has been applied to mapping data sets from the Southern California Air Quality Study. The projection of 17 chemical variables measured at up to 8 sampling sites provided a 2D, visually interpretable, geometrically reasonable arrangement of air pollution sources in the South Coast Air Basin.
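The minimal-spanning-tree half of this pipeline can be illustrated on its own: build the MST over the pairwise distances between sample composition vectors, so that tree edges link the samples that are nearest neighbours in the full-dimensional space. A toy sketch using scipy (the random 8-site × 17-variable matrix is an invented stand-in, not the study's data, and the SOM step is omitted):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
# Toy stand-in for 17 chemical variables measured at 8 sampling sites.
samples = rng.random((8, 17))

# Pairwise Euclidean distances between the sample vectors.
dist = squareform(pdist(samples))

# MST over the complete distance graph; its edges connect samples that
# are nearest neighbours in the 17-dimensional variable space.
mst = minimum_spanning_tree(dist).toarray()
edges = np.argwhere(mst > 0)
print(len(edges))  # a spanning tree over 8 nodes has 7 edges
```

Projecting these tree edges onto a trained SOM grid is what restores the neighborhood information the map projection loses.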
Graves, Tabitha A.; Royle, J. Andrew; Kendall, Katherine C.; Beier, Paul; Stetz, Jeffrey B.; Macleod, Amy C.
2012-01-01
Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. 
The benefits of increased precision should be weighed against those risks. The analysis framework presented here will be useful for other species exhibiting heterogeneity by detection method.
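The N-mixture models used here marginalize the latent local abundance N out of repeated binomial detection counts. A minimal single-site, single-method sketch of that likelihood (Poisson prior on N; the counts and parameter values below are illustrative, not from the grizzly bear study):

```python
import math

def nmixture_loglik(counts, lam, p, n_max=200):
    """Log-likelihood of repeat-survey counts y_1..y_J at one site under
    N ~ Poisson(lam), y_j | N ~ Binomial(N, p), with N summed out."""
    terms = []
    for N in range(max(counts), n_max + 1):
        # log Poisson(N; lam)
        lp = N * math.log(lam) - lam - math.lgamma(N + 1)
        for y in counts:
            # log Binomial(y; N, p)
            lp += (math.lgamma(N + 1) - math.lgamma(y + 1)
                   - math.lgamma(N - y + 1)
                   + y * math.log(p) + (N - y) * math.log(1 - p))
        terms.append(lp)
    m = max(terms)  # log-sum-exp for numerical stability
    return m + math.log(sum(math.exp(t - m) for t in terms))

# Counts from three repeat surveys at one site.
print(nmixture_loglik([3, 2, 4], lam=5.0, p=0.6))
```

Joint modeling of two detection methods adds a second binomial layer with its own detection probability over the same latent N, which is why pooling data tightens the covariate estimates.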
Hansen, Heidi; Ben-David, Merav; McDonald, David B
2008-03-01
In noninvasive genetic sampling, when genotyping error rates are high and recapture rates are low, misidentification of individuals can lead to overestimation of population size. Thus, estimating genotyping errors is imperative. Nonetheless, conducting multiple polymerase chain reactions (PCRs) at multiple loci is time-consuming and costly. To address the controversy regarding the minimum number of PCRs required for obtaining a consensus genotype, we compared the performance of two genotyping protocols (multiple-tubes and 'comparative method') with respect to genotyping success and error rates. Our results from 48 faecal samples of river otters (Lontra canadensis) collected in Wyoming in 2003, and from blood samples of five captive river otters amplified with four different primers, suggest that use of the comparative genotyping protocol can minimize the number of PCRs per locus. For all but five samples at one locus, the same consensus genotypes were reached with fewer PCRs and with reduced error rates with this protocol compared to the multiple-tubes method. This finding is reassuring because genotyping errors can occur at relatively high rates even in tissues such as blood and hair. In addition, we found that loci that amplify readily and yield consensus genotypes may still exhibit high error rates (7-32%) and that amplification with different primers resulted in different types and rates of error. Thus, assigning a genotype based on a single PCR for several loci could result in misidentification of individuals. We recommend that programs designed to statistically assign consensus genotypes should be modified to allow the different treatment of heterozygotes and homozygotes intrinsic to the comparative method. © 2007 The Authors.
Using DNA to track the origin of the largest ivory seizure since the 1989 trade ban.
Wasser, Samuel K; Mailand, Celia; Booth, Rebecca; Mutayoba, Benezeth; Kisamo, Emily; Clark, Bill; Stephens, Matthew
2007-03-06
The illegal ivory trade recently intensified to the highest levels ever reported. Policing this trafficking has been hampered by the inability to reliably determine geographic origin of contraband ivory. Ivory can be smuggled across multiple international borders and along numerous trade routes, making poaching hotspots and potential trade routes difficult to identify. This fluidity also makes it difficult to refute a country's denial of poaching problems. We extend an innovative DNA assignment method to determine the geographic origin(s) of large elephant ivory seizures. A Voronoi tessellation method is used that utilizes genetic similarities across tusks to simultaneously infer the origin of multiple samples that could have one or more common origin(s). We show that this joint analysis performs better than sample-by-sample methods in assigning sample clusters of known origin. The joint method is then used to infer the geographic origin of the largest ivory seizure since the 1989 ivory trade ban. Wildlife authorities initially suspected that this ivory came from multiple locations across forest and savanna Africa. However, we show that the ivory was entirely from savanna elephants, most probably originating from a narrow east-to-west band of southern Africa, centered on Zambia. These findings enabled law enforcement to focus their investigation to a smaller area and fewer trade routes and led to changes within the Zambian government to improve antipoaching efforts. Such outcomes demonstrate the potential of genetic analyses to help combat the expanding wildlife trade by identifying origin(s) of large seizures of contraband ivory. Broader applications to wildlife trade are discussed.
ERIC Educational Resources Information Center
Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan
2012-01-01
Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…
Salem, Shady; Chang, Sam S; Clark, Peter E; Davis, Rodney; Herrell, S Duke; Kordan, Yakup; Wills, Marcia L; Shappell, Scott B; Baumgartner, Roxelyn; Phillips, Sharon; Smith, Joseph A; Cookson, Michael S; Barocas, Daniel A
2010-10-01
Whole mount processing is more resource intensive than routine systematic sampling of radical retropubic prostatectomy specimens. We compared whole mount and systematic sampling for detecting pathological outcomes, and compared the prognostic value of pathological findings across pathological methods. We included men (608 whole mount and 525 systematic sampling samples) with no prior treatment who underwent radical retropubic prostatectomy at Vanderbilt University Medical Center between January 2000 and June 2008. We used univariate and multivariate analysis to compare the pathological outcome detection rate between pathological methods. Kaplan-Meier curves and the log rank test were used to compare the prognostic value of pathological findings across pathological methods. There were no significant differences between the whole mount and the systematic sampling groups in detecting extraprostatic extension (25% vs 30%), positive surgical margins (31% vs 31%), pathological Gleason score less than 7 (49% vs 43%), 7 (39% vs 43%) or greater than 7 (12% vs 13%), seminal vesicle invasion (8% vs 10%) or lymph node involvement (3% vs 5%). Tumor volume was higher in the systematic sampling group and whole mount detected more multiple surgical margins (each p <0.01). There were no significant differences in the likelihood of biochemical recurrence between the pathological methods when patients were stratified by pathological outcome. Except for estimated tumor volume and multiple margins, whole mount and systematic sampling yield similar pathological information. Each method stratifies patients into comparable risk groups for biochemical recurrence. Thus, while whole mount is more resource intensive, it does not appear to result in improved detection of clinically important pathological outcomes or prognostication. Copyright © 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Excited-State Effective Masses in Lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
George Fleming, Saul Cohen, Huey-Wen Lin
2009-10-01
We apply black-box methods, i.e. where the performance of the method does not depend upon initial guesses, to extract excited-state energies from Euclidean-time hadron correlation functions. In particular, we extend the widely used effective-mass method to incorporate multiple correlation functions and produce effective mass estimates for multiple excited states. In general, these excited-state effective masses will be determined by finding the roots of some polynomial. We demonstrate the method using sample lattice data to determine excited-state energies of the nucleon and compare the results to other energy-level finding techniques.
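For two states, the black-box idea reduces to a linear recursion on the correlator: a two-exponential C(t) satisfies C(t+2) = a·C(t+1) + b·C(t), and the effective masses are minus the logarithms of the roots of x² − ax − b. A toy sketch on synthetic data (the amplitudes and masses are invented, not lattice results):

```python
import math
import numpy as np

# Synthetic two-state correlator C(t) = A0 e^{-m0 t} + A1 e^{-m1 t}.
m0, m1 = 0.5, 1.2
A0, A1 = 1.0, 0.7
C = [A0 * math.exp(-m0 * t) + A1 * math.exp(-m1 * t) for t in range(4)]

# Solve the 2x2 linear system for the recursion coefficients (a, b):
#   C(2) = a C(1) + b C(0),   C(3) = a C(2) + b C(1)
M = np.array([[C[1], C[0]],
              [C[2], C[1]]])
a, b = np.linalg.solve(M, np.array([C[2], C[3]]))

# Roots of x^2 - a x - b are e^{-m} for the two states.
roots = np.roots([1.0, -a, -b])
masses = sorted(-np.log(roots.real))
print(masses)  # recovers approximately [0.5, 1.2]
```

On noisy lattice data the same construction is applied time slice by time slice, yielding excited-state effective-mass plateaus instead of exact values.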
Determinations of pesticides in food are often complicated by the presence of fats and require multiple cleanup steps before analysis. Cost-effective analytical methods are needed for conducting large-scale exposure studies. We examined two extraction methods, supercritical flu...
The effect of sampling techniques used in the multiconfigurational Ehrenfest method
NASA Astrophysics Data System (ADS)
Symonds, C.; Kattirtzi, J. A.; Shalashilin, D. V.
2018-05-01
In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.
The effect of sampling techniques used in the multiconfigurational Ehrenfest method.
Symonds, C; Kattirtzi, J A; Shalashilin, D V
2018-05-14
In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.
Hu, Xiaofeng; Hu, Rui; Zhang, Zhaowei; Li, Peiwu; Zhang, Qi; Wang, Min
2016-09-01
A sensitive and specific immunoaffinity column to clean up and isolate multiple mycotoxins was developed along with a rapid one-step sample preparation procedure for ultra-performance liquid chromatography-tandem mass spectrometry analysis. Monoclonal antibodies against aflatoxin B1, aflatoxin B2, aflatoxin G1, aflatoxin G2, zearalenone, ochratoxin A, sterigmatocystin, and T-2 toxin were coupled to microbeads for mycotoxin purification. We optimized a homogenization and extraction procedure as well as column loading and elution conditions to maximize recoveries from complex feed matrices. This method allowed rapid, simple, and simultaneous determination of mycotoxins in feeds with a single chromatographic run. Detection limits for these toxins ranged from 0.006 to 0.12 ng mL(-1), and quantitation limits ranged from 0.06 to 0.75 ng mL(-1). Concentration curves were linear from 0.12 to 40 μg kg(-1) with correlation coefficients of R² > 0.99. Intra-assay and inter-assay comparisons indicated excellent repeatability and reproducibility of the multiple immunoaffinity columns. As a proof of principle, 80 feed samples were tested and several contained multiple mycotoxins. This method is sensitive, rapid, and durable enough for multiple mycotoxin determinations that fulfill European Union and Chinese testing criteria.
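Detection and quantitation limits of this kind are commonly computed from the calibration curve as 3.3·σ/slope and 10·σ/slope, where σ is the residual standard deviation of the fit. A generic numpy sketch of that calculation (synthetic calibration points, not the paper's data, and not necessarily the authors' exact LOD definition):

```python
import numpy as np

# Synthetic calibration: concentration (ng/mL) vs. instrument response.
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 20.0, 40.0])
resp = np.array([12.0, 61.0, 118.0, 603.0, 1195.0, 2410.0, 4790.0])

# Least-squares line; residual SD with 2 fitted parameters removed.
slope, intercept = np.polyfit(conc, resp, 1)
residual_sd = np.std(resp - (slope * conc + intercept), ddof=2)

lod = 3.3 * residual_sd / slope   # limit of detection
loq = 10.0 * residual_sd / slope  # limit of quantitation
print(round(lod, 3), round(loq, 3))
```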
NASA Astrophysics Data System (ADS)
Chen, Ye; Wolanyk, Nathaniel; Ilker, Tunc; Gao, Shouguo; Wang, Xujing
Methods developed based on bifurcation theory have demonstrated their potential in driving network identification for complex human diseases, including the work by Chen et al. Recently, bifurcation theory has been successfully applied to model cellular differentiation. However, one often faces a technical challenge in driving network prediction: time-course cellular differentiation studies often contain only one sample at each time point, while driving network prediction typically requires multiple samples at each time point to infer the variation and interaction structures of candidate genes for the driving network. In this study, we investigate several methods to identify both the critical time point and the driving network through examination of how each time point affects the autocorrelation and phase locking. We apply these methods to a high-throughput sequencing (RNA-Seq) dataset of 42 subsets of thymocytes and mature peripheral T cells at multiple time points during their differentiation (GSE48138 from GEO). We compare the predicted driving genes with known transcription regulators of cellular differentiation. We will discuss the advantages and limitations of our proposed methods, as well as potential further improvements.
NASA Astrophysics Data System (ADS)
Toparli, M. Burak; Fitzpatrick, Michael E.; Gungor, Salih
2015-09-01
In this study, residual stress fields, including the near-surface residual stresses, were determined for an Al7050-T7451 sample after laser peening. The contour method was applied to measure one component of the residual stress, and the relaxed stresses on the cut surfaces were then measured by X-ray diffraction. This allowed calculation of the three orthogonal stress components using the superposition principle. The near-surface results were validated with results from incremental hole drilling and conventional X-ray diffraction. The results demonstrate that multiple residual stress components can be determined using a combination of the contour method and another technique. If the measured stress components are congruent with the principal stress axes in the sample, then this allows for determination of the complete stress tensor.
Universal nucleic acids sample preparation method for cells, spores and their mixture
Bavykin, Sergei [Darien, IL
2011-01-18
The present invention relates to a method for extracting nucleic acids from biological samples. More specifically, the invention relates to a universal method for extracting nucleic acids from unidentified biological samples. An advantage of the presently invented method is its ability to effectively and efficiently extract nucleic acids from a variety of different cell types including, but not limited to, prokaryotic or eukaryotic cells and/or recalcitrant organisms (i.e., spores). Unlike prior art methods, which are focused on extracting nucleic acids from vegetative cells or spores, the present invention effectively extracts nucleic acids from spores, multiple cell types, or mixtures thereof using a single method. Importantly, the invented method has demonstrated an ability to extract nucleic acids from spores and vegetative bacterial cells with similar levels of effectiveness. The invented method employs a multi-step protocol which erodes the cell structure of the biological sample; isolates, labels, and fragments nucleic acids; and purifies labeled samples from the excess of dye.
Accurate and fast multiple-testing correction in eQTL studies.
Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I W; Raychaudhuri, Soumya; Ophoff, Roel A; Stranger, Barbara E; Eskin, Eleazar; Han, Buhm
2015-06-04
In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the main entities in biological processes. To detect eGenes, one typically focuses on the genetic variant with the minimum p value among all variants in cis with a gene and corrects for multiple testing to obtain a gene-level p value. For performing multiple-testing correction, a permutation test is widely used. Because of the growing sample sizes of eQTL studies, however, the permutation test has become a computational bottleneck. In this paper, we propose an efficient approach for correcting for multiple testing and assessing eGene p values by utilizing a multivariate normal distribution. Our approach properly takes into account the linkage-disequilibrium structure among variants, and its time complexity is independent of sample size. By applying our small-sample correction techniques, our method achieves high accuracy in both small and large studies. We have shown that our method consistently produces extremely accurate p values (accuracy > 98%) for three human eQTL datasets with different sample sizes and SNP densities: the Genotype-Tissue Expression pilot dataset, the multi-region brain dataset, and the HapMap 3 dataset. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
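The key move is to evaluate the distribution of the minimum p value over LD-correlated variants under a multivariate normal null rather than by permutation. A Monte Carlo sketch of that idea (the paper's analytic evaluation and small-sample correction are omitted; the 3-variant correlation matrix and observed p value are invented):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# LD (correlation) matrix among 3 cis variants for one gene.
R = np.array([[1.0, 0.8, 0.3],
              [0.8, 1.0, 0.4],
              [0.3, 0.4, 1.0]])

# Observed minimum nominal p value across the variants.
p_min = 0.01
z_obs = norm.isf(p_min / 2)  # two-sided z threshold

# Null: association z-scores ~ MVN(0, R). The gene-level p value is the
# chance the most significant variant beats the observed minimum.
z = rng.multivariate_normal(np.zeros(3), R, size=200_000)
p_gene = np.mean(np.abs(z).max(axis=1) >= z_obs)
print(p_gene)
```

The estimate lands between the nominal minimum p value and its Bonferroni bound (3 × p_min), with LD pulling it below the independent-test value.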
NASA Astrophysics Data System (ADS)
Isnard, H.; Aubert, M.; Blanchet, P.; Brennetot, R.; Chartier, F.; Geertsen, V.; Manuguerra, F.
2006-02-01
Strontium-90 is one of the most important fission products generated in the nuclear industry. In the research field concerning nuclear waste disposal in deep geological environments, it is necessary to quantify accurately and precisely its concentration (or the 90Sr/238U atomic ratio) in irradiated fuels. To obtain accurate analysis of radioactive 90Sr, mass spectrometry associated with isotope dilution is the most appropriate method. However, in nuclear fuel samples the interference with 90Zr must first be eliminated. An inductively coupled plasma mass spectrometer with multiple collection, equipped with a hexapole collision cell, has been used to eliminate the 90Sr/90Zr interference by addition of oxygen in the collision cell as a reactant gas. Zr+ ions are converted into ZrO+, whereas Sr+ ions are not reactive. A mixed solution, prepared from a solution of enriched 84Sr and a solution of enriched 235U, was then used to quantify the 90Sr/238U ratio in spent fuel sample solutions using the double isotope dilution method. This paper shows the results, the reproducibility and the uncertainties that can be obtained with this method to quantify the 90Sr/238U atomic ratio in UOX (uranium oxide) and MOX (mixed oxide) spent fuel samples, using the collision cell of an inductively coupled plasma mass spectrometer with multiple collection to perform the 90Sr/90Zr separation. A comparison with the results obtained by inductively coupled plasma mass spectrometry with multiple collection after a chemical separation of strontium from zirconium using a Sr spec resin (Eichrom) has been performed. Finally, to validate the analytical procedure developed, measurements of the same samples have been performed by thermal ionization mass spectrometry, used as an independent technique, after chemical separation of Sr.
Statistical technique for analysing functional connectivity of multiple spike trains.
Masud, Mohammad Shahed; Borisyuk, Roman
2011-03-15
A new statistical technique, the Cox method, used for analysing functional connectivity of simultaneously recorded multiple spike trains is presented. This method is based on the theory of modulated renewal processes and it estimates a vector of influence strengths from multiple spike trains (called reference trains) to the selected (target) spike train. Selecting another target spike train and repeating the calculation of the influence strengths from the reference spike trains enables researchers to find all functional connections among multiple spike trains. In order to study functional connectivity an "influence function" is identified. This function recognises the specificity of neuronal interactions and reflects the dynamics of postsynaptic potential. In comparison to existing techniques, the Cox method has the following advantages: it does not use bins (binless method); it is applicable to cases where the sample size is small; it is sufficiently sensitive such that it estimates weak influences; it supports the simultaneous analysis of multiple influences; it is able to identify a correct connectivity scheme in difficult cases of "common source" or "indirect" connectivity. The Cox method has been thoroughly tested using multiple sets of data generated by the neural network model of the leaky integrate and fire neurons with a prescribed architecture of connections. The results suggest that this method is highly successful for analysing functional connectivity of simultaneously recorded multiple spike trains. Copyright © 2011 Elsevier B.V. All rights reserved.
Mathematical Analysis of a Multiple-Look Concept Identification Model.
ERIC Educational Resources Information Center
Cotton, John W.
The behavior of focus samples central to the multiple-look model of Trabasso and Bower is examined by three methods. First, exact probabilities of success conditional upon a certain brief history of stimulation are determined. Second, possible states of the organism during the experiment are defined and a transition matrix for those states…
Rater Perceptions of Bias Using the Multiple Mini-Interview Format: A Qualitative Study
ERIC Educational Resources Information Center
Alweis, Richard L.; Fitzpatrick, Caroline; Donato, Anthony A.
2015-01-01
Introduction: The Multiple Mini-Interview (MMI) format appears to mitigate individual rater biases. However, the format itself may introduce structural systematic bias, favoring extroverted personality types. This study aimed to gain a better understanding of these biases from the perspective of the interviewer. Methods: A sample of MMI…
A rapid random-sampling method was used to relate densities of juvenile winter flounder to multiple scales of habitat variation in Narragansett Bay and two nearby coastal lagoons in Rhode Island. We used a 1-m beam trawl with attached video camera, continuous GPS track overlay, ...
Micro/Nano-scale Strain Distribution Measurement from Sampling Moiré Fringes.
Wang, Qinghua; Ri, Shien; Tsuda, Hiroshi
2017-05-23
This work describes the measurement procedure and principles of a sampling moiré technique for full-field micro/nano-scale deformation measurements. The developed technique can be performed in two ways: using the reconstructed multiplication moiré method or the spatial phase-shifting sampling moiré method. When the specimen grid pitch is around 2 pixels, 2-pixel sampling moiré fringes are generated to reconstruct a multiplication moiré pattern for a deformation measurement. Both the displacement and strain sensitivities are twice as high as in the traditional scanning moiré method in the same wide field of view. When the specimen grid pitch is around or greater than 3 pixels, multi-pixel sampling moiré fringes are generated, and a spatial phase-shifting technique is combined for a full-field deformation measurement. The strain measurement accuracy is significantly improved, and automatic batch measurement is easily achievable. Both methods can measure the two-dimensional (2D) strain distributions from a single-shot grid image without rotating the specimen or scanning lines, as in traditional moiré techniques. As examples, the 2D displacement and strain distributions, including the shear strains of two carbon fiber-reinforced plastic specimens, were measured in three-point bending tests. The proposed technique is expected to play an important role in the non-destructive quantitative evaluations of mechanical properties, crack occurrences, and residual stresses of a variety of materials.
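The spatial phase-shifting variant can be illustrated in one dimension: down-sample a grating whose pitch matches the sampling pitch T at the T successive pixel offsets, and the discrete Fourier transform of those T moiré intensities yields the grating phase, hence the sub-pixel displacement. A toy sketch (synthetic cosine grating; the pitch and shift values are invented):

```python
import numpy as np

T = 4          # sampling pitch = grid pitch, in pixels
u = 0.7        # true sub-pixel displacement of the grating
x = np.arange(64)

# Recorded 1-D grid image: cosine grating shifted by u pixels.
I = 100.0 + 50.0 * np.cos(2 * np.pi * (x - u) / T)

# Spatial phase shifting: average the image sampled at offsets k = 0..T-1.
Ik = np.array([I[k::T].mean() for k in range(T)])

# First DFT bin of the T-point sequence gives the moiré phase 2*pi*u/T.
k = np.arange(T)
phase = np.arctan2((Ik * np.sin(2 * np.pi * k / T)).sum(),
                   (Ik * np.cos(2 * np.pi * k / T)).sum())
u_est = phase * T / (2 * np.pi)
print(u_est)  # close to 0.7
```

Strain is then the spatial gradient of the phase difference between the deformed and undeformed images, which is why a single grid image suffices for a full-field 2D measurement.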
Mohandesan, Elmira; Prost, Stefan; Hofreiter, Michael
2012-01-01
A major challenge for ancient DNA (aDNA) studies using museum specimens is that sampling procedures usually involve at least the partial destruction of each specimen used, such as the removal of skin, pieces of bone, or a tooth. Recently, a nondestructive DNA extraction method was developed for the extraction of amplifiable DNA fragments from museum specimens without appreciable damage to the specimen. Here, we examine the utility of this method by attempting DNA extractions from historic (older than 70 years) chimpanzee specimens. Using this method, we PCR-amplified part of the mitochondrial HVR-I region from 65% (56/86) of the specimens from which we attempted DNA extraction. However, we found a high incidence of multiple sequences in individual samples, suggesting substantial cross-contamination among samples, most likely originating from storage and handling in the museums. Consequently, reproducible sequences could be reconstructed from only 79% (44/56) of the successfully extracted samples, even after multiple extractions and amplifications. This resulted in an overall success rate of just over half (44/86 of samples, or 51% success), from which 39 distinct HVR-I haplotypes were recovered. We found a high incidence of C to T changes, arguing for both low concentrations of and substantial damage to the endogenous DNA. This chapter highlights both the potential and the limitations of nondestructive DNA extraction from museum specimens.
Huang, Huan; Li, Shuo; Sun, Lizhou; Zhou, Guohua
2015-01-01
To simultaneously analyze mutations and expression levels of multiple genes on one detection platform, we proposed a method termed "multiplex ligation-dependent probe amplification-digital amplification coupled with hydrogel bead-array" (MLPA-DABA) and applied it to diagnose colorectal cancer (CRC). CRC cells and tissues were sampled to extract nucleic acid, perform MLPA with sequence-tagged probes, perform digital emulsion polymerase chain reaction (PCR), and produce a hydrogel bead-array to immobilize beads and form a single bead layer on the array. After hybridization with fluorescent probes, the number of colored beads, which reflects the abundance of expressed genes and the mutation rate, was counted for diagnosis. Only red or green beads occurred on the chips in the mixed samples, indicating the success of single-molecule PCR. When a one-source sample was analyzed using mixed MLPA probes, beads of only one color occurred, suggesting the high specificity of the method in analyzing CRC mutation and gene expression. In gene expression analysis of a CRC tissue from one CRC patient, the mutant percentage was 3.1%, and the expression levels of CRC-related genes were much higher than those of normal tissue. The highly sensitive MLPA-DABA succeeds in the relative quantification of mutations and gene expressions of exfoliated cells in stool samples of CRC patients on the same chip platform. MLPA-DABA coupled with hydrogel bead-array is a promising method in the non-invasive diagnosis of CRC.
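Bead counts on a digital-amplification platform are usually converted to molecule counts via Poisson statistics, because a single bead can capture more than one template. A sketch of that standard correction (bead numbers invented; not necessarily the authors' exact readout procedure):

```python
import math

def poisson_copies_per_bead(n_positive, n_total):
    """Mean templates per bead inferred from the negative-bead fraction,
    assuming templates distribute over beads as Poisson(lam)."""
    f_negative = (n_total - n_positive) / n_total
    return -math.log(f_negative)

n_total = 20000
lam = poisson_copies_per_bead(n_positive=1800, n_total=n_total)
copies = lam * n_total  # estimated template molecules on the array
print(round(lam, 4), round(copies))
```

The correction matters most at high occupancy: the estimated copy number always exceeds the raw positive-bead count.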
Inferring HIV Escape Rates from Multi-Locus Genotype Data
Kessinger, Taylor A.; Perelson, Alan S.; Neher, Richard A.
2013-09-03
Cytotoxic T-lymphocytes (CTLs) recognize viral protein fragments displayed by major histocompatibility complex molecules on the surface of virally infected cells and generate an anti-viral response that can kill the infected cells. Virus variants whose protein fragments are not efficiently presented on infected cells or whose fragments are presented but not recognized by CTLs therefore have a competitive advantage and spread rapidly through the population. We present a method that allows a more robust estimation of these escape rates from serially sampled sequence data. The proposed method accounts for competition between multiple escapes by explicitly modeling the accumulation of escape mutations and the stochastic effects of rare multiple mutants. Applying our method to serially sampled HIV sequence data, we estimate rates of HIV escape that are substantially larger than those previously reported. The method can be extended to complex escapes that require compensatory mutations. We expect our method to be applicable in other contexts such as cancer evolution where time series data is also available.
Quantitative Proteomics via High Resolution MS Quantification: Capabilities and Limitations
Higgs, Richard E.; Butler, Jon P.; Han, Bomie; Knierman, Michael D.
2013-01-01
Recent improvements in the mass accuracy and resolution of mass spectrometers have led to renewed interest in label-free quantification using data from the primary mass spectrum (MS1) acquired from data-dependent proteomics experiments. The capacity for higher specificity quantification of peptides from samples enriched for proteins of biological interest offers distinct advantages for hypothesis generating experiments relative to immunoassay detection methods or prespecified peptide ions measured by multiple reaction monitoring (MRM) approaches. Here we describe an evaluation of different methods to post-process peptide level quantification information to support protein level inference. We characterize the methods by examining their ability to recover a known dilution of a standard protein in background matrices of varying complexity. Additionally, the MS1 quantification results are compared to a standard, targeted, MRM approach on the same samples under equivalent instrument conditions. We show the existence of multiple peptides with MS1 quantification sensitivity similar to the best MRM peptides for each of the background matrices studied. Based on these results we provide recommendations on preferred approaches to leveraging quantitative measurements of multiple peptides to improve protein level inference. PMID:23710359
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ji; Fischer, Debra A.; Xie, Ji-Wei
2014-03-01
The planet occurrence rate for multiple stars is important in two aspects. First, almost half of stellar systems in the solar neighborhood are multiple systems. Second, the comparison of the planet occurrence rate for multiple stars to that for single stars sheds light on the influence of stellar multiplicity on planet formation and evolution. We developed a method of distinguishing planet occurrence rates for single and multiple stars. From a sample of 138 bright (K{sub P} < 13.5) Kepler multi-planet candidate systems, we compared the stellar multiplicity rate of these planet host stars to that of field stars. Using dynamical stability analyses and archival Doppler measurements, we find that the stellar multiplicity rate of planet host stars is significantly lower than field stars for semimajor axes less than 20 AU, suggesting that planet formation and evolution are suppressed by the presence of a close-in companion star at these separations. The influence of stellar multiplicity at larger separations is uncertain because of search incompleteness due to a limited Doppler observation time baseline and a lack of high-resolution imaging observation. We calculated the planet confidence for the sample of multi-planet candidates and find that the planet confidences for KOI 82.01, KOI 115.01, KOI 282.01, and KOI 1781.02 are higher than 99.7% and thus validate the planetary nature of these four planet candidates. This sample of bright Kepler multi-planet candidates with refined stellar and orbital parameters, planet confidence estimation, and nearby stellar companion identification offers a well-characterized sample for future theoretical and observational study.
NASA Technical Reports Server (NTRS)
Roman, Monserrate C.; Jones, Kathy U.; Oubre, Cherie M.; Castro, Victoria; Ott, Mark C.; Birmele, Michele; Venkateswaran, Kasthuri J.; Vaishampayan, Parag A.
2013-01-01
Current methods for microbial detection: a) Labor & time intensive cultivation-based approaches that can fail to detect or characterize all cells present. b) Requires collection of samples on orbit and transportation back to ground for analysis. Disadvantages to current detection methods: a) Unable to perform quick and reliable detection on orbit. b) Lengthy sampling intervals. c) No microbe identification.
ERIC Educational Resources Information Center
Grigorenko, Elena L.; Geiser, Christian; Slobodskaya, Helena R.; Francis, David J.
2010-01-01
A large community-based sample of Russian youths (n = 841, age M = 13.17 years, SD = 2.51) was assessed with the Child Behavior Checklist (mothers and fathers separately), Teacher's Report Form, and Youth Self-Report. The multiple indicator-version of the correlated trait-correlated method minus one, or CT-C(M-1), model was applied to analyze (a)…
Mairinger, Fabian D; Walter, Robert Fh; Vollbrecht, Claudia; Hager, Thomas; Worm, Karl; Ting, Saskia; Wohlschläger, Jeremias; Zarogoulidis, Paul; Zarogoulidis, Konstantinos; Schmid, Kurt W
2014-01-01
Isothermal multiple displacement amplification (IMDA) can be a powerful tool in molecular routine diagnostics for homogeneous and sequence-independent whole-genome amplification of notably small tumor samples, e.g., microcarcinomas and biopsies containing a small amount of tumor. Currently, this method is not well established in pathology laboratories. We designed a study to confirm the feasibility and convenience of this method for routine diagnostics with formalin-fixed, paraffin-embedded samples prepared by laser-capture microdissection. A total of 250 μg DNA (concentration 5 μg/μL) was generated by amplification over a period of 8 hours with a material input of approximately 25 cells, approximately equivalent to 175 pg of genomic DNA. In the generated DNA, a representation of all chromosomes could be shown and the presence of selected genes relevant for diagnosis in clinical samples could be proven. Mutational analysis of clinical samples could be performed without any difficulty and showed concordance with earlier diagnostic findings. We established the feasibility and convenience of IMDA for routine diagnostics. We also showed that small amounts of DNA, which were not analyzable with current molecular methods, could be sufficient for a wide field of applications in molecular routine diagnostics when they are preamplified with IMDA.
Phase II Trials for Heterogeneous Patient Populations with a Time-to-Event Endpoint.
Jung, Sin-Ho
2017-07-01
In this paper, we consider a single-arm phase II trial with a time-to-event end-point. We assume that the study population has multiple subpopulations with different prognosis, but the study treatment is expected to be similarly efficacious across the subpopulations. We review a stratified one-sample log-rank test and present its sample size calculation method under some practical design settings. Our sample size method requires specification of the prevalence of subpopulations. We observe that the power of the resulting sample size is not very sensitive to misspecification of the prevalence.
Zeng, Jia-Kai; Li, Yuan-Yuan; Wang, Tian-Ming; Zhong, Jie; Wu, Jia-Sheng; Liu, Ping; Zhang, Hua; Ma, Yue-Ming
2018-05-01
A rapid, sensitive and accurate UPLC-MS/MS method was developed for the simultaneous quantification of components of Huangqi decoction (HQD), such as calycosin-7-O-β-d-glucoside, calycosin-glucuronide, liquiritin, formononetin-glucuronide, isoliquiritin, liquiritigenin, ononin, calycosin, isoliquiritigenin, formononetin, glycyrrhizic acid, astragaloside IV, cycloastragenol, and glycyrrhetinic acid, in rat plasma. After plasma samples were extracted by protein precipitation, chromatographic separation was performed with a C18 column, using a gradient of methanol and 0.05% acetic acid containing 4 mM ammonium acetate as the mobile phase. Multiple reaction monitoring scanning was performed to quantify the analytes, and the electrospray ion source polarity was switched between positive and negative modes in a single run of 10 min. Method validation showed that specificity, linearity, accuracy, precision, extraction recovery, matrix effect and stability for the 14 components met the requirements for their quantitation in biological samples. The established method was successfully applied to the pharmacokinetic study of multiple components in rats after intragastric administration of HQD. The results clarified the pharmacokinetic characteristics of multiple components found in HQD. This research provides useful information for understanding the relation between the chemical components of HQD and their therapeutic effects. Copyright © 2017 John Wiley & Sons, Ltd.
Sampling bees in tropical forests and agroecosystems: A review
Prado, Sara G.; Ngo, Hien T.; Florez, Jaime A.; Collazo, Jaime A.
2017-01-01
Bees are the predominant pollinating taxa, providing a critical ecosystem service upon which many angiosperms rely for successful reproduction. Available data suggest that bee populations worldwide are declining, but scarce data in tropical regions preclude assessing their status and distribution, impact on ecological services, and response to management actions. Herein, we reviewed >150 papers that used six common sampling methods (pan traps, baits, Malaise traps, sweep nets, timed observations and aspirators) to better understand their strengths and weaknesses, and help guide method selection to meet research objectives and development of multi-species monitoring approaches. Several studies evaluated the effectiveness of sweep nets, pan traps, and Malaise traps, but only one evaluated timed observations, and none evaluated aspirators. Only five studies compared two or more of the remaining four sampling methods to each other. There was little consensus regarding which method would be most reliable for sampling multiple species. However, we recommend that if the objective of the study is to estimate abundance or species richness, Malaise traps, pan traps and sweep nets are the most effective sampling protocols in open tropical systems; conversely, Malaise traps, nets and baits may be the most effective in forests. Declining bee populations emphasize the critical need for method standardization and reporting precision. Moreover, we recommend reporting a catchability coefficient, a measure of the interaction between the resource (bee) abundance and catching effort. Melittologists could also consider existing methods, such as occupancy models, to quantify changes in distribution and abundance after modeling heterogeneity in trapping probability, and consider the possibility of developing monitoring frameworks that draw from multiple sources of data.
An evaluation of a reagentless method for the determination of total mercury in aquatic life
Haynes, Sekeenia; Gragg, Richard D.; Johnson, Elijah; Robinson, Larry; Orazio, Carl E.
2006-01-01
Multiple treatment (i.e., drying, chemical digestion, and oxidation) steps are often required during preparation of biological matrices for quantitative analysis of mercury; these multiple steps could potentially lead to systematic errors and poor recovery of the analyte. In this study, the Direct Mercury Analyzer (Milestone Inc., Monroe, CT) was utilized to measure total mercury in fish tissue by integrating steps of drying, sample combustion and gold sequestration with subsequent detection using atomic absorption spectrometry. We also evaluated the differences between the mercury concentrations found in samples that were homogenized and samples with no preparation. These results were confirmed with cold vapor atomic absorbance and fluorescence spectrometric methods of analysis. Finally, total mercury in wild-captured largemouth bass (n = 20) was assessed using the Direct Mercury Analyzer to examine internal variability between mercury concentrations in muscle, liver and brain organs. Direct analysis of total mercury measured in muscle tissue was strongly correlated with muscle tissue that was homogenized before analysis (r = 0.81, p < 0.0001). Additionally, results using this integrated method compared favorably (p < 0.05) with conventional cold vapor spectrometry with atomic absorbance and fluorescence detection methods. Mercury concentrations in brain were significantly lower than concentrations in muscle (p < 0.001) and liver (p < 0.05) tissues. This integrated method can measure a wide range of mercury concentrations (0-500 μg) using small sample sizes. Total mercury measurements in this study are comparable to the methods (cold vapor) commonly used for total mercury analysis and avoid laborious sample preparation and expensive hazardous waste. © Springer 2006.
A Multiplicity Survey of Chromospherically Active and Inactive Stars
NASA Technical Reports Server (NTRS)
Mason, Brian D.; Henry, Todd J.; Hartkopf, William I.; TenBrummelaar, Theo; Soderblom, David R.
1998-01-01
Surveys of the three samples of solar-type stars, segregated by chromospheric emission level, were made to determine their multiplicity fractions and to investigate the evolution of multiplicity with age. In total, 245 stars were searched for companions with DeltaV <= 3.0 and separations of 0.035" to 1.08" using optical speckle interferometry. By incorporating the visual micrometer survey for duplicity of the Lamont-Hussey Observatory, the angular coverage was extended to 5.0" with no change in the DeltaV limit. This magnitude difference allows mass ratios of 0.63 and larger to be detected throughout a search region of 2-127 AU for the stars observed. The 84 primaries observed in the chromospherically active sample are presumably part of a young population and are found to have a multiplicity fraction of 17.9% +/- 4.6%. The sample of 118 inactive, presumably older, primaries were selected and observed using identical methods and are found to have a multiplicity fraction of only 8.5% +/- 2.7%. Given the known link between chromospheric activity and age, these results tentatively imply a decreasing stellar multiplicity fraction from 1 to 4 Gyr, the approximate ages of the two samples. Finally, only two of the 14 very active primaries observed were found to have a companion meeting the survey detection parameters. In this case, many of the systems are either very young, or close, RS CVn-type multiples that are unresolvable using the techniques employed here.
Vu, Dai Long; Ranglová, Karolína; Hájek, Jan; Hrouzek, Pavel
2018-05-01
Quantification of selenated amino-acids currently relies on methods employing inductively coupled plasma mass spectrometry (ICP-MS). Although very accurate, these methods do not allow the simultaneous determination of standard amino-acids, hampering the comparison of the content of selenated versus non-selenated species such as methionine (Met) and selenomethionine (SeMet). This paper reports two approaches for the simultaneous quantification of Met and SeMet. In the first approach, standard enzymatic hydrolysis employing Protease XIV was applied for the preparation of samples. The second approach utilized methanesulfonic acid (MA) for the hydrolysis of samples, either in a reflux system or in a microwave oven, followed by derivatization with diethyl ethoxymethylenemalonate. The prepared samples were then analyzed by multiple reaction monitoring high performance liquid chromatography tandem mass spectrometry (MRM-HPLC-MS/MS). Both approaches provided platforms for the accurate determination of selenium/sulfur substitution rate in Met. Moreover the second approach also provided accurate simultaneous quantification of Met and SeMet with a low limit of detection, low limit of quantification and wide linearity range, comparable to the commonly used gas chromatography mass spectrometry (GC-MS) method or ICP-MS. The novel method was validated using certified reference material in conjunction with the GC-MS reference method. Copyright © 2018. Published by Elsevier B.V.
Mirzabekov, Andrei; Guschin, Dmitry Y.; Chik, Valentine; Drobyshev, Aleksei; Fotin, Alexander; Yershov, Gennadiy; Lysov, Yuri
2002-01-01
This invention relates to using customized oligonucleotide microchips as biosensors for the detection and identification of nucleic acids specific for different genes, organisms and/or individuals in the environment, in food and in biological samples. The microchips are designed to convert multiple bits of genetic information into simpler patterns of signals that are interpreted as a unit. Because of an improved method of hybridizing oligonucleotides from samples to microchips, microchips are reusable and transportable. For field study, portable laser or bar code scanners are suitable.
Determination of effective atomic number of biomedical samples using Gamma ray back-scattering
NASA Astrophysics Data System (ADS)
Singh, Inderjeet; Singh, Bhajan; Sandhu, B. S.; Sabharwal, Arvind D.
2018-05-01
The effective atomic number of a biomedical sample has been determined using a non-destructive multiple back-scattering technique. A radiation characterization method is also used to compare tissue-equivalent material with human tissue. The response function of a 3″ × 3″ NaI(Tl) scintillation detector is applied to the recorded pulse-height distribution to boost the counts under the photo-peak and reduce the uncertainty in the experimental result. A Monte Carlo calculation for multiple back-scattered events supports the reported experimental work.
Rafkin, Lisa E.; Matheson, Della; Steck, Andrea K.; Yu, Liping; Henderson, Courtney; Beam, Craig A.; Boulware, David C.
2015-01-01
Abstract Background: Islet autoantibody testing provides the basis for assessment of risk of progression to type 1 diabetes. We set out to determine the feasibility and acceptability of dried capillary blood spot–based screening to identify islet autoantibody–positive relatives potentially eligible for inclusion in prevention trials. Materials and Methods: Dried blood spot (DBS) and venous samples were collected from 229 relatives participating in the TrialNet Pathway to Prevention Study. Both samples were tested for glutamic acid decarboxylase, islet antigen 2, and zinc transporter 8 autoantibodies, and venous samples were additionally tested for insulin autoantibodies and islet cell antibodies. We defined multiple autoantibody positive as two or more autoantibodies in venous serum and DBS screen positive if one or more autoantibodies were detected. Participant questionnaires compared the sample collection methods. Results: Of 44 relatives who were multiple autoantibody positive in venous samples, 42 (95.5%) were DBS screen positive, and DBS accurately detected 145 of 147 autoantibody-negative relatives (98.6%). Capillary blood sampling was perceived as more painful than venous blood draw, but 60% of participants would prefer initial screening using home fingerstick with clinic visits only required if autoantibodies were found. Conclusions: Capillary blood sampling could facilitate screening for type 1 diabetes prevention studies. PMID:26375197
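The screen-performance figures quoted above follow directly from the reported counts; a minimal sketch (the helper function is illustrative, not part of the study):

```python
def sens_spec(true_pos, pos_total, true_neg, neg_total):
    """Sensitivity and specificity as fractions from raw screening counts."""
    return true_pos / pos_total, true_neg / neg_total

# Counts from the abstract: 42 of 44 multiple-autoantibody-positive relatives
# were DBS screen positive; 145 of 147 autoantibody-negative relatives were
# correctly screen negative.
sens, spec = sens_spec(42, 44, 145, 147)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
# sensitivity 95.5%, specificity 98.6%
```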
Taylor M. Wilcox; Kevin S. McKelvey; Michael K. Young; Adam J. Sepulveda; Bradley B. Shepard; Stephen F. Jane; Andrew R. Whiteley; Winsor H. Lowe; Michael K. Schwartz
2016-01-01
Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive...
Adetoro, O O
1988-06-01
Multiple exposure photography (MEP), an objective technique, was used to determine the percentage of motile sperm in semen samples from 41 males being investigated for infertility. This technique was compared with the conventional subjective ordinary-microscopy method of sperm motility assessment. A satisfactory correlation was observed in percentage sperm motility assessment between the two methods, but the MEP estimation was more consistent and reliable. The value of this technique of sperm motility study in the developing world is discussed.
Pistonesi, Marcelo F; Di Nezio, María S; Centurión, María E; Lista, Adriana G; Fragoso, Wallace D; Pontes, Márcio J C; Araújo, Mário C U; Band, Beatriz S Fernández
2010-12-15
In this study, a novel, simple, and efficient spectrofluorimetric method to determine directly and simultaneously five phenolic compounds (hydroquinone, resorcinol, phenol, m-cresol and p-cresol) in air samples is presented. For this purpose, variable selection by the successive projections algorithm (SPA) is used in order to obtain simple multiple linear regression (MLR) models based on a small subset of wavelengths. For comparison, partial least squares (PLS) regression is also employed on the full spectrum. The concentrations of the calibration matrix ranged from 0.02 to 0.2 mg L(-1) for hydroquinone, from 0.05 to 0.6 mg L(-1) for resorcinol, and from 0.05 to 0.4 mg L(-1) for phenol, m-cresol and p-cresol; incidentally, such ranges are in accordance with the Argentinean environmental legislation. To verify the accuracy of the proposed method, a recovery study on real air samples from a smoking environment was carried out with satisfactory results (94-104%). The advantage of the proposed method is that it requires only spectrofluorimetric measurements of samples and chemometric modeling for simultaneous determination of the five phenols. With it, air is simply sampled and no sample pre-treatment is needed (i.e., separation steps and derivatization reagents are avoided), which means a great saving of time. Copyright © 2010 Elsevier B.V. All rights reserved.
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM-2.5
Code of Federal Regulations, 2010 CFR
2010-07-01
... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM-2.5
Code of Federal Regulations, 2011 CFR
2011-07-01
... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM−2.5.
Code of Federal Regulations, 2012 CFR
2012-07-01
... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...
Flagging versus dragging as sampling methods for nymphal Ixodes scapularis (Acari: Ixodidae)
Rulison, Eric L.; Kuczaj, Isis; Pang, Genevieve; Hickling, Graham J.; Tsao, Jean I.; Ginsberg, Howard S.
2013-01-01
The nymphal stage of the blacklegged tick, Ixodes scapularis (Acari: Ixodidae), is responsible for most transmission of Borrelia burgdorferi, the etiologic agent of Lyme disease, to humans in North America. From 2010 to fall of 2012, we compared two commonly used techniques, flagging and dragging, as sampling methods for nymphal I. scapularis at three sites, each with multiple sampling arrays (grids), in the eastern and central United States. Flagging and dragging collected comparable numbers of nymphs, with no consistent differences between methods. Dragging collected more nymphs than flagging in some samples, but these differences were not consistent among sites or sampling years. The ratio of nymphs collected by flagging vs dragging was not significantly related to shrub density, so habitat type did not have a strong effect on the relative efficacy of these methods. Therefore, although dragging collected more ticks in a few cases, the numbers collected by each method were so variable that neither technique had a clear advantage for sampling nymphal I. scapularis.
Analysis of Duplicated Multiple-Samples Rank Data Using the Mack-Skillings Test.
Carabante, Kennet Mariano; Alonso-Marenco, Jose Ramon; Chokumnoyporn, Napapan; Sriwattana, Sujinda; Prinyawiwatkul, Witoon
2016-07-01
Appropriate analysis for duplicated multiple-samples rank data is needed. This study compared analysis of duplicated rank preference data using the Friedman versus Mack-Skillings tests. Panelists (n = 125) ranked 2 orange juice sets twice: a different-samples set (100%, 70%, vs. 40% juice) and a similar-samples set (100%, 95%, vs. 90%). These 2 sample sets were designed to obtain contrasting differences in preference. For each sample set, rank sum data were obtained from (1) averaged rank data of each panelist from the 2 replications (n = 125), (2) rank data of all panelists from each of the 2 separate replications (n = 125 each), (3) joint rank data of all panelists from the 2 replications (n = 125), and (4) rank data of all panelists pooled from the 2 replications (n = 250); rank data (1), (2), and (4) were separately analyzed by the Friedman test, while those from (3) were analyzed by the Mack-Skillings test. The effect of sample size (n = 10 to 125) was evaluated. For the similar-samples set, higher variations in rank data from the 2 replications were observed; therefore, results of the main effects were more inconsistent among methods and sample sizes. Regardless of analysis method, the larger the sample size, the higher the χ² value and the lower the P-value (testing H0: all samples are not different). Analyzing rank data (2) separately by replication yielded inconsistent conclusions across sample sizes, hence this method is not recommended. The Mack-Skillings test was more sensitive than the Friedman test. Furthermore, it takes into account within-panelist variations and is more appropriate for analyzing duplicated rank data. © 2016 Institute of Food Technologists®
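For reference, the Friedman statistic compared above has a closed form for an n-panelists × k-samples matrix of within-panelist ranks (assuming no ties); a minimal sketch with hypothetical ranks (this is the plain Friedman test, not the Mack-Skillings extension for replicated data):

```python
def friedman_statistic(ranks):
    """Friedman chi-square for an n x k matrix of within-panelist ranks.

    Each row holds one panelist's ranks 1..k of the k samples (no ties):
    chi2 = 12 / (n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1), R_j = column rank sums.
    """
    n, k = len(ranks), len(ranks[0])
    col_sums = [sum(row[j] for row in ranks) for j in range(k)]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in col_sums) - 3.0 * n * (k + 1)

# Hypothetical ranks: 4 panelists x 3 juice samples
ranks = [[1, 2, 3], [1, 3, 2], [1, 2, 3], [2, 1, 3]]
print(friedman_statistic(ranks))  # → 4.5
```

The statistic is referred to a chi-square distribution with k−1 degrees of freedom; larger sample sizes inflate the statistic for a fixed preference pattern, matching the trend reported above.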
Chambers, Andrew G.; Percy, Andrew J.; Yang, Juncong; Camenzind, Alexander G.; Borchers, Christoph H.
2013-01-01
Dried blood spot (DBS) sampling, coupled with multiple reaction monitoring mass spectrometry (MRM-MS), is a well-established approach for quantifying a wide range of small molecule biomarkers and drugs. This sampling procedure is simpler and less-invasive than those required for traditional plasma or serum samples enabling collection by minimally trained personnel. Many analytes are stable in the DBS format without refrigeration, which reduces the cost and logistical challenges of sample collection in remote locations. These advantages make DBS sample collection desirable for advancing personalized medicine through population-wide biomarker screening. Here we expand this technology by demonstrating the first multiplexed method for the quantitation of endogenous proteins in DBS samples. A panel of 60 abundant proteins in human blood was targeted by monitoring proteotypic tryptic peptides and their stable isotope-labeled analogs by MRM. Linear calibration curves were obtained for 40 of the 65 peptide targets demonstrating multiple proteins can be quantitatively extracted from DBS collection cards. The method was also highly reproducible with a coefficient of variation of <15% for all 40 peptides. Overall, this assay quantified 37 proteins spanning a range of more than four orders of magnitude in concentration within a single 25 min LC/MRM-MS analysis. The protein abundances of the 33 proteins quantified in matching DBS and whole blood samples showed an excellent correlation, with a slope of 0.96 and an R2 value of 0.97. Furthermore, the measured concentrations for 80% of the proteins were stable for at least 10 days when stored at −20 °C, 4 °C and 37 °C. This work represents an important first step in evaluating the integration of DBS sampling with highly-multiplexed MRM for quantitation of endogenous proteins. PMID:23221968
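The DBS-versus-whole-blood agreement reported above (slope 0.96, R² = 0.97) is an ordinary least-squares comparison; a minimal sketch with hypothetical paired measurements, not the study's data:

```python
def ols_slope_r2(x, y):
    """Least-squares slope and coefficient of determination R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, 1.0 - ss_res / ss_tot

# Hypothetical paired protein abundances (whole blood vs. DBS), arbitrary units
x = [1.0, 2.0, 3.0, 4.0]
y = [1.1, 1.9, 3.2, 3.8]
slope, r2 = ols_slope_r2(x, y)
```

A slope near 1 with high R² indicates the DBS measurements track whole-blood concentrations without systematic bias.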
Fluctuation Flooding Method (FFM) for accelerating conformational transitions of proteins.
Harada, Ryuhei; Takano, Yu; Shigeta, Yasuteru
2014-03-28
A powerful conformational sampling method for accelerating structural transitions of proteins, "Fluctuation Flooding Method (FFM)," is proposed. In FFM, cycles of the following steps enhance the transitions: (i) extractions of largely fluctuating snapshots along anisotropic modes obtained from trajectories of multiple independent molecular dynamics (MD) simulations and (ii) conformational re-sampling of the snapshots via re-generations of initial velocities when re-starting MD simulations. In an application to bacteriophage T4 lysozyme, FFM successfully accelerated the open-closed transition with the 6 ns simulation starting solely from the open state, although the 1-μs canonical MD simulation failed to sample such a rare event.
ERIC Educational Resources Information Center
Kandeel, Refat A. A.
2016-01-01
The purpose of this study was to determine the multiple intelligences patterns of students at King Saud University and their relationship with academic achievement in Mathematics courses. The study sample consisted of 917 students selected in a stratified random manner; the descriptive analysis method and Pearson correlation were used, the…
ERIC Educational Resources Information Center
Kadi, Sinem; Eldeniz Cetin, Muzeyyen
2018-01-01
This study investigated the resilience levels of parents of children with multiple disabilities by utilizing different variables. The study, conducted with a survey model--a qualitative method--included a sample composed of a total of 222 voluntary parents (183 females, 39 males) residing in Bolu, Duzce and Zonguldak in Turkey. Parental…
ERIC Educational Resources Information Center
Macdonald, Alexandra; Danielson, Carla Kmett; Resnick, Heidi S.; Saunders, Benjamin E.; Kilpatrick, Dean G.
2010-01-01
Objective: This study compared the impact of multiple exposures to potentially traumatic events (PTEs), including sexual victimization, physical victimization, and witnessed violence, on posttraumatic stress disorder (PTSD) and comorbid conditions (i.e., major depressive episode [MDE], and substance use [SUD]). Methods: Participants were a…
Adamec, Jiri; Yang, Wen-Chu; Regnier, Fred E
2014-01-14
Reagents and methods are provided that permit simultaneous analysis of multiple diverse small molecule analytes present in a complex mixture. Samples are labeled with chemically identical but isotopically distinct forms of the labeling reagent, and analyzed using mass spectrometry. A single reagent simultaneously derivatizes multiple small molecule analytes having different reactive functional groups.
Gerbig, Stefanie; Stern, Gerold; Brunn, Hubertus E; Düring, Rolf-Alexander; Spengler, Bernhard; Schulz, Sabine
2017-03-01
Direct analysis of fruit and vegetable surfaces is an important tool for in situ detection of food contaminants such as pesticides. We tested three different ways to prepare samples for the qualitative desorption electrospray ionization mass spectrometry (DESI-MS) analysis of 32 pesticides found on nine authentic fruits collected from food control. Best recovery rates for topically applied pesticides (88%) were found by analyzing the surface of a glass slide that had been rubbed against the surface of the food. Pesticide concentration in all samples was at or below the maximum residue level allowed. In addition to the high sensitivity of the method for qualitative analysis, quantitative or, at least, semi-quantitative information is needed in food control. We developed a DESI-MS method for the simultaneous determination of linear calibration curves of multiple pesticides of the same chemical class using normalization to one internal standard (ISTD). The method was first optimized for food extracts and subsequently evaluated for the quantification of pesticides in three authentic food extracts. Next, pesticides and the ISTD were applied directly onto food surfaces, and the corresponding calibration curves were obtained. The determination of linear calibration curves was still feasible, as demonstrated for three different food surfaces. This proof-of-principle method was used to simultaneously quantify two pesticides on an authentic sample, showing that the method developed could serve as a fast and simple preselective tool for disclosure of pesticide regulation violations. Graphical Abstract: Multiple pesticide residues were detected and quantified in situ from an authentic set of food items and extracts in a proof-of-principle study.
Detecting a Weak Association by Testing its Multiple Perturbations: a Data Mining Approach
NASA Astrophysics Data System (ADS)
Lo, Min-Tzu; Lee, Wen-Chung
2014-05-01
Many risk factors/interventions in epidemiologic/biomedical studies are of minuscule effects. To detect such weak associations, one needs a study with a very large sample size (the number of subjects, n). The n of a study can be increased, but unfortunately only to an extent. Here, we propose a novel method which hinges on increasing sample size in a different direction: the total number of variables (p). We construct a p-based `multiple perturbation test', and conduct power calculations and computer simulations to show that it can achieve a very high power to detect weak associations when p can be made very large. As a demonstration, we apply the method to analyze a genome-wide association study on age-related macular degeneration and identify two novel genetic variants that are significantly associated with the disease. The p-based method may set a stage for a new paradigm of statistical tests.
Latin Hypercube Sampling (LHS) UNIX Library/Standalone
DOE Office of Scientific and Technical Information (OSTI.GOV)
2004-05-13
The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multi-variate samples. The LHS samples can be generated either from a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability, and a sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the n values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
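The stratification-and-pairing scheme described above can be sketched in a few lines. This is a minimal illustration on the unit hypercube, not the LHS UNIX Library's actual implementation; the restricted pairing used for correlation control is omitted:

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, seed=None):
    """Draw a Latin hypercube sample on the unit hypercube.

    The [0, 1) range of each variable is divided into n_samples
    non-overlapping, equal-probability intervals; one point is drawn
    uniformly inside each interval, and the interval-to-row pairing is
    shuffled independently for each variable.
    """
    rng = np.random.default_rng(seed)
    # One draw per stratum: (k + u) / n for stratum k, with u ~ U[0, 1)
    pts = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_vars))) / n_samples
    # Pair strata across variables at random
    for j in range(n_vars):
        rng.shuffle(pts[:, j])
    return pts

sample = latin_hypercube(10, 3, seed=42)
# Every variable has exactly one point in each of its 10 strata
for j in range(3):
    assert sorted((sample[:, j] * 10).astype(int)) == list(range(10))
```

Samples for an arbitrary distribution can then be obtained by pushing each column through that distribution's inverse CDF.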
Improved Statistical Methods Enable Greater Sensitivity in Rhythm Detection for Genome-Wide Data
Hutchison, Alan L.; Maienschein-Cline, Mark; Chiang, Andrew H.; Tabei, S. M. Ali; Gudjonson, Herman; Bahroos, Neil; Allada, Ravi; Dinner, Aaron R.
2015-01-01
Robust methods for identifying patterns of expression in genome-wide data are important for generating hypotheses regarding gene function. To this end, several analytic methods have been developed for detecting periodic patterns. We improve one such method, JTK_CYCLE, by explicitly calculating the null distribution such that it accounts for multiple hypothesis testing and by including non-sinusoidal reference waveforms. We term this method empirical JTK_CYCLE with asymmetry search, and we compare its performance to JTK_CYCLE with Bonferroni and Benjamini-Hochberg multiple hypothesis testing correction, as well as to five other methods: cyclohedron test, address reduction, stable persistence, ANOVA, and F24. We find that ANOVA, F24, and JTK_CYCLE consistently outperform the other three methods when data are limited and noisy; empirical JTK_CYCLE with asymmetry search gives the greatest sensitivity while controlling for the false discovery rate. Our analysis also provides insight into experimental design and we find that, for a fixed number of samples, better sensitivity and specificity are achieved with higher numbers of replicates than with higher sampling density. Application of the methods to detecting circadian rhythms in a metadataset of microarrays that quantify time-dependent gene expression in whole heads of Drosophila melanogaster reveals annotations that are enriched among genes with highly asymmetric waveforms. These include a wide range of oxidation reduction and metabolic genes, as well as genes with transcripts that have multiple splice forms. PMID:25793520
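Among the corrections compared above, Benjamini-Hochberg is the standard step-up procedure for controlling the false discovery rate. A minimal sketch (an illustration of the generic procedure, not the JTK_CYCLE code itself):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of hypotheses rejected at FDR level alpha
    (Benjamini-Hochberg step-up procedure)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Largest rank k with p_(k) <= (k / m) * alpha; reject ranks 1..k
    below = p[order] <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.90]
print(benjamini_hochberg(pvals))  # only the two smallest p-values survive
```

The step-up structure is what distinguishes it from Bonferroni, which would simply compare every p-value against alpha / m.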
NASA Astrophysics Data System (ADS)
Eric, L.; Vrugt, J. A.
2010-12-01
Spatially distributed hydrologic models potentially contain hundreds of parameters that need to be derived by calibration against a historical record of input-output data. The quality of this calibration strongly determines the predictive capability of the model and thus its usefulness for science-based decision making and forecasting. Unfortunately, high-dimensional optimization problems are typically difficult to solve. Here we present our recent developments to the Differential Evolution Adaptive Metropolis (DREAM) algorithm (Vrugt et al., 2009) to warrant efficient solution of high-dimensional parameter estimation problems. The algorithm samples from an archive of past states (Ter Braak and Vrugt, 2008), and uses multiple-try Metropolis sampling (Liu et al., 2000) to decrease the required burn-in time for each individual chain and increase the efficiency of posterior sampling. This approach is hereafter referred to as MT-DREAM. We present results for 2 synthetic mathematical case studies and 2 real-world examples involving between 10 and 240 parameters. Results for those cases show that our multiple-try sampler, MT-DREAM, can consistently find better solutions than other Bayesian MCMC methods. Moreover, MT-DREAM is admirably suited to be implemented and run on a parallel machine and is therefore a powerful method for posterior inference.
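The multiple-try Metropolis building block (Liu et al., 2000) used in MT-DREAM can be sketched for a one-dimensional target with a symmetric Gaussian proposal. This is a simplified illustration only; the full MT-DREAM sampler combines multiple-try selection with DREAM's differential-evolution proposals and the archive of past states:

```python
import numpy as np

def mtm_step(x, log_target, k, scale, rng):
    """One multiple-try Metropolis update (Liu, Liang & Wong, 2000)
    with a symmetric Gaussian proposal, targeting exp(log_target)."""
    # Draw k trial points around the current state
    ys = x + scale * rng.standard_normal(k)
    wy = np.exp(log_target(ys))
    # Select one trial with probability proportional to its weight
    y = rng.choice(ys, p=wy / wy.sum())
    # Reference set: k - 1 draws around the selected trial, plus x itself
    xs = np.append(y + scale * rng.standard_normal(k - 1), x)
    wx = np.exp(log_target(xs))
    # Generalized Metropolis acceptance ratio
    return y if rng.random() < min(1.0, wy.sum() / wx.sum()) else x

# Sample a standard normal target from a poor starting point
log_target = lambda z: -0.5 * z ** 2
rng = np.random.default_rng(0)
x, chain = 5.0, []
for _ in range(5000):
    x = mtm_step(x, log_target, k=5, scale=1.0, rng=rng)
    chain.append(x)
```

Proposing k trials per step raises the acceptance rate and shortens burn-in relative to a single-try Metropolis update with the same proposal scale, at the cost of extra target evaluations per step.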
Schmid-Bindert, Gerald; Wang, Yongsheng; Jiang, Hongbin; Sun, Hui; Henzler, Thomas; Wang, Hao; Pilz, Lothar R.; Ren, Shengxiang; Zhou, Caicun
2013-01-01
Background: Multiple biomarker testing is necessary to facilitate individualized treatment of lung cancer patients. More than 80% of lung cancers are diagnosed based on very small tumor samples, and often there is not enough tissue for molecular analysis. We compared three minimally invasive sampling methods with respect to RNA quantity for molecular testing. Methods: 106 small biopsies were prospectively collected by three different methods: forceps biopsy, endobronchial ultrasound (EBUS)-guided transbronchial needle aspiration (TBNA), and CT-guided core biopsy. Samples were split into two halves. One part was formalin fixed and paraffin embedded for standard pathological evaluation. The other part was put in RNAlater for immediate RNA/DNA extraction. If the pathologist confirmed the diagnosis of non-small cell lung cancer (NSCLC), the following molecular markers were tested: EGFR mutation, ERCC1, RRM1 and BRCA1. Results: Overall, RNA extraction was possible in 101 of 106 patients (95.3%). We found 49% adenocarcinomas, 38% squamous carcinomas, and 14% not-otherwise-specified (NOS). The highest RNA yield came from EBUS-guided needle aspiration, which was significantly higher than bronchoscopy (37.74±41.09 vs. 13.74±15.53 ng, P = 0.005) and numerically higher than CT core biopsy (37.74±41.09 vs. 28.72±44.27 ng, P = 0.244). EGFR mutation testing was feasible in 100% of evaluable patients, and its incidence was 40.8%, 7.9% and 14.3% in the adenocarcinoma, squamous carcinoma and NSCLC NOS subgroups, respectively. There was no difference in the feasibility of molecular testing between the three sampling methods, with feasibility rates for ERCC1, RRM1 and BRCA1 of 91%, 87% and 81%, respectively. Conclusion: All three methods can provide sufficient tumor material for multiple biomarker testing from routinely obtained small biopsies in lung cancer patients.
In our study EBUS guided needle aspiration provided the highest amount of tumor RNA compared to bronchoscopy or CT guided core biopsy. Thus EBUS should be considered as an acceptable option for tissue acquisition for molecular testing. PMID:24205040
Chen, Guiqian; Qiu, Yuan; Zhuang, Qingye; Wang, Suchun; Wang, Tong; Chen, Jiming; Wang, Kaicheng
2018-05-09
Next generation sequencing (NGS) is a powerful tool for the characterization, discovery, and molecular identification of RNA viruses. Multiple NGS library preparation methods have been published for strand-specific RNA-seq, but some are not suitable for identifying and characterizing RNA viruses. In this study, we report an NGS library preparation method to identify RNA viruses using the Ion Torrent PGM platform. The NGS sequencing adapters were directly inserted into the sequencing library through reverse transcription and polymerase chain reaction, without fragmentation and ligation of nucleic acids. The results show that this method is simple to perform and able to identify multiple species of RNA viruses in clinical samples.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a series of linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
John F. Caratti
2006-01-01
The FIREMON Cover/Frequency (CF) method is used to assess changes in plant species cover and frequency for a macroplot. This method uses multiple quadrats to sample within-plot variation and quantify statistically valid changes in plant species cover, height, and frequency over time. Because it is difficult to estimate cover in quadrats for larger plants, this method...
Berger, Sebastian T; Ahmed, Saima; Muntel, Jan; Cuevas Polo, Nerea; Bachur, Richard; Kentsis, Alex; Steen, Judith; Steen, Hanno
2015-10-01
We describe a 96-well plate compatible membrane-based proteomic sample processing method, which enables the complete processing of 96 samples (or multiples thereof) within a single workday. This method uses a large-pore hydrophobic PVDF membrane that efficiently adsorbs proteins, resulting in fast liquid transfer through the membrane and significantly reduced sample processing times. Low liquid transfer speeds have prevented the useful 96-well plate implementation of FASP as a widely used membrane-based proteomic sample processing method. We validated our approach on whole-cell lysate and urine and cerebrospinal fluid as clinically relevant body fluids. Without compromising peptide and protein identification, our method uses a vacuum manifold and circumvents the need for digest desalting, making our processing method compatible with standard liquid handling robots. In summary, our new method maintains the strengths of FASP and simultaneously overcomes one of the major limitations of FASP without compromising protein identification and quantification. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
Masood, Athar; Stark, Ken D; Salem, Norman
2005-10-01
Conventional sample preparation for fatty acid analysis is a complicated, multiple-step process, and gas chromatography (GC) analysis alone can require >1 h per sample to resolve fatty acid methyl esters (FAMEs). Fast GC analysis was adapted to human plasma FAME analysis using a modified polyethylene glycol column with smaller internal diameters, thinner stationary phase films, increased carrier gas linear velocity, and faster temperature ramping. Our results indicated that fast GC analyses were comparable to conventional GC in peak resolution. A conventional transesterification method based on Lepage and Roy was simplified to a one-step method with the elimination of the neutralization and centrifugation steps. A robotics-amenable method was also developed, with lower methylation temperatures and in an open-tube format using multiple reagent additions. The simplified methods produced results that were quantitatively similar and with similar coefficients of variation as compared with the original Lepage and Roy method. The present streamlined methodology is suitable for the direct fatty acid analysis of human plasma, is appropriate for research studies, and will facilitate large clinical trials and make possible population studies.
NASA Astrophysics Data System (ADS)
Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.
2018-05-01
The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. The Bayesian method involves two distributions: the prior and the posterior. The posterior distribution is influenced by the selection of the prior distribution. Jeffreys’ prior is a kind of non-informative prior distribution, used when information about the parameter is not available. The non-informative Jeffreys’ prior is combined with the sample information, resulting in the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of the multivariate regression model using the Bayesian method with the non-informative Jeffreys’ prior. Based on the results and discussion, the estimates of β and Σ were obtained from the expected values of the random variables of their marginal posterior distribution functions. The marginal posterior distributions for β and Σ are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals that are difficult to evaluate analytically. Therefore, an approach is needed that generates random samples according to the posterior distribution of each parameter, using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
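The Gibbs sampling idea above can be illustrated in the simpler univariate-response case, where the Jeffreys prior p(β, σ²) ∝ 1/σ² yields normal and inverse-gamma full conditionals. This is a sketch of the approach on simulated data, not the authors' multivariate normal / inverse-Wishart implementation:

```python
import numpy as np

def gibbs_regression(X, y, n_iter=2000, burn=500, seed=0):
    """Gibbs sampler for y = X @ beta + eps under the non-informative
    Jeffreys prior p(beta, sigma2) ∝ 1/sigma2.  Full conditionals:
      beta   | sigma2, y ~ N(beta_hat, sigma2 * (X'X)^-1)
      sigma2 | beta,   y ~ Inverse-Gamma(n/2, RSS(beta)/2)
    """
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y          # OLS estimate
    sigma2, draws = 1.0, []
    for it in range(n_iter):
        beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
        rss = np.sum((y - X @ beta) ** 2)
        sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / rss)  # inverse-gamma draw
        if it >= burn:
            draws.append(np.append(beta, sigma2))
    return np.array(draws)

# Simulated data with true beta = (1, 2) and sigma = 0.5
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.random(200)])
y = X @ np.array([1.0, 2.0]) + 0.5 * rng.standard_normal(200)
post = gibbs_regression(X, y)
```

Posterior means of the retained draws approximate the true parameter values; alternating draws from the two full conditionals is exactly the Gibbs scheme the abstract describes, with the multivariate case replacing the inverse-gamma step by an inverse-Wishart draw for Σ.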
Ilyin, S E; Plata-Salamán, C R
2000-02-15
Homogenization of tissue samples is a common first step in the majority of current protocols for RNA, DNA, and protein isolation. This report describes a simple device for centrifugation-mediated homogenization of tissue samples. The method presented is applicable to RNA, DNA, and protein isolation, and we show examples where high quality total cell RNA, DNA, and protein were obtained from brain and other tissue samples. The advantages of the approach presented include: (1) a significant reduction in time investment relative to hand-driven or individual motorized-driven pestle homogenization; (2) easy construction of the device from inexpensive parts available in any laboratory; (3) high replicability in the processing; and (4) the capacity for the parallel processing of multiple tissue samples, thus allowing higher efficiency, reliability, and standardization.
An Exemplar-Based Multi-View Domain Generalization Framework for Visual Recognition.
Niu, Li; Li, Wen; Xu, Dong; Cai, Jianfei
2018-02-01
In this paper, we propose a new exemplar-based multi-view domain generalization (EMVDG) framework for visual recognition by learning robust classifiers that are able to generalize well to an arbitrary target domain based on training samples with multiple types of features (i.e., multi-view features). In this framework, we aim to address two issues simultaneously. First, the distribution of training samples (i.e., the source domain) is often considerably different from that of testing samples (i.e., the target domain), so the performance of the classifiers learnt on the source domain may drop significantly on the target domain. Moreover, the testing data are often unseen during the training procedure. Second, when the training data are associated with multi-view features, the recognition performance can be further improved by exploiting the relation among multiple types of features. To address the first issue, considering that it has been shown that fusing multiple SVM classifiers can enhance the domain generalization ability, we build our EMVDG framework upon exemplar SVMs (ESVMs), in which a set of ESVM classifiers is learnt, each trained based on one positive training sample and all the negative training samples. When the source domain contains multiple latent domains, the learnt ESVM classifiers are expected to be grouped into multiple clusters. To address the second issue, we propose two approaches under the EMVDG framework based on the consensus principle and the complementary principle, respectively. Specifically, we propose an EMVDG_CO method by adding a co-regularizer to enforce the cluster structures of ESVM classifiers on different views to be consistent based on the consensus principle. Inspired by multiple kernel learning, we also propose another EMVDG_MK method by fusing the ESVM classifiers from different views based on the complementary principle.
In addition, we further extend our EMVDG framework to exemplar-based multi-view domain adaptation (EMVDA) framework when the unlabeled target domain data are available during the training procedure. The effectiveness of our EMVDG and EMVDA frameworks for visual recognition is clearly demonstrated by comprehensive experiments on three benchmark data sets.
Application of Handheld Laser-Induced Breakdown Spectroscopy (LIBS) to Geochemical Analysis.
Connors, Brendan; Somers, Andrew; Day, David
2016-05-01
While laser-induced breakdown spectroscopy (LIBS) has been in use for decades, only within the last two years has technology progressed to the point of enabling true handheld, self-contained instruments. Several instruments are now commercially available with a range of capabilities and features. In this paper, the SciAps Z-500 handheld LIBS instrument functionality and sub-systems are reviewed. Several assayed geochemical sample sets, including igneous rocks and soils, are investigated. Calibration data are presented for multiple elements of interest along with examples of elemental mapping in heterogeneous samples. Sample preparation and the data collection method from multiple locations and data analysis are discussed. © The Author(s) 2016.
Zhang, Chun-Yun; Chai, Xin-Sheng
2015-03-13
A novel method for the determination of the diffusion coefficient (D) of methanol in water and olive oil has been developed. Based on multiple headspace extraction gas chromatography (MHE-GC), the methanol released from the liquid sample of interest in a closed sample vial was determined in a stepwise fashion. A theoretical model was derived to establish the relationship between the diffusion coefficient and the GC signals from MHE-GC measurements. The results showed that the present method has an excellent precision (RSD<1%) in the linear fitting procedure and good accuracy for the diffusion coefficients of methanol in both water and olive oil, when compared with data reported in the literature. The present method is simple and practical and can be a valuable tool for the determination of the diffusion coefficient of volatile analyte(s) into food simulants from food and beverage packaging material, both in research studies and in actual applications. Copyright © 2015 Elsevier B.V. All rights reserved.
Tsuda, Yukihiro; Uchimura, Tomohiro
2016-01-01
Resonance-enhanced multiphoton ionization time-of-flight mass spectrometry was applied to measurements of multiple emulsions with no pretreatment; a method for the quantitative evaluation of aging was proposed. We prepared water-in-oil-in-water (W/O/W) multiple emulsions containing toluene and m-phenylenediamine. The samples were measured immediately following both preparation and after having been stirred for 24 h. Time profiles of the peak areas for each analyte species were obtained, and several intense spikes for toluene could be detected from each sample after stirring, which suggests that the concentration of toluene in the middle phase had increased during stirring. On the other hand, in the case of a W/O/W multiple emulsion containing phenol and m-phenylenediamine, spikes for m-phenylenediamine, rather than phenol, were detected after stirring. In the present study, the time-profile data were converted into a scatter plot in order to quantitatively evaluate the aging. As a result, the ratio of the plots where strong signal intensities of toluene were detected increased from 8.4% before stirring to 33.2% after stirring for 24 h. The present method could be a powerful tool for evaluating multiple emulsions, such as studies on the kinetics of the encapsulation and release of active ingredients.
Han, Jiarui; Zhang, Xiangru; Liu, Jiaqi; Zhu, Xiaohu; Gong, Tingting
2017-08-01
Chlorine dioxide (ClO2) is a widely used alternative disinfectant due to its high biocidal efficiency and low-level formation of trihalomethanes and haloacetic acids. A major portion of total organic halogen (TOX), a collective parameter for all halogenated DBPs, formed in ClO2-treated drinking water is still unknown. A commonly used pretreatment method for analyzing halogenated DBPs in drinking water is one-time liquid-liquid extraction (LLE), which may lead to a substantial loss of DBPs prior to analysis. In this study, characterization and identification of polar halogenated DBPs in a ClO2-treated drinking water sample were conducted by pretreating the sample with multiple extractions. Compared to one-time LLE, the combined four-time LLEs improved the recovery of TOX by 2.3 times. The developmental toxicity of the drinking water sample pretreated with the combined four-time LLEs was 1.67 times higher than that pretreated with one-time LLE. With the aid of ultra-performance liquid chromatography/electrospray ionization-triple quadrupole mass spectrometry, a new group of polar halogenated DBPs, trihalomethanols, were detected in the drinking water sample pretreated with multiple extractions; two of them, trichloromethanol and bromodichloromethanol, were identified with synthesized standard compounds. Moreover, these trihalomethanols were found to be transformation products of trihalomethanes formed during ClO2 disinfection. The results indicate that multiple LLEs can significantly improve extraction efficiencies of polar halogenated DBPs and are a better pretreatment method for characterizing and identifying new polar halogenated DBPs in drinking water. Copyright © 2017. Published by Elsevier B.V.
Schimpf, Karen J.; Meek, Claudia C.; Leff, Richard D.; Phelps, Dale L.; Schmitz, Daniel J.; Cordle, Christopher T.
2015-01-01
Inositol is a six-carbon sugar alcohol and is one of nine biologically significant isomers of hexahydroxycyclohexane. Myo-inositol is the primary biologically active form and is present in higher concentrations in the fetus and newborn than in adults. It is currently being examined for the prevention of retinopathy of prematurity in newborn preterm infants. A robust method for quantifying myo-inositol (MI), D-chiro-inositol (DCI) and 1,5-anhydro-D-sorbitol (ADS) in very small-volume (25 μL) urine, blood serum and/or plasma samples was developed. Using a multiple-column, multiple mobile phase liquid chromatographic system with electrochemical detection, the method was validated with respect to (a) selectivity, (b) accuracy/recovery, (c) precision/reproducibility, (d) sensitivity, (e) stability and (f) ruggedness. The standard curve was linear and ranged from 0.5 to 30 mg/L for each of the three analytes. The above-mentioned performance measures were within acceptable limits described in the Food and Drug Administration’s Guidance for Industry: Bioanalytical Method Validation. The method was validated using blood serum and plasma collected using four common anticoagulants, and also by quantifying the accuracy and sensitivity of MI measured in simulated urine samples recovered from preterm infant diaper systems. The method performs satisfactorily measuring the three most common inositol isomers in 25 μL clinical samples of serum, plasma, milk, and/or urine. Similar performance is seen testing larger volume samples of infant formulas and infant formula ingredients. MI, ADS and DCI may be accurately tested in urine samples collected from five different preterm infant diapers if the urine volume is greater than 2–5 mL. PMID:26010453
Anis, Eman; Hawkins, Ian K; Ilha, Marcia R S; Woldemeskel, Moges W; Saliki, Jeremiah T; Wilkes, Rebecca P
2018-07-01
The laboratory diagnosis of infectious diseases, especially those caused by mixed infections, is challenging. Routinely, it requires submission of multiple samples to separate laboratories. Advances in next-generation sequencing (NGS) have provided the opportunity for development of a comprehensive method to identify infectious agents. This study describes the use of target-specific primers for PCR-mediated amplification with the NGS technology in which pathogen genomic regions of interest are enriched and selectively sequenced from clinical samples. In the study, 198 primers were designed to target 43 common bovine and small-ruminant bacterial, fungal, viral, and parasitic pathogens, and a bioinformatics tool was specifically constructed for the detection of targeted pathogens. The primers were confirmed to detect the intended pathogens by testing reference strains and isolates. The method was then validated using 60 clinical samples (including tissues, feces, and milk) that were also tested with other routine diagnostic techniques. The detection limits of the targeted NGS method were evaluated using 10 representative pathogens that were also tested by quantitative PCR (qPCR), and the NGS method was able to detect the organisms from samples with qPCR threshold cycle (CT) values in the 30s. The method was successful for the detection of multiple pathogens in the clinical samples, including some additional pathogens missed by the routine techniques because the specific tests needed for the particular organisms were not performed. The results demonstrate the feasibility of the approach and indicate that it is possible to incorporate NGS as a diagnostic tool in a cost-effective manner into a veterinary diagnostic laboratory. Copyright © 2018 Anis et al.
Chan, Leo Li-Ying; Smith, Tim; Kumph, Kendra A; Kuksin, Dmitry; Kessel, Sarah; Déry, Olivier; Cribbes, Scott; Lai, Ning; Qiu, Jean
2016-10-01
To ensure cell-based assays are performed properly, both cell concentration and viability have to be determined so that the data can be normalized to generate meaningful and comparable results. Cell-based assays performed in immuno-oncology, toxicology, or bioprocessing research often require measuring of multiple samples and conditions, thus the current automated cell counter that uses single disposable counting slides is not practical for high-throughput screening assays. In the recent years, a plate-based image cytometry system has been developed for high-throughput biomolecular screening assays. In this work, we demonstrate a high-throughput AO/PI-based cell concentration and viability method using the Celigo image cytometer. First, we validate the method by comparing directly to Cellometer automated cell counter. Next, cell concentration dynamic range, viability dynamic range, and consistency are determined. The high-throughput AO/PI method described here allows for 96-well to 384-well plate samples to be analyzed in less than 7 min, which greatly reduces the time required for the single sample-based automated cell counter. In addition, this method can improve the efficiency for high-throughput screening assays, where multiple cell counts and viability measurements are needed prior to performing assays such as flow cytometry, ELISA, or simply plating cells for cell culture.
Hyperspectral stimulated emission depletion microscopy and methods of use thereof
Timlin, Jerilyn A; Aaron, Jesse S
2014-04-01
A hyperspectral stimulated emission depletion ("STED") microscope system for high-resolution imaging of samples labeled with multiple fluorophores (e.g., two to ten fluorophores). The hyperspectral STED microscope includes a light source, optical systems configured for generating an excitation light beam and a depletion light beam, optical systems configured for focusing the excitation and depletion light beams on a sample, and systems for collecting and processing data generated by interaction of the excitation and depletion light beams with the sample. Hyperspectral STED data may be analyzed using multivariate curve resolution analysis techniques to deconvolute emission from the multiple fluorophores. The hyperspectral STED microscope described herein can be used for multi-color, subdiffraction imaging of samples (e.g., materials and biological materials) and for analyzing a tissue by Förster Resonance Energy Transfer ("FRET").
Huang, Huan; Li, Shuo; Sun, Lizhou; Zhou, Guohua
2015-01-01
To simultaneously analyze mutations and expression levels of multiple genes on one detection platform, we proposed a method termed “multiplex ligation-dependent probe amplification–digital amplification coupled with hydrogel bead-array” (MLPA–DABA) and applied it to diagnose colorectal cancer (CRC). CRC cells and tissues were sampled to extract nucleic acid, perform MLPA with sequence-tagged probes, perform digital emulsion polymerase chain reaction (PCR), and produce a hydrogel bead-array to immobilize beads and form a single bead layer on the array. After hybridization with fluorescent probes, the number of colored beads, which reflects the abundance of expressed genes and the mutation rate, was counted for diagnosis. Only red or green beads occurred on the chips in the mixed samples, indicating the success of single-molecule PCR. When a one-source sample was analyzed using mixed MLPA probes, beads of only one color occurred, suggesting the high specificity of the method in analyzing CRC mutation and gene expression. In gene expression analysis of a CRC tissue from one CRC patient, the mutant percentage was 3.1%, and the expression levels of CRC-related genes were much higher than those of normal tissue. The highly sensitive MLPA–DABA succeeds in the relative quantification of mutations and gene expressions of exfoliated cells in stool samples of CRC patients on the same chip platform. MLPA–DABA coupled with hydrogel bead-array is a promising method in the non-invasive diagnosis of CRC. PMID:25880764
Method and apparatus for fiber optic multiple scattering suppression
NASA Technical Reports Server (NTRS)
Ackerson, Bruce J. (Inventor)
2000-01-01
The instant invention provides a method and apparatus for use in laser induced dynamic light scattering which attenuates the multiple scattering component in favor of the single scattering component. The preferred apparatus utilizes two light detectors that are spatially and/or angularly separated and which simultaneously record the speckle pattern from a single sample. The recorded patterns from the two detectors are then cross correlated in time to produce one point on a composite single/multiple scattering function curve. By collecting and analyzing cross correlation measurements that have been taken at a plurality of different spatial/angular positions, the signal representative of single scattering may be differentiated from the signal representative of multiple scattering, and a near optimum detector separation angle for use in taking future measurements may be determined.
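The cross-correlation step described above can be sketched numerically. The detector time series below are synthetic, and the estimator is a generic normalized cross-correlation, not the patented apparatus's exact processing; the idea is simply that a shared (single-scattering) fluctuation survives cross-correlation between separated detectors while independent (multiple-scattering) noise averages out:

```python
import math
import random

def cross_correlation(x, y, max_lag):
    """Normalized cross-correlation of two detector time series.
    A component common to both detectors yields a high value at lag 0;
    uncorrelated contributions are suppressed."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    out = []
    for lag in range(max_lag + 1):
        num = sum((x[i] - mx) * (y[i + lag] - my) for i in range(n - lag))
        out.append(num / (sx * sy))
    return out

# Two detectors seeing the same shared fluctuation plus independent noise.
random.seed(1)
common = [random.gauss(0.0, 1.0) for _ in range(2000)]
det1 = [c + random.gauss(0.0, 0.3) for c in common]
det2 = [c + random.gauss(0.0, 0.3) for c in common]
g = cross_correlation(det1, det2, max_lag=2)
print(round(g[0], 2))  # high, since the shared component dominates
```

Repeating this at several detector separations, as the patent describes, maps out how the correlated (single-scattering) fraction decays with separation.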
2013-01-01
Background The theoretical basis of genome-wide association studies (GWAS) is statistical inference of linkage disequilibrium (LD) between any polymorphic marker and a putative disease locus. Most methods widely implemented for such analyses are vulnerable to several key demographic factors, deliver poor statistical power for detecting genuine associations, and suffer a high false positive rate. Here, we present a likelihood-based statistical approach that properly accounts for the non-random nature of case-control samples with regard to the genotypic distribution at the loci under study, and that confers flexibility to test for genetic association in the presence of different confounding factors such as population structure and non-random sampling. Results We implemented this novel method, together with several popular methods from the GWAS literature, to re-analyze recently published Parkinson's disease (PD) case-control samples. The real data analysis and computer simulations show that the new method confers not only significantly improved statistical power for detecting associations but also robustness to the difficulties stemming from non-random sampling and genetic structure, when compared with its rivals. In particular, the new method detected 44 significant SNPs within 25 chromosomal regions of size < 1 Mb, whereas only 6 SNPs in two of these regions had previously been detected by trend-test-based methods. It discovered two SNPs located 1.18 Mb and 0.18 Mb from the PD candidate genes FGF20 and PARK8 without incurring false positive risk. Conclusions We developed a novel likelihood-based method that provides adequate estimation of LD and other population-model parameters from case and control samples and eases the integration of such samples from multiple genetically divergent populations, thus conferring statistically robust and powerful GWAS analyses. On the basis of simulation studies and analysis of real data sets, we demonstrated significant improvement of the new method over the non-parametric trend test, the most popular approach in the GWAS literature. PMID:23394771
Replacing missing data between airborne SAR coherent image pairs
Musgrove, Cameron H.; West, James C.
2017-07-31
For synthetic aperture radar systems, missing data samples can cause severe image distortion. When multiple, coherent data collections exist and the missing data samples do not overlap between collections, there exists the possibility of replacing data samples between collections. For airborne radar, the known and unknown motion of the aircraft prevents direct data sample replacement to repair image features. Finally, this paper presents a method to calculate the necessary phase corrections to enable data sample replacement using only the collected radar data.
Glass wool filters for concentrating waterborne viruses and agricultural zoonotic pathogens
USDA-ARS?s Scientific Manuscript database
The key first step in evaluating pathogen levels in suspected contaminated water is concentration. Concentration methods tend to be specific for a particular pathogen group or genus, for example viruses or Cryptosporidium, requiring multiple methods if the sampling program is targeting more than on...
A national-scale survey of 247 contaminants of emerging concern (CECs), including organic and inorganic chemical compounds, and microbial contaminants, was conducted in source and treated drinking water samples from 25 treatment plants across the United States. Multiple methods w...
USE OF MOLECULAR PROBES TO ASSESS GEOGRAPHIC DISTRIBUTION OF PFIESTERIA SPECIES. (R827084)
We have developed multiple polymerase chain reaction (PCR)-based methods for the detection of Pfiesteria sp. in cultures and environmental samples. More than 2,100 water and sediment samples from estuarine sites of the U.S. Atlantic and gulf coasts were assayed for the p...
USDA-ARS?s Scientific Manuscript database
Most analytical methods for persistent organic pollutants (POPs) focus on targeted analytes. Therefore, analysis of multiple classes of POPs typically entails several sample preparations, fractionations, and injections, whereas other chemicals of possible interest are neglected. To analyze a wider...
The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models
ERIC Educational Resources Information Center
Schoeneberger, Jason A.
2016-01-01
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
We sampled 92 wetlands from four different basins in the United States to quantify observer repeatability in rapid wetland condition assessment using the Delaware Rapid Assessment Protocol (DERAP). In the Inland Bays basin of Delaware, 58 wetland sites were sampled by multiple ob...
ERIC Educational Resources Information Center
Whitehouse, Andrew J. O.; Mattes, Eugen; Maybery, Murray T.; Sawyer, Michael G.; Jacoby, Peter; Keelan, Jeffrey A.; Hickey, Martha
2012-01-01
Background: Preliminary evidence suggests that prenatal testosterone exposure may be associated with language delay. However, no study has examined a large sample of children at multiple time-points. Methods: Umbilical cord blood samples were obtained at 861 births and analysed for bioavailable testosterone (BioT) concentrations. When…
ERIC Educational Resources Information Center
Dehghan, Mahshid; Lopez Jaramillo, Patricio; Duenas, Ruby; Anaya, Lilliam Lima; Garcia, Ronald G.; Zhang, Xiaohe; Islam, Shofiqul; Merchant, Anwar T.
2012-01-01
Objective: To validate a food frequency questionnaire (FFQ) against multiple 24-hour dietary recalls (DRs) that could be used for Colombian adults. Methods: A convenience sample of 219 individuals participated in the study. The validity of the FFQ was evaluated against multiple DRs. Four dietary recalls were collected during the year, and an FFQ…
ERIC Educational Resources Information Center
Sussman, Joan E.; Tjaden, Kris
2012-01-01
Purpose: The primary purpose of this study was to compare percent correct word and sentence intelligibility scores for individuals with multiple sclerosis (MS) and Parkinson's disease (PD) with scaled estimates of speech severity obtained for a reading passage. Method: Speech samples for 78 talkers were judged, including 30 speakers with MS, 16…
Spectrometer capillary vessel and method of making same
Linehan, John C.; Yonker, Clement R.; Zemanian, Thomas S.; Franz, James A.
1995-01-01
The present invention is an arrangement of a glass capillary tube for use in spectroscopy. In particular, the invention is a capillary arranged in a manner permitting a plurality or multiplicity of passes of a sample material through a spectroscopic measurement zone. In a preferred embodiment, the multi-pass capillary is insertable within a standard NMR sample tube. The present invention further includes a method of making the multi-pass capillary tube and an apparatus for spinning the tube.
Rapid Sequencing of Complete env Genes from Primary HIV-1 Samples.
Laird Smith, Melissa; Murrell, Ben; Eren, Kemal; Ignacio, Caroline; Landais, Elise; Weaver, Steven; Phung, Pham; Ludka, Colleen; Hepler, Lance; Caballero, Gemma; Pollner, Tristan; Guo, Yan; Richman, Douglas; Poignard, Pascal; Paxinos, Ellen E; Kosakovsky Pond, Sergei L; Smith, Davey M
2016-07-01
The ability to study rapidly evolving viral populations has been constrained by the read length of next-generation sequencing approaches and the sampling depth of single-genome amplification methods. Here, we develop and characterize a method using Pacific Biosciences' Single Molecule, Real-Time (SMRT®) sequencing technology to sequence multiple, intact full-length human immunodeficiency virus-1 env genes amplified from viral RNA populations circulating in blood, and provide computational tools for analyzing and visualizing these data.
Multiple imputation for cure rate quantile regression with censored data.
Wu, Yuanshan; Yin, Guosheng
2017-03-01
The main challenge in the context of cure rate analysis is that one never knows whether censored subjects are cured or uncured, or whether they are susceptible or insusceptible to the event of interest. Considering the susceptible indicator as missing data, we propose a multiple imputation approach to cure rate quantile regression for censored data with a survival fraction. We develop an iterative algorithm to estimate the conditionally uncured probability for each subject. By utilizing this estimated probability and Bernoulli sample imputation, we can classify each subject as cured or uncured, and then employ the locally weighted method to estimate the quantile regression coefficients with only the uncured subjects. Repeating the imputation procedure multiple times and taking an average over the resultant estimators, we obtain consistent estimators for the quantile regression coefficients. Our approach relaxes the usual global linearity assumption, so that we can apply quantile regression to any particular quantile of interest. We establish asymptotic properties for the proposed estimators, including both consistency and asymptotic normality. We conduct simulation studies to assess the finite-sample performance of the proposed multiple imputation method and apply it to a lung cancer study as an illustration. © 2016, The International Biometric Society.
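A minimal sketch of the imputation-and-average idea follows. It is not the authors' locally weighted quantile estimator: the target quantity is simplified to a mean event time, and `p_uncured` is a hypothetical fixed probability standing in for the iteratively estimated conditional uncure probability:

```python
import random
import statistics

def mi_uncured_mean(times, censored, p_uncured, m=20, seed=0):
    """Toy multiple-imputation sketch: each censored subject's
    susceptible (uncured) status is imputed as Bernoulli(p_uncured);
    the estimand (here simply the mean event time among uncured
    subjects) is computed on each completed data set, and the m
    estimates are averaged for the final answer."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(m):
        uncured_times = []
        for t, c in zip(times, censored):
            if not c:                        # observed event: uncured
                uncured_times.append(t)
            elif rng.random() < p_uncured:   # censored: impute status
                uncured_times.append(t)      # crude, ignores residual life
        estimates.append(statistics.mean(uncured_times))
    return statistics.mean(estimates)

times = [2.0, 3.5, 1.2, 6.0, 4.4, 8.0]
censored = [False, False, False, True, True, True]
estimate = mi_uncured_mean(times, censored, p_uncured=0.4)
print(estimate)
```

Averaging over the m imputed data sets is what delivers the consistency the abstract claims for the real estimator.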
Monitoring heavy metal Cr in soil based on hyperspectral data using regression analysis
NASA Astrophysics Data System (ADS)
Zhang, Ningyu; Xu, Fuyun; Zhuang, Shidong; He, Changwei
2016-10-01
Heavy metal pollution in soils is one of the most critical problems for global ecology and environmental safety today. Hyperspectral remote sensing offers high speed, low cost, and low risk of sample damage, and thus provides a good method for detecting heavy metals in soil. This paper proposes applying stepwise multiple regression between spectral data and the measured content of the heavy metal Cr at sample points in soil for environmental protection. In the measurements, a FieldSpec HandHeld spectroradiometer was used to collect reflectance spectra of sample points over the wavelength range of 325-1075 nm. The spectral data measured by the spectroradiometer were then preprocessed to reduce the influence of external factors; the preprocessing methods included the first-order differential, the second-order differential and the continuum removal method. Stepwise multiple regression equations were established for each preprocessing method, and the accuracy of each equation was tested. The results showed that first-order differential preprocessing gave the best accuracy, making it feasible to predict the content of heavy metal Cr in soil by stepwise multiple regression.
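The two core steps, derivative preprocessing and stepwise band selection, can be sketched as follows. The band values and Cr contents are made-up illustrations, and the selection criterion is a simple correlation screen, a crude stand-in for full stepwise regression with entry/removal F-tests:

```python
import math

def first_derivative(spectrum, step=1.0):
    """First-order difference of a reflectance spectrum, the
    preprocessing the study found most accurate."""
    return [(spectrum[i + 1] - spectrum[i]) / step
            for i in range(len(spectrum) - 1)]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    da = math.sqrt(sum((u - ma) ** 2 for u in a))
    db = math.sqrt(sum((v - mb) ** 2 for v in b))
    return num / (da * db) if da and db else 0.0

def forward_select(bands, y, k=1):
    """Greedily pick k bands by greatest absolute correlation with y."""
    chosen = []
    for _ in range(k):
        best = max((j for j in range(len(bands)) if j not in chosen),
                   key=lambda j: abs(pearson(bands[j], y)))
        chosen.append(best)
    return chosen

# Hypothetical derivative-preprocessed band values (one row per band,
# one column per soil sample) and the samples' measured Cr contents.
bands = [[0.10, 0.20, 0.30, 0.40],   # tracks Cr almost linearly
         [0.50, 0.40, 0.60, 0.50],
         [0.20, 0.50, 0.10, 0.40]]
cr = [1.1, 2.0, 3.1, 3.9]
selected = forward_select(bands, cr)
print(selected)  # [0]
```

The selected bands would then enter an ordinary multiple regression against Cr content, as in the study.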
Yang, Xi; Han, Guoqiang; Cai, Hongmin; Song, Yan
2017-03-31
Revealing data with intrinsically diagonal block structures is particularly useful for analyzing groups of highly correlated variables. Earlier research based on non-negative matrix factorization (NMF) has shown it to be effective in representing such data by decomposing the observed data into two factors, where one factor is considered to be the feature and the other the expansion loading from a linear algebra perspective. If the data are sampled from multiple independent subspaces, the loading factor would possess a diagonal structure under an ideal matrix decomposition. However, the standard NMF method and its variants have not been reported to exploit this type of data via direct estimation. To address this issue, a non-negative matrix factorization model with multiple constraints is proposed in this paper. The constraints include a sparsity norm on the feature matrix and a total variation norm on each column of the loading matrix. The proposed model is shown to be capable of efficiently recovering diagonal block structures hidden in observed samples. An efficient numerical algorithm based on the alternating direction method of multipliers (ADMM) is proposed for optimizing the new model. Compared with several benchmark models, the proposed method performs robustly and effectively on simulated and real biological data.
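For orientation, the unconstrained NMF baseline that the paper extends can be sketched with Lee-Seung multiplicative updates; the sparsity and total-variation constraints of the proposed model are omitted here, and the block-structured matrix V is a toy example:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, r, iters=500, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates, V ~ W H.
    (The paper above adds sparsity and total-variation constraints and
    solves via ADMM; this is only the unconstrained baseline.)"""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(r)]
    eps = 1e-9
    for _ in range(iters):
        WtV = matmul(transpose(W), V)
        WtWH = matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(m)]
             for i in range(r)]
        VHt = matmul(V, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(r)]
             for i in range(n)]
    return W, H

# Rank-2 matrix with two diagonal blocks, the structure of interest.
V = [[1.0, 1.0, 0.0, 0.0],
     [1.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 1.0],
     [0.0, 0.0, 1.0, 1.0]]
W_, H_ = nmf(V, 2)
recon = matmul(W_, H_)
err = max(abs(recon[i][j] - V[i][j]) for i in range(4) for j in range(4))
```

On this easy example the unconstrained updates already recover the blocks; the paper's constraints matter when noise and overlap blur the diagonal structure.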
Image sensor with high dynamic range linear output
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Fossum, Eric R. (Inventor)
2007-01-01
Designs and operational methods to increase the dynamic range of image sensors, and APS devices in particular, by achieving more than one integration time for each pixel. An APS system with more than one column-parallel signal chain for readout is described for maintaining a high frame rate during readout. Each active pixel is sampled multiple times during a single frame readout, resulting in multiple integration times. The operational methods can also be used to obtain multiple integration times for each pixel with an APS design having a single column-parallel signal chain for readout. Furthermore, analog-to-digital conversion of high speed and high resolution can be implemented.
Xun-Ping, W; An, Z
2017-07-27
Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency and economy of snail surveys. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored in an experimental study in a 50 m×50 m plot in a marshland in the Poyang Lake region. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; secondly, the required number of optimal sampling points for each layer was calculated through the Hammond McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA were compared. Results SOPA had the smallest absolute error (0.2138); the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy SOPA proposed in this study obtains higher estimation accuracy than the other four methods.
Multi-chain Markov chain Monte Carlo methods for computationally expensive models
NASA Astrophysics Data System (ADS)
Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.
2017-12-01
Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better and conceivably accelerate convergence to the final distribution. We present results from tests conducted with the multi-chain method to show when the acceleration occurs: for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also appears able to accelerate the convergence of a few chains that start from suboptimal points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
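The multi-chain idea can be sketched with independent random-walk Metropolis chains and a basic Gelman-Rubin convergence diagnostic. The Gaussian target, starting points, and tuning values below are illustrative, not the Community Land Model setting, and no inter-chain communication is modeled:

```python
import math
import random
import statistics

def metropolis_chain(logpost, x0, steps, scale, rng):
    """One random-walk Metropolis chain over a 1-D parameter."""
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        lp_prop = logpost(prop)
        # Accept with probability min(1, exp(lp_prop - lp)).
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

def gelman_rubin(chains):
    """Basic R-hat: compares between-chain and within-chain variance;
    values near 1 indicate the chains have mixed."""
    n = len(chains[0])
    means = [statistics.mean(c) for c in chains]
    W = statistics.mean(statistics.variance(c) for c in chains)
    B = n * statistics.variance(means)
    var_hat = (n - 1) / n * W + B / n
    return (var_hat / W) ** 0.5

# Three chains targeting a standard normal, started far apart.
logpost = lambda x: -0.5 * x * x
chains = [metropolis_chain(logpost, x0, 4000, 1.0, random.Random(i))
          for i, x0 in enumerate((-5.0, 0.0, 5.0))]
rhat = gelman_rubin([c[2000:] for c in chains])  # discard burn-in
print(rhat)  # near 1 once the chains agree
```

Running chains from dispersed starting points, then checking R-hat, is the standard way to detect the suboptimal-start problem the abstract mentions.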
On sample size of the Kruskal-Wallis test with application to a mouse peritoneal cavity study.
Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui
2011-03-01
As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.
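A simulation-based flavor of such power calculations can be sketched as follows. The H statistic is the standard Kruskal-Wallis formula with midranks for ties, but the normal-shift alternative, group count, and sample sizes are illustrative rather than the authors' pilot-study-based procedure:

```python
import random

def kruskal_wallis_H(groups):
    """Kruskal-Wallis H statistic with midranks for tied values."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    N = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < N:
        j = i
        while j < N and pooled[j][0] == pooled[i][0]:
            j += 1
        midrank = (i + 1 + j) / 2.0       # mean of ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += midrank
        i = j
    return 12.0 / (N * (N + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)) - 3.0 * (N + 1)

def simulated_power(n_per_group, shift, reps=200, seed=0):
    """Monte Carlo rejection rate of the Kruskal-Wallis test for three
    normal groups, against the chi-square(df=2) critical value 5.991
    (alpha = 0.05, large-sample approximation)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        groups = [[rng.gauss(mu, 1.0) for _ in range(n_per_group)]
                  for mu in (0.0, shift, shift)]
        if kruskal_wallis_H(groups) > 5.991:
            hits += 1
    return hits / reps

H_sep = kruskal_wallis_H([[1, 2, 3], [4, 5, 6], [7, 8, 9]])  # ~7.2
power_alt = simulated_power(15, 1.0)
power_null = simulated_power(15, 0.0)
```

Repeating `simulated_power` over a grid of `n_per_group` values gives the sample size at which a target power is reached, the question the paper addresses analytically.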
Multiple-wavelength spectroscopic quantitation of light-absorbing species in scattering media
Nathel, Howard; Cartland, Harry E.; Colston, Jr., Billy W.; Everett, Matthew J.; Roe, Jeffery N.
2000-01-01
An oxygen concentration measurement system for blood hemoglobin comprises a multiple-wavelength low-coherence optical light source that is coupled by single mode fibers through a splitter and combiner and focused on both a target tissue sample and a reference mirror. Reflections from both the reference mirror and from the depths of the target tissue sample are carried back and mixed to produce interference fringes in the splitter and combiner. The reference mirror is set such that the distance traversed in the reference path is the same as the distance traversed into and back from the target tissue sample at some depth in the sample that will provide light attenuation information that is dependent on the oxygen in blood hemoglobin in the target tissue sample. Two wavelengths of light are used to obtain concentrations. The method can be used to measure total hemoglobin concentration [Hb_deoxy + Hb_oxy] or total blood volume in tissue and in conjunction with oxygen saturation measurements from pulse oximetry can be used to absolutely quantify oxyhemoglobin [HbO_2] in tissue. The apparatus and method provide a general means for absolute quantitation of an absorber dispersed in a highly scattering medium.
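The two-wavelength quantitation reduces to solving a 2x2 Beer-Lambert system for the two hemoglobin species. The extinction coefficients and absorbances below are hypothetical placeholders, not tabulated hemoglobin values:

```python
def hemoglobin_concentrations(A1, A2, eps, path_cm=1.0):
    """Solve the 2x2 Beer-Lambert system
        A(lambda_i) = l * (e_oxy(i) * C_oxy + e_deoxy(i) * C_deoxy)
    for the two species from absorbances at two wavelengths.
    eps holds the (hypothetical) extinction coefficients:
    rows are wavelengths, columns are (oxy, deoxy)."""
    (e11, e12), (e21, e22) = eps
    det = path_cm * (e11 * e22 - e12 * e21)   # Cramer's rule
    c_oxy = (A1 * e22 - A2 * e12) / det
    c_deoxy = (A2 * e11 - A1 * e21) / det
    return c_oxy, c_deoxy

# Placeholder coefficients and absorbances that encode
# C_oxy = 2.0 and C_deoxy = 3.0 (arbitrary units).
eps = ((0.5, 1.0),
       (1.2, 0.8))
c_oxy, c_deoxy = hemoglobin_concentrations(4.0, 4.8, eps)
saturation = c_oxy / (c_oxy + c_deoxy)
print(c_oxy, c_deoxy)  # recovers approximately (2.0, 3.0)
```

Total hemoglobin is then c_oxy + c_deoxy, and the ratio gives the oxygen saturation used alongside pulse oximetry in the abstract.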
Extreme Quantile Estimation in Binary Response Models
1990-03-01
in Cancer Research," Biometrika, Vol. 66, pp. 307-316. Hsi, B.P. [1969], "The Multiple Sample Up-and-Down Method in Bioassay," Journal of the American... New Method of Estimation," Biometrika, Vol. 53, pp. 439-454. Wetherill, G.B. [1976], Sequential Methods in Statistics, London: Chapman and Hall. Wu, C.F.J
Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong
2015-05-01
A study of medical expenditure and its influencing factors among students enrolled in the Urban Resident Basic Medical Insurance (URBMI) scheme in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing-data mechanism, this study proposes a two-stage method, combining multiple imputation with a sample selection model, to deal with two missing-data mechanisms simultaneously. A total of 1 190 questionnaires were returned by the students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. Among the returned questionnaires, the dependent variable was not missing at random (NMAR) in 2.52% and missing at random (MAR) in 7.14%. First, multiple imputation was conducted for the MAR values using the complete data; then, a sample selection model was used to correct for NMAR within the multiple imputation; finally, a multi-factor analysis model was established. Based on 1 000 resamplings, the best scheme for filling the randomly missing values at the observed missing proportion was the predictive mean matching (PMM) method. With this optimal scheme, the two-stage analysis was conducted. It was found that the factors influencing annual medical expenditure among students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for some reason, self-medication, and the acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can deal effectively with non-response bias and selection bias in the dependent variable of survey data.
Cheng, Dongwan; Zheng, Li; Hou, Junjie; Wang, Jifeng; Xue, Peng; Yang, Fuquan; Xu, Tao
2015-01-01
The absolute quantification of target proteins in proteomics involves stable isotope dilution coupled with multiple reaction monitoring mass spectrometry (SID-MRM-MS). The successful preparation of stable isotope-labeled internal standard peptides is an important prerequisite for SID-MRM absolute quantification methods. Dimethyl labeling has been widely used in relative quantitative proteomics; it is fast, simple, reliable, cost-effective, and applicable to any protein sample, making it an ideal candidate method for preparing stable isotope-labeled internal standards. MRM mass spectrometry offers high sensitivity, specificity, and throughput, and can quantify multiple proteins simultaneously, including low-abundance proteins in precious samples such as pancreatic islets. In this study, a new method for the absolute quantification of three proteases involved in insulin maturation, namely PC1/3, PC2 and CPE, was developed by coupling a stable isotope dimethyl labeling strategy for internal standard peptide preparation with SID-MRM-MS quantitative technology. This method offers a new and effective approach to deeper understanding of the functional status of pancreatic β cells and the pathogenesis of diabetes.
Rešková, Z; Koreňová, J; Kuchta, T
2014-04-01
A total of 256 Staphylococcus aureus isolates were obtained from 98 samples (34 swabs and 64 food samples) collected from small or medium meat- and cheese-processing plants in Slovakia. The strains were genotypically characterized by multiple locus variable number of tandem repeats analysis (MLVA), involving multiplex polymerase chain reaction (PCR) with subsequent separation of the amplified DNA fragments by automated flow-through gel electrophoresis. With this panel of isolates, MLVA produced 31 profile types, sufficient discrimination to facilitate the description of spatial and temporal aspects of contamination. Further data on MLVA discrimination were obtained by typing a subpanel of strains by multilocus sequence typing (MLST). MLVA coupled to automated electrophoresis proved to be an effective, comparatively fast and inexpensive method for tracing S. aureus contamination of food-processing factories. Subspecies genotyping of microbial contaminants in food-processing factories may facilitate identification of spatial and temporal aspects of contamination and thus help to properly manage process hygiene. With S. aureus, MLVA proved to be an effective method for this purpose, being sufficiently discriminative yet comparatively fast and inexpensive. The application of automated flow-through gel electrophoresis to the separation of DNA fragments produced by multiplex PCR improved the accuracy and speed of the method. © 2013 The Society for Applied Microbiology.
Refining lunar impact chronology through high spatial resolution 40Ar/39Ar dating of impact melts
Mercer, Cameron M.; Young, Kelsey E.; Weirich, John R.; Hodges, Kip V.; Jolliff, Bradley L.; Wartho, Jo-Anne; van Soest, Matthijs C.
2015-01-01
Quantitative constraints on the ages of melt-forming impact events on the Moon are based primarily on isotope geochronology of returned samples. However, interpreting the results of such studies can often be difficult because the provenance region of any sample returned from the lunar surface may have experienced multiple impact events over the course of billions of years of bombardment. We illustrate this problem with new laser microprobe 40Ar/39Ar data for two Apollo 17 impact melt breccias. Whereas one sample yields a straightforward result, indicating a single melt-forming event at ca. 3.83 Ga, data from the other sample document multiple impact melt–forming events between ca. 3.81 Ga and at least as young as ca. 3.27 Ga. Notably, published zircon U/Pb data indicate the existence of even older melt products in the same sample. The revelation of multiple impact events through 40Ar/39Ar geochronology is likely not to have been possible using standard incremental heating methods alone, demonstrating the complementarity of the laser microprobe technique. Evidence for 3.83 Ga to 3.81 Ga melt components in these samples reinforces emerging interpretations that Apollo 17 impact breccia samples include a significant component of ejecta from the Imbrium basin impact. Collectively, our results underscore the need to quantitatively resolve the ages of different melt generations from multiple samples to improve our current understanding of the lunar impact record, and to establish the absolute ages of important impact structures encountered during future exploration missions in the inner Solar System. PMID:26601128
Tran, Ngoc Han; Chen, Hongjie; Do, Thanh Van; Reinhard, Martin; Ngo, Huu Hao; He, Yiliang; Gin, Karina Yew-Hoong
2016-10-01
A robust and sensitive analytical method was developed for the simultaneous analysis of 21 target antimicrobials in different environmental water samples. Both single SPE and tandem SPE cartridge systems were investigated to simultaneously extract multiple classes of antimicrobials. Experimental results showed that good extraction efficiencies (84.5-105.6%) were observed for the vast majority of the target analytes when extraction was performed using the tandem SPE cartridge (SB+HR-X) system under an extraction pH of 3.0. HPLC-MS/MS parameters were optimized for simultaneous analysis of all the target analytes in a single injection. Quantification of target antimicrobials in water samples was accomplished using 15 isotopically labeled internal standards (ILISs), which allowed the efficient compensation of the losses of target analytes during sample preparation and correction of matrix effects during UHPLC-MS/MS as well as instrument fluctuations in MS/MS signal intensity. Method quantification limit (MQL) for most target analytes based on SPE was below 5 ng/L for surface waters, 10 ng/L for treated wastewater effluents, and 15 ng/L for raw wastewater. The method was successfully applied to detect and quantify the occurrence of the target analytes in raw influent, treated effluent and surface water samples. Copyright © 2016 Elsevier B.V. All rights reserved.
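The internal-standard correction described above amounts to simple ratio arithmetic. The function below is an illustrative sketch: the peak areas, spiked ILIS concentration, and relative response factor are hypothetical values, not from the paper:

```python
def quantify_with_ilis(area_analyte, area_ilis, conc_ilis_ngL, rrf=1.0):
    """Isotope-dilution quantification: the analyte concentration is
    the analyte/internal-standard peak-area ratio, scaled by the spiked
    ILIS concentration and a relative response factor (RRF) from
    calibration. Preparation losses and matrix effects largely cancel
    because the labeled standard behaves like the analyte."""
    return (area_analyte / area_ilis) * conc_ilis_ngL / rrf

# Hypothetical peak areas with 100 ng/L of labeled standard spiked in.
conc = quantify_with_ilis(area_analyte=5.0e5, area_ilis=2.5e5,
                          conc_ilis_ngL=100.0)
print(conc)  # 200.0 ng/L
```

This cancellation of losses is why ILIS-based quantification tolerates the strong matrix effects of raw wastewater noted in the abstract.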
Robust phase retrieval of complex-valued object in phase modulation by hybrid Wirtinger flow method
NASA Astrophysics Data System (ADS)
Wei, Zhun; Chen, Wen; Yin, Tiantian; Chen, Xudong
2017-09-01
This paper presents a robust iterative algorithm, known as hybrid Wirtinger flow (HWF), for phase retrieval (PR) of complex objects from noisy diffraction intensities. Numerical simulations indicate that the HWF method consistently outperforms conventional PR methods in terms of both accuracy and convergence rate in multiple phase modulations. The proposed algorithm is also more robust to low oversampling ratios, loose constraints, and noisy environments. Furthermore, compared with traditional Wirtinger flow, sample complexity is largely reduced. It is expected that the proposed HWF method will find applications in the rapidly growing coherent diffractive imaging field for high-quality image reconstruction with multiple modulations, as well as other disciplines where PR is needed.
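The plain Wirtinger-flow recipe that HWF builds on (spectral initialization followed by gradient steps on the intensity residual) can be sketched as below. This is the baseline WF method, not the authors' hybrid variant, and all problem sizes and step parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 256                              # signal length, measurements (8x oversampling)
z_true = rng.normal(size=n) + 1j * rng.normal(size=n)
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
y = np.abs(A @ z_true) ** 2                 # noiseless intensity-only measurements

# Spectral initialization: top eigenvector of (1/m) * A^H diag(y) A
Y = (A.conj().T * y) @ A / m
_, V = np.linalg.eigh(Y)                    # eigh returns ascending eigenvalues
z = V[:, -1] * np.sqrt(y.mean())            # scale to match the measurement energy
norm0 = np.linalg.norm(z)

# Gradient iterations on the intensity residual
for t in range(2000):
    mu = min(1 - np.exp(-(t + 1) / 330), 0.2)   # step-size ramp heuristic
    r = np.abs(A @ z) ** 2 - y
    grad = (A.conj().T @ (r * (A @ z))) / m     # Wirtinger gradient
    z = z - (mu / norm0 ** 2) * grad

# Recovery error up to the inherent global-phase ambiguity of PR
c = np.vdot(z, z_true)
err = np.linalg.norm(z_true - (c / abs(c)) * z) / np.linalg.norm(z_true)
```

The `err` metric compares the iterate to the ground truth after aligning the unrecoverable global phase; HWF's reported advantages concern behavior at lower oversampling ratios and in noise, which this noiseless sketch does not probe.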
Evaluation of seven aquatic sampling methods for amphibians and other aquatic fauna
Gunzburger, M.S.
2007-01-01
To design effective and efficient research and monitoring programs researchers must have a thorough understanding of the capabilities and limitations of their sampling methods. Few direct comparative studies exist for aquatic sampling methods for amphibians. The objective of this study was to simultaneously employ seven aquatic sampling methods in 10 wetlands to compare amphibian species richness and number of individuals detected with each method. Four sampling methods allowed counts of individuals (metal dipnet, D-frame dipnet, box trap, crayfish trap), whereas the other three methods allowed detection of species (visual encounter, aural, and froglogger). Amphibian species richness was greatest with froglogger, box trap, and aural samples. For anuran species, the sampling methods by which each life stage was detected were related to the relative length of larval and breeding periods and to tadpole size. Detection probability of amphibians varied across sampling methods. Box trap sampling resulted in the most precise amphibian count, but the precision of all four count-based methods was low (coefficient of variation > 145 for all methods). The efficacy of the four count sampling methods at sampling fish and aquatic invertebrates was also analyzed because these predatory taxa are known to be important predictors of amphibian habitat distribution. Species richness and counts were similar for fish with the four methods, whereas invertebrate species richness and counts were greatest in box traps. An effective wetland amphibian monitoring program in the southeastern United States should include multiple sampling methods to obtain the most accurate assessment of species community composition at each site. The combined use of frogloggers, crayfish traps, and dipnets may be the most efficient and effective amphibian monitoring protocol. © 2007 Brill Academic Publishers.
Comparability among four invertebrate sampling methods, Fountain Creek Basin, Colorado, 2010-2012
Zuellig, Robert E.; Bruce, James F.; Stogner, Sr., Robert W.; Brown, Krystal D.
2014-01-01
The U.S. Geological Survey, in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, designed a study to determine if sampling method and sample timing resulted in comparable samples and assessments of biological condition. To accomplish this task, annual invertebrate samples were collected concurrently using four sampling methods at 15 U.S. Geological Survey streamflow gages in the Fountain Creek basin from 2010 to 2012. Collectively, the four methods are used by local (U.S. Geological Survey cooperative monitoring program) and State monitoring programs (Colorado Department of Public Health and Environment) in the Fountain Creek basin to produce two distinct sample types for each program, targeting single and multiple habitats. This study found distinguishable differences between single- and multi-habitat sample types using both community similarities and multi-metric index values, while methods from each program within a sample type were comparable. This indicates that the Colorado Department of Public Health and Environment methods were compatible with the cooperative monitoring program methods within both multi- and single-habitat sample types. Comparisons between September and October samples found distinguishable differences based on community similarities for both sample types, whereas differences were found only for single-habitat samples when multi-metric index values were considered. At one site, differences between September and October index values from single-habitat samples resulted in opposing assessments of biological condition. Direct application of the results to inform the revision of the existing Fountain Creek basin U.S. Geological Survey cooperative monitoring program is discussed.
An improved SRC method based on virtual samples for face recognition
NASA Astrophysics Data System (ADS)
Fu, Lijun; Chen, Deyun; Lin, Kezheng; Li, Ao
2018-07-01
The sparse representation classifier (SRC) performs classification by evaluating which class leads to the minimum representation error. However, in real-world settings the number of available training samples is limited and, owing to noise interference, the training samples cannot accurately represent the test sample linearly. Therefore, in this paper, we first produce virtual samples by exploiting the original training samples, with the aim of increasing the number of training samples. Then, we treat the intra-class differences as a representation of partial noise, and use the intra-class differences and training samples simultaneously to represent the test sample linearly according to the theory of the SRC algorithm. Using weighted score-level fusion, the respective representation scores of the virtual samples and the original training samples are fused to obtain the final classification results. Experimental results on multiple face databases show that the proposed method achieves very satisfactory classification performance.
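The core SRC decision rule that the abstract extends can be sketched as follows. For brevity the l1 sparse coding step is replaced here by an l2-regularized surrogate, and the paper's virtual-sample generation and score-level fusion are omitted:

```python
import numpy as np

def src_classify(X, labels, x_test, lam=0.01):
    """Code x_test over the training matrix X (features x samples), then
    assign the class whose coefficients alone reconstruct it best."""
    # Ridge surrogate for the sparse coding step: min_a ||x - X a||^2 + lam ||a||^2
    G = X.T @ X + lam * np.eye(X.shape[1])
    a = np.linalg.solve(G, X.T @ x_test)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        a_c = np.where(labels == c, a, 0.0)   # keep only class-c coefficients
        res = np.linalg.norm(x_test - X @ a_c)
        if res < best_res:
            best, best_res = c, res
    return best
```

In the paper's scheme, the same residual-based scoring would be applied to both original and virtual training dictionaries before fusing the scores.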
Simultaneous multi-beam planar array IR (pair) spectroscopy
Elmore, Douglas L.; Rabolt, John F.; Tsao, Mei-Wei
2005-09-13
An apparatus and method capable of providing spatially multiplexed IR spectral information simultaneously in real-time for multiple samples or multiple spatial areas of one sample using IR absorption phenomena requires no moving parts or Fourier Transform during operation, and self-compensates for background spectra and degradation of component performance over time. IR spectral information and chemical analysis of the samples is determined by using one or more IR sources, sampling accessories for positioning the samples, optically dispersive elements, a focal plane array (FPA) arranged to detect the dispersed light beams, and a processor and display to control the FPA, and display an IR spectrograph. Fiber-optic coupling can be used to allow remote sensing. Portability, reliability, and ruggedness is enhanced due to the no-moving part construction. Applications include determining time-resolved orientation and characteristics of materials, including polymer monolayers. Orthogonal polarizers may be used to determine certain material characteristics.
Cho, Jin-Young; Lee, Hyoung-Joo; Jeong, Seul-Ki; Paik, Young-Ki
2017-12-01
Mass spectrometry (MS) is a widely used proteome analysis tool in biomedical science. In MS-based bottom-up proteomic approaches to protein identification, sequence database (DB) searching has been used routinely because of its simplicity and convenience. However, searching a sequence DB with multiple variable-modification options can increase processing time and false-positive errors in large, complicated MS data sets. Spectral library searching is an alternative that avoids the limitations of sequence DB searching and allows the detection of more peptides with high sensitivity. Unfortunately, this technique has lower proteome coverage, limiting the detection of novel and complete peptide sequences in biological samples. To solve these problems, we previously developed the "Combo-Spec Search" method, which manually combines multiple reference and simulated spectral libraries to analyze whole proteomes in a biological sample. In this study, we have developed a new analytical interface tool called "Epsilon-Q" to enhance the functions of both the Combo-Spec Search method and label-free protein quantification. Epsilon-Q automatically performs multiple spectral library searches, class-specific false-discovery-rate control, and result integration. It has a user-friendly graphical interface and demonstrates good performance in identifying and quantifying proteins by supporting standard MS data formats and spectrum-to-spectrum matching powered by SpectraST. Furthermore, when the Epsilon-Q interface is combined with the Combo-Spec Search method, the resulting Epsilon-Q system outperforms other sequence DB search engines in identifying and quantifying low-abundance proteins in biological samples. The Epsilon-Q system can be a versatile tool for comparative proteome analysis based on multiple spectral libraries and label-free quantification.
Faruki, Hawazin; Mayhew, Gregory M; Fan, Cheng; Wilkerson, Matthew D; Parker, Scott; Kam-Morgan, Lauren; Eisenberg, Marcia; Horten, Bruce; Hayes, D Neil; Perou, Charles M; Lai-Goldman, Myla
2016-06-01
Context: A histologic classification of lung cancer subtypes is essential in guiding therapeutic management. Objective: To complement morphology-based classification of lung tumors, a previously developed lung subtyping panel (LSP) of 57 genes was tested using multiple public fresh-frozen gene-expression data sets and a prospectively collected set of formalin-fixed, paraffin-embedded lung tumor samples. Design: The LSP gene-expression signature was evaluated in multiple lung cancer gene-expression data sets totaling 2177 patients collected from 4 platforms: Illumina RNAseq (San Diego, California), Agilent (Santa Clara, California) and Affymetrix (Santa Clara) microarrays, and quantitative reverse transcription-polymerase chain reaction. Gene centroids were calculated for each of 3 genomic-defined subtypes: adenocarcinoma, squamous cell carcinoma, and neuroendocrine, the latter of which encompassed both small cell carcinoma and carcinoid. Classification by LSP into 3 subtypes was evaluated in both fresh-frozen and formalin-fixed, paraffin-embedded tumor samples, and agreement with the original morphology-based diagnosis was determined. Results: The LSP-based classifications demonstrated overall agreement with the original clinical diagnosis ranging from 78% (251 of 322) to 91% (492 of 538 and 869 of 951) in the fresh-frozen public data sets and 84% (65 of 77) in the formalin-fixed, paraffin-embedded data set. The LSP performance was independent of tissue-preservation method and gene-expression platform. Secondary, blinded pathology review of formalin-fixed, paraffin-embedded samples demonstrated concordance of 82% (63 of 77) with the original morphology diagnosis. Conclusions: The LSP gene-expression signature is a reproducible and objective method for classifying lung tumors and demonstrates good concordance with morphology-based classification across multiple data sets.
The LSP panel can supplement morphologic assessment of lung cancers, particularly when classification by standard methods is challenging.
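Centroid-based subtype assignment of the kind the panel describes can be illustrated with a minimal nearest-centroid sketch; the LSP's exact assignment rule is not given in the abstract, so correlation to class centroids is an assumption here, and all gene values below are synthetic:

```python
import numpy as np

def nearest_centroid(expr, centroids):
    """Assign each sample (column of expr, genes x samples) to the subtype
    whose gene centroid it correlates with most strongly."""
    names = list(centroids)
    C = np.column_stack([centroids[k] for k in names])
    Xz = (expr - expr.mean(axis=0)) / expr.std(axis=0)   # z-score each sample
    Cz = (C - C.mean(axis=0)) / C.std(axis=0)            # z-score each centroid
    r = Xz.T @ Cz / expr.shape[0]                        # Pearson correlations
    return [names[i] for i in r.argmax(axis=1)]
```

Because correlation is scale-free, this kind of rule is relatively insensitive to platform and tissue-preservation effects, consistent with the cross-platform robustness the study reports.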
A simple and reliable method reducing sulfate to sulfide for multiple sulfur isotope analysis.
Geng, Lei; Savarino, Joel; Savarino, Clara A; Caillon, Nicolas; Cartigny, Pierre; Hattori, Shohei; Ishino, Sakiko; Yoshida, Naohiro
2018-02-28
Precise analysis of the four sulfur isotopes of sulfate in geological and environmental samples provides the means to extract unique information in wide geological contexts. Reduction of sulfate to sulfide is the first step to access such information. The conventional reduction method suffers from a cumbersome distillation system, long reaction times and a large volume of reducing solution. We present a new and simple method enabling the processing of multiple samples at one time with a much reduced volume of reducing solution. One mL of reducing solution made of HI and NaH2PO2 was added to a septum glass tube with dry sulfate. The tube was heated at 124°C and the produced H2S was purged with inert gas (He or N2) through gas-washing tubes and then collected in NaOH solution. The collected H2S was converted into Ag2S by adding AgNO3 solution, and the co-precipitated Ag2O was removed by adding a few drops of concentrated HNO3. Within 2-3 h, a 100% yield was observed for samples with 0.2-2.5 μmol Na2SO4. The reduction rate was much slower for BaSO4 and a complete reduction was not observed. International sulfur reference materials, NBS-127, SO-5 and SO-6, were processed with this method, and the measured versus accepted δ34S values yielded a linear regression line with a slope of 0.99 ± 0.01 and an R2 value of 0.998. The new methodology is easy to handle and allows us to process multiple samples at a time. It has also demonstrated good reproducibility in terms of H2S yield and for further isotope analysis. It is thus a good alternative to the conventional manual method, especially when processing samples with a limited amount of sulfate available. © 2017 The Authors. Rapid Communications in Mass Spectrometry published by John Wiley & Sons Ltd.
Zhang, Guang Lan; Keskin, Derin B.; Lin, Hsin-Nan; Lin, Hong Huang; DeLuca, David S.; Leppanen, Scott; Milford, Edgar L.; Reinherz, Ellis L.; Brusic, Vladimir
2014-01-01
Human leukocyte antigens (HLA) are important biomarkers because multiple diseases, drug toxicity, and vaccine responses reveal strong HLA associations. Current clinical HLA typing is an elimination process requiring serial testing. We present an alternative in situ synthesized DNA-based microarray method that contains hundreds of thousands of probes representing a complete overlapping set covering 1,610 clinically relevant HLA class I alleles accompanied by computational tools for assigning HLA type to 4-digit resolution. Our proof-of-concept experiment included 21 blood samples, 18 cell lines, and multiple controls. The method is accurate, robust, and amenable to automation. Typing errors were restricted to homozygous samples or those with very closely related alleles from the same locus, but readily resolved by targeted DNA sequencing validation of flagged samples. High-throughput HLA typing technologies that are effective, yet inexpensive, can be used to analyze the world’s populations, benefiting both global public health and personalized health care. PMID:25505899
Meta-analysis with missing study-level sample variance data.
Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P
2016-07-30
We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
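The "mean imputation" baseline the abstract describes can be made concrete with a fixed-effect inverse-variance pooling sketch; the authors' gamma meta-regression imputation is not reproduced here, only the common approach it improves on:

```python
import numpy as np

def pooled_difference(d, v, n):
    """Fixed-effect inverse-variance pooling of study mean differences d,
    with missing sampling variances v (NaN) imputed by the sample-size-
    weighted mean of the observed variances (the 'mean imputation' the
    abstract notes is common but not necessarily optimal)."""
    d, v, n = map(np.asarray, (d, v, n))
    obs = ~np.isnan(v)
    v_imp = np.where(obs, v, np.average(v[obs], weights=n[obs]))
    w = 1.0 / v_imp                      # inverse-variance weights
    est = np.sum(w * d) / np.sum(w)      # pooled mean difference
    se = np.sqrt(1.0 / np.sum(w))        # its standard error
    return est, se
```

Under the multiple-imputation alternative, the single imputed variance would be replaced by repeated draws from the imputation model, with Rubin's rules combining the resulting estimates.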
Two-dimensional correlation spectroscopy — Biannual survey 2007-2009
NASA Astrophysics Data System (ADS)
Noda, Isao
2010-06-01
The publication activities in the field of 2D correlation spectroscopy are surveyed with the emphasis on papers published during the last two years. Pertinent review articles and conference proceedings are discussed first, followed by the examination of noteworthy developments in the theory and applications of 2D correlation spectroscopy. Specific topics of interest include Pareto scaling, analysis of randomly sampled spectra, 2D analysis of data obtained under multiple perturbations, evolution of 2D spectra along additional variables, comparison and quantitative analysis of multiple 2D spectra, orthogonal sample design to eliminate interfering cross peaks, quadrature orthogonal signal correction and other data transformation techniques, data pretreatment methods, moving window analysis, extension of kernel and global phase angle analysis, covariance and correlation coefficient mapping, variant forms of sample-sample correlation, and different display methods. Various static and dynamic perturbation methods used in 2D correlation spectroscopy, e.g., temperature, composition, chemical reactions, H/D exchange, physical phenomena like sorption, diffusion and phase transitions, optical and biological processes, are reviewed. Analytical probes used in 2D correlation spectroscopy include IR, Raman, NIR, NMR, X-ray, mass spectrometry, chromatography, and others. Application areas of 2D correlation spectroscopy are diverse, encompassing synthetic and natural polymers, liquid crystals, proteins and peptides, biomaterials, pharmaceuticals, food and agricultural products, solutions, colloids, surfaces, and the like.
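The synchronous and asynchronous correlation spectra at the heart of this literature follow directly from the set of perturbation-dependent spectra via Noda's formulas; a minimal sketch:

```python
import numpy as np

def twod_correlation(X):
    """Synchronous and asynchronous 2D correlation spectra from dynamic
    spectra X (m perturbation points x n spectral channels), assumed
    mean-centered along the perturbation axis."""
    m = X.shape[0]
    sync = X.T @ X / (m - 1)
    # Hilbert-Noda transformation matrix: N[j, k] = 1 / (pi * (k - j)), 0 on the diagonal
    j, k = np.mgrid[0:m, 0:m]
    with np.errstate(divide="ignore"):
        N = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
    asyn = X.T @ N @ X / (m - 1)
    return sync, asyn
```

The synchronous map is symmetric (in-phase intensity changes) and the asynchronous map antisymmetric (out-of-phase changes), which is what makes cross-peak signs informative about sequential order of spectral events.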
A Self-Directed Method for Cell-Type Identification and Separation of Gene Expression Microarrays
Zuckerman, Neta S.; Noam, Yair; Goldsmith, Andrea J.; Lee, Peter P.
2013-01-01
Gene expression analysis is generally performed on heterogeneous tissue samples consisting of multiple cell types. Current methods developed to separate heterogeneous gene expression rely on prior knowledge of the cell-type composition and/or signatures - these are not available in most public datasets. We present a novel method to identify the cell-type composition, signatures and proportions per sample without the need for a priori information. The method was successfully tested on controlled and semi-controlled datasets and performed as accurately as current methods that do require additional information. As such, this method enables the analysis of cell-type specific gene expression using existing large pools of publicly available microarray datasets. PMID:23990767
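A generic stand-in for blind separation of mixed expression into cell-type signatures and proportions is nonnegative matrix factorization; this is not the paper's algorithm, only an illustrative sketch of the factorization such methods estimate:

```python
import numpy as np

def deconvolve(M, k, iters=800, seed=0):
    """Factor mixed expression M (genes x samples, nonnegative) into k
    cell-type signatures S (genes x k) and a mixing matrix P (k x samples)
    using multiplicative NMF updates. Column-normalizing P afterwards gives
    per-sample proportion estimates."""
    rng = np.random.default_rng(seed)
    S = rng.random((M.shape[0], k)) + 0.1
    P = rng.random((k, M.shape[1])) + 0.1
    for _ in range(iters):
        P *= (S.T @ M) / (S.T @ S @ P + 1e-9)   # update proportions
        S *= (M @ P.T) / (S @ P @ P.T + 1e-9)   # update signatures
    return S, P
```

Like the paper's setting, nothing about the cell types is supplied in advance; unlike the paper's method, plain NMF leaves the factorization identifiable only up to scaling and permutation.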
Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W
2015-06-01
Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time-by-sampling-method interaction, indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggest that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods; sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications, when efficiency is not paramount. © The Authors 2015.
Published by Oxford University Press on behalf of the Entomological Society of America. All rights reserved.
Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E
2014-06-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS), and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs.
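The core resampling step behind such synthetic-population generation can be sketched with a simplified weighted Bayesian bootstrap: Dirichlet shares tilted by the survey weights, then resampling to population size. This illustrates only the general idea; the paper's method additionally undoes stratification and clustering:

```python
import numpy as np

def synthetic_population(y, w, N, rng):
    """Draw one synthetic population of size N from observed values y with
    survey weights w, via a weighted Bayesian-bootstrap step."""
    g = rng.gamma(shape=np.asarray(w, dtype=float), scale=1.0)
    p = g / g.sum()                     # Dirichlet(w) draw via normalized gammas
    return rng.choice(np.asarray(y), size=N, replace=True, p=p)
```

Repeating the draw yields multiple synthetic populations, each of which can then be analyzed with ordinary IID methods, with between-population variability propagating the design-based uncertainty.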
Liu, Chengfang; Lu, Louise; Zhang, Linxiu; Bai, Yu; Medina, Alexis; Rozelle, Scott; Smith, Darvin Scott; Zhou, Changhai; Zang, Wei
2017-09-01
Soil-transmitted helminths, or parasitic intestinal worms, are among the most prevalent and geographically widespread parasitic infections in the world. Accurate diagnosis and quantification of helminth infection are critical for informing and assessing deworming interventions. The Kato-Katz thick smear technique, the most widely used laboratory method to quantitatively assess infection prevalence and infection intensity of helminths, has often been compared with other methods. Only a few small-scale studies, however, have considered ways to improve its diagnostic sensitivity. This study, conducted among 4,985 school-age children in an area of rural China with moderate prevalence of helminth infection, examines the effect on the diagnostic sensitivity of the Kato-Katz technique when two fecal samples collected over consecutive days are examined, compared with a single sample. A secondary aim was to consider cost-effectiveness by calculating an estimate of the marginal costs of obtaining an additional fecal sample. Our findings show that analysis of an additional fecal sample led to increases of 23%, 26%, and 100% for Ascaris lumbricoides, Trichuris trichiura, and hookworm prevalence, respectively. The cost of collecting a second fecal sample for our study population was approximately USD 4.60 per fecal sample. Overall, the findings suggest that investing 31% more capital in fecal sample collection prevents an underestimation of prevalence by about 21%, and hence improves the diagnostic sensitivity of the Kato-Katz method. Especially in areas with light-intensity infections of soil-transmitted helminths and limited public health resources, more accurate epidemiological surveillance using multiple fecal samples will critically inform decisions regarding infection control and prevention.
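The sensitivity gain from a second sample follows simple detection arithmetic: an infection is observed if at least one of k samples tests positive. A sketch under the simplifying assumptions of independent samples and constant per-sample sensitivity (real Kato-Katz sensitivity varies with infection intensity, so the numbers are illustrative):

```python
def observed_prevalence(true_prev, sens, k):
    """Expected observed prevalence when an infection counts as detected if
    at least one of k independent fecal samples is positive, assuming a
    constant per-sample sensitivity sens (illustrative simplification)."""
    return true_prev * (1 - (1 - sens) ** k)
```

For example, at per-sample sensitivity 0.6 a second sample raises detected prevalence by a factor of 0.84/0.6 = 1.4, a relative gain of the same order as the 23-26% increases the study reports for A. lumbricoides and T. trichiura.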
Weighted regression analysis and interval estimators
Donald W. Seegrist
1974-01-01
A method is presented for deriving the weighted least squares estimators for the parameters of a multiple regression model. Confidence intervals for expected values and prediction intervals for the means of future samples are given.
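The weighted least squares estimator this report derives has the standard closed form, sketched below; the interval formulas then build on the returned inverse matrix (times an error-variance estimate), which this sketch does not compute:

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares for a multiple regression model:
    beta = (X' W X)^{-1} X' W y with W = diag(w). The returned inverse
    matrix (X' W X)^{-1} is the kernel of the confidence- and
    prediction-interval formulas."""
    XtW = X.T * w                        # X' W without forming diag(w)
    XtWX_inv = np.linalg.inv(XtW @ X)
    beta = XtWX_inv @ (XtW @ y)
    return beta, XtWX_inv
```

For a new design point x0, the confidence interval for the expected value is centered at x0 @ beta with variance proportional to x0 @ XtWX_inv @ x0.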
Che, W W; Frey, H Christopher; Lau, Alexis K H
2014-12-01
Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m³ for simulated children using the stratified population sampling method, and 12.2 μg/m³ using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m³ due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In case studies evaluated here, the MCC method led to 10% higher estimation of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Wang, Ji; Fischer, Debra A.; Horch, Elliott P.; Xie, Ji-Wei
2015-06-01
As hundreds of gas giant planets have been discovered, we study how these planets form and evolve in different stellar environments, specifically in multiple stellar systems. In such systems, stellar companions may have a profound influence on gas giant planet formation and evolution via several dynamical effects such as truncation and perturbation. We select 84 Kepler Objects of Interest (KOIs) with gas giant planet candidates. We obtain high-angular resolution images using telescopes with adaptive optics (AO) systems. Together with the AO data, we use archival radial velocity data and dynamical analysis to constrain the presence of stellar companions. We detect 59 stellar companions around 40 KOIs for which we develop methods of testing their physical association. These methods are based on color information and galactic stellar population statistics. We find evidence of suppressive planet formation within 20 AU by comparing stellar multiplicity. The stellar multiplicity rate (MR) for planet host stars is 0% (+5%/−0%) within 20 AU. In comparison, the stellar MR is 18% ± 2% for the control sample, i.e., field stars in the solar neighborhood. The stellar MR for planet host stars is 34% ± 8% for separations between 20 and 200 AU, which is higher than the control sample at 12% ± 2%. Beyond 200 AU, stellar MRs are comparable between planet host stars and the control sample. We discuss the implications of the results on gas giant planet formation and evolution.
Spectrometer capillary vessel and method of making same
Linehan, J.C.; Yonker, C.R.; Zemanian, T.S.; Franz, J.A.
1995-11-21
The present invention is an arrangement of a glass capillary tube for use in spectroscopy. In particular, the invention is a capillary arranged in a manner permitting a plurality or multiplicity of passes of a sample material through a spectroscopic measurement zone. In a preferred embodiment, the multi-pass capillary is insertable within a standard NMR sample tube. The present invention further includes a method of making the multi-pass capillary tube and an apparatus for spinning the tube. 13 figs.
Rapid Sequencing of Complete env Genes from Primary HIV-1 Samples
Eren, Kemal; Ignacio, Caroline; Landais, Elise; Weaver, Steven; Phung, Pham; Ludka, Colleen; Hepler, Lance; Caballero, Gemma; Pollner, Tristan; Guo, Yan; Richman, Douglas; Poignard, Pascal; Paxinos, Ellen E.; Kosakovsky Pond, Sergei L.
2016-01-01
The ability to study rapidly evolving viral populations has been constrained by the read length of next-generation sequencing approaches and the sampling depth of single-genome amplification methods. Here, we develop and characterize a method using Pacific Biosciences’ Single Molecule, Real-Time (SMRT®) sequencing technology to sequence multiple, intact full-length human immunodeficiency virus-1 env genes amplified from viral RNA populations circulating in blood, and provide computational tools for analyzing and visualizing these data. PMID:29492273
Device and method for automated separation of a sample of whole blood into aliquots
Burtis, Carl A.; Johnson, Wayne F.
1989-01-01
A device and a method for automated processing and separation of an unmeasured sample of whole blood into multiple aliquots of plasma. Capillaries are radially oriented on a rotor, with the rotor defining a sample chamber, transfer channels, overflow chamber, overflow channel, vent channel, cell chambers, and processing chambers. A sample of whole blood is placed in the sample chamber, and when the rotor is rotated, the blood moves outward through the transfer channels to the processing chambers where the blood is centrifugally separated into a solid cellular component and a liquid plasma component. When the rotor speed is decreased, the plasma component backfills the capillaries resulting in uniform aliquots of plasma which may be used for subsequent analytical procedures.
Cordeiro, Fernanda B; Ferreira, Christina R; Sobreira, Tiago Jose P; Yannell, Karen E; Jarmusch, Alan K; Cedenho, Agnaldo P; Lo Turco, Edson G; Cooks, R Graham
2017-09-15
We describe multiple reaction monitoring (MRM)-profiling, which provides accelerated discovery of discriminating molecular features, and its application to human polycystic ovary syndrome (PCOS) diagnosis. The discovery phase of MRM-profiling seeks molecular features based on prior knowledge of the chemical functional groups likely to be present in the sample. It does this through a limited number of pre-chosen, chemically specific neutral loss and/or precursor ion MS/MS scans. The output of the discovery phase is a set of precursor/product transitions. In the screening phase these MRM transitions are used to interrogate multiple samples (hence the name MRM-profiling). MRM-profiling was applied to follicular fluid samples of 22 controls and 29 clinically diagnosed PCOS patients. Representative samples were delivered by flow injection to a triple quadrupole mass spectrometer set to perform the pre-chosen neutral loss and/or precursor ion MS/MS scans; the output of this discovery phase was a set of 1012 precursor/product transitions. In the screening phase each individual sample was interrogated for these MRM transitions. Principal component analysis (PCA) and receiver operating characteristic (ROC) curves were used for statistical analysis. To evaluate the method's performance, half the samples were used to build a classification model (testing set) and half were blinded (validation set). Twenty transitions were used for the classification of the blind samples; most of them (N = 19) showed lower abundances in the PCOS group and corresponded to phosphatidylethanolamine (PE) and phosphatidylserine (PS) lipids. Agreement of 73% with clinical diagnosis was found when classifying the 26 blind samples. MRM-profiling is a supervised method characterized by its simplicity, speed and the absence of chromatographic separation. It can be used to rapidly isolate discriminating molecules in healthy/disease conditions by tailored screening of signals associated with hundreds of molecules in complex samples. Copyright © 2017 John Wiley & Sons, Ltd.
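As a concrete illustration of the ROC step above, the area under the curve for a single MRM transition can be computed as a Mann-Whitney probability: the chance that a randomly chosen control sample shows a higher intensity than a randomly chosen PCOS sample. All intensity values below are hypothetical, not the study's data.

```python
def roc_auc(lower_group, higher_group):
    """AUC as the Mann-Whitney probability that a value drawn from
    higher_group exceeds one drawn from lower_group (ties count 0.5)."""
    pairs = [(x, y) for x in lower_group for y in higher_group]
    wins = sum(1.0 if y > x else 0.5 if y == x else 0.0 for x, y in pairs)
    return wins / len(pairs)

# Hypothetical relative intensities for one PE transition: lower in PCOS,
# as reported for 19 of the 20 classifying transitions.
pcos = [3.9, 4.2, 3.5, 4.9]
controls = [5.1, 4.8, 6.0, 5.5]
auc = roc_auc(pcos, controls)
print(auc)  # 0.9375: this transition separates the two groups well
```

An AUC near 0.5 would indicate a transition with no discriminating power; values near 1 mark candidates worth carrying into the screening phase.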
ERIC Educational Resources Information Center
Ghamrawi, Norma
2014-01-01
This study examined teachers' use of the Multiple Intelligences Theory on vocabulary acquisition by preschoolers during English as a second language (ESL) classes in a K-12 school in Lebanon. Eighty kindergartners (KG II, aged 5 years) and eight teachers constituted the sample. The study used mixed methods, including observations of videotaped…
Dermatoglyphic features in patients with multiple sclerosis
Sabanciogullari, Vedat; Cevik, Seyda; Karacan, Kezban; Bolayir, Ertugrul; Cimen, Mehmet
2014-01-01
Objective: To examine dermatoglyphic features to clarify the implicated genetic predisposition in the etiology of multiple sclerosis (MS). Methods: The study was conducted between January and December 2013 in the Departments of Anatomy and Neurology, Cumhuriyet University School of Medicine, Sivas, Turkey. The dermatoglyphic data of 61 patients and a control group of 62 healthy adults, obtained with a digital scanner, were transferred to a computer environment. The ImageJ program was used, and the atd, dat, and adt angles, a-b ridge count, pattern types of all fingers, and ridge counts were calculated. Results: In both hands of the patients with MS, the a-b ridge count and the ridge counts of all fingers increased, and the differences in these values were statistically significant. There was also a statistically significant increase in the dat angle in both hands of the MS patients. In contrast, there was no statistically significant difference between the groups in terms of dermal ridge patterns, and the most frequent pattern in both groups was the ulnar loop. Conclusions: Aberrations in the distribution of dermatoglyphic patterns support a genetic predisposition in MS etiology. Individuals susceptible to multiple sclerosis may be identified by analyzing dermatoglyphic patterns. PMID:25274586
Dependability of Data Derived from Time Sampling Methods with Multiple Observation Targets
ERIC Educational Resources Information Center
Johnson, Austin H.; Chafouleas, Sandra M.; Briesch, Amy M.
2017-01-01
In this study, generalizability theory was used to examine the extent to which (a) time-sampling methodology, (b) number of simultaneous behavior targets, and (c) individual raters influenced variance in ratings of academic engagement for an elementary-aged student. Ten graduate-student raters, with an average of 7.20 hr of previous training in…
Advanced analysis techniques for uranium assay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, W. H.; Ensslin, Norbert; Carrillo, L. A.
2001-01-01
Uranium has a negligible passive neutron emission rate, making its assay practicable only with an active interrogation method. Active interrogation uses external neutron sources to induce fission events in the uranium in order to determine the mass. This technique requires careful calibration with standards that are representative of the items to be assayed. The samples to be measured are not always well represented by the available standards, which often leads to large biases. A technique of active multiplicity counting is being developed to reduce some of these assay difficulties. Active multiplicity counting uses the measured doubles and triples count rates to determine the neutron multiplication and the product of the source-sample coupling (C) and the 235U mass (m). Since the 235U mass always appears in the multiplicity equations as the product Cm, the coupling needs to be determined before the mass can be known. A relationship has been developed that relates the coupling to the neutron multiplication. The relationship is based on both an analytical derivation and empirical observations. To determine a scaling constant present in this relationship, known standards must be used. Evaluation of experimental data revealed an improvement over the traditional calibration-curve analysis method of fitting the doubles count rate to the 235U mass. Active multiplicity assay appears to relax the requirement that the calibration standards and unknown items have the same chemical form and geometry.
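To make the Cm-product step concrete, the sketch below assumes a simple linear coupling-multiplication relation with a calibration constant k; the actual relation in this work is derived analytically and empirically and is not reproduced here, and all numbers are hypothetical.

```python
def uranium_mass(cm_product, multiplication, k):
    """Recover the 235U mass from the measured coupling-mass product.

    cm_product: C*m, obtained from the doubles/triples multiplicity equations
    multiplication: neutron multiplication inferred from the same data
    k: scaling constant calibrated against known standards

    Assumes an illustrative linear coupling-multiplication relation
    C = k * (multiplication - 1); the paper's actual relation is not
    reproduced here.
    """
    coupling = k * (multiplication - 1.0)
    return cm_product / coupling

# Hypothetical numbers for illustration only
mass = uranium_mass(cm_product=2.4, multiplication=1.12, k=0.05)
print(f"{mass:.0f} g")
```

The point of the calculation is that once the coupling is pinned down through its relation to the multiplication, the mass falls out of the measured product Cm by simple division.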
Assessment of Sample Preparation Bias in Mass Spectrometry-Based Proteomics.
Klont, Frank; Bras, Linda; Wolters, Justina C; Ongay, Sara; Bischoff, Rainer; Halmos, Gyorgy B; Horvatovich, Péter
2018-04-17
For mass spectrometry-based proteomics, the selected sample preparation strategy is a key determinant of the information that will be obtained. However, the corresponding selection is often not based on a fit-for-purpose evaluation. Here we report a comparison of in-gel (IGD), in-solution (ISD), on-filter (OFD), and on-pellet (OPD) digestion workflows on the basis of targeted (a QconCAT-multiple reaction monitoring (MRM) method for mitochondrial proteins) and discovery proteomics (data-dependent acquisition, DDA) analyses using three different human head and neck tissues (i.e., nasal polyps, parotid gland, and palatine tonsils). Our study reveals differences between the sample preparation methods, for example, with respect to protein and peptide losses, quantification variability, protocol-induced methionine oxidation and asparagine/glutamine deamidation, and identification of cysteine-containing peptides. However, none of the methods performed best for all types of tissues, which argues against the existence of a universal sample preparation method for proteome analysis.
Miller, Arthur L; Drake, Pamela L; Murphy, Nathaniel C; Cauda, Emanuele G; LeBouf, Ryan F; Markevicius, Gediminas
Miners are exposed to silica-bearing dust, which can lead to silicosis, a potentially fatal lung disease. Currently, airborne silica is measured by collecting filter samples and sending them to a laboratory for analysis. Since this may take weeks, a field method is needed to inform decisions aimed at reducing exposures. This study investigates a field-portable Fourier transform infrared (FTIR) method for end-of-shift (EOS) measurement of silica on filter samples. Since the method entails localized analyses, spatial uniformity of dust deposition can affect accuracy and repeatability. The study, therefore, assesses the influence of radial deposition uniformity on the accuracy of the method. Using laboratory-generated Minusil and coal dusts and three different types of sampling systems, multiple sets of filter samples were prepared. All samples were collected in pairs to create parallel sets for training and validation. Silica was measured by FTIR at nine locations across the face of each filter, and the data were analyzed using a multiple regression technique that compared various models for predicting silica mass on the filters using different numbers of "analysis shots." It was shown that deposition uniformity is independent of particle type (kaolin vs. silica), which suggests the role of aerodynamic separation is negligible. Results also showed how the location and number of shots affected the predictive accuracy of the models. The coefficient of variation (CV) for the models when predicting the mass of validation samples was 4%-51%, depending on the number of points analyzed and the type of sampler used, which affected the uniformity of radial deposition on the filters. Using a single shot at the center of the filter yielded predictivity adequate for a field method (93% return, CV approximately 15%) for samples collected with 3-piece cassettes.
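The model-comparison step, fitting filter mass against FTIR responses at varying numbers of analysis points and scoring each model by the coefficient of variation on a held-out validation set, can be sketched as follows. The synthetic data and the simple one-feature least-squares fit are illustrative assumptions, not the study's samplers or models.

```python
import random
import statistics as st

random.seed(0)

# Synthetic filters: true silica mass (micrograms) and an FTIR response at
# nine points on each filter; responses carry point-to-point deposition noise.
masses = [random.uniform(20, 200) for _ in range(40)]
responses = [[m * random.uniform(0.9, 1.1) / 100.0 for _ in range(9)]
             for m in masses]

def validation_cv(n_points):
    """Fit mass = a + b * mean(first n_points responses) on half the filters,
    then report the CV (%) of residuals on the held-out validation half."""
    feats = [st.mean(r[:n_points]) for r in responses]
    xt, yt = feats[:20], masses[:20]              # training half
    xv, yv = feats[20:], masses[20:]              # validation half
    xbar, ybar = st.mean(xt), st.mean(yt)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xt, yt)) / \
        sum((x - xbar) ** 2 for x in xt)
    a = ybar - b * xbar
    resid = [y - (a + b * x) for x, y in zip(xv, yv)]
    return 100.0 * st.pstdev(resid) / st.mean(yv)

# Averaging more analysis points generally reduces the validation CV
print(f"1 point: {validation_cv(1):.1f}%  9 points: {validation_cv(9):.1f}%")
```

The same trade-off the study reports appears here: each extra analysis shot averages down deposition noise, at the cost of extra measurement time per filter.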
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM 2.5 and PM −2.5.
Code of Federal Regulations, 2014 CFR
2014-07-01
... section. All reference method samplers shall be of single-filter design (not multi-filter, sequential sample design). Each candidate method shall be setup and operated in accordance with its associated... precision specified in table C-4 of this subpart. (g) Test for additive and multiplicative bias (comparative...
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM 2.5 and PM −2.5.
Code of Federal Regulations, 2013 CFR
2013-07-01
... section. All reference method samplers shall be of single-filter design (not multi-filter, sequential sample design). Each candidate method shall be setup and operated in accordance with its associated... precision specified in table C-4 of this subpart. (g) Test for additive and multiplicative bias (comparative...
Howard B. Stauffer; Cynthia J. Zabel; Jeffrey R. Dunk
2005-01-01
We compared a set of competing logistic regression habitat selection models for Northern Spotted Owls (Strix occidentalis caurina) in California. The habitat selection models were estimated, compared, evaluated, and tested using multiple sample datasets collected on federal forestlands in northern California. We used Bayesian methods in interpreting...
Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.
Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David
2008-04-01
A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
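The Lincoln-Petersen logic of treating hair-snag captures as the first session and rub-tree captures as the recapture session can be illustrated with the standard bias-corrected (Chapman) form of the estimator. This is a textbook sketch with hypothetical counts, not the Huggins-Pledger estimators actually fitted in program MARK.

```python
def chapman_estimate(n1, n2, m2):
    """Bias-corrected Lincoln-Petersen (Chapman) population estimate.

    n1: individuals identified in session 1 (hair-snag captures)
    n2: individuals identified in session 2 (rub-tree captures)
    m2: individuals identified in both sessions (recaptures)
    Returns the point estimate and its approximate variance.
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    # Standard approximate variance for the Chapman estimator
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / \
          ((m2 + 1) ** 2 * (m2 + 2))
    return n_hat, var

# Hypothetical counts: 60 bears at hair snags, 45 at rub trees, 20 at both
n_hat, var = chapman_estimate(60, 45, 20)
print(round(n_hat), round(var ** 0.5))  # point estimate and standard error
```

The precision gain the authors report from adding rub-tree data shows up in this form directly: more recaptures (larger m2) shrink both the estimate's denominator uncertainty and the variance term.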
Sarvin, Boris; Fedorova, Elizaveta; Shpigun, Oleg; Titova, Maria; Nikitin, Mikhail; Kochkin, Dmitry; Rodin, Igor; Stavrianidi, Andrey
2018-03-30
In this paper, the ultrasound assisted extraction method for isolation of steroidal glycosides from D. deltoidea plant cell suspension culture with a subsequent HPLC-MS determination was developed. After the organic solvent was selected via a two-factor experiment the optimization via Latin Square 4 × 4 experimental design was carried out for the following parameters: extraction time, organic solvent concentration in extraction solution and the ratio of solvent to sample. It was also shown that the ultrasound assisted extraction method is not suitable for isolation of steroidal glycosides from the D. deltoidea plant material. The results were double-checked using the multiple successive extraction method and refluxing extraction. Optimal conditions for the extraction of steroidal glycosides by the ultrasound assisted extraction method were: extraction time, 60 min; acetonitrile (water) concentration in extraction solution, 50%; the ratio of solvent to sample, 400 mL/g. Also, the developed method was tested on D. deltoidea cell suspension cultures of different terms and conditions of cultivation. The completeness of the extraction was confirmed using the multiple successive extraction method. Copyright © 2018 Elsevier B.V. All rights reserved.
Tsai, Yu-Shuen; Aguan, Kripamoy; Pal, Nikhil R.; Chung, I-Fang
2011-01-01
Informative genes from microarray data can be used to construct prediction model and investigate biological mechanisms. Differentially expressed genes, the main targets of most gene selection methods, can be classified as single- and multiple-class specific signature genes. Here, we present a novel gene selection algorithm based on a Group Marker Index (GMI), which is intuitive, of low-computational complexity, and efficient in identification of both types of genes. Most gene selection methods identify only single-class specific signature genes and cannot identify multiple-class specific signature genes easily. Our algorithm can detect de novo certain conditions of multiple-class specificity of a gene and makes use of a novel non-parametric indicator to assess the discrimination ability between classes. Our method is effective even when the sample size is small as well as when the class sizes are significantly different. To compare the effectiveness and robustness we formulate an intuitive template-based method and use four well-known datasets. We demonstrate that our algorithm outperforms the template-based method in difficult cases with unbalanced distribution. Moreover, the multiple-class specific genes are good biomarkers and play important roles in biological pathways. Our literature survey supports that the proposed method identifies unique multiple-class specific marker genes (not reported earlier to be related to cancer) in the Central Nervous System data. It also discovers unique biomarkers indicating the intrinsic difference between subtypes of lung cancer. We also associate the pathway information with the multiple-class specific signature genes and cross-reference to published studies. We find that the identified genes participate in the pathways directly involved in cancer development in leukemia data. 
Our method gives a promising way to find genes that can involve in pathways of multiple diseases and hence opens up the possibility of using an existing drug on other diseases as well as designing a single drug for multiple diseases. PMID:21909426
Bruce, James F.; Roberts, James J.; Zuellig, Robert E.
2018-05-24
The U.S. Geological Survey (USGS), in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, analyzed previously collected invertebrate data to determine the comparability among four sampling methods and two versions (2010 and 2017) of the Colorado Benthic Macroinvertebrate Multimetric Index (MMI). For this study, annual macroinvertebrate samples were collected concurrently (in space and time) at 15 USGS surface-water gaging stations in the Fountain Creek Basin from 2010 to 2012 using four sampling methods. The USGS monitoring project in the basin uses two of the methods and the Colorado Department of Public Health and Environment recommends the other two. These methods belong to two distinct sample types, one that targets single habitats and one that targets multiple habitats. The study results indicate that there are significant differences in MMI values obtained from the single-habitat and multihabitat sample types but methods from each program within each sample type produced comparable values. This study also determined that MMI values calculated by different versions of the Colorado Benthic Macroinvertebrate MMI are indistinguishable. This indicates that the Colorado Department of Public Health and Environment methods are comparable with the USGS monitoring project methods for single-habitat and multihabitat sample types. This report discusses the direct application of the study results to inform the revision of the existing USGS monitoring project in the Fountain Creek Basin.
High throughput analysis of samples in flowing liquid
Ambrose, W. Patrick; Grace, W. Kevin; Goodwin, Peter M.; Jett, James H.; Orden, Alan Van; Keller, Richard A.
2001-01-01
Apparatus and method enable imaging multiple fluorescent sample particles in a single flow channel. A flow channel defines a flow direction for samples in a flow stream and has a viewing plane perpendicular to the flow direction. A laser beam is formed as a ribbon having a width effective to cover the viewing plane. Imaging optics are arranged to view the viewing plane to form an image of the fluorescent sample particles in the flow stream, and a camera records the image formed by the imaging optics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakayasu, Ernesto S.; Nicora, Carrie D.; Sims, Amy C.
2016-05-03
Integrative multi-omics analyses can empower more effective investigation and complete understanding of complex biological systems. Despite recent advances in a range of omics analyses, multi-omic measurements of the same sample are still challenging, and current methods have not been well evaluated in terms of reproducibility and broad applicability. Here we adapted a solvent-based method, widely applied for extracting lipids and metabolites, to add proteomics to mass spectrometry-based multi-omics measurements. The metabolite, protein, and lipid extraction (MPLEx) protocol proved to be robust and applicable to a diverse set of sample types, including cell cultures, microbial communities, and tissues. To illustrate the utility of this protocol, an integrative multi-omics analysis was performed using a lung epithelial cell line infected with Middle East respiratory syndrome coronavirus, which showed the impact of this virus on the host glycolytic pathway and also suggested a role for lipids during infection. The MPLEx method is a simple, fast, and robust protocol that can be applied for integrative multi-omic measurements from diverse sample types (e.g., environmental, in vitro, and clinical). IMPORTANCE In systems biology studies, the integration of multiple omics measurements (i.e., genomics, transcriptomics, proteomics, metabolomics, and lipidomics) has been shown to provide a more complete and informative view of biological pathways. Thus, the prospect of extracting different types of molecules (e.g., DNAs, RNAs, proteins, and metabolites) and performing multiple omics measurements on single samples is very attractive, but such studies are challenging due to the fact that the extraction conditions differ according to the molecule type. Here, we adapted an organic solvent-based extraction method that demonstrated broad applicability and robustness, which enabled comprehensive proteomics, metabolomics, and lipidomics analyses from the same sample.
Comparison of Techniques for Sampling Adult Necrophilous Insects From Pig Carcasses.
Cruise, Angela; Hatano, Eduardo; Watson, David W; Schal, Coby
2018-02-06
Studies of the pre-colonization interval and mechanisms driving necrophilous insect ecological succession depend on effective sampling of adult insects and knowledge of their diel and successional activity patterns. The number of insects trapped, their diversity, and diel periodicity were compared with four sampling methods on neonate pigs. Sampling method, time of day, and decomposition age of the pigs significantly affected the number of insects sampled from pigs. We also found significant interactions of sampling method and decomposition day, and of time of sampling and decomposition day. No single method was superior to the other methods during all three decomposition days. Sampling times after noon yielded the largest samples during the first 2 d of decomposition. On day 3 of decomposition, however, all sampling times were equally effective. Therefore, to maximize insect collections from neonate pigs, the sampling method must vary by decomposition day. The suction trap collected the most species-rich samples, but sticky trap samples were the most diverse when both species richness and evenness were factored into a Shannon diversity index. Repeated sampling during the noon to 18:00 hours period was most effective for obtaining the maximum diversity of trapped insects. The integration of multiple sampling techniques would most effectively sample the necrophilous insect community. However, because all four tested methods were deficient at sampling beetle species, future work should focus on optimizing the most promising methods, alone or in combination, and incorporate hand-collections of beetles. © The Author(s) 2018. Published by Oxford University Press on behalf of Entomological Society of America.
High-density grids for efficient data collection from multiple crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baxter, Elizabeth L.; Aguila, Laura; Alonso-Mori, Roberto
Higher throughput methods to mount and collect data from multiple small and radiation-sensitive crystals are important to support challenging structural investigations using microfocus synchrotron beamlines. Furthermore, efficient sample-delivery methods are essential to carry out productive femtosecond crystallography experiments at X-ray free-electron laser (XFEL) sources such as the Linac Coherent Light Source (LCLS). To address these needs, a high-density sample grid useful as a scaffold for both crystal growth and diffraction data collection has been developed and utilized for efficient goniometer-based sample delivery at synchrotron and XFEL sources. A single grid contains 75 mounting ports and fits inside an SSRL cassette or uni-puck storage container. The use of grids with an SSRL cassette expands the cassette capacity up to 7200 samples. Grids may also be covered with a polymer film or sleeve for efficient room-temperature data collection from multiple samples. New automated routines have been incorporated into the Blu-Ice/DCSS experimental control system to support grids, including semi-automated grid alignment, fully automated positioning of grid ports, rastering, and automated data collection. Specialized tools have been developed to support crystallization experiments on grids, including a universal adaptor, which allows grids to be filled by commercial liquid-handling robots, as well as incubation chambers, which support vapor-diffusion and lipidic cubic phase crystallization experiments. Experiments in which crystals were loaded into grids or grown on grids using liquid-handling robots and incubation chambers are described. As a result, crystals were screened at LCLS-XPP and SSRL BL12-2 at room temperature and cryogenic temperatures.
High-density grids for efficient data collection from multiple crystals
Baxter, Elizabeth L.; Aguila, Laura; Alonso-Mori, Roberto; Barnes, Christopher O.; Bonagura, Christopher A.; Brehmer, Winnie; Brunger, Axel T.; Calero, Guillermo; Caradoc-Davies, Tom T.; Chatterjee, Ruchira; Degrado, William F.; Fraser, James S.; Ibrahim, Mohamed; Kern, Jan; Kobilka, Brian K.; Kruse, Andrew C.; Larsson, Karl M.; Lemke, Heinrik T.; Lyubimov, Artem Y.; Manglik, Aashish; McPhillips, Scott E.; Norgren, Erik; Pang, Siew S.; Soltis, S. M.; Song, Jinhu; Thomaston, Jessica; Tsai, Yingssu; Weis, William I.; Woldeyes, Rahel A.; Yachandra, Vittal; Yano, Junko; Zouni, Athina; Cohen, Aina E.
2016-01-01
Higher throughput methods to mount and collect data from multiple small and radiation-sensitive crystals are important to support challenging structural investigations using microfocus synchrotron beamlines. Furthermore, efficient sample-delivery methods are essential to carry out productive femtosecond crystallography experiments at X-ray free-electron laser (XFEL) sources such as the Linac Coherent Light Source (LCLS). To address these needs, a high-density sample grid useful as a scaffold for both crystal growth and diffraction data collection has been developed and utilized for efficient goniometer-based sample delivery at synchrotron and XFEL sources. A single grid contains 75 mounting ports and fits inside an SSRL cassette or uni-puck storage container. The use of grids with an SSRL cassette expands the cassette capacity up to 7200 samples. Grids may also be covered with a polymer film or sleeve for efficient room-temperature data collection from multiple samples. New automated routines have been incorporated into the Blu-Ice/DCSS experimental control system to support grids, including semi-automated grid alignment, fully automated positioning of grid ports, rastering and automated data collection. Specialized tools have been developed to support crystallization experiments on grids, including a universal adaptor, which allows grids to be filled by commercial liquid-handling robots, as well as incubation chambers, which support vapor-diffusion and lipidic cubic phase crystallization experiments. Experiments in which crystals were loaded into grids or grown on grids using liquid-handling robots and incubation chambers are described. Crystals were screened at LCLS-XPP and SSRL BL12-2 at room temperature and cryogenic temperatures. PMID:26894529
Li, Yan-Fei; Qiao, Lu-Qin; Li, Fang-Wei; Ding, Yi; Yang, Zi-Jun; Wang, Ming-Lin
2014-09-26
Based on a modified quick, easy, cheap, effective, rugged and safe (QuEChERS) sample preparation method with Fe3O4 magnetic nanoparticles (MNPs) as the adsorbing material and gas chromatography-tandem mass spectrometry (GC-MS/MS) determination in multiple reaction monitoring (MRM) mode, we established a new method for the determination of multiple pesticides in vegetables and fruits. Bare MNPs functioned as an excellent clean-up adsorbent and were readily separated from the extract. The amount of MNPs influenced the clean-up performance and recoveries. To achieve the optimum performance of modified QuEChERS towards the target analytes, several parameters, including the amount of the adsorbents and the purification time, were investigated. Under the optimum conditions, recoveries were evaluated in four representative matrices (tomato, cucumber, orange and apple) at spiked concentrations of 10 μg kg(-1), 50 μg kg(-1) and 200 μg kg(-1) in all cases. The results showed that the recoveries of 101 pesticides ranged between 71.5 and 111.7%, and the relative standard deviation was less than 10.5%. The optimized clean-up system improved the purification efficiency and simultaneously obtained satisfactory recoveries of multiple pesticides, including planar-ring pesticides. In short, the modified QuEChERS method with MNPs used for removing impurities improved the speed of sample pre-treatment and exhibited an enhanced performance and purifying effect. Copyright © 2014 Elsevier B.V. All rights reserved.
Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar
Yang, Shouguo; Li, Yong; Zhang, Kunhui; Tang, Weiping
2015-01-01
A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters’ outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix estimation and only requiring a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correction and efficiency of the proposed method are verified by computer simulation results. PMID:26694385
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boedicker, J.; Li, L; Kline, T
2008-01-01
This article describes plug-based microfluidic technology that enables rapid detection and drug susceptibility screening of bacteria in samples, including complex biological matrices, without pre-incubation. Unlike conventional bacterial culture and detection methods, which rely on incubation of a sample to increase the concentration of bacteria to detectable levels, this method confines individual bacteria into droplets nanoliters in volume. When single cells are confined into plugs of small volume such that the loading is less than one bacterium per plug, the detection time is proportional to plug volume. Confinement increases cell density and allows released molecules to accumulate around the cell, eliminating the pre-incubation step and reducing the time required to detect the bacteria. We refer to this approach as stochastic confinement. Using the microfluidic hybrid method, this technology was used to determine the antibiogram - or chart of antibiotic sensitivity - of methicillin-resistant Staphylococcus aureus (MRSA) to many antibiotics in a single experiment and to measure the minimal inhibitory concentration (MIC) of the drug cefoxitin (CFX) against this strain. In addition, this technology was used to distinguish between sensitive and resistant strains of S. aureus in samples of human blood plasma. High-throughput microfluidic techniques combined with single-cell measurements also enable multiple tests to be performed simultaneously on a single sample containing bacteria. This technology may provide a method of rapid and effective patient-specific treatment of bacterial infections and could be extended to a variety of applications that require multiple functional tests of bacterial samples on reduced timescales.
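The plug-loading statistics behind stochastic confinement follow directly from the Poisson distribution: at average loadings below one bacterium per plug, most occupied plugs hold exactly one cell. A minimal sketch (the cell concentration and plug volume below are illustrative choices, not values from the study):

```python
import math

def plug_occupancy(cell_conc_per_nL, plug_volume_nL):
    """Poisson statistics for loading single bacteria into plugs.

    Returns (P(empty), P(exactly one cell), P(more than one cell))
    for a plug of the given volume at the given mean cell concentration.
    """
    lam = cell_conc_per_nL * plug_volume_nL  # mean cells per plug
    p0 = math.exp(-lam)                      # empty plug
    p1 = lam * math.exp(-lam)                # single-cell plug
    return p0, p1, 1.0 - p0 - p1

# Dilute loading: on average 0.2 cells per 1-nL plug,
# so occupied plugs almost always hold a single bacterium
p_empty, p_single, p_multi = plug_occupancy(0.2, 1.0)
```

At this loading, single-cell plugs dominate over multi-cell plugs by roughly an order of magnitude, which is what makes per-plug measurements effectively single-cell measurements.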
Antenna pattern interpolation by generalized Whittaker reconstruction
NASA Astrophysics Data System (ADS)
Tjonneland, K.; Lindley, A.; Balling, P.
Whittaker reconstruction is an effective tool for interpolation of band limited data. Whittaker originally introduced the interpolation formula termed the cardinal function as the function that represents a set of equispaced samples but has no periodic components of period less than twice the sample spacing. It appears that its use for reflector antennas was pioneered in France. The method is now a useful tool in the analysis and design of multiple beam reflector antenna systems. A good description of the method has been given by Bucci et al. This paper discusses some problems encountered with the method and their solution.
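For readers unfamiliar with the cardinal function, a minimal numerical sketch of Whittaker reconstruction (sinc interpolation of equispaced samples) follows; the test signal and sample spacing are arbitrary choices, not taken from the referenced antenna work:

```python
import math

def whittaker_interpolate(samples, dx, x):
    """Whittaker cardinal-series interpolation of equispaced samples.

    f(x) = sum_n samples[n] * sinc((x - n*dx) / dx), with
    sinc(u) = sin(pi*u) / (pi*u). Exact for band-limited functions
    sampled at or above the Nyquist rate (truncated sum here).
    """
    total = 0.0
    for n, fn in enumerate(samples):
        u = (x - n * dx) / dx
        if abs(u) < 1e-12:
            total += fn            # sinc(0) = 1
        else:
            total += fn * math.sin(math.pi * u) / (math.pi * u)
    return total

# Band-limited test signal sampled well above Nyquist: f(x) = sin(2*pi*0.1*x)
dx = 1.0
samples = [math.sin(2 * math.pi * 0.1 * n * dx) for n in range(200)]
mid = whittaker_interpolate(samples, dx, 100.5)   # value between two samples
```

At the sample points themselves the cardinal series reproduces the samples exactly, since every other sinc term vanishes there; between samples the truncated sum approximates the underlying band-limited function.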
Communication: Multiple atomistic force fields in a single enhanced sampling simulation
NASA Astrophysics Data System (ADS)
Hoang Viet, Man; Derreumaux, Philippe; Nguyen, Phuong H.
2015-07-01
The main concerns of biomolecular dynamics simulations are the convergence of the conformational sampling and the dependence of the results on the force fields. While the first issue can be addressed by employing enhanced sampling techniques such as simulated tempering or replica exchange molecular dynamics, repeating these simulations with different force fields is very time consuming. Here, we propose an automatic method that includes different force fields in a single advanced sampling simulation. Conformational sampling using three all-atom force fields is enhanced by simulated tempering; by formulating the weight parameters of the simulated tempering method in terms of the energy fluctuations, the system is able to perform a random walk in both temperature and force-field spaces. The method is first demonstrated on a 1D system and then validated by the folding of the 10-residue chignolin peptide in explicit water.
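The idea of a random walk in both temperature and force-field space can be illustrated with a toy version of simulated tempering. In this sketch the "force fields" are harmonic wells of different stiffness, and the tempering weights come from the analytic free energies of those wells rather than from estimated energy fluctuations as in the paper; everything here is a hypothetical 1-D stand-in, not the authors' method:

```python
import math
import random

random.seed(1)

# Toy 1-D system under several "force fields": harmonic wells U_k(x) = 0.5*k*x^2
force_fields = [1.0, 2.0, 4.0]    # spring constants standing in for different force fields
temperatures = [1.0, 1.5, 2.0]

def energy(ff, x):
    return 0.5 * force_fields[ff] * x * x

# Simulated-tempering weights g[ff][T]. For a harmonic well the partition
# function is analytic, Z = sqrt(2*pi*T/k), so g = -ln Z makes all
# (force field, temperature) states equally probable a priori.
g = [[-math.log(math.sqrt(2.0 * math.pi * T / k)) for T in temperatures]
     for k in force_fields]

x, ff, ti = 0.0, 0, 0
visits = [[0] * len(temperatures) for _ in force_fields]
n_steps = 60000
for _ in range(n_steps):
    # local Metropolis move in configuration space
    xn = x + random.uniform(-0.5, 0.5)
    if random.random() < math.exp(min(0.0, -(energy(ff, xn) - energy(ff, x)) / temperatures[ti])):
        x = xn
    # attempted jump in (force field, temperature) space at fixed x
    ffn = random.randrange(len(force_fields))
    tin = random.randrange(len(temperatures))
    dlog = (-energy(ffn, x) / temperatures[tin] + g[ffn][tin]) \
         - (-energy(ff, x) / temperatures[ti] + g[ff][ti])
    if random.random() < math.exp(min(0.0, dlog)):
        ff, ti = ffn, tin
    visits[ff][ti] += 1

# With well-chosen weights, all nine (force field, temperature) states
# are visited with roughly equal frequency: a random walk in both spaces.
fractions = [v / n_steps for row in visits for v in row]
```

In the real method the weights cannot be computed analytically, which is exactly why the paper formulates them in terms of measured energy fluctuations.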
Liu, Gui-Long; Huang, Shi-Hong; Shi, Che-Si; Zeng, Bin; Zhang, Ke-Shi; Zhong, Xian-Ci
2018-02-10
Using copper thin-walled tubular specimens, the subsequent yield surfaces under pre-tension, pre-torsion and pre-combined tension-torsion are measured, where the single-sample and multi-sample methods are applied respectively to determine the yield stresses at a specified offset strain. The rules and characteristics of the evolution of the subsequent yield surface are investigated. Under different pre-strain conditions, the influence of the number of test points, the test sequence and the specified offset strain on the measured subsequent yield surface, as well as the concavity observed in measured yield surfaces, are studied. Moreover, the feasibility and validity of the two methods are compared. The main conclusions are as follows: (1) for both the single-sample and multi-sample methods, the measured subsequent yield surfaces differ markedly from the cylindrical yield surfaces proposed by classical plasticity theory; (2) there are apparent differences between the results of the two methods: the multi-sample method is not influenced by the number of test points, the test order, or the cumulative residual plastic strain from other test points, whereas these factors strongly influence the single-sample method; and (3) the measured subsequent yield surface may appear concave; for the single-sample method the concavity can be removed by changing the test sequence, while for the multi-sample method it disappears when a larger offset strain is specified.
NASA Astrophysics Data System (ADS)
Ryu, Inkeon; Kim, Daekeun
2018-04-01
A typical selective plane illumination microscopy (SPIM) image size is fundamentally limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount in which uncertainties exist in both the translational and rotational motions. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, which quantifies the constellations of, and measures the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.
Advances in spectroscopic methods for quantifying soil carbon
Liebig, Mark; Franzluebbers, Alan J.; Follett, Ronald F.; Hively, W. Dean; Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco
2012-01-01
The gold standard for soil C determination is combustion. However, this method requires expensive consumables, is limited to the determination of total carbon, and is limited in the number of samples that can be processed (~100 per day). With increased interest in soil C sequestration, faster methods are needed; hence the interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared ranges using either proximal or remote sensing. These methods can analyze more samples (2 to 3 times as many per day) or huge areas (imagery) and determine multiple analytes simultaneously, but they require calibrations relating spectral and reference data and have specific problems; for example, remote sensing can scan entire watersheds, reducing the sampling needed, but it is limited to the surface layer of tilled soils and hampered by the difficulty of obtaining proper calibration reference values. The objective of this discussion is to present the state of spectroscopic methods for soil C determination.
Methods to Detect Nitric Oxide and its Metabolites in Biological Samples
Bryan, Nathan S.; Grisham, Matthew B.
2007-01-01
Nitric oxide (NO) methodology is a complex and often confusing science and the focus of many debates and discussions concerning NO biochemistry. NO is involved in many physiological processes, including regulation of blood pressure, immune response and neural communication. Therefore, its accurate detection and quantification are critical to understanding health and disease. Due to the extremely short physiological half-life of this gaseous free radical, alternative strategies for the detection of reaction products of NO biochemistry have been developed. The quantification of NO metabolites in biological samples provides valuable information with regard to in vivo NO production, bioavailability and metabolism. Simply sampling a single compartment such as blood or plasma may not always provide an accurate assessment of whole-body NO status, particularly in tissues. Therefore, extrapolation of plasma or blood NO status to specific tissues of interest is no longer a valid approach. As a result, methods continue to be developed and validated that allow the detection and quantification of NO and NO-related products/metabolites in multiple compartments of experimental animals in vivo. The methods described in this review are not an exhaustive or comprehensive discussion of all methods available for the detection of NO, but rather a description of the most commonly used and practical methods that allow accurate and sensitive quantification of NO products/metabolites in multiple biological matrices under normal physiological conditions. PMID:17664129
Chang, Ho-Won; Sung, Youlboong; Kim, Kyoung-Ho; Nam, Young-Do; Roh, Seong Woon; Kim, Min-Soo; Jeon, Che Ok; Bae, Jin-Woo
2008-08-15
A crucial problem in the use of previously developed genome-probing microarrays (GPM) has been the inability to use uncultivated bacterial genomes to take advantage of the high sensitivity and specificity of GPM in microbial detection and monitoring. We show here a method, digital multiple displacement amplification (MDA), to amplify and analyze various genomes obtained from single uncultivated bacterial cells. We used 15 genomes from key microbes involved in a dichloromethane (DCM)-dechlorinating enrichment as microarray probes to uncover the bacterial population dynamics of samples without PCR amplification. Genomic DNA was amplified from single cells originating from uncultured bacteria with 80.3-99.4% similarity to the 16S rRNA genes of cultivated bacteria. The digital MDA-GPM method successfully monitored the dynamics of DCM-dechlorinating communities from different phases of the enrichment. Without a priori knowledge of microbial diversity, the digital MDA-GPM method could be designed to monitor most microbial populations in a given environmental sample.
Comparison of scavenging capacities of vegetables by ORAC and EPR.
Kameya, Hiromi; Watanabe, Jun; Takano-Ishikawa, Yuko; Todoriki, Setsuko
2014-02-15
Reactive oxygen species (ROS) are considered to be causative agents of many health problems. In spite of this, the radical-specific scavenging capacities of food samples have not been well studied. In the present work, we have developed an electron paramagnetic resonance (EPR) spin trapping method for analysis of the scavenging capacities of food samples for multiple ROS, utilising the same photolysis procedure for generating each type of radical. The optimal conditions for effective evaluation of hydroxyl, superoxide, and alkoxyl radical scavenging capacity were determined. Quantification of radical adducts was found to be highly reproducible, with variations of less than 4%. The optimised EPR spin trapping method was used to analyse the scavenging capacities of 54 different vegetable extracts for multiple radicals, and the results were compared with oxygen radical absorption capacity values. Good correlations between the two methods were observed for superoxide and alkoxyl radicals, but not for hydroxyl. Copyright © 2013 Elsevier Ltd. All rights reserved.
Occupancy Estimation and Modeling : Inferring Patterns and Dynamics of Species Occurrence
MacKenzie, D.I.; Nichols, J.D.; Royle, J. Andrew; Pollock, K.H.; Bailey, L.L.; Hines, J.E.
2006-01-01
This is the first book to examine the latest methods of analyzing presence/absence survey data. Using four classes of models (single-species, single-season; single-species, multiple-season; multiple-species, single-season; and multiple-species, multiple-season), the authors discuss the practical sampling situation, present a likelihood-based model enabling direct estimation of the occupancy-related parameters while allowing for imperfect detectability, and make recommendations for designing studies using these models. It provides authoritative insights into the latest in estimation modeling; discusses multiple models which lay the groundwork for future study designs; addresses the critical issue of imperfect detectability and its effects on estimation; and explores in detail the role of probability in estimation.
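The core of the single-species, single-season model is a likelihood that separates occupancy probability (ψ) from per-survey detection probability (p): a site with an all-zero detection history may be occupied but undetected, or truly unoccupied. A hedged sketch with a crude grid-search fit on made-up detection histories (not data from the book):

```python
import math

def occupancy_loglik(psi, p, histories):
    """Log-likelihood of the single-species, single-season occupancy model.

    psi: probability a site is occupied; p: per-survey detection probability;
    histories: list of detection histories, each a list of 0/1 over K surveys.
    """
    ll = 0.0
    for h in histories:
        d, K = sum(h), len(h)
        if d > 0:
            # site certainly occupied: psi times the binomial detection term
            ll += math.log(psi * p**d * (1 - p)**(K - d))
        else:
            # all-zero history: occupied-but-missed, or truly unoccupied
            ll += math.log(psi * (1 - p)**K + (1 - psi))
    return ll

# Crude grid-search MLE on a small illustrative data set (8 sites, 3 surveys)
histories = [[1, 0, 1], [0, 0, 0], [1, 1, 0], [0, 0, 0],
             [0, 1, 0], [1, 1, 1], [0, 0, 0], [1, 0, 0]]
grid = [i / 100 for i in range(1, 100)]
psi_hat, p_hat = max(((a, b) for a in grid for b in grid),
                     key=lambda ab: occupancy_loglik(ab[0], ab[1], histories))
```

Because detection is imperfect, the estimated ψ exceeds the naive proportion of sites with at least one detection; that correction is the whole point of the model.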
Benefits of Multiple Methods for Evaluating HIV Counseling and Testing Sites in Pennsylvania.
ERIC Educational Resources Information Center
Encandela, John A.; Gehl, Mary Beth; Silvestre, Anthony; Schelzel, George
1999-01-01
Examines results from two methods used to evaluate publicly funded human immunodeficiency virus (HIV) counseling and testing in Pennsylvania. Results of written mail surveys of all sites and interviews from a random sample of 30 sites were similar in terms of questions posed and complementary in other ways. (SLD)
Stream macroinvertebrate collection methods described in the Rapid Bioassessment Protocols (RBPs) have been used widely throughout the U.S. The first edition of the RBP manual in 1989 described a single habitat approach that focused on riffles and runs, where macroinvertebrate d...
Key CCL viruses will be rapidly detected at low levels in water samples concentrated by a rapid HFUF or a new thin-sheet (TSM) electropositive filter adsorption-elution method and compared with the approved EPA method (1MDS VIRADEL). A unified and rapid virus concentration, n...
Methods for detecting total coliform bacteria in drinking water were compared using 1483 different drinking water samples from 15 small community water systems in Vermont and New Hampshire. The methods included the membrane filter (MF) technique, a ten tube fermentation tube tech...
Waterborne infectious diseases are a major public health concern worldwide. Few methods have been established that are capable of measuring human exposure to multiple waterborne pathogens simultaneously using non-invasive samples such as saliva. Most current methods measure expos...
Testing biological liquid samples using modified m-line spectroscopy method
NASA Astrophysics Data System (ADS)
Augusciuk, Elzbieta; Rybiński, Grzegorz
2005-09-01
A non-chemical method for detecting sugar concentration in biological liquids (of animal and plant origin) has been investigated. A simplified setup was built to show how easily the survey can be carried out and to make it easy to gather multiple measurements for error detection and statistics. The method is suggested as an easy and inexpensive alternative to chemical methods of measuring sugar concentration, although considerable effort is needed to make it precise.
NASA Astrophysics Data System (ADS)
Bonelli, Maria Grazia; Ferrini, Mauro; Manni, Andrea
2016-12-01
The assessment of contamination of agricultural soils by metals and organic micropollutants is a difficult challenge, owing to the extensive areas involved and the very large number of samples to collect and analyze. Regarding measurement methods for dioxins and dioxin-like PCBs and the subsequent treatment of the data, the European Community advises developing low-cost, fast methods that permit routine analysis of a great number of samples, providing rapid measurement of these compounds in the environment, feed and food. The aim of the present work has been to find a method suitable for describing the relations occurring between organic and inorganic contaminants, and to use the values of the latter to forecast the former. In practice, the use of a portable soil metal analyzer coupled with an efficient statistical procedure enables the required objective to be achieved. Compared to multiple linear regression, the artificial neural network technique has been shown to be an excellent forecasting method, as there is no linear correlation between the variables to be analyzed.
ERIC Educational Resources Information Center
MacCarthy, Patrick; And Others
1989-01-01
Analytical methods are reviewed for: alkali and alkaline earth metals; transition metals; precious metals; group 12, 13, 14, and 15 metals, nonmetals; radionuclides; multiple metals; anions; gases; chromatography; mass spectroscopy; photometry; sampling; volatile compounds; surfactants; detergents; pesticides; herbicides; and fungicides. (MVL)
Multiple Comparisons of Observation Means--Are the Means Significantly Different?
ERIC Educational Resources Information Center
Fahidy, T. Z.
2009-01-01
Several currently popular methods of ascertaining which treatment (population) means are different, via random samples obtained under each treatment, are briefly described and illustrated by evaluating catalyst performance in a chemical reactor.
Validation of spot-testing kits to determine iodine content in salt.
Pandav, C. S.; Arora, N. K.; Krishnan, A.; Sankar, R.; Pandav, S.; Karmarkar, M. G.
2000-01-01
Iodine deficiency disorders are a major public health problem, and salt iodization is the most widely practised intervention for their elimination. For the intervention to be successful and sustainable, it is vital to monitor the iodine content of salt regularly. Iodometric titration, the traditional method for measuring iodine content, has problems related to accessibility and cost. The newer spot-testing kits are inexpensive, require minimal training, and provide immediate results. Using data from surveys to assess the availability of iodized salt in two states in India, Madhya Pradesh and the National Capital Territory of Delhi, we tested the suitability of such a kit in field situations. Salt samples from Delhi were collected from 30 schools, chosen using the Expanded Programme on Immunization (EPI) cluster sampling technique. A single observer made the measurement for iodine content using the kit. Salt samples from Madhya Pradesh were from 30 rural and 30 urban clusters, identified by using census data and the EPI cluster sampling technique. In each cluster, salt samples were collected from 10 randomly selected households and all retailers. The 15 investigators performing the survey estimated the iodine content of salt samples in the field using the kit. All the samples were brought to the central laboratory in Delhi, where iodine content was estimated using iodometric titration as a reference method. The agreement between the kit and titration values decreased as the number of observers increased. Although sensitivity was not much affected by the increase in the number of observers (93.3% for a single observer and 93.9% for multiple observers), specificity decreased sharply (90.4% for a single observer and 40.4% for multiple observers). Due to the low specificity and resulting high numbers of false-positives for the kit when used by multiple observers ("real-life situations"), kits were likely to consistently overestimate the availability of iodized salt. 
This overestimation could result in complacency. Therefore, we conclude that until a valid alternative is available, the titration method should be used for monitoring the iodine content of salt at all levels, from producer to consumer, to ensure effectiveness of the programme. PMID:10994281
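The overestimation mechanism described above is simple to quantify: the fraction of samples a kit calls "iodized" combines true positives from iodized salt and false positives from non-iodized salt. A sketch using the multiple-observer figures reported in the study; the 60% true coverage is a hypothetical value for illustration only:

```python
def sensitivity(tp, fn):
    """True-positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

def apparent_positive_rate(prevalence, sens, spec):
    """Fraction of samples the test calls positive: true positives from the
    iodized fraction plus false positives from the non-iodized fraction."""
    return sens * prevalence + (1 - spec) * (1 - prevalence)

# Multiple-observer kit performance reported in the study
sens_multi, spec_multi = 0.939, 0.404
# Hypothetical true coverage of adequately iodized salt (illustrative)
true_coverage = 0.60
apparent = apparent_positive_rate(true_coverage, sens_multi, spec_multi)
```

With specificity of only 40.4%, nearly 60% of the non-iodized samples are counted as iodized, so the apparent coverage (about 80% here) substantially exceeds the true 60%, which is the complacency risk the authors warn about.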
Lin, An-Jun; Yang, Tao; Jiang, Shao-Yong
2014-04-15
Previous studies have indicated that prior chemical purification of samples, although complex and time-consuming, is essential in obtaining precise and accurate results for sulfur isotope ratios using multiple-collector inductively coupled plasma mass spectrometry (MC-ICP-MS). In this study, we introduce a new, rapid and precise MC-ICP-MS method for sulfur isotope determination from water samples without chemical purification. The analytical work was performed on an MC-ICP-MS instrument with medium mass resolution (m/Δm ~ 3000). Standard-sample bracketing (SSB) was used to correct samples throughout the analytical sessions. Reference materials included an Alfa-S (ammonium sulfate) standard solution, ammonium sulfate provided by the lab of the authors and fresh seawater from the South China Sea. A range of matrix-matched Alfa-S standard solutions and ammonium sulfate solutions was used to investigate the matrix (salinity) effect (matrix was added in the form of NaCl). A seawater sample was used to confirm the reliability of the method. Using matrix-matched (salinity-matched) Alfa-S as the working standard, the measured δ(34)S value of AS (-6.73 ± 0.09‰) was consistent with the reference value (-6.78 ± 0.07‰) within the uncertainty, suggesting that this method could be recommended for the measurement of water samples without prior chemical purification. The δ(34)S value determination for the unpurified seawater also yielded excellent results (21.03 ± 0.18‰) that are consistent with the reference value (20.99‰), thus confirming the feasibility of the technique. The data and the results indicate that it is feasible to use MC-ICP-MS and matrix-matched working standards to measure the sulfur isotopic compositions of water samples directly without chemical purification. 
In comparison with the existing MC-ICP-MS techniques, the new method is better for directly measuring δ(34)S values in water samples with complex matrices; therefore, it can significantly accelerate analytical turnover. Copyright © 2014 John Wiley & Sons, Ltd.
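The standard-sample bracketing correction itself is straightforward: the sample's raw isotope ratio is referenced to the working standard measured immediately before and after it, so slow drift in instrumental mass bias cancels. A minimal sketch (the raw 34S/32S ratios below are invented for illustration, not data from the paper):

```python
def delta34S(ratio_sample, ratio_standard):
    """Per-mil delta notation: d34S = (R_sample / R_ref - 1) * 1000."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

def bracketed_delta(r_sample, r_std_before, r_std_after):
    """Standard-sample bracketing: interpolate the working standard's
    instrumental 34S/32S ratio from the runs before and after the sample,
    correcting for drift in mass bias over the analytical session."""
    r_std = 0.5 * (r_std_before + r_std_after)
    return delta34S(r_sample, r_std)

# Illustrative raw ratios: the standard drifts slightly between brackets
d = bracketed_delta(0.044810, 0.044370, 0.044380)
```

The same bracketing arithmetic applies whether or not the sample was chemically purified; the paper's contribution is showing that, with matrix-matched standards, the correction remains valid for unpurified, high-matrix water samples.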
Development of quantitative screen for 1550 chemicals with GC-MS.
Bergmann, Alan J; Points, Gary L; Scott, Richard P; Wilson, Glenn; Anderson, Kim A
2018-05-01
With hundreds of thousands of chemicals in the environment, effective monitoring requires high-throughput analytical techniques. This paper presents a quantitative screening method for 1550 chemicals based on statistical modeling of responses with identification and integration performed using deconvolution reporting software. The method was evaluated with representative environmental samples. We tested biological extracts, low-density polyethylene, and silicone passive sampling devices spiked with known concentrations of 196 representative chemicals. A multiple linear regression (R(2) = 0.80) was developed with molecular weight, logP, polar surface area, and fractional ion abundance to predict chemical responses within a factor of 2.5. Linearity beyond the calibration had R(2) > 0.97 for three orders of magnitude. Median limits of quantitation were estimated to be 201 pg/μL (1.9× standard deviation). The number of detected chemicals and the accuracy of quantitation were similar for environmental samples and standard solutions. To our knowledge, this is the most precise method for the largest number of semi-volatile organic chemicals lacking authentic standards. Accessible instrumentation and software make this method cost effective in quantifying a large, customizable list of chemicals. When paired with silicone wristband passive samplers, this quantitative screen will be very useful for epidemiology where binning of concentrations is common. Graphical abstract A multiple linear regression of chemical responses measured with GC-MS allowed quantitation of 1550 chemicals in samples such as silicone wristbands.
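The descriptor-based response model described above can be sketched with ordinary least squares: fit instrument response as a linear function of molecular weight, logP, polar surface area, and fractional ion abundance. The descriptor values and responses below are invented; the paper's actual model and coefficients are not reproduced here:

```python
def fit_mlr(X, y):
    """Least-squares coefficients for y = b0 + X @ b via the normal
    equations, solved by Gaussian elimination with partial pivoting
    (pure Python; fine for a handful of descriptors)."""
    rows = [[1.0] + list(x) for x in X]          # prepend intercept column
    n, m = len(rows), len(rows[0])
    A = [[sum(rows[i][a] * rows[i][b] for i in range(n)) for b in range(m)]
         for a in range(m)]                      # A = X^T X
    v = [sum(rows[i][a] * y[i] for i in range(n)) for a in range(m)]  # X^T y
    for col in range(m):                         # forward elimination
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    b = [0.0] * m                                # back substitution
    for r in range(m - 1, -1, -1):
        b[r] = (v[r] - sum(A[r][c] * b[c] for c in range(r + 1, m))) / A[r][r]
    return b

# Hypothetical descriptor table: (MW, logP, PSA, fractional ion abundance)
X = [(150, 2.1, 30, 0.40), (220, 3.5, 20, 0.55), (300, 4.2, 60, 0.25),
     (180, 1.2, 75, 0.30), (260, 2.8, 45, 0.50), (210, 3.0, 35, 0.45)]
y = [5.1, 8.9, 6.0, 3.2, 7.4, 7.0]   # instrument responses (arbitrary units)
coef = fit_mlr(X, y)
pred = [coef[0] + sum(c * xi for c, xi in zip(coef[1:], x)) for x in X]
```

Once fitted on compounds with authentic standards, such a model predicts a response factor, and hence an approximate concentration, for compounds that lack standards, which is the basis of the quantitative screen.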
Neural Network and Nearest Neighbor Algorithms for Enhancing Sampling of Molecular Dynamics.
Galvelis, Raimondas; Sugita, Yuji
2017-06-13
The free energy calculations of complex chemical and biological systems with molecular dynamics (MD) are inefficient due to multiple local minima separated by high-energy barriers. The minima can be escaped using an enhanced sampling method such as metadynamics, which applies bias (i.e., importance sampling) along a set of collective variables (CVs), but the maximum number of CVs (or dimensions) is severely limited. We propose a high-dimensional bias potential method (NN2B) based on two machine learning algorithms: the nearest neighbor density estimator (NNDE) and the artificial neural network (ANN) for the bias potential approximation. The bias potential is constructed iteratively from short biased MD simulations accounting for correlation among CVs. Our method is capable of achieving ergodic sampling and calculating the free energy of polypeptides with up to 8-dimensional bias potentials.
Samejima, Keijiro; Otani, Masahiro; Murakami, Yasuko; Oka, Takami; Kasai, Misao; Tsumoto, Hiroki; Kohda, Kohfuku
2007-10-01
A sensitive method for the determination of polyamines in mammalian cells is described, using electrospray ionization and a time-of-flight mass spectrometer. This method was 50-fold more sensitive than the previous method using ionspray ionization and a quadrupole mass spectrometer. The method employed partial purification and derivatization of polyamines, yet allowed measurement of multiple samples containing picomole amounts of polyamines. The time required for data acquisition of one sample was approximately 2 min. The method was successfully applied to the determination of reduced spermidine and spermine contents in cultured cells under inhibition of aminopropyltransferases. In addition, a new, more appropriate internal standard was proposed for tracer experiments using (15)N-labeled polyamines.
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods with new techniques for sampling the parameter space, based on the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems, as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin hypercube sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful as the number of parameter dimensions grows: adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples; furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE either to narrow the focus to converged values within a parameter range or to expand the range in the appropriate direction to track parameters outside the current parameter range boundary.
Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
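Of the two sampling techniques above, Latin hypercube sampling is the easier to sketch: each parameter's range is divided into as many strata as samples, each stratum is used exactly once, and the columns are shuffled independently, so adding parameter dimensions does not force the sample count to grow. A minimal version (a generic LHS sketch, unrelated to GRAPE's actual implementation):

```python
import random

random.seed(0)

def latin_hypercube(n_samples, bounds):
    """Latin hypercube sample over a box.

    bounds: list of (lo, hi) per parameter. Each parameter's range is split
    into n_samples equal strata; one point is drawn per stratum, and the
    strata are shuffled independently per dimension so sample i's strata
    are decoupled across parameters.
    """
    columns = []
    for lo, hi in bounds:
        pts = [lo + (hi - lo) * (i + random.random()) / n_samples
               for i in range(n_samples)]
        random.shuffle(pts)   # decouple strata across dimensions
        columns.append(pts)
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

# e.g. 10 parameter samples over a 3-dimensional parameter space
samples = latin_hypercube(10, [(0.0, 1.0), (-5.0, 5.0), (100.0, 200.0)])
```

Note that 10 samples cover a 3-dimensional box with one point per stratum in every dimension, whereas a full grid at the same per-axis resolution would need 1000 models; this is the property that makes LHS attractive as parameter dimensions grow.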
MetaMeta: integrating metagenome analysis tools to improve taxonomic profiling.
Piro, Vitor C; Matschkowski, Marcel; Renard, Bernhard Y
2017-08-14
Many metagenome analysis tools are presently available to classify sequences and profile environmental samples. In particular, taxonomic profiling and binning methods are commonly used for such tasks. Tools in these two categories make use of several techniques, e.g., read mapping, k-mer alignment, and composition analysis. Variations in the construction of the corresponding reference sequence databases are also common. In addition, different tools perform well on different datasets and configurations. All this variation creates a complicated scenario for researchers deciding which methods to use. Installation, configuration, and execution can also be difficult, especially when dealing with multiple datasets and tools. We propose MetaMeta: a pipeline to execute and integrate results from metagenome analysis tools. MetaMeta provides an easy workflow to run multiple tools with multiple samples, producing a single enhanced output profile for each sample. MetaMeta includes database generation, pre-processing, execution, and integration steps, allowing easy execution and parallelization. The integration relies on the co-occurrence of organisms across different methods as the main feature to improve community profiling while accounting for differences in their databases. In a controlled case with simulated and real data, we show that the integrated profiles of MetaMeta outperform the best single profile. Using the same input data, it provides more sensitive and reliable results, with the presence of each organism being supported by several methods. MetaMeta uses Snakemake and has six pre-configured tools, all available at the BioConda channel for easy installation (conda install -c bioconda metameta). The MetaMeta pipeline is open-source and can be downloaded at: https://gitlab.com/rki_bioinformatics .
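The co-occurrence idea behind the integration step can be sketched as a simple consensus over per-tool profiles: keep taxa reported by several tools, average their abundances, and renormalize. This mimics only the spirit of MetaMeta's integration, not its actual algorithm, and the taxa and abundances below are made up:

```python
def merge_profiles(profiles, min_tools=2):
    """Merge per-tool taxonomic profiles {taxon: relative_abundance} into
    one consensus profile. A taxon is kept only if reported by at least
    `min_tools` tools (the co-occurrence filter); kept abundances are
    averaged over the tools that reported them and renormalized."""
    support = {}
    for prof in profiles:
        for taxon, ab in prof.items():
            support.setdefault(taxon, []).append(ab)
    merged = {t: sum(vals) / len(vals)
              for t, vals in support.items() if len(vals) >= min_tools}
    total = sum(merged.values())
    return {t: ab / total for t, ab in merged.items()} if total else {}

# Hypothetical outputs of three profiling tools on the same sample;
# each tool reports one spurious taxon the others do not see
tool_a = {"E. coli": 0.50, "B. subtilis": 0.30, "spurious_1": 0.20}
tool_b = {"E. coli": 0.60, "B. subtilis": 0.35, "spurious_2": 0.05}
tool_c = {"E. coli": 0.55, "B. subtilis": 0.25, "spurious_3": 0.20}
consensus = merge_profiles([tool_a, tool_b, tool_c])
```

The single-tool false positives drop out because no other method corroborates them, while taxa supported by several methods survive with averaged abundances: the same intuition that makes the integrated profile more reliable than any single one.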
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pech-May, Nelson Wilbur; Department of Applied Physics, CINVESTAV Unidad Mérida, carretera Antigua a Progreso km6, A.P. 73 Cordemex, Mérida Yucatán 97310, México; Mendioroz, Arantza
2014-10-15
In this work, we have extended the front-face flash method to retrieve simultaneously the thermal diffusivity and the optical absorption coefficient of semitransparent plates. A complete theoretical model that allows calculating the front surface temperature rise of the sample has been developed. It takes into consideration additional effects, such as multiple reflections of the heating light beam inside the sample, heat losses by convection and radiation, transparency of the sample to infrared wavelengths, and heating pulse duration. Measurements performed on calibrated solids, covering a wide range of absorption coefficients (from transparent to opaque) and thermal diffusivities, validate the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Berrocal, Eduardo; Cappello, Franck
The silent data corruption (SDC) problem is attracting increasing attention because SDC is expected to have a great impact on exascale HPC applications. SDC faults are hazardous in that they pass unnoticed by hardware and can lead to wrong computation results. In this work, we formulate SDC detection as a runtime one-step-ahead prediction method, leveraging multiple linear prediction methods in order to improve the detection results. The contributions are twofold: (1) we propose an error feedback control model that can reduce the prediction errors of different linear prediction methods, and (2) we propose a spatial-data-based even-sampling method to minimize the detection overheads (including memory and computation cost). We implement our algorithms in the Fault Tolerance Interface, a fault tolerance library with multiple checkpoint levels, such that users can conveniently protect their HPC applications against both SDC errors and fail-stop errors. We evaluate our approach by using large-scale traces from well-known, large-scale HPC applications, as well as by running those HPC applications in a real cluster environment. Experiments show that our error feedback control model can improve detection sensitivity by 34-189% for bit-flip memory errors injected at bit positions in the range [20,30], without any degradation in detection accuracy. Furthermore, memory size can be reduced by 33% with our spatial-data even-sampling method, with only a slight and graceful degradation in detection sensitivity.
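The one-step-ahead prediction idea can be sketched as follows (an illustrative sketch, not the paper's exact model: a two-point linear extrapolation with a simple exponential error-feedback correction; the threshold `theta` and smoothing factor `alpha` are assumed parameters). A data point that deviates from the prediction by more than the threshold is flagged as a possible SDC.

```python
def detect_sdc(series, theta=1.0, alpha=0.5):
    """Flag indices whose value deviates from a one-step-ahead linear
    prediction by more than `theta`. The predictor extrapolates from the two
    previous values and is corrected by an exponentially averaged feedback
    of recent prediction errors (illustrative sketch only)."""
    flagged = []
    feedback = 0.0  # running estimate of the predictor's systematic error
    for t in range(2, len(series)):
        pred = 2 * series[t - 1] - series[t - 2] + feedback
        err = series[t] - pred
        if abs(err) > theta:
            flagged.append(t)      # likely corruption; do not learn from it
        else:
            feedback = (1 - alpha) * feedback + alpha * err
    return flagged
```

On a smoothly varying trace the residuals stay tiny, so an injected bit-flip-like spike stands out immediately.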
Coes, Alissa L.; Paretti, Nicholas V.; Foreman, William T.; Iverson, Jana L.; Alvarez, David A.
2014-01-01
A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19–23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method.
Coes, Alissa L; Paretti, Nicholas V; Foreman, William T; Iverson, Jana L; Alvarez, David A
2014-03-01
A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19-23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method. Published by Elsevier B.V.
ERIC Educational Resources Information Center
Kover, Sara T.; McDuffie, Andrea; Abbeduto, Leonard; Brown, W. Ted
2012-01-01
Purpose: In this study, the authors examined the impact of sampling context on multiple aspects of expressive language in male participants with fragile X syndrome in comparison to male participants with Down syndrome or typical development. Method: Participants with fragile X syndrome (n = 27), ages 10-17 years, were matched groupwise on…
Analysis of Sequence Data Under Multivariate Trait-Dependent Sampling.
Tao, Ran; Zeng, Donglin; Franceschini, Nora; North, Kari E; Boerwinkle, Eric; Lin, Dan-Yu
2015-06-01
High-throughput DNA sequencing allows for the genotyping of common and rare variants for genetic association studies. At the present time and for the foreseeable future, it is not economically feasible to sequence all individuals in a large cohort. A cost-effective strategy is to sequence those individuals with extreme values of a quantitative trait. We consider the design under which the sampling depends on multiple quantitative traits. Under such trait-dependent sampling, standard linear regression analysis can result in bias of parameter estimation, inflation of type I error, and loss of power. We construct a likelihood function that properly reflects the sampling mechanism and utilizes all available data. We implement a computationally efficient EM algorithm and establish the theoretical properties of the resulting maximum likelihood estimators. Our methods can be used to perform separate inference on each trait or simultaneous inference on multiple traits. We pay special attention to gene-level association tests for rare variants. We demonstrate the superiority of the proposed methods over standard linear regression through extensive simulation studies. We provide applications to the Cohorts for Heart and Aging Research in Genomic Epidemiology Targeted Sequencing Study and the National Heart, Lung, and Blood Institute Exome Sequencing Project.
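The bias-correction idea, conditioning on the observed trait so the likelihood reflects the sampling mechanism, can be sketched with a standard outcome-dependent-sampling identity (a generic construction, not necessarily the exact likelihood used in the paper). Because selection into the sequenced set depends on the trait values alone, conditioning on the trait makes the genotype contribution free of the selection event:

```latex
% Retrospective likelihood contribution of a sequenced subject i,
% with trait Y_i, covariates X_i, genotype G_i and selection S_i = 1.
% Selection depends only on Y_i, so
% P(G_i = g \mid Y_i, X_i, S_i = 1) = P(G_i = g \mid Y_i, X_i), giving
L_i(\theta) \;=\;
\frac{f(Y_i \mid G_i, X_i;\theta)\, P(G_i \mid X_i)}
     {\sum_{g} f(Y_i \mid g, X_i;\theta)\, P(g \mid X_i)} .
```

The denominator sums over possible genotypes, which is the kind of missing-data structure that makes an EM algorithm a natural fit for maximization.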
Mitsui, Jun; Fukuda, Yoko; Azuma, Kyo; Tozaki, Hirokazu; Ishiura, Hiroyuki; Takahashi, Yuji; Goto, Jun; Tsuji, Shoji
2010-07-01
We have recently found that multiple rare variants of the glucocerebrosidase gene (GBA) confer a robust risk for Parkinson disease, supporting the 'common disease-multiple rare variants' hypothesis. To develop an efficient method of identifying rare variants in a large number of samples, we applied multiplexed resequencing using a next-generation sequencer to the identification of rare variants of GBA. Sixteen sets of pooled DNAs from six pooled DNA samples were prepared. Each set of pooled DNAs was subjected to polymerase chain reaction to amplify the target gene (GBA) covering 6.5 kb, pooled into one tube with barcode indexing, and then subjected to extensive sequence analysis using the SOLiD System. Individual samples were also subjected to direct nucleotide sequence analysis. With the optimization of data processing, we were able to extract all the variants from 96 samples with acceptable rates of false-positive single-nucleotide variants.
Burtis, C.A.; Johnson, W.F.; Walker, W.A.
1985-08-05
A rotor and disc assembly for use in a centrifugal fast analyzer. The assembly is designed to process multiple samples of whole blood followed by aliquoting of the resultant serum into precisely measured samples for subsequent chemical analysis. The assembly requires minimal operator involvement with no mechanical pipetting. The system comprises: (1) a whole blood sample disc; (2) a serum sample disc; (3) a sample preparation rotor; and (4) an analytical rotor. The blood sample disc and serum sample disc are designed with a plurality of precision bore capillary tubes arranged in a spoked array. Samples of blood are loaded into the blood sample disc by capillary action and centrifugally discharged into cavities of the sample preparation rotor where separation of serum and solids is accomplished. The serum is loaded into the capillaries of the serum sample disc by capillary action and subsequently centrifugally expelled into cuvettes of the analytical rotor for analysis by conventional methods. 5 figs.
Burtis, Carl A.; Johnson, Wayne F.; Walker, William A.
1988-01-01
A rotor and disc assembly for use in a centrifugal fast analyzer. The assembly is designed to process multiple samples of whole blood followed by aliquoting of the resultant serum into precisely measured samples for subsequent chemical analysis. The assembly requires minimal operator involvement with no mechanical pipetting. The system comprises (1) a whole blood sample disc, (2) a serum sample disc, (3) a sample preparation rotor, and (4) an analytical rotor. The blood sample disc and serum sample disc are designed with a plurality of precision bore capillary tubes arranged in a spoked array. Samples of blood are loaded into the blood sample disc in capillary tubes filled by capillary action and centrifugally discharged into cavities of the sample preparation rotor where separation of serum and solids is accomplished. The serum is loaded into the capillaries of the serum sample disc by capillary action and subsequently centrifugally expelled into cuvettes of the analytical rotor for analysis by conventional methods.
Processing methods for differential analysis of LC/MS profile data
Katajamaa, Mikko; Orešič, Matej
2005-01-01
Background Liquid chromatography coupled to mass spectrometry (LC/MS) has been widely used in proteomics and metabolomics research. In this context, the technology has been increasingly used for differential profiling, i.e. broad screening of biomolecular components across multiple samples in order to elucidate the observed phenotypes and discover biomarkers. One of the major challenges in this domain remains the development of better solutions for processing of LC/MS data. Results We present a software package MZmine that enables differential LC/MS analysis of metabolomics data. This software is a toolbox containing methods for all data processing stages preceding differential analysis: spectral filtering, peak detection, alignment and normalization. Specifically, we developed and implemented a new recursive peak search algorithm and a secondary peak picking method for improving already aligned results, as well as a normalization tool that uses multiple internal standards. Visualization tools enable comparative viewing of data across multiple samples. Peak lists can be exported into other data analysis programs. The toolbox has already been utilized in a wide range of applications. We demonstrate its utility on an example of metabolic profiling of Catharanthus roseus cell cultures. Conclusion The software is freely available under the GNU General Public License and it can be obtained from the project web page at http://mzmine.sourceforge.net/. PMID:16026613
Processing methods for differential analysis of LC/MS profile data.
Katajamaa, Mikko; Oresic, Matej
2005-07-18
Liquid chromatography coupled to mass spectrometry (LC/MS) has been widely used in proteomics and metabolomics research. In this context, the technology has been increasingly used for differential profiling, i.e. broad screening of biomolecular components across multiple samples in order to elucidate the observed phenotypes and discover biomarkers. One of the major challenges in this domain remains development of better solutions for processing of LC/MS data. We present a software package MZmine that enables differential LC/MS analysis of metabolomics data. This software is a toolbox containing methods for all data processing stages preceding differential analysis: spectral filtering, peak detection, alignment and normalization. Specifically, we developed and implemented a new recursive peak search algorithm and a secondary peak picking method for improving already aligned results, as well as a normalization tool that uses multiple internal standards. Visualization tools enable comparative viewing of data across multiple samples. Peak lists can be exported into other data analysis programs. The toolbox has already been utilized in a wide range of applications. We demonstrate its utility on an example of metabolic profiling of Catharanthus roseus cell cultures. The software is freely available under the GNU General Public License and it can be obtained from the project web page at: http://mzmine.sourceforge.net/.
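The multi-internal-standard normalization idea can be sketched as follows (a minimal sketch assuming a nearest-in-retention-time rule; MZmine's actual scheme may weight or interpolate between standards): each peak's intensity is divided by the intensity of the internal standard closest to it in retention time, so drift is corrected locally rather than with a single global factor.

```python
def normalize_by_nearest_standard(peaks, standards):
    """Normalize peak intensities by the internal standard closest in
    retention time. `peaks` and `standards` are lists of (rt, intensity)
    tuples; this is a sketch of multi-standard normalization, not MZmine's
    exact algorithm."""
    normalized = []
    for rt, intensity in peaks:
        # pick the internal standard nearest in retention time
        std_rt, std_intensity = min(standards, key=lambda s: abs(s[0] - rt))
        normalized.append((rt, intensity / std_intensity))
    return normalized
```

With standards at both ends of the chromatogram, early-eluting peaks are scaled by the early standard and late-eluting peaks by the late one.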
NASA Astrophysics Data System (ADS)
Hus, Jean-Christophe; Bruschweiler, Rafael
2002-07-01
A general method is presented for the reconstruction of interatomic vector orientations from nuclear magnetic resonance (NMR) spectroscopic data of tensor interactions of rank 2, such as dipolar coupling and chemical shielding anisotropy interactions, in solids and partially aligned liquid-state systems. The method, called PRIMA, is based on a principal component analysis of the covariance matrix of the NMR parameters collected for multiple alignments. The five nonzero eigenvalues and their eigenvectors efficiently allow the approximate reconstruction of the vector orientations of the underlying interactions. The method is demonstrated for an isotropic distribution of sample orientations as well as for finite sets of orientations and internuclear vectors encountered in protein systems.
Covariance Matrix Estimation for Massive MIMO
NASA Astrophysics Data System (ADS)
Upadhya, Karthik; Vorobyov, Sergiy A.
2018-04-01
We propose a novel pilot structure for covariance matrix estimation in massive multiple-input multiple-output (MIMO) systems in which each user transmits two pilot sequences, with the second pilot sequence multiplied by a random phase-shift. The covariance matrix of a particular user is obtained by computing the sample cross-correlation of the channel estimates obtained from the two pilot sequences. This approach relaxes the requirement that all the users transmit their uplink pilots over the same set of symbols. We derive expressions for the achievable rate and the mean-squared error of the covariance matrix estimate when the proposed method is used with staggered pilots. The performance of the proposed method is compared with existing methods through simulations.
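Why cross-correlating the two pilot-based channel estimates works can be illustrated with a scalar toy model (the variances and simulation setup below are assumptions for illustration, not the paper's system model): the independent estimation noise on the two pilots averages out of the cross-correlation, whereas it biases the auto-correlation of a single estimate.

```python
import random

random.seed(7)

def crandn():
    # circularly symmetric complex Gaussian sample with unit variance
    return complex(random.gauss(0, 0.5 ** 0.5), random.gauss(0, 0.5 ** 0.5))

N = 100_000
var_h, var_n = 2.0, 1.0        # channel power and estimation-noise power
auto, cross = 0.0, 0.0
for _ in range(N):
    h = (var_h ** 0.5) * crandn()        # true channel coefficient
    h1 = h + (var_n ** 0.5) * crandn()   # estimate from the first pilot
    h2 = h + (var_n ** 0.5) * crandn()   # estimate from the second pilot
    auto += abs(h1) ** 2                 # auto-correlation: biased by noise
    cross += (h1 * h2.conjugate()).real  # cross-correlation: noise cancels
print(auto / N, cross / N)
```

The auto-correlation converges to var_h + var_n while the cross-correlation converges to var_h alone, which is the quantity a covariance estimator actually wants.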
The Mediated MIMIC Model for Understanding the Underlying Mechanism of DIF.
Cheng, Ying; Shao, Can; Lathrop, Quinn N
2016-02-01
Due to its flexibility, the multiple-indicator, multiple-causes (MIMIC) model has become an increasingly popular method for the detection of differential item functioning (DIF). In this article, we propose the mediated MIMIC model method to uncover the underlying mechanism of DIF. This method extends the usual MIMIC model by including one or more variables that may completely or partially mediate the DIF effect. If a complete mediation effect is found, the DIF effect is fully accounted for. Through our simulation study, we find that the mediated MIMIC model is very successful in detecting the mediation effect that completely or partially accounts for DIF, while keeping the Type I error rate well controlled for both balanced and unbalanced sample sizes between focal and reference groups. Because it is successful in detecting such mediation effects, the mediated MIMIC model may help explain DIF and give guidance in the revision of a DIF item.
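For a single mediator, the algebra behind "complete versus partial mediation" is the standard linear mediation decomposition (shown here generically; the MIMIC parameterization adds the measurement model on top of it). With grouping variable $Z$, mediator $M$, and outcome $Y$:

```latex
M = a Z + \varepsilon_1, \qquad
Y = c' Z + b M + \varepsilon_2
\quad\Longrightarrow\quad
c = c' + ab
```

The total DIF effect $c$ splits into a direct part $c'$ and an indirect (mediated) part $ab$; complete mediation corresponds to $c' = 0$, in which case the DIF effect is fully carried by the mediator.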
The Mediated MIMIC Model for Understanding the Underlying Mechanism of DIF
Cheng, Ying; Shao, Can; Lathrop, Quinn N.
2015-01-01
Due to its flexibility, the multiple-indicator, multiple-causes (MIMIC) model has become an increasingly popular method for the detection of differential item functioning (DIF). In this article, we propose the mediated MIMIC model method to uncover the underlying mechanism of DIF. This method extends the usual MIMIC model by including one or more variables that may completely or partially mediate the DIF effect. If a complete mediation effect is found, the DIF effect is fully accounted for. Through our simulation study, we find that the mediated MIMIC model is very successful in detecting the mediation effect that completely or partially accounts for DIF, while keeping the Type I error rate well controlled for both balanced and unbalanced sample sizes between focal and reference groups. Because it is successful in detecting such mediation effects, the mediated MIMIC model may help explain DIF and give guidance in the revision of a DIF item.
Zou, W; Ouyang, H
2016-02-01
We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximal likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced the MSE of MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
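The shrink-toward-the-regional-effect step can be sketched with standard normal-normal empirical-Bayes shrinkage (a minimal sketch: MEA's hierarchical Bayesian model is richer, and the common standard error here is an assumption for illustration):

```python
from statistics import mean, pvariance

def shrink_to_region(estimates, se):
    """Empirical-Bayes shrinkage of per-variant MLEs toward the regional
    mean under a normal-normal model (illustrative sketch, not MEA itself).
    `se` is the common standard error of each estimate."""
    mu = mean(estimates)                           # regional effect
    # method-of-moments between-variant variance, floored at zero
    tau2 = max(pvariance(estimates) - se ** 2, 0.0)
    denom = se ** 2 + tau2
    b = se ** 2 / denom if denom > 0 else 1.0      # shrinkage factor in [0, 1]
    return [b * mu + (1 - b) * est for est in estimates]
```

Extreme estimates are pulled toward the regional mean while the regional mean itself is preserved, which is exactly how shrinkage tames winner's-curse overestimation of 'top' hits.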
Development and validation of an HPLC–MS/MS method to determine clopidogrel in human plasma
Liu, Gangyi; Dong, Chunxia; Shen, Weiwei; Lu, Xiaopei; Zhang, Mengqi; Gui, Yuzhou; Zhou, Qinyi; Yu, Chen
2015-01-01
A quantitative method for clopidogrel using online-SPE tandem LC–MS/MS was developed and fully validated according to the well-established FDA guidelines. The method achieves adequate sensitivity for pharmacokinetic studies, with lower limits of quantification (LLOQs) as low as 10 pg/mL. Chromatographic separations were performed on reversed-phase Kromasil Eternity-2.5-C18-UHPLC columns for both methods. Positive electrospray ionization in multiple reaction monitoring (MRM) mode was employed for signal detection, and a deuterated analogue (clopidogrel-d4) was used as internal standard (IS). Adjustments in sample preparation, including the introduction of an online-SPE system, proved to be the most effective way to address analyte back-conversion in clinical samples. Pooled clinical samples (two levels) were prepared and successfully used as real-sample quality controls (QCs) in the validation of back-conversion testing under different conditions. The results showed that the real samples were stable at room temperature for 24 h. Linearity, precision, extraction recovery, matrix effect on spiked QC samples and stability tests on both spiked QCs and real-sample QCs stored under different conditions met the acceptance criteria. This online-SPE method was successfully applied to a bioequivalence study of 75 mg single-dose clopidogrel tablets in 48 healthy male subjects. PMID:26904399
Robinson, Mark D; De Souza, David P; Keen, Woon Wai; Saunders, Eleanor C; McConville, Malcolm J; Speed, Terence P; Likić, Vladimir A
2007-10-29
Gas chromatography-mass spectrometry (GC-MS) is a robust platform for the profiling of certain classes of small molecules in biological samples. When multiple samples are profiled, including replicates of the same sample and/or different sample states, one needs to account for retention time drifts between experiments. This can be achieved either by the alignment of chromatographic profiles prior to peak detection, or by matching signal peaks after they have been extracted from chromatogram data matrices. Automated retention time correction is particularly important in non-targeted profiling studies. A new approach for matching signal peaks based on dynamic programming is presented. The proposed approach relies on both peak retention times and mass spectra. The alignment of more than two peak lists involves three steps: (1) all possible pairs of peak lists are aligned, and the similarity of each pair of peak lists is estimated; (2) a guide tree is built based on the similarity between the peak lists; (3) peak lists are progressively aligned, starting with the two most similar peak lists and following the guide tree until all peak lists are exhausted. When two or more experiments are performed on different sample states, each consisting of multiple replicates, peak lists within each set of replicate experiments are aligned first (within-state alignment), and subsequently the resulting alignments are aligned themselves (between-state alignment). When more than two sets of replicate experiments are present, the between-state alignment also employs the guide tree. We demonstrate the usefulness of this approach on GC-MS metabolic profiling experiments acquired on wild-type and mutant Leishmania mexicana parasites. We propose a progressive method to match signal peaks across multiple GC-MS experiments based on dynamic programming. A sensitive peak similarity function is proposed to balance peak retention time and peak mass spectra similarities. This approach can produce the optimal alignment between an arbitrary number of peak lists, and models explicitly within-state and between-state peak alignment. The accuracy of the proposed method was close to the accuracy of manually curated peak matching, which required tens of man-hours for the analyzed data sets. The proposed approach may offer significant advantages for the processing of high-throughput metabolomics data, especially when large numbers of experimental replicates and multiple sample states are analyzed.
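The pairwise step of such an alignment can be sketched as a Needleman-Wunsch-style dynamic program whose match score blends retention-time closeness with mass-spectrum cosine similarity (the weight, tolerance and gap penalty below are assumed values, not the paper's):

```python
import math

def peak_similarity(p, q, rt_tol=5.0, w=0.5):
    """Blend retention-time closeness and spectral cosine similarity.
    A peak is (rt, spectrum) with spectrum a {mz: intensity} dict."""
    rt_sim = max(0.0, 1.0 - abs(p[0] - q[0]) / rt_tol)
    mzs = set(p[1]) | set(q[1])
    dot = sum(p[1].get(m, 0.0) * q[1].get(m, 0.0) for m in mzs)
    norm = math.sqrt(sum(v * v for v in p[1].values())) * \
           math.sqrt(sum(v * v for v in q[1].values()))
    spec_sim = dot / norm if norm else 0.0
    return w * rt_sim + (1 - w) * spec_sim

def align_peak_lists(a, b, gap=0.3):
    """Needleman-Wunsch-style DP over two RT-sorted peak lists; returns
    matched index pairs. Illustrative sketch, not the paper's exact scoring."""
    n, m = len(a), len(b)
    score = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = score[i - 1][0] - gap
    for j in range(1, m + 1):
        score[0][j] = score[0][j - 1] - gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i][j] = max(
                score[i - 1][j - 1] + peak_similarity(a[i - 1], b[j - 1]),
                score[i - 1][j] - gap,
                score[i][j - 1] - gap)
    # traceback to recover the matched pairs
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if math.isclose(score[i][j],
                        score[i - 1][j - 1] + peak_similarity(a[i - 1], b[j - 1])):
            pairs.append((i - 1, j - 1)); i -= 1; j -= 1
        elif math.isclose(score[i][j], score[i - 1][j] - gap):
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```

A progressive multi-list alignment would apply this pairwise routine repeatedly, following the guide tree from the most similar pair outward.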
Caldas, Sergiane S; Bolzan, Cátia M; Cerqueira, Maristela B; Tomasini, Débora; Furlong, Eliana B; Fagundes, Carlos; Primel, Ednei G
2011-11-23
A new method for the determination of clomazone, fipronil, tebuconazole, propiconazole, and azoxystrobin in samples of rice paddy soil is presented. The extraction of the pesticides from soil samples was performed by using a modified quick, easy, cheap, effective, rugged, and safe (QuEChERS) method. Extraction conditions such as salt addition, sample acidification, use of buffer, and the cleanup step were evaluated. The optimized method consisted of a single extraction of the compounds under study with acidified acetonitrile, followed by the addition of MgSO(4) and NaCl prior to the final determination by liquid chromatography-atmospheric pressure chemical ionization-tandem mass spectrometry. Validation studies were carried out in soil samples. Recoveries from the spiked samples ranged between 70.3 and 120%, with relative standard deviations lower than 18.2%. The limits of quantification were between 10 and 50 μg kg(-1). The method was applied to the analysis of real samples of soils where rice is cultivated.
Solid-Phase Extraction (SPE): Principles and Applications in Food Samples.
Ötles, Semih; Kartal, Canan
2016-01-01
Solid-phase extraction (SPE) is a sample preparation method practised in numerous application fields due to its many advantages over other traditional methods. SPE was invented as an alternative to liquid/liquid extraction and eliminated multiple disadvantages, such as the use of large amounts of solvent, extended operation times and procedure steps, potential sources of error, and high cost. Moreover, SPE can optionally be combined with other analytical methods and sample preparation techniques. Owing to its versatility, SPE is a useful tool for many purposes: isolation, concentration, purification and clean-up are the main approaches in the practice of this method. Food structures represent a complicated matrix and can occur in different physical states, such as solid, viscous or liquid. Therefore, the sample preparation step plays a particularly important role in the determination of specific compounds in foods. SPE offers many opportunities not only for the analysis of a large diversity of food samples but also for optimization and advances. This review aims to provide a comprehensive overview of the basic principles of SPE and its applications for many analytes in food matrices.
2012-01-01
Background ChIP-seq provides new opportunities to study allele-specific protein-DNA binding (ASB). However, detecting allelic imbalance from a single ChIP-seq dataset often has low statistical power, since only sequence reads mapped to heterozygote SNPs are informative for discriminating the two alleles. Results We develop a new method, iASeq, to address this issue by jointly analyzing multiple ChIP-seq datasets. iASeq uses a Bayesian hierarchical mixture model to learn correlation patterns of allele-specificity among multiple proteins. Using the discovered correlation patterns, the model allows one to borrow information across datasets to improve detection of allelic imbalance. Application of iASeq to 77 ChIP-seq samples from 40 ENCODE datasets and 1 genomic DNA sample in GM12878 cells reveals that the allele-specificity of multiple proteins is highly correlated, and demonstrates the ability of iASeq to improve allelic inference compared to analyzing each individual dataset separately. Conclusions iASeq illustrates the value of integrating multiple datasets in allele-specificity inference and offers a new tool to better analyze ASB. PMID:23194258
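The single-dataset baseline that iASeq improves upon is a per-SNP test of allelic imbalance, for example an exact binomial test of reference versus alternate read counts at a heterozygous SNP (a generic stdlib-only sketch, not iASeq itself):

```python
from math import comb

def binomial_two_sided_p(ref, alt, p=0.5):
    """Exact two-sided binomial test of allelic imbalance at one
    heterozygous SNP: the probability, under balanced binding (p = 0.5),
    of any outcome at most as likely as the observed reference count."""
    n = ref + alt
    pmf = [comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]
    obs = pmf[ref]
    # sum the probabilities of all outcomes no more likely than the observed one
    return min(1.0, sum(q for q in pmf if q <= obs + 1e-12))
```

With shallow coverage this test is underpowered at any single SNP, which is exactly the motivation for borrowing strength across datasets.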
Fu, Glenn K; Wilhelmy, Julie; Stern, David; Fan, H Christina; Fodor, Stephen P A
2014-03-18
We present a new approach for the sensitive detection and accurate quantitation of messenger ribonucleic acid (mRNA) gene transcripts in single cells. First, the entire population of mRNAs is encoded with molecular barcodes during reverse transcription. After amplification of the gene targets of interest, molecular barcodes are counted by sequencing or scored on a simple hybridization detector to reveal the number of molecules in the starting sample. Since absolute quantities are measured, calibration to standards is unnecessary, and many of the relative quantitation challenges such as polymerase chain reaction (PCR) bias are avoided. We apply the method to gene expression analysis of minute sample quantities and demonstrate precise measurements with sensitivity down to sub single-cell levels. The method is an easy, single-tube, end point assay utilizing standard thermal cyclers and PCR reagents. Accurate and precise measurements are obtained without any need for cycle-to-cycle intensity-based real-time monitoring or physical partitioning into multiple reactions (e.g., digital PCR). Further, since all mRNA molecules are encoded with molecular barcodes, amplification can be used to generate more material for multiple measurements and technical replicates can be carried out on limited samples. The method is particularly useful for small sample quantities, such as single-cell experiments. Digital encoding of cellular content preserves true abundance levels and overcomes distortions introduced by amplification.
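The counting step can be sketched as follows (gene and barcode strings are hypothetical toy data): reads sharing a (gene, barcode) pair are collapsed to a single molecule, so however many PCR copies were made, each original mRNA is counted once.

```python
from collections import defaultdict

def count_molecules(reads):
    """Collapse sequencing reads into molecule counts: reads carrying the
    same (gene, barcode) pair derive from one original mRNA molecule.
    `reads` is an iterable of (gene, barcode) tuples."""
    barcodes = defaultdict(set)
    for gene, barcode in reads:
        barcodes[gene].add(barcode)
    return {gene: len(umis) for gene, umis in barcodes.items()}

# 50 PCR copies of one ACTB molecule, 3 of another, 20 of one GAPDH molecule
reads = ([("ACTB", "AAGT")] * 50 + [("ACTB", "CGTA")] * 3
         + [("GAPDH", "TTAC")] * 20)
print(count_molecules(reads))  # {'ACTB': 2, 'GAPDH': 1}
```

Raw read counts (53 versus 20) would suggest a 2.6-fold difference, while the barcode counts recover the true 2:1 molecule ratio despite the uneven amplification.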
Monitoring corneal crosslinking using phase-decorrelation OCT (Conference Presentation)
NASA Astrophysics Data System (ADS)
Blackburn, Brecken J.; Gu, Shi; Jenkins, Michael W.; Rollins, Andrew M.
2017-02-01
Viscosity is often a critical characteristic of biological fluids such as blood and mucus. However, traditional rheology is often inadequate when only small quantities of sample are available. A robust method to measure viscosity of microquantities of biological samples could lead to a better understanding and diagnosis of diseases. Here, we present a method to measure viscosity by observing particle Brownian motion within a sample. M-mode optical coherence tomography (OCT) imaging, obtained with a phase-sensitive 47 kHz spectral domain system, yields a viscosity measurement from multiple 200-1000 microsecond frames. This very short period of continuous acquisition, as compared to laser speckle decorrelation, decreases sensitivity to bulk motion, thereby potentially enabling in vivo and in situ applications. The theory linking g(1) first-order image autocorrelation to viscosity is derived from first principles of Brownian motion and the Stokes-Einstein relation. To improve precision, multiple windows acquired over 500 milliseconds are analyzed and the resulting linear fit parameters are averaged. Verification experiments were performed with 200 µL samples of glycerol and water with polystyrene microbeads. Lateral bulk motion up to 2 mm/s was tolerated and accurate viscosity measurements were obtained to a depth of 400 µm or more. Additionally, the method measured a significant decrease of the apparent diffusion constant of soft tissue after formalin fixation, suggesting potential for mapping tissue stiffness over a volume.
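Once the autocorrelation fit yields an apparent diffusion constant, viscosity follows from the Stokes-Einstein relation (the bead size and diffusion value below are assumed, order-of-magnitude numbers for illustration):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def viscosity_from_diffusion(D, radius, temp_k=293.15):
    """Stokes-Einstein relation: eta = k_B * T / (6 * pi * r * D).
    D in m^2/s, particle radius in m; returns dynamic viscosity in Pa*s."""
    return K_B * temp_k / (6 * math.pi * radius * D)

# Plausibility check: a 1 um-diameter polystyrene bead in water at 20 C
# diffuses at roughly D ~ 4.3e-13 m^2/s, which should give eta ~ 1e-3 Pa*s.
eta = viscosity_from_diffusion(4.3e-13, 0.5e-6)
print(eta)
```

Inverting the same relation is what lets the decay rate of the measured autocorrelation, which determines D, report viscosity from microliter-scale samples.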
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.
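The quantity the surrogate ultimately feeds into is a plain Monte Carlo estimate of the failure probability: the fraction of sampled inputs whose limit-state function is negative (the limit state below is a toy one-dimensional example, not from the paper; the surrogate would stand in for `g` when the true model is expensive).

```python
import random

def failure_probability(g, sampler, n=100_000):
    """Monte Carlo failure-probability estimate: evaluate the (surrogate)
    limit-state function g on sampled inputs and count failures g(x) < 0."""
    return sum(1 for _ in range(n) if g(sampler()) < 0) / n

random.seed(1)
# toy limit state: failure when a standard-normal load exceeds capacity 2.0
p_f = failure_probability(lambda x: 2.0 - x, lambda: random.gauss(0, 1))
print(p_f)
```

Because each estimate needs many evaluations of `g`, replacing the expensive simulator with a Gaussian process surrogate that is accurate near the failure boundary is what makes the computation tractable.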
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to alleviate the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches select the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel active learning method called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
Heterogeneous Multi-Metric Learning for Multi-Sensor Fusion
2011-07-01
distance". One of the most widely used methods is the k-nearest neighbor (KNN) method [4], which labels an input data sample with the majority class ... despite its simplicity, it can be an effective candidate and can be easily extended to handle multiple sensors. Distance-based methods such as KNN rely ... Neighbor (LMNN) method [21], which will be briefly reviewed in the sequel. The LMNN method tries to learn an optimal metric specifically for the KNN classifier. The
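The majority-vote classifier described in the snippet is easy to sketch. The version below uses a plain Euclidean metric; LMNN would replace that with a learned metric, which is not reproduced here.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training samples.
    `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_predict(train, (1, 1)))   # the three nearest samples are all "a"
```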
NASA Astrophysics Data System (ADS)
Baronian, Vahan; Bourgeois, Laurent; Chapuis, Bastien; Recoquillay, Arnaud
2018-07-01
This paper presents an application of the linear sampling method to ultrasonic non destructive testing of an elastic waveguide. In particular, the NDT context implies that both the solicitations and the measurements are located on the surface of the waveguide and are given in the time domain. Our strategy consists in using a modal formulation of the linear sampling method at multiple frequencies, such modal formulation being justified theoretically in Bourgeois et al (2011 Inverse Problems 27 055001) for rigid obstacles and in Bourgeois and Lunéville (2013 Inverse Problems 29 025017) for cracks. Our strategy requires the inversion of some emission and reception matrices which deserve some special attention due to potential ill-conditioning. The feasibility of our method is proved with the help of artificial data as well as real data.
Chen, Hsiao-Wei; Wu, Chun-Feng; Chu, Lichieh Julie; Chiang, Wei-Fang; Wu, Chih-Ching; Yu, Jau-Song; Tsai, Cheng-Han; Liang, Kung-Hao; Chang, Yu-Sun; Wu, Maureen; Ou Yang, Wei-Ting
2017-01-01
Multiple (selected) reaction monitoring (MRM/SRM) of peptides is a growing technology for target protein quantification because it is more robust, precise, accurate, high-throughput, and multiplex-capable than antibody-based techniques. The technique has been applied clinically to the large-scale quantification of multiple target proteins in different types of fluids. However, previous MRM-based studies have placed less focus on sample-preparation workflow and analytical performance in the precise quantification of proteins in saliva, a noninvasively sampled body fluid. In this study, we evaluated the analytical performance of a simple and robust multiple reaction monitoring (MRM)-based targeted proteomics approach incorporating liquid chromatography with mass spectrometry detection (LC-MRM/MS). This platform was used to quantitatively assess the biomarker potential of a group of 56 salivary proteins that have previously been associated with human cancers. To further enhance the development of this technology for assay of salivary samples, we optimized the workflow for salivary protein digestion and evaluated quantification performance, robustness and technical limitations in analyzing clinical samples. Using a clinically well-characterized cohort of two independent clinical sample sets (total n = 119), we quantitatively characterized these protein biomarker candidates in saliva specimens from controls and oral squamous cell carcinoma (OSCC) patients. The results clearly showed a significant elevation of most targeted proteins in saliva samples from OSCC patients compared with controls. Overall, this platform was capable of assaying the most highly multiplexed panel of salivary protein biomarkers, highlighting the clinical utility of MRM in oral cancer biomarker research. PMID:28235782
Park, Douglas L; Coates, Scott; Brewer, Vickery A; Garber, Eric A E; Abouzied, Mohamed; Johnson, Kurt; Ritter, Bruce; McKenzie, Deborah
2005-01-01
Performance Tested Method multiple-laboratory validations for the detection of peanut protein in 4 different food matrixes were conducted under the auspices of the AOAC Research Institute. In this blind study, 3 commercially available ELISA test kits were validated: Neogen Veratox for Peanut, R-Biopharm RIDASCREEN FAST Peanut, and Tepnel BioKits for Peanut Assay. The food matrixes used were breakfast cereal, cookies, ice cream, and milk chocolate spiked at 0 and 5 ppm peanut. Analyses of the samples were conducted by laboratories representing industry and international and U.S. governmental agencies. All 3 commercial test kits successfully identified spiked and peanut-free samples. The validation study required 60 analyses on test samples at the target level of 5 microg peanut/g food and 60 analyses at a peanut-free level, which was designed to ensure that the lower 95% confidence limit for the sensitivity and specificity would not be <90%. The probability that a test sample contains an allergen, given a prevalence rate of 5% and a positive result from a single test kit analysis with 95% sensitivity and 95% specificity (as demonstrated for these test kits), would be 50%. When 2 test kits are run simultaneously on all samples, the probability becomes 95%. It is therefore recommended that all field samples be analyzed with at least 2 of the validated kits.
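The 50% and 95% figures quoted above follow directly from Bayes' theorem, assuming the two kits' errors are independent; a quick check:

```python
def ppv(prevalence, sensitivity, specificity):
    # Bayes' theorem: P(allergen present | positive test)
    tp = prevalence * sensitivity                 # true-positive mass
    fp = (1 - prevalence) * (1 - specificity)     # false-positive mass
    return tp / (tp + fp)

single = ppv(0.05, 0.95, 0.95)
# Two independent kits both positive: sensitivities multiply,
# and so do the false-positive rates (1 - specificity).
double = ppv(0.05, 0.95 ** 2, 1 - 0.05 ** 2)
print(round(single, 2), round(double, 2))  # 0.5 0.95
```

At 5% prevalence a single positive is a coin flip, which is why the study recommends confirming every field sample with a second validated kit.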
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
Shade, Paul A.; Menasche, David B.; Bernier, Joel V.; ...
2016-03-01
An evolving suite of X-ray characterization methods are presently available to the materials community, providing a great opportunity to gain new insight into material behavior and provide critical validation data for materials models. Two critical and related issues are sample repositioning during anin situexperiment and registration of multiple data sets after the experiment. To address these issues, a method is described which utilizes a focused ion-beam scanning electron microscope equipped with a micromanipulator to apply gold fiducial markers to samples for X-ray measurements. The method is demonstrated with a synchrotron X-ray experiment involvingin situloading of a titanium alloy tensile specimen.
ERIC Educational Resources Information Center
Thompson, Bruce
The relationship between analysis of variance (ANOVA) methods and their analogs (analysis of covariance and multiple analyses of variance and covariance--collectively referred to as OVA methods) and the more general analytic case is explored. A small heuristic data set is used, with a hypothetical sample of 20 subjects, randomly assigned to five…
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…
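The latent-variable modeling procedure above is not described in enough detail to reproduce, but the quantity being estimated is the classical sample coefficient alpha, which for k components is k/(k-1) times (1 minus the ratio of summed item variances to the variance of the totals). A small check with made-up scores:

```python
import statistics

def cronbach_alpha(items):
    """items: one list of scores per component, aligned across respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent sums
    item_var = sum(statistics.variance(i) for i in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

# Three components measured on five respondents (made-up scores)
items = [[4, 3, 5, 2, 4],
         [4, 2, 5, 3, 4],
         [3, 3, 4, 2, 5]]
print(round(cronbach_alpha(items), 2))  # 0.87
```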
Müller, Marco; Wasmer, Katharina; Vetter, Walter
2018-06-29
Countercurrent chromatography (CCC) is an all liquid based separation technique typically used for the isolation and purification of natural compounds. The simplicity of the method makes it easy to scale up CCC separations from analytical to preparative and even industrial scale. However, scale-up of CCC separations requires two different instruments with varying coil dimensions. Here we developed two variants of the CCC multiple injection mode as an alternative to increase the throughput and enhance productivity of a CCC separation when using only one instrument. The concept is based on the parallel injection of samples at different points in the CCC column system and the simultaneous separation using one pump only. The wiring of the CCC setup was modified by the insertion of a 6-port selection valve, multiple T-pieces and sample loops. Furthermore, the introduction of storage sample loops enabled the CCC system to be used with repeated injection cycles. Setup and advantages of both multiple injection modes were shown by the isolation of the furan fatty acid 11-(3,4-dimethyl-5-pentylfuran-2-yl)-undecanoic acid (11D5-EE) from an ethyl ester oil rich in 4,7,10,13,16,19-docosahexaenoic acid (DHA-EE). 11D5-EE was enriched in one step from 1.9% to 99% purity. The solvent consumption per isolated amount of analyte could be reduced by ∼40% compared to increased throughput CCC and by ∼5% in the repeated multiple injection mode which also facilitated the isolation of the major compound (DHA-EE) in the sample. Copyright © 2018 Elsevier B.V. All rights reserved.
[Determination of biphenyl ether herbicides in water using HPLC with cloud-point extraction].
He, Cheng-Yan; Li, Yuan-Qian; Wang, Shen-Jiao; Ouyang, Hua-Xue; Zheng, Bo
2010-01-01
To determine residues of multiple biphenyl ether herbicides simultaneously in water, high performance liquid chromatography (HPLC) with cloud-point extraction was used. The residues of eight biphenyl ether herbicides (bentazone, fomesafen, acifluorfen, aclonifen, bifenox, fluoroglycofen-ethyl, nitrofen, and oxyfluorfen) in water samples were extracted by cloud-point extraction with Triton X-114. The analytes were separated and determined by reverse-phase HPLC with ultraviolet detection at 300 nm. Optimized conditions were established for the pretreatment of the water samples and for the chromatographic separation parameters. There was a good linear correlation between concentration and peak area for the analytes in the range of 0.05-2.00 mg/L (r = 0.9991-0.9998). Except for bentazone, the spiked recoveries of the biphenyl ether herbicides in the water samples ranged from 80.1% to 100.9%, with relative standard deviations ranging from 2.70% to 6.40%. The detection limit of the method ranged from 0.10 microg/L to 0.50 microg/L. The proposed method is simple, rapid, and sensitive, and can meet the requirements for simultaneous determination of multiple biphenyl ether herbicides in natural waters.
Herrington, Jason S; Fan, Zhi-Hua Tina; Lioy, Paul J; Zhang, Junfeng Jim
2007-01-15
Airborne aldehyde and ketone (carbonyl) sampling methodologies based on derivatization with 2,4-dinitrophenylhydrazine (DNPH)-coated solid sorbents could unequivocally be considered the "gold" standard. Originally developed in the late 1970s, these methods have been extensively evaluated and refined up to the present day. However, they have been inadequately evaluated for the long-term (i.e., 24 h or greater) sampling collection efficiency (CE) of carbonyls other than formaldehyde. The current body of literature fails to demonstrate that DNPH-coated solid sorbent sampling methods have acceptable CEs for the long-term sampling of carbonyls other than formaldehyde. Despite this, such methods are widely used to report the concentrations of multiple carbonyls from long-term sampling, assuming approximately 100% CEs. Laboratory experiments were conducted in this study to evaluate the long-term formaldehyde and acetaldehyde sampling CEs for several commonly used DNPH-coated solid sorbents. Results from sampling known concentrations of formaldehyde and acetaldehyde generated in a dynamic atmosphere generation system demonstrate that the 24-hour formaldehyde sampling CEs ranged from 83 to 133%, confirming the findings of previous studies. However, the 24-hour acetaldehyde sampling CEs ranged from 1 to 62%. Attempts to increase the acetaldehyde CEs by adding acid to the samples post-sampling were unsuccessful. These results indicate that assuming approximately 100% CEs for 24-hour acetaldehyde sampling, as commonly done with DNPH-coated solid sorbent methods, would substantially underestimate acetaldehyde concentrations.
Conley, Stephen; Faloona, Ian; Mehrotra, Shobhit; ...
2017-09-13
Airborne estimates of greenhouse gas emissions are becoming more prevalent with the advent of rapid commercial development of trace gas instrumentation featuring increased measurement accuracy, precision, and frequency, and the swelling interest in the verification of current emission inventories. Multiple airborne studies have indicated that emission inventories may underestimate some hydrocarbon emission sources in US oil- and gas-producing basins. Consequently, a proper assessment of the accuracy of these airborne methods is crucial to interpreting the meaning of such discrepancies. We present a new method of sampling surface sources of any trace gas for which fast and precise measurements can be made and apply it to methane, ethane, and carbon dioxide on spatial scales of ~1000 m, where consecutive loops are flown around a targeted source region at multiple altitudes. Using Reynolds decomposition for the scalar concentrations, along with Gauss's theorem, we show that the method accurately accounts for the smaller-scale turbulent dispersion of the local plume, which is often ignored in other average mass balance methods. With the help of large eddy simulations (LES) we further show how the circling radius can be optimized for the micrometeorological conditions encountered during any flight. Furthermore, by sampling controlled releases of methane and ethane on the ground we can ascertain that the accuracy of the method, in appropriate meteorological conditions, is often better than 10 %, with limits of detection below 5 kg h-1 for both methane and ethane. Because of the FAA-mandated minimum flight safe altitude of 150 m, placement of the aircraft is critical to preventing a large portion of the emission plume from flowing underneath the lowest aircraft sampling altitude, which is generally the leading source of uncertainty in these measurements. Finally, we show how the accuracy of the method is strongly dependent on the number of sampling loops and/or time spent sampling the source plume.
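The mass balance described above rests on Gauss's theorem: the source strength equals the net flux of the scalar through the closed cylindrical surface traced by the flight loops. A minimal discrete version of that surface integral is sketched below; the concentrations, winds, and patch areas are made-up numbers for illustration, not data from the study.

```python
# Discrete Gauss's-theorem mass balance: the emission rate of a surface
# source equals the net outward flux of the scalar through a closed
# cylindrical surface traced by the aircraft loops.
def emission_rate(patches):
    """patches: iterable of (excess_conc, normal_wind, area) tuples,
    in kg/m^3, m/s (positive outward), and m^2 respectively."""
    return sum(c * u * a for c, u, a in patches)

# Toy cylinder with an enhanced plume only on the downwind side
patches = [
    (2.0e-9, 3.0, 5000.0),   # downwind patch carries the excess out
    (0.0,   -3.0, 5000.0),   # upwind patch: background air flowing in
]
rate_kg_s = emission_rate(patches)
print(rate_kg_s * 3600)  # emission rate in kg per hour
```

In the real method each patch would carry a turbulent-flux term from the Reynolds decomposition as well as the mean advective term shown here.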
NASA Astrophysics Data System (ADS)
Conley, Stephen; Faloona, Ian; Mehrotra, Shobhit; Suard, Maxime; Lenschow, Donald H.; Sweeney, Colm; Herndon, Scott; Schwietzke, Stefan; Pétron, Gabrielle; Pifer, Justin; Kort, Eric A.; Schnell, Russell
2017-09-01
Airborne estimates of greenhouse gas emissions are becoming more prevalent with the advent of rapid commercial development of trace gas instrumentation featuring increased measurement accuracy, precision, and frequency, and the swelling interest in the verification of current emission inventories. Multiple airborne studies have indicated that emission inventories may underestimate some hydrocarbon emission sources in US oil- and gas-producing basins. Consequently, a proper assessment of the accuracy of these airborne methods is crucial to interpreting the meaning of such discrepancies. We present a new method of sampling surface sources of any trace gas for which fast and precise measurements can be made and apply it to methane, ethane, and carbon dioxide on spatial scales of ˜ 1000 m, where consecutive loops are flown around a targeted source region at multiple altitudes. Using Reynolds decomposition for the scalar concentrations, along with Gauss's theorem, we show that the method accurately accounts for the smaller-scale turbulent dispersion of the local plume, which is often ignored in other average mass balance
methods. With the help of large eddy simulations (LES) we further show how the circling radius can be optimized for the micrometeorological conditions encountered during any flight. Furthermore, by sampling controlled releases of methane and ethane on the ground we can ascertain that the accuracy of the method, in appropriate meteorological conditions, is often better than 10 %, with limits of detection below 5 kg h-1 for both methane and ethane. Because of the FAA-mandated minimum flight safe altitude of 150 m, placement of the aircraft is critical to preventing a large portion of the emission plume from flowing underneath the lowest aircraft sampling altitude, which is generally the leading source of uncertainty in these measurements. Finally, we show how the accuracy of the method is strongly dependent on the number of sampling loops and/or time spent sampling the source plume.
Li, Tengfei; Cui, Zhimin; Wang, Yan; Yang, Wen; Li, Duo; Song, QinXin; Sun, Luning; Ding, Li
2018-03-20
As an orally active iron chelator, deferasirox forms complexes with ferric ion in prepared plasma samples and in the LC-MS mobile phase wherever ferric ion is present; without proper treatment, the detected concentration of deferasirox after LC-MS analysis is therefore lower than the nominal concentration. Moreover, repeated injections of the same deferasirox plasma sample from the same tube show progressively lower detected concentrations, because more ferric ions dissolve into the sample solution from the injection needle with each injection. Adding a suitable concentration of EDTA to the mobile phase and the sample competitively inhibits deferasirox from complexing with ferric ion and prevents the apparent decrease in deferasirox concentration. In this paper, an LC-MS/MS method was developed and validated for the determination of deferasirox in human plasma. For protein precipitation, the analytes were extracted from 200 μL aliquots of human plasma with acetonitrile. Chromatographic separation was performed on an ODS-C18 column with a mobile phase consisting of methanol and 0.1% formic acid containing 0.04 mM ethylenediaminetetraacetate dihydrate (EDTA) (80:20, v/v) at a flow rate of 0.5 mL/min. Deferasirox and the internal standard (IS, mifepristone) were detected using electrospray ionization in positive-ion multiple reaction monitoring mode by monitoring the precursor-to-product ion transitions m/z 374.2 → 108.1 for deferasirox and m/z 430.1 → 372.2 for the IS. The method exhibited good linearity over the concentration range of 0.04-40 μg/mL for deferasirox and was successfully applied to a pharmacokinetic study in 10 healthy Chinese volunteers after oral administration of deferasirox. Copyright © 2018 Elsevier B.V. All rights reserved.
Vaccarella, Salvatore; Söderlund-Strand, Anna; Franceschi, Silvia; Plummer, Martyn; Dillner, Joakim
2013-01-01
To evaluate the pattern of co-infection of human papillomavirus (HPV) types in both sexes in Sweden. Cell samples from genital swabs, first-void urine, and genital swabs immersed in first-void urine were collected in the present cross-sectional High Throughput HPV Monitoring study. Overall, 31,717 samples from women and 9,949 from men (mean age 25) were tested for 16 HPV types using mass spectrometry. Multilevel logistic regression was used to estimate the expected number of multiple infections with specific HPV types, adjusted for age, type of sample, and accounting for correlations between HPV types due to unobserved risk factors using sample-level random effects. Bonferroni correction was used to allow for multiple comparisons (120). Observed-to-expected ratio for any multiple infections was slightly above unity in both sexes, but, for most 2-type combinations, there was no evidence of significant departure from expected numbers. HPV6/18 was found more often and HPV51/68 and 6/68 less often than expected. However, HPV68 tended to be generally underrepresented in co-infections, suggesting a sub-optimal performance of our testing method for this HPV type. We found no evidence for positive or negative clustering between HPV types included in the current prophylactic vaccines and other untargeted oncogenic types, in either sex.
NASA Astrophysics Data System (ADS)
Alpers, C. N.; Marvin-DiPasquale, M. C.; Fleck, J.; Ackerman, J. T.; Eagles-Smith, C.; Stewart, A. R.; Windham-Myers, L.
2016-12-01
Many watersheds in the western U.S. have mercury (Hg) contamination from historical mining of Hg and precious metals (gold and silver), which were concentrated using Hg amalgamation (mid 1800's to early 1900's). Today, specialized sampling and analytical protocols for characterizing Hg and methylmercury (MeHg) in water, sediment, and biota generate high-quality data to inform management of land, water, and biological resources. Collection of vertically and horizontally integrated water samples in flowing streams and use of a Teflon churn splitter or cone splitter ensure that samples and subsamples are representative. Both dissolved and particulate components of Hg species in water are quantified because each responds to different hydrobiogeochemical processes. Suspended particles trapped on pre-combusted (Hg-free) glass- or quartz-fiber filters are analyzed for total mercury (THg), MeHg, and reactive divalent mercury. Filtrates are analyzed for THg and MeHg to approximate the dissolved fraction. The sum of concentrations in particulate and filtrate fractions represents whole water, equivalent to an unfiltered sample. This approach improves upon analysis of filtered and unfiltered samples and computation of particulate concentration by difference; volume filtered is adjusted based on suspended-sediment concentration to minimize particulate non-detects. Information from bed-sediment sampling is enhanced by sieving into multiple size fractions and determining detailed grain-size distribution. Wet sieving ensures particle disaggregation; sieve water is retained and fines are recovered by centrifugation. Speciation analysis by sequential extraction and examination of heavy mineral concentrates by scanning electron microscopy provide additional information regarding Hg mineralogy and geochemistry. Biomagnification of MeHg in food webs is tracked using phytoplankton, zooplankton, aquatic and emergent vegetation, invertebrates, fish, and birds. 
Analysis of zooplankton in multiple size fractions from multiple depths in reservoirs can provide insight into food-web dynamics. The presentation will highlight application of these methods in several Hg-contaminated watersheds, with emphasis on understanding seasonal variability in designing effective sampling strategies.
Computer-aided visualization and analysis system for sequence evaluation
Chee, M.S.
1998-08-18
A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device. 27 figs.
Computer-aided visualization and analysis system for sequence evaluation
Chee, Mark S.; Wang, Chunwei; Jevons, Luis C.; Bernhart, Derek H.; Lipshutz, Robert J.
2004-05-11
A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.
Computer-aided visualization and analysis system for sequence evaluation
Chee, Mark S.
2003-08-19
A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.
McKeever, P E; Letica, L H; Shakui, P; Averill, D R
1988-09-01
Multiple wells (M-wells) have been made over tissue sections on single microscopic slides to simultaneously localize the binding specificity of many antibodies. More than 20 individual 4-microliter wells have been applied per slide over tissue, representing more than a 5-fold improvement in wells per slide and a 25-fold reduction in reagent volume relative to previous methods. More than 30 wells per slide have been applied over cellular monolayers. To produce this improvement, the previous strategy of placing specimens into wells was reversed: wells were instead created over the specimen. We took advantage of the hydrophobic properties of paint to surround the wells and to segregate the different primary antibodies. Segregation was complete on wells alternating with and without primary monoclonal antibody. The procedure accommodates both frozen and paraffin sections, yielding slides that last more than a year. After monoclonal antibody detection, standard histologic stains can be applied as counterstains. M-wells are suitable for localizing the binding of multiple reagents or sample unknowns (polyclonal or monoclonal antibodies, hybridoma supernatants, body fluids, lectins) to either tissues or cells. Their small sample volume and large number of sample wells per slide could be particularly useful for early screening of hybridoma supernatants and for titration curves in immunohistochemistry (McKeever PE, Shakui P, Letica LH, Averill DR: J Histochem Cytochem 36:931, 1988).
Unsupervised multiple kernel learning for heterogeneous data integration.
Mariette, Jérôme; Villa-Vialaneix, Nathalie
2018-03-15
Recent high-throughput sequencing advances have expanded the breadth of available omics datasets, and the integrated analysis of multiple datasets obtained on the same samples has yielded important insights in a wide range of applications. However, integrating various sources of information remains a challenge for systems biology, since the datasets produced are often of heterogeneous types, creating a need for generic methods that take their different specificities into account. We propose a multiple kernel framework that allows multiple datasets of various types to be integrated into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets collected during the TARA Oceans expedition were explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA, as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To support this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA with respect to the original data. Second, the multi-omics breast cancer datasets provided by The Cancer Genome Atlas were analysed using kernel self-organizing maps with both single- and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method in improving the representation of the studied biological system. The proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package, and a tutorial describing the approach can be found on the mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.
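In its simplest form, a consensus meta-kernel is just a mean of per-dataset kernels fed into a kernel PCA; mixKernel additionally learns kernel weights, which is not reproduced in this sketch. The data below are made up, and each "view" stands in for one omics dataset on the same samples.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(X, gamma=0.5):
    # Gaussian kernel on the rows (samples) of X
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_pca(K, n_components=2):
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

# Two heterogeneous views of the same 10 samples
X1 = rng.normal(size=(10, 5))    # e.g. taxon abundances
X2 = rng.normal(size=(10, 3))    # e.g. environmental variables

# Consensus meta-kernel: unweighted mean of the per-dataset kernels
K = (rbf_kernel(X1) + rbf_kernel(X2)) / 2
coords = kernel_pca(K)           # one 2-D embedding for both views
print(coords.shape)
```

Because each view contributes only through its kernel, datasets of entirely different types (counts, continuous measurements, sequences) can share one embedding.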
Boo, Chelsea C; Parker, Christine H; Jackson, Lauren S
2018-01-01
Food allergy is a growing public health concern, with many individuals reporting allergies to multiple food sources. Compliance with food labeling regulations and prevention of inadvertent cross-contact in manufacturing requires the use of reliable methods for the detection and quantitation of allergens in processed foods. In this work, a novel liquid chromatography-tandem mass spectrometry multiple-reaction monitoring method for multiallergen detection and quantitation of egg, milk, and peanut was developed and evaluated in an allergen-incurred baked sugar cookie matrix. A systematic evaluation of method parameters, including sample extraction, concentration, and digestion, were optimized for candidate allergen peptide markers. The optimized method enabled the reliable detection and quantitation of egg, milk, and peanut allergens in sugar cookies, with allergen concentrations as low as 5 ppm allergen-incurred ingredient.
Automated solid-phase extraction workstations combined with quantitative bioanalytical LC/MS.
Huang, N H; Kagel, J R; Rossi, D T
1999-03-01
An automated solid-phase extraction workstation was used to develop, characterize and validate an LC/MS/MS method for quantifying a novel lipid-regulating drug in dog plasma. Method development was facilitated by workstation functions that allowed wash solvents of varying organic composition to be mixed and tested automatically. Precision estimates for this approach were within 9.8% relative standard deviation (RSD) across the calibration range. Accuracy for replicate determinations of quality controls was between -7.2 and +6.2% relative error (RE) over the range 5-1,000 ng/ml. Recoveries were evaluated for a wide variety of wash solvents, elution solvents and sorbents. Optimized recoveries were generally > 95%. A sample throughput benchmark for the method was approximately 8 min per sample. Because of parallel sample processing, 100 samples were extracted in less than 120 min. The approach has proven useful with LC/MS/MS, using a multiple reaction monitoring (MRM) approach.
Mercury (Hg) emissions from coal utilities are difficult to control. Hg eludes capture by most air pollution control devices (APCDs). To determine the gaseous Hg species in stack gases, U.S. EPA Method 5 type sampling is used. In this type of sampling a hole is drilled into th...
An improved initialization center k-means clustering algorithm based on distance and density
NASA Astrophysics Data System (ADS)
Duan, Yanling; Liu, Qun; Xia, Shuyin
2018-04-01
To address the problem that the random initial cluster centers of the k-means algorithm leave clustering results sensitive to outlier samples and unstable across repeated runs, a center initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average distance is used to represent sample density, and samples with both larger distance and higher density are selected as the initial cluster centers to optimize the clustering results. A clustering evaluation method based on distance and density is then designed to verify the feasibility and practicality of the algorithm; experimental results on UCI data sets show that the algorithm offers stability and practical value.
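The initialization scheme described above can be sketched as follows. This is an illustrative Python reading of the abstract: the exact distance weighting and the rule combining distance with density are assumptions, since they are not fully specified here.

```python
import math

def densities(points):
    # Sample density as the reciprocal of the mean distance to all other
    # samples (one plausible form of the paper's weighted average).
    n = len(points)
    return [(n - 1) / sum(math.dist(p, q) for q in points if q is not p)
            for p in points]

def init_centers(points, k):
    dens = densities(points)
    # First center: the sample of highest density.
    chosen = [max(range(len(points)), key=lambda i: dens[i])]
    # Remaining centers: favour samples that are both dense and far from
    # every center already chosen, so that outliers (far but sparse) and
    # near-duplicates (dense but close) are both avoided.
    while len(chosen) < k:
        def score(i):
            nearest = min(math.dist(points[i], points[c]) for c in chosen)
            return dens[i] * nearest
        chosen.append(max((i for i in range(len(points)) if i not in chosen),
                          key=score))
    return [points[i] for i in chosen]
```

Running ordinary k-means from these centers, rather than from random picks, is what stabilizes the final clustering across repeated runs.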
Driver fatigue detection through multiple entropy fusion analysis in an EEG-based system.
Min, Jianliang; Wang, Ping; Hu, Jianfeng
2017-01-01
Driver fatigue is an important contributor to road accidents, and fatigue detection has major implications for transportation safety. The aim of this research is to analyze the multiple entropy fusion method and evaluate several channel regions to effectively detect a driver's fatigue state based on electroencephalogram (EEG) records. First, we fused multiple entropies, i.e., spectral entropy, approximate entropy, sample entropy and fuzzy entropy, as features compared with autoregressive (AR) modeling by four classifiers. Second, we captured four significant channel regions according to weight-based electrodes via a simplified channel selection method. Finally, the evaluation model for detecting driver fatigue was established with four classifiers based on the EEG data from four channel regions. Twelve healthy subjects performed continuous simulated driving for 1-2 hours with EEG monitoring on a static simulator. The leave-one-out cross-validation approach obtained an accuracy of 98.3%, a sensitivity of 98.3% and a specificity of 98.2%. The experimental results verified the effectiveness of the proposed method, indicating that the multiple entropy fusion features are significant factors for inferring the fatigue state of a driver.
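One of the fused features, sample entropy, can be sketched in pure Python. This is a standard textbook formulation, not the authors' code; the defaults m=2 and r=0.2 times the signal's standard deviation are common conventions, and the paper fuses this with spectral, approximate and fuzzy entropy.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    # SampEn = -ln(A / B): B counts pairs of length-m templates whose
    # Chebyshev distance is within tolerance, A the same for m + 1.
    n = len(x)
    mean = sum(x) / n
    tol = r * (sum((v - mean) ** 2 for v in x) / n) ** 0.5

    def matches(length):
        c = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= tol:
                    c += 1
        return c

    a, b = matches(m + 1), matches(m)
    return -math.log(a / b) if a and b else float("inf")
```

A perfectly regular signal scores near zero while irregular signals score higher, which is why reduced EEG entropy can flag a drowsy, less complex brain state.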
Evaluation of cotton-fabric bleaching using hydrogen peroxide and Blue LED
NASA Astrophysics Data System (ADS)
de Oliveira, Bruno P.; Moriyama, Lilian T.; Bagnato, Vanderlei S.
2015-06-01
Raw cotton production requires multiple steps, one of which is the removal of impurities acquired during previous processes. This procedure, widely used by textile industries around the world, is called bleaching. Raw cotton is composed of cellulosic and non-cellulosic materials such as waxes, pectins and oils, which are responsible for its characteristic yellowish color. The bleaching process aims to reduce the concentration of non-cellulosic materials in the fabric, increasing its degree of whiteness. The most common bleaching method uses a bath in an alkaline solution of hydrogen peroxide, stabilizers and buffers at high temperature. In the present study we evaluated the possibility of using blue illumination for the bleaching process. We used blue LEDs (450 nm) to illuminate an acidic hydrogen peroxide solution at room temperature. The samples treated by this method were compared with the conventional bleaching process through colorimetric analysis and a multiple-comparison visual inspection by volunteers. The samples were also subjected to a tensile test to verify the integrity of the cloth after bleaching. The fabric visual inspection and colorimetric analysis showed a small advantage for the sample treated by the standard method. The tensile test showed an increase in the yield strength of the cloth after blue-light bleaching. The presented method has great application potential because its results are similar to those of the standard method, at relatively low cost and with reduced production of chemical waste.
1972-01-01
The membrane methods described in Report 71 on the bacteriological examination of water supplies (Report, 1969) for the enumeration of coliform organisms and Escherichia coli in waters, together with a glutamate membrane method, were compared with the glutamate multiple tube method recommended in Report 71 and an incubation procedure similar to that used for membranes with the first 4 hr. at 30° C., and with MacConkey broth in multiple tubes. Although there were some differences between individual laboratories, the combined results from all participating laboratories showed that standard and extended membrane methods gave significantly higher results than the glutamate tube method for coliform organisms in both chlorinated and unchlorinated waters, but significantly lower results for Esch. coli with chlorinated waters and equivocal results with unchlorinated waters. Extended membranes gave higher results than glutamate tubes in larger proportions of samples than did standard membranes. Although transport membranes did not do so well as standard membrane methods, the results were usually in agreement with glutamate tubes except for Esch. coli in chlorinated waters. The glutamate membranes were unsatisfactory. Preliminary incubation of glutamate at 30° C. made little difference to the results. PMID:4567313
Cleveland, Danielle; Brumbaugh, William G.; MacDonald, Donald D.
2017-01-01
Evaluations of sediment quality conditions are commonly conducted using whole-sediment chemistry analyses but can be enhanced by evaluating multiple lines of evidence, including measures of the bioavailable forms of contaminants. In particular, porewater chemistry data provide information that is directly relevant for interpreting sediment toxicity data. Various methods for sampling porewater for trace metals and dissolved organic carbon (DOC), which is an important moderator of metal bioavailability, have been employed. The present study compares the peeper, push point, centrifugation, and diffusive gradients in thin films (DGT) methods for the quantification of 6 metals and DOC. The methods were evaluated at low and high concentrations of metals in 3 sediments having different concentrations of total organic carbon and acid volatile sulfide and different particle-size distributions. At low metal concentrations, centrifugation and push point sampling resulted in up to 100 times higher concentrations of metals and DOC in porewater compared with peepers and DGTs. At elevated metal levels, the measured concentrations were in better agreement among the 4 sampling techniques. The results indicate that there can be marked differences among operationally different porewater sampling methods, and it is unclear if there is a definitive best method for sampling metals and DOC in porewater.
Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S
2015-02-01
With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
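The analytic frequency-spectrum results the method builds on generalize a classical coalescent identity: for a constant-size neutral population, the expected number of segregating sites at which i of n sampled chromosomes carry the derived allele is θ/i. A minimal sketch of that baseline (the paper itself handles piecewise-exponential histories, which distort this shape):

```python
def expected_sfs(theta, n):
    # Expected unfolded site-frequency spectrum under a constant-size
    # neutral coalescent: E[xi_i] = theta / i for i = 1 .. n - 1, where
    # theta is the population-scaled mutation rate.
    return [theta / i for i in range(1, n)]
```

Demographic inference then asks which history makes the expected spectrum match the observed one; recent explosive growth, for example, inflates the singleton class (i = 1) relative to this baseline.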
Lange, Vinzenz; Malmström, Johan A; Didion, John; King, Nichole L; Johansson, Björn P; Schäfer, Juliane; Rameseder, Jonathan; Wong, Chee-Hong; Deutsch, Eric W; Brusniak, Mi-Youn; Bühlmann, Peter; Björck, Lars; Domon, Bruno; Aebersold, Ruedi
2008-08-01
In many studies, particularly in the field of systems biology, it is essential that identical protein sets are precisely quantified in multiple samples such as those representing differentially perturbed cell states. The high degree of reproducibility required for such experiments has not been achieved by classical mass spectrometry-based proteomics methods. In this study we describe the implementation of a targeted quantitative approach by which predetermined protein sets are first identified and subsequently quantified at high sensitivity reliably in multiple samples. This approach consists of three steps. First, the proteome is extensively mapped out by multidimensional fractionation and tandem mass spectrometry, and the data generated are assembled in the PeptideAtlas database. Second, based on this proteome map, peptides uniquely identifying the proteins of interest, proteotypic peptides, are selected, and multiple reaction monitoring (MRM) transitions are established and validated by MS2 spectrum acquisition. This process of peptide selection, transition selection, and validation is supported by a suite of software tools, TIQAM (Targeted Identification for Quantitative Analysis by MRM), described in this study. Third, the selected target protein set is quantified in multiple samples by MRM. Applying this approach we were able to reliably quantify low abundance virulence factors from cultures of the human pathogen Streptococcus pyogenes exposed to increasing amounts of plasma. The resulting quantitative protein patterns enabled us to clearly define the subset of virulence proteins that is regulated upon plasma exposure.
Li, Han-Qing; Mei, Jian-Gang; Cao, Hong-Qin; Shao, Liang-Jing; Zhai, Yong-Ping
2017-12-01
To establish a multiple myeloma specimen bank for molecular biology research and to explore methods of specimen collection, transportation, storage, quality control and specimen-bank management. Bone marrow and blood samples were collected from multiple myeloma patients, and plasma cell sorting was performed after separation of mononuclear cells from the bone marrow specimens. The plasma cells were divided into two parts: one was mixed with an appropriate amount of TRIzol and kept in a -80 °C freezer for subsequent RNA extraction, and the other was mixed with an appropriate amount of calf-serum cell-freezing medium and kept in a -80 °C freezer for subsequent DNA extraction, each part being numbered individually. Serum and plasma were separated from peripheral blood and stored at -80 °C after registration. Meanwhile, a myeloma specimen information management system was established, managed and maintained by designated staff, and continuously modified and improved during use to facilitate the rapid collection, management and querying of samples and clinical data. A total of 244 plasma cell, 564 serum and 1005 plasma samples were collected, and clinical characteristics were documented. A multiple myeloma specimen bank has been established that can provide quality samples and related clinical information for molecular biology research on multiple myeloma.
Systems and methods for integrating ion mobility and ion trap mass spectrometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Yehia M.; Garimella, Sandilya; Prost, Spencer A.
Described herein are examples of systems and methods for integrating IMS and MS systems. In certain examples, systems and methods for decoding double multiplexed data are described. The systems and methods can also perform multiple refining procedures in order to minimize the demultiplexing artifacts. The systems and methods can be used, for example, for the analysis of proteomic and petroleum samples, where the integration of IMS and high mass resolution are used for accurate assignment of molecular formulae.
An Accurate Framework for Arbitrary View Pedestrian Detection in Images
NASA Astrophysics Data System (ADS)
Fan, Y.; Wen, G.; Qiu, S.
2018-01-01
We consider the problem of detecting pedestrians in images collected from various viewpoints. This paper presents a novel framework called locality-constrained affine subspace coding (LASC). First, the positive training samples are clustered into groups of similar viewpoint. Principal component analysis (PCA) is then used to obtain the shared features of each viewpoint. Finally, samples that can be reconstructed by linear approximation from their top-k nearest shared features with a small error are regarded as correct detections. No negative samples are required by our method. Histograms of oriented gradients (HOG) are used as the feature descriptors, and a sliding-window scheme is adopted to detect humans in images. The proposed method exploits the sparsity of the intrinsic information and the correlations among multiple-view samples. Experimental results on the INRIA and SDL human datasets show that the proposed method outperforms state-of-the-art methods in both effectiveness and efficiency.
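The reconstruct-from-nearest-subspace test can be sketched with one principal direction per viewpoint cluster. This is a deliberate simplification of LASC (one component instead of the top-k shared features, and power iteration instead of a full PCA), meant only to show the accept-if-reconstruction-error-is-small idea:

```python
def principal_direction(data, iters=100):
    # Leading principal direction of a point cloud, found by power
    # iteration on the (unnormalized) covariance matrix.
    n, dim = len(data), len(data[0])
    mean = [sum(col) / n for col in zip(*data)]
    centered = [[v - m for v, m in zip(row, mean)] for row in data]
    v = [1.0] * dim
    for _ in range(iters):
        w = [0.0] * dim
        for row in centered:
            proj = sum(r * vi for r, vi in zip(row, v))
            for j in range(dim):
                w[j] += proj * row[j]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return mean, v

def reconstruction_error(sample, mean, v):
    # Distance from the sample to the affine subspace mean + span(v);
    # a window is accepted when this error is small for some viewpoint.
    d = [s - m for s, m in zip(sample, mean)]
    proj = sum(di * vi for di, vi in zip(d, v))
    return sum((di - proj * vi) ** 2 for di, vi in zip(d, v)) ** 0.5
```

In the full method each viewpoint cluster keeps several shared directions, and a HOG window is scored against the nearest clusters only, which is where the locality constraint comes in.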
Methods for Multiplex Template Sampling in Digital PCR Assays
Petriv, Oleh I.; Heyries, Kevin A.; VanInsberghe, Michael; Walker, David; Hansen, Carl L.
2014-01-01
The efficient use of digital PCR (dPCR) for precision copy number analysis requires high concentrations of target molecules that may be difficult or impossible to obtain from clinical samples. To solve this problem we present a strategy, called Multiplex Template Sampling (MTS), that effectively increases template concentrations by detecting multiple regions of fragmented target molecules. Three alternative assay approaches are presented for implementing MTS analysis of chromosome 21, providing a 10-fold concentration enhancement while preserving assay precision. PMID:24854517
Quantitative and fingerprint analyses of Chinese sweet tea plant (Rubus Suavissimus S. Lee)
Chou, Guixin; Xu, Shun-Jun; Liu, Dong; Koh, Gar Yee; Zhang, Jian; Liu, Zhijun
2009-01-01
Quality of botanical food is increasingly assessed by the content of multiple bioactive compounds. In this study we report, for the first time, an HPLC fingerprinting method for the quality evaluation of Rubus suavissimus leaves possessing multiple bioactivities. Five constituents, gallic acid, rutin, ellagic acid, rubusoside, and steviol monoside were quantified and used in developing qualitative chromatographic fingerprints. The limits of detection and quantification ranged from 0.29 μg/mL to 37.86 μg/mL. The relative standard deviations (RSDs) of intra- and inter-day precisions were no more than 3.14% and 3.01%, respectively. The average recoveries were between 93.1% and 97.5%. The developed method was validated in analyzing fourteen leaf samples with satisfactory results. The contents of the five marker compounds accounted for an average of about 6% w/w with a variability of 16% among the fourteen samples collected from a single site and year. Gallic acid was the least variable and steviol monoside the most variable compound among the fourteen leaf samples. The characteristic compound rubusoside, which is responsible for the sweet taste, accounted for 5% of leaf weight. The validated method can now be used to quantitatively and qualitatively assess the quality of Rubus suavissimus leaves as a traditional beverage or potential medicine. PMID:19138116
Quantification of multiple elements in dried blood spot samples.
Pedersen, Lise; Andersen-Ranberg, Karen; Hollergaard, Mads; Nybo, Mads
2017-08-01
Dried blood spots (DBS) is a unique matrix that offers advantages compared to conventional blood collection making it increasingly popular in large population studies. We here describe development and validation of a method to determine multiple elements in DBS. Elements were extracted from punches and analyzed using inductively coupled plasma-mass spectrometry (ICP-MS). The method was evaluated with quality controls with defined element concentration and blood spiked with elements to assess accuracy and imprecision. DBS element concentrations were compared with concentrations in venous blood. Samples with different hematocrit were spotted onto filter paper to assess hematocrit effect. The established method was precise and accurate for measurement of most elements in DBS. There was a significant but relatively weak correlation between measurement of the elements Mg, K, Fe, Cu, Zn, As and Se in DBS and venous whole blood. Hematocrit influenced the DBS element measurement, especially for K, Fe and Zn. Trace elements can be measured with high accuracy and low imprecision in DBS, but contribution of signal from the filter paper influences measurement of some elements present at low concentrations. Simultaneous measurement of K and Fe in DBS extracts may be used to estimate sample hematocrit. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E
2012-03-01
In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.
Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao
2017-04-01
Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
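The acquisition step itself is just a Boolean matrix-vector product. A sketch of the random Boolean baseline that the paper's data-driven matrix improves on (function names are illustrative):

```python
import random

def boolean_sampling_matrix(m, n, density=0.5, seed=0):
    # Random Boolean (0/1) sampling matrix: m compressive measurements
    # of an n-sample signal. This is the baseline embedding; the paper
    # optimizes the 0/1 pattern from data instead of drawing it at random.
    rng = random.Random(seed)
    return [[1 if rng.random() < density else 0 for _ in range(n)]
            for _ in range(m)]

def measure(phi, x):
    # y = Phi x. With Boolean entries each measurement is a plain sum of
    # the selected samples, so the front end needs adders rather than
    # multipliers, which is where the energy saving over real-valued
    # embedding matrices comes from.
    return [sum(xj for pj, xj in zip(row, x) if pj) for row in phi]
```

Reconstruction of x from the shorter y then relies on signal sparsity, exactly as with real-valued sampling matrices.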
Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.
2011-01-01
Few studies link habitat to grizzly bear Ursus arctos abundance and these have not accounted for the variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; (5) weights to identify most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and with more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance; (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.
Fischer, Jesse R.; Quist, Michael C.
2014-01-01
All freshwater fish sampling methods are biased toward particular species, sizes, and sexes and are further influenced by season, habitat, and fish behavior changes over time. However, little is known about gear-specific biases for many common fish species because few multiple-gear comparison studies exist that have incorporated seasonal dynamics. We sampled six lakes and impoundments representing a diversity of trophic and physical conditions in Iowa, USA, using multiple gear types (i.e., standard modified fyke net, mini-modified fyke net, sinking experimental gill net, bag seine, benthic trawl, boat-mounted electrofisher used diurnally and nocturnally) to determine the influence of sampling methodology and season on fisheries assessments. Specifically, we describe the influence of season on catch per unit effort, proportional size distribution, and the number of samples required to obtain 125 stock-length individuals for 12 species of recreational and ecological importance. Mean catch per unit effort generally peaked in the spring and fall as a result of increased sampling effectiveness in shallow areas and seasonal changes in habitat use (e.g., movement offshore during summer). Mean proportional size distribution decreased from spring to fall for white bass Morone chrysops, largemouth bass Micropterus salmoides, bluegill Lepomis macrochirus, and black crappie Pomoxis nigromaculatus, suggesting selectivity for large and presumably sexually mature individuals in the spring and summer. Overall, the mean number of samples required to sample 125 stock-length individuals was minimized in the fall with sinking experimental gill nets, a boat-mounted electrofisher used at night, and standard modified nets for 11 of the 12 species evaluated. Our results provide fisheries scientists with relative comparisons between several recommended standard sampling methods and illustrate the effects of seasonal variation on estimates of population indices that will be critical to the future development of standardized sampling methods for freshwater fish in lentic ecosystems.
Experiences of patients with multiple sclerosis from group counseling.
Mazaheri, Mina; Fanian, Nasrin; Zargham-Boroujeni, Ali
2011-01-01
Group counseling is one of the most important methods in the somatic and psychological rehabilitation of multiple sclerosis (M.S.) patients. Knowing these patients' experiences, feelings, beliefs and emotions arising from learning in a group is necessary to indicate the importance of group discussion for the patients' quality of life. This study was conducted to explore the experiences of M.S. patients with group training. This was a qualitative study using a phenomenological method. The samples were selected using purposeful sampling: ten patients from the M.S. society who had completed group training were included in the study. The group training was conducted in seven weekly, voluntary sessions. The participants were interviewed using in-depth interviews. Each interview lasted 30-50 minutes and was recorded digitally and transferred to a compact disc for transcription and analysis. The data were analyzed using the 7-step Colaizzi method and were transformed into 158 codes, 12 sub-concepts and 4 main concepts: emotional consequences, communication, quality of life and needs. M.S. can lead to multiple problems in patients, including somatic, behavioral, emotional and social disorders. Group psychotherapy is one of the methods that can reduce these problems and improve patients' rehabilitation. Group discussion helps patients overcome adverse feelings, behaviors and thoughts and guides them toward a meaningful life. It can also improve the patients' quality of life and mental health.
Inglis, Jeremy D.; Maassen, Joel; Kara, Azim; ...
2017-04-28
This study presents a total evaporation method for the analysis of sub-picogram quantities of Pu, utilizing an array of multiple ion counters. Data from three standards are presented to assess the utility of the technique. An external precision of 1.5% RSD (2σ) was achieved on aliquots approaching 100 fg for the minor 240Pu isotope. Accurate analysis of <1 femtogram of 240Pu is achievable, with an external reproducibility of better than 10% RSD (2σ). Finally, this new technique represents a significant advance in the total evaporation method and will allow routine measurement of femtogram-sized Pu samples by thermal ionization mass spectrometry.
NASA Technical Reports Server (NTRS)
Bunting, Charles F.; Yu, Shih-Pin
2006-01-01
This paper emphasizes the application of numerical methods to explore the ideas related to shielding effectiveness from a statistical view. An empty rectangular box is examined using a hybrid modal/moment method. The basic computational method is presented followed by the results for single- and multiple observation points within the over-moded empty structure. The statistics of the field are obtained by using frequency stirring, borrowed from the ideas connected with reverberation chamber techniques, and extends the ideas of shielding effectiveness well into the multiple resonance regions. The study presented in this paper will address the average shielding effectiveness over a broad spatial sample within the enclosure as the frequency is varied.
Practicable group testing method to evaluate weight/weight GMO content in maize grains.
Mano, Junichi; Yanaka, Yuka; Ikezu, Yoko; Onishi, Mari; Futo, Satoshi; Minegishi, Yasutaka; Ninomiya, Kenji; Yotsuyanagi, Yuichi; Spiegelhalter, Frank; Akiyama, Hiroshi; Teshima, Reiko; Hino, Akihiro; Naito, Shigehiro; Koiwa, Tomohiro; Takabatake, Reona; Furui, Satoshi; Kitta, Kazumi
2011-07-13
Because of the increasing use of maize hybrids with genetically modified (GM) stacked events, the established and commonly used bulk sample methods for PCR quantification of GM maize in non-GM maize are prone to overestimate the GM organism (GMO) content, compared to the actual weight/weight percentage of GM maize in the grain sample. As an alternative method, we designed and assessed a group testing strategy in which the GMO content is statistically evaluated based on qualitative analyses of multiple small pools, consisting of 20 maize kernels each. This approach enables the GMO content evaluation on a weight/weight basis, irrespective of the presence of stacked-event kernels. To enhance the method's user-friendliness in routine application, we devised an easy-to-use PCR-based qualitative analytical method comprising a sample preparation step in which 20 maize kernels are ground in a lysis buffer and a subsequent PCR assay in which the lysate is directly used as a DNA template. This method was validated in a multilaboratory collaborative trial.
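The weight/weight evaluation rests on a standard group-testing identity: a 20-kernel pool tests negative only if every kernel in it is GM-free. A sketch of the point estimate that follows (an illustration of the principle; the validated protocol evaluates the content with formal statistical acceptance criteria rather than this bare maximum-likelihood estimate):

```python
def gmo_content_estimate(positive_pools, total_pools, pool_size=20):
    # If each kernel is GM with probability p, then
    #   P(pool negative) = (1 - p) ** pool_size,
    # so equating that to the observed negative-pool fraction and
    # solving for p gives the maximum-likelihood GM fraction.
    if positive_pools == total_pools:
        return 1.0  # every pool positive: the estimate saturates
    neg_frac = 1 - positive_pools / total_pools
    return 1 - neg_frac ** (1 / pool_size)
```

Because each pool is scored only positive/negative, a stacked-event kernel counts once no matter how many GM events it carries, which is exactly how the approach avoids the overestimation that bulk PCR quantification suffers from.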
Bingley, Polly J; Rafkin, Lisa E; Matheson, Della; Steck, Andrea K; Yu, Liping; Henderson, Courtney; Beam, Craig A; Boulware, David C
2015-12-01
Islet autoantibody testing provides the basis for assessment of risk of progression to type 1 diabetes. We set out to determine the feasibility and acceptability of dried capillary blood spot-based screening to identify islet autoantibody-positive relatives potentially eligible for inclusion in prevention trials. Dried blood spot (DBS) and venous samples were collected from 229 relatives participating in the TrialNet Pathway to Prevention Study. Both samples were tested for glutamic acid decarboxylase, islet antigen 2, and zinc transporter 8 autoantibodies, and venous samples were additionally tested for insulin autoantibodies and islet cell antibodies. We defined multiple autoantibody positive as two or more autoantibodies in venous serum and DBS screen positive if one or more autoantibodies were detected. Participant questionnaires compared the sample collection methods. Of 44 relatives who were multiple autoantibody positive in venous samples, 42 (95.5%) were DBS screen positive, and DBS accurately detected 145 of 147 autoantibody-negative relatives (98.6%). Capillary blood sampling was perceived as more painful than venous blood draw, but 60% of participants would prefer initial screening using home fingerstick with clinic visits only required if autoantibodies were found. Capillary blood sampling could facilitate screening for type 1 diabetes prevention studies.
The influence of serial fecal sampling on the diagnosis of giardiasis in humans, dogs, and cats.
Uchôa, Flávia Fernandes de Mendonça; Sudré, Adriana Pittella; Macieira, Daniel de Barros; Almosny, Nádia Regina Pereira
2017-08-24
Giardia infection is a common clinical problem in humans and pets. The diagnosis of giardiasis is challenging as hosts intermittently excrete protozoan cysts in their feces. In the present study, we comparatively evaluated two methods of serial fecal sampling in humans, dogs, and cats from Rio de Janeiro, Brazil. The Faust et al. technique was used to examine fecal specimens collected in triplicate from 133 patients (52 humans, 60 dogs, and 21 cats). Specimens from 74 patients were received from the group assigned to carry out sampling on consecutive days (34 humans, 35 dogs, and 5 cats), and specimens from 59 patients were received from the group assigned to carry out sampling on non-consecutive days (18 humans, 25 dogs, and 16 cats). G. duodenalis cysts were found in the stools of 30 individuals. Multiple stool sampling resulted in an increase in the number of samples that were positive for Giardia in both groups. The authors therefore conclude that multiple stool sampling increases the sensitivity of the Faust et al. technique for detecting G. duodenalis cysts in samples from humans, dogs, and cats.
Multivariate analysis: greater insights into complex systems
USDA-ARS?s Scientific Manuscript database
Many agronomic researchers measure and collect multiple response variables in an effort to understand the more complex nature of the system being studied. Multivariate (MV) statistical methods encompass the simultaneous analysis of all random variables (RV) measured on each experimental or sampling ...
Gray, B.R.; Haro, R.J.; Rogala, J.T.; Sauer, J.S.
2005-01-01
1. Macroinvertebrate count data often exhibit nested or hierarchical structure. Examples include multiple measurements along each of a set of streams, and multiple synoptic measurements from each of a set of ponds. With data exhibiting hierarchical structure, outcomes at both sampling (e.g. within-stream) and aggregated (e.g. stream) scales are often of interest. Unfortunately, methods for modelling hierarchical count data have received little attention in the ecological literature. 2. We demonstrate the use of hierarchical count models using fingernail clam (Family: Sphaeriidae) count data and habitat predictors derived from sampling and aggregated spatial scales. The sampling scale corresponded to that of a standard Ponar grab (0.052 m²) and the aggregated scale to impounded and backwater regions within 38-197 km reaches of the Upper Mississippi River. Impounded and backwater regions were resampled annually for 10 years. Consequently, measurements on clams were nested within years. Counts were treated as negative binomial random variates, and means from each resampling event as random departures from the impounded and backwater region grand means. 3. Clam models were improved by the addition of covariates that varied at both the sampling and regional scales. Substrate composition varied at the sampling scale and was associated with model improvements, and reductions (for a given mean) in variance at the sampling scale. Inorganic suspended solids (ISS) levels, measured in the summer preceding sampling, also yielded model improvements and were associated with reductions in variances at the regional rather than sampling scales. ISS levels were negatively associated with mean clam counts. 4. Hierarchical models allow hierarchically structured data to be modelled without ignoring information specific to levels of the hierarchy. In addition, information at each hierarchical level may be modelled as functions of covariates that themselves vary by and within levels.
As a result, hierarchical models provide researchers and resource managers with a method for modelling hierarchical data that explicitly recognises both the sampling design and the information contained in the corresponding data.
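The model structure described above (grab-level negative binomial counts, with the mean of each resampling event a random departure from a regional grand mean) can be sketched as a simulation. Parameter values here are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_counts(log_region_mean, n_years=10, n_grabs=30,
                    year_sd=0.4, nb_size=1.5):
    """Simulate hierarchical negative-binomial counts: grab-level
    counts nested within years, years nested within a region.

    Each year's mean is a lognormal random departure from the region
    grand mean; counts are gamma-Poisson (negative binomial) draws.
    """
    counts = []
    for _ in range(n_years):
        year_effect = rng.normal(0.0, year_sd)      # random year departure
        mu = np.exp(log_region_mean + year_effect)  # yearly mean count
        # negative binomial via gamma-Poisson mixture, dispersion nb_size
        lam = rng.gamma(nb_size, mu / nb_size, size=n_grabs)
        counts.append(rng.poisson(lam))
    return np.concatenate(counts)

y = simulate_counts(np.log(5.0))
```

Fitting such data while ignoring the year level would understate uncertainty in the regional mean; hierarchical models keep both variance components explicit.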
Sedimentation in mountain streams: A review of methods of measurement
Hedrick, Lara B.; Anderson, James T.; Welsh, Stuart A.; Lin, Lian-Shin
2013-01-01
The goal of this review paper is to provide a list of methods and devices used to measure sediment accumulation in wadeable streams dominated by cobble and gravel substrate. Quantitative measures of stream sedimentation are useful to monitor and study anthropogenic impacts on stream biota, and stream sedimentation is measurable with multiple sampling methods. Evaluation of sedimentation can be made by measuring the concentration of suspended sediment, or turbidity, and by determining the amount of deposited sediment, or sedimentation on the streambed. Measurements of deposited sediments are more time consuming and labor intensive than measurements of suspended sediments. Traditional techniques for characterizing sediment composition in streams include core sampling, the shovel method, visual estimation along transects, and sediment traps. This paper provides a comprehensive review of methodology, devices that can be used, and techniques for processing and analyzing samples collected to aid researchers in choosing study design and equipment.
Exposure assessment for endocrine disruptors: some considerations in the design of studies.
Rice, Carol; Birnbaum, Linda S; Cogliano, James; Mahaffey, Kathryn; Needham, Larry; Rogan, Walter J; vom Saal, Frederick S
2003-01-01
In studies designed to evaluate exposure-response relationships in children's development from conception through puberty, multiple factors that affect the generation of meaningful exposure metrics must be considered. These factors include multiple routes of exposure; the timing, frequency, and duration of exposure; need for qualitative and quantitative data; sample collection and storage protocols; and the selection and documentation of analytic methods. The methods for exposure data collection and analysis must be sufficiently robust to accommodate the a priori hypotheses to be tested, as well as hypotheses generated from the data. A number of issues that must be considered in study design are summarized here. PMID:14527851
Multilocus lod scores in large pedigrees: combination of exact and approximate calculations.
Tong, Liping; Thompson, Elizabeth
2008-01-01
To detect the positions of disease loci, lod scores are calculated at multiple chromosomal positions given trait and marker data on members of pedigrees. Exact lod score calculations are often impossible when the size of the pedigree and the number of markers are both large. In this case, a Markov Chain Monte Carlo (MCMC) approach provides an approximation. However, to provide accurate results, mixing performance is always a key issue in these MCMC methods. In this paper, we propose two methods to improve MCMC sampling and hence obtain more accurate lod score estimates in shorter computation time. The first improvement generalizes the block-Gibbs meiosis (M) sampler to multiple meiosis (MM) sampler in which multiple meioses are updated jointly, across all loci. The second one divides the computations on a large pedigree into several parts by conditioning on the haplotypes of some 'key' individuals. We perform exact calculations for the descendant parts where more data are often available, and combine this information with sampling of the hidden variables in the ancestral parts. Our approaches are expected to be most useful for data on a large pedigree with a lot of missing data. (c) 2007 S. Karger AG, Basel
Multilocus Lod Scores in Large Pedigrees: Combination of Exact and Approximate Calculations
Tong, Liping; Thompson, Elizabeth
2007-01-01
To detect the positions of disease loci, lod scores are calculated at multiple chromosomal positions given trait and marker data on members of pedigrees. Exact lod score calculations are often impossible when the size of the pedigree and the number of markers are both large. In this case, a Markov Chain Monte Carlo (MCMC) approach provides an approximation. However, to provide accurate results, mixing performance is always a key issue in these MCMC methods. In this paper, we propose two methods to improve MCMC sampling and hence obtain more accurate lod score estimates in shorter computation time. The first improvement generalizes the block-Gibbs meiosis (M) sampler to multiple meiosis (MM) sampler in which multiple meioses are updated jointly, across all loci. The second one divides the computations on a large pedigree into several parts by conditioning on the haplotypes of some ‘key’ individuals. We perform exact calculations for the descendant parts where more data are often available, and combine this information with sampling of the hidden variables in the ancestral parts. Our approaches are expected to be most useful for data on a large pedigree with a lot of missing data. PMID:17934317
Protein crystallography prescreen kit
Segelke, Brent W.; Krupka, Heike I.; Rupp, Bernhard
2007-10-02
A kit for prescreening protein concentration for crystallization includes a multiplicity of vials, a multiplicity of pre-selected reagents, and a multiplicity of sample plates. The reagents and a corresponding multiplicity of samples of the protein in solutions of varying concentrations are placed on sample plates. The sample plates containing the reagents and samples are incubated. After incubation the sample plates are examined to determine which of the sample concentrations are too low and which the sample concentrations are too high. The sample concentrations that are optimal for protein crystallization are selected and used.
Protein crystallography prescreen kit
Segelke, Brent W.; Krupka, Heike I.; Rupp, Bernhard
2005-07-12
A kit for prescreening protein concentration for crystallization includes a multiplicity of vials, a multiplicity of pre-selected reagents, and a multiplicity of sample plates. The reagents and a corresponding multiplicity of samples of the protein in solutions of varying concentrations are placed on sample plates. The sample plates containing the reagents and samples are incubated. After incubation the sample plates are examined to determine which of the sample concentrations are too low and which the sample concentrations are too high. The sample concentrations that are optimal for protein crystallization are selected and used.
Mouradi, Rand; Desai, Nisarg; Erdemir, Ahmet; Agarwal, Ashok
2012-01-01
Recent studies have shown that exposing human semen samples to cell phone radiation leads to a significant decline in sperm parameters. In daily living, a cell phone is usually kept in proximity to the groin, such as in a trouser pocket, separated from the testes by multiple layers of tissue. The aim of this study was to calculate the distance between a cell phone and a semen sample needed to set up an in vitro experiment that mimics real-life conditions (cell phone in a trouser pocket separated by multiple tissue layers). For this purpose, a computational model of the scrotal tissues was designed by considering these separating layers, and the results were used in a series of simulations using the Finite Difference Time Domain (FDTD) method. These results showed that, to provide an effect equivalent to that of the multiple tissue layers, the distance between a cell phone and a semen sample should be 0.8 cm to 1.8 cm greater than the anticipated distance between a cell phone and the testes.
Analysis of multiple soybean phytonutrients by near-infrared reflectance spectroscopy.
Zhang, Gaoyang; Li, Penghui; Zhang, Wenfei; Zhao, Jian
2017-05-01
Improvement of the nutritional quality of soybean is usually facilitated by a vast range of soybean germplasm with enough information about their multiple phytonutrients. In order to acquire this essential information from a huge number of soybean samples, a rapid analytic method is urgently required. Here, a nondestructive near-infrared reflectance spectroscopy (NIRS) method was developed for rapid and accurate measurement of 25 nutritional components in soybean simultaneously, including the fatty acids palmitic acid, stearic acid, oleic acid, linoleic acid, and linolenic acid, vitamin E (VE), α-VE, γ-VE, δ-VE, saponins, isoflavonoids, and flavonoids. Modified partial least squares regression and first, second, third, and fourth derivative transformations were applied for model development. The 1 minus variance ratio (1-VR) values of the optimal models ranged from a high of 0.95 to a low of 0.64. The predicted values of phytonutrients in soybean using NIRS technology are comparable to those obtained using traditional spectral or chemical methods. A robust NIRS model can be adopted as a reliable method to evaluate complex plant constituents when screening large-scale samples of soybean germplasm resources or genetic populations for improvement of nutritional qualities.
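The 1-VR statistic can be computed as one minus the ratio of residual variance to reference variance, essentially a cross-validation R². This is the common definition in NIRS chemometrics software, assumed here rather than taken from the paper.

```python
import numpy as np

def one_minus_vr(reference, predicted):
    """1-VR statistic used to judge NIRS calibrations: one minus the
    ratio of residual variance to reference-value variance."""
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residual_var = np.var(reference - predicted)
    return 1.0 - residual_var / np.var(reference)

# Perfect predictions give 1-VR = 1; predicting the mean gives 0.
ref = np.array([1.0, 2.0, 3.0, 4.0])
print(one_minus_vr(ref, ref))  # 1.0
```

On this scale, the reported 0.95 indicates a calibration explaining most of the reference variance, while 0.64 is marginal for quantitative screening.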
Diverse expected gradient active learning for relative attributes.
You, Xinge; Wang, Ruxin; Tao, Dacheng
2014-07-01
The use of relative attributes for semantic understanding of images and videos is a promising way to improve communication between humans and machines. However, it is extremely labor- and time-consuming to define multiple attributes for each instance in large amount of data. One option is to incorporate active learning, so that the informative samples can be actively discovered and then labeled. However, most existing active-learning methods select samples one at a time (serial mode), and may therefore lose efficiency when learning multiple attributes. In this paper, we propose a batch-mode active-learning method, called diverse expected gradient active learning. This method integrates an informativeness analysis and a diversity analysis to form a diverse batch of queries. Specifically, the informativeness analysis employs the expected pairwise gradient length as a measure of informativeness, while the diversity analysis forces a constraint on the proposed diverse gradient angle. Since simultaneous optimization of these two parts is intractable, we utilize a two-step procedure to obtain the diverse batch of queries. A heuristic method is also introduced to suppress imbalanced multiclass distributions. Empirical evaluations of three different databases demonstrate the effectiveness and efficiency of the proposed approach.
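A minimal sketch of the batch idea follows: greedy selection by expected gradient length (informativeness), admitting a candidate only if its angle to every gradient already in the batch exceeds a threshold (diversity). This is a simplified stand-in for the paper's two-step procedure, not the authors' implementation.

```python
import numpy as np

def diverse_batch(gradients, batch_size, min_angle_deg=45.0):
    """Greedily build a batch of query indices: rank candidates by
    gradient norm, then keep one only if its angle to each already
    selected gradient exceeds min_angle_deg. Assumes nonzero gradients.
    """
    g = np.asarray(gradients, dtype=float)
    norms = np.linalg.norm(g, axis=1)
    order = np.argsort(-norms)                # most informative first
    cos_max = np.cos(np.deg2rad(min_angle_deg))
    chosen = []
    for i in order:
        unit = g[i] / norms[i]
        if all(abs(unit @ (g[j] / norms[j])) <= cos_max for j in chosen):
            chosen.append(int(i))
        if len(chosen) == batch_size:
            break
    return chosen

# Two near-parallel informative gradients plus an orthogonal one:
grads = [[3.0, 0.0], [2.9, 0.1], [0.0, 2.0], [1.0, 1.0]]
batch = diverse_batch(grads, 2)
```

The near-duplicate of the top candidate is skipped in favor of the orthogonal direction, which is the point of the diversity constraint.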
Diverse Expected Gradient Active Learning for Relative Attributes.
You, Xinge; Wang, Ruxin; Tao, Dacheng
2014-06-02
The use of relative attributes for semantic understanding of images and videos is a promising way to improve communication between humans and machines. However, it is extremely labor- and time-consuming to define multiple attributes for each instance in large amount of data. One option is to incorporate active learning, so that the informative samples can be actively discovered and then labeled. However, most existing active-learning methods select samples one at a time (serial mode), and may therefore lose efficiency when learning multiple attributes. In this paper, we propose a batch-mode active-learning method, called Diverse Expected Gradient Active Learning (DEGAL). This method integrates an informativeness analysis and a diversity analysis to form a diverse batch of queries. Specifically, the informativeness analysis employs the expected pairwise gradient length as a measure of informativeness, while the diversity analysis forces a constraint on the proposed diverse gradient angle. Since simultaneous optimization of these two parts is intractable, we utilize a two-step procedure to obtain the diverse batch of queries. A heuristic method is also introduced to suppress imbalanced multi-class distributions. Empirical evaluations of three different databases demonstrate the effectiveness and efficiency of the proposed approach.
Baker, Laurie L; Mills Flemming, Joanna E; Jonsen, Ian D; Lidgard, Damian C; Iverson, Sara J; Bowen, W Don
2015-01-01
Paired with satellite location telemetry, animal-borne instruments can collect spatiotemporal data describing the animal's movement and environment at a scale relevant to its behavior. Ecologists have developed methods for identifying the area(s) used by an animal (e.g., home range) and those used most intensely (utilization distribution) based on location data. However, few have extended these models beyond their traditional roles as descriptive 2D summaries of point data. Here we demonstrate how the home range method, T-LoCoH, can be expanded to quantify collective sampling coverage by multiple instrumented animals using grey seals (Halichoerus grypus) equipped with GPS tags and acoustic transceivers on the Scotian Shelf (Atlantic Canada) as a case study. At the individual level, we illustrate how time and space-use metrics quantifying individual sampling coverage may be used to determine the rate of acoustic transmissions received. Grey seals collectively sampled an area of 11,308 km² and intensely sampled an area of 31 km² from June-December. The largest area sampled was in July (2094.56 km²) and the smallest area sampled occurred in August (1259.80 km²), with changes in sampling coverage observed through time. T-LoCoH provides an effective means to quantify changes in collective sampling effort by multiple instrumented animals and to compare these changes across time. We also illustrate how time and space-use metrics of individual instrumented seal movement calculated using T-LoCoH can be used to account for differences in the amount of time a bioprobe (biological sampling platform) spends in an area.
Measuring the scale dependence of intrinsic alignments using multiple shear estimates
NASA Astrophysics Data System (ADS)
Leonard, C. Danielle; Mandelbaum, Rachel
2018-06-01
We present a new method for measuring the scale dependence of the intrinsic alignment (IA) contamination to the galaxy-galaxy lensing signal, which takes advantage of multiple shear estimation methods applied to the same source galaxy sample. By exploiting the resulting correlation of both shape noise and cosmic variance, our method can provide an increase in the signal-to-noise of the measured IA signal as compared to methods which rely on the difference of the lensing signal from multiple photometric redshift bins. For a galaxy-galaxy lensing measurement which uses LSST sources and DESI lenses, the signal-to-noise on the IA signal from our method is predicted to improve by a factor of ~2 relative to the method of Blazek et al. (2012), for pairs of shear estimates which yield substantially different measured IA amplitudes and highly correlated shape noise terms. We show that statistical error necessarily dominates the measurement of intrinsic alignments using our method. We also consider a physically motivated extension of the Blazek et al. (2012) method which assumes that all nearby galaxy pairs, rather than only excess pairs, are subject to IA. In this case, the signal-to-noise of the method of Blazek et al. (2012) is improved.
Kuan, Da-Han; Wang, I-Shun; Lin, Jiun-Rue; Yang, Chao-Han; Huang, Chi-Hsien; Lin, Yen-Hung; Lin, Chih-Ting; Huang, Nien-Tsu
2016-08-02
The hemoglobin-A1c test, measuring the ratio of glycated hemoglobin (HbA1c) to hemoglobin (Hb) levels, has been a standard assay in diabetes diagnosis that removes the day-to-day glucose level variation. Currently, the HbA1c test is restricted to hospitals and central laboratories due to the laborious, time-consuming whole blood processing and bulky instruments. In this paper, we have developed a microfluidic device integrating dual CMOS polysilicon nanowire sensors (MINS) for on-chip whole blood processing and simultaneous detection of multiple analytes. The micromachined polymethylmethacrylate (PMMA) microfluidic device consisted of a serpentine microchannel with multiple dam structures designed for non-lysed cells or debris trapping, uniform plasma/buffer mixing and dilution. The CMOS-fabricated polysilicon nanowire sensors integrated with the microfluidic device were designed for the simultaneous, label-free electrical detection of multiple analytes. Our study first measured the Hb and HbA1c levels in 11 clinical samples via these nanowire sensors. The results were compared with those of standard Hb and HbA1c measurement methods (Hb: the sodium lauryl sulfate hemoglobin detection method; HbA1c: cation-exchange high-performance liquid chromatography) and showed comparable outcomes. Finally, we successfully demonstrated the efficacy of the MINS device's on-chip whole blood processing followed by simultaneous Hb and HbA1c measurement in a clinical sample. Compared to current Hb and HbA1c sensing instruments, the MINS platform is compact and can simultaneously detect two analytes with only 5 μL of whole blood, which corresponds to a 300-fold blood volume reduction. The total assay time, including the in situ sample processing and analyte detection, was just 30 minutes. 
Based on its on-chip whole blood processing and simultaneous multiple analyte detection functionalities with a lower sample volume requirement and shorter process time, the MINS device can be effectively applied to real-time diabetes diagnostics and monitoring in point-of-care settings.
Computer-aided visualization and analysis system for sequence evaluation
Chee, Mark S.
1999-10-26
A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).
Computer-aided visualization and analysis system for sequence evaluation
Chee, Mark S.
2001-06-05
A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).
1996-09-01
Generalized Likelihood Ratio (GLR) and voting techniques. The third class consisted of multiple hypothesis filter detectors, specifically the MMAE. The...vector version, versus a tensor if we use the matrix version of the power spectral density estimate. Using this notation, we will derive an...as MATLAB, have an intrinsic sample covariance computation available, which makes this method quite easy to implement. In practice, the mean for the
Hierarchical screening for multiple mental disorders.
Batterham, Philip J; Calear, Alison L; Sunderland, Matthew; Carragher, Natacha; Christensen, Helen; Mackinnon, Andrew J
2013-10-01
There is a need for brief, accurate screening when assessing multiple mental disorders. Two-stage hierarchical screening, consisting of brief pre-screening followed by a battery of disorder-specific scales for those who meet diagnostic criteria, may increase the efficiency of screening without sacrificing precision. This study tested whether more efficient screening could be achieved using two-stage hierarchical screening than by administering multiple separate tests. Two Australian adult samples (N=1990) with high rates of psychopathology were recruited using Facebook advertising to examine four methods of hierarchical screening for four mental disorders: major depressive disorder, generalised anxiety disorder, panic disorder and social phobia. Using K6 scores to determine whether full screening was required did not increase screening efficiency. However, pre-screening based on two decision tree approaches or item gating led to considerable reductions in the mean number of items presented per disorder screened, with estimated item reductions of up to 54%. The sensitivity of these hierarchical methods approached 100% relative to the full screening battery. Further testing of the hierarchical screening approach based on clinical criteria and in other samples is warranted. The results demonstrate that a two-phase hierarchical approach to screening multiple mental disorders leads to considerable efficiency gains without reducing accuracy. Screening programs should take advantage of prescreeners based on gating items or decision trees to reduce the burden on respondents. © 2013 Elsevier B.V. All rights reserved.
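The efficiency gain from gating can be illustrated with a toy item-count calculation; the gate threshold and battery length below are assumptions for illustration, not the study's instruments.

```python
def two_stage_screen(prescreen_scores, battery_items, gate):
    """Sketch of two-stage screening: administer the full
    disorder-specific battery only when the brief prescreen (e.g. a
    gating item) meets the gate criterion. Returns the total items
    administered and which respondents received full screening.
    """
    items_used = 0
    flagged = []
    for rid, score in prescreen_scores.items():
        items_used += 1                   # one gating item per respondent
        if score >= gate:
            items_used += battery_items   # full battery only if gated in
            flagged.append(rid)
    return items_used, flagged

scores = {"r1": 3, "r2": 0, "r3": 2, "r4": 1}
used, flagged = two_stage_screen(scores, battery_items=20, gate=2)
```

Here 44 items replace the 80 of universal full screening, at the cost of missing any case the gating item fails to flag, which is why the abstract emphasizes near-100% sensitivity of the prescreens.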
Chen, Yi-Ting; Chen, Hsiao-Wei; Wu, Chun-Feng; Chu, Lichieh Julie; Chiang, Wei-Fang; Wu, Chih-Ching; Yu, Jau-Song; Tsai, Cheng-Han; Liang, Kung-Hao; Chang, Yu-Sun; Wu, Maureen; Ou Yang, Wei-Ting
2017-05-01
Multiple (selected) reaction monitoring (MRM/SRM) of peptides is a growing technology for target protein quantification because it is more robust, precise, accurate, high-throughput, and multiplex-capable than antibody-based techniques. The technique has been applied clinically to the large-scale quantification of multiple target proteins in different types of fluids. However, previous MRM-based studies have placed less focus on sample-preparation workflow and analytical performance in the precise quantification of proteins in saliva, a noninvasively sampled body fluid. In this study, we evaluated the analytical performance of a simple and robust multiple reaction monitoring (MRM)-based targeted proteomics approach incorporating liquid chromatography with mass spectrometry detection (LC-MRM/MS). This platform was used to quantitatively assess the biomarker potential of a group of 56 salivary proteins that have previously been associated with human cancers. To further enhance the development of this technology for assay of salivary samples, we optimized the workflow for salivary protein digestion and evaluated quantification performance, robustness and technical limitations in analyzing clinical samples. Using a clinically well-characterized cohort of two independent clinical sample sets (total n = 119), we quantitatively characterized these protein biomarker candidates in saliva specimens from controls and oral squamous cell carcinoma (OSCC) patients. The results clearly showed a significant elevation of most targeted proteins in saliva samples from OSCC patients compared with controls. Overall, this platform was capable of assaying the most highly multiplexed panel of salivary protein biomarkers, highlighting the clinical utility of MRM in oral cancer biomarker research. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
Reno, Philip L; Lovejoy, C Owen
2015-01-01
Sexual dimorphism in body size is often used as a correlate of social and reproductive behavior in Australopithecus afarensis. In addition to a number of isolated specimens, the sample for this species includes two small associated skeletons (A.L. 288-1 or "Lucy" and A.L. 128/129) and a geologically contemporaneous death assemblage of several larger individuals (A.L. 333). These have driven both perceptions and quantitative analyses concluding that Au. afarensis was markedly dimorphic. The Template Method enables simultaneous evaluation of multiple skeletal sites, thereby greatly expanding sample size, and reveals that Au. afarensis dimorphism was similar to that of modern humans. A new very large partial skeleton (KSD-VP-1/1 or "Kadanuumuu") can now also be used, like Lucy, as a template specimen. In addition, the recently developed Geometric Mean Method has been used to argue that Au. afarensis was equally or even more dimorphic than gorillas. However, in its previous application Lucy and A.L. 128/129 accounted for 10 of 11 estimates of female size. Here we directly compare the two methods and demonstrate that including multiple measurements from the same partial skeleton that falls at the margin of the species size range dramatically inflates dimorphism estimates. Prevention of the dominance of a single specimen's contribution to calculations of multiple dimorphism estimates confirms that Au. afarensis was only moderately dimorphic.
Lovejoy, C. Owen
2015-01-01
Sexual dimorphism in body size is often used as a correlate of social and reproductive behavior in Australopithecus afarensis. In addition to a number of isolated specimens, the sample for this species includes two small associated skeletons (A.L. 288-1 or “Lucy” and A.L. 128/129) and a geologically contemporaneous death assemblage of several larger individuals (A.L. 333). These have driven both perceptions and quantitative analyses concluding that Au. afarensis was markedly dimorphic. The Template Method enables simultaneous evaluation of multiple skeletal sites, thereby greatly expanding sample size, and reveals that Au. afarensis dimorphism was similar to that of modern humans. A new very large partial skeleton (KSD-VP-1/1 or “Kadanuumuu”) can now also be used, like Lucy, as a template specimen. In addition, the recently developed Geometric Mean Method has been used to argue that Au. afarensis was equally or even more dimorphic than gorillas. However, in its previous application Lucy and A.L. 128/129 accounted for 10 of 11 estimates of female size. Here we directly compare the two methods and demonstrate that including multiple measurements from the same partial skeleton that falls at the margin of the species size range dramatically inflates dimorphism estimates. Prevention of the dominance of a single specimen’s contribution to calculations of multiple dimorphism estimates confirms that Au. afarensis was only moderately dimorphic. PMID:25945314
NASA Astrophysics Data System (ADS)
Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.
2018-04-01
Small-strain measurement in the triaxial test is considered significantly more accurate than external strain measurement using the conventional method, which is subject to the systematic errors normally associated with the test. Three submersible miniature linear variable differential transducers (LVDTs) were mounted on yokes clamped directly onto the soil sample, each spaced 120° from the others. The device setup, using a 0.4 N resolution load cell and a 16-bit A/D converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small-strain local measurement data was performed using the new Normalized Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small-strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent data prediction and a significant improvement toward more reliable prediction of soil strength, which can reduce the cost and time of laboratory testing.
Schmidt, Martin; Van Bel, Michiel; Woloszynska, Magdalena; Slabbinck, Bram; Martens, Cindy; De Block, Marc; Coppens, Frederik; Van Lijsebettens, Mieke
2017-07-06
Cytosine methylation in plant genomes is important for the regulation of gene transcription and transposon activity. Genome-wide methylomes are studied upon mutation of the DNA methyltransferases, adaptation to environmental stresses or during development. However, from basic biology to breeding programs, there is a need to monitor multiple samples to determine transgenerational methylation inheritance or differential cytosine methylation. Methylome data obtained by sodium hydrogen sulfite (bisulfite)-conversion and next-generation sequencing (NGS) provide genome-wide information on cytosine methylation. However, a profiling method that detects cytosine methylation state dispersed over the genome would allow high-throughput analysis of multiple plant samples with distinct epigenetic signatures. We use specific restriction endonucleases to enrich for cytosine coverage in a bisulfite and NGS-based profiling method, which was compared to whole-genome bisulfite sequencing of the same plant material. We established an effective methylome profiling method in plants, termed plant-reduced representation bisulfite sequencing (plant-RRBS), using optimized double restriction endonuclease digestion, fragment end repair, adapter ligation, followed by bisulfite conversion, PCR amplification and NGS. We report a performant laboratory protocol and a straightforward bioinformatics data analysis pipeline for plant-RRBS, applicable for any reference-sequenced plant species. As a proof of concept, methylome profiling was performed using an Oryza sativa ssp. indica pure breeding line and a derived epigenetically altered line (epiline). Plant-RRBS detects methylation levels at tens of millions of cytosine positions deduced from bisulfite conversion in multiple samples. To evaluate the method, the coverage of cytosine positions, the intra-line similarity and the differential cytosine methylation levels between the pure breeding line and the epiline were determined. 
Plant-RRBS reproducibly covers up to one-fourth of the cytosine positions in the rice genome when using MspI-DpnII within a group of five biological replicates of a line. The method predominantly detects cytosine methylation in putative promoter regions and unannotated regions in rice. Plant-RRBS offers high-throughput, genome-dispersed methylation detection through the effective read numbers generated from reproducibly covered genome fractions using optimized endonuclease combinations, facilitating comparative multi-sample analyses of cytosine methylation and transgenerational stability in experimental material and in plant breeding populations.
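The per-position quantity a bisulfite-sequencing pipeline such as this reports can be sketched as follows; the function name and read counts are illustrative, not part of the plant-RRBS software:

```python
def methylation_level(c_reads: int, t_reads: int) -> float:
    """Fraction of reads retaining C (methylated) at one cytosine position.

    After bisulfite conversion and PCR, unmethylated cytosines read as T,
    so the methylation level at a position is C / (C + T).
    """
    total = c_reads + t_reads
    if total == 0:
        raise ValueError("no read coverage at this position")
    return c_reads / total

print(methylation_level(18, 2))   # 18 C reads, 2 T reads -> 0.9
```

Comparative analyses between lines then reduce to differences of these per-position fractions across commonly covered positions.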
Survival and multiplication of Legionella pneumophila in municipal drinking water systems.
States, S J; Conley, L F; Kuchta, J M; Oleck, B M; Lipovich, M J; Wolford, R S; Wadowsky, R M; McNamara, A M; Sykora, J L; Keleti, G
1987-01-01
Studies were conducted to investigate the survival and multiplication of Legionella spp. in public drinking water supplies. An attempt was made, over a period of several years, to isolate legionellae from a municipal system. Sampling sites included the river water supply, treatment plant, finished water reservoir system, mains, and distribution taps. Despite the use of several isolation techniques, Legionella spp. could not be detected in any of the samples other than those collected from the river. It was hypothesized that this was due to the maintenance of a chlorine residual throughout the system. To investigate the potential for Legionella growth, additional water samples, collected from throughout the system, were dechlorinated, pasteurized, and inoculated with Legionella pneumophila. Subsequent growth indicated that many of these samples, especially those collected from areas affected by an accumulation of algal materials, exhibited a much greater ability to support Legionella multiplication than did river water prior to treatment. Chemical analyses were also performed on these samples. Correlation of chemical data and experimental growth results indicated that the chemical environment significantly affects the ability of the water to support multiplication, with turbidity, organic carbon, and certain metals being of particular importance. These studies indicate that the potential exists for Legionella growth within municipal systems and support the hypothesis that public water supplies may contaminate the plumbing systems of hospitals and other large buildings. The results also suggest that useful methods to control this contamination include adequate treatment plant filtration, maintenance of a chlorine residual throughout the treatment and distribution network, and effective covering of open reservoirs. PMID:3606101
Array-based photoacoustic spectroscopy
Autrey, S. Thomas; Posakony, Gerald J.; Chen, Yu
2005-03-22
Methods and apparatus for simultaneous or sequential, rapid analysis of multiple samples by photoacoustic spectroscopy are disclosed. A photoacoustic spectroscopy sample array including a body having at least three recesses or affinity masses connected thereto is used in conjunction with a photoacoustic spectroscopy system. At least one acoustic detector is positioned near the recesses or affinity masses for detection of acoustic waves emitted from species of interest within the recesses or affinity masses.
Gu, Hui-Wen; Wu, Hai-Long; Yin, Xiao-Li; Li, Yong; Liu, Ya-Juan; Xia, Hui; Zhang, Shu-Rong; Jin, Yi-Feng; Sun, Xiao-Dong; Yu, Ru-Qin; Yang, Peng-Yuan; Lu, Hao-Jie
2014-10-27
β-blockers are the first-line therapeutic agents for treating cardiovascular diseases and also a class of substances prohibited in athletic competitions. In this work, a strategy that combines three-way liquid chromatography-mass spectrometry (LC-MS) data with a second-order calibration method based on the alternating trilinear decomposition (ATLD) algorithm was developed for the simultaneous determination of ten β-blockers in human urine and plasma samples. This flexible strategy proved to be a useful tool for solving the problems of overlapped peaks and uncalibrated interferences encountered in quantitative LC-MS, and it made interference-free, multi-targeted qualitative and quantitative analysis of β-blockers in complex matrices possible. The limits of detection were in the range of 2.0×10(-5)-6.2×10(-3) μg mL(-1), and the average recoveries were between 90 and 110% with standard deviations and average relative prediction errors less than 10%, indicating that the strategy could provide satisfactory prediction results for ten β-blockers in human urine and plasma samples using only a liquid chromatograph hyphenated to a single-quadrupole mass spectrometer in full-scan mode. To further confirm the feasibility and reliability of the proposed method, the same batch of samples was analyzed by a multiple reaction monitoring (MRM) method. A t-test demonstrated that there were no significant differences between the prediction results of the two methods. Considering its advantages of speed, low cost, high sensitivity, and no need for complicated optimization of chromatographic and tandem mass spectrometric conditions, the proposed strategy is expected to serve as an attractive alternative for quantifying analytes of interest in complex systems such as cells, biological fluids, food, the environment, and pharmaceuticals. Copyright © 2014 Elsevier B.V. All rights reserved.
Zhang, Shanshan; Liu, Xiaofei; Qin, Jia'an; Yang, Meihua; Zhao, Hongzheng; Wang, Yong; Guo, Weiying; Ma, Zhijie; Kong, Weijun
2017-11-15
A simple and rapid gas chromatography-flame photometric detection (GC-FPD) method was developed for the determination of 12 organophosphorus pesticides (OPPs) in Salvia miltiorrhizae using ultrasonication-assisted one-step extraction (USAE) without any clean-up steps. Crucial parameters such as the type of extraction solvent were optimized to improve method performance for trace analysis. Clean-up steps could be omitted because no interferences were detected in the GC-FPD chromatograms. Under the optimized conditions, limits of detection (LODs) and quantitation (LOQs) for all pesticides were in the range of 0.001-0.002 mg/kg and 0.002-0.01 mg/kg, respectively, all below the suggested regulatory maximum residue limits. RSDs for method precision (intra- and inter-day variations) were lower than 6.8%, in accordance with international regulations. Average recovery rates for all pesticides at three fortification levels (0.5, 1.0 and 5.0 mg/kg) were in the range of 71.2-101.0% with relative standard deviations (RSDs) <13%. The developed method was evaluated for its feasibility in the simultaneous pre-concentration and determination of the 12 OPPs in 32 batches of real S. miltiorrhizae samples. Dimethoate was detected in four samples at concentrations of 0.016-0.02 mg/kg. Dichlorvos and omethoate were found in the same sample from Sichuan province at 0.004 and 0.027 mg/kg, respectively. Malathion and monocrotophos were determined in two other samples at 0.014 and 0.028 mg/kg, respectively. All the positive samples were confirmed by LC-MS/MS. The simple, reliable and rapid USAE-GC-FPD method, with many advantages over traditional techniques, should be preferred for trace analysis of multiple pesticides in more complex matrices. Copyright © 2017 Elsevier B.V. All rights reserved.
Zhao, Yaju; Tang, Minmin; Liao, Qiaobo; Li, Zhoumin; Li, Hui; Xi, Kai; Tan, Li; Zhang, Mei; Xu, Danke; Chen, Hong-Yuan
2018-04-27
In this work, we demonstrate, for the first time, the development of a disposable MoS 2 -arrayed matrix-assisted laser desorption/ionization mass spectrometry (MALDI MS) chip combined with an immunoaffinity enrichment method for high-throughput, rapid, and simultaneous quantitation of multiple sulfonamides (SAs). The disposable MALDI MS chip was designed and fabricated by MoS 2 array formation on a commercial indium tin oxide (ITO) glass slide. A series of SAs were analyzed, and clear deprotonated signals were obtained in negative-ion mode. Compared with MoS 2 -arrayed commercial steel plate, the prepared MALDI MS chip exhibited comparable LDI efficiency, providing a good alternative and disposable substrate for MALDI MS analysis. Furthermore, internal standard (IS) was previously deposited onto the MoS 2 array to simplify the experimental process for MALDI MS quantitation. 96 sample spots could be analyzed within 10 min in one single chip to perform quantitative analysis, recovery studies, and real foodstuff detection. Upon targeted extraction and enrichment by antibody conjugated magnetic beads, five SAs were quantitatively determined by the IS-first method with the linear range of 0.5-10 ng/mL ( R 2 > 0.990). Good recoveries and repeatability were obtained for spiked pork, egg, and milk samples. SAs in several real foodstuffs were successfully identified and quantified. The developed method may provide a promising tool for the routine analysis of antibiotic residues in real samples.
Nakayasu, Ernesto S.; Nicora, Carrie D.; Sims, Amy C.; Burnum-Johnson, Kristin E.; Kim, Young-Mo; Kyle, Jennifer E.; Matzke, Melissa M.; Shukla, Anil K.; Chu, Rosalie K.; Schepmoes, Athena A.; Jacobs, Jon M.; Baric, Ralph S.; Webb-Robertson, Bobbie-Jo; Smith, Richard D.
2016-01-01
ABSTRACT Integrative multi-omics analyses can empower more effective investigation and complete understanding of complex biological systems. Despite recent advances in a range of omics analyses, multi-omic measurements of the same sample are still challenging and current methods have not been well evaluated in terms of reproducibility and broad applicability. Here we adapted a solvent-based method, widely applied for extracting lipids and metabolites, to add proteomics to mass spectrometry-based multi-omics measurements. The metabolite, protein, and lipid extraction (MPLEx) protocol proved to be robust and applicable to a diverse set of sample types, including cell cultures, microbial communities, and tissues. To illustrate the utility of this protocol, an integrative multi-omics analysis was performed using a lung epithelial cell line infected with Middle East respiratory syndrome coronavirus, which showed the impact of this virus on the host glycolytic pathway and also suggested a role for lipids during infection. The MPLEx method is a simple, fast, and robust protocol that can be applied for integrative multi-omic measurements from diverse sample types (e.g., environmental, in vitro, and clinical). IMPORTANCE In systems biology studies, the integration of multiple omics measurements (i.e., genomics, transcriptomics, proteomics, metabolomics, and lipidomics) has been shown to provide a more complete and informative view of biological pathways. Thus, the prospect of extracting different types of molecules (e.g., DNAs, RNAs, proteins, and metabolites) and performing multiple omics measurements on single samples is very attractive, but such studies are challenging due to the fact that the extraction conditions differ according to the molecule type. Here, we adapted an organic solvent-based extraction method that demonstrated broad applicability and robustness, which enabled comprehensive proteomics, metabolomics, and lipidomics analyses from the same sample. 
PMID:27822525
Methods for the preparation and analysis of solids and suspended solids for total mercury
Olund, Shane D.; DeWild, John F.; Olson, Mark L.; Tate, Michael T.
2004-01-01
The methods documented in this report are utilized by the Wisconsin District Mercury Lab for analysis of total mercury in solids (soils and sediments) and suspended solids (isolated on filters). Separate procedures are required for the different sample types. For solids, samples are prepared by room-temperature acid digestion and oxidation with aqua regia. The samples are brought up to volume with a 5 percent bromine monochloride solution to ensure complete oxidation and heated at 50 °C in an oven overnight. Samples are then analyzed with an automated flow injection system incorporating a cold vapor atomic fluorescence spectrometer. A method detection limit of 0.3 ng of mercury per digestion bomb was established using multiple analyses of an environmental sample. Based on the range of masses processed, the minimum sample reporting limit varies from 0.6 ng/g to 6 ng/g. Suspended solids samples are oxidized with a 5 percent bromine monochloride solution and held at 50 °C in an oven for 5 days. The samples are then analyzed with an automated flow injection system incorporating a cold vapor atomic fluorescence spectrometer. Using a certified reference material as a surrogate for an environmental sample, a method detection limit of 0.059 ng of mercury per filter was established. The minimum sample reporting limit varies from 0.059 ng/L to 1.18 ng/L, depending on the volume of water filtered.
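Method detection limits established from replicate analyses, as described above, are conventionally computed as a one-sided 99% Student's t multiplier times the standard deviation of the replicates; the report does not spell out its multiplier, so the table below is the standard choice, not necessarily the lab's exact value:

```python
import statistics

# One-sided 99% Student's t critical values by degrees of freedom
# (df = n - 1 for n replicate analyses), per the conventional
# replicate-based MDL procedure.
T_99 = {6: 3.143, 7: 2.998, 8: 2.896, 9: 2.821}

def method_detection_limit(replicates):
    """MDL = t(n-1, 0.99) * sample standard deviation of the replicates."""
    return T_99[len(replicates) - 1] * statistics.stdev(replicates)
```

With seven replicate analyses, for example, the multiplier is 3.143.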
Bailey-Wilson, Joan E.; Brennan, Jennifer S.; Bull, Shelley B; Culverhouse, Robert; Kim, Yoonhee; Jiang, Yuan; Jung, Jeesun; Li, Qing; Lamina, Claudia; Liu, Ying; Mägi, Reedik; Niu, Yue S.; Simpson, Claire L.; Wang, Libo; Yilmaz, Yildiz E.; Zhang, Heping; Zhang, Zhaogong
2012-01-01
Group 14 of Genetic Analysis Workshop 17 examined several issues related to analysis of complex traits using DNA sequence data. These issues included novel methods for analyzing rare genetic variants in an aggregated manner (often termed collapsing rare variants), evaluation of various study designs to increase power to detect effects of rare variants, and the use of machine learning approaches to model highly complex heterogeneous traits. Various published and novel methods for analyzing traits with extreme locus and allelic heterogeneity were applied to the simulated quantitative and disease phenotypes. Overall, we conclude that power is (as expected) dependent on locus-specific heritability or contribution to disease risk, large samples will be required to detect rare causal variants with small effect sizes, extreme phenotype sampling designs may increase power for smaller laboratory costs, methods that allow joint analysis of multiple variants per gene or pathway are more powerful in general than analyses of individual rare variants, population-specific analyses can be optimal when different subpopulations harbor private causal mutations, and machine learning methods may be useful for selecting subsets of predictors for follow-up in the presence of extreme locus heterogeneity and large numbers of potential predictors. PMID:22128066
Method for determining surface coverage by materials exhibiting different fluorescent properties
NASA Technical Reports Server (NTRS)
Chappelle, Emmett W. (Inventor); Daughtry, Craig S. T. (Inventor); Mcmurtrey, James E., III (Inventor)
1995-01-01
An improved method for detecting, measuring, and distinguishing crop residue, live vegetation, and mineral soil is presented. By measuring fluorescence in multiple bands, live and dead vegetation are distinguished. The surface of the ground is illuminated with ultraviolet radiation, inducing fluorescence in certain molecules. The emitted fluorescent emission induced by the ultraviolet radiation is measured by means of a fluorescence detector, consisting of a photodetector or video camera and filters. The spectral content of the emitted fluorescent emission is characterized at each point sampled, and the proportion of the sampled area covered by residue or vegetation is calculated.
Reply to “Ranking filter methods for concentrating pathogens in lake water”
Bushon, Rebecca N.; Francy, Donna S.; Gallardo, Vicente J.; Lindquist, H.D. Alan; Villegas, Eric N.; Ware, Michael W.
2013-01-01
Accurately comparing filtration methods is indeed difficult. Our method (1) and the method described by Borchardt et al. for determining recoveries are both acceptable approaches; however, each is designed to achieve a different research goal. Our study was designed to compare recoveries of multiple microorganisms in surface-water samples. Because, in practice, water-matrix effects come into play throughout filtration, concentration, and detection processes, we felt it important to incorporate those effects into the recovery results.
Multivariate survivorship analysis using two cross-sectional samples.
Hill, M E
1999-11-01
As an alternative to survival analysis with longitudinal data, I introduce a method that can be applied when one observes the same cohort in two cross-sectional samples collected at different points in time. The method allows for the estimation of log-probability survivorship models that estimate the influence of multiple time-invariant factors on survival over a time interval separating two samples. This approach can be used whenever the survival process can be adequately conceptualized as an irreversible single-decrement process (e.g., mortality, the transition to first marriage among a cohort of never-married individuals). Using data from the Integrated Public Use Microdata Series (Ruggles and Sobek 1997), I illustrate the multivariate method through an investigation of the effects of race, parity, and educational attainment on the survival of older women in the United States.
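The core idea above can be sketched numerically: for a closed cohort subject to an irreversible single-decrement process, interval survival is the ratio of the cohort's counts in the two cross-sections, and its logarithm is what the log-probability model expresses as a linear function of time-invariant covariates. The counts here are hypothetical:

```python
import math

def cohort_survival(count_t1: float, count_t2: float) -> float:
    """Interval survival probability for a closed, irreversible-decrement cohort."""
    return count_t2 / count_t1

# e.g. 10,000 women of a birth cohort in the first census,
# 7,800 of the same cohort a decade later
s = cohort_survival(10_000, 7_800)
log_s = math.log(s)   # the quantity modeled as a linear function of covariates
```

Covariate effects (race, parity, education) then enter as coefficients on log(s) in the multivariate model.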
Adaptively biased molecular dynamics: An umbrella sampling method with a time-dependent potential
NASA Astrophysics Data System (ADS)
Babin, Volodymyr; Karpusenka, Vadzim; Moradi, Mahmoud; Roland, Christopher; Sagui, Celeste
We discuss an adaptively biased molecular dynamics (ABMD) method for computing the free energy surface over a set of reaction coordinates. The ABMD method belongs to the general category of umbrella sampling methods with an evolving biasing potential. It is characterized by a small number of control parameters and an O(t) numerical cost with simulation time t. The method naturally allows for extensions based on multiple walkers and a replica-exchange mechanism. The workings of the method are illustrated with a number of examples, including sugar puckering and free energy landscapes for polymethionine and polyproline peptides and for a short β-turn peptide. ABMD has been implemented in the latest version (Case et al., AMBER 10; University of California: San Francisco, 2008) of the AMBER software package and is freely available to the simulation community.
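The evolving-bias idea can be sketched in one dimension: deposit small Gaussian hills where the walker lingers so the accumulated bias eventually flattens the landscape and approximates the negative free energy. The potential, hill height, width, and step sizes below are illustrative choices, not the AMBER implementation:

```python
import math
import random

def biased_force(x, hills):
    """-dU/dx for a double well U = (x^2 - 1)^2 plus deposited Gaussian hills."""
    f = -4.0 * x * (x * x - 1.0)
    for xc in hills:
        # derivative of a hill of height 0.1 and variance 0.05 centered at xc
        f += 0.1 * (x - xc) / 0.05 * math.exp(-((x - xc) ** 2) / (2 * 0.05))
    return f

random.seed(0)
x, hills = -1.0, []
for step in range(20000):            # overdamped Langevin-like update
    x += 0.001 * biased_force(x, hills) + 0.05 * random.gauss(0.0, 1.0)
    if step % 200 == 0:
        hills.append(x)              # bias grows where the walker lingers
# The accumulated bias approximates the negative of the free energy surface.
```

Multiple-walker and replica-exchange variants share the growing bias (or exchange configurations) across simultaneous copies of such a loop.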
NASA Astrophysics Data System (ADS)
Huang, Po-Jung; Baghbani Kordmahale, Sina; Chou, Chao-Kai; Yamaguchi, Hirohito; Hung, Mien-Chie; Kameoka, Jun
2016-03-01
Signal transductions including multiple protein post-translational modifications (PTM), protein-protein interactions (PPI), and protein-nucleic acid interactions (PNI) play critical roles in cell proliferation and differentiation that are directly related to cancer biology. Traditional methods, like mass spectrometry, immunoprecipitation, fluorescence resonance energy transfer, and fluorescence correlation spectroscopy, require a large amount of sample and long processing times. The "microchannel for multiple-parameter analysis of proteins in single-complex" (mMAPS) approach we proposed can reduce the processing time and sample volume because the system is composed of microfluidic channels, fluorescence microscopy, and computerized data analysis. In this paper, we present an automated mMAPS including an integrated microfluidic device, an automated stage, and electrical relays for high-throughput clinical screening. Based on this result, we estimated that this automated detection system will be able to screen approximately 150 patient samples in a 24-hour period, providing a practical application for analyzing tissue samples in a clinical setting.
Chan, Yvonne L; Schanzenbach, David; Hickerson, Michael J
2014-09-01
Methods that integrate population-level sampling from multiple taxa into a single community-level analysis are an essential addition to the comparative phylogeographic toolkit. Detecting how species within communities have demographically tracked each other in space and time is important for understanding the effects of future climate and landscape changes and the resulting acceleration of extinctions, biological invasions, and potential surges in adaptive evolution. Here, we present a statistical framework for such an analysis based on hierarchical approximate Bayesian computation (hABC) with the goal of detecting concerted demographic histories across an ecological assemblage. Our method combines population genetic data sets from multiple taxa into a single analysis to estimate: 1) the proportion of a community sample that demographically expanded in a temporally clustered pulse and 2) when the pulse occurred. To validate the accuracy and utility of this new approach, we use simulation cross-validation experiments and subsequently analyze an empirical data set of 32 avian populations from Australia that are hypothesized to have expanded from smaller refugia populations in the late Pleistocene. The method can accommodate data set heterogeneity such as variability in effective population size, mutation rates, and sample sizes across species and exploits the statistical strength from the simultaneous analysis of multiple species. This hABC framework used in a multitaxa demographic context can increase our understanding of the impact of historical climate change by determining what proportion of the community responded in concert or independently and can be used with a wide variety of comparative phylogeographic data sets as biota-wide DNA barcoding data sets accumulate. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
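The hierarchical estimation above can be caricatured with rejection ABC: draw the fraction (zeta) of taxa that co-expanded, simulate a community-level summary statistic, and keep draws whose simulated statistic lands near the observed one. The simulator and the observed value below are stand-ins, not the coalescent machinery the study uses:

```python
import random

random.seed(1)

def simulate_stat(zeta, n_taxa=32):
    """Toy community statistic: co-expanding taxa share one expansion signal."""
    shared = random.gauss(1.0, 0.1)
    stats = [shared if random.random() < zeta else random.gauss(0.0, 0.5)
             for _ in range(n_taxa)]
    return sum(stats) / n_taxa

observed = 0.75                               # hypothetical observed statistic
draws = [random.random() for _ in range(50000)]   # uniform prior on zeta
accepted = [z for z in draws if abs(simulate_stat(z) - observed) < 0.05]
posterior_mean = sum(accepted) / len(accepted)
```

The accepted draws approximate the posterior for the co-expanding fraction; the full hABC framework additionally infers the timing of the pulse and accommodates per-taxon heterogeneity.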
Jasmine, Farzana; Shinkle, Justin; Sabarinathan, Mekala; Ahsan, Habibul; Pierce, Brandon L; Kibriya, Muhammad G
2018-03-12
Relative telomere length (RTL) is a potential biomarker of aging and of risk for chronic disease. Previously, we developed a probe-based RTL assay on the Luminex platform, in which probes for the telomere (T) and a reference gene (R) for a given DNA sample were tested in a single well. Here, we describe a method of pooling multiple samples in one well to increase throughput and cost-effectiveness. We used four different microbeads for the same T-probe and four different microbeads for the same R-probe. Each pair of probe sets was hybridized to DNA in separate plates and then pooled into a single plate for all subsequent steps. We used DNA samples from 60 independent individuals, repeated in multiple batches, to test the precision. The precision was good to excellent, with an intraclass correlation coefficient (ICC) of 0.908 (95% CI 0.856-0.942). More than 67% of the variation in RTL could be explained by sample-to-sample variation; less than 0.1% of the variation was due to batch-to-batch variation and 0.3% was explained by bead-to-bead variation. We increased the throughput of the RTL Luminex assay from 60 to 240 samples per run. The new assay was validated against the original Luminex assay without pooling (r = 0.79, P = 1.44 × 10(-15)). In an independent set of samples (n = 550), the new assay showed a negative correlation of RTL with age (r = -0.41), providing external validation for the method. We describe a novel high-throughput pooled-sample multiplex Luminex assay for RTL with good to excellent precision, suitable for large-scale studies. © 2018 Wiley Periodicals, Inc.
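A precision figure like the ICC above can be computed from repeated measurements with the standard one-way random-effects formula (rows are samples, columns are repeated batches); this sketch uses textbook one-way ANOVA, not the authors' exact software:

```python
import statistics

def icc_oneway(data):
    """One-way random-effects ICC(1): (MSB - MSW) / (MSB + (k-1)*MSW)."""
    k = len(data[0])                                   # measurements per sample
    means = [statistics.mean(row) for row in data]
    msb = k * statistics.variance(means)               # between-sample mean square
    msw = statistics.mean(statistics.variance(row) for row in data)  # within
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfectly repeatable duplicates give an ICC of 1; values near 0.9, as reported, indicate that most variance is genuine sample-to-sample difference rather than assay noise.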
Sampling challenges in a study examining refugee resettlement
2011-01-01
Background: As almost half of all refugees currently under United Nations protection are from Afghanistan or Iraq and significant numbers have already been resettled outside the region of origin, it is likely that future research will examine their resettlement needs. A number of methodological challenges confront researchers working with culturally and linguistically diverse groups; however, few detailed articles are available to inform other studies. The aim of this paper is to outline challenges with sampling and recruitment of socially invisible refugee groups, describing the method adopted for a mixed methods exploratory study assessing mental health, subjective wellbeing and resettlement perspectives of Afghan and Kurdish refugees living in New Zealand and Australia. Sampling strategies used in previous studies with similar refugee groups were considered before determining the approach to recruitment. Methods: A snowball approach was adopted for the study, with multiple entry points into the communities being used to choose as wide a range of people as possible to provide further contacts and reduce selection bias. Census data was used to assess the representativeness of the sample. Results: A sample of 193 former refugee participants was recruited in Christchurch (n = 98) and Perth (n = 95); 47% were of Afghan and 53% of Kurdish ethnicity. A good gender balance (males 52%, females 48%) was achieved overall, mainly as a result of the sampling method used. Differences in the demographic composition of groups in each location were observed, especially in relation to the length of time spent in a refugee situation and time since arrival, reflecting variations in national humanitarian quota intakes. Although some measures were problematic, Census data comparison to assess reasonable representativeness of the study sample was generally reassuring. 
Conclusions: Snowball sampling, with multiple initiation points to reduce selection bias, was necessary to locate and identify participants, provide reassurance and break down barriers. Personal contact was critical for both recruitment and data quality, and highlighted the importance of interviewer cultural sensitivity. Cross-national comparative studies, particularly those relating to refugee resettlement within different policy environments, also need to take into consideration the differing pre-migration experiences and time since arrival of refugee groups, as these can add additional layers of complexity to study design and interpretation. PMID:21406104
Semi-automated 96-well liquid-liquid extraction for quantitation of drugs in biological fluids.
Zhang, N; Hoffman, K L; Li, W; Rossi, D T
2000-02-01
A semi-automated liquid-liquid extraction (LLE) technique for biological fluid sample preparation was introduced for the quantitation of four drugs in rat plasma. All liquid transferring during the sample preparation was automated using a Tomtec Quadra 96 Model 320 liquid handling robot, which processed up to 96 samples in parallel. The samples were in either 96-deep-well plate or tube-rack format. One plate of samples can be prepared in approximately 1.5 h, and the 96-well plate is directly compatible with the autosampler of an LC/MS system. Selection of organic solvents and recoveries are discussed. Also, precision, relative error, linearity and quantitation of the semi-automated LLE method are estimated for four example drugs using LC/MS/MS with a multiple reaction monitoring (MRM) approach. The applicability of this method and future directions are evaluated.
Evaluating single-pass catch as a tool for identifying spatial pattern in fish distribution
Bateman, Douglas S.; Gresswell, Robert E.; Torgersen, Christian E.
2005-01-01
We evaluate the efficacy of single-pass electrofishing without blocknets as a tool for collecting spatially continuous fish distribution data in headwater streams. We compare spatial patterns in abundance, sampling effort, and length-frequency distributions from single-pass sampling of coastal cutthroat trout (Oncorhynchus clarki clarki) to data obtained from a more precise multiple-pass removal electrofishing method in two mid-sized (500–1000 ha) forested watersheds in western Oregon. Abundance estimates from single- and multiple-pass removal electrofishing were positively correlated in both watersheds, r = 0.99 and 0.86. There were no significant trends in capture probabilities at the watershed scale (P > 0.05). Moreover, among-sample variation in fish abundance was higher than within-sample error in both streams indicating that increased precision of unit-scale abundance estimates would provide less information on patterns of abundance than increasing the fraction of habitat units sampled. In the two watersheds, respectively, single-pass electrofishing captured 78 and 74% of the estimated population of cutthroat trout with 7 and 10% of the effort. At the scale of intermediate-sized watersheds, single-pass electrofishing exhibited a sufficient level of precision to be effective in detecting spatial patterns of cutthroat trout abundance and may be a useful tool for providing the context for investigating fish-habitat relationships at multiple scales.
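Multiple-pass removal abundance estimates of the kind used as the benchmark above are commonly computed with a two-pass removal estimator; a sketch of the classic Seber–Le Cren form (the abstract does not state which removal estimator the authors used, and the example counts are hypothetical):

```python
def two_pass_removal_estimate(c1: int, c2: int) -> tuple[float, float]:
    """Seber–Le Cren two-pass removal estimator.

    With constant capture probability p, the first pass removes c1 = N*p fish
    and the second c2 = N*(1-p)*p, which solves to N = c1^2 / (c1 - c2).
    Returns (estimated abundance N, estimated capture probability p).
    """
    if c2 >= c1:
        raise ValueError("estimator requires a declining catch (c1 > c2)")
    p = (c1 - c2) / c1
    n = c1 * c1 / (c1 - c2)
    return n, p

print(two_pass_removal_estimate(60, 15))   # -> (80.0, 0.75)
```

Comparing c1 alone against N for each habitat unit mirrors the study's single-pass versus multiple-pass correlation.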
Qu, Xiangmeng; Li, Min; Zhang, Hongbo; Lin, Chenglie; Wang, Fei; Xiao, Mingshu; Zhou, Yi; Shi, Jiye; Aldalbahi, Ali; Pei, Hao; Chen, Hong; Li, Li
2017-09-20
The development of a real-time continuous analytical platform for the pathogen detection is of great scientific importance for achieving better disease control and prevention. In this work, we report a rapid and recyclable microfluidic bioassay system constructed from oligonucleotide arrays for selective and sensitive continuous identification of DNA targets of fungal pathogens. We employ the thermal denaturation method to effectively regenerate the oligonucleotide arrays for multiple sample detection, which could considerably reduce the screening effort and costs. The combination of thermal denaturation and laser-induced fluorescence detection technique enables real-time continuous identification of multiple samples (<10 min per sample). As a proof of concept, we have demonstrated that two DNA targets of fungal pathogens (Botrytis cinerea and Didymella bryoniae) can be sequentially analyzed using our rapid microfluidic bioassay system, which provides a new paradigm in the design of microfluidic bioassay system and will be valuable for chemical and biomedical analysis.
Vodnick, David James; Dwivedi, Arpit; Keranen, Lucas Paul; Okerlund, Michael David; Schmitz, Roger William; Warren, Oden Lee; Young, Christopher David
2014-07-08
An automated testing system includes systems and methods to facilitate inline production testing of samples at a micro (multiple microns) or less scale with a mechanical testing instrument. In an example, the system includes a probe changing assembly for coupling and decoupling a probe of the instrument. The probe changing assembly includes a probe change unit configured to grasp one of a plurality of probes in a probe magazine and couple one of the probes with an instrument probe receptacle. An actuator is coupled with the probe change unit, and the actuator is configured to move and align the probe change unit with the probe magazine and the instrument probe receptacle. In another example, the automated testing system includes a multiple degree of freedom stage for aligning a sample testing location with the instrument. The stage includes a sample stage and a stage actuator assembly including translational and rotational actuators.
Vodnick, David James; Dwivedi, Arpit; Keranen, Lucas Paul; Okerlund, Michael David; Schmitz, Roger William; Warren, Oden Lee; Young, Christopher David
2015-01-27
An automated testing system includes systems and methods to facilitate inline production testing of samples at a micro (multiple microns) or less scale with a mechanical testing instrument. In an example, the system includes a probe changing assembly for coupling and decoupling a probe of the instrument. The probe changing assembly includes a probe change unit configured to grasp one of a plurality of probes in a probe magazine and couple one of the probes with an instrument probe receptacle. An actuator is coupled with the probe change unit, and the actuator is configured to move and align the probe change unit with the probe magazine and the instrument probe receptacle. In another example, the automated testing system includes a multiple degree of freedom stage for aligning a sample testing location with the instrument. The stage includes a sample stage and a stage actuator assembly including translational and rotational actuators.
Vodnick, David James; Dwivedi, Arpit; Keranen, Lucas Paul; Okerlund, Michael David; Schmitz, Roger William; Warren, Oden Lee; Young, Christopher David
2015-02-24
An automated testing system includes systems and methods to facilitate inline production testing of samples at a micro (multiple microns) or less scale with a mechanical testing instrument. In an example, the system includes a probe changing assembly for coupling and decoupling a probe of the instrument. The probe changing assembly includes a probe change unit configured to grasp one of a plurality of probes in a probe magazine and couple one of the probes with an instrument probe receptacle. An actuator is coupled with the probe change unit, and the actuator is configured to move and align the probe change unit with the probe magazine and the instrument probe receptacle. In another example, the automated testing system includes a multiple degree of freedom stage for aligning a sample testing location with the instrument. The stage includes a sample stage and a stage actuator assembly including translational and rotational actuators.
NASA Astrophysics Data System (ADS)
Raven, Sara
2015-09-01
Background: Studies have shown that students' knowledge of osmosis and diffusion and the concepts associated with these processes is often inaccurate. This is important to address, as these concepts not only provide the foundation for more advanced topics in biology and chemistry, but are also threaded throughout both state and national science standards. Purpose: In this study, designed to determine the completeness and accuracy of three specific students' knowledge of molecule movement, concentration gradients, and equilibrium, I sought to address the following question: Using multiple evaluative methods, how can students' knowledge of molecule movement, concentration gradients, and equilibrium be characterized? Sample: This study focuses on data gathered from three students - Emma, Henry, and Riley - all of whom were gifted/honors ninth-grade biology students at a suburban high school in the southeast United States. Design and Methods: Using various qualitative data analysis techniques, I analyzed multiple sources of data from the three students, including multiple-choice test results, written free-response answers, think-aloud interview responses, and student drawings. Results: Results of the analysis showed that students maintained misconceptions about molecule movement, concentration gradients, and equilibrium. The conceptual knowledge students demonstrated differed depending on the assessment method, with the most distinct differences appearing on the multiple-choice versus the free-response questions, and in verbal versus written formats. Conclusions: Multiple levels of assessment may be required to obtain an accurate picture of content knowledge, as free-response and illustrative tasks made it difficult for students to conceal any misconceptions. 
Using a variety of assessment methods within a section of the curriculum can arguably help to provide a deeper understanding of student knowledge and learning, as well as illuminate misconceptions that might have remained unknown if only one assessment method had been used. Furthermore, beyond simply evaluating past learning, multiple assessment methods may aid in student comprehension of key concepts.
Networking Multiple Autonomous Air and Ocean Vehicles for Oceanographic Research and Monitoring
NASA Astrophysics Data System (ADS)
McGillivary, P. A.; Borges de Sousa, J.; Rajan, K.
2013-12-01
Autonomous underwater and surface vessels (AUVs and ASVs) are coming into wider use as components of oceanographic research, including ocean observing systems. Unmanned airborne vehicles (UAVs) are now available at modest cost, allowing multiple UAVs to be deployed with multiple AUVs and ASVs. For optimal use, good communication and coordination among vehicles is essential. We report on the use of multiple AUVs networked in communication with multiple UAVs. The UAVs are augmented by inferential reasoning software developed at MBARI that allows UAVs to recognize oceanographic fronts and change their navigation and control. This in turn allows UAVs to map frontal features automatically, as well as to direct AUVs and ASVs to proceed to such features and conduct sampling via onboard sensors to provide validation for airborne mapping. ASVs can also act as data nodes for communication between UAVs and AUVs, as well as collecting data from onboard sensors, while AUVs can sample the water column vertically. This allows more accurate estimation of phytoplankton biomass and productivity, and can be used in conjunction with UAV sampling to determine air-sea flux of gases (e.g. CO2, CH4, DMS) affecting carbon budgets and atmospheric composition. In particular, we describe tests conducted in July 2013 off Sesimbra, Portugal by the University of Porto and MBARI, in conjunction with the Portuguese Navy, with the goal of tracking large fish in the upper water column with coordinated air/surface/underwater measurements. A thermal gradient was observed in the infrared by a low-flying UAV, which was used to dispatch an AUV to obtain ground truth, demonstrating the event-response capabilities of such autonomous platforms. Additional field studies in the future will facilitate integration of multiple unmanned systems into research vessel operations.
The strength of hardware and software tools described in this study is to permit fundamental oceanographic measurements of both ocean and atmosphere over temporal and spatial scales that have previously been problematic. The methods demonstrated are particularly suited to the study of oceanographic fronts and for tracking and mapping oil spills or plankton blooms. With the networked coordination of multiple autonomous systems, individual components may be changed out while ocean observations continue, allowing coarse to fine spatial studies of hydrographic features over temporal dimensions that would otherwise be difficult, including diurnal and tidal periods. Constraints on these methods currently involve coordination of data archiving systems into shipboard operating systems, familiarization of oceanographers with these methods, and existing nearshore airspace use constraints on UAVs. An important outcome of these efforts is to understand the methodology for using multiple heterogeneous autonomous vehicles for targeted science exploration.
Garvey, M I; Bradley, C W; Jumaa, P
2016-06-01
Over the last decade, carbapenemase-producing organisms (CPOs) have spread worldwide, becoming a major public health concern. This article reports the authors' experience in dealing with a burns patient infected with CPOs, and the decontamination methods employed to render a burns shock room safe for re-use. The shock room was cleaned after being vacated, but environmental sampling cultured multiple CPOs. A second decontamination was undertaken comprising a detergent, steam and hypochlorite clean followed by hydrogen peroxide misting, and no CPOs were cultured after subsequent environmental sampling. A burns patient harbouring CPOs contaminates the surroundings heavily, so standard cleaning is insufficient to reduce the environmental bioburden. Copyright © 2016 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Specht, W.L.
Macroinvertebrate sampling was performed at 16 locations in Savannah River Site (SRS) streams using Hester-Dendy multiplate samplers and EPA Rapid Bioassessment Protocols (RBP). Some of the sampling locations were unimpacted, while others had been subject to various forms of perturbation by SRS activities. In general, the data from the Hester-Dendy multiplate samplers were more sensitive at detecting impacts than were the RBP data. We developed a Biotic Index for the Hester-Dendy data which incorporated eight community structure, function, and balance parameters. When tested using a data set unrelated to the one used in developing the Biotic Index, the index was very successful at detecting impact.
Sidler, Dominik; Schwaninger, Arthur; Riniker, Sereina
2016-10-21
In molecular dynamics (MD) simulations, free-energy differences are often calculated using free energy perturbation or thermodynamic integration (TI) methods. However, both techniques are only suited to calculate free-energy differences between two end states. Enveloping distribution sampling (EDS) presents an attractive alternative that allows multiple free-energy differences to be calculated in a single simulation. In EDS, a reference state is simulated which "envelopes" the end states. The challenge of this methodology is the determination of optimal reference-state parameters to ensure equal sampling of all end states. Currently, the automatic determination of the reference-state parameters for multiple end states is an unsolved issue that limits the application of the methodology. To resolve this, we have generalised the replica-exchange EDS (RE-EDS) approach, introduced by Lee et al. [J. Chem. Theory Comput. 10, 2738 (2014)] for constant-pH MD simulations. By exchanging configurations between replicas with different reference-state parameters, the complexity of the parameter-choice problem can be substantially reduced. A new robust scheme to estimate the reference-state parameters from a short initial RE-EDS simulation with default parameters was developed, which allowed the calculation of 36 free-energy differences between nine small-molecule inhibitors of phenylethanolamine N-methyltransferase from a single simulation. The resulting free-energy differences were in excellent agreement with values obtained previously by TI and two-state EDS simulations.
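The count of 36 differences follows directly from the nine end states: a single RE-EDS simulation yields one free energy per end state relative to the reference, from which all 9-choose-2 pairwise differences can be formed. A minimal sketch of that bookkeeping (the inhibitor names and free-energy values below are illustrative, not from the paper):

```python
from itertools import combinations

# Hypothetical free energies (kJ/mol) of nine end states relative to the
# EDS reference state; values are illustrative only.
G = {f"inhibitor_{i}": g for i, g in enumerate(
    [0.0, 1.3, -2.1, 0.8, 3.5, -0.4, 2.2, -1.7, 0.5], start=1)}

# Every pairwise free-energy difference follows from the one simulation:
# dG(i -> j) = G_j - G_i
pairs = {(a, b): G[b] - G[a] for a, b in combinations(G, 2)}
print(len(pairs))  # 9 choose 2 = 36 differences
```

This is why a single multi-state simulation replaces 36 separate two-state TI or EDS calculations.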
NASA Astrophysics Data System (ADS)
Grotti, Marco; Abelmoschi, Maria Luisa; Soggia, Francesco; Tiberiade, Christian; Frache, Roberto
2000-12-01
The multivariate effects of Na, K, Mg and Ca as nitrates on the electrothermal atomisation of manganese, cadmium and iron were studied by multiple linear regression modelling. Since the models proved to efficiently predict the effects of the considered matrix elements in a wide range of concentrations, they were applied to correct the interferences occurring in the determination of trace elements in seawater after pre-concentration of the analytes. In order to obtain a statistically significant number of samples, a large volume of the certified seawater reference materials CASS-3 and NASS-3 was treated with Chelex-100 resin; the chelating resin was then separated from the solution and divided into several sub-samples, each of which was eluted with nitric acid and analysed by electrothermal atomic absorption spectrometry (for trace element determinations) and inductively coupled plasma optical emission spectrometry (for matrix element determinations). To minimise any other systematic error besides that due to matrix effects, accuracy of the pre-concentration step and contamination levels of the procedure were checked by inductively coupled plasma mass spectrometric measurements. Analytical results obtained by applying the multiple linear regression models were compared with those obtained with other calibration methods, such as external calibration using acid-based standards, external calibration using matrix-matched standards and the analyte addition technique. The empirical models proved to efficiently reduce interferences occurring in the analysis of real samples, yielding better accuracy than the other calibration methods.
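The correction strategy described here, fitting a linear model of signal suppression against matrix-element concentrations and then rescaling the measured signal, can be sketched as follows. All coefficients, concentrations and the two-element model are illustrative assumptions, not the paper's values:

```python
# Minimal sketch of matrix-effect correction by multiple linear regression.
# Model (assumed): relative analyte signal = b0 + b1*[Na] + b2*[Mg]

def solve(A, y):
    """Solve the square system A x = y by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivoting
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_mlr(X, y):
    """Ordinary least squares via the normal equations X^T X b = X^T y."""
    Xd = [[1.0] + row for row in X]          # prepend intercept column
    k = len(Xd[0])
    XtX = [[sum(a[i] * a[j] for a in Xd) for j in range(k)] for i in range(k)]
    Xty = [sum(a[i] * yi for a, yi in zip(Xd, y)) for i in range(k)]
    return solve(XtX, Xty)

# Calibration standards: matrix concentrations (g/L) and relative signal,
# constructed to follow b = (1.00, -0.10, -0.05) exactly (illustrative).
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
y = [1.00, 0.90, 0.95, 0.85, 0.75]
b0, b1, b2 = fit_mlr(X, y)

# Correct a seawater measurement once its matrix composition is known
signal, na, mg = 0.62, 1.5, 0.8
corrected = signal / (b0 + b1 * na + b2 * mg)
```

The same idea extends to the four matrix elements (Na, K, Mg, Ca) by adding columns to the design matrix.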
Improvements to robotics-inspired conformational sampling in Rosetta.
Stein, Amelie; Kortemme, Tanja
2013-01-01
To accurately predict protein conformations in atomic detail, a computational method must be capable of sampling models sufficiently close to the native structure. All-atom sampling is difficult because of the vast number of possible conformations and extremely rugged energy landscapes. Here, we test three sampling strategies to address these difficulties: conformational diversification, intensification of torsion and omega-angle sampling and parameter annealing. We evaluate these strategies in the context of the robotics-based kinematic closure (KIC) method for local conformational sampling in Rosetta on an established benchmark set of 45 12-residue protein segments without regular secondary structure. We quantify performance as the fraction of sub-Angstrom models generated. While improvements with individual strategies are only modest, the combination of intensification and annealing strategies into a new "next-generation KIC" method yields a four-fold increase over standard KIC in the median percentage of sub-Angstrom models across the dataset. Such improvements enable progress on more difficult problems, as demonstrated on longer segments, several of which could not be accurately remodeled with previous methods. Given its improved sampling capability, next-generation KIC should allow advances in other applications such as local conformational remodeling of multiple segments simultaneously, flexible backbone sequence design, and development of more accurate energy functions.
Using multiple travel paths to estimate daily travel distance in arboreal, group-living primates.
Steel, Ruth Irene
2015-01-01
Primate field studies often estimate daily travel distance (DTD) in order to estimate energy expenditure and/or test foraging hypotheses. In group-living species, the center of mass (CM) method is traditionally used to measure DTD; a point is marked at the group's perceived center of mass at a set time interval or upon each move, and the distance between consecutive points is measured and summed. However, for groups using multiple travel paths, the CM method potentially creates a central path that is shorter than the individual paths and/or traverses unused areas. These problems may compromise tests of foraging hypotheses, since distance and energy expenditure could be underestimated. To better understand the magnitude of these potential biases, I designed and tested the multiple travel paths (MTP) method, in which DTD was calculated by recording all travel paths taken by the group's members, weighting each path's distance based on its proportional use by the group, and summing the weighted distances. To compare the MTP and CM methods, DTD was calculated using both methods in three groups of Udzungwa red colobus monkeys (Procolobus gordonorum; group size 30-43) for a random sample of 30 days between May 2009 and March 2010. Compared to the CM method, the MTP method provided significantly longer estimates of DTD that were more representative of the actual distance traveled and the areas used by a group. The MTP method is more time-intensive and requires multiple observers compared to the CM method. However, it provides greater accuracy for testing ecological and foraging models.
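The MTP calculation described above reduces to a use-weighted sum of path lengths. A minimal sketch with hypothetical path lengths and group counts (not field data):

```python
def mtp_distance(paths):
    """Daily travel distance for one bout under the MTP method.

    paths: list of (path_length_m, n_individuals_using_path) tuples.
    Each path's length is weighted by the fraction of the group using it.
    """
    total = sum(n for _, n in paths)
    return sum(length * n / total for length, n in paths)

# Three simultaneous travel paths through the canopy, group of 30
paths = [(120.0, 15), (150.0, 10), (90.0, 5)]
dtd = mtp_distance(paths)   # 120*(1/2) + 150*(1/3) + 90*(1/6) = 125.0 m
```

A center-of-mass point series through the same bout could yield a shorter (or spatially unused) track, which is the bias the MTP method is designed to avoid.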
Mean phase predictor for maximum a posteriori demodulator
NASA Technical Reports Server (NTRS)
Altes, Richard A. (Inventor)
1996-01-01
A system and method for optimal maximum a posteriori (MAP) demodulation using a novel mean phase predictor. The mean phase predictor conducts cumulative averaging over multiple blocks of phase samples to provide accurate prior mean phases, to be input into a MAP phase estimator.
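The patent abstract gives few algorithmic details, but cumulative averaging of phase over successive sample blocks can be sketched as a running circular mean: accumulating unit phasors handles phase wrapping (e.g. +π and -π average to ±π, not 0). The block contents and block sizes below are illustrative assumptions, not the patented algorithm:

```python
import cmath

def cumulative_mean_phase(blocks):
    """Yield the circular mean phase (radians) after each block of samples.

    Unit phasors exp(j*phase) are accumulated across all blocks seen so
    far, so the running mean is robust to phase wrap-around.
    """
    acc = 0j
    for block in blocks:
        acc += sum(cmath.exp(1j * p) for p in block)
        yield cmath.phase(acc)

# Three blocks of noisy phase samples clustered near 0.15 rad (illustrative)
blocks = [[0.1, 0.2, 0.15], [0.12, 0.18], [0.2, 0.1, 0.16]]
means = list(cumulative_mean_phase(blocks))
```

Each yielded value could serve as the prior mean phase fed to a MAP phase estimator for the next block.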
McAdoo, Mitchell A.; Kozar, Mark D.
2017-11-14
This report describes a compilation of existing water-quality data associated with groundwater resources originating from abandoned underground coal mines in West Virginia. Data were compiled from multiple sources for the purpose of understanding the suitability of groundwater from abandoned underground coal mines for public supply, industrial, agricultural, and other uses. This compilation includes data collected for multiple individual studies conducted from July 13, 1973 through September 7, 2016. Analytical methods varied by the time period of data collection and the requirements of the independent studies. This project identified 770 water-quality samples from 294 sites that could be attributed to abandoned underground coal mine aquifers originating from multiple coal seams in West Virginia.
Quantitation of Mycotoxins Using Direct Analysis in Real Time Mass Spectrometry (DART-MS).
Busman, Mark
2018-05-01
Ambient ionization represents a new generation of MS ion sources and is used for the rapid ionization of small molecules under ambient conditions. The combination of ambient ionization and MS allows the analysis of multiple food samples with simple or no sample treatment, or in conjunction with prevailing sample preparation methods. Two ambient ionization methods, desorptive electrospray ionization (DESI) and direct analysis in real time (DART), have been adapted for food safety applications. Both ionization techniques provide unique advantages and capabilities. DART has been used for a variety of qualitative and quantitative applications. In particular, mycotoxin contamination of food and feed materials has been addressed by DART-MS. Applications to mycotoxin analysis by ambient ionization MS, and particularly DART-MS, are summarized.
Analysis of suspended solids by single-particle scattering. [for Lake Superior pollution monitoring]
NASA Technical Reports Server (NTRS)
Diehl, S. R.; Smith, D. T.; Sydor, M.
1979-01-01
Light scattering by individual particulates is used in a multiple-detector system to categorize the composition of suspended solids in terms of broad particulate categories. The scattering signatures of red clay and taconite tailings, the two primary particulate contaminants in western Lake Superior, along with two types of asbestiform fibers, amphibole and chrysotile, were studied in detail. A method was developed to predict the concentration of asbestiform fibers in filtration plant samples for which electron microscope analysis was done concurrently. Fiber levels as low as 50,000 fibers/liter were optically detectable. The method has application in optical categorization of samples for remote sensing purposes and offers a fast, inexpensive means for analyzing water samples from filtration plants for specific particulate contaminants.
Advances in spectroscopic methods for quantifying soil carbon
Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco; Hively, W. Dean
2012-01-01
The current gold standard for soil carbon (C) determination is elemental C analysis using dry combustion. However, this method requires expensive consumables, is limited by the number of samples that can be processed (~100/d), and is restricted to the determination of total carbon. With increased interest in soil C sequestration, faster methods of analysis are needed, and there is growing interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared spectral ranges. These spectral methods can decrease analytical requirements and speed sample processing, be applied to large landscape areas using remote sensing imagery, and be used to predict multiple analytes simultaneously. However, the methods require localized calibrations to establish the relationship between spectral data and reference analytical data, and also have additional, specific problems. For example, remote sensing is capable of scanning entire watersheds for soil carbon content but is limited to the surface layer of tilled soils and may require difficult and extensive field sampling to obtain proper localized calibration reference values. The objective of this chapter is to discuss the present state of spectroscopic methods for determination of soil carbon.
Vecchione, Gennaro; Casetta, Bruno; Chiapparino, Antonella; Bertolino, Alessandro; Tomaiuolo, Michela; Cappucci, Filomena; Gatta, Raffaella; Margaglione, Maurizio; Grandone, Elvira
2012-01-01
A simple liquid chromatographic tandem mass spectrometry (LC-MS/MS) method has been developed for simultaneous analysis of 17 basic and one acidic psychotropic drugs in human plasma. The method relies on a protein precipitation step for sample preparation and offers high sensitivity and wide linearity, without interference from endogenous matrix components. Chromatography was run on a reversed-phase column with an acetonitrile-H₂O mixture. The quantification of target compounds was performed in multiple reaction monitoring (MRM) mode, switching the ionization polarity within the analytical run. A further sensitivity increase was obtained by implementing the "scheduled multiple reaction monitoring" (sMRM) functionality offered by the recent version of the software package managing the instrument. The overall injection interval was less than 5.5 min. Regression coefficients of the calibration curves and limits of quantification (LOQ) showed good coverage of the over-therapeutic, therapeutic and sub-therapeutic ranges. Recovery rates, measured as the percentage recovery from spiked plasma samples, were ≥ 94%. Precision and accuracy were satisfactory for a therapeutic drug monitoring (TDM) service managing plasma samples from patients receiving psycho-pharmacological treatment. Copyright © 2012 Elsevier B.V. All rights reserved.
Chromatographic analysis of tryptophan metabolites
Sadok, Ilona; Gamian, Andrzej
2017-01-01
The kynurenine pathway generates multiple tryptophan metabolites, called collectively kynurenines, and leads to formation of the enzyme cofactor nicotinamide adenine dinucleotide. The first step in this pathway is tryptophan degradation, initiated by the rate‐limiting enzymes indoleamine 2,3‐dioxygenase or tryptophan 2,3‐dioxygenase, depending on the tissue. Balanced kynurenine metabolism, which has been the subject of multiple studies in recent decades, plays an important role in several physiological and pathological conditions such as infections, autoimmunity, neurological disorders, cancer, cataracts, as well as pregnancy. Understanding the regulation of tryptophan depletion provides novel diagnostic and treatment opportunities; however, it requires reliable methods for quantification of kynurenines in biological samples with complex composition (body fluids, tissues, or cells). Trace concentrations, interference from sample components, and the instability of some tryptophan metabolites must be addressed by the analytical methods. Novel separation approaches and optimized extraction protocols help to overcome difficulties in analyzing kynurenines within complex tissue material. Recent developments in chromatography coupled with mass spectrometry provide new opportunities for quantification of tryptophan and its degradation products in various biological samples. In this review, we present current accomplishments in the chromatographic methodologies proposed for detection of tryptophan metabolites and provide a guide for choosing the optimal approach. PMID:28590049
Symptom profile of multiple chemical sensitivity in actual life.
Saito, Mariko; Kumano, Hiroaki; Yoshiuchi, Kazuhiro; Kokubo, Naomi; Ohashi, Kyoko; Yamamoto, Yoshiharu; Shinohara, Naohide; Yanagisawa, Yukio; Sakabe, Kou; Miyata, Mikio; Ishikawa, Satoshi; Kuboki, Tomifusa
2005-01-01
This study was conducted to confirm the definition of multiple chemical sensitivity (MCS) in actual life: that multiple symptoms are provoked in multiple organs by exposure to, and ameliorated by avoidance of, multiple chemicals at low levels. We used Ecological Momentary Assessment to monitor everyday symptoms, and active and passive sampling methods to measure environmental chemical exposure. Eighteen patients with MCS, diagnosed according to the 1999 consensus criteria, and 12 healthy controls participated in this study. Fourteen patients and 12 controls underwent 1-week measurement of physical and psychologic symptoms and of the levels of exposure to various chemicals. Linear mixed models were used to test the hypotheses regarding the symptom profile of MCS patients. Some causative chemicals were detected in 11 of 14 MCS patients. Two other patients did not report any hypersensitivity episodes, and passive sampling showed far less chemical exposure than in control subjects. Another subject reported episodic symptoms but was excluded from the following analyses because no possible causative chemical was detected. Eleven of the 17 physical symptoms and all four mood subscales examined were significantly aggravated in the interview based on "patient-initiated symptom prompts." On the other hand, there were no differences in physical symptoms or mood subscales between MCS patients and control subjects in the interview based on "random prompts." MCS patients do not have either somatic or psychologic symptoms under chemical-free conditions, and symptoms may be provoked only when they are exposed to chemicals.