Densitometry By Acoustic Levitation
NASA Technical Reports Server (NTRS)
Trinh, Eugene H.
1989-01-01
"Static" and "dynamic" methods developed for measuring mass density of acoustically levitated solid particle or liquid drop. "Static" method, unknown density of sample found by comparison with another sample of known density. "Dynamic" method practiced with or without gravitational field. Advantages over conventional density-measuring techniques: sample does not have to make contact with container or other solid surface, size and shape of samples do not affect measurement significantly, sound field does not have to be know in detail, and sample can be smaller than microliter. Detailed knowledge of acoustic field not necessary.
A novel microfluidic flow focusing method
Jiang, Hai; Weng, Xuan; Li, Dongqing
2014-01-01
A new microfluidic method that allows hydrodynamic focusing in a microchannel with two sheath flows is demonstrated. The microchannel network consists of a T-shaped main channel and two T-shaped branch channels. The flows of the sample stream and the sheath streams in the microchannel are generated by electroosmotic flow-induced pressure gradients. In comparison with other flow focusing methods, this novel method does not expose the sample to an electric field, and does not need any external pumps, tubing, or valves. PMID:25538810
WIPP waste characterization program sampling and analysis guidance manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-01-01
The Waste Isolation Pilot Plant (WIPP) Waste Characterization Program Sampling and Analysis Guidance Manual (Guidance Manual) provides a unified source of information on the sampling and analytical techniques that enable Department of Energy (DOE) facilities to comply with the requirements established in the current revision of the Quality Assurance Program Plan (QAPP) for the WIPP Experimental-Waste Characterization Program (the Program). This Guidance Manual includes all of the sampling and testing methodologies accepted by the WIPP Project Office (DOE/WPO) for use in implementing the Program requirements specified in the QAPP. This includes methods for characterizing representative samples of transuranic (TRU) wastes at DOE generator sites with respect to the gas generation controlling variables defined in the WIPP bin-scale and alcove test plans, as well as waste container headspace gas sampling and analytical procedures to support waste characterization requirements under the WIPP test program and the Resource Conservation and Recovery Act (RCRA). The procedures in this Guidance Manual are comprehensive and detailed and are designed to provide the necessary guidance for the preparation of site specific procedures. The use of these procedures is intended to provide the necessary sensitivity, specificity, precision, and comparability of analyses and test results. The solutions to achieving specific program objectives will depend upon facility constraints, compliance with DOE Orders and DOE facilities' operating contractor requirements, and the knowledge and experience of the TRU waste handlers and analysts. With some analytical methods, such as gas chromatography/mass spectrometry, the Guidance Manual procedures may be used directly. With other methods, such as nondestructive/destructive characterization, the Guidance Manual provides guidance rather than a step-by-step procedure.
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and... weigh ten pounds or less, or in any container where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated...
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and... weigh ten pounds or less, or in any container where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated...
Sampling from a Discrete Distribution While Preserving Monotonicity.
1982-02-01
in a table beforehand, this procedure, known as the inverse transform method, requires n storage spaces and E[X] comparisons on average, which may prove ... limitations that deserve attention: a. In general, the alias method does not preserve a monotone relationship between U and X as does the inverse transform method ... uses the inverse transform approach but with more information computed beforehand, as in the alias method. The proposed method is not new, having been
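The snippet above is truncated, but the inverse transform idea it describes is compact enough to sketch. Below is a minimal Python illustration with hypothetical names and an invented three-point distribution; the linear search costs on the order of E[X] comparisons, and the mapping from U to X is monotone, unlike the alias method:

```python
import numpy as np

def inverse_transform_sample(probs, rng):
    """Draw one value from a discrete distribution by inverse transform.

    The mapping from the uniform draw U to the outcome X is monotone,
    which the alias method does not preserve; the price is a linear
    search costing roughly E[X] comparisons on average.
    """
    u = rng.random()            # U ~ Uniform(0, 1)
    cdf = 0.0
    for x, p in enumerate(probs):
        cdf += p
        if u <= cdf:
            return x
    return len(probs) - 1       # guard against floating-point round-off

rng = np.random.default_rng(0)
draws = [inverse_transform_sample([0.2, 0.5, 0.3], rng) for _ in range(10_000)]
```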
Jiang, Wenyu; Simon, Richard
2007-12-20
This paper first provides a critical review of some existing methods for estimating the prediction error in classifying microarray data, where the number of genes greatly exceeds the number of specimens. Special attention is given to the bootstrap-related methods. When the sample size n is small, we find that all the reviewed methods suffer from either substantial bias or variability. We introduce a repeated leave-one-out bootstrap (RLOOB) method that predicts for each specimen in the sample using bootstrap learning sets of size ln. We then propose an adjusted bootstrap (ABS) method that fits a learning curve to the RLOOB estimates calculated with different bootstrap learning set sizes. The ABS method is robust across the situations we investigate and provides a slightly conservative estimate for the prediction error. Even with small samples, it does not suffer from the large upward bias of the leave-one-out bootstrap and the 0.632+ bootstrap, nor from the large variability of leave-one-out cross-validation in microarray applications. Copyright (c) 2007 John Wiley & Sons, Ltd.
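For intuition, here is a minimal numpy sketch of the leave-one-out bootstrap flavor of prediction-error estimation. This is not the authors' exact RLOOB/ABS procedure (no learning-set size ln, no fitted learning curve), and the 1-nearest-neighbour learner is a stand-in classifier:

```python
import numpy as np

def loo_bootstrap_error(X, y, fit_predict, B=100, rng=None):
    """Leave-one-out bootstrap estimate of classification error.

    Each specimen is scored only by models trained on bootstrap
    learning sets that exclude it, echoing the idea of predicting
    each specimen from bootstrap sets not containing it.
    """
    rng = rng or np.random.default_rng(0)
    n = len(y)
    err, cnt = np.zeros(n), np.zeros(n)
    for _ in range(B):
        idx = rng.integers(0, n, size=n)        # bootstrap learning set
        out = np.setdiff1d(np.arange(n), idx)   # specimens left out
        if out.size == 0:
            continue
        yhat = fit_predict(X[idx], y[idx], X[out])
        err[out] += (yhat != y[out])
        cnt[out] += 1
    ok = cnt > 0
    return float(np.mean(err[ok] / cnt[ok]))

def one_nn(X_train, y_train, X_test):
    """Tiny 1-nearest-neighbour learner used as a stand-in."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    return y_train[d2.argmin(axis=1)]
```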
An investigation on the intra-sample distribution of cotton color by using image analysis
USDA-ARS's Scientific Manuscript database
The colorimeter principle is widely used to measure cotton color. This method provides the sample's color grade, but the result does not include information about the color distribution or any variation within the sample. We conducted an investigation that used an image analysis method to study the ...
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation... where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated on the basis of count. (b) In all other...
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation... where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated on the basis of count. (b) In all other...
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation... where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated on the basis of count. (b) In all other...
On the enhanced sampling over energy barriers in molecular dynamics simulations.
Gao, Yi Qin; Yang, Lijiang
2006-09-21
We present here calculations of free energies of multidimensional systems using an efficient sampling method. The method uses a transformed potential energy surface, which allows an efficient sampling of both low and high energy spaces and accelerates transitions over barriers. It allows efficient sampling of the configuration space over and only over the desired energy range(s). It does not require predetermined or selected reaction coordinate(s). We apply this method to study the dynamics of slow barrier crossing processes in a disaccharide and a dipeptide system.
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs and is computationally efficient, although it pays a price in some loss of estimation efficiency. However, the method offers an alternative approach when the exact likelihood approach fails due to model complexity and a high-dimensional parameter space, and it can also serve as a way to obtain starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
Ducat, Giseli; Felsner, Maria L; da Costa Neto, Pedro R; Quináia, Sueli P
2015-06-15
Recently the use of brown sugar has increased due to its nutritional characteristics, thus requiring more rigorous quality control. The development of a method for water content analysis in soft brown sugar is carried out for the first time by TG/DTA with the application of different statistical tests. The results of the optimization study suggest that heating rates of 5°C min(-1) and an alumina sample holder improve the efficiency of the drying process. The validation study showed that thermogravimetry presents good accuracy and precision for water content analysis in soft brown sugar samples. This technique offers advantages over other analytical methods as it does not use toxic and costly reagents or solvents, it does not need any sample preparation, and it allows the identification of the temperature at which water is completely eliminated relative to other volatile degradation products. This is an important advantage over the official method (loss on drying). Copyright © 2015 Elsevier Ltd. All rights reserved.
DoE optimization of a mercury isotope ratio determination method for environmental studies.
Berni, Alex; Baschieri, Carlo; Covelli, Stefano; Emili, Andrea; Marchetti, Andrea; Manzini, Daniela; Berto, Daniela; Rampazzo, Federico
2016-05-15
By using the experimental design (DoE) technique, we optimized an analytical method for the determination of mercury isotope ratios by means of cold-vapor multicollector ICP-MS (CV-MC-ICP-MS) to provide absolute Hg isotopic ratio measurements with a suitable internal precision. By running 32 experiments, the influence of mercury and thallium internal standard concentrations, total measuring time and sample flow rate was evaluated. The method was optimized by varying the Hg concentration between 2 and 20 ng g(-1). The model identifies correlations among the parameters that affect measurement precision and predicts suitable sample measurement precision for Hg concentrations from 5 ng g(-1) upwards. The method was successfully applied to samples of Manila clams (Ruditapes philippinarum) coming from the Marano and Grado lagoon (NE Italy), a coastal environment affected by long-term mercury contamination mainly due to mining activity. Results show different extents of both mass dependent fractionation (MDF) and mass independent fractionation (MIF) phenomena in clams according to their size and sampling sites in the lagoon. The method is fit for determinations on real samples, allowing for the use of Hg isotopic ratios to study mercury biogeochemical cycles in complex ecosystems. Copyright © 2016 Elsevier B.V. All rights reserved.
Mounting Thin Samples For Electrical Measurements
NASA Technical Reports Server (NTRS)
Matus, L. G.; Summers, R. L.
1988-01-01
New method for mounting thin sample for electrical measurements involves use of vacuum chuck to hold a ceramic mounting plate, which holds sample. Contacts on mounting plate establish electrical connection to sample. Used to make electrical measurements over temperature range from 77 to 1,000 K and does not introduce distortions into the magnetic field during Hall measurements.
Székely, György; Henriques, Bruno; Gil, Marco; Alvarez, Carlos
2014-09-01
This paper discusses a design of experiments (DoE) assisted optimization and robustness testing of a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method development for the trace analysis of the potentially genotoxic 1,3-diisopropylurea (IPU) impurity in mometasone furoate glucocorticosteroid. Compared to the conventional trial-and-error method development, DoE is a cost-effective and systematic approach to system optimization by which the effects of multiple parameters and parameter interactions on a given response are considered. The LC and MS factors were studied simultaneously: flow (F), gradient (G), injection volume (Vinj), cone voltage (E(con)), and collision energy (E(col)). The optimization was carried out with respect to four responses: separation of peaks (Sep), peak area (A(p)), length of the analysis (T), and the signal-to-noise ratio (S/N). An optimization central composite face (CCF) DoE was conducted, leading to the early discovery of a carry-over effect which was further investigated in order to establish the maximum injectable sample load. A second DoE was conducted in order to obtain the optimal LC-MS/MS method. As part of the validation of the obtained method, its robustness was determined by conducting a fractional factorial resolution III DoE, wherein column temperature and quadrupole resolution were considered as additional factors. The method utilizes a common Phenomenex Gemini NX C-18 HPLC analytical column with electrospray ionization and a triple quadrupole mass detector in multiple reaction monitoring (MRM) mode, resulting in short analyses with a 10-min runtime. The high sensitivity and low limit of quantification (LOQ) were achieved by (1) MRM mode (instead of single ion monitoring) and (2) avoiding the drawbacks of derivatization (incomplete reaction and time-consuming sample preparation). Quantitatively, the DoE method development strategy resulted in robust trace analysis of IPU at 1.25 ng/mL absolute concentration, corresponding to a 0.25 ppm LOQ in 5 g/L mometasone furoate glucocorticosteroid. Validation was carried out in a linear range of 0.25-10 ppm and presented a relative standard deviation (RSD) of 1.08% for system precision. Regarding IPU recovery in mometasone furoate, spiked samples produced recoveries between 96 and 109% in the range of 0.25 to 2 ppm. Copyright © 2013 John Wiley & Sons, Ltd.
Hyun, Noorie; Gastwirth, Joseph L; Graubard, Barry I
2018-03-26
Originally, 2-stage group testing was developed for efficiently screening individuals for a disease. In response to the HIV/AIDS epidemic, 1-stage group testing was adopted for estimating prevalences of a single trait or multiple traits from testing groups of size q, so individuals were not tested. This paper extends the methodology of 1-stage group testing to surveys with sample-weighted complex multistage-cluster designs. Sample-weighted generalized estimating equations are used to estimate the prevalences of categorical traits while accounting for the error rates inherent in the tests. Two difficulties arise when using group testing in complex samples: (1) how does one weight the results of the test on each group, as the sample weights will differ among observations in the same group? Furthermore, if the sample weights are related to positivity of the diagnostic test, then group-level weighting is needed to reduce bias in the prevalence estimation; (2) how does one form groups that will allow accurate estimation of the standard errors of prevalence estimates under multistage-cluster sampling allowing for intracluster correlation of the test results? We study 5 different grouping methods to address the weighting and cluster sampling aspects of complex designed samples. Finite sample properties of the estimators of prevalences, variances, and confidence interval coverage for these grouping methods are studied using simulations. National Health and Nutrition Examination Survey data are used to illustrate the methods. Copyright © 2018 John Wiley & Sons, Ltd.
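For intuition, a simple unweighted version of prevalence estimation from 1-stage group testing can be sketched. This is the textbook estimator corrected for test error rates, not the paper's sample-weighted GEE approach, and the numbers are invented:

```python
import numpy as np

def group_testing_prevalence(group_results, q, sens=1.0, spec=1.0):
    """Estimate individual prevalence from 1-stage group test results.

    A group of size q is truly negative with probability (1 - p)^q.
    Observed group positivity is first corrected for test sensitivity
    and specificity, then that relation is inverted to recover p.
    """
    obs_pos = float(np.mean(group_results))
    true_pos = (obs_pos - (1.0 - spec)) / (sens - (1.0 - spec))
    true_pos = min(max(true_pos, 0.0), 1.0)
    return 1.0 - (1.0 - true_pos) ** (1.0 / q)

# Eight groups of five individuals each; three groups tested positive.
print(group_testing_prevalence([1, 0, 0, 1, 0, 0, 0, 1], q=5,
                               sens=0.98, spec=0.99))
```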
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plemons, R.E.; Hopwood, W.H. Jr.; Hamilton, J.H.
For a number of years the Oak Ridge Y-12 Plant Laboratory has been analyzing coal, predominantly for the utilities department of the Y-12 Plant. All laboratory procedures, except a Leco sulfur method which used the Leco Instruction Manual as a reference, were written based on the ASTM coal analyses. Sulfur is analyzed at the present time by two methods, gravimetric and Leco. The laboratory has two major endeavors for monitoring the quality of its coal analyses. (1) A control program by the Plant Statistical Quality Control Department. Quality Control submits one sample for every nine samples submitted by the utilities departments, and the laboratory analyzes a control sample along with the utilities samples. (2) An exchange program with the DOE Coal Analysis Laboratory in Bruceton, Pennsylvania. The Y-12 Laboratory submits to the DOE Coal Laboratory, on even-numbered months, a sample that Y-12 has analyzed. The DOE Coal Laboratory submits, on odd-numbered months, one of their analyzed samples to the Y-12 Plant Laboratory to be analyzed. The results of these control and exchange programs are monitored not only by laboratory personnel, but also by Statistical Quality Control personnel who provide statistical evaluations. After analysis and reporting of results, all utilities samples are retained by the laboratory until the coal contracts have been settled. The utilities departments have responsibility for the initiation and preparation of the coal samples. The samples normally received by the laboratory have been ground to 4-mesh, reduced to 0.5-gallon quantities, and sealed in air-tight containers. Sample identification numbers and a Request for Analysis are generated by the utilities departments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.
Here, we present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography–mass spectrometry methods. Here, we specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Then two-dimensional PMF can effectively do three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.
Enabling Advanced Wind-Tunnel Research Methods Using the NASA Langley 12-Foot Low Speed Tunnel
NASA Technical Reports Server (NTRS)
Busan, Ronald C.; Rothhaar, Paul M.; Croom, Mark A.; Murphy, Patrick C.; Grafton, Sue B.; O'Neal, Anthony W.
2014-01-01
Design of Experiment (DOE) testing methods were used to gather wind tunnel data characterizing the aerodynamic and propulsion forces and moments acting on a complex vehicle configuration with 10 motor-driven propellers, 9 control surfaces, a tilt wing, and a tilt tail. This paper describes the potential benefits and practical implications of using DOE methods for wind tunnel testing - with an emphasis on describing how it can affect model hardware, facility hardware, and software for control and data acquisition. With up to 23 independent variables (19 model and 2 tunnel) for some vehicle configurations, this recent test also provides an excellent example of using DOE methods to assess critical coupling effects in a reasonable timeframe for complex vehicle configurations. Results for an exploratory test using conventional angle-of-attack sweeps to assess aerodynamic hysteresis are summarized, and DOE results are presented for an exploratory test used to set the data sampling time for the overall test. DOE results are also shown for one production test characterizing normal force in the Cruise mode for the vehicle.
Sample Preparation of Corn Seed Tissue to Prevent Analyte Relocations for Mass Spectrometry Imaging
NASA Astrophysics Data System (ADS)
Kim, Shin Hye; Kim, Jeongkwon; Lee, Young Jin; Lee, Tae Geol; Yoon, Sohee
2017-08-01
Corn seed tissue sections were prepared by the tape support method using an adhesive tape, and mass spectrometry imaging (MSI) was performed. The effect of heat generated during sample preparation was investigated by time-of-flight secondary ion mass spectrometry (TOF-SIMS) imaging of corn seed tissue prepared by the tape support and the thaw-mounted methods. Unlike thaw-mounted sample preparation, the tape support method does not cause imaging distortion, because no heat is generated that could cause migration of the analytes on the sample. By applying the tape support method, the corn seed tissue was prepared without structural damage and MSI with accurate spatial information of analytes was successfully performed.
Coscollà, Clara; Navarro-Olivares, Santiago; Martí, Pedro; Yusà, Vicent
2014-02-01
When attempting to discover the important factors and then optimise a response by tuning those factors, experimental design (design of experiments, DoE) provides a powerful suite of statistical methodology: in method development, DoE identifies the significant factors and then optimises a response with respect to them. In this work, a headspace solid-phase micro-extraction (HS-SPME) method combined with gas chromatography tandem mass spectrometry (GC-MS/MS) for the simultaneous determination of six important organotin compounds, namely monobutyltin (MBT), dibutyltin (DBT), tributyltin (TBT), monophenyltin (MPhT), diphenyltin (DPhT) and triphenyltin (TPhT), has been optimized using a statistical design of experiments (DoE). The analytical method is based on ethylation with NaBEt4 and simultaneous headspace solid-phase micro-extraction of the derivative compounds, followed by GC-MS/MS analysis. The main experimental parameters influencing the extraction efficiency selected for optimization were pre-incubation time, incubation temperature, agitator speed, extraction time, desorption temperature, buffer (pH, concentration and volume), headspace volume, sample salinity, preparation of standards, ultrasonic time and desorption time in the injector. The main factors (excitation voltage, excitation time, ion source temperature, isolation time and electron energy) affecting the GC-IT-MS/MS response were also optimized using the same statistical design of experiments. The proposed method presented good linearity (coefficient of determination R(2)>0.99) and repeatability (1-25%) for all the compounds under study. The accuracy of the method, measured as the average percentage recovery of the compounds in spiked surface and marine waters, was higher than 70% for all compounds studied. Finally, the optimized methodology was applied to real aqueous samples, enabling the simultaneous determination of all compounds under study in surface and marine water samples obtained from the Valencia region (Spain). © 2013 Elsevier B.V. All rights reserved.
A method for reducing sampling jitter in digital control systems
NASA Technical Reports Server (NTRS)
Anderson, T. O.; Hurd, W. J.
1969-01-01
Digital phase lock loop system is designed by smoothing the proportional control with a low pass filter. This method does not significantly affect the loop dynamics when the smoothing filter bandwidth is wide compared to the loop bandwidth.
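A discrete-time sketch of the smoothing idea follows, assuming a first-order IIR low-pass filter; the coefficients and names are illustrative, not taken from the original report:

```python
def smooth_proportional(phase_errors, alpha=0.5, kp=1.0):
    """Low-pass filter the proportional control term of a digital PLL.

    y[k] = y[k-1] + alpha * (kp * e[k] - y[k-1]). A larger alpha gives
    a wider filter bandwidth; keeping that bandwidth wide relative to
    the loop bandwidth leaves the loop dynamics essentially unchanged
    while suppressing sample-to-sample jitter.
    """
    y, out = 0.0, []
    for e in phase_errors:
        y += alpha * (kp * e - y)
        out.append(y)
    return out

# Jittery phase-error samples in, smoothed proportional term out.
print(smooth_proportional([1.0, -0.8, 1.1, -0.9, 1.0]))
```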
Students' Entrepreneurial Self-Efficacy: Does the Teaching Method Matter?
ERIC Educational Resources Information Center
Abaho, Ernest; Olomi, Donath R.; Urassa, Goodluck Charles
2015-01-01
Purpose: The purpose of this paper is to examine the various entrepreneurship teaching methods in Uganda and how these methods relate to entrepreneurial self-efficacy (ESE). Design/methodology/approach: A sample of 522 final year students from selected universities and study programs was surveyed using self-reported questionnaires. Findings: There…
Method for Pre-Conditioning a Measured Surface Height Map for Model Validation
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density (PSD) model, such re-sampling does not introduce any aliasing and interpolation errors as is done by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem of this method is that the same pixel can take different values depending on which interpolation method is chosen, such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab. The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristic of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.
Database crime to crime match rate calculation.
Buckleton, John; Bright, Jo-Anne; Walsh, Simon J
2009-06-01
Guidance exists on how to count matches between samples in a crime sample database, but we are unable to locate a definition of how to estimate a match rate. We propose a method that does not proceed from the match-counting definition but which has a strong logic.
Analysis of drugs in human tissues by supercritical fluid extraction/immunoassay
NASA Astrophysics Data System (ADS)
Furton, Kenneth G.; Sabucedo, Alberta; Rein, Joseph; Hearn, W. L.
1997-02-01
A rapid, readily automated method has been developed for the quantitative analysis of phenobarbital from human liver tissues based on supercritical carbon dioxide extraction followed by fluorescence enzyme immunoassay. The method developed significantly reduces sample handling and utilizes the entire liver homogenate. The current method yields comparable recoveries and precision and does not require the use of an internal standard, although traditional GC/MS confirmation can still be performed on sample extracts. Additionally, the proposed method uses non-toxic, inexpensive carbon dioxide, thus eliminating the use of halogenated organic solvents.
METHOD DETECTION LIMITS AND NON-DETECTS IN THE WORLD OF MICROBIOLOGY
Examining indoor air for microorganisms is generally performed by sampling for viable microbes, growing them on sterile media, and counting the colony forming units. A negative result does not indicate that the source of the sample was free of fungi or bacteria, only that if pre...
NASA Astrophysics Data System (ADS)
Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.; Docherty, Kenneth S.; Jimenez, Jose L.
2016-11-01
We present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography-mass spectrometry methods. Here, we specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Then two-dimensional PMF can effectively do three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.
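A minimal numpy sketch of the binning step described in this record may be useful; the array shapes and names are assumptions, and the PMF factorization itself (typically run with an external PMF solver) is not shown:

```python
import numpy as np

def bin_chromatogram(spectra, rt, bin_width):
    """Sum mass spectra within evenly spaced retention-time bins.

    spectra : (n_scans, n_mz) mass spectra from one sample's chromatogram
    rt      : (n_scans,) retention times
    Returns an (n_bins, n_mz) array; flattening one such array per sample
    and stacking them row-wise gives the samples-by-(bins x m/z) input
    matrix described for positive matrix factorization.
    """
    edges = np.arange(rt.min(), rt.max() + bin_width, bin_width)
    n_bins = max(len(edges) - 1, 1)
    binned = np.zeros((n_bins, spectra.shape[1]))
    idx = np.clip(np.digitize(rt, edges) - 1, 0, n_bins - 1)
    for scan, b in enumerate(idx):
        binned[b] += spectra[scan]
    return binned

rng = np.random.default_rng(0)
demo = bin_chromatogram(rng.random((500, 64)), np.linspace(0.0, 25.0, 500), 0.5)
```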
Fluorescence analyzer for lignin
Berthold, John W.; Malito, Michael L.; Jeffers, Larry
1993-01-01
A method and apparatus for measuring lignin concentration in a sample of wood pulp or black liquor comprises a light emitting arrangement for emitting an excitation light through optical fiber bundles into a probe which has an undiluted sensing end facing the sample. The excitation light causes the lignin to produce fluorescent emission light, which is then conveyed through the probe to analyzing equipment that measures the intensity of the emission light. This invention was made with Government support under Contract Number DOE: DE-FC05-90CE40905 awarded by the Department of Energy (DOE). The Government has certain rights in this invention.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bank, Tracy L.; Roth, Elliot A.; Tinker, Phillip
2016-04-17
Inductively Coupled Plasma Mass Spectrometry (ICP-MS) is used to measure the concentrations of rare earth elements (REE) in certified standard reference materials including shale and coal. The instrument used in this study is a Perkin Elmer Nexion 300D ICP-MS. The goal of the study is to identify sample preparation and operating conditions that optimize the recovery of each element of concern. Additionally, the precision and accuracy of the technique are summarized and the drawbacks and limitations of the method are outlined.
Recording 2-D Nutation NQR Spectra by Random Sampling Method
Sinyavsky, Nikolaj; Jadzyn, Maciej; Ostafin, Michal; Nogaj, Boleslaw
2010-01-01
The method of random sampling was introduced for the first time in nutation nuclear quadrupole resonance (NQR) spectroscopy, where the nutation spectra show characteristic singularities in the form of shoulders. The analytic formulae for complex two-dimensional (2-D) nutation NQR spectra (I = 3/2) were obtained and the condition for resolving the spectral singularities for small values of an asymmetry parameter η was determined. Our results show that the method of random sampling of a nutation interferogram allows a significant reduction of the time required to perform a 2-D nutation experiment and does not worsen the spectral resolution. PMID:20949121
USDA-ARS's Scientific Manuscript database
To determine if Campylobacter isolation method influenced antimicrobial susceptibility results, the minimum inhibitory concentrations (MIC) of nine antimicrobials were compared for 291 pairs of Campylobacter isolates recovered from chicken carcass rinse samples using direct plating and an enrichment...
Caught Ya! A School-Based Practical Activity to Evaluate the Capture-Mark-Release-Recapture Method
ERIC Educational Resources Information Center
Kingsnorth, Crawford; Cruickshank, Chae; Paterson, David; Diston, Stephen
2017-01-01
The capture-mark-release-recapture method provides a simple way to estimate population size. However, when used as part of ecological sampling, this method does not easily allow an opportunity to evaluate the accuracy of the calculation because the actual population size is unknown. Here, we describe a method that can be used to measure the…
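The abstract is cut off, but the estimator underlying capture-mark-release-recapture is the standard Lincoln-Petersen calculation. Here is a short sketch using the common Chapman correction, with invented numbers:

```python
def lincoln_petersen(marked, caught, recaptured):
    """Chapman-corrected Lincoln-Petersen population size estimate.

    N_hat = (M + 1)(C + 1)/(R + 1) - 1, which stays finite even when
    no marked individuals are recaptured (R = 0).
    """
    M, C, R = marked, caught, recaptured
    return (M + 1) * (C + 1) / (R + 1) - 1

# Mark 50 individuals; later catch 40, of which 8 carry marks.
print(lincoln_petersen(50, 40, 8))  # about 231 individuals
```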
Synthesis and Characterization of Novel Compound Clusters
1997-08-26
also be intrinsically stable, they cannot be formed by this plasma chemistry presumably because the metals are less reactive. Plasma chemistry reactions ... samples without the presence of hydrogen. Vaporization of these composite samples produces the metal carbide clusters in many cases where plasma chemistry does ... antimony or bismuth cannot be produced by the hydrocarbon plasma chemistry method, but they are produced readily from composite sample (metal film on
Likhoded, V G; Kuleshova, N V; Sergieva, N V; Konev, Iu V; Trubnikova, I A; Sudzhian, E V
2007-01-01
A method for detecting Gram-negative bacterial endotoxins on the basis of their characteristic spectrum of electromagnetic radiation frequencies was developed. A frequency spectrum typical of the chemotype Re glycolipid, which is part of the lipopolysaccharides of the majority of Gram-negative bacteria, was used. Two devices--"Mini-Expert-DT" (manufactured by IMEDIS, Moscow) and "Bicom" (manufactured by Regumed, Germany)--were used as generators of electromagnetic radiation. Detection of endotoxin with these devices was performed by the electropuncture vegetative resonance test. An immunoenzyme reaction with antibodies to the chemotype Re glycolipid was used in the analysis of preparations to assess the specificity of the resonance-frequency method. The study showed that the resonance-frequency method can detect lipopolysaccharides of different enterobacteria in quantities down to 0.1 pg, as well as bacteria that contain lipopolysaccharides. At the same time, the method does not detect bacteria such as Staphylococcus aureus, Bifidobacterium spp., Lactobacillus spp., or Candida albicans. The method does not require preliminary processing of blood samples and can be used for diagnostics of endotoxinemia and detection of endotoxins in blood samples or injection solutions.
The U. S. Department of Energy SARP review training program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauck, C.J.
1988-01-01
In support of its radioactive material packaging certification program, the U.S. Department of Energy (DOE) has established a special training workshop. The purpose of the two-week workshop is to develop skills in reviewing Safety Analysis Reports for Packagings (SARPs) and performing confirmatory analyses. The workshop, conducted by the Lawrence Livermore National Laboratory (LLNL) for DOE, is divided into two parts: methods of review and methods of analysis. The sessions covering methods of review are based on the DOE document, ''Packaging Review Guide for Reviewing Safety Analysis Reports for Packagings'' (PRG). The sessions cover relevant DOE Orders and all areas of review in the applicable Nuclear Regulatory Commission (NRC) Regulatory Guides. The technical areas addressed include structural and thermal behavior, materials, shielding, criticality, and containment. The course sessions on methods of analysis provide hands-on experience in the use of calculational methods and codes for reviewing SARPs. Analytical techniques and computer codes are discussed and sample problems are worked. Homework is assigned each night and over the included weekend; at the conclusion, a comprehensive take-home examination is given requiring six to ten hours to complete.
Wang, Rong
2015-01-01
In real-world applications, the image of a face varies with illumination, facial expression, and pose, so more training samples can reveal more of the possible appearances of a face. Though minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generated mirror faces from the original training samples and combined the two kinds of samples into a new training set. The face recognition experiments show that our method does obtain high classification accuracy.
ICPP environmental monitoring report for CY-1996
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neff, J.K.
1997-06-01
Summarized in this report are the data collected through Environmental Monitoring programs conducted at the Idaho Chemical Processing Plant (ICPP) by the Environmental Affairs Department. This report is published in response to DOE Order 5400.1. The ICPP is responsible for complying with all applicable Federal, State, Local and DOE Rules, Regulations and Orders. Radiological effluent and emissions are regulated by the DOE in accordance with the Derived Concentration Guides (DCGs) as presented in DOE Order 5400.5. The State of Idaho regulates nonradiological waste resulting from the ICPP operations including airborne, liquid, and solid waste. Quality Assurance activities have resulted in the ICPP's implementation of the Environmental Protection Agency (EPA) rules and guidelines pertaining to the collection, analyses, and reporting of environmentally related samples. Where no EPA methods for analyses existed for radionuclides, Lockheed Martin Idaho Technologies Company (LMITCO) methods were used.
Tavčar, Eva; Turk, Erika; Kreft, Samo
2012-01-01
The most commonly used technique for water content determination is Karl-Fischer titration with electrometric detection, requiring specialized equipment. When appropriate equipment is not available, the method can be performed through visual detection of a titration endpoint, which does not enable an analysis of colored samples. Here, we developed a method with spectrophotometric detection of a titration endpoint, appropriate for moisture determination of colored samples. The reaction takes place in a sealed 4 ml cuvette. Detection is performed at 520 nm. Titration endpoint is determined from the graph of absorbance plotted against titration volume. The method has appropriate reproducibility (RSD = 4.3%), accuracy, and linearity (R² = 0.997). PMID:22567558
Open charcoal chamber method for mass measurements of radon exhalation rate from soil surface.
Tsapalov, Andrey; Kovler, Konstantin; Miklyaev, Peter
2016-08-01
Radon exhalation rate from the soil surface can serve as an important criterion in the evaluation of the radon hazard of land. The recently published international standard ISO 11665-7 (2012) is based on the accumulation of radon gas in a closed container. At the same time, since 1998 in Russia, as part of engineering and environmental studies for construction, radon flux measurements have been made using an open charcoal chamber for a sampling duration of 3-5 h. This method has a well-defined metrological justification and was tested in both favorable and unfavorable conditions. The article describes the characteristics of the method, as well as the means of sampling and measurement of the activity of the radon absorbed. The results of the metrological study suggest that regardless of the sampling conditions (weather, the mechanism and rate of radon transport in the soil, soil properties and conditions), the uncertainty of the method does not exceed 20%, while the combined standard uncertainty of the radon exhalation rate measured from the soil surface does not exceed 30%. The results of daily measurements of the radon exhalation rate from the soil surface at the experimental site during one year are reported. Copyright © 2016 Elsevier Ltd. All rights reserved.
Molecular dynamics coupled with a virtual system for effective conformational sampling.
Hayami, Tomonori; Kasahara, Kota; Nakamura, Haruki; Higo, Junichi
2018-07-15
An enhanced conformational sampling method is proposed: virtual-system coupled canonical molecular dynamics (VcMD). Although VcMD enhances sampling along a reaction coordinate, the method is free from estimation of a canonical distribution function along the reaction coordinate. The method introduces a virtual system that does not necessarily obey a physical law. To enhance sampling, the virtual system is coupled with the molecular system to be studied. The resultant snapshots produce a canonical ensemble. This method was applied to a system consisting of two short peptides in an explicit solvent. A conventional molecular dynamics simulation, ten times longer than the VcMD run, was performed along with adaptive umbrella sampling. Free-energy landscapes computed from the three simulations converged well with one another. VcMD provided quicker association/dissociation motions of the peptides than conventional molecular dynamics did. The VcMD method is applicable to various complicated systems because of its methodological simplicity. © 2018 Wiley Periodicals, Inc.
Rio Blanco, Colorado, Long-Term Hydrologic Monitoring Program Sampling and Analysis Results for 2009
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2009-12-21
The U.S. Department of Energy (DOE) Office of Legacy Management conducted annual sampling at the Rio Blanco, Colorado, Site, for the Long-Term Hydrologic Monitoring Program (LTHMP) on May 13 and 14, 2009. Samples were analyzed by the U.S. Environmental Protection Agency (EPA) Radiation & Indoor Environments National Laboratory in Las Vegas, Nevada. Samples were analyzed for gamma-emitting radionuclides by high-resolution gamma spectroscopy and tritium using the conventional and enriched methods.
NASA Astrophysics Data System (ADS)
Nasir, N. F.; Mirus, M. F.; Ismail, M.
2017-09-01
Crude glycerol produced from the transesterification reaction has limited usage if it does not undergo a purification process, as it contains excess methanol, catalyst and soap. Conventionally, purification of the crude glycerol involves high cost and complex processes. This study aimed to determine the effects of two different purification methods: a direct method (comprising ion exchange and methanol removal steps) and a multistep method (comprising neutralization, filtration, ion exchange and methanol removal steps). Two crude glycerol samples were investigated: a sample self-produced through the transesterification of palm oil, and a sample obtained from a biodiesel plant. Samples were analysed using Fourier Transform Infrared Spectroscopy, Gas Chromatography and High Performance Liquid Chromatography. For both samples, the results after purification showed that pure glycerol was successfully produced and fatty acid salts were eliminated. The results also indicated the absence of methanol in both samples after the purification process. In short, the combination of the four purification steps contributed to higher-quality glycerol: the multistep method gave a better result than the direct method, as the neutralization and filtration steps helped remove most of the excess salt, fatty acid and catalyst.
Measurement of infrared optical constants with visible photons
NASA Astrophysics Data System (ADS)
Paterova, Anna; Yang, Hongzhi; An, Chengwu; Kalashnikov, Dmitry; Krivitsky, Leonid
2018-04-01
We demonstrate a new scheme for infrared spectroscopy with visible light sources and detectors. The technique relies on the nonlinear interference of correlated photons, produced via spontaneous parametric down conversion in a nonlinear crystal. Visible and infrared photons are split into two paths and the infrared photons interact with the sample under study. The photons are reflected back to the crystal, resembling a conventional Michelson interferometer. Interference of the visible photons is observed and it is dependent on the phases of all three interacting photons: pump, visible and infrared. The transmission coefficient and the refractive index of the sample in the infrared range can be inferred from the interference pattern of visible photons. The method does not require the use of potentially expensive and inefficient infrared detectors and sources, it can be applied to a broad variety of samples, and it does not require a priori knowledge of sample properties in the visible range.
2017-09-28
In forensic DNA analysis, the interpretation of a sample acquired from the environment may be dependent upon the assumption made about the number of individuals from which the evidence arose. Degraded and ... NOCIt results to those obtained when allele counting or maximum likelihood estimator (MLE) methods are employed. NOCIt does not depend upon an AT and
Attenuation correction factors for cylindrical, disc and box geometry
NASA Astrophysics Data System (ADS)
Agarwal, Chhavi; Poi, Sanhita; Mhatre, Amol; Goswami, A.; Gathibandhe, M.
2009-08-01
In the present study, attenuation correction factors have been experimentally determined for samples having cylindrical, disc and box geometries and compared with the attenuation correction factors calculated by the Hybrid Monte Carlo (HMC) method [C. Agarwal, S. Poi, A. Goswami, M. Gathibandhe, R.A. Agrawal, Nucl. Instr. and Meth. A 597 (2008) 198] and with the near-field and far-field formulations available in the literature. It has been observed that the near-field formulae, although said to be applicable at close sample-detector geometry, do not work at very close sample-detector configurations. The advantage of the HMC method is that it is found to be valid for all sample-detector geometries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan Benton
This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (the Law of Large Numbers, the Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
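A minimal sketch of the lecture's opening example, estimating π by uniform sampling, follows; the code is an illustration of the outline above, not material from the slides:

```python
import numpy as np

def estimate_pi(n, seed=0):
    """Estimate pi by sampling n points uniformly in the unit square.

    The fraction of points inside the quarter circle converges to pi/4
    by the Law of Large Numbers; the Central Limit Theorem gives the
    familiar ~1/sqrt(n) error bar.
    """
    rng = np.random.default_rng(seed)
    xy = rng.random((n, 2))
    inside = (xy ** 2).sum(axis=1) <= 1.0
    return 4.0 * inside.mean()

print(estimate_pi(1_000_000))  # ~3.1416, within about +/- 0.002
```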
NASA Astrophysics Data System (ADS)
Fatima Khattak, Khanzadi; James Simpson, Thomas
2010-04-01
The efficacy of gamma irradiation as a method of decontamination for food and herbal materials is well established. In the present study, Glycyrrhiza glabra roots were irradiated at doses of 5, 10, 15, 20 and 25 kGy in a cobalt-60 irradiator. The irradiated and un-irradiated control samples were evaluated for phenolic contents, antimicrobial activities and DPPH scavenging properties. The results of the present study showed that radiation treatment up to 20 kGy does not affect the antifungal and antibacterial activity of the plant, while the sample irradiated at 25 kGy did show changes in antibacterial activity against some selected pathogens. No significant differences in the phenolic contents were observed for control and samples irradiated at 5, 10 and 15 kGy radiation doses. However, phenolic contents increased in samples treated with 20 and 25 kGy doses. The DPPH scavenging activity significantly (p<0.05) increased in all irradiated samples of the plant.
Biopolymers for sample collection, protection, and preservation.
Sorokulova, Iryna; Olsen, Eric; Vodyanoy, Vitaly
2015-07-01
One of the principal challenges in the collection of biological samples from air, water, and soil matrices is that the target agents are not stable enough to be transferred from the collection point to the laboratory of choice without experiencing significant degradation and loss of viability. At present, there is no method to transport biological samples over considerable distances safely, efficiently, and cost-effectively without the use of ice or refrigeration. Current techniques of protection and preservation of biological materials have serious drawbacks. Many known techniques of preservation cause structural damage, so that biological materials lose their structural integrity and viability. We review applications of a novel bacterial preservation process, which is nontoxic and water soluble and allows for the storage of samples without refrigeration. The method is capable of protecting the biological sample from the effects of the environment for extended periods of time and then allows for the easy release of the collected biological materials from the protective medium without structural or DNA damage. Strategies for sample collection, preservation, and shipment of bacterial and viral samples are described. The water-soluble polymer is used to immobilize the biological material by replacing the water molecules within the sample with molecules of the biopolymer. The cured polymer results in a solid protective film that is stable to many organic solvents, but quickly removed by the application of a water-based solution. The process of immobilization does not require the use of any additives, accelerators, or plastifiers and does not involve high temperature or radiation to promote polymerization.
Plasma heating for containerless and microgravity materials processing
NASA Technical Reports Server (NTRS)
Leung, Emily W. (Inventor); Man, Kin F. (Inventor)
1994-01-01
A method for plasma heating of levitated samples to be used in containerless microgravity processing is disclosed. A sample is levitated by electrostatic, electromagnetic, aerodynamic, or acoustic systems, as is appropriate for the physical properties of the particular sample. The sample is heated by a plasma torch at atmospheric pressure. A ground plate is provided to help direct the plasma towards the sample. In addition, Helmholtz coils are provided to produce a magnetic field that can be used to spiral the plasma around the sample. The plasma heating system is oriented such that it does not interfere with the levitation system.
Exact tests using two correlated binomial variables in contemporary cancer clinical trials.
Yu, Jihnhee; Kepner, James L; Iyer, Renuka
2009-12-01
New therapy strategies for the treatment of cancer are rapidly emerging because of recent technology advances in genetics and molecular biology. Although newer targeted therapies can improve survival without measurable changes in tumor size, clinical trial conduct has remained nearly unchanged. When potentially efficacious therapies are tested, current clinical trial design and analysis methods may not be suitable for detecting therapeutic effects. We propose an exact method for testing cytostatic cancer treatments that uses correlated bivariate binomial random variables to simultaneously assess two primary outcomes. The method is easy to implement. It does not increase the sample size over that of the univariate exact test and in most cases reduces the sample size required. Sample size calculations are provided for selected designs.
Jacobs, Jon M.; Burnum-Johnson, Kristin E.; Baker, Erin M.; Smith, Richard D.; Gritsenko, Marina A.; Orton, Daniel
2017-05-16
Methods and systems for diagnosing or prognosing liver fibrosis in a subject are provided. In some examples, such methods and systems can include detecting liver fibrosis-related molecules in a sample obtained from the subject, comparing expression of the molecules in the sample to controls representing expression values expected in a subject who does not have liver fibrosis or who has non-progressing fibrosis, and diagnosing or prognosing liver fibrosis in the subject when differential expression of the molecules between the sample and the controls is detected. Kits for the diagnosis or prognosis of liver fibrosis in a subject are also provided which include reagents for detecting liver fibrosis related molecules.
Method for removal of phosgene from boron trichloride. [DOE patent application; mercury arc lamp
Freund, S.M.
1981-09-03
Selective ultraviolet photolysis using an unfiltered mercury arc lamp has been used to substantially reduce the phosgene impurity in a mixture of boron trichloride and phosgene. Infrared spectrophotometric analysis of the sample before and after irradiation shows that it is possible to highly purify commercially available boron trichloride with this method.
ERIC Educational Resources Information Center
Sevier, Carol; Chyung, Seung Youn; Callahan, Janet; Schrader, Cheryl B.
2012-01-01
A quasi-experimental study was conducted to investigate the effectiveness of using a service learning (SL) method on influencing introductory engineering students' motivation and ABET program outcomes, compared to the effectiveness of using a conventional, non-service-learning (NSL) method. The sample used in the study was 214 students enrolled in…
Kalivas, John H; Georgiou, Constantinos A; Moira, Marianna; Tsafaras, Ilias; Petrakis, Eleftherios A; Mousdis, George A
2014-04-01
Quantitative analysis of food adulterants is an important health and economic issue that needs to be fast and simple. Spectroscopy has significantly reduced analysis time; however, it still requires the preparation of analyte calibration samples matrix-matched to the prediction samples, which can be laborious and costly. Reported in this paper is the application of a newly developed pure component Tikhonov regularization (PCTR) process that does not require laboratory-prepared calibration samples or reference analysis methods and hence is a greener calibration method. The PCTR method requires an analyte pure component spectrum and non-analyte spectra. As a food analysis example, synchronous fluorescence spectra of extra virgin olive oil samples adulterated with sunflower oil are used. Results are shown to be better than those obtained using ridge regression with reference calibration samples. The flexibility of PCTR allows the inclusion of reference samples and is generic for use with other instrumental methods and food products. Copyright © 2013 Elsevier Ltd. All rights reserved.
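The essence of a pure-component Tikhonov calibration can be sketched as a stacked least-squares problem: the regression vector is asked to respond with unit signal to the analyte's pure spectrum, to annihilate the non-analyte spectra, and to stay small. The following is a minimal sketch of that general idea with synthetic spectra and arbitrary weights, not the published PCTR algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: k = pure analyte spectrum, N = non-analyte spectra (rows)
wavelengths = 200
k = np.exp(-0.5 * ((np.arange(wavelengths) - 80) / 10) ** 2)   # analyte band
N = rng.normal(scale=0.05, size=(15, wavelengths)) + 0.3       # background spectra

lam, eta = 0.1, 1.0   # regularization weights (would be tuned in practice)

# Stack the three objectives: k.T b ~ 1, N b ~ 0, and ||b|| small
A = np.vstack([k[None, :], eta * N, lam * np.eye(wavelengths)])
y = np.concatenate([[1.0], np.zeros(N.shape[0]), np.zeros(wavelengths)])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

# Apply the calibration vector to a new, synthetic mixture spectrum
x_new = 0.7 * k + 0.3 + rng.normal(scale=0.05, size=wavelengths)
print("estimated analyte signal:", x_new @ b)
```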
Confidence limit calculation for antidotal potency ratio derived from lethal dose 50
Manage, Ananda; Petrikovics, Ilona
2013-01-01
AIM: To describe confidence interval calculation for antidotal potency ratios using the bootstrap method. METHODS: The nonparametric bootstrap method, invented by Efron, can easily be adapted to construct confidence intervals in situations like this. The bootstrap method is a resampling method in which the bootstrap samples are obtained by resampling from the original sample. RESULTS: The described confidence interval calculation using the bootstrap method does not require knowledge of the sampling distribution of the antidotal potency ratio. This can serve as a substantial help for toxicologists, who are directed to employ the Dixon up-and-down method with a lower number of animals to determine lethal dose 50 values for characterizing the investigated toxic molecules and eventually for characterizing the antidotal protection offered by the test antidotal systems. CONCLUSION: The described method can serve as a useful tool in various other applications. The simplicity of the method makes the calculation easy to perform in most programming software packages. PMID:25237618
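As a minimal sketch of the described procedure, the ratio of two group LD50 estimates can be bootstrapped by resampling each group independently and taking percentiles of the resampled ratios; all values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical LD50 estimates for antidote-treated and control groups
ld50_antidote = np.array([12.1, 13.4, 11.8, 12.9, 13.1, 12.5])
ld50_control  = np.array([ 5.2,  4.8,  5.5,  5.1,  4.9,  5.3])

def apr(a, c):
    # antidotal potency ratio: mean LD50 with antidote / mean LD50 without
    return a.mean() / c.mean()

boot = np.array([
    apr(rng.choice(ld50_antidote, size=ld50_antidote.size, replace=True),
        rng.choice(ld50_control,  size=ld50_control.size,  replace=True))
    for _ in range(10_000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"APR = {apr(ld50_antidote, ld50_control):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```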
Study on the application of Raman spectroscopy on detecting water hardness.
Yang, Chang-Hu; Shi, Xiang-Hua; Yuan, Jian-Hui
2014-05-01
The laser Raman spectroscopy method was used to study the hardness index of four water samples. The ratio of the bending vibration peak intensity to the stretching vibration peak intensity of these water samples was measured. The results showed that as the total hardness of the water decreases, so does the ratio. This offers a possible new approach to water quality analysis that is both simple and effective.
NASA Astrophysics Data System (ADS)
Kelson, Julia R.; Huntington, Katharine W.; Schauer, Andrew J.; Saenger, Casey; Lechler, Alex R.
2017-01-01
Carbonate clumped isotope (Δ47) thermometry has been applied to a wide range of problems in earth, ocean and biological sciences over the last decade, but is still plagued by discrepancies among empirical calibrations that show a range of Δ47-temperature sensitivities. The most commonly suggested causes of these discrepancies are the method of mineral precipitation and analytical differences, including the temperature of phosphoric acid used to digest carbonates. However, these mechanisms have yet to be tested in a consistent analytical setting, which makes it difficult to isolate the cause(s) of discrepancies and to evaluate which synthetic calibration is most appropriate for natural samples. Here, we systematically explore the impact of synthetic carbonate precipitation by replicating precipitation experiments of previous workers under a constant analytical setting. We (1) precipitate 56 synthetic carbonates at temperatures of 4-85 °C using different procedures to degas CO2, with and without the use of the enzyme carbonic anhydrase (CA) to promote rapid dissolved inorganic carbon (DIC) equilibration; (2) digest samples in phosphoric acid at both 90 °C and 25 °C; (3) hold constant all analytical methods including acid preparation, CO2 purification, and mass spectrometry; and (4) reduce our data with 17O corrections that are appropriate for our samples. We find that the CO2 degassing method does not influence Δ47 values of these synthetic carbonates, and therefore probably only influences natural samples with very rapid degassing rates, like speleothems that precipitate out of drip solution with high pCO2. CA in solution does not influence Δ47 values in this work, suggesting that disequilibrium in the DIC pool is negligible. We also find the Δ47 values of samples reacted in 25 and 90 °C acid are within error of each other (once corrected with a constant acid fractionation factor). Taken together, our results show that the Δ47-temperature relationship does not measurably change with either the precipitation methods used in this study or acid digestion temperature. This leaves phosphoric acid preparation, CO2 gas purification, and/or data reduction methods as the possible sources of the discrepancy among published calibrations. In particular, the use of appropriate 17O corrections has the potential to reduce disagreement among calibrations. Our study nearly doubles the available synthetic carbonate calibration data for Δ47 thermometry (adding 56 samples to the 74 previously published samples). This large population size creates a robust calibration that enables us to examine the potential for calibration slope aliasing due to small sample size. The similarity of Δ47 values among carbonates precipitated under such diverse conditions suggests that many natural samples grown at 4-85 °C in moderate pH conditions (6-10) may also be described by our Δ47-temperature relationship.
Bauer, Amy E; Hubbard, Kirk R A; Johnson, April J; Messick, Joanne B; Weng, Hsin-Yi; Pogranichniy, Roman M
2016-04-01
Coxiella burnetii is the etiologic agent of the zoonotic disease Q fever and is considered to be endemic in domestic ruminants. Small ruminants in particular are important reservoirs for human infection. Serologic and molecular methods are both available for diagnosis of infection with C. burnetii, but there has been little research evaluating the prevalence of this organism in small ruminants outside of the context of clinical disease outbreaks. The objectives of this study were to estimate seroprevalence of C. burnetii and the prevalence of shedding of C. burnetii DNA in milk by goats in Indiana, USA, to evaluate potential risk factors for association with C. burnetii exposure and shedding, and to assess the level of agreement between the enzyme-linked immunosorbent assay (ELISA) and real-time polymerase chain reaction (PCR) tests used to estimate prevalence. A total of 649 does over 1 year of age and not pregnant at the time of sampling were included in the study. Serum samples were collected from 608 does representing 89 farms. Milk samples were collected from 387 does representing 85 farms. Both milk and serum samples were collected from 356 does representing 80 farms. The estimated individual seroprevalence and shedding prevalence in milk adjusted for clustering were 3.1% (n=23/608, 95% CI: 1.2-7.0%) and 2.5% (n=9/387, 95% CI: 1.0-5.6%) respectively. Estimated adjusted herd level C. burnetii seroprevalence and herd level shedding prevalence were 11.5% (n=10/89, 95% CI: 6.4-20.1%) and 7.0% (n=6/85, 95% CI: 3.3-14.6%) respectively. Based on a generalized estimating equation model (GEE), meat breeds of goat had 7.0 times higher odds of shedding C. burnetii DNA in milk samples as compared to dairy breeds. Agreement between tests as determined by Cohen's kappa was poor at both the individual (kappa=0.04, 95% CI: -0.1 to 0.2) and herd (kappa=0.2, 95% CI: -0.1 to 0.5) levels. This indicates that serologic screening alone is unlikely to prevent the introduction of does shedding C. burnetii into herds. Copyright © 2016 Elsevier B.V. All rights reserved.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Metal Halide Lamp Ballasts and Fixtures Energy Conservation Standards § 431.329 Enforcement. Process for Metal Halide Lamp Ballasts. This section sets forth procedures DOE will follow in pursuing alleged... with the following statistical sampling procedures for metal halide lamp ballasts, with the methods...
Design optimization of hydraulic turbine draft tube based on CFD and DOE method
NASA Astrophysics Data System (ADS)
Nam, Mun chol; Dechun, Ba; Xiangji, Yue; Mingri, Jin
2018-03-01
In order to improve the performance of the hydraulic turbine draft tube during the design process, the draft tube is optimized on a multidisciplinary collaborative design optimization platform that combines computational fluid dynamics (CFD) with design of experiments (DOE). The geometrical design variables are the median section of the draft tube and the cross section of its exit diffuser, and the objective function is to maximize the pressure recovery factor (Cp). The sample matrices required for the shape optimization of the draft tube are generated by the optimal Latin hypercube (OLH) method of the DOE technique, and their performances are evaluated through CFD numerical simulation. Subsequently, the main effect analysis and the sensitivity analysis of the geometrical parameters of the draft tube are carried out. The design optimization of the geometrical variables is then determined using the response surface method. The optimized draft tube shows a marked performance improvement over the original.
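A space-filling design of this kind can be generated in a few lines of Python. The sketch below uses SciPy's plain Latin hypercube sampler (the optimal-LH variant used in the paper adds a space-filling optimization step), and the variable names and bounds are illustrative assumptions.

```python
from scipy.stats import qmc

# Two hypothetical design variables for the draft-tube shape (names assumed):
# median-section height and exit-diffuser width, with illustrative bounds.
sampler = qmc.LatinHypercube(d=2, seed=1)
unit_samples = sampler.random(n=20)          # 20 CFD cases in [0, 1]^2
lower, upper = [0.8, 1.0], [1.2, 1.6]        # bounds in meters (assumed)
design = qmc.scale(unit_samples, lower, upper)

for i, (h, w) in enumerate(design):
    print(f"case {i:02d}: median height = {h:.3f} m, diffuser width = {w:.3f} m")
```

Each row of the design then becomes one CFD run whose Cp value feeds the response surface fit.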
Improved lossless intra coding for H.264/MPEG-4 AVC.
Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J
2006-09-01
A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
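The core idea of samplewise DPCM, predicting each sample from its immediately preceding neighbour rather than only from block boundaries, can be sketched as follows for simple horizontal prediction. This is a toy illustration, not the mode-dependent predictor of the H.264/AVC enhancement.

```python
import numpy as np

def dpcm_encode(block):
    """Horizontal samplewise DPCM: each sample is predicted by its left
    neighbour; the first column is predicted from 128 (mid-gray)."""
    pred = np.empty_like(block)
    pred[:, 0] = 128
    pred[:, 1:] = block[:, :-1]
    return block - pred              # residuals to be entropy-coded

def dpcm_decode(residual):
    block = np.empty_like(residual)
    block[:, 0] = residual[:, 0] + 128
    for j in range(1, residual.shape[1]):   # the sequential dependency
        block[:, j] = block[:, j - 1] + residual[:, j]
    return block

block = np.random.default_rng(0).integers(0, 256, size=(4, 4), dtype=np.int32)
assert np.array_equal(dpcm_decode(dpcm_encode(block)), block)  # lossless round trip
```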
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paller, M.; Knox, A.; Kuhne, W.
2015-10-15
DOE sites conduct traditional environmental monitoring programs that require collecting, processing, and analyzing water, sediment, and fish samples. However, recently developed passive sampling technologies, such as Diffusive Gradient in Thin films (DGT), may measure the chemical phases that are available and toxic to organisms (the bioavailable fraction), thereby producing more accurate and economical results than traditional methods. Our laboratory study showed that dissolved copper concentrations measured by DGT probes were strongly correlated with the uptake of copper by Lumbriculus variegatus, an aquatic worm, and with concentrations of copper measured by conventional methods. Dissolved copper concentrations in DGT probes increased with time of exposure, paralleling the increase in copper with time that occurred in Lumbriculus. Additional studies with a combination of seven dissolved metals showed similar results. These findings support the use of DGT as a biomimetic monitoring tool and provide a basis for refinement of these methods for cost-effective environmental monitoring at DOE sites.
A simulation study on Bayesian Ridge regression models for several collinearity levels
NASA Astrophysics Data System (ADS)
Efendi, Achmad; Effrihan
2017-12-01
When analyzing data with a multiple regression model, if collinearity is present, one or several predictor variables are usually omitted from the model. Sometimes, however, for medical or economic reasons for instance, all the predictors are important and should be included in the model. Ridge regression is commonly used to cope with such collinearity; in this model, weights for the predictor variables are used in estimating the parameters. Estimation can follow the likelihood approach, and nowadays a Bayesian version offers an alternative. The Bayesian method has not matched the likelihood approach in popularity because of difficulties such as its computational demands, but with the recent improvement of computational methodology this caveat should no longer be a problem. This paper discusses a simulation study for evaluating the characteristics of Bayesian ridge regression parameter estimates. Several simulation settings are considered, covering a range of collinearity levels and sample sizes. The results show that the Bayesian method gives better performance for relatively small sample sizes, while for the other settings it performs similarly to the likelihood method.
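For a flavor of the comparison, scikit-learn's BayesianRidge can be run against ordinary least squares on synthetic collinear data with a small sample; this sketch is illustrative and does not reproduce the paper's simulation design.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, LinearRegression

rng = np.random.default_rng(0)
n, p = 30, 4                                   # small sample, strong collinearity
z = rng.normal(size=n)
X = np.column_stack([z + rng.normal(scale=0.1, size=n) for _ in range(p)])
y = X @ np.array([1.0, 0.5, -0.5, 2.0]) + rng.normal(scale=0.5, size=n)

# Bayesian ridge shrinks the wildly unstable OLS coefficients
for model in (LinearRegression(), BayesianRidge()):
    coef = model.fit(X, y).coef_
    print(type(model).__name__, np.round(coef, 2))
```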
Mirante, Clara; Clemente, Isabel; Zambu, Graciette; Alexandre, Catarina; Ganga, Teresa; Mayer, Carlos; Brito, Miguel
2016-09-01
Helminth intestinal parasitoses are responsible for high levels of child mortality and morbidity. Hence, the capacity to diagnose these parasitoses and consequently ensure due treatment is of great importance. The main objective of this study was to compare two concentration methods, Parasitrap and Kato-Katz, for the diagnosis of intestinal parasitoses in faecal samples. We collected a total of 610 stool samples from pre-school and school age children and processed them with the two concentration methods: the commercial Parasitrap® method and the Kato-Katz method. The results show helminth parasites in 32.8% or 32.3% of the samples collected, depending on whether the concentration method applied was the Parasitrap method or the Kato-Katz method. We detected a relatively high percentage of samples testing positive for two or more species of helminth parasites. We would highlight that, in searching for larvae, the Kato-Katz method does not prove as appropriate as the Parasitrap method. Both techniques prove easily applicable even in field working conditions and return mutually agreeing results. This study concludes in favour of the need for deworming programs and greater public awareness among the rural populations of Angola.
Off-line real-time FTIR analysis of a process step in imipenem production
NASA Astrophysics Data System (ADS)
Boaz, Jhansi R.; Thomas, Scott M.; Meyerhoffer, Steven M.; Staskiewicz, Steven J.; Lynch, Joseph E.; Egan, Richard S.; Ellison, Dean K.
1992-08-01
We have developed an FT-IR method, using a Spectra-Tech Monit-IR 400 system, to monitor off-line the completion of a reaction in real time. The reaction is moisture-sensitive, and analysis by more conventional methods (normal-phase HPLC) is difficult to reproduce. The FT-IR method is based on the shift of a diazo band when a conjugated beta-diketone is transformed into a silyl enol ether during the reaction. The reaction mixture is examined directly by IR and does not require sample workup. Data acquisition time is less than one minute. The method has been validated for specificity, precision and accuracy. The results obtained by the FT-IR method for known mixtures and in-process samples compare favorably with those from a normal-phase HPLC method.
Sampling in the light of Wigner distribution.
Stern, Adrian; Javidi, Bahram
2004-03-01
We propose a new method for analysis of the sampling and reconstruction conditions of real and complex signals by use of the Wigner domain. It is shown that the Wigner domain may provide a better understanding of the sampling process than the traditional Fourier domain. For example, it explains how certain non-bandlimited complex functions can be sampled and perfectly reconstructed. On the basis of observations in the Wigner domain, we derive a generalization of the Nyquist sampling criterion. Using this criterion, we demonstrate simple preprocessing operations that can adapt a signal that does not fulfill the Nyquist sampling criterion. The preprocessing operations demonstrated can be easily implemented by optical means.
Group refractive index reconstruction with broadband interferometric confocal microscopy
Marks, Daniel L.; Schlachter, Simon C.; Zysk, Adam M.; Boppart, Stephen A.
2010-01-01
We propose a novel method of measuring the group refractive index of biological tissues at the micrometer scale. The technique utilizes a broadband confocal microscope embedded into a Mach–Zehnder interferometer, with which spectral interferograms are measured as the sample is translated through the focus of the beam. The method does not require phase unwrapping and is insensitive to vibrations in the sample and reference arms. High measurement stability is achieved because a single spectral interferogram contains all the information necessary to compute the optical path delay of the beam transmitted through the sample. Included are a physical framework defining the forward problem, linear solutions to the inverse problem, and simulated images of biologically relevant phantoms. PMID:18451922
NASA Technical Reports Server (NTRS)
Generazio, Edward R. (Inventor)
2012-01-01
A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
ICPP environmental monitoring report CY-1994
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-05-01
Summarized in this report are the data collected through Environmental Monitoring programs conducted at the Idaho Chemical Processing Plant (ICPP) by the Environmental Protection Department. The ICPP is responsible for complying with all applicable Federal, State, Local and DOE Rules, Regulations and Orders. Radiological effluent and emissions are regulated by the DOE in accordance with the Derived Concentration Guides (DCGs) as presented in DOE Order 5400.5. The State of Idaho regulates nonradiological waste resulting from the ICPP operations including airborne, liquid, and solid waste. The Environmental Department updated the Quality Assurance (QA) Project Plan for Environmental Monitoring activities during the third quarter of 1992. QA activities have resulted in the ICPP's implementation of the Environmental Protection Agency (EPA) rules and guidelines pertaining to the collection, analyses, and reporting of environmentally related samples. Where no EPA methods for analyses existed for radionuclides, LITCO methods were used.
A simple headspace equilibration method for measuring dissolved methane
Magen, C; Lapham, L.L.; Pohlman, John W.; Marshall, Kristin N.; Bosman, S.; Casso, Michael; Chanton, J.P.
2014-01-01
Dissolved methane concentrations in the ocean are close to equilibrium with the atmosphere. Because methane is only sparingly soluble in seawater, measuring it without contamination is challenging for samples collected and processed in the presence of air. Several methods for analyzing dissolved methane are described in the literature, yet none has been accompanied by a thorough assessment of method yield, contamination issues during collection, transport and storage, or the effects of temperature changes and preservatives. Previous extraction methods transfer methane from water to gas by either a "sparge and trap" or a "headspace equilibration" technique. The gas is then analyzed for methane by gas chromatography. Here, we revisit the headspace equilibration technique and describe a simple, inexpensive, and reliable method to measure methane in fresh and seawater, regardless of concentration. Within the range of concentrations typically found in surface seawaters (2-1000 nmol L-1), the yield of the method nears 100% of what is expected from solubility calculations following the addition of a known amount of methane. In addition to being sensitive (detection limit of 0.1 ppmv, or 0.74 nmol L-1), this method requires less than 10 min per sample and does not use highly toxic chemicals. It can be conducted with minimal materials and does not require the use of a gas chromatograph at the collection site. It can therefore be used in various remote working environments and conditions.
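The back-calculation at the heart of a headspace equilibration is a mass balance between the gas and aqueous phases. The sketch below assumes an ideal-gas headspace and a round-number Bunsen solubility coefficient; a real analysis would use the temperature- and salinity-dependent value, and all bottle volumes here are invented.

```python
# Recover total dissolved CH4 from a headspace-equilibrated bottle.
R = 0.082057       # L atm / (mol K), ideal gas constant
T = 298.15         # K, equilibration temperature (assumed)
V_STP = 22.414     # L/mol, molar volume at STP (used by Bunsen coefficients)
beta = 0.030       # assumed Bunsen solubility of CH4 (L CH4 at STP / L water / atm)

v_water, v_head = 0.100, 0.020   # L of sample and of introduced headspace (assumed)
p_ch4 = 5.0e-6                   # measured CH4 partial pressure after equilibration (atm)

mol_gas = p_ch4 * v_head / (R * T)        # CH4 that moved into the headspace
mol_aq = beta * p_ch4 * v_water / V_STP   # CH4 still dissolved at equilibrium

conc_nmol_per_L = (mol_gas + mol_aq) / v_water * 1e9
print(f"total dissolved CH4 in the original sample: {conc_nmol_per_L:.1f} nmol/L")
```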
Direct analysis of organic priority pollutants by IMS
NASA Technical Reports Server (NTRS)
Giam, C. S.; Reed, G. E.; Holliday, T. L.; Chang, L.; Rhodes, B. J.
1995-01-01
Many routine methods for monitoring of trace amounts of atmospheric organic pollutants consist of several steps. Typical steps are: (1) collection of the air sample; (2) trapping of organics from the sample; (3) extraction of the trapped organics; and (4) identification of the organics in the extract by GC (gas chromatography), HPLC (High Performance Liquid Chromatography), or MS (Mass Spectrometry). These methods are often cumbersome and time consuming. A simple and fast method for monitoring atmospheric organics using an IMS (Ion Mobility Spectrometer) is proposed. This method has a short sampling time and does not require extraction of the organics since the sample is placed directly in the IMS. The purpose of this study was to determine the responses in the IMS to organic 'priority pollutants'. Priority pollutants including representative polycyclic aromatic hydrocarbons (PAHs), phthalates, phenols, chlorinated pesticides, and polychlorinated biphenyls (PCB's) were analyzed in both the positive and negative detection mode at ambient atmospheric pressure. Detection mode and amount detected are presented.
Palatal Dysmorphogenesis: Quantitative RT-PCR
Held, Gary A.; Abbott, Barbara D.
Reverse transcription PCR (RT-PCR) is a very sensitive method for detecting mRNA in tissue samples. However, as it is usually performed, it does not yield quantitativ...
Chemical measurement of urine volume
NASA Technical Reports Server (NTRS)
Sauer, R. L.
1978-01-01
A chemical method of measuring the volume of urine samples using the lithium chloride dilution technique does not interfere with analysis and is faster and more accurate than standard volumetric or specific gravity/weight techniques. Adaptation of the procedure to urinalysis could prove generally practical for hospital mineral balance and catecholamine determinations.
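The underlying tracer mass balance is simple: the added lithium mass divided by the measured lithium concentration gives the total mixed volume, from which the tracer volume is subtracted. A minimal sketch with illustrative numbers, not the flight procedure:

```python
# Tracer-dilution estimate of urine volume (all numbers hypothetical)
tracer_mass_mg = 50.0           # LiCl added to the collection container
tracer_vol_ml = 5.0             # volume of the tracer solution itself
measured_conc_mg_per_ml = 0.10  # LiCl concentration in the mixed sample

total_volume = tracer_mass_mg / measured_conc_mg_per_ml  # mass balance
urine_volume = total_volume - tracer_vol_ml
print(f"estimated urine volume: {urine_volume:.0f} mL")  # -> 495 mL
```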
NASA Technical Reports Server (NTRS)
Edgett, Kenneth S.; Anderson, Donald L.
1995-01-01
This paper describes an empirical method to correct TIMS (Thermal Infrared Multispectral Scanner) data for atmospheric effects by transferring calibration from a laboratory thermal emission spectrometer to the TIMS multispectral image. The method does so by comparing the laboratory spectra of samples gathered in the field with TIMS 6-point spectra for pixels at the location of field sampling sites. The transference of calibration also makes it possible to use spectra from the laboratory as endmembers in unmixing studies of TIMS data.
Treating Sample Covariances for Use in Strongly Coupled Atmosphere-Ocean Data Assimilation
NASA Astrophysics Data System (ADS)
Smith, Polly J.; Lawless, Amos S.; Nichols, Nancy K.
2018-01-01
Strongly coupled data assimilation requires cross-domain forecast error covariances; information from ensembles can be used, but limited sampling means that ensemble derived error covariances are routinely rank deficient and/or ill-conditioned and marred by noise. Thus, they require modification before they can be incorporated into a standard assimilation framework. Here we compare methods for improving the rank and conditioning of multivariate sample error covariance matrices for coupled atmosphere-ocean data assimilation. The first method, reconditioning, alters the matrix eigenvalues directly; this preserves the correlation structures but does not remove sampling noise. We show that it is better to recondition the correlation matrix rather than the covariance matrix as this prevents small but dynamically important modes from being lost. The second method, model state-space localization via the Schur product, effectively removes sample noise but can dampen small cross-correlation signals. A combination that exploits the merits of each is found to offer an effective alternative.
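The first method, minimum-eigenvalue reconditioning, can be sketched in a few lines of numpy: floor the eigenvalues of the sample correlation matrix so the condition number stays below a target, then renormalize the diagonal. The target condition number and ensemble dimensions below are arbitrary choices for illustration, and this is one common reconditioning scheme rather than the paper's exact recipe.

```python
import numpy as np

def recondition(corr, kappa_max=100.0):
    """Raise the smallest eigenvalues of a correlation matrix so its
    condition number does not exceed kappa_max, then restore unit diagonal."""
    w, V = np.linalg.eigh(corr)
    w = np.maximum(w, w.max() / kappa_max)   # eigenvalue floor
    out = V @ np.diag(w) @ V.T
    d = np.sqrt(np.diag(out))
    return out / np.outer(d, d)              # renormalize to a correlation matrix

# Noisy, rank-deficient sample correlation: 50 variables from only 10 members
rng = np.random.default_rng(0)
sample = np.corrcoef(rng.normal(size=(50, 10)))
fixed = recondition(sample)
print(np.linalg.cond(sample), "->", np.linalg.cond(fixed))
```

Schur-product localization would instead multiply `sample` elementwise by a tapering matrix, which removes noise but can dampen the small cross-domain signals the text mentions.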
DOE Office of Scientific and Technical Information (OSTI.GOV)
Findlay, Rick; Kautsky, Mark
2015-12-01
The U.S. Department of Energy (DOE) Office of Legacy Management conducted annual sampling at the Rulison, Colorado, Site for the Long-Term Hydrologic Monitoring Program (LTHMP) on May 20–22 and 27, 2015. Several of the land owners were not available to allow access to their respective properties, which created the need for several sample collection trips. This report documents the analytical results of the Rulison monitoring event and includes the trip report and the data validation package (Appendix A). The groundwater and surface water samples were shipped to the GEL Group Inc. laboratories for analysis. All requested analyses were successfully completed. Samples were analyzed for gamma-emitting radionuclides by high-resolution gamma spectrometry. Tritium was analyzed using two methods: the conventional tritium method, which has a detection limit on the order of 400 picocuries per liter (pCi/L), and the enriched method (for selected samples), which has a detection limit on the order of 3 pCi/L.
NASA Technical Reports Server (NTRS)
Roth, Don J.; Cosgriff, Laura M.; Harder, Bryan; Zhu, Dongming; Martin, Richard E.
2013-01-01
This study investigates the applicability of a novel noncontact single-sided terahertz electromagnetic measurement method for measuring thickness in dielectric coating systems having either dielectric or conductive substrate materials. The method does not require knowledge of the velocity of terahertz waves in the coating material. The dielectric coatings ranged from approximately 300 to 1400 μm in thickness. First, the terahertz method was validated on a bulk dielectric sample to determine its ability to precisely measure thickness and density variation. Then, the method was studied on simulated coating systems. One simulated coating consisted of layered thin paper samples of varying thicknesses on a ceramic substrate. Another simulated coating system consisted of adhesive-backed Teflon adhered to conducting and dielectric substrates. Alumina samples that were coated with a ceramic adhesive layer were also investigated. Finally, the method was studied for thickness measurement of actual thermal barrier coatings (TBC) on ceramic substrates. The unique aspects and limitations of this method for thickness measurements are discussed.
Nascimento, Paloma Andrade Martins; Barsanelli, Paulo Lopes; Rebellato, Ana Paula; Pallone, Juliana Azevedo Lima; Colnago, Luiz Alberto; Pereira, Fabíola Manhas Verbi
2017-03-01
This study shows the use of time-domain (TD)-NMR transverse relaxation (T2) data and chemometrics in the nondestructive determination of fat content for powdered food samples such as commercial dried milk products. Most proposed NMR spectroscopy methods for measuring fat content correlate free induction decay or echo intensities with the sample's mass. The need for the sample's mass limits the analytical frequency of NMR determination, because weighing the samples is an additional step in this procedure. Therefore, the method proposed here is based on a multivariate model of T2 decay, measured with Carr-Purcell-Meiboom-Gill pulse sequence and reference values of fat content. The TD-NMR spectroscopy method shows high correlation (r = 0.95) with the lipid content, determined by the standard extraction method of Bligh and Dyer. For comparison, fat content determination was also performed using a multivariate model with near-IR (NIR) spectroscopy, which is also a nondestructive method. The advantages of the proposed TD-NMR method are that it (1) minimizes toxic residue generation, (2) performs measurements with high analytical frequency (a few seconds per analysis), and (3) does not require sample preparation (such as pelleting, needed for NIR spectroscopy analyses) or weighing the samples.
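A multivariate calibration of this kind can be sketched with partial least squares on synthetic bi-exponential decay curves standing in for CPMG data; the relaxation times, fat range, and noise level below are invented, and the paper's actual chemometric model may differ.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
t = np.linspace(0.001, 0.5, 200)          # s, echo times (assumed)
fat = rng.uniform(5, 30, size=40)         # % fat, reference values (synthetic)

# Two-component decay: slow-relaxing fat plus fast-relaxing remainder
decays = np.array([f * np.exp(-t / 0.12) + (100 - f) * np.exp(-t / 0.03)
                   for f in fat])
decays += rng.normal(scale=0.5, size=decays.shape)

pls = PLSRegression(n_components=3).fit(decays, fat)
print("R^2 on training data:", round(pls.score(decays, fat), 3))
```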
Two-dimensional imaging of two types of radicals by the CW-EPR method
NASA Astrophysics Data System (ADS)
Czechowski, Tomasz; Krzyminiewski, Ryszard; Jurga, Jan; Chlewicki, Wojciech
2008-01-01
The CW-EPR method of image reconstruction is based on sample rotation in a magnetic field with a constant gradient (50 G/cm). In order to obtain a projection (radical density distribution) along a given direction, the EPR spectra are recorded with and without the gradient. Deconvolution then gives the distribution of the spin density. Projections at 36 different angles give the information that is necessary for reconstruction of the radical distribution. The problem becomes more complex when there are at least two types of radicals in the sample, because the deconvolution procedure does not give satisfactory results. We propose a method to calculate the projections for each radical, based on iterative procedures. The images of density distribution for each radical obtained by our procedure have proved that the method of deconvolution, in combination with iterative fitting, provides correct results. The test was performed on a sample of polymer PPS Br 111 (p-phenylene sulphide) with glass fibres and minerals. The results indicated a heterogeneous distribution of radicals in the sample volume. The images obtained were in agreement with the known shape of the sample.
Székely, Gy; Henriques, B; Gil, M; Ramos, A; Alvarez, C
2012-11-01
The present study reports on a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method development strategy supported by design of experiments (DoE) for the trace analysis of 4-dimethylaminopyridine (DMAP). Conventional approaches to the development of LC-MS/MS methods usually proceed by trial and error, intentionally varying the experimental factors one at a time, which is time consuming and ignores interactions between factors. The LC factors chosen for the DoE study were flow (F), gradient (G) and injection volume (V(inj)), while cone voltage (E(con)) and collision energy (E(col)) were chosen as MS parameters. All five factors were studied simultaneously. The method was optimized with respect to four responses: separation of peaks (Sep), peak area (A(peak)), length of the analysis (T) and the signal-to-noise ratio (S/N). A quadratic model, namely a central composite face-centred (CCF) design featuring 29 runs, was used instead of a less powerful linear model, since the increase in the number of injections was insignificant. To determine the robustness of the method, a new set of DoE experiments was carried out: robustness around the optimal conditions was evaluated by applying a fractional factorial design of resolution III with 11 runs, wherein additional factors, such as column temperature and quadrupole resolution, were considered. The method utilizes a Phenomenex Gemini NX C-18 HPLC analytical column with electrospray ionization and a triple quadrupole mass detector in multiple reaction monitoring (MRM) mode, resulting in short analyses with a 10 min runtime. Drawbacks of derivatization, namely incomplete reaction and time-consuming sample preparation, have been avoided, and the change from SIM to MRM mode resulted in increased sensitivity and a lower LOQ. The DoE method development strategy led to a method allowing the trace analysis of DMAP at 0.5 ng/ml absolute concentration, which corresponds to a 0.1 ppm limit of quantification in 5 mg/ml mometasone furoate glucocorticoid. The obtained method was validated in a linear range of 0.1-10 ppm and presented a %RSD of 0.02% for system precision. Regarding DMAP recovery in mometasone furoate, spiked samples produced recoveries between 83 and 113% in the range of 0.1-2 ppm. Copyright © 2012 Elsevier B.V. All rights reserved.
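The geometry of a five-factor CCF design is easy to reproduce: a half-fraction factorial cube, face-centred axial points, and centre replicates give the 29 runs quoted. The generator and the number of centre points below are assumptions chosen to match that run count, not details taken from the paper.

```python
import itertools
import numpy as np

# Coded CCF design for five factors (F, G, Vinj, Econ, Ecol)
factors = 5

# Half-fraction 2^(5-1) cube, principal fraction with generator I = ABCDE
cube = [p for p in itertools.product((-1, 1), repeat=factors)
        if np.prod(p) == 1]                  # 16 runs

# Face-centred axial (star) points: one factor at +/-1, the rest at 0
faces = []
for i in range(factors):
    for level in (-1, 1):
        pt = [0] * factors
        pt[i] = level
        faces.append(tuple(pt))              # 10 runs

centre = [(0,) * factors] * 3                # 3 centre replicates (assumed)
design = cube + faces + centre
print(len(design), "runs")                   # 16 + 10 + 3 = 29
```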
DNA-PCR analysis of bloodstains sampled by the polyvinyl-alcohol method.
Schyma, C; Huckenbeck, W; Bonte, W
1999-01-01
Among the usual techniques of sampling gunshot residues (GSR), the polyvinyl-alcohol method (PVAL) includes the advantage of embedding all particles, foreign bodies and stains on the surface of the shooter's hand in exact and reproducible topographic localization. The aim of the present study on ten persons killed by firearms was to check the possibility of DNA-PCR typing of blood traces embedded in the PVAL gloves in a second step following GSR analysis. The results of these examinations verify that the PVAL technique does not include factors that inhibit successful PCR typing. Thus the PVAL method can be recommended as a combination technique to secure and preserve inorganic and biological traces at the same time.
College Students' Gambling Behavior: When Does It Become Harmful?
ERIC Educational Resources Information Center
Weinstock, Jeremiah; Whelan, James P.; Meyers, Andrew
2008-01-01
Objective: The authors investigated behavioral indicators of pathological gambling in a college student sample. Participants and Methods: The authors administered a diagnostic interview for pathological gambling to 159 college students, who also completed a demographic questionnaire, and a self-report measure of psychological distress. Results:…
Teaching the Rules of Debit and Credit
ERIC Educational Resources Information Center
Potts, Andrew J.
1974-01-01
A fundamental method of explaining the basic accounting principles and concepts (debit, credit, basic accounting equation) which includes visual aids, reference to local businesses, and drill, does much toward increasing the student's skill and enhancing his understanding of the subject matter. (Sample transparencies are included.) (Author/AJ)
Sampling benthic invertebrates in low gradient streams: does method make a difference?
The U.S. EPA's Wadeable Streams Assessment was the first survey of stream biological condition throughout the United States. Between 2000 and 2004, USEPA, states and tribes collected chemical, physical and biological data at 1,392 wadeable, perennial stream locations throughout t...
ICPP environmental monitoring report CY-1993: Environmental characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-05-01
Summarized in this report are the data collected through Environmental Monitoring programs conducted at the Idaho Chemical Processing Plant (ICPP) by the Environmental Safety & Health (ES&H) Department. This report is published in response to DOE Order 5400.1. This report covers the period from December 21, 1992 through December 20, 1993. The ICPP is responsible for complying with all applicable Federal, State, Local and DOE Rules, Regulations and Orders. Radiological effluent and emissions are regulated by the DOE in accordance with the Derived Concentration Guides (DCGs) as presented in DOE Order 5400.5. The State of Idaho regulates all nonradiological waste resulting from the ICPP operations including all airborne, liquid, and solid waste. The ES&H Department updated the Quality Assurance (QA) Project Plan for Environmental Monitoring activities during the third quarter of 1992. QA activities have resulted in the ICPP's implementation of the Environmental Protection Agency (EPA) rules and guidelines pertaining to the collection, analyses, and reporting of environmentally related samples. Where no EPA methods for analyses existed for radionuclides, WINCO methods were used.
Quantifying and Mitigating the Effect of Preferential Sampling on Phylodynamic Inference
Karcher, Michael D.; Palacios, Julia A.; Bedford, Trevor; Suchard, Marc A.; Minin, Vladimir N.
2016-01-01
Phylodynamics seeks to estimate effective population size fluctuations from molecular sequences of individuals sampled from a population of interest. One way to accomplish this task formulates an observed sequence data likelihood exploiting a coalescent model for the sampled individuals’ genealogy and then integrating over all possible genealogies via Monte Carlo or, less efficiently, by conditioning on one genealogy estimated from the sequence data. However, when analyzing sequences sampled serially through time, current methods implicitly assume either that sampling times are fixed deterministically by the data collection protocol or that their distribution does not depend on the size of the population. Through simulation, we first show that, when sampling times do probabilistically depend on effective population size, estimation methods may be systematically biased. To correct for this deficiency, we propose a new model that explicitly accounts for preferential sampling by modeling the sampling times as an inhomogeneous Poisson process dependent on effective population size. We demonstrate that in the presence of preferential sampling our new model not only reduces bias, but also improves estimation precision. Finally, we compare the performance of the currently used phylodynamic methods with our proposed model through clinically-relevant, seasonal human influenza examples. PMID:26938243
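The proposed sampling model can be simulated with the Lewis-Shedler thinning algorithm: propose events from a homogeneous Poisson process at an upper-bound rate and keep each with probability proportional to the current effective population size. The trajectory and the intensity constant below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def n_e(t):
    # assumed (seasonal) effective-population trajectory
    return 50 + 40 * np.sin(2 * np.pi * t)

beta = 0.1                 # sampling intensity per unit N_e (assumed)
t_max = 4.0                # observation window, arbitrary units
lam_max = beta * 90        # upper bound on the intensity, needed for thinning

# Thinning: propose homogeneous events, keep with probability lam(t)/lam_max
n_prop = rng.poisson(lam_max * t_max)
proposals = rng.uniform(0, t_max, size=n_prop)
keep = rng.uniform(size=n_prop) < beta * n_e(proposals) / lam_max
sample_times = np.sort(proposals[keep])
print(f"{sample_times.size} sequences sampled; more fall where N_e(t) is large")
```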
Late-stage galaxy mergers in cosmos to z ∼ 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lackner, C. N.; Silverman, J. D.; Salvato, M.
2014-12-01
The role of major mergers in galaxy and black hole formation is not well constrained. To help address this, we develop an automated method to identify late-stage galaxy mergers before coalescence of the galactic cores. The resulting sample of mergers is distinct from those obtained using pair-finding and morphological indicators. Our method relies on median-filtering of high-resolution images to distinguish two concentrated galaxy nuclei at small separations. This method does not rely on low surface brightness features to identify mergers, and is therefore reliable to high redshift. Using mock images, we derive statistical contamination and incompleteness corrections for the fraction of late-stage mergers. The mock images show that our method returns an uncontaminated (<10%) sample of mergers with projected separations between 2.2 and 8 kpc out to z ∼ 1. We apply our new method to a magnitude-limited (m_F814W < 23) sample of 44,164 galaxies from the COSMOS HST/ACS catalog. Using a mass-complete sample with log M∗/M⊙ > 10.6 and 0.25...
DOE Office of Scientific and Technical Information (OSTI.GOV)
K. B. Campbell
2002-09-01
The Area 12 Fleet Operations Steam Cleaning Effluent site is located in the southeastern portion of the Area 12 Camp at the Nevada Test Site. This site is identified in the Federal Facility Agreement and Consent Order (1996) as Corrective Action Site (CAS) 12-19-01 and is the only CAS assigned to Corrective Action Unit (CAU) 339. Post-closure sampling and inspection of the site were completed on March 27, 2002. Post-closure monitoring activities were scheduled biennially (every two years) in the Post-Closure Monitoring Plan provided in the Closure Report for CAU 339: Area 12 Fleet Operations Steam Cleaning Effluent, Nevada Test Site (U.S. Department of Energy, Nevada Operations Office [DOE/NV], 1997). A baseline for the site was established by sampling in 1997. Based on the recommendations from the 1999 post-closure monitoring report (DOE/NV, 1999), samples were collected in 2000, earlier than originally proposed, because the 1999 sample results did not show the expected decrease in total petroleum hydrocarbon (TPH) concentrations at the site. Sampling results from 2000 (DOE/NV, 2000) and 2001 (DOE/NV, 2001) revealed favorable conditions for natural degradation at the CAU 339 site, but because of differing sample methods and heterogeneity of the soil, data results from 2000 and later were not directly correlated with previous results. Post-closure monitoring activities for 2002 consisted of the following: (1) Soil sample collection from three undisturbed plots (Plots A, B, and C, Figure 2). (2) Sample analysis for TPH as oil and bio-characterization parameters (Comparative Enumeration Assay [CEA] and Standard Nutrient Panel [SNP]). (3) Site inspection to evaluate the condition of the fencing and signs. (4) Preparation and submittal of the Post-Closure Monitoring Report.
Spatial-dependence recurrence sample entropy
NASA Astrophysics Data System (ADS)
Pham, Tuan D.; Yan, Hong
2018-03-01
Measuring complexity in terms of the predictability of time series is a major area of research in science and engineering, and its applications are spreading throughout many scientific disciplines, where the analysis of physiological signals is perhaps the most widely reported in the literature. Sample entropy is a popular measure for quantifying signal irregularity. However, sample entropy does not take sequential information, which is inherently useful, into account in its calculation of sample similarity. Here, we develop a method that is based on the mathematical principle of sample entropy and enables the capture of sequential information of a time series in the context of spatial dependence provided by the binary-level co-occurrence matrix of a recurrence plot. Experimental results on time-series data of the Lorenz system, physiological signals of gait maturation in healthy children, and gait dynamics in Huntington's disease show the potential of the proposed method.
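For reference, the classic sample entropy that the proposed method builds on can be written compactly in numpy (a common textbook variant using the Chebyshev distance and excluding self-matches); the recurrence-plot extension described in the paper is not reproduced here.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn = -ln(A / B), where B counts template matches of length m
    and A counts matches of length m + 1 (self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()          # conventional tolerance choice

    def count(mm):
        templates = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance of template i to all later templates
            d = np.abs(templates[i + 1:] - templates[i]).max(axis=1)
            c += np.count_nonzero(d <= r)
        return c

    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

regular = np.sin(np.linspace(0, 20 * np.pi, 1000))
noisy = np.random.default_rng(0).normal(size=1000)
print(sample_entropy(regular), "<", sample_entropy(noisy))  # regular is more predictable
```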
Jaki, Thomas; Allacher, Peter; Horling, Frank
2016-09-05
Detecting and characterizing anti-drug antibodies (ADA) against a protein therapeutic is crucially important for monitoring the unwanted immune response. Usually a multi-tiered approach, which initially screens rapidly for positive samples that are subsequently confirmed in a separate assay, is employed for testing patient samples for ADA activity. In this manuscript we evaluate the ability of different methods to classify subjects with screening and competition-based confirmatory assays. We find that for the overall performance of the multi-stage process the method used for confirmation is most important, with a t-test being best when differences are moderate to large. Moreover, we find that, when differences between positive and negative samples are not sufficiently large, using a competition-based confirmation step yields poor classification of positive samples. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Static versus dynamic sampling for data mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
John, G.H.; Langley, P.
1996-12-31
As data warehouses grow to the point where one hundred gigabytes is considered small, the computational efficiency of data-mining algorithms on large databases becomes increasingly important. Using a sample from the database can speed up the data-mining process, but this is only acceptable if it does not reduce the quality of the mined knowledge. To this end, we introduce the "Probably Close Enough" criterion to describe the desired properties of a sample. Sampling usually refers to the use of static statistical tests to decide whether a sample is sufficiently similar to the large database, in the absence of any knowledge of the tools the data miner intends to use. We discuss dynamic sampling methods, which take into account the mining tool being used and can thus give better samples. We describe dynamic schemes that observe a mining tool's performance on training samples of increasing size and use these results to determine when a sample is sufficiently large. We evaluate these sampling methods on data from the UCI repository and conclude that dynamic sampling is preferable.
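A dynamic sampling loop of the kind described can be sketched with any learner: train on progressively larger samples and stop when the score improvement falls below a tolerance. The doubling schedule, tolerance, and choice of learner below are illustrative assumptions, not the paper's exact scheme.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a large database
X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)

eps, prev, n = 0.005, 0.0, 500   # stopping tolerance and starting sample size
while n <= len(X):
    score = cross_val_score(DecisionTreeClassifier(random_state=0),
                            X[:n], y[:n], cv=3).mean()
    print(f"n = {n:6d}  accuracy = {score:.3f}")
    if score - prev < eps:       # learning curve has flattened: sample is enough
        break
    prev, n = score, n * 2
```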
Carmona-Jiménez, Yolanda; García-Moreno, M Valme; Igartuburu, Jose M; Garcia Barroso, Carmelo
2014-12-15
The DPPH assay is one of the most commonly employed methods for measuring antioxidant activity. Even though this method is considered very simple and efficient, it does present various limitations that make it complicated to perform. The range of linearity between the DPPH inhibition percentage and sample concentration has been studied with a view to simplifying the method for characterising samples of wine origin. It was concluded that all the samples behave linearly at inhibition percentages below 40%, which allows the analysis to be simplified. A new parameter more appropriate to the simplification, the EC20, has been proposed to express the assay results. Additionally, the reaction time was analysed with the object of avoiding the need for kinetic studies in the method. The simplifications considered offer a more functional method, without significant errors, which could be used for routine analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.
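With linearity below 40% inhibition, the EC20 follows from a straight-line fit through the low-inhibition points; a minimal sketch with invented data and assumed units:

```python
import numpy as np

# Illustrative measurements in the linear region (below ~40% inhibition)
conc = np.array([5.0, 10.0, 20.0, 40.0])   # sample concentration (assumed units)
inhib = np.array([4.1, 8.7, 17.9, 35.2])   # measured DPPH inhibition (%)

slope, intercept = np.polyfit(conc, inhib, 1)
ec20 = (20.0 - intercept) / slope           # concentration giving 20% inhibition
print(f"EC20 ~ {ec20:.1f} (same units as conc)")
```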
Fluorescence of Er3+:AlN Polycrystalline Ceramic
2012-01-01
This level of translucency does indicate a high density; in fact, samples were measured by the Archimedes method to have average densities of 3.245... stimulated emission by the reciprocity principle [34,35]. ... In this work, we have reported what we believe to be the first...
Algorithms that Defy the Gravity of Learning Curve
2017-04-28
...three nearest neighbour-based anomaly detectors, i.e., an ensemble of nearest neighbours, a recent nearest neighbour-based ensemble method called iNNE... streams. Note that the change in sample size does not alter the geometrical data characteristics discussed here. ... Experimental Methodology ... need to be answered. ... Comparison with conventional ensemble methods: Given the theoretical results, the third aim of this project (i.e., identify the...
Model-free and analytical EAP reconstruction via spherical polar Fourier diffusion MRI.
Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid
2010-01-01
How to estimate the diffusion Ensemble Average Propagator (EAP) from the DWI signals in q-space is an open problem in the diffusion MRI field. Many methods have been proposed to estimate the Orientation Distribution Function (ODF), which is used to describe fiber directions. However, the ODF is just one feature of the EAP. Compared with the ODF, the EAP carries the full information about the diffusion process, reflecting the complex tissue microstructure. Diffusion Orientation Transform (DOT) and Diffusion Spectrum Imaging (DSI) are two important methods to estimate the EAP from the signal. However, DOT is based on a mono-exponential assumption, and DSI needs many samples and very large b-values. In this paper, we propose Spherical Polar Fourier Imaging (SPFI), a novel model-free, fast, robust, analytical EAP reconstruction method that requires almost no assumptions about the data and does not need many samples. SPFI naturally combines DWI signals with different b-values. It is an analytical linear transformation from the q-space signal to the EAP profile represented by Spherical Harmonics (SH). We validated the proposed method on synthetic data, phantom data and real data. It works well in all experiments, especially for data with low SNR, low anisotropy, and non-exponential decay.
Nonclinical dose formulation analysis method validation and sample analysis.
Whitmire, Monica Lee; Bryan, Peter; Henry, Teresa R; Holbrook, John; Lehmann, Paul; Mollitor, Thomas; Ohorodnik, Susan; Reed, David; Wietgrefe, Holly D
2010-12-01
Nonclinical dose formulation analysis methods are used to confirm test article concentration and homogeneity in formulations and determine formulation stability in support of regulated nonclinical studies. There is currently no regulatory guidance for nonclinical dose formulation analysis method validation or sample analysis. Regulatory guidance for the validation of analytical procedures has been developed for drug product/formulation testing; however, verification of the formulation concentrations falls under the framework of GLP regulations (not GMP). The only current related regulatory guidance is the bioanalytical guidance for method validation. The fundamental parameters for bioanalysis and formulation analysis validations that overlap include: recovery, accuracy, precision, specificity, selectivity, carryover, sensitivity, and stability. Divergence in bioanalytical and drug product validations typically center around the acceptance criteria used. As the dose formulation samples are not true "unknowns", the concept of quality control samples that cover the entire range of the standard curve serving as the indication for the confidence in the data generated from the "unknown" study samples may not always be necessary. Also, the standard bioanalytical acceptance criteria may not be directly applicable, especially when the determined concentration does not match the target concentration. This paper attempts to reconcile the different practices being performed in the community and to provide recommendations of best practices and proposed acceptance criteria for nonclinical dose formulation method validation and sample analysis.
Model for spectral and chromatographic data
Jarman, Kristin [Richland, WA; Willse, Alan [Richland, WA; Wahl, Karen [Richland, WA; Wahl, Jon [Richland, WA
2002-11-26
A method and apparatus using a spectral analysis technique are disclosed. In one form of the invention, probabilities are selected to characterize the presence (and in another form, also a quantification of a characteristic) of peaks in an indexed data set for samples that match a reference species, and other probabilities are selected for samples that do not match the reference species. An indexed data set is acquired for a sample, and a determination is made according to techniques exemplified herein as to whether the sample matches or does not match the reference species. When quantification of peak characteristics is undertaken, the model is appropriately expanded, and the analysis accounts for the characteristic model and data. Further techniques are provided to apply the methods and apparatuses to process control, cluster analysis, hypothesis testing, analysis of variance, and other procedures involving multiple comparisons of indexed data.
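The flavor of such a peak-presence model can be sketched as a Bernoulli likelihood-ratio classifier: one probability vector for peaks under a match, another under a non-match, and a log-likelihood-ratio decision. The probabilities below are invented, and the patent's actual model also covers quantification of peak characteristics, which this sketch omits.

```python
import numpy as np

# Hypothetical probabilities of observing each indexed peak
p_match = np.array([0.90, 0.80, 0.95, 0.10])  # P(peak present | matches reference)
p_other = np.array([0.30, 0.40, 0.50, 0.50])  # P(peak present | does not match)
peaks = np.array([1, 1, 1, 0])                # observed peak pattern for a sample

def loglik(p):
    # Bernoulli log-likelihood of the observed presence/absence pattern
    return np.sum(peaks * np.log(p) + (1 - peaks) * np.log(1 - p))

llr = loglik(p_match) - loglik(p_other)
print("declare match" if llr > 0 else "declare non-match", f"(LLR = {llr:.2f})")
```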
Ditommaso, Savina; Giacomuzzi, Monica; Ricciardi, Elisa; Zotti, Carla M
2016-02-06
Legionella spp. are ubiquitous in aquatic habitats and water distribution systems, including dental unit waterlines (DUWLs). The aim of the present study was to determine the prevalence of Legionella in DUWLs and tap water samples using PMA-qPCR and standard culture methods. The total viable counts (TVCs) of aerobic heterotrophic bacteria in the samples were also determined. Legionella spp. were detected and quantified using the modified ISO 11731 culture method. Extracted genomic DNA was analysed using the iQ-Check Quanti Legionella spp. kit, and the TVCs were determined according to the ISO protocol 6222. Legionella spp. were detected in 100% of the samples using the PMA-qPCR method, whereas these bacteria were detected in only 7% of the samples using the culture method. The number of colony forming units (CFUs) of the TVCs in the DUWL and tap water samples differed, with the bacterial load being significantly lower in the tap water samples (p-value = 0). The counts obtained were within the Italian standard range established for potable water in only 5% of the DUWL water samples and in 77% of the tap water samples. Our results show that the level of Legionella spp. contamination determined using the culture method does not reflect the true scale of the problem, and consequently we recommend testing for the presence of aerobic heterotrophic bacteria based on the assumption that Legionella spp. are components of biofilms.
Mao, Shihong; Goodrich, Robert J; Hauser, Russ; Schrader, Steven M; Chen, Zhen; Krawetz, Stephen A
2013-10-01
Different semen storage and sperm purification methods may affect the integrity of isolated spermatozoal RNA. RNA-Seq was applied to determine whether semen storage methods (pelleted vs. liquefied) and somatic cell lysis buffer (SCLB) vs. PureSperm (PS) purification methods affect the quantity and quality of sperm RNA. The results indicate that the method of semen storage does not markedly impact RNA profiling whereas the choice of purification can yield significant differences. RNA-Seq showed that the majority of mitochondrial and mid-piece associated transcripts were lost after SCLB purification, which indicated that the mid-piece of spermatozoa may have been compromised. In addition, the number of stable transcript pairs from SCLB-samples was less than that from the PS samples. This study supports the view that PS purification better maintains the integrity of spermatozoal RNAs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, J.; Talbott, J.
1984-01-01
Task 1. Methods development for the speciation of the polysulfides. Work on this task was completed in December 1983 and reported accordingly in DOE/PC/40783-T13. Task 2. Methods development for the speciation of dithionite and polythionates. Work on Task 2 was completed in June 1984 and reported accordingly in DOE/PC/40783-T15. Task 3. Total accounting of the sulfur balance in representative samples of synfuel process streams. A systematic and critical comparison of results, obtained in the analysis of sulfur moieties in representative samples of coal conversion process streams, revealed the following general trends. (a) In specimens of high pH (9-10) and low redox potential (-0.3 to -0.4 volt versus NHE), sulfidic and polysulfidic sulfur moieties predominate. (b) In process streams of lower pH and more positive redox potential, higher oxidation states of sulfur (notably sulfate) account for most of the total sulfur present. (c) Oxidative wastewater treatment procedures by the PETC stripping process convert lower oxidation states of sulfur into thiosulfate and sulfate. In this context, remarkable similarities were observed between liquefaction and gasification process streams. However, the thiocyanate present in samples from the Grand Forks gasifier was impervious to the PETC stripping process. (d) Total sulfur contaminant levels in coal conversion process stream wastewater samples are determined more by the abundance of sulfur in the coal used as starting material than by the nature of the conversion process (liquefaction or gasification). 13 references.
Across-cohort QC analyses of GWAS summary statistics from complex traits
Chen, Guo-Bo; Lee, Sang Hong; Robinson, Matthew R; Trzaskowski, Maciej; Zhu, Zhi-Xiang; Winkler, Thomas W; Day, Felix R; Croteau-Chonka, Damien C; Wood, Andrew R; Locke, Adam E; Kutalik, Zoltán; Loos, Ruth J F; Frayling, Timothy M; Hirschhorn, Joel N; Yang, Jian; Wray, Naomi R; Visscher, Peter M
2017-01-01
Genome-wide association studies (GWASs) have been successful in discovering SNP trait associations for many quantitative traits and common diseases. Typically, the effect sizes of SNP alleles are very small and this requires large genome-wide association meta-analyses (GWAMAs) to maximize statistical power. A trend towards ever-larger GWAMA is likely to continue, yet dealing with summary statistics from hundreds of cohorts increases logistical and quality control problems, including unknown sample overlap, and these can lead to both false positive and false negative findings. In this study, we propose four metrics and visualization tools for GWAMA, using summary statistics from cohort-level GWASs. We propose methods to examine the concordance between demographic information, and summary statistics and methods to investigate sample overlap. (I) We use the population genetics Fst statistic to verify the genetic origin of each cohort and their geographic location, and demonstrate using GWAMA data from the GIANT Consortium that geographic locations of cohorts can be recovered and outlier cohorts can be detected. (II) We conduct principal component analysis based on reported allele frequencies, and are able to recover the ancestral information for each cohort. (III) We propose a new statistic that uses the reported allelic effect sizes and their standard errors to identify significant sample overlap or heterogeneity between pairs of cohorts. (IV) To quantify unknown sample overlap across all pairs of cohorts, we propose a method that uses randomly generated genetic predictors that does not require the sharing of individual-level genotype data and does not breach individual privacy. PMID:27552965
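In the spirit of metric (III), a rough screen that one could run on two cohorts' summary statistics; the unit-variance null below assumes independent cohorts with shared true effects and correctly reported standard errors, and the statistic is a simplified stand-in for the one proposed in the paper:

```python
import numpy as np

def overlap_screen(b1, se1, b2, se2):
    """Variance of per-SNP z-scores for the difference in allelic effects.

    With independent cohorts, shared true effects, and correct SEs, the
    z-scores are ~N(0, 1): a variance well below 1 hints at sample overlap
    (correlated errors); well above 1 hints at heterogeneity.
    """
    z = (b1 - b2) / np.sqrt(se1 ** 2 + se2 ** 2)
    return z.var()

rng = np.random.default_rng(0)
true_b = rng.normal(0.0, 0.01, 5000)        # shared true SNP effects
b1 = true_b + rng.normal(0.0, 0.02, 5000)   # cohort 1 estimates, SE 0.02
b2 = true_b + rng.normal(0.0, 0.02, 5000)   # cohort 2 estimates, SE 0.02
print(overlap_screen(b1, 0.02, b2, 0.02))   # ~1.0 for independent cohorts
```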
Analysis of EPA and DOE WIPP Air Sampling Data
During the April 2014 EPA visit to WIPP, EPA co-located four ambient air samplers with existing Department of Energy (DOE) ambient air samplers to independently corroborate DOE's reported air sampling results.
Mössbauer studies on some Argentinian soil: Mollisols from Bahia Blanca
NASA Astrophysics Data System (ADS)
Saragovi, C.; Labenski, F.; Duhalde, S. M.; Acebal, S.; Venegas, R.
1994-12-01
Clay fractions of a Mollisol sample, as is and treated with the ammonium oxalate (AO), dithionite-citrate-bicarbonate (DCB) and dithionite-ethylene-diamine-tetraacetic acid (D-EDTA) methods, were studied. Illite-montmorillonites together with hematites, goethites and maghemites, all of them Al-substituted and with a wide range of sizes, were identified. It is found that the AO attack extracts little iron, whereas the other two attacks extract the magnetic signal. Furthermore, the DCB attack facilitates the reduction of the Fe3+ ions, while the D-EDTA method does not; instead, this attack extracts more clay mineral Fe ions. A comparison with large-grain soil samples is made.
Holland, Alexander; Aboy, Mateo
2009-07-01
We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT based spectrum estimation with Lomb-Scargle Transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than Nlog(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
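A sketch of the running-sum idea behind such a transform, assuming each new (time, value) sample updates all N frequency bins directly, so one update costs O(N) and no interpolation to a uniform grid is needed; this illustrates the principle rather than the authors' exact recursion:

```python
import numpy as np

class RecursiveFourier:
    """Running Fourier sums over nonuniformly spaced samples.

    Each update costs O(N) for N frequency bins and needs no interpolation
    to a uniform grid -- a sketch of the idea, not the published recursion.
    """
    def __init__(self, freqs_hz):
        self.f = np.asarray(freqs_hz, dtype=float)
        self.X = np.zeros(self.f.size, dtype=complex)
        self.n = 0

    def update(self, t, x):
        # Add this sample's contribution to every frequency bin: O(N).
        self.X += x * np.exp(-2j * np.pi * self.f * t)
        self.n += 1

    def psd(self):
        return (np.abs(self.X) / max(self.n, 1)) ** 2

# e.g. an RR-interval series: sample times are the (nonuniform) beat times
rft = RecursiveFourier(np.linspace(0.0, 0.4, 128))
for t, rr in [(0.80, 0.80), (1.62, 0.82), (2.40, 0.78)]:
    rft.update(t, rr)
print(rft.psd()[:4])
```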
Dynamics of faecal egg count in natural infection of Haemonchus spp. in Indian goats
Agrawal, Nimisha; Sharma, Dinesh Kumar; Mandal, Ajoy; Rout, Pramod Kumar; Kushwah, Yogendra Kumar
2015-01-01
Aim: The dynamics of faecal egg count (FEC) in Haemonchus spp. infected goats of two Indian goat breeds, Jamunapari and Sirohi, under natural conditions was studied, and the effects of genetic and non-genetic factors were determined. Materials and Methods: A total of 1399 faecal samples of goats of the Jamunapari and Sirohi breeds, maintained at CIRG, Makhdoom, Mathura, India and naturally infected with Haemonchus spp., were processed and FEC was performed. Raw data generated on FEC were transformed by loge (FEC+100) and the transformed data (least squares mean of FEC [LFEC]) were analyzed using a mixed model least squares analysis for fitting constants. Fixed effects such as breed, physiological status, season and year of sampling and the breed × physiological status interaction were used. Result: The incidence of Haemonchus spp. infection in Jamunapari and Sirohi does was 63.01 and 47.06%, respectively. The mean LFEC of both Jamunapari and Sirohi does at different physiological stages, namely the dry, early pregnant, late pregnant, early lactating and late lactating stages, were compared. Breed, season and year of sampling had a significant effect on FEC in Haemonchus spp. infection. The effect of the breed × physiological status interaction was also significant. The late pregnant does of both breeds had higher FEC compared to does in other stages. Conclusion: The breed difference in FEC was most pronounced at the time of post kidding (early lactation), when a sharp change in FEC was observed. PMID:27046993
Phase retrieval by coherent modulation imaging.
Zhang, Fucai; Chen, Bo; Morrison, Graeme R; Vila-Comamala, Joan; Guizar-Sicairos, Manuel; Robinson, Ian K
2016-11-18
Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single-diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit wave. This coherent modulation imaging method removes inherent ambiguities of coherent diffraction imaging and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence and relaxes dynamic range requirements on the detector. Coherent modulation imaging provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free-electron lasers.
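A toy version of the three-plane loop, assuming for simplicity a far-field (FFT) propagator and the known modulator applied at the sample exit plane; the published algorithm uses the appropriate propagators between its three planes, so this only sketches the update structure:

```python
import numpy as np

def fft2c(a):  return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(a)))
def ifft2c(a): return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(a)))

def cmi_reconstruct(intensity, modulator, n_iter=200):
    """Toy three-plane loop: sample exit wave -> known modulator -> detector."""
    psi = ifft2c(np.sqrt(intensity))                 # crude starting guess
    for _ in range(n_iter):
        wave_mod = psi * modulator                   # apply the known modulation
        wave_det = fft2c(wave_mod)                   # propagate to the detector
        # Replace the modulus with the measurement, keep the phase.
        wave_det = np.sqrt(intensity) * np.exp(1j * np.angle(wave_det))
        wave_mod = ifft2c(wave_det)                  # back-propagate
        # Undo the modulation to update the exit-wave estimate.
        psi = wave_mod * np.conj(modulator) / (np.abs(modulator) ** 2 + 1e-8)
    return psi

# Minimal usage with synthetic data (uniform intensity, random-phase modulator).
rng = np.random.default_rng(0)
I = np.ones((64, 64))
M = np.exp(1j * rng.uniform(0, 2 * np.pi, (64, 64)))
psi = cmi_reconstruct(I, M, n_iter=50)
```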
Magnetic Stirrer Method for the Detection of Trichinella Larvae in Muscle Samples.
Mayer-Scholl, Anne; Pozio, Edoardo; Gayda, Jennifer; Thaben, Nora; Bahn, Peter; Nöckler, Karsten
2017-03-03
Trichinellosis is a debilitating disease in humans and is caused by the consumption of raw or undercooked meat of animals infected with the nematode larvae of the genus Trichinella. The most important sources of human infections worldwide are game meat and pork or pork products. In many countries, the prevention of human trichinellosis is based on the identification of infected animals by means of the artificial digestion of muscle samples from susceptible animal carcasses. There are several methods based on the digestion of meat but the magnetic stirrer method is considered the gold standard. This method allows the detection of Trichinella larvae by microscopy after the enzymatic digestion of muscle samples and subsequent filtration and sedimentation steps. Although this method does not require special and expensive equipment, internal controls cannot be used. Therefore, stringent quality management should be applied throughout the test. The aim of the present work is to provide detailed handling instructions and critical control points of the method to analysts, based on the experience of the European Union Reference Laboratory for Parasites and the National Reference Laboratory of Germany for Trichinella.
ERIC Educational Resources Information Center
Vahabi, Mandana
2010-01-01
Objective: To test whether the format in which women receive probabilistic information about breast cancer and mammography affects their comprehension. Methods: A convenience sample of 180 women received pre-assembled randomized packages containing a breast health information brochure, with probabilities presented in either verbal or numeric…
Does Language Influence the Accuracy of Judgments of Stuttering in Children?
ERIC Educational Resources Information Center
Einarsdottir, Johanna; Ingham, Roger J.
2009-01-01
Purpose: To determine whether stuttering judgment accuracy is influenced by familiarity with the stuttering speaker's language. Method: Audiovisual 7-min speech samples from nine 3- to 5-year-olds were used. Icelandic children who stutter (CWS), preselected for different levels of stuttering, were subdivided into 5-s intervals. Ten experienced…
In-situ soil carbon analysis using inelastic neutron scattering
USDA-ARS?s Scientific Manuscript database
In situ soil carbon analysis using inelastic neutron scattering (INS) is based on the emission of 4.43 MeV gamma rays from carbon nuclei excited by fast neutrons. This in-situ method has excellent potential for easily measuring soil carbon since it does not require soil core sampling and processing ...
Social and Professional Support for Parents of Adolescents with Severe Intellectual Disabilities
ERIC Educational Resources Information Center
White, Nia; Hastings, Richard P.
2004-01-01
Background: Previous research has identified various dimensions of social support that are positively associated with parental well-being. However, most research does not include multiple measures of social support and uses heterogeneous samples in terms of child characteristics such as age and severity of intellectual disability. Methods:…
Does Fall History Influence Residential Adjustments?
ERIC Educational Resources Information Center
Leland, Natalie; Porell, Frank; Murphy, Susan L.
2011-01-01
Purpose of the study: To determine whether reported falls at baseline are associated with an older adult's decision to make a residential adjustment (RA) and the type of adjustment made in the subsequent 2 years. Design and Methods: Observations (n = 25,036) were from the Health and Retirement Study, a nationally representative sample of…
Education and Child Welfare Supervisor Performance: Does a Social Work Degree Matter?
ERIC Educational Resources Information Center
Perry, Robin E.
2006-01-01
Objective: To empirically examine whether the educational background of child welfare supervisors in Florida affects performance evaluations of their work. Method: A complete population sample (yielding a 58.5% response rate) of administrator and peer evaluations of child welfare workers' supervisors. ANOVA procedures were utilized to test if…
Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study, or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
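As a worked example of these determinants, a minimal calculation for comparing two means with the standard normal-approximation formula (the specific numbers are illustrative):

```python
import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-sample comparison of means.

    delta: smallest clinically important difference (the effect size sought);
    sd:    expected standard deviation within each group.
    """
    z_alpha = norm.ppf(1 - alpha / 2)   # Type 1 error -> critical value
    z_beta = norm.ppf(power)            # power = 1 - beta
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

print(n_per_group(delta=5.0, sd=10.0))  # 63 per group at 80% power, alpha = 0.05
```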
Sato, Yuka; Seimiya, Masanori; Yoshida, Toshihiko; Sawabe, Yuji; Hokazono, Eisaku; Osawa, Susumu; Matsushita, Kazuyuki
2017-01-01
Background: The indocyanine green retention rate is important for assessing the severity of liver disorders. In the conventional method, blood needs to be collected twice. In the present study, we developed an automated indocyanine green method that does not require blood sampling before intravenous indocyanine green injections and is applicable to an automated biochemical analyser. Methods: The serum samples of 471 patients collected before and after intravenous indocyanine green injections and submitted to the clinical laboratory of our hospital were used as samples. The standard procedure established by the Japan Society of Hepatology was used as the standard method. In the automated indocyanine green method, serum collected after an intravenous indocyanine green injection was mixed with the saline reagent containing a surfactant, and the indocyanine green concentration was measured at a dominant wavelength of 805 nm and a complementary wavelength of 884 nm. Results: The coefficients of variation of the within- and between-run reproducibilities of this method were 2% or lower, and dilution linearity passing through the origin was noted up to 10 mg/L indocyanine green. The reagent was stable for four weeks or longer. Haemoglobin, bilirubin and chyle had no impact on the results obtained. The correlation coefficient between the standard method (x) and this method (y) was r=0.995; however, slight divergence was noted in turbid samples. Conclusion: Divergence in turbid samples may have corresponded to false negativity with the standard procedure. Our method may be highly practical because blood sampling before indocyanine green loading is unnecessary and measurements are simple.
Reflexion on linear regression trip production modelling method for ensuring good model quality
NASA Astrophysics Data System (ADS)
Suprayitno, Hitapriya; Ratnasari, Vita
2017-11-01
Transport modelling is important. For certain cases the conventional model still has to be used, and a good trip production model is essential to it. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are that the sample must be capable of representing the population characteristics and must produce an acceptable error at a given confidence level. These principles do not yet seem to be well understood and applied in trip production modelling. It is therefore necessary to investigate the trip production modelling practice in Indonesia and to formulate a better modelling method that ensures model quality. The results are as follows. Statistics provides a method for calculating the span of a prediction value at a given confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that the sample composition can change the model significantly. Hence a good R2 value does not, in fact, always mean good model quality. This leads to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must use random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
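A minimal sketch, assuming simple (one-predictor) linear regression, of the confidence interval of a predicted value at a new point x0; the formula is the standard one, with the extra 1 under the square root for a new observation (variable names are illustrative):

```python
import numpy as np
from scipy import stats

def prediction_interval(x, y, x0, conf=0.95):
    """Confidence interval of the predicted value of y at x0 (new observation)."""
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)                      # slope, intercept
    resid = y - (b0 + b1 * x)
    s = np.sqrt(resid @ resid / (n - 2))              # residual standard error
    sxx = np.sum((x - x.mean()) ** 2)
    # Drop the leading 1.0 for the narrower interval on the mean response.
    se = s * np.sqrt(1.0 + 1.0 / n + (x0 - x.mean()) ** 2 / sxx)
    t = stats.t.ppf(0.5 + conf / 2.0, n - 2)
    y0 = b0 + b1 * x0
    return y0 - t * se, y0 + t * se

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0, 1, 30)
print(prediction_interval(x, y, x0=5.0))              # interval around ~4.5
```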
Determination of residual solvents in pharmaceuticals by thermal desorption-GC/MS.
Hashimoto, K; Urakami, K; Fujiwara, Y; Terada, S; Watanabe, C
2001-05-01
A novel method for the determination of residual solvents in pharmaceuticals by thermal desorption (TD)-GC/MS has been established. A programmed-temperature pyrolyzer (double-shot pyrolyzer) is applied for the TD. This method does not require any sample pretreatment and needs only very small amounts of sample. Solvents desorbed directly from intact pharmaceuticals (ca. 1 mg) in the desorption cup (5 mm x 3.8 mm i.d.) were cryofocused at the head of a capillary column prior to GC/MS analysis. The desorption temperature was set, for each sample individually, at a point about 20 degrees C higher than its melting point, and held for 3 min. The analytical results for 7 different pharmaceuticals were in agreement with those obtained by direct injection (DI) of the solution, following USP XXIII. The proposed TD-GC/MS method was demonstrated to be very useful for the identification and quantification of residual solvents. Furthermore, the method is simple, allows rapid analysis and gives good repeatability.
Ophus, Colin; Rasool, Haider I.; Linck, Martin; ...
2016-11-30
We develop an automatic and objective method to measure and correct residual aberrations in atomic-resolution HRTEM complex exit waves for crystalline samples aligned along a low-index zone axis. Our method uses the approximate rotational point symmetry of a column of atoms or single atom to iteratively calculate a best-fit numerical phase plate for this symmetry condition, and does not require information about the sample thickness or precise structure. We apply our method to two experimental focal series reconstructions, imaging a β-Si3N4 wedge with O and N doping, and a single-layer graphene grain boundary. We use peak and lattice fitting to evaluate the precision of the corrected exit waves. We also apply our method to the exit wave of a Si wedge retrieved by off-axis electron holography. In all cases, the software correction of the residual aberration function improves the accuracy of the measured exit waves.
NASA Astrophysics Data System (ADS)
Liu, Jianjun; Kan, Jianquan
2018-04-01
In this paper, a new method for identifying genetically modified material from its terahertz spectrum is proposed: a support vector machine (SVM) based on affinity propagation clustering. The algorithm uses affinity propagation clustering to analyse and label the unlabeled training samples, and the SVM training data are continuously updated during the iterative process. When establishing the identification model, the training samples therefore do not need to be labeled manually; the error caused by manually labeled samples is reduced, and the identification accuracy of the model is greatly improved.
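A hedged sketch of this combination using scikit-learn; the pseudo-labeling rule (each cluster inherits the label of the labeled sample nearest its exemplar) and all parameters are assumptions rather than the paper's exact algorithm:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

def ap_svm_fit(X_labeled, y_labeled, X_unlabeled):
    """Pseudo-label unlabeled samples by affinity propagation, then train an SVM."""
    ap = AffinityPropagation(damping=0.9, random_state=0).fit(X_unlabeled)
    pseudo = np.empty(len(X_unlabeled), dtype=y_labeled.dtype)
    for k, center in enumerate(ap.cluster_centers_):
        # Each cluster takes the label of the closest labeled sample.
        nearest = np.argmin(((X_labeled - center) ** 2).sum(axis=1))
        pseudo[ap.labels_ == k] = y_labeled[nearest]
    X = np.vstack([X_labeled, X_unlabeled])
    y = np.concatenate([y_labeled, pseudo])
    return SVC(kernel="rbf").fit(X, y)

# Toy spectra stand-in: two well-separated classes, mostly unlabeled.
X, y = make_blobs(n_samples=120, centers=2, random_state=3)
model = ap_svm_fit(X[:20], y[:20], X[20:])
print(model.score(X[:20], y[:20]))
```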
ICPP environmental monitoring report, CY 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-01-01
Summarized in this report are the data collected through Environmental Monitoring programs conducted at the Idaho Chemical Processing Plant (ICPP) by the Environmental Assurance (EA) Section of the Environmental Compliance and SIS Operations (EC/SIS) Department. Published in response to DOE Order 5484.1, Chap. 3, this report covers the period from December 20, 1988 through December 19, 1989. The ICPP is responsible for complying with all applicable Federal, State, Local and DOE Rules, Regulations and Orders. Radiological effluents and emissions are regulated by the DOE in accordance with the Derived Concentration Guides (DCGs) as presented in DOE Order 5400.5, and the State of Idaho Maximum Permissible Concentrations (MPCs). The Environmental Protection Agency (EPA) regulates all nonradiological waste resulting from the ICPP operations including all airborne, liquid, and solid waste. The EA Section completed a Quality Assurance (QA) Plan for Environmental Monitoring activities during the third quarter of 1986. QA activities have resulted in the ICPP's implementation of the Environmental Protection Agency rules and guidelines pertaining to the collection, analyses, and reporting of environmentally related samples. Where no approved methods of analysis existed for radionuclides, the currently used methods were submitted for EPA approval. 17 figs., 11 tabs.
Quick, Josh; Grubaugh, Nathan D; Pullan, Steven T; Claro, Ingra M; Smith, Andrew D; Gangavarapu, Karthik; Oliveira, Glenn; Robles-Sikisaka, Refugio; Rogers, Thomas F; Beutler, Nathan A; Burton, Dennis R; Lewis-Ximenez, Lia Laura; de Jesus, Jaqueline Goes; Giovanetti, Marta; Hill, Sarah; Black, Allison; Bedford, Trevor; Carroll, Miles W; Nunes, Marcio; Alcantara, Luiz Carlos; Sabino, Ester C; Baylis, Sally A; Faria, Nuno; Loose, Matthew; Simpson, Jared T; Pybus, Oliver G; Andersen, Kristian G; Loman, Nicholas J
2018-01-01
Genome sequencing has become a powerful tool for studying emerging infectious diseases; however, genome sequencing directly from clinical samples without isolation remains challenging for viruses such as Zika, where metagenomic sequencing methods may generate insufficient numbers of viral reads. Here we present a protocol for generating coding-sequence complete genomes comprising an online primer design tool, a novel multiplex PCR enrichment protocol, optimised library preparation methods for the portable MinION sequencer (Oxford Nanopore Technologies) and the Illumina range of instruments, and a bioinformatics pipeline for generating consensus sequences. The MinION protocol does not require an internet connection for analysis, making it suitable for field applications with limited connectivity. Our method relies on multiplex PCR for targeted enrichment of viral genomes from samples containing as few as 50 genome copies per reaction. Viral consensus sequences can be achieved starting with clinical samples in 1-2 days following a simple laboratory workflow. This method has been successfully used by several groups studying Zika virus evolution and is facilitating an understanding of the spread of the virus in the Americas. PMID:28538739
Estimating numbers of females with cubs-of-the-year in the Yellowstone grizzly bear population
Keating, K.A.; Schwartz, C.C.; Haroldson, M.A.; Moody, D.
2001-01-01
For grizzly bears (Ursus arctos horribilis) in the Greater Yellowstone Ecosystem (GYE), minimum population size and allowable numbers of human-caused mortalities have been calculated as a function of the number of unique females with cubs-of-the-year (FCUB) seen during a 3- year period. This approach underestimates the total number of FCUB, thereby biasing estimates of population size and sustainable mortality. Also, it does not permit calculation of valid confidence bounds. Many statistical methods can resolve or mitigate these problems, but there is no universal best method. Instead, relative performances of different methods can vary with population size, sample size, and degree of heterogeneity among sighting probabilities for individual animals. We compared 7 nonparametric estimators, using Monte Carlo techniques to assess performances over the range of sampling conditions deemed plausible for the Yellowstone population. Our goal was to estimate the number of FCUB present in the population each year. Our evaluation differed from previous comparisons of such estimators by including sample coverage methods and by treating individual sightings, rather than sample periods, as the sample unit. Consequently, our conclusions also differ from earlier studies. Recommendations regarding estimators and necessary sample sizes are presented, together with estimates of annual numbers of FCUB in the Yellowstone population with bootstrap confidence bounds.
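For illustration, one simple nonparametric estimator of the general kind compared in such evaluations, a bias-corrected Chao-type bound computed from sighting frequencies (the estimator set actually evaluated in the study differs):

```python
def chao_estimate(sighting_counts):
    """Chao-type lower-bound estimate of the number of unique females.

    sighting_counts: dict mapping bear id -> number of distinct sightings.
    Uses the bias-corrected form S_obs + f1*(f1 - 1) / (2*(f2 + 1)), where
    f1 and f2 are the counts of animals seen exactly once and exactly twice.
    """
    f1 = sum(1 for c in sighting_counts.values() if c == 1)
    f2 = sum(1 for c in sighting_counts.values() if c == 2)
    s_obs = len(sighting_counts)
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

# Hypothetical sighting histories for five observed females with cubs.
print(chao_estimate({"F01": 3, "F02": 1, "F03": 1, "F04": 2, "F05": 1}))  # 6.5
```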
Pereira, Éderson R; de Almeida, Tarcísio S; Borges, Daniel L G; Carasek, Eduardo; Welz, Bernhard; Feldmann, Jörg; Campo Menoyo, Javier Del
2016-04-01
High-resolution continuum source graphite furnace atomic absorption spectrometry (HR-CS GF AAS) has been applied for the development of a method for the determination of total As in fish oil samples using direct analysis. The method does not use any sample pretreatment, besides dilution with 1-propanol, in order to decrease the oil viscosity. The stability and sensitivity of As were evaluated using ruthenium and iridium as permanent chemical modifiers and palladium added in solution over the sample. The best results were obtained with ruthenium as the permanent modifier and palladium in solution added to samples and standard solutions. Under these conditions, aqueous standard solutions could be used for calibration for the fish oil samples diluted with 1-propanol. The pyrolysis and atomization temperatures were 1400 °C and 2300 °C, respectively, and the limit of detection and characteristic mass were 30 pg and 43 pg, respectively. Accuracy and precision of the method have been evaluated using microwave-assisted acid digestion of the samples with subsequent determination by HR-CS GF AAS and ICP-MS; the results were in agreement (95% confidence level) with those of the proposed method. Copyright © 2015 Elsevier B.V. All rights reserved.
NK sensitivity of neuroblastoma cells determined by a highly sensitive coupled luminescent method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogbomo, Henry; Hahn, Anke; Geiler, Janina
2006-01-06
The measurement of natural killer (NK) cell toxicity against tumor or virus-infected cells, especially in cases with small blood samples, requires highly sensitive methods. Here, a coupled luminescent method (CLM) based on glyceraldehyde-3-phosphate dehydrogenase release from injured target cells was used to evaluate the cytotoxicity of interleukin-2 activated NK cells against neuroblastoma cell lines. In contrast to most other methods, CLM does not require the pretreatment of target cells with labeling substances, which could be toxic or radioactive. Effective killing of tumor cells was achieved at low effector/target ratios ranging from 0.5:1 to 4:1. CLM provides a highly sensitive, safe, and fast procedure for measuring NK cell activity with small blood samples such as those obtained from pediatric patients.
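The percent-specific-lysis arithmetic common to release assays of this kind, sketched with invented counts; the CLM-specific reagents and luminescence readout are not modeled:

```python
def percent_cytotoxicity(experimental, spontaneous, maximum):
    """Percent specific lysis from enzyme-release readings (e.g. GAPDH luminescence).

    experimental: signal with effector and target cells together
    spontaneous:  signal from target cells alone
    maximum:      signal from fully lysed target cells
    """
    return 100.0 * (experimental - spontaneous) / (maximum - spontaneous)

# e.g. luminescence counts at an effector:target ratio of 2:1 (invented numbers)
print(percent_cytotoxicity(experimental=5200.0, spontaneous=1100.0, maximum=9800.0))
```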
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaiser, Brooke LD; Wunschel, David S.; Sydor, Michael A.
2015-08-07
Proteomic analysis of bacterial samples provides valuable information about cellular responses and functions under different environmental pressures. Proteomic analysis is dependent upon efficient extraction of proteins from bacterial samples without introducing bias toward extraction of particular protein classes. While no single method can recover 100% of the bacterial proteins, selected protocols can improve overall protein isolation, peptide recovery, or enrich for certain classes of proteins. The method presented here is technically simple and does not require specialized equipment such as a mechanical disrupter. Our data reveal that for particularly challenging samples, such as B. anthracis Sterne spores, trichloroacetic acid extraction improved the number of proteins identified within a sample compared to bead beating (714 vs 660, respectively). Further, TCA extraction enriched for 103 known spore specific proteins whereas bead beating resulted in 49 unique proteins. Analysis of C. botulinum samples grown to 5 days, composed of vegetative biomass and spores, showed a similar trend with improved protein yields and identification using our method compared to bead beating. Interestingly, easily lysed samples, such as B. anthracis vegetative cells, were equally as effectively processed via TCA and bead beating, but TCA extraction remains the easiest and most cost effective option. As with all assays, supplemental methods such as implementation of an alternative preparation method may provide additional insight to the protein biology of the bacteria being studied.
An evaluation of population index and estimation techniques for tadpoles in desert pools
Jung, Robin E.; Dayton, Gage H.; Williamson, Stephen J.; Sauer, John R.; Droege, Sam
2002-01-01
Using visual (VI) and dip net indices (DI) and double-observer (DOE), removal (RE), and neutral red dye capture-recapture (CRE) estimates, we counted, estimated, and censused Couch's spadefoot (Scaphiopus couchii) and canyon treefrog (Hyla arenicolor) tadpole populations in Big Bend National Park, Texas. Initial dye experiments helped us determine appropriate dye concentrations and exposure times to use in mesocosm and field trials. The mesocosm study revealed higher tadpole detection rates, more accurate population estimates, and lower coefficients of variation among pools compared to those from the field study. In both mesocosm and field studies, CRE was the best method for estimating tadpole populations, followed by DOE and RE. In the field, RE, DI, and VI often underestimated populations in pools with higher tadpole numbers. DI improved with increased sampling. Larger pools supported larger tadpole populations, and tadpole detection rates in general decreased with increasing pool volume and surface area. Hence, pool size influenced bias in tadpole sampling. Across all techniques, tadpole detection rates differed among pools, indicating that sampling bias was inherent and techniques did not consistently sample the same proportion of tadpoles in each pool. Estimating bias (i.e., calculating detection rates) therefore was essential in assessing tadpole abundance. Unlike VI and DOE, DI, RE, and CRE could be used in turbid waters in which tadpoles are not visible. The tadpole population estimates we used accommodated differences in detection probabilities in simple desert pool environments but may not work in more complex habitats.
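As one concrete capture-recapture calculation of the kind applicable to the dyed tadpoles, Chapman's bias-corrected Lincoln-Petersen estimator (an illustration; the paper's CRE computations may differ in detail):

```python
def lincoln_petersen(marked, captured, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen estimate of abundance.

    marked:     tadpoles dyed and released in the first pass
    captured:   tadpoles taken in the second pass
    recaptured: dyed tadpoles among the second-pass catch
    """
    return (marked + 1) * (captured + 1) / (recaptured + 1) - 1

print(lincoln_petersen(marked=120, captured=95, recaptured=31))  # ~362 tadpoles
```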
Alvarado-Kristensson, Maria
2018-01-01
When using fluorescence microscope techniques to study cells, it is essential that the cell structure and contents are preserved after preparation of the samples, and that the preparation method employed does not create artefacts that can be perceived as cellular structures/components. γ-Tubulin forms filaments that in some cases are immunostained with an anti-γ-tubulin antibody, but this immunostaining is not reproducible [1, 2].
A modular approach for automated sample preparation and chemical analysis
NASA Technical Reports Server (NTRS)
Clark, Michael L.; Turner, Terry D.; Klingler, Kerry M.; Pacetti, Randolph
1994-01-01
Changes in international relations, especially within the past several years, have dramatically affected the programmatic thrusts of the U.S. Department of Energy (DOE). The DOE now is addressing the environmental cleanup required as a result of 50 years of nuclear arms research and production. One major obstacle in the remediation of these areas is the chemical determination of potentially contaminated material using currently acceptable practices. Process bottlenecks and exposure to hazardous conditions pose problems for the DOE. One proposed solution is the application of modular automated chemistry using Standard Laboratory Modules (SLM) to perform Standard Analysis Methods (SAM). The Contaminant Analysis Automation (CAA) Program has developed standards and prototype equipment that will accelerate the development of modular chemistry technology and is transferring this technology to private industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, T.; Hera, K.; Coleman, C.
2011-12-05
Evaluation of Defense Waste Processing Facility (DWPF) Chemical Process Cell (CPC) cycle time identified several opportunities to improve the CPC processing time. The Mechanical Systems & Custom Equipment Development (MS&CED) Section of the Savannah River National Laboratory (SRNL) recently completed the evaluation of one of these opportunities - the possibility of using an Isolok sampling valve as an alternative to the Hydragard valve for taking DWPF process samples at the Slurry Mix Evaporator (SME). The use of an Isolok for SME sampling has the potential to improve operability, reduce maintenance time, and decrease CPC cycle time. The SME acceptability testing for the Isolok was requested in Task Technical Request (TTR) HLW-DWPF-TTR-2010-0036 and was conducted as outlined in Task Technical and Quality Assurance Plan (TTQAP) SRNLRP-2011-00145. RW-0333P QA requirements applied to the task, and the results from the investigation were documented in SRNL-STI-2011-00693. Measurement of the chemical composition of study samples was a critical component of the SME acceptability testing of the Isolok. A sampling and analytical plan supported the investigation, with the analytical plan directing that the study samples be prepared by a cesium carbonate (Cs2CO3) fusion dissolution method and analyzed by Inductively Coupled Plasma - Optical Emission Spectroscopy (ICP-OES). The use of the cesium carbonate preparation method for the Isolok testing provided an opportunity for an additional assessment of this dissolution method, which is being investigated as a potential replacement for the two methods (i.e., sodium peroxide fusion and mixed acid dissolution) that have been used at the DWPF for the analysis of SME samples. Earlier testing of the Cs2CO3 method yielded promising results, which led to a TTR from Savannah River Remediation, LLC (SRR) to SRNL for additional support and an associated TTQAP to direct the SRNL efforts. A technical report resulting from this work was issued that recommended that the mixed acid method be replaced by the Cs2CO3 method for the measurement of magnesium (Mg), sodium (Na), and zirconium (Zr), with additional testing of the method by the DWPF Laboratory being needed before further implementation of the Cs2CO3 method at that laboratory. While the SME acceptability testing of the Isolok does not address any of the open issues remaining after the publication of the recommendation for the replacement of the mixed acid method by the Cs2CO3 method (since those issues are to be addressed by the DWPF Laboratory), the Cs2CO3 testing associated with the Isolok testing does provide additional insight into the performance of the method as conducted by SRNL. The performance is investigated by looking at the composition measurement data generated by the samples of a standard glass, the Analytical Reference Glass - 1 (ARG-1), that were prepared by the Cs2CO3 method and included in the SME acceptability testing of the Isolok. The measurements of these samples were presented as part of the study results, but no statistical analysis of these measurements was conducted as part of those results. It is the purpose of this report to provide that analysis, which was supported using JMP Version 7.0.2.
Movie denoising by average of warped lines.
Bertalmío, Marcelo; Caselles, Vicent; Pardo, Alvaro
2007-09-01
Here, we present an efficient method for movie denoising that does not require any motion estimation. The method is based on the well-known fact that averaging several realizations of a random variable reduces the variance. For each pixel to be denoised, we look for close similar samples along the level surface passing through it, and with these similar samples we estimate the denoised pixel. The search for close similar samples is done by warping lines in spatiotemporal neighborhoods. To that end, we present an algorithm based on a method for epipolar line matching in stereo pairs which has per-line complexity O(N), where N is the number of columns in the image. In this way, when applied to the image sequence, our algorithm is computationally efficient, having a complexity of the order of the total number of pixels. Furthermore, we show that the presented method is unsupervised and is adapted to denoising image sequences with additive white noise while respecting the visual details of the movie frames. We have also experimented with other types of noise with satisfactory results.
NASA Astrophysics Data System (ADS)
Harun, N.; Darmawan, E.; Nurani, L. H.
2017-11-01
Hibiscus sabdariffa contains flavonoids, triterpenoids and anthocyanins, which function as immunostimulants. H. sabdariffa is considered safe for the animal kidney; nonetheless, its known side effects need to be further investigated for the human kidney. This research aims to investigate the effect of a calyx capsule-ethanol extract of H. sabdariffa on the renal function of healthy males and females over a 30-day period by monitoring the Scr and Clcr components in their blood samples. The method of this experimental research was pre- and post-treatment comparison involving 20 healthy volunteers who met the inclusion and exclusion criteria. The volunteers completed informed consent for this experiment and were divided into two groups (10 male and 10 female). Each group was given orally 500 mg of the calyx capsule-ethanol extract of H. sabdariffa per day for a 30-day period. Blood samples were taken on day 0, on day 30 after consuming the capsule, and on day 45 (15 days after the last day of capsule intake) in order to measure the Scr and Clcr concentrations in the blood by using the Jaffe and Cockcroft-Gault methods. The results of each sampling day were analyzed statistically and compared using repeated-measures ANOVA and the Friedman test. The results suggest that there was a difference in renal function between the day 0, 30 and 45 samplings. However, there was no significant difference in Scr and Clcr concentrations between female and male volunteers (p>0.05). Specifically, gender affects the Scr concentration (p<0.05) but does not affect the Clcr concentration (p>0.05). In addition, age and Body Mass Index (BMI) do not affect the Scr and Clcr concentrations (p>0.05). The side effects discovered through monitoring were increased micturition and bloatedness. The calyx capsule-ethanol extract of H. sabdariffa does not affect the renal function of healthy volunteers.
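The Cockcroft-Gault estimate named in the abstract follows standard arithmetic; a short sketch with illustrative inputs:

```python
def cockcroft_gault(age_years, weight_kg, scr_mg_dl, female):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault equation."""
    clcr = (140 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return clcr * 0.85 if female else clcr  # 0.85 correction factor for women

print(cockcroft_gault(age_years=25, weight_kg=60, scr_mg_dl=0.9, female=True))  # ~90.5
```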
A hybrid scanning mode for fast scanning ion conductance microscopy (SICM) imaging
Zhukov, Alex; Richards, Owen; Ostanin, Victor; Korchev, Yuri; Klenerman, David
2012-01-01
We have developed a new method of controlling the pipette for scanning ion conductance microscopy to obtain high-resolution images faster. The method keeps the pipette close to the surface during a single line scan but does not follow the exact surface topography, which is calculated by using the ion current. Using an FPGA platform we demonstrate this new method on model test samples and then on live cells. This method will be particularly useful to follow changes occurring on relatively flat regions of the cell surface at high spatial and temporal resolutions. PMID:22902298
Excited-State Effective Masses in Lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
George Fleming, Saul Cohen, Huey-Wen Lin
2009-10-01
We apply black-box methods, i.e. where the performance of the method does not depend upon initial guesses, to extract excited-state energies from Euclidean-time hadron correlation functions. In particular, we extend the widely used effective-mass method to incorporate multiple correlation functions and produce effective mass estimates for multiple excited states. In general, these excited-state effective masses will be determined by finding the roots of some polynomial. We demonstrate the method using sample lattice data to determine excited-state energies of the nucleon and compare the results to other energy-level finding techniques.
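A one-correlator sketch of the ordinary effective-mass construction that the paper generalizes; the multi-correlator, polynomial-root extension to excited states is not reproduced here:

```python
import numpy as np

def effective_mass(corr):
    """Ground-state effective mass from a Euclidean-time correlator C(t).

    m_eff(t) = log(C(t) / C(t+1)); for a single decaying exponential this is
    constant, and for multiple states it plateaus at the ground-state energy.
    """
    corr = np.asarray(corr, dtype=float)
    return np.log(corr[:-1] / corr[1:])

t = np.arange(16)
corr = 0.9 * np.exp(-0.5 * t) + 0.1 * np.exp(-1.2 * t)   # two-state toy model
print(effective_mass(corr))   # starts near 0.55, plateaus toward 0.5
```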
NASA Astrophysics Data System (ADS)
Lu, Xinguo; Chen, Dan
2017-08-01
Traditional supervised classifiers work only with labeled data and neglect the large amount of data that lacks sufficient follow-up information. Consequently, the small sample size limits the design of an appropriate classifier. In this paper, a transductive learning method is addressed that combines a filtering strategy within the transductive framework with a progressive labeling strategy. The progressive labeling strategy does not need to consider the distribution of labeled samples in order to evaluate the distribution of unlabeled samples, and can effectively solve the problem of evaluating the proportion of positive and negative samples in the working set. Our experimental results demonstrate that the proposed technique has great potential for cancer prediction based on gene expression.
The Use and Validation of Qualitative Methods Used in Program Evaluation.
ERIC Educational Resources Information Center
Plucker, Frank E.
When conducting a two-year college program review, there are several advantages to supplementing the standard quantitative research approach with qualitative measures. Qualitative research does not depend on a large number of random samples, it uses a flexible design which can be refined as the research is executed, and it generates findings in a…
Does Gender Matter? An Exploratory Study of Perspectives across Genders, Age and Education
ERIC Educational Resources Information Center
Carinci, Sherrie; Wong, Pia Lindquist
2009-01-01
Using a convenience sample and survey research methods, the authors seek to better understand how perspectives on gender are shaped by individuals' age, level of education and gender. Study participants responded in writing to scenarios and survey questions, revealing their personal views on gender as an identity category and as a marker in the…
ERIC Educational Resources Information Center
Charalambous, Charalambos Y.; Hill, Heather C.; Ball, Deborah L.
2011-01-01
Several studies have documented prospective teachers' (PSTs) difficulties in offering instructional explanations. However, less is known about PSTs' learning to provide explanations. To address this gap, we trace changes in the explanations offered by a purposeful sample of PSTs before and after a mathematics content/methods course sequence.…
Makrlíková, Anna; Opekar, František; Tůma, Petr
2015-08-01
A computer-controlled hydrodynamic sample introduction method has been proposed for short-capillary electrophoresis. In the method, the BGE flushes the sample from the loop of a six-way sampling valve and carries it to the injection end of the capillary. A short pressure impulse is generated in the electrolyte stream at the time when the sample zone is at the capillary, leading to injection of the sample into the capillary. Then the electrolyte flow is stopped and the separation voltage is turned on. This way of sample introduction does not involve movement of the capillary, and both of its ends remain in the solution during both sample injection and separation. The amount of sample introduced into the capillary is controlled by the duration of the pressure pulse. The new sample introduction method was tested in the determination of ammonia, creatinine, uric acid, and hippuric acid in human urine. The determination was performed in a capillary with an overall length of 10.5 cm, in two BGEs with compositions 50 mM MES + 5 mM NaOH (pH 5.1) and 1 M acetic acid + 1.5 mM crown ether 18-crown-6 (pH 2.4). A dual contactless conductivity/UV spectrometric detector was used for the detection. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Montagna, Maria Teresa; De Giglio, Osvalda; Cristina, Maria Luisa; Napoli, Christian; Pacifico, Claudia; Agodi, Antonella; Baldovin, Tatjana; Casini, Beatrice; Coniglio, Maria Anna; D’Errico, Marcello Mario; Delia, Santi Antonino; Deriu, Maria Grazia; Guida, Marco; Laganà, Pasqualina; Liguori, Giorgio; Moro, Matteo; Mura, Ida; Pennino, Francesca; Privitera, Gaetano; Romano Spica, Vincenzo; Sembeni, Silvia; Spagnolo, Anna Maria; Tardivo, Stefano; Torre, Ida; Valeriani, Federica; Albertini, Roberto; Pasquarella, Cesira
2017-01-01
Healthcare facilities (HF) represent an at-risk environment for legionellosis transmission occurring after inhalation of contaminated aerosols. In general, the control of water is preferred to that of air because, to date, there are no standardized sampling protocols. Legionella air contamination was investigated in the bathrooms of 11 HF by active sampling (Surface Air System and Coriolis®μ) and passive sampling using settling plates. During the 8-hour sampling, hot tap water was sampled three times. All air samples were evaluated using culture-based methods, whereas liquid samples collected using the Coriolis®μ were also analyzed by real-time PCR. Legionella presence in the air and water was then compared by sequence-based typing (SBT) methods. Air contamination was found in four HF (36.4%) by at least one of the culturable methods. The culturable investigation by Coriolis®μ did not yield Legionella in any enrolled HF. However, molecular investigation using Coriolis®μ resulted in eight HF testing positive for Legionella in the air. Comparison of Legionella air and water contamination indicated that Legionella water concentration could be predictive of its presence in the air. Furthermore, a molecular study of 12 L. pneumophila strains confirmed a match between the Legionella strains from air and water samples by SBT for three out of four HF that tested positive for Legionella by at least one of the culturable methods. Overall, our study shows that Legionella air detection cannot replace water sampling because the absence of microorganisms from the air does not necessarily represent their absence from water; nevertheless, air sampling may provide useful information for risk assessment. The liquid impingement technique appears to have the greatest capacity for collecting airborne Legionella if combined with molecular investigations. PMID:28640202
Practical quantum random number generator based on measuring the shot noise of vacuum states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen Yong; Zou Hongxin; Tian Liang
2010-06-15
The shot noise of vacuum states is a kind of quantum noise and is totally random. In this paper a nondeterministic random number generation scheme based on measuring the shot noise of vacuum states is presented and experimentally demonstrated. We use a homodyne detector to measure the shot noise of vacuum states. Considering that the frequency bandwidth of our detector is limited, we derive the optimal sampling rate so that sampling points have the least correlation with each other. We also choose a method to extract random numbers from sampling values, and prove that the influence of classical noise can be avoided with this method so that the detector does not have to be shot-noise limited. The random numbers generated with this scheme have passed the ENT and Diehard tests.
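As a rough illustration of the post-processing stage, the sketch below keeps only the least-significant bits of the digitized samples and XORs neighbors, a generic extractor in the same spirit; it is an assumption-laden stand-in, not the extraction method derived in the paper, and all names are hypothetical.

import numpy as np

def extract_bits(adc_codes, keep_bits=4):
    # Keep the least-significant bits, which are dominated by shot noise
    # rather than by slowly varying classical noise (illustrative choice).
    codes = np.asarray(adc_codes, dtype=np.int64)
    lsbs = codes & ((1 << keep_bits) - 1)
    # XOR adjacent samples to further whiten residual correlation.
    mixed = lsbs[:-1] ^ lsbs[1:]
    bits = ((mixed[:, None] >> np.arange(keep_bits)) & 1).ravel()
    return bits

# Usage: feed extract_bits(samples) to ENT- or Diehard-style test suites.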
Ultrasound absorption measurements in rock samples at low temperatures
NASA Technical Reports Server (NTRS)
Herminghaus, C.; Berckhemer, H.
1974-01-01
A new technique, comparable with the reverberation method in room acoustics, is described. It allows Q-measurements on rock samples of arbitrary shape in the frequency range of 50 to 600 kHz in vacuum (0.1 mtorr) and at low temperatures (+20 to -180 C). The method was developed in particular to investigate rock samples under lunar conditions. Ultrasound absorption has been measured on volcanics, breccia, gabbros, feldspar and quartz of different grain size and texture, yielding the following results: evacuation raises Q mainly through lowering the humidity in the rock. In a dry compact rock, the effect of evacuation is small. With decreasing temperature, Q generally increases. Between +20 and -30 C, Q does not change much. With further decrease of temperature, distinct anomalies appear in many cases, where Q becomes frequency dependent.
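For orientation, Q can be recovered from such a ring-down record by fitting the logarithm of the decaying amplitude; the sketch below assumes the textbook single-mode exponential decay A(t) = A0 exp(-pi f t / Q), not necessarily the authors' exact procedure, and the function name is illustrative.

import numpy as np

def q_from_ringdown(t, amplitude, freq_hz):
    # ln A(t) = ln A0 - (pi * f / Q) * t, so Q follows from the slope.
    slope, _ = np.polyfit(t, np.log(amplitude), 1)
    return -np.pi * freq_hz / slope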
An opportunity cost approach to sample size calculation in cost-effectiveness analysis.
Gafni, A; Walter, S D; Birch, S; Sendi, P
2008-01-01
The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) as a method to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculations can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.
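A minimal numeric illustration of the ICER threshold rule that the authors critique (all figures below are hypothetical):

# Hypothetical incremental cost and effect of a new intervention.
delta_cost = 12000.0          # incremental cost ($)
delta_effect = 0.8            # incremental effect (QALYs)
icer = delta_cost / delta_effect       # 15000 $/QALY
lam = 50000.0                          # threshold value (lambda)
adopt = icer < lam                     # the threshold decision rule
# The equivalent net-benefit form avoids ratio pathologies:
inmb = lam * delta_effect - delta_cost   # incremental net monetary benefit
adopt_nmb = inmb > 0                     # same decision, no division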
A Spectral Method for Color Quantitation of a Protein Drug Solution.
Swartz, Trevor E; Yin, Jian; Patapoff, Thomas W; Horst, Travis; Skieresz, Susan M; Leggett, Gordon; Morgan, Charles J; Rahimi, Kimia; Marhoul, Joseph; Kabakoff, Bruce
2016-01-01
Color is an important quality attribute for biotherapeutics. In the biotechnology industry, a visual method is most commonly utilized for color characterization of liquid drug protein solutions. The color testing method is used for both batch release and stability testing for quality control. Using that method, an analyst visually determines the color of the sample by choosing the closest matching European Pharmacopeia reference color solution. The requirement to judge the best match makes it a subjective method. Furthermore, the visual method does not capture data on hue or chroma that would allow for improved product characterization and the ability to detect subtle differences between samples. To overcome these challenges, we describe a quantitative method for color determination that greatly reduces the variability in measuring color and allows for a more precise understanding of color differences. Following color industry standards established by the International Commission on Illumination, this method converts a protein solution's visible absorption spectrum to L*a*b* color space. Color matching is achieved within the L*a*b* color space, a practice that is already widely used in other industries. The work performed here is intended to facilitate the adoption of, and transition from the traditional visual assessment method to, a quantitative spectral method. We describe the algorithm used such that the quantitative spectral method correlates with the currently used visual method. In addition, we provide the L*a*b* values for the European Pharmacopeia reference color solutions required for the quantitative method. We determined these L*a*b* values by gravimetrically preparing and measuring multiple lots of the reference color solutions. We demonstrate that the visual assessment and the quantitative spectral method are comparable using both low- and high-concentration antibody solutions and solutions with varying turbidity. In the biotechnology industry, a visual assessment is the most commonly used method for color characterization, batch release, and stability testing of liquid protein drug solutions. Using this method, an analyst visually determines the color of the sample by choosing the closest match to a standard color series. This visual method can be subjective because it requires an analyst to make a judgment of the best match of color of the sample to the standard color series, and it does not capture data on hue and chroma that would allow for improved product characterization and the ability to detect subtle differences between samples. To overcome these challenges, we developed a quantitative spectral method for color determination that greatly reduces the variability in measuring color and allows for a more precise understanding of color differences. The details of the spectral quantitative method are described. A comparison between the visual assessment method and spectral quantitative method is presented. This study supports the transition to a quantitative spectral method from the visual assessment method for quality testing of protein solutions. © PDA, Inc. 2016.
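The CIE pipeline the paper follows, integrating the visible spectrum against color-matching functions to obtain tristimulus values XYZ and then converting to L*a*b*, can be sketched with the standard CIE formulas as below; the cmf, illuminant, and white arrays are assumed inputs, and this is a simplified sketch rather than the authors' validated algorithm.

import numpy as np

def xyz_to_lab(xyz, white):
    # CIE 1976 L*a*b* from tristimulus values and a reference white.
    def f(t):
        d = 6.0 / 29.0
        return np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)
    x, y, z = (f(xyz[i] / white[i]) for i in range(3))
    return 116 * y - 16, 500 * (x - y), 200 * (y - z)   # L*, a*, b*

def spectrum_to_lab(transmittance, cmf, illuminant, white):
    # cmf: (N, 3) color-matching functions; illuminant: (N,) spectral power.
    k = 100.0 / np.sum(cmf[:, 1] * illuminant)
    xyz = k * (cmf * (illuminant * transmittance)[:, None]).sum(axis=0)
    return xyz_to_lab(xyz, white)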
Bucher, Denis; Pierce, Levi C T; McCammon, J Andrew; Markwick, Phineus R L
2011-04-12
We have implemented the accelerated molecular dynamics approach (Hamelberg, D.; Mongan, J.; McCammon, J. A. J. Chem. Phys. 2004, 120 (24), 11919) in the framework of ab initio MD (AIMD). Using three simple examples, we demonstrate that accelerated AIMD (A-AIMD) can be used to accelerate solvent relaxation in AIMD simulations and facilitate the detection of reaction coordinates: (i) We show, for one cyclohexane molecule in the gas phase, that the method can be used to accelerate the rate of the chair-to-chair interconversion by a factor of ∼1 × 10⁵, while allowing for the reconstruction of the correct canonical distribution of low-energy states; (ii) We then show, for a water box of 64 H₂O molecules, that A-AIMD can also be used in the condensed phase to accelerate the sampling of water conformations, without affecting the structural properties of the solvent; and (iii) The method is then used to compute the potential of mean force (PMF) for the dissociation of Na-Cl in water, accelerating the convergence by a factor of ∼3-4 compared to conventional AIMD simulations [2]. These results suggest that A-AIMD is a useful addition to existing methods for enhanced conformational and phase-space sampling in solution. While the method does not make the use of collective variables superfluous, it also does not require the user to define a set of collective variables that can capture all the low-energy minima on the potential energy surface. This property may prove very useful when dealing with highly complex multidimensional systems that require a quantum mechanical treatment.
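The boost potential of Hamelberg et al. that A-AIMD transfers to the ab initio setting raises the potential wherever it lies below a threshold energy E, flattening basins; a minimal sketch follows (parameter values illustrative, function name hypothetical).

import numpy as np

def amd_boosted_potential(V, E, alpha):
    # dV = (E - V)^2 / (alpha + E - V) for V < E, else 0 (Hamelberg et al. 2004).
    V = np.asarray(V, dtype=float)
    dV = np.where(V < E, (E - V) ** 2 / (alpha + E - V), 0.0)
    # Canonical averages are recovered by reweighting samples with exp(beta*dV).
    return V + dV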
Integrating conventional and inverse representation for face recognition.
Xu, Yong; Li, Xuelong; Yang, Jian; Lai, Zhihui; Zhang, David
2014-10-01
Representation-based classification methods are all constructed on the basis of the conventional representation, which first expresses the test sample as a linear combination of the training samples and then exploits the deviation between the test sample and the expression result of every class to perform classification. However, this deviation does not always reflect well the difference between the test sample and each class. In this paper, we propose a novel representation-based classification method for face recognition. This method integrates conventional and inverse representation-based classification for better recognizing the face. It first produces the conventional representation of the test sample, i.e., uses a linear combination of the training samples to represent the test sample. Then it obtains the inverse representation, i.e., provides an approximation representation of each training sample of a subject by exploiting the test sample and the training samples of the other subjects. Finally, the proposed method exploits the conventional and inverse representations to generate two kinds of scores of the test sample with respect to each class and combines them to recognize the face. The paper shows the theoretical foundation and rationale of the proposed method. Moreover, this paper for the first time shows that a basic property of the human face, i.e., its symmetry, can be exploited to generate new training and test samples. As these new samples reflect possible appearances of the face, using them enables higher accuracy. The experiments show that the proposed conventional and inverse representation-based linear regression classification (CIRLRC), an improvement to linear regression classification (LRC), can obtain very high accuracy and greatly outperforms the naive LRC and other state-of-the-art conventional representation-based face recognition methods. The accuracy of CIRLRC can be 10% greater than that of LRC.
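A minimal sketch of the conventional LRC step that CIRLRC builds on, scoring each class by its reconstruction deviation; the inverse-representation and score-fusion stages of the paper are omitted, and the function name is illustrative.

import numpy as np

def lrc_predict(class_mats, y):
    # class_mats: list of (d, n_c) matrices, one per class; y: (d,) test sample.
    devs = []
    for X in class_mats:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares coding
        devs.append(np.linalg.norm(y - X @ beta))      # class-wise deviation
    return int(np.argmin(devs))                        # smallest deviation wins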
Determination of antimicrobial susceptibilities on infected urines without isolation
NASA Technical Reports Server (NTRS)
Picciolo, G. L.; Chappelle, E. W.; Deming, J. W.; Shrock, C. G.; Vellend, H.; Barza, M. J.; Weinstein, L. (Inventor)
1979-01-01
A method is described for the quick determination of the susceptibilities of various unidentified bacteria contained in an aqueous physiological fluid sample, particularly urine, to one or more antibiotics. A bacterial adenosine triphosphate (ATP) assay is carried out after the elimination of non-bacterial ATP to determine whether an infection exists. If an infection does exist, a portion of the sample is further processed, including subjecting parts of the portion to one or more antibiotics. Growth of the bacteria in these parts is then measured, again by an ATP assay, to determine whether the unidentified bacteria in the sample are susceptible to the antibiotic or antibiotics under test.
A simple linear model for estimating ozone AOT40 at forest sites from raw passive sampling data.
Ferretti, Marco; Cristofolini, Fabiana; Cristofori, Antonella; Gerosa, Giacomo; Gottardini, Elena
2012-08-01
A rapid, empirical method is described for estimating weekly AOT40 from ozone concentrations measured with passive samplers at forest sites. The method is based on linear regression and was developed after three years of measurements in Trentino (northern Italy). It was tested against an independent set of data from passive sampler sites across Italy. It provides good weekly estimates compared with those measured by conventional monitors (0.85 ≤ R² ≤ 0.970; 97 ≤ RMSE ≤ 302). Estimates obtained using passive sampling at forest sites are comparable to those obtained by another estimation method based on modelling hourly concentrations (R² = 0.94; 131 ≤ RMSE ≤ 351). Regression coefficients of passive sampling are similar to those obtained with conventional monitors at forest sites. Testing against an independent dataset generated by passive sampling provided similar results (0.86 ≤ R² ≤ 0.99; 65 ≤ RMSE ≤ 478). Errors tend to accumulate when weekly AOT40 estimates are summed to obtain the total AOT40 over the May-July period, and the median deviation between the two estimation methods based on passive sampling is 11%. The method proposed does not require any assumptions, complex calculation or modelling technique, and can be useful when other estimation methods are not feasible, either in principle or in practice. However, the method is not useful when estimates of hourly concentrations are of interest.
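For orientation, the sketch below contrasts the reference AOT40 computed from hourly monitor data with the style of weekly linear estimate the paper fits; the regression coefficients shown are placeholders, not the fitted values from the study.

import numpy as np

def aot40_hourly(hourly_ppb, daylight_mask):
    # Reference definition: sum of hourly exceedances above 40 ppb in daylight.
    c = np.asarray(hourly_ppb, dtype=float)[daylight_mask]
    return float(np.sum(np.clip(c - 40.0, 0.0, None)))

def aot40_weekly_estimate(weekly_mean_ppb, a=15.0, b=-350.0):
    # Empirical linear model from a weekly passive-sampler mean (a, b hypothetical).
    return max(a * weekly_mean_ppb + b, 0.0)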
Apell, Jennifer N; Gschwend, Philip M
2016-11-01
Superfund sites with sediments contaminated by hydrophobic organic compounds (HOCs) can be difficult to characterize because of the complex nature of sorption to sediments. Porewater concentrations, which are often used to model transport of HOCs from the sediment bed into overlying water, benthic organisms, and the larger food web, are traditionally estimated using sediment concentrations and sorption coefficients estimated using equilibrium partitioning (EqP) theory. However, researchers have begun using polymeric samplers to determine porewater concentrations, since this method does not require knowledge of the sediment's sorption properties. In this work, polyethylene passive samplers were deployed into sediments in the field (in situ passive sampling) and mixed with sediments in the laboratory (ex situ active sampling) that were contaminated with polychlorinated biphenyls (PCBs). The results show that porewater concentrations based on in situ and ex situ sampling generally agreed within a factor of two, but in situ concentrations were consistently lower than ex situ porewater concentrations. Imprecision arising from in situ passive sampling procedures does not explain this bias, suggesting that field processes like bioirrigation may cause the differences observed between in situ and ex situ polymeric samplers. Copyright © 2016 Elsevier Ltd. All rights reserved.
Yamashita, Hitoyoshi; Morita, Masamune; Sugiura, Haruka; Fujiwara, Kei; Onoe, Hiroaki; Takinoue, Masahiro
2015-04-01
We report an easy-to-use generation method of biologically compatible monodisperse water-in-oil microdroplets using a glass-capillary-based microfluidic device in a tabletop mini-centrifuge. This device does not require complicated microfabrication; furthermore, only a small sample volume is required in experiments. Therefore, we believe that this method will assist biochemical and cell-biological experiments. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, O.R.; Siegrist, R.L.; Mitchell, T.J.
1993-11-01
Fine-textured soils and sediments contaminated by trichloroethylene (TCE) and other chlorinated organics present a serious environmental restoration challenge at US Department of Energy (DOE) sites. DOE and Martin Marietta Energy Systems, Inc. initiated a research and demonstration project at Oak Ridge National Laboratory. The goal of the project was to demonstrate a process for closure and environmental restoration of the X-231B Solid Waste Management Unit at the DOE Portsmouth Gaseous Diffusion Plant. The X-231B Unit was used from 1976 to 1983 as a land disposal site for waste oils and solvents. Silt and clay deposits beneath the unit were contaminated with volatile organic compounds and low levels of radioactive substances. The shallow groundwater was also contaminated, and some contaminants were at levels well above drinking water standards. This document begins with a summary of the subsurface physical and contaminant characteristics obtained from investigative studies conducted at the X-231B Unit prior to January 1992 (Sect. 2). This is then followed by a description of the sample collection and analysis methods used during the baseline sampling conducted in January 1992 (Sect. 3). The results of this sampling event were used to develop spatial models for VOC contaminant distribution within the X-231B Unit.
Detection of picosecond electrical pulses using the intrinsic Franz–Keldysh effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lampin, J. F.; Desplanque, L.; Mollot, F.
2001-06-25
We report time-resolved measurements of ultrafast electrical pulses propagating on a coplanar transmission line using the intrinsic Franz–Keldysh effect. A low-temperature-grown GaAs layer deposited on a GaAs substrate allows generation and also detection of ps pulses via electroabsorption sampling (EAS). This all-optical method does not require any external sampling probe. A typical rise time of 1.1 ps has been measured. EAS is a good candidate for use in THz characterization of ultrafast devices. © 2001 American Institute of Physics.
Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G
2014-01-27
Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample are inverted numerically to retrieve its image. The technique recovers the phase, which is lost when the diffraction patterns are detected, by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image of the sample is limited by the angular extent over which the diffraction patterns are recorded and how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions on the sample can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations the limits, based on the signal to noise of the diffraction patterns and amount of overlap between adjacent scan positions, of just how large these errors can be and still be rendered tractable by this method.
Bloomstein, Edward I.; Bloomstein, Eleana; Hoover, D.B.; Smith, D.B.
1990-01-01
As part of our research into new methods for the assessment of mineral deposits, the U.S. Geological Survey has recently begun investigation of the CHIM method. As part of our studies, a translation of a Russian manual on the CHIM methodology and eight articles from the Russian literature were transmitted to provide background for our own research. The translations were done by Earth Science Translation Services of Albuquerque, New Mexico, and are presented as received, without editing on our part. Below is a bibliography of the translated articles. For approximately the past 20 years Russian geoscientists have been applying an electrogeochemical sampling technique given the Russian acronym CHIM, derived from Chastichnoe Izvlechennye Metallov, which translates as "partial extraction of metals". In this technique a direct current is introduced into the earth through collector electrodes similar to "porous pots" used in electrical geophysical applications. The solution in the cathode is dilute nitric acid, and current is passed through the cathode for times ranging from 6 hours to 48 hours or more. Electrical connections to the nitric acid are made through an inner conductor that is typically spectroscopically pure graphite. At the cathode, mobile cations collect on the graphite or in the nitric acid solution, both of which serve as the geochemical sampling media. These media are then analyzed by appropriate methods for the ions of interest. In most applications of the CHIM method only mobile cations are sampled, although the Russian literature does refer to collection of anions as well. More recently the CHIM method has been applied by the People's Republic of China and the Indian Geological Survey. The literature indicates that the method has advantages over other geochemical sampling techniques by providing increased sensitivity to the metals being searched for, especially where deposits are covered by substantial overburden. In some cases success has been claimed with overburden in excess of 500 meters. The technique appears to have been applied principally to exploration for base- and precious-metal deposits, but does not appear to be limited to these. References are made in the literature to its application in the search for nickel, cobalt, molybdenum, uranium, tin, REE, tungsten, beryllium, and oil and gas.
Daniels, Brodie; Coutsoudis, Anna; Autran, Chloe; Amundson Mansen, Kimberly; Israel-Ballard, Kiersten; Bode, Lars
2017-08-01
Human milk oligosaccharides (HMOs) have important protective functions in human milk. A low-cost remote pasteurisation temperature-monitoring system has been designed using FoneAstra, a cell phone-based networked sensing system to monitor simulated flash heat pasteurisation. To compare the pasteurisation effect on HMOs of the FoneAstra FH method with the current Sterifeed Holder method used by human milk banks. Donor human milk samples (n = 48) were obtained from a human milk bank and pasteurised using the two pasteurisation methods. HMOs were purified from samples and labelled before separation using high-performance liquid chromatography. Concentrations of total HMOs, sialylated and fucosylated HMOs and individual HMOs using the two pasteurisation methods were compared using repeated-measures ANOVA. The study demonstrated no difference in total concentration of HMOs between the two pasteurisation methods and a small but significant increase in the total concentration of HMOs regardless of pasteurisation methods compared with controls (unpasteurised samples) (p<0.0001). The FoneAstra FH pasteurisation system does not negatively affect oligosaccharides in human milk and therefore is a possible alternative for providing safely sterilised human milk for low- and middle-income countries.
Generation and coherent detection of QPSK signal using a novel method of digital signal processing
NASA Astrophysics Data System (ADS)
Zhao, Yuan; Hu, Bingliang; He, Zhen-An; Xie, Wenjia; Gao, Xiaohui
2018-02-01
We demonstrate an optical quadrature phase-shift keying (QPSK) signal transmitter and an optical receiver for demodulating the optical QPSK signal with homodyne detection and digital signal processing (DSP). DSP is employed on the homodyne detection scheme without locking the phase of the local oscillator (LO). In this paper, we present a down-sampling method that extracts a one-dimensional array, reducing unwanted samples in the constellation diagram measurement. This scheme offers the following major advantages over conventional optical QPSK signal detection methods. First, the homodyne detection scheme does not impose strict requirements on the LO, in contrast with linear optical sampling, which requires a flat spectral density and phase over the spectral support of the source under test. Second, LabVIEW software is used directly to recover the QPSK signal constellation without employing a complex DSP circuit. Third, the scheme is applicable, with minor changes, to multilevel modulation formats such as M-ary PSK and quadrature amplitude modulation (QAM) or to higher speed signals.
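As a rough illustration of DSP-based QPSK demodulation without LO phase locking, the sketch below uses the generic fourth-power (Viterbi-Viterbi style) phase estimate; it is not the authors' LabVIEW implementation or their down-sampling scheme.

import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(2, 4096))
tx = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)   # QPSK symbols
noise = 0.05 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
rx = tx * np.exp(1j * 0.4) + noise     # free-running LO phase offset of 0.4 rad
# For QPSK, tx**4 = -1 for every symbol, so mean(rx**4) ~ -exp(4j*theta).
theta = np.angle(-np.mean(rx ** 4)) / 4.0
constellation = rx * np.exp(-1j * theta)   # recovered up to a pi/2 ambiguity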
Bottled water: analysis of mycotoxins by LC-MS/MS.
Mata, A T; Ferreira, J P; Oliveira, B R; Batoréu, M C; Barreto Crespo, M T; Pereira, V J; Bronze, M R
2015-06-01
The presence of mycotoxins in food samples has been widely studied, as has its impact on human health; however, information about their distribution in the environment is scarce. An analytical method comprising a solid phase extraction procedure followed by liquid chromatography tandem mass spectrometry analysis was implemented and validated for the trace analysis of mycotoxins in drinking bottled waters. Limits of quantification achieved for the method were between 0.2 ng L⁻¹ for aflatoxins and ochratoxin, and 2.0 ng L⁻¹ for fumonisins and neosolaniol. The method was applied to real samples. Aflatoxin B2 was the most frequently detected mycotoxin in water samples, with a maximum concentration of 0.48 ± 0.05 ng L⁻¹, followed by aflatoxin B1, aflatoxin G1 and ochratoxin A. The genera Cladosporium, Fusarium and Penicillium were the fungi most frequently detected. These results show that the consumption of these waters does not represent a toxicological risk for an adult. Copyright © 2015 Elsevier Ltd. All rights reserved.
Laser fluorometric analysis of plants for uranium exploration
Harms, T.F.; Ward, F.N.; Erdman, J.A.
1981-01-01
A preliminary test of biogeochemical exploration for locating uranium occurrences in the Marfa Basin, Texas, was conducted in 1978. Only 6 of 74 plant samples (mostly catclaw mimosa, Mimosa biuncifera) contained uranium in amounts above the detection limit (0.4 ppm in the ash) of the conventional fluorometric method. The samples were then analyzed using a Scintrex UA-3 uranium analyzer, an instrument designed for direct analysis of uranium in water which can be conveniently used in a mobile field laboratory. (Use of trade names in this paper is for descriptive purposes only and does not constitute endorsement by the U.S. Geological Survey.) The detection limit for uranium in plant ash (0.05 ppm) by this method is almost an order of magnitude lower than with the conventional fluorometric method. Only 1 of the 74 samples contained uranium below the detection limit of the new method. Accuracy and precision were determined to be satisfactory. Samples of plants growing on mineralized soils and nonmineralized soils show a 15-fold difference in uranium content, whereas the soils themselves (analyzed by delayed neutron activation analysis) show only a 4-fold difference. The method involves acid digestion of ashed tissue, extraction of uranium into ethyl acetate, destruction of the ethyl acetate, dissolution of the residue in 0.005% nitric acid, and measurement. © 1981.
Ensemble-Biased Metadynamics: A Molecular Simulation Method to Sample Experimental Distributions
Marinelli, Fabrizio; Faraldo-Gómez, José D.
2015-01-01
We introduce an enhanced-sampling method for molecular dynamics (MD) simulations referred to as ensemble-biased metadynamics (EBMetaD). The method biases a conventional MD simulation to sample a molecular ensemble that is consistent with one or more probability distributions known a priori, e.g., experimental intramolecular distance distributions obtained by double electron-electron resonance or other spectroscopic techniques. To this end, EBMetaD adds an adaptive biasing potential throughout the simulation that discourages sampling of configurations inconsistent with the target probability distributions. The bias introduced is the minimum necessary to fulfill the target distributions, i.e., EBMetaD satisfies the maximum-entropy principle. Unlike other methods, EBMetaD does not require multiple simulation replicas or the introduction of Lagrange multipliers, and is therefore computationally efficient and straightforward in practice. We demonstrate the performance and accuracy of the method for a model system as well as for spin-labeled T4 lysozyme in explicit water, and show how EBMetaD reproduces three double electron-electron resonance distance distributions concurrently within a few tens of nanoseconds of simulation time. EBMetaD is integrated in the open-source PLUMED plug-in (www.plumed-code.org), and can be therefore readily used with multiple MD engines. PMID:26083917
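Conceptually, each EBMetaD update deposits a Gaussian hill whose height is scaled by the inverse of the target density at the current collective-variable value, so the bias steers sampling toward the target distribution. The one-dimensional sketch below is a simplified caricature of that update rule (the published method additionally normalizes hill heights through the entropy of the target distribution; all names are illustrative).

import numpy as np

def ebmetad_bias_update(V, grid, s_t, rho_target, w0=0.1, sigma=0.1):
    # V: current bias on a CV grid; s_t: the CV value at this update step.
    i = np.argmin(np.abs(grid - s_t))
    height = w0 / max(rho_target[i], 1e-12)   # inverse-density scaling
    return V + height * np.exp(-(grid - s_t) ** 2 / (2 * sigma ** 2))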
Clinch River remedial investigation task 9 -- benthic macroinvertebrates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, E.M. Jr.
1994-08-01
This report summarizes the results of Task 9 of the TVA/Department of Energy (DOE) Interagency Agreement supporting DOE's Clinch River Remedial Investigation. Species lists and densities (numbers/m²) of benthic macroinvertebrates sampled at 16 sites in the Clinch River and Poplar Creek embayments of upper Watts Bar Reservoir near Oak Ridge, Tennessee, in March, 1994, are presented and briefly discussed. Data are also analyzed to assess and compare quality of benthic communities at each site, according to methods developed for TVA's Reservoir Vital Signs Monitoring Program. Results of this study will be incorporated with other program tasks in a comprehensive report prepared by Oak Ridge National Laboratory in 1995, which will, in part, assess the effect of sediment contaminants on benthic macroinvertebrate communities in Watts Bar Reservoir.
Cross-Domain Semi-Supervised Learning Using Feature Formulation.
Xingquan Zhu
2011-12-01
Semi-Supervised Learning (SSL) traditionally makes use of unlabeled samples by including them into the training set through an automated labeling process. Such a primitive Semi-Supervised Learning (pSSL) approach suffers from a number of disadvantages, including false labeling and the inability to utilize out-of-domain samples. In this paper, we propose a formative Semi-Supervised Learning (fSSL) framework which explores hidden features between labeled and unlabeled samples to achieve semi-supervised learning. fSSL assumes that both labeled and unlabeled samples are generated from some hidden concepts, with labeling information partially observable for some samples. The key idea of fSSL is to recover the hidden concepts and take them as new features to link labeled and unlabeled samples for semi-supervised learning. Because unlabeled samples are only used to generate new features, rather than being explicitly included in the training set as in pSSL, fSSL overcomes the inherent disadvantages of traditional pSSL methods, especially for samples not within the same domain as the labeled instances. Experimental results and comparisons demonstrate that fSSL significantly outperforms pSSL-based methods for both within-domain and cross-domain semi-supervised learning.
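A hedged sketch of the idea: recover hidden concepts from the pooled labeled and unlabeled data and use concept memberships as new features, while only labeled samples enter training. Here k-means stands in for the paper's concept-recovery step; this is not the authors' implementation, and the names are illustrative.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def fssl_like(X_lab, y_lab, X_unlab, n_concepts=20):
    # Hidden concepts shaped by all samples, labeled and unlabeled.
    km = KMeans(n_clusters=n_concepts, n_init=10).fit(np.vstack([X_lab, X_unlab]))
    feats = lambda X: np.exp(-km.transform(X))   # soft similarity to each concept
    clf = LogisticRegression(max_iter=1000).fit(feats(X_lab), y_lab)
    return clf, feats   # unlabeled data never entered the training set itself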
ERIC Educational Resources Information Center
Hartono, Edy; Wahyudi, Sugeng; Harahap, Pahlawansjah; Yuniawan, Ahyar
2017-01-01
This study aims to analyze the relationship between lecturers' performance and their teaching competence, measured by antecedent variables of organizational learning and need for achievement. It used the Structural Equation Model as the data analysis technique, and the random sampling method to collect data from 207 lecturers of private universities in…
ERIC Educational Resources Information Center
Gallagher, Kathryn E.; Parrott, Dominic J.
2011-01-01
Objective: This study provided the first direct test of the cognitive underpinnings of the attention-allocation model and attempted to replicate and extend past behavioral findings for this model as an explanation for alcohol-related aggression. Method: A diverse community sample (55% African American) of men (N = 159) between 21 and 35 years of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lord, David; Allen, Ray; Rudeen, David
The Crude Oil Characterization Research Study is designed to evaluate whether crude oils currently transported in North America, including those produced from "tight" formations, exhibit physical or chemical properties that are distinct from conventional crudes, and how these properties associate with combustion hazards which may be realized during transportation and handling.
Robust kernel collaborative representation for face recognition
NASA Astrophysics Data System (ADS)
Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong
2015-05-01
One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance will be sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We think that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples. The noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work offers a simple and feasible way to obtain virtual face samples: imposing Gaussian noise (or other types of noise) on the original training samples to obtain possible variations of the original samples. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, such as CRC and Kernel CRC.
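A simplified sketch of the two ingredients, noised virtual training samples plus a ridge-regularized collaborative representation in an RBF kernel space; parameter values are illustrative and the paper's exact formulation may differ.

import numpy as np

def virtual_samples(X, n_copies=1, sigma=0.05, seed=0):
    # Augment training matrix X (n, d) with Gaussian-noised copies as a
    # proxy for variations in illumination, expression, and posture.
    rng = np.random.default_rng(seed)
    noisy = [X + sigma * rng.standard_normal(X.shape) for _ in range(n_copies)]
    return np.vstack([X] + noisy)

def kernel_crc_coding(X_train, y, gamma=1e-3, lam=1e-2):
    # Solve (K + lam*I) alpha = k(y) for the collaborative coding vector.
    sq = lambda A, B: ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq(X_train, X_train))
    k_y = np.exp(-gamma * sq(X_train, y[None, :])).ravel()
    return np.linalg.solve(K + lam * np.eye(len(K)), k_y)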
NASA Astrophysics Data System (ADS)
Michałowski, P. P.; Pasternak, I.; Strupiński, W.
2018-01-01
In this study, we demonstrate that graphene grown on Ge does not contain any copper contamination, and identify some of the errors affecting the accuracy of commonly used measurement methods. Indeed, one of these, the secondary ion mass spectrometry (SIMS) technique, reveals copper contamination in Ge-based graphene but does not take into account the effect of the presence of the graphene layer. We have shown that this layer increases negative ionization significantly, and thus yields false results, but also that the graphene enhances the intensity of SIMS signals by two orders of magnitude when compared with a similar graphene-free sample, enabling much better detection limits. This forms the basis of a new measurement procedure, graphene enhanced SIMS (GESIMS) (pending European patent application no. EP 16461554.4), which allows for the precise estimation of the realistic distribution of dopants and contamination in graphene. In addition, we present evidence that the GESIMS effect leads to unexpected mass interferences with double-ionized species, and that these interferences are negligible in samples without graphene. The GESIMS method also shows that graphene transferred from Cu results in increased copper contamination.
Probability Density Functions of Observed Rainfall in Montana
NASA Technical Reports Server (NTRS)
Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.
1995-01-01
The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis easily allow development of radar reflectivity factor (and by extension rain rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, a single PDF would exist for all cases, or many PDFs would share the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases will validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.
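The Pearson Type 1 family corresponds to the (generalized) beta distribution, whose method-of-moments fit is short enough to sketch; the rescaling bounds below are assumptions, and the authors additionally used Elderton's criteria for curve selection.

import numpy as np

def pearson1_mom(x, lo=None, hi=None):
    # Rescale data to (0, 1) on [lo, hi], then match the mean and variance
    # of a beta distribution; valid when var < mean * (1 - mean).
    x = np.asarray(x, dtype=float)
    lo = x.min() - 1e-9 if lo is None else lo
    hi = x.max() + 1e-9 if hi is None else hi
    u = (x - lo) / (hi - lo)
    m, v = u.mean(), u.var()
    common = m * (1 - m) / v - 1.0
    return m * common, (1 - m) * common   # shape parameters (alpha, beta)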
In April 2014, U.S. Environmental Protection Agency (EPA) environmental monitoring and assessment team members reviewed DOE's air sampling plan, visited DOE's air samplers and placed air samplers onsite near existing DOE samplers to corroborate results.
NASA Technical Reports Server (NTRS)
Nebenfuhr, A.; Lomax, T. L.
1998-01-01
We have developed an improved method for determination of gene expression levels with RT-PCR. The procedure is rapid and does not require extensive optimization or densitometric analysis. Since the detection of individual transcripts is PCR-based, small amounts of tissue samples are sufficient for the analysis of expression patterns in large gene families. Using this method, we were able to rapidly screen nine members of the Aux/IAA family of auxin-responsive genes and identify those genes which vary in message abundance in a tissue- and light-specific manner. While not offering the accuracy of conventional semi-quantitative or competitive RT-PCR, our method allows quick screening of large numbers of genes in a wide range of RNA samples with just a thermal cycler and standard gel analysis equipment.
Study of sampling systems for comets and Mars
NASA Technical Reports Server (NTRS)
Amundsen, R. J.; Clark, B. C.
1987-01-01
Several aspects of the techniques that can be applied to acquisition and preservation of samples from Mars and a cometary nucleus were examined. Scientific approaches to sampling, grounded in proven engineering methods, are the key to achieving the maximum science value from a sample return mission. If development of these approaches for collecting and preserving samples does not precede mission definition, it is likely that only suboptimal techniques will be available because of the constraints of formal schedule timelines and the normal pressure to select only the most conservative and least sophisticated approaches when development has lagged the mission milestones. With a reasonable investment now, before the final mission definition, the sampling approach can become highly developed, ready for implementation, and mature enough to help set the requirements for the mission hardware and its performance.
Eggenkamp, H G M; Louvat, P
2018-04-30
In natural samples bromine is present in trace amounts, and measurement of stable Br isotopes necessitates its separation from the matrix. Most methods described previously need large samples or samples with high Br/Cl ratios. The use of metals as reagents, proposed in previous Br distillation methods, must be avoided for multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) analyses, because of risk of cross-contamination, since the instrument is also used to measure stable isotopes of metals. Dedicated to water and evaporite samples with low Br/Cl ratios, the proposed method is a simple distillation that separates bromide from chloride for isotopic analyses by MC-ICP-MS. It is based on the difference in oxidation potential between chloride and bromide in the presence of nitric acid. The sample is mixed with dilute (1:5) nitric acid in a distillation flask and heated over a candle flame for 10 min. The distillate (bromine) is trapped in an ammonia solution and reduced to bromide. Chloride is only distilled to a very small extent. The obtained solution can be measured directly by MC-ICP-MS for stable Br isotopes. The method was tested for a variety of volumes, ammonia concentrations, pH values and distillation times and compared with the classic ion-exchange chromatography method. The method more efficiently separates Br from Cl, so that samples with lower Br/Cl ratios can be analysed, with Br isotope data in agreement with those obtained by previous methods. Unlike other Br extraction methods based on oxidation, the distillation method presented here does not use any metallic ion for redox reactions that could contaminate the mass spectrometer. It is efficient in separating Br from samples with low Br/Cl ratios. The method ensures reproducible recovery yields and a long-term reproducibility of ±0.11‰ (1 standard deviation). The distillation method was successfully applied to samples with low Br/Cl ratios and low Br amounts (down to 20 μg). Copyright © 2018 John Wiley & Sons, Ltd.
Ji, Yuan; Wang, Sue-Jane
2013-01-01
The 3 + 3 design is the most common choice among clinicians for phase I dose-escalation oncology trials. In recent reviews, more than 95% of phase I trials have been based on the 3 + 3 design. Given that it is intuitive and its implementation does not require a computer program, clinicians can conduct 3 + 3 dose escalations in practice with virtually no logistic cost, and trial protocols based on the 3 + 3 design pass institutional review board and biostatistics reviews quickly. However, the performance of the 3 + 3 design has rarely been compared with model-based designs in simulation studies with matched sample sizes. In the vast majority of statistical literature, the 3 + 3 design has been shown to be inferior in identifying true maximum-tolerated doses (MTDs), although the sample size required by the 3 + 3 design is often orders-of-magnitude smaller than model-based designs. In this article, through comparative simulation studies with matched sample sizes, we demonstrate that the 3 + 3 design has higher risks of exposing patients to toxic doses above the MTD than the modified toxicity probability interval (mTPI) design, a newly developed adaptive method. In addition, compared with the mTPI design, the 3 + 3 design does not yield higher probabilities in identifying the correct MTD, even when the sample size is matched. Given that the mTPI design is equally transparent, costless to implement with free software, and more flexible in practical situations, we highly encourage its adoption in early dose-escalation studies whenever the 3 + 3 design is also considered. We provide free software to allow direct comparisons of the 3 + 3 design with other model-based designs in simulation studies with matched sample sizes. PMID:23569307
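The 3 + 3 rule is simple enough to simulate directly, which is how matched-sample-size comparisons of this kind are run; the sketch below implements one standard variant of the rule, and the paper's simulation details may differ.

import numpy as np

def three_plus_three(tox_probs, seed=None):
    # Simulate one 3+3 trial over doses with true DLT probabilities
    # tox_probs; returns the declared MTD index (-1 if dose 1 is too toxic).
    rng = np.random.default_rng(seed)
    d = 0
    while True:
        dlt = rng.binomial(3, tox_probs[d])        # first cohort of 3
        if dlt >= 2:
            return d - 1                           # too toxic: previous dose is MTD
        if dlt == 1:
            if dlt + rng.binomial(3, tox_probs[d]) >= 2:   # expand to 6
                return d - 1
        if d == len(tox_probs) - 1:
            return d                               # highest dose reached
        d += 1

# e.g. fraction of trials finding dose index 2 as MTD:
# np.mean([three_plus_three([0.05, 0.1, 0.25, 0.5], s) == 2 for s in range(5000)])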
Stability of Azacitidine in Sterile Water for Injection
Walker, Scott E; Charbonneau, Lauren F; Law, Shirley; Earle, Craig
2012-01-01
Background: The product monograph for azacitidine states that once reconstituted, the drug may be held for only 30 min at room temperature or 8 h at 4°C. Standard doses result in wastage of a portion of each vial, and the cost of this wastage is significant, adding about $156 000 to annual drug expenditures at the authors’ institution. Objective: To evaluate the stability of azacitidine after reconstitution. Methods: Vials of azacitidine were reconstituted with sterile water for injection. At the time of reconstitution, the temperature of the diluent was 4°C for samples to be stored at 4°C or −20°C and room temperature for samples to be stored at 23°C. Solutions of azacitidine (10 or 25 mg/mL) were stored in polypropylene syringes and glass vials at room temperature (23°C), 4°C, or −20°C. The concentration of azacitidine was determined by a validated, stability-indicating liquid chromatographic method in serial samples over 9.6 h at room temperature, over 4 days at 4°C, and over 23 days at −20°C. The recommended expiry date was determined on the basis of time to reach 90% of the initial concentration according to the fastest observed degradation rates (i.e., lower limit of 95% confidence interval). Results: Azacitidine degradation was very sensitive to temperature but not storage container (glass vial or polypropylene syringe). Reconstitution with cold sterile water reduced degradation. At 23°C, 15% of the initial concentration was lost after 9.6 h; at 4°C, 32% was lost after 4 days; and at −20°C, less than 5% was lost after 23 days. Conclusions: More than 90% of the initial azacitidine concentration will be retained, with 97.5% confidence, if, during the life of the product, storage at 23°C does not exceed 2 h, storage at 4°C does not exceed 8 h, and storage at −20°C does not exceed 4 days. These expiry dates could substantially reduce wastage and cost where the time between doses does not exceed 4 days. PMID:23129863
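The expiry logic, finding when the lower 95% confidence bound of the degradation trend crosses 90% of the initial concentration, can be sketched as below; this assumes a simple linear loss model for illustration, and the authors' kinetic treatment may differ.

import numpy as np
from scipy import stats

def t90_conservative(t_hours, pct_remaining):
    # Regress % remaining on time; use the steepest plausible slope
    # (lower 95% confidence bound) and solve for the 90% crossing.
    res = stats.linregress(t_hours, pct_remaining)
    tcrit = stats.t.ppf(0.975, len(t_hours) - 2)
    worst_slope = res.slope - tcrit * res.stderr
    return (90.0 - res.intercept) / worst_slope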
Enhanced sampling simulations of DNA step parameters.
Karolak, Aleksandra; van der Vaart, Arjan
2014-12-15
A novel approach for the selection of step parameters as reaction coordinates in enhanced sampling simulations of DNA is presented. The method uses three atoms per base and does not require coordinate overlays or idealized base pairs. This allowed for a highly efficient implementation of the calculation of all step parameters and their Cartesian derivatives in molecular dynamics simulations. Good correlation between the calculated and actual twist, roll, tilt, shift, and slide parameters is obtained, while the correlation with rise is modest. The method is illustrated by its application to the methylated and unmethylated 5'-CATGTGACGTCACATG-3' double stranded DNA sequence. One-dimensional umbrella simulations indicate that the flexibility of the central CG step is only marginally affected by methylation. © 2014 Wiley Periodicals, Inc.
Methods for collection and analysis of aquatic biological and microbiological samples
Britton, L.J.; Greeson, P.E.
1989-01-01
The series of chapters on techniques describes methods used by the U.S. Geological Survey for planning and conducting water-resources investigations. The material is arranged under major subject headings called books and is further subdivided into sections and chapters. Book 5 is on laboratory analysis. Section A is on water. The unit of publication, the chapter, is limited to a narrow field of subject matter. "Methods for Collection and Analysis of Aquatic Biological and Microbiological Samples" is the fourth chapter to be published under Section A of Book 5. The chapter number includes the letter of the section. This chapter was prepared by several aquatic biologists and microbiologists of the U.S. Geological Survey to provide accurate and precise methods for the collection and analysis of aquatic biological and microbiological samples. Use of brand, firm, and trade names in this chapter is for identification purposes only and does not constitute endorsement by the U.S. Geological Survey. This chapter supersedes "Methods for Collection and Analysis of Aquatic Biological and Microbiological Samples" edited by P.E. Greeson, T.A. Ehlke, G.A. Irwin, B.W. Lium, and K.V. Slack (U.S. Geological Survey Techniques of Water-Resources Investigations, Book 5, Chapter A4, 1977) and also supersedes "A Supplement to 'Methods for Collection and Analysis of Aquatic Biological and Microbiological Samples'" by P.E. Greeson (U.S. Geological Survey Techniques of Water-Resources Investigations, Book 5, Chapter A4, Open-File Report 79-1279, 1979).
Method to make accurate concentration and isotopic measurements for small gas samples
NASA Astrophysics Data System (ADS)
Palmer, M. R.; Wahl, E.; Cunningham, K. L.
2013-12-01
Carbon isotopic ratio measurements of CO2 and CH4 provide valuable insight into carbon cycle processes. However, many of these studies, like soil gas, soil flux, and water head space experiments, provide very small gas sample volumes, too small for direct measurement by current constant-flow Cavity Ring-Down Spectroscopy (CRDS) isotopic analyzers. Previously, we addressed this issue by developing a sample introduction module which enabled the isotopic ratio measurement of 40 ml samples or smaller. However, the system, called the Small Sample Isotope Module (SSIM), does dilute the sample during delivery with inert carrier gas, which causes a ~5% reduction in concentration. The isotopic ratio measurements are not affected by this small dilution, but researchers are naturally interested in accurate concentration measurements. We present the accuracy and precision of a new method of using this delivery module which we call 'double injection.' Two portions of the 40 ml sample (20 ml each) are introduced to the analyzer; the first injection flushes out the diluting gas and the second injection is measured. The accuracy of this new method is demonstrated by comparing the concentration and isotopic ratio measurements for a gas sampled directly and that same gas measured through the SSIM. The data show that the CO2 concentration measurements were the same within instrument precision. The isotopic ratio precision (1σ) of repeated measurements was 0.16 permil for CO2 and 1.15 permil for CH4 at ambient concentrations. This new method provides a significant enhancement in the information provided by small samples.
HyperCard to SPSS: improving data integrity.
Gostel, R
1993-01-01
This article describes a database design that captures responses in a HyperCard stack and moves the data to SPSS for the Macintosh without the need to rekey data. Pregnant women used an interactive computer application with a touch screen to answer questions and receive educational information about fetal alcohol syndrome. A database design was created to capture survey responses through interaction with a computer by a sample of prenatal women during formative evaluation trials. The author does not compare this method of data collection to other methods. This article simply describes the method of data collection as a useful research tool.
Measuring fluorescence polarization with a dichrometer.
Sutherland, John C
2017-09-01
A method for obtaining fluorescence polarization data from an instrument designed to measure circular and linear dichroism is compared with a previously reported approach. The new method places a polarizer between the sample and a detector mounted perpendicular to the direction of the incident beam and results in determination of the fluorescence polarization ratio, whereas the previous method does not use a polarizer and yields the fluorescence anisotropy. A similar analysis with the detector located axially with the excitation beam demonstrates that there is no frequency modulated signal due to fluorescence polarization in the absence of a polarizer. Copyright © 2017. Published by Elsevier Inc.
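The two quantities contrasted here have standard textbook definitions, made explicit in the short helper below.

def polarization_and_anisotropy(i_par, i_perp):
    # Polarization ratio P and anisotropy r from intensities measured
    # parallel and perpendicular to the excitation polarization.
    p = (i_par - i_perp) / (i_par + i_perp)
    r = (i_par - i_perp) / (i_par + 2.0 * i_perp)
    return p, r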
Live/Dead Bacterial Spore Assay Using DPA-Triggered Tb Luminescence
NASA Technical Reports Server (NTRS)
Ponce, Adrian
2003-01-01
A method of measuring the fraction of bacterial spores in a sample that remain viable exploits DPA-triggered luminescence of Tb(3+) and is based partly on the same principles as those described earlier. Unlike prior methods for performing such live/dead assays of bacterial spores, this method does not involve counting colonies formed by cultivation (which can take days) or counting spores under a microscope; it works whether or not the bacterial spores are attached to other small particles (e.g., dust), and it can be implemented on a time scale of about 20 minutes.
Paz, Andrea; Crawford, Andrew J
2012-11-01
Molecular markers offer a universal source of data for quantifying biodiversity. DNA barcoding uses a standardized genetic marker and a curated reference database to identify known species and to reveal cryptic diversity within well-sampled clades. Rapid biological inventories, e.g. rapid assessment programs (RAPs), unlike most barcoding campaigns, are focused on particular geographic localities rather than on clades. Because of the potentially sparse phylogenetic sampling, the addition of DNA barcoding to RAPs may present a greater challenge for the identification of named species or for revealing cryptic diversity. In this article we evaluate the use of DNA barcoding for quantifying lineage diversity within a single sampling site as compared to clade-based sampling, and present examples from amphibians. We compared algorithms for identifying DNA barcode clusters (e.g. species, cryptic species or Evolutionary Significant Units) using previously published DNA barcode data obtained from geography-based sampling at a site in Central Panama, and from clade-based sampling in Madagascar. We found that clustering algorithms based on genetic distance performed similarly on sympatric as well as clade-based barcode data, while a promising coalescent-based method performed poorly on sympatric data. The various clustering algorithms were also compared in terms of speed and software implementation. Although each method has its shortcomings in certain contexts, we recommend the use of the ABGD method, which not only performs fairly well under either sampling method, but does so in a few seconds and with a user-friendly Web interface.
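Of the distance-based family compared, the simplest representative is single-linkage clustering at a fixed distance threshold, sketched below; ABGD itself instead infers the threshold from a gap in the pairwise-distance distribution, and the 3% default here is only illustrative.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def barcode_clusters(dist_matrix, threshold=0.03):
    # dist_matrix: square matrix of pairwise genetic (p-)distances.
    condensed = squareform(dist_matrix, checks=False)
    Z = linkage(condensed, method="single")
    return fcluster(Z, t=threshold, criterion="distance")   # cluster labels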
NASA Astrophysics Data System (ADS)
Vizet, Jérémy; Manhas, Sandeep; Tran, Jacqueline; Validire, Pierre; Benali, Abdelali; Garcia-Caurel, Enric; Pierangelo, Angelo; Martino, Antonello De; Pagnoux, Dominique
2016-07-01
This paper reports a technique based on spectrally differential measurement for determining the full Mueller matrix of a biological sample through an optical fiber. In this technique, two close wavelengths were used simultaneously, one for characterizing the fiber and the other for characterizing the assembly of fiber and sample. The characteristics of the fiber measured at one wavelength were used to decouple its contribution from the measurement on the assembly of fiber and sample and then to extract sample Mueller matrix at the second wavelength. The proof of concept was experimentally validated by measuring polarimetric parameters of various calibrated optical components through the optical fiber. Then, polarimetric images of histological cuts of human colon tissues were measured, and retardance, diattenuation, and orientation of the main axes of fibrillar regions were displayed. Finally, these images were successfully compared with images obtained by a free space Mueller microscope. As the reported method does not use any moving component, it offers attractive integration possibilities with an endoscopic probe.
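The decoupling step can be written compactly if the measured matrix factors as M_total = M_sample · M_fiber; that factorization is a simplifying assumption here, since the real fibered geometry may involve a double pass through the fiber.

import numpy as np

def decouple_sample(M_total, M_fiber):
    # M_total: 4x4 Mueller matrix measured with the sample in place;
    # M_fiber: fiber-only matrix measured at the neighboring wavelength.
    # Then M_sample = M_total @ inv(M_fiber).
    return M_total @ np.linalg.inv(M_fiber)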
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hao; Mey, Antonia S. J. S.; Noé, Frank
2014-12-07
We propose a discrete transition-based reweighting analysis method (dTRAM) for analyzing configuration-space-discretized simulation trajectories produced at different thermodynamic states (temperatures, Hamiltonians, etc.). dTRAM provides maximum-likelihood estimates of stationary quantities (probabilities, free energies, expectation values) at any thermodynamic state. In contrast to the weighted histogram analysis method (WHAM), dTRAM does not require data to be sampled from global equilibrium, and can thus produce superior estimates for enhanced sampling data such as parallel/simulated tempering, replica exchange, umbrella sampling, or metadynamics. In addition, dTRAM provides optimal estimates of Markov state models (MSMs) from the discretized state-space trajectories at all thermodynamic states. Under suitable conditions, these MSMs can be used to calculate kinetic quantities (e.g., rates, timescales). In the limit of a single thermodynamic state, dTRAM estimates a maximum likelihood reversible MSM, while in the limit of uncorrelated sampling data, dTRAM is identical to WHAM. dTRAM is thus a generalization to both estimators.
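For reference, the WHAM self-consistent equations that dTRAM reduces to in the uncorrelated-sampling limit are compact enough to sketch over histogram bins (notation assumed: counts c[k, i] and reduced bias energies u[k, i] for thermodynamic state k and bin i; not the dTRAM estimator itself).

import numpy as np

def wham(counts, bias, n_iter=2000):
    # counts: (K, M) histogram counts; bias: (K, M) reduced bias energies.
    N = counts.sum(axis=1)
    f = np.zeros(counts.shape[0])
    for _ in range(n_iter):
        denom = (N[:, None] * np.exp(f[:, None] - bias)).sum(axis=0)
        p = counts.sum(axis=0) / denom          # unbiased bin probabilities
        p /= p.sum()
        f = -np.log((p[None, :] * np.exp(-bias)).sum(axis=1))
    return p, f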
X-RAY IMAGING: Achieving the third dimension using coherence
Robinson, Ian; Huang, Xiaojing
2017-01-25
X-ray imaging is extensively used in medical and materials science. Traditionally, the depth dimension is obtained by turning the sample to gain different views. The famous penetrating properties of X-rays mean that projection views of the subject sample can be readily obtained in the linear absorption regime. 180 degrees of projections can then be combined using computed tomography (CT) methods to obtain a full 3D image, a technique extensively used in medical imaging. In the work now presented in Nature Materials, Stephan Hruszkewycz and colleagues have demonstrated genuine 3D imaging by a new method called 3D Bragg projection ptychography [1]. Their approach combines the 'side view' capability of using Bragg diffraction from a crystalline sample with the coherence capabilities of ptychography. Thus, it results in a 3D image from a 2D raster scan of a coherent beam across a sample that does not have to be rotated.
Results of the radiological survey of the Carpenter Steel Facility, Reading, Pennsylvania
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cottrell, W.D.; Carrier, R.F.
1990-07-01
In 1944, experimental uranium-forming work was conducted by Carpenter Technology Corporation at the Carpenter Steel Facility in Reading, Pennsylvania, under contract to the Manhattan Engineer District (MED). The fabrication method, aimed at producing sounder uranium metal and improving the yields of rods from billets, was reportedly soon discarded as unsatisfactory. As part of the Department of Energy's (DOE) efforts to verify the closeout status of facilities under contract to agencies preceding DOE during early nuclear energy development, the site was included in the Formerly Utilized Sites Remedial Action Program (FUSRAP). At the request of DOE, the Measurement Applications and Development Group of the Health and Safety Research Division of Oak Ridge National Laboratory performed a radiological assessment survey in July and August 1988. The purpose of the survey was to determine if past operations had deposited radioactive residues in the facility, and whether those residuals were in significant quantities when compared to DOE guidelines. The survey included gamma scanning; direct measurements of alpha activity levels and beta-gamma dose rates; sampling for transferable alpha and beta-gamma residuals on selected surfaces; and sampling of soil, debris and currently used processing materials for radionuclide analysis. All survey results were within DOE FUSRAP guidelines derived to determine the eligibility of a site for remedial action. These guidelines are derived to ensure that unrestricted use of the property will not result in any measurable radiological hazard to the site occupants or the general public. 4 refs., 5 figs., 5 tabs.
Ryan, K; Williams, D Gareth; Balding, David J
2016-11-01
Many DNA profiles recovered from crime scene samples are of a quality that does not allow them to be searched against, nor entered into, databases. We propose a method for the comparison of profiles arising from two DNA samples, one or both of which can have multiple donors and be affected by low DNA template or degraded DNA. We compute likelihood ratios to evaluate the hypothesis that the two samples have a common DNA donor, and hypotheses specifying the relatedness of two donors. Our method uses a probability distribution for the genotype of the donor of interest in each sample. This distribution can be obtained from a statistical model, or we can exploit the ability of trained human experts to assess genotype probabilities, thus extracting much information that would be discarded by standard interpretation rules. Our method is compatible with established methods in simple settings, but is more widely applicable and can make better use of information than many current methods for the analysis of mixed-source, low-template DNA profiles. It can accommodate uncertainty arising from relatedness instead of or in addition to uncertainty arising from noisy genotyping. We describe a computer program GPMDNA, available under an open source licence, to calculate LRs using the method presented in this paper. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Gibbs, B F; Alli, I; Mulligan, C N
1996-02-23
A method for the determination of aspartame (N-L-alpha-aspartyl-L-phenylalanine methyl ester) and its metabolites, applicable on a routine quality assurance basis, is described. Liquid samples (diet Coke, 7-Up, Pepsi, etc.) were injected directly onto a mini-cartridge reversed-phase column on a high-performance liquid chromatographic system, whereas solid samples (Equal, hot chocolate powder, pudding, etc.) were extracted with water. Optimising the chromatographic conditions resolved the components of interest within 12 min. The by-products were confirmed by mass spectrometry. Although the method was developed on a two-pump HPLC system fitted with a diode-array detector, it is straightforward and can be transferred to the simplest HPLC configuration. Using a single-piston pump (with damper), a fixed-wavelength detector and a recorder/integrator, the degradation products can be monitored as they form. The results agreed with those of previously reported, more laborious methods. The method is simple, rapid, quantitative and does not involve complex, hazardous or toxic chemistry.
NASA Astrophysics Data System (ADS)
Mallah, Muhammad Ali; Sherazi, Syed Tufail Hussain; Bhanger, Muhammad Iqbal; Mahesar, Sarfaraz Ahmed; Bajeer, Muhammad Ashraf
2015-04-01
A transmission FTIR spectroscopic method was developed for direct, inexpensive and fast quantification of paracetamol content in solid pharmaceutical formulations. In this method paracetamol content is analyzed directly, without solvent extraction. KBr pellets were formulated for the acquisition of FTIR spectra in transmission mode. Two chemometric models, simple Beer's law and partial least squares, employed over the spectral region of 1800-1000 cm-1 for quantification of paracetamol content, had a regression coefficient (R2) of 0.999. The limits of detection and quantification using FTIR spectroscopy were 0.005 mg g-1 and 0.018 mg g-1, respectively. An interference study was also carried out to check the effect of the excipients; there was no significant interference from the sample matrix. The results clearly showed the sensitivity of the transmission FTIR spectroscopic method for pharmaceutical analysis. The method is green in the sense that it does not require large volumes of hazardous solvents or long run times, and avoids prior sample preparation.
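A minimal sketch of the partial least squares calibration step, using scikit-learn; the spectra and contents below are synthetic stand-ins for the FTIR absorbances over 1800-1000 cm-1 and the paracetamol levels, not the paper's data.

```python
# PLS calibration sketch with synthetic 'spectra': X holds absorbances, y the
# analyte content; in the paper these would come from KBr-pellet FTIR spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
profile = rng.random(400)                            # pure-component-like profile
y = rng.uniform(0.1, 5.0, size=30)                   # synthetic contents, mg/g
X = np.outer(y, profile) + 0.01 * rng.standard_normal((30, 400))

pls = PLSRegression(n_components=2).fit(X, y)
print(round(pls.score(X, y), 4))                     # R^2, close to 1 for clean data
```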
Mayenite Synthesized Using the Citrate Sol-Gel Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ude, Sabina N; Rawn, Claudia J; Meisner, Roberta A
2014-01-01
A citrate sol-gel method has been used to synthesize mayenite (Ca12Al14O33). X-ray powder diffraction data show that the samples synthesized using the citrate sol-gel method contained CaAl2O4 and CaCO3 along with mayenite when fired ex-situ in air at 800 °C, but were single phase when fired at 900 °C and above. Using high-temperature X-ray diffraction, data collected in-situ in air at temperatures of 600 °C and below showed only amorphous content; however, data collected at higher temperatures indicated the first phase to crystallize is CaCO3. High-temperature X-ray diffraction data collected in 4% H2/96% N2 does not show the presence of CaCO3, and Ca12Al14O33 starts to form around 850 °C. In comparison, X-ray powder diffraction data collected ex-situ on samples synthesized using traditional solid-state synthesis shows that single phase was not reached until samples were fired at 1350 °C. DTA/TGA data collected either in a nitrogen environment or air on samples synthesized using the citrate gel method suggest the complete decomposition of metastable phases and the formation of mayenite at 900 °C, although the phase evolution is very different depending on the environment. Brunauer-Emmett-Teller (BET) measurements showed a slightly higher surface area of 7.4 ± 0.1 m2/g in the citrate gel synthesized samples compared to the solid-state synthesized sample with a surface area of 1.61 ± 0.02 m2/g. SEM images show a larger particle size for samples synthesized using the solid-state method compared to those synthesized using the citrate gel method.
Label-Free, Flow-Imaging Methods for Determination of Cell Concentration and Viability.
Sediq, A S; Klem, R; Nejadnik, M R; Meij, P; Jiskoot, Wim
2018-05-30
To investigate the potential of two flow imaging microscopy (FIM) techniques (Micro-Flow Imaging (MFI) and FlowCAM) to determine total cell concentration and cell viability. B-lineage acute lymphoblastic leukemia (B-ALL) cells of 2 different donors were exposed to ambient conditions. Samples were taken at different days and measured with MFI, FlowCAM, hemocytometry and automated cell counting. Dead and live cells from a fresh B-ALL cell suspension were fractionated by flow cytometry in order to derive software filters based on morphological parameters of the separate cell populations with MFI and FlowCAM. The filter sets were used to assess cell viability in the measured samples. All techniques gave fairly similar cell concentration values over the whole incubation period. MFI proved to be superior with respect to precision, whereas FlowCAM provided particle images with a higher resolution. Moreover, both FIM methods were able to provide similar results for cell viability as the conventional methods (hemocytometry and automated cell counting). FIM-based methods may be advantageous over conventional methods for determining total cell concentration and cell viability, as FIM measures much larger sample volumes, does not require labeling, is less laborious and provides images of individual cells.
Simulation of cryolipolysis as a novel method for noninvasive fat layer reduction.
Majdabadi, Abbas; Abazari, Mohammad
2016-12-20
Given the known problems of conventional liposuction methods, the need to develop new fat removal operations has been recognized. In this study we simulate one of the novel methods, cryolipolysis, which aims to tackle those drawbacks. Simulation of clinical procedures contributes considerably to the efficacious performance of the operations. To this end, we simulated the temperature distribution in a sample of human body fat. Using Abaqus software, we present a graphical display of temperature-time variations within the medium. Our simulation indicates that tissue temperature decreases during cold exposure of about 30 min. The minimum temperature occurs in the shallow layers of the sample, while the temperature in deeper layers remains nearly unchanged. Cold exposure beyond this time (t > 30 min) does not result in considerable changes. Numerous clinical studies have proved the efficacy of cryolipolysis. This noninvasive technique has eliminated some of the drawbacks of conventional methods. Our simulation results clearly support the efficiency of this method, especially for superficial fat layers.
Using continuous in-situ measurements to adaptively trigger urban storm water samples
NASA Astrophysics Data System (ADS)
Wong, B. P.; Kerkez, B.
2015-12-01
Until cost-effective in-situ sensors are available for biological parameters, nutrients and metals, automated samplers will continue to be the primary source of reliable water quality measurements. Given limited sample bottles, however, autosamplers often obscure insights on nutrient sources and biogeochemical processes which would otherwise be captured using a continuous sampling approach. To that end, we evaluate the efficacy of a novel method to measure first-flush nutrient dynamics in flashy, urban watersheds. Our approach reduces the number of samples required to capture water quality dynamics by leveraging an internet-connected sensor node, which is equipped with a suite of continuous in-situ sensors and an automated sampler. To capture both the initial baseflow as well as storm concentrations, a cloud-hosted adaptive algorithm analyzes the high-resolution sensor data along with local weather forecasts to optimize a sampling schedule. The method was tested in a highly developed urban catchment in Ann Arbor, Michigan and collected samples of nitrate, phosphorus, and suspended solids throughout several storm events. Results indicate that the watershed does not exhibit first-flush dynamics, a behavior that would have been obscured when using a non-adaptive sampling approach.
Discovering Deeply Divergent RNA Viruses in Existing Metatranscriptome Data with Machine Learning
NASA Astrophysics Data System (ADS)
Rivers, A. R.
2016-02-01
Most sampling of RNA viruses and phages has been directed toward a narrow range of hosts and environments. Several marine metagenomic studies have examined the RNA viral fraction in aquatic samples and found a number of picornaviruses and uncharacterized sequences. The lack of homology to known protein families has limited the discovery of new RNA viruses. We developed a computational method for identifying RNA viruses that relies on information in the codon transition probabilities of viral sequences to train a classifier. This approach does not rely on homology, but it has higher information content than other reference-free methods such as tetranucleotide frequency. Training and validation with RefSeq data gave true positive and true negative rates of 99.6% and 99.5% on the highly imbalanced validation sets (0.2% viruses) that, like the metatranscriptomes themselves, contain mostly non-viral sequences. To further test the method, a validation dataset of putative RNA virus genomes was identified in metatranscriptomes by the presence of RNA-dependent RNA polymerase, an essential gene for RNA viruses. The classifier successfully identified 99.4% of those contigs as viral. This approach is currently being extended to screen all metatranscriptome data sequenced at the DOE Joint Genome Institute, presently 4.5 Gb of assembled data from 504 public projects representing a wide range of marine, aquatic and terrestrial environments.
On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood.
Karabatsos, George
2018-06-01
This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon previous methods because it provides an omnibus test of the entire hierarchy of cancellation axioms, beyond double cancellation. It does so while accounting for the posterior uncertainty that is inherent in the empirical orderings implied by these axioms taken together. The new method is illustrated through a test of the cancellation axioms on a classic survey data set, and through the analysis of simulated data.
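The synthetic-likelihood ingredient can be sketched compactly: simulate summary statistics under a candidate parameter, fit a Gaussian to them, and score the observed summaries under that Gaussian. The simulator below is a toy placeholder, not the article's conjoint-measurement model.

```python
# Synthetic likelihood sketch (Wood-style): Gaussian approximation to the
# sampling distribution of summary statistics, evaluated at the observed data.
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, s_obs, simulate, n_sim=500, seed=0):
    rng = np.random.default_rng(seed)
    S = np.array([simulate(theta, rng) for _ in range(n_sim)])   # (n_sim, d)
    mu, cov = S.mean(axis=0), np.cov(S, rowvar=False)
    return multivariate_normal(mu, cov, allow_singular=True).logpdf(s_obs)

def simulate(theta, rng):
    """Toy simulator: summaries are (mean, std) of normal draws centered at theta."""
    x = rng.normal(theta, 1.0, size=50)
    return np.array([x.mean(), x.std()])

print(synthetic_loglik(0.0, np.array([0.1, 1.0]), simulate))
```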
Stochastic derivative-free optimization using a trust region framework
Larson, Jeffrey; Billups, Stephen C.
2016-02-17
This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f¯. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. We also present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.
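The core trust-region mechanism the abstract alludes to is the radius update driven by the agreement ratio rho (actual over model-predicted decrease); a standard textbook version is sketched below, with the paper's noise handling and model management omitted. The constants are conventional choices, not the authors'.

```python
# Classic trust-region radius update (sketch); rho compares the actual decrease
# in f to the decrease predicted by the local model at the trial step.
def update_radius(delta, rho, eta_good=0.75, eta_bad=0.25,
                  grow=2.0, shrink=0.5, delta_max=10.0):
    if rho >= eta_good:
        return min(grow * delta, delta_max)   # model trustworthy: take larger steps
    if rho < eta_bad:
        return shrink * delta                 # model poor: be more conservative
    return delta                              # otherwise keep the current radius

print(update_radius(1.0, 0.9), update_radius(1.0, 0.1))   # 2.0 0.5
```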
Zarem, Cori; Crapnell, Tara; Tiltges, Lisa; Madlinger, Laura; Reynolds, Lauren; Lukas, Karen; Pineda, Roberta
2014-01-01
Purpose: To determine perceptions about positioning for preterm infants in the NICU. Design: Twenty-item survey. Sample: Neonatal nurses (n=68) and speech, physical, and occupational therapists (n=8). Main Outcome Variable: Perceptions about positioning were obtained, and differences in perceptions between nurses and therapists were explored. Results: Ninety-nine percent of respondents agreed that positioning is important for the well-being of the infant. Sixty-two percent of nurses and 86% of therapists identified the Dandle Roo as the ideal method of neonatal positioning. Forty-four percent of nurses and 57% of therapists reported the Dandle Roo is the easiest positioning method to use in the NICU. Some perceptions differed: therapists were more likely to report the Sleep Sack does not hold the infant in good alignment. Nurses were more likely to report the infant does not sleep well in traditional positioning. PMID:23477978
TRACE ELEMENT ANALYSES OF URANIUM MATERIALS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beals, D; Charles Shick, C
The Savannah River National Laboratory (SRNL) has developed an analytical method to measure many trace elements in a variety of uranium materials at the high part-per-billion (ppb) to low part-per-million (ppm) levels using matrix removal and analysis by quadrupole ICP-MS. Over 35 elements were measured in uranium oxides, acetate, ore and metal. Replicate analyses of samples provided precise results; however, none of the materials was certified for trace element content, so the accuracy could not be assessed. The DOE New Brunswick Laboratory (NBL) does provide a Certified Reference Material (CRM) that has provisional values for a series of trace elements. The NBL CRM were purchased and analyzed to determine the accuracy of the method for the analysis of trace elements in uranium oxide. These results are presented and discussed in the following paper.
ICPP environmental monitoring report CY-1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-04-01
Summarized in this report are the data collected through Environmental Monitoring programs conducted at the Idaho Chemical Processing Plant (ICPP) by the Environmental Protection Department. The ICPP is responsible for complying with all applicable Federal, State, Local and DOE Rules, Regulations and Orders. Radiological effluent and emissions are regulated by the DOE in accordance with the Derived Concentration Guides (DCGs). The State of Idaho regulates nonradiological waste resulting from the ICPP operations including airborne, liquid, and solid waste. The Environmental Department updated the Quality Assurance (QA) Project Plan for Environmental Monitoring activities during the third quarter of 1992. QA activities have resulted in the ICPP's implementation of the Environmental Protection Agency (EPA) rules and guidelines pertaining to the collection, analyses, and reporting of environmentally related samples. Where no EPA methods for analyses existed for radionuclides, Lockheed Idaho Technologies Company (LITCO) methods were used.
Ecotoxicological evaluation of areas polluted by mining activities
NASA Astrophysics Data System (ADS)
García-Lorenzo, M. L.; Martínez-Sánchez, M. J.; Pérez-Sirvent, C.; Molina, J.
2009-04-01
Determination of the contaminant content is not enough to evaluate the toxic effects or to characterise contaminated sites, because such a measure does not reflect the ecotoxicological danger in the environment and does not provide information on the effects of the chemical compounds. To estimate the risk of contaminants, chemical methods need to be complemented with biological methods. Therefore, ecotoxicological testing may be a useful approach for assessing the toxicity as a complement to chemical analysis. The aim of this study was to develop a battery of bioassays for the ecotoxicological screening of areas polluted by mining activities. In particular, the toxicity of water samples, sediments and their pore-water extracts was evaluated by using three assays: bacteria, plants and ostracods. Moreover, the possible relationship between observed toxicity and the results of chemical analysis was studied. The studied area, Sierra Minera, is close to the mining region of La Unión.
Shubhakar, Archana; Kalla, Rahul; Nimmo, Elaine R.; Fernandes, Daryl L.; Satsangi, Jack; Spencer, Daniel I. R.
2015-01-01
Introduction: Serum N-glycans have been identified as putative biomarkers for numerous diseases. The impact of different serum sample tubes and processing methods on N-glycan analysis has received relatively little attention. This study aimed to determine the effect of different sample tubes and processing methods on the whole serum N-glycan profile in both health and disease. A secondary objective was to describe a robot-automated N-glycan release, labeling and cleanup process for use in a biomarker discovery system. Methods: 25 patients with active and quiescent inflammatory bowel disease and controls had three different serum sample tubes taken at the same draw. Two different processing methods were used for three types of tube (with and without gel-separation medium). Samples were randomised and processed in a blinded fashion. Whole serum N-glycan release, 2-aminobenzamide labeling and cleanup was automated using a Hamilton Microlab STARlet Liquid Handling robot. Samples were analysed using a hydrophilic interaction liquid chromatography/ethylene bridged hybrid (BEH) column on an ultra-high performance liquid chromatography instrument. Data were analysed quantitatively by pairwise correlation and hierarchical clustering using the area under each chromatogram peak. Qualitatively, a blinded assessor attempted to match chromatograms to each individual. Results: There was small intra-individual variation in serum N-glycan profiles from samples collected using different sample processing methods. Intra-individual correlation coefficients were between 0.99 and 1. Unsupervised hierarchical clustering and principal coordinate analyses accurately matched samples from the same individual. Qualitative analysis demonstrated good chromatogram overlay and a blinded assessor was able to accurately match individuals based on chromatogram profile, regardless of disease status. Conclusions: The three different serum sample tubes processed using the described methods cause minimal intra-individual variation in serum whole N-glycan profile when processed using an automated workstream. This has important implications for N-glycan biomarker discovery studies using different serum processing standard operating procedures. PMID:25831126
Clustering of samples and variables with mixed-type data
Edelmann, Dominic; Kopp-Schneider, Annette
2017-01-01
Analysis of data measured on different scales is a relevant challenge. Biomedical studies often focus on high-throughput datasets of, e.g., quantitative measurements. However, the need for integration of other features possibly measured on different scales, e.g. clinical or cytogenetic factors, becomes increasingly important. The analysis results (e.g. a selection of relevant genes) are then visualized, while adding further information, like clinical factors, on top. However, a more integrative approach is desirable, where all available data are analyzed jointly, and where also in the visualization different data sources are combined in a more natural way. Here we specifically target integrative visualization and present a heatmap-style graphic display. To this end, we develop and explore methods for clustering mixed-type data, with special focus on clustering variables. Clustering of variables does not receive as much attention in the literature as does clustering of samples. We extend the variables clustering methodology by two new approaches, one based on the combination of different association measures and the other on distance correlation. With simulation studies we evaluate and compare different clustering strategies. Applying specific methods for mixed-type data proves to be comparable and in many cases beneficial as compared to standard approaches applied to corresponding quantitative or binarized data. Our two novel approaches for mixed-type variables show similar or better performance than the existing methods ClustOfVar and bias-corrected mutual information. Further, in contrast to ClustOfVar, our methods provide dissimilarity matrices, which is an advantage, especially for the purpose of visualization. Real data examples aim to give an impression of various kinds of potential applications for the integrative heatmap and other graphical displays based on dissimilarity matrices. We demonstrate that the presented integrative heatmap provides more information than common data displays about the relationship among variables and samples. The described clustering and visualization methods are implemented in our R package CluMix available from https://cran.r-project.org/web/packages/CluMix. PMID:29182671
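One standard starting point for clustering mixed-type data is a Gower-style dissimilarity, sketched below (the paper's own measures, e.g. distance correlation and combined association measures, differ in detail). The toy arrays are hypothetical.

```python
# Gower-style dissimilarity sketch: range-scaled absolute differences for
# numeric columns, simple mismatch for categorical columns, averaged.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def gower(num: np.ndarray, cat: np.ndarray) -> np.ndarray:
    rng_ = num.max(axis=0) - num.min(axis=0)                    # per-feature ranges
    d_num = np.abs(num[:, None, :] - num[None, :, :]) / rng_
    d_cat = (cat[:, None, :] != cat[None, :, :]).astype(float)
    return np.concatenate([d_num, d_cat], axis=2).mean(axis=2)  # average over features

num = np.array([[1.0, 10.0], [1.2, 11.0], [5.0, 30.0]])
cat = np.array([["a"], ["a"], ["b"]])
D = gower(num, cat)
print(fcluster(linkage(squareform(D, checks=False), "average"), t=2, criterion="maxclust"))
```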
Salem, Shady; Chang, Sam S; Clark, Peter E; Davis, Rodney; Herrell, S Duke; Kordan, Yakup; Wills, Marcia L; Shappell, Scott B; Baumgartner, Roxelyn; Phillips, Sharon; Smith, Joseph A; Cookson, Michael S; Barocas, Daniel A
2010-10-01
Whole mount processing is more resource intensive than routine systematic sampling of radical retropubic prostatectomy specimens. We compared whole mount and systematic sampling for detecting pathological outcomes, and compared the prognostic value of pathological findings across pathological methods. We included men (608 whole mount and 525 systematic sampling samples) with no prior treatment who underwent radical retropubic prostatectomy at Vanderbilt University Medical Center between January 2000 and June 2008. We used univariate and multivariate analysis to compare the pathological outcome detection rate between pathological methods. Kaplan-Meier curves and the log rank test were used to compare the prognostic value of pathological findings across pathological methods. There were no significant differences between the whole mount and the systematic sampling groups in detecting extraprostatic extension (25% vs 30%), positive surgical margins (31% vs 31%), pathological Gleason score less than 7 (49% vs 43%), 7 (39% vs 43%) or greater than 7 (12% vs 13%), seminal vesicle invasion (8% vs 10%) or lymph node involvement (3% vs 5%). Tumor volume was higher in the systematic sampling group and whole mount detected more multiple surgical margins (each p <0.01). There were no significant differences in the likelihood of biochemical recurrence between the pathological methods when patients were stratified by pathological outcome. Except for estimated tumor volume and multiple margins whole mount and systematic sampling yield similar pathological information. Each method stratifies patients into comparable risk groups for biochemical recurrence. Thus, while whole mount is more resource intensive, it does not appear to result in improved detection of clinically important pathological outcomes or prognostication. Copyright © 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Wang, Yu (Inventor)
2006-01-01
A miniature, ultra-high resolution, and color scanning microscope using microchannel and solid-state technology that does not require focus adjustment. One embodiment includes a source of collimated radiant energy for illuminating a sample, a plurality of narrow angle filters comprising a microchannel structure to permit the passage of only unscattered radiant energy through the microchannels with some portion of the radiant energy entering the microchannels from the sample, a solid-state sensor array attached to the microchannel structure, the microchannels being aligned with an element of the solid-state sensor array, that portion of the radiant energy entering the microchannels parallel to the microchannel walls travels to the sensor element generating an electrical signal from which an image is reconstructed by an external device, and a moving element for movement of the microchannel structure relative to the sample. Discloses a method for scanning samples whereby the sensor array elements trace parallel paths that are arbitrarily close to the parallel paths traced by other elements of the array.
Factors of quality of financial report of local government in Indonesia
NASA Astrophysics Data System (ADS)
Muda, Iskandar; Haris Harahap, Abdul; Erlina; Ginting, Syafruddin; Maksum, Azhar; Abubakar, Erwin
2018-03-01
The purpose of this research is to find out whether the Accounting Information System and Internal Control in the Local Revenue Office affect the Quality of Financial Report of Local Government. The sampling was conducted by using the simple random sampling method, in which the sample was determined without considering strata. The data were collected by distributing questionnaires. The results showed that the Accounting Information System and Internal Control simultaneously affect the Quality of Financial Report of Local Government. Partially, however, the Accounting Information System affects the quality of the financial report of local government, while Internal Control does not.
Assessing performance and validating finite element simulations using probabilistic knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolin, Ronald M.; Rodriguez, E. A.
Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure, along with the event's likelihood of occurrence, contribute to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur, while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
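A minimal sketch of the Latin-hypercube step, using SciPy's quasi-Monte Carlo module; the two input variables and their bounds are illustrative, not the report's actual model inputs.

```python
# Latin-hypercube sampling sketch: stratified draws in the unit cube, scaled
# to physical bounds; each row is one input realization for the FE model.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=42)     # two uncertain inputs (illustrative)
unit = sampler.random(n=100)                   # stratified samples in [0, 1)^2
lo, hi = [0.8, 200.0], [1.2, 300.0]            # e.g., load factor, yield stress (MPa)
X = qmc.scale(unit, lo, hi)
print(X.shape, X.min(axis=0), X.max(axis=0))
```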
ERIC Educational Resources Information Center
Mangan, Jean; Pugh, Geoff; Gray, John
2005-01-01
The article explores changes in the examination performance of a random sample of 500 English secondary schools between 1992 and 2001. Using econometric methods, it concludes that: there is an overall deterministic trend in school performance but it is not stable, making prediction accuracy poor; the aggregate trend does not explain improvement…
30 CFR 870.18 - General rules for calculating excess moisture.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Collection of Coal Samples from Core; and, D1412-93, Standard Test Method for Equilibrium Moisture of Coal at... cautions: (a) You or your customer may do any test required by §§ 870.19 and 870.20. But whoever does a test, you are to keep test results and all related records for at least six years after the test date...
ERIC Educational Resources Information Center
Holosko, Michael J.; Barner, John R.
2016-01-01
Objectives: We sought the answer to one major research question--Does psychology have a more defined culture of research than social work? Methods: Using "U.S. News and World Report" 2012 and 2013 rankings, we compared psychology faculty (N = 969) from their 25 top ranked programs with a controlled sample of social work faculty (N = 970)…
Does Diagnostic Classification of Early-Onset Psychosis Change over Follow-Up?
ERIC Educational Resources Information Center
Fraguas, David; de Castro, Maria J.; Medina, Oscar; Parellada, Mara; Moreno, Dolores; Graell, Montserrat; Merchan-Naranjo, Jessica; Arango, Celso
2008-01-01
Objective: To examine the diagnostic stability and the functional outcome of patients with early-onset psychosis (EOP) over a 2-year follow-up period. Methods: A total of 24 patients (18 males (75%) and 6 females (25%), mean age ± SD: 15.7 ± 1.6 years) with a first episode of EOP formed the sample. Psychotic symptoms…
ERIC Educational Resources Information Center
Studnicki, Elaine Ann
2012-01-01
This exploratory mixed method study builds upon previous research to investigate the influence of teacher self- and collective efficacy on technology use in the classroom. This population was purposefully sampled to examine first- and second order technology barriers, instructional strategies, and human influences on technology. The quantitative…
ERIC Educational Resources Information Center
Eddy, Kamryn T.; Le Grange, Daniel; Crosby, Ross D.; Hoste, Renee Rienecke; Doyle, Angela Celio; Smyth, Angela; Herzog, David B.
2010-01-01
Objective: The purpose of this study was to empirically derive eating disorder phenotypes in a clinical sample of children and adolescents using latent profile analysis (LPA), and to compare these latent profile (LP) groups to the DSM-IV-TR eating disorder categories. Method: Eating disorder symptom data collected from 401 youth (aged 7 through 19…
Puechmaille, Sebastien J
2016-05-01
Inferences of population structure and more precisely the identification of genetically homogeneous groups of individuals are essential to the fields of ecology, evolutionary biology and conservation biology. Such population structure inferences are routinely investigated via the program structure, which implements a Bayesian algorithm to identify groups of individuals at Hardy-Weinberg and linkage equilibrium. While the method performs relatively well under various population models with even sampling between subpopulations, its robustness to uneven sample size between subpopulations and/or hierarchical levels of population structure has not yet been tested, despite these situations being commonly encountered in empirical data sets. In this study, I used simulated and empirical microsatellite data sets to investigate the impact of uneven sample size between subpopulations and/or hierarchical levels of population structure on the detected population structure. The results demonstrated that uneven sampling often leads to wrong inferences on hierarchical structure and downward-biased estimates of the true number of subpopulations. Distinct subpopulations with reduced sampling tended to be merged together, while at the same time, individuals from extensively sampled subpopulations were generally split, despite belonging to the same panmictic population. Four new supervised methods to detect the number of clusters were developed and tested as part of this study and were found to outperform the existing methods using both evenly and unevenly sampled data sets. Additionally, a subsampling strategy aiming to reduce sampling unevenness between subpopulations is presented and tested. These results altogether demonstrate that when sampling evenness is accounted for, the detection of the correct population structure is greatly improved. © 2016 John Wiley & Sons Ltd.
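A minimal sketch of the kind of subsampling strategy mentioned above: cap the number of individuals retained per subpopulation before running structure-like analyses. The label array, cap, and function name are hypothetical; the paper's actual procedure may differ in detail.

```python
# Even-subsampling sketch: keep at most `per_group` individuals per sampling
# site so that no subpopulation dominates the structure inference.
import numpy as np

def even_subsample(labels: np.ndarray, per_group: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    keep = []
    for g in np.unique(labels):
        idx = np.flatnonzero(labels == g)
        keep.append(rng.choice(idx, size=min(per_group, idx.size), replace=False))
    return np.sort(np.concatenate(keep))

labels = np.array(["A"] * 100 + ["B"] * 10)        # heavily uneven sampling
print(even_subsample(labels, per_group=10).size)   # 20 individuals retained
```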
Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H
2013-02-05
An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory prepared or determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can be from different sources including pure component interference samples, blanks, and constant analyte samples. The approach is also applicable to calibration maintenance when the analyte pure component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
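The trade-off PCTR balances can be illustrated with a small constrained least-squares sketch: find a model vector b that responds with unit signal to the analyte pure-component spectrum s while being as orthogonal as possible to the nonanalyte spectra N, with lambda controlling shrinkage. This is a simplified stand-in solved via its KKT system; the published PCTR formulation and its model selection over lambda differ in detail.

```python
# Hedged PCTR-like sketch: minimize ||N b||^2 + lam^2 ||b||^2 subject to
# s^T b = 1, solved through the KKT linear system.
import numpy as np

def pctr_like(N: np.ndarray, s: np.ndarray, lam: float) -> np.ndarray:
    p = s.size
    A = N.T @ N + lam**2 * np.eye(p)
    K = np.block([[2 * A, s[:, None]],
                  [s[None, :], np.zeros((1, 1))]])
    rhs = np.concatenate([np.zeros(p), [1.0]])
    return np.linalg.solve(K, rhs)[:p]

rng = np.random.default_rng(1)
s = rng.random(50)            # analyte pure-component spectrum (synthetic)
N = rng.random((20, 50))      # blank / interferent spectra (synthetic)
b = pctr_like(N, s, lam=0.1)
print(round(s @ b, 6), round(float(np.linalg.norm(N @ b)), 3))  # ~1.0 and small
```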
A study of hydrocarbons associated with brines from DOE geopressured wells. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keeley, D.F.
1993-07-01
Accomplishments are summarized on the following tasks: distribution coefficients and solubilities, DOE design well sampling, analysis of well samples, review of theoretical models of geopressured reservoir hydrocarbons, monitor for aliphatic hydrocarbons, development of a pH meter probe, DOE design well scrubber analysis, removal and disposition of gas scrubber equipment at Pleasant Bayou Well, and disposition of archived brines.
A study of hydrocarbons associated with brines from DOE geopressured wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keeley, D.F.
1993-01-01
Accomplishments are summarized on the following tasks: distribution coefficients and solubilities, DOE design well sampling, analysis of well samples, review of theoretical models of geopressured reservoir hydrocarbons, monitor for aliphatic hydrocarbons, development of a pH meter probe, DOE design well scrubber analysis, removal and disposition of gas scrubber equipment at Pleasant Bayou Well, and disposition of archived brines.
Removal of BCG artefact from concurrent fMRI-EEG recordings based on EMD and PCA.
Javed, Ehtasham; Faye, Ibrahima; Malik, Aamir Saeed; Abdullah, Jafri Malin
2017-11-01
Simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) acquisitions provide better insight into brain dynamics. Some artefacts due to simultaneous acquisition pose a threat to the quality of the data. One such problematic artefact is the ballistocardiogram (BCG) artefact. We developed a hybrid algorithm that combines features of empirical mode decomposition (EMD) with principal component analysis (PCA) to reduce the BCG artefact. The algorithm does not require extra electrocardiogram (ECG) or electrooculogram (EOG) recordings to extract the BCG artefact. The method was tested with both simulated and real EEG data of 11 participants. For the simulated data, the similarity index between the extracted BCG and the simulated BCG showed the effectiveness of the proposed method in BCG removal. The real data were recorded under two conditions, i.e. resting state (eyes closed dataset) and task influenced (event-related potentials (ERPs) dataset). Using qualitative (visual inspection) and quantitative (similarity index, improved normalized power spectrum (INPS) ratio, power spectrum, sample entropy (SE)) evaluation parameters, the assessment results showed that the proposed method can efficiently reduce the BCG artefact while preserving the neuronal signals. Compared with conventional methods, namely, average artefact subtraction (AAS), optimal basis set (OBS) and combined independent component analysis and principal component analysis (ICA-PCA), the statistical analyses of the results showed that the proposed method has better performance, and the differences were significant for all quantitative parameters except for the power and sample entropy. The proposed method does not require any reference signal, prior information or assumption to extract the BCG artefact. It will be very useful in circumstances where the reference signal is not available. Copyright © 2017 Elsevier B.V. All rights reserved.
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation, suitable for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
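The two-step recipe is easy to sketch: color white Gaussian noise in the frequency domain, then push the samples through the normal CDF followed by the target inverse CDF. The 1/f-like spectrum and exponential marginal below are illustrative choices; as the abstract notes, the spectrum is only approximately conserved by the final mapping.

```python
# Spectral-shaping + inverse-transform sketch: colored Gaussian noise mapped
# to an exponential marginal while approximately keeping the target spectrum.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2**14
g = rng.standard_normal(n)

f = np.fft.rfftfreq(n)
H = np.where(f > 0, 1.0 / np.sqrt(f), 0.0)           # illustrative 1/f target
colored = np.fft.irfft(np.fft.rfft(g) * H, n)
colored = (colored - colored.mean()) / colored.std() # restore N(0,1) marginal

u = stats.norm.cdf(colored)          # uniform marginal, correlations retained
x = stats.expon.ppf(u, scale=2.0)    # exponential marginal with mean 2
print(round(x.mean(), 2), bool(x.min() >= 0))        # ~2.0 True
```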
Simulating contrast inversion in atomic force microscopy imaging with real-space pseudopotentials
NASA Astrophysics Data System (ADS)
Lee, Alex; Sakai, Yuki; Chelikowsky, James
Atomic force microscopy measurements have reported contrast inversions for systems such as Cu2N and graphene that can hamper image interpretation and characterization. Here, we apply a simulation method based on ab initio real-space pseudopotentials to gain an understanding of the tip-sample interactions that influence the inversion. We find that chemically reactive tips induce an attractive binding force that results in the contrast inversion. The inversion is tip height dependent and not observed when using less reactive CO-functionalized tips. Work is supported by the DOE under DOE/DE-FG02-06ER46286 and by the Welch Foundation under Grant F-1837. Computational resources were provided by NERSC and XSEDE.
Handwriting individualization using distance and rarity
NASA Astrophysics Data System (ADS)
Tang, Yi; Srihari, Sargur; Srinivasan, Harish
2012-01-01
Forensic individualization is the task of associating observed evidence with a specific source. The likelihood ratio (LR) is a quantitative measure that expresses the degree of uncertainty in individualization, where the numerator represents the likelihood that the evidence corresponds to the known and the denominator the likelihood that it does not correspond to the known. Since the number of parameters needed to compute the LR is exponential with the number of feature measurements, a commonly used simplification is the use of likelihoods based on distance (or similarity) given the two alternative hypotheses. This paper proposes an intermediate method which decomposes the LR as the product of two factors, one based on distance and the other on rarity. It was evaluated using a data set of handwriting samples, by determining whether two writing samples were written by the same/different writer(s). The accuracy of the distance and rarity method, as measured by error rates, is significantly better than the distance method.
Guo, Feng; Zhou, Weijie; Li, Peng; Mao, Zhangming; Yennawar, Neela H; French, Jarrod B; Huang, Tony Jun
2015-06-01
Advances in modern X-ray sources and detector technology have made it possible for crystallographers to collect usable data on crystals of only a few micrometers or less in size. Despite these developments, sample handling techniques have significantly lagged behind and often prevent the full realization of current beamline capabilities. In order to address this shortcoming, a surface acoustic wave-based method for manipulating and patterning crystals is developed. This method, which does not damage the fragile protein crystals, can precisely manipulate and pattern micrometer and submicrometer-sized crystals for data collection and screening. The technique is robust, inexpensive, and easy to implement. This method not only promises to significantly increase efficiency and throughput of both conventional and serial crystallography experiments, but will also make it possible to collect data on samples that were previously intractable. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Göhler, Daniel; Wessely, Benno; Stintz, Michael; Lazzerini, Giovanni Mattia; Yacoot, Andrew
2017-01-01
Dimensional measurements on nano-objects by atomic force microscopy (AFM) require samples of safely fixed and well individualized particles with a suitable surface-specific particle number on flat and clean substrates. Several known and proven particle preparation methods, i.e., membrane filtration, drying, rinsing, dip coating as well as electrostatic and thermal precipitation, were examined by means of scanning electron microscopy for their suitability in preparing samples for dimensional AFM measurements. Different suspensions of nano-objects (of varying material, size and shape) stabilized in aqueous solutions were therefore prepared on different flat substrates. The drop-drying method was found to be the most suitable one for the analysed suspensions, because it does not require expensive dedicated equipment and led to a uniform local distribution of individualized nano-objects. Traceable AFM measurements based on Si and SiO2 coated substrates confirmed the suitability of this technique. PMID:28904839
Gradient shadow pattern reveals refractive index of liquid
Kim, Wonkyoung; Kim, Dong Sung
2016-01-01
We propose a simple method that uses a gradient shadow pattern (GSP) to measure the refractive index nL of liquids. A light source generates a “dark-bright-dark” GSP when it is projected through the back of a transparent, rectangular block with a cylindrical chamber that is filled with a liquid sample. We found that there is a linear relationship between nL and the proportion of the bright region in a GSP, which provides the basic principle of the proposed method. A wide range 1.33 ≤ nL ≤ 1.46 of liquids was measured in the single measurement setup with error <0.01. The proposed method is simple but robust to illuminating conditions, and does not require any expensive or precise optical components, so we expect that it will be useful in many portable measurement systems that use nL to estimate attributes of liquid samples. PMID:27302603
2014-04-01
…with the binomial distribution for a particular dataset. This technique is more commonly known as the Langer, Bar-on and Miller (LBM) method [22,23]. … Using the LBM method, the frequency distribution plot for a dataset corresponding to a phase separated system, exhibiting a split peak… The estimated parameters (namely μ1, μ2, σ, fγ' and fγ) obtained from the LBM plots in Fig. 5 are summarized in Table 3. The EWQ sample does not exhibit any…
Quantifying errors without random sampling.
Phillips, Carl V; LaPole, Luwanna M
2003-06-12
All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
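A minimal sketch of the Monte Carlo propagation the abstract advocates: represent each input's (possibly systematic) uncertainty as a distribution, push draws through the calculation, and report the spread. All numbers are purely illustrative, not the foodborne-illness estimates.

```python
# Monte Carlo uncertainty propagation sketch for a simple derived quantity:
# incidence = reported cases x underreporting factor / population.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
cases = rng.normal(37_000, 2_000, n)           # surveillance count with error
underreport = rng.triangular(10, 25, 40, n)    # expert-judgment multiplier
population = 2.8e8                             # treated as known here

incidence = cases * underreport / population
lo, mid, hi = np.percentile(incidence, [2.5, 50, 97.5])
print(f"median {mid:.4f}, 95% interval ({lo:.4f}, {hi:.4f})")
```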
Solid-state NMR covariance of homonuclear correlation spectra.
Hu, Bingwen; Amoureux, Jean-Paul; Trebosc, Julien; Deschamps, Michael; Tricot, Gregory
2008-04-07
Direct covariance NMR spectroscopy, which does not involve a Fourier transformation along the indirect dimension, is demonstrated to obtain homonuclear correlation two-dimensional (2D) spectra in the solid state. In contrast to the usual 2D Fourier transform (2D-FT) NMR, in a 2D covariance (2D-Cov) spectrum the spectral resolution in the indirect dimension is determined by the resolution along the detection dimension, thereby largely reducing the time-consuming indirect sampling requirement. The covariance method does not need any separate phase correction or apodization along the indirect dimension because it uses those applied in the detection dimension. We compare in detail the specifications obtained with 2D-FT and 2D-Cov, for narrow and broad resonances. The efficiency of the covariance data treatment is demonstrated on organic and inorganic samples, both well crystallized and amorphous, for spin-1/2 nuclei with 13C, 29Si, and 31P through-space or through-bond homonuclear 2D correlation spectra. In all cases, the experimental time has been reduced by at least a factor of 10, without any loss of resolution and signal-to-noise ratio, with respect to what is necessary with 2D-FT NMR. With this method, we have been able to study the silicate network of glasses by 2D NMR within reasonable experimental time despite the very long relaxation time of the 29Si nucleus. The main limitation of the 2D-Cov data treatment is related to the introduction of autocorrelated peaks onto the diagonal, which do not represent any actual connectivity.
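The direct covariance construction can be sketched in a few lines: with the data arranged as a matrix D (t1 increments by detected frequency points, after Fourier transform along the detection dimension), the covariance spectrum is C = (D^T D)^(1/2), so the indirect axis inherits the detection-axis resolution. The random matrix below is a stand-in for real mixed-time data.

```python
# Direct covariance NMR sketch: matrix square root of D^T D replaces the FT
# along the indirect dimension; D here is random stand-in data.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))   # few t1 increments, many f2 points

C = sqrtm(D.T @ D).real              # (256, 256) covariance 2D spectrum
print(C.shape, bool(np.allclose(C, C.T, atol=1e-8)))
```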
NASA Technical Reports Server (NTRS)
Shiller, Alan M.
2003-01-01
It is well-established that sampling and sample processing can easily introduce contamination into dissolved trace element samples if precautions are not taken. However, work in remote locations sometimes precludes bringing bulky clean lab equipment into the field and likewise may make timely transport of samples to the lab for processing impossible. Straightforward syringe filtration methods are described here for collecting small quantities (15 mL) of 0.45- and 0.02-microm filtered river water in an uncontaminated manner. These filtration methods take advantage of recent advances in analytical capabilities that require only small amounts of water for analysis of a suite of dissolved trace elements. Filter clogging and solute rejection artifacts appear to be minimal, although some adsorption of metals and organics does affect the first approximately 10 mL of water passing through the filters. Overall the methods are clean, easy to use, and provide reproducible representations of the dissolved and colloidal fractions of trace elements in river waters. Furthermore, sample processing materials can be prepared well in advance in a clean lab and transported cleanly and compactly to the field. Application of these methods is illustrated with data from remote locations in the Rocky Mountains and along the Yukon River. Evidence from field flow fractionation suggests that the 0.02-microm filters may provide a practical cutoff to distinguish metals associated with small inorganic and organic complexes from those associated with silicate and oxide colloids.
Heat Management Strategies for Solid-state NMR of Functional Proteins
Fowler, Daniel J.; Harris, Michael J.; Thompson, Lynmarie K.
2012-01-01
Modern solid-state NMR methods can acquire high-resolution protein spectra for structure determination. However, these methods use rapid sample spinning and intense decoupling fields that can heat and denature the protein being studied. Here we present a strategy to avoid destroying valuable samples. We advocate first creating a sacrificial sample, which contains unlabeled protein (or no protein) in buffer conditions similar to the intended sample. This sample is then doped with the chemical shift thermometer Sm2Sn2O7. We introduce a pulse scheme called TCUP (for Temperature Calibration Under Pulseload) that can characterize the heating of this sacrificial sample rapidly, under a variety of experimental conditions, and with high temporal resolution. Sample heating is discussed with respect to different instrumental variables such as spinning speed, decoupling strength and duration, and cooling gas flow rate. The effects of different sample preparation variables are also discussed, including ionic strength, the inclusion of cryoprotectants, and the physical state of the sample (i.e. liquid, solid, or slurry). Lastly, we discuss probe detuning as a measure of sample thawing that does not require retuning the probe or using chemical shift thermometer compounds. Use of detuning tests and chemical shift thermometers with representative sample conditions makes it possible to maximize the efficiency of the NMR experiment while retaining a functional sample. PMID:22868258
Symmetry compression method for discovering network motifs.
Wang, Jianxin; Huang, Yuannan; Wu, Fang-Xiang; Pan, Yi
2012-01-01
Discovering network motifs could provide significant insight into systems biology. Interestingly, many biological networks have been found to have a high degree of symmetry (automorphism), which is inherent in biological network topologies. The symmetry due to the large number of basic symmetric subgraphs (BSSs) causes a certain amount of redundant calculation in discovering network motifs. Therefore, we compress all basic symmetric subgraphs before extracting compressed subgraphs and propose an efficient decompression algorithm to decompress all compressed subgraphs without loss of any information. In contrast to previous approaches, the novel Symmetry Compression method for Motif Detection, named SCMD, eliminates most redundant calculations caused by the widespread symmetry of biological networks. We use SCMD to improve three notable exact algorithms and two efficient sampling algorithms. The results of all exact algorithms with SCMD are the same as those of the original algorithms, since SCMD is a lossless method. The sampling results show that the use of SCMD has almost no effect on the quality of the sampling results. For highly symmetric networks, we find that using SCMD in both exact and sampling algorithms yields a remarkable speedup. Furthermore, SCMD enables us to find larger motifs in biological networks with notable symmetry than previously possible.
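The symmetry being exploited can be made concrete by grouping nodes into automorphism orbits, since nodes in the same orbit are interchangeable for subgraph counting. This is a toy sketch of that idea using networkx, not the SCMD algorithm itself, and enumerating automorphisms this way is only feasible for small graphs:

import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def automorphism_orbits(G):
    # Nodes mapped onto each other by some automorphism form one orbit,
    # so per-node quantities need only be computed once per orbit.
    orbit = {v: frozenset([v]) for v in G}
    for auto in GraphMatcher(G, G).isomorphisms_iter():  # all automorphisms
        for v, w in auto.items():
            merged = orbit[v] | orbit[w]
            for u in merged:
                orbit[u] = merged
    return set(orbit.values())

# A 6-cycle is highly symmetric: all nodes collapse into a single orbit
print(automorphism_orbits(nx.cycle_graph(6)))
# A path graph has only mirror symmetry: endpoints pair up, center stands alone
print(automorphism_orbits(nx.path_graph(5)))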
DOE-2 sample run book: Version 2.1E
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winkelmann, F.C.; Birdsall, B.E.; Buhl, W.F.
1993-11-01
The DOE-2 Sample Run Book shows inputs and outputs for a variety of building and system types. The samples start with a simple structure and continue to a high-rise office building, a medical building, three small office buildings, a bar/lounge, a single-family residence, a small office building with daylighting, a single-family residence with an attached sunspace, a "parameterized" building using input macros, and a metric input/output example. All of the samples use Chicago TRY weather. The main purpose of the Sample Run Book is instructional. It shows the relationship of LOADS-SYSTEMS-PLANT-ECONOMICS inputs, displays various input styles, and illustrates many of the basic and advanced features of the program. Many of the sample runs are preceded by a sketch of the building showing its general appearance and the zoning used in the input. In some cases we also show a 3-D rendering of the building as produced by the program DrawBDL. Descriptive material has been added as comments in the input itself. We find that a number of users have loaded these samples onto their editing systems and use them as "templates" for creating new inputs. Another way of using them would be to store various portions as files that can be read into the input using the ## include command, which is part of the Input Macro feature introduced in version DOE-2.1D. Note that the energy rate structures here are the same as in the DOE-2.1D samples, but have been rewritten using the new DOE-2.1E commands and keywords for ECONOMICS. The samples contained in this report are the same as those found on the DOE-2 release files. However, the output numbers that appear here may differ slightly from those obtained from the release files. The output on the release files can be used as a check set to compare results on your computer.
Estimating Divergence Parameters With Small Samples From a Large Number of Loci
Wang, Yong; Hey, Jody
2010-01-01
Most methods for studying divergence with gene flow rely upon data from many individuals at few loci. Such data can be useful for inferring recent population history but they are unlikely to contain sufficient information about older events. However, the growing availability of genome sequences suggests a different kind of sampling scheme, one that may be more suited to studying relatively ancient divergence. Data sets extracted from whole-genome alignments may represent very few individuals but contain a very large number of loci. To take advantage of such data we developed a new maximum-likelihood method for genomic data under the isolation-with-migration model. Unlike many coalescent-based likelihood methods, our method does not rely on Monte Carlo sampling of genealogies, but rather provides a precise calculation of the likelihood by numerical integration over all genealogies. We demonstrate that the method works well on simulated data sets. We also consider two models for accommodating mutation rate variation among loci and find that the model that treats mutation rates as random variables leads to better estimates. We applied the method to the divergence of Drosophila melanogaster and D. simulans and detected a low, but statistically significant, signal of gene flow from D. simulans to D. melanogaster. PMID:19917765
Unified framework to evaluate panmixia and migration direction among multiple sampling locations.
Beerli, Peter; Palczewski, Michal
2010-05-01
For many biological investigations, groups of individuals are genetically sampled from several geographic locations. These sampling locations often do not reflect the genetic population structure. We describe a framework using marginal likelihoods to compare and order structured population models, such as testing whether the sampling locations belong to the same randomly mating population or comparing unidirectional and multidirectional gene flow models. In the context of inferences employing Markov chain Monte Carlo methods, the accuracy of the marginal likelihoods depends heavily on the approximation method used to calculate the marginal likelihood. Two methods, modified thermodynamic integration and a stabilized harmonic mean estimator, are compared. With finite Markov chain Monte Carlo run lengths, the harmonic mean estimator may not be consistent. Thermodynamic integration, in contrast, delivers considerably better estimates of the marginal likelihood. The choice of prior distributions does not influence the order and choice of the better models when the marginal likelihood is estimated using thermodynamic integration, whereas with the harmonic mean estimator the influence of the prior is pronounced and the order of the models changes. The approximation of marginal likelihood using thermodynamic integration in MIGRATE allows the evaluation of complex population genetic models, not only of whether sampling locations belong to a single panmictic population, but also of competing complex structured population models.
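The thermodynamic integration estimator at the heart of this comparison can be sketched in a toy conjugate-normal model where the power posteriors can be sampled exactly and the true marginal likelihood is known in closed form. This illustrates the path-sampling identity log m(y) = integral from beta = 0 to 1 of E_beta[log p(y|theta)], not the MIGRATE implementation:

import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(1)
y = rng.normal(1.0, 1.0, size=20)     # data: y_i ~ N(theta, 1), prior theta ~ N(0, 1)
n, s = len(y), y.sum()

def mean_loglik(beta, draws=20_000):
    # E[log p(y|theta)] under the power posterior ~ p(y|theta)^beta p(theta),
    # which is conjugate-normal here and can be sampled exactly
    tau = 1.0 + beta * n
    theta = rng.normal(beta * s / tau, np.sqrt(1.0 / tau), draws)
    return norm.logpdf(y[:, None], loc=theta).sum(axis=0).mean()

betas = np.linspace(0.0, 1.0, 21)
vals = np.array([mean_loglik(b) for b in betas])
ti = float(((vals[1:] + vals[:-1]) / 2 * np.diff(betas)).sum())  # trapezoid rule

exact = multivariate_normal(np.zeros(n), np.eye(n) + np.ones((n, n))).logpdf(y)
print(f"thermodynamic integration: {ti:.3f}, exact log marginal: {exact:.3f}")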
Computational fragment-based screening using RosettaLigand: the SAMPL3 challenge
NASA Astrophysics Data System (ADS)
Kumar, Ashutosh; Zhang, Kam Y. J.
2012-05-01
The SAMPL3 fragment-based virtual screening challenge provides a valuable opportunity for researchers to test their programs, methods, and screening protocols in a blind testing environment. We participated in the SAMPL3 challenge and evaluated our virtual fragment screening protocol, which involves RosettaLigand as the core component, by screening a 500-fragment Maybridge library against bovine pancreatic trypsin. Our study reaffirmed that the real test for any virtual screening approach is a blind testing environment. The analyses presented in this paper also showed that virtual screening performance can be improved if a set of known active compounds is available and parameters and methods that yield better enrichment are selected. Our study also highlighted that, to achieve accurate orientation and conformation of ligands within a binding site, selecting an appropriate method to calculate partial charges is important. Another finding is that using multiple receptor ensembles in docking does not always yield better enrichment than individual receptors. On the basis of our results and retrospective analyses from the SAMPL3 fragment screening challenge, we anticipate that the chances of success in a fragment screening process could be increased significantly with careful selection of receptor structures, protein flexibility, sufficient conformational sampling within the binding pocket, and accurate assignment of ligand and protein partial charges.
Asymmetric masks for laboratory-based X-ray phase-contrast imaging with edge illumination.
Endrizzi, Marco; Astolfo, Alberto; Vittoria, Fabio A; Millard, Thomas P; Olivo, Alessandro
2016-05-05
We report on an asymmetric mask concept that enables X-ray phase-contrast imaging without requiring any movement in the system during data acquisition. The method is compatible with laboratory equipment, namely a commercial detector and a rotating anode tube. The only motion required is that of the object under investigation which is scanned through the imaging system. Two proof-of-principle optical elements were designed, fabricated and experimentally tested. Quantitative measurements on samples of known shape and composition were compared to theory with good agreement. The method is capable of measuring the attenuation, refraction and (ultra-small-angle) X-ray scattering, does not have coherence requirements and naturally adapts to all those situations in which the X-ray image is obtained by scanning a sample through the imaging system.
Liu, Xiaowen; Pervez, Hira; Andersen, Lars W; Uber, Amy; Montissol, Sophia; Patel, Parth; Donnino, Michael W
2015-01-01
Pyruvate dehydrogenase (PDH) activity is altered in many human disorders. Current methods require tissue samples and yield inconsistent results. We describe a modified method for measuring PDH activity from isolated human peripheral blood mononuclear cells (PBMCs). RESULTS/METHODOLOGY: We found that PDH activity and quantity can be successfully measured in human PBMCs. Freeze-thaw cycles cannot efficiently disrupt the mitochondrial membrane. Processing times of up to 20 h did not affect PDH activity when a proteinase inhibitor was added, and a detergent concentration of 3.3% showed maximum yield. Sample protein concentration is correlated with PDH activity and quantity in human PBMCs from healthy subjects. Measuring PDH activity from PBMCs is a novel, easy, and less invasive way to further understand the role of PDH in human disease.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czubek, J.A.; Drozdowicz, K.; Gabanska, B.
Czubek's method of measurement of the thermal neutron macroscopic absorption cross section of small samples has been developed at the Henryk Niewodniczanski Institute of Nuclear Physics in Krakow, Poland. Theoretical principles of the method have been elaborated in the one-velocity diffusion approach, in which the thermal neutron parameters used have been averaged over a modified Maxwellian. In consecutive measurements the investigated sample is enveloped in shells of a known moderator of varying thickness and irradiated with a pulsed beam of fast neutrons. The neutrons are slowed down in the system and the die-away rate of escaping thermal neutrons is measured. The decay constant vs. thickness of the moderator creates the experimental curve. The absorption cross section of the unknown sample is found from the intersection of this curve with the theoretical one. The theoretical curve is calculated for the case when the dynamic material buckling of the inner sample is zero. The method does not use any reference absorption standard and is independent of the transport cross section of the measured sample. The volume of the sample, in the form of fluid or crushed material, is about 170 cm³. The standard deviation for the measured mass absorption cross section of rock samples is in the range of 4 to 20% of the measured value, and for brines it is of the order of 0.5%.
Le, Minh Uyen Thi; Son, Jin Gyeong; Shon, Hyun Kyoung; Park, Jeong Hyang; Lee, Sung Bae; Lee, Tae Geol
2018-03-30
Time-of-flight secondary ion mass spectrometry (ToF-SIMS) imaging elucidates molecular distributions in tissue sections, providing useful information about the metabolic pathways linked to diseases. However, delocalization of the analytes and inadequate tissue adherence during sample preparation are unfortunate phenomena associated with this technique, because they reduce the quality, reliability, and spatial resolution of the ToF-SIMS images. For these reasons, ToF-SIMS imaging requires a rigorous sample preparation method in order to preserve the natural state of the tissues. The traditional thaw-mounting method is particularly vulnerable to altered distributions of the analytes due to thermal effects, as well as to tissue shrinkage. In the present study, the authors compared different tissue-mounting methods, including the thaw-mounting method. The authors used conductive tape as the tissue-mounting material on the substrate because it does not require heat from the finger for the tissue section to adhere to the substrate and can reduce charge accumulation during data acquisition. With the conductive-tape sampling method, they were able to acquire reproducible tissue sections and high-quality images without redistribution of the molecules. The authors were also successful in preserving the natural states and chemical distributions of the different components of fat metabolites, such as diacylglycerol and fatty acids, by using tape-supported sampling in microRNA-14 (miR-14) deleted Drosophila models. The method highlighted here shows an improvement in the accuracy of mass spectrometric imaging of tissue samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robbins, G.A.; Brandes, S.D.; Winschel, R.A.
1995-05-01
The objectives of this project are to support the DOE direct coal liquefaction process development program and to improve the useful application of analytical chemistry to direct coal liquefaction process development. Independent analyses by well-established methods will be obtained of samples produced in direct coal liquefaction processes under evaluation by DOE. Additionally, analytical instruments and techniques which are currently underutilized for the purpose of examining coal-derived samples will be evaluated. The data obtained from this study will be used to help guide current process development and to develop an improved data base on coal and coal liquids properties. A sample bank will be established and maintained for use in this project and will be available for use by other researchers. The reactivity of the non-distillable resids toward hydrocracking at liquefaction conditions (i.e., resid reactivity) will be examined. From the literature and data experimentally obtained, a mathematical kinetic model of resid conversion will be constructed. It is anticipated that such a model will provide insights useful for improving process performance and thus the economics of direct coal liquefaction. During this quarter, analyses were completed on 65 process samples from representative periods of HRI Run POC-2 in which coal, coal/plastics, and coal/rubber were the feedstocks. A sample of the oil phase of the oil/water separator from HRI Run POC-1 was analyzed to determine the types and concentrations of phenolic compounds. Chemical analyses and microautoclave tests were performed to monitor the oxidation and measure the reactivity of the standard coal (Old Ben Mine No. 1) which has been used for the last six years to determine solvent quality of process oils analyzed in this and previous DOE contracts.
Estimating means and variances: The comparative efficiency of composite and grab samples.
Brumelle, S; Nemetz, P; Casey, D
1984-03-01
This paper compares the efficiencies of two sampling techniques for estimating a population mean and variance. One procedure, called grab sampling, consists of collecting and analyzing one sample per period. The second procedure, called composite sampling, collects n samples per period, which are then pooled and analyzed as a single sample. We review the well-known fact that composite sampling provides a superior estimate of the mean. However, it is somewhat surprising that composite sampling does not always generate a more efficient estimate of the variance. For populations with platykurtic distributions, grab sampling gives a more efficient estimate of the variance, whereas composite sampling is better for leptokurtic distributions. These conditions on kurtosis can be related to peakedness and skewness. For example, a necessary condition for composite sampling to provide a more efficient estimate of the variance is that the population density function evaluated at the mean (i.e., f(μ)) be greater than [Formula: see text]. If [Formula: see text], then a grab sample is more efficient. In spite of this result, however, composite sampling does provide a smaller estimate of standard error than does grab sampling in the context of estimating population means.
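The kurtosis dependence can be checked by simulation: at equal analytical cost (one analysis per period), compare the mean squared error of the grab and composite variance estimators under a platykurtic (uniform) and a leptokurtic (Laplace) population of unit variance. The setup below is illustrative, not the paper's derivation:

import numpy as np

rng = np.random.default_rng(0)

def mse_of_variance_estimators(draw, n=4, periods=12, reps=20_000):
    # draw(shape) must return samples from a unit-variance population
    grab = draw((reps, periods))                  # one unit analyzed per period
    comp = draw((reps, periods, n)).mean(axis=2)  # n units pooled, one analysis
    v_grab = grab.var(axis=1, ddof=1)             # direct variance estimate
    v_comp = n * comp.var(axis=1, ddof=1)         # var(mean of n) scaled by n
    return ((v_grab - 1) ** 2).mean(), ((v_comp - 1) ** 2).mean()

uniform = lambda sh: rng.uniform(-np.sqrt(3), np.sqrt(3), sh)  # platykurtic
laplace = lambda sh: rng.laplace(0.0, 1 / np.sqrt(2), sh)      # leptokurtic

print("uniform (grab, composite):", mse_of_variance_estimators(uniform))
print("laplace (grab, composite):", mse_of_variance_estimators(laplace))

With these settings the grab estimator shows the lower mean squared error for the uniform population and the composite estimator wins for the Laplace population, matching the kurtosis condition stated above.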
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Sam; Dam, Wiliam
In 2012, the U.S. Department of Energy (DOE) began reassessing the former Riverton, Wyoming, Processing Site area for potential contaminant sources impacting groundwater. A flood in 2010 along the Little Wind River resulted in increases in groundwater contamination (DOE 2013). This investigation is a small part of continued efforts by DOE and other stakeholders to update human health and ecological risk assessments, and to make a comprehensive examination of all exposure pathways to ensure that the site remains protective through established institutional controls. During field inspections at the Riverton Site in 2013, a white evaporitic mineral deposit was identified along the bank of the Little Wind River within the discharge zone of the groundwater contamination plume. In December 2013, Savannah River National Laboratory (SRNL) personnel collected a sample for analysis by X-ray fluorescence (Figure 1 shows the type of material sampled). The sample had a uranium concentration of approximately 64 to 73 parts per million. Although the uranium in this mineral deposit is within the expected range for evaporitic minerals in the western United States (SRNL 2014), DOE determined that additional assessment of the mineral deposit was warranted. In response to the initial collection and analysis of a sample of the mineral deposit, DOE developed a work plan (Work Plan to Sample Mineral Deposits Along the Little Wind River, Riverton, Wyoming, Processing Site [DOE 2014]) to further define the extent of these mineral deposits and the concentration of the associated contaminants (Appendix A). The work plan addressed field reconnaissance, mapping, sampling, and the assessment of risk associated with the mineral deposits adjacent to the Little Wind River.
NASA Astrophysics Data System (ADS)
Yan, Xiaozhi; He, Duanwei; Xu, Chao; Ren, Xiangting; Zhou, Xiaoling; Liu, Shenzuo
2012-12-01
A new method is introduced for investigating the compressibility of solids under high pressure by in situ electrical resistance measurement of a manganin wire wrapped around the sample. This method does not rely on lattice parameter measurements, and the continuous volume change of the sample versus pressure can be obtained. It is therefore convenient for examining the compressibility of solids, especially X-ray-diffraction-amorphous materials. The I-II and II-III phase transitions of Bi, accompanied by volume changes of 4.5% and 3.5%, respectively, have been detected using the method, while the volume change for the phase transition of Tl occurring at 3.67 GPa is determined to be 0.5%. A fit of the third-order Birch-Murnaghan equation of state to our data yields a zero-pressure bulk modulus K0 = 28.98±0.03 GPa for NaCl and 6.97±0.02 GPa for amorphous red phosphorus.
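For readers who want to reproduce an equation-of-state fit of this kind, here is a sketch using scipy: synthetic pressure-volume data (loosely chosen around the NaCl value reported above, not the authors' measurements) are fit to the third-order Birch-Murnaghan form:

import numpy as np
from scipy.optimize import curve_fit

def bm3(V, K0, Kp, V0):
    # Third-order Birch-Murnaghan P(V); K0 in GPa, V in relative units
    x = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * K0 * (x**7 - x**5) * (1.0 + 0.75 * (Kp - 4.0) * (x**2 - 1.0))

rng = np.random.default_rng(3)
V = np.linspace(0.80, 1.00, 20)                           # relative volumes V/V0
P = bm3(V, 29.0, 5.0, 1.0) + rng.normal(0, 0.05, V.size)  # synthetic noisy data

(K0, Kp, V0), cov = curve_fit(bm3, V, P, p0=(30.0, 4.0, 1.0))
err = np.sqrt(np.diag(cov))
print(f"K0 = {K0:.2f} +/- {err[0]:.2f} GPa, K0' = {Kp:.2f}, V0 = {V0:.3f}")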
Version 2.0 Visual Sample Plan (VSP): UXO Module Code Description and Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, Richard O.; Wilson, John E.; O'Brien, Robert F.
2003-05-06
The Pacific Northwest National Laboratory (PNNL) is developing statistical methods for determining the amount of geophysical surveying conducted along transects (swaths) that is needed to achieve specified levels of confidence of finding target areas (TAs) of anomalous readings and possibly unexploded ordnance (UXO) at closed, transferring, and transferred (CTT) Department of Defense (DoD) ranges and other sites. The statistical methods developed by PNNL have been coded into the UXO module of the Visual Sample Plan (VSP) software code that is being developed by PNNL with support from the DoD, the U.S. Department of Energy (DOE), and the U.S. Environmental Protection Agency (EPA). (The VSP software and VSP Users Guide (Hassig et al., 2002) may be downloaded from http://dqo.pnl.gov/vsp.) This report describes and documents the statistical methods developed and the calculations and verification testing that have been conducted to verify that VSP's implementation of these methods is correct and accurate.
Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.
Kwak, Nojun
2016-05-20
Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
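The batch NPT underlying INPT is compact enough to state in a few lines: eigendecompose the kernel matrix and scale the eigenvectors so that the resulting coordinates reproduce the kernel exactly. A minimal numpy sketch with a hypothetical RBF kernel follows (the incremental update itself is not shown):

import numpy as np

def npt_coordinates(K, tol=1e-10):
    # Given a kernel matrix K, return coordinates Y with Y @ Y.T == K,
    # so any linear algorithm applied to the rows of Y becomes a kernel method.
    w, V = np.linalg.eigh(K)
    keep = w > tol                        # drop numerically zero directions
    return V[:, keep] * np.sqrt(w[keep])  # rows are sample coordinates

X = np.random.default_rng(7).normal(size=(50, 2))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)                     # RBF kernel on toy data

Y = npt_coordinates(K)
print(np.allclose(Y @ Y.T, K, atol=1e-8))  # True: coordinates reproduce K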
Measuring fluorescence polarization with a dichrometer
Sutherland, John C.
2017-04-06
In this article, a method for obtaining fluorescence polarization data from an instrument designed to measure circular and linear dichroism is compared with a previously reported approach. The new method places a polarizer between the sample and a detector mounted perpendicular to the direction of the incident beam and results in determination of the fluorescence polarization ratio, whereas the previous method does not use a polarizer and yields the fluorescence anisotropy. A similar analysis with the detector located axially with the excitation beam demonstrates that there is no frequency modulated signal due to fluorescence polarization in the absence of a polarizer.
Accelerated Adaptive Integration Method
2015-01-01
Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (Bromocyclohexane) and the more complex biomolecule Thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083
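The λ-hopping idea behind AIM, with an AcclAIM-style flattening of the potential at intermediate λ, can be caricatured in a one-dimensional toy model. Nothing here is the published parameterization; the potentials, the flattening factor, and the move schedule are all hypothetical:

import numpy as np

rng = np.random.default_rng(5)

def U(x, lam, alpha=0.8):
    # Mixed potential with a flattening factor that scales barriers down
    # at intermediate lambda (alpha = 0 recovers an unflattened mixture).
    base = (1 - lam) * (x**2 - 1) ** 2 + lam * ((x - 0.5) ** 2 - 1) ** 2
    return (1 - 4 * alpha * lam * (1 - lam)) * base

x, lam, kT = 1.0, 0.0, 0.2
lam_grid = np.linspace(0.0, 1.0, 11)
for step in range(50_000):
    x_new = x + rng.normal(0, 0.3)                     # Metropolis move in x
    if np.log(rng.random()) < -(U(x_new, lam) - U(x, lam)) / kT:
        x = x_new
    if step % 10 == 0:                                 # occasional lambda hop
        lam_new = rng.choice(lam_grid)
        if np.log(rng.random()) < -(U(x, lam_new) - U(x, lam)) / kT:
            lam = lam_new
print("final state:", x, lam)

A production implementation would additionally bias the λ moves so that all λ values are visited uniformly; the sketch only shows how sampling at one λ seeds configurations for another.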
A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing
Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian
2016-01-01
Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
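The clustering step (AP-Cluster refers to affinity propagation) can be sketched with scikit-learn on hypothetical crowdsourced RSSI fingerprints; the exemplars it returns play the role of the representative fingerprints in the radio-map:

import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(9)
# Hypothetical fingerprints: rows = observations, columns = access points (dBm)
true_spots = rng.uniform(-80, -40, size=(5, 6))            # 5 locations, 6 APs
fps = np.vstack([spot + rng.normal(0, 2, (40, 6)) for spot in true_spots])

ap = AffinityPropagation(random_state=0).fit(fps)
representatives = ap.cluster_centers_   # exemplar fingerprints for the radio-map
print(len(representatives), "representative fingerprints recovered")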
Samuel, Douglas B; Connolly, Adrian J; Ball, Samuel A
2012-09-01
The DSM-5 proposal indicates that personality disorders (PDs) be defined as collections of maladaptive traits but does not provide a specific diagnostic method. However, researchers have previously suggested that PD constructs can be assessed by comparing individuals' trait profiles with those prototypic of PDs and evidence from the five-factor model (FFM) suggests that these prototype matching scores converge moderately with traditional PD instruments. The current study investigates the convergence of FFM PD prototypes with interview-assigned PD diagnoses in a sample of 99 homeless individuals. This sample had very high rates of PDs, which extends previous research on samples with more modest prevalence rates. Results indicated that diagnostic agreement between these methods was generally low but consistent with the agreement previously observed between explicit PD measures. Furthermore, trait-based and diagnostic interview scores evinced similar relationships with clinically important indicators such as abuse history and past suicide attempts. These findings demonstrate the validity of prototype methods and suggest their consideration for assessing trait-defined PD types within DSM-5.
Inagaki, Kazumi; Narukawa, Tomohiro; Yarita, Takashi; Takatsu, Akiko; Okamoto, Kensaku; Chiba, Koichi
2007-10-01
A coprecipitation method using sample constituents as carrier precipitants was developed that can remove molybdenum, which interferes with the determination of cadmium in grain samples via isotope dilution inductively coupled plasma mass spectrometry (ID-ICPMS). Samples were digested with HNO3, HF, and HClO4, and then purified 6 M sodium hydroxide solution was added to generate colloidal hydrolysis compounds, mainly magnesium hydroxide. Cadmium can be effectively separated from molybdenum because the cadmium forms hydroxides and adsorbs onto and/or is occluded in the colloid, while the molybdenum does not form hydroxides or adsorb onto the hydrolysis colloid. The colloid was separated by centrifugation and then dissolved with 0.2 M HNO3 solution to recover the cadmium. The recovery of Cd achieved using the coprecipitation was >97%, and the removal efficiency of Mo was approximately 99.9%. An extremely low procedural blank (below the detection limit of ICPMS) was achieved by purifying the 6 M sodium hydroxide solution via Mg coprecipitation using Mg(NO3)2 solution. The proposed method was applied to two certified reference materials (NIST SRM 1567a wheat flour and SRM 1568a rice flour) and CCQM-P64 soybean powder. Good analytical results with small uncertainties were obtained for all samples. This method is simple and reliable for the determination of Cd in grain samples by ID-ICPMS.
Gamma Radiation Sterilization Reduces the High-cycle Fatigue Life of Allograft Bone.
Islam, Anowarul; Chapin, Katherine; Moore, Emily; Ford, Joel; Rimnac, Clare; Akkus, Ozan
2016-03-01
Sterilization by gamma radiation impairs the mechanical properties of bone allografts. Previous work related to radiation-induced embrittlement of bone tissue has been limited mostly to monotonic testing, which does not necessarily predict the high-cycle fatigue life of allografts in vivo. We designed a custom rotating-bending fatigue device to answer the following questions: (1) Does gamma radiation sterilization affect the high-cycle fatigue behavior of cortical bone; and (2) how does the fatigue life change with cyclic stress level? The high-cycle fatigue behavior of human cortical bone specimens was examined at stress levels related to physiologic levels using a custom-designed rotating-bending fatigue device. Test specimens were distributed among two treatment groups (n = 6/group): control and irradiated. Samples were tested until failure at stress levels of 25, 35, and 45 MPa. At 25 MPa, 83% of control samples survived 30 million cycles (run-out) whereas 83% of irradiated samples survived only 0.5 million cycles. At 35 MPa, irradiated samples showed an approximately 19-fold reduction in fatigue life compared with control samples (12.2 × 10⁶ ± 12.3 × 10⁶ versus 6.38 × 10⁵ ± 6.81 × 10⁵ cycles; p = 0.046), and at 45 MPa this reduction was approximately 17.5-fold (7.31 × 10⁵ ± 6.39 × 10⁵ versus 4.17 × 10⁴ ± 1.91 × 10⁴ cycles; p = 0.025). Equations to estimate the high-cycle fatigue life of irradiated and control cortical bone allograft at a given stress level were derived. Gamma radiation sterilization severely impairs the high-cycle fatigue life of structural allograft bone tissues, more so than the decline that has been reported for monotonic mechanical properties. Therefore, clinicians need to be conservative in the expectation of the fatigue life of structural allograft bone tissues. Methods to preserve the fatigue strength of nonirradiated allograft bone tissue are needed. As opposed to what monotonic tests might suggest, the cyclic fatigue life of radiation-sterilized structural allografts is likely severely compromised relative to the nonirradiated condition and therefore should be taken into consideration. Methods to reduce the effect of irradiation or to recover structural allograft bone tissue fatigue strength are important to pursue.
Plotnikov, Nikolay V
2014-08-12
Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force.
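The free-energy machinery mentioned here (one-sided free-energy perturbation and the linear response approximation) can be illustrated on a pair of harmonic toy states where the exact answer is known; this is a sketch of the estimators, not of the paradynamics protocol:

import numpy as np

rng = np.random.default_rng(11)
kT, k0, k1, c = 1.0, 1.0, 1.5, 2.0      # states U_i = 0.5*k_i*x^2 (+c for state 1)
exact = c + 0.5 * kT * np.log(k1 / k0)  # analytic free-energy difference

x0 = rng.normal(0, np.sqrt(kT / k0), 200_000)  # exact samples from state 0
x1 = rng.normal(0, np.sqrt(kT / k1), 200_000)  # exact samples from state 1
dU0 = 0.5 * (k1 - k0) * x0**2 + c              # U1 - U0 on state-0 samples
dU1 = 0.5 * (k1 - k0) * x1**2 + c              # U1 - U0 on state-1 samples

fep = -kT * np.log(np.exp(-dU0 / kT).mean())   # one-sided free-energy perturbation
lra = 0.5 * (dU0.mean() + dU1.mean())          # linear response approximation
print(f"exact {exact:.3f}  FEP {fep:.3f}  LRA {lra:.3f}")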
Ventham, Nicholas T; Gardner, Richard A; Kennedy, Nicholas A; Shubhakar, Archana; Kalla, Rahul; Nimmo, Elaine R; Fernandes, Daryl L; Satsangi, Jack; Spencer, Daniel I R
2015-01-01
Serum N-glycans have been identified as putative biomarkers for numerous diseases. The impact of different serum sample tubes and processing methods on N-glycan analysis has received relatively little attention. This study aimed to determine the effect of different sample tubes and processing methods on the whole serum N-glycan profile in both health and disease. A secondary objective was to describe a robot-automated N-glycan release, labeling and cleanup process for use in a biomarker discovery system. 25 patients with active and quiescent inflammatory bowel disease and controls had three different serum sample tubes taken at the same draw. Two different processing methods were used for three types of tube (with and without gel-separation medium). Samples were randomised and processed in a blinded fashion. Whole serum N-glycan release, 2-aminobenzamide labeling and cleanup was automated using a Hamilton Microlab STARlet Liquid Handling robot. Samples were analysed using a hydrophilic interaction liquid chromatography/ethylene bridged hybrid (BEH) column on an ultra-high performance liquid chromatography instrument. Data were analysed quantitatively by pairwise correlation and hierarchical clustering using the area under each chromatogram peak. Qualitatively, a blinded assessor attempted to match chromatograms to each individual. There was small intra-individual variation in serum N-glycan profiles from samples collected using different sample processing methods. Intra-individual correlation coefficients were between 0.99 and 1. Unsupervised hierarchical clustering and principal coordinate analyses accurately matched samples from the same individual. Qualitative analysis demonstrated good chromatogram overlay and a blinded assessor was able to accurately match individuals based on chromatogram profile, regardless of disease status. The three different serum sample tubes processed using the described methods cause minimal inter-individual variation in serum whole N-glycan profile when processed using an automated workstream. This has important implications for N-glycan biomarker discovery studies using different serum processing standard operating procedures.
Belu, A; Schnitker, J; Bertazzo, S; Neumann, E; Mayer, D; Offenhäusser, A; Santoro, F
2016-07-01
The preparation of biological cells for either scanning or transmission electron microscopy requires a complex process of fixation, dehydration and drying. Critical point drying is commonly used for samples investigated with a scanning electron beam, whereas resin infiltration is typically used for transmission electron microscopy. Critical point drying may cause cracks at the cellular surface and a sponge-like morphology of nondistinguishable intracellular compartments. Resin-infiltrated biological samples result in a solid block of resin, which can be further processed by mechanical sectioning; however, this does not allow a top-view examination of small cell-cell and cell-surface contacts. Here, we propose a method for removing excess resin from biological samples before effective polymerization. In this way the cells are embedded in an ultra-thin layer of epoxy resin. In contrast to standard methods, this novel method enables scanning electron microscopy imaging of individual cells not only on nanostructured planar surfaces but also on topologically challenging substrates with high-aspect-ratio three-dimensional features.
NASA Astrophysics Data System (ADS)
Dennison, Andrew G.
Classification of the seafloor substrate can be done with a variety of methods: visual (dives, drop cameras), mechanical (cores, grab samples), and acoustic (statistical analysis of echosounder returns). Acoustic methods offer a more powerful and efficient means of collecting useful information about the bottom type. Due to the nature of an acoustic survey, larger areas can be sampled, and combining the collected data with visual and mechanical survey methods provides greater confidence in the classification of a mapped region. During a multibeam sonar survey, both bathymetric and backscatter data are collected. It is well documented that the statistical character of a sonar backscatter mosaic depends on bottom type. While classifying the bottom type on the basis of backscatter alone can accurately predict and map bottom type, e.g., distinguishing a muddy area from a rocky area, it lacks the ability to resolve and capture fine textural details, an important factor in many habitat mapping studies. Statistical processing of high-resolution multibeam data can capture pertinent details about the bottom type that are rich in textural information, and further multivariate statistical processing can then isolate characteristic features and provide the basis for an accurate classification scheme. The development of a new classification method is described here. It is based upon the analysis of textural features in conjunction with ground truth sampling. The processing and classification results for two geologically distinct areas in nearshore regions of Lake Superior, off the Lester River, MN, and the Amnicon River, WI, are presented here, using the Minnesota Supercomputer Institute's Mesabi computing cluster for initial processing. Processed data are then calibrated using ground truth samples to conduct an accuracy assessment of the surveyed areas. From analysis of the high-resolution bathymetry data collected at both survey sites it was possible to calculate a series of measures that describe textural information about the lake floor. Further processing suggests that the calculated features capture a significant amount of statistical information about the lake floor terrain as well. Two sources of error, an anomalous heave and a refraction error, significantly deteriorated the quality of the processed data and the resulting validation results. Ground truth samples used to validate the classification methods at both survey sites nevertheless yielded accuracy values ranging from 5-30 percent at the Amnicon River and between 60-70 percent for the Lester River. The final results suggest that this new processing methodology does adequately capture textural information about the lake floor and does provide an acceptable classification in the absence of significant data quality issues.
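One common way to extract textural measures of the kind described is the gray-level co-occurrence matrix. The sketch below, using scikit-image on hypothetical smooth and rough patches (not the thesis data or pipeline), shows how such features separate the two surface types:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(13)
# Hypothetical 8-bit shaded-relief patches: smooth (muddy) vs rough (rocky)
smooth = (128 + rng.normal(0, 3, (64, 64))).astype(np.uint8)
rough = rng.integers(0, 256, (64, 64), dtype=np.uint8)

for name, patch in [("smooth", smooth), ("rough", rough)]:
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    feats = {p: float(graycoprops(glcm, p)[0, 0])
             for p in ("contrast", "homogeneity", "energy")}
    print(name, feats)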
ERIC Educational Resources Information Center
de Ruiter, Karen P.; Dekker, Marielle C.; Douma, Jolanda C. H.; Verhulst, Frank C.; Koot, Hans M.
2008-01-01
Background: This study described similarities and differences in the 5-year stability and change of problem behaviour between youths attending schools for children with mild to borderline (MiID) versus moderate intellectual disabilities (MoID). Methods: A two-wave multiple-birth-cohort sample of 6- to 18-year-olds was assessed twice across a 5-year…
Review of present groundwater monitoring programs at the Nevada Test Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hershey, R.L.; Gillespie, D.
1993-09-01
Groundwater monitoring at the Nevada Test Site (NTS) is conducted to detect the presence of radionuclides produced by underground nuclear testing and to verify the quality and safety of groundwater supplies as required by the State of Nevada and federal regulations, and by U.S. Department of Energy (DOE) Orders. Groundwater is monitored at water-supply wells and at other boreholes and wells not specifically designed or located for traditional groundwater monitoring objectives. Different groundwater monitoring programs at the NTS are conducted by several DOE Nevada Operations Office (DOE/NV) contractors. Presently, these individual groundwater monitoring programs have not been assessed or administered under a comprehensive planning approach. Redundancy exists among the programs in both the sampling locations and the constituents analyzed. Also, sampling for certain radionuclides is conducted more frequently than required. The purpose of this report is to review the existing NTS groundwater monitoring programs and make recommendations for modifying the programs so a coordinated, streamlined, and comprehensive monitoring effort may be achieved by DOE/NV. This review will be accomplished in several steps. These include: summarizing the present knowledge of the hydrogeology of the NTS and the potential radionuclide source areas for groundwater contamination; reviewing the existing groundwater monitoring programs at the NTS; examining the rationale for monitoring and the constituents analyzed; reviewing the analytical methods used to quantify tritium activity; discussing monitoring network design criteria; and synthesizing the information presented and making recommendations based on the synthesis. This scope of work was requested by the DOE/NV Hydrologic Resources Management Program (HRMP) and satisfies the 1993 (fiscal year) HRMP Groundwater Monitoring Program Review task.
A portable molecular-sieve-based CO2 sampling system for radiocarbon measurements
NASA Astrophysics Data System (ADS)
Palonen, V.
2015-12-01
We have developed a field-capable sampling system for the collection of CO2 samples for radiocarbon-concentration measurements. Most target systems in environmental research are limited in volume and CO2 concentration, making conventional flask sampling hard or impossible for radiocarbon studies. The present system captures the CO2 selectively to cartridges containing 13X molecular sieve material. The sampling does not introduce significant under-pressures or significant losses of moisture to the target system, making it suitable for most environmental targets. The system also incorporates a significantly larger sieve container for the removal of CO2 from chambers prior to the CO2 build-up phase and sampling. In addition, both the CO2 and H2O content of the sample gas are measured continuously. This enables in situ estimation of the amount of collected CO2 and the determination of CO2 flux to a chamber. The portable sampling system is described in detail and tests for the reliability of the method are presented.
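The in situ estimate of collected CO2 follows from integrating flow times concentration over the sampling interval under the ideal gas law. A small numeric sketch with hypothetical logger values:

import numpy as np

t = np.linspace(0.0, 1800.0, 300)             # s, a 30 min collection interval
flow = np.full_like(t, 0.5 / 60_000.0)        # 0.5 L/min expressed in m^3/s
co2_ppm = 400.0 + 200.0 * np.exp(-t / 600.0)  # chamber CO2 decaying toward ambient

P, R, T = 101_325.0, 8.314, 293.15            # Pa, J/(mol K), K
mol_per_s = flow * (co2_ppm * 1e-6) * P / (R * T)
mol = float(((mol_per_s[1:] + mol_per_s[:-1]) / 2 * np.diff(t)).sum())  # trapezoid rule
print(f"collected CO2: {mol * 1e3:.2f} mmol ({mol * 44.01:.3f} g)")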
van der Put, Robert M F; de Haan, Alex; van den IJssel, Jan G M; Hamidi, Ahd; Beurret, Michel
2015-11-27
Due to the rapidly increasing introduction of Haemophilus influenzae type b (Hib) and other conjugate vaccines worldwide during the last decade, reliable and robust analytical methods are needed for the quantitative monitoring of intermediate samples generated during fermentation (upstream processing, USP) and purification (downstream processing, DSP) of polysaccharide vaccine components. This study describes the quantitative characterization of in-process control (IPC) samples generated during the fermentation and purification of the capsular polysaccharide (CPS), polyribosyl-ribitol-phosphate (PRP), derived from Hib. Reliable quantitative methods are necessary for all stages of production; otherwise accurate process monitoring and validation is not possible. Prior to the availability of high performance anion exchange chromatography methods, this polysaccharide was predominantly quantified either with immunochemical methods, or with the colorimetric orcinol method, which shows interference from fermentation medium components and reagents used during purification. Next to an improved high performance anion exchange chromatography-pulsed amperometric detection (HPAEC-PAD) method, using a modified gradient elution, both the orcinol assay and high performance size exclusion chromatography (HPSEC) analyses were evaluated. For DSP samples, it was found that the correlation between the results obtained by HPAEC-PAD specific quantification of the PRP monomeric repeat unit released by alkaline hydrolysis, and those from the orcinol method was high (R² = 0.8762), and that it was lower between HPAEC-PAD and HPSEC results. Additionally, HPSEC analysis of USP samples yielded surprisingly comparable results to those obtained by HPAEC-PAD. In the early part of the fermentation, medium components interfered with the different types of analysis, but quantitative HPSEC data could still be obtained, although lacking the specificity of the HPAEC-PAD method. Thus, the HPAEC-PAD method has the advantage of giving a specific response compared to the orcinol assay and HPSEC, and does not show interference from various components that can be present in intermediate and purified PRP samples.
A receptor binding assay for paralytic shellfish poisoning toxins: recent advances and applications.
Powell, C L; Doucette, G J
1999-01-01
We recently described a high-throughput receptor binding assay for paralytic shellfish poisoning (PSP) toxins, the use of the assay for detecting toxic activity in shellfish and algal extracts, and the validation of 11-[3H]-tetrodotoxin as an alternative radioligand to the [3H]-saxitoxin conventionally employed in the assay. Here, we report a dramatic increase in assay efficiency through application of microplate scintillation technology, resulting in an assay turnaround time of 4 h. Efforts are now focused on demonstrating the range of applications for which this receptor assay can provide data comparable to the more time-consuming, technically demanding HPLC analysis of PSP toxins, currently the method of choice for researchers. To date, we have compared the results of both methods for a variety of sample types, including different genera of PSP-toxin-producing dinoflagellates (e.g. Alexandrium lusitanicum, r² = 0.9834, n = 12), size-fractionated field samples of Alexandrium spp. (20-64 microm; r² = 0.9997, n = 10) as well as its associated zooplankton grazer community (200-500 microm: r² = 0.6169, n = 10; >500 microm: r² = 0.5063, n = 10), and contaminated human fluids (r² = 0.9661, n = 7) from a PSP outbreak. Receptor-based STX equivalent values for all but the zooplankton samples were highly correlated and exhibited close quantitative agreement with those produced by HPLC. While the PSP receptor binding assay does not provide information on toxin composition obtainable by HPLC, it does represent a robust and reliable means of rapidly assessing PSP-like toxicity in laboratory and field samples. Moreover, this assay should be effective as a screening tool for use by public health officials in responding to suspected cases of PSP intoxication.
Quick, Joshua; Grubaugh, Nathan D; Pullan, Steven T; Claro, Ingra M; Smith, Andrew D; Gangavarapu, Karthik; Oliveira, Glenn; Robles-Sikisaka, Refugio; Rogers, Thomas F; Beutler, Nathan A; Burton, Dennis R; Lewis-Ximenez, Lia Laura; de Jesus, Jaqueline Goes; Giovanetti, Marta; Hill, Sarah C; Black, Allison; Bedford, Trevor; Carroll, Miles W; Nunes, Marcio; Alcantara, Luiz Carlos; Sabino, Ester C; Baylis, Sally A; Faria, Nuno R; Loose, Matthew; Simpson, Jared T; Pybus, Oliver G; Andersen, Kristian G; Loman, Nicholas J
2017-06-01
Genome sequencing has become a powerful tool for studying emerging infectious diseases; however, genome sequencing directly from clinical samples (i.e., without isolation and culture) remains challenging for viruses such as Zika, for which metagenomic sequencing methods may generate insufficient numbers of viral reads. Here we present a protocol for generating coding-sequence-complete genomes, comprising an online primer design tool, a novel multiplex PCR enrichment protocol, optimized library preparation methods for the portable MinION sequencer (Oxford Nanopore Technologies) and the Illumina range of instruments, and a bioinformatics pipeline for generating consensus sequences. The MinION protocol does not require an Internet connection for analysis, making it suitable for field applications with limited connectivity. Our method relies on multiplex PCR for targeted enrichment of viral genomes from samples containing as few as 50 genome copies per reaction. Viral consensus sequences can be achieved in 1-2 d by starting with clinical samples and following a simple laboratory workflow. This method has been successfully used by several groups studying Zika virus evolution and is facilitating an understanding of the spread of the virus in the Americas. The protocol can be used to sequence other viral genomes using the online Primal Scheme primer designer software. It is suitable for sequencing either RNA or DNA viruses in the field during outbreaks or as an inexpensive, convenient method for use in the lab.
Della Pelle, Flavio; González, María Cristina; Sergi, Manuel; Del Carlo, Michele; Compagnone, Dario; Escarpa, Alberto
2015-07-07
In this work, a rapid and simple gold nanoparticle (AuNP)-based colorimetric assay is combined with a new type of AuNP synthesis in organic medium that requires no sample extraction. The extraction-free AuNP synthesis approach strategically involves the use of dimethyl sulfoxide (DMSO) acting as an organic solvent for simultaneous sample analyte solubilization and AuNP stabilization. Moreover, DMSO works as a cryogenic protector, avoiding solidification at the temperatures used to stop the synthesis. In addition, the sample's endogenous fatty acids are exploited as AuNP stabilizers, avoiding the use of common surfactant AuNP stabilizers, which, in an organic/aqueous medium, give rise to the formation of undesirable emulsions. This is controlled by adding a fat-containing, analyte-free sample (sample blank). The method was exhaustively applied for the determination of total polyphenols in two selected kinds of fat-rich liquid and solid samples with high antioxidant activity and economic impact: olive oil (n = 28) and chocolate (n = 16) samples. Fatty sample absorbance is easily followed via the localized surface plasmon resonance (LSPR) absorption band at 540 nm, and quantitation is referred to gallic acid equivalents. A rigorous evaluation of the method was performed by comparison with the well and traditionally established Folin-Ciocalteu (FC) method, obtaining an excellent correlation for olive oil samples (R = 0.990, n = 28) and for chocolate samples (R = 0.905, n = 16). Additionally, it was also found that the proposed approach was selective (vs. other endogenous sample tocopherols and pigments), fast (15-20 min), cheap and simple (it does not require expensive/complex equipment), with a very limited amount of sample (30 μL) needed and a significantly lower solvent consumption (250 μL in a 500 μL total reaction volume) compared to classical methods.
Computer program for the calculation of grain size statistics by the method of moments
Sawyer, Michael B.
1977-01-01
A computer program is presented for a Hewlett-Packard Model 9830A desk-top calculator (1) which calculates statistics using weight or point count data from a grain-size analysis. The program uses the method of moments in contrast to the more commonly used but less inclusive graphic method of Folk and Ward (1957). The merits of the program are: (1) it is rapid; (2) it can accept data in either grouped or ungrouped format; (3) it allows direct comparison with grain-size data in the literature that have been calculated by the method of moments; (4) it utilizes all of the original data rather than percentiles from the cumulative curve as in the approximation technique used by the graphic method; (5) it is written in the computer language BASIC, which is easily modified and adapted to a wide variety of computers; and (6) when used in the HP-9830A, it does not require punching of data cards. The method of moments should be used only if the entire sample has been measured and the worker defines the measured grain-size range. (1) Use of brand names in this paper does not imply endorsement of these products by the U.S. Geological Survey.
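The method of moments itself is short enough to restate as working code. The following Python sketch (a modern stand-in, not the original BASIC program) computes mean, sorting, skewness, and kurtosis from grouped phi-midpoint data with hypothetical sieve weights:

import numpy as np

def moment_statistics(midpoints_phi, weights):
    # Grain-size statistics by the method of moments from grouped data;
    # weights are class weights (grams or counts), midpoints in phi units.
    w = np.asarray(weights, float) / np.sum(weights)
    m = np.asarray(midpoints_phi, float)
    mean = np.sum(w * m)
    dev = m - mean
    sd = np.sqrt(np.sum(w * dev**2))    # sorting (standard deviation)
    skew = np.sum(w * dev**3) / sd**3
    kurt = np.sum(w * dev**4) / sd**4
    return mean, sd, skew, kurt

# Example: half-phi classes with hypothetical sieve weights in grams
mids = [0.25, 0.75, 1.25, 1.75, 2.25, 2.75]
wts = [2.1, 8.4, 21.0, 30.3, 12.6, 3.5]
print(moment_statistics(mids, wts))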
UMTRA Project water sampling and analysis plan, Durango, Colorado. Revision 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-09-01
Planned, routine ground water sampling activities at the US Department of Energy (DOE) Uranium Mill Tailings Remedial Action (UMTRA) Project site in Durango, Colorado, are described in this water sampling and analysis plan. The plan identifies and justifies the sampling locations, analytical parameters, detection limits, and sampling frequency for the routine monitoring stations at the site. The ground water data are used to characterize the site ground water compliance strategies and to monitor contaminants of potential concern identified in the baseline risk assessment (DOE, 1995a). The regulatory basis for routine ground water monitoring at UMTRA Project sites is derived from the US EPA regulations in 40 CFR Part 192 (1994) and EPA standards of 1995 (60 FR 2854). Sampling procedures are guided by the UMTRA Project standard operating procedures (SOP) (JEG, n.d.), the Technical Approach Document (TAD) (DOE, 1989), and the most effective technical approach for the site.
Final voluntary release assessment/corrective action report
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-11-12
The US Department of Energy, Carlsbad Area Office (DOE-CAO) has completed a voluntary release assessment sampling program at selected Solid Waste Management Units (SWMUs) at the Waste Isolation Pilot Plant (WIPP). This Voluntary Release Assessment/Corrective Action (RA/CA) report has been prepared for final submittal to the Environmental Protection Agency (EPA) Region 6, Hazardous Waste Management Division, and the New Mexico Environment Department (NMED) Hazardous and Radioactive Materials Bureau to describe the results of voluntary release assessment sampling and proposed corrective actions at the SWMU sites. The Voluntary RA/CA Program is intended to be the first phase in implementing the Resource Conservation and Recovery Act (RCRA) Facility Investigation (RFI) and corrective action process at the WIPP. Data generated as part of this sampling program are intended to update the RCRA Facility Assessment (RFA) for the WIPP (Assessment of Solid Waste Management Units at the Waste Isolation Pilot Plant), NMED/DOE/AIP 94/1. This Final Voluntary RA/CA Report documents the results of release assessment sampling at 11 SWMUs identified in the RFA. With this submittal, DOE formally requests a No Further Action determination for these SWMUs. Additionally, this report provides information to support DOE's request for No Further Action at the Brinderson and Construction landfill SWMUs, and to support DOE's request for approval of proposed corrective actions at three other SWMUs (the Badger Unit Drill Pad, the Cotton Baby Drill Pad, and the DOE-1 Drill Pad). This information is provided to document the results of the Voluntary RA/CA activities submitted to the EPA and NMED in August 1995.
Insights from two industrial hygiene pilot e-cigarette passive vaping studies.
Maloney, John C; Thompson, Michael K; Oldham, Michael J; Stiff, Charles L; Lilly, Patrick D; Patskan, George J; Shafer, Kenneth H; Sarkar, Mohamadi A
2016-01-01
While several reports have been published using research methods to estimate exposure risk to e-cigarette vapors in nonusers, only two have directly measured indoor air concentrations from vaping using validated industrial hygiene sampling methodology. Our first study was designed to measure indoor air concentrations of nicotine, menthol, propylene glycol, glycerol, and total particulates during the use of multiple e-cigarettes in a well-characterized room over a period of time. Our second study was a repeat of the first study, and it also evaluated levels of formaldehyde. Measurements were collected using active sampling, near real-time, and direct measurement techniques. Air sampling incorporated industrial hygiene sampling methodology using analytical methods established by the National Institute of Occupational Safety and Health and the Occupational Safety and Health Administration. Active samples were collected over a 12-hr period, for 4 days. Background measurements were taken in the same room the day before and the day after vaping. Panelists (n = 185 Study 1; n = 145 Study 2) used menthol and non-menthol MarkTen prototype e-cigarettes. Vaping sessions (six, 1-hr) included 3 prototypes, with the total number of puffs ranging from 36-216 per session. Results of the active samples were below the limit of quantitation of the analytical methods. Near real-time data were below the lowest concentration on the established calibration curves. Data from this study indicate that the majority of chemical constituents sampled were below quantifiable levels. Formaldehyde was detected at consistent levels during all sampling periods. These two studies found that indoor vaping of MarkTen prototype e-cigarettes does not produce chemical constituents at levels quantifiable above background using standard industrial hygiene collection techniques and analytical methods.
[Does dark field microscopy according to Enderlein allow for cancer diagnosis? A prospective study].
El-Safadi, Samer; Tinneberg, Hans-Rudolf; von Georgi, Richard; Münstedt, Karsten; Brück, Friede
2005-06-01
Dark field microscopy according to Enderlein claims to be able to detect forthcoming or beginning cancer at an early stage through minute abnormalities in the blood. In Germany and the USA, this method is used by an increasing number of physicians and health practitioners (non-medically qualified complementary practitioners), because this easy test seems to give important information about patients' health status. Can dark field microscopy reliably detect cancer? In the course of a prospective study on iridology, blood samples were drawn for dark field microscopy in 110 patients. A health practitioner with several years of training in the field carried out the examination without prior information about the patients. Out of 12 patients with tumor metastasis confirmed by radiological methods (CT, MRI or ultrasound), 3 were correctly identified. Analysis of sensitivity (0.25), specificity (0.64), and positive (0.09) and negative (0.85) predictive values revealed unsatisfactory results. Dark field microscopy does not seem to reliably detect the presence of cancer. Clinical use of the method can therefore not be recommended until future studies are conducted.
Simulating and assessing boson sampling experiments with phase-space representations
NASA Astrophysics Data System (ADS)
Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.
2018-04-01
The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.
Boudin, Mathieu; Boeckx, Pascal; Vandenabeele, Peter; Van Strydonck, Mark
2013-09-30
Radiocarbon dating and stable isotope analyses of bone collagen, wool, hair and silk contaminated with extraneous carbon (e.g. humic substances) do not yield reliable results if these materials are pre-treated using conventional methods. A cross-flow nanofiltration method was developed that can be applied to various protein materials like collagen, hair, silk, wool and leather, and should be able to remove low- and high-molecular-weight contaminants. To avoid extraneous carbon contamination via the filter, a ceramic filter (molecular weight cut-off of 200 Da) was used. The amino acids, released by hot acid hydrolysis of the protein material, were collected in the permeate and the contaminants in the retentate (>200 Da). (14)C-dating results for various contaminated archaeological samples were compared for bulk material (pre-treated with the conventional methods) and for cross-flow nanofiltrated amino acids (permeate) originating from the same samples. Contamination and quality control of (14)C dates of bulk and permeate samples were obtained by measuring C:N ratios, fluorescence spectra, and δ(13)C and δ(15)N values of the samples. Cross-flow nanofiltration decreases the C:N ratio, which means that contaminants have been removed. Cross-flow nanofiltration clearly improved sample quality and (14)C results. It is a quick and non-labor-intensive technique and can easily be implemented in any (14)C and stable isotope laboratory for routine sample pre-treatment analyses. Copyright © 2013 John Wiley & Sons, Ltd.
Replica exchange with solute tempering: A method for sampling biological systems in explicit water
NASA Astrophysics Data System (ADS)
Liu, Pu; Kim, Byungchan; Friesner, Richard A.; Berne, B. J.
2005-09-01
An innovative replica exchange (parallel tempering) method called replica exchange with solute tempering (REST) for the efficient sampling of aqueous protein solutions is presented here. The method bypasses the poor scaling with system size of standard replica exchange and thus reduces the number of replicas (parallel processes) that must be used. This reduction is accomplished by deforming the Hamiltonian function for each replica in such a way that the acceptance probability for the exchange of replica configurations does not depend on the number of explicit water molecules in the system. For proof of concept, REST is compared with standard replica exchange for an alanine dipeptide molecule in water. The comparisons confirm that REST greatly reduces the number of CPUs required by regular replica exchange and increases the sampling efficiency. This method reduces the CPU time required for calculating thermodynamic averages and for the ab initio folding of proteins in explicit water.
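As a rough illustration of the machinery REST modifies, the sketch below shows the standard replica-exchange (parallel tempering) swap test; in REST the energies entering this test are deformed so that the water-water contribution cancels, which is what removes the dependence on the number of water molecules. The specific deformation is described in the paper and not reproduced here.

```python
# Standard replica-exchange swap criterion (a sketch, not the REST-specific
# variant): configurations of replicas i and j, at inverse temperatures
# beta_i and beta_j with energies E_i and E_j, are exchanged with
# probability min(1, exp((beta_i - beta_j) * (E_i - E_j))).
import math, random

def swap_accepted(beta_i, beta_j, energy_i, energy_j):
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    if delta >= 0:
        return True                       # always accept favorable swaps
    return random.random() < math.exp(delta)
```

In standard REM the energies above contain every water-water interaction, so the typical |delta| grows with system size and acceptance collapses; REST's deformed Hamiltonians keep delta a solute-scale quantity.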
Sasakura, D; Nakayama, K; Sakamoto, T; Chikuma, T
2015-05-01
The use of transmission near infrared spectroscopy (TNIRS) is of particular interest in the pharmaceutical industry because TNIRS does not require sample preparation and can analyze several tens of tablet samples in an hour. It has the capability to measure all relevant information from a tablet while still on the production line. However, TNIRS has a narrow spectral range and overtone vibrations often overlap. To perform content uniformity testing of tablets by TNIRS, the various properties of the tableting process need to be captured by a multivariate prediction model, such as partial least squares (PLS) regression. One issue is that typical approaches require several hundred reference samples as the basis of the calibration rather than a strategically designed set; many batches are needed to prepare the reference samples, which requires time and is not cost effective. Our group investigated the concentration dependence of the calibration model with a strategic design and consequently developed a more effective approach to the TNIRS calibration model than the existing methodology.
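As a hedged sketch of the calibration step described above (not the authors' actual model), a PLS regression can be fit to tablet spectra against reference contents; the spectra below are simulated stand-ins for real TNIRS data.

```python
# A minimal PLS calibration sketch using scikit-learn. X simulates 60
# tablet spectra over 200 wavelength channels with an embedded
# concentration signal; y is the reference content (% of label claim).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))             # simulated spectra (noise)
y = rng.uniform(90, 110, size=60)          # simulated reference contents
X += np.outer(y, np.linspace(0, 1, 200))   # embed a concentration signal

model = PLSRegression(n_components=3).fit(X, y)
print(model.predict(X[:5]))                # predicted content for 5 tablets
```

The strategic-design point in the abstract concerns how the reference set spanning y is chosen, not the regression step itself.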
Harmonic Fourier beads method for studying rare events on rugged energy surfaces.
Khavrutskii, Ilja V; Arora, Karunesh; Brooks, Charles L
2006-11-07
We present a robust, distributable method for computing minimum free energy paths of large molecular systems with rugged energy landscapes. The method, which we call harmonic Fourier beads (HFB), exploits the Fourier representation of a path in an appropriate coordinate space and proceeds iteratively by evolving a discrete set of harmonically restrained path points-beads-to generate positions for the next path. The HFB method does not require explicit knowledge of the free energy to locate the path. To compute the free energy profile along the final path we employ an umbrella sampling method in two generalized dimensions. The proposed HFB method is anticipated to aid the study of rare events in biomolecular systems. Its utility is demonstrated with an application to conformational isomerization of the alanine dipeptide in gas phase.
Hansen, Maj; Hyland, Philip; Karstoft, Karen-Inge; Vaegter, Henrik B.; Bramsen, Rikke H.; Nielsen, Anni B. S.; Armour, Cherie; Andersen, Søren B.; Høybye, Mette Terp; Larsen, Simone Kongshøj; Andersen, Tonny E.
2017-01-01
Background: Researchers and clinicians within the field of trauma have to choose between different diagnostic descriptions of posttraumatic stress disorder (PTSD) in the DSM-5 and the proposed ICD-11. Several studies support different competing models of the PTSD structure according to both diagnostic systems; however, findings show that the choice of diagnostic system can affect the estimated prevalence rates. Objectives: The present study aimed to investigate the potential impact of using a large (i.e. the DSM-5) compared to a small (i.e. the ICD-11) diagnostic description of PTSD. In other words, does the size of PTSD really matter? Methods: The aim was investigated by examining differences in diagnostic rates between the two diagnostic systems and independently examining the model fit of the competing DSM-5 and ICD-11 models of PTSD across three trauma samples: university students (N = 4213), chronic pain patients (N = 573), and military personnel (N = 118). Results: Diagnostic rates of PTSD were significantly lower according to the proposed ICD-11 criteria in the university sample, but no significant differences were found for chronic pain patients and military personnel. The proposed ICD-11 three-factor model provided the best fit of the tested ICD-11 models across all samples, whereas the DSM-5 seven-factor Hybrid model provided the best fit in the university and pain samples, and the DSM-5 six-factor Anhedonia model provided the best fit in the military sample of the tested DSM-5 models. Conclusions: The advantages and disadvantages of using a broad or narrow set of symptoms for PTSD can be debated; however, this study demonstrated that the choice of diagnostic system may influence estimated PTSD rates both qualitatively and quantitatively. Among the currently described diagnostic criteria, only the ICD-11 model reflects the configuration of symptoms satisfactorily. Thus, size does matter when assessing PTSD. PMID:29201287
Evaluation of metal content in perch of the Ob River basin
NASA Astrophysics Data System (ADS)
Osipova, N. A.; Stepanova, K. D.; Matveenko, I. A.
2015-11-01
The geochemical features of river perch in the Ob River basin have been studied (the upper and middle reaches of the Ob River and the lower reach of the Tom River). The contents of Ag, Bi, Cd, Co, Cr, Cu, Fe, Mn, Mo, Ni, Pb, Sn, W, Zn, and Hg in perch soft tissue were determined by ICP-AES and stripping voltammetry; mercury in bones was determined by atomic absorption using a PA-915+ mercury analyzer. The distribution series of absolute metal concentrations in perch soft tissue from the Ob River basin is Fe > Zn > Cu > Mn, typical of uncontaminated or slightly metal-contaminated water bodies. In the soft tissue of the studied samples, the metal content does not exceed permissible values. The mercury content in the bones of the studied samples is in the range 0.036-0.556 mg/kg. In all samples, the mercury concentration is higher in bones than in soft tissue.
Determination of 99Tc in fresh water using TRU resin by ICP-MS.
Guérin, Nicolas; Riopel, Remi; Kramer-Tremblay, Sheila; de Silva, Nimal; Cornett, Jack; Dai, Xiongxin
2017-10-02
Technetium-99 (99Tc) determination at trace level by inductively coupled plasma mass spectrometry (ICP-MS) is challenging because there is no readily available appropriate Tc isotopic tracer. A new method using Re as a recovery tracer to determine 99Tc in fresh water samples, which does not require any evaporation step, was developed. Tc(VII) and Re(VII) were pre-concentrated on a small anion exchange resin (AER) cartridge from one litre of water sample. They were then efficiently eluted from the AER using a potassium permanganate (KMnO4) solution. After the reduction of KMnO4 in 2 M sulfuric acid solution, the sample was passed through a small TRU resin cartridge. Tc(VII) and Re(VII) retained on the TRU resin were eluted using near-boiling water, which can be directly used for the ICP-MS measurement. The results for method optimisation, validation and application were reported. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
A single test for rejecting the null hypothesis in subgroups and in the overall sample.
Lin, Yunzhi; Zhou, Kefei; Ganju, Jitendra
2017-01-01
In clinical trials, some patient subgroups are likely to demonstrate larger effect sizes than other subgroups. For example, the effect size, or informally the benefit with treatment, is often greater in patients with a moderate condition of a disease than in those with a mild condition. A limitation of the usual method of analysis is that it does not incorporate this ordering of effect size by patient subgroup. We propose a test statistic which supplements the conventional test by including this information and simultaneously tests the null hypothesis in pre-specified subgroups and in the overall sample. It results in more power than the conventional test when the differences in effect sizes across subgroups are at least moderately large; otherwise it loses power. The method involves combining p-values from models fit to pre-specified subgroups and the overall sample in a manner that assigns greater weight to subgroups in which a larger effect size is expected. Results are presented for randomized trials with two and three subgroups.
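The abstract does not give the exact statistic, so the sketch below shows one standard way to combine p-values with pre-specified weights, the weighted inverse-normal (Stouffer) method; the weights and p-values are illustrative only, and the authors' statistic may differ in detail.

```python
# Weighted inverse-normal (Stouffer) combination of one-sided p-values
# from pre-specified strata (overall sample plus subgroups), assigning
# larger weight where a larger effect size is expected. A hedged sketch,
# not the exact statistic of Lin, Zhou and Ganju.
from math import sqrt
from scipy.stats import norm

def weighted_stouffer(pvalues, weights):
    """Combined one-sided p-value from per-stratum one-sided p-values."""
    z = sum(w * norm.isf(p) for p, w in zip(pvalues, weights))
    z /= sqrt(sum(w * w for w in weights))
    return norm.sf(z)

# e.g. overall sample, moderate-disease subgroup, mild-disease subgroup
print(weighted_stouffer([0.03, 0.01, 0.20], [1.0, 0.8, 0.4]))
```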
Quantitation of glycerophosphorylcholine by flow injection analysis using immobilized enzymes.
Mancini, A; Del Rosso, F; Roberti, R; Caligiana, P; Vecchini, A; Binaglia, L
1996-09-20
A method for quantitating glycerophosphorylcholine by flow injection analysis is reported in the present paper. Glycerophosphorylcholine phosphodiesterase and choline oxidase, immobilized on controlled-porosity glass beads, are packed in a small reactor inserted in a flow injection manifold. When samples containing glycerophosphorylcholine are injected, glycerophosphorylcholine is hydrolyzed into choline and sn-glycerol-3-phosphate. The free choline produced in this reaction is oxidized to betaine and hydrogen peroxide. Hydrogen peroxide is detected amperometrically. Quantitation of glycerophosphorylcholine in samples containing choline and phosphorylcholine is obtained by inserting, ahead of the reactor, a small column packed with a mixed-bed ion exchange resin. The time needed for each determination does not exceed one minute. The present method, applied to quantitate glycerophosphorylcholine in samples of seminal plasma, gave results comparable with those obtained using the standard enzymatic-spectrophotometric procedure. An alternative procedure, making use of co-immobilized glycerophosphorylcholine phosphodiesterase and glycerol-3-phosphate oxidase for quantitating glycerophosphorylcholine, glycerophosphorylethanolamine and glycerophosphorylserine, is also described.
Ambient Ionization Mass Spectrometry Measurement of Aminotransferase Activity
NASA Astrophysics Data System (ADS)
Yan, Xin; Li, Xin; Zhang, Chengsen; Xu, Yang; Cooks, R. Graham
2017-06-01
A change in enzyme activity has been used as a clinical biomarker for diagnosis and is useful in evaluating patient prognosis. Current laboratory measurements of enzyme activity involve multi-step derivatization of the reaction products followed by quantitative analysis of these derivatives. This study simplified the reaction systems by using only the target enzymatic reaction and directly detecting its product. A protocol using paper spray mass spectrometry for identifying and quantifying the reaction product has been developed. Evaluation of the activity of aspartate aminotransferase (AST) was chosen as a proof-of-principle. The volume of sample needed is greatly reduced compared with the traditional method. Paper spray has a desalting effect that avoids sprayer clogging problems seen when examining serum samples by nanoESI. This very simple method does not require sample pretreatment and additional derivatization reactions, yet it gives high quality kinetic data, excellent limits of detection (60 ppb from serum), and coefficients of variation <10% in quantitation.
Maw, Min Thein; Yamaguchi, Tsuyoshi; Kasanga, Christopher J; Terasaki, Kaori; Fukushi, Hideto
2006-12-01
A practical sampling method using ordinary paper for the molecular diagnosis of infectious bursal disease (IBD) from bursal tissue was established. IBD virus-infected bursa was smeared directly onto chromatography paper, filter paper, or stationery copy paper and was then fixed with absolute ethanol, Tris-HCl-saturated phenol, or phenol:chloroform:isoamyl alcohol (25:24:1). A Flinders Technology Associates (FTA) card, which is designed for the collection of biological samples for molecular detection, was also used. After storage at 37°C for up to 30 days, total RNA directly extracted from the tissue fixed on the papers and the FTA card was subjected to reverse transcriptase-polymerase chain reaction (RT-PCR) for detection of IBD virus (IBDV) RNA. In addition, the ability of each fixation chemical and of the FTA card to inactivate IBDV was evaluated. Regardless of paper quality, storage period, and fixation method, IBDV RNA was consistently detected in all of the samples. IBDV in the bursal tissue was inactivated by phenol but not by ethanol or the unknown chemicals in the FTA card. These results show that ordinary papers sustain the viral RNA as well as the FTA card does, but phenol fixation is superior to the FTA card in inactivating IBDV. The new sampling method using ordinary paper with phenol fixation is safe, inexpensive, simple, and easy, and is thus suitable for conducting a global survey of IBD even where laboratory resources are limited. This practical method should contribute to the control of IBD worldwide.
A CHARACTERIZATION AND EVALUATION OF COAL LIQUEFACTION PROCESS STREAMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
G.A. Robbins; R.A. Winschel; S.D. Brandes
This is the first Annual Technical Report of activities under DOE Contract No. DE-AC22-94PC93054. Activities from the first three quarters of fiscal year 1998 were reported previously as Quarterly Technical Progress Reports (DOE/PC93054-57, DOE/PC93054-61, and DOE/PC93054-66). Activities for the period July 1 through September 30, 1998, are reported here. This report describes CONSOL's characterization of process-derived samples obtained from HTI Run PB-08. These samples were derived from operations with Black Thunder Mine Wyoming subbituminous coal, simulated mixed waste plastics, and pyrolysis oils derived from waste plastics and waste tires. Characteristics of the PB-08 samples were compared to ascertain the effects of feed composition changes. A comparison was also made to samples from a previous test (Run PB-06) made in the same processing unit with Black Thunder Mine coal and, in one run condition, with co-fed mixed plastics.
Liu, Xiaowen; Pervez, Hira; Andersen, Lars W; Uber, Amy; Montissol, Sophia; Patel, Parth; Donnino, Michael W
2015-01-01
Background: Pyruvate dehydrogenase (PDH) activity is altered in many human disorders. Current methods require tissue samples and yield inconsistent results. We describe a modified method for measuring PDH activity from isolated human peripheral blood mononuclear cells (PBMCs). Results/Methodology: We found that PDH activity and quantity can be successfully measured in human PBMCs. Freeze-thaw cycles cannot efficiently disrupt the mitochondrial membrane. Processing time of up to 20 h does not affect PDH activity with proteinase inhibitor addition, and a detergent concentration of 3.3% showed maximum yield. Sample protein concentration is correlated to PDH activity and quantity in human PBMCs from healthy subjects. Conclusion: Measuring PDH activity from PBMCs is a novel, easy and less invasive way to further understand the role of PDH in human disease. PMID:25826140
The effects of liquid-phase oxidation of multiwall carbon nanotubes on their surface characteristics
NASA Astrophysics Data System (ADS)
Burmistrov, I. N.; Muratov, D. S.; Ilinykh, I. A.; Kolesnikov, E. A.; Godymchuk, A. Yu; Kuznetsov, D. V.
2016-01-01
The development of new sorbents based on nanostructured carbon materials has recently become a promising field of research. The main aim of the current study is to investigate the effect of different regimes of the multiwall carbon nanotube (MWCNT) surface modification process on their structural characteristics. MWCNT samples were treated with nitric acid at high temperature. Structural properties were studied using low-temperature nitrogen adsorption and acid-base back titration methods. The study showed that diluted nitric acid does not affect the MWCNT structure, whereas treatment with concentrated nitric acid leads to the formation of 2.8 carboxylic groups per 1 nm² of the sample surface.
Erro, Javier; Zamarreño, Angel M; Yvin, Jean-Claude; Garcia-Mina, Jose M
2009-05-27
This article describes a fast and simple methodology for the extraction and determination of organic acids in tissues and root exudates of maize, lupin, and chickpea by LC/MS/MS. Its main advantage is that it does not require sample prepurification before HPLC analysis or sample derivatization to improve sensitivity. The results obtained showed good precision and accuracy, a recovery close to 100%, and no significant matrix effect. Moreover, the sensitivity of the method is in general better than that of previously described methodologies, with detection limits between 15 and 900 pg injected.
Lessing, P.; Messina, C.P.; Fonner, R.F.
1983-01-01
Landslide risk can be assessed by evaluating geological conditions associated with past events. A sample of 2,416 slides from urban areas in West Virginia, each with 12 associated geological factors, has been analyzed using SAS computer methods. In addition, selected data have been normalized to account for the areal distribution of rock formations, soil series, and slope percents. Final calculations yield landslide risk assessments, with values of 1.50 or greater indicating high risk. The simplicity of the method provides for a rapid, initial assessment prior to financial investment. However, it does not replace on-site investigations, nor excuse poor construction. © 1983 Springer-Verlag New York Inc.
Application of Biased Metropolis Algorithms: From protons to proteins
Bazavov, Alexei; Berg, Bernd A.; Zhou, Huan-Xiang
2015-01-01
We show that sampling with a biased Metropolis scheme is essentially equivalent to using the heatbath algorithm. However, the biased Metropolis method can also be applied when an efficient heatbath algorithm does not exist. This is first illustrated with an example from high energy physics (lattice gauge theory simulations). We then illustrate the Rugged Metropolis method, which is based on a similar biased updating scheme, but aims at very different applications. The goal of such applications is to locate the most likely configurations in a rugged free energy landscape, which is most relevant for simulations of biomolecules. PMID:26612967
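As a toy illustration of the two updating schemes being compared, the sketch below applies both to a single discrete variable with conditional weights pi; the proposal table q is a cheap approximation of pi, and as q approaches pi the biased Metropolis acceptance rate approaches one, which is the sense in which it matches heatbath.

```python
# Heatbath vs. biased Metropolis for one discrete variable. pi holds the
# (unnormalized) conditional weights of each state; q is an approximate
# table used as the biased proposal. A schematic sketch, not the lattice
# gauge theory or biomolecular implementations of the paper.
import random

def heatbath(pi):
    """Draw the new state directly from the conditional distribution."""
    return random.choices(range(len(pi)), weights=pi)[0]

def biased_metropolis(x, pi, q):
    """Propose from the approximation q, then correct with a
    Metropolis-Hastings acceptance ratio so pi is sampled exactly."""
    y = random.choices(range(len(q)), weights=q)[0]
    accept = min(1.0, (pi[y] * q[x]) / (pi[x] * q[y]))
    return y if random.random() < accept else x
```

When no exact heatbath draw is available (the case the abstract highlights), only the second routine is usable, and its efficiency depends on how well q tracks pi.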
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the number of samples required to achieve a specified confidence in characterization and clearance decisions, and the confidence in making characterization and clearance decisions for a specified number of samples, for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are (1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and (2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: (1) qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0; (2) qualitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0; (3) quantitative data (e.g., contaminant concentrations expressed as CFU/cm²) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0; and (4) quantitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0. For Situation 2, the hotspot sampling approach provides for stating with Z% confidence that a hotspot of specified shape and size with detectable contamination will be found. Also for Situation 2, the CJR approach provides for stating with X% confidence that at least Y% of the decision area does not contain detectable contamination. Forms of these statements for the other three situations are discussed in Section 2.2. Statistical methods that account for FNR > 0 currently exist only for the hotspot sampling approach with qualitative data (or quantitative data converted to qualitative data). This report documents the current status of methods and formulas for the hotspot and CJR sampling approaches. Limitations of these methods are identified. Extensions of the methods that are applicable when FNR = 0 to account for FNR > 0, or to address other limitations, will be documented in future revisions of this report if future funding supports the development of such extensions. For quantitative data, this report also presents statistical methods and formulas for (1) quantifying the uncertainty in measured sample results, (2) estimating the true surface concentration corresponding to a surface sample, and (3) quantifying the uncertainty of the estimate of the true surface concentration. All of the methods and formulas discussed in the report were applied to example situations to illustrate application of the methods and interpretation of the results.
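For the simplest case of the clearance statement quoted above (random samples only, all negative, FNR = 0), the confidence calculation reduces to a one-line formula; the sketch below covers this simplified case only, not the full CJR method, which also credits judgment samples and handles FNR > 0.

```python
# With n random negative samples and FNR = 0, the confidence C that at
# least a fraction Y of the decision area is uncontaminated satisfies
# C = 1 - Y**n, so the required sample count is n = ln(1 - C) / ln(Y).
# A simplified sketch of the idea, not the report's full CJR formulas.
import math

def n_random_samples(confidence, fraction_clean):
    return math.ceil(math.log(1.0 - confidence) / math.log(fraction_clean))

print(n_random_samples(0.95, 0.99))   # 299 samples for 95% / 99%
```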
Certification in Structural Health Monitoring Systems
2011-09-01
validation [3,8]. This may be accomplished by computing the sum of squares of pure error (SSPE) and its associated squared correlation [3,8]. To compute these values, a cross-validation sample must be established. In general, if the SSPE is high, the model does not predict well on independent data … plethora of cross-validation methods, some of which are more useful for certain models than others [3,8]. When possible, a disclosure of the SSPE …
Comparison of AlGaAs Oxidation in MBE and MOCVD Grown Samples
2002-01-01
vertical cavity surface emitting lasers (VCSELs) [1, 2, 3]. They are also being … molecular beam epitaxy (MBE) [5, 6] or metal organic chemical vapor deposition (MOCVD) [7, 8]. The MBE-grown AlGaAs layers are sometimes pseudo or digital … Simultaneous wet-thermal oxidation of MBE and MOCVD grown AlxGa1-xAs layers (x = 0.1 to 1.0) showed that the epitaxial growth method does not …
NASA Astrophysics Data System (ADS)
Huijse, Pablo; Estévez, Pablo A.; Förster, Francisco; Daniel, Scott F.; Connolly, Andrew J.; Protopapas, Pavlos; Carrasco, Rodrigo; Príncipe, José C.
2018-05-01
The Large Synoptic Survey Telescope (LSST) will produce an unprecedented amount of light curves using six optical bands. Robust and efficient methods that can aggregate data from multidimensional sparsely sampled time-series are needed. In this paper we present a new method for light curve period estimation based on quadratic mutual information (QMI). The proposed method does not assume a particular model for the light curve nor its underlying probability density and it is robust to non-Gaussian noise and outliers. By combining the QMI from several bands the true period can be estimated even when no single-band QMI yields the period. Period recovery performance as a function of average magnitude and sample size is measured using 30,000 synthetic multiband light curves of RR Lyrae and Cepheid variables generated by the LSST Operations and Catalog simulators. The results show that aggregating information from several bands is highly beneficial in LSST sparsely sampled time-series, obtaining an absolute increase in period recovery rate up to 50%. We also show that the QMI is more robust to noise and light curve length (sample size) than the multiband generalizations of the Lomb–Scargle and AoV periodograms, recovering the true period in 10%–30% more cases than its competitors. A python package containing efficient Cython implementations of the QMI and other methods is provided.
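As a hedged sketch of the multiband aggregation idea only (Lomb-Scargle power is used as a stand-in statistic, since the paper's QMI estimator is not reproduced here), per-band scores on a common frequency grid can be normalized and summed so that bands too sparse to yield the period alone still contribute:

```python
# Combine per-band period scores on a shared frequency grid. Stand-in
# statistic: astropy's Lomb-Scargle power; the paper uses quadratic
# mutual information instead, with the same aggregation principle.
import numpy as np
from astropy.timeseries import LombScargle

def multiband_period(bands, freq):
    """bands: iterable of (times, magnitudes) arrays, one pair per band;
    freq: array of trial frequencies (1 / time units)."""
    score = np.zeros_like(freq)
    for t, mag in bands:
        power = LombScargle(t, mag).power(freq)
        score += power / power.max()       # normalize each band's weight
    return 1.0 / freq[np.argmax(score)]    # best period across all bands
```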
Jiang, Wenqing; Chen, Xiaochu; Liu, Fengmao; You, Xiangwei; Xue, Jiaying
2014-11-01
A novel effervescence-assisted dispersive liquid-liquid microextraction method has been developed for the determination of four fungicides in apple juice samples. In this method, a solid effervescent agent is added into samples to assist the dispersion of extraction solvent. The effervescent agent is environmentally friendly and only produces an increase in the ionic strength and a negligible variation in the pH value of the aqueous sample, which does not interfere with the extraction of the analytes. The parameters affecting the extraction efficiency were investigated including the composition of effervescent agent, effervescent agent amount, formulation of effervescent agent, adding mode of effervescent agent, type and volume of extraction solvent, and pH. Under optimized conditions, the method showed a good linearity within the range of 0.05-2 mg/L for pyrimethanil, fludioxonil, and cyprodinil, and 0.1-4 mg/L for kresoxim-methyl, with the correlation coefficients >0.998. The limits of detection for the method ranged between 0.005 and 0.01 mg/L. The recoveries of the target fungicides in apple juice samples were in the range of 72.4-110.8% with the relative standard deviations ranging from 1.2 to 6.8%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Puzon, Geoffrey J; Lancaster, James A; Wylie, Jason T; Plumb, Iason J
2009-09-01
Rapid detection of pathogenic Naegleria fowleri in water distribution networks is critical for water utilities. Current detection methods rely on sampling drinking water followed by culturing and molecular identification of purified strains. This culture-based method takes an extended amount of time (days), detects both nonpathogenic and pathogenic species, and does not account for N. fowleri cells associated with pipe-wall biofilms. In this study, a total DNA extraction technique coupled with a real-time PCR method using primers specific for N. fowleri was developed and validated. The method readily detected N. fowleri without preculturing; the lowest detection limit for N. fowleri cells spiked into biofilm was one cell (66% detection rate) and five cells (100% detection rate). For drinking water, the detection limit was five cells (66% detection rate) and 10 cells (100% detection rate). By comparison, culture-based methods were less sensitive for detection of cells spiked into both biofilm (66% detection for <10 cells) and drinking water (0% detection for <10 cells). In mixed cultures of N. fowleri and nonpathogenic Naegleria, the method identified N. fowleri in 100% of all replicates, whereas tests with the current consensus primers detected N. fowleri in only 5% of all replicates. Application of the new method to drinking water and pipe-wall biofilm samples obtained from a distribution network enabled the detection of N. fowleri in under 6 h, versus 3+ days for the culture-based method. Further, comparison of the real-time PCR data from the field samples with the standard curves enabled an approximation of N. fowleri cell numbers in the biofilm and drinking water. The use of such a method will further aid water utilities in detecting and managing the persistence of N. fowleri in water distribution networks.
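As a hedged sketch of the final quantification step, a linear standard curve can be fitted to Ct values from known cell spikes and inverted for field samples; the spike data below are illustrative placeholders, not the study's values.

```python
# Fit a qPCR standard curve Ct = m * log10(N) + b from spiked cell
# counts, then invert it to approximate cell numbers in field samples.
# The Ct values and spike levels here are hypothetical.
import numpy as np

known_cells = np.array([1, 5, 10, 100, 1000])       # spiked cell counts
ct = np.array([37.1, 34.9, 33.8, 30.5, 27.2])        # hypothetical Ct values

m, b = np.polyfit(np.log10(known_cells), ct, 1)      # slope and intercept

def estimate_cells(ct_sample):
    return 10 ** ((ct_sample - b) / m)

print(estimate_cells(32.0))   # approximate cell count for a field sample
```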
Serotyping of Streptococcus pneumoniae Based on Capsular Genes Polymorphisms
Raymond, Frédéric; Boucher, Nancy; Allary, Robin; Robitaille, Lynda; Lefebvre, Brigitte; Tremblay, Cécile
2013-01-01
Streptococcus pneumoniae serotype epidemiology is essential since serotype replacement is a concern when introducing new polysaccharide-conjugate vaccines. A novel PCR-based automated microarray assay was developed to assist in the tracking of the serotypes. Autolysin, pneumolysin and eight genes located in the capsular operon were amplified using multiplex PCR. This step was followed by a tagged fluorescent primer extension step targeting serotype-specific polymorphisms. The tagged primers were then hybridized to a microarray. Results were exported to an expert system to identify capsular serotypes. The assay was validated on 166 cultured S. pneumoniae samples from 63 different serotypes as determined by the Quellung method. We show that typing only 12 polymorphisms located in the capsular operon allows the identification at the serotype level of 22 serotypes and the assignation of 24 other serotypes to a subgroup of serotypes. Overall, 126 samples (75.9%) were correctly serotyped, 14 were assigned to a member of the same serogroup, 8 rare serotypes were erroneously serotyped, and 18 gave negative serotyping results. Most of the discrepancies involved rare serotypes or serotypes that are difficult to discriminate using a DNA-based approach, for example 6A and 6B. The assay was also tested on clinical specimens including 43 cerebrospinal fluid samples from patients with meningitis and 59 nasopharyngeal aspirates from bacterial pneumonia patients. Overall, 89% of specimens positive for pneumolysin were serotyped, demonstrating that this method does not require culture to serotype clinical specimens. The assay showed no cross-reactivity for 24 relevant bacterial species found in these types of samples. The limit of detection for serotyping and S. pneumoniae detection was 100 genome equivalent per reaction. This automated assay is amenable to clinical testing and does not require any culturing of the samples. The assay will be useful for the evaluation of serotype prevalence changes after new conjugate vaccines introduction. PMID:24086706
NASA Astrophysics Data System (ADS)
Shafer, J. M.; Varljen, M. D.
1990-08-01
A fundamental requirement for geostatistical analyses of spatially correlated environmental data is the estimation of the sample semivariogram to characterize spatial correlation. Selecting an underlying theoretical semivariogram based on the sample semivariogram is an extremely important and difficult task that is subject to a great deal of uncertainty. Current standard practice does not involve consideration of the confidence associated with semivariogram estimates, largely because classical statistical theory does not provide the capability to construct confidence limits from single realizations of correlated data, and multiple realizations of environmental fields are not found in nature. The jackknife method is a nonparametric statistical technique for parameter estimation that may be used to estimate the semivariogram. When used in connection with standard confidence procedures, it allows for the calculation of closely approximate confidence limits on the semivariogram from single realizations of spatially correlated data. The accuracy and validity of this technique were verified using a Monte Carlo simulation approach, which enabled confidence limits about the semivariogram estimate to be calculated from many synthetically generated realizations of a random field with a known correlation structure. The synthetically derived confidence limits were then compared to jackknife estimates from single realizations with favorable results. Finally, the methodology for applying the jackknife method to a real-world problem and an example of the utility of semivariogram confidence limits were demonstrated by constructing confidence limits on seasonal sample variograms of nitrate-nitrogen concentrations in shallow groundwater in an approximately 12-mi² (~30 km²) region in northern Illinois. In this application, the confidence limits on sample semivariograms from different time periods were used to evaluate the significance of temporal change in spatial correlation. This capability is quite important, as it can indicate when a spatially optimized monitoring network would need to be reevaluated, and thus lead to more robust monitoring strategies.
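As a hedged sketch of the delete-one jackknife for a single semivariogram lag bin (the real analysis repeats this over all lag bins and seasons), pseudo-values yield an approximate t-based confidence interval:

```python
# Jackknife confidence limits for one lag bin of the classical
# semivariogram estimator. coords is an (n, 2) array of locations,
# values an (n,) array of observations. A simplified sketch of the
# approach, not the paper's exact implementation.
import numpy as np
from scipy.stats import t

def semivariogram_bin(coords, values, h_lo, h_hi):
    n = len(values)
    i, j = np.triu_indices(n, k=1)                   # all distinct pairs
    d = np.linalg.norm(coords[i] - coords[j], axis=1)
    mask = (d >= h_lo) & (d < h_hi)                  # pairs in this lag bin
    return 0.5 * np.mean((values[i[mask]] - values[j[mask]]) ** 2)

def jackknife_ci(coords, values, h_lo, h_hi, alpha=0.05):
    n = len(values)
    full = semivariogram_bin(coords, values, h_lo, h_hi)
    loo = np.array([semivariogram_bin(np.delete(coords, k, 0),
                                      np.delete(values, k), h_lo, h_hi)
                    for k in range(n)])              # leave-one-out estimates
    pseudo = n * full - (n - 1) * loo                # jackknife pseudo-values
    est, se = pseudo.mean(), pseudo.std(ddof=1) / np.sqrt(n)
    half = t.ppf(1 - alpha / 2, n - 1) * se
    return est - half, est + half
```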
Realistic sampling of amino acid geometries for a multipolar polarizable force field
Hughes, Timothy J.; Cardamone, Salvatore
2015-01-01
The Quantum Chemical Topological Force Field (QCTFF) uses the machine learning method kriging to map atomic multipole moments to the coordinates of all atoms in the molecular system. It is important that kriging operates on relevant and realistic training sets of molecular geometries. Therefore, we sampled single amino acid geometries directly from protein crystal structures stored in the Protein Databank (PDB). This sampling enhances the conformational realism (in terms of dihedral angles) of the training geometries. However, these geometries can be fraught with inaccurate bond lengths and valence angles due to artefacts of the refinement process of the X‐ray diffraction patterns, combined with experimentally invisible hydrogen atoms. This is why we developed a hybrid PDB/nonstationary normal modes (NM) sampling approach called PDB/NM. This method is superior over standard NM sampling, which captures only geometries optimized from the stationary points of single amino acids in the gas phase. Indeed, PDB/NM combines the sampling of relevant dihedral angles with chemically correct local geometries. Geometries sampled using PDB/NM were used to build kriging models for alanine and lysine, and their prediction accuracy was compared to models built from geometries sampled from three other sampling approaches. Bond length variation, as opposed to variation in dihedral angles, puts pressure on prediction accuracy, potentially lowering it. Hence, the larger coverage of dihedral angles of the PDB/NM method does not deteriorate the predictive accuracy of kriging models, compared to the NM sampling around local energetic minima used so far in the development of QCTFF. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26235784
Baker, Duncan G L; Eddy, Tyler D; McIver, Reba; Schmidt, Allison L; Thériault, Marie-Hélène; Boudreau, Monica; Courtenay, Simon C; Lotze, Heike K
2016-01-01
Coastal ecosystems are among the most productive yet increasingly threatened marine ecosystems worldwide. Vegetated habitats in particular, such as eelgrass (Zostera marina) beds, play important roles in providing key spawning, nursery and foraging habitats for a wide range of fauna. To properly assess changes in coastal ecosystems and manage these critical habitats, it is essential to develop sound monitoring programs for foundation species and associated assemblages. Several survey methods exist, so understanding how different methods perform is important for survey selection. We compared two common methods for surveying macrofaunal assemblages: beach seine netting and underwater visual census (UVC). We also tested whether assemblages in shallow nearshore habitats commonly sampled by beach seines are similar to those of nearby eelgrass beds often sampled by UVC. Among five estuaries along the Southern Gulf of St. Lawrence, Canada, our results suggest that the two survey methods yield comparable results for species richness, diversity and evenness, yet beach seines yield significantly higher abundance and different species composition. However, sampling nearshore assemblages does not represent those in eelgrass beds, despite considerable overlap and close proximity. These results have important implications for how and where macrofaunal assemblages are monitored in coastal ecosystems. Ideally, multiple survey methods and locations should be combined to complement each other in assessing the entire assemblage and the full range of changes in coastal ecosystems, thereby better informing coastal zone management.
Elokely, Khaled M; Eldawy, Mohamed A; Elkersh, Mohamed A; El-Moselhy, Tarek F
2011-01-01
A simple spectrofluorometric method has been developed, adapted, and validated for the quantitative estimation of drugs containing α-methylene sulfone/sulfonamide functional groups using N(1)-methylnicotinamide chloride (NMNCl) as fluorogenic agent. The proposed method has been applied successfully to the determination of methyl sulfonyl methane (MSM) (1), tinidazole (2), rofecoxib (3), and nimesulide (4) in pure forms, laboratory-prepared mixtures, pharmaceutical dosage forms, spiked human plasma samples, and in volunteer's blood. The method showed linearity over concentration ranging from 1 to 150 μg/mL, 10 to 1000 ng/mL, 1 to 1800 ng/mL, and 30 to 2100 ng/mL for standard solutions of 1, 2, 3, and 4, respectively, and over concentration ranging from 5 to 150 μg/mL, 10 to 1000 ng/mL, 10 to 1700 ng/mL, and 30 to 2350 ng/mL in spiked human plasma samples of 1, 2, 3, and 4, respectively. The method showed good accuracy, specificity, and precision in both laboratory-prepared mixtures and in spiked human plasma samples. The proposed method is simple, does not need sophisticated instruments, and is suitable for quality control application, bioavailability, and bioequivalency studies. Besides, its detection limits are comparable to other sophisticated chromatographic methods.
Esteves, Lorena C R; Oliveira, Thaís R O; Souza, Elias C; Bomfeti, Cleide A; Gonçalves, Andrea M; Oliveira, Luiz C A; Barbosa, Fernando; Pereira, Márcio C; Rodrigues, Jairo L
2015-04-01
An easy, fast and environment-friendly method for COD determination in water is proposed. The procedure is based on the oxidation of organic matter by the H2O2/Fe(3-x)Co(x)O4 system. The Fe(3-x)Co(x)O4 nanoparticles activate the H2O2 molecule to produce hydroxyl radicals, which are highly reactive for oxidizing organic matter in an aqueous medium. After the oxidation step, the organic matter amounts can be quantified by comparing the quantity of H2O2 consumed. Moreover, the proposed COD method has several distinct advantages, since it does not use toxic reagents and the oxidation reaction of organic matter is conducted at room temperature and atmospheric pressure. Method detection limit is 2.0 mg L(-1) with intra- and inter-day precision lower than 1% (n=5). The calibration graph is linear in the range of 2.0-50 mg L(-1) with a sample throughput of 25 samples h(-1). Data are validated based on the analysis of six contaminated river water samples by the proposed method and by using a comparative method validated and marketed by Merck, with good agreement between the results (t test, 95%). Copyright © 2014 Elsevier B.V. All rights reserved.
Evaluation of digestion methods for analysis of trace metals in mammalian tissues and NIST 1577c.
Binder, Grace A; Metcalf, Rainer; Atlas, Zachary; Daniel, Kenyon G
2018-02-15
Digestion techniques for ICP analysis have been poorly studied for biological samples. This report describes an optimized method for analysis of trace metals that can be used across a variety of sample types. Digestion methods were tested and optimized with the analysis of trace metals in cancerous as compared to normal tissue as the end goal. Anthropological, forensic, oncological and environmental research groups can employ this method reasonably cheaply and safely whilst still being able to compare between laboratories. We examined combined HNO3 and H2O2 digestion at 170 °C for human, porcine and bovine samples, whether they are frozen, fresh or lyophilized powder. Little discrepancy is found between microwave digestion and PFA Teflon pressure vessels. The elements of interest (Cu, Zn, Fe and Ni) yielded consistently higher and more accurate values on standard reference material than samples heated to 75 °C or samples that utilized HNO3 alone. Use of H2SO4 does not improve homogeneity of the sample and lowers precision during ICP analysis. High-temperature digestions (>165 °C) using a combination of HNO3 and H2O2 as outlined are proposed as a standard technique for all mammalian tissues, specifically human tissues, and yield greater than 300% higher values than samples digested at 75 °C regardless of the acid or acid combinations used. The proposed standardized technique is designed to accurately quantify potential discrepancies in metal loads between cancerous and healthy tissues and applies to numerous tissue studies requiring quick, effective and safe digestions. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Tanc, Beril; Kaya, Mustafa; Gumus, Lokman; Kumral, Mustafa
2016-04-01
X-ray fluorescence (XRF) spectrometry is widely used for quantitative and semi-quantitative analysis of many major, minor and trace elements in geological samples. Advantages of the XRF method include non-destructive sample preparation; applicability to powder, solid, paste and liquid samples; and a simple spectrum that is independent of chemical state. On the other hand, the XRF method has some disadvantages, such as poor sensitivity for low-atomic-number elements, matrix effects (physical matrix effects, such as fine versus coarse grain materials, may impact XRF performance) and interference effects (the spectral lines of elements may overlap, distorting results for one or more elements). Spectral interferences in particular are a very significant factor for accurate results. In this study, manganese(II) oxide (MnO, 99.99%) was analyzed semi-quantitatively. Samples were pelleted and analyzed by XRF spectrometry (Bruker S8 Tiger). Unexpected peaks were obtained beside the major Mn peaks: although the sample does not contain Eu, 0.3% Eu2O3 was reported in the results. This can arise from the high concentration of MnO and the proximity of the Mn and Eu spectral lines. It can be eliminated by using a correction equation, or the Mn concentration can be confirmed with other methods (such as atomic absorption spectroscopy). Keywords: Spectral Interferences; Manganese (Mn); Europium (Eu); X-Ray Fluorescence Spectrometry Spectrum.
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study's projected scientific and/or practical value to its total cost. By showing that a study's projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
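A minimal numeric sketch of the two rules follows, assuming a hypothetical total-cost function; with purely linear costs the average cost per subject has no interior minimum, so the example uses a cost curve whose marginal cost rises at large n (e.g., extra recruiting sites).

```python
# Two cost-efficiency rules for sample size: (1) minimize average cost
# per subject, cost(n)/n; (2) minimize total cost over sqrt(n),
# cost(n)/sqrt(n). The cost function below is an illustrative
# placeholder, not from the paper.
import numpy as np

def cost(n):                                 # hypothetical total cost ($1000s)
    return 100 + 2.0 * n + 0.01 * n ** 2

n = np.arange(10, 1001)
rule1 = n[np.argmin(cost(n) / n)]            # min average cost per subject
rule2 = n[np.argmin(cost(n) / np.sqrt(n))]   # min total cost / sqrt(n)
print(rule1, rule2)
```

For a purely linear cost F + c*n, rule (2) has the closed-form minimum n = F/c, which is one way to see why the rule is easy to implement from reliable inputs.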
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washington, Aaron L.; Narrows, William; Christian, Jonathan H.
During decommissioning and demolition (D&D) activities at SRS, it is important that buildings be screened for radionuclides and heavy metals to ensure that the proper safety and disposal metrics are in place. A major source of contamination at DOE facilities is accumulated mercury from nuclear material processing and the Liquid Waste System (LWS). This buildup of mercury could harm a demolition crew or the environment should the material be released. The current standard method is to take core samples in various places in the facility and use X-ray fluorescence (XRF) to detect the contamination. This standard method comes at a high financial cost due to the security levels of these facilities with unknown contamination levels. Herein we propose the use of portable XRF units to detect this contamination on-site. To validate this method, the instrument has to be calibrated to detect the heavy metal contamination; be precise with respect to the known elemental concentrations and consistent in its results for a sample concrete and a pristine contaminant; and be able to detect changes in the sample concrete's composition. After the compositions of the various concrete samples were determined by a wavelength-dispersive XRF method, the calibration factors' linear regressions were adjusted to give the baseline concentration of concrete with no contamination. Samples of both concrete and concrete/flyash were evaluated; their standard deviations showed that the measurements were consistent with the known compositions. Finally, the samples were contaminated with different concentrations of sodium tungstate dihydrate, allowed to air dry, and measured. When the contaminated samples were analyzed, the heavy metal contamination was visible within the spectrum of the instrument, but no quantitative trend with the concentration of the solution was observed.
NASA Astrophysics Data System (ADS)
Mota, Mariana F. B.; Gama, Ednilton M.; Rodrigues, Gabrielle de C.; Rodrigues, Guilherme D.; Nascentes, Clésia C.; Costa, Letícia M.
2018-01-01
In this work, a dilute-and-shoot method was developed for Ca, P, S and Zn determination in new and used lubricating oil samples by total reflection X-ray fluorescence (TXRF). The oil samples were diluted with organic solvents followed by addition of yttrium as internal standard, and the TXRF measurements were performed after solvent evaporation. The method was optimized using an interlaboratory reference material. The experimental parameters evaluated were sample volume (50 or 100 μL), measurement time (250 or 500 s) and volume deposited on the quartz glass sample carrier (5 or 10 μL). All of them were evaluated and optimized using xylene, kerosene and hexane. Analytical figures of merit (accuracy, precision, limits of detection and quantification) were used to evaluate the performance of the analytical method for all solvents. The recovery rates varied from 99 to 111% and the relative standard deviation remained between 1.7% and 10% (n = 8). For all elements, the results obtained by applying the new method were in agreement with the certified value. After the validation step, the method was applied for Ca, P, S and Zn quantification in eight new and four used lubricating oil samples, for all solvents. The concentrations of the elements in the samples varied in the ranges of 1620-3711 mg L-1 for Ca, 704-1277 mg L-1 for P, 2027-9147 mg L-1 for S, and 898-1593 mg L-1 for Zn. The association of TXRF with a dilute-and-shoot sample preparation strategy was efficient for Ca, P, S and Zn determination in lubricating oils, presenting accurate results. Additionally, the time required for analysis is short, the reagent volumes are low, minimizing waste generation, and the technique does not require calibration curves.
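For readers unfamiliar with internal-standard quantification as typically practised in TXRF, a minimal sketch follows; the count numbers and relative sensitivities are placeholders, and the function is our illustration of the general principle rather than the authors' procedure.

```python
# Sketch of internal-standard quantification as commonly used in TXRF,
# assuming yttrium is spiked at a known concentration. All numbers invented.
def txrf_concentration(net_counts, net_counts_is, conc_is, sens, sens_is):
    """Analyte concentration from the fluorescence count ratio to the
    internal standard: C_x = C_IS * (N_x / N_IS) * (S_IS / S_x),
    where S is the element's relative sensitivity."""
    return conc_is * (net_counts / net_counts_is) * (sens_is / sens)

# Example: a Zn peak with 1.2e5 counts vs 8.0e4 counts for Y at 10 mg/L
print(txrf_concentration(1.2e5, 8.0e4, 10.0, sens=1.1, sens_is=1.0))
```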
Subaperture correlation based digital adaptive optics for full field optical coherence tomography.
Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A
2013-05-06
This paper proposes a sub-aperture correlation based numerical phase correction method for interferometric full field imaging systems, provided the complex object field information can be extracted. The method corrects for wavefront aberration at the pupil/Fourier transform plane without the need for adaptive optics hardware, spatial light modulators (SLMs) or additional cameras. We show that this method does not require knowledge of any system parameters. In a simulation study, we consider a full field swept source OCT (FF SSOCT) system to show the working principle of the algorithm. Experimental results are presented for a technical and a biological sample as proof of principle.
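A heavily simplified sketch of the sub-aperture correlation idea, assuming a 2D complex field is available: split the Fourier (pupil) plane into sub-apertures, form a low-resolution image from each, and estimate local wavefront slopes from image shifts relative to a reference sub-aperture. The grid size, sub-aperture count and shift estimator below are our choices, not details from the paper.

```python
# Illustrative sub-aperture shift estimation on a complex field `field`
# (square 2D numpy array). Shifts of each sub-image relative to the first
# sub-aperture are proportional to the mean local wavefront slope there.
import numpy as np

def subaperture_shifts(field, n_sub=4):
    F = np.fft.fftshift(np.fft.fft2(field))          # pupil / Fourier plane
    s = F.shape[0] // n_sub
    ref, shifts = None, []
    for i in range(n_sub):
        for j in range(n_sub):
            sub = F[i*s:(i+1)*s, j*s:(j+1)*s]
            img = np.abs(np.fft.ifft2(sub))          # low-res sub-image
            if ref is None:
                ref = img
                shifts.append((0, 0))
                continue
            # cross-correlate with the reference via FFT; the peak offset
            # gives the sub-image displacement (wrapped to +-s/2)
            xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
            peak = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
            shifts.append(tuple(int((p + s//2) % s - s//2) for p in peak))
    return shifts
```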
Handbook: Collecting Groundwater Samples from Monitoring Wells in Frenchman Flat, CAU 98
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, Jenny; Lyles, Brad; Cooper, Clay
Frenchman Flat basin on the Nevada National Security Site (NNSS) contains Corrective Action Unit (CAU) 98, which comprises ten underground nuclear test locations. Environmental management of these test locations is part of the Underground Test Area (UGTA) Activity conducted by the U.S. Department of Energy (DOE) under the Federal Facility Agreement and Consent Order (FFACO) (1996, as amended) with the U.S. Department of Defense (DOD) and the State of Nevada. A Corrective Action Decision Document (CADD)/Corrective Action Plan (CAP) has been approved for CAU 98 (DOE, 2011). The CADD/CAP reports on the Corrective Action Investigation that was conducted for the CAU, which included characterization and modeling. It also presents the recommended corrective actions to address the objective of protecting human health and the environment. The recommended corrective action alternative is “Closure in Place with Modeling, Monitoring, and Institutional Controls.” The role of monitoring is to verify that Contaminants of Concern (COCs) have not exceeded the Safe Drinking Water Act (SDWA) limits (Code of Federal Regulations, 2014) at the regulatory boundary, to ensure that institutional controls are adequate, and to monitor for changed conditions that could affect the closure conditions. The long-term closure monitoring program will be planned and implemented as part of the Closure Report stage after activities specified in the CADD/CAP are complete. Groundwater at the NNSS has been monitored for decades through a variety of programs. Current activities were recently consolidated in an NNSS Integrated Sampling Plan (DOE, 2014). Although monitoring directed by the plan is not intended to meet the FFACO long-term monitoring requirements for a CAU (which will be defined in the Closure Report), the objective to ensure public health protection is similar. It is expected that data collected in accordance with the plan will support the transition to long-term monitoring at each CAU. The sampling plan is designed to ensure that monitoring activities occur in compliance with the UGTA Quality Assurance Plan (DOE, 2012). The sampling plan should be referenced for Quality Assurance (QA) elements and procedures governing sampling activities. The NNSS Integrated Sampling Plan specifies the groundwater monitoring that will occur in CAU 98 until the long-term monitoring program is approved in the Closure Report. The plan specifies the wells that must be monitored and categorizes them by their sampling objective with the associated analytical requirements and frequency. Possible sample collection methods and required standard operating procedures are also presented. The intent of this handbook is to augment the NNSS Integrated Sampling Plan by providing well-specific details for the sampling professional implementing the Sampling Plan in CAU 98, Frenchman Flat. This handbook includes each CAU 98 well designated for sampling in the NNSS Integrated Sampling Plan. The following information is provided in the individual well sections: 1. The purpose of sampling. 2. A physical description of the well. 3. The chemical characteristics of the formation water. 4. Recommended protocols for purging and sampling. The well-specific information has been gathered from numerous historical and current sources cited in each section, but two particularly valuable resources merit special mention.
These are the USGS NNSS website (http://nevada.usgs.gov/doe_nv/ntsarea5.cfm) and the UGTA Field Operations website (https://ugta.nv.doe.gov/sites/Field%20Operations/default.aspx). Land surface elevation and measuring point for water level measurements in Frenchman Flat were a focus during CAU investigations (see Appendix B, Attachment 1 in Navarro-Intera, 2014). Both websites listed above provide information on the accepted datum for each well. A summary is found on the home page for the well on the USGS website. Additional information is available through a link in the “Available Data” section to an “MP diagram” with a photo annotated with the datum information. On the UGTA Field Operations well page, the same information is in the “Wellhead Diagram” link. Well RNM-2s does not have an annotated photo at this time. All of the CAU 98 monitoring wells are located within Area 5 of Frenchman Flat, with the exception of ER-11-2 in Area 11 (Figure 1). The wells are clustered in two areas: the northern area (Figure 2) and the central area (Figure 3). Each well is discussed below in geographic order from north to south as follows: ER-11-2, ER-5-3 shallow piezometer, ER-5-3-2, ER-5-5, RNM-1, RNM-2s, and UE-5n.
Terahertz pulsed imaging study of dental caries
NASA Astrophysics Data System (ADS)
Karagoz, Burcu; Altan, Hakan; Kamburoglu, Kıvanç
2015-07-01
Current diagnostic techniques in dentistry rely predominantly on X-rays to monitor dental caries. Terahertz Pulsed Imaging (TPI) has great potential for medical applications since it is a nondestructive imaging method and poses no ionization hazard to biological samples due to the low photon energy of THz radiation. Although THz radiation is strongly absorbed by water, teeth can still be investigated in three dimensions. Recent investigations suggest that this method can be used in the early identification of dental diseases and imperfections in the tooth structure without the hazards of X-ray-based techniques. We constructed a continuous wave (CW) and time-domain reflection mode raster scan THz imaging system that enables us to investigate various teeth samples in two or three dimensions. The samples comprised either slices of individual tooth samples or rows of teeth embedded in wax, and the imaging was done by scanning the sample across the focus of the THz beam. 2D images were generated by acquiring the intensity of the THz radiation at each pixel, while 3D images were generated by collecting the amplitude of the reflected signal at each pixel. After analyzing the measurements in both the spatial and frequency domains, the results suggest that the THz pulse is sensitive to variations in the structure of the samples and that this method can be useful in detecting the presence of caries.
Dual-view plane illumination microscopy for rapid and spatially isotropic imaging
Kumar, Abhishek; Wu, Yicong; Christensen, Ryan; Chandris, Panagiotis; Gandler, William; McCreedy, Evan; Bokinsky, Alexandra; Colón-Ramos, Daniel A; Bao, Zhirong; McAuliffe, Matthew; Rondeau, Gary; Shroff, Hari
2015-01-01
We describe the construction and use of a compact dual-view inverted selective plane illumination microscope (diSPIM) for time-lapse volumetric (4D) imaging of living samples at subcellular resolution. Our protocol enables a biologist with some prior microscopy experience to assemble a diSPIM from commercially available parts, to align optics and test system performance, to prepare samples, and to control hardware and data processing with our software. Unlike existing light sheet microscopy protocols, our method does not require the sample to be embedded in agarose; instead, samples are prepared conventionally on glass coverslips. Tissue culture cells and Caenorhabditis elegans embryos are used as examples in this protocol; successful implementation of the protocol results in isotropic resolution and acquisition speeds of up to several volumes per second on these samples. Assembling and verifying diSPIM performance takes ~6 d, sample preparation and data acquisition take up to 5 d, and postprocessing takes 3–8 h, depending on the size of the data. PMID:25299154
Lehel, József; Zwillinger, Dóra; Bartha, András; Lányi, Katalin; Laczay, Péter
2017-11-01
The muscle, liver, kidney and fat samples of 20 roe deer of both sexes originating from a hunting area in central Hungary were investigated for the presence of heavy metals such as As, Cd, Hg and Pb, and their contents were evaluated for possible health risks to consumers. Both As and Hg were found at levels below the limit of detection (< 0.5 mg/kg wet weight) in all samples. The median of the measured Cd concentrations was significantly higher in both the kidney and the liver (p = 0.0011) of bucks than of does. In bucks, Cd levels exceeded the respective maximum limits laid down in the European legislation in four kidney and three muscle samples, whereas in does, the measured concentrations were below the respective limits in all samples. The detected amounts of Pb exceeded the maximum limits in the kidney of one buck and eight does, in the liver of two bucks and six does, in the muscle of six bucks and nine does, and in all fat tissues of both bucks and does. The concentration of Pb (p = 0.02) was significantly greater in the kidney of does compared to roebucks. Based on the data obtained in the present study, the consumption of organs and tissues of the investigated roe deer could be objectionable from a food-toxicological point of view and may pose a risk to high consumers of wild game due to their cadmium and lead contents.
Pontes, Fernanda V M; Mendes, Bruna A de O; de Souza, Evelyn M F; Ferreira, Fernanda N; da Silva, Lílian I D; Carneiro, Manuel C; Monteiro, Maria I C; de Almeida, Marcelo D; Neto, Arnaldo A; Vaitsman, Delmo S
2010-02-05
A method for the determination of Co, Cr, Cu, Fe, Mn, Ni, Ti, V and Zn in coal fly ash samples using ultrasound-assisted digestion followed by inductively coupled plasma optical emission spectrometry (ICP-OES) is proposed. The digestion procedure consisted of sonication of the previously dried sample with hydrofluoric acid and aqua regia at 80 °C for 30 min, elimination of fluorides by heating until dryness for about 1 h, and dissolution of the residue with nitric acid solution. A classical digestion method, used as the comparative method, consisted of the addition of HCl, HNO3 and HF to 1 g of sample, and heating on a hot plate until dryness for about 6 h. The proposed method presents several advantages: it requires lower amounts of sample and reagents, and it is faster. It is also advantageous when compared to published methods that also use an ultrasound-assisted digestion procedure: it has lower detection limits for Co, Cu, Ni, V and Zn, and it does not require shaking during the digestion. The detection limits (μg g-1) for Co, Cr, Cu, Fe, Mn, Ni, Ti, V and Zn were 0.06, 0.37, 1.0, 25, 0.93, 0.45, 4.0, 1.7 and 4.3, respectively. The results were in good agreement with those obtained by the classical method and with reference values. The exception was Cr, which presented low recoveries in the classical and proposed methods (83 and 87%, respectively). Also, the concentration of Cu obtained by the proposed method was significantly different from the reference value, in spite of the good recovery (91 ± 1%). Copyright 2009 Elsevier B.V. All rights reserved.
Buratti, S; Benedetti, S; Cosio, M S
2007-02-28
This paper describes the applicability of a flow injection system, operating with an amperometric detector, for measuring in a rapid and simple way the antioxidant power of honey, propolis and royal jelly. The proposed method evaluates the reducing power of selected antioxidant compounds and does not require the use of free radicals or oxidants. Twelve honey, 12 propolis and 4 royal jelly samples of different botanical and geographical origin were evaluated by the electrochemical method and the data were compared with those obtained by the DPPH assay. Since a good correlation was found (R2 = 0.92), the proposed electrochemical method can be successfully employed for the direct, rapid and simple monitoring of the antioxidant power of honeybee products. Furthermore, the total phenolic content of the samples was determined by the Folin-Ciocalteau procedure, and the antioxidant activities showed a good correlation with phenolics (R2 = 0.96 for propolis and 0.90 for honey).
Carbon nanotubes for voltammetric determination of sulphite in some beverages.
Silva, Erika M; Takeuchi, Regina M; Santos, André L
2015-04-15
In this work, a square-wave voltammetric method based on sulphite electrochemical reduction was developed for quantification of this preservative in commercial beverages. A carbon-paste electrode chemically modified with multiwalled carbon nanotubes was used as the working electrode. Under the optimised experimental conditions, a linear response to sulphite concentrations from 1.6 to 32 mg SO2 L-1 (25-500 μmol L-1 of sulphite), with a limit of detection of 1.0 mg SO2 L-1 (16 μmol L-1 of sulphite), was obtained. This method does not suffer interference from other common beverage additives such as ascorbic acid, fructose, and sucrose, and it enables fast and reliable sulphite determination in beverages, with minimal sample pretreatment. Despite its selectivity, the method is not applicable to red grape juice or red wine samples, because some of their components produce a cathodic peak at almost the same potential as that of sulphite reduction. Copyright © 2014 Elsevier Ltd. All rights reserved.
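As an illustration of how such figures of merit are typically derived, a linear-calibration sketch follows; all currents and the blank standard deviation are invented, not the study's data, and the 3s/slope detection-limit convention is a common choice rather than necessarily the one used here.

```python
# Linear calibration of peak current vs sulphite standard concentration,
# with the detection limit estimated as 3 * sd(blank) / slope. Data invented.
import numpy as np

conc = np.array([25, 100, 200, 300, 400, 500])            # umol/L standards
current = np.array([0.21, 0.83, 1.64, 2.50, 3.31, 4.12])  # uA, hypothetical
slope, intercept = np.polyfit(conc, current, 1)

s_blank = 0.0055                       # sd of blank signal (uA), hypothetical
lod = 3 * s_blank / slope              # detection limit in umol/L
print(f"slope = {slope:.4f} uA L/umol, LOD = {lod:.1f} umol/L")
```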
NASA Astrophysics Data System (ADS)
Wiśniewska, Paulina; Boqué, Ricard; Borràs, Eva; Busto, Olga; Wardencki, Waldemar; Namieśnik, Jacek; Dymerski, Tomasz
2017-02-01
Headspace mass spectrometry (HS-MS), mid-infrared (MIR) and UV-vis spectroscopy were used to authenticate whisky samples from different origins and modes of production (Irish, Spanish, Bourbon, Tennessee Whisky and Scotch). The collected spectra were processed with partial least-squares discriminant analysis (PLS-DA) to build the classification models. In all cases the five groups of whiskies were distinguished, but the best results were obtained by HS-MS, which indicates that the biggest differences between the types of whisky are due to their aroma. Differences were also found within groups, showing that not only the raw material but also the production process is important for discriminating samples. The methodology is quick, easy and does not require sample preparation.
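A minimal PLS-DA sketch in the spirit of the analysis described, using scikit-learn's PLS regression onto one-hot class labels with the predicted class taken as the largest response; the spectra matrix and class labels are randomly generated stand-ins, not the study's data.

```python
# PLS-DA sketch: PLS regression on one-hot labels, argmax for the class.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import LabelBinarizer

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))            # 50 "spectra" x 200 variables
labels = rng.choice(["Irish", "Scotch", "Bourbon"], size=50)

lb = LabelBinarizer()
Y = lb.fit_transform(labels)              # one-hot encode the whisky types
pls = PLSRegression(n_components=5).fit(X, Y)
pred = lb.classes_[np.argmax(pls.predict(X), axis=1)]
print((pred == labels).mean())            # training accuracy of the sketch
```

In practice such a model would be validated with cross-validation or an external test set, as done in the study, rather than judged on training accuracy.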
Moliner Martínez, Y; Muñoz-Ortuño, M; Herráez-Hernández, R; Campíns-Falcó, P
2014-02-01
This paper describes a new approach for the determination of fat in the effluents generated by the dairy industry, based on the retention of fat in nylon membranes and measurement of the absorbances on the membrane surface by ATR-IR spectroscopy. Different options have been evaluated for retaining fat in the membranes using milk samples of different origin and fat content. Based on the results obtained, a method is proposed for the determination of fat in effluents which involves the filtration of 1 mL of sample through a 0.45 µm nylon membrane of 13 mm diameter. The fat content is then determined by measuring the absorbance of the band at 1745 cm-1. The proposed method can be used for the direct estimation of fat at concentrations in the 2-12 mg/L interval with adequate reproducibility. The intraday precision, expressed as coefficients of variation (CVs), was ≤ 11%, whereas the interday CVs were ≤ 20%. The method shows good tolerance towards conditions typically found in the effluents generated by the dairy industry. The most relevant features of the proposed method are simplicity and speed, as the samples can be characterized in a few minutes. Sample preparation does not involve additional instrumentation (such as pumps or vacuum equipment), organic solvents, or other chemicals. Therefore, the proposed method can be considered a rapid, simple and cost-effective alternative to gravimetric methods for controlling fat content in these effluents during production or cleaning processes. © 2013 Published by Elsevier B.V.
Nagy, László G; Urban, Alexander; Orstadius, Leif; Papp, Tamás; Larsson, Ellen; Vágvölgyi, Csaba
2010-12-01
Recently developed comparative phylogenetic methods offer a wide spectrum of applications in evolutionary biology, although it is generally accepted that their statistical properties are incompletely known. Here, we examine and compare the statistical power of the ML and Bayesian methods with regard to selection of best-fit models of fruiting-body evolution and hypothesis testing of ancestral states on a real-life data set of a physiological trait (autodigestion) in the family Psathyrellaceae. Our phylogenies are based on the first multigene data set generated for the family. Two different coding regimes (binary and multistate) and two data sets differing in taxon sampling density are examined. The Bayesian method outperformed Maximum Likelihood with regard to statistical power in all analyses. This is particularly evident if the signal in the data is weak, i.e. in cases when the ML approach does not provide support to choose among competing hypotheses. Results based on binary and multistate coding differed only modestly, although it was evident that multistate analyses were less conclusive in all cases. It seems that increased taxon sampling density has favourable effects on inference of ancestral states, while model parameters are influenced to a smaller extent. The model best fitting our data implies that the rate of losses of deliquescence equals zero, although model selection in ML does not provide proper support to reject three of the four candidate models. The results also support the hypothesis that non-deliquescence (lack of autodigestion) has been ancestral in Psathyrellaceae, and that deliquescent fruiting bodies represent the preferred state, having evolved independently several times during evolution. Copyright © 2010 Elsevier Inc. All rights reserved.
Relationship between pore geometric characteristics and SIP/NMR parameters observed for mudstones
NASA Astrophysics Data System (ADS)
Robinson, J.; Slater, L. D.; Keating, K.; Parker, B. L.; Robinson, T.
2017-12-01
The reliable estimation of permeability remains one of the most challenging problems in hydrogeological characterization. Cost-effective, non-invasive geophysical methods such as spectral induced polarization (SIP) and nuclear magnetic resonance (NMR) offer an alternative to traditional sampling methods as they are sensitive to the mineral surfaces and pore spaces that control permeability. We performed extensive physical characterization and SIP and NMR geophysical measurements on fractured rock cores extracted from a mudstone site in an effort to compare 1) the pore size characterization determined from traditional and geophysical methods and 2) the performance of permeability models based on these methods. We focus on two physical characteristics that are well correlated with hydraulic properties: the pore-volume-normalized surface area (Spor) and an interconnected pore diameter (Λ). We find the SIP polarization magnitude and relaxation time are better correlated with Spor than Λ; the best correlation of these SIP measures for our sample dataset was found with Spor divided by the electrical formation factor (F). NMR parameters are, similarly, better correlated with Spor than Λ. We implement previously proposed mechanistic and empirical permeability models using SIP and NMR parameters. A sandstone-calibrated SIP model using a polarization magnitude does not perform well, while a SIP model using a mean relaxation time performs better, in part by more sufficiently accounting for the effects of fluid chemistry. A sandstone-calibrated NMR permeability model using an average measure of the relaxation time does not perform well, presumably due to small pore sizes which are either not connected or contain water of limited mobility. An NMR model based on the laboratory-determined bound versus mobile portions of the relaxation distribution performed reasonably well. While limitations exist, there are many opportunities to use geophysical data to predict permeability in mudstone formations.
NASA Astrophysics Data System (ADS)
Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.
2018-02-01
River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of prescribed β values and gap distributions. The aliasing method, however, does not itself account for sampling irregularity, and this introduces some bias in the result. Nonetheless, the wavelet method is recommended for estimating β in irregular time series until improved methods are developed. Finally, all methods' performances depend strongly on the sampling irregularity, highlighting that the accuracy and precision of each method are data specific. Accurately quantifying the strength of fractal scaling in irregular water-quality time series remains an unresolved challenge for the hydrologic community and for other disciplines that must grapple with irregular sampling.
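A bare-bones version of the Lomb-Scargle slope estimation discussed above, assuming SciPy is available: compute the periodogram of the irregularly sampled series and fit a line to log power versus log frequency. The synthetic series here is white noise, so the fitted β should be near zero; this sketch omits the low-frequency restriction and other corrections the paper evaluates.

```python
# Estimate spectral slope beta from P(f) ~ f^-beta via the Lomb-Scargle
# periodogram of an irregularly sampled series. Illustrative only.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0, 1000, 500))        # irregular sampling times
y = rng.normal(size=t.size)                   # white noise, true beta = 0

freqs = np.linspace(0.001, 0.5, 400)          # cycles per time unit
power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs, normalize=True)

beta = -np.polyfit(np.log(freqs), np.log(power), 1)[0]
print(beta)                                   # should be close to 0
```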
Vigliaturo, Ruggero; Capella, Silvana; Rinaudo, Caterina; Belluso, Elena
2016-07-01
The purpose of this work is to define a sample preparation protocol that allows inorganic fibers and particulate matter extracted from different biological samples to be characterized morphologically, crystallographically and chemically by transmission electron microscopy-energy dispersive spectroscopy (TEM-EDS). The method does not damage or create artifacts through chemical attacks of the target material. A fairly rapid specimen preparation is applied with the aim of performing as few steps as possible to transfer the withdrawn inorganic matter onto the TEM grid. The biological sample is previously digested chemically by NaClO. The salt is then removed through a series of centrifugation and rinse cycles in deionized water, thus drastically reducing the digestive power of the NaClO and concentrating the fibers for TEM analysis. The concept of equivalent hydrodynamic diameter is introduced to calculate the settling velocity during the centrifugation cycles. This technique is applicable to lung tissues and can be extended to a wide range of organic materials. The procedure does not appear to cause morphological damage to the fibers or modify their chemistry or degree of crystallinity. The extrapolated data can be used in interdisciplinary studies to understand the pathological effects caused by inorganic materials.
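The settling-velocity calculation implied by the equivalent hydrodynamic diameter concept can be sketched with Stokes' law under centrifugal acceleration; the rotor speed, radius and particle properties below are placeholders, not values from the protocol.

```python
# Stokes settling velocity with gravity replaced by the centrifugal
# acceleration omega^2 * r. All parameter values are hypothetical.
import numpy as np

def settling_velocity(d_eq, rho_p, rho_f=1000.0, mu=1.0e-3, rpm=3000, r=0.1):
    """Settling velocity (m/s) of a particle of equivalent hydrodynamic
    diameter d_eq (m) and density rho_p (kg/m3) in water (viscosity mu,
    Pa*s), at radius r (m) in a rotor spinning at `rpm`."""
    a = (2 * np.pi * rpm / 60.0) ** 2 * r       # centrifugal acceleration
    return (rho_p - rho_f) * d_eq**2 * a / (18.0 * mu)

# e.g. a 1-um-equivalent amphibole fibre (density ~3000 kg/m3):
v = settling_velocity(1e-6, 3000.0)
print(v, "m/s ->", 0.05 / v / 60.0, "min to settle 5 cm")
```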
Sandwich mapping of schistosomiasis risk in Anhui Province, China.
Hu, Yi; Bergquist, Robert; Lynn, Henry; Gao, Fenghua; Wang, Qizhi; Zhang, Shiqing; Li, Rui; Sun, Liqian; Xia, Congcong; Xiong, Chenglong; Zhang, Zhijie; Jiang, Qingwu
2015-06-03
Schistosomiasis mapping using data obtained from parasitological surveys is frequently used in the planning and evaluation of disease control strategies. The available geostatistical approaches are, however, subject to the assumption of stationarity, i.e. that the joint probability distribution of the underlying stochastic process does not change when shifted in space or time. As this is impractical for large areas, we introduce here the sandwich method, the basic idea of which is to divide the study area (with its attributes) into homogeneous subareas and estimate the values for the reporting units using spatial stratified sampling. The sandwich method was applied to map the county-level prevalence of schistosomiasis japonica in Anhui Province, China based on parasitological data collected from sample villages and land use data. We first mapped the county-level prevalence using the sandwich method, then compared our findings with block Kriging. The sandwich estimates ranged from 0.17 to 0.21% with a lower level of uncertainty, while the Kriging estimates varied from 0 to 0.97% with a higher level of uncertainty, indicating that the former is smoother and more stable than the latter. Aside from accommodating various forms of reporting units, the sandwich method has the particular merit of simple model assumptions coupled with full utilization of the sample data. It performs well when a disease presents stratified heterogeneity over space.
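To make the stratified-estimation idea concrete, a toy sketch follows; the strata, area weights and village prevalences are invented, and the sandwich method's full machinery for linking stratification, sampling and reporting layers is not reproduced here.

```python
# Area-weighted stratified estimate for one reporting unit: per-stratum means
# and variances from sample villages, combined by the unit's area shares.
import numpy as np

strata = {  # stratum -> prevalence (%) observed in its sample villages
    "lake_marshland": np.array([0.30, 0.22, 0.26]),
    "hilly":          np.array([0.12, 0.18]),
    "plain":          np.array([0.05, 0.09, 0.07, 0.06]),
}
# share of the reporting unit's area falling in each stratum (hypothetical)
weights = {"lake_marshland": 0.2, "hilly": 0.3, "plain": 0.5}

estimate = sum(weights[s] * strata[s].mean() for s in strata)
variance = sum(weights[s]**2 * strata[s].var(ddof=1) / strata[s].size
               for s in strata)
print(f"{estimate:.3f}% +/- {np.sqrt(variance):.3f}%")
```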
New radiocarbon measurement methods in the Hertelendi Laboratory, Hungary
NASA Astrophysics Data System (ADS)
Janovics, Róbert; Major, István; Rinyu, László; Veres, Mihály; Molnár, Mihály
2013-04-01
In this paper we present two very different and novel methods for C-14 measurement from the dissolved inorganic carbon (DIC) of water samples. A new LSC sample preparation method for liquid scintillation C-14 measurements was implemented at ATOMKI. The first method uses direct absorption into a special absorbent (Carbosorb E®) followed by liquid scintillation measurement. The typical sample size is 20-40 litres of water. The developed CO2 absorption method is fast and simple. The C-14 activities are measured by an ultra-low-background LSC (TRI-CARB 3170 TR/SL, Perkin Elmer) including the quenching parameter (tSIE). The corresponding limit of C-14 dating is 31,200 years. Several tests were executed with old borehole CO2 gas without significant C-14 content, and also on samples of known C-14 activities between 29 and 7000 pMC, previously measured by GPC. The combined uncertainty of the described determination is about 2% in the case of recent carbon. It is a very cost-effective and easy-to-use method based on a novel and simple static absorption process for the CO2 extracted from groundwater. The other, very sensitive method is based on accelerator mass spectrometry (AMS) using a gas ion source. This method does not require graphite generation, and a small volume of water sample (1-20 mL) is enough for the radiocarbon measurement. The procedure is very similar to the pre-treatment of carbonate-containing samples for stable isotope measurement with the gasbench technique. We applied a MICADAS-type accelerator mass spectrometer (AMS) with a gas ion source for the C-14 analysis. The radiocarbon in the water was liberated with phosphoric acid and the evolved CO2 was collected from the headspace of the vials. The whole measurement needs only 20 min per sample. The precision of the measurement is better than 1% for modern samples. The preparation is vastly reduced compared to other AMS methods and in principle allows fully automated measurements of groundwater samples with an auto-sampler. The two new methods presented are suitable for C-14 measurements and dating of hydrological and environmental samples. The new AMS facility at ATOMKI (Debrecen, Hungary), using an EnvironMICADAS AMS system with a gas ion source, has great potential in groundwater C-14 analyses. The research was supported by TÁMOP-4.2.2.A-11/1/KONV and the Hungarian NSF (OTKA MB08-A 81515).
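For orientation, the standard conversion from measured C-14 content (pMC, percent modern carbon) to a conventional radiocarbon age is sketched below; this textbook formula, using the Libby mean life of 8033 years, is implied by but not stated in the abstract.

```python
# Conventional radiocarbon age from percent modern carbon (pMC).
import math

def radiocarbon_age(pmc):
    """Conventional radiocarbon age (years BP): t = -8033 * ln(pMC/100)."""
    return -8033.0 * math.log(pmc / 100.0)

for pmc in (100.0, 50.0, 7.0):
    print(pmc, "pMC ->", round(radiocarbon_age(pmc)), "yr BP")
```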
DOE-OES-EML quality assurance program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanderson, C.G.
1980-01-01
Contractor laboratories handling radioactive materials for the US Department of Energy (DOE) are required to monitor the environmental exposure and publish annual reports for the Division of Operational and Environmental Safety (OES). In order to determine the validity of the data contained in these reports, the Environmental Measurements Laboratory (EML) was requested to develop, coordinate, and conduct an Environmental Quality Assurance Program (QAP). There are four major phases to the DOE-OES-EML Quality Assurance Program: sample collection and preparation, sample analyses at EML, quarterly sample distribution, and reporting the data returned by the participants. The various phases of the QAP and the data reported during the first year of the program are discussed.
Akar, Melek; Yildirim, Tulin G; Sandal, Gonca; Bozdag, Senol; Erdeve, Omer; Altug, Nahide; Uras, Nurdan; Oguz, Serife S; Dilmen, Ugur
2017-04-01
Introduction: Ibuprofen is used widely to close patent ductus arteriosus in preterm infants. The anti-inflammatory activity of ibuprofen may also be partly due to its ability to scavenge reactive oxygen species and reactive nitrogen species. We evaluated the interaction between oxidative status and the medical treatment of patent ductus arteriosus with two forms of ibuprofen. Materials and methods: This prospective randomised study enrolled 102 preterm infants with patent ductus arteriosus, of gestational age ⩽32 weeks, birth weight ⩽1500 g, and postnatal age 48-96 hours, who received either intravenous or oral ibuprofen to treat patent ductus arteriosus. Venous blood was sampled before ibuprofen treatment from each patient to determine antioxidant and oxidant concentrations. Secondary samples were collected 24 hours after the end of the treatment. Total oxidant status and total antioxidant capacity were measured using Erel's method. Results: The patent ductus arteriosus closure rate was significantly higher in the oral ibuprofen group (84.6% versus 62%) after the first course of treatment (p = 0.011). No significant difference was found between the pre- and post-treatment total oxidant status and total antioxidant capacity in the groups. Discussion: Ibuprofen treatment does not change the total oxidant status or total antioxidant capacity. We believe that the effect of ibuprofen treatment in inducing ischaemia overcomes the scavenging effect of ibuprofen.
Determination of the molar mass of argon from high-precision acoustic comparisons.
Feng, X J; Zhang, J T; Moldover, M R; Yang, I; Plimmer, M D; Lin, H
2017-06-01
This article describes the accurate determination of the molar mass M of a sample of argon gas used for the determination of the Boltzmann constant. The method of one of the authors (Moldover et al 1988 J. Res. Natl. Bur. Stand. 93 85-144) uses the ratio of the square speed of sound in the gas under analysis and in a reference sample of known molar mass. A sample of argon that was isotopically-enriched in 40 Ar was used as the reference, whose unreactive impurities had been independently measured. The results for three gas samples are in good agreement with determinations by gravimetric mass spectrometry; (〈 M acoustic / M mass-spec 〉 - 1) = (-0.31 ± 0.69) × 10 -6 , where the indicated uncertainty is one standard deviation that does not account for the uncertainties from the acoustic and mass-spectroscopy references.
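A minimal sketch of the speed-of-sound ratio principle described above, assuming ideal-gas behaviour for two samples of the same gas species at a common temperature so that the heat-capacity ratio cancels; the speeds and reference molar mass below are placeholders, not the paper's measurements.

```python
# For a dilute gas, u^2 = gamma * R * T / M, so at the same T and gamma the
# unknown molar mass follows from the square speed-of-sound ratio against a
# reference of known molar mass. Numbers are hypothetical.
def molar_mass_from_sound(u_sample, u_ref, m_ref):
    """M_sample = M_ref * (u_ref / u_sample)^2."""
    return m_ref * (u_ref / u_sample) ** 2

m_ref = 39.9624e-3   # kg/mol, approximate molar mass of pure 40Ar (assumed)
print(molar_mass_from_sound(u_sample=323.00, u_ref=322.98, m_ref=m_ref))
```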
Designing testing service at baristand industri Medan’s liquid waste laboratory
NASA Astrophysics Data System (ADS)
Kusumawaty, Dewi; Napitupulu, Humala L.; Sembiring, Meilita T.
2018-03-01
Baristand Industri Medan is a technical implementation unit under the Industrial Research and Development Agency of the Ministry of Industry. One of the services most often used at Baristand Industri Medan is the liquid waste testing service. The company's service standard is nine working days for testing services. In 2015, 89.66% of liquid waste testing services did not meet this standard because many samples accumulated. The purpose of this research is to design an online service to schedule the arrival of liquid waste samples. The method used is information system design, consisting of model design, output design, input design, database design and technology design. The resulting information system for online liquid waste testing consists of three pages: one each for the customer, the sample recipient and the laboratory. The simulation results with scheduled samples show that the nine-working-day service standard can be met.
Déjeant, Adrien; Bourva, Ludovic; Sia, Radia; Galoisy, Laurence; Calas, Georges; Phrommavanh, Vannapha; Descostes, Michael
2014-11-01
The radioactivities of 238U and 226Ra in mill tailings from the U mines of COMINAK and SOMAÏR in Niger were measured and quantified using a portable High-Purity Germanium (HPGe) detector. The 238U and 226Ra activities were measured under field conditions on drilling cores with 600 s measurements and without any sample preparation. Field results were compared with those obtained by Inductively Coupled Plasma Atomic Emission Spectroscopy (ICP-AES) and emanometry techniques. This comparison indicates that gamma-ray absorption by such geological samples does not cause significant deviations. This work shows the feasibility of using a portable HPGe detector in the field as a preliminary method to observe variations in radionuclide concentrations with the aim of identifying samples of interest. The HPGe detector is particularly useful for samples with strong secular disequilibrium, such as mill tailings. Copyright © 2014 Elsevier Ltd. All rights reserved.
Toxic chemical considerations for tank farm releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Keuren, J.C.; Davis, J.S., Westinghouse Hanford
1996-08-01
This topical report contains technical information used to determine the accident consequences of releases of toxic chemicals and gases for the Tank Farm Final Safety Analysis Report (FSAR). It does not provide results for specific accident scenarios but does provide information for use in those calculations, including the chemicals to be considered, chemical concentrations, chemical limits and a method of summing the fractional contributions of each chemical. The tank farm composites evaluated were liquids and solids for double-shell tanks, single-shell tanks, all solids, all liquids, headspace gases, and 241-C-106 solids. Emergency response planning guidelines (ERPGs) were used as the limits. Where ERPGs were not available for the chemicals of interest, surrogate ERPGs were developed. Revision 2 includes updated sample data, an executive summary, and some editorial revisions.
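A hedged sketch of the fractional-summation rule mentioned above: each chemical's concentration is divided by its limit and the fractions summed, with a total at or above one indicating the guideline is exceeded. The chemical names, concentrations and ERPG values below are placeholders, not figures from the report.

```python
# Sum-of-fractions screening against ERPG limits. All values hypothetical.
def erpg_fraction_sum(concentrations, limits):
    """concentrations and limits: dicts of chemical -> mg/m3 at the receptor."""
    return sum(concentrations[c] / limits[c] for c in concentrations)

conc = {"NH3": 12.0, "NOx": 3.0}     # hypothetical receptor concentrations
erpg2 = {"NH3": 105.0, "NOx": 11.0}  # hypothetical ERPG-2 limits
print(erpg_fraction_sum(conc, erpg2) < 1.0)  # True -> below the guideline
```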
Quercetin does not alter the oral bioavailability of Atorvastatin in rats.
Koritala, Rekha; Challa, Siva Reddy; Ragam, Satheesh Kumar; Geddam, Lal Babu; Venkatesh Reddy Challa, Venkatesh Reddy; Devi, Renuka; Sattenapalli, Srinu; Babu, Narendra
2015-09-01
The study was undertaken to evaluate the effect of Quercetin on the pharmacokinetics of Atorvastatin Calcium. In vivo pharmacokinetic studies were performed on rats in a single-dose study and a multiple-dose study. Rats were treated with Quercetin (10 mg/kg) and Atorvastatin Calcium (20 mg/kg) orally, and blood samples were collected at 0 (pretreatment) and 0.5, 1, 1.5, 2, 2.5, 3, 4, 8, 12 and 24 hours post treatment. Plasma concentrations of Atorvastatin were estimated by an HPLC method. Quercetin treatment did not significantly alter the pharmacokinetic parameters of Atorvastatin such as AUC(0-24), AUC(0-∞), T(max), C(max) and T(½) in either the single-dose or the multiple-dose study of Atorvastatin Calcium. Quercetin does not alter the oral bioavailability of Atorvastatin Calcium in rats.
Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis
2009-02-01
Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
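A compact residual-bootstrap sketch for the CI around a relative level change estimated by segmented regression; the synthetic series and breakpoint are invented, and the resampling ignores the autocorrelation correction the authors apply, so this illustrates the mechanics only.

```python
# Residual bootstrap of the relative level change from a segmented OLS fit
# to an interrupted time series with an intervention at t = 24.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(48)
x_level = (t >= 24).astype(float)              # post-intervention indicator
X = np.column_stack([np.ones_like(t), t, x_level, x_level * (t - 24)])
y = 50 + 0.2 * t - 6 * x_level + rng.normal(0, 2, t.size)  # synthetic data

def rel_level_change(X, y, t0=24):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    counterfactual = b[0] + b[1] * t0          # predicted level without change
    return b[2] / counterfactual               # relative change in level

b = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b
boot = [rel_level_change(X, X @ b + rng.choice(resid, resid.size, replace=True))
        for _ in range(2000)]
print(np.percentile(boot, [2.5, 97.5]))        # 95% bootstrap CI
```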
Lakshmi, Karunanidhi Santhana; Lakshmi, Sivasubramanian
2011-03-01
Simultaneous determination of valsartan and hydrochlorothiazide by the H-point standard additions method (HPSAM) and partial least squares (PLS) calibration is described. Absorbances at a pair of wavelengths, 216 and 228 nm, were monitored with the addition of standard solutions of valsartan. Results of applying HPSAM showed that valsartan and hydrochlorothiazide can be determined simultaneously at concentration ratios varying from 20:1 to 1:15 in a mixed sample. The proposed PLS method does not require chemical separation and spectral graphical procedures for quantitative resolution of mixtures containing the titled compounds. The calibration model was based on absorption spectra in the 200-350 nm range for 25 different mixtures of valsartan and hydrochlorothiazide. Calibration matrices contained 0.5-3 μg mL-1 of both valsartan and hydrochlorothiazide. The standard error of prediction (SEP) for valsartan and hydrochlorothiazide was 0.020 and 0.038 μg mL-1, respectively. Both proposed methods were successfully applied to the determination of valsartan and hydrochlorothiazide in several synthetic and real matrix samples.
A quantum–quantum Metropolis algorithm
Yung, Man-Hong; Aspuru-Guzik, Alán
2012-01-01
The classical Metropolis sampling method is a cornerstone of many statistical modeling applications that range from physics, chemistry, and biology to economics. This method is particularly suitable for sampling the thermal distributions of classical systems. The challenge of extending this method to the simulation of arbitrary quantum systems is that, in general, eigenstates of quantum Hamiltonians cannot be obtained efficiently with a classical computer. However, this challenge can be overcome by quantum computers. Here, we present a quantum algorithm which fully generalizes the classical Metropolis algorithm to the quantum domain. The meaning of quantum generalization is twofold: The proposed algorithm is not only applicable to both classical and quantum systems, but also offers a quantum speedup relative to the classical counterpart. Furthermore, unlike the classical method of quantum Monte Carlo, this quantum algorithm does not suffer from the negative-sign problem associated with fermionic systems. Applications of this algorithm include the study of low-temperature properties of quantum systems, such as the Hubbard model, and preparing the thermal states of sizable molecules to simulate, for example, chemical reactions at an arbitrary temperature. PMID:22215584
Prothmann, Jens; Sun, Mingzhe; Spégel, Peter; Sandahl, Margareta; Turner, Charlotta
2017-12-01
The conversion of lignin to potentially high-value low-molecular-weight compounds often results in complex mixtures of monomeric and oligomeric compounds. In this study, a method for the quantitative and qualitative analysis of 40 lignin-derived compounds using ultra-high-performance supercritical fluid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPSFC/QTOF-MS) has been developed. Seven different columns were explored for maximum selectivity. The makeup solvent composition and ion source settings were optimised using a D-optimal design of experiments (DoE). Differently processed lignin samples were analysed and used for the method validation. The new UHPSFC/QTOF-MS method showed good separation of the 40 compounds within a retention time of only 6 min, and 36 of these showed high ionisation efficiency in negative electrospray ionisation mode.
Characterization and properties of TiO2-SnO2 nanocomposites, obtained by hydrolysis method
NASA Astrophysics Data System (ADS)
Kutuzova, Anastasiya S.; Dontsova, Tetiana A.
2018-04-01
The paper deals with the synthesis of TiO2-SnO2 nanocomposites by a simple hydrolysis method with subsequent calcination for photocatalytic applications. The obtained nanopowders contain 100, 90, 75, 65 and 25 wt% of TiO2. The synthesized nanocomposite samples were analyzed by X-ray diffraction, scanning electron microscopy, transmission electron microscopy, Fourier transform infrared spectroscopy and the N2 adsorption-desorption method. A correlation between the structure and morphology of the obtained nanocrystalline composite powders and their sorption and photocatalytic activity towards methylene blue degradation was established. It was found that the presence of SnO2 in the nanocomposites stabilizes the anatase phase of TiO2. Furthermore, the sorption and photocatalytic properties of the obtained composites are significantly influenced not only by the specific surface area, but also by the pore size distribution and mesopore volume of the samples. In our opinion, the results obtained in this study show that TiO2-SnO2 composites with an SnO2 content that does not exceed 10% are promising for photocatalytic applications.
Inorganic arsenic in seafood: does the extraction method matter?
Pétursdóttir, Ásta H; Gunnlaugsdóttir, Helga; Krupp, Eva M; Feldmann, Jörg
2014-05-01
Nine different extraction methods were evaluated for three seafood samples to test whether the concentration of inorganic arsenic (iAs) determined in seafood is dependent on the extraction method. Certified reference materials (CRMs) DOLT-4 (Dogfish Liver) and TORT-2 (Lobster Hepatopancreas), and a commercial herring fish meal were evaluated. All experimental work described here was carried out by the same operator using the same instrumentation, thus eliminating possible differences in results caused by laboratory-related factors. Low concentrations of iAs were found in CRM DOLT-4 (0.012 ± 0.003 mg kg-1) and the herring fish meal sample (0.007 ± 0.002 mg kg-1) for all extraction methods. When comparing the concentrations of iAs in CRM TORT-2 found in this study and in the literature, the dilute acids HNO3 and HCl showed the highest extracted iAs, whereas dilute NaOH (in 50% ethanol) showed significantly lower extracted iAs. However, most other extraction solvents were not statistically different from one another. Copyright © 2013 Elsevier Ltd. All rights reserved.
A portable molecular-sieve-based CO{sub 2} sampling system for radiocarbon measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palonen, V., E-mail: vesa.palonen@helsinki.fi
We have developed a field-capable sampling system for the collection of CO2 samples for radiocarbon-concentration measurements. Most target systems in environmental research are limited in volume and CO2 concentration, making conventional flask sampling hard or impossible for radiocarbon studies. The present system captures the CO2 selectively in cartridges containing 13X molecular sieve material. The sampling does not introduce significant under-pressures or significant losses of moisture to the target system, making it suitable for most environmental targets. The system also incorporates a significantly larger sieve container for the removal of CO2 from chambers prior to the CO2 build-up phase and sampling. In addition, both the CO2 and H2O content of the sample gas are measured continuously. This enables in situ estimation of the amount of collected CO2 and the determination of the CO2 flux to a chamber. The portable sampling system is described in detail and tests for the reliability of the method are presented.
Sanna, Daria; Pala, Maria; Cossu, Piero; Dedola, Gian Luca; Melis, Sonia; Fresu, Giovanni; Morelli, Laura; Obinu, Domenica; Tonolo, Giancarlo; Secchi, Giannina; Triunfo, Riccardo; Lorenz, Joseph G.; Scheinfeldt, Laura; Torroni, Antonio; Robledo, Renato; Francalacci, Paolo
2011-01-01
We report a sampling strategy based on Mendelian Breeding Units (MBUs), representing an interbreeding group of individuals sharing a common gene pool. The identification of MBUs is crucial for case-control experimental design in association studies. The aim of this work was to evaluate the possible existence of bias in terms of genetic variability and haplogroup frequencies in the MBU sample, due to severe sample selection. In order to reach this goal, the MBU sampling strategy was compared to a standard selection of individuals according to their surname and place of birth. We analysed mitochondrial DNA variation (first hypervariable segment and coding region) in unrelated healthy subjects from two different areas of Sardinia: the area around the town of Cabras and the western Campidano area. No statistically significant differences were observed when the two sampling methods were compared, indicating that the stringent sample selection needed to establish a MBU does not alter original genetic variability and haplogroup distribution. Therefore, the MBU sampling strategy can be considered a useful tool in association studies of complex traits. PMID:21734814
Jenkins, Paul A; Song, Yun S; Brem, Rachel B
2012-01-01
Genetic exchange between isolated populations, or introgression between species, serves as a key source of novel genetic material on which natural selection can act. While detecting historical gene flow from DNA sequence data is of much interest, many existing methods can be limited by requirements for deep population genomic sampling. In this paper, we develop a scalable genealogy-based method to detect candidate signatures of gene flow into a given population when the source of the alleles is unknown. Our method does not require sequenced samples from the source population, provided that the alleles have not reached fixation in the sampled recipient population. The method utilizes recent advances in algorithms for the efficient reconstruction of ancestral recombination graphs, which encode genealogical histories of DNA sequence data at each site, and is capable of detecting the signatures of gene flow whose footprints are of length up to single genes. Further, we employ a theoretical framework based on coalescent theory to test for statistical significance of certain recombination patterns consistent with gene flow from divergent sources. Implementing these methods for application to whole-genome sequences of environmental yeast isolates, we illustrate the power of our approach to highlight loci with unusual recombination histories. By developing innovative theory and methods to analyze signatures of gene flow from population sequence data, our work establishes a foundation for the continued study of introgression and its evolutionary relevance.
Evaluation of Three Field-Based Methods for Quantifying Soil Carbon
Izaurralde, Roberto C.; Rice, Charles W.; Wielopolski, Lucian; Ebinger, Michael H.; Reeves, James B.; Thomson, Allison M.; Francis, Barry; Mitra, Sudeep; Rappaport, Aaron G.; Etchevers, Jorge D.; Sayre, Kenneth D.; Govaerts, Bram; McCarty, Gregory W.
2013-01-01
Three advanced technologies to measure soil carbon (C) density (g C m−2) are deployed in the field and the results compared against those obtained by the dry combustion (DC) method. The advanced methods are: a) Laser Induced Breakdown Spectroscopy (LIBS), b) Diffuse Reflectance Fourier Transform Infrared Spectroscopy (DRIFTS), and c) Inelastic Neutron Scattering (INS). The measurements and soil samples were acquired at Beltsville, MD, USA and at the Centro Internacional de Mejoramiento de Maíz y Trigo (CIMMYT) at El Batán, Mexico. At Beltsville, soil samples were extracted at three depth intervals (0–5, 5–15, and 15–30 cm) and processed for analysis in the field with the LIBS and DRIFTS instruments. The INS instrument determined soil C density to a depth of 30 cm via scanning and stationary measurements. Subsequently, soil core samples were analyzed in the laboratory for soil bulk density (kg m−3) and C concentration (g kg−1) by DC, and the results were reported as soil C density (kg m−2). Results from each technique were derived independently and contributed to a blind test against results from the reference (DC) method. A similar procedure was employed at CIMMYT in Mexico, but only with the LIBS and DRIFTS instruments. Following conversion to common units, we found that the LIBS, DRIFTS, and INS results can be compared directly with those obtained by the DC method. The first two methods, like the standard DC method, require soil sampling and need soil bulk density information to convert soil C concentrations to soil C densities, while the INS method does not require soil sampling. We conclude that, in comparison with the DC method, the three instruments (a) showed acceptable performances, although further work is needed to improve calibration techniques, and (b) demonstrated their portability and their capacity to perform under field conditions. PMID:23383225
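The unit conversion the abstract refers to, from C concentration plus bulk density to an areal C density, can be sketched as follows; the layer concentrations, bulk densities and depths are invented for illustration.

```python
# Convert C concentration (g C per kg soil, as from dry combustion), bulk
# density (kg/m3) and layer thickness (m) to soil C density per unit area.
def soil_c_density(conc_g_per_kg, bulk_density_kg_m3, depth_m):
    """Return soil carbon density in kg C m^-2 for one depth interval."""
    return conc_g_per_kg / 1000.0 * bulk_density_kg_m3 * depth_m

# 0-5, 5-15 and 15-30 cm intervals with hypothetical values:
layers = [(12.0, 1300.0, 0.05), (9.0, 1350.0, 0.10), (6.0, 1400.0, 0.15)]
print(sum(soil_c_density(*layer) for layer in layers))  # total to 30 cm depth
```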
Gill, Christina; van de Wijgert, Janneke H H M; Blow, Frances; Darby, Alistair C
2016-01-01
Recent studies on the vaginal microbiota have employed molecular techniques such as 16S rRNA gene sequencing to describe the bacterial community as a whole. These techniques require the lysis of bacterial cells to release DNA before purification and PCR amplification of the 16S rRNA gene. Currently, methods for the lysis of bacterial cells are not standardised, and there is potential for introducing bias into the results if some bacterial species are lysed less efficiently than others. This study aimed to compare the results of vaginal microbiota profiling using four different pretreatment methods for the lysis of bacterial samples (30 min of lysis with lysozyme, 16 hours of lysis with lysozyme, 60 min of lysis with a mixture of lysozyme, mutanolysin and lysostaphin, and 30 min of lysis with lysozyme followed by bead beating) prior to chemical and enzyme-based DNA extraction with a commercial kit. After extraction, DNA yield did not differ significantly between methods, with the exception of lysis with lysozyme combined with bead beating, which produced significantly lower yields when compared to lysis with the enzyme cocktail or 30 min of lysis with lysozyme only. However, this did not result in a statistically significant difference in the observed alpha diversity of samples. The beta diversity (Bray-Curtis dissimilarity) between different lysis methods was statistically significantly different, but this difference was small compared to differences between samples, and did not affect the grouping of samples with similar vaginal bacterial community structure by hierarchical clustering. An understanding of how laboratory methods affect the results of microbiota studies is vital in order to accurately interpret the results and make valid comparisons between studies. Our results indicate that the choice of lysis method does not prevent the detection of effects relating to the type of vaginal bacterial community, one of the main outcome measures of epidemiological studies. However, we recommend that the same method be used on all samples within a particular study.
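For reference, the Bray-Curtis dissimilarity used above as the beta-diversity measure can be computed directly with SciPy; the taxon counts below are invented, comparing the same sample processed with two of the lysis methods named.

```python
# Bray-Curtis dissimilarity between two taxon-count profiles (0 = identical).
from scipy.spatial.distance import braycurtis

lysozyme_30min = [620, 180, 90, 40, 10]   # hypothetical counts per taxon
enzyme_cocktail = [600, 200, 85, 45, 12]
print(braycurtis(lysozyme_30min, enzyme_cocktail))
```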
TNO/Centaurs grouping tested with asteroid data sets
NASA Astrophysics Data System (ADS)
Fulchignoni, M.; Birlan, M.; Barucci, M. A.
2001-11-01
Recently, we have discussed the possible subdivision into a few groups of a sample of 22 TNO and Centaurs for which BVRIJ photometry was available (Barucci et al., 2001, A&A, 371, 1150). We obtained these results using the multivariate statistics adopted to define the current asteroid taxonomy, namely Principal Components Analysis and the G-mode method (Tholen & Barucci, 1989, in ASTEROIDS II). How do these methods work with a very small statistical sample such as the TNO/Centaurs one? Theoretically, the number of degrees of freedom of the sample is correct. In fact, it is 88 in our case and has to be larger than 50 to cope with the requirements of the G-mode. Does the random sampling of the small number of members of a large population contain enough information to reveal some structure in the population? We extracted several samples of 22 asteroids out of a database of 86 objects of known taxonomic type for which BVRIJ photometry is available from ECAS (Zellner et al. 1985, ICARUS 61, 355), SMASS II (S.W. Bus, 1999, PhD Thesis, MIT), and the Bell et al. Atlas of the asteroid infrared spectra. The objects constituting the first sample were selected in order to give a good representation of the major asteroid taxonomic classes (at least three samples per class): C, S, D, A, and G. Both methods were able to distinguish all these groups, confirming the validity of the adopted methods. The S class is hard to identify as a consequence of the choice of the I and J variables, which implies a lack of information on the absorption band at 1 micron. The other samples were obtained by random choice of the objects. Not all the major groups were well represented (fewer than three samples per group), but the general trend of the asteroid taxonomy was always obtained. We conclude that the quoted grouping of TNO/Centaurs is representative of some physico-chemical structure of the outer solar system small body population.
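The Principal Components Analysis step of such a taxonomy test can be reproduced on any small table of color indices. A sketch using scikit-learn with made-up BVRIJ-derived colors (illustrative only, not the ECAS/SMASS data):

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows: objects; columns: color indices derived from BVRIJ photometry
# (e.g. B-V, V-R, V-I, V-J). Values here are illustrative placeholders.
colors = np.array([
    [0.70, 0.38, 0.73, 1.20],
    [0.84, 0.47, 0.90, 1.55],
    [0.65, 0.35, 0.70, 1.10],
    [0.92, 0.55, 1.05, 1.80],
])

pca = PCA(n_components=2)
scores = pca.fit_transform(colors)     # objects projected onto PC1/PC2
print(pca.explained_variance_ratio_)   # variance captured by each component
print(scores)                          # clustering in this plane suggests groups
```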
Respiratory analysis of coupled mitochondria in cryopreserved liver biopsies.
García-Roche, Mercedes; Casal, Alberto; Carriquiry, Mariana; Radi, Rafael; Quijano, Celia; Cassina, Adriana
2018-07-01
The aim of this work was to develop a cryopreservation method for small liver biopsies for in situ mitochondrial function assessment. Herein we describe a detailed protocol for tissue collection, cryopreservation, high-resolution respirometry using complex I and II substrates, and the calculation and interpretation of respiratory parameters. Liver biopsies from cow and rat were sequentially frozen in a medium containing dimethylsulfoxide as a cryoprotectant and stored for up to 3 months at -80 °C. Oxygen consumption rate studies of fresh and cryopreserved samples revealed that most respiratory parameters remained unchanged. Additionally, outer mitochondrial membrane integrity was assessed by adding cytochrome c, proving that our cryopreservation method does not harm mitochondrial structure. In sum, we present a reliable way to cryopreserve small liver biopsies without affecting mitochondrial function. Our protocol will enable the transport and storage of samples, extending and facilitating mitochondrial function analysis of liver biopsies. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hanasaki, Itsuo; Kawano, Satoyuki
2013-11-01
Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility.
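The large-deviation analysis proposed above rests on estimating a scaled cumulant generating function from finite trajectory samples, of the form λ(k) = (1/t) ln⟨exp(k·x_t)⟩; deviations from the parabola D k² of pure diffusion flag non-Gaussian motility. A sketch of such an estimator on synthetic displacements (an assumed simplification, not the authors' exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def scgf(displacements, t, k_values):
    """Finite-sample estimate of the scaled cumulant generating function
    lambda(k) = (1/t) * log < exp(k * x_t) > over trajectory endpoints."""
    x = np.asarray(displacements)
    return np.array([np.log(np.mean(np.exp(k * x))) / t for k in k_values])

# Synthetic 1D endpoint displacements after time t (stand-in for
# run-and-tumble data); a purely diffusive baseline with D = 1.
t = 10.0
x_t = rng.normal(0.0, np.sqrt(2 * 1.0 * t), size=10000)

k = np.linspace(-0.5, 0.5, 11)
lam = scgf(x_t, t, k)
# For pure diffusion lambda(k) ~ D k^2; finite samples bias the tails,
# the effect the abstract proposes to exploit rather than discard.
print(np.round(lam, 4))
```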
An aptamer-based paper microfluidic device for the colorimetric determination of cocaine.
Wang, Ling; Musile, Giacomo; McCord, Bruce R
2018-02-01
A method utilizing paper microfluidics coupled with gold nanoparticles and two anticocaine aptamers has been developed to detect seized cocaine samples. The ready-to-use format involves a paper strip that produces a visible color change, resulting from the salt-induced aggregation of gold nanoparticles, indicating the presence of the drug. This format is specific for the detection of cocaine. The visual LOD for the method was 2.5 μg and the camera-based LOD was 2.36 μg. The operation of the device is easy and rapid, and does not require extensive training or instrumentation. All of the materials utilized in the device are safe and environmentally friendly. This device should prove to be a useful tool for the screening of forensic samples. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Fischer, R. P.; Grun, J.; Ting, A.; Felix, C.; Peckerar, M.; Fatemi, M.; Manka, C. K.
1999-11-01
Current semiconductor annealing methods are based on thermal processes which are accompanied by diffusion that degrades the definition of device features or causes other problems. This will be a serious obstacle for the production of next-generation ultra-high density, low power semiconductor devices. Experiments underway at NRL utilize a new annealing method which is much faster than thermal annealing and does not depend upon thermal energy (J. Grun et al., Phys. Rev. Letters 78, 1584 (1997)). A 10 J, 30 nsec, 1.053 μm wavelength laser pulse is focused to approximately 1 mm diameter on a silicon sample. Acoustic and shock waves propagate from the impact region, depositing mechanical energy into the material and annealing the silicon. Experimental results will be presented on annealing neutron-transmutation-doped (NTD) and ion implanted silicon samples with impurity concentrations from 1 × 10^15-3 × 10^20/cm^3.
OSM-Classic : An optical imaging technique for accurately determining strain
NASA Astrophysics Data System (ADS)
Aldrich, Daniel R.; Ayranci, Cagri; Nobes, David S.
OSM-Classic is a program designed in MATLAB® to provide a method of accurately determining strain in a test sample using an optical imaging technique. Measuring strain for the mechanical characterization of materials is most commonly performed with extensometers, LVDTs (linear variable differential transformers), and strain gauges; however, these strain measurement methods suffer from their fragile nature and it is not particularly easy to attach these devices to the material for testing. To alleviate these potential problems, an optical approach that does not require contact with the specimen can be implemented to measure the strain. OSM-Classic is software that interrogates a series of images to determine elongation in a test sample and hence, strain of the specimen. It was designed to provide a graphical user interface that includes image processing with a dynamic region of interest. Additionally, the strain is calculated directly, with active feedback provided during processing.
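The core computation, converting the tracked separation of two gauge marks across frames into engineering strain, can be sketched in a few lines. A Python sketch (the actual MATLAB program's image processing is more involved; the gauge lengths below are hypothetical):

```python
import numpy as np

def engineering_strain(gauge_lengths_px):
    """Engineering strain per frame from gauge length L: (L - L0) / L0,
    where L0 is the undeformed length in the first image."""
    L = np.asarray(gauge_lengths_px, float)
    return (L - L[0]) / L[0]

# Hypothetical gauge lengths (pixels) measured between two tracked
# markers in successive images of a tensile test.
lengths = [400.0, 402.1, 404.6, 407.0, 409.8]
print(engineering_strain(lengths))  # dimensionless strain, one value per frame
```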
NASA Astrophysics Data System (ADS)
Hawkins, Cameron; Tschuaner, Oliver; Fussell, Zachary; Smith, Jesse
2017-06-01
A novel approach that spatially identifies inhomogeneities from the microscale (defects, conformational disorder) to the mesoscale (voids, inclusions) is developed using synchrotron x-ray methods: tomography, Lang topography, and micro-diffraction mapping. These techniques provide a non-destructive method for characterization of mm-sized samples prior to shock experiments. These characterization maps can be used to correlate continuum-level measurements in shock compression experiments to the mesoscale and microscale structure. Specifically examined is a sample of C4. We show extensive conformational disorder in gamma-RDX, which is the main component. Further, we observe that the minor HMX component in C4 contains at least two different phases: alpha- and beta-HMX. This work was supported by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy and by the Site-Directed Research and Development Program. DOE/NV/25946-3071.
Calibrationless parallel magnetic resonance imaging: a joint sparsity model.
Majumdar, Angshul; Chaudhury, Kunal Narayan; Ward, Rabab
2013-12-05
State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity map for SENSE and SMASH, and interpolation weights for GRAPPA and SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we have proposed a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis and synthesis prior joint-sparsity problems. This work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain and an eight-channel Shepp-Logan phantom. Two sampling methods were used: Variable Density Random sampling and non-Cartesian Radial sampling. For the brain data, an acceleration factor of 4 was used, and for the other an acceleration factor of 6 was used. The reconstruction results were quantitatively evaluated based on the Normalised Mean Squared Error between the reconstructed image and the original. The qualitative evaluation was based on the actual reconstructed images. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and l1-SPIRiT) and two calibration-free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.
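Two ingredients here are easy to illustrate: the NMSE figure of merit, and a row-wise soft-thresholding operator of the kind commonly used to promote joint sparsity across coil channels. A sketch under those assumptions (not the authors' actual solver):

```python
import numpy as np

def nmse(recon, ref):
    """Normalised mean squared error between reconstruction and reference."""
    return np.linalg.norm(recon - ref) ** 2 / np.linalg.norm(ref) ** 2

def joint_soft_threshold(X, tau):
    """Proximal operator of the l2,1 norm: shrink each row of the
    coefficient matrix (rows = sparse-domain positions, columns = coil
    channels) by its l2 norm, so entire rows shared across channels
    are zeroed together."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return X * scale

X = np.random.randn(16, 8)           # toy coefficients, 16 positions x 8 channels
X_shrunk = joint_soft_threshold(X, 2.0)
print(nmse(X_shrunk, X))             # fraction of energy removed by shrinkage
```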
Yang, Xiao-Huan; Cheng, Xiao-Lan; Qin, Bing; Cai, Zhuo-Ya; Cai, Xiong; Liu, Shao; Wang, Qi; Qin, Yong
2016-05-30
The Kang-Jing (KJ) formula is a compound preparation made from 12 kinds of herbs. So far, four different methods (M1-M4) have been documented for KJ preparation, but the influence of the preparation method on the holistic quality of KJ has remained unknown. In this study, a strategy was proposed to investigate the influence of different preparation methods on the holistic quality of KJ using ultra-high performance liquid chromatography coupled with quadrupole/time of flight mass spectrometry (UHPLC-QTOF-MS/MS) based chemical profiling. A total of 101 compounds, mainly belonging to flavonoids, tanshinones, monoterpene glycosides, triterpenoid saponins, alkaloids, phenolic acids and volatile oils, were identified. Among these compounds, glaucine was detected only in M3/M4 samples, while two dehydrocorydaline isomers were detected only in M2/M3/M4 samples. Tetrahydrocolumbamine, ethylic lithospermic acid, salvianolic acid E and rosmarinic acid were only detected in M1/M3/M4 samples. In the subsequent quantitative analysis, 12 major compounds were determined by UHPLC-MS/MS. The proposed method was validated with respect to linearity, accuracy, precision and recovery. It was found that the contents of marker compounds varied significantly among samples prepared by different methods. These results demonstrate that the preparation method does significantly affect the holistic quality of KJ. The UHPLC-QTOF-MS/MS based chemical profiling approach is efficient and reliable for comprehensive quality evaluation of KJ. Collectively, this study provides the chemical evidence for revealing the material basis of KJ, and establishes a simple and accurate chemical profiling method for its quality control. Copyright © 2016 Elsevier B.V. All rights reserved.
Wagner, Rebecca; Wetzel, Stephanie J; Kern, John; Kingston, H M Skip
2012-02-01
The employment of chemical weapons by rogue states and/or terrorist organizations is an ongoing concern in the United States. The quantitative analysis of nerve agents must be rapid and reliable for use in the private and public sectors. Current methods describe a tedious and time-consuming derivatization for gas chromatography-mass spectrometry and liquid chromatography in tandem with mass spectrometry. Two solid-phase extraction (SPE) techniques for the analysis of glyphosate and methylphosphonic acid are described with the utilization of isotopically enriched analytes for quantitation via atmospheric pressure chemical ionization-quadrupole time-of-flight mass spectrometry (APCI-Q-TOF-MS) that does not require derivatization. Solid-phase extraction-isotope dilution mass spectrometry (SPE-IDMS) involves pre-equilibration of a naturally occurring sample with an isotopically enriched standard. The second extraction method, i-Spike, involves loading an isotopically enriched standard onto the SPE column before the naturally occurring sample. The sample and the spike are then co-eluted from the column enabling precise and accurate quantitation via IDMS. The SPE methods in conjunction with IDMS eliminate concerns of incomplete elution, matrix and sorbent effects, and MS drift. For accurate quantitation with IDMS, the isotopic contribution of all atoms in the target molecule must be statistically taken into account. This paper describes two newly developed sample preparation techniques for the analysis of nerve agent surrogates in drinking water as well as statistical probability analysis for proper molecular IDMS. The methods described in this paper demonstrate accurate molecular IDMS using APCI-Q-TOF-MS with limits of quantitation as low as 0.400 mg/kg for glyphosate and 0.031 mg/kg for methylphosphonic acid. Copyright © 2012 John Wiley & Sons, Ltd.
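The quantitation step follows the standard isotope dilution relation. A minimal sketch of the single-spike textbook form (illustrative numbers; this deliberately omits the full isotope-pattern deconvolution over every atom in the molecule that the paper applies):

```python
def idms_concentration(c_spike, m_spike, m_sample, R_spike, R_sample, R_blend):
    """Single isotope dilution MS, simplified textbook form: analyte
    concentration from the isotope-amount ratio R (natural/enriched
    isotopologue) measured in spike, sample, and blend. Abundance-sum
    factors and per-atom isotopic contributions are omitted here."""
    return (c_spike * (m_spike / m_sample)
            * (R_spike - R_blend) / (R_blend - R_sample))

# Hypothetical inputs: enriched glyphosate spike blended with a water sample
print(idms_concentration(c_spike=1.00,    # mg/kg in spike solution
                         m_spike=0.50,    # g of spike added
                         m_sample=5.00,   # g of sample
                         R_spike=0.02, R_sample=50.0, R_blend=1.5))
```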
Roe, Daniel R; Bergonzo, Christina; Cheatham, Thomas E
2014-04-03
Many problems studied via molecular dynamics require accurate estimates of various thermodynamic properties, such as the free energies of different states of a system, which in turn requires well-converged sampling of the ensemble of possible structures. Enhanced sampling techniques are often applied to provide faster convergence than is possible with traditional molecular dynamics simulations. Hamiltonian replica exchange molecular dynamics (H-REMD) is a particularly attractive method, as it allows the incorporation of a variety of enhanced sampling techniques through modifications to the various Hamiltonians. In this work, we study the enhanced sampling of the RNA tetranucleotide r(GACC) provided by H-REMD combined with accelerated molecular dynamics (aMD), where a boosting potential is applied to torsions, and compare this to the enhanced sampling provided by H-REMD in which torsion potential barrier heights are scaled down to lower force constants. We show that H-REMD and multidimensional REMD (M-REMD) combined with aMD does indeed enhance sampling for r(GACC), and that the addition of the temperature dimension in the M-REMD simulations is necessary to efficiently sample rare conformations. Interestingly, we find that the rate of convergence can be improved in a single H-REMD dimension by simply increasing the number of replicas from 8 to 24 without increasing the maximum level of bias. The results also indicate that factors beyond replica spacing, such as round trip times and time spent at each replica, must be considered in order to achieve optimal sampling efficiency.
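The aMD component applies a boost whenever the (torsion) potential falls below a threshold; the widely used functional form (Hamelberg et al.) is ΔV = (E − V)²/(α + E − V) for V < E and 0 otherwise. A sketch assuming that standard form is the one in play here:

```python
import numpy as np

def amd_boost(V, E, alpha):
    """Accelerated-MD boost potential applied when the potential V lies
    below the threshold E; alpha controls how aggressively basins are
    flattened. Returns Delta V per input energy."""
    V = np.asarray(V, float)
    return np.where(V < E, (E - V) ** 2 / (alpha + (E - V)), 0.0)

V = np.linspace(-20.0, 10.0, 7)    # illustrative torsion energies, kcal/mol
print(amd_boost(V, E=5.0, alpha=4.0))
# Reweighting of observables later uses per-frame exp(+dV/kT) factors.
```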
Sampling free energy surfaces as slices by combining umbrella sampling and metadynamics.
Awasthi, Shalini; Kapil, Venkat; Nair, Nisanth N
2016-06-15
Metadynamics (MTD) is a very powerful technique to sample high-dimensional free energy landscapes, and due to its self-guiding property, the method has been successful in studying complex reactions and conformational changes. MTD sampling is based on filling the free energy basins by biasing potentials, and thus for cases with flat, broad, and unbound free energy wells, the computational time to sample them becomes very large. To alleviate this problem, we combine the standard Umbrella Sampling (US) technique with MTD to sample orthogonal collective variables (CVs) in a simultaneous way. Within this scheme, we construct the equilibrium distribution of CVs from biased distributions obtained from independent MTD simulations with umbrella potentials. Reweighting is carried out by a procedure that combines US reweighting and Tiwary-Parrinello MTD reweighting within the Weighted Histogram Analysis Method (WHAM). The approach is ideal for a controlled sampling of a CV in an MTD simulation, making it computationally efficient in sampling flat, broad, and unbound free energy surfaces. This technique also allows for a distributed sampling of a high-dimensional free energy surface, further increasing the computational efficiency in sampling. We demonstrate the application of this technique in sampling high-dimensional surfaces for various chemical reactions using ab initio and QM/MM hybrid molecular dynamics simulations. Further, to carry out MTD bias reweighting for computing forward reaction barriers in ab initio or QM/MM simulations, we propose a computationally affordable approach that does not require recrossing trajectories. © 2016 Wiley Periodicals, Inc.
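The WHAM step that stitches the umbrella windows together can be sketched as a short self-consistent iteration over binned CV histograms. A minimal sketch of standard WHAM only (the Tiwary-Parrinello MTD reweighting factors the paper folds in are omitted, and the toy data are made up):

```python
import numpy as np

def wham(hists, bias_kT, n_iter=1000):
    """hists: (n_windows, n_bins) histogram counts from each umbrella window.
    bias_kT: (n_windows, n_bins) umbrella bias energy at bin centres, in kT.
    Returns the unbiased probability distribution over bins."""
    N = hists.sum(axis=1)                  # samples per window
    f = np.zeros(len(hists))               # window free energies (kT units)
    for _ in range(n_iter):
        denom = (N[:, None] * np.exp(f[:, None] - bias_kT)).sum(axis=0)
        P = hists.sum(axis=0) / denom      # unbiased bin probabilities
        P /= P.sum()
        f = -np.log((np.exp(-bias_kT) * P).sum(axis=1))
    return P

# Tiny toy: 2 windows, 4 bins; free energy follows as -kT * log(P).
hists = np.array([[40., 30., 20., 10.], [10., 20., 30., 40.]])
bias = np.array([[0., 1., 2., 3.], [3., 2., 1., 0.]])
print(wham(hists, bias))
```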
NASA Astrophysics Data System (ADS)
Krause, Marita; Irwin, Judith; Wiegert, Theresa; Miskolczi, Arpad; Damas-Segovia, Ancor; Beck, Rainer; Li, Jiang-Tao; Heald, George; Müller, Peter; Stein, Yelena; Rand, Richard J.; Heesen, Volker; Walterbos, Rene A. M.; Dettmar, Ralf-Jürgen; Vargas, Carlos J.; English, Jayanne; Murphy, Eric J.
2018-03-01
Aims: The vertical halo scale height is a crucial parameter to understand the transport of cosmic-ray electrons (CRE) and their energy loss mechanisms in spiral galaxies. Until now, the radio scale height could only be determined for a few edge-on galaxies because of insufficient sensitivity at high resolution. Methods: We developed a sophisticated method for the scale height determination of edge-on galaxies. With this, we determined the scale heights and radial scale lengths for a sample of 13 galaxies from the CHANG-ES radio continuum survey in two frequency bands. Results: The sample average values for the radio scale heights of the halo are 1.1 ± 0.3 kpc in C-band and 1.4 ± 0.7 kpc in L-band. From the frequency dependence analysis of the halo scale heights, we found that the wind velocities (estimated using the adiabatic loss time) are above the escape velocity. We found that the halo scale heights increase linearly with the radio diameters. In order to exclude the diameter dependence, we defined a normalized scale height h̃, which is quite similar for all sample galaxies at both frequency bands and does not depend on the star formation rate or the magnetic field strength. However, h̃ shows a tight anticorrelation with the mass surface density. Conclusions: The sample galaxies with smaller scale lengths are more spherical in the radio emission, while those with larger scale lengths are flatter. The radio scale height depends mainly on the radio diameter of the galaxy. The sample galaxies are consistent with an escape-dominated radio halo with convective cosmic-ray propagation, indicating that galactic winds are a widespread phenomenon in spiral galaxies. While a higher star formation rate or star formation surface density does not lead to a higher wind velocity, we found for the first time observational evidence of a gravitational deceleration of the CRE outflow, i.e. a lowering of the wind velocity with distance from the galactic disk.
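At its simplest, a scale-height determination fits an exponential to the vertical emission profile, I(z) = I0 exp(−|z|/h); the CHANG-ES method additionally deconvolves the beam and models the disk contribution, which this sketch omits. A minimal fit on synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def halo_profile(z, I0, h):
    """Exponential vertical intensity profile with scale height h (kpc)."""
    return I0 * np.exp(-np.abs(z) / h)

# Synthetic vertical cut through an edge-on galaxy's radio halo
z = np.linspace(-4, 4, 33)                                   # kpc
rng = np.random.default_rng(1)
I = halo_profile(z, 10.0, 1.2) + rng.normal(0, 0.2, z.size)  # noisy profile

popt, pcov = curve_fit(halo_profile, z, I, p0=(5.0, 1.0))
print(f"fitted scale height: {popt[1]:.2f} kpc")             # ~1.2 kpc here
```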
The effect of ozonization on furniture dust: microbial content and immunotoxicity in vitro.
Huttunen, Kati; Kauhanen, Eeva; Meklin, Teija; Vepsäläinen, Asko; Hirvonen, Maija-Riitta; Hyvärinen, Anne; Nevalainen, Aino
2010-05-01
Moisture and mold problems in buildings also contaminate the furniture and other movable property. If cleaning of the contaminated furniture is neglected, it may continue to cause problems for the occupants even after the moisture-damage repairs. The aim of this study was to determine the effectiveness of high-efficiency ozone treatment in cleaning furniture from moisture-damaged buildings. In addition, the effectiveness of two cleaning methods was compared. Samples were vacuumed from the padded areas before and after the treatment. The microbial flora and concentrations in the dust samples were determined by quantitative cultivation and QPCR methods. The immunotoxic potential of the dust samples was analyzed by measuring effects on cell viability and production of inflammatory mediators in vitro. Concentrations of viable microbes decreased significantly in most of the samples after cleaning. Cleaning with a combined steam wash and ozonization was a more effective method than ozonization alone, but the difference was not statistically significant. Detection of fungal species with PCR showed a slight but nonsignificant decrease in concentrations after the cleaning. The immunotoxic potential of the collected dust decreased significantly in most of the samples. However, in a small subgroup of samples, increased concentrations of microbes and immunotoxicological activity were detected. This study shows that a transportable cleaning unit with high-efficiency ozonization is in most cases effective in decreasing the concentrations of viable microbes and the immunotoxicological activity of furniture dust. However, the method does not destroy or remove all fungal material present in the dust, as detected with QPCR analysis, and in some cases the cleaning procedure may increase the microbial concentrations and immunotoxicity of the dust. Copyright 2010 Elsevier B.V. All rights reserved.
Peck, Michael W.; Plowman, June; Aldus, Clare F.; Wyatt, Gary M.; Penaloza Izurieta, Walter; Stringer, Sandra C.; Barker, Gary C.
2010-01-01
The highly potent botulinum neurotoxins are responsible for botulism, a severe neuroparalytic disease. Strains of nonproteolytic Clostridium botulinum form neurotoxins of types B, E, and F and are the main hazard associated with minimally heated refrigerated foods. Recent developments in quantitative microbiological risk assessment (QMRA) and food safety objectives (FSO) have made food safety more quantitative and include, as inputs, probability distributions for the contamination of food materials and foods. A new method that combines a selective enrichment culture with multiplex PCR has been developed and validated to enumerate specifically the spores of nonproteolytic C. botulinum. Key features of this new method include the following: (i) it is specific for nonproteolytic C. botulinum (and does not detect proteolytic C. botulinum), (ii) the detection limit has been determined for each food tested (using carefully structured control samples), and (iii) a low detection limit has been achieved by the use of selective enrichment and large test samples. The method has been used to enumerate spores of nonproteolytic C. botulinum in 637 samples of 19 food materials included in pasta-based minimally heated refrigerated foods and in 7 complete foods. A total of 32 samples (5 egg pastas and 27 scallops) contained spores of nonproteolytic C. botulinum type B or F. The majority of samples contained <100 spores/kg, but one sample of scallops contained 444 spores/kg. Nonproteolytic C. botulinum type E was not detected. Importantly, for QMRA and FSO, the construction of probability distributions will enable the frequency of packs containing particular levels of contamination to be determined. PMID:20709854
Bonczyk, Michal; Michalik, Boguslaw; Chmielewska, Izabela
2017-03-01
The radioactive lead isotope 210Pb occurs in waste originating from the metal smelting and refining industry, gas and oil extraction, and sometimes underground coal mines, and such waste is very often deposited in the natural environment. Radiation risk assessment requires accurate knowledge of the concentration of 210Pb in these materials. Laboratory measurement seems to be the only reliable method applicable to environmental 210Pb monitoring. One of the methods is gamma-ray spectrometry, which is a very fast and cost-effective way to determine the 210Pb concentration. On the other hand, the self-attenuation of the gamma ray from 210Pb (46.5 keV) in a sample is significant, as it depends not only on sample density but also on sample chemical composition (sample matrix). This phenomenon is often responsible for under-estimation of the 210Pb activity concentration when gamma spectrometry is applied with no regard to the relevant corrections. Consequently, the corresponding radiation risk can also be improperly evaluated. Sixty samples of coal mining solid tailings (sediments created from underground mining water) were analysed. A transmission method, slightly modified and adapted to the existing laboratory conditions, has been applied for the accurate measurement of the 210Pb concentration. The observed concentrations of 210Pb range between 42.2 and 11,700 Bq·kg−1 of dry mass. Experimentally obtained correction factors related to sample density and elemental composition range between 1.11 and 6.97. Neglecting this factor can cause significant errors or underestimations in radiological risk assessment. The obtained results have been used for environmental radiation risk assessment performed with the ERICA tool, assuming exposure conditions typical for the final destination of this kind of waste.
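Transmission-based self-attenuation corrections of this kind typically multiply the apparent activity by ln(T)/(T − 1), where T is the 46.5 keV count rate transmitted through the sample divided by that through an empty container (the Cutshall form). A sketch assuming that standard formulation:

```python
import numpy as np

def self_attenuation_factor(T):
    """Cutshall-type correction factor ln(T)/(T - 1) for a gamma line
    self-attenuated within the sample; T is the measured transmission
    (sample count rate / reference count rate), with 0 < T < 1."""
    T = np.asarray(T, float)
    return np.log(T) / (T - 1.0)

T = np.array([0.9, 0.5, 0.2, 0.1])    # stronger matrix effect at lower T
print(self_attenuation_factor(T))     # ~1.05, 1.39, 2.01, 2.56
# corrected activity = measured activity * factor; the paper's factors of
# 1.11-6.97 correspond to quite opaque matrices at 46.5 keV.
```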
Atkins, Salla; Launiala, Annika; Kagaha, Alexander; Smith, Helen
2012-04-30
Health policy makers now have access to a greater number and variety of systematic reviews to inform different stages in the policy making process, including reviews of qualitative research. The inclusion of mixed methods studies in systematic reviews is increasing, but these studies pose particular challenges to methods of review. This article examines the quality of the reporting of mixed methods and qualitative-only studies. We used two completed systematic reviews to generate a sample of qualitative studies and mixed method studies in order to make an assessment of how the quality of reporting and rigor of qualitative-only studies compares with that of mixed-methods studies. Overall, the reporting of qualitative studies in our sample was consistently better when compared with the reporting of mixed methods studies. We found that mixed methods studies are less likely to provide a description of the research conduct or qualitative data analysis procedures and less likely to be judged credible or provide rich data and thick description compared with standalone qualitative studies. Our time-related analysis shows that for both types of study, papers published since 2003 are more likely to report on the study context, describe analysis procedures, and be judged credible and provide rich data. However, the reporting of other aspects of research conduct (i.e. descriptions of the research question, the sampling strategy, and data collection methods) in mixed methods studies does not appear to have improved over time. Mixed methods research makes an important contribution to health research in general, and could make a more substantial contribution to systematic reviews. Through our careful analysis of the quality of reporting of mixed methods and qualitative-only research, we have identified areas that deserve more attention in the conduct and reporting of mixed methods research.
Compressive Sampling Based Interior Reconstruction for Dynamic Carbon Nanotube Micro-CT
Yu, Hengyong; Cao, Guohua; Burk, Laurel; Lee, Yueh; Lu, Jianping; Santago, Pete; Zhou, Otto; Wang, Ge
2010-01-01
In the computed tomography (CT) field, one recent invention is the so-called carbon nanotube (CNT) based field emission x-ray technology. On the other hand, compressive sampling (CS) based interior tomography is a new innovation. Combining the strengths of these two novel subjects, we apply the interior tomography technique to local mouse cardiac imaging using respiration and cardiac gating with a CNT based micro-CT scanner. The major features of our method are: (1) it does not need exact prior knowledge inside an ROI; and (2) two orthogonal scout projections are employed to regularize the reconstruction. Both numerical simulations and in vivo mouse studies are performed to demonstrate the feasibility of our methodology. PMID:19923686
How do Socio-Economic Factors Influence Interest to Go to Vocational High Schools?
NASA Astrophysics Data System (ADS)
Utomo, N. F.; Wonggo, D.
2018-02-01
This study aimed to reveal the interest of junior high school students in the Sangihe Islands, Indonesia, in going to vocational high schools and the factors affecting it. This study used the quantitative method with the ex-post facto approach. The population consisted of 332 students, and a sample of 178 students was established using the proportional random sampling technique, applying Isaac's table at the 5% error level. The results show that the family's socio-economic condition positively contributes 26% to interest in going to vocational high schools, thus showing that the family's socio-economic condition is influential and contributes to junior high school students' interest in going to vocational high schools.
Differences in sampling techniques on total post-mortem tryptase.
Tse, R; Garland, J; Kesha, K; Elstub, H; Cala, A D; Ahn, Y; Stables, S; Palmiere, C
2018-05-01
The measurement of mast cell tryptase is commonly used to support the diagnosis of anaphylaxis. In the post-mortem setting, the literature recommends sampling from peripheral blood sources (femoral blood) but does not specify the exact sampling technique. Sampling techniques vary between pathologists, and it is unclear whether different sampling techniques have any impact on post-mortem tryptase levels. The aim of this study is to compare the difference in femoral total post-mortem tryptase levels between two sampling techniques. A 6-month retrospective study comparing femoral total post-mortem tryptase levels between (1) aspirating femoral vessels with a needle and syringe prior to evisceration and (2) femoral vein cut down during evisceration. Twenty cases were identified, with three cases excluded from analysis. There was a statistically significant difference (paired t test, p < 0.05) between mean post-mortem tryptase by aspiration (10.87 μg/L) and by cut down (14.15 μg/L). The mean difference between the two methods was 3.28 μg/L (median, 1.4 μg/L; min, −6.1 μg/L; max, 16.5 μg/L; 95% CI, 0.001-6.564 μg/L). Femoral total post-mortem tryptase is significantly different, albeit by a small amount, between the two sampling methods. The clinical significance of this finding and what factors may contribute to it are unclear. When requesting post-mortem tryptase, the pathologist should consider documenting the exact blood collection site and method used for collection. In addition, blood samples acquired by different techniques should not be mixed together and should be analyzed separately if possible.
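The comparison above is a paired design (each case contributes one aspirated and one cut-down value), so a paired t test is the natural analysis. A minimal sketch with made-up values, not the study data:

```python
import numpy as np
from scipy import stats

# Hypothetical femoral tryptase pairs (ug/L): aspiration vs vein cut down,
# one pair per case.
aspirated = np.array([8.2, 12.1, 9.5, 14.0, 10.3])
cut_down  = np.array([9.9, 13.0, 12.8, 15.1, 12.6])

t, p = stats.ttest_rel(cut_down, aspirated)   # paired t test
diff = cut_down - aspirated
print(f"mean difference {diff.mean():.2f} ug/L, t = {t:.2f}, p = {p:.3f}")
```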
Remote Sensing of Soil Moisture: A Comparison of Optical and Thermal Methods
NASA Astrophysics Data System (ADS)
Foroughi, H.; Naseri, A. A.; Boroomandnasab, S.; Sadeghi, M.; Jones, S. B.; Tuller, M.; Babaeian, E.
2017-12-01
Recent technological advances in satellite and airborne remote sensing have provided new means for large-scale soil moisture monitoring. Traditional methods for soil moisture retrieval require thermal and optical RS observations. In this study we compared the traditional trapezoid model, parameterized based on the land surface temperature-normalized difference vegetation index (LST-NDVI) space, with the recently developed optical trapezoid model OPTRAM, parameterized based on the shortwave infrared transformed reflectance (STR)-NDVI space, for an extensive sugarcane field located in southwestern Iran. Twelve Landsat-8 satellite images were acquired during the sugarcane growth season (April to October 2016). Reference in situ soil moisture data were obtained at 22 locations at different depths via core sampling and oven-drying. The obtained results indicate that the thermal/optical and optical prediction methods are comparable, both with volumetric moisture content estimation errors of about 0.04 cm³ cm−3. However, the OPTRAM model is more efficient because it does not require thermal data and can be parameterized once for a specific location, because unlike the LST-soil moisture relationship, the reflectance-soil moisture relationship does not significantly vary with environmental variables (e.g., air temperature, wind speed, etc.).
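OPTRAM's appeal is that it needs only optical bands: a SWIR reflectance transform, STR = (1 − R_SWIR)²/(2 R_SWIR), normalized between dry and wet trapezoid edges that are linear in NDVI (the formulation published in the OPTRAM literature). A sketch under that assumption; the edge parameters below are placeholders, since in practice they are fitted to a site's pixel cloud:

```python
import numpy as np

def optram_moisture(r_swir, ndvi, i_d, s_d, i_w, s_w):
    """Normalized soil moisture W in [0, 1] from the optical trapezoid:
    STR = (1 - R_swir)^2 / (2 R_swir); dry/wet edges STR = i + s * NDVI."""
    STR = (1.0 - r_swir) ** 2 / (2.0 * r_swir)
    STR_dry = i_d + s_d * ndvi
    STR_wet = i_w + s_w * ndvi
    return np.clip((STR - STR_dry) / (STR_wet - STR_dry), 0.0, 1.0)

# Placeholder edge parameters and one pixel's reflectance/NDVI values
print(optram_moisture(r_swir=0.18, ndvi=0.6,
                      i_d=0.0, s_d=1.0, i_w=3.0, s_w=2.5))  # ~0.33
```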
Mapping the nonlinear optical susceptibility by noncollinear second-harmonic generation.
Larciprete, M C; Bovino, F A; Giardina, M; Belardini, A; Centini, M; Sibilia, C; Bertolotti, M; Passaseo, A; Tasco, V
2009-07-15
We present a method, based on noncollinear second-harmonic generation, to evaluate the nonzero elements of the nonlinear optical susceptibility. At a fixed incidence angle, the generated signal is investigated by varying the polarization state of both fundamental beams. The resulting polarization charts allow us to verify whether Kleinman's symmetry rules can be applied to a given material, or to retrieve the absolute values of the nonlinear optical tensor terms from a reference measurement. Experimental measurements obtained from gallium nitride layers are reported. The proposed method does not require an angular scan and thus is useful when the generated signal is strongly affected by sample rotation.
Effects of graphene oxide doping on the structural and superconducting properties of YBa2Cu3O7-δ
NASA Astrophysics Data System (ADS)
Dadras, S.; Falahati, S.; Dehghani, S.
2018-05-01
In this research we report the effects of graphene oxide (GO) doping on the structural and superconducting properties of YBa2Cu3O7-δ (YBCO) high-temperature superconductors. We synthesized YBCO powder by the sol-gel method. After calcination, the powder was mixed with different weight percentages (0, 0.1, 0.3, 0.7, 1 wt.%) of GO. Refinement of the X-ray diffraction (XRD) data was carried out with the Material Analysis Using Diffraction (MAUD) program to obtain structural parameters such as the lattice parameters, the site occupancy of different atoms, and the orthorhombicity value for all the samples. Results show that GO doping does not change the structure of the YBCO compound or the Cu (1), Cu (2) and oxygen site occupancies. It seems that GO remains between the grains and can play the role of weak links. We found that GO addition to the YBCO compound increases the transition temperature (TC). The oxygen contents of all the GO-doped samples are increased with respect to the pure one. The strain (ɛ) of the samples, obtained from the Williamson-Hall method, varies with increasing GO doping. Scanning electron microscopy (SEM) images of the samples show better YBCO grain connections with GO doping.
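The Williamson-Hall analysis mentioned extracts microstrain (and crystallite size) from a linear fit of peak broadening: β cosθ = Kλ/D + 4ε sinθ. A minimal sketch with placeholder peak data, not the measured YBCO diffractogram:

```python
import numpy as np

K, lam = 0.9, 1.5406e-10       # shape factor and Cu K-alpha wavelength (m)

# Placeholder XRD peaks: 2-theta (degrees) and FWHM beta (radians),
# assumed already corrected for instrumental broadening.
two_theta = np.array([22.8, 32.5, 38.5, 46.7, 58.2])
beta = np.array([0.0022, 0.0028, 0.0031, 0.0036, 0.0044])

theta = np.radians(two_theta / 2.0)
x = 4.0 * np.sin(theta)        # Williamson-Hall abscissa
y = beta * np.cos(theta)       # Williamson-Hall ordinate

slope, intercept = np.polyfit(x, y, 1)   # slope = strain, intercept = K*lam/D
print(f"strain = {slope:.2e}, crystallite size = {K * lam / intercept:.2e} m")
```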
Cocaine abuse determination by ion mobility spectrometry using molecular imprinting.
Sorribes-Soriano, A; Esteve-Turrillas, F A; Armenta, S; de la Guardia, M; Herrero-Martínez, J M
2017-01-20
A cocaine-based molecularly imprinted polymer (MIP) has been produced by bulk polymerization and employed as a selective solid-phase extraction support for the determination of cocaine in saliva samples by ion mobility spectrometry (IMS). The most appropriate conditions for washing and elution of cocaine from the MIPs were studied, and the MIPs were characterized in terms of analyte binding capacity, reusability in water and saliva analysis, imprinting factor, and selectivity, and compared with non-imprinted polymers. The proposed MIP-IMS method provided an LOD of 18 μg L−1 and quantitative recoveries for blank saliva samples spiked with 75 to 500 μg L−1 cocaine. Oral fluid samples were collected from cocaine consumers and analysed by the proposed MIP-IMS methodology. Results, ranging from below the LOD to 51 ± 2 mg L−1, were statistically comparable to those obtained by a confirmatory gas chromatography-mass spectrometry method. Moreover, results were compared to a qualitative lateral flow immunoassay procedure, providing similar classification of the samples. Thus, MIP-IMS can be considered a useful alternative that provides fast, selective, and sensitive results with cost-affordable instrumentation that does not require skilled operators. Copyright © 2016 Elsevier B.V. All rights reserved.
Simultaneous extraction of proteins and metabolites from cells in culture
Sapcariu, Sean C.; Kanashova, Tamara; Weindl, Daniel; Ghelfi, Jenny; Dittmar, Gunnar; Hiller, Karsten
2014-01-01
Proper sample preparation is an integral part of all omics approaches, and can drastically impact the results of a wide number of analyses. As metabolomics and proteomics research approaches often yield complementary information, it is desirable to have a sample preparation procedure which can yield information for both types of analyses from the same cell population. This protocol explains a method for the separation and isolation of metabolites and proteins from the same biological sample, for simultaneous downstream use in metabolomics and proteomics analyses. In this way, two different levels of biological regulation can be studied in a single sample, minimizing the variance that would result from multiple experiments. This protocol can be used with both adherent and suspension cell cultures, and the extraction of metabolites from cellular medium is also detailed, so that cellular uptake and secretion of metabolites can be quantified. Advantages of this technique include: 1. It is inexpensive and quick to perform; this method does not require any kits. 2. It can be used on any cells in culture, including cell lines and primary cells extracted from living organisms. 3. A wide variety of different analysis techniques can be used, adding additional value to metabolomics data analyzed from a sample; this is of high value in experimental systems biology. PMID:26150938
NASA Astrophysics Data System (ADS)
Khoo, Geoffrey; Kuennemeyer, Rainer; Claycomb, Rod W.
2005-04-01
Currently, the state of the art of mastitis detection in dairy cows is the laboratory-based measurement of somatic cell count (SCC), which is time consuming and expensive. Alternative, rapid, and reliable on-farm measurement methods are required for effective farm management. We have investigated whether fluorescence lifetime measurements can determine SCC in fresh, unprocessed milk. The method is based on the change in fluorescence lifetime of ethidium bromide when it binds to DNA from the somatic cells. Milk samples were obtained from a Fullwood Merlin Automated Milking System and analysed within a twenty-four hour period, over which the SCC does not change appreciably. For reference, the milk samples were also sent to a testing laboratory where the SCC was determined by traditional methods. The results show that we can quantify SCC using the fluorescence photon migration method from a lower bound of 4×10⁵ cells mL−1 to an upper bound of 1×10⁷ cells mL−1. The upper bound is due to the reference method used, while the cause of the lower bound is not yet known.
Estimating haplotype frequencies by combining data from large DNA pools with database information.
Gasbarra, Dario; Kulathinal, Sangita; Pirinen, Matti; Sillanpää, Mikko J
2011-01-01
We assume that allele frequency data have been extracted from several large DNA pools, each containing genetic material of up to hundreds of sampled individuals. Our goal is to estimate the haplotype frequencies among the sampled individuals by combining the pooled allele frequency data with prior knowledge about the set of possible haplotypes. Such prior information can be obtained, for example, from a database such as HapMap. We present a Bayesian haplotyping method for pooled DNA based on a continuous approximation of the multinomial distribution. The proposed method is applicable when the sizes of the DNA pools and/or the number of considered loci exceed the limits of several earlier methods. In the example analyses, the proposed model clearly outperforms a deterministic greedy algorithm on real data from the HapMap database. With a small number of loci, the performance of the proposed method is similar to that of an EM-algorithm, which uses a multinormal approximation for the pooled allele frequencies, but which does not utilize prior information about the haplotypes. The method has been implemented using Matlab and the code is available upon request from the authors.
Investigating the Effectiveness of Teaching Methods Based on a Four-Step Constructivist Strategy
NASA Astrophysics Data System (ADS)
Çalik, Muammer; Ayas, Alipaşa; Coll, Richard K.
2010-02-01
This paper reports on an investigation of the effectiveness of an intervention using several different methods for teaching solution chemistry. The teaching strategy comprised a four-step approach derived from a constructivist view of learning. A sample consisting of 44 students (18 boys and 26 girls) was selected purposively from two different Grade 9 classes in the city of Trabzon, Turkey. Data collection employed a purpose-designed `solution chemistry concept test', consisting of 17 items, with the quantitative data from the survey supported by qualitative interview data. The findings suggest that using different methods embedded within the four-step constructivist-based teaching strategy enables students to refute some alternative conceptions, but does not completely eliminate student alternative conceptions for solution chemistry.
AFFINE-CORRECTED PARADISE: FREE-BREATHING PATIENT-ADAPTIVE CARDIAC MRI WITH SENSITIVITY ENCODING
Sharif, Behzad; Bresler, Yoram
2013-01-01
We propose a real-time cardiac imaging method with parallel MRI that allows for free breathing during imaging and does not require cardiac or respiratory gating. The method is based on the recently proposed PARADISE (Patient-Adaptive Reconstruction and Acquisition Dynamic Imaging with Sensitivity Encoding) scheme. The new acquisition method adapts the PARADISE k-t space sampling pattern according to an affine model of the respiratory motion. The reconstruction scheme involves multi-channel time-sequential imaging with time-varying channels. All model parameters are adapted to the imaged patient as part of the experiment and drive both data acquisition and cine reconstruction. Simulated cardiac MRI experiments using the realistic NCAT phantom show high quality cine reconstructions and robustness to modeling inaccuracies. PMID:24390159
Multiresolution Distance Volumes for Progressive Surface Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laney, D E; Bertram, M; Duchaineau, M A
2002-04-18
We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.
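Although the paper's O(n) transform operates on volumes, the underlying idea shows up already in one dimension: a forward and a backward sweep propagate distances from seed cells in linear time. A toy chamfer-style sketch (not the authors' exact algorithm):

```python
def distance_transform_1d(seeds, n):
    """O(n) two-pass distance transform on a 1D grid: distance of every
    cell to its nearest seed index."""
    seed_set = set(seeds)
    INF = float("inf")
    d = [0 if i in seed_set else INF for i in range(n)]
    for i in range(1, n):                 # forward sweep
        d[i] = min(d[i], d[i - 1] + 1)
    for i in range(n - 2, -1, -1):        # backward sweep
        d[i] = min(d[i], d[i + 1] + 1)
    return d

print(distance_transform_1d(seeds=[2, 9], n=12))
# -> [2, 1, 0, 1, 2, 3, 3, 2, 1, 0, 1, 2]
```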
Chemiluminescence measurements on irradiated garlic powder by the single photon counting technique
NASA Astrophysics Data System (ADS)
Narvaiz, P.
1995-02-01
The feasibility of identifying irradiated garlic powder by measuring chemiluminescence with liquid scintillation spectrometry was studied. Samples packed in 100 μm thick polyethylene bags were irradiated in a 60Co semi-industrial facility, with doses of 10 and 30 kGy. Control and irradiated samples were stored at 20 ± 4°C and 70 ± 10% RH in darkness for 2 years. Assays were performed to establish the best sample concentration and pH of the buffer solution in which garlic powder was to be suspended for its measurement. The water content of garlic samples was also analyzed throughout the storage time, as it related to the stability of the species causing luminescence. Chemiluminescence values diminished in every sample over storage time, following an exponential pattern. Irradiated samples showed values significantly higher than those of the control samples, according to the radiation dose, throughout the storage period. This does not necessarily imply that the identification of the irradiated samples would be certain, since values of control samples coming from different origins have been found to fluctuate within a rather wide range. Nonetheless, in principle, the method looks promising for the measurement of chemiluminescence in irradiated samples.
Novel method of realizing metal freezing points by induced solidification
NASA Astrophysics Data System (ADS)
Ma, C. K.
1997-07-01
The freezing point of a pure metal, tf, is the temperature at which the solid and liquid phases are in equilibrium. The purest metal available is actually a dilute alloy. Normally, the liquidus point of a sample, tl, at which the amount of the solid phase in equilibrium with the liquid phase is minute, provides the closest approximation to tf. Thus the experimental realization of tf is a matter of realizing tl. The common method is to cool a molten sample continuously so that it supercools and recalesces. The highest temperature after recalescence is normally the best experimental value of tl. In the realization, supercooling of the sample at the sample container and the thermometer well is desirable for the formation of dual solid-liquid interfaces to thermally isolate the sample and the thermometer. However, the subsequent recalescence of the supercooled sample requires the formation of a certain amount of solid, which is not minute. Obviously, the plateau temperature is not the liquidus point. In this article we describe a method that minimizes supercooling. The condition that provides tl is closely approached so that the latter may be measured. As the temperature of the molten sample approaches the anticipated value of tl, a small solid of the same alloy is introduced into the sample to induce solidification. In general, solidification does not occur as long as the temperature is above or at tl, and occurs as soon as the sample supercools minutely. Thus tl can be obtained, in principle, by observing the temperature at which induced solidification begins. In case the solid is introduced after the sample has supercooled slightly, a slight recalescence results and the subsequent maximum temperature is a close approximation to tl. We demonstrate that the principle of induced solidification is indeed applicable to freezing point measurements by applying it to the design of a copper-freezing-point cell for industrial applications, in which a supercooled sample is reheated and then induced to solidify by the solidification of an auxiliary sample. Further experimental studies are necessary to assess the practical advantages and disadvantages of the induction method.
Cocci, Andrea; Zuppi, Cecilia; Persichilli, Silvia
2013-01-01
Objective. 25-hydroxyvitamin D2/D3 (25-OHD2/D3) is a reliable biomarker of vitamin D status, and liquid chromatography-tandem mass spectrometry was recently proposed as a reference method for vitamin D status evaluation. The aim of this work is to compare two commercial kits (Chromsystems and PerkinElmer) for 25-OHD2/D3 determination on our entry-level LC-MS/MS system. Design and Methods. The Chromsystems kit adds an online trap column to an HPLC column and provides atmospheric pressure chemical ionization, an isotopically labeled internal standard, and 4 calibrator points. The PerkinElmer kit uses a solvent extraction and protein precipitation method. This kit can be used with or without derivatization, with electrospray and atmospheric pressure chemical ionization, respectively. For each analyte, there are isotopically labeled internal standards and 7 deuterated calibrator points. Results. Performance characteristics are acceptable for both methods. Mean bias between methods, calculated on 70 samples, was 1.9 ng/mL. Linear regression analysis gave an R² of 0.94. 25-OHD2 is detectable only with the PerkinElmer kit in the derivatized assay option. Conclusion. Both methods are suitable for routine use. The Chromsystems kit minimizes manual sample preparation, requiring only protein precipitation, but, with our system, 25-OHD2 is not detectable. The PerkinElmer kit without derivatization does not guarantee acceptable performance with our LC-MS/MS system, as the sample is not purified online. Derivatization provides sufficient sensitivity for 25-OHD2 detection. PMID:23555079
The effect of changes to the method of estimating the pollen count from aerobiological samples.
Sikoparija, Branko; Pejak-Šikoparija, Tatjana; Radišić, Predrag; Smith, Matt; Soldevilla, Carmen Galán
2011-02-01
Pollen data have been recorded at Novi Sad in Serbia since 2000. The adopted method of producing pollen counts has been the use of five longitudinal transects that examine 19.64% of total sample surface. However, counting five transects is time consuming and so the main objective of this study is to investigate whether reducing the number to three or even two transects would have a significant effect on daily average and bi-hourly pollen concentrations, as well as the main characteristics of the pollen season and long-term trends. This study has shown that there is a loss of accuracy in daily average and bi-hourly pollen concentrations (an increase in % ERROR) as the sub-sampling area is reduced from five to three or two longitudinal transects. However, this loss of accuracy does not impact on the main characteristics of the season or long-term trends. As a result, this study can be used to justify changing the sub-sampling method used at Novi Sad from five to three longitudinal transects. The use of two longitudinal transects has been ruled out because, although quicker, the counts produced: (a) had the greatest amount of % ERROR, (b) altered the amount of influence of the independent variable on the dependent variable (the slope in regression analysis) and (c) the total sampled surface (7.86%) was less than the minimum requirement recommended by the European Aerobiology Society working group on Quality Control (at least 10% of total slide area).
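The sampled-surface percentages quoted follow directly from the per-transect coverage: five transects cover 19.64% of the slide, so each covers 3.928%. A trivial check:

```python
per_transect = 19.64 / 5      # % of slide surface per longitudinal transect
for n in (5, 3, 2):
    print(n, "transects ->", round(n * per_transect, 2), "% of slide surface")
# 5 -> 19.64 %, 3 -> 11.78 %, 2 -> 7.86 % (below the ~10% minimum
# recommended by the EAS Quality Control working group, per the abstract)
```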
2015-10-01
an additional 80 to 100 estrogen receptor positive (ER+) cases. In terms of the impact of this change on our specific aims, it impacts only Aims 1...This modification does not impact the work or budget of Dr. Chinnaiyan’s component of this project as there were never any plans to provide samples...increased speed and sequencing yield, months 7-60: We will optimize and incorporate improved methods for library construction, such as transposon based
Bisphenol A polycarbonate as a reference material
NASA Technical Reports Server (NTRS)
Hilado, C. J.; Cumming, H. J.; Williams, J. B.
1977-01-01
Test methods require reference materials to standardize and maintain quality control. Various materials have been evaluated as possible reference materials, including a sample of bisphenol A polycarbonate without additives. Screening tests for relative toxicity under various experimental conditions were performed using male mice exposed to pyrolysis effluents over a 200-800 °C temperature range. It was found that the bisphenol A polycarbonate served as a suitable reference material, as it is available in large quantities and does not significantly change with time.
Kaufmann, Anton; Maden, Kathryn
2018-03-01
A quantitative method for the determination of biogenic amines was developed. The method is characterized by the virtual absence of sample cleanup and does not require a derivatization reaction. Diluted extracts are centrifuged, filtrated, and directly injected into an ultra-HPLC column, which is coupled to a single-stage high-resolution mass spectrometer (Orbitrap). The chromatography is based on a reversed-phase column and an eluent containing an ion-pairing agent (heptafluorobutyric acid). The high sensitivity of the instrument permits the injection of very diluted extracts, which ensures stable retention times and the virtual absence of signal suppression effects. In addition, the quantification of histamine (a regulated compound) is further aided by the use of an isotopically labeled internal standard. The method was validated for three fish-based matrixes. Both the sample processing and the analytical measurement are very fast; hence, the methodology is ideal for high-throughput work. In addition, the method is significantly more selective than conventional methods (i.e., derivatization followed by LC with UV/fluorescence (FL) detection) for biogenic amines. A comparison showed that LC-UV/FL methods can produce false-positive findings due to coeluting matrix compounds.
PCL foamed scaffolds loaded with 5-fluorouracil anti-cancer drug prepared by an eco-friendly route.
Salerno, Aurelio; Domingo, Concepción; Saurina, Javier
2017-06-01
This study describes a new preparation method, which combines freeze drying and supercritical CO2 foaming approaches, for the preparation of drug delivery scaffolds of polycaprolactone (PCL) loaded with 5-fluorouracil (5-Fu), an anti-cancer drug with low solubility in scCO2. A principal objective of this work is to design a scCO2 strategy that overcomes the solubility limitations of 5-Fu and achieves its homogeneous distribution in a PCL scaffold through an innovative processing method. The design of this process is considered valuable for the development of clean technology in pharmacy and medicine, since most active agents have null solubility in scCO2. Supercritical CO2 is used as a blowing agent to induce polymer foaming by means of the low-temperature pressure quench process. The resulting samples have been prepared under different operational conditions focused on enhancing the performance of the release process. In this case, design of experiments (DOE) was considered for a more comprehensive and systematic optimization of the product. In particular, drug amount (4.8 or 9.1 wt.%), process temperature (45 or 50 °C), and depressurization rate (0.1 or 2 MPa s−1) were selected as the factors to be investigated in a three-factor, two-level full factorial design. Samples were characterized to establish porosity data, drug loading percentage and, especially, the release profile, monitored chromatographically. Results from the DOE identified the samples providing sustained drug release for several days, which may be of great interest for developing materials for tissue engineering and sustained release applications. Copyright © 2017 Elsevier B.V. All rights reserved.
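The three-factor, two-level full factorial design described (2³ = 8 runs) is easy to enumerate. A sketch using the factor levels stated in the abstract:

```python
from itertools import product

factors = {
    "drug_wt_pct": (4.8, 9.1),        # drug amount, wt.%
    "temperature_C": (45, 50),        # process temperature
    "depress_rate_MPa_s": (0.1, 2.0), # depressurization rate
}

# All 2^3 = 8 combinations of the two levels of each factor
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for i, run in enumerate(runs, 1):
    print(i, run)
```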
NASA Astrophysics Data System (ADS)
Nurse, K. B.; Milkereit, B.
2017-12-01
The seismic horizontal-to-vertical spectral ratio (HVSR) analysis technique reliably gives overburden depth to bedrock, for an independently determined Vs, based on the frequency of the main resonance peak. Above this frequency, smaller resonances reflect the velocity structure within the overburden itself. This range of the HVSR response is sufficiently sensitive to be exploited as a monitoring tool, to detect changes in seismic physical properties and, from those, changes in overburden conditions. To explore the variation of the response, several 3C geophones have been deployed in southern Ontario, Canada since December 2015 (and ongoing). The local geology is a sedimentary basin with 30 m of overburden, a simple 2D environment well suited to the HVSR method. Data are collected for 15 s per minute, with an effective frequency band of 2-400 Hz. HVSR estimates are produced for each sampling period and archived. Over these two years, winter freeze/thaw, saturated spring, and summer drought conditions were sampled. H/V daily averages are dominated by the stable 3 Hz resonance due to the overall surface layer, but smaller spectral peaks up to 100 Hz are clear and evolve in frequency and amplitude over the collection period. Ground freeze/thaw cycles are clearly evident as a significant reduction in the horizontal field, but the changing soil moisture content throughout the year also causes subtle shifts in the response (correlated to rain events and water-table variation). The long-term sampling shows a sensitivity of the HVSR method to the overburden in proximity to the sensor and suggests a possibility for its use in monitoring soil and water-table conditions. It also highlights that an estimate from an isolated H/V acquisition includes this variability, which needs to be adequately quantified in VS30 estimates.
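For readers unfamiliar with the HVSR depth estimate, the main resonance frequency relates to layer thickness through the standard quarter-wavelength relation h = Vs / (4 f0). A minimal sketch, assuming a single soft layer and an illustrative Vs value (the abstract does not report one):

def overburden_thickness(f0_hz: float, vs_m_per_s: float) -> float:
    """Depth to bedrock h = Vs / (4 * f0) for a single soft layer over bedrock."""
    return vs_m_per_s / (4.0 * f0_hz)

# Example: the ~3 Hz resonance above with an assumed Vs of 360 m/s
# gives ~30 m, consistent with the ~30 m overburden described.
print(overburden_thickness(3.0, 360.0))  # 30.0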
Digital fast neutron radiography of steel reinforcing bar in concrete
NASA Astrophysics Data System (ADS)
Mitton, K.; Jones, A.; Joyce, M. J.
2014-12-01
Neutron imaging has previously been used to test for cracks, degradation and water content in concrete. However, these techniques often fall short of alternative non-destructive testing methods, such as γ-ray and X-ray imaging, particularly in terms of resolution. Further, thermal-neutron techniques can be compromised by the significant expense associated with thermal neutron sources of sufficient intensity to yield satisfactory results, which can often necessitate a reactor. Such arrangements are clearly not portable in the context of the needs of field applications. This paper summarises the results of a study investigating the potential for transmission radiography based on fast neutrons. The objective of this study was to determine whether the presence of heterogeneities in concrete, such as reinforcement structures, could be identified on the basis of variation in transmitted fast-neutron flux. Monte Carlo simulations have been performed and the results from these are compared to those arising from practical tests using a 252Cf source. The experimental data have been acquired using a digital pulse-shape discrimination system that enables fast-neutron transmission to be studied across an array of liquid scintillators placed in close proximity to the samples under test, and read out in real time. Whilst this study does not yield sufficient spatial resolution, a comparison of overall flux ratios does provide a basis for discrimination between samples with contrasting rebar content. This approach offers the potential for non-destructive testing that gives a lower dose, better transportability and better accessibility than competing approaches. It is also suitable for thick samples where γ-ray and X-ray methods can be limited.
Wiśniewska, Paulina; Boqué, Ricard; Borràs, Eva; Busto, Olga; Wardencki, Waldemar; Namieśnik, Jacek; Dymerski, Tomasz
2017-02-15
Headspace mass spectrometry (HS-MS), mid-infrared (MIR) and UV-vis spectroscopy were used to authenticate whisky samples of different origins and production methods (Irish, Spanish, Bourbon, Tennessee Whisky and Scotch). The collected spectra were processed with partial least-squares discriminant analysis (PLS-DA) to build the classification models. In all cases the five groups of whiskies were distinguished, but the best results were obtained by HS-MS, indicating that the biggest differences between whisky types lie in their aroma. Differences were also found within groups, showing that not only the raw material but also the production process is important for discriminating samples. The methodology is quick, easy, and does not require sample preparation. Copyright © 2016 Elsevier B.V. All rights reserved.
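PLS-DA is commonly implemented as PLS regression onto one-hot class labels followed by an argmax decision. A minimal sketch using scikit-learn; the array names and the random stand-in data are hypothetical, and this is not the authors' actual pipeline:

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import LabelBinarizer

rng = np.random.default_rng(0)
X_spectra = rng.normal(size=(50, 200))   # 50 samples x 200 spectral channels (stand-in)
y_labels = rng.choice(["Irish", "Scotch", "Bourbon"], size=50)

lb = LabelBinarizer()
Y = lb.fit_transform(y_labels)           # one-hot class membership

pls = PLSRegression(n_components=5).fit(X_spectra, Y)
pred = lb.classes_[np.argmax(pls.predict(X_spectra), axis=1)]
print("training accuracy:", np.mean(pred == y_labels))

In practice the number of latent variables and the classification performance would be assessed by cross-validation, as is standard for PLS-DA.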
Preanalytical requirements of urinalysis
Delanghe, Joris; Speeckaert, Marijn
2014-01-01
Urine may be a waste product, but it contains an enormous amount of information. Well-standardized procedures for collection, transport, sample preparation and analysis should become the basis of an effective diagnostic strategy for urinalysis. As the reproducibility of urinalysis has greatly improved thanks to recent technological progress, preanalytical requirements have gained importance and have become stricter. Since patients often collect urine specimens themselves, urinalysis is very susceptible to preanalytical issues. Various sampling methods and inappropriate specimen transport can cause important preanalytical errors. The use of preservatives may be helpful for particular analytes. Unfortunately, a universal preservative that allows a complete urinalysis does not (yet) exist. The preanalytical aspects are also of major importance for newer applications (e.g. metabolomics). The present review deals with the current preanalytical problems and requirements for the most common urinary analytes. PMID:24627718
Isolation of hydrophilic organic acids from water using nonionic macroporous resins
Aiken, G.R.; McKnight, Diane M.; Thorn, K.A.; Thurman, E.M.
1992-01-01
A method has been developed for the isolation of hydrophilic organic acids from aquatic environments using Amberlite XAD-4 resin. (Use of trade names in this report is for identification purposes only and does not constitute endorsement by the U.S. Geological Survey.) The method uses a two-column array of XAD-8 and XAD-4 resins in series. The hydrophobic organic acids, composed primarily of aquatic fulvic acid, are removed from the sample on XAD-8, followed by isolation of the more hydrophilic organic acids on XAD-4. For samples from a number of diverse environments, more of the dissolved organic carbon was isolated on the XAD-8 resin (23-58%) than on the XAD-4 resin (7-25%). For these samples, the hydrophilic acids have lower carbon and hydrogen contents, higher oxygen and nitrogen contents, and are lower in molecular weight than the corresponding fulvic acids. 13C NMR analyses indicate that the hydrophilic acids have a lower concentration of aromatic carbon and greater heteroaliphatic, ketone and carboxyl content than the fulvic acid. © 1992.
Aphesteguy, Juan Carlos; Jacobo, Silvia E; Lezama, Luis; Kurlyandskaya, Galina V; Schegoleva, Nina N
2014-06-19
Pure and Zn-doped magnetite magnetic nanoparticles (NPs), Fe3O4 and ZnxFe3-xO4, were prepared in aqueous solution (Series A) or in a water-ethyl alcohol mixture (Series B) by the co-precipitation method. Only one ferromagnetic resonance line was observed in all cases under consideration, indicating that the materials are magnetically uniform. The shortfall of the resonance fields below the 3.27 kOe expected for spheres (at a frequency of 9.5 GHz) can be understood by taking into account dipolar forces, magnetoelasticity, or magnetocrystalline anisotropy. All samples show non-zero low-field absorption. For Series A samples the grain size decreases with increasing Zn content; in this case zero-field absorption does not correlate with the changes in grain size. For Series B samples the grain size and zero-field absorption behavior correlate with each other. The highest zero-field absorption corresponded to a zinc concentration of 0.2 in both the A and B series. The high zero-field absorption of Fe3O4 ferrite magnetic NPs can be interesting for biomedical applications.
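As a point of reference for the quoted resonance field, the isotropic resonance condition h f = g μB H can be evaluated directly. A minimal sketch; the g-factor is an assumption (the free-electron value g = 2 gives about 3.39 kOe at 9.5 GHz, while the 3.27 kOe quoted above corresponds to a slightly larger effective g of about 2.08):

H_PLANCK = 6.62607015e-34   # Planck constant, J s
MU_B = 9.2740100783e-24     # Bohr magneton, J/T

def resonance_field_kOe(freq_hz: float, g: float = 2.0) -> float:
    """Resonance field from h*f = g*mu_B*H, converted from tesla to kOe."""
    b_tesla = H_PLANCK * freq_hz / (g * MU_B)
    return b_tesla * 10.0   # 1 T = 10 kOe

print(resonance_field_kOe(9.5e9))        # ~3.39 kOe for g = 2
print(resonance_field_kOe(9.5e9, 2.08))  # ~3.26 kOe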
Han, Bin; Cao, Lei; Zheng, Li; Zang, Jia-ye; Wang, Xiao-ru
2012-01-01
Using three pipe-clamp solenoid valves to replace the traditional six-port valve for sample metering, a multi-channel flow injection analyzer was designed in the present paper. The instrumental testing conditions were optimized, and determination and analysis of total dissolved nitrogen in seawater was realized. The apparatus is simple in construction and has the potential to be used for analysis of total dissolved nitrogen. The sample throughput for total dissolved nitrogen was 27 samples per hour. The linear range was 50.0-1000.0 μg N L⁻¹ (r ≥ 0.999). The detection limit was 7.6 μg N L⁻¹. The recovery of total dissolved nitrogen was 87.3%-107.2%. The relative standard deviation for total dissolved nitrogen was 1.35%-6.32% (n = 6). A t-test showed no significant difference between this method and the national standard method. It is suitable for fast analysis of total dissolved nitrogen in seawater.
The microwave Hall effect measured using a waveguide tee
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coppock, J. E.; Anderson, J. R.; Johnson, W. B.
2016-03-14
This paper describes a simple microwave apparatus to measure the Hall effect in semiconductor wafers. The advantage of this technique is that it does not require contacts on the sample or the use of a resonant cavity. Our method consists of placing the semiconductor wafer into a slot cut in an X-band (8-12 GHz) waveguide series tee, injecting microwave power into the two opposite arms of the tee, and measuring the microwave output at the third arm. A magnetic field applied perpendicular to the wafer gives a microwave Hall signal that is linear in the magnetic field and which reverses phase when the magnetic field is reversed. The microwave Hall signal is proportional to the semiconductor mobility, which we compare for calibration purposes with d.c. mobility measurements obtained using the van der Pauw method. We obtain the resistivity by measuring the microwave reflection coefficient of the sample. This paper presents data for silicon and germanium samples doped with boron or phosphorus. The measured mobilities ranged from 270 to 3000 cm²/(V s).
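The d.c. calibration mentioned above rests on the van der Pauw relation. A minimal sketch, assuming a symmetric sample so that the geometric correction factor f is approximately 1; the resistance and thickness values are illustrative, not the paper's data:

import math

def van_der_pauw_resistivity(r1_ohm, r2_ohm, thickness_m, f=1.0):
    """rho = (pi * t / ln 2) * (R1 + R2) / 2 * f, in ohm m."""
    return math.pi * thickness_m / math.log(2) * (r1_ohm + r2_ohm) / 2.0 * f

# Hypothetical example: two 10-ohm transresistances on a 500 um thick wafer.
print(van_der_pauw_resistivity(10.0, 10.0, 500e-6))  # ~0.0227 ohm m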
Preliminary Tests For Development Of A Non-Pertechnetate Analysis Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diprete, D.; McCabe, D.
2016-09-28
The objective of this task was to develop a non-pertechnetate analysis method that the 222-S laboratory could easily implement. The initial scope involved working with 222-S laboratory personnel to adapt the existing Tc analytical method to fractionate the non-pertechnetate and pertechnetate. SRNL then developed and tested a method using commercial sorbents containing Aliquat® 336 to extract the pertechnetate (thereby separating it from non-pertechnetate), followed by oxidation, extraction, and stripping steps, and finally analysis by beta counting and mass spectrometry. Several additional items were partially investigated, including the impacts of a 137Cs removal step. The method was initially tested on SRS tank waste samples to determine its viability. Although SRS tank waste does not contain non-pertechnetate, testing with it was useful for investigating the compatibility, separation efficiency, interference-removal efficacy, and sensitivity of the method.
Robust DNA Isolation and High-throughput Sequencing Library Construction for Herbarium Specimens.
Saeidi, Saman; McKain, Michael R; Kellogg, Elizabeth A
2018-03-08
Herbaria are an invaluable source of plant material that can be used in a variety of biological studies. The use of herbarium specimens is associated with a number of challenges, including sample preservation quality, degraded DNA, and destructive sampling of rare specimens. In order to use herbarium material more effectively in large sequencing projects, a dependable and scalable method of DNA isolation and library preparation is needed. This paper demonstrates a robust, beginning-to-end protocol for DNA isolation and high-throughput library construction from herbarium specimens that does not require modification for individual samples. This protocol is tailored for low-quality dried plant material and takes advantage of existing methods by optimizing tissue grinding, modifying library size selection, and introducing an optional reamplification step for low-yield libraries. Reamplification of low-yield DNA libraries can rescue samples derived from irreplaceable and potentially valuable herbarium specimens, negating the need for additional destructive sampling and without introducing discernible sequencing bias for common phylogenetic applications. The protocol has been tested on hundreds of grass species, but is expected to be adaptable for use in other plant lineages after verification. This protocol can be limited by extremely degraded DNA, where fragments do not exist in the desired size range, and by secondary metabolites present in some plant material that inhibit clean DNA isolation. Overall, this protocol introduces a fast and comprehensive method that allows DNA isolation and library preparation of 24 samples in less than 13 h, with only 8 h of active hands-on time and minimal modifications.
McComb, Jacqueline Q.; Rogers, Christian; Han, Fengxiang X.; Tchounwou, Paul B.
2014-01-01
With industrialization, great amounts of trace elements and heavy metals have been excavated, released at the surface of the earth, and dissipated into the environment. Rapid screening technology for detecting major and trace elements as well as heavy metals in a variety of environmental samples is highly desirable. The objectives of this study were to determine the detection limits, accuracy, repeatability and efficiency of an X-ray fluorescence spectrometer (Niton XRF analyzer) in comparison with the traditional analytical methods, inductively coupled plasma optical emission spectrometry (ICP-OES) and inductively coupled plasma mass spectrometry (ICP-MS), in screening major and trace elements in environmental samples including estuary soils and sediments, contaminated soils, and biological samples. XRF is a fast and non-destructive method for measuring the total concentrations of multiple elements simultaneously. In contrast to ICP-OES and ICP-MS, the XRF analyzer is characterized by the limited preparation required for solid samples, non-destructive analysis, increased total speed and high throughput, decreased production of hazardous waste, low running costs, multi-elemental determination, and portability in the field. The current comparative study demonstrates that XRF is a good rapid non-destructive method for contaminated soils, sediments and biological samples containing higher concentrations of major and trace elements. XRF does not have detection limits as sensitive as those of ICP-OES or ICP-MS for most major and trace elements, but it may serve as a rapid screening tool for locating hot spots in contaminated field soils and sediments. PMID:25861136
Local sample thickness determination via scanning transmission electron microscopy defocus series.
Beyer, A; Straubinger, R; Belz, J; Volz, K
2016-05-01
The usable aperture sizes in (scanning) transmission electron microscopy ((S)TEM) have significantly increased in the past decade due to the introduction of aberration correction. With the consequent increase in convergence angle, the depth of focus has decreased severely and optical sectioning in STEM has become feasible. Here we apply STEM defocus series to derive the local thickness of a TEM sample. To this end, experimental as well as simulated defocus series of thin Si foils were acquired. The systematic blurring of high-resolution high-angle annular dark-field (HAADF) images is quantified by evaluating the standard deviation of the image intensity for each image of a defocus series. The derived dependencies exhibit a pronounced maximum at the optimum defocus and drop to a background value for higher or lower values. The full width at half maximum (FWHM) of the curve is equal to the sample thickness, above a minimum thickness given by the size of the aperture used and the chromatic aberration of the microscope. The thicknesses obtained from experimental defocus series using the proposed method are in good agreement with values derived from other established methods. The key advantages of this method compared to others are its high spatial resolution and that it does not involve any time-consuming simulations. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
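The blur metric described above is straightforward to reproduce. A minimal sketch, assuming a synthetic defocus series in which image contrast (and hence intensity standard deviation) peaks at focus; real HAADF images would replace the random stand-ins:

import numpy as np

defocus_nm = np.linspace(-60, 60, 121)             # defocus values of the series
images = [np.random.default_rng(i).normal(
              scale=1.0 + 4.0 * np.exp(-(df / 15.0) ** 2), size=(64, 64))
          for i, df in enumerate(defocus_nm)]      # contrast peaks at focus

sigma = np.array([img.std() for img in images])    # blur metric per image
half_max = sigma.min() + 0.5 * (sigma.max() - sigma.min())
above = defocus_nm[sigma >= half_max]
print("FWHM (~thickness, nm):", above.max() - above.min())

Per the abstract, the FWHM read off in this way equals the local thickness only above a minimum thickness set by the aperture size and chromatic aberration.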
Testing the accuracy of clustering redshifts with simulations
NASA Astrophysics Data System (ADS)
Scottez, V.; Benoit-Lévy, A.; Coupon, J.; Ilbert, O.; Mellier, Y.
2018-03-01
We explore the accuracy of clustering-based redshift inference within the MICE2 simulation. This method uses the spatial clustering of galaxies between a spectroscopic reference sample and an unknown sample, and this study gives an estimate of the accuracy the method can reach. First, we discuss the requirements on the number of objects in the two samples, confirming that this method does not require a representative spectroscopic sample for calibration. In the context of the next generation of cosmological surveys, we estimate that the density of the quasi-stellar objects in BOSS allows us to reach 0.2 per cent accuracy in the mean redshift. Secondly, we estimate individual redshifts for galaxies in the densest regions of colour space (~30 per cent of the galaxies) without using the photometric-redshift procedure. The advantage of this procedure is threefold. It allows: (i) the use of cluster-zs for any field in astronomy, (ii) the combination of photo-zs and cluster-zs to obtain an improved redshift estimation, and (iii) the use of cluster-zs to define tomographic bins for weak lensing. Finally, we explore this last option and build five cluster-z selected tomographic bins from redshift 0.2 to 1. We find a bias on the mean redshift estimate of 0.002 per bin. We conclude that cluster-zs could be used as a primary redshift estimator by the next generation of cosmological surveys.
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space, starting from the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the number of parameter dimensions grows, as adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples, and resamples are not required to use the same technique. Both techniques are demonstrated in both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE either to narrow the focus to converged values within a parameter range or to expand the range in the appropriate direction to track parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates, using a simple moving-average window to filter out noise. The system can be tuned to match the desired performance goals by adjusting parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
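As an illustration of the LHS step, SciPy's quasi-Monte Carlo module can generate the parameter locations for the parallel filter models. A minimal sketch with hypothetical parameter bounds; GRAPE's own resampling rules are not reproduced here:

from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)   # three uncertain parameters
unit_samples = sampler.random(n=10)         # 10 models regardless of dimension

l_bounds = [0.5, 0.01, 100.0]               # hypothetical lower bounds
u_bounds = [2.0, 0.10, 500.0]               # hypothetical upper bounds
models = qmc.scale(unit_samples, l_bounds, u_bounds)
print(models.shape)                         # (10, 3): one row per filter model

Unlike a grid, the number of rows is chosen independently of the dimension d, which is the property the text highlights.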
Dowle, Eddy J; Pochon, Xavier; C Banks, Jonathan; Shearer, Karen; Wood, Susanna A
2016-09-01
Recent studies have advocated biomonitoring using DNA techniques. In this study, two high-throughput sequencing (HTS)-based methods were evaluated: amplicon metabarcoding of the cytochrome c oxidase subunit I (COI) mitochondrial gene and gene enrichment using MYbaits (targeting nine different genes, including COI). The gene-enrichment method does not require PCR amplification and thus avoids biases associated with universal primers. Macroinvertebrate samples were collected from 12 New Zealand rivers. Macroinvertebrates were morphologically identified and enumerated, and their biomass determined. DNA was extracted from all macroinvertebrate samples and HTS undertaken using the Illumina MiSeq platform. Macroinvertebrate communities were characterized from sequence data using either six genes (three of the original nine were not used) or just the COI gene in isolation. The gene-enrichment method (all genes) detected the highest number of taxa and obtained the strongest Spearman rank correlations between the number of sequence reads, abundance and biomass in 67% of the samples. Median detection rates across rare (<1% of the total abundance or biomass), moderately abundant (1-5%) and highly abundant (>5%) taxa were highest using the gene-enrichment method (all genes). Our data indicated that primer biases occurred during amplicon metabarcoding, with greater than 80% of sequence reads originating from one taxon in several samples. The accuracy and sensitivity of both HTS methods would be improved with more comprehensive reference sequence databases. The data from this study illustrate the challenges of using PCR amplification-based methods for biomonitoring and highlight the potential benefits of approaches, such as gene enrichment, which circumvent the need for an initial PCR step. © 2015 John Wiley & Sons Ltd.
Indices of polarimetric purity for biological tissues inspection
NASA Astrophysics Data System (ADS)
Van Eeckhout, Albert; Lizana, Angel; Garcia-Caurel, Enric; Gil, José J.; Sansa, Adrià; Rodríguez, Carla; Estévez, Irene; González, Emilio; Escalera, Juan C.; Moreno, Ignacio; Campos, Juan
2018-02-01
We highlight the interest of using the indices of polarimetric purity (IPPs) for biological tissue inspection. These are three polarimetric metrics focused on the depolarizing behaviour of a sample. The IPPs have recently been proposed in the literature and provide different, more synthesized information than the commonly used depolarization metrics, such as the depolarization index (PΔ) or the depolarization power (Δ). Compared with standard polarimetric images of biological samples, IPPs enhance the contrast between different tissues of the sample and reveal differences between similar tissues that are not observed using other standard techniques. Moreover, they provide further physical information related to the depolarization mechanisms inherent to different tissues. In addition, the algorithm does not require advanced calculations (as in the case of polar decompositions), making the indices of polarimetric purity fast and easy to implement. We also propose a pseudo-coloured image method which encodes the sample information as a function of the weights of the different indices. These images allow us to customize the visualization of samples and to highlight certain of their constitutive structures. The interest and potential of the IPP approach are illustrated experimentally throughout the manuscript by comparing polarimetric images of different ex-vivo samples obtained with standard polarimetric methods with those obtained from the IPP analysis. Enhanced contrast and retrieval of new information are obtained experimentally from the different IPP-based images.
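For concreteness, the three indices are usually defined from the sorted eigenvalues of the coherency matrix associated with the measured Mueller matrix. A minimal sketch following that standard definition; the example eigenvalues are hypothetical:

import numpy as np

def polarimetric_purity_indices(eigenvalues):
    """P1, P2, P3 from coherency-matrix eigenvalues, sorted l1 >= ... >= l4."""
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    tr = lam.sum()
    p1 = (lam[0] - lam[1]) / tr
    p2 = (lam[0] + lam[1] - 2 * lam[2]) / tr
    p3 = (lam[0] + lam[1] + lam[2] - 3 * lam[3]) / tr
    return p1, p2, p3

print(polarimetric_purity_indices([0.6, 0.2, 0.15, 0.05]))  # (0.4, 0.5, 0.8)

The indices satisfy 0 <= P1 <= P2 <= P3 <= 1 and, as the abstract notes, require only an eigenvalue decomposition rather than a polar decomposition.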
A rapid method for the sampling of atmospheric water vapour for isotopic analysis.
Peters, Leon I; Yakir, Dan
2010-01-01
Analysis of the stable isotopic composition of atmospheric moisture is widely applied in the environmental sciences. Traditional methods for obtaining isotopic compositional data from ambient moisture have required complicated sampling procedures, expensive and sophisticated distillation lines, hazardous consumables, and lengthy treatments prior to analysis. Newer laser-based techniques are expensive and usually not suitable for large-scale field campaigns, especially in cases where access to mains power is not feasible or high spatial coverage is required. Here we outline the construction and usage of a novel vapour-sampling system based on a battery-operated Stirling cycle cooler, which is simple to operate, does not require any consumables or post-collection distillation, and is light-weight and highly portable. We demonstrate the ability of this system to reproduce δ18O isotopic compositions of ambient water vapour, with samples taken simultaneously by a traditional cryogenic collection technique. Samples were collected over 1 h directly into autosampler vials and were analysed by mass spectrometry after pyrolysis of 1 μL aliquots to CO. This yielded an average error of < ±0.5‰, approximately equal to the signal-to-noise ratio of traditional approaches. This new system provides a rapid and reliable alternative to conventional cryogenic techniques, particularly in cases requiring high sample throughput or where access to distillation lines, slurry maintenance or mains power is not feasible. Copyright 2009 John Wiley & Sons, Ltd.
Baldelli, Sara; Marrubini, Giorgio; Cattaneo, Dario; Clementi, Emilio; Cerea, Matteo
2017-10-01
The application of Quality by Design (QbD) principles in clinical laboratories can help to develop an analytical method through a systematic approach, providing a significant advance over the traditional heuristic and empirical methodology. In this work, we applied for the first time the QbD concept in the development of a method for drug quantification in human plasma using elvitegravir as the test molecule. The goal of the study was to develop a fast and inexpensive quantification method, with precision and accuracy as requested by the European Medicines Agency guidelines on bioanalytical method validation. The method was divided into operative units, and for each unit critical variables affecting the results were identified. A risk analysis was performed to select critical process parameters that should be introduced in the design of experiments (DoEs). Different DoEs were used depending on the phase of advancement of the study. Protein precipitation and high-performance liquid chromatography-tandem mass spectrometry were selected as the techniques to be investigated. For every operative unit (sample preparation, chromatographic conditions, and detector settings), a model based on factors affecting the responses was developed and optimized. The obtained method was validated and clinically applied with success. To the best of our knowledge, this is the first investigation thoroughly addressing the application of QbD to the analysis of a drug in a biological matrix applied in a clinical laboratory. The extensive optimization process generated a robust method compliant with its intended use. The performance of the method is continuously monitored using control charts.
Non-interferometric quantitative phase imaging of yeast cells
NASA Astrophysics Data System (ADS)
Poola, Praveen K.; Pandiyan, Vimal Prabhu; John, Renu
2015-12-01
Real-time imaging of live cells is quite difficult without adding external contrast agents. Various methods for quantitative phase imaging of living cells have been proposed, such as digital holographic microscopy and diffraction phase microscopy. In this paper, we report theoretical and experimental results of quantitative phase imaging of live yeast cells with nanometric precision using the transport-of-intensity equation (TIE). We demonstrate nanometric depth sensitivity in imaging live yeast cells using this technique. Being noninterferometric, the technique does not need any coherent light source, and images can be captured through a regular bright-field microscope. This real-time imaging technique delivers the depth or 3-D volume information of cells and is highly promising for real-time digital pathology applications, screening of pathogens, and staging of diseases such as malaria, as it does not need any preprocessing of samples.
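A common way to invert the TIE is in Fourier space, under the simplifying assumption of uniform in-focus intensity I0; sign conventions vary between formulations, and this sketch is illustrative rather than the authors' exact algorithm. The axial derivative dI/dz would in practice be estimated by finite differences from two defocused frames:

import numpy as np

def tie_phase(dI_dz, I0, wavelength, pixel_size, eps=1e-6):
    """Fourier-space TIE solve for phase, assuming uniform intensity I0."""
    k = 2 * np.pi / wavelength
    ny, nx = dI_dz.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    q2 = (2 * np.pi) ** 2 * (fx[None, :] ** 2 + fy[:, None] ** 2)
    phi_hat = np.fft.fft2(-k * dI_dz / I0) / (q2 + eps)  # eps regularizes DC term
    return np.real(np.fft.ifft2(phi_hat))

# dI_dz = (I_plus - I_minus) / (2 * dz) from two frames a distance dz apart.
print(tie_phase(np.zeros((64, 64)), 1.0, 532e-9, 0.1e-6).shape)  # (64, 64)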
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fondeur, F.; Taylor-Pashow, K.
2014-08-05
SRNL received two sets of SHT samples (MCU-14-259/260/261 in April 2014 and MCU-14-315/316/317 in May 2014) for analysis. The samples were analyzed for composition, and both sets have similar chemical compositions. As with the previous solvent sample results, these analyses indicate that the solvent does not require Isopar® L trimming at this time. Since TiDG and MaxCalix were added to the SHT in early July 2014, the solvent does not require TiDG addition at this time. The current TiDG level (1.5 mM) is above the minimum recommended operating level of 1 mM.
Di Nunno, N R; Costantinides, F; Bernasconi, P; Bottin, C; Melato, M
1998-03-01
The time of death can be established by determining the length of the postmortem interval, and many methods have been proposed to achieve this goal. Flow cytometric evaluation of DNA degradation seems to be reliable for the first 72 hours after death. Our study evaluated the correspondence of the decomposition process between in vitro and corpse tissues. We chose spleen tissue for our investigation because it is rich in nucleated cells. Results showed a precise correspondence between the two kinds of samples in the time period between 24 and 36 hours. The period from 36 to 72 hours is characterized by a much looser correspondence than that found in the first period. After the first 72 hours, DNA denaturation is massive and does not allow useful cytofluorimetric readings. The spleen does not seem to be the most suitable organ for this type of investigation because it tends to colliquate very rapidly. We are therefore evaluating other organs to identify a more suitable tissue source for investigating longer postmortem periods using flow cytometry.
Compression of Encrypted Images Using Set Partitioning In Hierarchical Trees Algorithm
NASA Astrophysics Data System (ADS)
Sarika, G.; Unnithan, Harikuttan; Peter, Smitha
2011-10-01
When it is desired to transmit redundant data over an insecure channel, it is customary to encrypt the data. For encrypted real-world sources such as images, the use of Markov properties in the Slepian-Wolf decoder does not work well for gray-scale images. In this paper we propose a method for compressing an encrypted image. In the encoder section, the image is first encrypted and then undergoes compression in resolution. The cipher function scrambles only the pixel values but does not shuffle the pixel locations. After downsampling, each sub-image is encoded independently and the resulting syndrome bits are transmitted. The received image undergoes joint decryption and decompression in the decoder section and is recovered using local statistics of the image. Here the decoder gets only a lower-resolution version of the image. In addition, this method provides only partial access to the current source at the decoder side, which improves the decoder's learning of the source statistics. The source dependency is exploited to improve the compression efficiency. This scheme provides better coding efficiency and lower computational complexity.
Development and analysis of new type microresonator with electro-optic feedback
NASA Astrophysics Data System (ADS)
Janusas, Giedrius; Palevicius, Arvydas; Cekas, Elingas; Brunius, Alfredas; Bauce, Jokubas
2016-04-01
Micro-resonators are fundamental components integrated in a host of MEMS applications: safety and stability systems, biometric sensors, switches, mechanical filters, micro-mirror devices, material characterization, gyroscopes, etc. A constituent part of the micro-resonator is a diffractive optical element (DOE). Different methods and materials are used to produce diffraction gratings for DOEs. Two-dimensional or three-dimensional periodic structures with micrometer-scale periods are widely used in microsystems or their components, for example as elements for micro-scale synthesis, processing, and analysis of chemical and biological samples. The micro-resonator itself was designed using a composite piezoelectric material. In cases where microscopes, vibrometers, or other direct measurement methods are destructive and can hardly be employed for in-situ analysis, indirect measurement of the electrical signal generated by the composite piezoelectric layer allows natural-frequency changes to be measured. The piezoelectric layer also makes possible a novel micro-resonator with controllable parameters, which could assure much higher functionality of micro-electromechanical systems. A novel micro-resonator for pollution detection is proposed. A mathematical model of the micro-resonator and its dynamical, electrical and optical characteristics are presented.
A Comparison of "Total Dust" and Inhalable Personal Sampling for Beryllium Exposure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, Colleen M.
2012-05-09
In 2009, the American Conference of Governmental Industrial Hygienists (ACGIH) reduced the beryllium (Be) 8-hr time-weighted-average Threshold Limit Value (TLV-TWA) from 2.0 μg/m³ to 0.05 μg/m³ with an inhalable 'I' designation in accordance with ACGIH's particle size-selective criterion for inhalable mass. Currently, per Department of Energy (DOE) requirements, the Lawrence Livermore National Laboratory (LLNL) follows the Occupational Safety and Health Administration (OSHA) Permissible Exposure Limit (PEL) of 2.0 μg/m³ as an 8-hr TWA, which is also the 2005 ACGIH TLV-TWA, and an Action Level (AL) of 0.2 μg/m³, and sampling is performed using the 37-mm (total dust) sampling method. Since DOE is considering adopting the newer 2009 TLV guidelines, the goal of this study was to determine whether the current method of sampling using the 37-mm (total dust) sampler would produce results comparable to those measured using the IOM (inhalable) sampler, specific to the application of high-energy explosive work at LLNL's remote experimental test facility at Site 300. Side-by-side personal sampling using the two samplers was performed over an approximately two-week period during chamber re-entry and cleanup procedures following detonation of an explosive assembly containing beryllium. The average ratio of personal sampling results for the IOM (inhalable) vs. the 37-mm (total dust) sampler was 1.1:1 with a P-value of 0.62, indicating that there was no statistically significant difference in the performance of the two samplers. Therefore, for the type of activity monitored during this study, the 37-mm sampling cassette would be considered a suitable alternative to the IOM sampler for collecting inhalable particulate matter, which is important given the many practical and economic advantages that it presents. However, similar comparison studies would be necessary for this conclusion to be applied to other types of activities, where earlier studies have shown that the IOM sampler tends to collect higher concentrations of Be compared to the 37-mm cassette, which could complicate compliance with what is already an extremely low exposure limit.
Pressman, Alice R.; Avins, Andrew L.; Hubbard, Alan; Satariano, William A.
2014-01-01
Background There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. Methods We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien–Fleming and Lan–DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus for the data-generated distribution represented by these trials, we were able to compare the relative performance of these techniques. Results No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times were dependent upon the choice of spending function and prior distribution. Conclusions Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for analysis of clinical trials. Though findings from this small sample did not demonstrate either method to consistently outperform the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient. PMID:21453792
Sample size considerations for clinical research studies in nuclear cardiology.
Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J
2015-12-01
Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
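As a worked example of one of the simplest cases mentioned above, the normal-approximation sample size for a two-sample t test is n per group = 2 (z_{1-α/2} + z_{1-β})² (σ/δ)². A minimal sketch with illustrative numbers, not values from the article:

import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Two-sample t test sample size via the normal approximation."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 * (sd / delta) ** 2)

# Detecting a 5-unit difference with sd = 10 at 80% power and alpha = 0.05:
print(n_per_group(delta=5, sd=10))  # 63 subjects per group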
Adaptive sampling of AEM transients
NASA Astrophysics Data System (ADS)
Di Massa, Domenico; Florio, Giovanni; Viezzoli, Andrea
2016-02-01
This paper focuses on the sampling of the electromagnetic transient as acquired by airborne time-domain electromagnetic (TDEM) systems. Typically, the electromagnetic transient is sampled using a fixed number of gates whose width grows logarithmically (log-gating). Log-gating has two main benefits: improving the signal-to-noise (S/N) ratio at late times, when the electromagnetic signal has amplitudes equal to or lower than the natural background noise, and ensuring good resolution at early times. However, because the time gates are fixed, conventional log-gating does not account for geological variations in the surveyed area, nor for the possibly varying characteristics of the measured signal. We show, using synthetic models, how a different, flexible sampling scheme can increase the resolution of resistivity models. We propose a new sampling method, which adapts the gating on the basis of slope variations in the electromagnetic (EM) transient. The use of such an alternative sampling scheme aims to obtain more accurate inverse models by extracting the geoelectrical information from the measured data in an optimal way.
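For contrast with the adaptive scheme, conventional log-gating simply spaces gate edges logarithmically between the first and last sample times. A minimal sketch with illustrative values:

import numpy as np

t_first, t_last, n_gates = 1e-5, 1e-2, 25  # seconds; illustrative values
edges = np.logspace(np.log10(t_first), np.log10(t_last), n_gates + 1)
widths = np.diff(edges)                    # gate widths grow toward late times
centers = np.sqrt(edges[:-1] * edges[1:])  # geometric-mean gate centers
print(widths[:3], widths[-3:])

The adaptive method proposed above would instead place gate edges where the slope of the measured transient changes, rather than at these fixed positions.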
Preparation and validation of gross alpha/beta samples used in EML's quality assessment program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scarpitta, S.C.
1997-10-01
A set of water and filter samples has been incorporated into the existing Environmental Measurements Laboratory's (EML) Quality Assessment Program (QAP) for gross alpha/beta determinations by participating DOE laboratories. The participating laboratories are evaluated by comparing their results with the EML value. The preferred EML method for measuring water and filter samples, described in this report, uses gas-flow proportional counters with 2 in. detectors. Procedures for sample preparation, quality control and instrument calibration are presented. Liquid scintillation (LS) counting is an alternative technique that is suitable for quantifying both the alpha (241Am, 230Th and 238Pu) and beta (90Sr/90Y) activity concentrations in the solutions used to prepare the QAP water and air filter samples. Three LS counting techniques (Cerenkov, dual dpm and full spectrum analysis) are compared. These techniques may be used to validate the activity concentrations of each component in the alpha/beta solution before the QAP samples are actually prepared.
Research on Abnormal Detection Based on Improved Combination of K - means and SVDD
NASA Astrophysics Data System (ADS)
Hao, Xiaohong; Zhang, Xiaofeng
2018-01-01
To improve the efficiency of network intrusion detection and reduce the false alarm rate, this paper proposes an anomaly detection algorithm based on improved K-means and SVDD. The algorithm first uses the improved K-means algorithm to cluster the training samples of each class, so that each class is independent and internally compact. Then the SVDD algorithm is used to construct a minimum hypersphere around the training samples of each class. The class membership of a test sample is determined by calculating its distance to the center of each minimum hypersphere constructed by SVDD: if the distance is smaller than the hypersphere's radius, the test sample belongs to that class; otherwise it does not. After these comparisons, the test sample is finally classified, enabling effective detection. In this paper, we use the KDD CUP 99 data set to evaluate the proposed anomaly detection algorithm. The results show that the algorithm has a high detection rate and a low false alarm rate, making it an effective network security protection method.
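A minimal sketch of this two-stage idea, with scikit-learn's OneClassSVM standing in for SVDD (the RBF one-class SVM is closely related to, but not identical with, SVDD) and random data standing in for KDD CUP 99 features:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 5))   # stand-in for "normal" traffic features

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_train)
models = [OneClassSVM(kernel="rbf", nu=0.05).fit(X_train[km.labels_ == c])
          for c in range(km.n_clusters)]

x_test = rng.normal(size=(1, 5))
# A test point is "normal" if any per-cluster boundary accepts it (+1).
is_normal = any(m.predict(x_test)[0] == 1 for m in models)
print("normal" if is_normal else "anomaly")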
NASA Astrophysics Data System (ADS)
Okuyama, Keita; Sasahira, Akira; Noshita, Kenji; Yoshida, Takuma; Kato, Kazuyuki; Nagasaki, Shinya; Ohe, Toshiaki
Experimental efforts to evaluate the barrier performance of geologic disposal require relatively long testing periods and chemically stable conditions. We have developed a new technique, the micro mock-up method, as a fast and sensitive way to measure both nuclide diffusivity and the sorption coefficient within a day, overcoming these disadvantages of the conventional method. In this method, a Teflon plate having a micro channel (10-200 μm depth, 2 or 4 mm width) is placed just beneath the rock sample plate, and radionuclide solution is injected into the channel at a constant rate. The breakthrough curve is measured until a steady state is reached. The outlet flux in the steady state, however, does not match the inlet flux because of matrix diffusion into the rock body. This inlet-outlet difference is simply related to the effective diffusion coefficient (De) and the distribution coefficient (Kd) of the rock sample. We then adopt a fitting procedure to estimate Kd and De values by comparing the observations to the theoretical curve of the two-dimensional diffusion-advection equation. In the present study, we measured De of 3H using both the micro mock-up method and the conventional through-diffusion method for comparison. The values of De obtained in the two different ways for a granite sample (Inada area of Japan) were essentially identical, 1.0 × 10⁻¹¹ and 9.0 × 10⁻¹² m²/s, but the testing periods differed greatly: 10 h versus 3 days, respectively. We also measured the breakthrough curve of 85Sr, and the resulting Kd and De agreed well with a previous study based on batch sorption experiments with crushed samples. This experimental evidence and the above advantages reveal that the micro mock-up method, based on the microreactor concept, is powerful and highly advantageous compared to the conventional method.
Karageorgou, Eftychia; Christoforidou, Sofia; Ioannidou, Maria; Psomas, Evdoxios; Samouris, Georgios
2018-06-01
The present study was carried out to assess the detection sensitivity of four microbial inhibition assays (MIAs) in comparison with results obtained by a high-performance liquid chromatography with diode-array detection (HPLC-DAD) method for antibiotics of the β-lactam group and chloramphenicol in fortified raw milk samples. The MIAs gave fairly good results when detecting β-lactams, whereas none were able to detect chloramphenicol at or above the permissible limits. HPLC analysis revealed high recoveries of the examined compounds, and all detection limits observed were lower than the respective maximum residue limit (MRL) values. The extraction and clean-up of the antibiotics was performed by a modified matrix solid-phase dispersion procedure using a mixture of Plexa by Agilent and QuEChERS as a sorbent. The HPLC method developed was validated by determining the accuracy, precision, linearity, decision limit, and detection capability. Both methods were used to monitor raw milk samples from several cows and sheep, obtained from producers in different regions of Greece, for the presence of the examined antibiotic residues. The results showed that MIAs could be used effectively and routinely to detect antibiotic residues in several milk types. However, in some cases spoilage of the milk samples revealed that the kits' sensitivity can be strongly affected, whereas spoilage does not affect the effectiveness of HPLC-DAD analysis.
Zhou, Dan-Lei; Yan, Dan; Li, Bao-Cai; Wu, Yan-Shu; Xiao, Xiao-He
2009-06-01
This study investigated the effect of Cordyceps sinensis and its cultured mycelia on the growth and metabolism of Escherichia coli, using a microcalorimetric method to evaluate biological activity; the study will provide a basis for the quality control of Cordyceps sinensis. The effects of natural Cordyceps sinensis and its cultured mycelia on the growth and metabolism of Escherichia coli were characterized by microcalorimetry using the indices P1max and effective rate (E), and the experimental data were studied by cluster analysis. The results showed that Cordyceps sinensis and its cultured mycelia not only promote the growth and metabolism of Escherichia coli but can also efficiently regulate the balance of the intestinal microecology. At sample concentrations > 6.0 mg mL⁻¹, natural Cordyceps sinensis promoted the growth and metabolism of Escherichia coli efficiently (P < 0.05) compared with the control group, with a good dose-effect relationship with concentration (r > 0.9), while its cultured mycelia did not show a conspicuous growth-promoting effect (P > 0.05) and had no dose-effect relationship with concentration (r < 0.6); at sample concentrations < 6.0 mg mL⁻¹, none of the samples showed a conspicuous growth-promoting effect (P > 0.05). Natural Cordyceps sinensis and its cultured mycelia can be distinguished by cluster analysis. Microcalorimetry has good prospects for the quality evaluation of traditional Chinese medicine.
2015-01-01
Many problems studied via molecular dynamics require accurate estimates of various thermodynamic properties, such as the free energies of different states of a system, which in turn require well-converged sampling of the ensemble of possible structures. Enhanced sampling techniques are often applied to provide faster convergence than is possible with traditional molecular dynamics simulations. Hamiltonian replica exchange molecular dynamics (H-REMD) is a particularly attractive method, as it allows the incorporation of a variety of enhanced sampling techniques through modifications to the various Hamiltonians. In this work, we study the enhanced sampling of the RNA tetranucleotide r(GACC) provided by H-REMD combined with accelerated molecular dynamics (aMD), where a boosting potential is applied to torsions, and compare this to the enhanced sampling provided by H-REMD in which torsion potential barrier heights are scaled down to lower force constants. We show that H-REMD and multidimensional REMD (M-REMD) combined with aMD do indeed enhance sampling for r(GACC), and that the addition of the temperature dimension in the M-REMD simulations is necessary to sample rare conformations efficiently. Interestingly, we find that the rate of convergence can be improved in a single H-REMD dimension by simply increasing the number of replicas from 8 to 24, without increasing the maximum level of bias. The results also indicate that factors beyond replica spacing, such as round-trip times and time spent at each replica, must be considered in order to achieve optimal sampling efficiency. PMID:24625009
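The exchange step that underlies any REMD variant is a Metropolis test on swapped configurations. A minimal sketch for the Hamiltonian case; the quartic "potentials" are hypothetical stand-ins for the biased torsion Hamiltonians, and this is not the simulation package's own code:

import math
import random

def accept_exchange(u_i, u_j, x_i, x_j, beta):
    """Metropolis acceptance for swapping configurations x_i, x_j between
    replicas with Hamiltonians u_i, u_j at inverse temperature beta."""
    delta = beta * (u_i(x_j) + u_j(x_i) - u_i(x_i) - u_j(x_j))
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta)

u_full = lambda x: x ** 4 - 2 * x ** 2            # unbiased double-well "torsion"
u_scaled = lambda x: 0.5 * (x ** 4 - 2 * x ** 2)  # barrier-scaled replica
print(accept_exchange(u_full, u_scaled, x_i=0.1, x_j=1.0, beta=1.0))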
2015 Long-Term Hydrologic Monitoring Program Sampling and Analysis Results at Rio Blanco, Colorado
DOE Office of Scientific and Technical Information (OSTI.GOV)
Findlay, Rick; Kautsky, Mark
2015-12-01
The U.S. Department of Energy (DOE) Office of Legacy Management conducted annual sampling at the Rio Blanco, Colorado, Site for the Long-Term Hydrologic Monitoring Program (LTHMP) on May 20-21, 2015. This report documents the analytical results of the Rio Blanco annual monitoring event, the trip report, and the data validation package. The groundwater and surface water monitoring samples were shipped to the GEL Group Inc. laboratories for conventional analysis of tritium and analysis of gamma-emitting radionuclides by high-resolution gamma spectrometry. A subset of water samples collected from wells near the Rio Blanco site was also sent to GEL Group Inc. for enriched tritium analysis. All requested analyses were successfully completed. Samples were collected from a total of four onsite wells, including two that are privately owned. Samples were also collected from two additional private wells at nearby locations and from nine surface water locations. Samples were analyzed for gamma-emitting radionuclides by high-resolution gamma spectrometry, and they were analyzed for tritium using the conventional method with a detection limit on the order of 400 picocuries per liter (pCi/L). Four locations (one well and three surface locations) were analyzed using the enriched tritium method, which has a detection limit on the order of 3 pCi/L. The enriched locations included the well at the Brennan Windmill and surface locations at CER-1, CER-4, and Fawn Creek 500 feet upstream.
Fraisier, V; Clouvel, G; Jasaitis, A; Dimitrov, A; Piolot, T; Salamero, J
2015-09-01
Multiconfocal microscopy gives a good compromise between fast imaging and reasonable resolution. However, the low intensity of live fluorescent emitters is a major limitation to this technique. Aberrations induced by the optical setup, especially the mismatch of the refractive index and the biological sample itself, distort the point spread function and further reduce the amount of detected photons. Altogether, this leads to impaired image quality, preventing accurate analysis of molecular processes in biological samples and imaging deep in the sample. The amount of detected fluorescence can be improved with adaptive optics. Here, we used a compact adaptive optics module (adaptive optics box for sectioning optical microscopy), which was specifically designed for spinning disk confocal microscopy. The module overcomes undesired anomalies by correcting for most of the aberrations in confocal imaging. Existing aberration detection methods require prior illumination, which bleaches the sample. To avoid multiple exposures of the sample, we established an experimental model describing the depth dependence of major aberrations. This model allows us to correct for those aberrations when performing a z-stack, gradually increasing the amplitude of the correction with depth. It does not require illumination of the sample for aberration detection, thus minimizing photobleaching and phototoxicity. With this model, we improved both signal-to-background ratio and image contrast. Here, we present comparative studies on a variety of biological samples. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
Fell, Shari; Bröckl, Stephanie; Büttner, Mathias; Rettinger, Anna; Zimmermann, Pia; Straubinger, Reinhard K
2016-09-15
Bovine tuberculosis (bTB), which is caused by Mycobacterium bovis and M. caprae, is a notifiable animal disease in Germany. The diagnostic procedure is based on a prescribed protocol published within the framework of German bTB legislation. In this protocol, small sample volumes are used for DNA extraction followed by real-time PCR analysis. As mycobacteria tend to concentrate in granulomas, and infected tissue in early stages of infection does not necessarily show any visible lesions, DNA extraction from only small tissue samples (20-40 mg) taken from a randomly chosen spot on the organ, followed by PCR testing, may well produce false-negative results. In this study two DNA extraction methods were developed to process larger sample volumes and thereby increase the detection sensitivity for mycobacterial DNA in animal tissue. The first extraction method is based on magnetic capture, in which specific capture oligonucleotides are utilized. These oligonucleotides are linked to magnetic particles and capture Mycobacterium tuberculosis complex (MTC) DNA released from 10-15 g of tissue material. In a second approach, the remaining sediments from the magnetic capture protocol were further processed with a less complex extraction protocol that can be used in routine daily diagnostics. A total of 100 tissue samples from 34 cattle (n = 74) and 18 red deer (n = 26) were analyzed with the developed protocols, and the results were compared to the prescribed protocol. All three extraction methods yielded reliable results in real-time PCR analysis. The use of larger sample volumes increased the sensitivity of DNA detection, as shown by the decrease in Ct values. Furthermore, five samples that tested negative or questionable with the official extraction protocol were detected as positive by real-time PCR when the alternative extraction methods were used. Calculated kappa indices showed that the three extraction protocols were in moderate (0.52; protocol 1 vs. 3) to almost perfect agreement (1.00; red deer sample testing with all protocols). Both new methods yielded increased detection rates for MTC DNA in large sample volumes and consequently improve the official diagnostic protocol.
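The kappa agreement statistic used to compare the protocols is easy to reproduce. A minimal sketch via scikit-learn; the binary PCR calls below are hypothetical, not the study's data:

from sklearn.metrics import cohen_kappa_score

protocol_1 = ["pos", "pos", "neg", "neg", "pos", "neg"]
protocol_3 = ["pos", "neg", "neg", "neg", "pos", "neg"]
print(cohen_kappa_score(protocol_1, protocol_3))  # ~0.67 for this toy data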
Raterink, Robert-Jan; Witkam, Yoeri; Vreeken, Rob J; Ramautar, Rawi; Hankemeier, Thomas
2014-10-21
In the field of bioanalysis, there is an increasing demand for miniaturized, automated, robust sample pretreatment procedures that can be easily connected to direct-infusion mass spectrometry (DI-MS) in order to allow high-throughput screening of drugs and/or their metabolites in complex body fluids such as plasma. Liquid-liquid extraction (LLE) is a common sample pretreatment technique often used for complex aqueous samples in bioanalysis. Despite significant developments in automated and miniaturized LLE procedures, fully automated LLE techniques allowing high-throughput bioanalytical studies on small-volume samples using direct-infusion mass spectrometry have not yet matured. Here, we introduce a new fully automated micro-LLE technique based on gas-pressure-assisted mixing followed by passive phase separation, coupled online to nanoelectrospray DI-MS. Our method was characterized by varying the gas flow and its duration through the solvent mixture. For evaluation of the analytical performance, four drugs were spiked into human plasma, resulting in highly acceptable precision (RSD down to 9%) and linearity (R² ranging from 0.990 to 0.998). We demonstrate that our new method not only allows the reliable extraction of analytes from small sample volumes of a few microliters in an automated and high-throughput manner, but also performs comparably to or better than conventional offline LLE, in which the handling of small volumes remains challenging. Finally, we demonstrate the applicability of our method for drug screening on dried blood spots, showing excellent linearity (R² of 0.998) and precision (RSD of 9%). In conclusion, we present a proof of principle of a new high-throughput screening platform for bioanalysis based on a new automated micro-LLE method, coupled online to a commercially available nano-ESI-DI-MS.
Alizadeh, Naader; Mohammadi, Abdorreza; Tabrizchi, Mahmoud
2008-03-07
A simple, rapid and highly sensitive method for the simultaneous analysis of methamphetamine (MA) and 3,4-methylenedioxymethamphetamine (MDMA) in human serum was developed using solid-phase microextraction (SPME) combined with ion mobility spectrometry (IMS). A dodecylsulfate-doped polypyrrole (PPy-DS) was applied as a new fiber for SPME. The electrochemically polymerized PPy is formed on the surface of a platinum wire and contains the charge-compensating anion (dodecylsulfate) incorporated during synthesis by the cyclic voltammetry (CV) technique. The extraction properties of the fiber toward MA and MDMA were examined using a headspace-SPME (HS-SPME) device and thermal desorption in the injection port of the IMS. The results show that PPy-DS as an SPME fiber coating is suitable for the successful extraction of these compounds. The method is suitable for the identification and determination of MAs, is not time-consuming, requires small quantities of sample, and does not require any derivatization. Parameters such as pH, extraction time, ionic strength, and temperature of the sample were studied and optimized to obtain the best extraction results. The HS-SPME-IMS method provided good repeatability (RSDs < 7.8%) for spiked serum samples. The calibration graphs were linear in the range of 20-4000 ng ml(-1) (R(2) > 0.99), and the detection limits for MDMA and MA were 5 and 8 ng ml(-1), respectively. HS-SPME-IMS of a non-spiked serum sample provided a spectrum without any peak from the matrix, supporting an effective sample clean-up. Finally, the proposed method was applied to the analysis of an ecstasy tablet.
Zorn, Julia; Ritter, Bärbel; Miller, Manuel; Kraus, Monika; Northrup, Emily; Brielmeier, Markus
2017-06-01
One limitation to housing rodents in individually ventilated cages (IVCs) is the ineffectiveness of traditional health monitoring programs that test soiled bedding sentinels every quarter. Aerogenic transmission does not occur with this method. Moreover, the transmission of numerous pathogens in bedding is uncertain, and sentinel susceptibility to various pathogens varies. In this study, a novel method using particle collection from samples of exhaust air was developed and systematically compared with routine health monitoring using soiled bedding sentinels. We used our method to screen these samples for the presence of murine norovirus (MNV), a mouse pathogen highly prevalent in laboratory animal facilities. Exhaust air particles from prefilters of IVC racks with known MNV prevalence were tested by quantitative reverse transcription polymerase chain reaction (RT-qPCR). MNV was detected in exhaust air as early as one week with one MNV-positive cage per rack, while sentinels shed MNV RNA without seroconverting. MNV was reliably and repeatedly detected in particles collected from samples of exhaust air in all seven of the three-month sampling rounds, with increasing MNV prevalence, while sentinels seroconverted in only one round. Under field conditions, routine soiled bedding sentinel health monitoring in our animal facility failed to identify 67% (n = 85) of samples that were positive by RT-qPCR of exhaust air particles. Thus, this method proved to be highly sensitive and superior to soiled bedding sentinels in the reliable detection of MNV. These results represent a major breakthrough in hygiene monitoring of rodent IVC systems and contribute to the 3R principles by reducing the number of animals used and by improving experimental conditions.
Zhuang, H; Savage, E M
2012-05-01
The effects of postdeboning aging and frozen storage on the water-holding capacity (WHC) of chicken breast pectoralis major muscle were investigated. Broiler breast muscle was removed from carcasses either early postmortem (2 h) or later postmortem (24 h). Treatments included no postdeboning aging; 1-d postdeboning aging at 2°C; 7-d postdeboning aging (2-h deboned meat only); and 6-d storage at -20°C plus 1-d thawing at 2°C (freezing and thawing treatment, 2-h deboned meat only). The WHC was determined by cooking loss, drip loss, a filter paper press method (results presented as expressible fluid), and a salt-induced swelling and centrifugation method (results presented as percentage of salt-induced water gain). There were no differences in WHC estimated by cooking loss and expressible fluid between the treatments. Only the freezing and thawing treatment resulted in a significant increase in drip loss. The average percentage of salt-induced water gain by the 24-h deboned samples, the postdeboning-aged 2-h samples, and the frozen 2-h samples, which did not differ from each other, was significantly higher than that by the 2-h deboned samples. These results indicate that regardless of method (carcass aging vs. postdeboning aging) and time (aging for 1 d vs. for 7 d), postmortem aging for more than 1 d does not affect the WHC of early deboned samples as measured by dripping, cooking, and pressing. However, postmortem carcass aging, postdeboning aging, and freezing and thawing storage can significantly enhance the ability of chicken breast meat to hold added salt water, i.e., WHC measured by the salt-induced swelling and centrifugation method.
The scientific status of childhood dissociative identity disorder: a review of published research.
Boysen, Guy A
2011-01-01
Dissociative identity disorder (DID) remains a controversial diagnosis due to conflicting views on its etiology. Some attribute DID to childhood trauma and others attribute it to iatrogenesis. The purpose of this article is to review the published cases of childhood DID in order to evaluate its scientific status and to answer research questions related to the etiological models. I searched MEDLINE and PsycINFO records for studies published since 1980 on DID/multiple personality disorder in children. For each study, I coded information regarding the origin of samples and the diagnostic methods. The review produced a total of 255 cases of childhood DID reported as individual case studies (44) or aggregated into empirical studies (211). Nearly all cases (93%) emerged from samples of children in treatment, and multiple personalities were the presenting problem in 23% of the case studies. Four US research groups accounted for 65% of all 255 cases. Diagnostic methods typically included clinical evaluation based on Diagnostic and Statistical Manual of Mental Disorders criteria, but hypnosis, structured interviews, and multiple raters were rarely used in diagnoses. Despite continuing research on the related concepts of trauma and dissociation, childhood DID itself appears to be an extremely rare phenomenon that few researchers have studied in depth. Nearly all of the research that does exist on childhood DID is from the 1980s and 1990s and does not resolve the ongoing controversies surrounding the disorder. Copyright © 2011 S. Karger AG, Basel.
A fast and objective multidimensional kernel density estimation method: fastKDE
O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; ...
2016-03-07
Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples takes only 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
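For orientation only, the sketch below shows the kind of quantity the abstract highlights, a bivariate density and the conditional PDF p(y|x). It is not the fastKDE algorithm: fastKDE chooses the kernel shape and bandwidth objectively, whereas scipy's gaussian_kde uses a rule-of-thumb bandwidth; all data here are synthetic.

```python
# Illustrative bivariate KDE and conditional PDF; not the fastKDE method.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 0.5 * x + rng.normal(scale=0.5, size=10_000)   # correlated pair

kde = gaussian_kde(np.vstack([x, y]))
xs = np.linspace(-3, 3, 61)
ys = np.linspace(-3, 3, 61)
X, Y = np.meshgrid(xs, ys)                         # rows vary over y
joint = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)

p_x = trapezoid(joint, ys, axis=0)                 # marginal p(x)
conditional = joint / p_x[np.newaxis, :]           # p(y|x), column by column
print(conditional.shape)
```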
Ait Kaci Azzou, S; Larribe, F; Froda, S
2016-10-01
In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non-homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.
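To make the simulation step concrete: under a piecewise-constant effective population size, coalescent waiting times follow a non-homogeneous exponential law that can be sampled by inverting the cumulative hazard piece by piece. The sketch below is a generic illustration of that inversion, not the authors' IS implementation; the breakpoints and sizes are hypothetical.

```python
# Minimal sketch: draw the next coalescence time for k lineages when N(t) is
# piecewise constant, by inverting the cumulative hazard of the rate
# binom(k, 2) / N(t). Starts at t = 0; parameters are hypothetical.
import math
import random

def waiting_time(k, breakpoints, sizes):
    """sizes[i] holds N(t) on [breakpoints[i], breakpoints[i+1]);
    the last piece extends to infinity."""
    target = random.expovariate(1.0)          # unit-rate exponential draw
    rate_coef = k * (k - 1) / 2.0
    t = breakpoints[0]
    for i, n_e in enumerate(sizes):
        seg_end = breakpoints[i + 1] if i + 1 < len(breakpoints) else math.inf
        seg_hazard = rate_coef / n_e * (seg_end - t)
        if seg_hazard >= target:              # event falls inside this piece
            return t + target * n_e / rate_coef
        target -= seg_hazard                  # spend this piece's hazard
        t = seg_end
    raise RuntimeError("unreachable: last piece extends to infinity")

print(waiting_time(k=10, breakpoints=[0.0, 0.5, 2.0], sizes=[1.0, 0.2, 5.0]))
```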
Method to deterministically study photonic nanostructures in different experimental instruments.
Husken, B H; Woldering, L A; Blum, C; Vos, W L
2009-01-01
We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim to study photonic structures. To this end, a detailed map of the spatial surroundings of the nanostructure is made during the fabrication of the structure. These maps are made using a series of micrographs with successively decreasing magnifications. The micrographs reveal intrinsic and characteristic geometric features that can subsequently be used in different setups to act as markers. As an illustration, we probe surface cavities with radii of 65 nm on a silica opal photonic crystal with various setups: a focused ion beam workstation; a scanning electron microscope (SEM); a wide field optical microscope and a confocal microscope. We use cross-correlation techniques to recover a small area imaged with the SEM in a large area photographed with the optical microscope, which provides a possible avenue to automatic searching. We show how both structural and optical reflectivity data can be obtained from one and the same nanostructure. Since our approach does not use artificial grids or markers, it is of particular interest for samples whose structure is not known a priori, like samples created solely by self-assembly. In addition, our method is not restricted to conducting samples.
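A minimal sketch of the cross-correlation recovery idea: locate a small field of view inside a larger overview image with normalized cross-correlation. It uses synthetic arrays in place of the SEM and optical micrographs and is not the authors' exact pipeline.

```python
# Locate a small "SEM" patch inside a large "optical" image via normalized
# cross-correlation (template matching). Images here are synthetic stand-ins.
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(0)
optical = rng.random((600, 800))         # stand-in for the large optical map
sem = optical[210:260, 340:410].copy()   # stand-in for the small SEM area

score = match_template(optical, sem)     # normalized cross-correlation map
row, col = np.unravel_index(np.argmax(score), score.shape)
print(row, col)                          # -> 210 340: the SEM area is recovered
```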
Dorival-García, N; Bones, J
2017-08-25
A method for the identification of leachables in chemically defined media for CHO cell culture using dispersive liquid-liquid microextraction (DLLME) and UHPLC-MS is described. A Box-Behnken design of experiments (DoE) approach was applied to obtain the optimum extraction conditions for the target analytes. The performance of DLLME as an extraction technique was studied by comparison of two commercial chemically defined media for CHO cell culture. General extraction conditions for any group of leachables, regardless of their specific chemical functionalities, can be applied, and similar optimum conditions were obtained with the two media. Extraction efficiency and matrix effects were determined. The method was validated using matrix-matched standard calibration followed by recovery assays with spiked samples. Finally, cell culture media was incubated in 7 single-use bioreactors (SUBs) from different vendors and analysed. TBPP was not detected in any of the samples, whereas DtBP and TBPP-ox were found in all samples, with bDtBPP detected in six SUBs. This method can be used for early identification of non-satisfactory SUB films for cultivation of CHO cell lines for biopharmaceutical production. Copyright © 2017 Elsevier B.V. All rights reserved.
Cepeda-Vázquez, Mayela; Blumenthal, David; Camel, Valérie; Rega, Barbara
2017-03-01
Furan, a possibly carcinogenic compound to humans, and furfural, a naturally occurring volatile contributing to aroma, can both be found in thermally treated foods. These process-induced compounds, formed by closely related reaction pathways, play an important role as markers of food safety and quality. A method capable of simultaneously quantifying both molecules is thus highly relevant for developing mitigation strategies while preserving the sensory properties of food. We have developed a reliable and sensitive headspace trap (HS trap) extraction method coupled to GC-MS for the simultaneous quantification of furan and furfural in a solid processed food (sponge cake). HS trap extraction has been optimized using an optimal design of experiments (O-DOE) approach, considering four instrumental and two sample preparation variables, as well as a blocking factor identified during preliminary assays. Multicriteria and multiple response optimization was performed based on a desirability function, yielding the following conditions: thermostatting temperature, 65°C; thermostatting time, 15 min; number of pressurization cycles, 4; dry purge time, 0.9 min; water/sample amount ratio (dry basis), 16; and total amount (water + sample, dry basis), 10 g. The performance of the optimized method was also assessed: repeatability (RSD: ≤3.3% for furan and ≤2.6% for furfural), intermediate precision (RSD: 4.0% for furan and 4.3% for furfural), linearity (R²: 0.9957 for furan and 0.9996 for furfural), LOD (0.50 ng g⁻¹ dry basis for furan and 10.2 ng g⁻¹ dry basis for furfural), and LOQ (0.99 ng g⁻¹ dry basis for furan and 41.1 ng g⁻¹ dry basis for furfural). A matrix effect was observed mainly for furan. Finally, the optimized method was applied to other sponge cakes with different matrix characteristics and levels of analytes. Copyright © 2016. Published by Elsevier B.V.
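For readers unfamiliar with desirability-based multiple-response optimization, the sketch below shows the usual Derringer-style construction: each response is mapped onto [0, 1] and candidate settings are ranked by the geometric mean of the individual desirabilities. The response names, values and targets are hypothetical placeholders, not the paper's data.

```python
# Hedged sketch of the desirability-function idea used in multi-response
# optimization; responses and limits below are hypothetical.
import numpy as np

def desirability_larger_is_better(y, low, high, s=1.0):
    """One-sided desirability: 0 at/below `low`, 1 at/above `high`."""
    return np.clip((y - low) / (high - low), 0.0, 1.0) ** s

def overall_desirability(ds):
    """Geometric mean; any individual desirability of 0 vetoes the setting."""
    ds = np.asarray(ds, dtype=float)
    return float(np.prod(ds) ** (1.0 / len(ds)))

# Hypothetical responses (e.g. peak areas) for one candidate condition:
d_furan = desirability_larger_is_better(y=8.2, low=0.0, high=10.0)
d_furfural = desirability_larger_is_better(y=55.0, low=0.0, high=80.0)
print(overall_desirability([d_furan, d_furfural]))   # rank candidates by this
```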
Options for Robust Airfoil Optimization under Uncertainty
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Li, Wu
2002-01-01
A robust optimization method is developed to overcome point-optimization at the sampled design points. This method combines the best features from several preliminary methods proposed by the authors and their colleagues. The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of spline control points as design variables yet the resulting airfoil shape does not need to be smoothed, and (3) it allows the user to make a tradeoff between the level of optimization and the amount of computing time consumed. For illustration purposes, the robust optimization method is used to solve a lift-constrained drag minimization problem for a two-dimensional (2-D) airfoil in Euler flow with 20 geometric design variables.
NASA Astrophysics Data System (ADS)
Berbiche, A.; Sadouki, M.; Fellah, Z. E. A.; Ogam, E.; Fellah, M.; Mitri, F. G.; Depollier, C.
2016-01-01
An acoustic reflectivity method is proposed for measuring the permeability or flow resistivity of air-saturated porous materials. In this method, a simplified expression of the reflection coefficient is derived in the Darcy regime (low frequency range) that does not depend on frequency or porosity. Numerical simulations show that the reflection coefficient of a porous material can be approximated by the simplified expression obtained from its first-order Taylor development. This approximation is especially good for resistive materials (of low permeability) and for the lower frequencies. The permeability is reconstructed by solving the inverse problem using waves reflected by plastic foam samples at different frequency bandwidths in the Darcy regime. The proposed method has the advantage of being simple compared to conventional methods that use experimental reflected data, and is complementary to the transmissivity method, which is better adapted to low-resistivity materials (high permeability).
A self-organizing Lagrangian particle method for adaptive-resolution advection-diffusion simulations
NASA Astrophysics Data System (ADS)
Reboux, Sylvain; Schrader, Birte; Sbalzarini, Ivo F.
2012-05-01
We present a novel adaptive-resolution particle method for continuous parabolic problems. In this method, particles self-organize in order to adapt to local resolution requirements. This is achieved by pseudo forces that are designed so as to guarantee that the solution is always well sampled and that no holes or clusters develop in the particle distribution. The particle sizes are locally adapted to the length scale of the solution. Differential operators are consistently evaluated on the evolving set of irregularly distributed particles of varying sizes using discretization-corrected operators. The method does not rely on any global transforms or mapping functions. After presenting the method and its error analysis, we demonstrate its capabilities and limitations on a set of two- and three-dimensional benchmark problems. These include advection-diffusion, the Burgers equation, the Buckley-Leverett five-spot problem, and curvature-driven level-set surface refinement.
A sparse equivalent source method for near-field acoustic holography.
Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter
2017-01-01
This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.
Day, Ryan; Qu, Xiaotao; Swanson, Rosemarie; Bohannan, Zach; Bliss, Robert
2011-01-01
Most current template-based structure prediction methods concentrate on finding the correct backbone conformation and then packing sidechains within that backbone. Our packing-based method derives distance constraints from conserved relative packing groups (RPGs). In our refinement approach, the RPGs provide a level of resolution that restrains global topology while allowing conformational sampling. In this study, we test our template-based structure prediction method using 51 prediction units from CASP7 experiments. RPG-based constraints are able to substantially improve approximately two-thirds of starting templates. Upon deeper investigation, we find that true positive spatial constraints, especially those non-local in sequence, derived from the RPGs were important to building nearer native models. Surprisingly, the fraction of incorrect or false positive constraints does not strongly influence the quality of the final candidate. This result indicates that our RPG-based true positive constraints sample the self-consistent, cooperative interactions of the native structure. The lack of such reinforcing cooperativity explains the weaker effect of false positive constraints. Generally, these findings are encouraging indications that RPGs will improve template-based structure prediction. PMID:21210729
Larcher, R; Nicolini, G; Bertoldi, D; Nardin, T
2008-02-25
An HPLC method using a coulometric electrode array detector (CEAD) to analyse 4-ethylcatechol (4-EC) in wine was established. The procedure does not require any sample preparation or analyte derivatisation and performs the chromatographic separation in a short time. The assay method is linear up to 1520 µg L(-1) and precise (R.S.D. < 3%), with limits of detection and quantitation of 1.34 µg L(-1) and 2.2 µg L(-1), respectively. Recoveries in spiked wine samples ranged from 95% to 104% with a median value of 102%, and matrix effects were not observed. The method was applied to the evaluation of the concentration of 4-EC in 250 commercial Italian wines. The red wines analysed had median, 75th percentile and maximum values of 37 µg L(-1), 89 µg L(-1) and 1610 µg L(-1), respectively. For Sangiovese-based wines the mean ratios of 4-EP and 4-EG to 4-EC were 3.7:1 and 0.7:1, respectively. The feasibility of a cheaper fluorimetric approach to 4-EC quantification was investigated.
Choe, Leila H; Lee, Kelvin H
2003-10-01
We investigate one approach to assess the quantitative variability in two-dimensional gel electrophoresis (2-DE) separations based on gel-to-gel variability, sample preparation variability, sample load differences, and the effect of automation on image analysis. We observe that 95% of spots present in three out of four replicate gels exhibit less than a 0.52 coefficient of variation (CV) in fluorescent stain intensity (% volume) for a single sample run on multiple gels. When four parallel sample preparations are performed, this value increases to 0.57. We do not observe any significant change in quantitative value for an increase or decrease in sample load of 30% when using appropriate image analysis variables. Increasing use of automation, while necessary in modern 2-DE experiments, does change the observed level of quantitative and qualitative variability among replicate gels. The number of spots that change qualitatively for a single sample run in parallel varies from a CV = 0.03 for fully manual analysis to CV = 0.20 for a fully automated analysis. We present a systematic method by which a single laboratory can measure gel-to-gel variability using only three gel runs.
Passive sampling for the isotopic fingerprinting of atmospheric mercury
NASA Astrophysics Data System (ADS)
Bergquist, B. A.; MacLagan, D.; Spoznar, N.; Kaplan, R.; Chandan, P.; Stupple, G.; Zimmerman, L.; Wania, F.; Mitchell, C. P. J.; Steffen, A.; Monaci, F.; Derry, L. A.
2017-12-01
Recent studies show that there are variations in the mercury (Hg) isotopic signature of atmospheric Hg, which demonstrates the potential for source tracing and improved understanding of atmospheric cycling of Hg. However, current methods for both measuring atmospheric Hg and collecting enough atmospheric Hg for isotopic analyses require expensive instruments that need power and expertise. Additionally, methods for collecting enough atmospheric Hg for isotopic analysis require pumping air through traps for long periods (weeks and longer). Combining a new passive atmospheric sampler for Hg with novel Hg isotopic analyses will allow for the application of stable Hg isotopes to atmospheric studies of Hg. Our group has been testing a new passive sampler for gaseous Hg that relies on the diffusion of Hg through a diffusive barrier and adsorption onto a sulphur-impregnated activated carbon sorbent. The benefit of this passive sampler is that it is low cost, requires no power, and collects gaseous Hg for up to one year with linear, well-defined uptake, which allows for reproducible and accurate measurements of atmospheric gaseous Hg concentrations (~8% uncertainty). As little as one month of sampling is often adequate to collect sufficient Hg for isotopic analysis at typical background concentrations. Experiments comparing the isotopic Hg signature in activated carbon samples using different approaches (i.e. by passive diffusion, by passive diffusion through diffusive barriers of different thickness, by active pumping) and at different temperatures confirm that the sampling process itself does not impose mass-independent fractionation (MIF). However, sampling does result in a consistent and thus correctable mass-dependent fractionation (MDF) effect. Therefore, the sampler preserves Hg MIF with very high accuracy and precision, which is necessary for atmospheric source tracing, and reasonable MDF can be estimated with some increase in error. In addition to experimental work, initial field data will be presented including a transect of increasing distance from a known strong source of Hg (Mt. Amiata mine, Italy), downwind of Kilauea volcano in Hawaii, and several other locales including the Arctic station Alert and various sites across Ontario, Canada.
Constructing a multidimensional free energy surface like a spider weaving a web.
Chen, Changjun
2017-10-15
A complete free energy surface in the collective variable space provides important information on the reaction mechanisms of molecules. However, sufficient sampling in the collective variable space is not easy: the space expands quickly with the number of collective variables. To solve the problem, many methods utilize artificial biasing potentials to flatten out the original free energy surface of the molecule in the simulation. Their performance is sensitive to the definition of the biasing potentials. A fast-growing biasing potential accelerates the sampling but decreases the accuracy of the free energy result; a slow-growing biasing potential gives an optimized result but needs more simulation time. In this article, we propose an alternative method. It adds the biasing potential to a representative point of the molecule in the collective variable space to improve the conformational sampling, and the free energy surface is calculated from the free energy gradient in the constrained simulation, not given by the negative of the biasing potential as in previous methods. The presented method therefore does not require the biasing potential to remove all the barriers and basins on the free energy surface exactly. Practical applications show that the method in this work is able to produce accurate free energy surfaces for different molecules in a short time period. The free energy errors are small for various biasing potentials. © 2017 Wiley Periodicals, Inc.
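The reconstruction step described above, recovering the free energy surface from the free energy gradient rather than from the biasing potential, reduces in one dimension to integrating mean-force samples along the collective variable. A minimal sketch with synthetic mean-force data, not the paper's implementation:

```python
# Integrate mean-force samples dF/ds over a collective-variable grid to
# recover a free energy profile F(s), with F(s_0) = 0. Data are synthetic.
import numpy as np

s = np.linspace(0.0, 6.0, 25)                       # collective-variable grid
mean_force = np.gradient(0.5 * (s - 3.0) ** 2, s)   # stand-in for <dF/ds>

# Cumulative trapezoid rule: F(s_i) = sum of segment areas up to s_i.
free_energy = np.concatenate(
    ([0.0], np.cumsum(0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(s)))
)
print(free_energy.min(), s[np.argmin(free_energy)])  # basin near s = 3
```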
La Nasa, Jacopo; Modugno, Francesca; Aloisi, Matteo; Lluveras-Tenorio, Anna; Bonaduce, Ilaria
2018-02-25
In this paper we present a new analytical GC/MS method for the analysis of mixtures of free fatty acids and metal soaps in paint samples. This approach is based on the use of two different silylating agents: N,O-bis(trimethylsilyl)trifluoroacetamide (BSTFA) and 1,1,1,3,3,3-hexamethyldisilazane (HMDS). Our experimentation demonstrated that HMDS does not silylate fatty acid carboxylates, so it can be used for the selective derivatization and GC/MS quantitative analysis of free fatty acids. On the other hand, BSTFA is able to silylate both free fatty acids and fatty acid carboxylates. The reaction conditions for the derivatization of carboxylates with BSTFA were thus optimized with a full factorial 3^2 experimental design using lead stearate and lead palmitate as model systems. The analytical method was validated following the ICH guidelines. The method allows the qualitative and quantitative analysis of fatty acid carboxylates of sodium, calcium, magnesium, aluminium, manganese, cobalt, copper, zinc, cadmium, and lead and of lead azelate. In order to demonstrate the performance of the new analytical method, samples collected from two reference paint layers, from a gilded 16th century marble sculpture, and from a paint tube belonging to the atelier of Edvard Munch, used in the last period of his life (1916-1944), were characterized. Copyright © 2017 Elsevier B.V. All rights reserved.
Joelsson, Adam C; Terkhorn, Shawn P; Brown, Ashley S; Puri, Amrita; Pascal, Benjamin J; Gaudioso, Zara E; Siciliano, Nicholas A
2017-09-01
Veriflow® Listeria species (Veriflow LS) is a molecular-based assay for the presumptive detection of Listeria spp. from environmental surfaces (stainless steel, sealed concrete, plastic, and ceramic tile) and ready-to-eat (RTE) food matrixes (hot dogs and deli meat). The assay utilizes a PCR detection method coupled with a rapid, visual, flow-based assay that develops in 3 min post-PCR amplification and requires only a 24 h enrichment for maximum sensitivity. The Veriflow LS system eliminates the need for sample purification, gel electrophoresis, or fluorophore-based detection of target amplification and does not require complex data analysis. This Performance Tested Method(SM) validation study demonstrated the ability of the Veriflow LS assay to detect low levels of artificially inoculated Listeria spp. in six distinct environmental and food matrixes. In each unpaired reference comparison study, probability of detection analysis indicated that there was no significant difference between the Veriflow LS method and the U.S. Department of Agriculture Food Safety and Inspection Service Microbiology Laboratory Guide Chapter 8.08 reference method. Fifty-one strains of various Listeria spp. were detected in the inclusivity study, and 35 nonspecific organisms went undetected in the exclusivity study. The study results show that the Veriflow LS is a sensitive, selective, and robust assay for the presumptive detection of Listeria spp. sampled from environmental surfaces (stainless steel, sealed concrete, plastic, and ceramic tile) and RTE food matrixes (hot dogs and deli meat).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruedig, Elizabeth
Public Law 105-119 directs the U.S. Department of Energy (DOE) to convey or transfer parcels of land to the Incorporated County of Los Alamos or their designees and to the Department of the Interior, Bureau of Indian Affairs, in trust for the Pueblo de San Ildefonso. Los Alamos National Security is tasked to support DOE in conveyance and/or transfer of identified land parcels no later than September 2022. Under DOE Order 458.1, Radiation Protection of the Public and the Environment (O458.1, 2013) and Los Alamos National Laboratory (LANL or the Laboratory) implementing Policy 412 (P412, 2014), real property with the potential to contain residual radioactive material must meet the criteria for clearance and release to the public. This Sampling and Analysis Plan (SAP) covers a second investigation of Tract A-18-2 for the purpose of verifying the previous sampling results (LANL 2017). This sampling plan requires 18 project-specific soil samples for use in radiological clearance decisions consistent with LANL Procedure ENV-ES-TP-238 (2015a) and guidance in the Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM, 2000). The sampling work will be conducted by LANL, and samples will be evaluated by a LANL-contracted independent lab. However, there will be federal review (verification) of all steps of the sampling process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clayton, Christopher; Kothari, Vijendra; Starr, Ken
2012-07-01
The U.S. Department of Energy (DOE) methods and protocols allow evaluation of remediation and final site conditions to determine if remediated sites remain protective. Two case studies are presented that involve the Niagara Falls Storage Site (NFSS) and associated vicinity properties (VPs), which are being remediated under the Formerly Utilized Sites Remedial Action Program (FUSRAP). These properties are a part of the former Lake Ontario Ordnance Works (LOOW). In response to stakeholders' concerns about whether certain remediated NFSS VPs were putting them at risk, DOE met with stakeholders and agreed to evaluate protectiveness. Documentation in the DOE records collection adequately described assessed and final radiological conditions at the completed VPs. All FUSRAP wastes at the completed sites were cleaned up to meet DOE guidelines for unrestricted use. DOE compiled the results of the investigation in a report that was released for public comment. In conducting the review of site conditions, DOE found that stakeholders were also concerned about waste from the Separations Process Research Unit (SPRU) at the Knolls Atomic Power Laboratory (KAPL) that was handled at LOOW. DOE agreed to determine if SPRU waste remained that needed to be remediated. DOE reviewed records of waste characterization, historical handling locations and methods, and assessment and remediation data. DOE concluded that the SPRU waste was remediated on the LOOW to levels that pose no unacceptable risk and allow unrestricted use and unlimited exposure. This work confirms the following points as tenets of an effective long-term surveillance and maintenance (LTS and M) program: - Stakeholder interaction must be open and transparent, and DOE must respond promptly to stakeholder concerns. - DOE, as the long-term custodian, must collect and preserve site records in order to demonstrate that remediated sites pose no unacceptable risk. - DOE must continue to maintain constructive relationships with the U.S. Army Corps of Engineers and state and federal regulators. After review of historical site documentation, DOE reports, and USACE radiological data, DOE concluded the following: - DOE had access to adequate documentation to evaluate site conditions at the former LOOW. This is important to confirm now, while institutional knowledge of early FUSRAP work remains available. - DOE remediated the completed VPs to conditions that are protective for unrestricted residential use. Sample and walkover gamma scan results indicate that no wastes remain that exceed cleanup criteria. - Process knowledge and field observations establish that Cs-137 is the predominant radionuclide in the KAPL waste stream. Cs-137, a strong gamma emitter, was used as an indicator for remediation of KAPL waste. Other radionuclides were present in much lower relative concentrations and were likely also removed during remediation of the VPs. - KAPL contaminants were removed during remedial activities at the former LOOW as either co-located or co-mingled with other radionuclides. - For the active VPs (VP-E, VP-E', and VP-G), results of DOE's cleanup of the accessible portions of these properties indicate that KAPL waste does not remain at concentrations greater than the DOE cleanup limit: - Inaccessible areas were not associated with historic KAPL waste handling. Therefore, it is unlikely that KAPL waste remains on the active VPs.
- Because gamma activity was used by DOE during remediation/verification activities for excavation control, additional USACE cleanup of FUSRAP wastes on these properties will likely result in the remediation of any co-located residual KAPL wastes to acceptable levels or identification of KAPL waste that is not co-located. - Although USACE has not established a cleanup level for Cs-137 on the active NFSS VPs, DOE assessment and remediation data indicate that assessed Cs-137 was remediated and significant Cs-137 is unlikely to remain. Because of the low likelihood of encountering significant KAPL waste on the active NFSS VPs, additional remediation is not anticipated at these properties. - USACE assessment soil sampling results on the NFSS proper indicate that KAPL waste does not exceed the DOE cleanup level for Cs-137. USACE has not established a cleanup level for Cs-137 on NFSS proper. The USACE cleanup of FUSRAP wastes on the NFSS proper will likely result in the remediation of any co-located residual KAPL wastes or identification of KAPL waste that is not co-located. DOE is drafting a report of the investigation of KAPL waste at LOOW. The report will be released to the public for comment when the draft is complete. DOE responses to stakeholder inquiries resulted in a common understanding of site conditions and site risk. DOE expects additional interaction with stakeholders at the former LOOW as USACE completes remediation of the active VPs and the NFSS proper, and these relationships will hopefully have built trust between DOE and the stakeholders that DOE will perform its duties in an open and transparent manner that includes stakeholders as stewards for remediated FUSRAP sites. (authors)
Ahrens, Brian D; Kucherova, Yulia; Butch, Anthony W
2016-01-01
Sports drug testing laboratories are required to detect several classes of compounds that are prohibited at all times, including anabolic agents, peptide hormones, growth factors, beta-2 agonists, hormones and metabolic modulators, and diuretics/masking agents. Other classes of compounds, such as stimulants, narcotics, cannabinoids, and glucocorticoids, are also prohibited, but only when an athlete is in competition. A single class can contain a large number of prohibited substances, and all of the compounds should be detected by the testing procedure. Since there are almost 70 stimulants on the prohibited list, it can be a challenge to develop a single screening method that optimally detects all the compounds. We describe a combined liquid chromatography-tandem mass spectrometry (LC-MS/MS) and gas chromatography-mass spectrometry (GC-MS) testing method for the detection of all the stimulants and narcotics on the World Anti-Doping Agency prohibited list. Urine for LC-MS/MS testing does not require sample pretreatment and is analyzed by a direct dilute-and-shoot method. Urine samples for the GC-MS method require a liquid-liquid extraction followed by derivatization with trifluoroacetic anhydride.
Klopp, R N; Oconitrillo, M J; Sackett, A; Hill, T M; Schlotterbeck, R L; Lascano, G J
2018-07-01
A limited amount of research is available related to the rumen microbiota of calves, yet there has been a recent spike of interest in determining the diversity and development of calf rumen microbial populations. To study the microbial populations of a calf's rumen, a sample of the rumen fluid is needed. One way to take a rumen fluid sample from a calf is by fistulating the animal. This method requires surgery and can be very stressful on a young animal that is trying to adapt to a new environment and has a depressed immune system. Another method that can be used instead of fistulation surgery is a rumen pump. This method requires a tube to be inserted into the rumen through the calf's esophagus. Once inside the rumen, fluid can be pumped out and collected in a few minutes. This method is quick, inexpensive, and does not cause significant stress on the animal. This technical note presents the materials and methodology used to convert a drenching system into a rumen pump and its respective utilization in 2 experiments using dairy bull calves. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Araújo, M. M.; Duarte, R. C.; Silva, P. V.; Marchioni, E.; Villavicencio, A. L. C. H.
2009-07-01
The marketing of minimally processed vegetables (MPV) is gaining impetus due to their convenience, freshness and apparent health benefits. However, minimal processing does not reduce pathogenic microorganisms to safe levels. Food irradiation is used to extend shelf life and to inactivate food-borne pathogens; in combination with minimal processing it could improve the safety and quality of MPV. A microbiological screening method based on the use of the direct epifluorescent filter technique (DEFT) and aerobic plate count (APC) has been established for the detection of irradiated foodstuffs. The aim of this study was to evaluate the applicability of this technique for detecting MPV irradiation. Samples from retail markets were irradiated with 0.5 and 1.0 kGy using a 60Co facility. In general, with increasing dose, DEFT counts remained similar regardless of irradiation, while APC counts decreased gradually. The difference between the two counts increased gradually with dose in all samples. It is suggested that a DEFT/APC difference over 2.0 log could serve as a criterion to judge whether an MPV has been treated by irradiation. The DEFT/APC method could be used satisfactorily as a screening method for indicating irradiation processing.
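The screening criterion suggested above translates directly into code; a tiny sketch, with hypothetical counts:

```python
# Flag a sample as likely irradiated when log10(DEFT) - log10(APC) > 2.0,
# the criterion proposed in the abstract. Counts below are hypothetical.
import math

def deft_apc_flag(deft_cfu, apc_cfu, threshold=2.0):
    """Both counts in CFU/g; True means the sample screens as irradiated."""
    return math.log10(deft_cfu) - math.log10(apc_cfu) > threshold

print(deft_apc_flag(deft_cfu=1e7, apc_cfu=5e4))   # 2.3 log difference -> True
```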
Carvalho, Rimenys J; Cruz, Thayana A
2018-01-01
High-throughput screening (HTS) systems have emerged as important tools providing fast and low-cost evaluation of several conditions at once, since they require small quantities of material and small sample volumes. These characteristics are extremely valuable for experiments with a large number of variables, enabling the application of design of experiments (DoE) strategies or simple experimental planning approaches. Once the capacity of HTS systems to mimic chromatographic purification steps was established, several studies were performed successfully, including scaled-down purification. Here, we propose a method for studying different purification conditions that can be used for any recombinant protein, including complex and glycosylated proteins, using low-binding filter microplates.
Elmer-Dixon, Margaret M; Bowler, Bruce E
2018-05-19
A novel approach to quantify mixed lipid systems is described. Traditional approaches to lipid vesicle quantification are time consuming, require large amounts of material and are destructive. We extend our recently described method for quantification of pure lipid systems to mixed lipid systems. The method only requires a UV-Vis spectrometer and does not destroy the sample. Mie scattering data from absorbance measurements are used as input to a Matlab program that calculates the total vesicle concentration and the concentrations of each lipid in the mixed lipid system. The technique is fast and accurate, which is essential for analytical lipid binding experiments. Copyright © 2018. Published by Elsevier Inc.
Sakai, Joseph T; Mikulich-Gilbertson, Susan K; Young, Susan E; Rhee, Soo Hyun; McWilliams, Shannon K; Dunn, Robin; Salomonsen-Sautel, Stacy; Thurstone, Christian; Hopfer, Christian J
2016-01-01
To our knowledge, this is the first study to examine the DSM-5-defined conduct disorder (CD) with limited prosocial emotions (LPE) among adolescents in substance use disorder (SUD) treatment, despite the high rates of CD in this population. We tested previously published methods of LPE categorization in a sample of male conduct-disordered patients in SUD treatment (n=196). CD with LPE patients did not demonstrate a distinct pattern in terms of demographics or co-morbidity regardless of the categorization method utilized. In conclusion, LPE, as operationalized here, does not identify a distinct subgroup of patients based on psychiatric comorbidity, SUD diagnoses, or demographics.
NASA Astrophysics Data System (ADS)
Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd
2018-03-01
Controllers that use PID parameters require a good tuning method in order to improve control system performance. PID tuning methods are divided into two groups: classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods. Previously, researchers have integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimizing parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods, implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method for the hydraulic positioning system.
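To make the PSO-PID idea concrete, the sketch below runs a generic global-best PSO over PID gains on a toy first-order plant with an ITAE cost. The plant, gain bounds, and cost are hypothetical, and the PSO constants that the paper tunes via the Variable Weight Grey-Taguchi DOE (velocity limit and weight factor) are simply fixed constants here.

```python
# Generic global-best PSO tuning (Kp, Ki, Kd) on a toy plant dy/dt = -y + u,
# minimizing ITAE for a unit step. All parameters are illustrative.
import numpy as np

def itae_cost(gains, dt=0.01, t_end=5.0):
    kp, ki, kd = gains
    y = integ = prev_err = 0.0
    cost = 0.0
    for k in range(int(t_end / dt)):
        err = 1.0 - y                      # unit step setpoint
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)                 # Euler step of the plant
        prev_err = err
        cost += (k * dt) * abs(err) * dt   # ITAE accumulation
    return cost

rng = np.random.default_rng(2)
n, dims, w, c1, c2, v_max = 20, 3, 0.7, 1.5, 1.5, 1.0
lo, hi = np.zeros(dims), np.array([10.0, 10.0, 1.0])   # gain bounds
x = rng.uniform(lo, hi, size=(n, dims))
v = np.zeros((n, dims))
pbest, pcost = x.copy(), np.array([itae_cost(p) for p in x])
gbest = pbest[pcost.argmin()].copy()

for _ in range(40):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x),
                -v_max, v_max)             # velocity limit fixed, not DOE-tuned
    x = np.clip(x + v, lo, hi)
    costs = np.array([itae_cost(p) for p in x])
    better = costs < pcost
    pbest[better], pcost[better] = x[better], costs[better]
    gbest = pbest[pcost.argmin()].copy()

print("tuned Kp, Ki, Kd:", np.round(gbest, 3))
```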
Were policies in Brazil effective to reducing trans fat from industrial origin in foods?
Dias, Flávia da Silva Lima; Lima, Mário Ferreira; de Velasco, Patricia Coelho; Salles-Costa, Rosana; Sardinha, Fátima Lúcia de Carvalho; do Carmo, Maria das Graças Tavares
2018-01-01
OBJECTIVE To determine the trans fatty acid content of processed foods frequently consumed by adults living in Rio de Janeiro, Brazil, after the enactment of a mandatory trans fatty acid labelling policy. METHODS Between February 2014 and January 2015, a specifically designed dietary questionnaire was completed by 107 adults to assess the frequency of processed food consumption. The most commonly consumed products from the survey, including vegetable oils, margarine, biscuits, snacks, cheese bread (pão de queijo), french fries, cheeseburger and ice cream, were then analyzed for their trans fatty acid content using gas chromatography with a flame ionization detector. RESULTS Differences in the levels of trans fatty acids were observed among the 22 products analyzed: trans fatty acid content ranged from 0.0 g/100 g in samples of cream cracker biscuit 1 and olive oil to 0.83 g/100 g in samples of cheeseburger (fast food), 0.51 g/100 g in samples of frozen pão de queijo and 12.92 g/100 g in samples of chocolate sandwich cookies with cream filling 2. The overall trans fatty acid content of the different samples of margarine brands was 0.20 g/100 g for brand 1 and 0.0 g/100 g for brand 2. These values are significantly lower than those observed in a survey conducted in 2003, when the regulation had just been enacted. CONCLUSIONS Our data indicate that Brazilian regulation is very likely implicated in the observed drop in trans fatty acids in most of the processed foods but has yet to eliminate them, which reinforces the urgent need to revise the legislation, since a minimum amount of trans fat does not mean that the food product does not contain this type of fat. PMID:29641658
Nevada National Security Site Integrated Groundwater Sampling Plan, Revision 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilborn, Bill R.; Boehlecke, Robert F.
The purpose is to provide a comprehensive, integrated approach for collecting and analyzing groundwater samples to meet the needs and objectives of the DOE/EM Nevada Program’s UGTA Activity. Implementation of this Plan will provide high-quality data required by the UGTA Activity for ensuring public protection in an efficient and cost-effective manner. The Plan is designed to ensure compliance with the UGTA Quality Assurance Plan (QAP) (NNSA/NFO, 2015); the Federal Facility Agreement and Consent Order (FFACO) (1996, as amended); and DOE Order 458.1, Radiation Protection of the Public and the Environment (DOE, 2013). The Plan’s scope comprises sample collection and analysis requirements relevant to assessing both the extent of groundwater contamination from underground nuclear testing and the impact of testing on water quality in downgradient communities. This Plan identifies locations to be sampled by CAU and location type, sampling frequencies, sample collection methodologies, and the constituents to be analyzed. In addition, the Plan defines data collection criteria such as well purging, detection levels, and accuracy requirements/recommendations; identifies reporting and data management requirements; and provides a process to ensure coordination between NNSS groundwater sampling programs for sampling analytes of interest to UGTA. Information used in the Plan development—including the rationale for selection of wells, sampling frequency, and the analytical suite—is discussed under separate cover (N-I, 2014) and is not reproduced herein. This Plan does not address compliance for those wells involved in a permitted activity. Sampling and analysis requirements associated with these wells are described in their respective permits and are discussed in NNSS environmental reports (see Section 5.2). In addition, sampling for UGTA CAUs that are in the Closure Report (CR) stage is not included in this Plan. Sampling requirements for these CAUs are described in the CR. Frenchman Flat is currently the only UGTA CAU in the CR stage. Sampling requirements for this CAU are described in Underground Test Area (UGTA) Closure Report for Corrective Action Unit 98: Frenchman Flat Nevada National Security Site, Nevada (NNSA/NFO, 2016).
Development of new methodologies for evaluating the energy performance of new commercial buildings
NASA Astrophysics Data System (ADS)
Song, Suwon
The concept of Measurement and Verification (M&V) of a new building continues to become more important because efficient design alone is often not sufficient to deliver an efficient building. Simulation models that are calibrated to measured data can be used to evaluate the energy performance of new buildings if they are compared to energy baselines such as similar buildings, energy codes, and design standards. Unfortunately, there is a lack of detailed M&V methods and analysis methods to measure energy savings from new buildings that would have hypothetical energy baselines. Therefore, this study developed and demonstrated several new methodologies for evaluating the energy performance of new commercial buildings using a case-study building in Austin, Texas. First, three new M&V methods were developed to enhance the previous generic M&V framework for new buildings, including: (1) The development of a method to synthesize weather-normalized cooling energy use from a correlation of Motor Control Center (MCC) electricity use when chilled water use is unavailable, (2) The development of an improved method to analyze measured solar transmittance against incidence angle for sample glazing using different solar sensor types, including Eppley PSP and Li-Cor sensors, and (3) The development of an improved method to analyze chiller efficiency and operation at part-load conditions. Second, three new calibration methods were developed and analyzed, including: (1) A new percentile analysis added to the previous signature method for use with a DOE-2 calibration, (2) A new analysis to account for undocumented exhaust air in DOE-2 calibration, and (3) An analysis of the impact of synthesized direct normal solar radiation using the Erbs correlation on DOE-2 simulation. Third, an analysis of the actual energy savings compared to three different energy baselines was performed, including: (1) Energy Use Index (EUI) comparisons with sub-metered data, (2) New comparisons against Standards 90.1-1989 and 90.1-2001, and (3) A new evaluation of the performance of selected Energy Conservation Design Measures (ECDMs). Finally, potential energy savings were also simulated from selected improvements, including: minimum supply air flow, undocumented exhaust air, and daylighting.
Tian, Peng; Yang, David; Mandrell, Robert
2011-06-30
Human norovirus (NoV) outbreaks are major food safety concerns. The virus has to be concentrated from food samples in order to be detected. PEG precipitation is the most common method to recover the virus. Recently, histo-blood group antigens (HBGA) have been recognized as receptors for human NoV and have been utilized as an alternative method to concentrate human NoV from samples up to 40 mL in volume. However, to wash the virus off contaminated fresh food samples, at least 250 mL of wash volume is required. A recirculating affinity magnetic separation system (RCAMS) has been tried by others to concentrate human NoV from large-volume samples and failed to yield consistent results with the standard procedure of 30 min of recirculation at the default flow rate. Our work here demonstrates that proper recirculation time and flow rate are key factors for success in using the RCAMS. The bead recovery rate was increased from 28% to 47%, 67% and 90% when recirculation times were extended from 30 min to 60 min, 120 min and 180 min, respectively. The kinetics study suggests that at least 120 min of recirculation is required to obtain a good recovery of NoV. In addition, different binding and elution conditions were compared for releasing NoV from inoculated lettuce. Phosphate-buffered saline (PBS) and water result in similar efficacy for virus release, but the released virus does not bind to the RCAMS effectively unless the pH is adjusted to acidic. Either a citrate-buffered saline (CBS) wash, or a water wash followed by CBS adjustment, resulted in an enhanced recovery of virus. We also demonstrated that the standard curve generated from viral RNA extracted from serially diluted virus samples is more accurate for quantitative analysis than standard curves generated from serially diluted plasmid DNA or transcribed-RNA templates, both of which tend to overestimate the concentration. The efficacy of recovery of NoV from produce using RCAMS was directly compared with that of the PEG method using NoV-inoculated lettuce: 40, 4, 0.4, and 0.04 RTU could be detected by both methods. At 0.004 RTU, NoV was detectable in all three samples concentrated by the RCAMS method, while none could be detected by the PEG precipitation method. RCAMS is a simple and rapid method that is more sensitive than conventional methods for recovery of NoV from food samples with a large sample size. In addition, the RTU value detected through RCAMS-processed samples is more biologically relevant. Published by Elsevier B.V.
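The quantification approach the abstract recommends, a standard curve built from Ct values of serially diluted virus, can be sketched as a simple log-linear fit and inversion. The Ct values below are hypothetical, not the study's data.

```python
# Fit Ct = m * log10(RTU) + b on a dilution series, then invert for unknowns.
import numpy as np

rtu = np.array([40, 4, 0.4, 0.04])           # serial 10-fold dilutions
ct = np.array([24.1, 27.6, 31.0, 34.5])      # hypothetical measured Ct values

m, b = np.polyfit(np.log10(rtu), ct, 1)      # slope ~ -3.32 at 100% efficiency
efficiency = 10 ** (-1.0 / m) - 1.0          # PCR amplification efficiency

def rtu_from_ct(ct_unknown):
    """Interpolate an unknown sample back onto the standard curve."""
    return 10 ** ((ct_unknown - b) / m)

print(f"slope={m:.2f}, efficiency={efficiency:.1%}, "
      f"unknown={rtu_from_ct(29.0):.2f} RTU")
```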
NASA Astrophysics Data System (ADS)
Verma, Shivcharan; Mohanty, Biraja P.; Singh, Karn P.; Kumar, Ashok
2018-02-01
The proton beam facility at the variable energy cyclotron (VEC), Panjab University, Chandigarh, India, is being used for Particle Induced X-ray Emission (PIXE) analysis of different environmental, biological and industrial samples. The PIXE method, however, does not provide any information on low-Z elements like carbon, nitrogen, oxygen and fluorine. As a result of the increased need for rapid and multi-elemental analysis of biological and environmental samples, the PIXE facility was upgraded and standardized to facilitate simultaneous measurements using PIXE and Proton Elastic Scattering Analysis (PESA). Both PIXE and PESA techniques were calibrated and standardized individually. Finally, the setup was tested by carrying out simultaneous PIXE and PESA measurements using a 2 mm diameter proton beam of 2.7 MeV on a few multilayered thin samples. The results obtained show excellent agreement between PIXE and PESA measurements and confirm adequate sensitivity and precision of the experimental setup.
Measurement of helium isotopes in soil gas as an indicator of tritium groundwater contamination.
Olsen, Khris B; Dresel, P Evan; Evans, John C; McMahon, William J; Poreda, Robert
2006-05-01
The focus of this study was to define the shape and extent of tritium groundwater contamination emanating from a legacy burial ground and to identify vadose zone sources of tritium using helium isotopes (3He and 4He) in soil gas. Helium isotopes were measured in soil-gas samples collected from 70 sampling points around the perimeter and downgradient of a burial ground that contains buried radioactive solid waste. The soil-gas samples were analyzed for helium isotopes using rare gas mass spectrometry. The 3He/4He ratios, reported as normalized to the air ratio (RA), were used to locate the tritium groundwater plume emanating from the burial ground. The excess 3He suggested that the general location of the tritium source is within the burial ground. This study clearly demonstrated the efficacy of the 3He method for application to similar sites elsewhere within the DOE weapons complex.
NASA Astrophysics Data System (ADS)
Xu, Xiaochun; Wang, Yu; Xiang, Jialing; Liu, Jonathan T. C.; Tichauer, Kenneth M.
2017-06-01
Conventional molecular assessment of tissue through histology, if adapted to fresh thicker samples, has the potential to enhance cancer detection in surgical margins and monitoring of 3D cell culture molecular environments. However, in thicker samples, substantial background staining is common despite repeated rinsing, which can significantly reduce image contrast. Recently, ‘paired-agent’ methods—which employ co-administration of a control (untargeted) imaging agent—have been applied to thick-sample staining applications to account for background staining. To date, these methods have included (1) a simple ratiometric method that is relatively insensitive to noise in the data but has accuracy that is dependent on the staining protocol and the characteristics of the sample; and (2) a complex paired-agent kinetic modeling method that is more accurate but is more noise-sensitive and requires a precise serial rinsing protocol. Here, a new simplified mathematical model—the rinsing paired-agent model (RPAM)—is derived and tested that offers a good balance between the previous models, is adaptable to arbitrary rinsing-imaging protocols, and does not require calibration of the imaging system. RPAM is evaluated against previous models and is validated by comparison to estimated concentrations of targeted biomarkers on the surface of 3D cell culture and tumor xenograft models. This work supports the use of RPAM as a preferable model to quantitatively analyze targeted biomarker concentrations in topically stained thick tissues, as it was found to match the accuracy of the complex paired-agent kinetic model while retaining the low noise-sensitivity characteristics of the ratiometric method.
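The ratiometric method mentioned above reduces, in its simplest form, to treating the control-agent image as a map of nonspecific uptake and reading specific binding from the excess in the targeted channel. A minimal sketch with hypothetical image arrays (this is not the published RPAM model, whose contribution is handling arbitrary rinse-imaging sequences):

```python
import numpy as np

rng = np.random.default_rng(0)
targeted = rng.random((64, 64)) * 2.0 + 0.5   # stand-in targeted-agent image
control  = rng.random((64, 64)) * 1.0 + 0.5   # stand-in untargeted (control) image

# Ratiometric method: assumes unbound targeted agent tracks the control agent,
# so the excess ratio approximates specific binding (a binding-potential-like value).
bp = targeted / control - 1.0
bp[bp < 0] = 0.0                              # clip nonphysical negatives
print("median apparent binding:", np.median(bp))
```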
Li, Yong; Ruan, Qiang; Li, Yanli; Ye, Guozhu; Lu, Xin; Lin, Xiaohui; Xu, Guowang
2012-09-14
Non-targeted metabolic profiling is the most widely used method in metabolomics. In this paper, a novel approach was established to transform a non-targeted metabolic profiling method into a pseudo-targeted method using retention-time-locking gas chromatography/mass spectrometry-selected ion monitoring (RTL-GC/MS-SIM). To achieve this transformation, an algorithm was developed based on the automated mass spectral deconvolution and identification system (AMDIS), GC/MS raw data, and a bi-Gaussian chromatographic peak model. The established GC/MS-SIM method was compared with GC/MS full-scan methods (total ion current and extracted ion current, TIC and EIC). For a typical tobacco leaf extract, 93% of components had relative standard deviations (RSDs) of relative peak areas below 20% with the SIM method, versus 88% with EIC and 81% with TIC; 47.3% of components had linear correlation coefficients above 0.99, compared with 5.0% for EIC and 6.2% for TIC. Multivariate analysis showed that pooled quality-control samples clustered more tightly with the developed method than with the GC/MS full-scan methods, indicating better data quality. In an analysis of variance of tobacco samples from three different planting regions, 167 differential components (p<0.05) were screened out by the RTL-GC/MS-SIM method, versus 151 and 131 by the EIC and TIC methods, respectively. The results show that the developed method not only has higher sensitivity, better linearity and better data quality, but also does not require complicated peak alignment among samples. It is especially suitable for screening differential components in metabolic profiling investigations. Copyright © 2012 Elsevier B.V. All rights reserved.
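The bi-Gaussian peak model referenced above assigns separate widths to the leading and trailing sides of a chromatographic peak, which is what lets an algorithm place SIM windows around tailing peaks. A minimal sketch of that model with assumed parameter values (an illustration, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import curve_fit

def bigaussian(t, height, t_apex, sigma_left, sigma_right):
    """Asymmetric Gaussian: one width before the apex, another after it."""
    sigma = np.where(t < t_apex, sigma_left, sigma_right)
    return height * np.exp(-0.5 * ((t - t_apex) / sigma) ** 2)

t = np.linspace(0, 10, 500)                    # retention-time axis (min)
rng = np.random.default_rng(0)
signal = bigaussian(t, 1.0, 4.0, 0.15, 0.35) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(bigaussian, t, signal, p0=[1.0, 4.0, 0.2, 0.2])
# The fitted apex and side widths would define the SIM acquisition window.
print("apex = %.2f min, left/right widths = %.2f / %.2f min"
      % (popt[1], popt[2], popt[3]))
```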
Jervis, Lori L; Fickenscher, Alexandra; Beals, Janette
2014-04-01
Although elder mistreatment among ethnic minorities is increasingly gaining attention, our empirical knowledge of this phenomenon among American Indians remains quite limited, especially with respect to measurement. The Shielding American Indian Elders (SAIE) Project used a collaborative approach to explore culturally informed measurement of elder mistreatment in two American Indian elder samples (a Northern Plains reservation and a South Central metropolitan area). The project sought to investigate the performance characteristics of the commonly used Hwalek-Sengstock Elder Abuse Screening Test (HS-EAST), as well as to examine the psychometric properties of a new measure developed to capture culturally salient aspects of mistreatment in American Indian contexts--the Native Elder Life Scale (NELS). Using methods and samples comparable to those in the literature, the HS-EAST performed adequately in these Native samples. The NELS also shows promise for use with this population and assesses different aspects of elder mistreatment than does the HS-EAST.
Randomly Sampled-Data Control Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Han, Kuoruey
1990-01-01
The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model stochastic information exchange among decentralized controllers, to name just one example. A practical suboptimal controller is proposed with the desirable property of mean-square stability. The proposed controller is suboptimal in the sense that the control structure is restricted to be linear; given the i.i.d. sampling assumption, this restriction does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite-horizon control problem is formulated as a classical minimization problem. Assuming a solution to the minimization problem exists, the total system is shown to be mean-square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.
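For reference, the fixed-structure finite-horizon problem reduces, in the classical deterministic case, to a backward Riccati recursion. A minimal sketch under assumed dynamics (this is the standard deterministic recursion, not the thesis's random-sampling derivation, where the system matrices are replaced by appropriate expectations):

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed discrete-time dynamics
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                             # state cost
R = np.array([[1.0]])                     # control cost
N = 50                                    # horizon length

P = Q.copy()
gains = []
for _ in range(N):                        # backward Riccati recursion
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                           # gains[k] is the feedback gain at stage k
print("stage-0 gain:", gains[0])
```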
Determination of the molar mass of argon from high-precision acoustic comparisons
NASA Astrophysics Data System (ADS)
Feng, X. J.; Zhang, J. T.; Moldover, M. R.; Yang, I.; Plimmer, M. D.; Lin, H.
2017-06-01
This article describes the accurate determination of the molar mass M of a sample of argon gas used for the determination of the Boltzmann constant. The method of one of the authors (Moldover et al 1988 J. Res. Natl. Bur. Stand. 93 85-144) uses the ratio of the squared speed of sound in the gas under analysis and in a reference sample of known molar mass. A sample of argon isotopically enriched in 40Ar, whose unreactive impurities had been independently measured, was used as the reference. The results for three gas samples are in good agreement with determinations by gravimetric mass spectrometry.
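The working relation is that, in the zero-pressure (ideal-gas) limit, u² = γRT/M; at a common temperature and equal γ, the unknown molar mass follows from the squared-speed ratio. A minimal numerical sketch with assumed squared speeds (the reference molar mass shown is that of pure 40Ar; the paper's reference was an enriched sample, so this is illustrative only):

```python
# M = M_ref * (u_ref / u)^2 at the same temperature, same gamma (ideal-gas limit).
M_ref = 39.962383e-3     # kg/mol, molar mass of pure 40Ar (reference assumption)

u_ref_sq = 94756.2       # m^2 s^-2, assumed squared speed of sound, reference gas
u_sq     = 94741.8       # m^2 s^-2, assumed squared speed of sound, sample gas

M_sample = M_ref * u_ref_sq / u_sq
print(f"M(sample) = {M_sample * 1e3:.6f} g/mol")
```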
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martino, C
The Department of Energy (DOE) recognizes the need for characterization of High-Level Waste (HLW) saltcake in the Savannah River Site (SRS) F- and H-area tank farms to support upcoming salt processing activities. As part of the enhanced characterization efforts, Tank 25F will be sampled and the samples analyzed at the Savannah River National Laboratory (SRNL). This Task Technical and Quality Assurance Plan documents the planned activities for the physical, chemical, and radiological analysis of the Tank 25F saltcake core samples. This plan does not cover other characterization activities that do not involve core sample analysis, and it does not address issues regarding sampling or sample transportation. The objectives of this report are to: (1) provide information useful in projecting the composition of dissolved salt batches by quantifying important components (such as actinides, 137Cs, and 90Sr) on a per-batch basis, which will assist in process selection for the treatment of salt batches and provide data for validating dissolution modeling; (2) determine the properties of the heel resulting from dissolution of the bulk saltcake, noting any tendencies toward post-mixing precipitation; and (3) provide a basis for determining the number of samples needed for characterizing future saltcake tanks, and gather information useful for performing characterization in a more cost- and time-effective manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, David A.
2013-12-12
The U.S. Department of Energy (DOE) Oak Ridge Office of Environmental Management (OR-EM) requested that Oak Ridge Associated Universities (ORAU), working under the Oak Ridge Institute for Science and Education (ORISE) contract, provide technical and independent waste management planning support using American Recovery and Reinvestment Act (ARRA) funds. Specifically, DOE OR-EM requested that ORAU plan and implement a sampling and analysis campaign targeting potential removable radiological contamination that may be transferrable to future personal protective equipment (PPE) and contamination control materials (collectively referred to as PPE throughout the remainder of this report) used in certain URS|CH2M Oak Ridge, LLC (UCOR) Surveillance and Maintenance (S&M) Project facilities at the Oak Ridge National Laboratory (ORNL). Routine surveys in Bldgs. 3001, 3005, 3010, 3028, 3029, 3038, 3042, 3517, 4507, and 7500 continuously generate PPE. The waste comprises Tyvek coveralls, gloves, booties, Herculite, and other materials used to prevent worker exposure or the spread of contamination during routine maintenance and monitoring activities. This report describes the effort to collect and quantify removable activity that may be used by the ORNL S&M Project team to develop radiation instrumentation "screening criteria." Material potentially containing removable activity was collected on smears, including both masselin large-area wipes (LAWs) and standard paper smears, and analyzed for site-related constituents (SRCs) in an analytical laboratory. The screening criteria, if approved, may be used to expedite waste disposition of relatively clean PPE. The ultimate objectives of this effort were to: 1) determine whether screening criteria can be developed for these facilities, and 2) provide process knowledge information for future site planners. The screening criteria, if calculated, must be formally approved by Federal Facility Agreement parties prior to use for ORNL S&M Project PPE disposal at the Environmental Management Waste Management Facility (EMWMF). ORAU executed the approved sampling and analysis plan (SAP) (DOE 2013) while closely coordinating with ORNL S&M Project personnel and using guidelines outlined in the Waste Handling Plan for Surveillance and Maintenance Activities at the Oak Ridge National Laboratory, DOE/OR/01-2565&D2 (WHP) (DOE 2012). WHP guidelines were followed because the PPE waste targeted by this SAP is consistent with that addressed under the approved Waste Lot (WL) 108.1 profile for disposal at EMWMF; this PPE is a "future waste stream" as defined in the WHP. The SAP presents the sampling strategy and methodology, sample selection guidelines, and analytical guidelines and requirements necessary for characterizing future ORNL S&M Project PPE waste. This report presents a review of the sampling and analysis methods, including data quality objectives (DQOs), required deviations from the original design, a summary of field activities, radiation measurement data, analytical laboratory results, a brief presentation of results, and process knowledge summaries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.
Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher-fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width was recently developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to do so in practice. A method for practically extending the Bernacchia-Pigolotti KDE to multiple dimensions is introduced here. This multidimensional extension is combined with a recently developed computational improvement that makes the method efficient: a 2D KDE on 10^5 samples takes only 1 s on a modern workstation. This fast and objective KDE method, called fastKDE, retains the excellent statistical convergence properties demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
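The conditional-PDF capability mentioned above is simply p(y|x) = p(x, y)/p(x) computed from density estimates. A minimal sketch using SciPy's generic Gaussian KDE as a stand-in (this is not the fastKDE implementation, which chooses the kernel objectively and runs far faster):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 0.8 * x + 0.6 * rng.normal(size=10_000)      # correlated bivariate sample

kde_joint = gaussian_kde(np.vstack([x, y]))       # joint PDF estimate p(x, y)
kde_x = gaussian_kde(x)                           # marginal estimate p(x)

def conditional(y_grid, x0):
    """Estimate p(y | x = x0) = p(x0, y) / p(x0)."""
    pts = np.vstack([np.full_like(y_grid, x0), y_grid])
    return kde_joint(pts) / kde_x(x0)

print(conditional(np.linspace(-2, 2, 5), x0=1.0))
```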
Pressman, Alice R; Avins, Andrew L; Hubbard, Alan; Satariano, William A
2011-07-01
There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien-Fleming and Lan-DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus for the data-generated distribution represented by these trials, we were able to compare the relative performance of these techniques. No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times were dependent upon the choice of spending function and prior distribution. Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for analysis of clinical trials. Though findings from this small sample did not demonstrate either method to consistently outperform the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient. Copyright © 2011 Elsevier Inc. All rights reserved.
Efficient statistical tests to compare Youden index: accounting for contingency correlation.
Chen, Fangyao; Xue, Yuqiang; Tan, Ming T; Chen, Pingyan
2015-04-30
Youden index is widely used in studies evaluating the accuracy of diagnostic tests and the performance of predictive, prognostic, or risk models. However, both one-sample and two-independent-sample tests on the Youden index have been derived ignoring the dependence (association) between sensitivity and specificity, potentially yielding misleading findings; moreover, a paired-sample test on the Youden index has been unavailable. This article develops efficient statistical inference procedures for one-sample, independent-sample, and paired-sample tests on the Youden index by accounting for contingency correlation, namely the associations between sensitivity and specificity and between paired samples typically represented in contingency tables. For the one-sample and two-independent-sample tests, the variances are estimated by the Delta method, and the statistical inference is based on central limit theory, then verified by bootstrap estimates. For the paired-sample test, we show that the estimated covariance of the two sensitivities and specificities can be represented as a function of the kappa statistic, so the test can be readily carried out. We then show the remarkable accuracy of the estimated variance using a constrained optimization approach. Simulation is performed to evaluate the statistical properties of the derived tests. The proposed approaches yield more stable type I errors at the nominal level and substantially higher power (efficiency) than the original Youden approach. The simple explicit large-sample solution therefore performs very well. Because the asymptotic and exact bootstrap computations are readily implemented with common software such as R, the method is broadly applicable to the evaluation of diagnostic tests and model performance. Copyright © 2015 John Wiley & Sons, Ltd.
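For orientation, the quantity under test is J = Se + Sp − 1. A minimal sketch of a one-sample z-test using the independence-form Delta-method variance, with hypothetical counts (the paper's contribution is precisely the extra covariance terms this simple form omits):

```python
import numpy as np
from scipy.stats import norm

tp, fn = 85, 15          # diseased group (hypothetical counts)
tn, fp = 90, 10          # healthy group (hypothetical counts)

se, sp = tp / (tp + fn), tn / (tn + fp)
J = se + sp - 1

# Delta-method variance, ignoring the Se/Sp association (independence form).
var_J = se * (1 - se) / (tp + fn) + sp * (1 - sp) / (tn + fp)
z = (J - 0.5) / np.sqrt(var_J)   # test H0: J = 0.5 (an assumed null value)
p = 2 * norm.sf(abs(z))
print(f"J = {J:.3f}, z = {z:.2f}, p = {p:.4f}")
```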
Hautala, E L; Rekilä, R; Tarhanen, J; Ruuskanen, J
1995-01-01
A vertical snow-sampling method, in which a sample was taken throughout the snowpack, was used to estimate the pollutant load on a roadside where the average daily traffic density was about 9100 motor vehicles. The snow samples were collected at two sites, forest and open field, at distances of 10 and 30 m from the road. The concentrations of inorganic anions (Cl(-), NO(-)(3), SO(2-)(4)), total N, polycyclic aromatic hydrocarbons (PAHs) and polychlorinated phenols (PCPhs) were analysed. The results suggest that roadsides receive a deposition caused by road traffic emissions and winter maintenance that exceeds normal background deposition. Inorganic anions, mainly in particle form and originating from winter maintenance, are deposited near the road. PAHs with low molecular weight (≤252) are mainly in gaseous form and are deposited further from the road; some PCPhs show similar behaviour. The dispersion differs between the forest site and the open-field site. Our results also indicate that the vertical snow-sampling method can be used to study the pollutant load from traffic near roads. However, studies should focus on individual PAH or PCPh compounds as markers of highway pollution; in the light of present knowledge, the deposition of mixtures of compounds does not provide sufficient information.
kWIP: The k-mer weighted inner product, a de novo estimator of genetic similarity.
Murray, Kevin D; Webers, Christfried; Ong, Cheng Soon; Borevitz, Justin; Warthmann, Norman
2017-09-01
Modern genomics techniques generate overwhelming quantities of data. Extracting population genetic variation demands computationally efficient methods to determine genetic relatedness between individuals (or "samples") in an unbiased manner, preferably de novo. Rapid estimation of genetic relatedness directly from sequencing data has the potential to overcome reference genome bias and to verify that individuals belong to the correct genetic lineage before conclusions are drawn from mislabelled or misidentified samples. We present the k-mer Weighted Inner Product (kWIP), an assembly- and alignment-free estimator of genetic similarity. kWIP combines a probabilistic data structure with a novel metric, the weighted inner product (WIP), to efficiently calculate pairwise similarity between sequencing runs from their k-mer counts. It produces a distance matrix, which can then be further analysed and visualised. Our method does not require prior knowledge of the underlying genomes; applications include establishing sample identity, detecting sample mix-ups, and revealing non-obvious genomic variation and population structure. We show that kWIP can reconstruct the true relatedness between samples from simulated populations. By re-analysing several published datasets we show that our results are consistent with marker-based analyses. kWIP is written in C++, licensed under the GNU GPL, and is available from https://github.com/kdmurray91/kwip.
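A minimal sketch of the weighted-inner-product idea: weight each k-mer by the Shannon entropy of its occurrence across samples (k-mers present in all or no samples are uninformative and get weight near zero), then compare samples by a weighted cosine-like score. This is an illustration on toy counts, not the C++ kWIP implementation:

```python
import numpy as np

counts = np.array([            # rows = samples, cols = k-mers (toy data)
    [5, 0, 3, 1],
    [4, 0, 2, 0],
    [0, 7, 0, 2],
])

freq = (counts > 0).mean(axis=0)             # presence frequency per k-mer
with np.errstate(divide="ignore", invalid="ignore"):
    h = -(freq * np.log2(freq) + (1 - freq) * np.log2(1 - freq))
weights = np.nan_to_num(h)                   # Shannon-entropy weights

def wip(a, b, w):
    return np.sum(w * a * b)                 # weighted inner product

def similarity(a, b, w):
    return wip(a, b, w) / np.sqrt(wip(a, a, w) * wip(b, b, w))

print(similarity(counts[0], counts[1], weights))
```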
Nonlinear vibrational microscopy
Holtom, Gary R.; Xie, Xiaoliang Sunney; Zumbusch, Andreas
2000-01-01
The present invention is a method and apparatus for microscopic vibrational imaging using coherent anti-Stokes Raman scattering (CARS) or sum frequency generation (SFG). Microscopic imaging with vibrational spectroscopic contrast is achieved by generating signals in a nonlinear optical process and spatially resolved detection of the signals. The spatial resolution is attained by minimizing the spot size of the optical interrogation beams on the sample. Minimizing the spot size relies upon: (a) directing at least two substantially co-axial laser beams (interrogation beams) through a microscope objective, providing a focal spot on the sample; (b) collecting a signal beam together with a residual beam from the co-axial laser beams after passing through the sample; (c) removing the residual beam; and (d) detecting the signal beam, thereby creating a pixel. The method has significantly higher spatial resolution than IR microscopy and higher sensitivity than spontaneous Raman microscopy at much lower average excitation powers. CARS and SFG microscopy does not rely on the presence of fluorophores, but retains the resolution and three-dimensional sectioning capability of confocal and two-photon fluorescence microscopy. Complementary to these techniques, CARS and SFG microscopy provides a contrast mechanism based on vibrational spectroscopy. This vibrational contrast mechanism, combined with unprecedentedly high sensitivity at a tolerable laser power level, provides a new approach for microscopic investigation of chemical and biological samples.
WATER QUALITY MONITORING OF PHARMACEUTICALS ...
The demand on freshwater to sustain the needs of the growing population is of worldwide concern. Often this water is used, treated, and released for reuse by other communities. The anthropogenic contaminants present in this water may include complex mixtures of pesticides, prescription and nonprescription drugs, personal care and common consumer products, industrial and domestic-use materials, and degradation products of these compounds. Although the fate of these pharmaceuticals and personal care products (PPCPs) in wastewater treatment facilities is largely unknown, the limited data that do exist suggest that many of these chemicals survive treatment and that others are returned to their biologically active form via deconjugation of metabolites. Traditional water sampling methods (i.e., grab or composite samples) often require the concentration of large amounts of water to detect trace levels of PPCPs. A passive sampler, the polar organic chemical integrative sampler (POCIS), has been developed to integratively concentrate trace levels of these chemicals, determine time-weighted average water concentrations, and provide a method of estimating the potential exposure of aquatic organisms to these complex mixtures of waterborne contaminants. The POCIS (U.S. Patent number 6,478,961) consists of a hydrophilic microporous membrane, acting as a semipermeable barrier, enveloping various solid-phase sorbents that retain the sampled chemicals. Sampling rates f
Optimal time points sampling in pathway modelling.
Hu, Shiyan
2004-01-01
Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation, but few studies give consideration to optimal sampling-time selection for parameter estimation. Time-course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually unevenly distributed and chosen heuristically. In this paper, we investigate an approach for selecting time points in an optimal way so as to minimize the variance of the parameter estimates. We first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values, nor from getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
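A minimal sketch of the underlying criterion: choose sampling times that maximize the determinant of the Fisher information matrix (a D-optimal design, which minimizes the volume of the parameter confidence region). The model and parameter values are assumptions for illustration; the paper solves the resulting problem with a quantum-inspired evolutionary algorithm rather than the brute-force search used here:

```python
import itertools
import numpy as np

def sensitivities(t, a, k):
    """d y / d(theta) for the toy model y(t) = a * exp(-k t)."""
    return np.column_stack([np.exp(-k * t), -a * t * np.exp(-k * t)])

a, k = 2.0, 0.5                        # nominal parameter values (assumed)
candidates = np.linspace(0.2, 10, 25)  # candidate sampling times

def d_criterion(ts):
    S = sensitivities(np.array(ts), a, k)
    return np.linalg.det(S.T @ S)      # determinant of the Fisher information

best = max(itertools.combinations(candidates, 4), key=d_criterion)
print("D-optimal times:", np.round(best, 2))
```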
contamDE: differential expression analysis of RNA-seq data for contaminated tumor samples.
Shen, Qi; Hu, Jiyuan; Jiang, Ning; Hu, Xiaohua; Luo, Zewei; Zhang, Hong
2016-03-01
Accurate detection of differentially expressed (DE) genes between tumor and normal samples is a primary approach to cancer-related biomarker identification. Due to the infiltration of surrounding normal cells, expression data derived from tumor samples are always contaminated with normal cells. Ignoring such cellular contamination deflates the power of detecting DE genes and further confounds the biological interpretation of the results. To date, no differential expression analysis approach for RNA-seq data in the literature properly accounts for the contamination of tumor samples. Without appealing to any extra information, we develop a new method, 'contamDE', based on a novel statistical model that associates RNA-seq expression levels with cell types. Simulation studies demonstrate that contamDE can be much more powerful than existing methods that ignore the contamination. In application to two cancer studies, contamDE uniquely found several potential therapy and prognostic biomarkers of prostate cancer and non-small cell lung cancer. An R package, contamDE, is freely available at http://homepage.fudan.edu.cn/zhangh/softwares/. Contact: zhanghfd@fudan.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Schaffer, Michael; Cheng, Chen-Chih; Chao, Oscar; Hill, Virginia; Matsui, Paul
2016-03-01
An LC/MS/MS method to identify and quantitate in hair the minor metabolites of cocaine (meta-, para-, and ortho-hydroxycocaine) was developed and validated. Analysis was performed on a triple-quadrupole AB Sciex API 3000 mass spectrometer equipped with an atmospheric pressure ionization source via IonSpray (ESI). For LC, a Series 200 micro binary pump with a Perkin Elmer Model 200 autosampler was used. The limit of detection (LOD) and limit of quantification (LOQ) were 0.02 ng/10 mg hair, with linearity from 0.02 to 10 ng/10 mg hair. Concentrations of the para isomer in extensively washed hair samples were in the range of 1-2% of the cocaine in the sample, while concentrations of the ortho form were considerably lower. The method was used to analyze large numbers of samples from two populations: workplace and criminal justice. In vitro experiments to determine whether deodorants or peroxide-containing cosmetic treatments could produce these metabolites in hair showed that this does not occur with extensively washed hair. The presence of hydroxycocaines, when detected after aggressive washing of the hair samples, provides a valuable additional indicator of ingestion of cocaine rather than mere environmental exposure.
Grasping and fingering (active or haptic touch) in healthy newborns.
Adamson-Macedo, Elvidina Nabuco; Barnes, Christopher R
2004-12-01
The traditional view that the activity of the baby's hands is triggered by a stimulus in an automatic, compulsory, stereotyped way, and the persisting view that fingering does not occur prior to 4 months of age, have led perception researchers to assume that the processing, encoding, and retention of sensory information could not take place through the manual mode. This study investigates whether fingering and different types of grasping occur before 3 months of age and can be modulated by the surface texture of three objects. Using naturalistic observations, this small-sample developmental study applied an AB experimental design to achieve the aims above. Babies were videotaped every week for 12 weeks. Three special manual stimuli were developed for this study. A focal sampling method with either zero-sampling or instantaneous-sampling recording rules was used to analyse the data with the Observer Video Pro. Each session, comprising a baseline and three experimental conditions, lasted four minutes. Fingering, or 'proto-fingering' as it is termed in this article, emerges as early as the first week of postnatal life; the texture of a handled object modulates both 'proto-palm' and hand-grasp behaviour of healthy newborns. The results suggest that texture also modulates 'proto-fingering'; they challenge the persisting assumption that fingering does not occur before four months of age and further validate the phrase 'neo-haptic touch' to describe hands-on exploration by the newborn. The author suggests that some 'mental representation' of the stimulus is present during 'neo-haptic' recognition of the objects, in accordance with a constructivist approach to (touch) perception.
Ruiz-Jiménez, J; Priego-Capote, F; Luque de Castro, M D
2006-08-01
A study of the feasibility of Fourier transform medium infrared spectroscopy (FT-midIR) for the analytical determination of fatty acid profiles, including trans fatty acids, is presented. The training and validation sets used to develop the FT-midIR general equations, comprising 75% (102 samples) and 25% (36 samples) of the samples remaining after removal of spectral outliers, were built from 140 commercial and home-made bakery products. The analyte concentrations in the samples used for this study are within the typical range found in these kinds of products. The two sets were independent; the validation set was used only for testing the equations. The criterion used for selecting the validation set was to take the samples with the highest number of neighbours and the greatest separation between them (H<0.6). Partial least squares regression and cross-validation were used for multivariate calibration. The FT-midIR method does not require post-extraction manipulation and gives information about the fatty acid profile in two minutes. The 14:0, 16:0, 18:0, 18:1 and 18:2 fatty acids can be determined with excellent precision, and other fatty acids with good precision, according to the Shenk criteria (R^2 >= 0.90, SEP = 1-1.5 SEL and R^2 = 0.70-0.89, SEP = 2-3 SEL, respectively). The results obtained with the proposed method were compared with those provided by the conventional method based on GC-MS. At the 95% significance level, the differences between the values obtained for the different fatty acids were within experimental error.
Effects of Bacterial Inactivation Methods on Downstream Proteomic Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Andy; Merkley, Eric D.; Clowers, Brian H.
2015-05-01
Inactivation of pathogenic microbial samples is often necessary for the protection of researchers and to comply with local and federal regulations. By its nature, biological inactivation causes changes to microbial samples, potentially affecting observed experimental results. While inactivation-induced damage to materials such as DNA has been evaluated, the effect of various inactivation strategies on proteomic data has, to our knowledge, not been discussed. To this end, we inactivated samples of Yersinia pestis and Escherichia coli by autoclave, ethanol, or irradiation treatment to determine how inactivation changes liquid chromatography tandem mass spectrometry data quality as well as the apparent protein content of cells. Proteomic datasets obtained from aliquots of samples inactivated by different methods were highly similar, with Pearson correlation coefficients ranging from 0.822 to 0.985 and 0.816 to 0.985 for E. coli and Y. pestis, respectively, suggesting that inactivation had only slight impact on the set of proteins identified. In addition, spectral quality metrics such as the distributions of various database search algorithm scores remained constant across inactivation methods, indicating that inactivation does not appreciably degrade spectral quality. Though the overall changes resulting from inactivation were small, there were detectable trends. For example, one-sided Fisher exact tests determined that periplasmic proteins decrease in observed abundance after sample inactivation by autoclaving (α = 1.71x10^-2 for E. coli, α = 4.97x10^-4 for Y. pestis) and irradiation (α = 9.43x10^-7 for E. coli, α = 1.21x10^-5 for Y. pestis) when compared to controls that were not inactivated. Based on our data, if sample inactivation is necessary, we recommend ethanol treatment, with secondary preference given to irradiation.
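A minimal sketch of the localization test described above: a one-sided Fisher exact test asking whether periplasmic proteins are under-represented among proteins still observed after a treatment. The 2x2 counts here are hypothetical, not the study's data:

```python
from scipy.stats import fisher_exact

#                     periplasmic  non-periplasmic
observed_after  = [40,          900]   # proteins observed post-treatment
lost_after      = [25,          110]   # proteins seen only in the control

odds, p = fisher_exact([observed_after, lost_after], alternative="less")
print(f"odds ratio = {odds:.2f}, one-sided p = {p:.2e}")
```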
Pardo, O; Yusà, V; Coscollà, C; León, N; Pastor, A
2007-07-01
A selective and sensitive procedure has been developed and validated for the determination of acrylamide in difficult matrices such as coffee and chocolate. The proposed method includes pressurised fluid extraction (PFE) with acetonitrile, florisil clean-up inside the PFE extraction cell, and detection by liquid chromatography (LC) coupled to positive-mode atmospheric pressure ionisation tandem mass spectrometry (APCI-MS-MS). A comparison of ionisation sources (atmospheric pressure chemical ionisation (APCI), atmospheric pressure photoionisation (APPI), and the combined APCI/APPI) and of clean-up procedures was carried out to improve the analytical signal. The main parameters affecting the performance of the different ionisation sources were first optimised using statistical design of experiments (DOE). PFE parameters were also optimised by DOE. For quantitation, an isotope dilution approach was used. The limit of quantification (LOQ) of the method was 1 µg/kg for coffee and 0.6 µg/kg for chocolate. Recoveries ranged between 81-105% in coffee and 87-102% in chocolate. Accuracy was evaluated using the coffee reference test material FAPAS T3008. Using the optimised method, 20 coffee and 15 chocolate samples collected from Valencian (Spain) supermarkets were investigated for acrylamide, yielding median levels of 146 µg/kg in coffee and 102 µg/kg in chocolate.
Baltussen, E; Snijders, H; Janssen, H G; Sandra, P; Cramers, C A
1998-04-10
A recently developed method for the extraction of organic micropollutants from aqueous samples, based on sorptive enrichment in columns packed with 100% polydimethylsiloxane (PDMS) particles, was coupled on-line with HPLC analysis. The sorptive enrichment procedure, originally developed for relatively nonpolar analytes, was used to preconcentrate polar phenylurea herbicides from aqueous samples. PDMS extraction columns of 5, 10 and 25 cm were used to extract the herbicides from distilled, tap and river water samples. A model that allows prediction of retention and breakthrough volumes is presented. Despite the essentially apolar nature of the PDMS material, it is possible to concentrate sample volumes up to 10 ml on PDMS cartridges without losses of the most polar analyte under investigation, fenuron; for less polar analytes, significantly larger sample volumes can be applied. Since standard UV detection does not provide adequate selectivity for river water samples, an electrospray (ES)-MS instrument was used to determine phenylurea herbicides in a water sample from the river Dommel. Methoxuron was present at a level of 80 ng/l. The detection limit of the current set-up, using 10 ml water samples and ES-MS detection, is 10 ng/l in river water samples. Strategies for further improvement of the detection limits are identified.
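A minimal sketch of the kind of retention/breakthrough prediction such a model provides, using the standard chromatographic relation V_R = V_0(1 + K/β), where K is the PDMS-water partition coefficient and β the phase ratio of the extraction bed. All numbers below are assumptions for illustration, not values from the paper, and the 50% breakthrough margin is an assumed convention:

```python
V0 = 0.25    # mL, void (aqueous) volume of the packed column (assumed)
K = 35.0     # PDMS-water partition coefficient of the analyte (assumed)
beta = 0.8   # phase ratio V_aqueous / V_pdms of the bed (assumed)

V_R = V0 * (1 + K / beta)   # chromatographic retention volume
V_B = 0.5 * V_R             # crude safety margin against breakthrough (assumed)
print(f"retention volume ~ {V_R:.1f} mL; sample at most ~ {V_B:.1f} mL")
```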
Gill, Christina; Blow, Frances; Darby, Alistair C.
2016-01-01
Background: Recent studies of the vaginal microbiota have employed molecular techniques such as 16S rRNA gene sequencing to describe the bacterial community as a whole. These techniques require the lysis of bacterial cells to release DNA before purification and PCR amplification of the 16S rRNA gene. Currently, methods for the lysis of bacterial cells are not standardised, and there is potential for introducing bias into the results if some bacterial species are lysed less efficiently than others. This study aimed to compare the results of vaginal microbiota profiling using four different pretreatment methods for the lysis of bacterial samples (30 min of lysis with lysozyme; 16 hours of lysis with lysozyme; 60 min of lysis with a mixture of lysozyme, mutanolysin and lysostaphin; and 30 min of lysis with lysozyme followed by bead beating) prior to chemical and enzyme-based DNA extraction with a commercial kit. Results: After extraction, DNA yield did not differ significantly between methods, with the exception of lysis with lysozyme combined with bead beating, which produced significantly lower yields than lysis with the enzyme cocktail or 30 min of lysis with lysozyme only. However, this did not result in a statistically significant difference in the observed alpha diversity of samples. The beta diversity (Bray-Curtis dissimilarity) between different lysis methods was statistically significantly different, but this difference was small compared to differences between samples and did not affect the grouping of samples with similar vaginal bacterial community structure by hierarchical clustering. Conclusions: An understanding of how laboratory methods affect the results of microbiota studies is vital in order to interpret results accurately and make valid comparisons between studies. Our results indicate that the choice of lysis method does not prevent the detection of effects relating to the type of vaginal bacterial community, one of the main outcome measures of epidemiological studies. However, we recommend that the same method be used on all samples within a particular study. PMID:27643503
Jockusch, Elizabeth L; Martínez-Solano, Iñigo; Timpe, Elizabeth K
2015-01-01
Species tree methods are now widely used to infer the relationships among species from multilocus data sets. Many methods have been developed, which differ in whether gene and species trees are estimated simultaneously or sequentially, and in how gene trees are used to infer the species tree. While these methods perform well on simulated data, less is known about what impacts their performance on empirical data. We used a data set including five nuclear genes and one mitochondrial gene for 22 species of Batrachoseps to compare the effects of method of analysis, within-species sampling and gene sampling on species tree inferences. For this data set, the choice of inference method had the largest effect on the species tree topology. Exclusion of individual loci had large effects in *BEAST and STEM, but not in MP-EST. Different loci carried the greatest leverage in these different methods, showing that the causes of their disproportionate effects differ. Even though substantial information was present in the nuclear loci, the mitochondrial gene dominated the *BEAST species tree. This leverage is inherent to the mtDNA locus and results from its high variation and lower assumed ploidy. This mtDNA leverage may be problematic when mtDNA has undergone introgression, as is likely in this data set. By contrast, the leverage of RAG1 in STEM analyses does not reflect properties inherent to the locus, but rather results from a gene tree that is strongly discordant with all others, and is best explained by introgression between distantly related species. Within-species sampling was also important, especially in *BEAST analyses, as shown by differences in tree topology across 100 subsampled data sets. Despite the sensitivity of the species tree methods to multiple factors, five species groups, the relationships among these, and some relationships within them, are generally consistently resolved for Batrachoseps. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Soil Sampling Techniques For Alabama Grain Fields
NASA Technical Reports Server (NTRS)
Thompson, A. N.; Shaw, J. N.; Mask, P. L.; Touchton, J. T.; Rickman, D.
2003-01-01
Characterizing the spatial variability of nutrients facilitates precision soil sampling. Questions exist regarding the best technique for directed soil sampling based on a priori knowledge of soil and crop patterns. The objective of this study was to evaluate zone delineation techniques for Alabama grain fields to determine which method best minimized soil test variability. Site one (25.8 ha) and site three (20.0 ha) were located in the Tennessee Valley region, and site two (24.2 ha) was located in the Coastal Plain region of Alabama. Tennessee Valley soils ranged from well-drained Rhodic and Typic Paleudults to somewhat poorly drained Aquic Paleudults and Fluventic Dystrudepts. Coastal Plain soils ranged from coarse-loamy Rhodic Kandiudults to loamy Arenic Kandiudults. Soils were sampled by grid soil sampling methods (grid sizes of 0.40 ha and 1 ha) consisting of: 1) twenty composited cores collected randomly throughout each grid (grid-cell sampling) and 2) six composited cores collected randomly from a 3x3 m area at the center of each grid (grid-point sampling). Zones were established from 1) an Order 1 soil survey, 2) corn (Zea mays L.) yield maps, and 3) airborne remote sensing images. All soil properties were moderately to strongly spatially dependent as per semivariogram analyses. Differences in grid-point and grid-cell soil test values suggested grid-point sampling does not accurately represent grid values. Zones created from the soil survey, yield data, and remote sensing images displayed lower coefficients of variation (%CV) for soil test values than overall field values, suggesting these techniques group soil test variability. However, few differences were observed among the three zone delineation techniques. Results suggest directed sampling using the zone delineation techniques outlined in this paper would result in more efficient soil sampling for these Alabama grain fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, George; Zhang, Xi-Cheng
Concrete and asbestos-containing materials were widely used in U.S. Department of Energy (DOE) building construction in the 1940s and 1950s. Over the years, many of these porous building materials have been contaminated with radioactive sources, on and below the surface. This intractable radioactive-and-hazardous-asbestos mixed-waste stream has created a tremendous challenge for DOE decontamination and decommissioning (D&D) project managers. The current practice for identifying asbestos and characterizing radioactive contamination depth profiles involves bore sampling, which is inefficient, costly, and unsafe. A three-year research project was started on 10/1/98 at Rensselaer with the following ultimate goals: (1) development of novel non-destructive methods for identifying hazardous asbestos in real time and in situ, and (2) development of new algorithms and apparatus for characterizing the radioactive contamination depth profile in real time and in situ.
Water dynamics during the association of HIV capsid proteins studied by all-atom simulations
NASA Astrophysics Data System (ADS)
Yu, Naiyin; Hagan, Michael
2012-02-01
The C-terminal domain of the HIV-1 capsid protein (CA-C) plays an important role in the assembly of the mature capsid. We have used molecular dynamics simulations combined with enhanced sampling methods to study the association of two CA-C proteins in atomistic detail. In this talk we will discuss the dynamics of water during the association process. In particular, we will show that water in the interfacial region does not undergo a liquid-vapor transition (dewetting) during association of wild-type CA-C. However, mutation of some hydrophilic residues does lead to a dewetting transition. We discuss the relationship between the arrangement of hydrophilic and hydrophobic residues and dewetting during protein association. For the HIV capsid protein, the arrangement of hydrophilic residues contributes to maintaining weak interactions, which are crucial for successful assembly.
He, Wanzhong; Kivork, Christine; Machinani, Suman; Morphew, Mary K.; Gail, Anna M.; Tesar, Devin B.; Tiangco, Noreen E.; McIntosh, J. Richard; Bjorkman, Pamela J.
2007-01-01
We have developed methods to locate individual ligands that can be used for electron microscopy studies of dynamic events during endocytosis and subsequent intracellular trafficking. The methods are based on enlargement of 1.4 nm Nanogold attached to an endocytosed ligand. Nanogold, a small label that does not induce misdirection of ligand-receptor complexes, is ideal for labeling ligands endocytosed by live cells, but is too small to be routinely located in cells by electron microscopy. Traditional pre-embedding enhancement protocols to enlarge Nanogold are not compatible with high pressure freezing/freeze substitution fixation (HPF/FSF), the most accurate method to preserve ultrastructure and dynamic events during trafficking. We have developed an improved enhancement procedure for chemically-fixed samples that reduced autonucleation, and a new pre-embedding gold-enlarging technique for HPF/FSF samples that preserved contrast and ultrastructure and can be used for high-resolution tomography. We evaluated our methods using labeled Fc as a ligand for the neonatal Fc receptor. Attachment of Nanogold to Fc did not interfere with receptor binding or uptake, and gold-labeled Fc could be specifically enlarged to allow identification in 2D projections and in tomograms. These methods should be broadly applicable to many endocytosis and transcytosis studies. PMID:17723309
Heat Transfer Measurements on Surfaces with Natural Ice Castings and Modeled Roughness
NASA Technical Reports Server (NTRS)
Breuer, Kenneth S.; Torres, Benjamin E.; Orr, D. J.; Hansman, R. John
1997-01-01
An experimental method is described to measure and compare the convective heat transfer coefficients of natural and simulated ice accretion roughness and to provide a rational means for determining accretion-related enhanced heat transfer coefficients. The natural ice accretion roughness was a sample casting made from accretions at the NASA Lewis Icing Research Tunnel (IRT). One of these castings was modeled using a Spectral Estimation Technique (SET) to produce three roughness-element patterns that simulate the actual accretion. All four samples were tested in a flat-plate boundary layer at angle of attack in a "dry" wind tunnel test. The convective heat transfer coefficient was measured using infrared thermography. It is shown that, despite some problems in the current data set, the method shows considerable promise for determining roughness-induced heat transfer coefficients, and that, in addition to the roughness height and spacing in the flow direction, the concentration and spacing of elements in the spanwise direction are important parameters.
NASA Astrophysics Data System (ADS)
Zhu, Mingyuan; Gao, Xiaoling; Luo, Guangqin; Dai, Bin
2013-03-01
This manuscript reports a convenient method for immobilizing phosphomolybdic acid (HPMo) on polyaniline (PAN)-functionalized carbon supports. The obtained HPMo-PAN-C sample is used as the support to prepare a Pd/HPMo-PAN-C catalyst. The samples are characterized by Fourier transform infrared spectroscopy, transmission electron microscopy and X-ray diffraction analysis. The results suggest that HPMo retains its Keggin structure and that its presence reduces the average particle size of the Pd nanoparticles in the obtained Pd/HPMo-PAN-C catalyst. Electrochemical measurements in 0.5 M HClO4 solution reveal that the Pd/HPMo-PAN-C catalyst has higher catalytic activity for the oxygen reduction reaction than a Pd/C catalyst prepared by a similar procedure. The stability of the Pd/HPMo-PAN-C catalyst is evaluated by multiple-cycle voltammetry; the mass catalytic activity decreases by only 10% after 100 scanning cycles.
Using Peptide-Level Proteomics Data for Detecting Differentially Expressed Proteins.
Suomi, Tomi; Corthals, Garry L; Nevalainen, Olli S; Elo, Laura L
2015-11-06
The expression of proteins can be quantified in high-throughput fashion using different types of mass spectrometers, and label-free methods for determining protein abundance have emerged in recent years. Although expression is initially measured at the peptide level, a common approach is to combine the peptide-level measurements into protein-level values before differential expression analysis. However, this simple combination is prone to inconsistencies between peptides and may lose valuable information. To this end, we introduce here a method for detecting differentially expressed proteins by combining peptide-level expression-change statistics. Using controlled spike-in experiments, we show that averaging peptide-level expression changes yields more accurate lists of differentially expressed proteins than the conventional protein-level approach. This is particularly true when there are only a few replicate samples or the differences between the sample groups are small. The proposed technique is implemented in the Bioconductor package PECA, which can be downloaded from http://www.bioconductor.org.
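A minimal sketch of the core idea: compute an expression-change statistic per peptide and combine the peptides of each protein by averaging. The data are simulated and the combination is simplified; the PECA package itself does this with more care (e.g., modified t-statistics and permutation-based significance):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_peptides = 4
group_a = rng.normal(10.0, 0.5, size=(n_peptides, 3))  # peptide log-intensities, 3 replicates
group_b = rng.normal(10.6, 0.5, size=(n_peptides, 3))  # shifted group (simulated change)

# Peptide-level statistics, then protein-level combination by averaging.
t_stats = [stats.ttest_ind(b, a).statistic for a, b in zip(group_a, group_b)]
protein_score = np.mean(t_stats)
print(f"per-peptide t: {np.round(t_stats, 2)}, protein score: {protein_score:.2f}")
```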
Raman spectroscopic studies on bacteria
NASA Astrophysics Data System (ADS)
Maquelin, Kees; Choo-Smith, Lin-P'ing; Endtz, Hubert P.; Bruining, Hajo A.; Puppels, Gerwin J.
2000-11-01
Routine clinical microbiological identification of pathogenic micro-organisms is largely based on nutritional and biochemical tests. Laboratory results can be presented to a clinician after 2-3 days for most clinically relevant micro-organisms. Most of this time is required to obtain pure cultures and enough biomass for the tests to be performed. In the case of severely ill patients, the unavoidable time delay associated with such identification procedures can be fatal. A novel identification method based on confocal Raman microspectroscopy will be presented. With this method it is possible to obtain Raman spectra directly from microbial microcolonies on the solid culture medium that have developed after only 6 hours of culturing for most commonly encountered organisms. Not only does this technique enable rapid (same-day) identification, it also preserves the sample, allowing it to be double-checked with traditional tests. This, combined with the speed and minimal sample handling, indicates that confocal Raman microspectroscopy has much potential as a powerful new tool in clinical diagnostic microbiology.
Layer Number and Stacking Order Imaging of Few-layer Graphenes by Transmission Electron Microscopy
NASA Astrophysics Data System (ADS)
Ping, Jinglei; Fuhrer, Michael
2012-02-01
A method using transmission electron microscopy (TEM) selected-area electron diffraction (SAED) patterns and dark-field (DF) images is developed to identify graphene layer number and stacking order by comparing intensity ratios of SAED spots with theory. Graphene samples are synthesized by ambient-pressure chemical vapor deposition and then etched by hydrogen at high temperature to produce samples with crystalline stacking but varying layer number on the nanometer scale. Combined DF images from first- and second-order diffraction spots are used to produce images with layer-number and stacking-order contrast at few-nanometer resolution. The method proves accurate enough for quantitative stacking-order identification of graphenes up to at least four layers. This work was partially supported by Science of Precision Multifunctional Nanostructures for Electrical Energy Storage, an Energy Frontier Research Center funded by the U.S. DOE, Office of Science, Office of Basic Energy Sciences under Award Number DESC0001160.
A minimally invasive method for extraction of sturgeon oocytes
Candrl, James S.; Papoulias, Diana M.; Tillitt, Donald E.
2010-01-01
Fishery biologists, hatchery personnel, and caviar fishers routinely extract oocytes from sturgeon (Acipenseridae) to determine the stage of maturation by checking egg quality. Typically, oocytes are removed either by inserting a catheter into the oviduct or by making an incision in the body cavity. Both methods can be time-consuming and stressful to the fish. We describe a device to collect mature oocytes from sturgeons quickly and effectively with minimal stress on the fish. The device is made by creating a needle from stainless steel tubing and connecting it to a syringe with polyvinyl chloride tubing. The device is filled with saline solution or water, the needle is inserted into the abdominal wall, and eggs are extracted from the fish. Using this device, an oocyte sample can be collected in less than 30 s. Such sampling leaves a minute wound that heals quickly and does not require suturing. The extractor device can easily be used in the field or hatchery, reduces fish handling time, and minimizes stress.
Validity of the t-plot method to assess microporosity in hierarchical micro/mesoporous materials.
Galarneau, Anne; Villemot, François; Rodriguez, Jeremy; Fajula, François; Coasne, Benoit
2014-11-11
The t-plot method is a well-known technique for determining the micro- and/or mesoporous volumes and the specific surface area of a sample by comparison with a reference adsorption isotherm of a nonporous material having the same surface chemistry. In this paper, the validity of the t-plot method is discussed for hierarchical porous materials exhibiting both micro- and mesoporosity. Different hierarchical zeolites with MCM-41-type ordered mesoporosity are prepared using pseudomorphic transformation. For comparison, we also consider simple mechanical mixtures of microporous and mesoporous materials. We first show an intrinsic failure of the t-plot method: it does not describe the fact that, for a given surface chemistry and pressure, the thickness of the film adsorbed in micropores or small mesopores (<10σ, σ being the diameter of the adsorbate) increases with decreasing pore size (the curvature effect). We further show that this effect, which arises because the surface area, and hence the free energy, of the curved gas/liquid interface decreases with increasing film thickness, is captured by the simple thermodynamic model of Derjaguin. The consequences of this drawback for the ability of the t-plot method to estimate the micro- and mesoporous volumes of hierarchical samples are then discussed, and an abacus is given to correct the microporous volume underestimated by the t-plot method.
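For context, a minimal sketch of a standard t-plot analysis (the uncorrected procedure whose limits the paper examines, not the authors' correction): convert P/P0 to a statistical film thickness t with the Harkins-Jura equation, fit the linear multilayer region, and read the micropore volume from the intercept. The isotherm points below are hypothetical:

```python
import numpy as np

p_rel = np.array([0.20, 0.25, 0.30, 0.35, 0.40, 0.45])   # relative pressures
v_ads = np.array([150., 156., 162., 167., 172., 177.])   # cm3(STP)/g, assumed N2 uptake

t = np.sqrt(13.99 / (0.034 - np.log10(p_rel)))           # Harkins-Jura thickness, angstrom
slope, intercept = np.polyfit(t, v_ads, 1)               # linear multilayer region

v_micro = intercept * 0.001547   # cm3(STP)/g -> cm3(liquid N2)/g conversion
print(f"micropore volume ~ {v_micro:.3f} cm3/g")
```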
Analysis of spatial distribution of land cover maps accuracy
NASA Astrophysics Data System (ADS)
Khatami, R.; Mountrakis, G.; Stehman, S. V.
2017-12-01
Land cover maps have become one of the most important products of remote sensing science. However, classification errors exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map, and these variations affect the reliability of analyses of those regions. The traditional approach to map accuracy assessment, based on an error matrix, does not capture this spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed that interpolate accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. This research is the first to use the spectral domain as an explanatory feature space for interpolating classification accuracy. Performance was evaluated on 26 test blocks, each 10 km × 10 km, dispersed throughout the United States, using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, the proposed methods improved AUC by 0.15 or more. Evaluation of the four factors demonstrated that: i) interpolations should be done separately for each class rather than grouping all classes together; ii) if an all-classes approach is used, the spectral domain yields substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domains yield similar AUC; iv) for the larger sample size (i.e., a very dense spatial sample) and per-class predictions, the spatial domain yields larger AUC; v) increasing the sample size improves accuracy predictions, with a greater benefit accruing to the spatial domain; and vi) the interpolation function has the smallest effect on AUC.
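A minimal sketch of the core idea follows, assuming a Gaussian interpolation function and an all-classes setup: a binary "classified correctly" indicator at test-sample points is smoothed over a feature space (spatial coordinates here, spectral band values alternatively) and evaluated with AUC. The bandwidth, synthetic data, and helper name are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Nadaraya-Watson (Gaussian-kernel) estimate of P(pixel classified correctly)
# from a test sample, in either a spatial or a spectral feature space.

def predict_accuracy(train_feats, train_correct, query_feats, bandwidth=1.0):
    preds = np.empty(len(query_feats))
    for i, q in enumerate(query_feats):
        d2 = np.sum((train_feats - q) ** 2, axis=1)   # squared distances
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))      # Gaussian weights
        preds[i] = np.dot(w, train_correct) / w.sum() # weighted mean of 0/1
    return preds

rng = np.random.default_rng(0)
feats = rng.uniform(0, 10, size=(200, 2))             # e.g. pixel coordinates
# Synthetic truth: accuracy varies spatially along the first coordinate.
correct = (feats[:, 0] + rng.normal(0, 2, 200) > 5).astype(float)
train, test = slice(0, 150), slice(150, 200)
p = predict_accuracy(feats[train], correct[train], feats[test])
print("AUC:", roc_auc_score(correct[test], p))
```

Swapping the coordinate columns for spectral band values gives the spectral-domain variant; running the routine once per mapped class gives the per-class variant the paper favors.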
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nixon, R.
The deterioration of concrete structures due to chloride-induced reinforcing steel corrosion, such as in elevated concrete floor slabs, columns, and beams in bleach plants, is a constant and growing problem within the pulp and paper industry. In general, the condition analysis methods used for assessing the extent of bleach plant concrete degradation include physical testing of drilled concrete core samples, chloride ion concentration testing, half-cell potential measurements, and physical sounding of concrete surfaces, i.e., chain drag for topside surfaces and hammer sounding for soffit surfaces. While this paper does not promote any vastly different evaluative methods, it does share lessons learned in interpreting the data provided by these typical test methods. It further offers recommendations on improving the use of these evaluation techniques and identifies other test methods that should be considered valuable additions to such evaluations. One of the most common methods used in the past for large-scale bleach plant concrete restoration has been the application of site dry-mixed shotcrete to rebuild the soffits of floor slabs and the faces of columns and beams. More often than not, bulk-mixed dry shotcrete repairs have not been cost-effective because they failed prematurely due to excessive hydration-related shrinkage cracking, insufficient adhesion to the parent concrete substrate, or other problems related to poor durability or construction practice.
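As one concrete example of interpreting the survey data this paper discusses, the sketch below maps copper/copper-sulfate (CSE) half-cell potential readings onto the guidance bands commonly cited from ASTM C876. The helper function and sample readings are illustrative, and, as the abstract itself cautions about interpretation, such readings are best weighed alongside chloride profiles and core test results rather than used alone.

```python
# Interpret half-cell potential readings (mV vs Cu/CuSO4 reference, CSE)
# using the commonly cited ASTM C876 probability bands.

def corrosion_risk_cse(potential_mv: float) -> str:
    """More negative potentials indicate higher corrosion likelihood."""
    if potential_mv > -200:
        return "low probability of active corrosion (<10%)"
    elif potential_mv >= -350:
        return "uncertain: corrosion activity cannot be ruled out"
    else:
        return "high probability of active corrosion (>90%)"

for reading in (-120, -275, -410):
    print(f"{reading} mV CSE -> {corrosion_risk_cse(reading)}")
```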