Sample records for samples analytical problems

  1. Practical solution for control of the pre-analytical phase in decentralized clinical laboratories for meeting the requirements of the medical laboratory accreditation standard DIN EN ISO 15189.

    PubMed

    Vacata, Vladimir; Jahns-Streubel, Gerlinde; Baldus, Mirjana; Wood, William Graham

    2007-01-01

    This report was written in response to the article by Wood published recently in this journal. It describes a practical solution to the problems of controlling the pre-analytical phase in the clinical diagnostic laboratory. As an indicator of quality in the pre-analytical phase of sample processing, a target analyte was chosen which is sensitive to delay in centrifugation and/or analysis. The results of analyses of the samples sent by satellite medical practitioners were compared with those from an on-site hospital laboratory with a controllable, optimized pre-analytical phase. The aim of the comparison was (a) to identify those medical practices whose mean/median sample values deviate significantly from those of the control situation in the hospital laboratory, owing to possible problems in the pre-analytical phase, and (b) to aid these laboratories in rectifying those problems. A Microsoft Excel-based Pre-Analytical Survey tool (PAS tool) has been developed which addresses the above-mentioned problems. It has been tested on serum potassium, which is known to be sensitive to delay and/or irregularities in sample treatment. The PAS tool has been shown to be one possibility for improving the quality of the analyses by identifying the sources of problems within the pre-analytical phase, thus allowing them to be rectified. Additionally, the PAS tool has an educational value and can also be adopted for use in other decentralized laboratories.
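
    The comparison at the core of the PAS tool can be prototyped in a few lines outside of Excel. Below is a minimal sketch, assuming potassium results are collected in a pandas DataFrame with hypothetical column names ('source' and 'potassium'); it flags practices whose result distribution differs from the on-site hospital control using a Mann-Whitney test. It illustrates the idea only and is not the authors' tool.

      # Minimal sketch of a PAS-style comparison (assumed data layout, not the
      # authors' Excel tool): flag satellite practices whose potassium results
      # deviate from the on-site hospital laboratory's distribution.
      import pandas as pd
      from scipy.stats import mannwhitneyu

      def flag_practices(df, control="hospital", alpha=0.01):
          """df: columns 'source' (practice id or 'hospital') and 'potassium' (mmol/L)."""
          ref = df.loc[df["source"] == control, "potassium"]
          flagged = {}
          for practice, grp in df[df["source"] != control].groupby("source"):
              stat, p = mannwhitneyu(grp["potassium"], ref, alternative="two-sided")
              if p < alpha:
                  flagged[practice] = (grp["potassium"].median(), p)
          return flagged  # practices with a likely pre-analytical problem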

  2. The analytical calibration in (bio)imaging/mapping of the metallic elements in biological samples--definitions, nomenclature and strategies: state of the art.

    PubMed

    Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech

    2015-01-01

    Nowadays, studies of the distribution of metallic elements in biological samples are among the most important issues in analytical chemistry. There are many articles dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging of the metallic elements in various kinds of biological samples. However, this literature lacks articles reviewing calibration strategies and their problems, nomenclature, definitions, and the ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of the metallic elements in biological samples, including (1) nomenclature, (2) definitions, and (3) selected, sophisticated examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials, such as LA-ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, this work aims to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from those hitherto used. This article is the first work in the literature that refers to and emphasizes the many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping of metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. A Critical Review on Clinical Application of Separation Techniques for Selective Recognition of Uracil and 5-Fluorouracil.

    PubMed

    Pandey, Khushaboo; Dubey, Rama Shankar; Prasad, Bhim Bali

    2016-03-01

    The most important objectives frequently encountered in bio-analytical chemistry involve applying tools to relevant medical/biological problems and refining these applications. Developing a reliable sample preparation step for the medical and biological fields is another primary objective in analytical chemistry, in order to extract and isolate the analytes of interest from complex biological matrices. The main inborn errors of metabolism (IEM) diagnosable through uracil analysis, and the therapeutic monitoring of toxic 5-fluorouracil (an important anti-cancer drug) in dihydropyrimidine dehydrogenase-deficient patients, require ultra-sensitive, reproducible, selective, and accurate analytical techniques for their measurement. Therefore, keeping in view the diagnostic value of uracil and 5-fluorouracil measurements, this article reviews several analytical techniques involved in the selective recognition and quantification of uracil and 5-fluorouracil in biological and pharmaceutical samples. The study revealed that the implementation of a molecularly imprinted polymer as a solid-phase material for sample preparation and preconcentration of uracil and 5-fluorouracil has proven effective, as it obviates problems related to tedious separation techniques, owing to protein binding and drastic interferences, in complex real-sample matrices such as blood plasma and serum.

  4. External Standards or Standard Addition? Selecting and Validating a Method of Standardization

    NASA Astrophysics Data System (ADS)

    Harvey, David T.

    2002-05-01

    A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and gain practical experience of the difference between performing an external standardization and a standard addition.
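
    As a worked illustration of the standard-addition calculation contrasted in this experiment (synthetic numbers, not data from the article), the unknown concentration is read from the x-intercept of the signal versus added-concentration line:

      # Hedged illustration with synthetic numbers: standard addition extrapolates
      # the signal-vs-added-concentration line back to the x-axis; the unknown's
      # concentration is the magnitude of the x-intercept.
      import numpy as np

      added = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # ppm analyte spiked in
      signal = np.array([0.20, 0.29, 0.38, 0.47, 0.56])  # instrument response
      slope, intercept = np.polyfit(added, signal, 1)
      c_unknown = intercept / slope   # x-intercept magnitude: 0.20/0.09 = 2.22 ppm
      print(f"standard-addition result: {c_unknown:.2f} ppm")
      # spike recovery (%) = 100 * (found_spiked - found_unspiked) / added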

  5. Applications of the Analytical Electron Microscope to Materials Science

    NASA Technical Reports Server (NTRS)

    Goldstein, J. I.

    1992-01-01

    In the last 20 years, the analytical electron microscope (AEM) has allowed investigators to obtain chemical and structural information from less than 50 nanometer diameter regions in thin samples of materials and to explore problems where reactions occur at boundaries and interfaces or within small particles or phases in bulk samples. Examples of the application of the AEM to materials science problems are presented in this paper and demonstrate the usefulness and the future potential of this instrument.

  6. U.S. Geological Survey Standard Reference Sample Project: Performance Evaluation of Analytical Laboratories

    USGS Publications Warehouse

    Long, H. Keith; Daddow, Richard L.; Farrar, Jerry W.

    1998-01-01

    Since 1962, the U.S. Geological Survey (USGS) has operated the Standard Reference Sample Project to evaluate the performance of USGS, cooperator, and contractor analytical laboratories that analyze chemical constituents of environmental samples. The laboratories are evaluated by using performance evaluation samples, called Standard Reference Samples (SRSs). SRSs are submitted to laboratories semi-annually for round-robin laboratory performance comparison purposes. Currently, approximately 100 laboratories are evaluated for their analytical performance on six SRSs for inorganic and nutrient constituents. As part of the SRS Project, a surplus of homogeneous, stable SRSs is maintained for purchase by USGS offices and participating laboratories for use in continuing quality-assurance and quality-control activities. Statistical evaluation of the laboratories' results provides information for comparing the analytical performance of the laboratories and for determining possible analytical deficiencies and problems. SRS results also provide information on the bias and variability of different analytical methods used in the SRS analyses.

  7. Simulation and statistics: Like rhythm and song

    NASA Astrophysics Data System (ADS)

    Othman, Abdul Rahman

    2013-04-01

    Simulation has been introduced to solve problems expressed in the form of systems. By using this technique, two kinds of problems can be overcome: first, a problem that has an analytical solution but for which running a physical experiment would be costly in terms of money or lives; second, a problem that has no analytical solution at all. In the field of statistical inference the second kind is often encountered. With the advent of high-speed computing devices, a statistician can now use resampling techniques such as the bootstrap and permutation tests to form a pseudo sampling distribution that leads to the solution of a problem that cannot be solved analytically. This paper discusses how Monte Carlo simulation was, and still is, used to verify analytical solutions in inference. It also discusses resampling techniques as simulation techniques, examines common misunderstandings about the two, and describes successful uses of both.
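
    Both ideas sketch easily in Python (synthetic data, not from the paper): a Monte Carlo check of a textbook analytical result, followed by a bootstrap pseudo sampling distribution for a statistic with no convenient closed form.

      # Sketch of the two ideas in the abstract: (1) Monte Carlo verification of
      # an analytical result, (2) bootstrap resampling where no analytic form
      # is convenient. Numbers are synthetic.
      import numpy as np
      rng = np.random.default_rng(1)

      # (1) Analytic claim: the variance of the mean of n i.i.d. draws is sigma^2/n.
      n, sigma = 30, 2.0
      means = rng.normal(0.0, sigma, size=(10_000, n)).mean(axis=1)
      print(means.var(), sigma**2 / n)          # the two numbers should agree

      # (2) Bootstrap pseudo sampling distribution of the median.
      sample = rng.exponential(1.0, size=n)
      boot = np.array([np.median(rng.choice(sample, n, replace=True))
                       for _ in range(5_000)])
      print(np.percentile(boot, [2.5, 97.5]))   # 95% bootstrap CI for the median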

  8. Image correlation and sampling study

    NASA Technical Reports Server (NTRS)

    Popp, D. J.; Mccormack, D. S.; Sedwick, J. L.

    1972-01-01

    The development of analytical approaches for solving image correlation and image sampling of multispectral data is discussed. Relevant multispectral image statistics which are applicable to image correlation and sampling are identified. The general image statistics include intensity mean, variance, amplitude histogram, power spectral density function, and autocorrelation function. The translation problem associated with digital image registration and the analytical means for comparing commonly used correlation techniques are considered. General expressions for determining the reconstruction error for specific image sampling strategies are developed.
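
    The general image statistics listed above can each be computed in a few lines; the sketch below, on a synthetic single-band image, uses the Wiener-Khinchin relation to obtain the autocorrelation from the power spectrum (an illustration, not the report's implementation).

      # Sketch of the general image statistics named above, on a synthetic band.
      import numpy as np

      rng = np.random.default_rng(0)
      img = rng.normal(size=(128, 128))                 # synthetic single-band image
      mean, var = img.mean(), img.var()                 # intensity mean and variance
      hist, edges = np.histogram(img, bins=64)          # amplitude histogram
      F = np.fft.fft2(img - mean)
      psd = np.abs(F) ** 2 / img.size                   # periodogram PSD estimate
      acf = np.fft.ifft2(np.abs(F) ** 2).real / (img.size * var)  # acf[0,0] == 1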

  9. [Local Regression Algorithm Based on Net Analyte Signal and Its Application in Near Infrared Spectral Analysis].

    PubMed

    Zhang, Hong-guang; Lu, Jian-gang

    2016-02-01

    To overcome the problems of significant differences among samples and of nonlinearity between the property and the spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and of the unknown samples; the Euclidean distance between the net analyte signal of an unknown sample and those of the calibration samples was then calculated and used as a similarity index. According to the defined similarity index, a local calibration set was selected individually for each unknown sample. Finally, a local PLS regression model was built on each local calibration set for each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and of a conventional local regression algorithm based on spectral Euclidean distance.
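
    A hedged sketch of the scheme: here the NAS vectors are approximated by projecting spectra onto the regression vector of a global PLS model (Faber's shortcut, which may differ from the authors' exact NAS computation); Euclidean distances between NAS vectors then select the local calibration set for a local PLS prediction.

      # Hedged sketch of NAS-based local PLS (not the authors' exact code).
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def local_nas_pls(X_cal, y_cal, x_new, k=30, n_comp=5):
          # Global PLS regression vector b; NAS ~ projection onto b (Faber's shortcut).
          b = PLSRegression(n_components=n_comp).fit(X_cal, y_cal).coef_.ravel()
          P = np.outer(b, b) / (b @ b)                 # rank-1 projector onto b
          nas_cal = X_cal @ P                          # NAS vectors, calibration set
          nas_new = P @ x_new                          # NAS vector, unknown sample
          # Similarity index: Euclidean distance in NAS space; keep the k nearest.
          idx = np.argsort(np.linalg.norm(nas_cal - nas_new, axis=1))[:k]
          local = PLSRegression(n_components=min(n_comp, k - 1))
          local.fit(X_cal[idx], y_cal[idx])
          return local.predict(x_new.reshape(1, -1)).item()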

  10. SAM Technical Contacts

    EPA Pesticide Factsheets

    These technical contacts are available to help with questions regarding method deviations, modifications, sample problems or interferences, quality control requirements, the use of alternative methods, or the need to address analytes or sample types.

  11. Removal of uranium from soil samples for ICP-OES analysis of RCRA metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wero, M.; Lederer-Cano, A.; Billy, C.

    1995-12-01

    Soil samples containing high levels of uranium present unique analytical problems when analyzed for toxic metals (Ag, As, Ba, Cd, Cr, Cu, Ni, Pb, Se and Tl) because of the spectral interference of uranium in the ICP-OES emission spectrometer. Methods to remove uranium from the digestates of soil samples known to be high in uranium have been developed that reduce the initial uranium concentration (1-3%) to less than 500 ppm. UTEVA ion exchange columns, used as an ICP-OES analytical pre-treatment, reduce uranium to acceptable levels, permitting good analytical results for the RCRA metals by ICP-OES.

  12. Teaching Analytical Thinking

    ERIC Educational Resources Information Center

    Behn, Robert D.; Vaupel, James W.

    1976-01-01

    Description of the philosophy and general nature of a course at Drake University that emphasizes basic concepts of analytical thinking, including think, decompose, simplify, specify, and rethink problems. Some sample homework exercises are included. The journal is available from University of California Press, Berkeley, California 94720.…

  13. Pushing quantitation limits in micro UHPLC-MS/MS analysis of steroid hormones by sample dilution using high volume injection.

    PubMed

    Márta, Zoltán; Bobály, Balázs; Fekete, Jenő; Magda, Balázs; Imre, Tímea; Mészáros, Katalin Viola; Szabó, Pál Tamás

    2016-09-10

    Ultratrace analysis of sample components requires excellent analytical performance in terms of limits of quantitation (LoQ). Micro UHPLC coupled with sensitive tandem mass spectrometry provides state-of-the-art solutions for such analytical problems. The decreased column volume in micro LC limits the injectable sample volume. However, if the analyte concentration is extremely low, it might be necessary to inject high sample volumes. This is particularly critical for strong sample solvents and weakly retained analytes, which are often the case when preparing biological samples (protein precipitation, sample extraction, etc.). In that case, high injection volumes may cause band broadening, peak distortion or even elution in the dead volume. In this study, we evaluated the possibilities of high-volume injection onto microbore RP-LC columns when the sample solvent is diluted. The presented micro RP-LC-MS/MS method was optimized for the analysis of steroid hormones from human plasma after protein precipitation with organic solvents. A proper sample dilution procedure helps to increase the injection volume without compromising peak shapes. Finally, due to the increased injection volume, the limit of quantitation can be decreased by a factor of 2-5, depending on the analytes and the experimental conditions. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. The antisocial family tree: family histories of behavior problems in antisocial personality in the United States.

    PubMed

    Vaughn, Michael G; Salas-Wright, Christopher P; DeLisi, Matt; Qian, Zhengmin

    2015-05-01

    Multiple avenues of research (e.g., criminal careers, intergenerational family transmission, and epidemiological studies) have indicated a concentration of antisocial traits and behaviors that cluster among families and within individuals in a population. The current study draws on each of these perspectives in exploring the intergenerational contours of antisocial personality disorder across multiple generations of a large-scale epidemiological sample. The analytic sample of persons meeting criteria for antisocial personality disorder (N = 1,226) was derived from waves I and II of the National Epidemiologic Survey on Alcohol and Related Conditions. Path analytic, latent class, and multinomial models were executed to describe and elucidate family histories among persons diagnosed with antisocial personality disorder. Three classes of an antisocial family tree were found: minimal family history of problem behaviors (70.3 % of sample) who were characterized by higher socioeconomic functioning, parental and progeny behavior problems (9.4 % of sample) who were characterized by criminal behaviors, psychopathology, and substance use disorders, and multigenerational history of problem behaviors (20.3 % of sample) who were characterized by alcoholism, psychopathology, and versatile criminal offending. These findings add a typology to intergenerational studies of antisocial behavior that can assist in identifying etiological and treatment factors among those for whom crime runs in the family.

  15. Sequential Multiplex Analyte Capturing for Phosphoprotein Profiling*

    PubMed Central

    Poetz, Oliver; Henzler, Tanja; Hartmann, Michael; Kazmaier, Cornelia; Templin, Markus F.; Herget, Thomas; Joos, Thomas O.

    2010-01-01

    Microarray-based sandwich immunoassays can simultaneously detect dozens of proteins. However, their use in quantifying large numbers of proteins is hampered by cross-reactivity and incompatibilities caused by the immunoassays themselves. Sequential multiplex analyte capturing addresses these problems by repeatedly probing the same sample with different sets of antibody-coated, magnetic suspension bead arrays. As a miniaturized immunoassay format, suspension bead array-based assays fulfill the criteria of the ambient analyte theory, and our experiments reveal that the analyte concentrations are not significantly changed. The value of sequential multiplex analyte capturing was demonstrated by probing tumor cell line lysates for the abundance of seven different receptor tyrosine kinases and their degree of phosphorylation and by measuring the complex phosphorylation pattern of the epidermal growth factor receptor in the same sample from the same cavity. PMID:20682761

  16. SOIL AND SEDIMENT SAMPLING METHODS

    EPA Science Inventory

    The EPA Office of Solid Waste and Emergency Response's (OSWER) Office of Superfund Remediation and Technology Innovation (OSRTI) needs innovative methods and techniques to solve new and difficult sampling and analytical problems found at the numerous Superfund sites throughout th...

  17. Post-column infusion study of the 'dosing vehicle effect' in the liquid chromatography/tandem mass spectrometric analysis of discovery pharmacokinetic samples.

    PubMed

    Shou, Wilson Z; Naidong, Weng

    2003-01-01

    It has become increasingly popular in drug development to conduct discovery pharmacokinetic (PK) studies in order to evaluate important PK parameters of new chemical entities (NCEs) early in the discovery process. In these studies, dosing vehicles are typically employed in high concentrations to dissolve the test compounds in dose formulations. This can pose significant problems for the liquid chromatography/tandem mass spectrometric (LC/MS/MS) analysis of incurred samples due to potential signal suppression of the analytes caused by the vehicles. In this paper, model test compounds in rat plasma were analyzed using a generic fast gradient LC/MS/MS method. Commonly used dosing vehicles, including poly(ethylene glycol) 400 (PEG 400), polysorbate 80 (Tween 80), hydroxypropyl beta-cyclodextrin, and N,N-dimethylacetamide, were fortified into rat plasma at 5 mg/mL before extraction. Their effects on the sample analysis results were evaluated by the method of post-column infusion. Results thus obtained indicated that polymeric vehicles such as PEG 400 and Tween 80 caused significant suppression (> 50%, compared with results obtained from plasma samples free from vehicles) to certain analytes, when minimum sample cleanup was used and the analytes happened to co-elute with the vehicles. Effective means to minimize this 'dosing vehicle effect' included better chromatographic separations, better sample cleanup, and alternative ionization methods. Finally, a real-world example is given to illustrate the suppression problem posed by high levels of PEG 400 in sample analysis, and to discuss steps taken in overcoming the problem. A simple but effective means of identifying a 'dosing vehicle effect' is also proposed. Copyright 2003 John Wiley & Sons, Ltd.

  18. A Short Research Note on Calculating Exact Distribution Functions and Random Sampling for the 3D NFW Profile

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Howlett, Cullan

    2018-06-01

    In this short note we publish the analytic quantile function for the Navarro, Frenk & White (NFW) profile. All known published and coded methods for sampling from the 3D NFW PDF use either accept-reject, or numeric interpolation (sometimes via a lookup table) for projecting random Uniform samples through the quantile distribution function to produce samples of the radius. This is a common requirement in N-body initial condition (IC), halo occupation distribution (HOD), and semi-analytic modelling (SAM) work for correctly assigning particles or galaxies to positions given an assumed concentration for the NFW profile. Using this analytic description allows for much faster and cleaner code to solve a common numeric problem in modern astronomy. We release R and Python versions of simple code that achieves this sampling, which we note is trivial to reproduce in any modern programming language.
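
    For illustration, an inverse-transform sampler built on a Lambert-W form of the NFW quantile is shown below. The closed form is our own derivation from the NFW enclosed-mass profile and may differ in notation from the paper's released R/Python code: with mu(x) = ln(1+x) - x/(1+x), solving mu(q) = p*mu(c) for a uniform quantile p gives q = -1 - 1/W0(-exp(-1 - p*mu(c))), and r = q*r_s.

      # Inverse-transform sampling of 3D NFW radii via an analytic quantile
      # (our own Lambert-W form; the paper's released code may differ in notation).
      import numpy as np
      from scipy.special import lambertw

      def mu(x):
          # NFW enclosed-mass shape: mu(x) = ln(1+x) - x/(1+x)
          return np.log1p(x) - x / (1.0 + x)

      def sample_nfw_radii(n, c=5.0, r_s=1.0, rng=None):
          if rng is None:
              rng = np.random.default_rng()
          p = rng.uniform(size=n)                       # uniform quantiles in (0,1)
          q = -1.0 - 1.0 / lambertw(-np.exp(-1.0 - p * mu(c)), k=0).real
          return r_s * q                                # radii in (0, c*r_s]

      radii = sample_nfw_radii(100_000, c=10.0)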

  19. Blood venous sample collection: Recommendations overview and a checklist to improve quality.

    PubMed

    Giavarina, Davide; Lippi, Giuseppe

    2017-07-01

    The extra-analytical phases of the total testing process have substantial impact on managed care, as well as an inherent high risk of vulnerability to errors which is often greater than that of the analytical phase. The collection of biological samples is a crucial preanalytical activity. Problems or errors occurring shortly before, or soon after, this preanalytical step may impair sample quality and characteristics, or else modify the final results of testing. The standardization of fasting requirements, rest, patient position and psychological state of the patient is therefore crucial for mitigating the impact of preanalytical variability. Moreover, the quality of materials used for collecting specimens, along with their compatibility, can guarantee sample quality and persistence of chemical and physical characteristics of the analytes over time, so safeguarding the reliability of testing. Appropriate techniques and sampling procedures are effective to prevent problems such as hemolysis, undue clotting in the blood tube, draw of insufficient sample volume and modification of analyte concentration. An accurate identification of both patient and blood samples is a key priority as for other healthcare activities. Good laboratory practice and appropriate training of operators, by specifically targeting collection of biological samples, blood in particular, may greatly improve this issue, thus lowering the risk of errors and their adverse clinical consequences. The implementation of a simple and rapid checklist, including verification of blood collection devices, patient preparation and sampling techniques, was found to be effective for enhancing sample quality and reducing some preanalytical errors associated with these procedures. The use of this tool, along with implementation of objective and standardized systems for detecting non-conformities related to unsuitable samples, can be helpful for standardizing preanalytical activities and improving the quality of laboratory diagnostics, ultimately helping to reaffirm a "preanalytical" culture founded on knowledge and real risk perception. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  20. Capillary Zone Electrophoresis for the Analysis of Peptides: Fostering Students' Problem-Solving and Discovery Learning in an Undergraduate Laboratory Experiment

    ERIC Educational Resources Information Center

    Albright, Jessica C.; Beussman, Douglas J.

    2017-01-01

    Capillary electrophoresis is an important analytical separation method used to study a wide variety of samples, including those of biological origin. Capillary electrophoresis may be covered in the classroom, especially in advanced analytical courses, and while many students are exposed to gel electrophoresis in biology or biochemistry…

  1. NIR and UV-vis spectroscopy, artificial nose and tongue: comparison of four fingerprinting techniques for the characterisation of Italian red wines.

    PubMed

    Casale, M; Oliveri, P; Armanino, C; Lanteri, S; Forina, M

    2010-06-04

    Four rapid and low-cost vanguard analytical systems (NIR and UV-vis spectroscopy, a headspace-mass based artificial nose and a voltammetric artificial tongue), together with chemometric pattern recognition techniques, were applied and compared in addressing a food authentication problem: the distinction between wine samples from the same Italian oenological region, according to the grape variety. Specifically, 59 certified samples belonging to the Barbera d'Alba and Dolcetto d'Alba appellations and collected from the same vintage (2007) were analysed. The instrumental responses, after proper data pre-processing, were used as fingerprints of the characteristics of the samples: the results from principal component analysis and linear discriminant analysis were discussed, comparing the capability of the four analytical strategies in addressing the problem studied. Copyright 2010 Elsevier B.V. All rights reserved.
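
    The pattern-recognition step translates directly into a few lines of scikit-learn. The sketch below runs on synthetic stand-ins for the fingerprints (59 samples, two cultivars, as in the study) and is a schematic of the PCA-plus-LDA workflow rather than a reproduction of the published analysis.

      # Schematic PCA + LDA classification of fingerprint data (synthetic stand-in).
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(59, 400))             # 59 samples x 400 "wavelengths"
      y = np.repeat(["Barbera", "Dolcetto"], [30, 29])
      X[y == "Barbera"] += 0.3                   # small class offset, for the demo

      model = make_pipeline(StandardScaler(), PCA(n_components=10),
                            LinearDiscriminantAnalysis())
      print(cross_val_score(model, X, y, cv=5).mean())   # classification rate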

  2. A transient laboratory method for determining the hydraulic properties of 'tight' rocks-I. Theory

    USGS Publications Warehouse

    Hsieh, P.A.; Tracy, J.V.; Neuzil, C.E.; Bredehoeft, J.D.; Silliman, Stephen E.

    1981-01-01

    Transient pulse testing has been employed increasingly in the laboratory to measure the hydraulic properties of rock samples with low permeability. Several investigators have proposed a mathematical model in terms of an initial-boundary value problem to describe fluid flow in a transient pulse test. However, the solution of this problem has not been available. In analyzing data from the transient pulse test, previous investigators have either employed analytical solutions that are derived with the use of additional, restrictive assumptions, or have resorted to numerical methods. In Part I of this paper, a general, analytical solution for the transient pulse test is presented. This solution is graphically illustrated by plots of dimensionless variables for several cases of interest. The solution is shown to contain, as limiting cases, the more restrictive analytical solutions that the previous investigators have derived. A method of computing both the permeability and specific storage of the test sample from experimental data will be presented in Part II. © 1981.

  3. An evolution based biosensor receptor DNA sequence generation algorithm.

    PubMed

    Kim, Eungyeong; Lee, Malrey; Gatton, Thomas M; Lee, Jaewan; Zang, Yupeng

    2010-01-01

    A biosensor is composed of a bioreceptor, an associated recognition molecule, and a signal transducer that can selectively detect target substances for analysis. DNA-based biosensors utilize receptor molecules that allow hybridization with the target analyte. However, most DNA biosensor research uses oligonucleotides as the target analytes and does not address the potential problems of real samples. The identification of recognition molecules suitable for real target analyte samples is an important step towards further development of DNA biosensors. This study examines the characteristics of DNA used as bioreceptors and proposes a hybrid evolution-based DNA sequence generating algorithm, based on DNA computing, to identify suitable DNA bioreceptor recognition molecules for stable hybridization with real target substances. The Traveling Salesman Problem (TSP) approach is applied in the proposed algorithm to evaluate the safety and fitness of the generated DNA sequences. This approach improves the efficiency and stability of variable-length DNA sequence generation and allows extension to the generation of sequences with diverse receptor recognition requirements.

  4. Direct trace-elemental analysis of urine samples by laser ablation-inductively coupled plasma mass spectrometry after sample deposition on clinical filter papers.

    PubMed

    Aramendía, Maite; Rello, Luis; Vanhaecke, Frank; Resano, Martín

    2012-10-16

    Collection of biological fluids on clinical filter papers shows important advantages from a logistic point of view, although analysis of these specimens is far from straightforward. Concerning urine analysis, and particularly when direct trace elemental analysis by laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS) is aimed at, several problems arise, such as lack of sensitivity or uneven distribution of the analytes on the filter paper, rendering reliable quantitative results quite difficult to obtain. In this paper, a novel approach for urine collection is proposed which circumvents many of these problems. This methodology consists of the use of precut filter paper discs on which large amounts of sample can be retained upon a single deposition. This provides higher amounts of the target analytes and, thus, sufficient sensitivity, and allows addition of an adequate internal standard at the clinical lab prior to analysis, therefore making it suitable for a strategy based on unsupervised sample collection and later analysis at referral centers. On the basis of this sampling methodology, an analytical method was developed for the direct determination of several elements in urine (Be, Bi, Cd, Co, Cu, Ni, Sb, Sn, Tl, Pb, and V) at the low μg L⁻¹ level by means of LA-ICPMS. The method developed provides good results in terms of accuracy and LODs (≤1 μg L⁻¹ for most of the analytes tested), with a precision of around 15%, fit for purpose for clinical control analysis.

  5. USGS Blind Sample Project: monitoring and evaluating laboratory analytical quality

    USGS Publications Warehouse

    Ludtke, Amy S.; Woodworth, Mark T.

    1997-01-01

    The U.S. Geological Survey (USGS) collects and disseminates information about the Nation's water resources. Surface- and ground-water samples are collected and sent to USGS laboratories for chemical analyses. The laboratories identify and quantify the constituents in the water samples. Random and systematic errors occur during sample handling, chemical analysis, and data processing. Although all errors cannot be eliminated from measurements, the magnitude of their uncertainty can be estimated and tracked over time. Since 1981, the USGS has operated an independent, external, quality-assurance project called the Blind Sample Project (BSP). The purpose of the BSP is to monitor and evaluate the quality of laboratory analytical results through the use of double-blind quality-control (QC) samples. The information provided by the BSP assists the laboratories in detecting and correcting problems in the analytical procedures. The information also can aid laboratory users in estimating the extent that laboratory errors contribute to the overall errors in their environmental data.

  6. Chemistry and haematology sample rejection and clinical impact in a tertiary laboratory in Cape Town.

    PubMed

    Jacobsz, Lourens A; Zemlin, Annalise E; Roos, Mark J; Erasmus, Rajiv T

    2011-10-14

    Recent publications report that up to 70% of total laboratory errors occur in the pre-analytical phase. Identification of specific problems highlights pre-analytic processes susceptible to errors. The rejection of unsuitable samples can lead to delayed turnaround time and affect patient care. A retrospective audit was conducted investigating the rejection rate of routine blood specimens received at chemistry and haematology laboratories over a 2-week period. The reasons for rejection and potential clinical impact of these rejections were investigated. Thirty patient files were randomly selected and examined to assess the impact of these rejections on clinical care. A total of 32,910 specimens were received during the study period, of which 481 were rejected, giving a rejection rate of 1.46%. The main reasons for rejection were inappropriate clotting (30%) and inadequate sample volume (22%). Only 51.7% of rejected samples were repeated and the average time for a repeat sample to reach the laboratory was about 5 days (121 h). Of the repeated samples, 5.1% had results within critical values. Examination of patient folders showed that in 40% of cases the rejection of samples had an impact on patient care. The evaluation of pre-analytical processes in the laboratory, with regard to sample rejection, allowed one to identify problem areas where improvement is necessary. Rejected samples due to factors out of the laboratory's control had a definite impact on patient care and can thus affect customer satisfaction. Clinicians should be aware of these factors to prevent such rejections.

  7. An analytical approach to thermal modeling of Bridgman type crystal growth: One dimensional analysis. Computer program users manual

    NASA Technical Reports Server (NTRS)

    Cothran, E. K.

    1982-01-01

    The computer program written in support of one dimensional analytical approach to thermal modeling of Bridgman type crystal growth is presented. The program listing and flow charts are included, along with the complete thermal model. Sample problems include detailed comments on input and output to aid the first time user.

  8. Modification of a Microwave Oven for Laboratory Use.

    ERIC Educational Resources Information Center

    Andrews, Judith; Atkinson, George F.

    1984-01-01

    Discusses use of a domestic microwave oven for drying analytical samples with time savings compared to conventional ovens, providing a solution to the problem of loss of load as samples dry. Presents a system for examining emitted gases from drying process and reports results of several test dryings. (JM)

  9. Current projects in Pre-analytics: where to go?

    PubMed

    Sapino, Anna; Annaratone, Laura; Marchiò, Caterina

    2015-01-01

    The current clinical practice of tissue handling and sample preparation is multifaceted and lacks strict standardisation: this scenario leads to significant variability in the quality of clinical samples. Poor tissue preservation has a detrimental effect, leading to morphological artefacts, hampering the reproducibility of immunocytochemical and molecular diagnostic results (protein expression, DNA gene mutations, RNA gene expression) and affecting research outcomes with irreproducible gene expression and post-transcriptional data. Altogether, this limits the opportunity to share and pool national databases into common European databases. At the European level, standardisation of pre-analytical steps is just at the beginning, and issues regarding bio-specimen collection and management are still debated. A joint (public-private) project on the standardisation of tissue handling in pre-analytical procedures has recently been funded in Italy with the aim of proposing novel approaches to the neglected issue of pre-analytical procedures. In this chapter, we will show how investing in pre-analytics may impact both public health problems and practical innovation in solid tumour processing.

  10. Screening blood samples to estimate when oxytetracycline residues exceed regulatory tolerances in poultry muscle

    USDA-ARS?s Scientific Manuscript database

    The presence of antibiotic residues in edible animal products is a human food safety concern. To address this potential problem, the government samples edible tissues, such as muscle, to monitor for residues. Due to loss of valuable product and analytical difficulties only a small percentage of po...

  11. Measurement and Visualization of Mass Transport for the Flowing Atmospheric Pressure Afterglow (FAPA) Ambient Mass-Spectrometry Source

    PubMed Central

    Pfeuffer, Kevin P.; Ray, Steven J.; Hieftje, Gary M.

    2014-01-01

    Ambient desorption/ionization mass spectrometry (ADI-MS) has developed into an important analytical field over the last nine years. The ability to analyze samples under ambient conditions while retaining the sensitivity and specificity of mass spectrometry has led to numerous applications and a corresponding jump in the popularity of this field. Despite the great potential of ADI-MS, problems remain in the areas of ion identification and quantification. Difficulties with ion identification can be solved through modified instrumentation, including accurate-mass or MS/MS capabilities for analyte identification. More difficult problems include quantification due to the ambient nature of the sampling process. To characterize and improve sample volatilization, ionization, and introduction into the mass-spectrometer interface, a method of visualizing mass transport into the mass spectrometer is needed. Schlieren imaging is a well-established technique that renders small changes in refractive index visible. Here, schlieren imaging was used to visualize helium flow from a plasma-based ADI-MS source into a mass spectrometer while ion signals were recorded. Optimal sample positions for melting-point capillary and transmission-mode (stainless steel mesh) introduction were found to be near (within 1 mm of) the mass spectrometer inlet. Additionally, the orientation of the sampled surface plays a significant role. More efficient mass transport resulted for analyte deposits directly facing the MS inlet. Different surfaces (glass slide and rough surface) were also examined; for both it was found that the optimal position is immediately beneath the MS inlet. PMID:24658804

  12. Measurement and visualization of mass transport for the flowing atmospheric pressure afterglow (FAPA) ambient mass-spectrometry source.

    PubMed

    Pfeuffer, Kevin P; Ray, Steven J; Hieftje, Gary M

    2014-05-01

    Ambient desorption/ionization mass spectrometry (ADI-MS) has developed into an important analytical field over the last 9 years. The ability to analyze samples under ambient conditions while retaining the sensitivity and specificity of mass spectrometry has led to numerous applications and a corresponding jump in the popularity of this field. Despite the great potential of ADI-MS, problems remain in the areas of ion identification and quantification. Difficulties with ion identification can be solved through modified instrumentation, including accurate-mass or MS/MS capabilities for analyte identification. More difficult problems include quantification because of the ambient nature of the sampling process. To characterize and improve sample volatilization, ionization, and introduction into the mass spectrometer interface, a method of visualizing mass transport into the mass spectrometer is needed. Schlieren imaging is a well-established technique that renders small changes in refractive index visible. Here, schlieren imaging was used to visualize helium flow from a plasma-based ADI-MS source into a mass spectrometer while ion signals were recorded. Optimal sample positions for melting-point capillary and transmission-mode (stainless steel mesh) introduction were found to be near (within 1 mm of) the mass spectrometer inlet. Additionally, the orientation of the sampled surface plays a significant role. More efficient mass transport resulted for analyte deposits directly facing the MS inlet. Different surfaces (glass slide and rough surface) were also examined; for both it was found that the optimal position is immediately beneath the MS inlet.

  13. Measurement and Visualization of Mass Transport for the Flowing Atmospheric Pressure Afterglow (FAPA) Ambient Mass-Spectrometry Source

    NASA Astrophysics Data System (ADS)

    Pfeuffer, Kevin P.; Ray, Steven J.; Hieftje, Gary M.

    2014-05-01

    Ambient desorption/ionization mass spectrometry (ADI-MS) has developed into an important analytical field over the last 9 years. The ability to analyze samples under ambient conditions while retaining the sensitivity and specificity of mass spectrometry has led to numerous applications and a corresponding jump in the popularity of this field. Despite the great potential of ADI-MS, problems remain in the areas of ion identification and quantification. Difficulties with ion identification can be solved through modified instrumentation, including accurate-mass or MS/MS capabilities for analyte identification. More difficult problems include quantification because of the ambient nature of the sampling process. To characterize and improve sample volatilization, ionization, and introduction into the mass spectrometer interface, a method of visualizing mass transport into the mass spectrometer is needed. Schlieren imaging is a well-established technique that renders small changes in refractive index visible. Here, schlieren imaging was used to visualize helium flow from a plasma-based ADI-MS source into a mass spectrometer while ion signals were recorded. Optimal sample positions for melting-point capillary and transmission-mode (stainless steel mesh) introduction were found to be near (within 1 mm of) the mass spectrometer inlet. Additionally, the orientation of the sampled surface plays a significant role. More efficient mass transport resulted for analyte deposits directly facing the MS inlet. Different surfaces (glass slide and rough surface) were also examined; for both it was found that the optimal position is immediately beneath the MS inlet.

  14. A framework for parallelized efficient global optimization with application to vehicle crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Hamza, Karim; Shalaby, Mohamed

    2014-09-01

    This article presents a framework for simulation-based design optimization of computationally expensive problems, where economizing the generation of sample designs is highly desirable. One popular approach for such problems is efficient global optimization (EGO), where an initial set of design samples is used to construct a kriging model, which is then used to generate new 'infill' sample designs at regions of the search space where there is high expectancy of improvement. This article attempts to address one of the limitations of EGO, where generation of infill samples can become a difficult optimization problem in its own right, as well as allow the generation of multiple samples at a time in order to take advantage of parallel computing in the evaluation of the new samples. The proposed approach is tested on analytical functions, and then applied to the vehicle crashworthiness design of a full Geo Metro model undergoing frontal crash conditions.
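
    At the heart of the infill step is the expected-improvement (EI) criterion. A minimal sketch of the standard EI formula for minimization follows (the article's parallel multi-infill machinery is omitted): EI(x) = (y_min - mu)*Phi(z) + sigma*phi(z) with z = (y_min - mu)/sigma, where mu and sigma are the kriging prediction and its standard error at candidate x, and Phi, phi are the standard normal CDF and PDF.

      # Standard expected-improvement infill criterion for minimization (sketch;
      # the kriging model supplying mu and sigma is assumed, not shown).
      import numpy as np
      from scipy.stats import norm

      def expected_improvement(mu, sigma, y_min):
          sigma = np.maximum(sigma, 1e-12)      # guard fully-certain predictions
          z = (y_min - mu) / sigma
          return (y_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

      # e.g., pick the next infill design among candidate predictions:
      mu = np.array([1.2, 0.9, 1.1]); se = np.array([0.05, 0.30, 0.20])
      best = np.argmax(expected_improvement(mu, se, y_min=1.0))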

  15. The Efficacy of Cognitive Behavioral Therapy: A Review of Meta-analyses

    PubMed Central

    Hofmann, Stefan G.; Asnaani, Anu; Vonk, Imke J.J.; Sawyer, Alice T.; Fang, Angela

    2012-01-01

    Cognitive behavioral therapy (CBT) refers to a popular therapeutic approach that has been applied to a variety of problems. The goal of this review was to provide a comprehensive survey of meta-analyses examining the efficacy of CBT. We identified 269 meta-analytic studies and reviewed a representative sample of 106 of them, examining CBT for the following problems: substance use disorder, schizophrenia and other psychotic disorders, depression and dysthymia, bipolar disorder, anxiety disorders, somatoform disorders, eating disorders, insomnia, personality disorders, anger and aggression, criminal behaviors, general stress, distress due to general medical conditions, chronic pain and fatigue, distress related to pregnancy complications and female hormonal conditions. Additional meta-analytic reviews examined the efficacy of CBT for various problems in children and elderly adults. The strongest support exists for CBT of anxiety disorders, somatoform disorders, bulimia, anger control problems, and general stress. Eleven studies compared response rates between CBT and other treatments or control conditions. CBT showed higher response rates than the comparison conditions in 7 of these reviews, and only one review reported that CBT had lower response rates than comparison treatments. In general, the evidence base of CBT is very strong. However, additional research is needed to examine the efficacy of CBT in randomized controlled studies. Moreover, except for children and elderly populations, no meta-analytic studies of CBT have been reported on specific subgroups, such as ethnic minorities and low-income samples. PMID:23459093

  16. Quality-control materials in the USDA National Food and Nutrient Analysis Program (NFNAP).

    PubMed

    Phillips, Katherine M; Patterson, Kristine Y; Rasor, Amy S; Exler, Jacob; Haytowitz, David B; Holden, Joanne M; Pehrsson, Pamela R

    2006-03-01

    The US Department of Agriculture (USDA) Nutrient Data Laboratory (NDL) develops and maintains the USDA National Nutrient Databank System (NDBS). Data are released from the NDBS for scientific and public use through the USDA National Nutrient Database for Standard Reference (SR) ( http://www.ars.usda.gov/ba/bhnrc/ndl ). In 1997 the NDL initiated the National Food and Nutrient Analysis Program (NFNAP) to update and expand its food-composition data. The program included: 1) nationwide probability-based sampling of foods; 2) central processing and archiving of food samples; 3) analysis of food components at commercial, government, and university laboratories; 4) incorporation of new analytical data into the NDBS; and 5) dissemination of these data to the scientific community. A key feature and strength of the NFNAP was a rigorous quality-control program that enabled independent verification of the accuracy and precision of analytical results. Custom-made food-control composites and/or commercially available certified reference materials were sent to the laboratories, blinded, with the samples. Data for these materials were essential for the ongoing monitoring of analytical work, for identifying and resolving suspected analytical problems, and for ensuring the accuracy and precision of results for the NFNAP food samples.

  17. Co-occurring substance-related and behavioral addiction problems: A person-centered, lay epidemiology approach.

    PubMed

    Konkolÿ Thege, Barna; Hodgins, David C; Wild, T Cameron

    2016-12-01

    Background and aims The aims of this study were (a) to describe the prevalence of single versus multiple addiction problems in a large representative sample and (b) to identify distinct subgroups of people experiencing substance-related and behavioral addiction problems. Methods A random sample of 6,000 respondents from Alberta, Canada, completed survey items assessing self-attributed problems experienced in the past year with four substances (alcohol, tobacco, marijuana, and cocaine) and six behaviors (gambling, eating, shopping, sex, video gaming, and work). Hierarchical cluster analyses were used to classify patterns of co-occurring addiction problems on an analytic subsample of 2,728 respondents (1,696 women and 1,032 men; Mage = 45.1 years, SDage = 13.5 years) who reported problems with one or more of the addictive behaviors in the previous year. Results In the total sample, 49.2% of the respondents reported zero, 29.8% reported one, 13.1% reported two, and 7.9% reported three or more addiction problems in the previous year. Cluster-analytic results suggested a 7-group solution. Members of most clusters were characterized by multiple addiction problems; the average number of past year addictive behaviors in cluster members ranged between 1 (Cluster II: excessive eating only) and 2.5 (Cluster VII: excessive video game playing with the frequent co-occurrence of smoking, excessive eating and work). Discussion and conclusions Our findings replicate previous results indicating that about half of the adult population struggles with at least one excessive behavior in a given year; however, our analyses revealed a higher number of co-occurring addiction clusters than typically found in previous studies.

  18. Co-occurring substance-related and behavioral addiction problems: A person-centered, lay epidemiology approach

    PubMed Central

    Konkolÿ Thege, Barna; Hodgins, David C.; Wild, T. Cameron

    2016-01-01

    Background and aims The aims of this study were (a) to describe the prevalence of single versus multiple addiction problems in a large representative sample and (b) to identify distinct subgroups of people experiencing substance-related and behavioral addiction problems. Methods A random sample of 6,000 respondents from Alberta, Canada, completed survey items assessing self-attributed problems experienced in the past year with four substances (alcohol, tobacco, marijuana, and cocaine) and six behaviors (gambling, eating, shopping, sex, video gaming, and work). Hierarchical cluster analyses were used to classify patterns of co-occurring addiction problems on an analytic subsample of 2,728 respondents (1,696 women and 1,032 men; Mage = 45.1 years, SDage = 13.5 years) who reported problems with one or more of the addictive behaviors in the previous year. Results In the total sample, 49.2% of the respondents reported zero, 29.8% reported one, 13.1% reported two, and 7.9% reported three or more addiction problems in the previous year. Cluster-analytic results suggested a 7-group solution. Members of most clusters were characterized by multiple addiction problems; the average number of past year addictive behaviors in cluster members ranged between 1 (Cluster II: excessive eating only) and 2.5 (Cluster VII: excessive video game playing with the frequent co-occurrence of smoking, excessive eating and work). Discussion and conclusions Our findings replicate previous results indicating that about half of the adult population struggles with at least one excessive behavior in a given year; however, our analyses revealed a higher number of co-occurring addiction clusters than typically found in previous studies. PMID:27829288

  19. Results of quality-control sampling of water, bed sediment, and tissue in the Western Lake Michigan Drainages study unit of the National Water-Quality Assessment Program

    USGS Publications Warehouse

    Fitzgerald, S.A.

    1997-01-01

    This report contains the quality-control results of the Western Lake Michigan Drainages study unit of the National Water-Quality Assessment Program. Quality-control samples were collected in the same manner as, and contemporaneously with, environmental samples during the first high-intensity study phase in the unit (1992 through 1995) and amounted to approximately 15 percent of all samples collected. The accuracy and precision of hundreds of chemical analyses of surface water, ground water, bed sediment, and tissue were determined through the collection and analysis of field blanks, field replicates and splits, matrix spikes, and surrogates. Despite several detections of analytes in the field blanks, the concentrations of most constituents in the environmental samples will likely be an order of magnitude or more above those in the blanks. However, the frequent detections, and high concentrations, of dissolved organic carbon (DOC) in several surface- and ground-water blanks are probably significant with respect to commonly measured environmental concentrations, and the environmental data will have to be qualified accordingly. The precision of water sampling, on a percent basis, as determined from replicates and splits, generally depended on constituent concentration: constituents present at relatively high concentrations generally showed less sampling variability than those at relatively low concentrations. In general, analytes with relatively high variability between replicates were present at concentrations near the reporting limit, were associated with relatively small absolute concentration differences, or both. The precision of replicates was similar to that of splits in bed-sediment samples, thus eliminating sampling as a major source of variability in analyte concentrations. In the case of the phthalates in bed sediment, contamination in either the field or the laboratory could have caused the relatively large variability between replicate samples and between split samples. Variability of analyte concentrations in tissue samples was relatively low, being 29 percent or less for all constituents. Recoveries of most laboratory schedule 2001/2010 pesticide spike compounds in surface-water samples were reasonably good. Low intrinsic method recovery resulted in relatively low recovery for p,p'-DDE, metribuzin, and propargite. In the case of propargite, decomposition within the environmental sample matrices was also indicated. Recoveries of two compounds, cyanazine and thiobencarb, might have been biased high due to interferences. The one laboratory schedule 2050/2051 field matrix pesticide spike indicated numerous operational problems with this method that biased recoveries either low or high. Recoveries of pesticides from both pesticide schedules in field spikes of ground-water samples generally were similar to those of field matrix spikes of surface-water samples. High maximum recoveries were noted for tebuthiuron, disulfoton, DCPA, and permethrin, which indicates the possible presence of interferents in the matrices for these compounds. Problems in the recoveries of pesticides on schedule 2050/2051 from ground-water samples generally were the same as those for surface-water samples. Recoveries of VOCs in field matrix spikes were reasonable when consideration was given to the use of the micropipettor, which delivered, on average, only about 80 percent of the nominal mass of spiked analytes. Finally, the recoveries of most surrogate compounds in surface- and ground-water samples were reasonable. Problems in sample handling (for example, spillage) were likely not the cause of any of the low recoveries of spiked compounds.

  20. Exploration Geochemistry.

    ERIC Educational Resources Information Center

    Closs, L. Graham

    1983-01-01

    Contributions in mineral-deposit model formulation, geochemical exploration in glaciated and arid environments, analytical and sampling problems, and bibliographic research were made in symposia held and proceedings volumes published during 1982. Highlights of these symposia and proceedings and comments on trends in exploration geochemistry are…

  1. An extended laser flash technique for thermal diffusivity measurement of high-temperature materials

    NASA Technical Reports Server (NTRS)

    Shen, F.; Khodadadi, J. M.

    1993-01-01

    Knowledge of thermal diffusivity data for high-temperature materials (solids and liquids) is very important in analyzing a number of processes, among them solidification, crystal growth, and welding. However, reliable thermal diffusivity versus temperature data, particularly for high-temperature liquids, are still far from complete. The main measurement difficulties are due to the presence of convection and the requirement for a container. Fortunately, the availability of levitation techniques has made it possible to solve the containment problem. Based on the feasibility of levitation technology, a new laser flash technique, applicable to both levitated liquid and solid samples, is being developed. At this point, the analysis for solid samples is near completion, and highlights of the technique are presented here. The levitated solid sample, which is assumed to be a sphere, is subjected to a very short burst of high-power radiant energy. The temperature of the irradiated surface area is elevated and a transient heat transfer process takes place within the sample. This containerless process is a two-dimensional unsteady heat conduction problem. Because of the nonlinearity of the radiative-plus-convective boundary condition, an analytic solution cannot be obtained. Two options are available at this point. First, the radiation boundary condition can be linearized, which then accommodates a closed-form analytic solution; comparison of the analytic curves for the temperature rise at different points with the experimentally measured values will then provide the thermal diffusivity values. Second, one may set up an inverse conduction problem whereby the experimentally obtained surface temperature history is used as the boundary condition; the thermal diffusivity can then be evaluated by minimizing the difference between the real heat flux boundary condition (radiation plus convection) and the measurements. The status of an experimental study directed at measuring the thermal diffusivity of high-temperature solid samples of pure nickel and Inconel 718 superalloy is presented. Preliminary measurements showing surface temperature histories are discussed.

  2. Material property analytical relations for the case of an AFM probe tapping a viscoelastic surface containing multiple characteristic times

    PubMed Central

    López-Guerra, Enrique A

    2017-01-01

    We explore the contact problem of a flat-end indenter penetrating intermittently a generalized viscoelastic surface, containing multiple characteristic times. This problem is especially relevant for nanoprobing of viscoelastic surfaces with the highly popular tapping-mode AFM imaging technique. By focusing on the material perspective and employing a rigorous rheological approach, we deliver analytical closed-form solutions that provide physical insight into the viscoelastic sources of repulsive forces, tip–sample dissipation and virial of the interaction. We also offer a systematic comparison to the well-established standard harmonic excitation, which is the case relevant for dynamic mechanical analysis (DMA) and for AFM techniques where tip–sample sinusoidal interaction is permanent. This comparison highlights the substantial complexity added by the intermittent-contact nature of the interaction, which precludes the derivation of straightforward equations as is the case for the well-known harmonic excitations. The derivations offered have been thoroughly validated through numerical simulations. Despite the complexities inherent to the intermittent-contact nature of the technique, the analytical findings highlight the potential feasibility of extracting meaningful viscoelastic properties with this imaging method. PMID:29114450
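
    The "multiple characteristic times" of the abstract correspond to a generalized Maxwell (Prony series) description of the surface. As a point of reference for the harmonic, DMA-like case the authors compare against, the sketch below evaluates the relaxation modulus and the storage and loss moduli of a hypothetical three-arm model; all parameter values are illustrative, and the tapping-mode derivations of the paper are not reproduced.

```python
import numpy as np

# Hypothetical three-arm generalized Maxwell (Prony series) surface:
G_inf = 1.0e6                         # equilibrium modulus, Pa
G_arms = np.array([5e6, 2e6, 1e6])    # arm moduli, Pa
taus = np.array([1e-6, 1e-4, 1e-2])   # characteristic times, s

def relaxation_modulus(t):
    # G(t) = G_inf + sum_i G_i * exp(-t / tau_i)
    return G_inf + (G_arms * np.exp(-np.atleast_1d(t)[:, None] / taus)).sum(axis=1)

def harmonic_moduli(omega):
    # Storage and loss moduli for steady sinusoidal (DMA-like) excitation.
    wt = np.atleast_1d(omega)[:, None] * taus
    G_storage = G_inf + (G_arms * wt**2 / (1 + wt**2)).sum(axis=1)
    G_loss = (G_arms * wt / (1 + wt**2)).sum(axis=1)
    return G_storage, G_loss

print(relaxation_modulus(0.0))           # instantaneous modulus: G_inf + sum(G_arms)
print(harmonic_moduli(2 * np.pi * 1e3))  # moduli at a 1 kHz drive
```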

  3. Class-modelling in food analytical chemistry: Development, sampling, optimisation and validation issues - A tutorial.

    PubMed

    Oliveri, Paolo

    2017-08-22

    Qualitative data modelling is a fundamental branch of pattern recognition, with many applications in analytical chemistry, and embraces two main families: discriminant and class-modelling methods. The first strategy is appropriate when at least two classes are meaningfully defined in the problem under study, while the second is the right choice when the focus is on a single class. For this reason, class-modelling methods are also referred to as one-class classifiers. Although most issues in the food analytical field would be properly addressed by class-modelling strategies, the use of such techniques is rather limited and, in many cases, discriminant methods are forced into use for one-class problems, introducing a bias in the outcomes. Key aspects related to the development, optimisation and validation of suitable class models for the characterisation of food products are critically analysed and discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
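
    To make the one-class idea concrete, the sketch below fits a class model to training samples of a single, hypothetical food class and then accepts or rejects new samples. A one-class SVM stands in here for the class-modelling methods discussed in the tutorial, and all data are synthetic.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Hypothetical training set: measurements of authentic samples of the
# single class of interest (say, a protected-origin food product).
X_target = rng.normal(loc=[5.0, 1.2], scale=[0.3, 0.1], size=(100, 2))

# Fit a model of the target class only; no second class is ever defined.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_target)

# New samples are accepted (+1) or rejected (-1) by the class model,
# rather than being forced into one of two classes as a discriminant
# method would do.
X_new = np.array([[5.1, 1.25],   # plausibly authentic
                  [7.0, 2.00]])  # clearly outside the modelled class
print(model.predict(X_new))      # e.g. [ 1 -1]
```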

  4. Analytic score distributions for a spatially continuous tridirectional Monte Carlo transport problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booth, T.E.

    1996-01-01

    The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate sampling of large scores from the score distribution's tail. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. Here, the analytic score distribution for the exponential transform applied to a simple, spatially continuous Monte Carlo transport problem is provided. Anisotropic scattering and implicit capture are included in the theory. In large part, the analytic score distributions that are derived provide the basis for the ten new statistical quality checks in MCNP.
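
    The exponential transform itself is easy to illustrate outside the paper's setting. In the minimal slab-transmission sketch below, flight distances are sampled from a stretched exponential and each transmission event is scored with the corresponding likelihood-ratio weight; the spread of the resulting scores is exactly the kind of score distribution whose tail behaviour governs the reliability of the error estimate. Parameter values are arbitrary, and the paper's anisotropic scattering and implicit capture are not included.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, thickness, p = 1.0, 5.0, 0.7   # total cross section, slab depth, transform parameter
sigma_star = sigma * (1.0 - p)        # stretched (biased) cross section

n = 100_000
x = rng.exponential(1.0 / sigma_star, n)                       # biased flight distances
w = (sigma / sigma_star) * np.exp(-(sigma - sigma_star) * x)   # likelihood-ratio weights
scores = np.where(x > thickness, w, 0.0)                       # score transmission events

print("estimate:", scores.mean())                 # ~ exp(-sigma * thickness)
print("analog answer:", np.exp(-sigma * thickness))
print("largest scores:", np.sort(scores)[-3:])    # tail of the score distribution
```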

  5. Pre-analytical issues in the haemostasis laboratory: guidance for the clinical laboratories.

    PubMed

    Magnette, A; Chatelain, M; Chatelain, B; Ten Cate, H; Mullier, F

    2016-01-01

    Ensuring quality has become a daily requirement in laboratories. In haemostasis, even more than in other disciplines of laboratory medicine, quality is determined by a pre-analytical step that encompasses all procedures, starting with the formulation of the medical question and including patient preparation, sample collection, handling, transportation, processing, and storage until the time of analysis. This step, which is based on a variety of manual activities, is the most vulnerable part of the total testing process: it is a major determinant of the reliability and validity of results in haemostasis and constitutes the most important source of erroneous or uninterpretable results. Pre-analytical errors may occur throughout the testing process and arise from unsuitable, inappropriate or wrongly handled procedures. Problems may arise during the collection of blood specimens, such as misidentification of the sample, use of inadequate devices or needles, incorrect order of draw, prolonged tourniquet placement, unsuccessful attempts to locate the vein, incorrect use of additive tubes, collection of samples unsuitable in quality or quantity, and inappropriate mixing of the sample. Other factors can alter the result of a sample constituent after collection, during transportation, preparation and storage. Laboratory errors can often have serious adverse consequences, and a lack of standardized procedures for sample collection accounts for most of the errors encountered within the total testing process. These errors can also have clinical consequences and a significant impact on patient care, especially for specialized tests, as these are often considered "diagnostic". Controlling pre-analytical variables is critical, since they have a direct influence on the quality of results and on their clinical reliability. Accurate standardization of the pre-analytical phase is of pivotal importance for achieving reliable results of coagulation tests and should reduce the effects of influencing factors. This review summarizes the most important recommendations regarding pre-analytical factors for coagulation testing and should serve as a tool to increase awareness of their importance.

  6. Analytical solutions for one-, two-, and three-dimensional solute transport in ground-water systems with uniform flow

    USGS Publications Warehouse

    Wexler, Eliezer J.

    1992-01-01

    Analytical solutions to the advective-dispersive solute-transport equation are useful in predicting the fate of solutes in ground water. Analytical solutions compiled from available literature or derived by the author are presented for a variety of boundary condition types and solute-source configurations in one-, two-, and three-dimensional systems having uniform ground-water flow. A set of user-oriented computer programs was created to evaluate these solutions and to display the results in tabular and computer-graphics format. These programs incorporate many features that enhance their accuracy, ease of use, and versatility. Documentation for the programs describes their operation and required input data, and presents the results of sample problems. Derivations of selected solutions, source codes for the computer programs, and samples of program input and output also are included.
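
    One of the best-known solutions of this kind, the Ogata-Banks solution for a continuous source at the inlet of a semi-infinite one-dimensional column with uniform flow, conveys the flavor of what the compiled programs evaluate. The sketch below uses illustrative parameter values and is not the report's own code.

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0=1.0):
    # 1D advective-dispersive transport: continuous source c0 at x = 0,
    # initially solute-free semi-infinite column, uniform velocity v.
    s = 2.0 * np.sqrt(D * t)
    return 0.5 * c0 * (erfc((x - v * t) / s)
                       + np.exp(v * x / D) * erfc((x + v * t) / s))

# Hypothetical parameters: v = 0.5 m/d, D = 0.1 m^2/d, after 10 days
x = np.linspace(0.0, 10.0, 6)
print(ogata_banks(x, t=10.0, v=0.5, D=0.1))  # breakthrough front near x = v*t = 5 m
```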

  7. Rapid and precise determination of ATP using a modified photometer

    USGS Publications Warehouse

    Shultz, David J.; Stephens, Doyle W.

    1980-01-01

    An inexpensive delay timer was designed to modify a commercially available ATP photometer, allowing a disposable-tip pipette to be used for injecting either enzyme or sample into the reaction cuvette. The disposable-tip pipette is as precise and accurate as a fixed-needle syringe, but eliminates the problem of sample contamination and decreases analytical time. (USGS)

  8. Computer-Based Mathematics Instructions for Engineering Students

    NASA Technical Reports Server (NTRS)

    Khan, Mustaq A.; Wall, Curtiss E.

    1996-01-01

    Almost every engineering course involves mathematics in one form or another. The analytical process of developing mathematical models is very important for engineering students. However, the computational process involved in the solution of some mathematical problems may be very tedious and time consuming. There is a significant amount of mathematical software such as Mathematica, Mathcad, and Maple designed to aid in the solution of these instructional problems. The use of these packages in classroom teaching can greatly enhance understanding, and save time. Integration of computer technology in mathematics classes, without de-emphasizing the traditional analytical aspects of teaching, has proven very successful and is becoming almost essential. Sample computer laboratory modules are developed for presentation in the classroom setting. This is accomplished through the use of overhead projectors linked to graphing calculators and computers. Model problems are carefully selected from different areas.

  9. Direct analysis of six antibiotics in wastewater samples using rapid high-performance liquid chromatography coupled with diode array detector: a chemometric study towards green analytical chemistry.

    PubMed

    Vosough, Maryam; Rashvand, Masoumeh; Esfahani, Hadi M; Kargosha, Kazem; Salemi, Amir

    2015-04-01

    In this work, a rapid HPLC-DAD method has been developed for the analysis of six antibiotics (amoxicillin, metronidazole, sulfamethoxazole, ofloxacin, sulfadiazine and sulfamerazine) in sewage treatment plant influent and effluent samples. Decreasing the chromatographic run time to less than 4 min, as well as lowering the cost per analysis, was achieved through direct injection of the samples into the HPLC system followed by chemometric analysis. The problem of incomplete separation of the analytes from each other and/or from the matrix ingredients was resolved a posteriori. The performance of MCR-ALS and U-PLS/RBL, as second-order algorithms, was studied, and comparable results were obtained from the application of these modeling methods. It was demonstrated that the proposed methods could serve as green analytical strategies for the detection and quantification of the targeted pollutants in wastewater samples while avoiding more complicated, high-cost instrumentation. Copyright © 2014 Elsevier B.V. All rights reserved.
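
    For readers unfamiliar with second-order algorithms such as MCR-ALS, the sketch below shows the core alternating-least-squares iteration on synthetic two-component mixture spectra. It is a bare-bones illustration, with non-negativity imposed by simple clipping and no closure or selectivity constraints, not the authors' implementation; U-PLS/RBL is not shown.

```python
import numpy as np

def mcr_als(D, S0, n_iter=200):
    # Minimal MCR-ALS sketch: alternately solve D ~ C @ S.T for C and S,
    # enforcing non-negativity by clipping after each least-squares step.
    S = S0.copy()
    for _ in range(n_iter):
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
    return C, S

# Hypothetical two-component mixture spectra (Gaussian bands plus noise)
rng = np.random.default_rng(2)
wl = np.linspace(0, 1, 120)
S_true = np.stack([np.exp(-(wl - 0.35)**2 / 0.005),
                   np.exp(-(wl - 0.65)**2 / 0.005)], axis=1)  # 120 x 2 spectra
C_true = rng.uniform(0, 1, size=(30, 2))                      # 30 mixtures
D = C_true @ S_true.T + rng.normal(0, 0.01, (30, 120))

C_est, S_est = mcr_als(D, S0=S_true + rng.normal(0, 0.05, S_true.shape))
print(np.corrcoef(S_est[:, 0], S_true[:, 0])[0, 1])  # ~1 (up to the usual ambiguities)
```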

  10. Bayesian inference based on stationary Fokker-Planck sampling.

    PubMed

    Berrones, Arturo

    2010-06-01

    A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the stationary Fokker-Planck (SFP) approach to sample from the posterior density. Stationary Fokker-Planck sampling generalizes the Gibbs sampler algorithm for arbitrary and unknown conditional densities. By the SFP procedure, approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which is convergent to the full joint posterior. Using the analytical marginals, efficient learning methods in the context of artificial neural networks are outlined. Offline and incremental Bayesian inference and maximum likelihood estimation from the posterior are performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies for the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump large low-probability regions without the need for careful tuning of any step-size parameter. In fact, the SFP method requires only a small set of meaningful parameters that can be selected following clear, problem-independent guidelines. The computational cost of SFP, measured in terms of loss function evaluations, grows linearly with the given model's dimension.
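
    Since SFP is presented as a generalization of the Gibbs sampler to unknown conditionals, the baseline case is worth making concrete. Below is a minimal Gibbs sampler for a target whose full conditionals are known analytically, a zero-mean bivariate normal; the SFP step of approximating unknown conditionals is not reproduced.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples=20_000, seed=3):
    # Minimal Gibbs sampler for a zero-mean bivariate normal with
    # correlation rho, where both full conditionals are known analytically:
    # x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2).
    rng = np.random.default_rng(seed)
    sd = np.sqrt(1.0 - rho**2)
    x = y = 0.0
    out = np.empty((n_samples, 2))
    for i in range(n_samples):
        x = rng.normal(rho * y, sd)
        y = rng.normal(rho * x, sd)
        out[i] = x, y
    return out

draws = gibbs_bivariate_normal(rho=0.8)
print(np.corrcoef(draws[1000:].T))  # off-diagonal ~0.8 after burn-in
```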

  11. Solid sorbent air sampling and analytical procedure for methyl-, dimethyl-, ethyl-, and diethylamine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elskamp, C.J.; Schultz, G.R.

    1986-01-01

    A sampling and analytical procedure for methyl-, dimethyl-, ethyl-, and diethylamine was developed in order to avoid problems typically encountered in the sampling and analysis of low molecular weight aliphatic amines. Samples are collected with adsorbent tubes containing Amberlite XAD-7 resin coated with the derivatizing reagent NBD chloride (7-chloro-4-nitrobenzo-2-oxa-1,3-diazole). Analysis is performed by high performance liquid chromatography with the use of a fluorescence and/or UV/visible detector. All four amines can be monitored simultaneously, and neither collection nor storage is affected by humidity. Samples are stable at room temperature for at least two weeks. The methodology has been tested for each of the four amines at sample loadings equivalent to air concentration ranges of 0.5 to 30 ppm for a sample volume of 10 liters. The method shows promise for determining other airborne primary and secondary low molecular weight aliphatic amines.

  12. Implicit finite difference methods on composite grids

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne

    1987-01-01

    Techniques for eliminating time lags in the implicit finite-difference solution of partial differential equations are investigated analytically, with a focus on transient fluid dynamics problems on overlapping multicomponent grids. The fundamental principles of the approach are explained, and the method is shown to be applicable to both rectangular and curvilinear grids. Numerical results for sample problems are compared with exact solutions in graphs, and good agreement is demonstrated.
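
    As background for readers new to implicit schemes, the sketch below advances the one-dimensional heat equation with backward-Euler steps via a tridiagonal solve. It is a generic single-grid illustration, not the composite-grid method investigated in the report.

```python
import numpy as np
from scipy.linalg import solve_banded

def backward_euler_step(u, dt, dx, nu):
    # One implicit (backward Euler) step of u_t = nu * u_xx on a 1D grid
    # with fixed (Dirichlet) boundary values, via a tridiagonal solve.
    n = len(u)
    r = nu * dt / dx**2
    ab = np.zeros((3, n))        # banded storage: super-, main, subdiagonal
    ab[0, 1:] = -r               # superdiagonal
    ab[1, :] = 1 + 2 * r         # main diagonal
    ab[2, :-1] = -r              # subdiagonal
    ab[1, 0] = ab[1, -1] = 1.0   # pin boundary rows to the old values
    ab[0, 1] = ab[2, -2] = 0.0
    return solve_banded((1, 1), ab, u)

# Decay of a sine mode; implicit stepping is stable even for large dt
x = np.linspace(0.0, 1.0, 101)
u = np.sin(np.pi * x)
for _ in range(10):
    u = backward_euler_step(u, dt=0.01, dx=x[1] - x[0], nu=1.0)
print(u.max())  # close to exp(-pi^2 * 0.1) ~ 0.373 (backward Euler is first order in dt)
```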

  13. Tracking Matrix Effects in the Analysis of DNA Adducts of Polycyclic Aromatic Hydrocarbons

    PubMed Central

    Klaene, Joshua J.; Flarakos, Caroline; Glick, James; Barret, Jennifer T.; Zarbl, Helmut; Vouros, Paul

    2015-01-01

    LC-MS using electrospray ionization is currently the method of choice in bio-organic analysis, covering a wide range of applications in a broad spectrum of biological media. The technique is noted for its high sensitivity, but one major limitation that hinders achievement of its optimal sensitivity is signal suppression due to matrix interferences introduced by co-extracted compounds during the sample preparation procedure. The analysis of DNA adducts of common environmental carcinogens is particularly sensitive to such matrix effects, as sample preparation is a multistep process which involves “contamination” of the sample through the addition of enzymes and other reagents for digestion of the DNA in order to isolate the analyte(s). This problem is further exacerbated by the need to reach low levels of quantitation (LOQ at the ppb level) while also working with limited (2-5 μg) quantities of sample. We report here on the systematic investigation of ion signal suppression contributed by each individual step involved in the sample preparation associated with the analysis of DNA adducts of polycyclic aromatic hydrocarbons (PAHs), using as the model analyte dG-BaP, the deoxyguanosine adduct of benzo[a]pyrene (BaP). The individual matrix contribution of each one of these sources to analyte signal was systematically addressed, as were any interactive effects. The information was used to develop a validated analytical protocol for the target biomarker at levels typically encountered in vivo using as little as 2 μg of DNA, and applied to a dose-response study using a metabolically competent cell line. PMID:26607319
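
    Matrix effects of the kind investigated here are commonly quantified with the post-extraction-spike scheme of Matuszewski and co-workers, which compares peak areas of a neat standard, a post-extraction spiked matrix, and a pre-extraction spiked matrix. The sketch below applies that standard scheme to hypothetical dG-BaP peak areas; it is not the authors' exact protocol.

```python
def matrix_effect(area_neat, area_post_spike, area_pre_spike):
    # Matuszewski-style figures of merit from peak areas of a neat
    # standard (A), a post-extraction spiked matrix (B), and a
    # pre-extraction spiked matrix (C), all at the same nominal amount.
    me = 100.0 * area_post_spike / area_neat        # matrix effect (<100 means suppression)
    re = 100.0 * area_pre_spike / area_post_spike   # recovery of the extraction itself
    pe = 100.0 * area_pre_spike / area_neat         # overall process efficiency
    return me, re, pe

# Hypothetical peak areas from the three experiments:
print(matrix_effect(area_neat=1.00e6, area_post_spike=6.2e5, area_pre_spike=5.0e5))
# -> (62.0, 80.6..., 50.0): 38% suppression, ~81% recovery, 50% overall
```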

  14. Simplex-stochastic collocation method with improved scalability

    NASA Astrophysics Data System (ADS)

    Edeling, W. N.; Dwight, R. P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with more than about five dimensions. The main purpose of this paper is to identify bottlenecks and to improve upon this poor scalability. To do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method into the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.
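
    The paper's analytical map is not reproduced here, but the standard construction for uniform simplex sampling conveys the idea: barycentric weights drawn from a flat Dirichlet distribution are uniformly distributed over the simplex and can then be mapped onto arbitrary vertices. A minimal sketch:

```python
import numpy as np

def sample_simplex(vertices, n, seed=4):
    # Uniform samples inside the simplex spanned by `vertices`
    # ((d+1) x d array): draw barycentric weights from Dirichlet(1,...,1),
    # the flat (uniform) distribution over the simplex, then map them.
    rng = np.random.default_rng(seed)
    w = rng.dirichlet(np.ones(len(vertices)), size=n)  # n x (d+1) weights
    return w @ vertices                                # n x d points

# Hypothetical 2D example: a triangle in the stochastic input space
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(sample_simplex(tri, 5))  # every point has x >= 0, y >= 0, x + y <= 1
```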

  15. Ring-oven based preconcentration technique for microanalysis: simultaneous determination of Na, Fe, and Cu in fuel ethanol by laser induced breakdown spectroscopy.

    PubMed

    Cortez, Juliana; Pasquini, Celio

    2013-02-05

    The ring-oven technique, originally applied to classical qualitative analysis from the 1950s to the 1970s, is revisited for use in a simple though highly efficient and green procedure for analyte preconcentration prior to determination by the microanalytical techniques presently available. The proposed preconcentration technique is based on the dropwise delivery of a small volume of sample to a filter paper substrate, assisted by a flow-injection-like system. The filter paper is maintained in a small circular heated oven (the ring oven). Drops of the sample solution diffuse by capillarity from the center towards a circular area of the paper substrate. After the total sample volume has been delivered, a ring with a sharp (ca. 350 μm) circular contour, about 2.0 cm in diameter, is formed on the paper and contains most of the analytes originally present in the sample volume. Preconcentration factors can reach 250-fold (on a m/m basis) for a sample volume as small as 600 μL. The proposed system and procedure were evaluated for concentrating Na, Fe, and Cu in fuel ethanol, followed by simultaneous direct determination of these species in the ring contour, employing the microanalytical technique of laser-induced breakdown spectroscopy (LIBS). Detection limits of 0.7, 0.4, and 0.3 μg mL⁻¹ and mean recoveries of (109 ± 13)%, (92 ± 18)%, and (98 ± 12)% were obtained for Na, Fe, and Cu, respectively, in fuel ethanol. The technique, coupled to modern microanalytical and multianalyte techniques, can be expected to find application in several analytical problems requiring analyte preconcentration and/or sample stabilization.

  16. Maybe Small Is Too Small a Term: Introduction to Advancing Small Sample Prevention Science.

    PubMed

    Fok, Carlotta Ching Ting; Henry, David; Allen, James

    2015-10-01

    Prevention research addressing health disparities often involves work with small population groups experiencing such disparities. The goals of this special section are to (1) address the question of what constitutes a small sample; (2) identify some of the key research design and analytic issues that arise in prevention research with small samples; (3) develop applied, problem-oriented, and methodologically innovative solutions to these design and analytic issues; and (4) evaluate the potential role of these innovative solutions in describing phenomena, testing theory, and evaluating interventions in prevention research. Through these efforts, we hope to promote broader application of these methodological innovations. We also seek, whenever possible, to explore their implications for more general problems that appear in research with small samples but concern all areas of prevention research. This special section comprises two parts: the first aims to provide input for researchers at the design phase, while the second focuses on analysis. Each article describes an innovative solution to one or more challenges posed by the analysis of small samples, with special emphasis on testing for intervention effects in prevention research. A concluding article summarizes some of their broader implications, along with conclusions regarding future directions in research with small samples in prevention science. Finally, a commentary provides the perspective of the federal agencies that sponsored the conference that gave rise to this special section.

  17. Analytical solutions for one-, two-, and three-dimensional solute transport in ground-water systems with uniform flow

    USGS Publications Warehouse

    Wexler, Eliezer J.

    1989-01-01

    Analytical solutions to the advective-dispersive solute-transport equation are useful in predicting the fate of solutes in ground water. Analytical solutions compiled from available literature or derived by the author are presented in this report for a variety of boundary condition types and solute-source configurations in one-, two-, and three-dimensional systems with uniform ground-water flow. A set of user-oriented computer programs was created to evaluate these solutions and to display the results in tabular and computer-graphics format. These programs incorporate many features that enhance their accuracy, ease of use, and versatility. Documentation for the programs describes their operation and required input data, and presents the results of sample problems. Derivations of select solutions, source codes for the computer programs, and samples of program input and output also are included.

  18. Preanalytics in lung cancer.

    PubMed

    Warth, Arne; Muley, Thomas; Meister, Michael; Weichert, Wilko

    2015-01-01

    Preanalytic sampling techniques and the preparation of tissue specimens strongly influence analytical results in lung tissue diagnostics, both on the morphological and on the molecular level. However, in contrast to analytics, where tremendous achievements in the last decade have led to a whole new portfolio of test methods, developments in preanalytics have been minimal. This is specifically unfortunate in lung cancer, where usually only small amounts of tissue are at hand and optimization of all processing steps is mandatory in order to increase the diagnostic yield. In the following, we provide a comprehensive overview of some aspects of preanalytics in lung cancer, from the method of sampling through tissue processing to its impact on analytical test results. We specifically discuss the role of preanalytics in novel technologies like next-generation sequencing and in state-of-the-art cytology preparations. In addition, we point out specific problems in preanalytics which hamper further developments in the field of lung tissue diagnostics.

  19. Analysis of plant gums and saccharide materials in paint samples: comparison of GC-MS analytical procedures and databases

    PubMed Central

    2012-01-01

    Background: Saccharide materials have been used for centuries as binding media, to paint, write and illuminate manuscripts and to apply metallic leaf decorations. Although the technical literature often reports on the use of plant gums as binders, actually several other saccharide materials can be encountered in paint samples, not only as major binders, but also as additives. In the literature, there are a variety of analytical procedures that utilize GC-MS to characterize saccharide materials in paint samples, however the chromatographic profiles are often extremely different and it is impossible to compare them and reliably identify the paint binder. Results: This paper presents a comparison between two different analytical procedures based on GC-MS for the analysis of saccharide materials in works-of-art. The research presented here evaluates the influence of the analytical procedure used, and how it impacts the sugar profiles obtained from the analysis of paint samples that contain saccharide materials. The procedures have been developed, optimised and systematically used to characterise plant gums at the Getty Conservation Institute in Los Angeles, USA (GCI) and the Department of Chemistry and Industrial Chemistry of the University of Pisa, Italy (DCCI). The main steps of the analytical procedures and their optimisation are discussed. Conclusions: The results presented highlight that the two methods give comparable sugar profiles, whether the samples analysed are simple raw materials, pigmented and unpigmented paint replicas, or paint samples collected from centuries-old polychrome art objects. A common database of sugar profiles of reference materials commonly found in paint samples was thus compiled. The database presents data also from those materials that only contain a minor saccharide fraction. This database highlights how many sources of saccharides can be found in a paint sample, representing an important step forward in the problem of identifying polysaccharide binders in paint samples. PMID:23050842

  20. Analysis of plant gums and saccharide materials in paint samples: comparison of GC-MS analytical procedures and databases.

    PubMed

    Lluveras-Tenorio, Anna; Mazurek, Joy; Restivo, Annalaura; Colombini, Maria Perla; Bonaduce, Ilaria

    2012-10-10

    Saccharide materials have been used for centuries as binding media, to paint, write and illuminate manuscripts and to apply metallic leaf decorations. Although the technical literature often reports on the use of plant gums as binders, actually several other saccharide materials can be encountered in paint samples, not only as major binders, but also as additives. In the literature, there are a variety of analytical procedures that utilize GC-MS to characterize saccharide materials in paint samples, however the chromatographic profiles are often extremely different and it is impossible to compare them and reliably identify the paint binder. This paper presents a comparison between two different analytical procedures based on GC-MS for the analysis of saccharide materials in works-of-art. The research presented here evaluates the influence of the analytical procedure used, and how it impacts the sugar profiles obtained from the analysis of paint samples that contain saccharide materials. The procedures have been developed, optimised and systematically used to characterise plant gums at the Getty Conservation Institute in Los Angeles, USA (GCI) and the Department of Chemistry and Industrial Chemistry of the University of Pisa, Italy (DCCI). The main steps of the analytical procedures and their optimisation are discussed. The results presented highlight that the two methods give comparable sugar profiles, whether the samples analysed are simple raw materials, pigmented and unpigmented paint replicas, or paint samples collected from centuries-old polychrome art objects. A common database of sugar profiles of reference materials commonly found in paint samples was thus compiled. The database presents data also from those materials that only contain a minor saccharide fraction. This database highlights how many sources of saccharides can be found in a paint sample, representing an important step forward in the problem of identifying polysaccharide binders in paint samples.

  1. Considerations in As analysis and speciation

    USGS Publications Warehouse

    Edwards, M.; Patel, S.; McNeil, L.; Chen, H.W.; Frey, M.; Eaton, A.D.; Antweiler, Ronald C.; Taylor, Howard E.

    1998-01-01

    This article summarizes recent experiences in arsenic (As) quantification, preservation, and speciation developed during AWWA Research Foundation (AWWARF) and Water Industry Technical Action Fund (WITAF) projects. The goal of this article is to alert analysts and decision-makers to potential problems in As analysis and speciation, because there appear to be several unresolved problems with routine analytical approaches. In true split drinking-water samples, As was quantified by three accepted analytical methods in three laboratories: graphite furnace atomic absorption spectrometry (GFAAS), inductively coupled plasma mass spectrometry (ICP-MS), and hydride generation inductively coupled plasma atomic emission spectrometry (HG-ICP-AES). Experimental findings are organized into sections on As analysis, particulate As in water supplies, and examination of As speciation methods.

  2. Affinity-based biosensors as promising tools for gene doping detection.

    PubMed

    Minunni, Maria; Scarano, Simona; Mascini, Marco

    2008-05-01

    Innovative bioanalytical approaches can be foreseen as interesting means for solving relevant emerging problems in anti-doping control. Sport authorities fear that the newer form of doping, so-called gene doping, based on a misuse of gene therapy, will be undetectable and thus much less preventable. The World Anti-Doping Agency has already asked scientists to assist in finding ways to prevent and detect this newest kind of doping. In this Opinion article we discuss the main aspects of gene doping, from the putative target analytes to suitable sampling strategies. Moreover, we discuss the potential application of affinity sensing in this field, which so far has been successfully applied to a variety of analytical problems, from clinical diagnostics to food and environmental analysis.

  3. Paraxial light distribution in the focal region of a lens: a comparison of several analytical solutions and a numerical result.

    PubMed

    Wu, Yang; Kelly, Damien P

    2014-12-12

    The distribution of the complex field in the focal region of a lens is a classical optical diffraction problem. Today, it remains of significant theoretical importance for understanding the properties of imaging systems. In the paraxial regime, it is possible to find analytical solutions in the neighborhood of the focus, when a plane wave is incident on a focusing lens whose finite extent is limited by a circular aperture. For example, in Born and Wolf's treatment of this problem, two different, but mathematically equivalent analytical solutions, are presented that describe the 3D field distribution using infinite sums of [Formula: see text] and [Formula: see text] type Lommel functions. An alternative solution expresses the distribution in terms of Zernike polynomials, and was presented by Nijboer in 1947. More recently, Cao derived an alternative analytical solution by expanding the Fresnel kernel using a Taylor series expansion. In practical calculations, however, only a finite number of terms from these infinite series expansions is actually used to calculate the distribution in the focal region. In this manuscript, we compare and contrast each of these different solutions to a numerically calculated result, paying particular attention to how quickly each solution converges for a range of different spatial locations behind the focusing lens. We also examine the time taken to calculate each of the analytical solutions. The numerical solution is calculated in a polar coordinate system and is semi-analytic. The integration over the angle is solved analytically, while the radial coordinate is sampled with a sampling interval of [Formula: see text] and then numerically integrated. This produces an infinite set of replicas in the diffraction plane, that are located in circular rings centered at the optical axis and each with radii given by [Formula: see text], where [Formula: see text] is the replica order. These circular replicas are shown to be fundamentally different from the replicas that arise in a Cartesian coordinate system.
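
    To make the convergence question concrete, the sketch below evaluates the classical Lommel-function series for the normalised focal intensity, I(u, v)/I0 = (2/u)^2 [U1(u, v)^2 + U2(u, v)^2], at an arbitrary sample point for several truncation lengths. Here u and v are the usual dimensionless axial and radial optical coordinates; this illustrates the behaviour of the truncated series, not the manuscript's timing comparison.

```python
import numpy as np
from scipy.special import jv

def lommel_U(n, u, v, terms):
    # Lommel function U_n(u, v) = sum_s (-1)^s (u/v)^(n+2s) J_(n+2s)(v),
    # truncated after `terms` terms (the series form used by Born and Wolf).
    s = np.arange(terms)
    return np.sum((-1.0)**s * (u / v)**(n + 2 * s) * jv(n + 2 * s, v))

def focal_intensity(u, v, terms=50):
    # Paraxial intensity near focus, normalised to the geometric focus:
    # I(u, v)/I0 = (2/u)^2 * (U1(u, v)^2 + U2(u, v)^2)
    return (2.0 / u)**2 * (lommel_U(1, u, v, terms)**2
                           + lommel_U(2, u, v, terms)**2)

# Truncation study in the spirit of the paper: how fast the partial sums
# settle at a sample point (u, v) behind the lens.
for terms in (2, 5, 10, 50):
    print(terms, focal_intensity(3.0, 2.0, terms))
```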

  4. Paraxial light distribution in the focal region of a lens: a comparison of several analytical solutions and a numerical result

    NASA Astrophysics Data System (ADS)

    Wu, Yang; Kelly, Damien P.

    2014-12-01

    The distribution of the complex field in the focal region of a lens is a classical optical diffraction problem. Today, it remains of significant theoretical importance for understanding the properties of imaging systems. In the paraxial regime, it is possible to find analytical solutions in the neighborhood of the focus, when a plane wave is incident on a focusing lens whose finite extent is limited by a circular aperture. For example, in Born and Wolf's treatment of this problem, two different, but mathematically equivalent analytical solutions, are presented that describe the 3D field distribution using infinite sums of ? and ? type Lommel functions. An alternative solution expresses the distribution in terms of Zernike polynomials, and was presented by Nijboer in 1947. More recently, Cao derived an alternative analytical solution by expanding the Fresnel kernel using a Taylor series expansion. In practical calculations, however, only a finite number of terms from these infinite series expansions is actually used to calculate the distribution in the focal region. In this manuscript, we compare and contrast each of these different solutions to a numerically calculated result, paying particular attention to how quickly each solution converges for a range of different spatial locations behind the focusing lens. We also examine the time taken to calculate each of the analytical solutions. The numerical solution is calculated in a polar coordinate system and is semi-analytic. The integration over the angle is solved analytically, while the radial coordinate is sampled with a sampling interval of ? and then numerically integrated. This produces an infinite set of replicas in the diffraction plane, that are located in circular rings centered at the optical axis and each with radii given by ?, where ? is the replica order. These circular replicas are shown to be fundamentally different from the replicas that arise in a Cartesian coordinate system.

  5. Airborne chemistry: acoustic levitation in chemical analysis.

    PubMed

    Santesson, Sabina; Nilsson, Staffan

    2004-04-01

    This review, with 60 references, describes a unique path to miniaturisation: the use of acoustic levitation in analytical and bioanalytical chemistry applications. Levitation of small volumes of sample by means of a levitation technique can be used as a way to avoid solid walls around the sample, thus circumventing the main problem of miniaturisation, the unfavourable surface-to-volume ratio. Different techniques for sample levitation have been developed and improved. Of the levitation techniques described, acoustic or ultrasonic levitation fulfils all requirements for analytical chemistry applications. This technique has previously been used to study properties of molten materials and the equilibrium shape and stability of liquid drops. Temperature and mass transfer in levitated drops have also been described, as have crystallisation and microgravity applications. The airborne analytical system described here is equipped with different and exchangeable remote detection systems. The levitated drops are normally in the 100 nL to 2 μL volume range, and additions to the levitated drop can be made in the pL volume range. The use of levitated drops in analytical and bioanalytical chemistry offers several benefits. Several remote detection systems are compatible with acoustic levitation, including fluorescence imaging detection, right-angle light scattering, Raman spectroscopy, and X-ray diffraction. Applications include liquid/liquid extractions, solvent exchange, analyte enrichment, single-cell analysis, cell-cell communication studies, precipitation screening of proteins to establish nucleation conditions, and crystallisation of proteins and pharmaceuticals.

  6. Solving a Mock Arsenic-Poisoning Case Using Atomic Spectroscopy

    NASA Astrophysics Data System (ADS)

    Tarr, Matthew A.

    2001-01-01

    A new upper-level undergraduate atomic spectroscopy laboratory procedure has been developed that presents a realistic problem to students and asks them to assist in solving it. Students are given arsenic-laced soda samples from a mock crime scene. From these samples, they are to gather evidence to help prosecute a murder suspect. The samples are analyzed by inductively coupled plasma atomic emission spectroscopy or by atomic absorption spectroscopy to determine the content of specific metal impurities. By statistical comparison of the samples' composition, the students determine whether the soda samples can be linked to arsenic found in the suspect's home. As much as possible, the procedures and interpretations are developed by the students. Particular emphasis is placed on evaluating the limitations and capabilities of the analytical method with respect to the demands of the problem.

  7. [Amanitine determination as an example of peptide analysis in the biological samples with HPLC-MS technique].

    PubMed

    Janus, Tomasz; Jasionowicz, Ewa; Potocka-Banaś, Barbara; Borowiak, Krzysztof

    Routine toxicological analysis is mostly focused on the identification of inorganic and organic, chemically diverse compounds, generally of low mass, usually not greater than 500-600 Da. Peptide compounds with masses above 900 Da form a distinct analytical group. Several dozen of them are highly toxic substances well known in toxicological practice, for example mushroom toxins and animal venoms. In this paper the authors use the example of alpha-amanitin to explain the analytical problems, and some original solutions, encountered in identifying peptides in urine samples with a universal LC-MS/MS procedure. The analyzed material comprised urine samples collected from patients with suspected mushroom intoxication, routinely diagnosed by amanitin determination. Ultrafiltration with centrifugal filter tubes (3 kDa molecular-mass cutoff) was used. The filtrate was injected directly onto the chromatographic column and analyzed with a tandem mass spectrometric detector (MS/MS). The separation of peptides, as organic amphoteric compounds, from biological material by the SPE technique is well known but requires dedicated, specific columns. The present work showed that amanitin can be isolated effectively from urine with the fast and simple ultrafiltration technique, and that the procedure offers satisfactory detection sensitivity and eliminates the influence of the biological matrix on analytical results. Another problem which had to be solved was the uncharacteristic fragmentation of peptides in the MS/MS procedure, which yields non-selective chromatograms. Higher collision energies can be used in the analytical procedure, which results in more characteristic mass spectra, although at lower sensitivity. The ultrafiltration technique as a sample preparation procedure is effective for the isolation of amanitin from the biological matrix. Monitoring a selected mass corresponding to the transition with the loss of a water molecule offers satisfactory sensitivity of determination.

  8. Simultaneous determination of glucose, triglycerides, urea, cholesterol, albumin and total protein in human plasma by Fourier transform infrared spectroscopy: direct clinical biochemistry without reagents.

    PubMed

    Jessen, Torben E; Höskuldsson, Agnar T; Bjerrum, Poul J; Verder, Henrik; Sørensen, Lars; Bratholm, Palle S; Christensen, Bo; Jensen, Lene S; Jensen, Maria A B

    2014-09-01

    Direct measurement of chemical constituents in complex biologic matrices without the use of analyte-specific reagents could be a step toward the simplification of clinical biochemistry. Problems related to reagents, such as production errors, improper handling, and lot-to-lot variations, would be eliminated, as would errors occurring during assay execution. We describe and validate a reagent-free method for direct measurement of six analytes in human plasma based on Fourier-transform infrared spectroscopy (FTIR). Blood plasma is analyzed without any sample preparation. The FTIR spectrum of the raw plasma is recorded in a sampling cuvette specially designed for measurement of aqueous solutions. For each analyte, a mathematical calibration process is performed by a stepwise selection of wavelengths giving the optimal least-squares correlation between the measured FTIR signal and the analyte concentration measured by conventional clinical reference methods. The developed calibration algorithms are subsequently evaluated for their capability to predict the concentrations of the six analytes in blinded patient samples. The correlations between the six FTIR methods and the corresponding reference methods were 0.87
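
    The calibration step described, stepwise selection of wavelengths followed by least squares against reference concentrations, can be sketched as a greedy forward selection. The code below is a schematic stand-in under that interpretation, using synthetic spectra with two planted informative wavelengths; the authors' actual selection criterion and validation protocol are not reproduced.

```python
import numpy as np

def forward_select_wavelengths(A, y, k):
    # Greedy forward selection: at each step, add the wavelength whose
    # inclusion (plus an intercept) most reduces the least-squares
    # residual of y against the selected columns of A.
    selected = []
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in range(A.shape[1]):
            if j in selected:
                continue
            X = np.column_stack([A[:, selected + [j]], np.ones(len(y))])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            rss = np.sum((y - X @ beta)**2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
    return selected

# Hypothetical data: absorbance spectra A (samples x wavelengths) and
# reference concentrations y from a conventional clinical method.
rng = np.random.default_rng(5)
A = rng.normal(size=(40, 200))
y = 2.0 * A[:, 17] - 1.2 * A[:, 120] + rng.normal(0, 0.05, size=40)
print(forward_select_wavelengths(A, y, k=2))  # expected: [17, 120]
```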

  9. Psychometric properties of the Spanish version of the Experiencing of Self Scale (EOSS) for assessment in Functional Analytic Psychotherapy.

    PubMed

    Valero-Aguayo, Luis; Ferro-García, Rafael; López-Bermúdez, Miguel Ángel; Selva-López de Huralde, María de los Ángeles

    2014-01-01

    The Experiencing of Self Scale (EOSS) was created to evaluate the experience of the personal self within the field of Functional Analytic Psychotherapy. This paper presents a study of the reliability and validity of the EOSS in a Spanish sample. The study sample, drawn from 24 different centres, comprised 1,040 participants aged between 18 and 75, of whom 32% were men and 68% women. The clinical sample accounted for 32.7% of participants, whereas 67.3% had no known problem. To obtain evidence of convergent validity, other questionnaires related to the self (EPQ-R, DES, RSES) were used for comparison. The EOSS showed high internal consistency (Cronbach's α = .941) and significantly high correlations with the EPQ-R Neuroticism scale and the DES Dissociation scale, while showing negative correlations with the Rosenberg Self-Esteem Scale (RSES). The EOSS revealed four principal factors: a self in close relationships, a self in casual social relationships, a self in general, and a positive self-concept. Statistically significant differences were found between the clinical and standard samples, the former showing a higher average. The EOSS had high internal consistency, showed evidence of convergent validity with similar scales, and proved useful for the assessment of people with psychological problems related to the self.

  10. Preanalytical requirements of urinalysis

    PubMed Central

    Delanghe, Joris; Speeckaert, Marijn

    2014-01-01

    Urine may be a waste product, but it contains an enormous amount of information. Well-standardized procedures for collection, transport, sample preparation and analysis should become the basis of an effective diagnostic strategy for urinalysis. As reproducibility of urinalysis has been greatly improved due to recent technological progress, preanalytical requirements of urinalysis have gained importance and have become stricter. Since the patients themselves often sample urine specimens, urinalysis is very susceptible to preanalytical issues. Various sampling methods and inappropriate specimen transport can cause important preanalytical errors. The use of preservatives may be helpful for particular analytes. Unfortunately, a universal preservative that allows a complete urinalysis does not (yet) exist. The preanalytical aspects are also of major importance for newer applications (e.g. metabolomics). The present review deals with the current preanalytical problems and requirements for the most common urinary analytes. PMID:24627718

  11. Estimating the "impact" of out-of-home placement on child well-being: approaching the problem of selection bias.

    PubMed

    Berger, Lawrence M; Bruch, Sarah K; Johnson, Elizabeth I; James, Sigrid; Rubin, David

    2009-01-01

    This study used data on 2,453 children aged 4-17 from the National Survey of Child and Adolescent Well-Being and 5 analytic methods that adjust for selection factors to estimate the impact of out-of-home placement on children's cognitive skills and behavior problems. Methods included ordinary least squares (OLS) regressions and residualized change, simple change, difference-in-difference, and fixed effects models. Models were estimated using the full sample and a matched sample generated by propensity scoring. Although results from the unmatched OLS and residualized change models suggested that out-of-home placement is associated with increased child behavior problems, estimates from models that more rigorously adjust for selection bias indicated that placement has little effect on children's cognitive skills or behavior problems.
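
    Of the adjustment strategies listed, propensity-score matching is the easiest to sketch. Below, a logistic model of placement given covariates supplies propensity scores, each treated unit is matched to the nearest-propensity control, and the matched contrast removes most of the selection bias built into the synthetic data, whose true treatment effect is zero. This illustrates the general technique, not the study's estimators.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
# Hypothetical data: covariates X, treatment indicator d (e.g. placement),
# outcome y (e.g. a behavior-problem score); selection depends on X.
n = 2000
X = rng.normal(size=(n, 3))
p_treat = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
d = rng.binomial(1, p_treat)
y = X @ np.array([1.0, 0.5, -0.3]) + 0.0 * d + rng.normal(0, 1, n)  # true effect: 0

# Naive contrast is biased by selection on X:
print("naive:", y[d == 1].mean() - y[d == 0].mean())

# Propensity-score matching: model P(d=1|X), match each treated unit to
# the nearest-propensity control, and average within-pair differences.
ps = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]
treated, controls = np.where(d == 1)[0], np.where(d == 0)[0]
nearest = controls[np.abs(ps[treated][:, None] - ps[controls]).argmin(axis=1)]
print("matched:", (y[treated] - y[nearest]).mean())  # close to 0
```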

  12. Selected analytical challenges in the determination of pharmaceuticals in drinking/marine waters and soil/sediment samples.

    PubMed

    Białk-Bielińska, Anna; Kumirska, Jolanta; Borecka, Marta; Caban, Magda; Paszkiewicz, Monika; Pazdro, Ksenia; Stepnowski, Piotr

    2016-03-20

    Recent developments and improvements in advanced instruments and analytical methodologies have made the detection of pharmaceuticals at low concentration levels in different environmental matrices possible. As a result of these advances, over the last 15 years residues of these compounds and their metabolites have been detected in different environmental compartments, and pharmaceuticals have now become recognized as so-called 'emerging' contaminants. To date, many papers have been published presenting the development of analytical methodologies for the determination of pharmaceuticals in aqueous and solid environmental samples. Many papers have also been published on the application of the new methodologies, mainly to the assessment of the environmental fate of pharmaceuticals. Although impressive improvements have undoubtedly been made, in order to fully understand the behavior of these chemicals in the environment there are still numerous methodological challenges to be overcome. The aim of this paper, therefore, is to present a review of selected recent improvements and challenges in the determination of pharmaceuticals in environmental samples. Special attention has been paid to the strategies used and the current challenges (also in terms of Green Analytical Chemistry) that exist in the analysis of these chemicals in soils, marine environments and drinking waters. There is a particular focus on the applicability of modern sorbents such as carbon nanotubes (CNTs) in sample preparation techniques, to overcome some of the problems that exist in the analysis of pharmaceuticals in different environmental samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Heparin removal by ecteola-cellulose pre-treatment enables the use of plasma samples for accurate measurement of anti-Yellow fever virus neutralizing antibodies.

    PubMed

    Campi-Azevedo, Ana Carolina; Peruhype-Magalhães, Vanessa; Coelho-Dos-Reis, Jordana Grazziela; Costa-Pereira, Christiane; Yamamura, Anna Yoshida; Lima, Sheila Maria Barbosa de; Simões, Marisol; Campos, Fernanda Magalhães Freire; de Castro Zacche Tonini, Aline; Lemos, Elenice Moreira; Brum, Ricardo Cristiano; de Noronha, Tatiana Guimarães; Freire, Marcos Silva; Maia, Maria de Lourdes Sousa; Camacho, Luiz Antônio Bastos; Rios, Maria; Chancey, Caren; Romano, Alessandro; Domingues, Carla Magda; Teixeira-Carvalho, Andréa; Martins-Filho, Olindo Assis

    2017-09-01

    Technological innovations in vaccinology have recently contributed to bring about novel insights for the vaccine-induced immune response. While the current protocols that use peripheral blood samples may provide abundant data, a range of distinct components of whole blood samples are required and the different anticoagulant systems employed may impair some properties of the biological sample and interfere with functional assays. Although the interference of heparin in functional assays for viral neutralizing antibodies such as the functional plaque-reduction neutralization test (PRNT), considered the gold-standard method to assess and monitor the protective immunity induced by the Yellow fever virus (YFV) vaccine, has been well characterized, the development of pre-analytical treatments is still required for the establishment of optimized protocols. The present study intended to optimize and evaluate the performance of pre-analytical treatment of heparin-collected blood samples with ecteola-cellulose (ECT) to provide accurate measurement of anti-YFV neutralizing antibodies, by PRNT. The study was designed in three steps, including: I. Problem statement; II. Pre-analytical steps; III. Analytical steps. Data confirmed the interference of heparin on PRNT reactivity in a dose-responsive fashion. Distinct sets of conditions for ECT pre-treatment were tested to optimize the heparin removal. The optimized protocol was pre-validated to determine the effectiveness of heparin plasma:ECT treatment to restore the PRNT titers as compared to serum samples. The validation and comparative performance was carried out by using a large range of serum vs heparin plasma:ECT 1:2 paired samples obtained from unvaccinated and 17DD-YFV primary vaccinated subjects. Altogether, the findings support the use of heparin plasma:ECT samples for accurate measurement of anti-YFV neutralizing antibodies. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Problem-based learning on quantitative analytical chemistry course

    NASA Astrophysics Data System (ADS)

    Fitri, Noor

    2017-12-01

    This research applies the problem-based learning method to a quantitative analytical chemistry course, the so-called "Analytical Chemistry II" course, especially as related to essential oil analysis. The learning outcomes of this course include understanding of the lectures, the skills needed to apply the course materials, and the ability to identify, formulate and solve chemical analysis problems. The role of study groups is quite important in improving students' learning ability and in completing independent and group tasks. Thus, students are not only aware of the basic concepts of Analytical Chemistry II, but are also able to understand and apply the analytical concepts they have studied to solve given analytical chemistry problems, and have the attitude and ability to work together to solve those problems. Based on the learning outcomes, it can be concluded that the problem-based learning method in the Analytical Chemistry II course improves students' knowledge, skills, abilities and attitudes. Students are not only skilled at solving problems in analytical chemistry, especially in essential oil analysis in accordance with the local genius of the Chemistry Department, Universitas Islam Indonesia, but are also skilled at working with computer programs and able to understand materials and problems in English.

  15. Sampling Mars: Analytical requirements and work to do in advance

    NASA Technical Reports Server (NTRS)

    Koeberl, Christian

    1988-01-01

    Sending a mission to Mars to collect samples and return them to Earth for analysis is without doubt one of the most exciting and important tasks for planetary science in the near future. Many scientifically important questions are associated with knowledge of the composition and structure of Martian samples. Among the most exciting is the clarification of the SNC problem: proving or disproving a possible Martian origin of these meteorites. Since SNC meteorites have been used to infer the chemistry of the planet Mars and its evolution (including the accretion history), it would be important to know whether the whole story is true. But before addressing possible scientific results, we have to deal with the analytical requirements and with possible pre-return work. It is unrealistic to expect that a Mars sample return mission will bring back anything close to the amount returned by the Apollo missions. It will be more like the amount returned by the Luna missions, or at least of that order of magnitude. This requires very careful sample selection and very precise analytical techniques. These techniques should use minimal sample sizes while optimizing the scientific output. The possibility of working with extremely small samples should not obscure another problem: possible sampling errors. As we know from terrestrial geochemical studies, sampling procedures must be quite elaborate to avoid sampling errors. The significance of analyzing a milligram- or submilligram-sized sample and relating it to the genesis of whole planetary crusts has to be viewed with care. This leaves a dilemma: on the one hand, to minimize the sample size as far as possible in order to return as many different samples as possible, and on the other hand, to take samples large enough to be representative. Whole-rock samples are very useful but should not exceed the 20 to 50 g range, except in cases of extreme inhomogeneity, because for larger samples the information tends to become redundant. Soil samples should be in the 2 to 10 g range, permitting the splitting of the returned samples for studies in different laboratories with a variety of techniques.

  16. A Review of Current Methods for Analysis of Mycotoxins in Herbal Medicines

    PubMed Central

    Zhang, Lei; Dou, Xiao-Wen; Zhang, Cheng; Logrieco, Antonio F.; Yang, Mei-Hua

    2018-01-01

    The presence of mycotoxins in herbal medicines is an established problem throughout the entire world. The sensitive and accurate analysis of mycotoxin in complicated matrices (e.g., herbs) typically involves challenging sample pretreatment procedures and an efficient detection instrument. However, although numerous reviews have been published regarding the occurrence of mycotoxins in herbal medicines, few of them provided a detailed summary of related analytical methods for mycotoxin determination. This review focuses on analytical techniques including sampling, extraction, cleanup, and detection for mycotoxin determination in herbal medicines established within the past ten years. Dedicated sections of this article address the significant developments in sample preparation, and highlight the importance of this procedure in the analytical technology. This review also summarizes conventional chromatographic techniques for mycotoxin qualification or quantitation, as well as recent studies regarding the development and application of screening assays such as enzyme-linked immunosorbent assays, lateral flow immunoassays, aptamer-based lateral flow assays, and cytometric bead arrays. The present work provides a good insight regarding the advanced research that has been done and closes with an indication of future demand for the emerging technologies. PMID:29393905

  17. Advances in spectroscopic methods for quantifying soil carbon

    USGS Publications Warehouse

    Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco; Hively, W. Dean

    2012-01-01

    The current gold standard for soil carbon (C) determination is elemental C analysis using dry combustion. However, this method requires expensive consumables, is limited by the number of samples that can be processed (~100/d), and is restricted to the determination of total carbon. With increased interest in soil C sequestration, faster methods of analysis are needed, and there is growing interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared spectral ranges. These spectral methods can decrease analytical requirements and speed sample processing, be applied to large landscape areas using remote sensing imagery, and be used to predict multiple analytes simultaneously. However, the methods require localized calibrations to establish the relationship between spectral data and reference analytical data, and also have additional, specific problems. For example, remote sensing is capable of scanning entire watersheds for soil carbon content but is limited to the surface layer of tilled soils and may require difficult and extensive field sampling to obtain proper localized calibration reference values. The objective of this chapter is to discuss the present state of spectroscopic methods for determination of soil carbon.

  18. Extending religion-health research to secular minorities: issues and concerns.

    PubMed

    Hwang, Karen; Hammer, Joseph H; Cragun, Ryan T

    2011-09-01

    Claims about religion's beneficial effects on physical and psychological health have received substantial attention in popular media, but empirical support for these claims is mixed. Many of these claims are tenuous because they fail to address basic methodological issues relating to construct validity, sampling methods or analytical problems. A more conceptual problem has to do with the near universal lack of atheist control samples. While many studies include samples of individuals classified as "low spirituality" or religious "nones", these groups are heterogeneous and contain only a fraction of members who would be considered truly secular. We illustrate the importance of including an atheist control group whenever possible in the religiosity/spirituality and health research and discuss areas for further investigation.

  19. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    PubMed

    West, Brady T; Sakshaug, Joseph W; Aurelien, Guy Alain S

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data.
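
    The article's incorrect-analysis examples are not reproduced in the abstract; as a generic illustration of the core point, the sketch below contrasts a naive simple-random-sample standard error with a design-based (stratified, cluster-robust, Taylor-linearized) standard error for a weighted mean. The data and design structure are synthetic placeholders, not SESTAT.

        # Sketch: naive vs. design-based standard error of a weighted mean.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 2000
        stratum = rng.integers(0, 5, n)              # 5 strata
        psu = stratum * 10 + rng.integers(0, 10, n)  # 10 PSUs per stratum
        y = rng.normal(50 + 2 * stratum, 10)         # outcome varies by stratum
        w = rng.uniform(1, 3, n)                     # survey weights

        ybar = np.sum(w * y) / np.sum(w)             # weighted mean

        # Naive SE: treats the data as an unweighted simple random sample
        se_naive = y.std(ddof=1) / np.sqrt(n)

        # Taylor-linearized SE honoring stratification and PSU clustering
        z = w * (y - ybar) / np.sum(w)               # linearized contributions
        var = 0.0
        for h in np.unique(stratum):
            psus = np.unique(psu[stratum == h])
            totals = np.array([z[(stratum == h) & (psu == p)].sum() for p in psus])
            n_h = len(psus)
            var += n_h / (n_h - 1) * np.sum((totals - totals.mean()) ** 2)
        se_design = np.sqrt(var)

        print(f"naive SE: {se_naive:.4f}   design-based SE: {se_design:.4f}")

    The design-based formula uses the usual with-replacement approximation (PSU totals within strata); ignoring the clustering typically understates the standard error, which is exactly the failure mode the article documents.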

  1. Psychosocial Treatment Efficacy for Disruptive Behavior Problems in Very Young Children: A Meta-Analytic Examination

    PubMed Central

    Comer, Jonathan S.; Chow, Candice; Chan, Priscilla T.; Cooper-Vince, Christine; Wilson, Lianna A.S.

    2012-01-01

Objective: Service use trends showing increased off-label prescribing in very young children and reduced psychotherapy use raise concerns about quality of care for early disruptive behavior problems. Meta-analysis can empirically clarify best practices and guide clinical decision making by providing a quantitative synthesis of a body of literature, identifying the magnitude of overall effects across studies, and determining systematic factors associated with effect variations. Method: We used random-effects meta-analytic procedures to empirically evaluate the overall effect of psychosocial treatments on early disruptive behavior problems, as well as potential moderators of treatment response. Thirty-six controlled trials, evaluating 3,042 children, met selection criteria (mean sample age, 4.7 years; 72.0% male; 33.1% minority youth). Results: Psychosocial treatments collectively demonstrated a large and sustained effect on early disruptive behavior problems (Hedges’ g = 0.82), with the largest effects associated with behavioral treatments (Hedges’ g = 0.88), samples with higher proportions of older and male youth, and comparisons against treatment as usual (Hedges’ g = 1.17). Across trials, effects were largest for general externalizing problems (Hedges’ g = 0.90) and problems of oppositionality and noncompliance (Hedges’ g = 0.76), and were weakest, relatively speaking, for problems of impulsivity and hyperactivity (Hedges’ g = 0.61). Conclusions: In the absence of controlled trials evaluating psychotropic interventions, findings provide robust quantitative support that psychosocial treatments should constitute first-line treatment for early disruptive behavior problems. Against a backdrop of concerning trends in the availability and use of supported interventions, findings underscore the urgency of improving dissemination efforts for supported psychosocial treatment options, and removing systematic barriers to psychosocial care for affected youth. PMID:23265631
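
    For orientation on the reported effect metric, the sketch below computes a single trial's Hedges' g and a DerSimonian-Laird random-effects pooled estimate. The formulas are the standard textbook ones and the summary numbers are invented; nothing here is output from this meta-analysis.

        # Sketch: Hedges' g per trial, then DerSimonian-Laird random-effects pooling.
        import numpy as np

        def hedges_g(m1, m2, s1, s2, n1, n2):
            """Bias-corrected standardized mean difference and its variance."""
            s_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
            d = (m1 - m2) / s_pooled
            j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)      # small-sample correction
            var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
            return j * d, j**2 * var_d

        def random_effects(gs, vs):
            """DerSimonian-Laird pooled effect and its standard error."""
            gs, vs = np.asarray(gs), np.asarray(vs)
            w = 1 / vs
            q = np.sum(w * (gs - np.sum(w * gs) / np.sum(w)) ** 2)
            k = len(gs)
            tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
            w_star = 1 / (vs + tau2)
            return np.sum(w_star * gs) / np.sum(w_star), np.sqrt(1 / np.sum(w_star))

        # invented summaries for three hypothetical trials (control vs. treatment)
        trials = [hedges_g(10.0, 14.0, 5.0, 5.5, 40, 42),
                  hedges_g(8.0, 12.5, 6.0, 6.2, 30, 31),
                  hedges_g(9.5, 13.0, 5.5, 5.8, 55, 50)]
        pooled, se = random_effects(*zip(*trials))
        print(f"pooled Hedges' g = {pooled:.2f} (SE {se:.2f})")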

  2. On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood

    ERIC Educational Resources Information Center

    Karabatsos, George

    2017-01-01

    This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon…
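
    The ERIC record is truncated and gives no algorithmic detail; purely as an illustration of the synthetic-likelihood idea it names (in the style of Wood's 2010 proposal), the sketch below simulates summary statistics under a candidate parameter, fits a Gaussian to them, and scores the observed summaries. The simulator and summary statistics are invented stand-ins, not the article's testing problem.

        # Sketch of a synthetic-likelihood evaluation: approximate the intractable
        # likelihood of observed summary statistics by a Gaussian fitted to
        # summaries of data simulated under a candidate parameter theta.
        import numpy as np
        from scipy.stats import multivariate_normal

        def simulate_summaries(theta, n_sims, rng):
            """Hypothetical simulator: one summary-statistic vector per run."""
            data = rng.normal(loc=theta, scale=1.0, size=(n_sims, 50))
            return np.column_stack([data.mean(axis=1), data.std(axis=1)])

        def synthetic_loglik(theta, s_obs, n_sims=500, seed=0):
            rng = np.random.default_rng(seed)
            s = simulate_summaries(theta, n_sims, rng)
            mu, cov = s.mean(axis=0), np.cov(s, rowvar=False)
            return multivariate_normal(mu, cov).logpdf(s_obs)

        rng = np.random.default_rng(42)
        observed = simulate_summaries(1.3, 1, rng)[0]   # pretend these are the data
        for theta in (0.5, 1.0, 1.3, 2.0):
            print(theta, round(synthetic_loglik(theta, observed), 2))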

  3. Determination of selenium in the environment and in biological material.

    PubMed Central

    Bem, E M

    1981-01-01

This paper reviews the following problems: sampling, decomposition procedures, and the most important analytical methods used for selenium determination, e.g., neutron activation analysis, atomic absorption spectrometry, gas-liquid chromatography, spectrophotometry, fluorimetry, and X-ray fluorescence. This review covers the literature mainly from 1975 to 1977. PMID:7007035

  4. An Empirical Evaluation of Factor Reliability.

    ERIC Educational Resources Information Center

    Jackson, Douglas N.; Morf, Martin E.

    The psychometric reliability of a factor, defined as its generalizability across samples drawn from the same population of tests, is considered as a necessary precondition for the scientific meaningfulness of factor analytic results. A solution to the problem of generalizability is illustrated empirically on data from a set of tests designed to…

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoenig, M.; Elsen, Y.V.; Cauter, R.V.

The progressive degradation of the pyrolytic graphite surface of atomizers provides variable and misleading results of molybdenum peak-height measurements. The changes in the peak shapes produce no analytical problems during the lifetime of the atomizer (approx. 300 firings) when integrated absorbance (A·s signals) is considered and the possible base-line drifts are controlled. This was demonstrated on plant samples mineralized by simple digestion with a mixture of HNO₃ and H₂O₂. The value of this method was assessed by comparison with a standard dry oxidation method and by molybdenum determination in National Bureau of Standards reference plant samples. The relative standard deviations (n = 5) of the full analytical procedure do not exceed 7%. 13 references, 3 figures, 3 tables.

  6. Simultaneous determination of thirteen different steroid hormones using micro UHPLC-MS/MS with on-line SPE system.

    PubMed

    Márta, Zoltán; Bobály, Balázs; Fekete, Jenő; Magda, Balázs; Imre, Tímea; Mészáros, Katalin Viola; Bálint, Mária; Szabó, Pál Tamás

    2018-02-20

Ultratrace analysis of sample components requires excellent analytical performance in terms of limits of quantitation (LOQ). Micro UHPLC coupled to sensitive tandem mass spectrometry provides a state-of-the-art solution for such analytical problems. Using on-line SPE with column switching on a micro UHPLC-MS/MS system allowed the LOQs to be decreased without any complex sample preparation protocol. The presented method is capable of reaching satisfactorily low LOQ values for the analysis of thirteen different steroid molecules from human plasma without the most commonly used off-line SPE or compound derivatization. Steroids were determined using two simple sample preparation methods, based on lower and higher plasma steroid concentrations. In the first method, higher analyte concentrations were directly determined after protein precipitation with methanol. The organic phase obtained from the precipitation was diluted with water and directly injected into the LC-MS system. In the second method, low steroid levels were determined by concentrating the organic phase after steroid extraction. In this case, analytes were extracted with ethyl acetate and reconstituted in 90/10 water/acetonitrile following evaporation to dryness. This step provided much lower LOQs, outperforming previously published values. The method has been validated and subsequently applied to clinical laboratory measurement. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. General statistical considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eberhardt, L L; Gilbert, R O

From NAEG plutonium environmental studies program meeting; Las Vegas, Nevada, USA (2 Oct 1973). The high sampling variability encountered in environmental plutonium studies, along with high analytical costs, makes it very important that efficient soil sampling plans be used. However, efficient sampling depends on explicit and simple statements of the objectives of the study. When there are multiple objectives it may be difficult to devise a wholly suitable sampling scheme. Sampling for long-term changes in plutonium concentration in soils may also be complex and expensive. Further attention to problems associated with compositing samples is recommended, as is the consistent use of random sampling as a basic technique. (auth)

  8. Enzyme Biosensors for Biomedical Applications: Strategies for Safeguarding Analytical Performances in Biological Fluids

    PubMed Central

    Rocchitta, Gaia; Spanu, Angela; Babudieri, Sergio; Latte, Gavinella; Madeddu, Giordano; Galleri, Grazia; Nuvoli, Susanna; Bagella, Paola; Demartis, Maria Ilaria; Fiore, Vito; Manetti, Roberto; Serra, Pier Andrea

    2016-01-01

    Enzyme-based chemical biosensors are based on biological recognition. In order to operate, the enzymes must be available to catalyze a specific biochemical reaction and be stable under the normal operating conditions of the biosensor. Design of biosensors is based on knowledge about the target analyte, as well as the complexity of the matrix in which the analyte has to be quantified. This article reviews the problems resulting from the interaction of enzyme-based amperometric biosensors with complex biological matrices containing the target analyte(s). One of the most challenging disadvantages of amperometric enzyme-based biosensor detection is signal reduction from fouling agents and interference from chemicals present in the sample matrix. This article, therefore, investigates the principles of functioning of enzymatic biosensors, their analytical performance over time and the strategies used to optimize their performance. Moreover, the composition of biological fluids as a function of their interaction with biosensing will be presented. PMID:27249001
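
    The review gives no equations; for orientation only, the steady-state response of an enzyme-based amperometric biosensor is commonly modeled with a Michaelis-Menten-type saturation curve. This is a convention assumed here, not a formula taken from the article.

        % Common first-order model for an enzymatic amperometric biosensor
        % (illustrative assumption, not from the reviewed article).
        \[
          I(C) \;=\; \frac{I_{\max}\, C}{K_{M}^{\mathrm{app}} + C}
        \]
        % I: steady-state current; C: analyte concentration; I_max: saturation
        % current; K_M^app: apparent Michaelis constant. Fouling and interferents
        % of the kind discussed above shift I_max and the baseline over time,
        % which is why performance must be tracked in the target matrix.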

  9. [Failure mode and effects analysis in the detection of errors in the transport of samples to the clinical laboratory].

    PubMed

    Parés-Pollán, L; Gonzalez-Quintana, A; Docampo-Cordeiro, J; Vargas-Gallego, C; García-Álvarez, G; Ramos-Rodríguez, V; Diaz Rubio-García, M P

    2014-01-01

Owing to decreased values of the biochemical parameter glucose in some samples from external collection centres, and the risk this implies for patient safety, it was decided to apply an adaptation of the "Health Services Failure Mode and Effects Analysis" (HFMEA) to manage risk during the pre-analytical phase of sample transportation from external centres to clinical laboratories. A retrospective study of the glucose parameter was conducted over two consecutive months. The analysis was performed in its different phases: defining the HFMEA topic, assembling the team, graphically describing the process, conducting a hazard analysis, designing the intervention and indicators, and identifying a person responsible for ensuring completion of each action. The glucose results from one of the transport routes were significantly lower (P=.006). The errors and potential causes of this problem were analysed, and criteria of criticality and detectability were applied (score ≥ 8) in the decision tree. It was decided to: develop a document management system; reorganise collections and transport routes in some centres; and introduce quality control of the sample container ice-packs and of the time and temperature during transportation. This work proposes quality indicators for controlling the time and temperature of transported samples in the pre-analytical phase. Periodic review of certain laboratory parameters can help to detect problems in transporting samples. The HFMEA technique is useful for the clinical laboratory. Copyright © 2013 SECA. Published by Elsevier Espana. All rights reserved.

  10. Analytical techniques for identification and study of organic matter in returned lunar samples

    NASA Technical Reports Server (NTRS)

    Burlingame, A. L.

    1974-01-01

    The results of geochemical research are reviewed. Emphasis is placed on the contribution of mass spectrometric data to the solution of specific structural problems. Information on the mass spectrometric behavior of compounds of geochemical interest is reviewed and currently available techniques of particular importance to geochemistry, such as gas chromatograph-mass spectrometer coupling, modern sample introduction methods, and computer application in high resolution mass spectrometry, receive particular attention.

  11. Trace level detection of compounds related to the Chemical Weapons Convention by 1H-detected 13C NMR spectroscopy executed with a sensitivity-enhanced, cryogenic probehead.

    PubMed

    Cullinan, David B; Hondrogiannis, George; Henderson, Terry J

    2008-04-15

Two-dimensional 1H-13C HSQC (heteronuclear single quantum correlation) and fast-HMQC (heteronuclear multiple quantum correlation) pulse sequences were implemented using a sensitivity-enhanced, cryogenic probehead for detecting compounds relevant to the Chemical Weapons Convention present in complex mixtures. The resulting methods demonstrated exceptional sensitivity for detecting the analytes at trace-level concentrations. 1H-13C correlations of target analytes at ≤25 µg/mL were easily detected in a sample where the 1H solvent signal was approximately 58,000-fold more intense than the analyte 1H signals. The problem of overlapping signals typically observed in conventional 1H spectroscopy was essentially eliminated, while 1H and 13C chemical shift information could be derived quickly and simultaneously from the resulting spectra. The fast-HMQC pulse sequences generated magnitude-mode spectra suitable for detailed analysis in approximately 4.5 h and can be used in experiments to efficiently screen a large number of samples. The HSQC pulse sequences, on the other hand, required roughly twice the data acquisition time to produce suitable spectra. These spectra, however, were phase-sensitive, contained considerably more resolution in both dimensions, and proved to be superior for detecting analyte 1H-13C correlations. Furthermore, a HSQC spectrum collected with a multiplicity-edited pulse sequence provided additional structural information valuable for identifying target analytes. The HSQC pulse sequences are ideal for collecting high-quality data sets with overnight acquisitions and logically follow the use of fast-HMQC pulse sequences to rapidly screen samples for potential target analytes. Use of the pulse sequences considerably improves the performance of NMR spectroscopy as a complementary technique for the screening, identification, and validation of chemical warfare agents and other small-molecule analytes present in complex mixtures and environmental samples.

  12. Analytical Chemistry Laboratory Progress Report for FY 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Boparai, A.S.; Bowers, D.L.

The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 1994 (October 1993 through September 1994). This annual report is the eleventh for the ACL and describes continuing effort on projects, work on new projects, and contributions of the ACL staff to various programs at ANL. The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. The ACL also has a research program in analytical chemistry, conducts instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems. Some routine or standard analyses are done, but it is common for the Argonne programs to generate unique problems that require significant development of methods and adaptation of techniques to obtain useful analytical data. The ACL has four technical groups -- Chemical Analysis, Instrumental Analysis, Organic Analysis, and Environmental Analysis -- which together include about 45 technical staff members. Talents and interests of staff members cross the group lines, as do many projects within the ACL. The Chemical Analysis Group uses wet-chemical and instrumental methods for elemental, compositional, and isotopic determinations in solid, liquid, and gaseous samples and provides specialized analytical services. Major instruments in this group include an ion chromatograph (IC), an inductively coupled plasma/atomic emission spectrometer (ICP/AES), spectrophotometers, mass spectrometers (including gas-analysis and thermal-ionization mass spectrometers), emission spectrographs, autotitrators, sulfur and carbon determinators, and a kinetic phosphorescence uranium analyzer.

  13. Three-Dimensional Solution of the Free Vibration Problem for Metal-Ceramic Shells Using the Method of Sampling Surfaces

    NASA Astrophysics Data System (ADS)

    Kulikov, G. M.; Plotnikova, S. V.

    2017-03-01

The possibility of using the method of sampling surfaces (SaS) for solving the free vibration problem of three-dimensional elasticity for metal-ceramic shells is studied. According to this method, an arbitrary number of SaS parallel to the middle surface are selected in the shell body, and the displacements of these surfaces are taken as unknowns. The SaS pass through the nodes of a Chebyshev polynomial, which significantly improves the convergence of the SaS method. As a result, the SaS method can be used to obtain analytical solutions of the vibration problem for metal-ceramic plates and cylindrical shells that asymptotically approach the exact solutions of elasticity as the number of SaS tends to infinity.
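
    The abstract's key numerical device is placing the sampling surfaces at Chebyshev polynomial nodes. A minimal sketch of that placement follows; keeping the two outermost surfaces on the shell faces is the usual SaS convention, and the details should be treated as assumptions rather than the paper's exact scheme.

        # Sketch: thickness coordinates of N sampling surfaces (SaS), with the
        # inner surfaces at Chebyshev polynomial nodes and the outer two on the
        # shell faces.
        import numpy as np

        def sas_locations(n_sas, h):
            """z-coordinates in [-h/2, h/2] for n_sas >= 3 sampling surfaces."""
            k = np.arange(1, n_sas - 1)                              # interior surfaces
            cheb = np.cos(np.pi * (2 * k - 1) / (2 * (n_sas - 2)))   # nodes in (-1, 1)
            return np.concatenate(([-1.0], np.sort(cheb), [1.0])) * h / 2

        print(sas_locations(7, 1.0))   # 7 surfaces across a unit thickness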

  14. Case report of unexplained hypocalcaemia in a slightly haemolysed sample.

    PubMed

    Cornes, Michael

    2017-06-15

The case presented highlights a common pre-analytical problem identified in the laboratory that was initially missed. It concerns a young, generally healthy adult patient with no significant medical history and no significant family history. The patient presented with common flu-like symptoms to their primary care clinician, who considered this most likely a viral illness that would pass with time. The clinician nevertheless ordered some routine blood tests to reassure the patient, despite the lack of clinical indication. When the sample was analysed it was found to be haemolysed, with a strikingly low calcium. This led to the patient being called into hospital for urgent repeat investigations, all of which turned out to be within normal ranges. On further investigation the original sample was found to be contaminated. This result would normally have been flagged, but was missed owing to the complication of haemolysis.

  15. Installation-restoration program. Phase 2. Confirmation/quantification. Stage 1 for Mather AFB, Sacramento, California. Volume 1. Final report, September 1983-June 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

A Problem Confirmation Study was performed at three sites on Mather AFB identified in the Phase I investigation as requiring further study (the Air Command and Warning Area, the 7100 Area, and the West Ditch) and at the Northeast Perimeter. The field investigation was conducted from February 1984 to June 1985 and included installation of 11 monitor wells, collection of groundwater samples from the monitor wells and 15 base production wells, and collection of sediment samples from two locations on the West Ditch. Analytes included oil and grease, TOC, volatile organic compounds (VOA), as well as dimethylnitrosamine, phenols, pesticides, and dissolved metals at some specific sites. Based on the hydrogeologic complexity of the physical setting and the findings of the sampling and analytical work, follow-on investigations were recommended at all three sites.

  16. Get with the System: General Systems Theory for Business Officials.

    ERIC Educational Resources Information Center

    Graczyk, Sandra L.

    1993-01-01

    An introduction to general systems theory and an overview of vocabulary and concepts are presented to introduce school business officials to systems thinking and to foster its use as an analytical tool. The theory is then used to analyze a sample problem: planning changes to a district's administrative computer system. (eight references) (MLF)

  17. Temperament Profiles from Infancy to Middle Childhood: Development and Associations with Behavior Problems

    ERIC Educational Resources Information Center

    Janson, Harald; Mathiesen, Kristin S.

    2008-01-01

    The authors applied I-States as Objects Analysis (ISOA), a recently proposed person-oriented analytic approach, to the study of temperament development in 921 Norwegian children from a population-based sample. A 5-profile classification based on cluster analysis of standardized mother reports of activity, sociability, emotionality, and shyness at…

  18. The Effect of Sample Size on Parametric and Nonparametric Factor Analytical Methods

    ERIC Educational Resources Information Center

    Kalkan, Ömür Kaya; Kelecioglu, Hülya

    2016-01-01

    Linear factor analysis models used to examine constructs underlying the responses are not very suitable for dichotomous or polytomous response formats. The associated problems cannot be eliminated by polychoric or tetrachoric correlations in place of the Pearson correlation. Therefore, we considered parameters obtained from the NOHARM and FACTOR…

  19. Trace analysis in the food and beverage industry by capillary gas chromatography: system performance and maintenance.

    PubMed

    Hayes, M A

    1988-04-01

    Gas chromatography (GC) is the most widely used analytical technique in the food and beverage industry. This paper addresses the problems of sample preparation and system maintenance to ensure the most sensitive, durable, and efficient results for trace analysis by GC in this industry.

  20. Integrating laboratory robots with analytical instruments--must it really be so difficult?

    PubMed

    Kramer, G W

    1990-09-01

    Creating a reliable system from discrete laboratory instruments is often a task fraught with difficulties. While many modern analytical instruments are marvels of detection and data handling, attempts to create automated analytical systems incorporating such instruments are often frustrated by their human-oriented control structures and their egocentricity. The laboratory robot, while fully susceptible to these problems, extends such compatibility issues to the physical dimensions involving sample interchange, manipulation, and event timing. The workcell concept was conceived to describe the procedure and equipment necessary to carry out a single task during sample preparation. This notion can be extended to organize all operations in an automated system. Each workcell, no matter how complex its local repertoire of functions, must be minimally capable of accepting information (commands, data), returning information on demand (status, results), and being started, stopped, and reset by a higher level device. Even the system controller should have a mode where it can be directed by instructions from a higher level.
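
    The minimal control contract described above (accept commands and data, report status and results on demand, and be started, stopped, and reset by a higher-level device) can be sketched as an interface; all names below are illustrative, not taken from the article.

        # Sketch of the minimal workcell control contract: every workcell,
        # whatever its internal complexity, exposes command input, status/result
        # output, and start/stop/reset under higher-level control.
        from abc import ABC, abstractmethod

        class Workcell(ABC):
            @abstractmethod
            def send(self, command: str, **params) -> None:
                """Accept information: commands and data."""

            @abstractmethod
            def query(self) -> dict:
                """Return information on demand: status and results."""

            @abstractmethod
            def start(self) -> None: ...

            @abstractmethod
            def stop(self) -> None: ...

            @abstractmethod
            def reset(self) -> None: ...

        class SystemController:
            """Higher-level device sequencing workcells; itself controllable
            from a still higher level, as the article suggests."""

            def __init__(self, cells: list[Workcell]):
                self.cells = cells

            def run_step(self, cell: Workcell, command: str) -> dict:
                cell.send(command)
                cell.start()
                return cell.query()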

  1. Matrix-enhanced secondary ion mass spectrometry: The Alchemist's solution?

    NASA Astrophysics Data System (ADS)

    Delcorte, Arnaud

    2006-07-01

    Because of the requirements of large molecule characterization and high-lateral resolution SIMS imaging, the possibility of improving molecular ion yields by the use of specific sample preparation procedures has recently generated a renewed interest in the static SIMS community. In comparison with polyatomic projectiles, however, signal enhancement by a matrix might appear to some as the alchemist's versus the scientist's solution to the current problems of organic SIMS. In this contribution, I would like to discuss critically the pros and cons of matrix-enhanced SIMS procedures, in the new framework that includes polyatomic ion bombardment. This discussion is based on a short review of the experimental and theoretical developments achieved in the last decade with respect to the three following approaches: (i) blending the analyte with a low-molecular weight organic matrix (MALDI-type preparation procedure); (ii) mixing alkali/noble metal salts with the analyte; (iii) evaporating a noble metal layer on the analyte sample surface (organic molecules, polymers).

  2. Recent advances in analytical methods for the determination of 4-alkylphenols and bisphenol A in solid environmental matrices: A critical review.

    PubMed

    Salgueiro-González, N; Castiglioni, S; Zuccato, E; Turnes-Carou, I; López-Mahía, P; Muniategui-Lorenzo, S

    2018-09-18

The problem of endocrine disrupting compounds (EDCs) in the environment has become a worldwide concern in recent decades. Because of their toxicological effects at low concentrations and their widespread use in industrial and household applications, these pollutants pose a risk for non-target organisms and also for public safety. Analytical methods to determine these compounds at trace levels in different matrices are urgently needed. This review critically discusses trends in analytical methods for well-known EDCs like alkylphenols and bisphenol A in solid environmental matrices, including sediment and aquatic biological samples (from 2006 to 2018). Information about extraction, clean-up and determination is covered in detail, including analytical quality parameters (QA/QC). Conventional and novel analytical techniques are compared, with their advantages and drawbacks. Ultrasound-assisted extraction followed by solid phase extraction clean-up is the most widely used procedure for sediment and aquatic biological samples, although softer extraction conditions have been employed for the latter. The use of liquid chromatography followed by tandem mass spectrometry has greatly increased in the last five years. The majority of these methods have been employed for the analysis of river sediments and bivalve molluscs because of their usefulness in aquatic ecosystem (bio)monitoring programs. Green, simple, fast analytical methods are now needed to determine these compounds in complex matrices. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. On the accuracy of analytical models of impurity segregation during directional melt crystallization and their applicability for quantitative calculations

    NASA Astrophysics Data System (ADS)

    Voloshin, A. E.; Prostomolotov, A. I.; Verezub, N. A.

    2016-11-01

The paper deals with the analysis of the accuracy of some one-dimensional (1D) analytical models of the axial distribution of impurities in a crystal grown from a melt. The models proposed by Burton-Prim-Slichter, Ostrogorsky-Muller and Garandet with co-authors are considered, and these models are compared to the results of a two-dimensional (2D) numerical simulation. Stationary solutions as well as solutions for the initial transient regime obtained using these models are considered. The sources of errors are analyzed, and a conclusion is made about the applicability of 1D analytical models for quantitative estimates of impurity incorporation into the crystal sample as well as for the solution of inverse problems.
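
    The abstract does not restate the models themselves; for orientation, the sketch below gives the Burton-Prim-Slichter (BPS) relation that such 1D treatments build on, combined with a Scheil-type axial profile for complete melt mixing. Symbols and the illustrative parameter values are standard conventions, not the paper's notation.

        # Sketch: BPS effective segregation coefficient and the resulting
        # Scheil-type axial impurity profile for complete melt mixing.
        # k0: equilibrium coefficient, V: growth rate (m/s), delta: diffusion
        # boundary-layer thickness (m), D: impurity diffusivity in the melt
        # (m^2/s), f: solidified fraction. Values below are only illustrative.
        import numpy as np

        def k_eff_bps(k0, V, delta, D):
            return k0 / (k0 + (1 - k0) * np.exp(-V * delta / D))

        def axial_profile(c0, k_eff, f):
            """Impurity concentration in the crystal at solidified fraction f."""
            return k_eff * c0 * (1 - f) ** (k_eff - 1)

        k = k_eff_bps(k0=0.1, V=1e-6, delta=1e-3, D=1e-9)
        print(f"k_eff = {k:.3f}")
        print(axial_profile(c0=1.0, k_eff=k, f=np.linspace(0.0, 0.9, 4)))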

  4. Analytical methodologies for aluminium speciation in environmental and biological samples--a review.

    PubMed

    Bi, S P; Yang, X D; Zhang, F P; Wang, X L; Zou, G W

    2001-08-01

    It is recognized that aluminium (Al) is a potential environmental hazard. Acidic deposition has been linked to increased Al concentrations in natural waters. Elevated levels of Al might have serious consequences for biological communities. Of particular interest is the speciation of Al in aquatic environments, because Al toxicity depends on its forms and concentrations. In this paper, advances in analytical methodologies for Al speciation in environmental and biological samples during the past five years are reviewed. Concerns about the specific problems of Al speciation and highlights of some important methods are elucidated in sections devoted to hybrid techniques (HPLC or FPLC coupled with ET-AAS, ICP-AES, or ICP-MS), flow-injection analysis (FIA), nuclear magnetic resonance (27Al NMR), electrochemical analysis, and computer simulation. More than 130 references are cited.

  5. Laboratory Methods for the Measurement of Pollutants in Water and Waste Effluents

    NASA Technical Reports Server (NTRS)

    Ballinger, Dwight G.

    1971-01-01

The requirement for accurate, precise, and rapid analytical procedures for the examination of water and waste samples requires the use of a variety of instruments. The instrumentation in water laboratories includes atomic absorption, UV-visible, and infrared spectrophotometers, automatic colorimetric analyzers, gas chromatographs and mass spectrometers. Because of the emphasis on regulatory action, attention is being directed toward quality control of analytical results. Among the challenging problems are the differentiation of metallic species in water at nanogram concentrations, rapid measurement of free cyanide and free ammonia, more sensitive methods for arsenic and selenium, and improved characterization of organic contaminants.

  6. An Undergraduate Field Experiment for Measuring Exposure to Environmental Tobacco Smoke in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Marsella, Adam M.; Huang, Jiping; Ellis, David A.; Mabury, Scott A.

    1999-12-01

    An undergraduate field experiment is described for the measurement of nicotine and various carbonyl compounds arising from environmental tobacco smoke. Students are introduced to practical techniques in HPLC-UV and GC-NPD. Also introduced are current methods in personal air sampling using small and portable field sampling pumps. Carbonyls (formaldehyde, acetaldehyde, acrolein, and acetone) are sampled with silica solid-phase extraction cartridges impregnated with 2,4-dinitrophenylhydrazine, eluted, and analyzed by HPLC-UV (360-380 nm). Nicotine is sampled using XAD-2 cartridges, extracted, and analyzed by GC-NPD. Students gain an appreciation for the problems associated with measuring ubiquitous pollutants such as formaldehyde, as well as the issue of chromatographic peak resolution when trying to resolve closely eluting peaks. By allowing the students to formulate their own hypothesis and sampling scheme, critical thinking and problem solving are developed in addition to analysis skills. As an experiment in analytical environmental chemistry, this laboratory introduces the application of field sampling and analysis techniques to the undergraduate lab.

  7. Robust analysis of the hydrophobic basic analytes loratadine and desloratadine in pharmaceutical preparations and biological fluids by sweeping-cyclodextrin-modified micellar electrokinetic chromatography.

    PubMed

    El-Awady, Mohamed; Belal, Fathalla; Pyell, Ute

    2013-09-27

The analysis of hydrophobic basic analytes by micellar electrokinetic chromatography (MEKC) is usually challenging because of the tendency of these analytes to be adsorbed onto the inner capillary wall, in addition to the difficulty of separating these compounds as they exhibit extremely high retention factors. A robust and reliable method for the simultaneous determination of loratadine (LOR) and its major metabolite desloratadine (DSL) is developed based on cyclodextrin-modified micellar electrokinetic chromatography (CD-MEKC) with an acidic sample matrix and a basic background electrolyte (BGE). The influence of the sample matrix on the reachable focusing efficiency is studied. It is shown that the application of a low-pH sample solution mitigates problems associated with the low solubility of the hydrophobic basic analytes in aqueous solution while having advantages with regard to on-line focusing. Moreover, the use of a basic BGE reduces the adsorption of these analytes in the separation compartment. The separation of the studied analytes is achieved in less than 7 min using a BGE consisting of 10 mmol L(-1) disodium tetraborate buffer, pH 9.30, containing 40 mmol L(-1) SDS and 20 mmol L(-1) hydroxypropyl-β-CD, while the sample solution is composed of 10 mmol L(-1) phosphoric acid, pH 2.15. A full validation study of the developed method based on the pharmacopeial guidelines is performed. The method is successfully applied to the analysis of the studied drugs in tablets without interference from tablet additives, as well as the analysis of spiked human urine without any sample pretreatment. Furthermore, DSL can be detected as an impurity in LOR bulk powder at the stated pharmacopeial limit (0.1%, w/w). The selectivity of the developed method allows the analysis of LOR and DSL in combination with the co-formulated drug pseudoephedrine. It is shown that in CD-MEKC with a basic BGE, solute-wall interactions are effectively suppressed, allowing the development of efficient and precise methods for the determination of hydrophobic basic analytes, whereas the use of a low-pH sample solution has a positive impact on the attainable sweeping efficiency without compromising peak shape and resolution. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Search for and analysis of radioactive halos in lunar material

    NASA Technical Reports Server (NTRS)

    Gentry, R. V.

    1976-01-01

    The lunar halo search was conducted because halos in terrestrial minerals serve as pointers to localized radioactivity, and make possible analytical studies on the problems of isotopic dating and mode of crystallization of the host mineral. Ancillary studies were conducted on terrestrial halos and on certain samples of special origin such as tektites and meteorites.

  9. Patterns of Adolescent Bullying Behaviors: Physical, Verbal, Exclusion, Rumor, and Cyber

    ERIC Educational Resources Information Center

    Wang, Jing; Iannotti, Ronald J.; Luk, Jeremy W.

    2012-01-01

    Patterns of engagement in cyber bullying and four types of traditional bullying were examined using latent class analysis (LCA). Demographic differences and externalizing problems were evaluated across latent class membership. Data were obtained from the 2005-2006 Health Behavior in School-aged Survey and the analytic sample included 7,508 U.S.…

  10. Solid state electro-optic color filter and iris

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Test results obtained have confirmed the practicality of the solid state electro-optic filters as an optical control element in a television system. Neutral-density control range in excess of 1000:1 has been obtained on sample filters. Test results, measurements in a complete camera system, discussions of problem areas, analytical comparisons, and recommendations for future investigations are included.

  11. Destructive analysis capabilities for plutonium and uranium characterization at Los Alamos National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tandon, Lav; Kuhn, Kevin J; Drake, Lawrence R

Los Alamos National Laboratory's (LANL) Actinide Analytical Chemistry (AAC) group has been in existence since the Manhattan Project. It maintains a complete set of analytical capabilities for performing complete characterization (elemental assay, isotopic composition, and metallic and non-metallic trace impurities) of uranium and plutonium samples in different forms. For a majority of the customers there are strong quality assurance (QA) and quality control (QC) objectives, including the highest accuracy and precision with well-defined uncertainties associated with the analytical results. Los Alamos participates in various international and national programs, such as the Plutonium Metal Exchange Program, New Brunswick Laboratory's (NBL's) Safeguards Measurement Evaluation Program (SME) and several other inter-laboratory round-robin exercises, to monitor and evaluate the data quality generated by AAC. These programs also provide independent verification of analytical measurement capabilities, and allow any technical problems with analytical measurements to be identified and corrected. This presentation will focus on key analytical capabilities for destructive analysis in AAC and also on comparative data between LANL and peer groups for Pu assay and isotopic analysis.

  12. An Investigation to Manufacturing Analytical Services Composition using the Analytical Target Cascading Method.

    PubMed

    Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas

    2017-01-01

As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and compose them into larger, more meaningful ones. The trend is attractive to analytical problems in the manufacturing system design and performance improvement domain because 1) finding a global optimization for the system is a complex problem; and 2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems by independent services can result in a sub-optimal solution at the system level. This paper investigates the technique called Analytical Target Cascading (ATC) to coordinate the optimization of loosely-coupled sub-problems, each of which may be modularly formulated by differing departments and solved by modular analytical services. The result demonstrates that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed and modular executions while allowing easier management of the problem formulation.
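
    As a toy illustration of the ATC coordination idea (not the paper's formulation), the sketch below lets two hypothetical subproblem services minimize invented local objectives plus a quadratic target-response penalty, while a simple system-level loop updates the shared target until targets and responses agree.

        # Toy ATC-style coordination: a system target t is cascaded to two
        # subproblems, each solved independently with a quadratic penalty on the
        # target-response gap; the loop iterates until t stabilizes.
        from scipy.optimize import minimize_scalar

        def subproblem(local_obj, response, t, w=10.0):
            """min_x local_obj(x) + w*(t - response(x))**2; returns the response."""
            res = minimize_scalar(lambda x: local_obj(x) + w * (t - response(x)) ** 2)
            return response(res.x)

        f1, r1 = lambda x: (x - 2.0) ** 2, lambda x: x        # subsystem 1 (invented)
        f2, r2 = lambda x: 2.0 * (x - 4.0) ** 2, lambda x: x  # subsystem 2 (invented)

        t = 0.0
        for it in range(100):
            resp1 = subproblem(f1, r1, t)
            resp2 = subproblem(f2, r2, t)
            t_new = 0.5 * (resp1 + resp2)   # system level reconciles responses
            if abs(t_new - t) < 1e-9:
                break
            t = t_new
        print(f"converged target t = {t:.4f} after {it + 1} iterations")

    In full ATC the penalty weights are updated across iterations and convergence results exist for hierarchies of such problems; the fixed weight here merely keeps the toy readable.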

  13. Analytical performances of food microbiology laboratories - critical analysis of 7 years of proficiency testing results.

    PubMed

    Abdel Massih, M; Planchon, V; Polet, M; Dierick, K; Mahillon, J

    2016-02-01

Based on the results of 19 food microbiology proficiency testing (PT) schemes, this study aimed to assess laboratory performances, to highlight the main sources of unsatisfactory analytical results and to suggest areas of improvement. The 2009-2015 results of REQUASUD and IPH PT, involving a total of 48 laboratories, were analysed. On average, the laboratories failed to detect or enumerate foodborne pathogens in 3·0% of the tests. Thanks to a close collaboration with the PT participants, the causes of outliers could be identified in 74% of the cases. The main causes of erroneous PT results were either pre-analytical (handling of the samples, timing of analysis), analytical (unsuitable methods, confusion of samples, errors in colony counting or confirmation) or post-analytical mistakes (calculation and encoding of results). PT schemes are a privileged observation post from which to highlight analytical problems that would otherwise remain unnoticed. In this perspective, this comprehensive study of PT results provides insight into the sources of systematic errors encountered during the analyses. This study draws the attention of the laboratories to the main causes of analytical errors and suggests practical solutions to avoid them, for educational purposes. The observations support the hypothesis that regular participation in PT, when followed by feedback and appropriate corrective actions, can play a key role in quality improvement and provide more confidence in laboratory testing results. © 2015 The Society for Applied Microbiology.
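
    The study reports outlier rates without detailing the scoring rule; PT schemes conventionally flag results with a z-score in the style of ISO 13528, summarized below as an assumption about common practice rather than these schemes' documented procedure.

        % Conventional PT performance scoring (illustrative convention):
        \[
          z \;=\; \frac{x_{\mathrm{lab}} - x_{\mathrm{assigned}}}{\sigma_{\mathrm{PT}}},
          \qquad
          |z| \le 2 \ \text{satisfactory}, \quad
          2 < |z| < 3 \ \text{questionable}, \quad
          |z| \ge 3 \ \text{unsatisfactory}
        \]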

  14. What we (authors, reviewers and editors of scientific work) can learn from the analytical history of biological 3-nitrotyrosine.

    PubMed

    Tsikas, Dimitrios

    2017-07-15

Tyrosine and tyrosine residues in proteins are attacked by the reactive oxygen and nitrogen species peroxynitrite (O=N-OO−) to generate 3-nitrotyrosine (3-NT) and 3-nitrotyrosine-proteins (3-NTProt), respectively. 3-NT and 3-NTProt are widely accepted as biomarkers of nitr(os)ative stress. Over the years many different analytical methods have been reported for 3-NT and 3-NTProt. Reported concentrations often differ by more than three orders of magnitude, indicative of serious analytical problems. Strategies to overcome pre-analytical and analytical shortcomings and pitfalls have been proposed. The present review investigated whether recently published work on the quantitative measurement of biological 3-nitrotyrosine adequately considered the analytical past of this biomolecule. 3-Nitrotyrosine was taken as a representative of biomolecules that occur in biological samples in the pM-to-nM concentration range. This examination revealed that in many cases the main protagonists involved in the publication of scientific work, i.e., authors, reviewers and editors, failed to do so. Learning from the analytical history of 3-nitrotyrosine means advancing analytical and biological science and implies the following key issues. (1) Choosing the most reliable analytical approach in terms of sensitivity and accuracy; presently this is best achieved by stable-isotope dilution tandem mass spectrometry coupled with gas chromatography (GC-MS/MS) or liquid chromatography (LC-MS/MS). (2) Minimizing artificial formation of 3-nitrotyrosine during sample workup, a major pitfall in 3-nitrotyrosine analysis. (3) Validating the final method adequately in the intended biological matrix and the established concentration range. (4) Inviting experts in the field for critical evaluation of the novelty and reliability of the proposed analytical method, placing special emphasis on the compliance of the analytical outcome with 3-nitrotyrosine concentrations obtained by validated GC-MS/MS and LC-MS/MS methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Sampling of atmospheric carbonyl compounds for determination by liquid chromatography after 2,4-dinitrophenylhydrazine labelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vairavamurthy, A.; Roberts, J.M.; Newman, L.

    1991-06-01

Carbonyl compounds are both primary (directly emitted) and secondary (formed in situ) atmospheric species, which play a major role in tropospheric photochemistry. Because of trace concentrations (parts-per-billion and lower), ambient air measurements of carbonyls pose serious analytical problems. Generally, chromatographic approaches combined with chemical derivatization have been used to enhance sensitivity and selectivity in analysis. Currently, the liquid chromatographic method coupled to 2,4-dinitrophenylhydrazine derivatization (DNPH-LC) is in widespread use. Interferences arising from similar compounds are greatly minimized by chromatographic separation; however, those in the air sampling step, especially with ozone, continue to be problematic and remain to be resolved. We discuss here the different sampling techniques used for time-integrated collection of carbonyls in the DNPH-LC methods. Emphasis is placed on addressing: (1) the principles, advantages, and limitations of sampling techniques; (2) problems associated with reagent blank and sampling instrument; and (3) effects of atmospheric co-pollutants, especially ozone. 58 refs., 8 figs., 3 tabs.

  16. Critical and systematic evaluation of data for estimating human exposures to 2,4-dichlorophenoxyacetic acid (2,4-D) - quality and generalizability.

    PubMed

    LaKind, Judy S; Burns, Carol J; Naiman, Daniel Q; O'Mahony, Cian; Vilone, Giulia; Burns, Annette J; Naiman, Joshua S

    2017-01-01

The herbicide 2,4-dichlorophenoxyacetic acid (2,4-D) has been commercially available since the 1940s. Despite decades of data on 2,4-D in food, air, soil, and water, as well as in humans, the quality of these data has not been comprehensively evaluated. Using selected elements of the Biomonitoring, Environmental Epidemiology, and Short-lived Chemicals (BEES-C) instrument (temporal variability, avoidance of sample contamination, analyte stability, and urinary methods of matrix adjustment), the quality of 156 publications of environmental- and biomonitoring-based 2,4-D data was examined. Few publications documented the steps taken to avoid sample contamination. Similarly, most studies did not demonstrate the stability of the analyte from sample collection to analysis. Less than half of the biomonitoring publications reported both creatinine-adjusted and unadjusted urine concentrations. The scope and detail of data needed to assess temporal variability and sources of 2,4-D varied widely across the reviewed studies. Exposures to short-lived chemicals such as 2,4-D are affected by numerous and changing external factors, including application practices and formulations. At a minimum, greater transparency in reporting of quality control measures is needed. Perhaps the greatest challenge for the exposure community is the ability to reach consensus on how to address problems specific to short-lived chemical exposures in observational epidemiology investigations. More extensive conversations are needed to advance our understanding of human exposures and enable interpretation of these data to catch up to analytical capabilities. The problems defined in this review remain exquisitely difficult to address for chemicals like 2,4-D, with short and variable environmental and physiological half-lives and with exposures affected by numerous and changing external factors.

  17. Multiclass Bayes error estimation by a feature space sampling technique

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.

    1979-01-01

A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class 4-feature discrimination problem with previously reported results, and to 4-class 4-feature multispectral scanner Landsat data classified by training and testing of the available data.
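
    The paper's contribution is the transform-based integration itself; as a brute-force cross-check of the same quantity, the Bayes error can also be estimated by Monte Carlo directly from the class statistics. The sketch below uses a synthetic 2-class, 4-feature Gaussian configuration with invented statistics.

        # Monte Carlo estimate of the multiclass Gaussian Bayes error from class
        # statistics alone (a cross-check, not the paper's transform-based
        # integration). Synthetic M=2, N=4 configuration.
        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(3)
        priors = np.array([0.5, 0.5])
        means = [np.zeros(4), np.full(4, 1.0)]
        covs = [np.eye(4), 1.5 * np.eye(4)]
        classes = [multivariate_normal(m, c) for m, c in zip(means, covs)]

        n = 200_000
        counts = rng.multinomial(n, priors)        # draw class labels in bulk
        x = np.vstack([classes[k].rvs(size=counts[k], random_state=rng)
                       for k in range(len(priors))])

        # Bayes rule picks argmax_k prior_k * p_k(x); its error is 1 - E[max posterior]
        dens = np.column_stack([p * c.pdf(x) for p, c in zip(priors, classes)])
        bayes_error = 1.0 - np.mean(dens.max(axis=1) / dens.sum(axis=1))
        print(f"estimated Bayes error: {bayes_error:.4f}")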

  18. Eco-analytical Methodology in Environmental Problems Monitoring

    NASA Astrophysics Data System (ADS)

    Agienko, M. I.; Bondareva, E. P.; Chistyakova, G. V.; Zhironkina, O. V.; Kalinina, O. I.

    2017-01-01

Among the problems common to all mankind, whose solutions will influence the prospects of civilization, the monitoring of the ecological situation occupies a very important place. Solving this problem requires a specific methodology based on an eco-analytical comprehension of global issues. Eco-analytical methodology should help in the search for an optimal balance between environmental problems and accelerating scientific and technical progress. The focus of governments, corporations, scientists and nations on the production and consumption of material goods causes great damage to the environment; as a result, the activity of environmentalists has developed quite spontaneously, as a complement to productive activities. The challenge posed by environmental problems for science is therefore the formation of eco-analytical reasoning and the monitoring of global problems common to the whole of humanity, with the aim of finding an optimal trajectory of industrial development that prevents irreversible damage to the biosphere that could halt the progress of civilization.

  19. The Fundamental Flaws of Immunoassays and Potential Solutions Using Tandem Mass Spectrometry

    PubMed Central

    Hoofnagle, Andrew N.; Wener, Mark H.

    2009-01-01

    Immunoassays have made it possible to measure dozens of individual proteins and other analytes in human samples for help in establishing the diagnosis and prognosis of disease. In too many cases the results of those measurements are misleading and can lead to unnecessary treatment or missed opportunities for therapeutic interventions. These cases stem from problems inherent to immunoassays performed with human samples, which include a lack of concordance across platforms, autoantibodies, anti-reagent antibodies, and the high-dose hook effect. Tandem mass spectrometry may represent a detection method capable of alleviating many of the flaws inherent to immunoassays. We review our understanding of the problems associated with immunoassays on human specimens and describe methodologies using tandem mass spectrometry that could solve some of those problems. We also provide a critical discussion of the potential pitfalls of novel mass spectrometric approaches in the clinical laboratory. PMID:19538965

  20. Causal inference and the data-fusion problem

    PubMed Central

    Bareinboim, Elias; Pearl, Judea

    2016-01-01

    We review concepts, principles, and tools that unify current approaches to causal analysis and attend to new challenges presented by big data. In particular, we address the problem of data fusion—piecing together multiple datasets collected under heterogeneous conditions (i.e., different populations, regimes, and sampling methods) to obtain valid answers to queries of interest. The availability of multiple heterogeneous datasets presents new opportunities to big data analysts, because the knowledge that can be acquired from combined data would not be possible from any individual source alone. However, the biases that emerge in heterogeneous environments require new analytical tools. Some of these biases, including confounding, sampling selection, and cross-population biases, have been addressed in isolation, largely in restricted parametric models. We here present a general, nonparametric framework for handling these biases and, ultimately, a theoretical solution to the problem of data fusion in causal inference tasks. PMID:27382148

  1. Identification of Fatty Acids, Phospholipids, and Their Oxidation Products Using Matrix-Assisted Laser Desorption Ionization Mass Spectrometry and Electrospray Ionization Mass Spectrometry

    ERIC Educational Resources Information Center

    Harmon, Christopher W.; Mang, Stephen A.; Greaves, John; Finlayson-Pitts, Barbara J.

    2010-01-01

    Electrospray ionization mass spectrometry (ESI-MS) and matrix-assisted laser desorption ionization mass spectrometry (MALDI-MS) have found increasing application in the analysis of biological samples. Using these techniques to solve problems in analytical chemistry should be an essential component of the training of undergraduate chemists. We…

  2. A framework for inference about carnivore density from unstructured spatial sampling of scat using detector dogs

    Treesearch

    Craig M. Thompson; J. Andrew Royle; James D. Garner

    2012-01-01

    Wildlife management often hinges upon an accurate assessment of population density. Although undeniably useful, many of the traditional approaches to density estimation such as visual counts, livetrapping, or mark–recapture suffer from a suite of methodological and analytical weaknesses. Rare, secretive, or highly mobile species exacerbate these problems through the...

  3. On Heat Transfer through a Solid Slab Heated Uniformly and Periodically: Determination of Thermal Properties

    ERIC Educational Resources Information Center

    Rojas-Trigos, J. B.; Bermejo-Arenas, J. A.; Marin, E.

    2012-01-01

    In this paper, some heat transfer characteristics through a sample that is uniformly heated on one of its surfaces by a power density modulated by a periodical square wave are discussed. The solution of this problem has two contributions, comprising a transient term and an oscillatory term, superposed to it. The analytical solution is compared to…

  4. Some thoughts on problems associated with various sampling media used for environmental monitoring

    USGS Publications Warehouse

    Horowitz, A.J.

    1997-01-01

Modern analytical instrumentation is capable of measuring a variety of trace elements at concentrations down into the single or double digit parts-per-trillion (ng l-1) range. This holds for the three most common sample media currently used in environmental monitoring programs: filtered water, whole-water and separated suspended sediment. Unfortunately, current analytical capabilities have exceeded the current capacity to collect both uncontaminated and representative environmental samples. The success of any trace element monitoring program requires that this issue be both understood and addressed. The environmental monitoring of trace elements requires the collection of calendar- and event-based dissolved and suspended sediment samples. There are unique problems associated with the collection and chemical analyses of both types of sample media. Over the past 10 years, reported ambient dissolved trace element concentrations have declined. Generally, these decreases do not reflect better water quality, but rather improvements in the procedures used to collect, process, preserve and analyze these samples without contaminating them during these steps. Further, recent studies have shown that the currently accepted operational definition of dissolved constituents (material passing a 0.45 µm membrane filter) is inadequate owing to sampling and processing artifacts. The existence of these artifacts raises questions about the generation of accurate, precise and comparable 'dissolved' trace element data. Suspended sediment and associated trace elements can display marked short- and long-term spatial and temporal variability. This implies that spatially representative samples can only be obtained by generating composites using depth- and width-integrated sampling techniques. Additionally, temporal variations have led to the view that the determination of annual trace element fluxes may require nearly constant (e.g., high-frequency) sampling and subsequent chemical analyses. Ultimately, sampling frequency for flux estimates becomes dependent on the time period of concern (daily, weekly, monthly, yearly) and the amount of acceptable error associated with these estimates.

  5. The role of light microscopy in aerospace analytical laboratories

    NASA Technical Reports Server (NTRS)

    Crutcher, E. R.

    1977-01-01

    Light microscopy has greatly reduced analytical flow time and added new dimensions to laboratory capability. Aerospace analytical laboratories are often confronted with problems involving contamination, wear, or material inhomogeneity. The detection of potential problems and the solution of those that develop necessitate the most sensitive and selective applications of sophisticated analytical techniques and instrumentation. This inevitably involves light microscopy. The microscope can characterize and often identify the cause of a problem in 5-15 minutes, with confirmatory tests generally taking less than one hour. Light microscopy has made, and will continue to make, a very significant contribution to the analytical capabilities of aerospace laboratories.

  6. Analytical pricing formulas for hybrid variance swaps with regime-switching

    NASA Astrophysics Data System (ADS)

    Roslan, Teh Raihana Nazirah; Cao, Jiling; Zhang, Wenjun

    2017-11-01

    The problem of pricing discretely-sampled variance swaps under stochastic volatility, stochastic interest rate and regime-switching is considered in this paper. The Heston stochastic volatility model is extended by adding the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. In addition, the parameters of the model are permitted to switch according to a continuous-time observable Markov chain process. This hybrid model can be used to describe certain macroeconomic conditions, for example the changing phases of the business cycle. The outcome of our regime-switching hybrid model is presented in terms of analytical pricing formulas for variance swaps.
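
    For readers unfamiliar with the instrument being priced, the following minimal Monte Carlo sketch computes the fair strike of a discretely-sampled variance swap under a plain Heston model (zero interest rate, no regime switching; all parameter values hypothetical), i.e., the quantity that the paper's analytical formulas deliver in closed form for the richer hybrid model:

        import numpy as np

        rng = np.random.default_rng(1)
        # Hypothetical Heston parameters (illustrative only)
        kappa, theta, sigma_v, rho = 2.0, 0.04, 0.3, -0.7
        S0, v0, T, N, paths = 100.0, 0.04, 1.0, 252, 20000
        dt = T / N

        S = np.full(paths, S0)
        v = np.full(paths, v0)
        rv = np.zeros(paths)  # running sum of squared daily log-returns
        for _ in range(N):
            z1 = rng.standard_normal(paths)
            z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(paths)
            S_new = S * np.exp(-0.5 * v * dt + np.sqrt(v * dt) * z1)  # r = 0
            rv += np.log(S_new / S) ** 2
            v = np.abs(v + kappa * (theta - v) * dt + sigma_v * np.sqrt(v * dt) * z2)
            S = S_new

        # Fair variance strike = expected annualized realized variance
        print("fair strike (variance points):", (rv / T).mean())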

  7. Ambient Mass Spectrometry Imaging Using Direct Liquid Extraction Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laskin, Julia; Lanekoff, Ingela

    2015-11-13

    Mass spectrometry imaging (MSI) is a powerful analytical technique that enables label-free spatial localization and identification of molecules in complex samples.1-4 MSI applications range from forensics5 to clinical research6 and from understanding microbial communication7-8 to imaging biomolecules in tissues.1, 9-10 Recently, MSI protocols have been reviewed.11 Ambient ionization techniques enable direct analysis of complex samples under atmospheric pressure without special sample pretreatment.3, 12-16 In fact, in ambient ionization mass spectrometry, sample processing (e.g., extraction, dilution, preconcentration, or desorption) occurs during the analysis.17 This substantially speeds up analysis and eliminates any possible effects of sample preparation on the localization of molecules in the sample.3, 8, 12-14, 18-20 Venter and co-workers have classified ambient ionization techniques into three major categories based on the sample processing steps involved: 1) liquid extraction techniques, in which analyte molecules are removed from the sample and extracted into a solvent prior to ionization; 2) desorption techniques capable of generating free ions directly from substrates; and 3) desorption techniques that produce larger particles subsequently captured by an electrospray plume and ionized.17 This review focuses on localized analysis and ambient imaging of complex samples using a subset of ambient ionization methods broadly defined as “liquid extraction techniques” based on the classification introduced by Venter and co-workers.17 Specifically, we include techniques where analyte molecules are desorbed from solid or liquid samples using charged droplet bombardment, liquid extraction, physisorption, chemisorption, mechanical force, laser ablation, or laser capture microdissection. Analyte extraction is followed by soft ionization that generates ions corresponding to intact species. Some of the key advantages of liquid extraction techniques include the ease of operation, ability to analyze samples in their native environments, speed of analysis, and ability to tune the extraction solvent composition to the problem at hand. For example, solvent composition may be optimized for efficient extraction of different classes of analytes from the sample or for quantification or online derivatization through reactive analysis. In this review, we will: 1) introduce individual liquid extraction techniques capable of localized analysis and imaging, 2) describe approaches for quantitative MSI experiments free of matrix effects, 3) discuss advantages of reactive analysis for MSI experiments, and 4) highlight selected applications (published between 2012 and 2015) that focus on imaging and spatial profiling of molecules in complex biological and environmental samples.

  8. Prevalence of Pre-Analytical Errors in Clinical Chemistry Diagnostic Labs in Sulaimani City of Iraqi Kurdistan

    PubMed Central

    2017-01-01

    Background Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Materials and Methods Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician’s request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and to examine the types of platforms used in the selected diagnostic labs. Results The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results. PMID:28107395

  9. Prevalence of Pre-Analytical Errors in Clinical Chemistry Diagnostic Labs in Sulaimani City of Iraqi Kurdistan.

    PubMed

    Najat, Dereen

    2017-01-01

    Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician's request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and to examine the types of platforms used in the selected diagnostic labs. The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results.
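
    The 'proportional Z test' referred to above is the standard two-proportion z-test; a minimal self-contained sketch with made-up counts (not the study's data):

        from math import sqrt
        from statistics import NormalDist

        def two_proportion_z(x1, n1, x2, n2):
            """Two-sided two-proportion z-test with pooled standard error."""
            p1, p2 = x1 / n1, x2 / n2
            p = (x1 + x2) / (n1 + n2)  # pooled proportion
            se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
            z = (p1 - p2) / se
            return z, 2 * (1 - NormalDist().cdf(abs(z)))

        # Hypothetical counts of hemolyzed samples in two labs
        z, p = two_proportion_z(54, 600, 31, 550)
        print(f"z = {z:.2f}, p = {p:.4f}")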

  10. Limited stability of thiopurine metabolites in blood samples: relevant in research and clinical practise.

    PubMed

    de Graaf, P; Vos, R M; de Boer, N H K; Sinjewel, A; Jharap, B; Mulder, C J J; van Bodegraven, A A; Veldkamp, A I

    2010-06-01

    Monitoring of the thiopurine metabolites 6-thioguanine nucleotides (6-TGN) and 6-methylmercaptopurine (6-MMP) is used to assess compliance and explain adverse reactions in IBD-patients. Correlations between dosage, metabolite concentrations and therapeutic efficacy or toxicity are contradictory. Research is complicated by analytical problems, because the matrices analyzed and the analytical procedures vary widely. Moreover, the stability of thiopurine metabolites is not well documented, yet pivotal for interpretation of analytical outcomes. Therefore, we prospectively investigated metabolite stability in blood samples under standard storage conditions. Stability at room temperature and refrigeration (22 degrees C, 4 degrees C) was investigated during 1 week, and frozen samples (-20 degrees C, -80 degrees C) were analyzed during 6 months of storage. Ten patient samples were analyzed for each study period. Median 6-TGN concentrations on day 7 decreased significantly to 53% and 90% during storage at ambient temperature and refrigeration, respectively. Median 6-MMP concentrations on day 7 decreased significantly to 55% and 86%, respectively. Samples stored at -20 degrees C also showed significant decreases in both 6-TGN and 6-MMP in comparison with baseline values. At -80 degrees C, only 6-MMP showed a significant decrease in values compared to baseline. The stability of thiopurine metabolites is clearly a limiting factor in studies investigating the utilisation of therapeutic drug monitoring (TDM) and correlations with therapeutic outcome in IBD-patients. This has to be accounted for in clinical practice and in (multi-center) trials investigating thiopurine drugs. Copyright 2010. Published by Elsevier B.V.

  11. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    USGS Publications Warehouse

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
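
    A minimal sketch of the kind of conversion model described, fitting ordinary least squares on log-transformed synthetic pairs (the log-log form and every number here are illustrative assumptions, not the authors' exact specification):

        import numpy as np

        rng = np.random.default_rng(2)
        # Synthetic paired results: 119-congener sum vs. full 209-congener sum (ng/g)
        sum119 = rng.lognormal(3, 0.8, 40)
        sum209 = sum119 / 0.93 * np.exp(rng.normal(0, 0.05, 40))  # ~93% capture

        # Conversion model: log(sum209) = a + b * log(sum119)
        b, a = np.polyfit(np.log(sum119), np.log(sum209), 1)
        pred = np.exp(a + b * np.log(sum119))
        rpd = 100 * np.abs(pred - sum209) / ((pred + sum209) / 2)
        print(f"slope = {b:.3f}, intercept = {a:.3f}, mean RPD = {rpd.mean():.1f}%")

    Once fitted on samples analyzed by both methods, such a model converts ΣPCB values from one congener set to the other so that long-term records remain comparable.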

  12. National survey on the pre-analytical variability in a representative cohort of Italian laboratories.

    PubMed

    Lippi, Giuseppe; Montagnana, Martina; Giavarina, Davide

    2006-01-01

    Owing to remarkable advances in automation, laboratory technology and informatics, the pre-analytical phase has become the major source of variability in laboratory testing. The present survey investigated the development of several pre-analytical processes within a representative cohort of Italian clinical laboratories. A seven-point questionnaire was designed to investigate the following issues: 1a) the mean outpatient waiting time before check-in and 1b) the mean time from check-in to sample collection; 2) the mean time from sample collection to analysis; 3) the type of specimen collected for clinical chemistry testing; 4) the degree of pre-analytical automation; 5a) the number of samples shipped to other laboratories and 5b) the availability of standardised protocols for transportation; 6) the conditions for specimen storage; and 7) the availability and type of guidelines for management of unsuitable specimens. The questionnaire was administered to 150 laboratory specialists attending the SIMEL (Italian Society of Laboratory Medicine) National Meeting in June 2006. 107 questionnaires (71.3%) were returned. Data analysis revealed a high degree of variability among laboratories in the time required for check-in, outpatient sampling, sample transportation to the referral laboratory and analysis upon arrival. Only 31% of laboratories have automated some pre-analytical steps. Of the 87% of laboratories that ship specimens to other facilities without sample preparation, 19% have no standardised protocol for transportation. For conventional clinical chemistry testing, 74% of the laboratories use serum evacuated tubes (59% with and 15% without serum separator), whereas the remaining 26% use lithium-heparin evacuated tubes (11% with and 15% without plasma separator). The storage period and conditions for rerun/retest vary widely. Only 63% of laboratories have a codified procedure for the management of unsuitable specimens, which are recognised by visual inspection (69%) or automatic detection (29%). Only 56% of the laboratories have standardised procedures for the management of unsuitable specimens, which vary widely on a local basis. The survey highlights broad heterogeneity in several pre-analytical processes among Italian laboratories. The lack of reliable guidelines encompassing evidence-based practice is a major problem for the standardisation of this crucial part of the testing process and represents a major challenge for laboratory medicine in the 2000s.

  13. [Patient identification errors and biological samples in the analytical process: Is it possible to improve patient safety?].

    PubMed

    Cuadrado-Cenzual, M A; García Briñón, M; de Gracia Hills, Y; González Estecha, M; Collado Yurrita, L; de Pedro Moro, J A; Fernández Pérez, C; Arroyo Fernández, M

    2015-01-01

    Errors in the identification of patients and biological samples are among the problems carrying the highest risk of causing an adverse event for the patient. The aims were to detect and analyse the causes of patient identification errors in analytical requests (PIEAR) from emergency departments, and to develop improvement strategies. A process and protocol was designed, to be followed by all professionals involved in the requesting and performing of laboratory tests. Evaluation and monitoring indicators of PIEAR were determined, before and after the implementation of these improvement measures (years 2010-2014). A total of 316 PIEAR were detected in a total of 483,254 emergency service requests during the study period, representing a mean of 6.80/10,000 requests. Patient identification failure was the most frequent error in all the 6-monthly periods assessed, with a significant difference (P<.0001). The improvement strategies applied were shown to be effective in detecting PIEAR, as well as in preventing such errors. However, we must continue working with this strategy, promoting a culture of safety among all the professionals involved, and trying to achieve the goal that 100% of analytical requests and samples are properly identified. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.

  14. A Literature Review-Problem Definition Studies on Selected Toxic Chemicals. Volume 1. Occupational Health and Safety Aspects of Diesel Fuel and White Smoke Generated from It

    DTIC Science & Technology

    1978-04-01

    Keywords: physical properties; heating oil; mammalian toxicity; sampling. Literature is reviewed (75…) … mutations. Diesel fuel may be excreted from the body in the urine, or exhaled from the lungs. Persons who may be exposed to diesel fuel white smoke should be… Report sections include: VIII. Carcinogenicity, Mutagenicity, Teratogenicity; IX. Industrial Hygiene Practices and Standards; X. Sampling and Analytic Techniques.

  15. Groundwater quality assessment for drinking and agriculture purposes in Abhar city, Iran.

    PubMed

    Jafari, Khadijeh; Asghari, Farzaneh Baghal; Hoseinzadeh, Edris; Heidari, Zahra; Radfard, Majid; Saleh, Hossein Najafi; Faraji, Hossein

    2018-08-01

    The main objective of this study is to assess the quality of groundwater for drinking and agricultural purposes in Abhar city. The analytical results show elevated values of electrical conductivity (100% of samples), total hardness (66.7%), total dissolved solids (40%), magnesium (23%) and sulfate (13.3%), which indicate signs of deterioration relative to the WHO and Iranian drinking water standards. As for agricultural indices, 73.3% of the samples fell into the hard water category in terms of the hardness index, while 73.3% were classified as good in terms of sodium content; the main problem for agricultural use was therefore estimated to be total hardness. For the RSC index, 100% of the samples were desirable. Among the physicochemical parameters for drinking water, 100% of the samples were undesirable in terms of electrical conductivity, while 100% were desirable for the sodium and chlorine parameters. Therefore, the main water quality problems in Abhar relate to electrical conductivity and total hardness.

  16. Coping with effects of high dissolved salt samples on the inductively coupled plasma spectrometer

    Treesearch

    Jane E. Hislop; James W. Hornbeck; James W. Hornbeck

    2002-01-01

    Research on acidic forest soils typically uses unbuffered salt solutions as extractants for exchangeable cations. Our lab uses 1 M NH4Cl extractant for exchangeable cations (Ca, K, Mg, and Na) and 1 M KCl for exchangeable aluminum. The resulting high dissolved salt solutions presented chronic analytical problems on the flame atomic absorption spectrophotometer (AAS) and...

  17. Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)

    NASA Astrophysics Data System (ADS)

    Dubinskii, Yu A.; Osipenko, A. S.

    2000-02-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.

  18. Laser-induced breakdown spectroscopy (LIBS), part I: review of basic diagnostics and plasma-particle interactions: still-challenging issues within the analytical plasma community.

    PubMed

    Hahn, David W; Omenetto, Nicoló

    2010-12-01

    Laser-induced breakdown spectroscopy (LIBS) has become a very popular analytical method in the last decade in view of some of its unique features such as applicability to any type of sample, practically no sample preparation, remote sensing capability, and speed of analysis. The technique has a remarkably wide applicability in many fields, and the number of applications is still growing. From an analytical point of view, the quantitative aspects of LIBS may be considered its Achilles' heel, first due to the complex nature of the laser-sample interaction processes, which depend upon both the laser characteristics and the sample material properties, and second due to the plasma-particle interaction processes, which are space and time dependent. Together, these may cause undesirable matrix effects. Ways of alleviating these problems rely upon the description of the plasma excitation-ionization processes through the use of classical equilibrium relations and therefore on the assumption that the laser-induced plasma is in local thermodynamic equilibrium (LTE). Even in this case, the transient nature of the plasma and its spatial inhomogeneity need to be considered and overcome in order to justify the theoretical assumptions made. This first article focuses on the basic diagnostics aspects and presents a review of the past and recent LIBS literature pertinent to this topic. Previous research on non-laser-based plasma literature, and the resulting knowledge, is also emphasized. The aim is, on one hand, to make the readers aware of such knowledge and on the other hand to trigger the interest of the LIBS community, as well as the larger analytical plasma community, in attempting some diagnostic approaches that have not yet been fully exploited in LIBS.
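
    As background on the 'classical equilibrium relations' invoked above: under the LTE assumption, emission line intensities follow the Boltzmann distribution, which leads to the standard Boltzmann-plot relation (a textbook expression, not quoted from this review):

        \[
          \ln\!\left(\frac{I_{ki}\,\lambda_{ki}}{g_k A_{ki}}\right)
            = -\frac{E_k}{k_B T}
            + \ln\!\left(\frac{F\,C_s}{U_s(T)}\right),
        \]

    where I_{ki} and \lambda_{ki} are the intensity and wavelength of the transition, A_{ki} its transition probability, g_k and E_k the degeneracy and energy of the upper level, U_s(T) the partition function, C_s the emitting species concentration and F an experimental factor; the plasma temperature T follows from the slope of the plot against E_k.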

  19. Quality-control results for ground-water and surface-water data, Sacramento River Basin, California, National Water-Quality Assessment, 1996-1998

    USGS Publications Warehouse

    Munday, Cathy; Domagalski, Joseph L.

    2003-01-01

    Evaluating the extent that bias and variability affect the interpretation of ground- and surface-water data is necessary to meet the objectives of the National Water-Quality Assessment (NAWQA) Program. Quality-control samples used to evaluate the bias and variability include annual equipment blanks, field blanks, field matrix spikes, surrogates, and replicates. This report contains quality-control results for the constituents critical to the ground- and surface-water components of the Sacramento River Basin study unit of the NAWQA Program. A critical constituent is one that was detected frequently (more than 50 percent of the time in blank samples), was detected at amounts exceeding water-quality standards or goals, or was important for the interpretation of water-quality data. Quality-control samples were collected along with ground- and surface-water samples during the high intensity phase (cycle 1) of the Sacramento River Basin NAWQA beginning early in 1996 and ending in 1998. Ground-water field blanks indicated contamination of varying levels of significance when compared with concentrations detected in environmental ground-water samples for ammonia, dissolved organic carbon, aluminum, and copper. Concentrations of aluminum in surface-water field blanks were significant when compared with environmental samples. Field blank samples collected for pesticide and volatile organic compound analyses revealed no contamination in either ground- or surface-water samples that would affect the interpretation of environmental data, with the possible exception of the volatile organic compound trichloromethane (chloroform) in ground water. Replicate samples for ground water and surface water indicate that variability resulting from sample collection, processing, and analysis was generally low. Some of the larger maximum relative percentage differences calculated for replicate samples occurred between samples having the lowest absolute concentration differences and(or) values near the reporting limit. Surrogate recoveries for pesticides analyzed by gas chromatography/mass spectrometry (GC/MS), pesticides analyzed by high performance liquid chromatography (HPLC), and volatile organic compounds in ground- and surface-water samples were within the acceptable limits of 70 to 130 percent, with median recovery values between 82 and 113 percent. The recovery percentages for surrogate compounds analyzed by HPLC had the highest standard deviation, 20 percent for ground-water samples and 16 percent for surface-water samples, and the lowest median values, 82 percent for ground-water samples and 91 percent for surface-water samples. Results were consistent with the recovery results described for the analytical methods. Field matrix spike recoveries for pesticide compounds analyzed using GC/MS in ground- and surface-water samples were comparable with published recovery data. Recoveries of carbofuran, a critical constituent in ground- and surface-water studies, and desethyl atrazine, a critical constituent in the ground-water study, could not be calculated because of problems with the analytical method. Recoveries of pesticides analyzed using HPLC in ground- and surface-water samples were generally low and comparable with published recovery data. Other methodological problems for HPLC analytes included nondetection of the spike compounds and estimated values of spike concentrations.
Recovery of field matrix spikes for volatile organic compounds generally were within the acceptable range, 70 and 130 percent for both ground- and surface-water samples, and median recoveries from 62 to 127 percent. High or low recoveries could be related to errors in the field, such as double spiking or using spike solution past its expiration date, rather than problems during analysis. The methodological changes in the field spike protocol during the course of the Sacramento River Basin study, which included decreasing the amount of spike solu
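
    The two workhorse statistics in this kind of quality-control reporting, percent recovery and relative percent difference, are simple to compute; a minimal sketch with hypothetical values:

        def percent_recovery(spiked, unspiked, spike_added):
            """Field-matrix-spike recovery, in percent."""
            return 100 * (spiked - unspiked) / spike_added

        def relative_percent_difference(a, b):
            """RPD between replicate results, in percent."""
            return 100 * abs(a - b) / ((a + b) / 2)

        # Hypothetical spike and replicate results (micrograms per liter)
        print(percent_recovery(spiked=4.1, unspiked=0.5, spike_added=4.0))  # 90.0
        print(relative_percent_difference(0.052, 0.047))  # ~10.1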

  20. Comparative study of some commercial samples of naga bhasma.

    PubMed

    Wadekar, Mrudula; Gogte, Viswas; Khandagale, Prasad; Prabhune, Asmita

    2004-04-01

    Naga bhasma is one of those reputed ayurvedic bhasmas which are claimed to possess some extraordinary medicinal properties. However, identification of a genuine sample of naga bhasma is a challenging problem. Because naga bhasma is at present manufactured by different ayurvedic pharmacies following different methods, these products are not standardised from either a chemical or a structural point of view. Therefore, a comparative study of these samples using modern analytical techniques is important and necessary to understand their current status. In this communication, such a study of naga bhasma from a chemical and structural point of view is reported, using XRD, IR and UV spectroscopy and thermogravimetry.

  1. Approximated analytical solution to an Ebola optimal control problem

    NASA Astrophysics Data System (ADS)

    Hincapié-Palacio, Doracelly; Ospina, Juan; Torres, Delfim F. M.

    2016-11-01

    An analytical expression for the optimal control of an Ebola problem is obtained. The analytical solution is found as a first-order approximation to the Pontryagin Maximum Principle via the Euler-Lagrange equation. An implementation of the method is given using the computer algebra system Maple. Our analytical solutions confirm the results recently reported in the literature using numerical methods.
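
    The paper implements its method in Maple; for illustration only, the analogous Euler-Lagrange step can be reproduced in Python with sympy on a hypothetical quadratic Lagrangian (chosen for brevity, not the Ebola model's actual cost functional):

        import sympy as sp

        t = sp.symbols('t')
        x = sp.Function('x')
        # Hypothetical Lagrangian L = (1/2) x'^2 + (1/2) x^2 (illustrative only)
        L = sp.Rational(1, 2) * sp.diff(x(t), t) ** 2 + sp.Rational(1, 2) * x(t) ** 2

        # Euler-Lagrange equation d/dt(dL/dx') - dL/dx = 0, then its analytical solution
        eq = sp.euler_equations(L, [x(t)], t)[0]
        print(eq)
        print(sp.dsolve(eq, x(t)))  # x(t) = C1*exp(-t) + C2*exp(t)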

  2. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    NASA Technical Reports Server (NTRS)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length, which is instantaneously inverted at regular intervals, are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.
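
    A plausible form of the governing equation described above, reconstructed from the setup rather than quoted from the paper, is the one-dimensional Smoluchowski convection-diffusion equation for the probability density P(z, t):

        \[
          \frac{\partial P}{\partial t}
            = D\,\frac{\partial^2 P}{\partial z^2}
            - U_s\,\frac{\partial P}{\partial z},
        \]

    with D the Brownian diffusivity, U_s the settling velocity, and no-flux conditions at the two ends of the cylinder; if \tau is the inversion interval and L the cylinder length, the periodicity condition can be written P(z, t + \tau) = P(L - z, t), expressing the instantaneous inversion.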

  3. Micro-focused ultrasonic solid-liquid extraction (muFUSLE) combined with HPLC and fluorescence detection for PAHs determination in sediments: optimization and linking with the analytical minimalism concept.

    PubMed

    Capelo, J L; Galesio, M M; Felisberto, G M; Vaz, C; Pessoa, J Costa

    2005-06-15

    Analytical minimalism is a concept that deals with the optimization of all stages of an analytical procedure so that it becomes less time, cost, sample, reagent and energy consuming. The guidelines provided in the USEPA extraction method 3550B recommend the use of focused ultrasound (FU), i.e., probe sonication, for the solid-liquid extraction of polycyclic aromatic hydrocarbons (PAHs), but ignore the principle of analytical minimalism. The problems related to the dead sonication zones, often present when high volumes are sonicated with a probe, are also not addressed. In this work, we demonstrate that successful extraction and quantification of PAHs from sediments can be done with low sample mass (0.125 g), low reagent volume (4 ml), short sonication time (3 min) and low sonication amplitude (40%). Two variables are particularly taken into account for total extraction: (i) the design of the extraction vessel and (ii) the solvent used to carry out the extraction. Results showed that PAH recoveries (EPA priority list) ranged between 77 and 101%, accounting for more than 95% for most of the PAHs studied here, as compared with the values obtained after Soxhlet extraction. Taking into account the results reported in this work, we recommend a revision of the EPA guidelines for PAH extraction from solid matrices with focused ultrasound, so that they match the analytical minimalism concept.

  4. The role of analytical science in natural resource decision making

    NASA Astrophysics Data System (ADS)

    Miller, Alan

    1993-09-01

    There is a continuing debate about the proper role of analytical (positivist) science in natural resource decision making. Two diametrically opposed views are evident, arguing for and against a more extended role for scientific information. The debate takes on a different complexion if one recognizes that certain kinds of problem, referred to here as “wicked” or “trans-science” problems, may not be amenable to the analytical process. Indeed, the mistaken application of analytical methods to trans-science problems may not only be a waste of time and money but also serve to hinder policy development. Since many environmental issues are trans-science in nature, then it follows that alternatives to analytical science need to be developed. In this article, the issues involved in the debate are clarified by examining the impact of the use of analytical methods in a particular case, the spruce budworm controversy in New Brunswick. The article ends with some suggestions about a “holistic” approach to the problem.

  5. Happy software developers solve problems better: psychological measurements in empirical software engineering

    PubMed Central

    Wang, Xiaofeng; Abrahamsson, Pekka

    2014-01-01

    For more than thirty years, it has been claimed that a way to improve software developers’ productivity and software quality is to focus on people and to provide incentives to make developers satisfied and happy. This claim has rarely been verified in software engineering research, which faces an additional challenge in comparison to more traditional engineering fields: software development is an intellectual activity and is dominated by often-neglected human factors (called human aspects in software engineering research). Among the many skills required for software development, developers must possess high analytical problem-solving skills and creativity for the software construction process. According to psychology research, affective states—emotions and moods—deeply influence the cognitive processing abilities and performance of workers, including creativity and analytical problem solving. Nonetheless, little research has investigated the correlation between the affective states, creativity, and analytical problem-solving performance of programmers. This article echoes the call to employ psychological measurements in software engineering research. We report a study with 42 participants to investigate the relationship between the affective states, creativity, and analytical problem-solving skills of software developers. The results offer support for the claim that happy developers are indeed better problem solvers in terms of their analytical abilities. The following contributions are made by this study: (1) providing a better understanding of the impact of affective states on the creativity and analytical problem-solving capacities of developers, (2) introducing and validating psychological measurements, theories, and concepts of affective states, creativity, and analytical-problem-solving skills in empirical software engineering, and (3) raising the need for studying the human factors of software engineering by employing a multidisciplinary viewpoint. PMID:24688866

  6. Happy software developers solve problems better: psychological measurements in empirical software engineering.

    PubMed

    Graziotin, Daniel; Wang, Xiaofeng; Abrahamsson, Pekka

    2014-01-01

    For more than thirty years, it has been claimed that a way to improve software developers' productivity and software quality is to focus on people and to provide incentives to make developers satisfied and happy. This claim has rarely been verified in software engineering research, which faces an additional challenge in comparison to more traditional engineering fields: software development is an intellectual activity and is dominated by often-neglected human factors (called human aspects in software engineering research). Among the many skills required for software development, developers must possess high analytical problem-solving skills and creativity for the software construction process. According to psychology research, affective states-emotions and moods-deeply influence the cognitive processing abilities and performance of workers, including creativity and analytical problem solving. Nonetheless, little research has investigated the correlation between the affective states, creativity, and analytical problem-solving performance of programmers. This article echoes the call to employ psychological measurements in software engineering research. We report a study with 42 participants to investigate the relationship between the affective states, creativity, and analytical problem-solving skills of software developers. The results offer support for the claim that happy developers are indeed better problem solvers in terms of their analytical abilities. The following contributions are made by this study: (1) providing a better understanding of the impact of affective states on the creativity and analytical problem-solving capacities of developers, (2) introducing and validating psychological measurements, theories, and concepts of affective states, creativity, and analytical-problem-solving skills in empirical software engineering, and (3) raising the need for studying the human factors of software engineering by employing a multidisciplinary viewpoint.

  7. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA) for Environmental Risk Management

    PubMed Central

    Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason

    2016-01-01

    With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM3 ontology to capture ERM objectives and to inference analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs. PMID:27983713

  8. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA) for Environmental Risk Management.

    PubMed

    Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason

    2016-12-15

    With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM³ ontology to capture ERM objectives and to inference analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs.

  9. Collaborative Visual Analytics: A Health Analytics Approach to Injury Prevention

    PubMed Central

    Fisher, Brian; Smith, Jennifer; Pike, Ian

    2017-01-01

    Background: Accurate understanding of complex health data is critical in order to deal with wicked health problems and make timely decisions. Wicked problems refer to ill-structured and dynamic problems that combine multidimensional elements, which often preclude the conventional problem solving approach. This pilot study introduces visual analytics (VA) methods to multi-stakeholder decision-making sessions about child injury prevention; Methods: Inspired by the Delphi method, we introduced a novel methodology—group analytics (GA). GA was pilot-tested to evaluate the impact of collaborative visual analytics on facilitating problem solving and supporting decision-making. We conducted two GA sessions. Collected data included stakeholders’ observations, audio and video recordings, questionnaires, and follow up interviews. The GA sessions were analyzed using the Joint Activity Theory protocol analysis methods; Results: The GA methodology triggered the emergence of ‘common ground’ among stakeholders. This common ground evolved throughout the sessions to enhance stakeholders’ verbal and non-verbal communication, as well as coordination of joint activities and ultimately collaboration on problem solving and decision-making; Conclusions: Understanding complex health data is necessary for informed decisions. Equally important, in this case, is the use of the group analytics methodology to achieve ‘common ground’ among diverse stakeholders about health data and their implications. PMID:28895928

  10. Collaborative Visual Analytics: A Health Analytics Approach to Injury Prevention.

    PubMed

    Al-Hajj, Samar; Fisher, Brian; Smith, Jennifer; Pike, Ian

    2017-09-12

    Background: Accurate understanding of complex health data is critical in order to deal with wicked health problems and make timely decisions. Wicked problems refer to ill-structured and dynamic problems that combine multidimensional elements, which often preclude the conventional problem solving approach. This pilot study introduces visual analytics (VA) methods to multi-stakeholder decision-making sessions about child injury prevention; Methods: Inspired by the Delphi method, we introduced a novel methodology-group analytics (GA). GA was pilot-tested to evaluate the impact of collaborative visual analytics on facilitating problem solving and supporting decision-making. We conducted two GA sessions. Collected data included stakeholders' observations, audio and video recordings, questionnaires, and follow up interviews. The GA sessions were analyzed using the Joint Activity Theory protocol analysis methods; Results: The GA methodology triggered the emergence of 'common ground' among stakeholders. This common ground evolved throughout the sessions to enhance stakeholders' verbal and non-verbal communication, as well as coordination of joint activities and ultimately collaboration on problem solving and decision-making; Conclusions: Understanding complex health data is necessary for informed decisions. Equally important, in this case, is the use of the group analytics methodology to achieve 'common ground' among diverse stakeholders about health data and their implications.

  11. Method for Operating a Sensor to Differentiate Between Analytes in a Sample

    DOEpatents

    Kunt, Tekin; Cavicchi, Richard E; Semancik, Stephen; McAvoy, Thomas J

    1998-07-28

    Disclosed is a method for operating a sensor to differentiate between first and second analytes in a sample. The method comprises the steps of determining an input temperature profile for the sensor which will enhance the difference in the output profiles of the sensor as between the first analyte and the second analyte; determining a first analyte output profile as observed when the temperature profile is applied to the sensor; determining a second analyte output profile as observed when the temperature profile is applied to the sensor; introducing the sensor to the sample while applying the temperature profile to the sensor, thereby obtaining a sample output profile; and evaluating the sample output profile against the first and second analyte output profiles to thereby determine which of the analytes is present in the sample.
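
    A minimal sketch of the final evaluation step, assuming a simple least-squares match between the measured sample output profile and the stored analyte output profiles (the patent text above does not prescribe this particular metric, and the profile shapes are hypothetical):

        import numpy as np

        def classify(sample_profile, reference_profiles):
            """Return the analyte whose stored output profile is closest,
            in the least-squares sense, to the measured sample profile."""
            return min(reference_profiles,
                       key=lambda name: np.sum((reference_profiles[name]
                                                - sample_profile) ** 2))

        # Hypothetical sensor output profiles recorded under the same input profile
        t = np.linspace(0, 1, 50)
        refs = {"analyte_1": np.exp(-3.0 * t), "analyte_2": np.exp(-1.5 * t)}
        sample = np.exp(-2.9 * t) + np.random.default_rng(3).normal(0, 0.02, 50)
        print(classify(sample, refs))  # -> analyte_1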

  12. Estimation of the curvature of the solid liquid interface during Bridgman crystal growth

    NASA Astrophysics Data System (ADS)

    Barat, Catherine; Duffar, Thierry; Garandet, Jean-Paul

    1998-11-01

    An approximate solution for the solid/liquid interface curvature due to the crucible effect in crystal growth is derived from simple heat flux considerations. The numerical modelling of the problem, carried out with the help of the finite element code FIDAP, supports the predictions of our analytical expression and allows us to identify its range of validity. Experimental interface curvatures, measured in gallium antimonide samples grown by the vertical Bridgman method, are seen to compare satisfactorily with analytical and numerical results. Other literature data are also in fair agreement with the predictions of our models in the case where the amount of heat carried by the crucible is small compared to the overall heat flux.

  13. Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods

    NASA Astrophysics Data System (ADS)

    Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.

    2011-12-01

    Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential of mitigating the curse of dimensionality by reducing the total number of unknowns while still describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA methods with geometric sampling (referred to as 'Method 1'), (2) PCA methods with MCMC sampling (referred to as 'Method 2'), and (3) PCA methods with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noisy data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, and both Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for PCA analysis is inconsistent with the true models, PCA methods with geometric or MCMC sampling will provide incorrect estimates.
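
    A compact sketch of the general recipe, PCA model reduction followed by MCMC sampling in the reduced space, on a toy convolution problem (five principal components, random-walk Metropolis; this illustrates the approach rather than reproducing the authors' methods):

        import numpy as np

        rng = np.random.default_rng(4)
        n = 50
        kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)

        def forward(m):
            """Toy forward model: smooth the signal with a Gaussian kernel."""
            return np.convolve(m, kernel / kernel.sum(), mode="same")

        # Prior realizations -> PCA basis (the model reduction step)
        idx = np.arange(n)
        cov = np.exp(-np.abs(np.subtract.outer(idx, idx)) / 5.0)
        prior = rng.multivariate_normal(np.zeros(n), cov, 500)
        U = np.linalg.svd(prior - prior.mean(0), full_matrices=False)[2][:5]

        m_true = np.sin(np.linspace(0, 3 * np.pi, n))
        data = forward(m_true) + rng.normal(0, 0.05, n)

        def log_post(c):
            """Gaussian data misfit plus unit-normal prior on the PC coefficients."""
            return (-0.5 * np.sum((forward(c @ U) - data) ** 2) / 0.05**2
                    - 0.5 * c @ c)

        c, lp = np.zeros(5), log_post(np.zeros(5))
        for _ in range(5000):  # random-walk Metropolis in the reduced space
            prop = c + 0.05 * rng.standard_normal(5)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                c, lp = prop, lp_prop
        print("posterior sample of PC coefficients:", np.round(c, 2))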

  14. Sampled-data-based consensus and containment control of multiple harmonic oscillators: A motion-planning approach

    NASA Astrophysics Data System (ADS)

    Liu, Yongfang; Zhao, Yu; Chen, Guanrong

    2016-11-01

    This paper studies the distributed consensus and containment problems for a group of harmonic oscillators with a directed communication topology. First, for consensus without a leader, a class of distributed consensus protocols is designed by using motion planning and Pontryagin's principle. The proposed protocol only requires relative information measurements at the sampling instants, without requiring information exchange over the sampled interval. By using stability theory and the properties of stochastic matrices, it is proved that the distributed consensus problem can be solved in the motion planning framework. Second, for the case with multiple leaders, a class of distributed containment protocols is developed for the followers such that their positions and velocities ultimately converge to the convex hull formed by those of the leaders. Compared with existing consensus algorithms, a remarkable advantage of the proposed sampled-data-based protocols is that the sampling periods, communication topologies and control gains are all decoupled and can be separately designed, which relaxes many restrictions in controller design. Finally, some numerical examples are given to illustrate the effectiveness of the analytical results.

  15. Background contamination by coplanar polychlorinated biphenyls (PCBs) in trace level high resolution gas chromatography/high resolution mass spectrometry (HRGC/HRMS) analytical procedures.

    PubMed

    Ferrario, J; Byrne, C; Dupuy, A E

    1997-06-01

    The addition of the "dioxin-like" polychlorinated biphenyl (PCB) congeners to the assessment of risk associated with the 2,3,7,8-chlorine substituted dioxins and furans has dramatically increased the number of laboratories worldwide that are developing analytical procedures for their detection and quantitation. Most of these procedures are based on established sample preparation and analytical techniques employing high resolution gas chromatography/high resolution mass spectrometry (HRGC/HRMS), which are used for the analyses of dioxin/furans at low parts-per-trillion (ppt) levels. A significant and widespread problem that arises when using these sample preparation procedures for the analysis of coplanar PCBs is the presence of background levels of these congeners. Industrial processes, urban incineration, leaking electrical transformers, hazardous waste accidents, and improper waste disposal practices have released appreciable quantities of PCBs into the environment. This contamination has resulted in the global distribution of these compounds via the atmosphere and their ubiquitous presence in ambient air. The background presence of these compounds in method blanks must be addressed when determining the exact concentrations of these and other congeners in environmental samples. In this study reliable procedures were developed to accurately define these background levels and assess their variability over the course of the study. The background subtraction procedures developed and employed increase the probability that the values reported accurately represent the concentrations found in the samples and were not biased due to this background contamination.
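
    A minimal sketch of one defensible blank-subtraction rule consistent with the approach described: subtract the mean method-blank level and censor results indistinguishable from blank variability (the factor of 3 is a common convention assumed here, and all values are hypothetical):

        import statistics

        def blank_corrected(sample_value, blank_values, k=3.0):
            """Subtract the mean method-blank level; censor results that fall
            within blank variability (mean + k * standard deviation)."""
            mu = statistics.mean(blank_values)
            sd = statistics.stdev(blank_values)
            if sample_value < mu + k * sd:
                return None  # not distinguishable from background
            return sample_value - mu

        # Hypothetical coplanar-PCB method blanks and two sample results (pg)
        blanks = [0.8, 1.1, 0.9, 1.3, 1.0]
        print(blank_corrected(4.2, blanks))  # ~3.18 pg after correction
        print(blank_corrected(1.4, blanks))  # None: within blank background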

  16. Background contamination by coplanar polychlorinated biphenyls (PCBs) in trace level high resolution gas chromatography/high resolution mass spectrometry (HRGC/HRMS) analytical procedures

    NASA Technical Reports Server (NTRS)

    Ferrario, J.; Byrne, C.; Dupuy, A. E. Jr

    1997-01-01

    The addition of the "dioxin-like" polychlorinated biphenyl (PCB) congeners to the assessment of risk associated with the 2,3,7,8-chlorine substituted dioxins and furans has dramatically increased the number of laboratories worldwide that are developing analytical procedures for their detection and quantitation. Most of these procedures are based on established sample preparation and analytical techniques employing high resolution gas chromatography/high resolution mass spectrometry (HRGC/HRMS), which are used for the analyses of dioxin/furans at low parts-per-trillion (ppt) levels. A significant and widespread problem that arises when using these sample preparation procedures for the analysis of coplanar PCBs is the presence of background levels of these congeners. Industrial processes, urban incineration, leaking electrical transformers, hazardous waste accidents, and improper waste disposal practices have released appreciable quantities of PCBs into the environment. This contamination has resulted in the global distribution of these compounds via the atmosphere and their ubiquitous presence in ambient air. The background presence of these compounds in method blanks must be addressed when determining the exact concentrations of these and other congeners in environmental samples. In this study reliable procedures were developed to accurately define these background levels and assess their variability over the course of the study. The background subtraction procedures developed and employed increase the probability that the values reported accurately represent the concentrations found in the samples and were not biased due to this background contamination.

  17. High heating rate thermal desorption for molecular surface sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ovchinnikova, Olga S.; Van Berkel, Gary J.

    2016-03-29

    A method for analyzing a sample having at least one analyte includes the step of heating the sample at a rate of at least 10⁶ K/s to thermally desorb at least one analyte from the sample. The desorbed analyte is collected. The analyte can then be analyzed.

  18. Enhanced spot preparation for liquid extractive sampling and analysis

    DOEpatents

    Van Berkel, Gary J.; King, Richard C.

    2015-09-22

    A method for performing surface sampling of an analyte, includes the step of placing the analyte on a stage with a material in molar excess to the analyte, such that analyte-analyte interactions are prevented and the analyte can be solubilized for further analysis. The material can be a matrix material that is mixed with the analyte. The material can be provided on a sample support. The analyte can then be contacted with a solvent to extract the analyte for further processing, such as by electrospray mass spectrometry.

  19. Fast transient analysis and first-stage collision-induced dissociation with the flowing atmospheric-pressure afterglow ionization source to improve analyte detection and identification.

    PubMed

    Shelley, Jacob T; Hieftje, Gary M

    2010-04-01

    The recent development of ambient desorption/ionization mass spectrometry (ADI-MS) has enabled fast, simple analysis of many different sample types. The ADI-MS sources have numerous advantages, including little or no required sample pre-treatment, simple mass spectra, and direct analysis of solids and liquids. However, problems of competitive ionization and limited fragmentation require sample-constituent separation, high mass accuracy, and/or tandem mass spectrometry (MS/MS) to detect, identify, and quantify unknown analytes. To maintain the inherent high throughput of ADI-MS, it is essential for the ion source/mass analyzer combination to measure fast transient signals and provide structural information. In the current study, the flowing atmospheric-pressure afterglow (FAPA) ionization source is coupled with a time-of-flight mass spectrometer (TOF-MS) to analyze fast transient signals (<500 ms FWHM). It was found that gas chromatography (GC) coupled with the FAPA source resulted in a reproducible (<5% RSD) and sensitive (detection limits of <6 fmol for a mixture of herbicides) system with analysis times of ca. 5 min. Introducing analytes to the FAPA in a transient was also shown to significantly reduce matrix effects caused by competitive ionization by minimizing the number and amount of constituents introduced into the ionization source. Additionally, MS/MS with FAPA-TOF-MS, enabling analyte identification, was performed via first-stage collision-induced dissociation (CID). Lastly, molecular and structural information was obtained across a fast transient peak by modulating the conditions that caused the first-stage CID.

  20. The Relationship of Social Problem-Solving Skills and Dysfunctional Attitudes with Risk of Drug Abuse among Dormitory Students at Isfahan University of Medical Sciences

    PubMed Central

    Nasrazadani, Ehteram; Maghsoudi, Jahangir; Mahrabi, Tayebeh

    2017-01-01

    Background: Dormitory students encounter multiple social factors which cause pressure, such as new social relationships, fear of the future, and separation from family, and which can lead to serious problems such as a tendency toward drug abuse. This research was conducted with the goal of determining social problem-solving skills, dysfunctional attitudes, and risk of drug abuse among dormitory students of Isfahan University of Medical Sciences, Iran. Materials and Methods: This was a descriptive-analytical, correlational, and cross-sectional study. The research sample consisted of 211 students living in dormitories. The participants were selected using a randomized quota sampling method. The data collection tools included the Social Problem-Solving Inventory (SPSI), Dysfunctional Attitude Scale (DAS), and Identifying People at Risk of Addiction Questionnaire. Results: The results indicated an inverse relationship between social problem-solving skills and risk of drug abuse (P = 0.0002), a direct relationship between dysfunctional attitude and risk of drug abuse (P = 0.030), and an inverse relationship between social problem-solving skills and dysfunctional attitude among students (P = 0.0004). Conclusions: Social problem-solving skills correlate with dysfunctional attitudes. Consequently, teaching these skills, and ways of forming effective attitudes, should be considered for dormitory students. PMID:28904539

  2. Determination of palladium, platinum and rhodium in used automobile catalysts and active pharmaceutical ingredients using high-resolution continuum source graphite furnace atomic absorption spectrometry and direct solid sample analysis

    NASA Astrophysics Data System (ADS)

    Resano, Martín; Flórez, María del Rosario; Queralt, Ignasi; Marguí, Eva

    2015-03-01

    This work investigates the potential of high-resolution continuum source graphite furnace atomic absorption spectrometry for the direct determination of Pd, Pt and Rh in two samples of very different nature. While analysis of active pharmaceutical ingredients is straightforward, and it is feasible to minimize matrix effects to the point that calibration can be carried out against aqueous standard solutions, the analysis of used automobile catalysts is more challenging, requiring the addition of a chemical modifier (NH4F·HF) to help release the analytes, a more vigorous temperature program and the use of a solid standard (CRM ERM®-EB504) for calibration. However, in both cases it was possible to obtain accurate results and precision values typically better than 10% RSD in a fast and simple way, with only two determinations needed for the three analytes, since Pt and Rh can be monitored simultaneously in both types of samples. Overall, the proposed methods seem well suited for the determination of these analytes in such samples, offering a greener and faster alternative that circumvents the traditional problems associated with sample digestion, requiring only a small amount of sample (0.05 mg per replicate for catalysts, and a few milligrams for the pharmaceuticals) and providing sufficient sensitivity to easily comply with regulations. The LODs achieved were 6.5 μg g⁻¹ (Pd), 8.3 μg g⁻¹ (Pt) and 9.3 μg g⁻¹ (Rh) for catalysts, which decreased to 0.08 μg g⁻¹ (Pd), 0.15 μg g⁻¹ (Pt) and 0.10 μg g⁻¹ (Rh) for pharmaceuticals.

  3. Effect of Geometry on Electrokinetic Characterization of Solid Surfaces.

    PubMed

    Kumar, Abhijeet; Kleinen, Jochen; Venzmer, Joachim; Gambaryan-Roisman, Tatiana

    2017-08-01

    An analytical approach is presented to describe pressure-driven streaming current (I_str) and streaming potential (U_str) generation in geometrically complex samples, for which the classical Helmholtz-Smoluchowski (H-S) equation is known to be inaccurate. The new approach is valid under the same prerequisite conditions that are used in the development of the H-S equation, that is, the electrical double layers (EDLs) are sufficiently thin and surface conductivity and electroviscous effects are negligible. The analytical methodology is developed using linear velocity profiles to describe liquid flow inside the EDLs and simplifying approximations to describe macroscopic flow. At first, a general expression is obtained to describe the I_str generated in different cross sections of an arbitrarily shaped sample. Thereafter, assuming that the generated U_str varies only along the pressure-gradient direction, an expression describing the variation of the generated U_str along the sample length is obtained. These expressions describing I_str and U_str generation constitute the theoretical foundation of this work, which is first applied to a set of three nonuniform cross-sectional capillaries and thereafter to a square array of cylindrical fibers (a model porous medium) for both parallel and transverse fiber orientations. Although analytical solutions cannot be obtained for real porous substrates because of their random structure, the new theory provides useful insights into the effect of important factors such as fiber orientation, sample porosity, and sample dimensions. The solutions obtained for the model porous media are used to devise strategies for more accurate zeta potential determination of porous fiber plugs. The new approach could thus be useful in resolving the long-standing problem of sample geometry dependence of zeta potential measurements.
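
    For context, the classical Helmholtz-Smoluchowski relations that this work generalizes can be written, for a uniform capillary of cross-section A and length L, in the standard textbook form below; sign conventions vary between authors, and these expressions are quoted for orientation rather than taken from the paper itself.

    ```latex
    \[
      I_{\mathrm{str}} = -\,\frac{\varepsilon_r \varepsilon_0 \zeta}{\eta}\,\frac{A}{L}\,\Delta p,
      \qquad
      \frac{U_{\mathrm{str}}}{\Delta p} = \frac{\varepsilon_r \varepsilon_0 \zeta}{\eta\,K_B},
    \]
    % \zeta: zeta potential; \varepsilon_r\varepsilon_0: liquid permittivity;
    % \eta: viscosity; K_B: bulk electrolyte conductivity. The uniform-capillary
    % geometry is exactly the assumption the paper relaxes.
    ```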

  4. Future Lunar Sampling Missions: Big Returns on Small Samples

    NASA Astrophysics Data System (ADS)

    Shearer, C. K.; Borg, L.

    2002-01-01

    The next sampling missions to the Moon will result in the return of sample mass (100 g to 1 kg) substantially smaller than those returned by the Apollo missions (380 kg). Lunar samples to be returned by these missions are vital for: (1) calibrating the late impact history of the inner solar system that can then be extended to other planetary surfaces; (2) deciphering the effects of catastrophic impacts on a planetary body (i.e. Aitken crater); (3) understanding the very late-stage thermal and magmatic evolution of a cooling planet; (4) exploring the interior of a planet; and (5) examining volatile reservoirs and transport on an airless planetary body. Can small lunar samples be used to answer these and other pressing questions concerning important solar system processes? Two potential problems with small, robotically collected samples are placing them in a geologic context and extracting robust planetary information. Although geologic context will always be a potential problem with any planetary sample, new lunar samples can be placed within the context of the important Apollo - Luna collections and the burgeoning planet-scale data sets for the lunar surface and interior. Here we illustrate the usefulness of applying both new or refined analytical approaches in deciphering information locked in small lunar samples.

  5. The Students Decision Making in Solving Discount Problem

    ERIC Educational Resources Information Center

    Abdillah; Nusantara, Toto; Subanji; Susanto, Hery; Abadyo

    2016-01-01

    This research reviews students' processes of making decisions intuitively, analytically, and interactively. The research was carried out using a discount problem specially created to explore students' intuitive, analytical, and interactive thinking. In solving discount problems, the researchers explored students' decisions in determining their attitude, which…

  6. Final report on mid-polarity analytes in food matrix: mid-polarity pesticides in tea

    NASA Astrophysics Data System (ADS)

    Sin, Della W. M.; Li, Hongmei; Wong, S. K.; Lo, M. F.; Wong, Y. L.; Wong, Y. C.; Mok, C. S.

    2015-01-01

    At the Paris meeting in April 2011, the CCQM Working Group on Organic Analysis (OAWG) agreed on a suite of Track A studies meant to support the assessment of measurement capabilities needed for the delivery of measurement services within the scope of the OAWG Terms of Reference. One of the studies discussed and agreed upon for the suite of ten Track A studies supporting the 5-year plan of the CCQM Core Competence assessment was CCQM-K95 'Mid-Polarity Analytes in Food Matrix: Mid-Polarity Pesticides in Tea'. This key comparison was co-organized by the Government Laboratory of Hong Kong Special Administrative Region (GL) and the National Institute of Metrology, China (NIM). To allow wider participation, a pilot study, CCQM-P136, was run in parallel. Participants' capabilities in measuring mid-polarity analytes in a food matrix were demonstrated through this key comparison. Most of the participating NMIs/DIs successfully measured beta-endosulfan and endosulfan sulphate in the sample; however, there is room for further improvement for some participants. This key comparison involved not only extraction, clean-up, analytical separation and selective detection of the analytes in a complex food matrix, but also the pre-treatment procedures applied to the material before the extraction process. The problem of incomplete extraction of the incurred analytes from the sample matrix may not be observed simply by using spike recovery. The relative standard deviations for the data included in the KCRV calculation in this key comparison were less than 7%, which was acceptable given the complexity of the matrix, the level of the analytes and the complexity of the analytical procedure.

  7. Bias Assessment of General Chemistry Analytes using Commutable Samples.

    PubMed

    Koerbin, Gus; Tate, Jillian R; Ryan, Julie; Jones, Graham Rd; Sikaris, Ken A; Kanowski, David; Reed, Maxine; Gill, Janice; Koumantakis, George; Yen, Tina; St John, Andrew; Hickman, Peter E; Simpson, Aaron; Graham, Peter

    2014-11-01

    Harmonisation of reference intervals for routine general chemistry analytes has been a goal for many years, but analytical bias may prevent it. To determine whether analytical bias is present when comparing methods, commutable samples (samples that have the same properties as the clinical samples routinely analysed) should be used as reference samples to eliminate the possibility of matrix effects. The use of commutable samples has improved the identification of unacceptable analytical performance in the Netherlands and Spain. The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has undertaken a pilot study using commutable samples in an attempt not only to determine country-specific reference intervals but to make them comparable between countries. Australia and New Zealand, through the Australasian Association of Clinical Biochemists (AACB), have also undertaken an assessment of analytical bias using commutable samples and determined that, of the 27 general chemistry analytes studied, 19 showed between-method biases sufficiently small not to prevent harmonisation of reference intervals. Application of evidence-based approaches, including the determination of analytical bias using commutable material, is necessary when seeking to harmonise reference intervals.
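
    In its simplest form, the bias check described above reduces to comparing each method's mean for a commutable sample with a reference target and an allowable-bias limit. The sketch below illustrates that logic only; the analytes, target values and limits are invented for the example and are not AACB data.

    ```python
    # Minimal sketch of a between-method bias check with a commutable sample.
    def percent_bias(method_mean, reference_value):
        return 100.0 * (method_mean - reference_value) / reference_value

    # (analyte, all-method target value, this method's mean, allowable bias %)
    results = [
        ("sodium",      140.0, 141.1, 0.9),
        ("potassium",     4.20,  4.32, 1.8),
        ("creatinine",   88.0,  93.5, 4.0),
    ]

    for analyte, target, mean, limit in results:
        bias = percent_bias(mean, target)
        verdict = "OK" if abs(bias) <= limit else "exceeds limit"
        print(f"{analyte}: bias {bias:+.1f}% (limit ±{limit}%) -> {verdict}")
    ```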

  8. Microfluidic-Based sample chips for radioactive solutions

    DOE PAGES

    Tripp, J. L.; Law, J. D.; Smith, T. E.; ...

    2015-01-01

    Historical nuclear fuel cycle process sampling techniques required sample volumes in the tens of milliliters. The radiation levels experienced by analytical personnel and equipment, in addition to the waste volumes generated from analysis of these samples, have been significant. These sample volumes also impacted accountability inventories of required analytes during process operations. To mitigate radiation dose and other issues associated with the historically larger sample volumes, a microcapillary sample chip was chosen for further investigation. The ability to obtain microliter-volume samples, coupled with a remote automated means of sample loading, tracking, and transporting to the analytical instrument, would greatly improve analytical efficiency while reducing both personnel exposure and radioactive waste volumes. Sample chip testing was completed to determine the accuracy, repeatability, and issues associated with the use of microfluidic sample chips used to supply µL sample volumes of lanthanide analytes dissolved in nitric acid for introduction to an analytical instrument for elemental analysis.

  9. Multifunctional Au NPs-polydopamine-polyvinylidene fluoride membrane chips as probe for enrichment and rapid detection of organic contaminants.

    PubMed

    Wang, Saihua; Niu, Hongyun; Cai, Yaqi; Cao, Dong

    2018-05-01

    High-throughput and rapid detection of hazardous compounds in complicated samples is essential for the solution of environmental problems. We have prepared a "pH-paper-like" chip which can rapidly "indicate" the occurrence of organic contaminants simply by dipping the chip in water samples for a short time, followed by fast analysis with surface-assisted laser desorption/ionization time-of-flight mass spectrometry (SALDI-TOF MS). The chips are composed of polyvinylidene fluoride membrane (PVDFM), polydopamine (PDA) film and Au nanoparticles (Au NPs), which are assembled layer by layer, exploiting the adhesion, self-polymerization and reduction properties of dopamine. In the Au NPs loaded polydopamine-polyvinylidene fluoride membrane (Au NPs-PDA-PVDFM) chips, the PVDFM combined with the PDA film is responsible for the enrichment of organic analytes through hydrophobic interactions and π-π stacking, while the Au NPs serve as an effective SALDI matrix for the rapid detection of target analytes. After dipping into a water solution for minutes, the Au-PDA-PVDFM chips with enriched organic analytes can be analyzed directly by SALDI-TOF MS. The good solid-phase extraction performance of the PDA-PVDFM components, the remarkable matrix effect of the loaded Au NPs, and the sensitivity of the SALDI-TOF MS technique ensure excellent sensitivity and reproducibility for the quantification of trace levels of organic contaminants in environmental water samples. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. New trends in the analytical determination of emerging contaminants and their transformation products in environmental waters.

    PubMed

    Agüera, Ana; Martínez Bueno, María Jesús; Fernández-Alba, Amadeo R

    2013-06-01

    Since the so-called emerging contaminants were established as a new group of pollutants of environmental concern, a great effort has been devoted to understanding their distribution, fate and effects in the environment. After more than 20 years of work, a significant improvement in knowledge about these contaminants has been achieved, but there is still a large gap in information on the growing number of new potential contaminants that are appearing, and especially on their unpredictable transformation products. Although the environmental problem arising from emerging contaminants must be addressed from an interdisciplinary point of view, it is obvious that analytical chemistry plays an important role as the first step of the study, as it allows establishing the presence of chemicals in the environment, estimating their concentration levels, identifying sources and determining their degradation pathways. These tasks involve serious difficulties requiring different analytical solutions adjusted to purpose. Thus, the complexity of the matrices requires highly selective analytical methods; the large number and variety of compounds potentially present in the samples demands the application of wide-scope methods; the low concentrations at which these contaminants are present in the samples require high detection sensitivity; and high demands on confirmation and structural information arise in the characterisation of unknowns. New developments in analytical instrumentation have been applied to solve these difficulties. Furthermore, and no less important, has been the development of new specific software packages intended for data acquisition and, in particular, for post-run analysis. Thus, the use of sophisticated software tools has allowed successful screening analyses determining several hundred analytes, and has assisted in the structural elucidation of unknown compounds in a timely manner.

  11. Analytic semigroups: Applications to inverse problems for flexible structures

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Rebnord, D. A.

    1990-01-01

    Convergence and stability results for least squares inverse problems involving systems described by analytic semigroups are presented. The practical importance of these results is demonstrated by application to several examples from problems of estimation of material parameters in flexible structures using accelerometer data.

  12. RE-EVALUATION OF APPLICABILITY OF AGENCY SAMPLE HOLDING TIMES

    EPA Science Inventory

    Holding times are the length of time a sample can be stored after collection and prior to analysis without significantly affecting the analytical results. Holding times vary with the analyte, sample matrix, and analytical methodology used to quantify the analyte's concentration. ...

  13. Reliable quantification of phthalates in environmental matrices (air, water, sludge, sediment and soil): a review.

    PubMed

    Net, Sopheak; Delmont, Anne; Sempéré, Richard; Paluselli, Andrea; Ouddane, Baghdad

    2015-05-15

    Because of their widespread application, phthalates or phthalic acid esters (PAEs) are ubiquitous in the environment. Their presence has attracted considerable attention due to their potential impacts on ecosystem functioning and on public health, so their quantification has become a necessity. Various extraction procedures as well as gas/liquid chromatography and mass spectrometry detection techniques have been found suitable for reliable detection of such compounds. However, PAEs are also ubiquitous in the laboratory environment, including ambient air, reagents, sampling equipment, and various analytical devices, which makes the analysis of real samples with a low PAE background difficult. Therefore, accurate PAE analysis in environmental matrices is a challenging task. This paper reviews the extensive literature on techniques for PAE quantification in natural media. Sampling, sample extraction/pretreatment and detection for quantifying PAEs in different environmental matrices (air, water, sludge, sediment and soil) are reviewed and compared. The concept of "green analytical chemistry" for PAE determination is also discussed. Moreover, useful information about material preparation and quality control/quality assurance procedures is presented, to overcome the problems of sample contamination and those encountered due to matrix effects, and so avoid overestimating PAE concentrations in the environment. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Quantitative determination of methamphetamine in oral fluid by liquid-liquid extraction and gas chromatography/mass spectrometry.

    PubMed

    Bahmanabadi, L; Akhgari, M; Jokar, F; Sadeghi, H B

    2017-02-01

    Methamphetamine abuse is one of the most serious medical and social problems many countries face. In spite of the ban on the use of methamphetamine, it is widely available in Iran's drug black market. There are many analytical methods for the detection of methamphetamine in biological specimens. Oral fluid has become a popular specimen to test for the presence of methamphetamine. The purpose of the present study was to develop a method for the extraction and detection of methamphetamine in oral fluid samples using liquid-liquid extraction (LLE) and gas chromatography/mass spectrometry (GC/MS). An analytical study was designed in which blank and 50 authentic oral fluid samples were collected, first extracted by LLE and subsequently analysed by GC/MS. The method was fully validated and showed excellent intra- and inter-assay precision (relative standard deviation < 10%) for external quality control samples. Recovery with the LLE method was 96%. The limit of detection and limit of quantitation were 5 and 15 ng/mL, respectively. The method showed high selectivity; no additional peaks due to interfering substances were observed in the samples. The introduced method is sensitive, accurate and precise enough for the extraction of methamphetamine from oral fluid samples in forensic toxicology laboratories.
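
    The paper does not state how its LOD and LOQ were derived; the sketch below shows one common calibration-based convention (3.3s/slope and 10s/slope) applied to invented calibration data, purely to illustrate the kind of validation arithmetic involved.

    ```python
    import numpy as np

    # Hedged sketch of common validation calculations; the concentrations
    # and peak-area ratios are synthetic, not the paper's data.
    conc = np.array([10, 25, 50, 100, 200])                  # ng/mL, spiked levels
    peak_ratio = np.array([0.21, 0.53, 1.04, 2.10, 4.18])    # analyte/internal standard

    slope, intercept = np.polyfit(conc, peak_ratio, 1)
    residual_sd = np.std(peak_ratio - (slope * conc + intercept), ddof=2)

    lod = 3.3 * residual_sd / slope    # ICH-style limit of detection
    loq = 10.0 * residual_sd / slope   # ICH-style limit of quantitation
    recovery = 100.0 * 0.96            # e.g., extracted/spiked ratio of 0.96

    print(f"LOD ~ {lod:.1f} ng/mL, LOQ ~ {loq:.1f} ng/mL, recovery {recovery:.0f}%")
    ```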

  15. Theoretical considerations on the optogalvanic detection of laser induced fluorescence in atmospheric pressure atomizers

    NASA Astrophysics Data System (ADS)

    Omenetto, N.; Smith, B. W.; Winefordner, J. D.

    1989-01-01

    Several theoretical considerations are given on the potential and practical capabilities of a detector of fluorescence radiation whose operating principle is based on a multi-step excitation-ionization scheme involving the fluorescence photons as the first excitation step. This detection technique, which was first proposed by Matveev et al. [Zh. Anal. Khim. 34, 846 (1979)], combines two independent atomizers: one analytical cell for the excitation of the sample fluorescence and one cell, filled with pure analyte atomic vapor, acting as the ionization detector. One laser beam excites the analyte fluorescence in the analytical cell and one (or two) laser beams are used to ionize the excited atoms in the detector. Several different causes of signal and noise are evaluated, together with a discussion of possible analytical atom reservoirs (flames, furnaces) and laser sources which could be used with this approach. For properly devised conditions, i.e. optical saturation of the fluorescence and unity ionization efficiency, detection limits well below pg/ml in solution and well below femtograms as absolute amounts in furnaces can be predicted. However, scattering problems, which are absent in a conventional laser-enhanced ionization set-up, may be important in this approach.

  16. A Unifying Review of Bioassay-Guided Fractionation, Effect-Directed Analysis and Related Techniques

    PubMed Central

    Weller, Michael G.

    2012-01-01

    The success of modern methods in analytical chemistry sometimes obscures the problem that the ever increasing amount of analytical data does not necessarily give more insight of practical relevance. As alternative approaches, toxicity- and bioactivity-based assays can deliver valuable information about biological effects of complex materials in humans, other species or even ecosystems. However, the observed effects often cannot be clearly assigned to specific chemical compounds. In these cases, the establishment of an unambiguous cause-effect relationship is not possible. Effect-directed analysis tries to interconnect instrumental analytical techniques with a biological/biochemical entity, which identifies or isolates substances of biological relevance. Successful application has been demonstrated in many fields, either as proof-of-principle studies or even for complex samples. This review discusses the different approaches, advantages and limitations and finally shows some practical examples. The broad emergence of effect-directed analytical concepts might lead to a true paradigm shift in analytical chemistry, away from ever growing lists of chemical compounds. The connection of biological effects with the identification and quantification of molecular entities leads to relevant answers to many real life questions. PMID:23012539

  17. Quantitative evaluation of analyte transport on microfluidic paper-based analytical devices (μPADs).

    PubMed

    Ota, Riki; Yamada, Kentaro; Suzuki, Koji; Citterio, Daniel

    2018-02-07

    The transport efficiency during capillary flow-driven sample transport on microfluidic paper-based analytical devices (μPADs) made from filter paper has been investigated for a selection of model analytes (Ni²⁺, Zn²⁺, Cu²⁺, PO₄³⁻, bovine serum albumin, sulforhodamine B, amaranth) representing metal cations, complex anions, proteins and anionic molecules. For the first time, the transport of the analytical target compounds, rather than the sample liquid, has been quantitatively evaluated by means of colorimetry and absorption spectrometry-based methods. The experiments have revealed that small paperfluidic channel dimensions, additional user operation steps (e.g. control of sample volume, sample dilution, washing steps) as well as the introduction of sample liquid wicking areas make it possible to increase analyte transport efficiency. It is also shown that the interaction of analytes with the negatively charged cellulosic paper substrate surface is strongly influenced by the physico-chemical properties of the model analyte and can in some cases (Cu²⁺) result in nearly complete analyte depletion during sample transport. The quantitative information gained through these experiments is expected to contribute to the development of more sensitive μPADs.

  18. Poor quality drugs: grand challenges in high throughput detection, countrywide sampling, and forensics in developing countries

    PubMed Central

    Fernandez, Facundo M.; Hostetler, Dana; Powell, Kristen; Kaur, Harparkash; Green, Michael D.; Mildenhall, Dallas C.; Newton, Paul N.

    2012-01-01

    Throughout history, poor quality medicines have been a persistent problem, with periodical crises in the supply of antimicrobials, such as fake cinchona bark in the 1600s and fake quinine in the 1800s. Regrettably, this problem seems to have grown in the last decade, especially afflicting unsuspecting patients and those seeking medicines via on-line pharmacies. Here we discuss some of the challenges related to the fight against poor quality drugs, and counterfeits in particular, with an emphasis on the analytical tools available, their relative performance, and the necessary workflows needed for distinguishing between genuine, substandard, degraded and counterfeit medicines. PMID:21107455

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Heinrich, R.R.; Graczyk, D.G.

    The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for fiscal year 1988 (October 1987 through September 1988). The Analytical Chemistry Laboratory is a full-cost recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. In addition, the ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Heinrich, R.R.; Graczyk, D.G.

    The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year 1989 (October 1988 through September 1989). The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. In addition, the ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques.

  1. Analytical-numerical solution of a nonlinear integrodifferential equation in econometrics

    NASA Astrophysics Data System (ADS)

    Kakhktsyan, V. M.; Khachatryan, A. Kh.

    2013-07-01

    A mixed problem for a nonlinear integrodifferential equation arising in econometrics is considered. An analytical-numerical method is proposed for solving the problem. Some numerical results are presented.

  2. Post-analytical stability of 23 common chemistry and immunochemistry analytes in incurred samples.

    PubMed

    Nielsen, Betina Klint; Frederiksen, Tina; Friis-Hansen, Lennart; Larsen, Pia Bükmann

    2017-12-01

    Storage of blood samples after centrifugation, decapping and initial sampling allows the ordering of additional blood tests. The pre-analytical stability of biochemistry and immunochemistry analytes has been studied in detail, but little is known about the post-analytical stability in incurred samples. We examined the stability of 23 routine analytes on the Dimension Vista® (Siemens Healthineers, Denmark): 42-60 routine samples in lithium-heparin gel tubes (Vacutainer, BD, USA) were centrifuged at 3000×g for 10 min. Immediately after centrifugation, initial concentrations of the analytes were measured in duplicate (t = 0). The tubes were stored decapped at room temperature and re-analyzed after 2, 4, 6, 8 and 10 h in singletons. The concentrations from reanalysis were normalized to the initial concentration (t = 0). Internal acceptance criteria for bias and total error were used to determine the stability of each analyte. Additionally, evaporation from the decapped blood collection tubes and the residual platelet count in the plasma after centrifugation were quantified. We report a post-analytical stability of most routine analytes of ≥8 h and therefore, with few exceptions, suggest a standard 8-hour time limit for reordering and reanalysis of analytes in incurred samples. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
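
    A minimal sketch of the normalization-and-acceptance logic described above; the 3% bias limit, the function name stability_profile and the potassium values are illustrative assumptions, not the paper's internal acceptance criteria or data.

    ```python
    # Re-analysis results are expressed relative to the t=0 duplicate mean,
    # then compared with an acceptance limit; the last time point still
    # within the limit defines the post-analytical stability.
    def stability_profile(t0_duplicates, reanalysis, limit_pct=3.0):
        baseline = sum(t0_duplicates) / len(t0_duplicates)
        stable_until = 0
        for hours, value in reanalysis:
            deviation_pct = 100.0 * (value - baseline) / baseline
            if abs(deviation_pct) > limit_pct:
                break
            stable_until = hours
        return stable_until

    # e.g., potassium (mmol/L): t=0 duplicates, then (hours, result) pairs
    print(stability_profile([4.10, 4.14],
                            [(2, 4.13), (4, 4.16), (6, 4.21), (8, 4.24), (10, 4.35)]))
    # -> 8 (stable for 8 h; the 10 h value exceeds the 3% limit)
    ```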

  3. On Statistical Approaches for Demonstrating Analytical Similarity in the Presence of Correlation.

    PubMed

    Yang, Harry; Novick, Steven; Burdick, Richard K

    Analytical similarity is the foundation for demonstration of biosimilarity between a proposed product and a reference product. For this assessment, the U.S. Food and Drug Administration (FDA) currently recommends a tiered system in which quality attributes are categorized into three tiers commensurate with their risk, and approaches of varying statistical rigor are subsequently used for the three tiers of quality attributes. Key to the analyses of Tier 1 and Tier 2 quality attributes is the establishment of an equivalence acceptance criterion and a quality range. For particular licensure applications, the FDA has provided advice on statistical methods for demonstration of analytical similarity. For example, for Tier 1 assessment, an equivalence test can be used based on an equivalence margin of 1.5σ_R, where σ_R is the reference product variability estimated by the sample standard deviation S_R from a sample of reference lots. The quality range for demonstrating Tier 2 analytical similarity is of the form X̄_R ± K × σ_R, where the constant K is appropriately justified. To demonstrate Tier 2 analytical similarity, a large percentage (e.g., 90%) of test product lots must fall in the quality range. In this paper, through both theoretical derivations and simulations, we show that when the reference drug product lots are correlated, the sample standard deviation S_R underestimates the true reference product variability σ_R. As a result, substituting S_R for σ_R in the Tier 1 equivalence acceptance criterion and the Tier 2 quality range inappropriately reduces the statistical power and the ability to declare analytical similarity. Also explored is the impact of correlation among drug product lots on the Type I error rate and power. Three methods based on generalized pivotal quantities are introduced, and their performance is compared against a two one-sided tests (TOST) approach. Finally, strategies to mitigate the risk of correlation among the reference product lots are discussed. A biosimilar is a generic version of an original biological drug product. A key component of biosimilar development is the demonstration of analytical similarity between the biosimilar and the reference product. Such demonstration relies on the application of statistical methods to establish a similarity margin and an appropriate test for equivalence between the two products. This paper discusses statistical issues with the demonstration of analytical similarity and provides alternative approaches to potentially mitigate these problems. © PDA, Inc. 2016.
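
    To make the Tier 1 and Tier 2 computations concrete, here is a sketch of the standard (uncorrelated-lot) calculations that the paper critiques: a TOST against a margin of 1.5·S_R and a quality range X̄_R ± K·S_R. The lot values and K = 3 are invented for the example; the paper's point is precisely that S_R is biased low when lots are correlated.

    ```python
    import numpy as np
    from scipy import stats

    ref = np.array([99.8, 101.2, 100.5, 98.9, 100.1, 101.0, 99.5, 100.7])  # reference lots
    test = np.array([100.9, 101.5, 100.2, 101.8, 100.6, 101.1])            # test lots

    s_r = ref.std(ddof=1)
    margin = 1.5 * s_r                       # Tier 1 equivalence acceptance criterion
    diff = test.mean() - ref.mean()
    se = np.sqrt(test.var(ddof=1) / len(test) + ref.var(ddof=1) / len(ref))
    df = len(test) + len(ref) - 2            # simple (not Welch) approximation

    t_low = (diff + margin) / se             # H0: diff <= -margin
    t_high = (diff - margin) / se            # H0: diff >= +margin
    p = max(1 - stats.t.cdf(t_low, df), stats.t.cdf(t_high, df))
    print(f"diff={diff:.2f}, margin={margin:.2f}, TOST p={p:.3f}")

    # Tier 2 quality range: Xbar_R +/- K*S_R, with K to be justified (K=3 here)
    k = 3.0
    lo, hi = ref.mean() - k * s_r, ref.mean() + k * s_r
    frac = np.mean((test >= lo) & (test <= hi))
    print(f"quality range [{lo:.1f}, {hi:.1f}]: {100 * frac:.0f}% of test lots inside")
    ```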

  4. Recent advances in hopanoids analysis: Quantification protocols overview, main research targets and selected problems of complex data exploration.

    PubMed

    Zarzycki, Paweł K; Portka, Joanna K

    2015-09-01

    Pentacyclic triterpenoids, particularly hopanoids, are organism-specific compounds and are generally considered useful biomarkers that allow fingerprinting and classification of biological, environmental and geological samples. Simultaneous quantification of various hopanoids, together with a battery of related non-polar and low-molecular-mass compounds, may provide principal information for geochemical and environmental research focusing on both modern and ancient investigations. Target compounds can be derived from microbial biomass, water columns, sediments, coals, crude fossils or rocks. This creates a number of analytical problems due to the different compositions of the analytical matrix and interfering compounds, and therefore proper optimization of quantification protocols for such biomarkers is still a challenge. In this work we summarize typical analytical protocols that have recently been applied for the quantification of hopanoid-like compounds from different samples. The main steps, including extraction of the components of interest, pre-purification, fractionation, derivatization and quantification involving gas (1D and 2D) as well as liquid separation techniques (liquid-liquid extraction, solid-phase extraction, planar and low-resolution column chromatography, high-performance liquid chromatography), are described and discussed from a practical point of view, based mainly on the experimental papers published within the last two years, in which a significant increase in hopanoids research was noticed. The second aim of this review is to describe the latest research trends concerning the determination of hopanoids and related low-molecular-mass lipids analyzed in various samples including sediments, rocks, coals, crude oils and plant fossils as well as stromatolites and microbial biomass cultivated under different conditions. It has been found that the majority of the most recent papers are based on uni- or bivariate approaches to complex data analysis. Data interpretation involves a number of physicochemical parameters and hopanoid quantities or given biomarker mass ratios derived from high-throughput separation and detection systems, typically GC-MS and HPLC-MS. Based on quantitative data reported in recently published experimental works, it has been demonstrated that multivariate data analysis using e.g. principal components computations may significantly extend our knowledge concerning proper biomarker selection and sample classification by means of hopanoids and related non-polar compounds. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Inorganic trace analysis by mass spectrometry

    NASA Astrophysics Data System (ADS)

    Becker, Johanna Sabine; Dietze, Hans-Joachim

    1998-10-01

    Mass spectrometric methods for the trace analysis of inorganic materials, with their ability to provide a very sensitive multielemental analysis, have been established for the determination of trace and ultratrace elements in high-purity materials (metals, semiconductors and insulators), in different technical samples (e.g. alloys, pure chemicals, ceramics, thin films, ion-implanted semiconductors), in environmental samples (waters, soils, biological and medical materials) and in geological samples. Whereas techniques such as spark source mass spectrometry (SSMS), laser ionization mass spectrometry (LIMS), laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), glow discharge mass spectrometry (GDMS), secondary ion mass spectrometry (SIMS) and inductively coupled plasma mass spectrometry (ICP-MS) have multielemental capability, other methods such as thermal ionization mass spectrometry (TIMS), accelerator mass spectrometry (AMS) and resonance ionization mass spectrometry (RIMS) have been used for sensitive mono- or oligoelemental ultratrace analysis (and precise determination of isotopic ratios) in solid samples. The limits of detection for chemical elements using these mass spectrometric techniques are in the low ng g⁻¹ concentration range. The quantification of the analytical results of mass spectrometric methods is sometimes difficult due to a lack of matrix-matched multielement standard reference materials (SRMs) for many solid samples. Therefore, owing to the simple quantification procedure for aqueous solutions, inductively coupled plasma mass spectrometry (ICP-MS) is being increasingly used for the characterization of solid samples after sample dissolution. ICP-MS is often combined with special sample introduction equipment (e.g. flow injection, hydride generation, high-performance liquid chromatography (HPLC) or electrothermal vaporization), or an off-line matrix separation and enrichment of trace impurities (especially for characterization of high-purity materials and environmental samples) is used in order to improve the detection limits for trace elements. Furthermore, the determination of chemical elements in the trace and ultratrace concentration range is often difficult and can be disturbed by mass interferences of analyte ions with molecular ions at the same nominal mass. By applying double-focusing sector field mass spectrometry at the required mass resolution, i.e. by the mass spectrometric separation of the molecular ions from the analyte ions, it is often possible to overcome these interference problems. Commercial instrumental equipment, the capabilities (detection limits, accuracy, precision) and the analytical application fields of mass spectrometric methods for the determination of trace and ultratrace elements and for surface analysis are discussed.
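
    As a worked example of the resolution requirement mentioned above, consider the classic ArO⁺ interference on Fe in ICP-MS (isotope masses from standard tables, rounded; this example is ours, not the paper's):

    ```latex
    \[
      R = \frac{m}{\Delta m}
        = \frac{55.935}{m(^{40}\mathrm{Ar}^{16}\mathrm{O}) - m(^{56}\mathrm{Fe})}
        = \frac{55.935}{55.957 - 55.935} \approx 2500,
    \]
    % so a sector-field instrument operated at R >~ 3000 separates the molecular
    % ion from the analyte ion, whereas a unit-mass-resolution quadrupole cannot.
    ```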

  6. Fast solver for large scale eddy current non-destructive evaluation problems

    NASA Astrophysics Data System (ADS)

    Lei, Naiguang

    Eddy current testing plays a very important role in the non-destructive evaluation of conducting test samples. Based on Faraday's law, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically conducting test specimen. The eddy currents generate induced magnetic fields that oppose the direction of the inducing magnetic field in accordance with Lenz's law. In the presence of discontinuities in material properties or defects in the test specimen, the induced eddy current paths are perturbed, and the associated magnetic fields can be detected by coils or magnetic field sensors, such as Hall elements or magneto-resistance sensors. Due to the complexity of the test specimens and the inspection environments, theoretical simulation models are extremely valuable for studying the basic field/flaw interactions in order to obtain a fuller understanding of non-destructive testing phenomena. Theoretical models of the forward problem are also useful for training and validation of automated defect detection systems, since they generate defect signatures that are expensive to replicate experimentally. In general, modelling methods can be classified into two categories: analytical and numerical. Although analytical approaches offer closed-form solutions, these are generally not obtainable, largely due to the complex sample and defect geometries, especially in three-dimensional space. Numerical modelling has become popular with advances in computer technology and computational methods. However, due to the huge time consumption in the case of large-scale problems, accelerations/fast solvers are needed to enhance numerical models. This dissertation describes a numerical simulation model for eddy current problems using finite element analysis. The accuracy of this model is validated via comparison with experimental measurements of steam generator tube wall defects. These simulations, which generate two-dimensional raster-scan data, typically take one to two days on a dedicated eight-core PC. A novel direct integral solver for eddy current problems and a GPU-based implementation are also investigated in this research to reduce the computational time.

  7. New analytical solutions to the two-phase water faucet problem

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-06-17

    Here, the one-dimensional water faucet problem is one of the classical benchmark problems originally proposed by Ransom to study the two-fluid two-phase flow model. With certain simplifications, such as a massless gas phase and no wall or interfacial frictions, analytical solutions had previously been obtained for the transient liquid velocity and void fraction distribution. The water faucet problem and its analytical solutions have been widely used for the purposes of code assessment, benchmarking and numerical verification. In our previous study, the Ransom solutions were used for the mesh convergence study of a high-resolution spatial discretization scheme. It was found that, at the steady state, the anticipated second-order spatial accuracy could not be achieved when compared to the existing Ransom analytical solutions. A further investigation showed that the existing analytical solutions do not actually satisfy the commonly used two-fluid single-pressure two-phase flow equations. In this work, we present a new set of analytical solutions of the water faucet problem at the steady state, considering the effect of the gas phase density on the pressure distribution. This new set of analytical solutions is used for mesh convergence studies, from which the anticipated second order of accuracy is achieved for the second-order spatial discretization scheme. In addition, extended Ransom transient solutions for the gas phase velocity and pressure are derived, with the assumption of decoupled liquid and gas pressures. Numerical verifications of the extended Ransom solutions are also presented.
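
    For reference, the widely quoted form of the original Ransom transient solution (liquid inlet velocity v_0, inlet gas void fraction α_0, gravity g) is reproduced below from the standard benchmark literature; it is the baseline that the paper's new steady-state and extended transient solutions revise.

    ```latex
    \[
      v_\ell(x,t) =
      \begin{cases}
        \sqrt{v_0^2 + 2gx}, & x \le v_0 t + \tfrac{1}{2} g t^2,\\[4pt]
        v_0 + g t, & \text{otherwise},
      \end{cases}
      \qquad
      \alpha_g(x,t) =
      \begin{cases}
        1 - \dfrac{(1-\alpha_0)\, v_0}{\sqrt{v_0^2 + 2gx}}, & x \le v_0 t + \tfrac{1}{2} g t^2,\\[8pt]
        \alpha_0, & \text{otherwise}.
      \end{cases}
    \]
    ```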

  8. Major to ultra trace element bulk rock analysis of nanoparticulate pressed powder pellets by LA-ICP-MS

    NASA Astrophysics Data System (ADS)

    Peters, Daniel; Pettke, Thomas

    2016-04-01

    An efficient, clean procedure for bulk rock major to trace element analysis by 193 nm Excimer LA-ICP-MS analysis of nanoparticulate pressed powder pellets (PPPs) employing a binder is presented. Sample powders are milled in water suspension in a planetary ball mill, reducing average grain size by about one order of magnitude compared to common dry milling protocols. Microcrystalline cellulose (MCC) is employed as a binder, improving the mechanical strength of the PPP and the ablation behaviour, because MCC absorbs 193 nm laser light well. Use of MCC binder allows for producing cohesive pellets of materials that cannot be pelletized in their pure forms, such as quartz powder. Rigorous blank quantification was performed on synthetic quartz treated like rock samples, demonstrating that procedural blanks are irrelevant except for a few elements at the 10 ng g⁻¹ concentration level. The LA-ICP-MS PPP analytical procedure was optimised and evaluated using six different SRM powders (JP-1, UB-N, BCR-2, GSP-2, OKUM, and MUH-1). Calibration based on external standardization using SRM 610, SRM 612, BCR-2G, and GSD-1G glasses allows for evaluation of possible matrix effects during LA-ICP-MS analysis. The data accuracy of the PPP LA-ICP-MS analytical procedure compares well to that achieved for liquid ICP-MS and LA-ICP-MS glass analysis, except for element concentrations below ~30 ng g⁻¹, where liquid ICP-MS offers more precise data and in part lower limits of detection. Uncertainties on the external reproducibility of LA-ICP-MS PPP element concentrations are of the order of 0.5 to 2% (1σ standard deviation) for concentrations exceeding ~1 μg g⁻¹. For lower element concentrations these uncertainties increase to 5-10% or higher when analyte-dependent limits of detection (LOD) are approached, and LODs do not significantly differ from glass analysis. Sample homogeneity is demonstrated by the high analytical precision, except for very few elements where grain size effects can rarely still be resolved analytically. Matrix effects are demonstrated for PPP analysis of diverse rock compositions and basalt glass analysis when externally calibrated based on SRM 610 and SRM 612 glasses; employing basalt glass GSD-1G or BCR-2G for external standardisation basically eliminates these problems. Perhaps the most prominent progress of the LA-ICP-MS PPP analytical procedure presented here is the fact that trace elements not commonly analysed, i.e. new, unconventional geochemical tracers, can be measured straightforwardly, including volatile elements, the flux elements Li and B, the chalcophile elements As, Sb, Tl, Bi, and elements that alloy with metal containers employed in conventional glass production approaches. The method presented here thus overcomes many common problems and limitations in analytical geochemistry and is shown to be an efficient alternative for bulk rock trace element analysis.

  9. Optimal weighting in fNL constraints from large scale structure in an idealised case

    NASA Astrophysics Data System (ADS)

    Slosar, Anže

    2009-03-01

    We consider the problem of optimal weighting of tracers of structure for the purpose of constraining the non-Gaussianity parameter f_NL. We work within the Fisher matrix formalism expanded around a fiducial model with f_NL = 0 and make several simplifying assumptions. By slicing a general sample into infinitely many samples with different biases, we derive the analytic expression for the relevant Fisher matrix element. We next consider weighting schemes that construct two effective samples from a single sample of tracers with a continuously varying bias. We show that a particularly simple ansatz for the weighting functions can recover all information about f_NL in the initial sample that is recoverable using a given bias observable, and that a simple division into two equal samples is considerably suboptimal when sampling of modes is good, but only marginally suboptimal in the limit where Poisson errors dominate.

  10. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    NASA Astrophysics Data System (ADS)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  11. Personal exposure assessment to particulate metals using a paper-based analytical device

    NASA Astrophysics Data System (ADS)

    Cate, David; Volckens, John; Henry, Charles

    2013-03-01

    The development of a paper-based analytical device (PAD) for assessing personal exposure to particulate metals will be presented. Human exposure to metal aerosols, such as those that occur in the mining, construction, and manufacturing industries, has a significant impact on the health of our workforce, costing an estimated $10B in the U.S. and causing approximately 425,000 premature deaths worldwide each year. Occupational exposure to particulate metals affects millions of individuals in the manufacturing, construction (welding, cutting, blasting), and transportation (combustion, utility maintenance, and repair services) industries. Despite these effects, individual workers are rarely assessed for their exposure to particulate metals, due mainly to the high cost and effort associated with personal exposure measurement. Current exposure assessment methods for particulate metals call for an 8-hour filter sample, after which the filter sample is transported to a laboratory and analyzed by inductively coupled plasma (ICP) techniques. The time from sample collection to reporting is typically weeks, at a cost of several hundred dollars per sample. To exacerbate the issue, method detection limits suffer because of sample dilution during digestion. The lack of sensitivity hampers task-based exposure assessment, for which sampling times may be tens of minutes. To address these problems, and as a first step towards using microfluidics for personal exposure assessment, we have developed PADs for the measurement of Pb, Cd, Cr, Fe, Ni, and Cu in aerosolized particulate matter.

  12. Metabolomic analysis-Addressing NMR and LC-MS related problems in human feces sample preparation.

    PubMed

    Moosmang, Simon; Pitscheider, Maria; Sturm, Sonja; Seger, Christoph; Tilg, Herbert; Halabalaki, Maria; Stuppner, Hermann

    2017-10-31

    Metabolomics is a well-established field in fundamental clinical research, with applications in different human body fluids. However, metabolomic investigation of feces is an emerging field. Fecal sample preparation is a demanding task due to the high complexity and heterogeneity of the matrix. To gain access to the information enclosed in human feces, it is necessary to extract the metabolites and make them accessible to analytical platforms like NMR or LC-MS. In this study, different pre-analytical parameters and factors were investigated, i.e. water content, different extraction solvents, influence of freeze-drying and homogenization, and ratios of sample weight to extraction solvent, together with their respective impact on metabolite profiles acquired by NMR and LC-MS. The results indicate that profiles are strongly biased by the selection of the extraction solvent or by drying of the samples, which causes different metabolites to be lost, under- or overstated. Additionally, signal intensity and reproducibility of the measurement were found to be strongly dependent on sample pre-treatment steps: freeze-drying and homogenization led to improved release of metabolites and thus increased signals, but at the same time induced variation and thus deteriorated reproducibility. We established the first protocol for the extraction of human fecal samples and subsequent measurement with the complementary techniques NMR and LC-MS. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Method of and apparatus for determining the similarity of a biological analyte from a model constructed from known biological fluids

    DOEpatents

    Robinson, Mark R.; Ward, Kenneth J.; Eaton, Robert P.; Haaland, David M.

    1990-01-01

    The characteristics of a biological fluid sample containing an analyte are determined from a model constructed from plural known biological fluid samples. The model is a function of the concentration of materials in the known fluid samples as a function of the absorption of wideband infrared energy. The wideband infrared energy is coupled to the analyte-containing sample so that there is differential absorption of the infrared energy as a function of the wavelength of the wideband infrared energy incident on the sample. The differential absorption causes intensity variations of the infrared energy incident on the analyte-containing sample as a function of the wavelength of the energy, and the concentration of the unknown analyte is determined from the thus-derived intensity variations of the infrared energy as a function of wavelength using the model's absorption-versus-wavelength function.
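
    The patent abstract describes building a concentration model from spectra of known fluids and applying it to an unknown, but does not name an algorithm. The sketch below uses partial least squares, one standard choice for absorption-versus-wavelength calibration, on synthetic data; all names and numbers are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    wavelengths = 64                 # spectral channels
    n_known = 30                     # known calibration fluids

    # Synthetic pure-component absorption band plus noise
    true_spectrum = np.exp(-0.5 * ((np.arange(wavelengths) - 20) / 4) ** 2)
    conc = rng.uniform(2, 10, n_known)                     # known concentrations
    spectra = (np.outer(conc, true_spectrum)
               + 0.02 * rng.standard_normal((n_known, wavelengths)))

    # Build the model from the known fluids (spectrum -> concentration)
    model = PLSRegression(n_components=3).fit(spectra, conc)

    # Apply it to an "unknown" sample with true concentration 6.5
    unknown = 6.5 * true_spectrum + 0.02 * rng.standard_normal(wavelengths)
    print(model.predict(unknown.reshape(1, -1)))           # ~6.5
    ```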

  14. Applications of surface analytical techniques in Earth Sciences

    NASA Astrophysics Data System (ADS)

    Qian, Gujie; Li, Yubiao; Gerson, Andrea R.

    2015-03-01

    This review covers a wide range of surface analytical techniques: X-ray photoelectron spectroscopy (XPS), scanning photoelectron microscopy (SPEM), photoemission electron microscopy (PEEM), dynamic and static secondary ion mass spectroscopy (SIMS), electron backscatter diffraction (EBSD), atomic force microscopy (AFM). Others that are relatively less widely used but are also important to the Earth Sciences are also included: Auger electron spectroscopy (AES), low energy electron diffraction (LEED) and scanning tunnelling microscopy (STM). All these techniques probe only the very top sample surface layers (sub-nm to several tens of nm). In addition, we also present several other techniques i.e. Raman microspectroscopy, reflection infrared (IR) microspectroscopy and quantitative evaluation of minerals by scanning electron microscopy (QEMSCAN) that penetrate deeper into the sample, up to several μm, as all of them are fundamental analytical tools for the Earth Sciences. Grazing incidence synchrotron techniques, sensitive to surface measurements, are also briefly introduced at the end of this review. (Scanning) transmission electron microscopy (TEM/STEM) is a special case that can be applied to characterisation of mineralogical and geological sample surfaces. Since TEM/STEM is such an important technique for Earth Scientists, we have also included it to draw attention to the capability of TEM/STEM applied as a surface-equivalent tool. While this review presents most of the important techniques for the Earth Sciences, it is not an all-inclusive bibliography of those analytical techniques. Instead, for each technique that is discussed, we first give a very brief introduction about its principle and background, followed by a short section on approaches to sample preparation that are important for researchers to appreciate prior to the actual sample analysis. We then use examples from publications (and also some of our known unpublished results) within the Earth Sciences to show how each technique is applied and used to obtain specific information and to resolve real problems, which forms the central theme of this review. Although this review focuses on applications of these techniques to study mineralogical and geological samples, we also anticipate that researchers from other research areas such as Material and Environmental Sciences may benefit from this review.

  15. Gradient design for liquid chromatography using multi-scale optimization.

    PubMed

    López-Ureña, S; Torres-Lapasió, J R; Donat, R; García-Alvarez-Coque, M C

    2018-01-26

    In reversed phase-liquid chromatography, the usual solution to the "general elution problem" is the application of gradient elution with programmed changes of organic solvent (or other properties). A correct quantification of chromatographic peaks in liquid chromatography requires well resolved signals in a proper analysis time. When the complexity of the sample is high, the gradient program should be adapted to the local resolution needs of each analyte. This makes the optimization of such situations rather troublesome, since enhancing the resolution for a given analyte may imply a collateral worsening of the resolution of other analytes. The aim of this work is to design multi-linear gradients that maximize the resolution while fulfilling some restrictions: all peaks should be eluted before a given maximal time, the gradient should be flat or increasing, and sudden changes close to eluting peaks are penalized. Consequently, an equilibrated baseline resolution for all compounds is sought. This goal is achieved by splitting the optimization problem into a multi-scale framework. At each scale κ, an optimization problem is solved with N_κ ≈ 2^κ variables that are used to build the gradients. The N_κ variables define cubic splines written in terms of a B-spline basis. This allows expressing gradients as polygonals of M points approximating the splines. The cubic splines are built using subdivision schemes, a technique for the fast generation of smooth curves that is compatible with the multi-scale framework. Owing to the nature of the problem and the presence of multiple local maxima, the algorithm used in the optimization problem at each scale κ should be "global", such as the pattern-search algorithm. The multi-scale optimization approach is successfully applied to find the best multi-linear gradient for resolving a mixture of amino acid derivatives. Copyright © 2017 Elsevier B.V. All rights reserved.
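
    The abstract outlines the multi-scale scheme (N_κ ≈ 2^κ variables per scale, refinement between scales, a global pattern search) without implementation detail. The sketch below illustrates only that skeleton with a toy "resolution" objective, a crude compass/pattern search, and scale-doubling by interpolation; the real method's B-spline subdivision and chromatographic model are omitted, and all names and values here are hypothetical.

    ```python
    # Minimal multi-scale optimization skeleton with a toy objective.
    import numpy as np

    def resolution(nodes):
        """Hypothetical stand-in for worst-case peak resolution of a gradient
        defined by node values (organic fraction at equally spaced times)."""
        t = np.linspace(0.0, 1.0, nodes.size)
        return -np.sum((nodes - (0.2 + 0.6 * t**2)) ** 2)  # best at a curved gradient

    def pattern_search(f, x, step=0.1, tol=1e-3):
        """Crude compass/pattern search that keeps the gradient non-decreasing."""
        fx = f(x)
        while step > tol:
            improved = False
            for i in range(x.size):
                for d in (+step, -step):
                    y = x.copy()
                    y[i] += d
                    y = np.maximum.accumulate(np.clip(y, 0.0, 1.0))  # flat or increasing
                    if (fy := f(y)) > fx:
                        x, fx, improved = y, fy, True
            if not improved:
                step /= 2
        return x

    # Multi-scale loop: roughly 2^k nodes per scale; refine the previous
    # solution by interpolation before re-optimizing at the finer scale.
    nodes = np.array([0.2, 0.8])
    for k in range(1, 5):
        n_new = 2**k + 1
        nodes = np.interp(np.linspace(0, 1, n_new),
                          np.linspace(0, 1, nodes.size), nodes)
        nodes = pattern_search(resolution, nodes)
    print(np.round(nodes, 3))
    ```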

  16. Probabilistic data integration and computational complexity

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.; Mosegaard, K.

    2016-12-01

    Inverse problems in Earth Sciences typically refer to the problem of inferring information about properties of the Earth from observations of geophysical data (the result of nature's solution to the `forward' problem). This problem can be formulated more generally as a problem of `integration of information'. A probabilistic formulation of data integration is in principle simple: if all available information (from e.g. geology, geophysics, remote sensing, chemistry…) can be quantified probabilistically, then different algorithms exist that allow solving the data integration problem, either through an analytical description of the combined probability function or by sampling the probability function. In practice, however, probabilistically based data integration may not be easy to apply successfully. This may be related to the use of sampling methods, which are known to be computationally costly. But another source of computational complexity is related to how the individual types of information are quantified. In one case, a data integration problem is demonstrated where the goal is to determine the existence of buried channels in Denmark, based on multiple sources of geo-information. One type of information being too informative (and hence conflicting) leads to a difficult sampling problem with unrealistic uncertainty. Resolving this conflict prior to data integration leads to an easy data integration problem with no biases. In another case, it is demonstrated how imperfections in the description of the geophysical forward model (related to solving the wave equation) can lead to a difficult data integration problem with severe bias in the results. If the modeling error is accounted for, the data integration problem becomes relatively easy, with no apparent biases. Both examples demonstrate that biased information can have a dramatic effect on the computational efficiency of solving a data integration problem and can lead to biased results and under-estimation of uncertainty. However, in both examples, one can also analyze the performance of the sampling methods used to solve the data integration problem to indicate the existence of biased information. This can be used actively to avoid biases in the available information and, subsequently, in the final uncertainty evaluation.
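
    As a minimal sketch of the probabilistic data integration described above, the snippet below quantifies two hypothetical sources of information as densities over a single Earth property, combines them as an (unnormalized) product, and samples the result with a Metropolis walker; making one source overly narrow illustrates how conflicting, over-confident information degrades the sampling and the uncertainty estimate. All numbers are illustrative.

    ```python
    # Minimal sketch: combine two probabilistic information sources and sample.
    import numpy as np

    rng = np.random.default_rng(1)

    def source_a(m):
        """Hypothetical geological prior on the Earth property m."""
        return np.exp(-0.5 * ((m - 1.0) / 0.5) ** 2)

    def source_b(m):
        """Hypothetical geophysical likelihood; shrinking 0.4 toward 0.05 makes
        this source over-confident and conflicting, and the sampler degrades."""
        return np.exp(-0.5 * ((m - 2.0) / 0.4) ** 2)

    def combined(m):
        """Conjunction of the two sources (unnormalized product of densities)."""
        return source_a(m) * source_b(m)

    # Metropolis sampling of the combined density.
    m, chain = 1.0, []
    for _ in range(20000):
        proposal = m + 0.3 * rng.normal()
        if rng.random() < combined(proposal) / combined(m):
            m = proposal
        chain.append(m)
    chain = np.array(chain[2000:])                 # discard burn-in
    print(f"integrated estimate: mean={chain.mean():.2f}, std={chain.std():.2f}")
    ```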

  17. Quality performance of laboratory testing in pharmacies: a collaborative evaluation.

    PubMed

    Zaninotto, Martina; Miolo, Giorgia; Guiotto, Adriano; Marton, Silvia; Plebani, Mario

    2016-11-01

    The quality performance of pharmacy point-of-care testing (POCT) and the comparability of its results with those of institutional laboratories have been evaluated. Eight pharmacies participated in the project: for each pharmacy customer (n=106), a capillary specimen collected by the pharmacist and, simultaneously, a lithium-heparin sample drawn by a physician of laboratory medicine were analyzed in the pharmacy and in the laboratory, respectively. Glucose, cholesterol, HDL-cholesterol, triglycerides, creatinine, uric acid, aspartate aminotransferase and alanine aminotransferase were measured using Reflotron (n=5), Samsung (n=1), Cardiocheck PA (n=1) and Cholestech LDX (n=1) in the pharmacies, and Cobas 8000 in the laboratory. The analytical performance of the POCT devices alone (phase 2) was evaluated by testing, in the pharmacies and in the laboratory, lithium-heparin samples drawn daily over one week from a fasting female subject, together with a control sample containing high concentrations of glucose, cholesterol and triglycerides. For all parameters except triglycerides, the slopes showed a satisfactory correlation. For triglycerides, a higher median value was observed in POCT than in the laboratory (1.627 mmol/L vs. 0.950 mmol/L). The agreement in subject classification demonstrates that for glucose, 70% of the subjects show concentrations below the POCT recommended level (5.8-6.1 mmol/L), while 56% fall below the laboratory limit (<5.6 mmol/L). Total cholesterol exhibits a similar trend, while POCT triglycerides show a greater percentage of increased values (21% vs. 9%). The reduction in the triglycerides bias in phase 2 suggests that the differences between POCT and the central laboratory are attributable to a pre-analytical problem. The results confirm the acceptable analytical performance of pharmacy POCT and identify specific critical issues in the pre- and post-analytical phases.

  18. Analysis of variation matrix array by bilinear least squares-residual bilinearization (BLLS-RBL) for resolving and quantifying of foodstuff dyes in a candy sample.

    PubMed

    Asadpour-Zeynali, Karim; Maryam Sajjadi, S; Taherzadeh, Fatemeh; Rahmanian, Reza

    2014-04-05

    The bilinear least squares (BLLS) method is one of the most suitable algorithms for second-order calibration. The original BLLS method is not applicable to second-order pH-spectral data when an analyte has more than one spectroscopically active species. Bilinear least squares-residual bilinearization (BLLS-RBL) was developed to achieve the second-order advantage for the analysis of complex mixtures. Although the modified method is useful, the pure profiles cannot be obtained; only linear combinations of them are obtained. Moreover, for the prediction of an analyte in an unknown sample, the original RBL algorithm may diverge instead of converging to the desired analyte concentrations. Therefore, a Gauss-Newton RBL algorithm must be used, which is not as simple as the original protocol. Also, the analyte concentration can be predicted on the basis of each of the equilibrating species of the component of interest, and these predictions are not exactly the same. The aim of the present work is to tackle the non-uniqueness problem in the second-order calibration of monoprotic acid mixtures and the divergence of RBL. Each pH-absorbance matrix was pretreated by subtracting the first spectrum from the other spectra in the data set to produce a full-rank array called the variation matrix. The variation matrices were then analyzed uniquely by the original BLLS-RBL, which is more parsimonious than its modified counterpart. The proposed method was applied to simulated data as well as to the analysis of real data. Sunset yellow and Carmosine, as monoprotic acids, were determined in a candy sample in the presence of unknown interference by this method. Copyright © 2013 Elsevier B.V. All rights reserved.
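
    The pretreatment step is simple enough to show directly. The sketch below builds a synthetic pH-absorbance matrix for one monoprotic analyte (two species linked by one equilibrium) and forms the variation matrix by subtracting the first spectrum; the bilinear rank of the analyte's contribution drops from 2 to 1, which is the property that lets the original BLLS-RBL resolve the data uniquely. Data and dimensions are placeholders.

    ```python
    # Minimal sketch: variation matrix from a synthetic pH-absorbance matrix.
    import numpy as np

    def variation_matrix(D):
        """Subtract the first-row spectrum from all other rows."""
        return D[1:, :] - D[0, :]

    rng = np.random.default_rng(2)
    n_ph, n_wl = 12, 60
    acid = np.abs(rng.normal(size=n_wl))      # spectrum of the acidic species
    base = np.abs(rng.normal(size=n_wl))      # spectrum of the basic species
    # Acid fraction along a pH 2..9 gradient for a hypothetical pKa of 5.
    frac = 1.0 / (1.0 + 10 ** (np.linspace(2, 9, n_ph) - 5.0))
    D = 0.1 * (np.outer(frac, acid) + np.outer(1 - frac, base))

    V = variation_matrix(D)
    # Rank drops from 2 (two species) to 1 (one bilinear component per analyte).
    print(np.linalg.matrix_rank(D), "->", np.linalg.matrix_rank(V))
    ```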

  19. Large-Scale Interlaboratory Study to Develop, Analytically Validate and Apply Highly Multiplexed, Quantitative Peptide Assays to Measure Cancer-Relevant Proteins in Plasma*

    PubMed Central

    Abbatiello, Susan E.; Schilling, Birgit; Mani, D. R.; Zimmerman, Lisa J.; Hall, Steven C.; MacLean, Brendan; Albertolle, Matthew; Allen, Simon; Burgess, Michael; Cusack, Michael P.; Gosh, Mousumi; Hedrick, Victoria; Held, Jason M.; Inerowicz, H. Dorota; Jackson, Angela; Keshishian, Hasmik; Kinsinger, Christopher R.; Lyssand, John; Makowski, Lee; Mesri, Mehdi; Rodriguez, Henry; Rudnick, Paul; Sadowski, Pawel; Sedransk, Nell; Shaddox, Kent; Skates, Stephen J.; Kuhn, Eric; Smith, Derek; Whiteaker, Jeffery R.; Whitwell, Corbin; Zhang, Shucha; Borchers, Christoph H.; Fisher, Susan J.; Gibson, Bradford W.; Liebler, Daniel C.; MacCoss, Michael J.; Neubert, Thomas A.; Paulovich, Amanda G.; Regnier, Fred E.; Tempst, Paul; Carr, Steven A.

    2015-01-01

    There is an increasing need in biology and clinical medicine to robustly and reliably measure tens to hundreds of peptides and proteins in clinical and biological samples with high sensitivity, specificity, reproducibility, and repeatability. Previously, we demonstrated that LC-MRM-MS with isotope dilution has suitable performance for quantitative measurements of small numbers of relatively abundant proteins in human plasma and that the resulting assays can be transferred across laboratories while maintaining high reproducibility and quantitative precision. Here, we significantly extend that earlier work, demonstrating that 11 laboratories using 14 LC-MS systems can develop, determine analytical figures of merit, and apply highly multiplexed MRM-MS assays targeting 125 peptides derived from 27 cancer-relevant proteins and seven control proteins to precisely and reproducibly measure the analytes in human plasma. To ensure consistent generation of high quality data, we incorporated a system suitability protocol (SSP) into our experimental design. The SSP enabled real-time monitoring of LC-MRM-MS performance during assay development and implementation, facilitating early detection and correction of chromatographic and instrumental problems. Low to subnanogram/ml sensitivity for proteins in plasma was achieved by one-step immunoaffinity depletion of 14 abundant plasma proteins prior to analysis. Median intra- and interlaboratory reproducibility was <20%, sufficient for most biological studies and candidate protein biomarker verification. Digestion recovery of peptides was assessed and quantitative accuracy improved using heavy-isotope-labeled versions of the proteins as internal standards. Using the highly multiplexed assay, participating laboratories were able to precisely and reproducibly determine the levels of a series of analytes in blinded samples used to simulate an interlaboratory clinical study of patient samples. Our study further establishes that LC-MRM-MS using stable isotope dilution, with appropriate attention to analytical validation and appropriate quality control measures, enables sensitive, specific, reproducible, and quantitative measurements of proteins and peptides in complex biological matrices such as plasma. PMID:25693799

  20. The Efficacy of Problem-Based Learning in an Analytical Laboratory Course for Pre-Service Chemistry Teachers

    ERIC Educational Resources Information Center

    Yoon, Heojeong; Woo, Ae Ja; Treagust, David; Chandrasegaran, A. L.

    2014-01-01

    The efficacy of problem-based learning (PBL) in an analytical chemistry laboratory course was studied using a programme that was designed and implemented with 20 students in a treatment group over 10 weeks. Data from 26 students in a traditional analytical chemistry laboratory course were used for comparison. Differences in the creative thinking…

  1. Sensitive screening of abused drugs in dried blood samples using ultra-high-performance liquid chromatography-ion booster-quadrupole time-of-flight mass spectrometry.

    PubMed

    Chepyala, Divyabharathi; Tsai, I-Lin; Liao, Hsiao-Wei; Chen, Guan-Yuan; Chao, Hsi-Chun; Kuo, Ching-Hua

    2017-03-31

    An increased rate of drug abuse is a major social problem worldwide. The dried blood spot (DBS) sampling technique offers many advantages over urine or whole blood sampling techniques. This study developed a simple and efficient ultra-high-performance liquid chromatography-ion booster-quadrupole time-of-flight mass spectrometry (UHPLC-IB-QTOF-MS) method for the analysis of abused drugs and their metabolites using DBS. Fifty-seven compounds covering the most commonly abused drugs, including amphetamines, opioids, cocaine, benzodiazepines, barbiturates, and many other new and emerging abused drugs, were selected as the target analytes of this study. An 80% acetonitrile solvent with a 5-min extraction by Geno grinder was used for sample extraction. A Poroshell column was used to provide efficient separation, and under optimal conditions, the analytical times were 15 and 5 min in positive and negative ionization modes, respectively. Ionization parameters of both the electrospray ionization source and the ion booster (IB) source containing an extra heated zone were optimized to achieve the best ionization efficiency for the investigated abused drugs. In spite of their structural diversity, most of the abused drugs showed an enhanced mass response with the high-temperature ionization from the extra heated zone of the IB source. Compared to electrospray ionization, the IB greatly improved the detection sensitivity for 86% of the analytes by 1.5-14-fold and allowed the developed method to detect trace amounts of compounds on the DBS cards. The validation results showed that the coefficients of variation of intra-day and inter-day precision in terms of the signal intensity were lower than 19.65%. The extraction recovery of all analytes was between 67.21 and 115.14%. The limits of detection of all analytes were between 0.2 and 35.7 ng mL⁻¹. The stability study indicated that 7% of compounds showed poor stability (below 50%) on the DBS cards after 6 months of storage at room temperature and -80°C. The reported method provides a new direction for abused drug screening using DBS. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. An Eye Tracking Study of High- and Low-Performing Students in Solving Interactive and Analytical Problems

    ERIC Educational Resources Information Center

    Hu, Yiling; Wu, Bian; Gu, Xiaoqing

    2017-01-01

    Test results from the Program for International Student Assessment (PISA) reveal that Shanghai students performed less well in solving interactive problems (those that require uncovering necessary information) than in solving analytical problems (those having all information disclosed at the outset). Accordingly, this study investigates…

  3. Unifying Approach to Analytical Chemistry and Chemical Analysis: Problem-Oriented Role of Chemical Analysis.

    ERIC Educational Resources Information Center

    Pardue, Harry L.; Woo, Jannie

    1984-01-01

    Proposes an approach to teaching analytical chemistry and chemical analysis in which a problem to be resolved is the focus of a course. Indicates that this problem-oriented approach is intended to complement detailed discussions of fundamental and applied aspects of chemical determinations and not replace such discussions. (JN)

  4. Analytical Derivation: An Epistemic Game for Solving Mathematically Based Physics Problems

    ERIC Educational Resources Information Center

    Bajracharya, Rabindra R.; Thompson, John R.

    2016-01-01

    Problem solving, which often involves multiple steps, is an integral part of physics learning and teaching. Using the perspective of the epistemic game, we documented a specific game that is commonly pursued by students while solving mathematically based physics problems: the "analytical derivation" game. This game involves deriving an…

  5. Convergence analysis of two-node CMFD method for two-group neutron diffusion eigenvalue problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, Yongjin; Park, Jinsu; Lee, Hyun Chul

    2015-12-01

    In this paper, the nonlinear coarse-mesh finite difference method with two-node local problem (CMFD2N) is proven to be unconditionally stable for neutron diffusion eigenvalue problems. The explicit current correction factor (CCF) is derived based on the two-node analytic nodal method (ANM2N), and a Fourier stability analysis is applied to the linearized algorithm. It is shown that the analytic convergence rate obtained by the Fourier analysis compares very well with the numerically measured convergence rate. It is also shown that the theoretical convergence rate is governed only by the converged second-harmonic buckling and the mesh size. It is also noted that the convergence rate of the CCF of the CMFD2N algorithm is dependent on the mesh size, but not on the total problem size, which is contrary to expectation for an eigenvalue problem. The novel points of this paper are the analytical derivation of the convergence rate of the CMFD2N algorithm for the eigenvalue problem, and the convergence analysis based on the analytic derivations.

  6. Characterizing Protein Complexes with UV absorption, Light Scattering, and Refractive Index Detection.

    NASA Astrophysics Data System (ADS)

    Trainoff, Steven

    2009-03-01

    Many modern pharmaceuticals and naturally occurring biomolecules consist of complexes of proteins with polyethylene glycol or carbohydrates. In vaccine development, these complexes are often used to induce or amplify immune responses. For protein therapeutics they are used to modify solubility and function, or to control the rate of degradation and elimination of a drug from the body. Characterizing the stoichiometry of these complexes is an important industrial problem that presents a formidable challenge to analytical instrument designers. Traditional analytical methods, such as fluorescent tagging, chemical assays, and mass spectrometry, perturb the system so dramatically that the complexes are often destroyed or uncontrollably modified by the measurement. A solution to this problem consists of fractionating the samples and then measuring the fractions using sequential non-invasive detectors that are sensitive to different components of the complex. We present results using UV absorption, which is primarily sensitive to the protein fraction; light scattering, which measures the total weight-average molar mass; and refractive index detection, which measures the net concentration. We also present a solution to the inter-detector band-broadening problem that has heretofore made this approach impractical. Instrumentation and an analysis method are presented that overcome these obstacles and make this technique a reliable and robust way of non-invasively characterizing these industrially important compounds.
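
    A minimal sketch of the arithmetic behind the three-detector idea, under the simplifying assumptions that the modifier is UV-transparent and that band broadening has already been corrected: the UV and RI responses give two linear equations in the two component concentrations. All response constants and signals below are hypothetical placeholders, not values from the presentation.

    ```python
    # Minimal sketch: two-component "conjugate analysis" from UV and RI signals.
    import numpy as np

    # Per-component response constants (protein p, glycan/PEG modifier m).
    eps_p, eps_m = 1.0, 0.0        # UV extinction (mL mg^-1 cm^-1); modifier ~transparent
    dndc_p, dndc_m = 0.185, 0.135  # refractive index increments (mL g^-1)

    # Measured detector signals for one chromatographic slice (arbitrary units).
    uv_signal, ri_signal = 0.50, 0.115

    # Solve UV = eps_p*c_p + eps_m*c_m and RI = dndc_p*c_p + dndc_m*c_m.
    A = np.array([[eps_p, eps_m], [dndc_p, dndc_m]])
    c_p, c_m = np.linalg.solve(A, [uv_signal, ri_signal])

    # With the weight fraction in hand, the light-scattering molar mass of the
    # slice can be apportioned between protein and modifier.
    w_protein = c_p / (c_p + c_m)
    print(f"protein {c_p:.3f} mg/mL, modifier {c_m:.3f} mg/mL, "
          f"protein weight fraction {w_protein:.2f}")
    ```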

  7. New trends in astrodynamics and applications: optimal trajectories for space guidance.

    PubMed

    Azimov, Dilmurat; Bishop, Robert

    2005-12-01

    This paper presents recent results on the development of optimal analytic solutions to the variational problem of trajectory optimization and their application in the construction of on-board guidance laws. The importance of employing analytically integrated trajectories in mission design is discussed. It is assumed that the spacecraft is equipped with power-limited propulsion and moves in a central Newtonian field. Satisfaction of the necessary and sufficient conditions for optimality of trajectories is analyzed. All possible thrust arcs and the corresponding classes of analytical solutions are classified based on the propulsion system parameters and the performance index of the problem. The solutions are presented in a form convenient for applications in escape, capture, and interorbital transfer problems. Optimal guidance and neighboring optimal guidance problems are considered. It is shown that the analytic solutions can be used as reference trajectories in constructing guidance algorithms for the maneuver problems mentioned above. An illustrative example of a spiral trajectory that terminates on a given elliptical parking orbit is discussed.

  8. Bias and precision of selected analytes reported by the National Atmospheric Deposition Program and National Trends Network, 1984

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Willoughby, T.C.

    1987-01-01

    The U.S. Geological Survey operated a blind audit sample program during 1984 to test the effects of the sample handling and shipping procedures used by the National Atmospheric Deposition Program and National Trends Network on the quality of wet deposition data produced by the combined networks. Blind audit samples, which were dilutions of standard reference water samples, were submitted by network site operators to the central analytical laboratory disguised as actual wet deposition samples. Results from the analyses of blind audit samples were used to calculate estimates of analyte bias associated with all network wet deposition samples analyzed in 1984 and to estimate analyte precision. Concentration differences between double-blind samples that were submitted to the central analytical laboratory and separate analyses of aliquots of those blind audit samples that had not undergone network sample handling and shipping were used to calculate analyte masses that apparently were added to each blind audit sample by routine network handling and shipping procedures. These calculated masses indicated statistically significant biases for magnesium, sodium, potassium, chloride, and sulfate. Median calculated masses were 41.4 micrograms (ug) for calcium, 14.9 ug for magnesium, 23.3 ug for sodium, 0.7 ug for potassium, 16.5 ug for chloride, and 55.3 ug for sulfate. Analyte precision was estimated using two different sets of replicate measures performed by the central analytical laboratory. Estimated standard deviations were similar to those previously reported. (Author's abstract)

  9. Micropowder collecting technique for stable isotope analysis of carbonates.

    PubMed

    Sakai, Saburo; Kodan, Tsuyoshi

    2011-05-15

    Micromilling is a conventional technique used in the analysis of the isotopic composition of geological materials, which improves the spatial resolution of sample collection for analysis. However, a problem still remains concerning the recovery ratio of the milled sample. We constructed a simple apparatus consisting of a vacuum pump, a sintered metal filter, an electrically conductive rubber stopper and a stainless steel tube for transferring the milled powder into a reaction vial. In our preliminary experiments on carbonate powder, we achieved a rapid recovery of 5 to 100 µg of carbonate with a high recovery ratio (>90%). This technique shortens the sample preparation time, improves the recovery ratio, and homogenizes the sample quantity, which, in turn, improves the analytical reproducibility. Copyright © 2011 John Wiley & Sons, Ltd.

  10. Blank and sample handling troubleshooting in ultratrace analysis of alkylphenols and bisphenol A by liquid chromatography tandem mass spectrometry.

    PubMed

    Salgueiro-González, N; Concha-Graña, E; Turnes-Carou, I; Muniategui-Lorenzo, S; López-Mahía, P; Prada-Rodríguez, D

    2012-11-15

    Blank contamination is a notorious problem in the ultratrace analysis of alkylphenols and bisphenol A. The achievement of low detection limits is complicated by high background signals. Furthermore, overestimations and underestimations in the analytical results can occur when blank levels are not stable. Thus, a review of the sources of blank contamination in this type of analysis was carried out. Several sources of contamination were identified, and useful guidelines are proposed for the determination of these compounds in water samples by liquid chromatography coupled with mass spectrometry. The system contamination was maintained below 0.09 ng (reagent blank) for all compounds and below 0.003 μg L⁻¹ (procedure blank). The main improvement was obtained by using LC-MS grade solvent in the mobile phase and PTFE syringe filters for the filtration of the sample extracts. Sample handling aspects such as filtration and storage of the water samples were also considered. Filtration of the samples should be avoided because both contamination and adsorption problems were observed when different kinds of filters were assayed. Refrigerated storage of water samples should be limited to 5 days (without addition of methanol) or 8 days (with 5% methanol). Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Comparison of serum, EDTA plasma and P100 plasma for luminex-based biomarker multiplex assays in patients with chronic obstructive pulmonary disease in the SPIROMICS study.

    PubMed

    O'Neal, Wanda K; Anderson, Wayne; Basta, Patricia V; Carretta, Elizabeth E; Doerschuk, Claire M; Barr, R Graham; Bleecker, Eugene R; Christenson, Stephanie A; Curtis, Jeffrey L; Han, Meilan K; Hansel, Nadia N; Kanner, Richard E; Kleerup, Eric C; Martinez, Fernando J; Miller, Bruce E; Peters, Stephen P; Rennard, Stephen I; Scholand, Mary Beth; Tal-Singer, Ruth; Woodruff, Prescott G; Couper, David J; Davis, Sonia M

    2014-01-08

    As a part of the longitudinal Chronic Obstructive Pulmonary Disease (COPD) study, Subpopulations and Intermediate Outcome Measures in COPD study (SPIROMICS), blood samples are being collected from 3200 subjects with the goal of identifying blood biomarkers for sub-phenotyping patients and predicting disease progression. To determine the most reliable sample type for measuring specific blood analytes in the cohort, a pilot study was performed on a subset of 24 subjects comparing serum, Ethylenediaminetetraacetic acid (EDTA) plasma, and EDTA plasma with proteinase inhibitors (P100). 105 analytes, chosen for potential relevance to COPD and arranged in 12 multiplex platforms and one simplex platform (Myriad-RBM), were evaluated in duplicate in the three sample types from the 24 subjects. The reliability coefficient and the coefficient of variation (CV) were calculated. The performance of each analyte and mean analyte levels were evaluated across sample types. 20% of analytes were not consistently detectable in any sample type. Higher reliability and/or smaller CV were determined for 12 analytes in EDTA plasma compared to serum, and for 11 analytes in serum compared to EDTA plasma. While reliability measures were similar for EDTA plasma and P100 plasma for a majority of analytes, the CV was modestly increased in P100 plasma for eight analytes. Each analyte within a multiplex produced independent measurement characteristics, complicating the selection of sample type for individual multiplexes. There were notable detectability and measurability differences between serum and plasma. Multiplexing may not be ideal if large reliability differences exist across analytes measured within the multiplex, especially if values differ based on sample type. For some analytes, the large CV should be considered during experimental design, and the use of duplicate and/or triplicate samples may be necessary. These results should prove useful for studies evaluating the selection of samples for the evaluation of potential blood biomarkers.
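
    As a minimal sketch of the two statistics named above, the snippet below computes a duplicate-based coefficient of variation and a one-way intraclass correlation (one common form of the reliability coefficient; the study does not specify its exact variant) from synthetic duplicate assays of a single analyte.

    ```python
    # Minimal sketch: CV and reliability from duplicate measurements.
    import numpy as np

    rng = np.random.default_rng(3)
    n_subjects = 24
    truth = rng.normal(100, 20, size=n_subjects)               # subject analyte levels
    dup = truth[:, None] + rng.normal(0, 5, size=(n_subjects, 2))  # duplicate assays

    # Coefficient of variation from duplicates (root-mean-square form).
    pair_mean = dup.mean(axis=1)
    pair_sd = dup.std(axis=1, ddof=1)
    cv = np.sqrt(np.mean((pair_sd / pair_mean) ** 2)) * 100

    # Reliability as a one-way intraclass correlation with k = 2 replicates:
    # between-subject variance relative to total variance.
    ms_between = 2 * pair_mean.var(ddof=1)    # mean square between subjects
    ms_within = np.mean(pair_sd ** 2)         # mean square within (assay error)
    icc = (ms_between - ms_within) / (ms_between + ms_within)

    print(f"CV = {cv:.1f}%, reliability (ICC) = {icc:.2f}")
    ```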

  12. Comparison of serum, EDTA plasma and P100 plasma for luminex-based biomarker multiplex assays in patients with chronic obstructive pulmonary disease in the SPIROMICS study

    PubMed Central

    2014-01-01

    Background As a part of the longitudinal Chronic Obstructive Pulmonary Disease (COPD) study, Subpopulations and Intermediate Outcome Measures in COPD study (SPIROMICS), blood samples are being collected from 3200 subjects with the goal of identifying blood biomarkers for sub-phenotyping patients and predicting disease progression. To determine the most reliable sample type for measuring specific blood analytes in the cohort, a pilot study was performed on a subset of 24 subjects comparing serum, Ethylenediaminetetraacetic acid (EDTA) plasma, and EDTA plasma with proteinase inhibitors (P100™). Methods 105 analytes, chosen for potential relevance to COPD and arranged in 12 multiplex platforms and one simplex platform (Myriad-RBM), were evaluated in duplicate in the three sample types from the 24 subjects. The reliability coefficient and the coefficient of variation (CV) were calculated. The performance of each analyte and mean analyte levels were evaluated across sample types. Results 20% of analytes were not consistently detectable in any sample type. Higher reliability and/or smaller CV were determined for 12 analytes in EDTA plasma compared to serum, and for 11 analytes in serum compared to EDTA plasma. While reliability measures were similar for EDTA plasma and P100 plasma for a majority of analytes, the CV was modestly increased in P100 plasma for eight analytes. Each analyte within a multiplex produced independent measurement characteristics, complicating the selection of sample type for individual multiplexes. Conclusions There were notable detectability and measurability differences between serum and plasma. Multiplexing may not be ideal if large reliability differences exist across analytes measured within the multiplex, especially if values differ based on sample type. For some analytes, the large CV should be considered during experimental design, and the use of duplicate and/or triplicate samples may be necessary. These results should prove useful for studies evaluating the selection of samples for the evaluation of potential blood biomarkers. PMID:24397870

  13. Confocal Raman imaging and chemometrics applied to solve forensic document examination involving crossed lines and obliteration cases by a depth profiling study.

    PubMed

    Borba, Flávia de Souza Lins; Jawhari, Tariq; Saldanha Honorato, Ricardo; de Juan, Anna

    2017-03-27

    This article describes a non-destructive analytical method developed to solve forensic document examination problems involving crossed lines and obliteration. Different strategies combining confocal Raman imaging and multivariate curve resolution-alternating least squares (MCR-ALS) are presented. Multilayer images were acquired at successive depth layers within the samples. This is the first time that MCR-ALS has been applied to multilayer images for forensic purposes. In this context, the method provides a single set of pure spectral ink signatures and related distribution maps for all layers examined, from the sole information in the raw measurement. Four cases were investigated: two concerning crossed lines with different degrees of ink similarity, and two related to obliteration, where previous or no knowledge about the identity of the obliterated ink was available. In the crossing-lines scenario, MCR-ALS analysis revealed the nature of the inks and the chronological order in which the strokes were drawn. For the obliteration cases, results making active use of information about the identity of the obliterated ink in the chemometric analysis were of similar quality to those where the identity of the obliterated ink was unknown. In all obliteration scenarios, the identity of the inks and the obliterated text were satisfactorily recovered. The analytical methodology proposed is of general use for forensic document examination problems and accommodates different degrees of complexity and prior available information. Moreover, the data analysis strategies proposed are applicable to any other kind of problem in which multilayer Raman images from multicomponent systems have to be interpreted.
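
    A minimal sketch of the MCR-ALS core used above, assuming the multilayer Raman images have already been unfolded into a pixels-by-wavenumbers matrix D: alternating least-squares updates of concentration profiles C and pure spectra S under a crude non-negativity constraint (simple clipping rather than true NNLS, which real implementations use). Data sizes and values are synthetic.

    ```python
    # Minimal MCR-ALS sketch: D ~ C @ S.T with non-negative C and S.
    import numpy as np

    rng = np.random.default_rng(4)
    n_pix, n_wl, n_comp = 200, 80, 2
    C_true = np.abs(rng.normal(size=(n_pix, n_comp)))
    S_true = np.abs(rng.normal(size=(n_wl, n_comp)))
    D = C_true @ S_true.T + 0.01 * rng.normal(size=(n_pix, n_wl))

    C = np.abs(rng.normal(size=(n_pix, n_comp)))   # initial guess
    for _ in range(100):
        # Update spectra given concentrations, then vice versa; clip to >= 0.
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)

    lof = 100 * np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
    print(f"lack of fit = {lof:.2f}%")
    ```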

  14. RCRA Facility investigation report for Waste Area Grouping 6 at Oak Ridge National Laboratory, Oak Ridge, Tennessee. Volume 5, Technical Memorandums 06-09A, 06-10A, and 06-12A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    This report provides a detailed summary of the activities carried out to sample groundwater at Waste Area Grouping (WAG) 6. The analytical results for samples collected during Phase 1, Activity 2 of the WAG 6 Resource Conservation and Recovery Act Facility Investigation (RFI) are also presented. In addition, analytical results for Phase 1 activity sampling events for which data were not previously reported are included in this technical memorandum. A summary of the groundwater sampling activities at WAG 6 to date is given in the Introduction. The Methodology section describes the sampling procedures and analytical parameters. Six attachments are included. Attachments 1 and 2 provide analytical results for selected RFI groundwater samples and the ORNL sampling event. Attachment 3 provides a summary of the contaminants detected in each well sampled for all sampling events conducted at WAG 6. Bechtel National Inc. (BNI)/IT Corporation Contract Laboratory (IT) RFI analytical methods and detection limits are given in Attachment 4. Attachment 5 provides the Oak Ridge National Laboratory (ORNL)/Analytical Chemistry Division (ACD) analytical methods and detection limits for the Resource Conservation and Recovery Act (RCRA) quarterly compliance monitoring (1988-1989). Attachment 6 provides ORNL/ACD groundwater analytical methods and detection limits (for the 1990 RCRA semi-annual compliance monitoring).

  15. A matrix-assisted laser desorption/ionization mass spectroscopy method for the analysis of small molecules by integrating chemical labeling with the supramolecular chemistry of cucurbituril.

    PubMed

    Ding, Jun; Xiao, Hua-Ming; Liu, Simin; Wang, Chang; Liu, Xin; Feng, Yu-Qi

    2018-10-05

    Although several methods have realized the analysis of low molecular weight (LMW) compounds using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) by overcoming the problem of interference with MS signals in the low mass region derived from conventional organic matrices, this emerging field still requires strategies to address the issue of analyzing complex samples containing LMW components in addition to the LMW compounds of interest, and to solve the problem of a lack of universality. The present study proposes an integrated strategy that combines chemical labeling with the supramolecular chemistry of cucurbit[n]uril (CB[n]) for the MALDI MS analysis of LMW compounds in complex samples. In this strategy, the target LMW compounds are first labeled by introducing a series of bifunctional reagents that selectively react with the target analytes and also form stable inclusion complexes with CB[n]. The labeled products then act as guest molecules that readily and selectively form stable inclusion complexes with CB[n]. This strategy relocates the MS signals of the LMW compounds of interest from the low mass region, which suffers high interference, to the high mass region, where interference from low mass components is absent. Experimental results demonstrate that a wide range of LMW compounds, including carboxylic acids, aldehydes, amines, thiols, and cis-diols, can be successfully detected using the proposed strategy, with limits of detection in the range of 0.01-1.76 nmol/mL. In addition, the high selectivity of the labeling reagents for the target analytes, in conjunction with the high selectivity of the binding between the labeled products and CB[n], ensures an absence of signal interference from the non-targeted LMW components of complex samples. Finally, the feasibility of the proposed strategy for complex sample analysis is demonstrated by the accurate and rapid quantitative analysis of aldehydes in saliva and herbal medicines. As such, this work not only provides an alternative method for the detection of various LMW compounds using MALDI MS, but can also be applied to the selective and high-throughput analysis of LMW analytes in complex samples. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Efficient alignment-free DNA barcode analytics.

    PubMed

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-11-10

    In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectra) of barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci that distinguish barcodes of different sample groups. The new alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species, with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, which are important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Our results show that the newly developed alignment-free methods for DNA barcoding can efficiently and accurately identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
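
    A minimal sketch of the fixed-length spectrum representation described above, with toy sequences: each barcode becomes a k-mer count vector, and similarity between barcodes reduces to a vector operation such as cosine similarity. The choice of k = 3 and the sequences are illustrative only.

    ```python
    # Minimal sketch: k-mer spectrum features for alignment-free comparison.
    from collections import Counter
    from itertools import product

    def spectrum(seq, k=3, alphabet="ACGT"):
        """Return the k-mer count vector of a DNA sequence in a fixed ordering."""
        counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
        return [counts[''.join(kmer)] for kmer in product(alphabet, repeat=k)]

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = (sum(a * a for a in u) * sum(b * b for b in v)) ** 0.5
        return dot / norm

    s1 = spectrum("ACGTACGTAGCTAGCTACGT")
    s2 = spectrum("ACGTACGTAGCTAGGTACGT")   # one substitution
    s3 = spectrum("TTTTGGGGCCCCAAAATTTT")   # unrelated composition
    print(f"close pair: {cosine(s1, s2):.3f}, distant pair: {cosine(s1, s3):.3f}")
    ```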

  17. Model-free and analytical EAP reconstruction via spherical polar Fourier diffusion MRI.

    PubMed

    Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid

    2010-01-01

    How to estimate the diffusion Ensemble Average Propagator (EAP) from the DWI signals in q-space is an open problem in the diffusion MRI field. Many methods have been proposed to estimate the Orientation Distribution Function (ODF), which is used to describe fiber directions. However, the ODF is just one of the features of the EAP. Compared with the ODF, the EAP contains the full information about the diffusion process, reflecting the complex tissue microstructure. The Diffusion Orientation Transform (DOT) and Diffusion Spectrum Imaging (DSI) are two important methods for estimating the EAP from the signal. However, DOT is based on a mono-exponential assumption, and DSI needs many samples and very large b-values. In this paper, we propose Spherical Polar Fourier Imaging (SPFI), a novel model-free, fast, and robust analytical EAP reconstruction method that makes almost no assumptions about the data and does not require dense sampling. SPFI naturally combines DWI signals with different b-values. It is an analytical linear transformation from the q-space signal to the EAP profile represented by Spherical Harmonics (SH). We validated the proposed method on synthetic data, phantom data, and real data. It works well in all experiments, especially for data with low SNR, low anisotropy, and non-exponential decay.

  18. Improved online δ18O measurements of nitrogen- and sulfur-bearing organic materials and a proposed analytical protocol

    USGS Publications Warehouse

    Qi, H.; Coplen, T.B.; Wassenaar, L.I.

    2011-01-01

    It is well known that N2 in the ion source of a mass spectrometer interferes with the CO background during the δ18O measurement of carbon monoxide. A similar problem arises with the high-temperature conversion (HTC) analysis of nitrogenous O-bearing samples (e.g., nitrates and keratins) to CO for δ18O measurement, where the sample introduces a significant N2 peak before the CO peak, making the determination of accurate oxygen isotope ratios difficult. Although using a gas chromatography (GC) column longer than that commonly provided by manufacturers (0.6 m) can improve the separation of CO and N2, and using a valve to divert nitrogen and prevent it from entering the ion source improved measurement results, biased δ18O values could still be obtained. A careful evaluation of the performance of the GC separation column was therefore carried out. With optimal GC columns, the δ18O reproducibility of human hair keratins and other keratin materials was better than ±0.15‰ (n = 5; internal analytical reproducibility) and better than ±0.10‰ (n = 4; external analytical reproducibility).

  19. Improved Analytical Sensitivity of Lateral Flow Assay using Sponge for HBV Nucleic Acid Detection.

    PubMed

    Tang, Ruihua; Yang, Hui; Gong, Yan; Liu, Zhi; Li, XiuJun; Wen, Ting; Qu, ZhiGuo; Zhang, Sufeng; Mei, Qibing; Xu, Feng

    2017-05-02

    Hepatitis B virus (HBV) infection is a serious public health problem; the virus can be transmitted through various routes (e.g., blood donation) and cause hepatitis, liver cirrhosis and liver cancer. Hence, it is necessary to perform diagnostic screening of high-risk HBV patients along these transmission routes. To date, protein-based technologies have been used for HBV testing, which, however, involve the issues of large sample volume, antibody instability and poor specificity. Nucleic acid hybridization-based lateral flow assay (LFA) holds great potential to address these limitations due to its low-cost, rapid, and simple features, but the poor analytical sensitivity of LFA restricts its application. In this study, we developed a low-cost, simple and easy-to-use method to improve analytical sensitivity by integrating a sponge shunt into the LFA to decrease the fluid flow rate. The thickness, length and hydrophobicity of the sponge shunt were sequentially optimized, achieving a 10-fold signal enhancement in nucleic acid testing of HBV as compared to the unmodified LFA. The enhancement was further confirmed using HBV clinical samples, where we achieved a detection limit of 10³ copies/mL, compared with 10⁴ copies/mL for the unmodified LFA. The improved LFA holds great potential for disease diagnostics, food safety control and environmental monitoring at the point of care.

  20. NHEXAS PHASE I MARYLAND STUDY--QA ANALYTICAL RESULTS FOR PESTICIDES IN SPIKE SAMPLES

    EPA Science Inventory

    The Pesticides in Spikes data set contains the analytical results of measurements of up to 17 pesticides in 12 control samples (spikes) from 11 households. Measurements were made in samples of blood serum. Controls were used to assess recovery of target analytes from a sample m...

  1. Representation of complex probabilities and complex Gibbs sampling

    NASA Astrophysics Data System (ADS)

    Salcedo, Lorenzo Luis

    2018-03-01

    Complex weights appear in physics problems that are beyond a straightforward importance-sampling treatment, as required in Monte Carlo calculations. This is the well-known sign problem. The complex Langevin approach amounts to effectively constructing a positive distribution on the complexified manifold that reproduces the expectation values of the observables through their analytical extension. Here we discuss the direct construction of such positive distributions, paying attention to their localization on the complexified manifold. Explicit localized representations are obtained for complex probabilities defined on Abelian and non-Abelian groups. The viability and performance of a complex version of the heat bath method, based on such representations, is analyzed.
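
    A minimal sketch of the complex Langevin construction mentioned above, for the standard toy case of a Gaussian weight exp(-σz²/2) with complex σ, where the exact moment ⟨z²⟩ = 1/σ is known: the variable is complexified, driven by the drift -σz plus real noise, and observables are averaged along the trajectory. Step size and run length are illustrative.

    ```python
    # Minimal complex Langevin sketch for a Gaussian action S(z) = sigma*z^2/2.
    import numpy as np

    rng = np.random.default_rng(5)
    sigma = 1.0 + 0.5j              # complex weight exp(-sigma z^2 / 2)
    dt, n_steps = 1e-3, 400_000

    z = 0.0 + 0.0j
    acc = []
    for _ in range(n_steps):
        # Drift -dS/dz = -sigma*z acts on the complexified variable;
        # the noise term stays real, as in the complex Langevin scheme.
        z += -sigma * z * dt + np.sqrt(2 * dt) * rng.normal()
        acc.append(z * z)

    est = complex(np.mean(acc[n_steps // 10:]))   # drop the initial transient
    print(f"<z^2> estimate {est:.4f}, exact {1/sigma:.4f}")
    ```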

  2. Method for improving accuracy in full evaporation headspace analysis.

    PubMed

    Xie, Wei-Qi; Chai, Xin-Sheng

    2017-05-01

    We report a new headspace analytical method in which multiple headspace extraction is combined with the full evaporation technique. The pressure uncertainty caused by changes in the solid content of the samples has a great impact on measurement accuracy in conventional full evaporation headspace analysis. The results (using an ethanol solution as the model sample) showed that the present technique effectively minimizes this problem. The proposed full evaporation multiple headspace extraction technique is also automated and practical, and could greatly broaden the applications of full-evaporation-based headspace analysis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
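
    For context, the multiple-headspace-extraction arithmetic that the combined technique builds on can be sketched in a few lines: successive extractions of the same vial give geometrically decaying peak areas, a log-linear fit recovers the decay ratio, and the total signal is the geometric-series sum. The peak areas below are synthetic.

    ```python
    # Minimal sketch of multiple headspace extraction (MHE) arithmetic.
    import numpy as np

    areas = np.array([1000.0, 820.0, 672.0, 551.0])   # peak areas, extractions 1..4

    # Fit ln(A_i) = ln(A_1) - q*(i-1); the per-step decay ratio is r = exp(-q).
    i = np.arange(areas.size)
    slope, intercept = np.polyfit(i, np.log(areas), 1)
    r = np.exp(slope)

    # Total area if extraction were continued indefinitely: A_1 / (1 - r).
    total = np.exp(intercept) / (1.0 - r)
    print(f"decay ratio r = {r:.3f}, extrapolated total area = {total:.0f}")
    ```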

  3. Does Incubation Enhance Problem Solving? A Meta-Analytic Review

    ERIC Educational Resources Information Center

    Sio, Ut Na; Ormerod, Thomas C.

    2009-01-01

    A meta-analytic review of empirical studies that have investigated incubation effects on problem solving is reported. Although some researchers have reported increased solution rates after an incubation period (i.e., a period of time in which a problem is set aside prior to further attempts to solve), others have failed to find effects. The…

  4. Analytical derivation: An epistemic game for solving mathematically based physics problems

    NASA Astrophysics Data System (ADS)

    Bajracharya, Rabindra R.; Thompson, John R.

    2016-06-01

    Problem solving, which often involves multiple steps, is an integral part of physics learning and teaching. Using the perspective of the epistemic game, we documented a specific game that is commonly pursued by students while solving mathematically based physics problems: the analytical derivation game. This game involves deriving an equation through symbolic manipulations and routine mathematical operations, usually without any physical interpretation of the processes. This game often creates cognitive obstacles in students, preventing them from using alternative resources or better approaches during problem solving. We conducted hour-long, semi-structured, individual interviews with fourteen introductory physics students. Students were asked to solve four "pseudophysics" problems containing algebraic and graphical representations. The problems required the application of the fundamental theorem of calculus (FTC), which is one of the most frequently used mathematical concepts in physics problem solving. We show that the analytical derivation game is necessary, but not sufficient, to solve mathematically based physics problems, specifically those involving graphical representations.

  5. Generalized bipartite quantum state discrimination problems with sequential measurements

    NASA Astrophysics Data System (ADS)

    Nakahira, Kenji; Kato, Kentaro; Usuda, Tsuyoshi Sasaki

    2018-02-01

    We investigate the optimization problem of finding optimal quantum sequential measurements, which covers a wide class of state discrimination problems restricted to local operations and one-way classical communication. Sequential measurements from Alice to Bob on a bipartite system are considered. Using the fact that the optimization problem can be formulated as a problem over Alice's measurement alone and is a convex programming problem, we derive its dual problem and necessary and sufficient conditions for an optimal solution. Our results are applicable to various practical optimization criteria, including the Bayes criterion, the Neyman-Pearson criterion, and the minimax criterion. In the setting of the problem of finding an optimal global measurement, the dual problem and the necessary and sufficient conditions for an optimal solution have been widely used to obtain analytical and numerical expressions for optimal solutions. Similarly, our results are useful for obtaining analytical and numerical expressions for optimal sequential measurements. Examples in which our results can be used to obtain an analytical expression for an optimal sequential measurement are provided.
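
    The abstract notes that the dual problem and optimality conditions for the global-measurement setting are widely used to compute optimal solutions. As a hedged illustration of that simpler, global setting (not the sequential problem solved in the paper), the snippet below casts minimum-error (Bayes) discrimination of two qubit states as a semidefinite program with cvxpy and checks the result against the Helstrom bound.

    ```python
    # Minimal sketch: Bayes-optimal global measurement as an SDP (requires cvxpy).
    import numpy as np
    import cvxpy as cp

    # Two equiprobable pure states: |0> and |+>.
    rho = [np.array([[1, 0], [0, 0]], dtype=complex),
           np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)]
    p = [0.5, 0.5]
    d = 2

    M = [cp.Variable((d, d), hermitian=True) for _ in rho]     # POVM elements
    constraints = [m >> 0 for m in M] + [sum(M) == np.eye(d)]  # PSD, complete
    success = cp.real(sum(p[k] * cp.trace(rho[k] @ M[k]) for k in range(2)))
    cp.Problem(cp.Maximize(success), constraints).solve()

    # Helstrom bound for two equiprobable states: 1/2 + (1/4)*||rho0 - rho1||_1.
    helstrom = 0.5 + 0.25 * np.sum(np.abs(np.linalg.eigvalsh(rho[0] - rho[1])))
    print(f"SDP optimum {success.value:.4f}, Helstrom bound {helstrom:.4f}")
    ```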

  6. Method and apparatus for processing a test sample to concentrate an analyte in the sample from a solvent in the sample

    DOEpatents

    Turner, Terry D.; Beller, Laurence S.; Clark, Michael L.; Klingler, Kerry M.

    1997-01-01

    A method of processing a test sample to concentrate an analyte in the sample from a solvent in the sample includes: a) boiling the test sample containing the analyte and solvent in a boiling chamber to a temperature greater than or equal to the solvent boiling temperature and less than the analyte boiling temperature to form a rising sample vapor mixture; b) passing the sample vapor mixture from the boiling chamber to an elongated primary separation tube, the separation tube having internal sidewalls and a longitudinal axis, the longitudinal axis being angled between vertical and horizontal and thus having an upper region and a lower region; c) collecting the physically transported liquid analyte on the internal sidewalls of the separation tube; and d) flowing the collected analyte along the angled internal sidewalls of the separation tube to and past the separation tube lower region. The invention also includes passing a turbulence-inducing wave through a vapor mixture to separate physically transported liquid second material from vaporized first material. Apparatus is also disclosed for effecting separations. Further disclosed is a fluidically powered liquid test sample withdrawal apparatus for withdrawing a liquid test sample from a test sample container and for cleaning the test sample container.

  7. Method and apparatus for processing a test sample to concentrate an analyte in the sample from a solvent in the sample

    DOEpatents

    Turner, T.D.; Beller, L.S.; Clark, M.L.; Klingler, K.M.

    1997-10-14

    A method of processing a test sample to concentrate an analyte in the sample from a solvent in the sample includes: (a) boiling the test sample containing the analyte and solvent in a boiling chamber to a temperature greater than or equal to the solvent boiling temperature and less than the analyte boiling temperature to form a rising sample vapor mixture; (b) passing the sample vapor mixture from the boiling chamber to an elongated primary separation tube, the separation tube having internal sidewalls and a longitudinal axis, the longitudinal axis being angled between vertical and horizontal and thus having an upper region and a lower region; (c) collecting the physically transported liquid analyte on the internal sidewalls of the separation tube; and (d) flowing the collected analyte along the angled internal sidewalls of the separation tube to and past the separation tube lower region. The invention also includes passing a turbulence-inducing wave through a vapor mixture to separate physically transported liquid second material from vaporized first material. Apparatus is also disclosed for effecting separations. Further disclosed is a fluidically powered liquid test sample withdrawal apparatus for withdrawing a liquid test sample from a test sample container and for cleaning the test sample container. 8 figs.

  8. A review of the occurrence, analyses, toxicity, and biodegradation of naphthenic acids.

    PubMed

    Clemente, Joyce S; Fedorak, Phillip M

    2005-07-01

    Naphthenic acids occur naturally in crude oils and in oil sands bitumens. They are toxic components in refinery wastewaters and in oil sands extraction waters. In addition, there are many industrial uses for naphthenic acids, so there is a potential for their release to the environment from a variety of activities. Studies have shown that naphthenic acids are susceptible to biodegradation, which decreases their concentration and reduces toxicity. This is a complex group of carboxylic acids with the general formula CnH(2n+Z)O2, where n indicates the carbon number and Z specifies the hydrogen deficiency resulting from ring formation. Measuring the concentrations of naphthenic acids in environmental samples and determining the chemical composition of a naphthenic acids mixture are huge analytical challenges. However, new analytical methods are being applied to these problems and progress is being made to better understand this mixture of chemically similar compounds. This paper reviews a variety of analytical methods and their application to assessing biodegradation of naphthenic acids.

  9. Dry granular avalanche impact force on a rigid wall: Analytic shock solution versus discrete element simulations

    NASA Astrophysics Data System (ADS)

    Albaba, Adel; Lambert, Stéphane; Faug, Thierry

    2018-05-01

    The present paper investigates the mean impact force exerted by a granular mass flowing down an incline and impacting a rigid wall of semi-infinite height. First, this granular flow-wall interaction problem is modeled by numerical simulations based on the discrete element method (DEM). These DEM simulations allow computing the depth-averaged quantities—thickness, velocity, and density—of the incoming flow and the resulting mean force on the rigid wall. Second, that problem is described by a simple analytic solution based on a depth-averaged approach for a traveling compressible shock wave, whose volume is assumed to shrink into a singular surface, and which coexists with a dead zone. It is shown that the dead-zone dynamics and the mean force on the wall computed from DEM can be reproduced reasonably well by the analytic solution proposed over a wide range of slope angle of the incline. These results are obtained by feeding the analytic solution with the thickness, the depth-averaged velocity, and the density averaged over a certain distance along the incline rather than flow quantities taken at a singular section before the jump, thus showing that the assumption of a shock wave volume shrinking into a singular surface is questionable. The finite length of the traveling wave upstream of the grains piling against the wall must be considered. The sensitivity of the model prediction to that sampling length remains complicated, however, which highlights the need of further investigation about the properties and the internal structure of the propagating granular wave.

  10. Meeting future information needs for Great Lakes fisheries management

    USGS Publications Warehouse

    Christie, W.J.; Collins, John J.; Eck, Gary W.; Goddard, Chris I.; Hoenig, John M.; Holey, Mark; Jacobson, Lawrence D.; MacCallum, Wayne; Nepszy, Stephen J.; O'Gorman, Robert; Selgeby, James

    1987-01-01

    Description of information needs for management of Great Lakes fisheries is complicated by recent changes in biology and management of the Great Lakes, development of new analytical methodologies, and a transition in management from a traditional unispecies approach to a multispecies/community approach. A number of general problems with the collection and management of data and information for fisheries management need to be addressed (i.e. spatial resolution, reliability, computerization and accessibility of data, design of sampling programs, standardization and coordination among agencies, and the need for periodic review of procedures). Problems with existing data collection programs include size selectivity and temporal trends in the efficiency of fishing gear, inadequate creel survey programs, bias in age estimation, lack of detailed sea lamprey (Petromyzon marinus) wounding data, and data requirements for analytical techniques that are underutilized by managers of Great Lakes fisheries. The transition to multispecies and community approaches to fisheries management will require policy decisions by the management agencies, adequate funding, and a commitment to develop programs for collection of appropriate data on a long-term basis.

  11. A microstructural lattice model for strain oriented problems: A combined Monte Carlo finite element technique

    NASA Technical Reports Server (NTRS)

    Gayda, J.; Srolovitz, D. J.

    1987-01-01

    A specialized microstructural lattice model, termed MCFET for combined Monte Carlo Finite Element Technique, was developed which simulates microstructural evolution in material systems where modulated phases occur and the directionality of the modulation is influenced by internal and external stresses. In this approach, the microstructure is discretized onto a fine lattice. Each element in the lattice is labelled in accordance with its microstructural identity. Diffusion of material at elevated temperatures is simulated by allowing exchanges of neighboring elements if the exchange lowers the total energy of the system. A Monte Carlo approach is used to select the exchange site, while the change in energy associated with stress fields is computed using a finite element technique. The MCFET analysis was validated by comparing this approach with a closed-form analytical method for stress-assisted shape changes of a single particle in an infinite matrix. Sample MCFET analyses for multiparticle problems were also run, and in general the resulting microstructural changes associated with the application of an external stress are similar to those observed in Ni-Al-Cr alloys at elevated temperature.
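
    A minimal sketch of the exchange step described above, with a simple unlike-neighbor bond count standing in for the finite-element elastic-energy evaluation that MCFET actually performs: sites on a periodic lattice carry phase labels, and a randomly chosen neighboring pair is swapped whenever the swap does not raise the total energy. Lattice size and sweep count are illustrative.

    ```python
    # Minimal Monte Carlo exchange sketch with a placeholder energy function.
    import numpy as np

    rng = np.random.default_rng(6)
    N = 32
    lattice = rng.integers(0, 2, size=(N, N))    # two-phase microstructure

    def energy(lat):
        """Count unlike nearest-neighbor bonds (periodic boundaries); a stand-in
        for the FEM stress-field energy evaluation used by MCFET."""
        return int(np.sum(lat != np.roll(lat, 1, 0)) + np.sum(lat != np.roll(lat, 1, 1)))

    E = energy(lattice)
    for _ in range(50_000):
        x, y = rng.integers(0, N, size=2)
        dx, dy = ((0, 1), (0, -1), (1, 0), (-1, 0))[rng.integers(0, 4)]
        nx, ny = (x + dx) % N, (y + dy) % N
        trial = lattice.copy()
        trial[x, y], trial[nx, ny] = trial[nx, ny], trial[x, y]
        if (Et := energy(trial)) <= E:           # accept non-increasing swaps
            lattice, E = trial, Et
    print(f"final unlike-bond energy: {E}")
    ```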

  12. Does the Cognitive Reflection Test actually capture heuristic versus analytic reasoning styles in older adults?

    PubMed

    Hertzog, Christopher; Smith, R Marit; Ariel, Robert

    2018-01-01

    Background/Study Context: This study evaluated adult age differences in the original three-item Cognitive Reflection Test (CRT; Frederick, 2005, The Journal of Economic Perspectives, 19, 25-42) and an expanded seven-item version of that test (Toplak et al., 2013, Thinking and Reasoning, 20, 147-168). The CRT is a numerical problem-solving test thought to capture a disposition towards either rapid, intuition-based problem solving (Type I reasoning) or a more thoughtful, analytical problem-solving approach (Type II reasoning). Test items are designed to induce heuristically guided errors that can be avoided if using an appropriate numerical representation of the test problems. We evaluated differences between young and older adults in CRT performance and correlates of CRT performance. Older adults (ages 60 to 80) were paid volunteers who participated in experiments assessing age differences in self-regulated learning. Young adults (ages 17 to 35) were students participating for pay as part of a project assessing measures of critical thinking skills or as a young comparison group in the self-regulated learning study. There were age differences in the number of CRT correct responses in two independent samples. Results with the original three-item CRT showed that older adults had a greater relative proportion of errors based on providing the intuitive lure. However, younger adults actually had a greater proportion of intuitive errors on the long version of the CRT, relative to older adults. Item analysis indicated a much lower internal consistency of CRT items for older adults. These outcomes do not offer full support for the argument that older adults are higher in the use of a "Type I" cognitive style. The evidence was also consistent with an alternative hypothesis that age differences were due to lower levels of numeracy in the older samples. Alternative process-oriented evaluations of how older adults solve CRT items will probably be needed to determine conditions under which older adults manifest an increase in the Type I dispositional tendency to opt for superficial, heuristically guided problem representations in numerical problem-solving tasks.
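
    The item analysis mentioned above typically rests on an internal-consistency statistic such as Cronbach's alpha; the paper does not give its formula, so the sketch below shows the standard computation on hypothetical 0/1-scored CRT responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Illustrative 0/1-scored CRT responses (rows = participants, cols = items):
scores = np.array([[1, 1, 0],
                   [1, 0, 0],
                   [1, 1, 1],
                   [0, 0, 0],
                   [1, 1, 0]])
print(cronbach_alpha(scores))
```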

  13. Portable sample preparation and analysis system for micron and sub-micron particle characterization using light scattering and absorption spectroscopy

    DOEpatents

    Stark, Peter C [Los Alamos, NM; Zurek, Eduardo [Barranquilla, CO; Wheat, Jeffrey V [Fort Walton Beach, FL; Dunbar, John M [Santa Fe, NM; Olivares, Jose A [Los Alamos, NM; Garcia-Rubio, Luis H [Temple Terrace, FL; Ward, Michael D [Los Alamos, NM

    2011-07-26

    There is provided a method and device for remote sampling, preparation and optical interrogation of a sample using light scattering and light absorption methods. The portable device is a filtration-based device that removes interfering background particle material from the sample matrix by segregating or filtering the chosen analyte from the sample solution or matrix while allowing the interfering background particles to be pumped out of the device. The segregated analyte is then suspended in a diluent for analysis. The device is capable of calculating an initial concentration of the analyte, as well as diluting the analyte such that reliable optical measurements can be made. Suitable analytes include cells, microorganisms, bioparticles, pathogens and diseases. Sample matrixes include biological fluids such as blood and urine, as well as environmental samples including waste water.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Heinrich, R.R.; Graczyk, D.G.

    The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year 1991 (October 1990 through September 1991). This is the eighth annual report for the ACL. The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. In addition, the ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques.

  15. Simultaneous determination of eight flavonoids in propolis using chemometrics-assisted high performance liquid chromatography-diode array detection.

    PubMed

    Sun, Yan-Mei; Wu, Hai-Long; Wang, Jian-Yao; Liu, Zhi; Zhai, Min; Yu, Ru-Qin

    2014-07-01

    A fast analytical strategy of second-order calibration based on the alternating trilinear decomposition algorithm (ATLD)-assisted high performance liquid chromatography coupled with a diode array detector (HPLC-DAD) was established for the simultaneous determination of eight flavonoids (rutin, quercetin, luteolin, kaempferol, isorhamnetin, apigenin, galangin and chrysin) in propolis capsule samples. The chromatographic separation was implemented on a Wondasil™ C18 column (250 mm × 4.6 mm, 5 μm) within 13 min with a binary mobile phase composed of water with 1% formic acid and methanol at a flow rate of 1.0 mL/min, after the flavonoids were extracted with methanol only, by ultrasound extraction for 15 min. The baseline problem was overcome by treating background drift as additional compositions or factors alongside the target analytes, and ATLD was employed to handle the overlapping peaks from analytes of interest or from analytes and co-eluting matrix compounds. Linearity was good, with correlation coefficients of no less than 0.9947; the limits of detection (LODs) were low, within the range of 3.39-33.05 ng/mL; accuracy was confirmed by recoveries ranging from 91.9% to 110.2% and root-mean-square errors of prediction (RMSEPs) of less than 1.1 μg/mL. The results indicated that the chromatographic method with the aid of ATLD is efficient, sensitive and cost-effective and can realize the resolution and accurate quantification of flavonoids even in the presence of interferences, thus providing an alternative method for accurate quantification of analytes, especially when complete separation is not easily accomplished. The method was successfully applied to propolis capsule samples, and satisfactory results were obtained. Copyright © 2014 Elsevier B.V. All rights reserved.
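
    ATLD itself uses alternating penalty updates that are not reproduced here, but its core is the trilinear (PARAFAC) model X[i,j,k] ≈ Σ_r A[i,r]·B[j,r]·C[k,r], where, for HPLC-DAD data, A holds elution profiles, B spectra, and C relative concentrations. Below is a minimal plain alternating-least-squares sketch of that model, with hypothetical dimensions; it is an illustration of the trilinear idea, not the ATLD algorithm.

```python
import numpy as np

def khatri_rao(P, Q):
    # Column-wise Kronecker product; rows indexed by (p, q) pairs, q fastest.
    return np.einsum('pr,qr->pqr', P, Q).reshape(-1, P.shape[1])

def trilinear_als(X, R, n_iter=200, seed=0):
    """Fit X[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r] by alternating least squares."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    X1 = X.reshape(I, J * K)
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)
    for _ in range(n_iter):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C).T)
        A /= np.linalg.norm(A, axis=0)        # normalize so scale ends up in C
        B = X2 @ np.linalg.pinv(khatri_rao(A, C).T)
        B /= np.linalg.norm(B, axis=0)
        C = X3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Hypothetical data cube: 60 elution times x 40 wavelengths x 9 samples.
X = np.random.default_rng(1).random((60, 40, 9))
A, B, C = trilinear_als(X, R=3)
```

    Because the model resolves each analyte into its own factor, relative concentrations can be read off the columns of C even when chromatographic peaks overlap, which is the second-order advantage the abstract relies on.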

  16. Comparison of three-way and four-way calibration for the real-time quantitative analysis of drug hydrolysis in complex dynamic samples by excitation-emission matrix fluorescence.

    PubMed

    Yin, Xiao-Li; Gu, Hui-Wen; Liu, Xiao-Lu; Zhang, Shan-Hui; Wu, Hai-Long

    2018-03-05

    Multiway calibration in combination with spectroscopic techniques is an attractive tool for online or real-time monitoring of target analyte(s) in complex samples. However, how to choose a suitable multiway calibration method for the resolution of spectroscopic-kinetic data remains a troubling problem in practical applications. In this work, for the first time, three-way and four-way fluorescence-kinetic data arrays were generated during the real-time monitoring of the hydrolysis of irinotecan (CPT-11) in human plasma by excitation-emission matrix fluorescence. Alternating normalization-weighted error (ANWE) and alternating penalty trilinear decomposition (APTLD) were used as three-way calibration methods for the decomposition of the three-way kinetic data array, whereas alternating weighted residual constraint quadrilinear decomposition (AWRCQLD) and alternating penalty quadrilinear decomposition (APQLD) were applied as four-way calibration methods to the four-way kinetic data array. The quantitative results of the two kinds of calibration models were fully compared from the perspective of predicted real-time concentrations, spiked recoveries of initial concentration, and analytical figures of merit. The comparison demonstrated that both three-way and four-way calibration models could achieve real-time quantitative analysis of the hydrolysis of CPT-11 in human plasma under certain conditions. However, each kind of model was also found to have critical advantages and shortcomings for dynamic analysis. The conclusions obtained in this paper can provide helpful guidance for the reasonable selection of multiway calibration models to achieve the real-time quantitative analysis of target analyte(s) in complex dynamic systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Simulator for multilevel optimization research

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Young, K. C.

    1986-01-01

    A computer program designed to simulate and improve multilevel optimization techniques is described. By using simple analytic functions to represent complex engineering analyses, the simulator can generate and test a large variety of multilevel decomposition strategies in a relatively short time. This type of research is an essential step toward routine optimization of large aerospace systems. The paper discusses the types of optimization problems handled by the simulator and gives input and output listings and plots for a sample problem. It also describes multilevel implementation techniques which have value beyond the present computer program. Thus, this document serves as a user's manual for the simulator and as a guide for building future multilevel optimization applications.

  18. Methods of analysis by the U.S. Geological Survey National Water Quality Laboratory; determination of selected carbamate pesticides in water by high-performance liquid chromatography

    USGS Publications Warehouse

    Werner, S.L.; Johnson, S.M.

    1994-01-01

    As part of its primary responsibility concerning water as a national resource, the U.S. Geological Survey collects and analyzes samples of ground water and surface water to determine water quality. This report describes the method used since June 1987 to determine selected total-recoverable carbamate pesticides present in water samples. High-performance liquid chromatography is used to separate N-methyl carbamates, N-methyl carbamoyloximes, and an N-phenyl carbamate which have been extracted from water and concentrated in dichloromethane. Analytes, surrogate compounds, and reference compounds are eluted from the analytical column within 25 minutes. Two modes of analyte detection are used: (1) a photodiode-array detector measures and records ultraviolet-absorbance profiles, and (2) a fluorescence detector measures and records fluorescence from an analyte derivative produced when analyte hydrolysis is combined with chemical derivatization. Analytes are identified and confirmed in a three-stage process by use of chromatographic retention time, ultraviolet (UV) spectral comparison, and derivatization/fluorescence detection. Quantitative results are based on the integration of single-wavelength UV-absorbance chromatograms and on comparison with calibration curves derived from external analyte standards that are run with samples as part of an instrumental analytical sequence. Estimated method detection limits vary for each analyte, depending on the sample matrix conditions, and range from 0.5 microgram per liter to as low as 0.01 microgram per liter. Reporting levels for all analytes have been set at 0.5 microgram per liter for this method. Corrections on the basis of percentage recoveries of analytes spiked into distilled water are not applied to values calculated for analyte concentration in samples. These values for analyte concentrations instead indicate the quantities recovered by the method from a particular sample matrix.
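
    The quantitation step described above amounts to a linear external-standard calibration; a minimal single-analyte sketch with made-up peak areas and concentrations:

```python
import numpy as np

# Peak areas of external standards at known concentrations (hypothetical values):
conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0])         # micrograms per liter
area = np.array([120., 250., 1230., 2480., 4900.])  # integrated UV-absorbance peak areas

slope, intercept = np.polyfit(conc, area, 1)        # linear calibration curve

def quantify(sample_area):
    """Concentration of the analyte in a sample from its integrated peak area."""
    return (sample_area - intercept) / slope

print(quantify(1800.0))
```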

  19. NHEXAS PHASE I MARYLAND STUDY--QA ANALYTICAL RESULTS FOR PESTICIDE METABOLITES IN SPIKE SAMPLES

    EPA Science Inventory

    The Pesticides in Spikes data set contains the analytical results of measurements of up to 17 pesticides in 12 control samples (spikes) from 11 households. Measurements were made in samples of blood serum. Controls were used to assess recovery of target analytes from a sample m...

  20. Investigations of Some Liquid Matrixes for Analyte Quantification by MALDI

    NASA Astrophysics Data System (ADS)

    Moon, Jeong Hee; Park, Kyung Man; Ahn, Sung Hee; Lee, Seong Hoon; Kim, Myung Soo

    2015-06-01

    Sample inhomogeneity is one of the obstacles to the generation of reproducible mass spectra by MALDI and to their use for the purpose of analyte quantification. As a potential solution to this problem, we investigated MALDI with some liquid matrixes prepared by nonstoichiometric mixing of acids and bases. Out of 27 combinations of acids and bases, liquid matrixes could be produced from seven. When the overall spectral features were considered, two liquid matrixes using α-cyano-4-hydroxycinnamic acid as the acid and 3-aminoquinoline and N,N-diethylaniline as bases were the best choices. In our previous study of MALDI with solid matrixes, we found that three requirements had to be met for the generation of reproducible spectra and for analyte quantification: (1) controlling the temperature by fixing the total ion count, (2) plotting the analyte-to-matrix ion ratio versus the analyte concentration as the calibration curve, and (3) keeping the matrix suppression below a critical value. We found that the same requirements had to be met in MALDI with liquid matrixes as well. In particular, although the liquid matrixes tested here were homogeneous, they failed to display spot-to-spot spectral reproducibility unless the first requirement above was met. We also found that analyte-derived ions could not be produced efficiently by MALDI with the above liquid matrixes unless the analyte was sufficiently basic. In this sense, MALDI processes with solid and liquid matrixes should be regarded as complementary techniques rather than as competing ones.

  1. Investigations of Some Liquid Matrixes for Analyte Quantification by MALDI.

    PubMed

    Moon, Jeong Hee; Park, Kyung Man; Ahn, Sung Hee; Lee, Seong Hoon; Kim, Myung Soo

    2015-10-01

    Sample inhomogeneity is one of the obstacles to the generation of reproducible mass spectra by MALDI and to their use for the purpose of analyte quantification. As a potential solution to this problem, we investigated MALDI with some liquid matrixes prepared by nonstoichiometric mixing of acids and bases. Out of 27 combinations of acids and bases, liquid matrixes could be produced from seven. When the overall spectral features were considered, two liquid matrixes using α-cyano-4-hydroxycinnamic acid as the acid and 3-aminoquinoline and N,N-diethylaniline as bases were the best choices. In our previous study of MALDI with solid matrixes, we found that three requirements had to be met for the generation of reproducible spectra and for analyte quantification: (1) controlling the temperature by fixing the total ion count, (2) plotting the analyte-to-matrix ion ratio versus the analyte concentration as the calibration curve, and (3) keeping the matrix suppression below a critical value. We found that the same requirements had to be met in MALDI with liquid matrixes as well. In particular, although the liquid matrixes tested here were homogeneous, they failed to display spot-to-spot spectral reproducibility unless the first requirement above was met. We also found that analyte-derived ions could not be produced efficiently by MALDI with the above liquid matrixes unless the analyte was sufficiently basic. In this sense, MALDI processes with solid and liquid matrixes should be regarded as complementary techniques rather than as competing ones.
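
    Requirement (2) above can be illustrated with a few hypothetical runs: the calibration is built on the analyte-to-matrix ion ratio rather than on the raw analyte ion count, which cancels shot-to-shot variation in total ion yield. All numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical MALDI runs: analyte ion counts, matrix ion counts, known concentrations.
analyte_counts = np.array([1.2e4, 2.9e4, 6.3e4, 1.18e5])
matrix_counts = np.array([9.5e5, 9.1e5, 8.8e5, 8.2e5])
conc = np.array([0.5, 1.25, 2.5, 5.0])              # arbitrary concentration units

ratio = analyte_counts / matrix_counts              # analyte-to-matrix ion ratio
slope, intercept = np.polyfit(conc, ratio, 1)       # calibration curve

unknown_ratio = 4.0e4 / 8.9e5
print((unknown_ratio - intercept) / slope)          # estimated concentration
```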

  2. The Statistical Value of Raw Fluorescence Signal in Luminex xMAP Based Multiplex Immunoassays

    PubMed Central

    Breen, Edmond J.; Tan, Woei; Khan, Alamgir

    2016-01-01

    Tissue samples (plasma, saliva, serum or urine) from 169 patients classified as either normal or having one of seven possible diseases are analysed across three 96-well plates for the presence of 37 analytes using cytokine inflammation multiplexed immunoassay panels. Censoring of concentration data caused problems for the analysis of the low-abundance analytes. Using fluorescence responses instead of concentration-based analysis allowed these low-abundance analytes to be analysed. Mixed-effects analysis on the resulting fluorescence and concentration responses reveals that the combination of censoring and mapping the fluorescence responses to concentration values through a 5PL curve changed the observed analyte concentrations. Simulation verifies this by showing that the observed analyte concentration levels depend on the mean fluorescence response and its distribution. Departures from normality in the fluorescence responses can lead to differences in concentration estimates and unreliable probabilities for treatment effects. When fluorescence responses are normally distributed, probabilities of treatment effects from fluorescence-based t-tests have greater statistical power than the same probabilities from concentration-based t-tests. We add evidence that the fluorescence response, unlike concentration values, does not require censoring, and we show, with respect to differential analysis on the fluorescence responses, that background correction is not required. PMID:27243383
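
    The 5PL mapping referred to above has the standard five-parameter logistic form y = d + (a - d)/(1 + (x/c)^b)^g. The sketch below fits it to a hypothetical standard curve and inverts it to map a fluorescence response back to a concentration, which is the step the authors argue introduces distortion for low-abundance analytes; the data values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def five_pl(x, a, d, c, b, g):
    """5-parameter logistic: a = zero-dose response, d = infinite-dose response,
    c = mid-range dose, b = slope, g = asymmetry."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

def inverse_five_pl(y, a, d, c, b, g):
    """Map a fluorescence response back to a concentration estimate."""
    return c * (((a - d) / (y - d)) ** (1.0 / g) - 1.0) ** (1.0 / b)

# Hypothetical standard curve (concentration in pg/mL, median fluorescence):
x = np.array([1., 3., 10., 30., 100., 300., 1000.])
y = np.array([22., 48., 160., 540., 1600., 3400., 5200.])
popt, _ = curve_fit(five_pl, x, y, p0=[10., 6000., 100., 1., 1.], maxfev=20000)

print(inverse_five_pl(800.0, *popt))   # concentration implied by a response of 800
```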

  3. Advanced Curation: Solving Current and Future Sample Return Problems

    NASA Technical Reports Server (NTRS)

    Fries, M.; Calaway, M.; Evans, C.; McCubbin, F.

    2015-01-01

    Advanced Curation is a wide-ranging and comprehensive research and development effort at NASA Johnson Space Center that identifies and remediates sample-related issues. For current collections, Advanced Curation investigates new cleaning, verification, and analytical techniques to assess their suitability for improving curation processes. Specific needs are also assessed for future sample return missions. For each need, a written plan is drawn up to achieve the requirement. The plan draws upon current curation practices, input from curators, the analytical expertise of the Astromaterials Research and Exploration Science (ARES) team, and suitable standards maintained by ISO, IEST, NIST and other institutions. Additionally, new technologies are adopted on the basis of need and availability. Implementation plans are tested using customized trial programs with statistically robust courses of measurement, and are iterated if necessary until an implementable protocol is established. Upcoming and potential NASA missions such as OSIRIS-REx, the Asteroid Retrieval Mission (ARM), sample return missions in the New Frontiers program, and Mars sample return (MSR) all feature new difficulties and specialized sample handling requirements. The Mars 2020 mission in particular poses a suite of challenges, since the mission will cache martian samples for possible return to Earth. In anticipation of future MSR, the following problems are among those under investigation: What is the most efficient means to achieve the less than 1.0 ng/sq cm total organic carbon (TOC) cleanliness required for all sample handling hardware? How do we maintain and verify cleanliness at this level? The Mars 2020 Organic Contamination Panel (OCP) predicts that organic carbon, if present, will be present at the "one to tens" of ppb level in martian near-surface samples. The same samples will likely contain wt% perchlorate salts, or approximately 1,000,000x as much perchlorate oxidizer as organic carbon. The chemical kinetics of the reaction between the two are poorly understood at present under the conditions of cached or curated martian samples. Among other parameters, what is the maximum temperature allowed during storage in order to preserve native martian organic compounds for analysis? What is the best means to collect headspace gases from cached martian (and other) samples? This gas will contain not only martian atmosphere but also off-gassed volatiles from the cached solids.

  4. Others' Anger Makes People Work Harder Not Smarter: The Effect of Observing Anger and Sarcasm on Creative and Analytic Thinking

    ERIC Educational Resources Information Center

    Miron-Spektor, Ella; Efrat-Treister, Dorit; Rafaeli, Anat; Schwarz-Cohen, Orit

    2011-01-01

    The authors examine whether and how observing anger influences thinking processes and problem-solving ability. In 3 studies, the authors show that participants who listened to an angry customer were more successful in solving analytic problems, but less successful in solving creative problems compared with participants who listened to an…

  5. The analytic solution of the firm's cost-minimization problem with box constraints and the Cobb-Douglas model

    NASA Astrophysics Data System (ADS)

    Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.

    2012-12-01

    One of the most well-known problems in the field of Microeconomics is the Firm's Cost-Minimization Problem. In this paper we establish the analytical expression for the cost function using the Cobb-Douglas model and considering maximum constraints for the inputs. Moreover we prove that it belongs to the class C^1.
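
    The abstract states the result without the formulas; under standard notation (not necessarily the paper's), the problem and its interior solution read:

```latex
\[
\min_{x_1,\dots,x_n}\; \sum_{i=1}^{n} w_i x_i
\quad\text{s.t.}\quad
A\prod_{i=1}^{n} x_i^{\alpha_i} \ge q,
\qquad 0 \le x_i \le b_i ,
\]
% w_i: input prices, q: required output, b_i: box (maximum-input) bounds.
% When no bound binds, the first-order conditions w_i = \lambda \alpha_i f(x)/x_i give
\[
x_i^{*} = \frac{\alpha_i}{w_i}
\left[\frac{q}{A}\prod_{j=1}^{n}\Bigl(\frac{w_j}{\alpha_j}\Bigr)^{\alpha_j}\right]^{1/\sum_j \alpha_j},
\qquad
C(q) = \Bigl(\textstyle\sum_j \alpha_j\Bigr)
\left[\frac{q}{A}\prod_{j=1}^{n}\Bigl(\frac{w_j}{\alpha_j}\Bigr)^{\alpha_j}\right]^{1/\sum_j \alpha_j}.
\]
```

    When a box constraint binds, the corresponding input is fixed at its bound b_i and the reduced problem is re-solved; this produces the piecewise cost function whose C^1 smoothness the paper establishes.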

  6. IMPROVED METHOD FOR THE STORAGE OF GROUND WATER SAMPLES CONTAINING VOLATILE ORGANIC ANALYTES

    EPA Science Inventory

    The sorption of volatile organic analytes from water samples by the Teflon septum surface used with standard glass 40-ml sample collection vials was investigated. Analytes tested included alkanes, isoalkanes, olefins, cycloalkanes, a cycloalkene, monoaromatics, a polynuclear arom...

  7. Nanophotonic particle simulation and inverse design using artificial neural networks.

    PubMed

    Peurifoy, John; Shen, Yichen; Jing, Li; Yang, Yi; Cano-Renteria, Fidel; DeLacy, Brendan G; Joannopoulos, John D; Tegmark, Max; Soljačić, Marin

    2018-06-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical.
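
    A toy illustration of the two-stage idea, with a scalar stand-in for the electromagnetic simulation and a one-hidden-layer network in place of the deeper architecture used in the paper; the point is that the same weights that fit the forward map also give an analytical input gradient for the inverse design step. All functions and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a scattering simulation: response vs. two layer thicknesses.
def simulate(t):
    return np.sin(3.0 * t[:, 0]) * np.cos(2.0 * t[:, 1])

# Training data from a small sampling of the design space.
T = rng.uniform(0.0, 1.0, size=(2000, 2))
Y = simulate(T)[:, None]

# One-hidden-layer network y = W2 tanh(W1 x + b1) + b2, trained by gradient descent.
H = 64
W1 = rng.standard_normal((2, H)) * 0.5; b1 = np.zeros(H)
W2 = rng.standard_normal((H, 1)) * 0.5; b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    Z = np.tanh(T @ W1 + b1)
    P = Z @ W2 + b2
    G = 2.0 * (P - Y) / len(T)               # gradient of mean squared error w.r.t. P
    gW2 = Z.T @ G; gb2 = G.sum(0)
    GZ = (G @ W2.T) * (1.0 - Z ** 2)         # backprop through tanh
    gW1 = T.T @ GZ; gb1 = GZ.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Inverse design: find thicknesses whose predicted response matches a target,
# using the analytical gradient of the trained network with respect to its input.
target = 0.7
x = np.array([0.5, 0.5])
for _ in range(500):
    z = np.tanh(x @ W1 + b1)
    p = z @ W2 + b2
    dp_dx = W1 @ (W2[:, 0] * (1.0 - z ** 2))  # chain rule through the network
    x -= 0.1 * 2.0 * (p[0] - target) * dp_dx
print(x, p[0])
```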

  8. Problem Definition Study on Techniques and Methodologies for Evaluating the Chemical and Toxicological Properties of Combustion Products of Gun Systems. Volume 1.

    DTIC Science & Technology

    1988-03-01

    methods that can resolve the various compounds are required. This chapter specifically focuses on analytical and sampling methodology used to determine...

  9. Mature red blood cells: from optical model to inverse light-scattering problem.

    PubMed

    Gilev, Konstantin V; Yurkin, Maxim A; Chernyshova, Ekaterina S; Strokotov, Dmitry I; Chernyshev, Andrei V; Maltsev, Valeri P

    2016-04-01

    We propose a method for characterization of mature red blood cells (RBCs) morphology, based on measurement of light-scattering patterns (LSPs) of individual RBCs with the scanning flow cytometer and on solution of the inverse light-scattering (ILS) problem for each LSP. We considered a RBC shape model, corresponding to the minimal bending energy of the membrane with isotropic elasticity, and constructed an analytical approximation, which allows rapid simulation of the shape, given the diameter and minimal and maximal thicknesses. The ILS problem was solved by the nearest-neighbor interpolation using a preliminary calculated database of 250,000 theoretical LSPs. For each RBC in blood sample we determined three abovementioned shape characteristics and refractive index, which also allows us to calculate volume, surface area, sphericity index, spontaneous curvature, hemoglobin concentration and content.
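
    A minimal sketch of the database-lookup stage, with a toy scattering model and a smaller database standing in for the 250,000 rigorously simulated LSPs; the parameter ranges and the model function are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
angles = np.linspace(10.0, 70.0, 64)      # scattering angles (degrees)

def model_lsp(p):
    """Toy stand-in for the rigorous light-scattering simulation: maps
    (diameter, min thickness, max thickness, refractive index) to an LSP."""
    d, tmin, tmax, n = p
    return d * np.exp(-angles / (20.0 * n)) * (1.0 + 0.3 * np.cos(angles * (tmax + tmin) / 10.0))

# Precomputed database of theoretical LSPs (50,000 entries here for speed).
params = rng.uniform([5.0, 0.8, 1.5, 1.38], [9.0, 2.0, 3.0, 1.42], size=(50_000, 4))
database = np.array([model_lsp(p) for p in params])

# Solve the inverse light-scattering problem for one measured LSP by
# nearest-neighbor lookup over the database.
measured = model_lsp(np.array([7.5, 1.2, 2.1, 1.40]))
idx = np.argmin(((database - measured) ** 2).sum(axis=1))
print(params[idx])    # characteristics of the best-matching theoretical RBC
```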

  10. Mature red blood cells: from optical model to inverse light-scattering problem

    PubMed Central

    Gilev, Konstantin V.; Yurkin, Maxim A.; Chernyshova, Ekaterina S.; Strokotov, Dmitry I.; Chernyshev, Andrei V.; Maltsev, Valeri P.

    2016-01-01

    We propose a method for characterization of mature red blood cells (RBCs) morphology, based on measurement of light-scattering patterns (LSPs) of individual RBCs with the scanning flow cytometer and on solution of the inverse light-scattering (ILS) problem for each LSP. We considered a RBC shape model, corresponding to the minimal bending energy of the membrane with isotropic elasticity, and constructed an analytical approximation, which allows rapid simulation of the shape, given the diameter and minimal and maximal thicknesses. The ILS problem was solved by the nearest-neighbor interpolation using a preliminary calculated database of 250,000 theoretical LSPs. For each RBC in blood sample we determined three abovementioned shape characteristics and refractive index, which also allows us to calculate volume, surface area, sphericity index, spontaneous curvature, hemoglobin concentration and content. PMID:27446656

  11. Reliable use of determinants to solve nonlinear structural eigenvalue problems efficiently

    NASA Technical Reports Server (NTRS)

    Williams, F. W.; Kennedy, D.

    1988-01-01

    The analytical derivation, numerical implementation, and performance of a multiple-determinant parabolic interpolation method (MDPIM) for use in solving transcendental eigenvalue (critical buckling or undamped free vibration) problems in structural mechanics are presented. The overall bounding, eigenvalue-separation, qualified parabolic interpolation, accuracy-confirmation, and convergence-recovery stages of the MDPIM are described in detail, and the numbers of iterations required to solve sample plane-frame problems using the MDPIM are compared with those for a conventional bisection method and for the Newtonian method of Simpson (1984) in extensive tables. The MDPIM is shown to use 31 percent less computation time than bisection when accuracy of 10^-4 is required, but 62 percent less when accuracy of 10^-8 is required; the time savings over the Newtonian method are about 10 percent.
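
    The qualified parabolic interpolation stage can be illustrated by a Muller-type iteration on determinant samples. The sketch below finds a root of a stand-in transcendental determinant (the MDPIM's bounding, separation, and convergence-recovery stages are omitted, and the example function is not from the paper).

```python
import numpy as np

def det_k(lam):
    """Stand-in for the determinant of a transcendental dynamic stiffness
    matrix; its zeros play the role of natural frequencies (here tan(x) = x)."""
    return np.tan(lam) - lam

def parabolic_root(f, x0, x1, x2, tol=1e-8, max_iter=50):
    """Muller-type iteration: fit a parabola through three determinant samples
    and step to its nearest root, cf. the MDPIM interpolation stage."""
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        d01 = (f1 - f0) / (x1 - x0)           # divided differences of the parabola
        d12 = (f2 - f1) / (x2 - x1)
        a = (d12 - d01) / (x2 - x0)
        b = d12 + a * (x2 - x1)
        disc = np.sqrt(b * b - 4.0 * a * f2 + 0j)
        den = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        x3 = (x2 - 2.0 * f2 / den).real       # root of the parabola nearest x2
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

print(parabolic_root(det_k, 4.2, 4.4, 4.6))   # converges near 4.4934
```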

  12. A framework for inference about carnivore density from unstructured spatial sampling of scat using detector dogs

    USGS Publications Warehouse

    Thompson, Craig M.; Royle, J. Andrew; Garner, James D.

    2012-01-01

    Wildlife management often hinges upon an accurate assessment of population density. Although undeniably useful, many of the traditional approaches to density estimation such as visual counts, livetrapping, or mark–recapture suffer from a suite of methodological and analytical weaknesses. Rare, secretive, or highly mobile species exacerbate these problems through the reality of small sample sizes and movement on and off study sites. In response to these difficulties, there is growing interest in the use of non-invasive survey techniques, which provide the opportunity to collect larger samples with minimal increases in effort, as well as the application of analytical frameworks that are not reliant on large sample size arguments. One promising survey technique, the use of scat detecting dogs, offers a greatly enhanced probability of detection while at the same time generating new difficulties with respect to non-standard survey routes, variable search intensity, and the lack of a fixed survey point for characterizing non-detection. In order to account for these issues, we modified an existing spatially explicit, capture–recapture model for camera trap data to account for variable search intensity and the lack of fixed, georeferenced trap locations. We applied this modified model to a fisher (Martes pennanti) dataset from the Sierra National Forest, California, and compared the results (12.3 fishers/100 km2) to more traditional density estimates. We then evaluated model performance using simulations at 3 levels of population density. Simulation results indicated that estimates based on the posterior mode were relatively unbiased. We believe that this approach provides a flexible analytical framework for reconciling the inconsistencies between detector dog survey data and density estimation procedures.

  13. A simple and highly sensitive UPLC-ESI-MS/MS method for the simultaneous quantification of nicotine, cotinine, and the tobacco-specific carcinogens N'-nitrosonornicotine and 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone in serum samples.

    PubMed

    Loukotková, Lucie; VonTungeln, Linda S; Vanlandingham, Michelle; da Costa, Gonçalo Gamboa

    2018-01-01

    According to the World Health Organization, the consumption of tobacco products is the single largest cause of preventable deaths in the world, exceeding the total aggregated number of deaths caused by diseases such as AIDS, tuberculosis, and malaria. An important element in the evaluation of the health risks associated with the consumption of tobacco products is the assessment of the internal exposure to the tobacco constituents responsible for their addictive (e.g. nicotine) and carcinogenic (e.g. N-nitrosamines such as NNN and NNK) properties. However, the assessment of the serum levels of these compounds is often challenging from an analytical standpoint, in particular when limited sample volumes are available and low detection limits are required. Currently available analytical methods often rely on complex multi-step sample preparation procedures, which are prone to low analyte recoveries and ex-vivo contamination due to the ubiquitous nature of these compounds as background contaminants. In order to circumvent these problems, we report a facile and highly sensitive method for the simultaneous quantification of nicotine, cotinine, NNN, and NNK in serum samples. The method relies on a simple "one pot" liquid-liquid extraction procedure and isotope dilution ultra-high pressure (UPLC) hydrophilic interaction liquid chromatography (HILIC) coupled with tandem mass spectrometry. The method requires only 10 μL of serum and presents a limit of quantification of 0.02 nmol (3000 pg/mL) nicotine, 0.6 pmol (100 pg/mL) cotinine, 0.05 pmol NNK (10 pg/mL), and 0.06 pmol NNN (10 pg/mL), making it appropriate for pharmacokinetic evaluations. Published by Elsevier B.V.

  14. Comparative spectral analysis of veterinary powder product by continuous wavelet and derivative transforms

    NASA Astrophysics Data System (ADS)

    Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru

    2007-10-01

    Comparative simultaneous determination of chlortetracycline and benzocaine in a commercial veterinary powder product was carried out by continuous wavelet transform (CWT) and classical derivative transform (classical derivative spectrophotometry). In this quantitative spectral analysis, the two proposed analytical methods do not require any chemical separation process. In the first step, several wavelet families were tested to find an optimal CWT for the overlapping signal processing of the analyzed compounds. Subsequently, we observed that the coiflets (COIF-CWT) method with dilation parameter a = 400 gives suitable results for this analytical application. For comparison, the classical derivative spectrophotometry (CDS) approach was also applied to the simultaneous quantitative resolution of the same analytical problem. Calibration functions were obtained by measuring the transform amplitudes corresponding to zero-crossing points for both the CWT and CDS methods. The utility of these two analytical approaches was verified by analyzing various synthetic mixtures of chlortetracycline and benzocaine, and they were applied to real samples of the veterinary powder formulation. The experimental results obtained from the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry, and successful results were reported.
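
    PyWavelets' continuous transform does not include the coiflet family used in the paper, so the sketch below substitutes a Mexican-hat wavelet to illustrate the zero-crossing calibration idea on synthetic overlapping bands: at a point where the interferent's transform crosses zero, the transform amplitude varies linearly with the analyte's concentration alone. Band positions, widths, and concentrations are invented.

```python
import numpy as np
import pywt

wl = np.linspace(200.0, 400.0, 512)                      # wavelength axis (nm)
band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
spec_a, spec_b = band(280.0, 15.0), band(300.0, 18.0)    # overlapping unit spectra

def cwt_row(signal, scale=40.0):
    coef, _ = pywt.cwt(signal, [scale], 'mexh')          # one-scale continuous transform
    return coef[0]

# Zero-crossing of the interferent's transform within the overlap region:
tb = cwt_row(spec_b)
win = (wl > 260.0) & (wl < 320.0)
zc = np.where(win)[0][np.argmin(np.abs(tb[win]))]

# Calibration: by linearity of the CWT, the amplitude at the zero-crossing is
# insensitive to B and varies linearly with A's concentration.
concs = np.array([0.2, 0.5, 1.0, 2.0])
amps = np.array([cwt_row(c * spec_a + 0.8 * spec_b)[zc] for c in concs])
slope, intercept = np.polyfit(concs, amps, 1)
print(slope, intercept)
```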

  15. Results of the U. S. Geological Survey's analytical evaluation program for standard reference samples distributed in April 2001

    USGS Publications Warehouse

    Woodworth, M.T.; Connor, B.F.

    2001-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-165 (trace constituents), M-158 (major constituents), N-69 (nutrient constituents), N-70 (nutrient constituents), P-36 (low ionic-strength constituents), and Hg-32 (mercury) -- that were distributed in April 2001 to laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 73 laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
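
    The report does not spell out its nonparametric statistics; a common robust choice is the median as the most probable value with an MAD-based dispersion, sketched here on hypothetical interlaboratory results.

```python
import numpy as np

def most_probable_value(reported):
    """Nonparametric estimate of an analyte's most probable value from
    interlaboratory results: the median, with an MAD-based dispersion."""
    x = np.asarray(reported, dtype=float)
    mpv = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - mpv))   # robust sigma-equivalent
    return mpv, mad

def rate_lab(value, mpv, mad):
    """Flag a laboratory whose result sits far from the consensus value."""
    z = abs(value - mpv) / mad
    return 'satisfactory' if z <= 2 else ('questionable' if z <= 3 else 'unsatisfactory')

results = [10.2, 9.8, 10.1, 10.4, 9.9, 13.7, 10.0]   # hypothetical lab values
mpv, mad = most_probable_value(results)
print(mpv, mad, rate_lab(13.7, mpv, mad))
```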

  16. Results of the U. S. Geological Survey's Analytical Evaluation Program for Standard Reference Samples Distributed in March 2002

    USGS Publications Warehouse

    Woodworth, M.T.; Conner, B.F.

    2002-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-169 (trace constituents), M-162 (major constituents), N-73 (nutrient constituents), N-74 (nutrient constituents), P-38 (low ionic-strength constituents), and Hg-34 (mercury) -- that were distributed in March 2002 to laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 93 laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.

  17. Results of the U.S. Geological Survey's analytical evaluation program for standard reference samples distributed in September 2002

    USGS Publications Warehouse

    Woodworth, Mark T.; Connor, Brooke F.

    2003-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-171 (trace constituents), M-164 (major constituents), N-75 (nutrient constituents), N-76 (nutrient constituents), P-39 (low ionic-strength constituents), and Hg-35 (mercury) -- that were distributed in September 2002 to laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 102 laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.

  18. Results of the U.S. Geological Survey's analytical evaluation program for standard reference samples distributed in September 2001

    USGS Publications Warehouse

    Woodworth, Mark T.; Connor, Brooke F.

    2002-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-167 (trace constituents), M-160 (major constituents), N-71 (nutrient constituents), N-72 (nutrient constituents), P-37 (low ionic-strength constituents), and Hg-33 (mercury) -- that were distributed in September 2001 to laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 98 laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.

  19. Results of the U.S. Geological Survey's Analytical Evaluation Program for Standard Reference Samples Distributed in March 2000

    USGS Publications Warehouse

    Farrar, Jerry W.; Copen, Ashley M.

    2000-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-161 (trace constituents), M-154 (major constituents), N-65 (nutrient constituents), N-66 (nutrient constituents), P-34 (low ionic strength constituents), and Hg-30 (mercury) -- that were distributed in March 2000 to 144 laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data that were received from 132 of the laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.

  20. Results of the U.S. Geological Survey's analytical evaluation program for standard reference samples distributed in October 1999

    USGS Publications Warehouse

    Farrar, T.W.

    2000-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-159 (trace constituents), M-152 (major constituents), N-63 (nutrient constituents), N-64 (nutrient constituents), P-33 (low ionic strength constituents), and Hg-29 (mercury) -- that were distributed in October 1999 to 149 laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data that were received from 131 of the laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.

  1. Results of the U.S. Geological Survey's analytical evaluation program for standard reference samples distributed in March 2003

    USGS Publications Warehouse

    Woodworth, Mark T.; Connor, Brooke F.

    2003-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-173 (trace constituents), M-166 (major constituents), N-77 (nutrient constituents), N-78 (nutrient constituents), P-40 (low ionic-strength constituents), and Hg-36 (mercury) -- that were distributed in March 2003 to laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 110 laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.

  2. Results of the U. S. Geological Survey's analytical evaluation program for standard reference samples distributed in October 2000

    USGS Publications Warehouse

    Connor, B.F.; Currier, J.P.; Woodworth, M.T.

    2001-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-163 (trace constituents), M-156 (major constituents), N-67 (nutrient constituents), N-68 (nutrient constituents), P-35 (low ionic strength constituents), and Hg-31 (mercury) -- that were distributed in October 2000 to 126 laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data that were received from 122 of the laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.

  3. Design of Biomedical Robots for Phenotype Prediction Problems

    PubMed Central

    deAndrés-Galiana, Enrique J.; Sonis, Stephen T.

    2016-01-01

    Genomics has been used with varying degrees of success in the context of drug discovery and in defining mechanisms of action for diseases like cancer and neurodegenerative and rare diseases in the quest for orphan drugs. To improve its utility, accuracy, and cost-effectiveness, optimization of analytical methods, especially those that translate to clinically relevant outcomes, is critical. Here we define a novel tool for genomic analysis termed a biomedical robot in order to improve phenotype prediction, identifying disease pathogenesis and significantly defining therapeutic targets. Biomedical robot analytics differ from historical methods in that they are based on melding feature selection methods and ensemble learning techniques. The biomedical robot mathematically exploits the structure of the uncertainty space of any classification problem conceived as an ill-posed optimization problem. Given a classifier, there exist different equivalent small-scale genetic signatures that provide similar predictive accuracies. We perform the sensitivity analysis to noise of the biomedical robot concept using synthetic microarrays perturbed by different kinds of noises in expression and class assignment. Finally, we show the application of this concept to the analysis of different diseases, inferring the pathways and the correlation networks. The final aim of a biomedical robot is to improve knowledge discovery and provide decision systems to optimize diagnosis, treatment, and prognosis. This analysis shows that the biomedical robots are robust against different kinds of noises and particularly to a wrong class assignment of the samples. Assessing the uncertainty that is inherent to any phenotype prediction problem is the right way to address this kind of problem. PMID:27347715

  4. Design of Biomedical Robots for Phenotype Prediction Problems.

    PubMed

    deAndrés-Galiana, Enrique J; Fernández-Martínez, Juan Luis; Sonis, Stephen T

    2016-08-01

    Genomics has been used with varying degrees of success in the context of drug discovery and in defining mechanisms of action for diseases like cancer and neurodegenerative and rare diseases in the quest for orphan drugs. To improve its utility, accuracy, and cost-effectiveness, optimization of analytical methods, especially those that translate to clinically relevant outcomes, is critical. Here we define a novel tool for genomic analysis termed a biomedical robot in order to improve phenotype prediction, identifying disease pathogenesis and significantly defining therapeutic targets. Biomedical robot analytics differ from historical methods in that they are based on melding feature selection methods and ensemble learning techniques. The biomedical robot mathematically exploits the structure of the uncertainty space of any classification problem conceived as an ill-posed optimization problem. Given a classifier, there exist different equivalent small-scale genetic signatures that provide similar predictive accuracies. We perform the sensitivity analysis to noise of the biomedical robot concept using synthetic microarrays perturbed by different kinds of noises in expression and class assignment. Finally, we show the application of this concept to the analysis of different diseases, inferring the pathways and the correlation networks. The final aim of a biomedical robot is to improve knowledge discovery and provide decision systems to optimize diagnosis, treatment, and prognosis. This analysis shows that the biomedical robots are robust against different kinds of noises and particularly to a wrong class assignment of the samples. Assessing the uncertainty that is inherent to any phenotype prediction problem is the right way to address this kind of problem.
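
    A minimal sketch of the melding of feature selection and ensemble learning that the abstract describes: many equivalent small-scale signatures, each selected on a bootstrap resample, voting on the phenotype. The pipeline below (scikit-learn, synthetic data) illustrates the idea and is not the authors' exact algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "microarray": many genes, few informative ones.
X, y = make_classification(n_samples=120, n_features=2000, n_informative=15,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Ensemble of small-scale signatures: each member selects a few top-ranked
# features from a bootstrap resample and fits a simple classifier.
rng = np.random.default_rng(0)
members = []
for _ in range(25):
    idx = rng.integers(0, len(X_tr), len(X_tr))          # bootstrap resample
    sel = SelectKBest(f_classif, k=10).fit(X_tr[idx], y_tr[idx])
    clf = LogisticRegression(max_iter=1000).fit(sel.transform(X_tr[idx]), y_tr[idx])
    members.append((sel, clf))

# Consensus phenotype prediction by majority vote over equivalent signatures.
votes = np.mean([clf.predict(sel.transform(X_te)) for sel, clf in members], axis=0)
print('accuracy:', np.mean((votes > 0.5) == y_te))
```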

  5. Mind the gaps - the epidemiology of poor-quality anti-malarials in the malarious world - analysis of the WorldWide Antimalarial Resistance Network database

    PubMed Central

    2014-01-01

    Background: Poor quality medicines threaten the lives of millions of patients and are alarmingly common in many parts of the world. Nevertheless, the global extent of the problem remains unknown. Accurate estimates of the epidemiology of poor-quality medicines are sparse and are influenced by sampling methodology and diverse chemical analysis techniques. In order to understand the existing data, the Antimalarial Quality Scientific Group at WWARN built a comprehensive, open-access, global database and linked Antimalarial Quality Surveyor, an online visualization tool. Analysis of the database is described here, together with the limitations of the studies and data reported and a discussion of their public health implications. Methods: The database collates customized summaries of 251 published anti-malarial quality reports in English, French and Spanish by time and location since 1946. It also includes information on assays to determine quality, sampling and medicine regulation. Results: No publicly available reports were found for 60.6% (63) of the 104 malaria-endemic countries. Of 9,348 anti-malarials sampled, 30.1% (2,813) failed chemical/packaging quality tests, with 39.3% classified as falsified, 2.3% as substandard and 58.3% as poor quality without evidence available to categorize them as either substandard or falsified. Only 32.3% of the reports explicitly described their definitions of medicine quality, and just 9.1% (855) of the samples were collected in the 4.6% (six) of surveys that used random sampling techniques. Packaging analysis was described in only 21.5% of publications, and up to twenty wrong active ingredients were found in falsified anti-malarials. Conclusions: There are severe neglected problems with anti-malarial quality, but there are important caveats to accurately estimating the prevalence and distribution of poor-quality anti-malarials. The lack of reports in many malaria-endemic areas, inadequate sampling techniques, and inadequate chemical analytical methods and instrumental procedures emphasize the need to interpret medicine quality results with caution. The available evidence demonstrates the need for more investment to improve both sampling and analytical methodology and to achieve consensus in defining different types of poor-quality medicines. PMID:24712972

  6. Discreet passive explosive detection through 2-sided waveguided fluorescence

    DOEpatents

    Harper, Ross James [Stillwater, OK; la Grone, Marcus [Cushing, OK; Fisher, Mark [Stillwater, OK

    2011-10-18

    The current invention provides a passive sampling device suitable for collecting and detecting the presence of target analytes. In particular, the passive sampling device is suitable for detecting nitro-aromatic compounds. The current invention further provides a passive sampling device reader suitable for determining the collection of target analytes. Additionally, the current invention provides methods for detecting target analytes using the passive sampling device and the passive sampling device reader.

  7. [Method of fused sample preparation after nitrify-determination of primary and minor elements in manganese ore by X-ray fluorescence spectrometry].

    PubMed

    Song, Yi; Guo, Fen; Gu, Song-hai

    2007-02-01

    Eight components, i.e., Mn, SiO2, Fe, P, Al2O3, CaO, MgO and S, in manganese ore were determined by X-ray fluorescence spectrometry. Because manganese ore samples release a lot of air bubbles during fusion, which affect the accuracy and reproducibility of the determination, nitric acid was added to the sample to destroy organic matter before fusion with the mixture flux at 1000 degrees C. This method solved the problem of flux splashing during fusion caused by the air bubbles released as organic matter volatilized, eliminated particle-size and mineralogical effects, and solved the problem of sulfur volatilization during fusion. Experiments for the selection of the sample preparation conditions, i.e., fusion flux, fusion time and volume of HNO3, were carried out. The matrix effects of absorption and enhancement were corrected with variable theoretical alpha coefficients to expand the range of determination. Moreover, precision and accuracy experiments were performed. In comparison with the chemical analysis method, the quantitative analytical results for each component are satisfactory. The method has proven rapid, precise and simple.
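
    The abstract names variable theoretical alpha coefficients without giving the correction model; the classical Lachance-Traill form C_i = R_i(1 + Σ_j α_ij C_j) is one such model and can be solved by fixed-point iteration, as sketched here with invented intensities and coefficients.

```python
import numpy as np

def lachance_traill(R, alpha, n_iter=50):
    """Fixed-point iteration of the Lachance-Traill correction
    C_i = R_i * (1 + sum_j alpha_ij * C_j), where the alpha coefficients
    correct absorption/enhancement matrix effects."""
    C = R.copy()
    for _ in range(n_iter):
        C = R * (1.0 + alpha @ C)
    return C

# Hypothetical intensities (relative to pure-element standards) and alphas:
R = np.array([0.30, 0.12, 0.05])
alpha = np.array([[0.0, 0.8, -0.2],
                  [0.5, 0.0, 0.3],
                  [-0.1, 0.4, 0.0]])
print(lachance_traill(R, alpha))   # corrected mass fractions
```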

  8. Review of analytical models to stream depletion induced by pumping: Guide to model selection

    NASA Astrophysics Data System (ADS)

    Huang, Ching-Sheng; Yang, Tao; Yeh, Hund-Der

    2018-06-01

    Stream depletion due to groundwater extraction by wells may impact aquatic ecosystems in streams, cause conflict over water rights, and contaminate water drawn by irrigation wells near polluted streams. A variety of studies have been devoted to the issue of stream depletion, but a fundamental framework for analytical modeling developed from the aquifer viewpoint has not yet been established. This review shows key differences among existing models of the stream depletion problem and provides some guidelines for choosing a proper analytical model for the problem of concern. We introduce commonly used models composed of flow equations, boundary conditions, well representations and stream treatments for confined, unconfined, and leaky aquifers. They are briefly evaluated and classified according to six categories: aquifer type, flow dimension, aquifer domain, stream representation, stream channel geometry, and well type. Finally, we recommend promising analytical approaches for solving stream depletion problems in realistic settings with aquifer heterogeneity and irregular stream channel geometry. Several unsolved stream depletion problems are also identified.
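
    Among the simplest members of the family of models reviewed here is the classical Glover solution for a fully penetrating stream in a homogeneous confined aquifer, which gives the depletion fraction in closed form; the numbers below are illustrative.

```python
import numpy as np
from scipy.special import erfc

def glover_depletion_fraction(d, S, T, t):
    """Classical Glover solution: fraction of the pumping rate drawn from a
    fully penetrating stream in a homogeneous confined aquifer.

    d : well-to-stream distance (m)
    S : storativity (-)
    T : transmissivity (m^2/d)
    t : time since pumping began (d)
    """
    return erfc(np.sqrt(d * d * S / (4.0 * T * t)))

# Illustrative numbers: depletion fraction after 30 and 365 days of pumping.
for t in (30.0, 365.0):
    print(t, glover_depletion_fraction(d=200.0, S=1e-3, T=500.0, t=t))
```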

  9. The limited relevance of analytical ethics to the problems of bioethics.

    PubMed

    Holmes, R L

    1990-04-01

    Philosophical ethics comprises metaethics, normative ethics and applied ethics. These have characteristically received analytic treatment by twentieth-century Anglo-American philosophy. But there has been disagreement over their interrelationship to one another and the relationship of analytical ethics to substantive morality--the making of moral judgments. I contend that the expertise philosophers have in either theoretical or applied ethics does not equip them to make sounder moral judgments on the problems of bioethics than nonphilosophers. One cannot "apply" theories like Kantianism or consequentialism to get solutions to practical moral problems unless one knows which theory is correct, and that is a metaethical question over which there is no consensus. On the other hand, to presume to be able to reach solutions through neutral analysis of problems is unavoidably to beg controversial theoretical issues in the process. Thus, while analytical ethics can play an important clarificatory role in bioethics, it can neither provide, nor substitute for, moral wisdom.

  10. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    NASA Astrophysics Data System (ADS)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

    Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well known example of such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
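
    For the zero-cut-out limit (a full cylinder), the scattered field reduces to the classical modal series, which makes a convenient cross-check; the sketch below evaluates that series for a sound-hard cylinder under unit plane-wave incidence. The PAC-MAN solution itself, with the cut-out sector, is not reproduced here, and the parameter values are illustrative.

```python
import numpy as np
from scipy.special import hankel1, jvp, h1vp

def scattered_field(k, a, r, phi, n_max=40):
    """Pressure scattered by a sound-hard full cylinder (radius a) hit by a
    unit plane wave: p_s = sum_n eps_n i^n a_n H1_n(kr) cos(n phi), with
    a_n = -J'_n(ka) / H1'_n(ka) from the hard-wall (Neumann) condition."""
    p = np.zeros_like(phi, dtype=complex)
    for n in range(n_max + 1):
        eps = 1.0 if n == 0 else 2.0
        an = -jvp(n, k * a) / h1vp(n, k * a)
        p += eps * (1j ** n) * an * hankel1(n, k * r) * np.cos(n * phi)
    return p

phi = np.linspace(0.0, 2.0 * np.pi, 181)
print(np.abs(scattered_field(k=5.0, a=1.0, r=3.0, phi=phi)).max())
```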

  11. New approach to resolve the humidity problem in VOC determination in outdoor air samples using solid adsorbent tubes followed by TD-GC-MS.

    PubMed

    Maceira, Alba; Vallecillos, Laura; Borrull, Francesc; Marcé, Rosa Maria

    2017-12-01

    This study describes the effect of humidity on the sampling process with adsorbent tubes followed by thermal desorption and gas chromatography-mass spectrometry (TD-GC-MS) for the determination of volatile organic compounds (VOCs) in air samples, and evaluates possible solutions to this problem. Two multi-sorbent bed tubes, Tenax TA/Carbograph 1TD and Carbotrap B/Carbopack X/Carboxen 569, were tested in order to evaluate their behaviour in the presence of environmental humidity. Humidity problems were demonstrated for the carbon-based tubes, whereas the Tenax-based tubes showed no influence. Silica gel, a molecular sieve and CaCl2 were tried out as drying-tube materials to remove air humidity, with the drying tube placed upstream of the sampling tube to prevent water from entering. The pre-tubes filled with 0.5 g of CaCl2 showed the best results with respect to their blanks, the analyte recoveries and their ability to remove ambient humidity. To avoid possible agglomeration of the CaCl2 during sampling in high-relative-humidity atmospheres, 0.1 g of diatomaceous earth was mixed with the desiccant agent. The applicability of the CaCl2 pre-tube as a drying agent prior to the Carbotrap B/Carbopack X/Carboxen 569 tubes was tested in urban and industrial locations by sampling air at high relative humidity. In addition, the results were compared with those obtained using Tenax TA/Carbograph 1TD tubes. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Does Early Algebraic Reasoning Differ as a Function of Students’ Difficulty with Calculations versus Word Problems?

    PubMed Central

    Powell, Sarah R.; Fuchs, Lynn S.

    2014-01-01

    According to national mathematics standards, algebra instruction should begin at kindergarten and continue through elementary school. Most often, teachers address algebra in the elementary grades with problems related to solving equations or understanding functions. With 789 2nd-grade students, we administered (a) measures of calculations and word problems in the fall and (b) an assessment of pre-algebraic reasoning, with items that assessed solving equations and functions, in the spring. Based on the calculation and word-problem measures, we placed 148 students into 1 of 4 difficulty status categories: typically performing, calculation difficulty, word-problem difficulty, or difficulty with calculations and word problems. Analyses of variance were conducted on the 148 students; path analytic mediation analyses were conducted on the larger sample of 789 students. Across analyses, results corroborated the finding that word-problem difficulty is more strongly associated with difficulty with pre-algebraic reasoning. As an indicator of later algebra difficulty, word-problem difficulty may be a more useful predictor than calculation difficulty, and students with word-problem difficulty may require a different level of algebraic reasoning intervention than students with calculation difficulty. PMID:25309044

  13. Limitless Analytic Elements

    NASA Astrophysics Data System (ADS)

    Strack, O. D. L.

    2018-02-01

    We present equations for new limitless analytic line elements. These elements possess a virtually unlimited number of degrees of freedom. We apply these new limitless analytic elements to head-specified boundaries and to problems with inhomogeneities in hydraulic conductivity. Applications of these new analytic elements to practical problems involving head-specified boundaries require the solution of a very large number of equations. To make the new elements useful in practice, an efficient iterative scheme is required. We present an improved version of the scheme presented by Bandilla et al. (2007), based on the application of Cauchy integrals. The limitless analytic elements are useful when modeling strings of elements, rivers for example, where local conditions are difficult to model, e.g., when a well is close to a river. The solution of such problems is facilitated by increasing the order of the elements to obtain a good solution. This makes it unnecessary to resort to dividing the element in question into many smaller elements to obtain a satisfactory solution.

  14. Validating Analytical Protocols to Determine Selected Pesticides and PCBs Using Routine Samples.

    PubMed

    Pindado Jiménez, Oscar; García Alonso, Susana; Pérez Pastor, Rosa María

    2017-01-01

    This study aims at providing recommendations concerning the validation of analytical protocols by using routine samples. It is intended to provide a case study on how to validate analytical methods in different environmental matrices. In order to analyze the selected compounds (pesticides and polychlorinated biphenyls) in two different environmental matrices, the current work developed and validated two analytical procedures by GC-MS. A description is given of the validation of the two protocols by the analysis of more than 30 samples of water and sediments collected over nine months. The present work also estimates the uncertainty associated with both analytical protocols. The uncertainty for the water matrix was evaluated through a conventional approach, whereas for the sediment matrix the estimation of proportional/constant bias was also included because of the matrix's inhomogeneity. Results for the sediment matrix are reliable, showing 25-35% analytical variability under intermediate conditions. The analytical methodology for the water matrix determines the selected compounds with acceptable recoveries, and the combined uncertainty ranges between 20 and 30%. The analysis of routine samples is rarely used to assess the trueness of novel analytical methods, and until now this approach had not been applied to organochlorine compounds in environmental matrices.
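    As an illustration of how such a combined uncertainty is typically assembled, the sketch below sums relative components in quadrature, in the conventional GUM-style manner; the component names and values are our assumptions for illustration, not the paper's data.

```python
# Hedged sketch: combining independent relative uncertainty components by
# root-sum-of-squares. Component values below are illustrative only.
import math

components = {
    "intermediate_precision": 0.15,  # relative, from routine samples
    "recovery_trueness":      0.12,  # relative, from spiked samples
    "calibration":            0.08,  # relative
}

u_combined = math.sqrt(sum(u**2 for u in components.values()))
U_expanded = 2 * u_combined  # coverage factor k = 2 (~95 % confidence)

print(f"combined relative uncertainty: {u_combined:.1%}")   # ~20.8 %
print(f"expanded uncertainty (k=2):    {U_expanded:.1%}")
```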

  15. Is a pre-analytical process for urinalysis required?

    PubMed

    Petit, Morgane; Beaudeux, Jean-Louis; Majoux, Sandrine; Hennequin, Carole

    2017-10-01

    For the reliable urinary measurement of calcium, phosphate and uric acid, a pre-analytical step of adding acid or base to urine samples at the laboratory is recommended in order to dissolve precipitated solutes. Several studies on different kinds of samples and analysers have previously shown that such a pre-analytical treatment is unnecessary. The objective was to study the necessity of pre-analytical treatment of urine for samples collected using the V-Monovette® (Sarstedt) system and measured on the Architect C16000 analyser (Abbott Diagnostics). Sixty urinary samples from hospitalized patients were selected (n=30 for calcium and phosphate, and n=30 for uric acid). After acidification of urine samples for the measurement of calcium and phosphate, and alkalinisation for the measurement of uric acid, differences between results before and after the pre-analytical treatment were compared with the acceptable limits recommended by the French Society of Clinical Biology (SFBC). No difference in concentration between before and after pre-analytical treatment of urine samples exceeded the SFBC acceptable limits for the measurement of calcium and uric acid. For phosphate, only one sample exceeded these limits, showing a result that was paradoxically lower after acidification. In conclusion, in agreement with previous studies, our results show that acidification or alkalinisation of urine samples, whether from 24-h collections or single voids, is not a pre-analytical necessity for the measurement of calcium, phosphate and uric acid.

  16. MODFLOW-2000 Ground-Water Model: User Guide to the Subsidence and Aquifer-System Compaction (SUB) Package

    USGS Publications Warehouse

    Hoffmann, Jörn; Leake, S.A.; Galloway, D.L.; Wilson, Alicia M.

    2003-01-01

    This report documents a computer program, the Subsidence and Aquifer-System Compaction (SUB) Package, to simulate aquifer-system compaction and land subsidence using the U.S. Geological Survey modular finite-difference ground-water flow model, MODFLOW-2000. The SUB Package simulates elastic (recoverable) compaction and expansion, and inelastic (permanent) compaction of compressible fine-grained beds (interbeds) within the aquifers. The deformation of the interbeds is caused by head or pore-pressure changes, and thus by changes in effective stress, within the interbeds. If the stress is less than the preconsolidation stress of the sediments, the deformation is elastic; if the stress is greater than the preconsolidation stress, the deformation is inelastic. The propagation of head changes within the interbeds is defined by a transient, one-dimensional (vertical) diffusion equation. This equation accounts for delayed release of water from storage or uptake of water into storage in the interbeds. Properties that control the timing of the storage changes are vertical hydraulic diffusivity and interbed thickness. The SUB Package supersedes the Interbed Storage Package (IBS1) for MODFLOW, which assumes that water is released from or taken into storage with changes in head in the aquifer within a single model time step and, therefore, can be reasonably used to simulate only thin interbeds. The SUB Package relaxes this assumption and can be used to simulate time-dependent drainage and compaction of thick interbeds and confining units. The time-dependent drainage can be turned off, in which case the SUB Package gives results identical to those from IBS1. Three sample problems illustrate the usefulness of the SUB Package. One sample problem verifies that the package works correctly. This sample problem simulates the drainage of a thick interbed in response to a step change in head in the adjacent aquifer and closely matches the analytical solution. A second sample problem illustrates the effects of seasonally varying discharge and recharge to an aquifer system with a thick interbed. A third sample problem simulates a multilayered regional ground-water basin. Model input files for the third sample problem are included in the appendix.
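    For orientation, here is a minimal sketch of the interbed equation described above: explicit finite differences for the transient one-dimensional vertical diffusion of head in a doubly draining interbed after a step head change, checked against the classical Fourier-series solution. Parameter values are illustrative and are not MODFLOW SUB input.

```python
# Minimal sketch of the interbed equation: dh/dt = D * d2h/dz2 for a doubly
# draining interbed subjected to a step head change at both faces, compared
# against the classical Fourier-series solution. Illustrative parameters.
import numpy as np

D, b = 1e-6, 5.0                 # diffusivity [m^2/s], interbed thickness [m]
h0, h1 = 10.0, 8.0               # initial head, new boundary head [m]
nz = 51
z = np.linspace(0.0, b, nz)
dz = z[1] - z[0]
dt = 0.4 * dz**2 / D             # satisfies the explicit stability limit

h = np.full(nz, h0)
h[0] = h[-1] = h1
t_end = 5e6                      # [s]
for _ in range(int(t_end / dt)):
    h[1:-1] += D * dt / dz**2 * (h[2:] - 2*h[1:-1] + h[:-2])

# Fourier-series solution at t_end (odd terms only)
h_exact = np.full(nz, h1)
for n in range(1, 200, 2):
    h_exact += (h0 - h1) * 4/(n*np.pi) * np.sin(n*np.pi*z/b) \
               * np.exp(-(n*np.pi/b)**2 * D * t_end)

print("max |numerical - analytical| =", np.abs(h - h_exact).max())
```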

  17. Porous protective solid phase micro-extractor sheath

    DOEpatents

    Andresen, Brian D.; Randich, Erik

    2005-03-29

    A porous protective sheath for active extraction media used in solid phase microextraction (SPME). The sheath permits exposure of the media to the environment without the necessity of extending a fragile coated fiber from a protective tube or needle. Subsequently, the sheath can pierce and seal with GC-MS septa, allowing direct injection of samples into inlet ports of analytical equipment. Use of the porous protective sheath, within which the active extraction media is contained, mitigates the problems of: 1) fiber breakage while the fiber is extended during sampling; 2) loss of the active media coating caused by physical contact of the bare fiber with the sampling environment; and 3) coating slough-off during fiber extension and retraction operations caused by rubbing action between the fiber and the protective needle or tube.

  18. Discreet passive explosive detection through 2-sided wave guided fluorescence

    DOEpatents

    Harper, Ross James; la Grone, Marcus; Fisher, Mark

    2012-10-16

    The current invention provides a passive sampling device suitable for collecting and detecting the presence of target analytes. In particular, the passive sampling device is suitable for detecting nitro-aromatic compounds. The current invention further provides a passive sampling device reader suitable for determining the collection of target analytes. Additionally, the current invention provides methods for detecting target analytes using the passive sampling device and the passive sampling device reader.

  19. Hanford analytical sample projections FY 1998--FY 2002

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joyce, S.M.

    1998-02-12

    Analytical Services projections are compiled for the Hanford site based on inputs from the major programs for the years 1998 through 2002. Projections are categorized by radiation level, protocol, sample matrix and program. Analysis requirements are also presented. This document summarizes the Hanford sample projections for fiscal years 1998 to 2002. Sample projections are based on inputs submitted to Analytical Services covering Environmental Restoration, Tank Waste Remediation Systems (TWRS), Solid Waste, Liquid Effluents, Spent Nuclear Fuels, Transition Projects, Site Monitoring, Industrial Hygiene, Analytical Services and miscellaneous Hanford support activities. In addition, details on laboratory-scale technology (development) work, Sample Management, and Data Management activities are included. This information will be used by Hanford Analytical Services (HAS) and the Sample Management Working Group (SMWG) to assure that laboratories and resources are available and effectively utilized to meet these documented needs.

  20. Complex mixture analysis by photoionization mass spectrometry with a VUV hydrogen laser source

    NASA Astrophysics Data System (ADS)

    Huth, T. C.; Denton, M. B.

    1985-12-01

    Trace organic analysis in complex matrices presents one of the most challenging problems in analytical mass spectrometry. When ionization is accomplished non-selectively using electron impact, extensive sample clean-up is often necessary in order to isolate the analyte from the matrix. Sample preparation can be greatly reduced when the VUV H2 laser is used to selectively photoionize only a small fraction of the compounds introduced into the ion source. This device produces only parent ions, and only for compounds whose ionization potentials lie below the threshold set by the photon energy of 7.8 eV. The only observed interference arises from electron impact ionization when scattered laser radiation interacts with metal surfaces, producing electrons that are then accelerated by potential fields inside the source. These can be suppressed to levels acceptable for practical analysis through proper instrumental design. Results are presented which demonstrate the ability of this ion source to discriminate against interfering matrix components in simple extracts from a variety of complex real-world matrices, such as brewed coffee, beer, and urine.

  1. Solvent-free MALDI-MS for the analysis of a membrane protein via the mini ball mill approach: case study of bacteriorhodopsin.

    PubMed

    Trimpin, Sarah; Deinzer, Max L

    2007-01-01

    A mini ball mill (MBM) solvent-free matrix-assisted laser desorption/ionization (MALDI) mass spectrometry (MS) method allows for the analysis of bacteriorhodopsin (BR), an integral membrane protein that previously presented special analytical problems. For well-defined signals in the molecular ion region of the analytes, a desalting procedure of the MBM sample directly on the MALDI target plate was used to reduce adduction by sodium and other cations that are normally attendant with hydrophobic peptides and proteins as a result of the sample preparation procedure. Mass analysis of the intact hydrophobic protein and the few hydrophobic and hydrophilic tryptic peptides available in the digest is demonstrated with this robust new approach. MS and MS/MS spectra of BR tryptic peptides and intact protein were generally superior to the traditional solvent-based method using the desalted "dry" MALDI preparation procedure. The solvent-free method expands the range of peptides that can be effectively analyzed by MALDI-MS to those that are hydrophobic and solubility-limited.

  2. Spectral multivariate calibration without laboratory prepared or determined reference analyte values.

    PubMed

    Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H

    2013-02-05

    An essential part to calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory prepared or determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can be from different sources including pure component interference samples, blanks, and constant analyte samples. The approach is also applicable to calibration maintenance when the analyte pure component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
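    A rough numerical sketch of the idea as we read the abstract -- not the authors' published PCTR algorithm: the regression vector is asked to give unit response to the analyte pure-component spectrum while a Tikhonov term shrinks it and a second penalty pushes it toward orthogonality with the nonanalyte spectra; both weights are free trade-off parameters and the spectra here are synthetic.

```python
# Rough sketch of the idea described in the abstract (not the published
# PCTR algorithm): find a regression vector b with unit response to the
# analyte pure-component spectrum k, shrunk by a ridge term (weight eta)
# and pushed toward orthogonality with nonanalyte spectra N (weight lam).
import numpy as np

rng = np.random.default_rng(0)
p = 100                                           # number of wavelengths
k = np.exp(-0.5*((np.arange(p) - 40)/6.0)**2)     # synthetic analyte spectrum
N = rng.normal(size=(5, p)) * 0.2                 # synthetic nonanalyte spectra

lam, eta = 10.0, 0.1                              # trade-off knobs
A = np.vstack([k[None, :], lam*N, eta*np.eye(p)])
y = np.concatenate([[1.0], np.zeros(N.shape[0]), np.zeros(p)])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

print("response to analyte k.b =", k @ b)         # ~1 for a useful model
print("max |response| to nonanalytes =", np.abs(N @ b).max())
```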

  3. Measuring myokines with cardiovascular functions: pre-analytical variables affecting the analytical output.

    PubMed

    Lombardi, Giovanni; Sansoni, Veronica; Banfi, Giuseppe

    2017-08-01

    In the last few years, a growing number of molecules have been associated with an endocrine function of the skeletal muscle. Circulating myokine levels, in turn, have been associated with several pathophysiological conditions, including cardiovascular ones. However, data from different studies are often not completely comparable or even discordant. This is due, at least in part, to the whole set of circumstances related to the preparation of the patient prior to blood sampling, the blood sampling procedure itself, and sample processing and/or storage. This entire process constitutes the pre-analytical phase. The importance of the pre-analytical phase is often overlooked, yet in routine diagnostics about 70% of errors arise in this phase. Moreover, errors made during the pre-analytical phase are carried over into the analytical phase and affect the final output. In research, for example, when samples are collected over a long period and by different laboratories, standardized procedures for sample collection and correct procedures for sample storage are essential. In this review, we discuss the pre-analytical variables potentially affecting the measurement of myokines with cardiovascular functions.

  4. Assessment regarding the use of the computer aided analytical models in the calculus of the general strength of a ship hull

    NASA Astrophysics Data System (ADS)

    Hreniuc, V.; Hreniuc, A.; Pescaru, A.

    2017-08-01

    Solving a general strength problem of a ship hull may be done using analytical approaches, which are useful for deducing the distribution of buoyancy forces, the distribution of weight forces along the hull, and the geometrical characteristics of the sections. These data are used to draw the free-body diagrams and to compute the stresses. General strength problems require a large amount of calculation, so it is worth examining how a computer may be used to solve such problems. Using computer programming, an engineer may conceive software instruments based on analytical approaches. Before developing the computer code, however, the research topic must be thoroughly analysed, thereby reaching a meta-level understanding of the problem. The following stage is to conceive an appropriate development strategy for the original software instruments useful for the rapid development of computer-aided analytical models. The geometrical characteristics of the sections may be computed using a Boolean algebra that operates with 'simple' geometrical shapes. By 'simple' we mean shapes for which direct calculation formulas exist. The set of 'simple' shapes also includes geometrical entities bounded by curves approximated as spline functions or as polygons. To conclude, computer programming offers the necessary support for solving general strength problems of ship hulls using analytical methods.

  5. Statistical methods for astronomical data with upper limits. I - Univariate distributions

    NASA Technical Reports Server (NTRS)

    Feigelson, E. D.; Nelson, P. I.

    1985-01-01

    The statistical treatment of univariate censored data is discussed. A heuristic derivation of the Kaplan-Meier maximum-likelihood estimator from first principles is presented which results in an expression amenable to analytic error analysis. Methods for comparing two or more censored samples are given along with simple computational examples, stressing the fact that most astronomical problems involve upper limits while the standard mathematical methods require lower limits. The application of univariate survival analysis to six data sets in the recent astrophysical literature is described, and various aspects of the use of survival analysis in astronomy, such as the limitations of various two-sample tests and the role of parametric modelling, are discussed.
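    A minimal sketch of the product-limit (Kaplan-Meier) estimator for right-censored data follows; as the abstract notes, astronomical upper limits are censored in the opposite sense, so one common trick (used below) is to negate the values, estimate, and flip back. Tie handling and error bars are omitted, and the data are invented for illustration.

```python
# Minimal Kaplan-Meier product-limit estimator for right-censored data.
# Upper limits (left-censoring) are handled by negating values first.
import numpy as np

def kaplan_meier(times, observed):
    """times: values; observed: True = detection, False = censored.
    Returns event times and the survival estimate just after each event."""
    order = np.argsort(times)
    t, d = np.asarray(times)[order], np.asarray(observed)[order]
    n = len(t)
    surv, out_t, out_s = 1.0, [], []
    for i in range(n):
        at_risk = n - i
        if d[i]:                            # an uncensored event
            surv *= 1.0 - 1.0/at_risk
            out_t.append(t[i]); out_s.append(surv)
    return np.array(out_t), np.array(out_s)

# Detections and upper limits (upper limit => true value below the limit)
values   = np.array([2.0, 3.5, 1.2, 4.0, 2.8])
detected = np.array([True, True, False, True, False])  # False = upper limit
t, s = kaplan_meier(-values, detected)     # negate to right-censor
for ti, si in zip(-t, s):                  # flip back to the original scale
    print(f"P(X < {ti}) ~ {si:.3f}")
```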

  6. Quality-assurance results for routine water analysis in US Geological Survey laboratories, water year 1991

    USGS Publications Warehouse

    Maloney, T.J.; Ludtke, A.S.; Krizman, T.L.

    1994-01-01

    The U.S. Geological Survey operates a quality-assurance program based on the analyses of reference samples for the National Water Quality Laboratory in Arvada, Colorado, and the Quality of Water Service Unit in Ocala, Florida. Reference samples containing selected inorganic, nutrient, and low ionic-strength constituents are prepared and disguised as routine samples. The program goal is to determine precision and bias for as many analytical methods offered by the participating laboratories as possible. The samples typically are submitted at a rate of approximately 5 percent of the annual environmental sample load for each constituent. The samples are distributed to the laboratories throughout the year. Analytical data for these reference samples reflect the quality of environmental sample data produced by the laboratories because the samples are processed in the same manner for all steps from sample login through data release. The results are stored permanently in the National Water Data Storage and Retrieval System. During water year 1991, 86 analytical procedures were evaluated at the National Water Quality Laboratory and 37 analytical procedures were evaluated at the Quality of Water Service Unit. An overall evaluation of the inorganic (major ion and trace metal) constituent data for water year 1991 indicated analytical imprecision in the National Water Quality Laboratory for 5 of 67 analytical procedures: aluminum (whole-water recoverable, atomic emission spectrometric, direct-current plasma); calcium (atomic emission spectrometric, direct); fluoride (ion-exchange chromatographic); iron (whole-water recoverable, atomic absorption spectrometric, direct); and sulfate (ion-exchange chromatographic). The results for 11 of 67 analytical procedures had positive or negative bias during water year 1991. Analytical imprecision was indicated in the determination of two of the five National Water Quality Laboratory nutrient constituents: orthophosphate as phosphorus and phosphorus. A negative or positive bias condition was indicated in three of five nutrient constituents. There was acceptable precision and no indication of bias for the 14 low ionic-strength analytical procedures tested in the National Water Quality Laboratory program and for the 32 inorganic and 5 nutrient analytical procedures tested in the Quality of Water Service Unit during water year 1991.

  7. Analytical techniques for steroid estrogens in water samples - A review.

    PubMed

    Fang, Ting Yien; Praveena, Sarva Mangala; deBurbure, Claire; Aris, Ahmad Zaharin; Ismail, Sharifah Norkhadijah Syed; Rasdi, Irniza

    2016-12-01

    In recent years, environmental concerns over ultra-trace levels of steroid estrogens in water samples have increased because of their adverse effects on human and animal life. Special attention to the analytical techniques used to quantify steroid estrogens in water samples is therefore increasingly important. The objective of this review was to present an overview of both instrumental and non-instrumental analytical techniques available for the determination of steroid estrogens in water samples, evidencing their respective potential advantages and limitations using the Need, Approach, Benefit, and Competition (NABC) approach. The analytical techniques highlighted in this review were instrumental and non-instrumental analytical techniques, namely gas chromatography mass spectrometry (GC-MS), liquid chromatography mass spectrometry (LC-MS), enzyme-linked immunosorbent assay (ELISA), radioimmunoassay (RIA), yeast estrogen screen (YES) assay, and human breast cancer cell line proliferation (E-screen) assay. The complexity of water samples and their low estrogenic concentrations necessitates the use of highly sensitive instrumental analytical techniques (GC-MS and LC-MS) and non-instrumental analytical techniques (ELISA, RIA, YES assay and E-screen assay) to quantify steroid estrogens. Both instrumental and non-instrumental analytical techniques have their own advantages and limitations. However, the non-instrumental ELISA technique, thanks to its lower detection limit, simplicity, rapidity and cost-effectiveness, currently appears to be the most reliable for determining steroid estrogens in water samples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Analytical study of sandwich structures using Euler-Bernoulli beam equation

    NASA Astrophysics Data System (ADS)

    Xue, Hui; Khawaja, H.

    2017-01-01

    This paper presents an analytical study of sandwich structures. In this study, the Euler-Bernoulli beam equation is solved analytically for a four-point bending problem. Appropriate initial and boundary conditions are specified to enclose the problem. In addition, the balance coefficient is calculated and the Rule of Mixtures is applied. The focus of this study is to determine the effective material properties and geometric features such as the moment of inertia of a sandwich beam. The effective parameters help in the development of a generic analytical correlation for complex sandwich structures from the perspective of four-point bending calculations. The main outcomes of these analytical calculations are the lateral displacements and longitudinal stresses for each particular material in the sandwich structure.
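    To illustrate the kind of calculation described, the sketch below combines a rule-of-mixtures flexural rigidity for a symmetric sandwich section (face sheets shifted via the parallel-axis theorem) with the textbook Euler-Bernoulli midspan deflection for four-point bending, delta = P*a*(3L^2 - 4a^2)/(24*EI), for loads P applied at distance a from each support. Dimensions and moduli are illustrative values of ours, not the paper's.

```python
# Hedged sketch, Euler-Bernoulli level: effective flexural rigidity of a
# symmetric sandwich section and the textbook midspan deflection in
# four-point bending. All numerical values are illustrative.
b  = 0.05        # beam width [m]
tf = 0.002       # face-sheet thickness [m]
tc = 0.020       # core thickness [m]
Ef = 70e9        # face-sheet modulus [Pa] (e.g. aluminium)
Ec = 0.1e9       # core modulus [Pa] (e.g. polymer foam)

# Effective EI: core about its own centroid + two faces shifted by d/2
d = tc + tf                                   # distance between face centroids
EI = Ec * b*tc**3/12 \
   + 2 * Ef * (b*tf**3/12 + b*tf*(d/2)**2)

P, a, L = 500.0, 0.10, 0.40                   # load [N], load position, span [m]
delta_mid = P * a * (3*L**2 - 4*a**2) / (24 * EI)
print(f"EI_eff = {EI:.1f} N*m^2, midspan deflection = {delta_mid*1e3:.3f} mm")
```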

  9. Approximate analytical description of the elastic strain field due to an inclusion in a continuous medium with cubic anisotropy

    NASA Astrophysics Data System (ADS)

    Nenashev, A. V.; Koshkarev, A. A.; Dvurechenskii, A. V.

    2018-03-01

    We suggest an approach to the analytical calculation of the strain distribution due to an inclusion in elastically anisotropic media for the case of cubic anisotropy. The idea consists in the approximate reduction of the anisotropic problem to a (simpler) isotropic problem. This gives, for typical semiconductors, an improvement in accuracy by an order of magnitude, compared to the isotropic approximation. Our method allows using, in the case of elastically anisotropic media, analytical solutions obtained for isotropic media only, such as analytical formulas for the strain due to polyhedral inclusions. The present work substantially extends the applicability of analytical results, making them more suitable for describing real systems, such as epitaxial quantum dots.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Heinrich, R.R.; Jensen, K.J.

    The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of technical support services to the scientific and engineering programs at ANL. In addition, ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques. The purpose of this report is to summarize the technical and administrative activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year 1985 (October 1984 through September 1985). This is the second annual report for the ACL. 4 figs., 1 tab.

  11. Sampling hazelnuts for aflatoxin: uncertainty associated with sampling, sample preparation, and analysis.

    PubMed

    Ozay, Guner; Seyhan, Ferda; Yilmaz, Aysun; Whitaker, Thomas B; Slate, Andrew B; Giesbrecht, Francis

    2006-01-01

    The variability associated with the aflatoxin test procedure used to estimate aflatoxin levels in bulk shipments of hazelnuts was investigated. Sixteen 10 kg samples of shelled hazelnuts were taken from each of 20 lots suspected of aflatoxin contamination. The total variance associated with testing shelled hazelnuts was estimated and partitioned into sampling, sample preparation, and analytical variance components. Each variance component increased as the aflatoxin concentration (either B1 or total) increased. Using regression analysis, mathematical expressions were developed to model the relationship between aflatoxin concentration and the total, sampling, sample preparation, and analytical variances. These expressions were used to estimate the variance for any sample size, subsample size, and number of analyses at a specific aflatoxin concentration. The sampling, sample preparation, and analytical variances associated with estimating aflatoxin in a hazelnut lot at a total aflatoxin level of 10 ng/g, using a 10 kg sample, a 50 g subsample, dry comminution with a Robot Coupe mill, and a high-performance liquid chromatographic analytical method, are 174.40, 0.74, and 0.27, respectively. The sampling, sample preparation, and analytical steps of the aflatoxin test procedure accounted for 99.4, 0.4, and 0.2% of the total variability, respectively.
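    The quoted numbers can be checked directly, and the practical use of such variance expressions illustrated; the inverse scaling of each component with sample mass in the sketch below is the usual working assumption in this line of study, applied here by us for illustration.

```python
# Arithmetic check of the variances quoted in the abstract for 10 ng/g total
# aflatoxin (10 kg sample, 50 g subsample, one analysis), plus the common
# assumption -- ours, for illustration -- that the sampling component scales
# inversely with sample mass.
s2_sampling, s2_prep, s2_analysis = 174.40, 0.74, 0.27

total = s2_sampling + s2_prep + s2_analysis
for name, s2 in [("sampling", s2_sampling), ("prep", s2_prep),
                 ("analysis", s2_analysis)]:
    print(f"{name:9s}: {s2:7.2f}  ({100*s2/total:.1f} % of total)")

# Doubling the sample to 20 kg (others unchanged), under inverse scaling:
total_20kg = s2_sampling * (10/20) + s2_prep + s2_analysis
print(f"total variance, 10 kg sample: {total:.2f}")
print(f"total variance, 20 kg sample: {total_20kg:.2f}")
```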

  12. Tank vapor characterization project. Headspace vapor characterization of Hanford waste tank 241-BY-108: Second comparison study results from samples collected on 3/28/96

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, B.L.; Pool, K.H.; Evans, J.C.

    1997-01-01

    This report describes the analytical results of vapor samples taken from the headspace of waste storage tank 241-BY-108 (Tank BY-108) at the Hanford Site in Washington State. This report is the second in a series comparing vapor sampling of the tank headspace using the Vapor Sampling System (VSS) and the In Situ Vapor Sampling (ISVS) system without high-efficiency particulate air (HEPA) prefiltration. The results include air concentrations of water (H{sub 2}O) and ammonia (NH{sub 3}), permanent gases, total non-methane organic compounds (TO-12), and individual organic analytes collected in SUMMA{trademark} canisters and on triple sorbent traps (TSTs). Samples were collected by Westinghouse Hanford Company (WHC) and analyzed by Pacific Northwest National Laboratory (PNNL). Analyses were performed by the Vapor Analytical Laboratory (VAL) at PNNL. Analyte concentrations were based on analytical results and, where appropriate, sample volume measurements provided by WHC.

  13. Analytical Chemistry Laboratory. Progress report for FY 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Boparai, A.S.; Bowers, D.L.

    The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 1996. This annual report is the thirteenth for the ACL. It describes effort on continuing and new projects and contributions of the ACL staff to various programs at ANL. The ACL operates in the ANL system as a full-cost-recovery service center, but has a mission that includes a complementary research and development component: The Analytical Chemistry Laboratory will provide high-quality, cost-effective chemical analysis and related technical support to solve research problems of our clients (Argonne National Laboratory, the Department of Energy, and others) and will conduct world-class research and development in analytical chemistry and its applications. Because of the diversity of research and development work at ANL, the ACL handles a wide range of analytical chemistry problems. Some routine or standard analyses are done, but the ACL usually works with commercial laboratories if our clients require high-volume, production-type analyses. It is common for ANL programs to generate unique problems that require significant development of methods and adaptation of techniques to obtain useful analytical data. Thus, much of the support work done by the ACL is very similar to our applied analytical chemistry research.

  14. NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR METALS IN SPIKES

    EPA Science Inventory

    This data set includes analytical results for measurements of metals in 49 field control samples (spikes). Measurements were made for up to 11 metals in samples of water, blood, and urine. Field controls were used to assess recovery of target analytes from a sample media during s...

  15. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  16. Distributed event-triggered consensus strategy for multi-agent systems under limited resources

    NASA Astrophysics Data System (ADS)

    Noorbakhsh, S. Mohammad; Ghaisari, Jafar

    2016-01-01

    The paper proposes a distributed structure to address an event-triggered consensus problem for multi-agent systems, aiming at a concurrent reduction in inter-agent communication, control input actuation and energy consumption. Following the proposed approach, asymptotic convergence of all agents to consensus requires only that each agent broadcasts its sampled state to its neighbours and updates its control input at its own triggering instants, unlike the existing related works. This decreases network bandwidth usage, sensor energy consumption, computation resource usage and actuator wear. As a result, it facilitates the implementation of the proposed consensus protocol in real-world applications with limited resources. The stability of the closed-loop system under the event-based protocol is proved analytically. Numerical results are presented which confirm the analytical discussion of the effectiveness of the proposed design.
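    A toy sketch of the broadcast idea follows (our simplification: single-integrator agents on a ring with a fixed triggering threshold and Euler integration; the paper's actual protocol and stability conditions are more elaborate).

```python
# Toy sketch, not the paper's protocol: each agent i broadcasts its state
# only when |x_i - xhat_i| exceeds a fixed threshold; all agents integrate
# the consensus law using the last-broadcast states xhat.
import numpy as np

A = np.array([[0, 1, 0, 1],      # ring adjacency, 4 agents
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
x    = np.array([1.0, -2.0, 3.0, 0.5])
xhat = x.copy()                  # last-broadcast states (known to all)
dt, thr, broadcasts = 0.01, 0.05, 0

for _ in range(3000):
    trig = np.abs(x - xhat) > thr            # event-triggering condition
    xhat[trig] = x[trig]                     # only triggered agents broadcast
    broadcasts += int(trig.sum())
    u = -(A * (xhat[:, None] - xhat[None, :])).sum(axis=1)
    x = x + dt * u

print("final states:", np.round(x, 3))       # near the initial average 0.625
print("broadcasts:", broadcasts, "vs", 4*3000, "for periodic sampling")
```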

  17. Validation of an isotope dilution, ICP-MS method based on internal mass bias correction for the determination of trace concentrations of Hg in sediment cores.

    PubMed

    Ciceri, E; Recchia, S; Dossi, C; Yang, L; Sturgeon, R E

    2008-01-15

    The development and validation of a method for the determination of mercury in sediments using a sector field inductively coupled plasma mass spectrometer (SF-ICP-MS) for detection is described. The utilization of isotope dilution (ID) calibration is shown to solve analytical problems related to matrix composition. Mass bias is corrected using an internal mass bias correction technique, validated against the traditional standard bracketing method. The overall analytical protocol is validated against NRCC PACS-2 marine sediment CRM. The estimated limit of detection is 12 ng/g. The proposed procedure was applied to the analysis of a real sediment core sampled to a depth of 160 m in Lake Como, where Hg concentrations ranged from 66 to 750 ng/g.
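    For orientation, here is a generic single-spike isotope-dilution computation (the blend-ratio equation and its rearrangement are standard; the spike abundances and measured ratio are illustrative numbers of ours, not the paper's calibration data).

```python
# Generic single-spike isotope-dilution sketch (illustrative numbers, not
# the paper's data). For two isotopes a, b the measured blend ratio is
#   R_m = (Nx*Ax + Ny*Ay) / (Nx*Bx + Ny*By),
# which is solved for the analyte amount Nx given the spike amount Ny.
# For Hg one might take a = 202Hg, b = 200Hg with a 200Hg-enriched spike.
def idms_amount(Ny, Ax, Bx, Ay, By, Rm):
    """Moles of analyte Nx from spike moles Ny and measured ratio Rm = a/b."""
    return Ny * (Rm * By - Ay) / (Ax - Rm * Bx)

Ax, Bx = 0.2986, 0.2310          # natural 202Hg, 200Hg abundances
Ay, By = 0.02, 0.96              # hypothetical enriched-spike abundances
Nx = idms_amount(Ny=1.0e-9, Ax=Ax, Bx=Bx, Ay=Ay, By=By, Rm=0.80)
print(f"analyte amount: {Nx:.3e} mol")
```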

  18. Efficient alignment-free DNA barcode analytics

    PubMed Central

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-01-01

    Background In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and the identification of critical within-barcode loci that distinguish barcodes of different sample groups. Results New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species, with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes -- important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running-time improvements over the state-of-the-art methods. Conclusion Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding. PMID:19900305
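    A minimal sketch of the fixed-length spectrum (k-mer count) representation and a similarity between two barcode sequences is given below; the paper's actual feature sets and classifiers differ, and the sequences are invented.

```python
# Minimal sketch of the "spectrum" (k-mer count) representation and a
# cosine similarity between two barcode sequences.
from collections import Counter
from itertools import product
import math

def spectrum(seq, k=3, alphabet="ACGT"):
    counts = Counter(seq[i:i+k] for i in range(len(seq) - k + 1))
    return [counts[''.join(kmer)] for kmer in product(alphabet, repeat=k)]

def cosine(u, v):
    dot = sum(a*b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a*a for a in u)) * math.sqrt(sum(b*b for b in v)))

s1 = "ACGTACGTTGCAACGT"
s2 = "ACGTACGATGCAACGA"
print(f"3-mer spectrum similarity: {cosine(spectrum(s1), spectrum(s2)):.3f}")
```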

  19. 6S Return Samples: Assessment of Air Quality in the International Space Station (ISS) Based on Solid Sorbent Air Sampler (SSAS) and Formaldehyde Monitoring Kit (FMK) Analyses

    NASA Technical Reports Server (NTRS)

    James, John T.

    2004-01-01

    The toxicological assessments of SSAS and FMK analytical results are reported. Analytical methods have not changed from earlier reports. Surrogate standard recoveries from the SSAS tubes were 66-76% for 13C-acetone, 85-96% for fluorobenzene, and 73-89% for chlorobenzene. Post-flight flows were far below pre-flight flows, and an investigation of the problem revealed that the reduced flow was caused by a leak at the interface of the pump inlet tube and the pump head. This resulted in degradation of pump efficiency. Further investigation showed that the problem occurred before the SSAS was operated on orbit and that use of the post-flight flows yielded consistent and useful results. Recoveries from formaldehyde control badges were 86 to 104%. The two general criteria used to assess air quality are the total non-methane volatile organic hydrocarbons (NMVOCs) and the total T-value (minus the CO2 and formaldehyde contributions). The T-values will not be reported for these data due to the flow anomaly. Control of atmospheric alcohols is important to the water recovery system engineers, hence total alcohols (including acetone) are also shown for each sample. Octafluoropropane (OFP) is not efficiently trapped by the sorbents used in the SSAS. Because formaldehyde is quantified from sorbent badges, its concentration is also listed separately. These five indices of air quality are summarized.

  20. Irregular analytical errors in diagnostic testing - a novel concept.

    PubMed

    Vogeser, Michael; Seger, Christoph

    2018-02-23

    In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control. The quality control measurements act only at the batch level. Quantitative or qualitative data derived for many effects and interferences associated with an individual diagnostic sample can compromise any analyte. It is obvious that a quality-control-sample-based approach to quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample an irregular analytical error is defined as an inaccuracy (which is the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by the measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or an individual single-sample-associated processing error in the analytical process. Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to an incorrect pipetting volume caused by air bubbles in a sample), which can both lead to inaccurate results and risks for patients.
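    The definition lends itself to a simple decision rule; the sketch below applies the stated linear combination of routine-assay measurement uncertainty and method bias, with a coverage factor k that is our choice and with illustrative numbers.

```python
# Hedged sketch of the definition above: flag a result as an "irregular
# analytical error" when its deviation from the reference measurement
# procedure exceeds the linear combination of the routine assay's standard
# measurement uncertainty (scaled by a coverage factor k, our choice) and
# the method bias. All numbers are illustrative.
def is_irregular(result, reference, u_routine, bias, k=3.0):
    """result/reference in analyte units; u_routine = routine-assay standard
    measurement uncertainty; bias = method bias vs. the reference system."""
    allowed = k * u_routine + abs(bias)
    return abs(result - reference) > allowed

print(is_irregular(result=58.0, reference=40.0, u_routine=3.0, bias=2.0))  # True
```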

  1. Quantitative trace analysis of polyfluorinated alkyl substances (PFAS) in ambient air samples from Mace Head (Ireland): A method intercomparison

    NASA Astrophysics Data System (ADS)

    Jahnke, Annika; Barber, Jonathan L.; Jones, Kevin C.; Temme, Christian

    A method intercomparison study of analytical methods for the determination of neutral, volatile polyfluorinated alkyl substances (PFAS) was carried out in March 2006. Environmental air samples were collected in triplicate at the European background site Mace Head on the west coast of Ireland, a site dominated by 'clean' westerly winds coming across the Atlantic. Extraction and analysis were performed at two laboratories active in PFAS research using their in-house methods. Airborne polyfluorinated telomer alcohols (FTOHs), fluorooctane sulfonamides and sulfonamidoethanols (FOSAs/FOSEs), as well as additional polyfluorinated compounds, were investigated. Different native and isotope-labelled internal standards (IS) were applied at various steps in the analytical procedure to evaluate the different quantification strategies. Field blanks revealed no major blank problems. European background concentrations observed at Mace Head were found to be in a similar range to Arctic data reported in the literature. Due to the trace levels at the remote site, only the FTOH data sets were complete and could therefore be compared between the laboratories. Additionally, FOSEs could partly be included. Data comparison revealed that despite the challenges inherent in the analysis of airborne PFAS and the low concentrations, all methods applied in this study obtained similar results. However, application of isotope-labelled IS early in the analytical procedure leads to more precise results and is therefore recommended.

  2. Infant pathways to externalizing behavior: evidence of Genotype x Environment interaction.

    PubMed

    Leve, Leslie D; Kerr, David C R; Shaw, Daniel; Ge, Xiaojia; Neiderhiser, Jenae M; Scaramella, Laura V; Reid, John B; Conger, Rand; Reiss, David

    2010-01-01

    To further the understanding of the effects of early experiences, 9-month-old infants were observed during a frustration task. The analytical sample was composed of 348 linked triads of participants (adoptive parents, adopted child, and birth parent[s]) from a prospective adoption study. It was hypothesized that genetic risk for externalizing problems and affect dysregulation in the adoptive parents would independently and interactively predict a known precursor to externalizing problems: heightened infant attention to frustrating events. Results supported the moderation hypotheses involving adoptive mother affect dysregulation: Infants at genetic risk showed heightened attention to frustrating events only when the adoptive mother had higher levels of anxious and depressive symptoms. The Genotype x Environment interaction pattern held when substance use during pregnancy was considered.

  3. Cross-reactivity profiles of legumes and tree nuts using the xMAP® multiplex food allergen detection assay.

    PubMed

    Cho, Chung Y; Oles, Carolyn; Nowatzke, William; Oliver, Kerry; Garber, Eric A E

    2017-10-01

    The homology between proteins in legumes and tree nuts makes it common for individuals with food allergies to be allergic to multiple legumes and tree nuts. This propensity for allergenic and antigenic cross-reactivity means that commonly employed commercial immunodiagnostic assays (e.g., dipsticks) for the detection of food allergens may not always accurately detect, identify, and quantitate legumes and tree nuts unless additional orthogonal analytical methods or secondary measures of analysis are employed. The xMAP® Multiplex Food Allergen Detection Assay (FADA) was used to determine the cross-reactivity patterns and the utility of multi-antibody antigenic profiling to distinguish between legumes and tree nuts. Pure legumes and tree nuts extracted using buffered detergent displayed a high level of cross-reactivity that decreased upon dilution or by using a buffer (UD buffer) designed to increase the stringency of binding conditions and reduce the occurrence of false positives due to plant-derived lectins. Testing for unexpected food allergens, or screening for multiple food allergens, often involves not knowing the identity of the allergen present, its concentration, or the degree of modification during processing. As such, the analytical response measured may represent multiple antigens of varying antigenicity (cross-reactivity). This problem of multiple potential analytes is usually left unresolved: the focus defaults to the primary analyte (the antigen the antibody was raised against), and quantitative interpretation of the content of the analytical sample becomes problematic. The alternative solution offered here is the use of an antigenic profile as generated by the xMAP FADA using multiple antibodies (bead sets). By comparing the antigenic profile to standards, the allergen may be identified along with an estimate of the concentration present. Cluster analysis of the xMAP FADA data was also performed and agreed with the known phylogeny of the legumes and tree nuts being analyzed. Graphical abstract: The use of cluster analysis to compare the multi-antigen profiles of food allergens.

  4. A sample preparation method for recovering suppressed analyte ions in MALDI TOF MS.

    PubMed

    Lou, Xianwen; de Waal, Bas F M; Milroy, Lech-Gustav; van Dongen, Joost L J

    2015-05-01

    In matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS), analyte signals can be substantially suppressed by other compounds in the sample. In this technical note, we describe a modified thin-layer sample preparation method that significantly reduces the analyte suppression effect (ASE). In our method, analytes are deposited on top of the surface of matrix preloaded on the MALDI plate. To prevent embedding of the analyte into the matrix crystals, the sample solutions were prepared without matrix, and care was taken not to re-dissolve the preloaded matrix. The results with model mixtures of peptides, synthetic polymers and lipids show that the detection of analyte ions that were completely suppressed using the conventional dried-droplet method could be effectively recovered by using our method. Our findings suggest that the incorporation of analytes in the matrix crystals has an important contributory effect on ASE. By reducing ASE, our method should be useful for the direct MALDI MS analysis of multicomponent mixtures. Copyright © 2015 John Wiley & Sons, Ltd.

  5. Results of the U.S. Geological Survey's Analytical Evaluation Program for standard reference samples distributed in March 1999

    USGS Publications Warehouse

    Farrar, Jerry W.; Chleboun, Kimberly M.

    1999-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for 8 standard reference samples -- T-157 (trace constituents), M-150 (major constituents), N-61 (nutrient constituents), N-62 (nutrient constituents), P-32 (low ionic strength constituents), GWT-5 (ground-water trace constituents), GWM-4 (ground-water major constituents), and Hg-28 (mercury) -- that were distributed in March 1999 to 120 laboratories enrolled in the U.S. Geological Survey-sponsored interlaboratory testing program. Analytical data that were received from 111 of the laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the eight reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the 8 standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
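    A sketch of a nonparametric most-probable-value computation of the kind described follows: the median across laboratories, with a scaled median absolute deviation used as a robust spread for rating individual results. The rating rule and the data are ours for illustration; the USGS's actual scheme may differ.

```python
# Sketch: nonparametric most probable value (median) across laboratories,
# with a robust spread (scaled MAD) used to flag outlying results.
import statistics

results = [4.02, 3.98, 4.11, 4.05, 3.95, 4.60, 4.01]   # one analyte, 7 labs

mpv = statistics.median(results)
mad = statistics.median([abs(x - mpv) for x in results])
robust_sd = 1.4826 * mad          # consistent with sigma for normal data

for x in results:
    z = (x - mpv) / robust_sd
    print(f"{x:5.2f}  z = {z:+6.2f}  {'flag' if abs(z) > 2 else 'ok'}")
```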

  6. Exact analytical solution of a classical Josephson tunnel junction problem

    NASA Astrophysics Data System (ADS)

    Kuplevakhsky, S. V.; Glukhov, A. M.

    2010-10-01

    We give an exact and complete analytical solution of the classical problem of a Josephson tunnel junction of arbitrary length W ∈ (0, ∞) in the presence of external magnetic fields and transport currents. Contrary to a wide-spread belief, the exact analytical solution unambiguously proves that there is no qualitative difference between so-called "small" (W≪1) and "large" junctions (W≫1). Another unexpected physical implication of the exact analytical solution is the existence (in the current-carrying state) of unquantized Josephson vortices carrying fractional flux and located near one of the edges of the junction. We also refine the mathematical definition of critical transport current.
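    For orientation: the boundary-value problem referred to is, in standard normalized units, the static sine-Gordon equation for the phase difference, with the applied field and transport current entering through the edge derivatives (sign conventions vary between treatments; the form below is stated only to fix ideas, not quoted from the paper):

```latex
\frac{d^{2}\phi}{dx^{2}} = \sin\phi , \qquad 0 \le x \le W ,
\qquad \left.\frac{d\phi}{dx}\right|_{x=0},\
\left.\frac{d\phi}{dx}\right|_{x=W}
\ \text{fixed by the applied field and the transport current.}
```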

  7. Construction Method of Analytical Solutions to the Mathematical Physics Boundary Problems for Non-Canonical Domains

    NASA Astrophysics Data System (ADS)

    Mobarakeh, Pouyan Shakeri; Grinchenko, Victor T.

    2015-06-01

    The majority of practical acoustics problems require solving boundary problems in non-canonical domains. The construction of analytical solutions of mathematical physics boundary problems for non-canonical domains is therefore both valuable from the academic viewpoint and instrumental for the development of efficient algorithms for quantitative estimation of the field characteristics under study. One of the main solution ideologies for such problems is based on the superposition method, which allows one to analyze a wide class of specific problems with domains that can be constructed as the union of canonically shaped subdomains. It is also assumed that an analytical solution (or quasi-solution) can be constructed for each subdomain in one form or another. This approach, however, entails some difficulties in the construction of calculation algorithms, insofar as the boundary conditions are incompletely defined on the intervals where the functions appearing in the general solution are orthogonal to each other. We discuss several typical examples of problems with such difficulties, study their nature, and identify the optimal methods to overcome them.

  8. Interlaboratory comparability, bias, and precision for four laboratories measuring constituents in precipitation, November 1982-August 1983

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Malo, B.A.

    1985-01-01

    Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory has been indicated when the laboratory mean for that analyte is shown to be significantly different from the mean for the most-probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate have been compared with the colorimetric methods that were also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory has been estimated by calculating a pooled variance for each analyte. Analyte estimated precisions have been compared using F-tests and differences in analyte precisions for laboratory pairs have been reported. (USGS)

  9. Trace level detection of analytes using artificial olfactometry

    NASA Technical Reports Server (NTRS)

    Lewis, Nathan S. (Inventor); Severin, Erik J. (Inventor); Wong, Bernard (Inventor)

    2002-01-01

    The present invention provides a device for detecting the presence of an analyte, such as, for example, a lightweight device, including: a sample chamber having a fluid inlet port for the influx of the analyte; a fluid concentrator in flow communication with the sample chamber, wherein the fluid concentrator has an absorbent material capable of absorbing the analyte and capable of desorbing a concentrated analyte; and an array of sensors in fluid communication with the concentrated analyte to be released from the fluid concentrator.

  10. Recent developments and future trends in solid phase microextraction techniques towards green analytical chemistry.

    PubMed

    Spietelun, Agata; Marcinkowski, Łukasz; de la Guardia, Miguel; Namieśnik, Jacek

    2013-12-20

    Solid-phase microextraction techniques find increasing application in the sample preparation step before the chromatographic determination of analytes in samples with a complex composition. These techniques allow several operations to be integrated, such as sample collection, extraction, analyte enrichment above the detection limit of a given measuring instrument, and the isolation of analytes from the sample matrix. This work presents information about novel methodological and instrumental solutions for different variants of solid-phase extraction techniques -- solid-phase microextraction (SPME), stir-bar sorptive extraction (SBSE) and magnetic solid-phase extraction (MSPE) -- including practical applications of these techniques and a critical discussion of their advantages and disadvantages. The proposed solutions fulfill the requirements resulting from the concept of sustainable development, and specifically from the implementation of green chemistry principles in analytical laboratories. Therefore, particular attention was paid to the description of possible uses of novel, selective stationary phases in extraction techniques, inter alia polymeric ionic liquids, carbon nanotubes, and silica- and carbon-based sorbents. The methodological solutions, together with properly matched sampling devices for collecting analytes from samples with varying matrix composition, make it possible to reduce the number of errors during sample preparation prior to chromatographic analysis, as well as to limit the negative impact of this analytical step on the natural environment and the health of laboratory employees. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. External quality-assurance results for the National Atmospheric Deposition Program / National Trends Network and Mercury Deposition Network, 2004

    USGS Publications Warehouse

    Wetherbee, Gregory A.; Latysh, Natalie E.; Greene, Shannon M.

    2006-01-01

    The U.S. Geological Survey (USGS) used five programs to provide external quality-assurance monitoring for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) and two programs to provide external quality-assurance monitoring for the NADP/Mercury Deposition Network (NADP/MDN) during 2004. An intersite-comparison program was used to estimate accuracy and precision of field-measured pH and specific-conductance. The variability and bias of NADP/NTN data attributed to field exposure, sample handling and shipping, and laboratory chemical analysis were estimated using the sample-handling evaluation (SHE), field-audit, and interlaboratory-comparison programs. Overall variability of NADP/NTN data was estimated using a collocated-sampler program. Variability and bias of NADP/MDN data attributed to field exposure, sample handling and shipping, and laboratory chemical analysis were estimated using a system-blank program and an interlaboratory-comparison program. In two intersite-comparison studies, approximately 89 percent of NADP/NTN site operators met the pH measurement accuracy goals, and 94.7 to 97.1 percent of NADP/NTN site operators met the accuracy goals for specific conductance. Field chemistry measurements were discontinued by NADP at the end of 2004. As a result, the USGS intersite-comparison program also was discontinued at the end of 2004. Variability and bias in NADP/NTN data due to sample handling and shipping were estimated from paired-sample concentration differences and specific conductance differences obtained for the SHE program. Median absolute errors (MAEs) equal to or less than 3 percent were indicated for all measured analytes except potassium and hydrogen ion. Positive bias was indicated for most of the measured analytes except for calcium, hydrogen ion and specific conductance. Negative bias for hydrogen ion and specific conductance indicated loss of hydrogen ion and decreased specific conductance from contact of the sample with the collector bucket. Field-audit results for 2004 indicate dissolved analyte loss in more than one-half of NADP/NTN wet-deposition samples for all analytes except chloride. Concentrations of contaminants also were estimated from field-audit data. On the basis of 2004 field-audit results, at least 25 percent of the 2004 NADP/NTN concentrations for sodium, potassium, and chloride were lower than the maximum sodium, potassium, and chloride contamination likely to be found in 90 percent of the samples with 90-percent confidence. Variability and bias in NADP/NTN data attributed to chemical analysis by the NADP Central Analytical Laboratory (CAL) were comparable to the variability and bias estimated for other laboratories participating in the interlaboratory-comparison program for all analytes. Variability in NADP/NTN ammonium data evident in 2002-03 was reduced substantially during 2004. Sulfate, hydrogen-ion, and specific conductance data reported by CAL during 2004 were positively biased. A significant (α = 0.05) bias was identified for CAL sodium, potassium, ammonium, and nitrate data, but the absolute values of the median differences for these analytes were less than the method detection limits. No detections were reported for CAL analyses of deionized-water samples, indicating that contamination was not a problem for CAL. Control charts show that CAL data were within statistical control during at least 90 percent of 2004.
Most 2004 CAL interlaboratory-comparison results for synthetic wet-deposition solutions were within ±10 percent of the most probable values (MPVs) for solution concentrations, except for chloride, nitrate, sulfate, and specific conductance results from one sample in November and one specific conductance result in December. Overall variability of NADP/NTN wet-deposition measurements was estimated during water year 2004 by the median absolute errors for weekly wet-deposition sample concentrations and precipitation measurements for tw
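
    The SHE variability and bias figures above reduce to simple robust statistics on paired-sample differences. A minimal sketch of that arithmetic (illustrative data and function names, not the USGS program's code):

    ```python
    import numpy as np

    def paired_sample_stats(handled, control):
        """Median absolute error (percent) and median bias from paired samples."""
        handled = np.asarray(handled, dtype=float)
        control = np.asarray(control, dtype=float)
        diff = handled - control                          # paired concentration differences
        mae = np.median(100.0 * np.abs(diff) / control)   # median absolute error, percent
        bias = np.median(diff)                            # >0: contamination/gain, <0: loss
        return mae, bias

    # Hypothetical potassium pairs (mg/L): handled samples vs. controls
    mae, bias = paired_sample_stats([0.031, 0.042, 0.055], [0.030, 0.040, 0.050])
    print(f"MAE = {mae:.1f}%, median bias = {bias:+.4f} mg/L")
    ```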

  12. Tensile behaviors of three-dimensionally free-formable titanium mesh plates for bone graft applications

    NASA Astrophysics Data System (ADS)

    He, Jianmei

    2017-11-01

    Present metal artificial bones for bone grafts suffer from problems such as excessive weight and excessive elastic modulus compared with natural bones. In this study, three-dimensionally (3D) free-formable titanium mesh plates for bone graft applications were introduced to address these problems. Fundamental mesh shapes and patterns were designed for different base shapes and design parameters using three-dimensional CAD tools, with higher flexibility and strength as the design goals. Based on the designed mesh shapes and patterns, sample specimens of titanium mesh plates with different base shapes and design variables were manufactured by laser processing. Tensile properties of the sample titanium mesh plates, such as volume density and tensile elastic modulus, were evaluated experimentally and analytically. Experimental results showed that such titanium mesh plates had much higher flexibility and that their mechanical properties could be tuned to approach those of natural bones. More detailed studies of the mechanical properties of titanium mesh plates, including compression, bending, torsion, and durability, will be carried out in future work.

  13. Identification of microplastics by FTIR and Raman microscopy: a novel silicon filter substrate opens the important spectral range below 1300 cm(-1) for FTIR transmission measurements.

    PubMed

    Käppler, Andrea; Windrich, Frank; Löder, Martin G J; Malanin, Mikhail; Fischer, Dieter; Labrenz, Matthias; Eichhorn, Klaus-Jochen; Voit, Brigitte

    2015-09-01

    The presence of microplastics in aquatic ecosystems is a topical problem and creates the need for appropriate and reliable analytical methods to distinctly identify and quantify these particles in environmental samples. As an example, transmission Fourier transform infrared (FTIR) imaging can be used to analyze samples directly on filters without any visual presorting, provided the environmental sample has first been extracted, purified, and filtered. However, this analytical approach is strongly restricted by the limited IR transparency of conventional filter materials. Within this study, we describe a novel silicon (Si) filter substrate produced by photolithographic microstructuring, which guarantees sufficient transparency for the broad mid-infrared region of 4000-600 cm(-1). This filter type features holes with a diameter of 10 μm and exhibits adequate mechanical stability. Furthermore, it will be shown that our Si filter substrate allows a distinct identification of the most common microplastics, polyethylene (PE) and polypropylene (PP), in the characteristic fingerprint region (1400-600 cm(-1)). Moreover, using the Si filter substrate, a differentiation of microparticles of polyesters having quite similar chemical structure, like polyethylene terephthalate (PET) and polybutylene terephthalate (PBT), is now possible, which facilitates a visualization of their distribution within a microplastic sample by FTIR imaging. Finally, this Si filter can also be used as a substrate for Raman microscopy, a second complementary spectroscopic technique, to identify microplastic samples.

  14. An on-line push/pull perfusion-based hollow-fiber liquid-phase microextraction system for high-performance liquid chromatographic determination of alkylphenols in water samples.

    PubMed

    Chao, Yu-Ying; Jian, Zhi-Xuan; Tu, Yi-Ming; Wang, Hsaio-Wen; Huang, Yeou-Lih

    2013-06-07

    In this study, we employed a novel on-line method, push/pull perfusion hollow-fiber liquid-phase microextraction (PPP-HF-LPME), to extract 4-tert-butylphenol, 2,4-di-tert-butylphenol, 4-n-nonylphenol, and 4-n-octylphenol from river and tap water samples; we then separated and quantified the extracted analytes through high-performance liquid chromatography (HPLC). Using this approach, we overcame the problem of fluid loss across the porous HF membrane to the donor phase, permitting on-line coupling of HF-LPME to HPLC. In our PPP-HF-LPME system, we used a push/pull syringe pump as the driving source to perfuse the acceptor phase, while employing a heating mantle and an ultrasonic probe to accelerate mass transfer. We optimized the experimental conditions, such as the nature of the HF-supported intermediary phase and the acceptor phase, the composition of the donor and acceptor phases, the sample temperature, and the sonication conditions. Our proposed method provided relative standard deviations of 3.1-6.2%, coefficients of determination (r(2)) of 0.9989-0.9998, and limits of detection of 0.03-0.2 ng mL(-1) for the analytes under the optimized conditions. When we applied this method to analyses of river and tap water samples, our results confirmed that this microextraction technique allows reliable monitoring of alkylphenols in water samples.

  15. Evaluation of polyethersulfone performance for the microextraction of polar chlorinated herbicides from environmental water samples.

    PubMed

    Prieto, Ailette; Rodil, Rosario; Quintana, José Benito; Cela, Rafael; Möder, Monika; Rodríguez, Isaac

    2014-05-01

    In this work, the suitability of bulk polyethersulfone (PES) for sorptive microextraction of eight polar, chlorinated phenoxy acids and dicamba from environmental water samples is assessed, and the analytical features of the optimized method are compared to those reported for other microextraction techniques. Under optimized conditions, extractions were performed with samples (18 mL) adjusted to pH 2 and containing 30% (w/v) sodium chloride, using a tubular PES sorbent (1 cm length × 0.7 mm o.d., sorbent volume 8 µL). Equilibrium conditions were achieved after 3 h of direct sampling, with absolute extraction efficiencies ranging from 39 to 66%, depending on the compound. Analytes were recovered by soaking the polymer with 0.1 mL of ethyl acetate, derivatized, and determined by gas chromatography-mass spectrometry (GC-MS). The achieved quantification limits (LOQs) varied between 0.005 and 0.073 ng mL(-1). After normalization with the internal surrogate (IS), the efficiency of the extraction was only moderately affected by the particular characteristics of different water samples (surface and sewage water); thus, pseudo-external calibration, using spiked ultrapure water solutions, can be used as the quantification technique. The low cost of the PES polymer means it can be treated as a disposable sorbent, avoiding variations in extraction performance due to cross-contamination and/or surface modification with usage. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Dynamic nuclear polarization in solid samples by electrical-discharge-induced radicals

    NASA Astrophysics Data System (ADS)

    Katz, Itai; Blank, Aharon

    2015-12-01

    Dynamic nuclear polarization (DNP) is a method for enhancing nuclear magnetic resonance (NMR) signals that has many potential applications in chemistry and medicine. Traditionally, DNP signal enhancement is achieved through the use of exogenous radicals mixed in a solution with the molecules of interest. Here we show that proton DNP signal enhancements can be obtained for solid samples without the use of solvent and exogenous radicals. Radicals are generated primarily on the surface of a solid sample using electrical discharges. These radicals are found suitable for DNP. They are stable under moderate vacuum conditions, yet readily annihilate upon compound dissolution or air exposure. This feature makes them attractive for use in medical applications, where the current variety of radicals used for DNP faces regulatory problems. In addition, this solvent-free method may be found useful for analytical NMR of solid samples which cannot tolerate solvents, such as certain pharmaceutical products.

  17. Systems and methods for laser assisted sample transfer to solution for chemical analysis

    DOEpatents

    Van Berkel, Gary J.; Kertesz, Vilmos; Ovchinnikova, Olga S.

    2014-06-03

    Systems and methods are described for laser ablation of an analyte from a specimen and capturing of the analyte in a dispensed solvent to form a testing solution. A solvent dispensing and extraction system can form a liquid microjunction with the specimen. The solvent dispensing and extraction system can include a surface sampling probe. The laser beam can be directed through the surface sampling probe. The surface sampling probe can also serve as an atomic force microscopy probe. The surface sampling probe can form a seal with the specimen. The testing solution including the analyte can then be analyzed using an analytical instrument or undergo further processing.

  18. Systems and methods for laser assisted sample transfer to solution for chemical analysis

    DOEpatents

    Van Berkel, Gary J.; Kertesz, Vilmos; Ovchinnikova, Olga S.

    2015-09-29

    Systems and methods are described for laser ablation of an analyte from a specimen and capturing of the analyte in a dispensed solvent to form a testing solution. A solvent dispensing and extraction system can form a liquid microjunction with the specimen. The solvent dispensing and extraction system can include a surface sampling probe. The laser beam can be directed through the surface sampling probe. The surface sampling probe can also serve as an atomic force microscopy probe. The surface sampling probe can form a seal with the specimen. The testing solution including the analyte can then be analyzed using an analytical instrument or undergo further processing.

  19. Systems and methods for laser assisted sample transfer to solution for chemical analysis

    DOEpatents

    Van Berkel, Gary J; Kertesz, Vilmos; Ovchinnikova, Olga S

    2013-08-27

    Systems and methods are described for laser ablation of an analyte from a specimen and capturing of the analyte in a dispensed solvent to form a testing solution. A solvent dispensing and extraction system can form a liquid microjunction with the specimen. The solvent dispensing and extraction system can include a surface sampling probe. The laser beam can be directed through the surface sampling probe. The surface sampling probe can also serve as an atomic force microscopy probe. The surface sampling probe can form a seal with the specimen. The testing solution including the analyte can then be analyzed using an analytical instrument or undergo further processing.

  20. Applications of reversible covalent chemistry in analytical sample preparation.

    PubMed

    Siegel, David

    2012-12-07

    Reversible covalent chemistry (RCC) adds another dimension to commonly used sample preparation techniques like solid-phase extraction (SPE), solid-phase microextraction (SPME), molecular imprinted polymers (MIPs) or immuno-affinity cleanup (IAC): chemical selectivity. By selecting analytes according to their covalent reactivity, sample complexity can be reduced significantly, resulting in enhanced analytical performance for low-abundance target analytes. This review gives a comprehensive overview of the applications of RCC in analytical sample preparation. The major reactions covered include reversible boronic ester formation, thiol-disulfide exchange and reversible hydrazone formation, targeting analyte groups like diols (sugars, glycoproteins and glycopeptides, catechols), thiols (cysteinyl-proteins and cysteinyl-peptides) and carbonyls (carbonylated proteins, mycotoxins). Their applications range from low abundance proteomics to reversible protein/peptide labelling to antibody chromatography to quantitative and qualitative food analysis. In discussing the potential of RCC, a special focus is on the conditions and restrictions of the utilized reaction chemistry.

  1. Targeted Analyte Detection by Standard Addition Improves Detection Limits in MALDI Mass Spectrometry

    PubMed Central

    Eshghi, Shadi Toghi; Li, Xingde; Zhang, Hui

    2014-01-01

    Matrix-assisted laser desorption/ionization has proven an effective tool for fast and accurate determination of many molecules. However, detector sensitivity and chemical noise compromise the detection of many invaluable low-abundance molecules from biological and clinical samples. To address this limitation, we developed a targeted analyte detection (TAD) technique. In TAD, the target analyte is selectively elevated by spiking a known amount of that analyte into the sample, thereby raising its concentration above the noise level, where we take advantage of the improved sensitivity to detect the presence of the endogenous analyte in the sample. We assessed TAD on three peptides in simple and complex background solutions with various exogenous analyte concentrations in two MALDI matrices. TAD successfully improved the limit of detection (LOD) of target analytes when the target peptides were added to the sample at a concentration close to the optimum concentration. The optimum exogenous concentration was estimated through a quantitative method to be approximately equal to the original LOD for each target. Also, we showed that TAD could achieve LOD improvements averaging 3-fold in a simple sample and 2-fold in a complex sample. TAD provides a straightforward assay to improve the LOD of generic target analytes without the need for costly hardware modifications. PMID:22877355
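
    The TAD recipe is simple enough to express directly: spike the target at roughly its original LOD, then flag the endogenous analyte if the spiked-sample signal exceeds the expected exogenous-only response. A hedged sketch of that decision logic (names and numbers are illustrative, not from the paper):

    ```python
    def tad_spike_amount(original_lod):
        """TAD: the optimum exogenous spike is approximately the original LOD."""
        return original_lod

    def endogenous_detected(signal_spiked, signal_exogenous_only, noise_sd, k=3.0):
        """Flag the endogenous analyte if the spiked-sample signal exceeds the
        exogenous-only signal by more than k noise standard deviations."""
        return (signal_spiked - signal_exogenous_only) > k * noise_sd

    spike = tad_spike_amount(original_lod=0.5)          # hypothetical units, e.g. fmol/uL
    print(spike, endogenous_detected(1250.0, 1100.0, noise_sd=40.0))
    ```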

  2. Targeted analyte detection by standard addition improves detection limits in matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Toghi Eshghi, Shadi; Li, Xingde; Zhang, Hui

    2012-09-18

    Matrix-assisted laser desorption/ionization (MALDI) has proven an effective tool for fast and accurate determination of many molecules. However, detector sensitivity and chemical noise compromise the detection of many invaluable low-abundance molecules from biological and clinical samples. To address this limitation, we developed a targeted analyte detection (TAD) technique. In TAD, the target analyte is selectively elevated by spiking a known amount of that analyte into the sample, thereby raising its concentration above the noise level, where we take advantage of the improved sensitivity to detect the presence of the endogenous analyte in the sample. We assessed TAD on three peptides in simple and complex background solutions with various exogenous analyte concentrations in two MALDI matrices. TAD successfully improved the limit of detection (LOD) of target analytes when the target peptides were added to the sample at a concentration close to the optimum concentration. The optimum exogenous concentration was estimated through a quantitative method to be approximately equal to the original LOD for each target. Also, we showed that TAD could achieve LOD improvements averaging 3-fold in a simple sample and 2-fold in a complex sample. TAD provides a straightforward assay to improve the LOD of generic target analytes without the need for costly hardware modifications.

  3. Performance evaluation soil samples utilizing encapsulation technology

    DOEpatents

    Dahlgran, J.R.

    1999-08-17

    Performance evaluation soil samples and method of their preparation uses encapsulation technology to encapsulate analytes which are introduced into a soil matrix for analysis and evaluation by analytical laboratories. Target analytes are mixed in an appropriate solvent at predetermined concentrations. The mixture is emulsified in a solution of polymeric film forming material. The emulsified solution is polymerized to form microcapsules. The microcapsules are recovered, quantitated and introduced into a soil matrix in a predetermined ratio to form soil samples with the desired analyte concentration. 1 fig.

  4. Performance evaluation soil samples utilizing encapsulation technology

    DOEpatents

    Dahlgran, James R.

    1999-01-01

    Performance evaluation soil samples and method of their preparation using encapsulation technology to encapsulate analytes which are introduced into a soil matrix for analysis and evaluation by analytical laboratories. Target analytes are mixed in an appropriate solvent at predetermined concentrations. The mixture is emulsified in a solution of polymeric film forming material. The emulsified solution is polymerized to form microcapsules. The microcapsules are recovered, quantitated and introduced into a soil matrix in a predetermined ratio to form soil samples with the desired analyte concentration.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Covered are: analytical laboratory operations (ALO) sample receipt and control, ALO data report/package preparation review and control, single-shell tank (SST) project sample tracking system, sample receiving, analytical balances, duties and responsibilities of the sample custodian, sample refrigerator temperature monitoring, security, assignment of staff responsibilities, sample storage, data reporting, and general requirements for glassware.

  6. Magnetic separation techniques in sample preparation for biological analysis: a review.

    PubMed

    He, Jincan; Huang, Meiying; Wang, Dongmei; Zhang, Zhuomin; Li, Gongke

    2014-12-01

    Sample preparation is a fundamental and essential step in almost all analytical procedures, especially for the analysis of complex samples such as biological and environmental samples. In past decades, with the advantages of superparamagnetism, good biocompatibility and high binding capacity, functionalized magnetic materials have been widely applied in various processes of sample preparation for biological analysis. In this paper, the recent advancements of magnetic separation techniques based on magnetic materials in the field of sample preparation for biological analysis were reviewed. The strategy of magnetic separation techniques was summarized. The synthesis, stabilization and bio-functionalization of magnetic nanoparticles were reviewed in detail. Characterization of magnetic materials was also summarized. Moreover, the applications of magnetic separation techniques for the enrichment of proteins, nucleic acids, cells and bioactive compounds and for the immobilization of enzymes were described. Finally, existing problems and possible future trends of magnetic separation techniques for biological analysis were discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Recent developments in the analysis of toxic elements.

    PubMed

    Lisk, D J

    1974-06-14

    One may conclude that it is impractical to confine oneself to any one analytical method since ever more sensitive instrumentation continues to be produced. However, in certain methods such as anodic stripping voltammetry and flameless atomic absorption it may be background contamination from reagent impurities and surroundings rather than instrument sensitivity which controls the limits of element detection. The problem of contamination from dust or glassware is greatly magnified when the sample size becomes ever smaller. Air entering laboratories near highways may contain trace quantities of lead, cadmium, barium, antimony, and other elements from engine exhaust. Even plastic materials contacting the sample may be suspect as a source of contamination since specific metals may be used as catalysts in the synthesis of the plastic and traces may be retained in it. Certain elements may even be deliberately added to plastics during manufacture for identification purposes. Nondestructive methods such as neutron activation and x-ray techniques thus offer great advantages not only in time but in the elimination of impurities introduced during sample ashing. Future improvements in attainable limits of detection may arise largely from progress in the ultrapurification of reagents and "clean-room" techniques. Finally, the competence of the analyst is also vitally important in the skillful operation of modern complex analytical instrumentation and in the experienced evaluation of data.

  8. Hill Problem Analytical Theory to the Order Four. Application to the Computation of Frozen Orbits around Planetary Satellites

    NASA Technical Reports Server (NTRS)

    Lara, Martin; Palacian, Jesus F.

    2007-01-01

    Frozen orbits of the Hill problem are determined in the doubly averaged problem, where short- and long-period terms are removed by means of Lie transforms. The computation of initial conditions of the corresponding quasi-periodic solutions in the non-averaged problem is straightforward, as the perturbation method used provides the explicit equations of the transformation that connects the averaged and non-averaged models. A fourth-order analytical theory proves necessary for the accurate computation of quasi-periodic, frozen orbits.

  9. Methods of Analysis by the U.S. Geological Survey National Water Quality Laboratory - Determination of Moderate-Use Pesticides and Selected Degradates in Water by C-18 Solid-Phase Extraction and Gas Chromatography/Mass Spectrometry

    USGS Publications Warehouse

    Sandstrom, Mark W.; Stroppel, Max E.; Foreman, William T.; Schroeder, Michael P.

    2001-01-01

    A method for the isolation and analysis of 21 parent pesticides and 20 pesticide degradates in natural-water samples is described. Water samples are filtered to remove suspended particulate matter and then are pumped through disposable solid-phase-extraction columns that contain octadecyl-bonded porous silica to extract the analytes. The columns are dried by using nitrogen gas, and adsorbed analytes are eluted with ethyl acetate. Extracted analytes are determined by capillary-column gas chromatography/mass spectrometry with selected-ion monitoring of three characteristic ions. The upper concentration limit is 2 micrograms per liter (µg/L) for most analytes. Single-operator method detection limits in reagent-water samples range from 0.001 to 0.057 µg/L. Validation data also are presented for 14 parent pesticides and 20 degradates that were determined to have greater bias or variability, or shorter holding times, than the other compounds. The estimated maximum holding time for analytes in pesticide-grade water before extraction was 4 days. The estimated maximum holding time for analytes after extraction on the dry solid-phase-extraction columns was 7 days. An optional on-site extraction procedure allows for samples to be collected and processed at remote sites where it is difficult to ship samples to the laboratory within the recommended pre-extraction holding time. The method complements existing U.S. Geological Survey Method O-1126-95 (NWQL Schedules 2001 and 2010) by using identical sample preparation and comparable instrument analytical conditions so that sample extracts can be analyzed by either method to expand the range of analytes determined from one water sample.

  10. Application of correlation constrained multivariate curve resolution alternating least-squares methods for determination of compounds of interest in biodiesel blends using NIR and UV-visible spectroscopic data.

    PubMed

    de Oliveira, Rodrigo Rocha; de Lima, Kássio Michell Gomes; Tauler, Romà; de Juan, Anna

    2014-07-01

    This study describes two applications of a variant of the multivariate curve resolution alternating least squares (MCR-ALS) method with a correlation constraint. The first application describes the use of MCR-ALS for the determination of biodiesel concentrations in biodiesel blends using near infrared (NIR) spectroscopic data. In the second application, the proposed method allowed the determination of the synthetic antioxidant N,N'-di-sec-butyl-p-phenylenediamine (PDA) present in biodiesel mixtures from different vegetable sources using UV-visible spectroscopy. A well-established multivariate regression algorithm, partial least squares (PLS), was also applied for comparison of the quantification performance of the models developed in both applications. The correlation constraint has been adapted to handle the presence of batch-to-batch matrix effects due to ageing, which might occur when different groups of samples are used to build a calibration model, as in the first application. Different data set configurations and diverse modes of application of the correlation constraint are explored, and guidelines are given to cope with different types of analytical problems, such as the correction of matrix effects among biodiesel samples, where MCR-ALS outperformed PLS, reducing the relative error of prediction (RE) from 9.82% to 4.85% in the first application, or the determination of a minor compound with overlapped, weak spectroscopic signals, where MCR-ALS gave a higher RE (3.16%) for the prediction of PDA than PLS (1.99%), but with the advantage of recovering the pure spectral profiles of analytes and interferences. The obtained results show the potential of the MCR-ALS method with correlation constraint to be adapted to diverse data set configurations and analytical problems related to the determination of biodiesel mixtures and added compounds therein. Copyright © 2014 Elsevier B.V. All rights reserved.
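
    At its core, MCR-ALS alternates two least-squares solves of D ≈ C Sᵀ under constraints; the correlation constraint maps one recovered concentration profile onto a reference scale at each iteration. A bare-bones sketch under simplifying assumptions (single constrained analyte, plain non-negativity clamping; not the authors' implementation):

    ```python
    import numpy as np

    def mcr_als(D, S0, c_ref, cal_idx, analyte=0, n_iter=50):
        """Minimal MCR-ALS (D ~= C @ S.T) with non-negativity and a correlation
        constraint on one analyte. D: (samples x wavelengths); S0: initial spectra
        (wavelengths x k); c_ref: reference concentrations for samples cal_idx."""
        S = S0.copy()
        for _ in range(n_iter):
            C = np.clip(D @ np.linalg.pinv(S.T), 0, None)     # LS step for C, >= 0
            # correlation constraint: regress C[:, analyte] onto the reference scale
            b, a = np.polyfit(C[cal_idx, analyte], c_ref, 1)  # local linear model
            C[:, analyte] = np.clip(b * C[:, analyte] + a, 0, None)
            S = np.clip(np.linalg.pinv(C) @ D, 0, None).T     # LS step for S, >= 0
        return C, S

    # Hypothetical demo: 30 mixture spectra over 200 wavelengths, 3 components
    rng = np.random.default_rng(1)
    C_true, S_true = rng.random((30, 3)), rng.random((200, 3))
    D = C_true @ S_true.T
    C_est, S_est = mcr_als(D, S_true + 0.05 * rng.random((200, 3)),
                           c_ref=C_true[:5, 0], cal_idx=np.arange(5))
    ```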

  11. Recycling polymer residues to synthesize magnetic nanocomposites for dispersive micro-solid phase extraction.

    PubMed

    Ghambari, Hoda; Reyes-Gallardo, Emilia M; Lucena, Rafael; Saraji, Mohammad; Cárdenas, Soledad

    2017-08-01

    The ubiquitous presence of plastics, an obvious consequence of their usefulness and low price, has turned them into a problem of environmental and safety concern. The new plastic economy, an initiative recently launched by the World Economic Forum and Ellen MacArthur Foundation, with analytical support from McKinsey & Company, promotes a change in the use of plastic worldwide around three main pillars: redesign, reusing and recycling. Recycled plastics, with the aim of extending their life spam, can be used to synthesize materials for analytical purposes. In this article polystyrene (PS) trays, previously used for food packaging, are proposed as polymeric source for the synthesis of magnetic nanocomposites. The synthesis plays with the solubility of PS in different solvents in such a way that PS is gelated in the presence of cobalt ferrite nanoparticles which are finally embedded in the polymeric network. The extraction capability of the magnetic PS nanocomposite was evaluated using the determination of four parabens (methylparaben, ethylparaben, propylparaben and butylparaben) in water using liquid chromatography-tandem mass spectrometry as model analytical problem. Under the optimum conditions, limits of detection and quantification were in the range of 0.05-0.15 and 0.15-0.5ng/mL, respectively. The precisions, expressed as relative standard deviation (RSD), varied between 4.4% and 8.5% and the relative recoveries for analysis of the water samples were in the interval 81.2-104.5%. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Random Forest as a Predictive Analytics Alternative to Regression in Institutional Research

    ERIC Educational Resources Information Center

    He, Lingjun; Levine, Richard A.; Fan, Juanjuan; Beemer, Joshua; Stronach, Jeanne

    2018-01-01

    In institutional research, modern data mining approaches are seldom considered to address predictive analytics problems. The goal of this paper is to highlight the advantages of tree-based machine learning algorithms over classic (logistic) regression methods for data-informed decision making in higher education problems, and stress the success of…
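
    The comparison the abstract describes can be reproduced in a few lines as a cross-validated head-to-head between logistic regression and a random forest; the sketch below uses synthetic stand-in data, not institutional records:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical stand-in for institutional data (e.g., retention outcomes)
    X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                               random_state=0)
    for model in (LogisticRegression(max_iter=1000),
                  RandomForestClassifier(n_estimators=300, random_state=0)):
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        print(type(model).__name__, round(auc, 3))
    ```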

  13. Intimacy Is a Transdiagnostic Problem for Cognitive Behavior Therapy: Functional Analytical Psychotherapy Is a Solution

    ERIC Educational Resources Information Center

    Wetterneck, Chad T.; Hart, John M.

    2012-01-01

    Problems with intimacy and interpersonal issues are exhibited across most psychiatric disorders. However, most of the targets in Cognitive Behavioral Therapy are primarily intrapersonal in nature, with few directly involved in interpersonal functioning and effective intimacy. Functional Analytic Psychotherapy (FAP) provides a behavioral basis for…

  14. Identification of multiple leaks in pipeline: Linearized model, maximum likelihood, and super-resolution localization

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Ghidaoui, Mohamed S.

    2018-07-01

    This paper considers the problem of identifying multiple leaks in a water-filled pipeline based on inverse transient wave theory. The analytical solution to this problem involves nonlinear interaction terms between the various leaks. This paper shows analytically and numerically that these nonlinear terms are of the order of the leak sizes to the power two and thus negligible. As a result of this simplification, a maximum likelihood (ML) scheme that identifies leak locations and leak sizes separately is formulated and tested. It is found that the ML estimation scheme is highly efficient and robust with respect to noise. In addition, the ML method is a super-resolution leak localization scheme because its resolvable leak distance (approximately 0.15λmin, where λmin is the minimum wavelength) is below the Nyquist-Shannon sampling theorem limit (0.5λmin). Moreover, the Cramér-Rao lower bound (CRLB) is derived and used to show the efficiency of the ML scheme estimates. The variance of the ML estimator approximates the CRLB, showing that the ML scheme belongs to the class of best unbiased leak localization estimators.
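
    The efficiency claim, that the ML estimator's variance approaches the CRLB, can be illustrated on the simplest possible estimation model: a Gaussian mean standing in for a leak location, not the pipeline transient model. A Monte Carlo sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sigma, n, trials = 1.0, 50, 20000
    theta = 3.0                                # true parameter (stand-in for a leak location)
    # ML estimate of a Gaussian mean is the sample mean
    est = rng.normal(theta, sigma, (trials, n)).mean(axis=1)
    crlb = sigma**2 / n                        # Cramer-Rao lower bound for this model
    print(est.var(), crlb)                     # empirical variance ~ CRLB: efficient estimator
    ```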

  15. Workplace Skills Taught in a Simulated Analytical Department

    NASA Astrophysics Data System (ADS)

    Sonchik Marine, Susan

    2001-11-01

    Integration of workplace skills into the academic setting is paramount for any chemical technology program. In addition to the expected chemistry content, courses must build proficiency in oral and written communication skills, computer skills, laboratory safety, and logical troubleshooting. Miami University's Chemical Technology II course is set up as a contract analytical laboratory. Students apply the advanced sampling techniques, quality assurance, standard methods, and statistical analyses they have studied. For further integration of workplace skills, weekly "department meetings" are held where the students, as members of the department, report on their work in process, present completed projects, and share what they have learned and what problems they have encountered. Information is shared between the experienced members of the department and those encountering problems or starting a new project. The instructor, as department manager, makes announcements, reviews company and department status, and assigns work for the coming week. The department members report results to clients in formal reports or in short memos. Factors affecting the success of the "department meeting" approach include the formality of the meeting room, use of an official agenda, the frequency, time, and duration of the meeting, and accountability of the students.

  16. Big Data Analytics with Datalog Queries on Spark.

    PubMed

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2016-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.
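
    The recursion BigDatalog specifies declaratively, e.g. transitive closure, must otherwise be hand-coded in Spark as an explicit fixpoint loop. A rough sketch in the plain PySpark DataFrame API (not BigDatalog itself):

    ```python
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("tc").getOrCreate()
    edges = spark.createDataFrame([(1, 2), (2, 3), (3, 4)], ["src", "dst"])

    # Datalog: tc(X,Y) <- edge(X,Y).  tc(X,Y) <- tc(X,Z), edge(Z,Y).
    paths = edges
    while True:
        step = (paths.alias("p").join(edges.alias("e"), col("p.dst") == col("e.src"))
                .select(col("p.src").alias("src"), col("e.dst").alias("dst")))
        nxt = paths.union(step).distinct()
        if nxt.count() == paths.count():       # no new paths derived: fixpoint reached
            break
        paths = nxt
    paths.show()
    ```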

  17. Big Data Analytics with Datalog Queries on Spark

    PubMed Central

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2017-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics. PMID:28626296

  18. NHEXAS PHASE I ARIZONA STUDY--QA ANALYTICAL RESULTS FOR PESTICIDE METABOLITES IN BLANK SAMPLES

    EPA Science Inventory

    The Pesticide Metabolites in Blank Samples data set contains the analytical results of measurements of up to 4 pesticide metabolites in 3 blank samples from 3 households. Measurements were made in blank samples of urine. Blank samples were used to assess the potential for sampl...

  19. Design and performance of a new continuous-flow sample-introduction system for flame infrared-emission spectrometry: Applications in process analysis, flow injection analysis, and ion-exchange high-performance liquid chromatography.

    PubMed

    Lam, C K; Zhang, Y; Busch, M A; Busch, K W

    1993-06-01

    A new sample introduction system for the analysis of continuously flowing liquid streams by flame infrared-emission (FIRE) spectrometry has been developed. The system uses a specially designed purge cell to strip dissolved CO(2) from solution into a hydrogen gas stream that serves as the fuel for a hydrogen/air flame. Vibrationally excited CO(2) molecules present in the flame are monitored with a simple infrared filter (4.4 µm) photometer. The new system can be used to introduce analytes as a continuous liquid stream (process analysis mode) or on a discrete basis by sample injection (flow injection analysis mode). The key to the success of the method is the new purge-cell design. The small internal volume of the cell minimizes problems associated with purge-cell clean-out and produces sharp, reproducible signals. Spent analytical solution is continuously drained from the cell, making cell disconnection and cleaning between samples unnecessary. Under the conditions employed in this study, samples could be analyzed at a maximum rate of approximately 60/h. The new sample introduction system was successfully tested in both the process analysis and flow injection analysis modes for the determination of total inorganic carbon in Waco tap water. For the first time, flame infrared-emission spectrometry was successfully extended to non-volatile organic compounds by using chemical pretreatment with peroxydisulfate in the presence of silver ion to convert the analytes into dissolved carbon dioxide prior to purging and detection by the FIRE radiometer. A test of the peroxydisulfate/Ag(+) reaction using six organic acids and five sugars indicated that all 11 compounds were oxidized to nearly the same extent. Finally, the new sample introduction system was used in conjunction with a simple filter FIRE radiometer as a detection system in ion-exchange high-performance liquid chromatography. Ion-exchange chromatograms are shown for two aqueous mixtures, one containing six organic acids and the second containing six mono-, di-, and trisaccharides.

  20. Analytical strategy for the determination of various arsenic species in landfill leachate containing high concentrations of chlorine and organic carbon by HPLC-ICPMS

    NASA Astrophysics Data System (ADS)

    Bae, J.; An, J.; Kim, J.; Jung, H.; Kim, K.; Yoon, C.; Yoon, H.

    2012-12-01

    As a variety of wastes containing arsenic are disposed of in landfills, such facilities can play a prominent role in disseminating arsenic sources to the environment. Since it is widely recognized that arsenic toxicity is highly dependent on its species, accurate determination of the various arsenic species should be considered one of the essential goals in properly accounting for the potential health risk of arsenic to humans and the environment. Inductively coupled plasma mass spectrometry coupled to high performance liquid chromatography (HPLC-ICPMS) is acknowledged as one of the most important tools for trace analysis of metallic speciation because of its superior separation capability and detectability. However, the complexity of matrices can cause severe interferences in the analysis results, a problem often encountered with HPLC-ICPMS systems. A high concentration of organic carbon in a sample solution causes carbon build-up on the skimmer and sampling cone, which reduces analytical sensitivity and requires a high maintenance level for its cleaning. In addition, argon from the plasma and chlorine from the sample matrix may combine to form 40Ar35Cl, which has the same nominal mass-to-charge (m/z) ratio as arsenic. In this respect, an analytical strategy for the determination of various arsenic species (e.g., inorganic arsenite and arsenate, monomethylarsonic acid, dimethylarsinic acid, dimethyldithioarsinic acid, and arsenobetaine) in landfill leachate containing high concentrations of chlorine and organic carbon was developed in the present study. A solid phase extraction disk (i.e., a C18 disk), which does not significantly adsorb any of the target arsenic species, was used to remove organic carbon from sample solutions. In addition, helium (He) gas was injected into the collision reaction cell of the ICPMS to dissociate 40Ar35Cl into individual 40Ar and 35Cl. Although the He gas also decreased the arsenic intensity by blocking 75As, the signal-to-noise ratio increased significantly after injecting He gas. We demonstrated that the analytical strategy achieved improved sensitivity for the determination of various arsenic species in landfill leachate, one of the most complex matrices.

  1. Integrated sampling and analysis unit for the determination of sexual pheromones in environmental air using fabric phase sorptive extraction and headspace-gas chromatography-mass spectrometry.

    PubMed

    Alcudia-León, M Carmen; Lucena, Rafael; Cárdenas, Soledad; Valcárcel, Miguel; Kabir, Abuzar; Furton, Kenneth G

    2017-03-10

    This article presents a novel unit that integrates for the first time air sampling and preconcentration based on the principles of fabric phase sorptive extraction. The determination of Tuta absoluta sexual pheromone traces in environmental air was selected as the model analytical problem. For this aim, a novel laboratory-built unit made up of commercial brass elements serving as the holder of the sol-gel-coated fabric extraction phase was designed and optimized. The performance of the integrated unit was evaluated by analyzing environmental air sampled in tomato crops. The unit can work in sampling and analysis modes, which eliminates any need for sorptive phase manipulation prior to instrumental analysis. In the sampling mode, the unit can be connected to a sampling pump to pass the air through the sorptive phase at a controlled flow-rate. In the analysis mode, it is placed in the gas chromatograph autosampler without any instrumental modification. This also diminishes the risk of cross contamination between sampling and analysis. The performance of the new unit was evaluated using the main components of the sexual pheromone of Tuta absoluta [(3E,8Z,11Z)-tetradecatrien-1-yl acetate and (3E,8Z)-tetradecadien-1-yl acetate] as model analytes. The limits of detection for the two compounds were 1.6 μg and 0.8 μg, respectively, while the precision (expressed as relative standard deviation) was better than 3.7%. Finally, the unit was deployed in the field to analyze a number of real-life samples, some of which were found to be positive. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    Here, the one-dimensional water faucet problem is one of the classical benchmark problems originally proposed by Ransom to study the two-fluid two-phase flow model. With certain simplifications, such as a massless gas phase and no wall and interfacial frictions, analytical solutions had previously been obtained for the transient liquid velocity and void fraction distribution. The water faucet problem and its analytical solutions have been widely used for code assessment, benchmarking, and numerical verification. In our previous study, Ransom's solutions were used for the mesh convergence study of a high-resolution spatial discretization scheme. It was found that, at the steady state, the anticipated second-order spatial accuracy could not be achieved when compared to the existing Ransom analytical solutions. A further investigation showed that the existing analytical solutions do not actually satisfy the commonly used two-fluid single-pressure two-phase flow equations. In this work, we present a new set of analytical solutions of the water faucet problem at the steady state, considering the effect of the gas phase density on the pressure distribution. This new set of analytical solutions is used for mesh convergence studies, from which the anticipated second order of accuracy is achieved for the 2nd-order spatial discretization scheme. In addition, extended Ransom transient solutions for the gas phase velocity and pressure are derived, with the assumption of decoupled liquid and gas pressures. Numerical verifications of the extended Ransom solutions are also presented.
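
    The mesh convergence check described here boils down to estimating the observed order of accuracy from errors on successively refined meshes. A small sketch of that arithmetic (the error values are hypothetical):

    ```python
    import numpy as np

    def observed_order(err_coarse, err_fine, refinement=2.0):
        """Observed order of accuracy from errors on two meshes:
        p = log(e_coarse / e_fine) / log(r)."""
        return np.log(err_coarse / err_fine) / np.log(refinement)

    # Hypothetical L2 errors vs. the analytical solution on successively halved meshes
    errors = [4.0e-3, 1.1e-3, 2.9e-4]
    for e_coarse, e_fine in zip(errors, errors[1:]):
        print(f"observed order ~ {observed_order(e_coarse, e_fine):.2f}")  # ~2 expected
    ```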

  3. Electronic aroma detection technology for forensic and law enforcement applications

    NASA Astrophysics Data System (ADS)

    Barshick, Stacy-Ann; Griest, Wayne H.; Vass, Arpad A.

    1997-02-01

    A major problem hindering criminal investigations is the lack of appropriate tools for proper crime scene investigations. Often locating important pieces of evidence means relying on the ability of trained detection canines. Development of analytical technology to uncover and analyze evidence, potentially at the scene, could serve to expedite criminal investigations, searches, and court proceedings. To address this problem, a new technology based on gas sensor arrays was investigated for its applicability to forensic and law enforcement problems. The technology employs an array of sensors that respond to volatile chemical components, yielding a characteristic 'fingerprint' pattern representative of the vapor-phase composition of a sample. Sample aromas can be analyzed and identified using artificial neural networks that are trained on known aroma patterns. Several candidate applications based on known technological needs of the forensic and law enforcement communities have been investigated. These applications have included the detection of aromas emanating from cadavers to aid in determining time since death, drug detection for deterring the manufacture, sale, and use of drugs of abuse, and the analysis of fire debris for accelerant identification. The results to date for these applications have been extremely promising and demonstrate the potential applicability of this technology for forensic use.

  4. Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction

    NASA Technical Reports Server (NTRS)

    Lee, Seongkyu; Brentner, Kenneth S.; Farassat, F.; Morris, Philip J.

    2008-01-01

    Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. The pressure gradient is needed to impose the boundary condition in acoustic scattering problems and is therefore a key ingredient in their solution. The first formulation is derived from the gradient of the Ffowcs Williams-Hawkings (FW-H) equation. This formulation has a form involving the observer time differentiation outside the integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. This formulation avoids the numerical time differentiation with respect to the observer time, which is computationally more efficient. The acoustic pressure gradient predicted by these new formulations is validated through comparison with available exact solutions for stationary and moving monopole sources. The agreement between the predictions and exact solutions is excellent. The formulations are applied to the rotor noise problem for two model rotors. A purely numerical approach is compared with the analytical formulations. The agreement between the analytical formulations and the numerical method is excellent for both stationary and moving observer cases.

  5. DROMO formulation for planar motions: solution to the Tsien problem

    NASA Astrophysics Data System (ADS)

    Urrutxua, Hodei; Morante, David; Sanjurjo-Rivo, Manuel; Peláez, Jesús

    2015-06-01

    The two-body problem subject to a constant radial thrust is analyzed as a planar motion. The description of the problem is performed in terms of three perturbation methods: DROMO and two others due to Deprit. All of them rely on Hansen's ideal frame concept. An explicit, analytic, closed-form solution is obtained for this problem when the initial orbit is circular (Tsien problem), based on the DROMO special perturbation method, and expressed in terms of elliptic integral functions. The analytical solution to the Tsien problem is later used as a reference to test the numerical performance of various orbit propagation methods, including DROMO and Deprit methods, as well as Cowell and Kustaanheimo-Stiefel methods.

  6. Exact solution for an optimal impermeable parachute problem

    NASA Astrophysics Data System (ADS)

    Lupu, Mircea; Scheiber, Ernest

    2002-10-01

    In the paper, direct and inverse boundary problems are solved and analytical solutions are obtained for optimization problems in the case of some nonlinear integral operators. The model is the plane potential flow of an inviscid, incompressible, unbounded fluid jet which encounters a symmetrical, curvilinear obstacle: the deflector of maximal drag. Singular integral equations are derived for the direct and inverse problems, and the motion in the auxiliary canonical half-plane is obtained. Next, the optimization problem is solved analytically. The design of the optimal airfoil is performed and, finally, numerical computations concerning the drag coefficient and other geometrical and aerodynamical parameters are carried out. This model corresponds to the Helmholtz impermeable parachute problem.

  7. Analytical stability criteria for the Caledonian Symmetric Four and Five Body Problems

    NASA Astrophysics Data System (ADS)

    Steves, Bonnie; Shoaib Afridi, Mohammad; Sweatman, Winston

    2017-06-01

    Analytical studies of the stability of three or more body gravitational systems are difficult because of the greater number of variables involved as the number of bodies increases and the limitation of the 10 known integrals of the gravitational n-body problem. Utilisation of symmetries, or neglecting the masses of some of the bodies compared to others, can simplify the dynamical problem and enable global analytical stability solutions to be derived. These symmetric and restricted few-body systems, with their analytical stability criteria, can then provide useful information on the stability of the general few-body system when it is near the symmetric or restricted situation. Even with symmetrical reductions, analytical stability derivations for four- and five-body problems are rare. In this paper, we develop an analytical stability criterion for the Caledonian Symmetric Five Body Problem (CS5BP), a dynamically symmetrical planar problem with two pairs of equal masses and a fifth mass located at the centre of mass. Sundman's inequality is applied to derive boundary surfaces for the allowed real motion of the system. This enables the derivation of a stability criterion, valid for all time, for the hierarchical stability of the CS5BP and its subset the Caledonian Symmetric Four Body Problem (CSFBP), where the central mass is taken to be zero. We show that the hierarchical stability depends solely on the Szebehely constant C0, which is a function of the total energy H and angular momentum c. The critical value Ccrit at which the system becomes hierarchically stable for all time depends only on the two mass ratios of the symmetric five-body system. We then explore the effect on the stability of the whole system of adding an increasingly massive central body. It is shown both analytically and numerically that all CS5BPs and CSFBPs of different mass ratios are hierarchically stable if C0 > 0.0659 and C0 > 0.0465, respectively. The Caledonian Symmetric Four and Five Body gravitational models are relevant to the study of the stability and evolution of symmetric quadruple/quintuple stellar clusters and symmetric exoplanetary systems of two planets orbiting a binary/triplet of stars.

  8. Nanophotonic particle simulation and inverse design using artificial neural networks

    PubMed Central

    Peurifoy, John; Shen, Yichen; Jing, Li; Cano-Renteria, Fidel; DeLacy, Brendan G.; Joannopoulos, John D.; Tegmark, Max

    2018-01-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical. PMID:29868640
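
    The two-stage pattern, train a surrogate network on the forward simulation and then optimize the input by backpropagation, can be sketched in a few lines. The forward model below is a toy stand-in, not a multilayer-nanoparticle scattering solver:

    ```python
    import torch

    torch.manual_seed(0)
    forward = lambda x: torch.sin(3 * x).sum(dim=-1, keepdim=True)  # toy "simulator"
    X = torch.rand(2000, 4)
    net = torch.nn.Sequential(torch.nn.Linear(4, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(2000):                          # fit the forward surrogate
        opt.zero_grad()
        loss = ((net(X) - forward(X)) ** 2).mean()
        loss.backward()
        opt.step()

    # Inverse design: freeze the net, optimize the *input* with analytical gradients
    for p in net.parameters():
        p.requires_grad_(False)
    x = torch.rand(1, 4, requires_grad=True)
    opt_x = torch.optim.Adam([x], lr=1e-2)
    target = torch.tensor([[2.5]])                 # desired response, hypothetical
    for _ in range(500):
        opt_x.zero_grad()
        ((net(x) - target) ** 2).backward()        # gradient w.r.t. x via backprop
        opt_x.step()
    ```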

  9. Report on the U.S. Geological Survey's evaluation program for standard reference samples distributed in October 1994 : T-131 (trace constituents), T-133 (trace constituents), M-132 (major constituents), N-43 (nutrients), N-44 (nutrients), P-23 (low ionic strength) and Hg-19 (mercury)

    USGS Publications Warehouse

    Long, H. Keith; Farrar, Jerry W.

    1995-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for 7 standard reference samples--T-131 (trace constituents), T-133 (trace constituents), M-132 (major constituents), N-43 (nutrients), N-44 (nutrients), P-23 (low ionic strength), and Hg-19 (mercury). The samples were distributed in October 1994 to 131 laboratories registered in the U.S. Geological Survey-sponsored interlaboratory testing program. Analytical data received from 121 of the laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the seven reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the seven standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.

  10. An analytically iterative method for solving problems of cosmic-ray modulation

    NASA Astrophysics Data System (ADS)

    Kolesnyk, Yuriy L.; Bobik, Pavol; Shakhov, Boris A.; Putis, Marian

    2017-09-01

    The development of an analytically iterative method for solving steady-state as well as unsteady-state problems of cosmic-ray (CR) modulation is proposed. Iterations for obtaining the solutions are constructed for the spherically symmetric form of the CR propagation equation. The main solution of the considered problem consists of the zero-order solution, obtained during the initial iteration, and amendments that may be obtained by subsequent iterations. Finding the zero-order solution relies on the isotropy of CRs during propagation in space, whereas the anisotropy is taken into account when finding the subsequent amendments. To begin with, the method is applied to solve the problem of CR modulation where the diffusion coefficient κ and the solar wind speed u are constants, with a local interstellar spectrum (LIS). The solution obtained with two iterations was compared with an analytical solution and with numerical solutions. Finally, solutions with only one iteration for two problems of CR modulation with u = constant and the same form of LIS were obtained and tested against numerical solutions. For the first problem, κ is proportional to the momentum of the particle p, so it has the form κ = k0 η, where η = p/(m0 c). For the second problem, the diffusion coefficient is given in the form κ = k0 β η, where β = v/c is the particle speed relative to the speed of light. The obtained solutions matched well with the numerical solutions, as well as with the analytical solution for the problem where κ = constant.

  11. Unpacking students' atomistic understanding of stoichiometry

    NASA Astrophysics Data System (ADS)

    Baluyut, John Ysrael

    Despite the use by instructors of particulate nature of matter (PNOM) diagrams in the general chemistry classroom, misconceptions about stoichiometry continue to prevail among students tasked with conceptual problems on the concepts of limiting and excess reagents and reaction yields. This dissertation set out to explore students' understanding of stoichiometry at the microscopic level as they solved problems that used PNOM diagrams. In particular, the study investigated how students coordinated symbolic and microscopic representations to demonstrate their knowledge of stoichiometric concepts, quantified the prevalence and explained the nature of stoichiometric misconceptions in terms of dual processing and dual coding theories, and used eye tracking to identify visual behaviors that accompanied the cognitive processes students used to solve conceptual stoichiometry problems with PNOM diagrams. Interviews with students asked to draw diagrams for specific stoichiometric situations showed dual processing systems were in play. Many students were found to have used these processing systems in a heuristic-analytic sequence. Heuristics, such as the factor-label method and the least-amount misconception, were often used by students to select information for further processing in an attempt to reduce the cognitive load of the subsequent analytic stage of the solution process. Diagrams drawn by students were then used to develop an instrument administered to a much larger sample of the general chemistry student population. The robustness of the dual processing theory was manifested by response patterns observed with large proportions of the student samples. These response patterns suggest that many students relied on heuristics to respond to a specific item for one of two diagrams given for the same chemical context, and then used a more analytic approach in dealing with the same item for the other diagram. It was also found that many students incorrectly treated items dealing with the same chemical context independently of each other instead of using a more integrative approach. A comparison of the visual behaviors of high-performing subjects with those of low performers revealed that high performers relied heavily on the given diagrams to obtain information. They were found to have spent more time fixating on diagrams, looked between the chemical equation and the diagram for each problem more often, and used their episodic memory more heavily to collect information early on than low performers did. Retrospective think-alouds used with eye tracking also revealed specific strategies, such as counting and balancing of atoms and molecules across both sides of a diagram, as well as comparing ratios between atoms and molecules in a diagram with those given in a balanced equation, used by students to analyze PNOM diagrams.

  12. Adjustment of pesticide concentrations for temporal changes in analytical recovery, 1992–2010

    USGS Publications Warehouse

    Martin, Jeffrey D.; Eberle, Michael

    2011-01-01

    Recovery is the proportion of a target analyte that is quantified by an analytical method and is a primary indicator of the analytical bias of a measurement. Recovery is measured by analysis of quality-control (QC) water samples that have known amounts of target analytes added ("spiked" QC samples). For pesticides, recovery is the measured amount of pesticide in the spiked QC sample expressed as a percentage of the amount spiked, ideally 100 percent. Temporal changes in recovery have the potential to adversely affect time-trend analysis of pesticide concentrations by introducing trends in apparent environmental concentrations that are caused by trends in performance of the analytical method rather than by trends in pesticide use or other environmental conditions. This report presents data and models related to the recovery of 44 pesticides and 8 pesticide degradates (hereafter referred to as "pesticides") that were selected for a national analysis of time trends in pesticide concentrations in streams. Water samples were analyzed for these pesticides from 1992 through 2010 by gas chromatography/mass spectrometry. Recovery was measured by analysis of pesticide-spiked QC water samples. Models of recovery, based on robust, locally weighted scatterplot smooths (lowess smooths) of matrix spikes, were developed separately for groundwater and stream-water samples. The models of recovery can be used to adjust concentrations of pesticides measured in groundwater or stream-water samples to 100 percent recovery to compensate for temporal changes in the performance (bias) of the analytical method.
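
    As a minimal sketch of the adjustment described above (not the USGS implementation; the data and all parameter choices below are synthetic assumptions), recovery measured in spiked QC samples can be smoothed over time with a robust lowess fit, and an environmental concentration is then divided by the modeled recovery at its collection date:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Synthetic matrix-spike QC data: collection day and percent recovery.
rng = np.random.default_rng(1)
spike_day = np.sort(rng.uniform(0, 18 * 365, 400))          # 1992-2010
recovery = 95 - 0.004 * spike_day + rng.normal(0, 8, 400)   # slow drift

# Robust, locally weighted scatterplot smooth of recovery versus time.
smooth = lowess(recovery, spike_day, frac=0.5, it=3, return_sorted=True)

def modeled_recovery(day):
    """Interpolate the lowess model at a sample-collection day."""
    return np.interp(day, smooth[:, 0], smooth[:, 1])

def adjust_to_full_recovery(conc, day):
    """Adjust a measured concentration to 100 percent recovery."""
    return conc * 100.0 / modeled_recovery(day)

print(adjust_to_full_recovery(0.050, day=5000))  # e.g. ug/L, day 5000
```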

  13. Numerical and analytical approaches to an advection-diffusion problem at small Reynolds number and large Péclet number

    NASA Astrophysics Data System (ADS)

    Fuller, Nathaniel J.; Licata, Nicholas A.

    2018-05-01

    Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.
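
    A back-of-the-envelope illustration of the regime in the title; the cell parameters are hypothetical (roughly a large swimming ciliate), and the Pe^(-1/3) boundary-layer scaling in the comments is the classical high-Péclet result for a sphere in Stokes flow, assumed rather than derived here:

```python
# Dimensionless groups for a spherical swimmer; values are hypothetical.
a = 100e-6      # cell radius, m
U = 1e-3        # swimming speed, m/s
nu = 1e-6       # kinematic viscosity of water, m^2/s
D = 1e-9        # nutrient diffusivity, m^2/s

Re = U * a / nu             # ~0.1: small, so Stokes flow is a fair model
Pe = U * a / D              # ~100: advection dominates diffusion
delta = a * Pe ** (-1 / 3)  # concentration boundary-layer thickness scale

print(f"Re = {Re:.2g}, Pe = {Pe:.2g}, delta ~ {delta:.2e} m")
```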

  14. STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor, and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL95%) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
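
    A minimal sketch of the UCL95% computation described above, assuming the standard one-sided Student's-t upper confidence limit on the mean; the six concentrations are invented, not Tank 18F data:

```python
import numpy as np
from scipy import stats

# Hypothetical analyte concentrations, one average per scrape sample.
conc = np.array([1.10, 0.95, 1.22, 1.05, 0.98, 1.15])   # e.g. mg/g

n = conc.size
mean, sd = conc.mean(), conc.std(ddof=1)
# One-sided upper 95% confidence limit on the mean concentration.
ucl95 = mean + stats.t.ppf(0.95, df=n - 1) * sd / np.sqrt(n)
print(f"mean = {mean:.3f}, UCL95 = {ucl95:.3f}")
```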

  15. STATISTICAL ANALYSIS OF TANK 19F FLOOR SAMPLE RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor, and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL95%) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).

  16. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--QA ANALYTICAL RESULTS FOR METALS IN REPLICATE SAMPLES

    EPA Science Inventory

    The Metals in Replicate Samples data set contains the analytical results of measurements of up to 2 metals in 172 replicate (duplicate) samples from 86 households. Measurements were made in samples of blood. Duplicate samples for a small percentage of the total number of sample...

  17. Results of the U.S. Geological Survey's Analytical Evaluation Program for standard reference samples: T-155 (trace constituents), M-148 (major constituents), N-59 (nutrient constituents), N-60 (nutrient constituents), P-31 (low ionic strength constituents), GWT-4 (ground-water trace constituents) and Hg-27 (Mercury) distributed in September 1998

    USGS Publications Warehouse

    Farrar, Jerry W.

    1999-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for seven standard reference samples -- T-155 (trace constituents), M-148 (major constituents), N-59 (nutrient constituents), N-60 (nutrient constituents), P-31 (low ionic strength constituents), GWT-4 (ground-water trace constituents), and Hg-27 (mercury) -- which were distributed in September 1998 to 162 laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data that were received from 136 of the laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the seven reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the seven standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
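
    The "most probable value ... using nonparametric statistics" is typically a robust location estimate. The sketch below assumes the common median with F-pseudosigma (IQR/1.349) convention, which may differ in detail from the USGS procedure; the interlaboratory values are invented:

```python
import numpy as np

lab_results = np.array([4.9, 5.1, 5.0, 5.3, 4.8, 5.2, 7.9, 5.0])  # mg/L

mpv = np.median(lab_results)                 # most probable value
q1, q3 = np.percentile(lab_results, [25, 75])
f_pseudosigma = (q3 - q1) / 1.349            # robust spread estimate
z_scores = (lab_results - mpv) / f_pseudosigma   # flags the 7.9 outlier
print(mpv, round(f_pseudosigma, 3), z_scores.round(1))
```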

  18. Effect of storage duration on cytokine stability in human serum and plasma.

    PubMed

    Vincent, Fabien B; Nim, Hieu T; Lee, Jacinta P W; Morand, Eric F; Harris, James

    2018-06-14

    Quantification of analytes such as cytokines in serum samples is intrinsic to translational research in immune diseases. Optimising pre-analytical conditions is critical for ensuring study quality, including evaluation of cytokine stability. We aimed to evaluate the effect of storage duration prior to freezing on cytokine stability in serum, and to compare serum with plasma samples obtained from patients with systemic lupus erythematosus (SLE). Protein stability was analysed by simultaneously quantifying 18 analytes using a custom multi-analyte profile in SLE patient serum and plasma samples that had been prospectively stored at 4 °C for pre-determined periods between 0 and 30 days prior to freezing. Six analytes were excluded from analysis because most tested samples were above or below the limit of detection. Amongst the 12 analysed proteins, 11 did not show significant signal degradation. Significant signal degradation was observed from the fourth day of storage for a single analyte, CCL19. Protein levels were more stable in unseparated serum than in plasma for most analytes, with the exception of IL-37, which appears slightly more stable in plasma. Based on this, a maximum of 3 days of storage at 4 °C is recommended for unseparated serum samples biobanked for cytokine analysis in studies of human immune disease. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Liquid Metering Centrifuge Sticks (LMCS): A Centrifugal Approach to Metering Known Sample Volumes for Colorimetric Solid Phase Extraction (C-SPE)

    NASA Technical Reports Server (NTRS)

    Gazda, Daniel B.; Schultz, John R.; Clarke, Mark S.

    2007-01-01

    Phase separation is one of the most significant obstacles encountered during the development of analytical methods for water quality monitoring in spacecraft environments. Removing air bubbles from water samples prior to analysis is a routine task on Earth; however, in the absence of gravity, this routine task becomes extremely difficult. This paper details the development and initial ground testing of liquid metering centrifuge sticks (LMCS), devices designed to collect and meter a known volume of bubble-free water in microgravity. The LMCS uses centrifugal force to eliminate entrapped air and reproducibly meter liquid sample volumes for analysis with Colorimetric Solid Phase Extraction (C-SPE). C-SPE is a sorption-spectrophotometric platform that is being developed as a potential spacecraft water quality monitoring system. C-SPE utilizes solid phase extraction membranes impregnated with analyte-specific colorimetric reagents to concentrate and complex target analytes in spacecraft water samples. The mass of analyte extracted from the water sample is determined using diffuse reflectance (DR) data collected from the membrane surface and an analyte-specific calibration curve. The analyte concentration can then be calculated from the mass of extracted analyte and the volume of the sample analyzed. Previous flight experiments conducted in microgravity conditions aboard the NASA KC-135 aircraft demonstrated that the inability to collect and meter a known volume of water using a syringe was a limiting factor in the accuracy of C-SPE measurements. Herein, results obtained from ground-based C-SPE experiments using ionic silver as a test analyte and either the LMCS or syringes for sample metering are compared to evaluate the performance of the LMCS. These results indicate very good agreement between the two sample metering methods and clearly illustrate the potential of utilizing centrifugal forces to achieve phase separation and metering of water samples in microgravity.
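
    The quantitation chain described above (DR signal to extracted mass via a calibration curve, then mass over metered volume) reduces to a few lines; the calibration coefficients and volume below are hypothetical:

```python
# Hypothetical linear calibration: DR signal -> extracted analyte mass.
slope, intercept = 0.85, 0.02          # ug of analyte per unit DR signal

def analyte_concentration(dr_signal, metered_volume_ml):
    """Concentration in ug/L from DR data and the metered volume."""
    mass_ug = slope * dr_signal + intercept
    return mass_ug / (metered_volume_ml / 1000.0)

print(analyte_concentration(dr_signal=0.40, metered_volume_ml=1.0))
```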

  20. Impulsive-Analytic Disposition in Mathematical Problem Solving: A Survey and a Mathematics Test

    ERIC Educational Resources Information Center

    Lim, Kien H.; Wagler, Amy

    2012-01-01

    The Likelihood-to-Act (LtA) survey and a mathematics test were used in this study to assess students' impulsive-analytic disposition in the context of mathematical problem solving. The results obtained from these two instruments were compared to those obtained using two widely-used scales: Need for Cognition (NFC) and Barratt Impulsivity Scale…

  1. Student Learning and Evaluation in Analytical Chemistry Using a Problem-Oriented Approach and Portfolio Assessment

    ERIC Educational Resources Information Center

    Boyce, Mary C.; Singh, Kuki

    2008-01-01

    This paper describes a student-focused activity that promotes effective learning in analytical chemistry. Providing an environment where students were responsible for their own learning allowed them to participate at all levels from designing the problem to be addressed, planning the laboratory work to support their learning, to providing evidence…

  2. Strategies to avoid false negative findings in residue analysis using liquid chromatography coupled to time-of-flight mass spectrometry.

    PubMed

    Kaufmann, Anton; Butcher, Patrick

    2006-01-01

    Liquid chromatography coupled to orthogonal acceleration time-of-flight mass spectrometry (LC/TOF) provides an attractive alternative to liquid chromatography coupled to triple quadrupole mass spectrometry (LC/MS/MS) in the field of multiresidue analysis. The sensitivity and selectivity of LC/TOF approach those of LC/MS/MS. TOF provides accurate mass information and a significantly higher mass resolution than quadrupole analyzers. The available mass resolution of commercial TOF instruments, ranging from 10 000 to 18 000 full width at half maximum (FWHM), is not, however, sufficient to completely exclude the problem of isobaric interferences (co-elution of analyte ions with matrix compounds of very similar mass). Due to the required data storage capacity, TOF raw data are commonly centroided before being electronically stored. However, centroiding can lead to a loss of data quality. The co-elution of a low-intensity analyte peak with an isobaric, high-intensity matrix compound can cause problems. Some centroiding algorithms might not be capable of deconvoluting such partially merged signals, leading to incorrect centroids. Co-elution of isobaric compounds has been deliberately simulated by injecting diluted binary mixtures of isobaric model substances at various relative intensities. Depending on the mass differences between the two isobaric compounds and the resolution provided by the TOF instrument, significant deviations in exact mass measurements and signal intensities were observed. The extraction of a reconstructed ion chromatogram based on very narrow mass windows can even result in the complete loss of the analyte signal. Guidelines have been proposed to avoid such problems. The use of sub-2 μm HPLC packing materials is recommended to improve chromatographic resolution and to reduce the risk of co-elution. The width of the extraction mass windows for reconstructed ion chromatograms should be defined according to the resolution of the TOF instrument. Alternative approaches include the spiking of the sample with appropriate analyte concentrations. Furthermore, enhanced software, capable of deconvoluting partially merged mass peaks, may become available. Copyright (c) 2006 John Wiley & Sons, Ltd.
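
    A sketch of the mass-window guideline above: tie the width of the extracted-ion window to the instrument's resolving power (FWHM definition) instead of using a fixed narrow window. The multiplier k is an assumption:

```python
def mass_window(mz, resolution, k=1.0):
    """Return (low, high) m/z bounds; peak FWHM = mz / resolution."""
    fwhm = mz / resolution
    return mz - k * fwhm, mz + k * fwhm

# A 12 000 FWHM instrument at m/z 300: window of about +/- 25 mDa.
print(mass_window(300.0, 12_000))
```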

  3. Similarity solution of the Boussinesq equation

    NASA Astrophysics Data System (ADS)

    Lockington, D. A.; Parlange, J.-Y.; Parlange, M. B.; Selker, J.

    Similarity transforms of the Boussinesq equation in a semi-infinite medium are available when the boundary conditions are a power of time. The Boussinesq equation is reduced from a partial differential equation to a boundary-value problem. Chen et al. [Transp Porous Media 1995;18:15-36] use a hodograph method to derive an integral-equation formulation of the new differential equation, which they solve by numerical iteration. In the present paper, the convergence of their scheme is improved such that numerical iteration can be avoided for all practical purposes. However, a simpler analytical approach is also presented, based on Shampine's transformation of the boundary-value problem to an initial-value problem. This analytical approximation is remarkably simple and yet more accurate than the analytical hodograph approximations.
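
    The Boussinesq similarity ODE itself is not reproduced in the abstract, so the scaling trick behind Shampine-style BVP-to-IVP transformations is illustrated here on a different classic problem where it is well documented, the Blasius boundary layer: solve one initial-value problem with a guessed wall curvature, then rescale using the equation's invariance, with no shooting iteration:

```python
import numpy as np
from scipy.integrate import solve_ivp

def blasius(eta, y):
    """Blasius equation f''' + 0.5 f f'' = 0 as a first-order system."""
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

# One IVP with the guessed curvature f''(0) = 1 (instead of shooting).
sol = solve_ivp(blasius, [0, 10], [0.0, 0.0, 1.0], rtol=1e-9, atol=1e-9)
sigma = sol.y[1, -1]       # far-field slope of the trial solution

# Invariance f(eta) -> lam * f(lam * eta) rescales the slope by lam^2,
# so lam = sigma**-0.5 enforces f'(inf) = 1 and gives the true curvature.
print(sigma ** -1.5)       # ~0.332057, the known Blasius wall value
```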

  4. NHEXAS PHASE I ARIZONA STUDY--QA ANALYTICAL RESULTS FOR METALS IN SPIKE SAMPLES

    EPA Science Inventory

    The Metals in Spike Samples data set contains the analytical results of measurements of up to 11 metals in 38 control samples (spikes) from 18 households. Measurements were made in spiked samples of dust, food, beverages, blood, urine, and dermal wipe residue. Spiked samples we...

  5. NHEXAS PHASE I ARIZONA STUDY--QA ANALYTICAL RESULTS FOR METALS IN REPLICATE SAMPLES

    EPA Science Inventory

    The Metals in Replicate Samples data set contains the analytical results of measurements of up to 27 metals in 133 replicate (duplicate) samples from 62 households. Measurements were made in samples of soil, blood, tap water, and drinking water. Duplicate samples for a small pe...

  6. NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR VOCS IN REPLICATES

    EPA Science Inventory

    This data set includes analytical results for measurements of VOCs in 204 duplicate (replicate) samples. Measurements were made for up to 23 VOCs in samples of air, water, and blood. Duplicate samples (samples collected along with or next to the original samples) were collected t...

  7. NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR METALS IN REPLICATES

    EPA Science Inventory

    This data set includes analytical results for measurements of metals in 490 duplicate (replicate) samples and for particles in 130 duplicate samples. Measurements were made for up to 11 metals in samples of air, dust, water, blood, and urine. Duplicate samples (samples collected ...

  8. NHEXAS PHASE I ARIZONA STUDY--QA ANALYTICAL RESULTS FOR PESTICIDE METABOLITES IN SPIKE SAMPLES

    EPA Science Inventory

    The Pesticide Metabolites in Spike Samples data set contains the analytical results of measurements of up to 4 pesticide metabolites in 3 control samples (spikes) from 3 households. Measurements were made in spiked samples of urine. Spiked samples were used to assess recovery o...

  9. Improving the analyte ion signal in matrix-assisted laser desorption/ionization imaging mass spectrometry via electrospray deposition by enhancing incorporation of the analyte in the matrix.

    PubMed

    Malys, Brian J; Owens, Kevin G

    2017-05-15

    Matrix-assisted laser desorption/ionization (MALDI) is widely used as the ionization method in high-resolution chemical imaging studies that seek to visualize the distribution of analytes within sectioned biological tissues. This work extends the use of electrospray deposition (ESD) to apply matrix with an additional solvent spray to incorporate and homogenize analyte within the matrix overlayer. Analytes and matrix are sequentially and independently applied by ESD to create a sample from which spectra are collected, mimicking a MALDI imaging mass spectrometry (IMS) experiment. Subsequently, an incorporation spray consisting of methanol is applied by ESD to the sample and another set of spectra is collected. The spectra prior to and after the incorporation spray are compared to evaluate the improvement in the analyte signal. Prior to the incorporation spray, samples prepared using α-cyano-4-hydroxycinnamic acid (CHCA) and 2,5-dihydroxybenzoic acid (DHB) as the matrix showed low signal, while the sample using sinapinic acid (SA) initially exhibited good signal. Following the incorporation spray, the sample using SA did not show an increase in signal; the sample using DHB showed moderate gain factors of 2-5 (full ablation spectra) and 12-336 (raster spectra), while CHCA samples saw large increases in signal, with gain factors of 14-172 (full ablation spectra) and 148-1139 (raster spectra). The use of an incorporation spray to apply solvent by ESD to a matrix layer already deposited by ESD provides an increase in signal by both promoting incorporation of the analyte within, and homogenizing the distribution of the incorporated analyte throughout, the matrix layer. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Multiresidue analysis of acidic and polar organic contaminants in water samples by stir-bar sorptive extraction-liquid desorption-gas chromatography-mass spectrometry.

    PubMed

    Quintana, José Benito; Rodil, Rosario; Muniategui-Lorenzo, Soledad; López-Mahía, Purificación; Prada-Rodríguez, Darío

    2007-12-07

    The feasibility of stir-bar sorptive extraction (SBSE) followed by liquid desorption in combination with large volume injection (LVI)-in-port silylation and gas chromatography-mass spectrometry (GC-MS) for the simultaneous determination of a broad range of 46 acidic and polar organic pollutants in water samples has been evaluated. The target analytes included phenols (nitrophenols, chlorophenols, bromophenols and alkylphenols), acidic herbicides (phenoxy acids and dicamba) and several pharmaceuticals. Experimental variables affecting derivatisation yield and peak shape as a function of different PTV parameters [initial injection time, pressure and temperature, and the ratio of solvent volume to N-(tert-butyldimethylsilyl)-N-methyltrifluoroacetamide (MTBSTFA) volume] were first optimised by an experimental design approach. Subsequently, SBSE conditions such as pH, ionic strength, agitation speed and extraction time were investigated. After optimisation, the method failed only for the extraction of the most polar phenols and some pharmaceuticals, being suitable for the determination of 37 (out of 46) pollutants, with detection limits for these analytes ranging between 1 and 800 ng/L, and lower than 25 ng/L in most cases. Finally, the developed method was validated and applied to the determination of the target analytes in various aqueous environmental matrices, including ground, river and wastewater. Acceptable accuracy (70-130%) and precision (<20%) were obtained for most analytes independently of the matrix, with the exception of some alkylphenols, for which an isotopically labelled internal standard would be required to correct for matrix effects. Among the drawbacks of the method, carryover was identified as the main problem, even though the Twisters were cleaned repeatedly.

  11. Demonstration/Validation of the Snap Sampler Passive Ground Water Sampling Device for Sampling Inorganic Analytes at the Former Pease Air Force Base

    DTIC Science & Technology

    2009-07-01

    [Abstract not available; the record contains only fragmentary excerpts from report ERDC/CRREL TR-09-12. Recoverable content: the Snap Sampler is described as an economic alternative for sampling inorganic analytes; test solutions were covered with two layers of tightly fitting aluminum foil and stirred to dissolve the analytes.]

  12. Utility of the Rosenberg self-esteem scale.

    PubMed

    Davis, Clare; Kellett, Stephen; Beail, Nigel

    2009-05-01

    The Rosenberg Self-Esteem Scale (RSES) continues to be used to purportedly measure the self-esteem of people with intellectual disabilities, despite the lack of sound evidence concerning its validity and reliability when employed with this population. The psychometric foundations of the RSES were analyzed here with a sample of 219 participants with intellectual disabilities. The factor analytic methods employed revealed two factors (Self-Worth and Self-Criticism) and more specific problems with RSES Items 5 and 8. Overall, the scale showed only moderate temporal and internal reliability and poor criterion validity in some respects. Results are discussed with reference to either developing a new measure of self-esteem or redesigning and simplifying the RSES in order to increase its face validity in intellectual disability samples.

  13. [Analytical quality in biological monitoring of workers exposed to chemicals: experience of the Prevention and Safety at the Workplace Service in Modena].

    PubMed

    Alpaca, R I Paredes; Migliore, A; Di Rico, R; Canali, Claudia; Rota, Cristina; Trenti, T; Cariani, Elisabetta

    2010-01-01

    The quality of laboratory data is one of the main factors guaranteeing the efficacy of biological monitoring. The aim was to analyze the quality of laboratory data used for biological monitoring of exposed workers. A survey involving 18 companies employing 945 workers in the area of Modena, Italy, was carried out in 2008. Most of the 9 private laboratories receiving biological samples did not directly perform some or all of the requested laboratory assessments, but this was not indicated in the final report. Major problems were observed in the application of internal quality control, and only one laboratory participated in external quality assessment for blood lead measurements. Our results raise major concerns about the traceability and reliability of laboratory assessments performed for biomonitoring of exposed workers. Systematic evaluation of the quality of analytical data would be highly recommendable.

  14. Visualization of Microfloral Metabolism for Marine Waste Recycling

    PubMed Central

    Ogura, Tatsuki; Hoshino, Reona; Date, Yasuhiro; Kikuchi, Jun

    2016-01-01

    Marine biomass, including fishery products, is a precious protein resource for human foods and an alternative to livestock animals for reducing the virtual water problem. However, a large amount of marine waste can be generated from fishery products, and it is not currently recycled. We evaluated the metabolism of digested marine waste using integrated analytical methods, under anaerobic conditions and in the fertilization of abandoned agricultural soils. The dynamics of fish waste digestion revealed that samples of meat and bony parts behaved similarly under anaerobic conditions in spite of large chemical variations in the input marine wastes. Abandoned agricultural soils fertilized with fish waste accumulated some amino acids derived from the waste, and accumulation of L-arginine and L-glutamine was higher in plant seedlings. We therefore propose an analytical approach to visualize metabolic dynamics in fishery waste recycling processes. PMID:26828528

  15. Sampling probe for microarray read out using electrospray mass spectrometry

    DOEpatents

    Van Berkel, Gary J.

    2004-10-12

    An automated electrospray-based sampling system and method for analysis obtains samples from surface array spots containing analytes. The system includes at least one probe, the probe including an inlet for flowing at least one eluting solvent to respective ones of a plurality of spots and an outlet for directing the analyte away from the spots. An automatic positioning system is provided for translating the probe relative to the spots to permit sampling of any spot. An electrospray ion source having an input fluidically connected to the probe receives the analyte and generates ions from it. The ion source provides the generated ions to an analysis structure, preferably a mass spectrometer, to identify the analyte. The probe can be a surface contact probe, in which case the probe forms an enclosing seal along the periphery of the array spot surface.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Boparai, A.S.; Bowers, D.L.

    This report summarizes the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 2000 (October 1999 through September 2000). This annual progress report, which is the seventeenth in this series for the ACL, describes effort on continuing projects, work on new projects, and contributions of the ACL staff to various programs at ANL. The ACL operates within the ANL system as a full-cost-recovery service center, but it has a mission that includes a complementary research and development component: The Analytical Chemistry Laboratory will provide high-quality, cost-effective chemical analysis and related technical support to solve research problems of our clients--Argonne National Laboratory, the Department of Energy, and others--and will conduct world-class research and development in analytical chemistry and its applications. The ACL handles a wide range of analytical problems that reflects the diversity of research and development (R&D) work at ANL. Some routine or standard analyses are done, but the ACL operates more typically in a problem-solving mode in which development of methods is required or adaptation of techniques is needed to obtain useful analytical data. The ACL works with clients and commercial laboratories if a large number of routine analyses are required. Much of the support work done by the ACL is very similar to applied analytical chemistry research work.

  17. Report on the U.S. Geological Survey's Evaluation Program Standard Reference Samples Distributed in October 1995: T-137 (Trace Constituents), M-136 (Major Constituents), N-47 (Nutrient Constituents), N-48 (Nutrient Constituents), P-25 (Low Ionic Strength Constituents), and Hg-21 (Mercury)

    USGS Publications Warehouse

    Farrar, Jerry W.; Long, H. Keith

    1996-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for 6 standard reference samples--T-137 (trace constituents), M-136 (major constituents), N-47 (nutrient constituents), N-48 (nutrient constituents), P-25 (low ionic strength constituents), and Hg-21 (mercury)--that were distributed in October 1995 to 149 laboratories registered in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data that were received from 136 of the laboratories were evaluated with respect to: overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.

  18. Report on the U.S. Geological Survey's evaluation program for standard reference samples distributed in April 1994; T-129 (trace constituents), M-130 (major constituents), N-42 (nutrients), P-22 (low ionic strength), and Hg-18 (mercury)

    USGS Publications Warehouse

    Long, H. Keith; Farrar, Jerry W.

    1994-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for five standard reference samples--T-129 (trace constituents), M-130 (major constituents), N-42 (nutrients), P-22 (low ionic strength), Hg-18(mercury),--that were distributed in April 1994 to 157 laboratories registered in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data that were received from 133 of the laboratories were evaluated with respect to: overall laboratory performance and relative laboratory performance for each analyte in the five reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the five standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.

  19. Report of the U.S. Geological Survey's evaluation program for standard reference samples distributed in April 1993; T-123 (trace constituents), T-125 (trace constituents), M-126 (major constituents), N-38 (nutrients), N-39 (nutrients), P-20 (low ionic strength), and Hg-16 (mercury)

    USGS Publications Warehouse

    Long, H.K.; Farrar, J.W.

    1993-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for seven standard reference samples--T-123 (trace constituents), T-125 (trace constituents), M-126 (major constituents), N-38 (nutrients), N-39 (Nutrients), P-20 (precipitation-low ionic strength), and Hg-16 (mercury)--that were distributed in April 1993 to 175 laboratories registered in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 131 of the laboratories were evaluated with respect to: overall laboratory performance and relative laboratory performance for each analyte in the 7 reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the seven standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.

  20. A comparative study of neutron activation analysis and proton-induced X-ray emission analysis for the determination of heavy metals in estuarine sediments

    NASA Astrophysics Data System (ADS)

    Randle, K.; Al-Jundi, J.; Mamas, C. J. V.; Sokhi, R. S.; Earwaker, L. G.

    1993-06-01

    Our work on heavy metals in the estuarine environment has involved the use of two multielement techniques: neutron activation analysis (NAA) and proton-induced X-ray emission (PIXE) analysis. As PIXE is essentially a surface analytical technique problems may arise due to sample inhomogeneity and surface roughness. In order to assess the contribution of these effects we have compared the results from PIXE analysis with those from a technique which analyzes a larger bulk sample rather than just the surface. An obvious method was NAA. A series of sediment samples containing particles of variable diameter were compared. Pellets containing a few mg of sediment were prepared from each sample and analyzed by the PIXE technique using both an absolute and a comparitive method. For INAA the rest of the sample was then irradiated with thermal neutrons and element concentrations determined from analyses of the subsequent gamma-ray spectrum. Results from the two methods are discussed.

  1. Consensus Classification Using Non-Optimized Classifiers.

    PubMed

    Brownfield, Brett; Lemos, Tony; Kalivas, John H

    2018-04-03

    Classifying samples into categories is a common problem in analytical chemistry and other fields. Classification is usually based on only one method, but numerous classifiers are available, some complex, such as neural networks, and others simple, such as k nearest neighbors. Regardless, most classification schemes require optimization of one or more tuning parameters for best classification accuracy, sensitivity, and specificity. A process not requiring exact selection of tuning parameter values would be useful. To improve classification, several ensemble approaches have been used in past work to combine classification results from multiple optimized single classifiers. The collection of classifications for a particular sample is then combined by a fusion process such as majority vote to form the final classification. Presented in this Article is a method to classify a sample by combining multiple classification methods without specifically classifying the sample by each method; that is, the classification methods are not optimized. The approach is demonstrated on three analytical data sets. The first is a beer authentication set with samples measured on five instruments, allowing fusion of multiple instruments in three ways. The second data set is composed of textile samples from three classes based on Raman spectra. This data set is used to demonstrate the ability to classify simultaneously with different data preprocessing strategies, thereby reducing the need to determine the ideal preprocessing method, a common prerequisite for accurate classification. The third data set contains three wine cultivars for three classes measured on 13 unique chemical and physical variables. In all cases, fusion of nonoptimized classifiers improves classification. Also presented are atypical uses of Procrustes analysis and extended inverted signal correction (EISC) for distinguishing sample similarities to respective classes.
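
    A toy version of the fusion idea, not the authors' algorithm (which avoids classifying with each method individually): sweep a tuning parameter rather than optimizing it and let every resulting classifier vote, using the same three-cultivar, 13-variable wine data named above (the copy bundled with scikit-learn):

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)        # 3 cultivars, 13 variables
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Every value of the kNN tuning parameter gets a vote; none is selected.
votes = np.array([
    make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    .fit(Xtr, ytr).predict(Xte)
    for k in range(1, 21)
])

fused = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("fused accuracy:", (fused == yte).mean())
```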

  2. Residues of 2-hydroxy-3-phenylpyrazine, a degradation product of some β-lactam antibiotics, in environmental water in Vietnam.

    PubMed

    Sy, Nguyen Van; Harada, Kazuo; Asayama, Megumi; Warisaya, Minae; Dung, Le Hong; Sumimura, Yoshinori; Diep, Khong Thi; Ha, Le Viet; Thang, Nguyen Nam; Hoa, Tran Thi Tuyet; Phu, Tran Minh; Khai, Pham Ngoc; Phuong, Nguyen Thanh; Tuyen, Le Danh; Yamamoto, Yoshimasa; Hirata, Kazumasa

    2017-04-01

    Antibiotic-resistant bacteria have become a serious problem worldwide, caused in part by the excessive use and discharge of antibiotics into the environment. Ampicillin (ABPC) is a widely used antibiotic. However, this chemical rapidly decomposes in water containing divalent cations such as Ca2+ and Mg2+; thus, detection of ABPC in environmental water is difficult. This study was carried out to evaluate the presence of 2-hydroxy-3-phenylpyrazine (HPP), one of the degradation products of ABPC and of β-lactam antibiotics with an ABPC substructure, in environmental water. An analytical method for HPP monitoring in environmental water was developed using liquid chromatography/tandem mass spectrometry. The analyte was extracted from water samples and enriched using a solid-phase extraction cartridge. The quantification limit was 1 ng L⁻¹. The HPP recovery rates from water samples spiked at 25 and 125 ng L⁻¹ were 84.1 and 86.1%, respectively. The method was then used to determine HPP residue levels in 98 environmental water samples from rivers, household ponds, and aquacultural ponds in Vietnam. HPP residues were detected in 60 samples. The HPP detection rates in rivers and household ponds were 42 and 79%, respectively. HPP was not detected in aquacultural ponds. HPP residue concentrations in the samples ranged from 1.3 to 413.3 ng L⁻¹. The residue levels in rivers flowing through city centres were higher than those at other sampling locations. The findings of this study suggest that HPP is a promising marker for assessing the discharge of ABPC and β-lactam antibiotics with an ABPC substructure into the environment around sampling sites. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. GMOtrack: generator of cost-effective GMO testing strategies.

    PubMed

    Novak, Petra Krau; Gruden, Kristina; Morisset, Dany; Lavrac, Nada; Stebih, Dejan; Rotter, Ana; Zel, Jana

    2009-01-01

    Commercialization of numerous genetically modified organisms (GMOs) has already been approved worldwide, and several additional GMOs are in the approval process. Many countries have adopted legislation to deal with GMO-related issues such as food safety, environmental concerns, and consumers' right of choice, making GMO traceability a necessity. The growing extent of GMO testing makes it important to study optimal GMO detection and identification strategies. This paper formally defines the problem of routine laboratory-level GMO tracking as a cost optimization problem, thus proposing a shift from "the same strategy for all samples" to "sample-centered GMO testing strategies." An algorithm (GMOtrack) for finding optimal two-phase (screening-identification) testing strategies is proposed. The advantages of cost optimization with increasing GMO presence on the market are demonstrated, showing that optimization approaches to analytic GMO traceability can result in major cost reductions. The optimal testing strategies are laboratory-dependent, as the costs depend on prior probabilities of local GMO presence, which are exemplified on food and feed samples. The proposed GMOtrack approach, publicly available under the terms of the General Public License, can be extended to other domains where complex testing is involved, such as safety and quality assurance in the food supply chain.
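
    A stripped-down illustration of the two-phase cost trade-off (all assay names, costs, and priors below are invented, and GMOtrack's actual model is richer): enumerate candidate screening subsets and keep the one with the lowest expected per-sample cost:

```python
from itertools import combinations

screen_cost = {"35S": 10.0, "NOS": 10.0, "FMV": 12.0}  # per-sample costs
ident_cost = 40.0                                      # identification assay
# Prior probability of each GMO event and the screens that detect it.
events = {
    "event_A": (0.30, {"35S", "NOS"}),
    "event_B": (0.10, {"35S"}),
    "event_C": (0.02, {"FMV"}),
}

def expected_cost(screens):
    """Expected cost: screening plus expected follow-up identification."""
    cost = sum(screen_cost[s] for s in screens)
    for prior, detected_by in events.values():
        if detected_by & screens:
            cost += prior * ident_cost   # identify only after a positive
        else:
            cost += ident_cost           # unscreened event: always identify
    return cost

subsets = (frozenset(c) for r in range(len(screen_cost) + 1)
           for c in combinations(screen_cost, r))
best = min(subsets, key=expected_cost)
print(sorted(best), expected_cost(best))   # ['35S', 'FMV'] is optimal here
```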

  4. Pre-analytical and analytical variation of drug determination in segmented hair using ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2014-01-01

    Assessment of the total uncertainty of analytical methods for the measurement of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty will contribute to the total uncertainty. Particularly in segmental hair analysis, pre-analytical variations associated with sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS), with a focus on assessing both the analytical and the pre-analytical sampling variation. The validated method was specific, accurate (80-120%), and precise (CV ≤ 20%) across a wide linear concentration range from 0.025 to 25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from genuine duplicate measurements of two bundles of hair collected from each subject, after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated at 26-69%. Thus, the pre-analytical variation was 3-7-fold larger than the analytical variation (7-13%) and hence the dominant component of the total variation (29-70%). The present study demonstrates the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95% uncertainty interval (±2CV_T). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
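
    A sketch of the variance split described above, using the standard decomposition CV_pre = sqrt(CV_total^2 - CV_analytical^2) on duplicate measurements (the numbers are invented, not the study's data):

```python
import numpy as np

# Hypothetical duplicate hair-bundle concentrations (ng/mg), per subject.
dup = np.array([[0.80, 1.30], [2.10, 1.20], [0.45, 0.70], [3.0, 4.1]])

diff = np.diff(dup, axis=1).ravel()
mean = dup.mean(axis=1)
# CV from duplicates: RMS of relative differences divided by sqrt(2).
cv_total = np.sqrt(np.mean((diff / mean) ** 2) / 2)

cv_analytical = 0.10            # e.g. 10% from the method validation
cv_pre = np.sqrt(cv_total**2 - cv_analytical**2)
print(f"CV_total = {cv_total:.0%}, CV_pre = {cv_pre:.0%}")
```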

  5. A Surrogate-based Adaptive Sampling Approach for History Matching and Uncertainty Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Zhang, Dongxiao; Lin, Guang

    A critical procedure in reservoir simulations is history matching (or data assimilation in a broader sense), which calibrates model parameters such that the simulation results are consistent with field measurements, and hence improves the credibility of the predictions given by the simulations. Often there exist non-unique combinations of parameter values that all yield simulation results matching the measurements. For such ill-posed history matching problems, Bayes' theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior PDF. Lacking an analytical solution in most situations, the posterior PDF may be characterized with a sample of realizations, each representing a possible scenario. A novel sampling algorithm is presented here for the Bayesian solution of history matching problems. We aim to deal with two commonly encountered issues: (1) as a result of the nonlinear input-output relationship in a reservoir model, the posterior distribution could have a complex form, such as multimodal, which violates the Gaussian assumption required by most commonly used data assimilation approaches; (2) a typical sampling method requires intensive model evaluations and hence may cause unaffordable computational cost. In the developed algorithm, we use a Gaussian mixture model as the proposal distribution in the sampling process, which is simple but also flexible enough to approximate non-Gaussian distributions and is particularly efficient when the posterior is multimodal. Also, a Gaussian process is utilized as a surrogate model to speed up the sampling process. Furthermore, an iterative scheme of adaptive surrogate refinement and re-sampling ensures sampling accuracy while keeping the computational cost at a minimum level. The developed approach is demonstrated with an illustrative example and shows its capability in handling the above-mentioned issues. The multimodal posterior of the history matching problem is captured and used to give a reliable production prediction with uncertainty quantification. The new algorithm reveals a great improvement in computational efficiency compared with previously studied approaches for the example problem.
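
    A heavily reduced sketch of the two ingredients named above, not the authors' algorithm: a Gaussian-process surrogate stands in for the expensive simulator inside the likelihood, and a Gaussian mixture fitted to the current sample acts as the proposal in an importance-resampling loop. One scalar parameter and a synthetic measurement are assumed:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
simulator = lambda m: np.sin(3 * m) + m     # stand-in reservoir model
d_obs, sigma = 1.2, 0.1                     # field datum and noise level

# Train the surrogate on a handful of expensive model runs.
m_train = np.linspace(-2, 2, 15)[:, None]
gp = GaussianProcessRegressor().fit(m_train, simulator(m_train.ravel()))

def log_post(m):
    """Gaussian prior N(0, 2^2) times surrogate-based likelihood."""
    pred = gp.predict(np.atleast_2d(m).T)
    return -0.5 * (m / 2) ** 2 - 0.5 * ((pred - d_obs) / sigma) ** 2

samples = rng.normal(0, 2, (500, 1))        # start from the prior
for _ in range(3):                          # adaptive refine/re-sample
    gmm = GaussianMixture(n_components=3, random_state=0).fit(samples)
    cand, _ = gmm.sample(2000)
    logw = log_post(cand.ravel()) - gmm.score_samples(cand)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    samples = cand[rng.choice(len(cand), size=500, p=w)]

print(samples.mean(), samples.std())        # posterior summary
```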

  6. An overview on forensic analysis devoted to analytical chemists.

    PubMed

    Castillo-Peinado, L S; Luque de Castro, M D

    2017-05-15

    The main aim of the present article is to show analytical chemists interested in forensic analysis the world they will face should they decide to become forensic analytical chemists. With this purpose, the most outstanding aspects of forensic analysis in dealing with sampling (involving both bodily and non-bodily samples), sample preparation, and the analytical equipment used in detection, identification, and quantitation of key sample components are critically discussed. The role of the great omics in forensic analysis, and the growing role of the youngest of the great omics, metabolomics, are also discussed. The foreseeable role of integrative omics is also outlined. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Quantification of HCV RNA in Liver Tissue by bDNA Assay.

    PubMed

    Dailey, P J; Collins, M L; Urdea, M S; Wilber, J C

    1999-01-01

    Sherlock and Dooley described two of the three major challenges involved in quantitatively measuring any analyte in tissue samples: the distribution of the analyte in the tissue, and the standard of reference, or denominator, with which to make comparisons between tissue samples. The third challenge for quantitative measurement of an analyte in tissue is to ensure reproducible and quantitative recovery of the analyte on extraction from tissue samples. This chapter describes a method that can be used to measure HCV RNA quantitatively in liver biopsy and tissue samples using the bDNA assay. All three of these challenges (distribution, denominator, and recovery) apply to the measurement of HCV RNA in liver biopsies.

  8. A multiobjective hybrid genetic algorithm for the capacitated multipoint network design problem.

    PubMed

    Lo, C C; Chang, W H

    2000-01-01

    The capacitated multipoint network design problem (CMNDP) is NP-complete. In this paper, a hybrid genetic algorithm for CMNDP is proposed. The multiobjective hybrid genetic algorithm (MOHGA) differs from other genetic algorithms (GAs) mainly in its selection procedure. The concept of subpopulation is used in MOHGA. Four subpopulations are generated according to the elitism reservation strategy, the shifting Prüfer vector, the stochastic universal sampling, and the complete random method, respectively. Mixing these four subpopulations produces the next generation population. The MOHGA can effectively search the feasible solution space due to population diversity. The MOHGA has been applied to CMNDP. By examining computational and analytical results, we notice that the MOHGA can find most nondominated solutions and is much more effective and efficient than other multiobjective GAs.
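
    For concreteness, one of the four selection mechanisms named above, stochastic universal sampling, in a minimal form (a sketch, not the authors' code): a single random offset and N evenly spaced pointers over the cumulative fitness give low-variance proportional selection:

```python
import numpy as np

def sus(fitness, n, rng=None):
    """Return the indices of n individuals chosen by SUS."""
    rng = rng or np.random.default_rng()
    f = np.asarray(fitness, dtype=float)
    cum = np.cumsum(f / f.sum())            # cumulative selection wheel
    start = rng.uniform(0, 1.0 / n)         # one random offset
    pointers = start + np.arange(n) / n     # evenly spaced pointers
    return np.searchsorted(cum, pointers)

print(sus([4.0, 2.0, 1.0, 1.0], n=4))       # fitter individuals recur
```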

  9. A Generalized Approach to Forensic Dye Identification: Development and Utility of Reference Libraries.

    PubMed

    Groves, Ethan; Palenik, Skip; Palenik, Christopher S

    2018-04-18

    While color is arguably the most important optical property of evidential fibers, the actual dyestuffs responsible for its expression in them are, in forensic trace evidence examinations, rarely analyzed and still less often identified. This is due, primarily, to the exceedingly small quantities of dye present in a single fiber as well as to the fact that dye identification is a challenging analytical problem, even when large quantities are available for analysis. Among the practical reasons for this are the wide range of dyestuffs available (and the even larger number of trade names), the low total concentration of dyes in the finished product, the limited amount of sample typically available for analysis in forensic cases, and the complexity of the dye mixtures that may exist within a single fiber. Literature on the topic of dye analysis is often limited to a specific method, subset of dyestuffs, or an approach that is not applicable given the constraints of a forensic analysis. Here, we present a generalized approach to dye identification that (1) combines several robust analytical methods, (2) is broadly applicable to a wide range of dye chemistries, application classes, and fiber types, and (3) can be scaled down to forensic casework-sized samples. The approach is based on the development of a reference collection of 300 commercially relevant textile dyes that have been characterized by a variety of microanalytical methods (HPTLC, Raman microspectroscopy, infrared microspectroscopy, UV-Vis spectroscopy, and visible microspectrophotometry). Although there is no single approach that is applicable to all dyes on every type of fiber, a combination of these analytical methods has been applied using a reproducible approach that permits the use of reference libraries to constrain the identity of and, in many cases, identify the dye (or dyes) present in a textile fiber sample.

  10. Flame atomic absorption spectrometric determination of trace amounts of nickel after extraction and preconcentration onto natural modified analcime zeolite loaded with 2-(5-bromo-2-pyridylazo)-5-diethylaminophenol.

    PubMed

    Afzali, Darush; Taher, Mohammad Ali; Mostafavi, Ali; Mahani, Mohammad Khayatzadeh

    2005-01-01

    Nickel is a moderately toxic element compared with other transition metals. However, inhalation of nickel and its compounds can lead to serious problems, including cancer of the respiratory system and a skin disorder, nickel eczema. Attention has therefore focused on the toxicity of nickel at low concentrations, and reliable analytical approaches for the determination of trace amounts of nickel are needed. This paper describes a simple, rapid, and sensitive flame atomic absorption spectrometric method for the determination of trace amounts of nickel in various samples after adsorption of its 2-(5-bromo-2-pyridylazo)-5-diethylaminophenol complex on a modified analcime column in the pH range of 7.5-10.5. The retained analyte on the analcime is recovered with 5.0 mL of 2 M nitric acid and determined by flame atomic absorption spectrometry. The detection limit is 20 ng/mL, and the calibration curve is linear for analyte concentrations in the range of 0.1-8 µg/mL in the final solution, with a correlation coefficient of 0.9993. Eight replicate determinations of nickel at 2 µg/mL in the final solution gave an absorbance of 0.1222, with a relative standard deviation (RSD) of ±1.2%. The interference of a large number of anions and cations was studied, and the proposed method was used for the determination of nickel in various standard reference samples. The accuracy of the proposed method was evaluated by analyzing standard reference samples, and the results were satisfactory (recoveries of >96%; RSD of <3.5%).

  11. Compilation of a near-infrared library for the construction of quantitative models of amoxicillin and potassium clavulanate oral dosage forms

    NASA Astrophysics Data System (ADS)

    Zou, Wen-bo; Chong, Xiao-meng; Wang, Yan; Hu, Chang-qin

    2018-05-01

    The accuracy of NIR quantitative models depends on calibration samples with concentration variability. Conventional sample collection methods have shortcomings, especially their time-consuming nature, which remains a bottleneck in the application of NIR models for Process Analytical Technology (PAT) control. A study was performed to solve the problem of calibration sample selection for the construction of NIR quantitative models, using amoxicillin and potassium clavulanate oral dosage forms as examples. The aim was to find a general approach to rapidly construct NIR quantitative models from an NIR spectral library, based on the idea of a universal model [20,21]. The NIR spectral library of amoxicillin and potassium clavulanate oral dosage forms consisted of spectra of 377 batches of samples produced by 26 domestic pharmaceutical companies, including tablets, dispersible tablets, chewable tablets, oral suspensions, and granules. The correlation coefficient (rT) was used to indicate the similarity of the spectra. Calibration sets were selected from the spectral library according to the median rT of the samples to be analyzed; the rT values of the selected samples were close to the median, differing from it by 1.0% to 1.5%. We concluded that sample selection is not a problem when constructing NIR quantitative models from a spectral library, in contrast to conventional methods of building universal models. Sample spectra spanning a suitable concentration range for the NIR models were collected quickly. In addition, the models constructed through this method were more easily targeted.
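
    A sketch of the selection rule described above (function names are hypothetical and the spectra are random stand-ins): compute the correlation coefficient rT between the target sample's spectrum and every library spectrum, then keep the library entries whose rT lies close to the median:

```python
import numpy as np

def select_calibration_set(library, target, band=0.015):
    """library: (n_spectra, n_points); keep spectra with rT near median."""
    rT = np.array([np.corrcoef(s, target)[0, 1] for s in library])
    median_rT = np.median(rT)
    keep = np.abs(rT - median_rT) <= band   # cf. the 1.0-1.5% window
    return np.nonzero(keep)[0], median_rT

rng = np.random.default_rng(3)
library = rng.normal(size=(377, 700))       # stand-in for the 377 batches
target = rng.normal(size=700)
idx, med = select_calibration_set(library, target)
print(len(idx), round(med, 3))
```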

  12. How Much Can We Learn from a Single Chromatographic Experiment? A Bayesian Perspective.

    PubMed

    Wiczling, Paweł; Kaliszan, Roman

    2016-01-05

    In this work, we proposed and investigated a Bayesian inference procedure to find the desired chromatographic conditions based on known analyte properties (lipophilicity, pKa, and polar surface area) using one preliminary experiment. A previously developed nonlinear mixed effect model was used to specify the prior information about a new analyte with known physicochemical properties. Further, the prior (no preliminary data) and posterior predictive distribution (prior + one experiment) were determined sequentially to search towards the desired separation. The following isocratic high-performance reversed-phase liquid chromatographic conditions were sought: (1) retention time of a single analyte within the range of 4-6 min and (2) baseline separation of two analytes with retention times within the range of 4-10 min. The empirical posterior Bayesian distribution of parameters was estimated using the "slice sampling" Markov Chain Monte Carlo (MCMC) algorithm implemented in Matlab. The simulations with artificial analytes and experimental data of ketoprofen and papaverine were used to test the proposed methodology. The simulation experiment showed that for a single and two randomly selected analytes, there is 97% and 74% probability of obtaining a successful chromatogram using none or one preliminary experiment. The desired separation for ketoprofen and papaverine was established based on a single experiment. It was confirmed that the search for a desired separation rarely requires a large number of chromatographic analyses at least for a simple optimization problem. The proposed Bayesian-based optimization scheme is a powerful method of finding a desired chromatographic separation based on a small number of preliminary experiments.
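
    For reference, a compact univariate slice sampler (step-out plus shrinkage, after Neal 2003), the MCMC flavor named above; the target density here is a stand-in, not the chromatographic posterior:

```python
import numpy as np

def slice_sample(logp, x0, n, w=1.0, seed=0):
    """Univariate slice sampling with step-out and shrinkage."""
    rng = np.random.default_rng(seed)
    x, out = x0, []
    for _ in range(n):
        logy = logp(x) + np.log(rng.uniform())   # auxiliary slice level
        left = x - w * rng.uniform()             # randomly placed window
        right = left + w
        while logp(left) > logy:                 # step out to cover slice
            left -= w
        while logp(right) > logy:
            right += w
        while True:                              # shrink until accepted
            x_new = rng.uniform(left, right)
            if logp(x_new) > logy:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        out.append(x)
    return np.array(out)

# Stand-in bimodal target (log density up to an additive constant).
logp = lambda x: np.logaddexp(-0.5 * (x + 2) ** 2, -0.5 * (x - 2) ** 2)
draws = slice_sample(logp, x0=0.0, n=2000)
print(draws.mean(), draws.std())
```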

  13. NHEXAS PHASE I REGION 5 STUDY--METALS IN DUST ANALYTICAL RESULTS

    EPA Science Inventory

    This data set includes analytical results for measurements of metals in 1,906 dust samples. Dust samples were collected to assess potential residential sources of dermal and inhalation exposures and to examine relationships between analyte levels in dust and in personal and bioma...

  14. 10 CFR 26.168 - Blind performance testing.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...

  15. 10 CFR 26.168 - Blind performance testing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...

  16. 10 CFR 26.168 - Blind performance testing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...

  17. 10 CFR 26.168 - Blind performance testing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...

  18. 10 CFR 26.168 - Blind performance testing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...

  19. Simultaneous determination of carotenoids, tocopherols, retinol and cholesterol in ovine lyophilised samples of milk, meat, and liver and in unprocessed/raw samples of fat.

    PubMed

    Bertolín, J R; Joy, M; Rufino-Moya, P J; Lobón, S; Blanco, M

    2018-08-15

    An accurate, fast, economical and simple method to determine carotenoids, tocopherols, retinol and cholesterol in lyophilised samples of ovine milk, muscle and liver, and in raw samples of fat, which are difficult to lyophilise, was sought. These analytes have been studied in animal tissues to trace forage feeding and unhealthy contents. The sample treatment consisted of mild overnight saponification, liquid-liquid extraction, evaporation with a vacuum evaporator and redissolution. The different analytes were quantified by ultra-high performance liquid chromatography with a diode-array detector for carotenoids, retinol and cholesterol and a fluorescence detector for tocopherols. The retention times of the analytes were short and the resolution between analytes was very high. The limits of detection and quantification were very low. This method is suitable for all the matrices and analytes and could be adapted to other animal species with minor changes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Simultaneous Spectrophotometric Determination of Rifampicin, Isoniazid and Pyrazinamide in a Single Step

    PubMed Central

    Asadpour-Zeynali, Karim; Saeb, Elhameh

    2016-01-01

    Three antituberculosis medications, rifampicin, isoniazid and pyrazinamide, are investigated in this work. The ultraviolet (UV) spectra of these compounds overlap, so the use of suitable chemometric methods is helpful for their simultaneous spectrophotometric determination. A generalized version of the net analyte signal standard addition method (GNASSAM) was used for the determination of the three antituberculosis medications as a model system. In the generalized net analyte signal standard addition method, only one standard solution is prepared for all analytes. This standard solution contains a mixture of all analytes of interest, and its addition to the sample increases the net analyte signal of each analyte in proportion to that analyte's concentration in the added standard solution. To determine the concentration of each analyte in several synthetic mixtures, the UV spectra of the pure analytes and of each sample were recorded in the range of 210-550 nm. The standard addition procedure was performed for each sample, the UV spectrum was recorded after each addition, and the results were finally analyzed by the net analyte signal method. The concentrations obtained show acceptable performance of GNASSAM in these cases. PMID:28243267
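
    For context, the sketch below shows the ordinary single-analyte standard addition calculation that GNASSAM generalizes: signals are regressed on added concentration and the unknown concentration is recovered by extrapolation. The numbers are invented for the example, and the net-analyte-signal extraction step of the full method is not reproduced here.

    ```python
    import numpy as np

    # Signals measured after successive additions of the standard
    # (illustrative values; in GNASSAM the "signal" would be the net
    # analyte signal of one analyte extracted from the mixture spectra).
    c_added = np.array([0.0, 2.0, 4.0, 6.0, 8.0])   # added conc., ug/mL
    signal = np.array([0.210, 0.345, 0.480, 0.612, 0.748])

    # Linear fit: signal = slope * c_added + intercept
    slope, intercept = np.polyfit(c_added, signal, 1)

    # Standard addition extrapolation: unknown conc. = intercept / slope
    c_unknown = intercept / slope
    print(f"estimated analyte concentration: {c_unknown:.2f} ug/mL")
    ```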

  1. Semi-analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    NASA Astrophysics Data System (ADS)

    Lee, Gibbeum; Cho, Yeunwoo

    2018-01-01

    A new semi-analytical approach is presented for solving the matrix eigenvalue problem or the integral equation in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of a direct numerical approach to this matrix eigenvalue problem, which may suffer from computational inaccuracy for big data, a pair of integral and differential equations is considered, which are related to the so-called prolate spheroidal wave functions (PSWF). First, the PSWF is expressed as a summation of a small number of analytical Legendre functions. Substituting this expansion into the PSWF differential equation yields a much smaller matrix eigenvalue problem than the direct numerical K-L matrix eigenvalue problem. By solving this with minimal numerical effort, the PSWF and the associated eigenvalue of the PSWF differential equation are obtained. Then, the eigenvalue of the PSWF integral equation is expressed analytically in terms of the functional values of the PSWF and the eigenvalues obtained from the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues of the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data such as ordinary irregular waves. It is found that, at the same accuracy, the required memory size of the present method is smaller than that of the direct numerical K-L representation, and the computation time of the present method is shorter than that of the semi-analytical method based on sinusoidal functions.
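
    To make the baseline concrete, here is the direct numerical K-L route that the PSWF approach is designed to outperform: eigendecomposition of the sample covariance matrix of the records. The data below are random stand-ins; for long records, the n-by-n covariance matrix is exactly the memory burden the paper avoids.

    ```python
    import numpy as np

    # Direct numerical K-L baseline: eigendecompose the sample
    # covariance of an ensemble of wave records.
    rng = np.random.default_rng(1)
    n_records, n_points = 200, 512
    data = rng.standard_normal((n_records, n_points))  # stand-in for wave data

    data -= data.mean(axis=0)                  # remove ensemble mean
    cov = data.T @ data / (n_records - 1)      # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # K-L modes and variances
    order = np.argsort(eigvals)[::-1]          # strongest modes first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # Truncated K-L representation keeping m modes
    m = 20
    coeffs = data @ eigvecs[:, :m]             # projection onto K-L modes
    reconstruction = coeffs @ eigvecs[:, :m].T
    print(f"{m} modes keep {eigvals[:m].sum() / eigvals.sum():.1%} of the variance")
    ```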

  2. Influence of a strong sample solvent on analyte dispersion in chromatographic columns.

    PubMed

    Mishra, Manoranjan; Rana, Chinar; De Wit, A; Martin, Michel

    2013-07-05

    In chromatographic columns, when the eluting strength of the sample solvent is larger than that of the carrier liquid, a deformation of the analyte zone occurs: its frontal part moves at a relatively high velocity due to a low retention factor in the sample solvent, while the rear part of the analyte zone is more retained in the carrier liquid and hence moves at a lower velocity. The influence of this solvent-strength effect on the separation of analytes is studied here theoretically using a mass balance model describing the spatio-temporal evolution of the eluent, the sample solvent and the analyte. The viscosities of the sample solvent and the carrier fluid are assumed to be the same (i.e. no viscous fingering effects are taken into account). A linear adsorption isotherm with a retention factor depending upon the local concentration of the liquid phase is considered. The governing equations are solved numerically using a Fourier spectral method, and parametric studies are performed to analyze the effect of various governing parameters on the dispersion and skewness of the analyte zone. The distortion of this zone is found to depend strongly on the difference in eluting strength between the mobile phase and the sample solvent as well as on the sample volume. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--QA ANALYTICAL RESULTS FOR METALS IN SPIKE SAMPLES

    EPA Science Inventory

    The Metals in Spike Samples data set contains the analytical results of measurements of up to 11 metals in 15 control samples (spikes) from 11 households. Measurements were made in spiked samples of dust, food, and dermal wipe residue. Spiked samples were used to assess recover...

  4. Growing geometric reasoning in solving problems of analytical geometry through the mathematical communication problems to state Islamic university students

    NASA Astrophysics Data System (ADS)

    Mujiasih; Waluya, S. B.; Kartono; Mariani

    2018-03-01

    Skill in working on geometry problems depends greatly on competence in geometric reasoning. As teacher candidates, State Islamic University (UIN) students need to possess this geometric reasoning competence. When geometric reasoning in solving geometry problems has developed well, students are expected to be able to write their ideas communicatively for the reader. A student's mathematical communication ability can thus serve as a marker of the growth of their geometric reasoning. The growth of geometric reasoning in solving analytic geometry problems will therefore be characterized by the growth of mathematical communication ability whose written work is complete, correct and sequential. Conducted as qualitative research, this article reports a study exploring the problem: can the growth of geometric reasoning in solving analytic geometry problems be characterized by the growth of mathematical communication abilities? The main activities in this research were carried out through a series of steps: (1) the lecturer trains the students on analytic geometry problems that are not routine or algorithmic but instead require high-level reasoning and are divergent/open-ended; (2) students are asked to work the problems independently, in detail, completely, in order, and correctly; (3) student answers are then corrected at each stage; (4) six students are taken as the subjects of this research; (5) the research subjects are interviewed and the researchers conduct triangulation. The results of this research: (1) Mathematics Education students of UIN Semarang had adequate mathematical communication ability; (2) this mathematical communication ability can serve as a marker of geometric reasoning in problem solving; and (3) the geometric reasoning of UIN students had grown into a category that tends to be good.

  5. Manganese recycling in the United States in 1998

    USGS Publications Warehouse

    Jones, Thomas S.

    2003-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-163 (trace constituents), M-156 (major constituents), N-67 (nutrient constituents), N-68 (nutrient constituents), P-35 (low ionic strength constituents), and Hg-31 (mercury) -- that were distributed in October 2000 to 126 laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data that were received from 122 of the laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.

  6. Prospective Association of Childhood Receptive Vocabulary and Conduct Problems with Self-Reported Adolescent Delinquency: Tests of Mediation and Moderation in Sibling-Comparison Analyses

    PubMed Central

    Lahey, Benjamin B.; D'Onofrio, Brian M.; Van Hulle, Carol A.; Rathouz, Paul J.

    2014-01-01

    Associations among receptive vocabulary measured at 4–9 years, mother-reported childhood conduct problems at 4–9 years, and self-reported adolescent delinquency at 14–17 years were assessed using data from a prospective study of the offspring of a large U.S. nationally representative sample of women. A novel quasi-experimental strategy was used to rule out family-level confounding by estimating path-analytic associations within families in a sibling comparison design. This allowed simultaneous tests of the direct and indirect effects of receptive vocabulary and childhood conduct problems, and of their joint moderation, on adolescent delinquency without family-level environmental confounding. The significant association of receptive vocabulary with later adolescent delinquency was indirect, mediated by childhood conduct problems. Furthermore, a significant interaction between receptive vocabulary and childhood conduct problems reflected a steeper slope for the predictive association between childhood conduct problems and adolescent delinquency when receptive vocabulary scores were higher. These findings of significant indirect association were qualitatively identical in both population-level and within-family analyses, suggesting that they are not the result of family-level confounds. PMID:24736982

  7. Prospective association of childhood receptive vocabulary and conduct problems with self-reported adolescent delinquency: tests of mediation and moderation in sibling-comparison analyses.

    PubMed

    Lahey, Benjamin B; D'Onofrio, Brian M; Van Hulle, Carol A; Rathouz, Paul J

    2014-11-01

    Associations among receptive vocabulary measured at 4-9 years, mother-reported childhood conduct problems at 4-9 years, and self-reported adolescent delinquency at 14-17 years were assessed using data from a prospective study of the offspring of a large U.S. nationally representative sample of women. A novel quasi-experimental strategy was used to rule out family-level confounding by estimating path-analytic associations within families in a sibling comparison design. This allowed simultaneous tests of the direct and indirect effects of receptive vocabulary and childhood conduct problems, and of their joint moderation, on adolescent delinquency without family-level environmental confounding. The significant association of receptive vocabulary with later adolescent delinquency was indirect, mediated by childhood conduct problems. Furthermore, a significant interaction between receptive vocabulary and childhood conduct problems reflected a steeper slope for the predictive association between childhood conduct problems and adolescent delinquency when receptive vocabulary scores were higher. These findings of significant indirect association were qualitatively identical in both population-level and within-family analyses, suggesting that they are not the result of family-level confounds.

  8. Interlaboratory comparability, bias, and precision for four laboratories measuring analytes in wet deposition, October 1983-December 1984

    USGS Publications Warehouse

    Brooks, Myron H.; Schroder, LeRoy J.; Willoughby, Timothy C.

    1987-01-01

    Four laboratories involved in the routine analysis of wet-deposition samples participated in an interlaboratory comparison program managed by the U.S. Geological Survey. The four participants were: the Illinois State Water Survey central analytical laboratory in Champaign, Illinois; the U.S. Geological Survey national water-quality laboratories in Atlanta, Georgia, and Denver, Colorado; and the Inland Waters Directorate national water-quality laboratory in Burlington, Ontario, Canada. Analyses of interlaboratory samples performed by the four laboratories from October 1983 through December 1984 were compared. Participating laboratories analyzed three types of interlaboratory samples--natural wet deposition, simulated wet deposition, and deionized water--for pH and specific conductance, and for dissolved calcium, magnesium, sodium, potassium, chloride, sulfate, nitrate, ammonium, and orthophosphate. Natural wet-deposition samples were aliquots of actual wet-deposition samples. Analyses of these samples by the four laboratories were compared using analysis of variance. Test results indicated that pH, calcium, nitrate, and ammonium results were not directly comparable among the four laboratories. Statistically significant differences between laboratory results probably were meaningful only for analyses of dissolved calcium. Simulated wet-deposition samples with known analyte concentrations were used to test each laboratory for analyte bias. Laboratory analyses of calcium, magnesium, sodium, potassium, chloride, sulfate, and nitrate were not significantly different from the known concentrations of these analytes when tested using analysis of variance. Deionized-water samples were used to test each laboratory for reporting of false positive values. The Illinois State Water Survey laboratory reported the smallest percentage of false positive values for most analytes. Analyte precision was estimated for each laboratory from results of replicate measurements. In general, the Illinois State Water Survey laboratory achieved the greatest precision, whereas the U.S. Geological Survey laboratories achieved the least precision.

  9. Perspectives on making big data analytics work for oncology.

    PubMed

    El Naqa, Issam

    2016-12-01

    Oncology, with its unique combination of clinical, physical, technological, and biological data, provides an ideal case study for applying big data analytics to improve cancer treatment safety and outcomes. An oncology treatment course such as chemoradiotherapy can generate a large pool of information carrying the 5V hallmarks of big data. These data comprise a heterogeneous mixture of patient demographics, radiation/chemo dosimetry, multimodality imaging features, and biological markers generated over a treatment period that can span a few days to several weeks. Efforts using commercial and in-house tools are underway to facilitate data aggregation, ontology creation, sharing, visualization and varying analytics in a secure environment. However, open questions related to proper data structure representation and effective analytics tools to support oncology decision-making need to be addressed. It is recognized that oncology data constitute a mix of structured (tabulated) and unstructured (electronic documents) data that need to be processed to facilitate searching and subsequent knowledge discovery from relational or NoSQL databases. In this context, methods based on advanced analytics and image feature extraction for oncology applications will be discussed. On the other hand, the classical p (variables) >> n (samples) inference problem of statistical learning is challenged in the big data realm, and this is particularly true for oncology applications where p-omics is witnessing exponential growth while the number of cancer incidences has generally plateaued over the past 5 years, leading to a quasi-linear growth in samples per patient. Within the big data paradigm, this kind of phenomenon may yield undesirable effects such as echo chamber anomalies, Yule-Simpson reversal paradox, or misleading ghost analytics. In this work, we present these effects as they pertain to oncology and engage small-thinking methodologies to counter them, ranging from incorporating prior knowledge and information-theoretic techniques to modern ensemble machine learning approaches, or combinations of these. We particularly discuss the pros and cons of different approaches to improve the mining of big data in oncology. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. NHEXAS PHASE I MARYLAND STUDY--PESTICIDES IN DERMAL WIPES ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticides in Dermal Wipe Samples data set contains analytical results for measurements of up to 8 pesticides in 40 dermal wipe samples over 40 households. Each sample was collected from the primary respondent within each household. The sampling period occurred on the last ...

  11. NHEXAS PHASE I MARYLAND STUDY--QA ANALYTICAL RESULTS FOR PESTICIDES IN REPLICATE SAMPLES

    EPA Science Inventory

    The Pesticides in Replicates data set contains the analytical results of measurements of up to 10 pesticides in 68 replicate (duplicate) samples from 41 households. Measurements were made in samples of indoor air, dust, soil, drinking water, food, and beverages. Duplicate sampl...

  12. NHEXAS PHASE I MARYLAND STUDY--PESTICIDES IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticides in Blood Serum data set contains analytical results for measurements of up to 17 pesticides in 358 blood samples over 79 households. Each sample was collected via a venous sample from the primary respondent within each household by a phlebotomist. Samples were ge...

  13. Evaluation of Cobas Integra 800 under simulated routine conditions in six laboratories.

    PubMed

    Redondo, Francisco L; Bermudez, Pilar; Cocco, Claudio; Colella, Francesca; Graziani, Maria Stella; Fiehn, Walter; Hierla, Thomas; Lemoël, Gisèle; Belliard, AnneMarie; Manene, Dieudonne; Meziani, Mourad; Liebel, Maryann; McQueen, Matthew J; Stockmann, Wolfgang

    2003-03-01

    The new selective-access analyser Cobas Integra 800 from Roche Diagnostics was evaluated in an international multicentre study at six sites. Routine simulation experiments showed good performance and full functionality of the instrument, and provocation of anomalous situations generated no problems. The new features of the Cobas Integra 800, namely clot detection and dispensing control, worked according to specifications. The imprecision of the Cobas Integra 800 fulfilled the proposed quality specifications for imprecision of analytical systems for clinical chemistry, with few exceptions. Claims for linearity, drift, and carry-over were all within the defined specifications, except urea linearity. Interference exists in some cases, as could be expected from the chemistries applied. Accuracy met the proposed quality specifications, except in some special cases. Method comparisons with the Cobas Integra 700 showed good agreement; comparisons with other analysis systems yielded explicable deviations in several cases. The practicability of the Cobas Integra 800 met or exceeded the requirements for more than 95% of all attributes rated. The strong points of the new analysis system were reagent handling, long stability of calibration curves, a high number of tests on board, compatibility of the sample carrier with other Roche systems, and the sample integrity check for more reliable analytical results. The workflow improvement offered by the 5-position rack and STAT handling on the Cobas Integra 800 makes the instrument attractive for further consolidation in the medium-sized laboratory, for dedicated use for special analytes, and/or as back-up in the large routine laboratory.

  14. Statistical Learning Theory for High Dimensional Prediction: Application to Criterion-Keyed Scale Development

    PubMed Central

    Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul

    2016-01-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (Supervised Principal Components, Regularization, and Boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery-phase research. We conclude that despite their differences from the classic null-hypothesis testing approach, or perhaps because of them, SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
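
    A hedged sketch of the regularization route described above, using scikit-learn: an L1-penalized logistic regression with cross-validated penalty strength retains a subset of items, mimicking criterion-keyed scale construction against a binary outcome. The data are synthetic stand-ins; the paper's own cohort and analyses are not reproduced.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegressionCV
    from sklearn.model_selection import train_test_split

    # Illustrative stand-in data: 1000 respondents x 300 Likert items,
    # binary outcome (e.g., mortality by follow-up).
    rng = np.random.default_rng(2)
    X = rng.integers(1, 6, size=(1000, 300)).astype(float)
    beta = np.zeros(300)
    beta[:10] = 0.4                          # only 10 items carry signal
    y = (X @ beta + rng.normal(0, 4, 1000)) > np.median(X @ beta)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # L1-regularized logistic regression with cross-validated penalty:
    # tuned to minimize expected prediction error, not the in-sample fit.
    model = LogisticRegressionCV(Cs=10, cv=5, penalty="l1",
                                 solver="liblinear", max_iter=5000)
    model.fit(X_tr, y_tr)

    kept = np.flatnonzero(model.coef_.ravel())  # items retained in the scale
    print(f"{kept.size} items kept; held-out accuracy = "
          f"{model.score(X_te, y_te):.2f}")
    ```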

  15. Analytical Tools in School Finance Reform.

    ERIC Educational Resources Information Center

    Johns, R. L.

    This paper discusses the problem of analyzing variations in the educational opportunities provided by different school districts and describes how to assess the impact of school finance alternatives through use of various analytical tools. The author first examines relatively simple analytical methods, including calculation of per-pupil…

  16. Learning Analytics: Challenges and Limitations

    ERIC Educational Resources Information Center

    Wilson, Anna; Watson, Cate; Thompson, Terrie Lynn; Drew, Valerie; Doyle, Sarah

    2017-01-01

    Learning analytic implementations are increasingly being included in learning management systems in higher education. We lay out some concerns with the way learning analytics--both data and algorithms--are often presented within an unproblematized Big Data discourse. We describe some potential problems with the often implicit assumptions about…

  17. Back analysis of geomechanical parameters in underground engineering using artificial bee colony.

    PubMed

    Zhu, Changxing; Zhao, Hongbo; Zhao, Ming

    2014-01-01

    Accurate geomechanical parameters are critical in tunnel excavation, design, and support. In this paper, a displacement back analysis based on the artificial bee colony (ABC) algorithm is proposed to identify geomechanical parameters from monitored displacements. ABC was used as a global optimization algorithm to search for the unknown geomechanical parameters in problems with an analytical solution. For problems without an analytical solution, optimal back analysis is time-consuming, so a least squares support vector machine (LSSVM) was used to build the relationship between the unknown geomechanical parameters and the displacements and thereby improve the efficiency of the back analysis. The proposed method was applied to a tunnel with an analytical solution and to a tunnel without one. The results show the proposed method is feasible.
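
    The sketch below implements a minimal artificial bee colony of the kind described above, applied to a toy misfit function standing in for the displacement-misfit objective. The colony size, trial limit, and loss function are illustrative assumptions, and the LSSVM surrogate is omitted.

    ```python
    import numpy as np

    def abc_minimize(loss, bounds, n_food=20, limit=20, n_iter=200, seed=0):
        """Minimal artificial bee colony global search. In a back
        analysis, loss would be the misfit between computed and
        monitored displacements for a candidate parameter set."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds[:, 0], bounds[:, 1]
        dim = len(lo)
        x = rng.uniform(lo, hi, (n_food, dim))       # food sources
        f = np.array([loss(xi) for xi in x])
        trials = np.zeros(n_food, dtype=int)

        def neighbor(i):
            # perturb one dimension toward/away from a random partner
            j = rng.integers(dim)
            k = rng.choice([p for p in range(n_food) if p != i])
            v = x[i].copy()
            v[j] += rng.uniform(-1, 1) * (x[i, j] - x[k, j])
            return np.clip(v, lo, hi)

        for _ in range(n_iter):
            # employed phase, then onlookers weighted by fitness
            fit = 1.0 / (1.0 + f - f.min())
            onlookers = rng.choice(n_food, n_food, p=fit / fit.sum())
            for i in list(range(n_food)) + list(onlookers):
                v = neighbor(i)
                fv = loss(v)
                if fv < f[i]:
                    x[i], f[i], trials[i] = v, fv, 0
                else:
                    trials[i] += 1
            # scout phase: abandon exhausted food sources
            for i in np.flatnonzero(trials > limit):
                x[i] = rng.uniform(lo, hi)
                f[i] = loss(x[i])
                trials[i] = 0
        best = int(np.argmin(f))
        return x[best], f[best]

    # toy misfit: recover two parameters from "monitored" data
    true = np.array([2.5, 1.2])
    loss = lambda p: float(np.sum((p - true) ** 2))
    p_best, f_best = abc_minimize(loss, np.array([[0.1, 10.0], [0.1, 3.0]]))
    print(p_best, f_best)
    ```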

  18. Methods for geochemical analysis

    USGS Publications Warehouse

    Baedecker, Philip A.

    1987-01-01

    The laboratories for analytical chemistry within the Geologic Division of the U.S. Geological Survey are administered by the Office of Mineral Resources. The laboratory analysts provide analytical support to those programs of the Geologic Division that require chemical information and conduct basic research in analytical and geochemical areas vital to the furtherance of Division program goals. Laboratories for research and geochemical analysis are maintained at the three major centers in Reston, Virginia, Denver, Colorado, and Menlo Park, California. The Division has an expertise in a broad spectrum of analytical techniques, and the analytical research is designed to advance the state of the art of existing techniques and to develop new methods of analysis in response to special problems in geochemical analysis. The geochemical research and analytical results are applied to the solution of fundamental geochemical problems relating to the origin of mineral deposits and fossil fuels, as well as to studies relating to the distribution of elements in varied geologic systems, the mechanisms by which they are transported, and their impact on the environment.

  19. Multiplexed Colorimetric Solid-Phase Extraction

    NASA Technical Reports Server (NTRS)

    Gazda, Daniel B.; Fritz, James S.; Porter, Marc D.

    2009-01-01

    Multiplexed colorimetric solid-phase extraction (MC-SPE) is an extension of colorimetric solid-phase extraction (C-SPE), an analytical platform that combines colorimetric reagents, solid-phase extraction, and diffuse reflectance spectroscopy to quantify trace analytes in water. In C-SPE, analytes are extracted and complexed on the surface of an extraction membrane impregnated with a colorimetric reagent. The analytes are then quantified directly on the membrane surface using a handheld diffuse reflectance spectrophotometer. Importantly, the use of solid-phase extraction membranes as the matrix for impregnation of the colorimetric reagents creates a concentration factor that enables the detection of low concentrations of analytes in small sample volumes. In extending C-SPE to a multiplexed format, a filter holder that incorporates discrete analysis channels and a jig that facilitates the concurrent operation of multiple sample syringes have been designed, enabling the simultaneous determination of multiple analytes. Separate single-analyte membranes, placed in a readout cartridge, create unique analyte-specific addresses at the exit of each channel. Following sample exposure, the diffuse reflectance spectrum of each address is collected serially and the Kubelka-Munk function is used to quantify each water-quality parameter via calibration curves. In a demonstration, MC-SPE was used to measure the pH of a sample and quantitate Ag(I) and Ni(II).
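
    The Kubelka-Munk step lends itself to a short illustration. F(R) = (1 - R)^2 / (2R) is the standard Kubelka-Munk transform of diffuse reflectance R, which scales approximately linearly with the amount of absorber; the calibration numbers below are invented for the example.

    ```python
    import numpy as np

    def kubelka_munk(R):
        """Kubelka-Munk function F(R) = (1 - R)^2 / (2R) for diffuse
        reflectance R (0 < R <= 1)."""
        R = np.asarray(R, dtype=float)
        return (1.0 - R) ** 2 / (2.0 * R)

    # Illustrative calibration: reflectance of standards vs. analyte mass
    standards_R = np.array([0.82, 0.66, 0.53, 0.43])
    standards_ug = np.array([0.5, 1.0, 1.5, 2.0])
    slope, intercept = np.polyfit(standards_ug, kubelka_munk(standards_R), 1)

    # Invert the calibration curve for an unknown sample
    sample_R = 0.58
    sample_ug = (kubelka_munk(sample_R) - intercept) / slope
    print(f"estimated analyte load: {sample_ug:.2f} ug")
    ```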

  20. 40 CFR 141.23 - Inorganic chemical sampling and analytical requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... highest analytical result. (e) All public water systems (community; non-transient, non-community; and... each subsequent sample during the quarter(s) which previously resulted in the highest analytical result...). For information on the availability of this material at NARA, call 202-741-6030, or go to: http://www...

  1. Laboratory Environmental Sample Disposal Information Document - Companion to Standardized Analytical Methods for Environmental Restoration Following Homeland Security Events (SAM) – Revision 5.0

    EPA Pesticide Factsheets

    Document is intended to provide general guidelines for use by EPA and EPA-contracted laboratories when disposing of samples and associated analytical waste following use of the analytical methods listed in SAM.

  2. Microarray-integrated optoelectrofluidic immunoassay system

    PubMed Central

    Han, Dongsik

    2016-01-01

    A microarray-based analytical platform has been utilized as a powerful tool in biological assay fields. However, an analyte depletion problem due to the slow mass transport based on molecular diffusion causes low reaction efficiency, resulting in a limitation for practical applications. This paper presents a novel method to improve the efficiency of microarray-based immunoassay via an optically induced electrokinetic phenomenon by integrating an optoelectrofluidic device with a conventional glass slide-based microarray format. A sample droplet was loaded between the microarray slide and the optoelectrofluidic device on which a photoconductive layer was deposited. Under the application of an AC voltage, optically induced AC electroosmotic flows caused by a microarray-patterned light actively enhanced the mass transport of target molecules at the multiple assay spots of the microarray simultaneously, which reduced tedious reaction time from more than 30 min to 10 min. Based on this enhancing effect, a heterogeneous immunoassay with a tiny volume of sample (5 μl) was successfully performed in the microarray-integrated optoelectrofluidic system using immunoglobulin G (IgG) and anti-IgG, resulting in improved efficiency compared to the static environment. Furthermore, the application of multiplex assays was also demonstrated by multiple protein detection. PMID:27190571

  3. Microarray-integrated optoelectrofluidic immunoassay system.

    PubMed

    Han, Dongsik; Park, Je-Kyun

    2016-05-01

    A microarray-based analytical platform has been utilized as a powerful tool in biological assay fields. However, an analyte depletion problem due to the slow mass transport based on molecular diffusion causes low reaction efficiency, resulting in a limitation for practical applications. This paper presents a novel method to improve the efficiency of microarray-based immunoassay via an optically induced electrokinetic phenomenon by integrating an optoelectrofluidic device with a conventional glass slide-based microarray format. A sample droplet was loaded between the microarray slide and the optoelectrofluidic device on which a photoconductive layer was deposited. Under the application of an AC voltage, optically induced AC electroosmotic flows caused by a microarray-patterned light actively enhanced the mass transport of target molecules at the multiple assay spots of the microarray simultaneously, which reduced tedious reaction time from more than 30 min to 10 min. Based on this enhancing effect, a heterogeneous immunoassay with a tiny volume of sample (5 μl) was successfully performed in the microarray-integrated optoelectrofluidic system using immunoglobulin G (IgG) and anti-IgG, resulting in improved efficiency compared to the static environment. Furthermore, the application of multiplex assays was also demonstrated by multiple protein detection.

  4. A review of polychlorinated biphenyls (PCBs) pollution in indoor air environment.

    PubMed

    Dai, Qizhou; Min, Xia; Weng, Mili

    2016-10-01

    Polychlorinated biphenyls (PCBs) were widely used in industrial production due to their unique physical and chemical properties. As persistent organic pollutants, however, PCBs lead to environmental pollution and cause serious problems for human health, and they have accordingly been banned since the 1980s. Indoor air is the most direct and important environmental medium for human beings; thus, research on PCB pollution in indoor air is important for the protection of human health. This paper introduces the industrial applications and potential harm of PCBs, summarizes the sampling, extraction, and analytical methods used in environmental monitoring, and compares indoor air levels in urban areas with those in industrial areas in different countries according to various reports. It can provide a basic summary for the control of PCB pollution in the indoor air environment; the review of PCB pollution in indoor air in China is still limited.

  5. The role of atomic absorption spectrometry in geochemical exploration

    USGS Publications Warehouse

    Viets, J.G.; O'Leary, R. M.

    1992-01-01

    In this paper we briefly describe the principles of atomic absorption spectrometry (AAS) and the basic hardware components necessary to make measurements of analyte concentrations. Then we discuss a variety of methods that have been developed for the introduction of analyte atoms into the light path of the spectrophotometer. This section deals with sample digestion, elimination of interferences, and optimum production of ground-state atoms, all critical considerations when choosing an AAS method. Other critical considerations are cost, speed, simplicity, precision, and applicability of the method to the wide range of materials sampled in geochemical exploration. We cannot attempt to review all of the AAS methods developed for geological materials but instead will restrict our discussion to some of those appropriate for geochemical exploration. Our background and familiarity are reflected in the methods we discuss, and we have no doubt overlooked many good methods. Our discussion should therefore be considered a starting point in finding the right method for the problem, rather than the end of the search. Finally, we discuss the future of AAS relative to other instrumental techniques and the promising new directions for AAS in geochemical exploration. ?? 1992.

  6. Quality-assurance results for routine water analyses in U.S. Geological Survey laboratories, water year 1998

    USGS Publications Warehouse

    Ludtke, Amy S.; Woodworth, Mark T.; Marsh, Philip S.

    2000-01-01

    The U.S. Geological Survey operates a quality-assurance program based on the analyses of reference samples for two laboratories: the National Water Quality Laboratory and the Quality of Water Service Unit. Reference samples that contain selected inorganic, nutrient, and low-level constituents are prepared and submitted to the laboratory as disguised routine samples. The program goal is to estimate precision and bias for as many analytical methods offered by the participating laboratories as possible. Blind reference samples typically are submitted at a rate of 2 to 5 percent of the annual environmental-sample load for each constituent. The samples are distributed to the laboratories throughout the year. The reference samples are subject to the identical laboratory handling, processing, and analytical procedures as those applied to environmental samples and, therefore, have been used as an independent source to verify bias and precision of laboratory analytical methods and ambient water-quality measurements. The results are stored permanently in the National Water Information System and the Blind Sample Project's data base. During water year 1998, 95 analytical procedures were evaluated at the National Water Quality Laboratory and 63 analytical procedures were evaluated at the Quality of Water Service Unit. An overall evaluation of the inorganic and low-level constituent data for water year 1998 indicated 77 of 78 analytical procedures at the National Water Quality Laboratory met the criteria for precision. Silver (dissolved, inductively coupled plasma-mass spectrometry) was determined to be imprecise. Five of 78 analytical procedures showed bias throughout the range of reference samples: chromium (dissolved, inductively coupled plasma-atomic emission spectrometry), dissolved solids (dissolved, gravimetric), lithium (dissolved, inductively coupled plasma-atomic emission spectrometry), silver (dissolved, inductively coupled plasma-mass spectrometry), and zinc (dissolved, inductively coupled plasma-mass spectrometry). At the National Water Quality Laboratory during water year 1998, lack of precision was indicated for 2 of 17 nutrient procedures: ammonia as nitrogen (dissolved, colorimetric) and orthophosphate as phosphorus (dissolved, colorimetric). Bias was indicated throughout the reference sample range for ammonia as nitrogen (dissolved, colorimetric, low level) and nitrate plus nitrite as nitrogen (dissolved, colorimetric, low level). All analytical procedures tested at the Quality of Water Service Unit during water year 1998 met the criteria for precision. One of the 63 analytical procedures indicated a bias throughout the range of reference samples: aluminum (whole-water recoverable, inductively coupled plasma-atomic emission spectrometry, trace).

  7. Quantitative assessment of prevalence of pre-analytical variables and their effect on coagulation assay. Can intervention improve patient safety?

    PubMed

    Bhushan, Ravi; Sen, Arijit

    2017-04-01

    Very few Indian studies exist on the evaluation of pre-analytical variables affecting prothrombin time, the commonest coagulation assay performed. The study was performed in an Indian tertiary care setting with the aim of quantitatively assessing the prevalence of pre-analytical variables and their effects on results (patient safety) for the prothrombin time test, and of evaluating whether intervention corrected the results. The study first evaluated the prevalence of the various pre-analytical variables detected in samples sent for prothrombin time testing. Samples with detected variables were tested wherever possible and the results noted. Samples from the same patients were then repeated and retested, ensuring that no pre-analytical variable was present, and the results were noted again to check the difference the intervention produced. The study evaluated 9989 samples received for PT/INR over a period of 18 months. The overall prevalence of pre-analytical variables was 862 (8.63%). The proportions of the various pre-analytical variables detected were: haemolysed samples, 515 (5.16%); overfilled vacutainers, 62 (0.62%); underfilled vacutainers, 39 (0.39%); low values, 205 (2.05%); clotted samples, 11 (0.11%); wrong labelling, 4 (0.04%); wrong vacutainer use, 2 (0.02%); chylous samples, 7 (0.07%); and samples with more than one variable, 17 (0.17%). Error percentages were compared for the first four variables, since these could be tested with and without the variable in place. The reduction in error percentage post-intervention was 91.5%, 69.2%, 81.5% and 95.4% for haemolysed samples, overfilled vacutainers, underfilled vacutainers and samples collected with excess pressure at phlebotomy, respectively. Correcting the variables reduced the error percentage to a great extent for these four variables; hence these variables affect prothrombin time testing and can compromise patient safety.

  8. Analytical and experimental studies on detection of longitudinal, L and inverted T cracks in isotropic and bi-material beams based on changes in natural frequencies

    NASA Astrophysics Data System (ADS)

    Ravi, J. T.; Nidhan, S.; Muthu, N.; Maiti, S. K.

    2018-02-01

    An analytical method for determination of dimensions of longitudinal crack in monolithic beams, based on frequency measurements, has been extended to model L and inverted T cracks. Such cracks including longitudinal crack arise in beams made of layered isotropic or composite materials. A new formulation for modelling cracks in bi-material beams is presented. Longitudinal crack segment sizes, for L and inverted T cracks, varying from 2.7% to 13.6% of length of Euler-Bernoulli beams are considered. Both forward and inverse problems have been examined. In the forward problems, the analytical results are compared with finite element (FE) solutions. In the inverse problems, the accuracy of prediction of crack dimensions is verified using FE results as input for virtual testing. The analytical results show good agreement with the actual crack dimensions. Further, experimental studies have been done to verify the accuracy of the analytical method for prediction of dimensions of three types of crack in isotropic and bi-material beams. The results show that the proposed formulation is reliable and can be employed for crack detection in slender beam like structures in practice.

  9. An improved 3D MoF method based on analytical partial derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2016-12-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among the various surface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate surface, so for efficiency and accuracy the partial derivatives of the objective function have to be used during the iteration. However, to the best of our knowledge, the derivatives are currently estimated numerically by finite-difference approximation, because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost, but also deteriorates the convergence rate and robustness of the iteration due to their numerical error. In this paper, the analytical first-order partial derivatives of the objective function are derived for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency and robustness. Numerical studies show that using the analytical derivatives makes the iterations converge in all mixed cells, with an efficiency improvement of 3 to 4 times.
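
    The cost of numerical versus analytical derivatives is easy to demonstrate on a stand-in objective. The sketch below compares SciPy's BFGS with and without a supplied gradient on the Rosenbrock function; this is not the MoF objective, merely an illustration of why analytical derivatives reduce function evaluations.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Rosenbrock stand-in for an implicit optimization objective.
    def f(p):
        x, y = p
        return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

    # Analytical gradient of the same objective.
    def grad_f(p):
        x, y = p
        return np.array([-2 * (1 - x) - 400 * x * (y - x ** 2),
                         200 * (y - x ** 2)])

    x0 = np.array([-1.2, 1.0])
    res_fd = minimize(f, x0, method="BFGS")              # finite differences
    res_an = minimize(f, x0, method="BFGS", jac=grad_f)  # analytical gradient

    print("finite-difference:", res_fd.nfev, "function evaluations")
    print("analytical:       ", res_an.nfev, "function evaluations")
    ```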

  10. An overview of the characterization of occupational exposure to nanoaerosols in workplaces

    NASA Astrophysics Data System (ADS)

    Castellano, Paola; Ferrante, Riccardo; Curini, Roberta; Canepari, Silvia

    2009-05-01

    Currently, there is a lack of standardized sampling and metric methods that can be applied to measure the level of exposure to nanosized aerosols. Therefore, any attempt to characterize exposure to nanoparticles (NP) in a workplace must involve a multifaceted approach combining different sampling and analytical techniques to measure all relevant characteristics of NP exposure. Furthermore, as NP aerosols are always complex mixtures of multiple origins, sampling and analytical methods need to be improved to selectively evaluate the apportionment from specific sources to the final nanomaterials. An open question worldwide is how to relate specific toxic effects of NP to one or more of several different parameters (such as particle size, mass, composition, surface area, number concentration, aggregation or agglomeration state, water solubility and surface chemistry). As the evaluation of occupational exposure to NP in workplaces requires dimensional and chemical characterization, the main problem is the choice of the sampling and dimensional separation techniques. A convenient approach that would allow a satisfactory risk assessment could therefore be the simultaneous use of different sampling and measuring techniques for particles with known toxicity in selected workplaces. Despite the lack of specific NP exposure limit values, exposure metrics appropriate to nanoaerosols are discussed in the Technical Report ISO/TR 27628:2007 with the aim of enabling occupational hygienists to characterize and monitor nanoaerosols in workplaces. Moreover, NIOSH has developed the document Approaches to Safe Nanotechnology (intended as an information exchange with NIOSH) in order to address current and future research needs for understanding the potential risks that nanotechnology may pose to workers.

  11. Resolving Identification Issues of Saraca asoca from Its Adulterant and Commercial Samples Using Phytochemical Markers

    PubMed Central

    Hegde, Satisha; Hegde, Harsha Vasudev; Jalalpure, Sunil Satyappa; Peram, Malleswara Rao; Pai, Sandeep Ramachandra; Roy, Subarna

    2017-01-01

    Saraca asoca (Roxb.) De Wilde (Ashoka) is a highly valued, endangered medicinal tree species from the Western Ghats of India. Besides treating cardiac and circulatory problems, S. asoca provides immense relief in gynecological disorders. Its high price and demand, in contrast to the small population size of the plant, have motivated adulteration with other plants such as Polyalthia longifolia (Sonnerat) Thwaites. The fundamental concerns in quality control of S. asoca arise from the plant part of medicinal value (bark) and its chemical composition. Phytochemical fingerprinting with proper selection of analytical markers is a promising method for addressing quality control issues. In the present study, high-performance liquid chromatography of phenolic compounds (gallic acid, catechin, and epicatechin) coupled to multivariate analysis was used. Five samples each of S. asoca and P. longifolia from two localities were analyzed alongside five commercial market samples, which showed evidence of adulteration. Subsequently, multivariate hierarchical cluster analysis and principal component analysis were applied to discriminate the adulterants of S. asoca. The proposed method ascertains identification of S. asoca from its putative adulterant P. longifolia and from commercial market samples. The data generated may also serve as baseline data for forming a quality standard for pharmacopoeias. SUMMARY: Simultaneous quantification of gallic acid, catechin and epicatechin from Saraca asoca by high-performance liquid chromatography. Detection of S. asoca from adulterant and commercial samples. Use of an analytical method along with statistical tools for addressing quality issues. Abbreviations used: HPLC: High-Performance Liquid Chromatography; RP-HPLC: Reverse-Phase High-Performance Liquid Chromatography; CAT: Catechin; EPI: Epicatechin; GA: Gallic acid; PCA: Principal Component Analysis. PMID:28808391

  12. Application of liquid-liquid-liquid microextraction and high-performance liquid chromatography for the determination of alkylphenols and bisphenol-A in water.

    PubMed

    Lin, Che-Yi; Fuh, Ming-Ren; Huang, Shang-Da

    2011-02-01

    A method termed liquid-liquid-liquid microextraction (LLLME) was utilized to extract 4-t-butylphenol, 4-t-octylphenol, 4-n-nonylphenol, and bisphenol-A from water. The extracted target analytes were separated and quantified by high-performance liquid chromatography using a fluorescence detector. In LLLME, the donor phase (i.e. water sample) was made weakly acidic by adding monobasic potassium phosphate (KH2PO4); the organic phase adopted was 4-chlorotoluene; the acceptor phase (i.e. enriched extract) was 0.2 M tetraethylammonium hydroxide dissolved in ethylene glycol. This study solves a problem associated with the surface activity of long-chain alkylphenolate ions, permitting LLLME to extract long-chain alkylphenols. Experimental conditions such as acceptor phase composition, organic phase identity, acceptor phase volume, sample agitation, extraction time, and salt addition were optimized. The relative standard deviation (RSD, 2.0-5.8%), coefficient of determination (r2, 0.9977-0.9999), and detection limit (0.017-0.0048 ng/mL) of the proposed method were achieved under the selected optimized conditions. The method was successfully applied to analyses of lake and tap water samples, and the relative recoveries of target analytes from the spiked lake and tap water samples were 92.8-106.3 and 93.6-105.6%, respectively. The results obtained with the proposed method confirm this microextraction technique to be reliable for the monitoring of alkylphenols and bisphenol-A in water samples. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Heat Transfer Analysis of Thermal Protection Structures for Hypersonic Vehicles

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Wang, Zhijin; Hou, Tianjiao

    2017-11-01

    This research aims to develop an analytical approach to study the heat transfer problem of thermal protection systems (TPS) for hypersonic vehicles. Laplace transform and integral method are used to describe the temperature distribution through the TPS subject to aerodynamic heating during flight. Time-dependent incident heat flux is also taken into account. Two different cases with heat flux and radiation boundary conditions are studied and discussed. The results are compared with those obtained by finite element analyses and show a good agreement. Although temperature profiles of such problems can be readily accessed via numerical simulations, analytical solutions give a greater insight into the physical essence of the heat transfer problem. Furthermore, with the analytical approach, rapid thermal analyses and even thermal optimization can be achieved during the preliminary TPS design.
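
    For a flavor of the Laplace-transform route, the classical constant-flux semi-infinite solid (a textbook special case, not the paper's exact TPS geometry or its radiation boundary condition) has the closed-form solution below.

    ```latex
    % Semi-infinite solid, constant surface heat flux q_0: the classical
    % case illustrating the Laplace-transform approach that the paper
    % extends to time-dependent flux and radiation boundaries.
    \[
      \frac{\partial T}{\partial t} = \alpha \frac{\partial^2 T}{\partial x^2},
      \qquad
      -k \left.\frac{\partial T}{\partial x}\right|_{x=0} = q_0,
      \qquad
      T(x,0) = T_0 .
    \]
    Transforming in time turns the PDE into an ODE in $x$; inverting the
    transform gives
    \[
      T(x,t) - T_0
      = \frac{2 q_0}{k} \sqrt{\alpha t}\,
        \operatorname{ierfc}\!\left(\frac{x}{2\sqrt{\alpha t}}\right),
      \qquad
      \operatorname{ierfc}(z)
      = \frac{e^{-z^2}}{\sqrt{\pi}} - z\,\operatorname{erfc}(z).
    \]
    ```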

  14. EPA Tools and Resources Webinar: EPA’s Environmental Sampling and Analytical Methods for Environmental Remediation and Recovery

    EPA Pesticide Factsheets

    EPA’s Environmental Sampling and Analytical Methods (ESAM) is a website tool that supports the entire environmental characterization process from collection of samples all the way to their analyses.

  15. STATISTICAL ANALYSIS OF TANK 5 FLOOR SAMPLE RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E.

    2012-03-14

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, radionuclide, inorganic, and anion concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed in Appendix A, and the results of this analysis are reported in Appendix B. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.
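
    For normally distributed data such as these, the Student's-t upper confidence limit is the standard UCL95; a minimal sketch follows, with invented replicate values, since the report's full procedures follow Singh and others [2010] in detail not reproduced here.

    ```python
    import numpy as np
    from scipy import stats

    def ucl95_normal(x):
        """One-sided 95% upper confidence limit on the mean, assuming
        approximately normal data (the Student's-t UCL recommended by
        EPA guidance for normally distributed concentrations)."""
        x = np.asarray(x, dtype=float)
        n = x.size
        t = stats.t.ppf(0.95, df=n - 1)
        return x.mean() + t * x.std(ddof=1) / np.sqrt(n)

    # three replicate measurements per composite sample, as in the report
    measurements = np.array([12.1, 13.4, 11.8, 12.9, 13.1, 12.5,
                             11.9, 12.7, 13.0])  # illustrative values
    print(f"mean = {measurements.mean():.2f}, "
          f"UCL95 = {ucl95_normal(measurements):.2f}")
    ```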

  16. Statistical Analysis of Tank 5 Floor Sample Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E. P.

    2013-01-31

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed, and the results of this analysis are reported. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.

  17. Statistical Analysis Of Tank 5 Floor Sample Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E. P.

    2012-08-01

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed in Appendix A, and the results of this analysis are reported in Appendix B. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.

  18. System automatically supplies precise analytical samples of high-pressure gases

    NASA Technical Reports Server (NTRS)

    Langdon, W. M.

    1967-01-01

    High-pressure-reducing and flow-stabilization system delivers analytical gas samples from a gas supply. The system employs parallel capillary restrictors for pressure reduction and downstream throttling valves for flow control. It is used in conjunction with a sampling valve and minimizes alterations of the sampled gas.

  19. NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR VOCS IN BLANKS

    EPA Science Inventory

    This data set includes analytical results for measurements of VOCs in 88 blank samples. Measurements were made for up to 23 VOCs in blank samples of air, water, and blood. Blank samples were used to assess the potential for sample contamination during collection, storage, shipmen...

  20. NHEXAS PHASE I ARIZONA STUDY--QA ANALYTICAL RESULTS FOR PESTICIDES IN BLANK SAMPLES

    EPA Science Inventory

    The Pesticides in Blank Samples data set contains the analytical results of measurements of up to 4 pesticides in 43 blank samples from 29 households. Measurements were made in blank samples of dust, indoor and outdoor air, food and beverages, blood, urine, and dermal wipe resid...

  1. NHEXAS PHASE I MARYLAND STUDY--METALS IN DERMAL WIPES ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Dermal Wipe Samples data set contains analytical results for measurements of up to 4 metals in 343 dermal wipe samples over 80 households. Each sample was collected from the primary respondent within each household. The sampling period occurred on the first day of...

  2. NHEXAS PHASE I MARYLAND STUDY--QA ANALYTICAL RESULTS FOR METALS IN REPLICATE SAMPLES

    EPA Science Inventory

    The Metals in Replicates data set contains the analytical results of measurements of up to 11 metals in 88 replicate (duplicate) samples from 52 households. Measurements were made in samples of indoor and outdoor air, drinking water, food, and beverages. Duplicate samples for a...

  3. NHEXAS PHASE I ARIZONA STUDY--METALS IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Blood data set contains analytical results for measurements of up to 2 metals in 165 blood samples over 165 households. Each sample was collected as a venous sample from the primary respondent within each household during Stage III of the NHEXAS study. The samples...

  4. NHEXAS PHASE I MARYLAND STUDY--METALS IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Blood data set contains analytical results for measurements of up to 2 metals in 374 blood samples over 80 households. Each sample was collected via a venous sample from the primary respondent within each household by a phlebotomist. Samples were generally drawn o...

  5. NHEXAS PHASE I MARYLAND STUDY--QA ANALYTICAL RESULTS FOR PESTICIDE METABOLITES IN BLANKS

    EPA Science Inventory

    The Pesticide Metabolites in Blanks data set contains the analytical results of measurements of up to 4 pesticide metabolites in 14 blank samples from 13 households. Measurements were made in blank samples of urine. Blank samples were used to assess the potential for sample con...

  6. The interpretation of hair analysis for drugs and drug metabolites.

    PubMed

    Cuypers, Eva; Flanagan, Robert J

    2018-02-01

    Head hair analysis for drugs and drug metabolites has been used widely with the aim of detecting exposure in the weeks or months prior to sample collection. However, inappropriate interpretation of results has likely led to serious miscarriages of justice, especially in child custody cases. The aim of this review is to assess critically what can, and perhaps more importantly, what cannot be claimed as regards the interpretation of hair test results in a given set of circumstances in order to inform future testing. We searched the PubMed database for papers published 2010-2016 using the terms "hair" and "drug" and "decontamination", the terms "hair" and "drug" and "contamination", the terms "hair" and "drug-facilitated crime", the terms "hair" and "ethyl glucuronide", and the terms "hair", "drug testing" and "analysis". Study of the reference lists of the 46 relevant papers identified 25 further relevant citations, giving a total of 71 citations. Hair samples: Drugs, drug metabolites and/or decomposition products may arise not only from deliberate drug administration, but also via deposition from a contaminated atmosphere if drug(s) have been smoked or otherwise vaporized in a confined area, transfer from contaminated surfaces via food/fingers, etc., and transfer from sweat and other secretions after a single large exposure, which could include anesthesia. Excretion in sweat of endogenous analytes such as γ-hydroxybutyric acid is a potential confounder if its use is to be investigated. Cosmetic procedures such as bleaching or heat treatment of hair may remove analytes prior to sample collection. Hair color and texture, the area of the head the sample is taken from, the growth rate of individual hairs, and how the sample has been stored, may also affect the interpretation of results. Toxicological analysis: Immunoassay results alone do not provide reliable evidence on which to base judicial decisions. Gas or liquid chromatography with mass spectrometric detection (GC- or LC-MS), if used with due caution, can give accurate analyte identification and high sensitivity, but many problems remain. Firstly, it is not possible to prepare assay calibrators or quality control material except by soaking "blank" hair in solutions of appropriate analytes, drying, and then subjecting the dried material to analysis. The fact that solvents can be used to add analytes to hair shows that analytes can arrive not only on, but also in, hair from exogenous sources. A range of solvent-washing procedures has been advocated to "decontaminate" hair by removing adsorbed analytes, but these carry the risk of transporting adsorbed analytes into the medulla of the hair, thereby confounding the whole procedure. This is especially true if segmental analysis is being undertaken in order to provide a "time course" of drug exposure. Proposed clinical applications of hair analysis: There have been a number of reports where drugs seemingly administered during the perpetration of a crime have been detected in head hair. However, detailed evaluation of these reports is difficult without full understanding of the possible effects of any "decontamination" procedures used and of other variables such as hair color or cosmetic hair treatment.
Similarly, in child custody cases and where the aim is to demonstrate abstinence from drug or alcohol use, the issues of possible exogenous sources of analyte, and of the large variations in analyte concentrations reported in known users, continue to confound the interpretation of results in individual cases. Interpretation of results of head hair analysis must take into account all the available circumstantial and other evidence especially as regards the methodology employed and the possibility of surface contamination of the hair prior to collection.

  7. New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous-energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe two new tools for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems, simple_ace.pl and simple_ace_mg.pl.
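
    As a hedged illustration of the kind of verification these analytic test problems support: with one-group cross sections, the infinite-medium multiplication factor has the exact closed form k_inf = ν·Σf/Σa, so a Monte Carlo estimate can be checked directly against it. The constants and the code output below are made up:

```python
# One-group macroscopic constants (illustrative, 1/cm)
nu, sigma_f, sigma_a = 2.5, 0.05, 0.10

k_inf_analytic = nu * sigma_f / sigma_a  # exact one-group result
k_inf_mc = 1.2497                        # hypothetical Monte Carlo output

rel_err = (k_inf_mc - k_inf_analytic) / k_inf_analytic
print(f"analytic k_inf = {k_inf_analytic:.4f}, relative error = {rel_err:.2e}")
```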

  8. The U.S. Geological Survey coal quality (COALQUAL) database version 3.0

    USGS Publications Warehouse

    Palmer, Curtis A.; Oman, Charles L.; Park, Andy J.; Luppens, James A.

    2015-12-21

    Because of database size limits during the development of COALQUAL Version 1.3, many analyses of individual bench samples were merged into whole coal bed averages. The methodology for making these composite intervals was not consistent. Size limits also restricted the amount of georeferencing information and forced removal of qualifier notations such as "less than detection limit" (<) information, which can cause problems when using the data. A review of the original data sheets revealed that COALQUAL Version 2.0 was missing information that was needed for a complete understanding of a coal section. Another important database issue to resolve was the USGS "remnant moisture" problem. Prior to 1998, tests for remnant moisture (as-determined moisture in the sample at the time of analysis) were not performed on any USGS major, minor, or trace element coal analyses. Without the remnant moisture, it is impossible to convert the analyses to a usable basis (as-received, dry, etc.). Based on remnant moisture analyses of hundreds of samples of different ranks (and known residual moisture) reported after 1998, it was possible to develop a method to provide reasonable estimates of remnant moisture for older data to make it more useful in COALQUAL Version 3.0. In addition, COALQUAL Version 3.0 is improved by (1) adding qualifiers, including statistical programming to deal with the qualifiers; (2) clarifying the sample compositing problems; and (3) adding associated samples. Version 3.0 of COALQUAL also represents the first attempt to incorporate data verification by mathematically crosschecking certain analytical parameters. Finally, a new database system was designed and implemented to replace the outdated DOS program used in earlier versions of the database.

  9. An efficient adaptive sampling strategy for global surrogate modeling with applications in multiphase flow simulation

    NASA Astrophysics Data System (ADS)

    Mo, S.; Lu, D.; Shi, X.; Zhang, G.; Ye, M.; Wu, J.

    2016-12-01

    Surrogate models have shown remarkable computational efficiency in hydrological simulations involving design space exploration, sensitivity analysis, uncertainty quantification, etc. The central task in constructing a global surrogate model is to achieve a prescribed approximation accuracy with as few original model executions as possible, which requires a good design strategy to optimize the distribution of data points in the parameter domains and an effective stopping criterion to automatically terminate the design process when the desired approximation accuracy is achieved. This study proposes a novel adaptive sampling strategy, which starts from a small number of initial samples and adaptively selects additional samples by balancing collection in unexplored regions and refinement in interesting areas. We define an efficient and effective evaluation metric based on a Taylor expansion to select the most promising potential samples from candidate points, and propose a robust stopping criterion based on the approximation accuracy at new points to guarantee the achievement of the desired accuracy. The numerical results of several benchmark analytical functions indicate that the proposed approach is more computationally efficient and robust than the widely used maximin distance design and two other well-known adaptive sampling strategies. The application to two complicated multiphase flow problems further demonstrates the efficiency and effectiveness of our method in constructing global surrogate models for high-dimensional and highly nonlinear problems. Acknowledgements: This work was financially supported by the National Natural Science Foundation of China grants No. 41030746 and 41172206.
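
    For contrast with the proposed strategy, a minimal sketch of the maximin-distance baseline mentioned in the abstract: each new sample is the candidate point farthest from all existing design points. The function name and data here are illustrative assumptions:

```python
import numpy as np

def maximin_next_sample(design, candidates):
    """Return the candidate with the largest minimum distance to the design."""
    d = np.linalg.norm(candidates[:, None, :] - design[None, :, :], axis=-1)
    return candidates[np.argmax(d.min(axis=1))]

rng = np.random.default_rng(0)
design = rng.uniform(size=(5, 2))        # existing sample locations in 2-D
candidates = rng.uniform(size=(200, 2))  # candidate pool
print(maximin_next_sample(design, candidates))
```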

  10. Hasse diagram as a green analytical metrics tool: ranking of methods for benzo[a]pyrene determination in sediments.

    PubMed

    Bigus, Paulina; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek; Tobiszewski, Marek

    2016-05-01

    This study presents an application of the Hasse diagram technique (HDT) as an assessment tool for selecting the most appropriate analytical procedures according to their greenness or the best analytical performance. The dataset consists of analytical procedures for benzo[a]pyrene determination in sediment samples, which were described by 11 variables concerning their greenness and analytical performance. Two analyses with the HDT were performed: the first one with metrological variables and the second one with "green" variables as input data. Both HDT analyses ranked different analytical procedures as the most valuable, suggesting that green analytical chemistry is not in accordance with metrology when benzo[a]pyrene in sediment samples is determined. The HDT can be used as a good decision support tool to choose the proper analytical procedure concerning green analytical chemistry principles and analytical performance merits.
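
    A minimal sketch of the partial-order test underlying the Hasse diagram technique: one procedure dominates another if it is at least as good on every variable and strictly better on at least one. Variables are assumed to be oriented so that larger is better; the procedure names and scores are illustrative, not the paper's dataset:

```python
def dominates(a, b):
    """True if profile a is >= b everywhere and > b somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Illustrative greenness/performance scores for three hypothetical procedures
procedures = {
    "procedure A": (3, 2, 4),
    "procedure B": (2, 2, 3),
    "procedure C": (3, 3, 4),
}
for p, pa in procedures.items():
    for q, qa in procedures.items():
        if p != q and dominates(pa, qa):
            print(f"{p} dominates {q}")
```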

  11. Curriculum Innovation for Marketing Analytics

    ERIC Educational Resources Information Center

    Wilson, Elizabeth J.; McCabe, Catherine; Smith, Robert S.

    2018-01-01

    College graduates need better preparation for and experience in data analytics for higher-quality problem solving. Using the curriculum innovation framework of Borin, Metcalf, and Tietje (2007) and case study research methods, we offer rich insights about one higher education institution's work to address the marketing analytics skills gap.…

  12. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--QA ANALYTICAL RESULTS FOR METALS IN BLANK SAMPLES

    EPA Science Inventory

    The Metals in Blank Samples data set contains the analytical results of measurements of up to 27 metals in 52 blank samples. Measurements were made in blank samples of dust, indoor air, food, water, and dermal wipe residue. Blank samples were used to assess the potential for sa...

  13. Analytical test results for archived core composite samples from tanks 241-TY-101 and 241-TY-103

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beck, M.A.

    1993-07-16

    This report describes the analytical tests performed on archived core composite samples from a 1985 sampling of the 241-TY-101 (101-TY) and 241-TY-103 (103-TY) single-shell waste tanks. Both tanks are suspected of containing quantities of ferrocyanide compounds, as a result of process activities in the late 1950s. Although limited quantities of the composite samples remained, attempts were made to obtain as much analytical information as possible, especially regarding the chemical and thermal properties of the material.

  14. Identifying and prioritizing the preference criteria using analytical hierarchical process for a student-lecturer allocation problem of internship programme

    NASA Astrophysics Data System (ADS)

    Faudzi, Syakinah; Abdul-Rahman, Syariza; Rahman, Rosshairy Abd; Hew, Jafri Hj. Zulkepli

    2016-10-01

    This paper discusses identifying and prioritizing students' preference criteria for supervisors using the Analytical Hierarchical Process (AHP) in a student-lecturer allocation problem for an internship programme. Typically, a large number of students undertake an internship every semester, and many preference criteria may be involved when assigning students to lecturers for supervision. Thus, identifying and prioritizing these preference criteria is critically needed, especially when many preferences are involved. The AHP technique is used to prioritize seven criteria: capacity, specialization, academic position, availability, professional support, relationship and gender. Students' preference alternatives are classified by the lecturer's academic position: lecturer, senior lecturer, associate professor and professor. Criteria are ranked to find the preference criteria and supervisor alternatives that students most prefer. The problem is solved using the Expert Choice 11 software. A sample of 30 respondents from semester 6 and above was randomly selected to participate in the study. Using a questionnaire to collect the students' data, a consistency index was produced to validate the proposed study. Findings showed that the most important preference criterion is professional support, followed by specialization, availability, relationship, gender, academic position and capacity. This study found that students would like a supportive supervisor, because a lack of supervision can lead students to achieve low grades and gain little knowledge from the internship session.
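
    A minimal sketch of the AHP calculation described above: criterion weights are taken from the principal eigenvector of a pairwise comparison matrix, and the consistency index CI = (λmax - n)/(n - 1) validates the judgments. The 3×3 matrix below is an illustrative stand-in, not the study's survey data:

```python
import numpy as np

# Pairwise comparison matrix for three hypothetical criteria (Saaty scale)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # priority weights, sum to 1
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)  # consistency index
print("weights:", w.round(3), " CI:", round(CI, 4))
```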

  15. Semi-Supervised Marginal Fisher Analysis for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Huang, H.; Liu, J.; Pan, Y.

    2012-07-01

    The problem of learning with both labeled and unlabeled examples arises frequently in hyperspectral image (HSI) classification. Marginal Fisher analysis is a supervised method, however, and cannot be directly applied to semi-supervised classification. In this paper, we propose a novel method, called semi-supervised marginal Fisher analysis (SSMFA), to process HSI of natural scenes; it uses a combination of semi-supervised learning and manifold learning. In SSMFA, a new difference-based optimization objective function with unlabeled samples has been designed. SSMFA preserves the manifold structure of labeled and unlabeled samples in addition to separating labeled samples in different classes from each other. The semi-supervised method has an analytic form of the globally optimal solution, which can be computed by eigendecomposition. Classification experiments on a challenging HSI task demonstrate that this method outperforms current state-of-the-art HSI-classification methods.
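
    A hedged sketch of the closed-form solution step mentioned above: graph-embedding objectives of this family reduce to a generalized eigenproblem A·w = λ·B·w between a compactness scatter and a separability scatter, solvable by eigendecomposition. The matrices below are random stand-ins, not an SSMFA implementation:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 6))
A = X @ X.T                     # stand-in "compactness" scatter (PSD)
Y = rng.normal(size=(6, 6))
B = Y @ Y.T + 1e-3 * np.eye(6)  # stand-in "separability" scatter (PD)

vals, vecs = eigh(A, B)         # generalized symmetric eigenproblem
print(vals[:3])                 # smallest eigenvalues give the projection
```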

  16. An Improved Computational Technique for Calculating Electromagnetic Forces and Power Absorptions Generated in Spherical and Deformed Body in Levitation Melting Devices

    NASA Technical Reports Server (NTRS)

    Zong, Jin-Ho; Szekely, Julian; Schwartz, Elliot

    1992-01-01

    An improved computational technique for calculating the electromagnetic force field, the power absorption and the deformation of an electromagnetically levitated metal sample is described. The technique is based on the volume integral method, but represents a substantial refinement; the coordinate transformation employed allows the efficient treatment of a broad class of rotationally symmetrical bodies. Computed results are presented to represent the behavior of levitation-melted metal samples in a multi-coil, multi-frequency levitation unit to be used in microgravity experiments. The theoretical predictions are compared with both analytical solutions and with the results of previous computational efforts for spherical samples, and the agreement has been very good. The treatment of problems involving deformed surfaces, actually predicting the deformed shape of the specimens, breaks new ground and should be the principal use of the proposed method.

  17. Advances in spectroscopic methods for quantifying soil carbon

    USGS Publications Warehouse

    Liebig, Mark; Franzluebbers, Alan J.; Follett, Ronald F.; Hively, W. Dean; Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco

    2012-01-01

    The gold standard for soil C determination is combustion. However, this method requires expensive consumables, is limited to the determination of total carbon, and is limited in the number of samples that can be processed (~100/d). With increased interest in soil C sequestration, faster methods are needed; hence the interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared ranges using either proximal or remote sensing. These methods can analyze more samples (2 to 3 times as many per day) or huge areas (imagery) and measure multiple analytes simultaneously, but they require calibrations relating spectral and reference data and have specific problems; e.g., remote sensing is capable of scanning entire watersheds, thus reducing the sampling needed, but is limited to the surface layer of tilled soils and by the difficulty of obtaining proper calibration reference values. The objective of this discussion is to present the state of spectroscopic methods for soil C determination.
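
    A minimal sketch of the calibration step these spectroscopic methods require, relating spectra to reference soil-C values with partial least squares; PLS is a common choice for such data rather than the specific model of this paper, and the spectra below are synthetic:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
spectra = rng.normal(size=(60, 200))                            # 60 samples x 200 bands
soil_c = 2.0 * spectra[:, 40] + rng.normal(scale=0.1, size=60)  # synthetic reference values

# Calibrate on 40 samples, validate on the held-out 20
pls = PLSRegression(n_components=5).fit(spectra[:40], soil_c[:40])
print("held-out R^2:", round(pls.score(spectra[40:], soil_c[40:]), 2))
```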

  18. Resolution of matrix effects on analysis of total and methyl mercury in aqueous samples from the Florida Everglades

    USGS Publications Warehouse

    Olson, M.L.; Cleckner, L.B.; Hurley, J.P.; Krabbenhoft, D.P.; Heelan, T.W.

    1997-01-01

    Aqueous samples from the Florida Everglades present several problems for the analysis of total mercury (HgT) and methyl mercury (MeHg). Constituents such as dissolved organic carbon (DOC) and sulfide at selected sites present particular challenges due to interferences with standard analytical techniques. This is manifested by 1) the inability to discern when bromine monochloride (BrCl) addition is sufficient for sample oxidation for HgT analysis; and 2) incomplete spike recoveries using the distillation/ethylation technique for MeHg analysis. Here, we suggest ultra-violet (UV) oxidation prior to addition of BrCl to ensure total oxidation of DOC prior to HgT analysis and copper sulfate (CuSO4) addition to aid in distillation in the presence of sulfide for MeHg analysis. Despite high chloride (Cl-) levels, we observed no effects on MeHg distillation/ethylation analyses. © Springer-Verlag 1997.

  19. SAM Radiochemical Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target radiochemical analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select radiochemical analytes.

  20. NHEXAS PHASE I REGION 5 STUDY--METALS IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    This data set includes analytical results for measurements of metals in 165 blood samples. These samples were collected to examine the relationships between personal exposure measurements, environmental measurements, and body burden. Venous blood samples were collected by venipun...

  1. NHEXAS PHASE I REGION 5 STUDY--VOCS IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    This data set includes analytical results for measurements of VOCs (volatile organic compounds) in 145 blood samples. These samples were collected to examine the relationships between personal exposure measurements, environmental measurements, and body burden. Venous blood sample...

  2. Sample Collection Information Document for Chemical & Radiochemical Analytes – Companion to Selected Analytical Methods for Environmental Remediation and Recovery (SAM) 2012

    EPA Pesticide Factsheets

    This Sample Collection Information Document is intended to provide sampling information for use during site assessment, remediation, and clearance activities following a chemical or radiological contamination incident.

  3. Recent advances in computational-analytical integral transforms for convection-diffusion problems

    NASA Astrophysics Data System (ADS)

    Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.

    2017-10-01

    A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements on this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single domain reformulation strategy for handling complex geometries, an integral balance scheme in dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, which are related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement schemes and commercial or dedicated purely numerical approaches.

  4. Frechet derivatives for shallow water ocean acoustic inverse problems

    NASA Astrophysics Data System (ADS)

    Odom, Robert I.

    2003-04-01

    For any inverse problem, finding a model fitting the data is only half the problem. Most inverse problems of interest in ocean acoustics yield nonunique model solutions, and involve inevitable trade-offs between model and data resolution and variance. Problems of uniqueness and resolution and variance trade-offs can be addressed by examining the Frechet derivatives of the model-data functional with respect to the model variables. Tarantola [Inverse Problem Theory (Elsevier, Amsterdam, 1987), p. 613] published analytical formulas for the basic derivatives, e.g., derivatives of pressure with respect to elastic moduli and density. Other derivatives of interest, such as the derivative of transmission loss with respect to attenuation, can be easily constructed using the chain rule. For a range independent medium the analytical formulas involve only the Green's function and the vertical derivative of the Green's function for the medium. A crucial advantage of the analytical formulas for the Frechet derivatives over numerical differencing is that they can be computed with a single pass of any program which supplies the Green's function. Various derivatives of interest in shallow water ocean acoustics are presented and illustrated by an application to the sensitivity of measured pressure to shallow water sediment properties. [Work supported by ONR.]
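
    A hedged sketch of the chain-rule construction mentioned above: given the Frechet derivative dp/dm of complex pressure with respect to some medium parameter m, the derivative of transmission loss TL = -20·log10|p| follows analytically. The numbers are illustrative:

```python
import numpy as np

def dTL_dm(p, dp_dm):
    """Chain rule: d|p|/dm = Re(conj(p)*dp/dm)/|p|, then through -20*log10."""
    return -20.0 / np.log(10) * np.real(np.conj(p) * dp_dm) / np.abs(p) ** 2

p = 0.3 - 0.4j     # complex pressure at a receiver (illustrative)
dp = 0.01 + 0.02j  # Frechet derivative dp/dm (illustrative)
print(dTL_dm(p, dp))
```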

  5. A Protein Standard That Emulates Homology for the Characterization of Protein Inference Algorithms.

    PubMed

    The, Matthew; Edfors, Fredrik; Perez-Riverol, Yasset; Payne, Samuel H; Hoopmann, Michael R; Palmblad, Magnus; Forsström, Björn; Käll, Lukas

    2018-05-04

    A natural way to benchmark the performance of an analytical experimental setup is to use samples of known composition and see to what degree one can correctly infer the content of such a sample from the data. For shotgun proteomics, one of the inherent problems of interpreting data is that the measured analytes are peptides and not the actual proteins themselves. As some proteins share proteolytic peptides, there might be more than one possible causative set of proteins resulting in a given set of peptides, and there is a need for mechanisms that infer proteins from lists of detected peptides. A weakness of commercially available samples of known content is that they consist of proteins that are deliberately selected for producing tryptic peptides that are unique to a single protein. Unfortunately, such samples do not expose any complications in protein inference. Hence, for a realistic benchmark of protein inference procedures, there is a need for samples of known content where the present proteins share peptides with known absent proteins. Here, we present such a standard, based on E. coli-expressed human protein fragments. To illustrate the application of this standard, we benchmark a set of different protein inference procedures on the data. We observe that inference procedures excluding shared peptides provide more accurate estimates of errors compared to methods that include information from shared peptides, while still giving a reasonable performance in terms of the number of identified proteins. We also demonstrate that using a sample of known protein content without proteins with shared tryptic peptides can give a false sense of accuracy for many protein inference methods.

  6. Insufficient filling of vacuum tubes as a cause of microhemolysis and elevated serum lactate dehydrogenase levels. Use of a data-mining technique in evaluation of questionable laboratory test results.

    PubMed

    Tamechika, Yoshie; Iwatani, Yoshinori; Tohyama, Kaoru; Ichihara, Kiyoshi

    2006-01-01

    Experienced physicians noted unexpectedly elevated concentrations of lactate dehydrogenase in some patient samples, but quality control specimens showed no bias. To evaluate this problem, we used a "latent reference individual extraction method", designed to obtain reference intervals from a laboratory database by excluding individuals who have abnormal results for basic analytes other than the analyte in question, in this case lactate dehydrogenase. The reference interval derived for the suspected year was 264-530 U/L, while that of the previous year was 248-495 U/L. The only change we found was the introduction of an order entry system, which requests precise sampling volumes rather than complete filling of vacuum tubes. The effect of vacuum persistence was tested using ten freshly drawn blood samples. Compared with complete filling, 1/5 filling resulted in average elevations of lactate dehydrogenase, aspartate aminotransferase, and potassium levels of 8.0%, 3.8%, and 3.4%, respectively (all p<0.01). Microhemolysis was confirmed using a urine stick method. The length of time before centrifugation determined the degree of hemolysis, while vacuum during centrifugation did not affect it. Microhemolysis is the probable cause of the suspected pseudo-elevation noted by the physicians. Data-mining methodology represents a valuable tool for monitoring long-term bias in laboratory results.
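
    A minimal sketch of a latent-reference-individual style filter of the kind described: keep only records whose basic analytes fall within their reference limits, then take the central 95% of the target analyte. The limits and records below are illustrative assumptions, not the study's criteria:

```python
import numpy as np

def reference_interval(records, target, filters):
    """2.5th-97.5th percentile of `target` among records passing all filters."""
    kept = [r[target] for r in records
            if all(lo <= r[a] <= hi for a, (lo, hi) in filters.items())]
    return np.percentile(kept, [2.5, 97.5])

# Illustrative records: LDH is the target; AST and K are the basic analytes
records = [{"LDH": 300 + 30 * i, "AST": 25, "K": 4.1} for i in range(10)]
print(reference_interval(records, "LDH", {"AST": (10, 40), "K": (3.5, 5.1)}))
```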

  7. Report on the U.S. Geological Survey's evaluation program for standard reference samples distributed in October 1993 : T-127 (trace constituents), M-128 (major constituents), N-40 (nutrients), N-41 (nutrients), P-21 (low ionic strength), Hg-17 (mercury), AMW-3 (acid mine water), and WW-1 (whole water)

    USGS Publications Warehouse

    Long, H.K.; Farrar, J.W.

    1994-01-01

    This report presents the results of the U.S. Geological Survey's analytical evaluation program for eight standard reference samples--T-127 (trace constituents), M-128 (major constituents), N-40 (nutrients), N-41 (nutrients), P-21 (low ionic strength), Hg-17 (mercury), AMW-3 (acid mine water), and WW-1 (whole water)--that were distributed in October 1993 to 158 laboratories registered in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data that were received from 145 of the laboratories were evaluated with respect to: overall laboratory performance and relative laboratory performance for each analyte in the eight reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the eight standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.

  8. Solving Differential Equations Analytically. Elementary Differential Equations. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 335.

    ERIC Educational Resources Information Center

    Goldston, J. W.

    This unit introduces analytic solutions of ordinary differential equations. The objective is to enable the student to decide whether a given function solves a given differential equation. Examples of problems from biology and chemistry are covered. Problem sets, quizzes, and a model exam are included, and answers to all items are provided. The…

  9. Potential sources of analytical bias and error in selected trace element data-quality analyses

    USGS Publications Warehouse

    Paul, Angela P.; Garbarino, John R.; Olsen, Lisa D.; Rosen, Michael R.; Mebane, Christopher A.; Struzeski, Tedmund M.

    2016-09-28

    Potential sources of analytical bias and error associated with laboratory analyses for selected trace elements where concentrations were greater in filtered samples than in paired unfiltered samples were evaluated by U.S. Geological Survey (USGS) Water Quality Specialists in collaboration with the USGS National Water Quality Laboratory (NWQL) and the Branch of Quality Systems (BQS).Causes for trace-element concentrations in filtered samples to exceed those in associated unfiltered samples have been attributed to variability in analytical measurements, analytical bias, sample contamination either in the field or laboratory, and (or) sample-matrix chemistry. These issues have not only been attributed to data generated by the USGS NWQL but have been observed in data generated by other laboratories. This study continues the evaluation of potential analytical bias and error resulting from matrix chemistry and instrument variability by evaluating the performance of seven selected trace elements in paired filtered and unfiltered surface-water and groundwater samples collected from 23 sampling sites of varying chemistries from six States, matrix spike recoveries, and standard reference materials.Filtered and unfiltered samples have been routinely analyzed on separate inductively coupled plasma-mass spectrometry instruments. Unfiltered samples are treated with hydrochloric acid (HCl) during an in-bottle digestion procedure; filtered samples are not routinely treated with HCl as part of the laboratory analytical procedure. To evaluate the influence of HCl on different sample matrices, an aliquot of the filtered samples was treated with HCl. The addition of HCl did little to differentiate the analytical results between filtered samples treated with HCl from those samples left untreated; however, there was a small, but noticeable, decrease in the number of instances where a particular trace-element concentration was greater in a filtered sample than in the associated unfiltered sample for all trace elements except selenium. Accounting for the small dilution effect (2 percent) from the addition of HCl, as required for the in-bottle digestion procedure for unfiltered samples, may be one step toward decreasing the number of instances where trace-element concentrations are greater in filtered samples than in paired unfiltered samples.The laboratory analyses of arsenic, cadmium, lead, and zinc did not appear to be influenced by instrument biases. These trace elements showed similar results on both instruments used to analyze filtered and unfiltered samples. The results for aluminum and molybdenum tended to be higher on the instrument designated to analyze unfiltered samples; the results for selenium tended to be lower. The matrices used to prepare calibration standards were different for the two instruments. The instrument designated for the analysis of unfiltered samples was calibrated using standards prepared in a nitric:hydrochloric acid (HNO3:HCl) matrix. The instrument designated for the analysis of filtered samples was calibrated using standards prepared in a matrix acidified only with HNO3. Matrix chemistry may have influenced the responses of aluminum, molybdenum, and selenium on the two instruments. 
    The best analytical practice is to calibrate instruments using calibration standards prepared in matrices that reasonably match those of the samples being analyzed.Filtered and unfiltered samples were spiked over a range of trace-element concentrations from less than 1 to 58 times ambient concentrations. The greater the magnitude of the trace-element spike concentration relative to the ambient concentration, the greater the likelihood spike recoveries will be within data control guidelines (80–120 percent). Greater variability in spike recoveries occurred when trace elements were spiked at concentrations less than 10 times the ambient concentration. Spike recoveries that were considerably lower than 90 percent often were associated with spiked concentrations substantially lower than what was present in the ambient sample. Because the main purpose of spiking natural water samples with known quantities of a particular analyte is to assess possible matrix effects on analytical results, the results of this study stress the importance of spiking samples at concentrations that are reasonably close to what is expected but sufficiently high to exceed analytical variability. Generally, differences in spike recovery results between paired filtered and unfiltered samples were minimal when samples were analyzed on the same instrument.Analytical results for trace-element concentrations in ambient filtered and unfiltered samples greater than 10 and 40 μg/L, respectively, were within the data-quality objective for precision of ±25 percent. Ambient trace-element concentrations in filtered samples greater than the long-term method detection limits but less than 10 μg/L failed to meet the data-quality objective for precision for at least one trace element in about 54 percent of the samples. Similarly, trace-element concentrations in unfiltered samples greater than the long-term method detection limits but less than 40 μg/L failed to meet this data-quality objective for at least one trace-element analysis in about 58 percent of the samples. Although aluminum and zinc were particularly problematic, limited re-analyses of filtered and unfiltered samples appeared to improve otherwise failed analytical precision.The evaluation of analytical bias using standard reference materials indicates a slight low bias in results for arsenic, cadmium, selenium, and zinc. Aluminum and molybdenum show signs of high bias. There was no observed bias, as determined using the standard reference materials, during the analysis of lead.
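
    A minimal sketch of the spike-recovery arithmetic used throughout the study, with the 80-120 percent control guideline applied as a flag; all concentrations are illustrative:

```python
def spike_recovery(spiked_result, ambient, spike_added):
    """Percent recovery of a known spike above the ambient concentration."""
    return 100.0 * (spiked_result - ambient) / spike_added

rec = spike_recovery(spiked_result=14.2, ambient=5.0, spike_added=10.0)
flag = "within 80-120%" if 80 <= rec <= 120 else "outside control guidelines"
print(f"recovery = {rec:.1f}% ({flag})")
```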

  10. Ozone Modulation/Membrane Introduction Mass Spectrometry for Analysis of Hydrocarbon Pollutants in Air

    NASA Astrophysics Data System (ADS)

    Atkinson, D. B.

    2001-12-01

    Modulation of volatile hydrocarbons in two-component mixtures is demonstrated using an ozonolysis pretreatment with membrane introduction mass spectrometry (MIMS). The MIMS technique allows selective introduction of volatile and semivolatile analytes into a mass spectrometer via processes known collectively as pervaporation [Kotiaho and Cooks, 1992]. A semipermeable polymer membrane acts as an interface between the sample (vapor or solution) and the vacuum of the mass spectrometer. This technique has been demonstrated to allow for sensitive analysis of hydrocarbons and other non-polar volatile organic compounds (VOCs) in air samples [Cisper et al., 1995]. The methodology has the advantages of no sample pretreatment and short analysis time, which are promising for online monitoring applications, but the chief disadvantage of lacking a separation step for the different analytes in a mixture. Several approaches have been investigated to overcome this problem, including use of selective chemical ionization [Bier and Cooks, 1987] and multivariate calibration techniques [Ketola et al., 1999]. A new approach is reported for the quantitative measurement of VOCs in complex matrices. The method seeks to reduce the complexity of mass spectra observed in hydrocarbon mixture analysis by selective pretreatment of the analyte mixture. In the current investigation, the rapid reaction of ozone with alkenes is used, producing oxygenated compounds which are suppressed by the MIMS system. This has the effect of removing signals due to unsaturated analytes from the compound mass spectra, and comparison of the spectra before and after the ozone treatment reveals the nature of the parent compounds. In preliminary investigations, ozone reacted completely with cyclohexene from a mixture of cyclohexene and cyclohexane, and with β-pinene from a mixture of toluene and β-pinene, suppressing the ion signals from the olefins. A slight attenuation of the cyclohexane and toluene in those mixtures was also observed. Despite this problem, the hydrocarbon signal response can be calibrated and the method can be used for quantitative analysis of volatile hydrocarbon compounds in air samples. This methodology should augment the efficiency of the MIMS approach in online and onsite monitoring of VOC emissions. Bier, M.R., and R.G. Cooks, Membrane Interface for Selective Introduction of Volatile Compounds Directly into the Ionization Chamber of a Mass Spectrometer, Anal. Chem., 59 (4), 597, 1987. Cisper, M.E., C.G. Gill, L.E. Townsend, and P.H. Hemberger, On-Line Detection of Volatile Organic Compounds in Air at Parts-per-Trillion Levels by Membrane Introduction Mass Spectrometry, Anal. Chem., 67 (8), 1413-1417, 1995. Ketola, R.A., M. Ojala, and J. Heikkonen, A Non-linear Asymmetric Error Function-based Least Mean Square Approach for the Analysis of Multicomponent Mass Spectra Measured by Membrane Inlet Mass Spectrometry, Rapid Commun. Mass Spectrom., 13 (8), 654, 1999. Kotiaho, T., and R.G. Cooks, Membrane Introduction Mass Spectrometry in Environmental Analysis, in: J.J. Breen and M.J. Dellarco (Eds.), Pollution in Industrial Processes, 126 pp., ACS Symp. Ser. 508, Washington, D.C., 1992.
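
    A hedged sketch of the before/after comparison the method relies on: subtracting the post-ozonolysis mass spectrum from the untreated one leaves the peaks attributable to unsaturated analytes. The spectra are toy m/z-to-intensity maps, purely illustrative:

```python
# Toy mass spectra before and after ozone pretreatment (m/z -> intensity)
before = {41: 120, 56: 300, 67: 210, 84: 500}
after  = {41: 115, 56: 290, 67: 5,   84: 495}

# Flag peaks suppressed by more than half as olefinic contributions
olefinic = {mz: before[mz] - after.get(mz, 0)
            for mz in before
            if before[mz] - after.get(mz, 0) > 0.5 * before[mz]}
print("peaks suppressed by ozone:", olefinic)
```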

  11. Analytic theory of the selection mechanism in the Saffman-Taylor problem. [concerning shape of fingers in Hele-Shaw cell

    NASA Technical Reports Server (NTRS)

    Hong, D. C.; Langer, J. S.

    1986-01-01

    An analytic approach to the problem of predicting the widths of fingers in a Hele-Shaw cell is presented. The analysis is based on the WKB technique developed recently for dealing with the effects of surface tension in the problem of dendritic solidification. It is found that the relation between the dimensionless width λ and the dimensionless group of parameters containing the surface tension, ν, has the form λ - 1/2 ∝ ν^(2/3) in the limit of small ν.

  12. David Brandner | NREL

    Science.gov Websites

    Areas of expertise: analytical analysis of complex (bio-derived) samples and lignin; chemical reaction engineering and transport phenomena.

  13. Sampling Large Graphs for Anticipatory Analytics

    DTIC Science & Technology

    2015-05-15

    Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and areas… …systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges…

  14. Teaching Analytical Chemistry to Pharmacy Students: A Combined, Iterative Approach

    ERIC Educational Resources Information Center

    Masania, Jinit; Grootveld, Martin; Wilson, Philippe B.

    2018-01-01

    Analytical chemistry has often been a difficult subject to teach in a classroom or lecture-based context. Numerous strategies for overcoming the inherently practical-based difficulties have been suggested, each with differing pedagogical theories. Here, we present a combined approach to tackling the problem of teaching analytical chemistry, with…

  15. Analytic Cognitive Style Predicts Religious and Paranormal Belief

    ERIC Educational Resources Information Center

    Pennycook, Gordon; Cheyne, James Allan; Seli, Paul; Koehler, Derek J.; Fugelsang, Jonathan A.

    2012-01-01

    An analytic cognitive style denotes a propensity to set aside highly salient intuitions when engaging in problem solving. We assess the hypothesis that an analytic cognitive style is associated with a history of questioning, altering, and rejecting (i.e., unbelieving) supernatural claims, both religious and paranormal. In two studies, we examined…

  16. Impact of phase ratio, polydimethylsiloxane volume and size, and sampling temperature and time on headspace sorptive extraction recovery of some volatile compounds in the essential oil field.

    PubMed

    Bicchi, Carlo; Cordero, Chiara; Liberto, Erica; Rubiolo, Patrizia; Sgorbini, Barbara; Sandra, Pat

    2005-04-15

    This study evaluates the concentration capability of headspace sorptive extraction (HSSE) and the influence of sampling conditions on HSSE recovery of an analyte. A standard mixture in water of six high-to-medium volatility analytes (isobutyl methyl ketone, 3-hexanol, isoamyl acetate, 1,8-cineole, linalool and carvone) was used to sample the headspace by HSSE with stir bars coated with different polydimethylsiloxane (PDMS) volumes (20, 40, 55 and 110 µL), headspace vial volumes (8, 21.2, 40, 250 and 1000 mL), sampling temperatures (25, 50 and 75 °C) and sampling times (30, 60 and 120 min, and 4, 8 and 16 h). The concentration factors (CFs) of HSSE versus static headspace (S-HS) were also determined. Analytes sampled by the PDMS stir bars were recovered by thermal desorption (TDS) and analysed by capillary GC-MS. This study demonstrates how analyte recovery depends on its physico-chemical characteristics and affinity for PDMS (octanol-water partition coefficients), sampling temperature (50 °C) and time (60 min), and the volumes of headspace (40 mL) and of PDMS (in particular, for high volatility analytes). HSSE is also shown to be very effective for trace analysis. The HSSE CFs calculated versus S-HS with a 1000 mL headspace volume at 25 °C during 4 h sampling ranged between 10³ and 10⁴ for all analytes investigated, while the limits of quantitation determined under the same conditions were in the nmol/L range.
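
    A minimal sketch of the concentration-factor arithmetic: the CF is simply the ratio of the analyte response recovered by HSSE to that from static headspace under the same conditions. The peak areas are illustrative stand-ins:

```python
# Illustrative peak areas from thermal desorption GC-MS
areas_hsse = {"linalool": 8.4e6, "carvone": 5.1e6}  # HSSE sampling
areas_shs  = {"linalool": 2.1e3, "carvone": 9.7e2}  # static headspace

for analyte in areas_hsse:
    cf = areas_hsse[analyte] / areas_shs[analyte]
    print(f"{analyte}: CF ~ {cf:.0e}")              # falls in the 10^3-10^4 range
```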

  17. White HDPE bottles as source of serious contamination of water samples with Ba and Zn.

    PubMed

    Reimann, Clemens; Grimstvedt, Andreas; Frengstad, Bjørn; Finne, Tor Erik

    2007-03-15

    During a recent study of surface water quality, factory-new white high-density polyethylene (HDPE) bottles were used for collecting the water samples. According to the established field protocol of the Geological Survey of Norway, the bottles were twice carefully rinsed with water in the field prior to sampling. Several blank samples using Milli-Q (ELGA) water (>18.2 MΩ) were also prepared. On checking the analytical results, the blanks returned values of Ag, Ba, Sr, V, Zn and Zr. For Ba and Zn the values (c. 300 µg/L and 95 µg/L) were about 10 times above the concentrations that can be expected in natural waters. A laboratory test of the bottles demonstrated that the bottles contaminate the samples with significant amounts of Ba and Zn and some Sr. Simple acid washing of the bottles prior to use did not solve the contamination problem for Ba and Zn. The results suggest that there may exist "clean" and "dirty" HDPE bottles depending on the manufacturer/production process. When collecting water samples it is mandatory to check bottles regularly as a possible source of contamination.

  18. High throughput liquid absorption preconcentrator sampling instrument

    DOEpatents

    Zaromb, Solomon; Bozen, Ralph M.

    1992-01-01

    A system for detecting trace concentrations of an analyte in air includes a preconcentrator for the analyte and an analyte detector. The preconcentrator includes an elongated tubular container comprising a wettable material. The wettable material is continuously wetted with an analyte-sorbing liquid which flows from one part of the container to a lower end. Sampled air flows through the container in contact with the wetted material with a swirling motion which results in efficient transfer of analyte vapors or aerosol particles to the sorbing liquid and preconcentration of traces of analyte in the liquid. The preconcentrated traces of analyte may be either detected within the container or removed therefrom for injection into a separate detection means or for subsequent analysis.

  19. High-throughput liquid-absorption preconcentrator sampling methods

    DOEpatents

    Zaromb, Solomon

    1994-01-01

    A system for detecting trace concentrations of an analyte in air includes a preconcentrator for the analyte and an analyte detector. The preconcentrator includes an elongated tubular container comprising a wettable material. The wettable material is continuously wetted with an analyte-sorbing liquid which flows from one part of the container to a lower end. Sampled air flows through the container in contact with the wetted material with a swirling motion which results in efficient transfer of analyte vapors or aerosol particles to the sorbing liquid and preconcentration of traces of analyte in the liquid. The preconcentrated traces of analyte may be either detected within the container or removed therefrom for injection into a separate detection means or for subsequent analysis.

  20. High throughput liquid absorption preconcentrator sampling instrument

    DOEpatents

    Zaromb, S.; Bozen, R.M.

    1992-12-22

    A system for detecting trace concentrations of an analyte in air includes a preconcentrator for the analyte and an analyte detector. The preconcentrator includes an elongated tubular container comprising a wettable material. The wettable material is continuously wetted with an analyte-sorbing liquid which flows from one part of the container to a lower end. Sampled air flows through the container in contact with the wetted material with a swirling motion which results in efficient transfer of analyte vapors or aerosol particles to the sorbing liquid and preconcentration of traces of analyte in the liquid. The preconcentrated traces of analyte may be either detected within the container or removed therefrom for injection into a separate detection means or for subsequent analysis. 12 figs.

  1. High-throughput liquid-absorption preconcentrator sampling methods

    DOEpatents

    Zaromb, S.

    1994-07-12

    A system for detecting trace concentrations of an analyte in air includes a preconcentrator for the analyte and an analyte detector. The preconcentrator includes an elongated tubular container comprising a wettable material. The wettable material is continuously wetted with an analyte-sorbing liquid which flows from one part of the container to a lower end. Sampled air flows through the container in contact with the wetted material with a swirling motion which results in efficient transfer of analyte vapors or aerosol particles to the sorbing liquid and preconcentration of traces of analyte in the liquid. The preconcentrated traces of analyte may be either detected within the container or removed therefrom for injection into a separate detection means or for subsequent analysis. 12 figs.

  2. NHEXAS PHASE I ARIZONA STUDY--QA ANALYTICAL RESULTS FOR METALS IN BLANK SAMPLES

    EPA Science Inventory

    The Metals in Blank Samples data set contains the analytical results of measurements of up to 27 metals in 82 blank samples from 26 households. Measurements were made in blank samples of dust, indoor and outdoor air, personal air, food, beverages, blood, urine, and dermal wipe r...

  3. NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR METALS IN BLANKS

    EPA Science Inventory

    This data set includes analytical results for measurements of metals in 205 blank samples and for particles in 64 blank samples. Measurements were made for up to 12 metals in blank samples of air, dust, soil, water, food and beverages, blood, hair, and urine. Blank samples were u...

  4. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--METALS IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Blood data set contains analytical results for measurements of up to 2 metals in 86 blood samples over 86 households. Each sample was collected as a venous sample from the primary respondent within each household. The samples consisted of two 3-mL tubes. The prim...

  5. Analytical methods for the determination of personal care products in human samples: an overview.

    PubMed

    Jiménez-Díaz, I; Zafra-Gómez, A; Ballesteros, O; Navalón, A

    2014-11-01

    Personal care products (PCPs) are organic chemicals widely used in everyday human life. Nowadays, preservatives, UV-filters, antimicrobials and musk fragrances are widely used PCPs. Different studies have shown that some of these compounds can cause adverse health effects, such as genotoxicity, which could even lead to mutagenic or carcinogenic effects, or estrogenicity because of their endocrine disruption activity. Due to the absence of official monitoring protocols, there is an increasing demand for analytical methods that allow the determination of those compounds in human samples in order to obtain more information regarding their behavior and fate in the human body. The complexity of the biological matrices and the low concentration levels of these compounds make necessary the use of advanced sample treatment procedures that afford both sample clean-up, to remove potentially interfering matrix components, and concentration of the analytes. In the present work, a review of the more recent analytical methods published in the scientific literature for the determination of PCPs in human fluids and tissue samples is presented. The work focuses on sample preparation and the analytical techniques employed. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. A Paper-Based Electrochromic Array for Visualized Electrochemical Sensing.

    PubMed

    Zhang, Fengling; Cai, Tianyi; Ma, Liang; Zhan, Liyuan; Liu, Hong

    2017-01-31

    We report a battery-powered, paper-based electrochromic array for visualized electrochemical sensing. The paper-based sensing system consists of six parallel electrochemical cells, which are powered by an aluminum-air battery. Each single electrochemical cell uses a Prussian Blue spot electrodeposited on an indium-doped tin oxide thin film as the electrochromic indicator. Each electrochemical cell is preloaded with increasing amounts of analyte. The sample activates the battery for the sensing. Both the preloaded analyte and the analyte in the sample initiate the color change of Prussian Blue to Prussian White. With a reaction time of 60 s, the number of electrochemical cells with complete color changes is correlated to the concentration of analyte in the sample. As a proof-of-concept analyte, lactic acid was detected semi-quantitatively using the naked eye.
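
    A minimal sketch of the semi-quantitative readout logic described above: the number of cells showing a complete color change is mapped to a concentration bracket through a calibration table. The brackets are illustrative assumptions, not values from the paper:

```python
# Calibration: number of fully switched cells -> concentration bracket
calibration = {0: "<0.5 mM", 1: "0.5-1 mM", 2: "1-2 mM",
               3: "2-4 mM", 4: "4-8 mM", 5: "8-16 mM", 6: ">16 mM"}

def readout(cells_changed):
    """Semi-quantitative result from counting color-changed cells."""
    return calibration[cells_changed]

print(readout(3))  # three switched cells -> "2-4 mM"
```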

  7. An analytically solvable three-body break-up model problem in hyperspherical coordinates

    NASA Astrophysics Data System (ADS)

    Ancarani, L. U.; Gasaneo, G.; Mitnik, D. M.

    2012-10-01

    An analytically solvable S-wave model for three-particle break-up processes is presented. The scattering process is represented by a non-homogeneous Coulombic Schrödinger equation whose driven term is given by a Coulomb-like interaction multiplied by the product of a continuum wave function and a bound state in the particles' coordinates. The closed-form solution is derived in hyperspherical coordinates, leading to an analytic expression for the associated scattering transition amplitude. The proposed scattering model contains most of the difficulties encountered in real three-body scattering problems, e.g., non-separability in the electrons' spherical coordinates and Coulombic asymptotic behavior. Since the coupling of the coordinates is completely different, the model provides an alternative test to the Temkin-Poet model. The knowledge of the analytic solution provides an interesting benchmark for testing numerical methods dealing with the double continuum, in particular in the asymptotic regions. A hyperspherical Sturmian approach recently developed for three-body collisional problems is used to reproduce the analytical results to high accuracy. In addition, we generalize the model, generating an approximate wave function possessing the correct radial asymptotic behavior corresponding to an S-wave three-body Coulomb problem. The model allows us to explore the typical structure of the solution of a three-body driven equation, to identify three regions (the driven, the Coulombic and the asymptotic), and to analyze how far one must go to extract the transition amplitude.
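
    A schematic of the driven (non-homogeneous) equation described above, in placeholder notation, since the abstract does not give the explicit operators:

    ```latex
    % Schematic only: \hat{H} is the S-wave three-body Hamiltonian, W the
    % Coulomb-like interaction, \psi_c a continuum wave, \phi_b a bound state.
    \[
    \bigl( E - \hat{H} \bigr)\, \Phi^{+}(r_1, r_2)
      \;=\; W(r_1, r_2)\, \psi_{c}(r_1)\, \phi_{b}(r_2)
    \]
    ```

    The transition amplitude is then read off from the hyperspherical asymptotics of the solution \(\Phi^{+}\).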

  8. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. Many of the remaining problems were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance in MCNP verification methods.
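
    In practice, this kind of verification amounts to checking stochastic code output against exact analytic answers within statistical uncertainty. A hypothetical harness is sketched below; the problem names, values, and the 3-sigma acceptance criterion are illustrative, not the actual VERIFICATION_KEFF suite:

    ```python
    # Hypothetical verification harness: compare Monte Carlo k-eff results
    # against exact analytic benchmark values. All data here are made up.

    benchmarks = {
        # problem id: (analytic k_eff, computed k_eff, one-sigma MC uncertainty)
        "slab-1group-iso": (1.000000, 0.99992, 0.00010),
        "sphere-2group":   (1.000000, 1.00018, 0.00012),
    }

    def verify(name, exact, computed, sigma, n_sigma=3):
        """Flag a problem whose computed result differs from the exact
        analytic solution by more than n_sigma statistical uncertainties."""
        ok = abs(computed - exact) <= n_sigma * sigma
        print(f"{name}: |diff| = {abs(computed - exact):.5f} "
              f"({'PASS' if ok else 'FAIL'} at {n_sigma} sigma)")
        return ok

    all_pass = all([verify(k, *v) for k, v in benchmarks.items()])
    ```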

  9. Microextraction by packed sorbent: an emerging, selective and high-throughput extraction technique in bioanalysis.

    PubMed

    Pereira, Jorge; Câmara, José S; Colmsjö, Anders; Abdel-Rehim, Mohamed

    2014-06-01

    Sample preparation, the isolation and concentration of desired components from complex matrices, is an important analytical step that strongly influences the reliability and accuracy of analysis and data quality. It is the most labor-intensive and error-prone process in analytical methodology and may therefore limit the analytical performance of target-analyte quantification. Many conventional sample preparation methods are relatively complicated, involving time-consuming procedures and requiring large volumes of organic solvents. Recent trends in sample preparation include miniaturization, automation, high-throughput performance, on-line coupling with analytical instruments and low-cost operation through extremely low or no solvent consumption. Microextraction techniques, such as microextraction by packed sorbent (MEPS), have these advantages over the traditional techniques. This paper gives an overview of the MEPS technique, including the role of sample preparation in bioanalysis; a description of MEPS, namely its formats (on- and off-line), sorbents, and experimental protocols; the factors that affect MEPS performance; and the major advantages and limitations of MEPS compared with other sample preparation techniques. We also summarize recent applications of MEPS in bioanalysis. Copyright © 2014 John Wiley & Sons, Ltd.

  10. Strategies in Forecasting Outcomes in Ethical Decision-making: Identifying and Analyzing the Causes of the Problem

    PubMed Central

    Beeler, Cheryl K.; Antes, Alison L.; Wang, Xiaoqian; Caughron, Jared J.; Thiel, Chase E.; Mumford, Michael D.

    2010-01-01

    This study examined the role of key causal analysis strategies in forecasting and ethical decision-making. Undergraduate participants took on the role of the key actor in several ethical problems and were asked to identify and analyze the causes, forecast potential outcomes, and make a decision about each problem. Time pressure and analytic mindset were manipulated while participants worked through these problems. The results indicated that forecast quality was associated with decision ethicality, and the identification of the critical causes of the problem was associated with both higher quality forecasts and higher ethicality of decisions. Neither time pressure nor analytic mindset impacted forecasts or ethicality of decisions. Theoretical and practical implications of these findings are discussed. PMID:20352056

  11. Analytical solutions for sequentially coupled one-dimensional reactive transport problems Part I: Mathematical derivations

    NASA Astrophysics Data System (ADS)

    Srinivasan, V.; Clement, T. P.

    2008-02-01

    Multi-species reactive transport equations coupled through sorption and sequential first-order reactions are commonly used to model sites contaminated with radioactive wastes, chlorinated solvents and nitrogenous species. Although researchers have been attempting to solve various forms of these reactive transport equations for over 50 years, a general closed-form analytical solution to this problem is not available in the published literature. In Part I of this two-part article, we derive a closed-form analytical solution to this problem for spatially-varying initial conditions. The proposed solution procedure employs a combination of Laplace and linear transform methods to uncouple and solve the system of partial differential equations. Two distinct solutions are derived for Dirichlet and Cauchy boundary conditions each with Bateman-type source terms. We organize and present the final solutions in a common format that represents the solutions to both boundary conditions. In addition, we provide the mathematical concepts for deriving the solution within a generic framework that can be used for solving similar transport problems.
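
    For concreteness, a representative form of the coupled system described above (notation assumed here; the paper's own conventions may differ): for species \(i = 1, \dots, n\) in a sequential first-order chain,

    ```latex
    % Advection-dispersion with retardation R_i, first-order rate k_i,
    % yield y_i from the parent species, and k_0 = 0 for the first species.
    \[
    R_i \frac{\partial c_i}{\partial t}
      \;=\; D \frac{\partial^2 c_i}{\partial x^2}
      \;-\; v \frac{\partial c_i}{\partial x}
      \;+\; y_i k_{i-1} c_{i-1} \;-\; k_i c_i ,
    \qquad i = 1, \dots, n, \quad k_0 \equiv 0 .
    \]
    ```

    The Laplace-plus-linear-transform route mentioned in the abstract uncouples this chain species by species, which is what produces the Bateman-type structure in the source terms of the solutions.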

  12. Why does the sign problem occur in evaluating the overlap of HFB wave functions?

    NASA Astrophysics Data System (ADS)

    Mizusaki, Takahiro; Oi, Makito; Shimizu, Noritaka

    2018-04-01

    For the overlap matrix element between Hartree-Fock-Bogoliubov states, there are two analytically different formulae: one with the square root of a determinant (the Onishi formula) and the other with a Pfaffian (Robledo's Pfaffian formula). The former is two-valued as a complex function and hence leaves the sign of the norm overlap undetermined (the so-called sign problem of the Onishi formula). The latter formula does not suffer from the sign problem. The derivations of these two formulae are so different that it is obscure why the resulting expressions possess different analytical properties. In this paper, we discuss why the difference occurs by means of a consistent framework based on the linked-cluster theorem and the product-sum identity for the Pfaffian. Through this discussion, we elucidate the source of the sign problem in the Onishi formula. We also point out that different summation methods for the series expansions may result in analytically different formulae.
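
    In schematic terms (with \(M\) and \(A\) standing for the appropriate matrices built from the Bogoliubov transformations; their explicit forms are omitted here), the contrast is:

    ```latex
    \[
    \langle \Phi_0 | \Phi_1 \rangle = \sqrt{\det M}
      \quad \text{(Onishi: two-valued, sign undetermined)},
    \qquad
    \langle \Phi_0 | \Phi_1 \rangle = \operatorname{pf} A
      \quad \text{(Pfaffian form: single-valued)} .
    \]
    ```

    The two are consistent because \(\operatorname{pf}(A)^2 = \det(A)\) for any skew-symmetric \(A\); the Pfaffian, being a polynomial in the matrix entries, carries no branch ambiguity, which is exactly what the square root introduces.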

  13. Inductive dielectric analyzer

    NASA Astrophysics Data System (ADS)

    Agranovich, Daniel; Polygalov, Eugene; Popov, Ivan; Ben Ishai, Paul; Feldman, Yuri

    2017-03-01

    One of the approaches to bypass the problem of electrode polarization in dielectric measurements is the free electrode method. The advantage of this technique is that the probing electric field in the material is not supplied by contact electrodes but rather by electromagnetic induction. We have designed an inductive dielectric analyzer based on a sensor comprising two concentric toroidal coils. In this work, we present an analytic derivation of the relationship between the impedance measured by the sensor and the complex dielectric permittivity of the sample. The obtained relationship was successfully employed to measure the dielectric permittivity and conductivity of various alcohols and aqueous salt solutions.
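
    For reference, the two quantities extracted are the real part of the complex permittivity and the conductivity, which enters the loss term; this is standard dielectric notation, not the paper's derivation:

    ```latex
    \[
    \varepsilon^{*}(\omega) = \varepsilon'(\omega) - i\,\varepsilon''(\omega),
    \qquad
    \varepsilon''_{\sigma}(\omega) = \frac{\sigma}{\varepsilon_0\,\omega} ,
    \]
    ```

    so a frequency sweep of the measured complex impedance yields both \(\varepsilon'\) and the dc conductivity \(\sigma\) once the sensor relationship is inverted.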

  14. Limiting cases of the small-angle scattering approximation solutions for the propagation of laser beams in anisotropic scattering media

    NASA Technical Reports Server (NTRS)

    Box, M. A.; Deepak, A.

    1981-01-01

    The propagation of photons in a medium with strongly anisotropic scattering is a problem with a considerable history. Like the propagation of electrons in metal foils, it may be solved in the small-angle scattering approximation by the use of Fourier-transform techniques. In certain limiting cases, one may even obtain analytic expressions. This paper presents some of these results in a model-independent form and also illustrates them using four different phase-function models. Sample calculations are provided for comparison purposes.
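
    The Fourier-transform solution referred to above typically takes the following form (conventions vary between authors; this standard Fourier-Bessel expression is given for orientation and is not necessarily the paper's exact notation):

    ```latex
    % Angular intensity after optical depth \tau in the small-angle
    % approximation; \tilde{p}(k) is the Hankel transform of the
    % single-scattering phase function.
    \[
    I(\theta, \tau) = \frac{1}{2\pi} \int_{0}^{\infty}
      J_0(k\theta)\, \exp\!\bigl\{ -\tau \bigl[ 1 - \tilde{p}(k) \bigr] \bigr\}\,
      k \,\mathrm{d}k
    \]
    ```

    Limiting cases then follow by expanding the exponential for small \(\tau\) or by using the asymptotic behavior of \(\tilde{p}(k)\).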

  15. Mass spectrometry of aerospace materials

    NASA Technical Reports Server (NTRS)

    Colony, J. A.

    1976-01-01

    Mass spectrometry is used for chemical analysis of aerospace materials and contaminants. Years of analytical aerospace experience have resulted in the development of the specialized sampling and analysis techniques required to optimize results. This work has led to a hybrid method of indexing mass spectra that includes both the largest peaks and the structurally significant peaks in a concise format. With this system, a library of mass spectra of aerospace materials was assembled, including the materials responsible for 80 to 90 percent of the contamination problems at Goddard Space Flight Center during the past several years.
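
    A small sketch of what such a hybrid index might look like in code; the spectrum, the number of largest peaks kept, and the flagged masses are hypothetical, not the Goddard library's actual format (m/z 149, 167 and 279 are, however, classic phthalate-plasticizer fragments, a common contaminant class):

    ```python
    # Hybrid spectrum index: the largest peaks by intensity plus
    # analyst-flagged, structurally significant peaks.

    def hybrid_index(peaks, significant_mz, n_largest=5):
        """peaks: list of (m/z, relative intensity) tuples. Returns a concise
        index combining the n_largest peaks with any flagged diagnostic peaks."""
        largest = sorted(peaks, key=lambda p: p[1], reverse=True)[:n_largest]
        flagged = [p for p in peaks if p[0] in significant_mz]
        # Merge, drop duplicates, and sort by m/z for a compact library entry.
        return sorted({p[0]: p for p in largest + flagged}.values())

    spectrum = [(43, 100), (57, 80), (71, 35), (149, 60), (167, 10), (279, 8)]
    print(hybrid_index(spectrum, {149, 167, 279}, n_largest=3))
    # Low-intensity but diagnostic peaks (167, 279) survive in the index.
    ```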

  16. Classification without labels: learning from mixed samples in high energy physics

    NASA Astrophysics Data System (ADS)

    Metodiev, Eric M.; Nachman, Benjamin; Thaler, Jesse

    2017-10-01

    Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.
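
    A toy illustration of the CWoLa setup (everything below, including features, fractions, and classifier choice, is assumed for the demo and is not the paper's code): train on mixture-vs-mixture labels only, and the resulting score still ranks events like a signal/background classifier.

    ```python
    # CWoLa toy: two statistical mixtures of "signal" and "background" with
    # different signal fractions, unknown to the classifier.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    n = 20000
    signal = rng.normal(+1.0, 1.0, size=(n, 1))      # toy signal feature
    background = rng.normal(-1.0, 1.0, size=(n, 1))  # toy background feature

    def mixture(f_sig, n_events):
        """Statistical mixture with signal fraction f_sig; no per-event labels."""
        n_s = int(f_sig * n_events)
        return np.vstack([signal[rng.integers(0, n, n_s)],
                          background[rng.integers(0, n, n_events - n_s)]])

    m1, m2 = mixture(0.8, n), mixture(0.2, n)
    X = np.vstack([m1, m2])
    y = np.concatenate([np.ones(len(m1)), np.zeros(len(m2))])  # mixture labels

    clf = GradientBoostingClassifier().fit(X, y)
    # The mixture-vs-mixture score is monotonically related to the
    # signal/background likelihood ratio, so it orders events correctly.
    print(clf.predict_proba([[2.0], [-2.0]])[:, 1])  # high for signal-like
    ```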

  17. Classification without labels: learning from mixed samples in high energy physics

    DOE PAGES

    Metodiev, Eric M.; Nachman, Benjamin; Thaler, Jesse

    2017-10-25

    Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.

  18. Classification without labels: learning from mixed samples in high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metodiev, Eric M.; Nachman, Benjamin; Thaler, Jesse

    Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.

  19. Determination of steroid hormones and related compounds in filtered and unfiltered water by solid-phase extraction, derivatization, and gas chromatography with tandem mass spectrometry

    USGS Publications Warehouse

    Foreman, William T.; Gray, James L.; ReVello, Rhiannon C.; Lindley, Chris E.; Losche, Scott A.; Barber, Larry B.

    2012-01-01

    A new analytical method has been developed and implemented at the U.S. Geological Survey National Water Quality Laboratory that determines a suite of 20 steroid hormones and related compounds in filtered water (using laboratory schedule 2434) and in unfiltered water (using laboratory schedule 4434). This report documents the procedures and initial performance data for the method and provides guidance on application of the method and considerations of data quality in relation to data interpretation. The analytical method determines 6 natural and 3 synthetic estrogen compounds, 6 natural androgens, 1 natural and 1 synthetic progestin compound, and 2 sterols: cholesterol and 3-beta-coprostanol. These two sterols have limited biological activity but typically are abundant in wastewater effluents and serve as useful tracers. Bisphenol A, an industrial chemical used primarily to produce polycarbonate plastic and epoxy resins that has been shown to have estrogenic activity, also is determined by the method. A technique referred to as isotope-dilution quantification is used to improve quantitative accuracy by accounting for sample-specific procedural losses in the determined analyte concentration. Briefly, deuterium- or carbon-13-labeled isotope-dilution standards (IDSs), all of which are direct or chemically similar isotopic analogs of the method analytes, are added to all environmental, quality-control, and quality-assurance samples before extraction. Method analytes and IDS compounds are isolated from filtered or unfiltered water by solid-phase extraction onto an octadecylsilyl disk, overlain with a graded glass-fiber filter to facilitate extraction of unfiltered sample matrices. The disks are eluted with methanol, and the extract is evaporated to dryness, reconstituted in solvent, passed through a Florisil solid-phase extraction column to remove polar organic interferences, and again evaporated to dryness in a reaction vial. The method compounds are reacted with activated N-methyl-N-(trimethylsilyl)trifluoroacetamide at 65 degrees Celsius for 1 hour to form trimethylsilyl or trimethylsilyl-enol ether derivatives that are more amenable to gas chromatographic separation than the underivatized compounds. Analysis is carried out by gas chromatography with tandem mass spectrometry using calibration standards that are derivatized concurrently with the sample extracts. Analyte concentrations are quantified relative to specific IDS compounds in the sample, which directly compensate for procedural losses (incomplete recovery) in the determined and reported analyte concentrations. Thus, reported analyte concentrations (or analyte recoveries for spiked samples) are corrected based on recovery of the corresponding IDS compound during the quantification process. Recovery for each IDS compound is reported for each sample and represents an absolute recovery in a manner comparable to surrogate recoveries for other organic methods used by the National Water Quality Laboratory. Thus, IDS recoveries provide a useful tool for evaluating sample-specific analytical performance from an absolute mass recovery standpoint. IDS absolute recovery will differ from, and typically be lower than, the corresponding analyte's method recovery in spiked samples. However, additional correction of reported analyte concentrations is unnecessary and inappropriate because the analyte concentration (or recovery) already is compensated by the isotope-dilution quantification procedure.
    Method analytes were spiked at 10 and 100 nanograms per liter (ng/L) for most analytes (10 times greater spike levels were used for bisphenol A, and 100 times greater spike levels were used for 3-beta-coprostanol and cholesterol) into the following validation-sample matrices: reagent water, wastewater-affected surface water, a secondary-treated wastewater effluent, and a primary (no biological treatment) wastewater effluent. Overall method recovery for all analytes in these matrices averaged 100 percent, with an overall relative standard deviation of 28 percent. Mean recoveries of the 20 individual analytes for spiked reagent-water samples prepared along with field samples and analyzed in 2009–2010 ranged from 84 to 104 percent, with relative standard deviations of 6 to 36 percent. Concentrations for two analytes, equilin and progesterone, are reported as estimated because these analytes had excessive bias or variability, or both. Additional database coding is applied to other reported analyte data as needed, based on sample-specific IDS recovery performance. Detection levels were derived statistically by fortifying reagent water at six different levels (0.1 to 4 ng/L) and range from about 0.4 to 4 ng/L for 16 analytes. Interim reporting levels applied to analytes in this report range from 0.8 to 8 ng/L. Bisphenol A and the sterols (cholesterol and 3-beta-coprostanol) were consistently detected in laboratory and field blanks. The minimum reporting levels were set at 100 ng/L for bisphenol A and at 200 ng/L for the two sterols to prevent any bias associated with the presence of these compounds in the blanks. A minimum reporting level of 2 ng/L was set for 11-ketotestosterone to minimize false-positive risk from an interfering siloxane compound, present at no more than 1 ng/L, emanating as chromatographic-column bleed, from vial septum material, or from other sources.
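
    A simplified numerical sketch of why isotope-dilution quantification compensates for procedural losses (the single-point response model and all numbers are illustrative, not the method's actual calibration):

    ```python
    # Because the analyte is quantified from its response ratio to the
    # co-extracted IDS, losses that affect both compounds alike cancel.

    def quantify(area_analyte, area_ids, conc_ids_spiked, response_factor=1.0):
        """Concentration from the analyte/IDS response ratio; the IDS was
        spiked at a known concentration before extraction."""
        return (area_analyte / area_ids) * conc_ids_spiked / response_factor

    # Suppose sample prep recovers only 50% of both analyte and IDS:
    true_conc, ids_spike, recovery = 40.0, 20.0, 0.5   # ng/L, ng/L, fraction
    area_analyte = true_conc * recovery   # instrument response (arbitrary units)
    area_ids = ids_spike * recovery
    print(quantify(area_analyte, area_ids, ids_spike))  # -> 40.0, loss-corrected
    ```

    The IDS absolute recovery (here 50 percent) is still reported per sample as a performance indicator, but no further correction of the concentration is needed.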

  20. Development and Applications of Liquid Sample Desorption Electrospray Ionization Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Zheng, Qiuling; Chen, Hao

    2016-06-01

    Desorption electrospray ionization mass spectrometry (DESI-MS) is a recent advance in the field of analytical chemistry. This review surveys the development of liquid sample DESI-MS (LS-DESI-MS), a variant of DESI-MS that focuses on the fast analysis of liquid samples, and its novel analytical applications in bioanalysis, proteomics, and reaction kinetics. Owing to its capability of directly ionizing liquid samples, liquid sample DESI (LS-DESI) has been successfully used to couple MS with various analytical techniques, such as microfluidics, microextraction, electrochemistry, and chromatography; this review also covers these hyphenated techniques. In addition, several closely related ionization methods, including transmission-mode DESI, thermally assisted DESI, and continuous flow-extractive DESI, are briefly discussed. The capabilities of LS-DESI extend and/or complement the utilities of traditional DESI and electrospray ionization and will find extensive and valuable analytical application in the future.
