Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-16
... Currently Approved Total Coliform Analytical Methods AGENCY: Environmental Protection Agency (EPA). ACTION... of currently approved Total Coliform Rule (TCR) analytical methods. At these meetings, stakeholders will be given an opportunity to discuss potential elements of a method re-evaluation study, such as...
Analytical quality by design: a tool for regulatory flexibility and robust analytics.
Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for quality by design (QbD)-based analytical approaches. The concept of QbD applied to analytical method development is now known as analytical quality by design (AQbD). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within that region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses the views of analytical scientists on implementing AQbD in the pharmaceutical quality system and correlates it with product quality by design and process analytical technology (PAT).
Analytical evaluation of current starch methods used in the international sugar industry: Part I
USDA-ARS?s Scientific Manuscript database
Several analytical starch methods currently exist in the international sugar industry that are used to prevent or mitigate starch-related processing challenges as well as assess the quality of traded end-products. These methods use simple iodometric chemistry, mostly potato starch standards, and uti...
AmO 2 Analysis for Analytical Method Testing and Assessment: Analysis Support for AmO 2 Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, Kevin John; Bland, Galey Jean; Fulwyler, James Brent
Americium oxide samples will be measured for various analytes to support AmO 2 production. The key analytes currently requested by the Am production customer at LANL include total Am content, Am isotopics, Pu assay, Pu isotopics, and trace element content, including 237Np content. Multiple analytical methods will be utilized depending on the sensitivity, accuracy, and precision needs of the Am matrix. Traceability to the National Institute of Standards and Technology (NIST) will be achieved, where applicable, by running NIST-traceable quality control materials, given that no suitable AmO 2 reference materials are currently available for the requested analytes. The primary objective is to demonstrate the suitability of actinide analytical chemistry methods to support AmO 2 production operations.
Conventional approaches to water quality characterization can provide data on individual chemical components of each water sample. This analyte-by-analyte approach currently serves many useful research and compliance monitoring needs. However these approaches, which require a ...
Literature Review on Processing and Analytical Methods for ...
Report The purpose of this report was to survey the open literature to determine the current state of the science regarding the processing and analytical methods currently available for recovery of F. tularensis from water and soil matrices, and to determine what gaps remain in the collective knowledge concerning F. tularensis identification from environmental samples.
NASA Astrophysics Data System (ADS)
Li, Zhao; Wang, Dazhi; Zheng, Di; Yu, Linxin
2017-10-01
Rotational permanent magnet eddy current couplers are promising devices for torque and speed transmission without any mechanical contact. In this study, flux-concentration disk-type permanent magnet eddy current couplers with a double conductor rotor are investigated. Given the drawbacks of the accurate three-dimensional finite element method, this paper proposes a mixed two-dimensional analytical modeling approach. Based on this approach, closed-form expressions for the magnetic field, eddy current, electromagnetic force, and torque of such devices are obtained. Finally, a three-dimensional finite element method is employed to validate the analytical results. In addition, a prototype is manufactured and tested for its torque-speed characteristic.
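The closed-form expressions themselves are not reproduced in the abstract above; as a rough illustration of the torque-speed characteristic such couplers exhibit, here is a minimal sketch assuming a small-slip linear regime (the coefficient `k` is a made-up placeholder, not a value from the paper):

```python
def coupler_torque(slip_rpm, k=0.8):
    """Small-slip approximation: transmitted torque grows roughly
    linearly with the slip speed between the two rotors."""
    return k * slip_rpm

# Torque at a few slip speeds (N·m, for the assumed k)
curve = [coupler_torque(s) for s in (10, 50, 100)]
```

In real couplers the curve flattens and eventually falls off at large slip; the linear term is only the low-slip end of the characteristic.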
Discreet passive explosive detection through 2-sided waveguided fluorescence
Harper, Ross James [Stillwater, OK; la Grone, Marcus [Cushing, OK; Fisher, Mark [Stillwater, OK
2011-10-18
The current invention provides a passive sampling device suitable for collecting and detecting the presence of target analytes. In particular, the passive sampling device is suitable for detecting nitro-aromatic compounds. The current invention further provides a passive sampling device reader suitable for determining the collection of target analytes. Additionally, the current invention provides methods for detecting target analytes using the passive sampling device and the passive sampling device reader.
Discreet passive explosive detection through 2-sided wave guided fluorescence
Harper, Ross James; la Grone, Marcus; Fisher, Mark
2012-10-16
The current invention provides a passive sampling device suitable for collecting and detecting the presence of target analytes. In particular, the passive sampling device is suitable for detecting nitro-aromatic compounds. The current invention further provides a passive sampling device reader suitable for determining the collection of target analytes. Additionally, the current invention provides methods for detecting target analytes using the passive sampling device and the passive sampling device reader.
3D analysis of eddy current loss in the permanent magnet coupling.
Zhu, Zina; Meng, Zhuo
2016-07-01
This paper first presents a 3D analytical model for analyzing the radial air-gap magnetic field between the inner and outer magnetic rotors of permanent magnet couplings using the Amperian current model. Based on the air-gap field analysis, the eddy current loss in the isolation cover is predicted according to Maxwell's equations. A 3D finite element analysis model is constructed to analyze the spatial distribution of the magnetic field and the vector eddy currents, and the simulation results are analyzed and compared with the analytical method. Finally, the eddy current losses of two types of practical magnet couplings are measured experimentally and compared with the theoretical results. It is concluded that the 3D analytical method for eddy current loss in the magnet coupling is viable and could be used for eddy current loss prediction of magnet couplings.
Modeling and analysis of a novel planar eddy current damper
NASA Astrophysics Data System (ADS)
Zhang, He; Kou, Baoquan; Jin, Yinxi; Zhang, Lu; Zhang, Hailin; Li, Liyi
2014-05-01
In this paper, a novel 2-DOF permanent magnet planar eddy current damper is proposed, whose stator is a copper plate and whose mover is composed of two orthogonal 1-D permanent magnet arrays in a double-sided structure. The main objective of the planar eddy current damper is to provide two orthogonal damping forces for dynamic systems such as 2-DOF high-precision positioning systems. First, the basic structure and operating principle of the planar damper are introduced. Second, the analytical model of the planar damper is established, where the magnetic flux density distribution of the permanent magnet arrays is obtained using the equivalent magnetic charge method and the image method. Then, analytical expressions for the damping force and damping coefficient are derived. Lastly, to verify the analytical model, the finite element method (FEM) is adopted to calculate the flux density, and a planar damper prototype is manufactured and thoroughly tested. The results from FEM and experiments are in good agreement with those from the analytical expressions, indicating that the analytical model is reasonable and correct.
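The derived expressions are not given in the abstract above; the sketch below only illustrates the linear viscous model that an eddy current damper is commonly reduced to, with assumed damping coefficients (all numerical values hypothetical):

```python
def damping_force(c, velocity):
    """Linear eddy current damping: the force opposes the velocity."""
    return -c * velocity

# Two orthogonal axes of a 2-DOF damper, assumed coefficients in N·s/m
fx = damping_force(12.0, 0.05)    # mover moving at +0.05 m/s along x
fy = damping_force(12.0, -0.02)   # mover moving at -0.02 m/s along y
```

The orthogonal magnet arrays in the paper exist precisely so that each axis sees its own independent damping force of this form.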
Towards an Airframe Noise Prediction Methodology: Survey of Current Approaches
NASA Technical Reports Server (NTRS)
Farassat, Fereidoun; Casper, Jay H.
2006-01-01
In this paper, we present a critical survey of current airframe noise (AFN) prediction methodologies. Four methodologies are recognized: the fully analytic method, CFD combined with the acoustic analogy, the semi-empirical method, and the fully numerical method. It is argued that for the immediate needs of the aircraft industry, the semi-empirical method based on a recent high-quality acoustic database is the best available method. The method based on CFD and the Ffowcs Williams-Hawkings (FW-H) equation with a penetrable data surface (FW-Hpds) has advanced considerably, and much experience has been gained in its use. However, more research is needed in the near future, particularly in the area of turbulence simulation. The fully numerical method will take longer to reach maturity. Based on current trends, it is predicted that this method will eventually develop into the method of choice. Both the turbulence simulation and propagation methods need to develop further for this method to become useful. Nonetheless, the authors propose that methods based on a combination of numerical and analytical techniques, e.g., CFD combined with the FW-H equation, should also be pursued. In this effort, current symbolic algebra software will allow more analytical approaches to be incorporated into AFN prediction methods.
Method For Chemical Sensing Using A Microfabricated Teeter-Totter Resonator
Adkins, Douglas Ray; Heller, Edwin J.; Shul, Randy J.
2004-11-30
A method for sensing a chemical analyte in a fluid stream comprises providing a microfabricated teeter-totter resonator that relies upon a Lorentz force to cause oscillation in a paddle, applying a static magnetic field substantially aligned in-plane with the paddle, energizing a current conductor line on a surface of the paddle with an alternating electrical current to generate the Lorentz force, exposing the resonator to the analyte, and detecting the response of the oscillatory motion of the paddle to the chemical analyte. Preferably, a chemically sensitive coating is disposed on at least one surface of the paddle to enhance the sorption of the analyte by the paddle. The concentration of the analyte in a fluid stream can be determined by measuring the change in the resonant frequency or phase of the teeter-totter resonator as the chemical analyte is added to or removed from the paddle.
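The gravimetric principle in the last sentence of the abstract above can be sketched with the textbook harmonic-oscillator relation; the spring constant and masses below are invented, illustrative values, not parameters from the patent:

```python
import math

def resonant_frequency(k, m):
    """f = (1/2π)·sqrt(k/m) for a simple harmonic oscillator."""
    return math.sqrt(k / m) / (2 * math.pi)

def sorption_shift(k, m_paddle, m_analyte):
    """Frequency change when analyte mass is sorbed onto the paddle:
    added mass lowers the resonant frequency."""
    return resonant_frequency(k, m_paddle + m_analyte) - resonant_frequency(k, m_paddle)

# Assumed values: k in N/m, masses in kg
shift = sorption_shift(k=100.0, m_paddle=1e-6, m_analyte=1e-9)
```

Measuring this (negative) shift, or the corresponding phase change, is what lets the resonator report the analyte concentration.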
High temperature ion channels and pores
NASA Technical Reports Server (NTRS)
Cheley, Stephen (Inventor); Gu, Li Qun (Inventor); Bayley, Hagan (Inventor); Kang, Xiaofeng (Inventor)
2011-01-01
The present invention includes an apparatus, system, and method for stochastic sensing of an analyte with a protein pore. The protein pore may be an engineered protein pore, such as an ion channel, operating at temperatures above 55 °C and even approaching 100 °C. The analyte may be any reactive analyte, including chemical weapons, environmental toxins, and pharmaceuticals. The analyte covalently bonds to the sensor element to produce a detectable electrical current signal, such as a change in electrical current. Detection of the signal allows identification of the analyte and determination of its concentration in a sample solution. Multiple analytes present in the same solution may also be detected.
Direct current electrical potential measurement of the growth of small cracks
NASA Technical Reports Server (NTRS)
Gangloff, Richard P.; Slavik, Donald C.; Piascik, Robert S.; Van Stone, Robert H.
1992-01-01
The analytical and experimental aspects of the direct-current electrical potential difference (dcEPD) method for continuous monitoring of the growth kinetics of short (50 to 500 micron) fatigue cracks are reviewed, and successful applications of the dcEPD method to study fatigue crack propagation in a variety of metallic alloys exposed to various environments are described. Particular attention is given to the principle of the dcEPD method, the analytical electrical potential calibration relationships, and the experimental procedures and equipment.
Jóźwik, Jagoda; Kałużna-Czaplińska, Joanna
2016-01-01
Currently, analysis of various human body fluids is one of the most essential and promising approaches to enable the discovery of biomarkers or pathophysiological mechanisms for disorders and diseases. Analysis of these fluids is challenging due to their complex composition and unique characteristics. Development of new analytical methods in this field has made it possible to analyze body fluids with higher selectivity, sensitivity, and precision. The composition and concentration of analytes in body fluids are most often determined by chromatography-based techniques. There is no doubt that proper use of knowledge that comes from a better understanding of the role of body fluids requires the cooperation of scientists of diverse specializations, including analytical chemists, biologists, and physicians. This article summarizes current knowledge about the application of different chromatographic methods in analyses of a wide range of compounds in human body fluids in order to diagnose certain diseases and disorders.
MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory.
Horowitz, Gary L; Zaman, Zahur; Blanckaert, Norbert J C; Chan, Daniel W; Dubois, Jeffrey A; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W; Nilsen, Olaug L; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang
2005-01-01
MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reductions in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement over the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality.
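As a back-of-envelope illustration of how modular throughput combines, using the per-module figures quoted in the abstract above (the module counts in the example are hypothetical, not a configuration from the study):

```python
P800_TESTS_PER_H = 800    # per P800 photometric module (from the text)
D2400_TESTS_PER_H = 2400  # per D2400 photometric module (from the text)

def configured_throughput(n_p800, n_d2400):
    """Nominal combined photometric throughput of one configuration."""
    return n_p800 * P800_TESTS_PER_H + n_d2400 * D2400_TESTS_PER_H

peak = configured_throughput(1, 2)  # e.g. one P800 plus two D2400 modules
```

Real throughput also depends on rack transport, test mix, and the ISE module, so the nominal sum is an upper bound.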
USDA-ARS?s Scientific Manuscript database
Although quantitative analytical methods must be empirically validated prior to their actual use in a variety of applications, including regulatory monitoring of chemical adulterants in foods, validation of qualitative method performance for the analytes and matrices of interest is frequently ignore...
Analytical procedures for water-soluble vitamins in foods and dietary supplements: a review.
Blake, Christopher J
2007-09-01
Water-soluble vitamins include the B-group vitamins and vitamin C. In order to correctly monitor water-soluble vitamin content in fortified foods for compliance monitoring, as well as to establish accurate data banks, an accurate and precise analytical method is a prerequisite. For many years microbiological assays have been used for the analysis of B vitamins. However, they are no longer considered the gold standard in vitamin analysis, as many studies have revealed their deficiencies. This review describes the current status of analytical methods, including microbiological assays and spectrophotometric, biosensor, and chromatographic techniques. In particular, it describes the current status of the official methods and highlights some new developments in chromatographic procedures and detection methods. An overview of multivitamin extraction and analysis for foods and supplements is also provided.
Ferrell, Jack R.; Olarte, Mariefel V.; Christensen, Earl D.; ...
2016-07-05
Here, we discuss the standardization of analytical techniques for pyrolysis bio-oils, including the current status of methods and our opinions on future directions. First, the history of past standardization efforts is summarized, with both successful and unsuccessful validations of analytical techniques highlighted. The majority of analytical standardization studies to date have tested only physical characterization techniques. In this paper, we present results from an international round robin on the validation of chemical characterization techniques for bio-oils. Techniques tested included acid number, carbonyl titrations using two different methods (one at room temperature and one at 80 °C), 31P NMR for determination of hydroxyl groups, and a quantitative gas chromatography-mass spectrometry (GC-MS) method. Both the carbonyl titration and acid number methods yielded acceptable inter-laboratory variabilities. 31P NMR produced acceptable results for aliphatic and phenolic hydroxyl groups, but not for carboxylic hydroxyl groups. As shown in previous round robins, GC-MS results were more variable. Reliable chemical characterization of bio-oils will enable upgrading research and allow for detailed comparisons of bio-oils produced at different facilities. Reliable analytics are also needed to enable an emerging bioenergy industry, as processing facilities often have different analytical needs and capabilities than research facilities. We feel that correlations in reliable characterizations of bio-oils will help strike a balance between research and industry, and will ultimately help to determine metrics for bio-oil quality. Lastly, the standardization of additional analytical methods is needed, particularly for upgraded bio-oils.
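Of the chemical characterization techniques listed above, acid number has the simplest arithmetic: milligrams of KOH consumed per gram of sample. A minimal sketch of that standard calculation (the titration values below are invented, purely illustrative):

```python
KOH_MOLAR_MASS = 56.1  # g/mol, molar mass of KOH

def acid_number(v_koh_ml, c_koh_mol_per_l, sample_mass_g):
    """Acid number = mg of KOH consumed per gram of sample.
    v_koh_ml (mL) × c (mol/L) gives mmol KOH; × 56.1 gives mg KOH."""
    mg_koh = v_koh_ml * c_koh_mol_per_l * KOH_MOLAR_MASS
    return mg_koh / sample_mass_g

an = acid_number(v_koh_ml=10.0, c_koh_mol_per_l=0.1, sample_mass_g=1.0)
```

The round robin's inter-laboratory variability then reduces to comparing such values across labs for the same bio-oil sample.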
Lores, Marta; Llompart, Maria; Alvarez-Rivera, Gerardo; Guerra, Eugenia; Vila, Marlene; Celeiro, Maria; Lamas, J Pablo; Garcia-Jares, Carmen
2016-04-07
Cosmetic products placed on the market, and their ingredients, must be safe under reasonable conditions of use, in accordance with current legislation. Therefore, regulated and allowed chemical substances must meet the regulatory criteria to be used as ingredients in cosmetics and personal care products, and adequate analytical methodology is needed to evaluate the degree of compliance. This article reviews the most recent methods (2005-2015) used for the extraction and analytical determination of the ingredients included in the positive lists of the European Regulation on Cosmetic Products (EC 1223/2009): colorants, preservatives, and UV filters. It summarizes the analytical properties of the most relevant methods along with their ability to meet current regulatory requirements. The cosmetic legislation is frequently updated; consequently, the analytical methodology must be constantly revised and improved to meet safety requirements. The article highlights the most important advances in analytical methodology for cosmetics control, both in sample pretreatment and extraction and in the different instrumental approaches developed to meet this challenge. Cosmetics are complex samples, and most require pretreatment before analysis. Recent research on this aspect has tended toward green extraction and microextraction techniques. Analytical methods have generally been based on liquid chromatography with UV detection, and on gas and liquid chromatographic techniques hyphenated with single or tandem mass spectrometry; some interesting proposals based on electrophoresis have also been reported, together with some electroanalytical approaches. Regarding the number of ingredients considered for analytical control, single-analyte methods have been proposed, although the most useful in real-life cosmetic analysis are multianalyte approaches.
A Model for Axial Magnetic Bearings Including Eddy Currents
NASA Technical Reports Server (NTRS)
Kucera, Ladislav; Ahrens, Markus
1996-01-01
This paper presents an analytical method of modelling eddy currents inside axial bearings. The problem is solved by dividing an axial bearing into elementary geometric forms, solving the Maxwell equations for these simplified geometries, defining boundary conditions and combining the geometries. The final result is an analytical solution for the flux, from which the impedance and the force of an axial bearing can be derived. Several impedance measurements have shown that the analytical solution can fit the measured data with a precision of approximately 5%.
Analytical approaches to optimizing system "Semiconductor converter-electric drive complex"
NASA Astrophysics Data System (ADS)
Kormilicin, N. V.; Zhuravlev, A. M.; Khayatov, E. S.
2018-03-01
In electric drives for the machine-building industry, the problem of optimizing the drive in terms of mass and size is acute. The article offers analytical methods that minimize the mass of a multiphase semiconductor converter. In multiphase electric drives, the phase current waveform that makes the best use of the active materials of the "semiconductor converter-electric drive complex" differs from a sinusoid. It is shown that under certain restrictions on the phase current waveform, an analytical solution can be obtained. In particular, if the phase current is assumed to be rectangular, the optimal shape of the control actions will depend on the width of the interpolar gap. In the general case, the proposed algorithm can be used to solve the problem by numerical methods.
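The advantage of a rectangular phase current over a sinusoidal one can be checked numerically: for the same peak current, the rectangular waveform carries √2 times the RMS current, hence more torque-producing current for the same semiconductor voltage rating. This is a generic illustration, not the paper's own derivation:

```python
import math

def rms(samples):
    """Root mean square of a sampled waveform."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

n = 1000  # samples over one full period
sine = [math.sin(2 * math.pi * i / n) for i in range(n)]
square = [1.0 if s >= 0 else -1.0 for s in sine]  # same unit peak

ratio = rms(square) / rms(sine)  # ≈ sqrt(2) ≈ 1.414
```

In practice the gain is traded against harmonic losses and torque ripple, which is why the optimum depends on constraints such as the interpolar gap width.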
A Structural and Correlational Analysis of Two Common Measures of Personal Epistemology
ERIC Educational Resources Information Center
Laster, Bonnie Bost
2010-01-01
Scope and Method of Study: The current inquiry is a factor analytic study which utilizes first and second order factor analytic methods to examine the internal structures of two measurements of personal epistemological beliefs: the Schommer Epistemological Questionnaire (SEQ) and Epistemic Belief Inventory (EBI). The study also examines the…
The Development of MST Test Information for the Prediction of Test Performances
ERIC Educational Resources Information Center
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.
2017-01-01
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
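The analytic standard errors mentioned in the abstract above can be sketched with standard item response theory: under a 2PL model, the standard error of the ability estimate is the inverse square root of the test information. The item parameters below are invented for illustration, not taken from the study:

```python
import math

def p_2pl(theta, a, b):
    """2PL item response function: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ability_se(theta, items):
    """SE(theta) = 1 / sqrt(test information), where each item
    contributes information a^2 · P · (1 - P)."""
    info = sum(a * a * p_2pl(theta, a, b) * (1.0 - p_2pl(theta, a, b))
               for a, b in items)
    return 1.0 / math.sqrt(info)

items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7)]  # (discrimination, difficulty)
se_mid = ability_se(0.0, items)
```

Evaluating this across theta levels for each MST panel is the kind of derivation that lets performance be predicted without running simulations.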
Vandekerckhove, Kristof; Seidl, Andreas; Gutka, Hiten; Kumar, Manish; Gratzl, Gyöngyi; Keire, David; Coffey, Todd; Kuehne, Henriette
2018-05-10
Leading regulatory agencies recommend biosimilar assessment to proceed in a stepwise fashion, starting with a detailed analytical comparison of the structural and functional properties of the proposed biosimilar and reference product. The degree of analytical similarity determines the degree of residual uncertainty that must be addressed through downstream in vivo studies. Substantive evidence of similarity from comprehensive analytical testing may justify a targeted clinical development plan, and thus enable a shorter path to licensing. The importance of a careful design of the analytical similarity study program therefore should not be underestimated. Designing a state-of-the-art analytical similarity study meeting current regulatory requirements in regions such as the USA and EU requires a methodical approach, consisting of specific steps that far precede the work on the actual analytical study protocol. This white paper discusses scientific and methodological considerations on the process of attribute and test method selection, criticality assessment, and subsequent assignment of analytical measures to US FDA's three tiers of analytical similarity assessment. Case examples of selection of critical quality attributes and analytical methods for similarity exercises are provided to illustrate the practical implementation of the principles discussed.
Analytical methods for human biomonitoring of pesticides. A review.
Yusa, Vicent; Millet, Maurice; Coscolla, Clara; Roca, Marta
2015-09-03
Biomonitoring of both currently used and banned persistent pesticides is a very useful tool for assessing human exposure to these chemicals. In this review, we present current approaches and recent advances in analytical methods for determining biomarkers of exposure to pesticides in the most commonly used specimens, such as blood, urine, and breast milk, and in emerging non-invasive matrices such as hair and meconium. We critically discuss the main approaches to sample treatment and the instrumental techniques currently used to determine the most relevant pesticide biomarkers. We finally look at future trends in this field.
NASA Astrophysics Data System (ADS)
Ji, Jinghua; Luo, Jianhua; Lei, Qian; Bian, Fangfang
2017-05-01
This paper proposes an analytical method, based on the conformal mapping (CM) method, for accurate evaluation of the magnetic field and eddy current (EC) loss in fault-tolerant permanent-magnet (FTPM) machines. The modulation function applied in the CM method transforms the open-slot structure into a fully closed-slot structure, whose air-gap flux density is easy to calculate analytically. Therefore, with the help of the Matlab Schwarz-Christoffel (SC) Toolbox, both the magnetic flux density and the EC density of the FTPM machine are obtained accurately. Finally, a time-stepped transient finite-element method (FEM) is used to verify the theoretical analysis, showing that the proposed method predicts the magnetic flux density and EC loss precisely.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koester, C J; Moulik, A
This article discusses developments in environmental analytical chemistry that occurred in the years 2003 and 2004. References were found by searching the Science Citation Index and Current Contents. As in our review of two years ago (A1), techniques are highlighted that represent current trends and state-of-the-art technologies in the sampling, extraction, separation, and detection of trace concentrations (low parts per billion and less) of organic, inorganic, and organometallic contaminants in environmental samples. New analytes of interest are also reviewed, whose detection is made possible by recently developed analytical instruments and methods.
Over the last 10 years the EPA has invested in analytic elements as a computational method used in public domain software supporting capture zone delineation for source water assessments and wellhead protection. The current release is called WhAEM2000 (wellhead analytic element ...
Gold nanostructures and methods of use
Zhang, Jin Z [Santa Cruz, CA; Schwartzberg, Adam [Santa Cruz, CA; Olson, Tammy Y [Santa Cruz, CA
2012-03-20
The invention is drawn to novel nanostructures comprising hollow nanospheres and nanotubes for use as chemical sensors, conduits for fluids, and electronic conductors. The nanostructures can be used in microfluidic devices, for transporting fluids between devices and structures in analytical devices, for conducting electrical currents between devices and structure in analytical devices, and for conducting electrical currents between biological molecules and electronic devices, such as bio-microchips.
Gold nanostructures and methods of use
Zhang, Jin Z.; Schwartzberg, Adam; Olson, Tammy Y.
2016-03-01
The invention is drawn to novel nanostructures comprising hollow nanospheres and nanotubes for use as chemical sensors, conduits for fluids, and electronic conductors. The nanostructures can be used in microfluidic devices, for transporting fluids between devices and structures in analytical devices, for conducting electrical currents between devices and structure in analytical devices, and for conducting electrical currents between biological molecules and electronic devices, such as bio-microchips.
Overview of mycotoxin methods, present status and future needs.
Gilbert, J
1999-01-01
This article reviews current requirements for the analysis for mycotoxins in foods and identifies legislative as well as other factors that are driving development and validation of new methods. New regulatory limits for mycotoxins and analytical quality assurance requirements for laboratories to only use validated methods are seen as major factors driving developments. Three major classes of methods are identified which serve different purposes and can be categorized as screening, official and research. In each case the present status and future needs are assessed. In addition to an overview of trends in analytical methods, some other areas of analytical quality assurance such as participation in proficiency testing and reference materials are identified.
Solution of magnetic field and eddy current problem induced by rotating magnetic poles (abstract)
NASA Astrophysics Data System (ADS)
Liu, Z. J.; Low, T. S.
1996-04-01
The magnetic field and eddy current problems induced by rotating permanent magnet poles occur in electromagnetic dampers, magnetic couplings, and many other devices. Whereas numerical techniques, for example finite element methods, can be exploited to study various features of these problems, such as heat generation and drag torque development, the analytical solution is always of interest to designers, since it helps them gain insight into the interdependence of the parameters involved and provides an efficient design tool. Previous work showed that the solution of the eddy current problem due to linearly moving magnet poles can give a satisfactory approximation for the eddy current problem due to rotating fields. However, in many practical cases, especially when the number of magnet poles is small, there is a significant effect of flux focusing due to the geometry. The above approximation can therefore lead to marked errors in the theoretical predictions of device performance. Bernot et al. recently described an analytical solution in a polar coordinate system where the radial field is excited by a time-varying source. This article discusses an analytical solution of the magnetic field and eddy current problems induced by moving magnet poles in radial field machines. The theoretical predictions obtained from this method are compared with results obtained from finite element calculations. The validity of the method is also checked by comparing the theoretical predictions with measurements from a test machine. It is shown that the introduced solution leads to a significant improvement in the air gap field prediction compared with the analytical solution that models the eddy current problems induced by linearly moving magnet poles.
Young, Brian; King, Jonathan L; Budowle, Bruce; Armogida, Luigi
2017-01-01
Amplicon (targeted) sequencing by massively parallel sequencing (PCR-MPS) is a potential method for use in forensic DNA analyses. In this application, PCR-MPS may supplement or replace other instrumental analysis methods such as capillary electrophoresis and Sanger sequencing for STR and mitochondrial DNA typing, respectively. PCR-MPS also may enable the expansion of forensic DNA analysis methods to include new marker systems such as single nucleotide polymorphisms (SNPs) and insertion/deletions (indels) that currently are assayable using various instrumental analysis methods including microarray and quantitative PCR. Acceptance of PCR-MPS as a forensic method will depend in part upon developing protocols and criteria that define the limitations of a method, including a defensible analytical threshold or method detection limit. This paper describes an approach to establish objective analytical thresholds suitable for multiplexed PCR-MPS methods. A definition is proposed for PCR-MPS method background noise, and an analytical threshold based on background noise is described. PMID:28542338
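The noise-based threshold idea described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' exact statistic: it treats non-allelic read counts at a locus as background noise and sets the analytical threshold at the noise mean plus k standard deviations (the data and the choice k = 3 are hypothetical).

```python
import statistics

def analytical_threshold(noise_counts, k=3.0):
    # Threshold = mean + k * SD of background (non-allelic) read counts.
    # A generic noise-based rule, used here only for illustration.
    mu = statistics.mean(noise_counts)
    sigma = statistics.stdev(noise_counts)
    return mu + k * sigma

# hypothetical background read counts observed at one locus
noise = [4, 7, 5, 6, 9, 3, 5, 8]
threshold = analytical_threshold(noise)  # reads above this are treated as signal
```

In practice the noise distribution would be estimated from many negative or known-genotype samples rather than a single locus.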
Makarov, Sergey N.; Yanamadala, Janakinadh; Piazza, Matthew W.; Helderman, Alex M.; Thang, Niang S.; Burnham, Edward H.; Pascual-Leone, Alvaro
2016-01-01
Goals: Transcranial magnetic stimulation (TMS) is increasingly used as a diagnostic and therapeutic tool for numerous neuropsychiatric disorders. The use of TMS might cause whole-body exposure to undesired induced currents in patients and TMS operators. The aim of the present study is to test and justify a simple analytical model, known previously, which may be helpful as an upper estimate of eddy current density at a particular distant observation point for any body composition and any coil setup. Methods: We compare the analytical solution with comprehensive adaptive mesh refinement-based FEM simulations of a detailed full-body human model, two coil types, five coil positions, about 100,000 observation points, and two distinct pulse rise times, thus providing a representative number of different data sets for comparison, while also using other numerical data. Results: Our simulations reveal that, after a certain modification, the analytical model provides an upper estimate for the eddy current density at any location within the body. In particular, it overestimates the peak eddy currents at distant locations from a TMS coil by a factor of 10 on average. Conclusion: The simple analytical model tested in the present study may be valuable as a rapid method to safely estimate levels of TMS currents at different locations within a human body. Significance: At present, safe limits of general exposure to TMS electric and magnetic fields are an open subject, including fetal exposure for pregnant women. PMID:26685221
Code of Federal Regulations, 2010 CFR
2010-07-01
... methods as presented in current environmental and analytical chemistry literature. Examples of analytical....001 microgram (µg) of compound per milligram of organic extract) of these compounds in the extractable organic matter. The concentration of each individual PAH or NPAH compound identified shall be reported in...
Nováková, Lucie; Pavlík, Jakub; Chrenková, Lucia; Martinec, Ondřej; Červený, Lukáš
2018-01-05
This review is Part II of a series aiming to provide a comprehensive overview of currently used antiviral drugs and to show modern approaches to their analysis. While Part I addressed antivirals against herpes viruses and respiratory viruses, this part concerns antivirals against hepatitis viruses (B and C) and human immunodeficiency virus (HIV). Many novel antivirals against hepatitis C virus (HCV) and HIV have been introduced into clinical practice over the last decade. The recently broadened portfolio of these groups of antivirals is reflected in an increasing number of analytical methods developed to meet the needs of the clinical terrain. Part II summarizes the mechanisms of action of antivirals against hepatitis B virus (HBV), HCV, and HIV, their use in clinical practice, and analytical methods for the individual classes. It also provides expert opinion on the state of the art in the field of bioanalysis of these drugs. Analytical methods reflect the novelty of these chemical structures and use the most current approaches, such as simple and high-throughput sample preparation and fast separation, often by means of UHPLC-MS/MS. Proper method validation based on the requirements of bioanalytical guidelines is an inherent part of the developed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
Approaching the Limit in Atomic Spectrochemical Analysis.
ERIC Educational Resources Information Center
Hieftje, Gary M.
1982-01-01
To assess the ability of current analytical methods to approach the single-atom detection level, theoretical and experimentally determined detection levels are presented for several chemical elements. A comparison of these methods shows that the most sensitive atomic spectrochemical technique currently available is based on emission from…
Fast analytical spectral filtering methods for magnetic resonance perfusion quantification.
Reddy, Kasireddy V; Mitra, Abhishek; Yalavarthy, Phaneendra K
2016-08-01
Deconvolution in perfusion weighted imaging (PWI) plays an important role in quantifying MR perfusion parameters. The application of PWI to stroke and brain tumor studies has become standard clinical practice. The standard approaches for this deconvolution are oscillatory-limited singular value decomposition (oSVD) and frequency domain deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of MR perfusion data. In this work, two fast deconvolution methods (namely, analytical Fourier filtering and analytical Showalter spectral filtering) are proposed. Through systematic evaluation, the proposed methods are shown to be computationally efficient and quantitatively accurate compared to FDD and oSVD.
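The frequency-domain route can be illustrated with a short sketch. The code below performs a generic FFT-based deconvolution with a Tikhonov-style spectral filter on synthetic, noise-free curves; the paper's analytical Fourier and Showalter filters differ in the exact filter function, and all signal shapes here are made up.

```python
import numpy as np

def fdd_deconvolve(tissue, aif, lam=1e-3):
    # FFT-based deconvolution with a Tikhonov-style spectral filter:
    # R(w) = conj(A) * C / (|A|^2 + lam^2), then inverse FFT.
    A = np.fft.fft(aif)
    C = np.fft.fft(tissue)
    R = np.conj(A) * C / (np.abs(A) ** 2 + lam ** 2)
    return np.real(np.fft.ifft(R))

# synthetic example (dt = 0.5 s): gamma-like arterial input, exponential residue
dt = 0.5
t = np.arange(64) * dt
aif = t * np.exp(-t)
r_true = np.exp(-t / 2.0)
tissue = np.convolve(aif, r_true)[:64] * dt  # discrete convolution of the two
r_est = fdd_deconvolve(tissue, aif * dt)
err = np.max(np.abs(r_est - r_true))
```

With noisy clinical data the regularization parameter `lam` (or the Showalter filter's equivalent) controls the trade-off between noise amplification and bias.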
Zakon, Yevgeni; Ronen, Zeev; Halicz, Ludwik; Gelman, Faina
2017-10-01
In the present study, we propose a new analytical method for ³⁷Cl/³⁵Cl analysis in perchlorate by Ion Chromatography (IC) coupled to Multicollector Inductively Coupled Plasma Mass Spectrometry (MC-ICPMS). The accuracy of the analytical method was validated by analysis of the international perchlorate standard materials USGS-37 and USGS-38; analytical precision better than ±0.4‰ was achieved. ³⁷Cl/³⁵Cl isotope ratio analysis in perchlorate during a laboratory biodegradation experiment with microbial cultures enriched from contaminated soil in Israel resulted in an isotope enrichment factor ε³⁷Cl = -13.3 ± 1‰, which falls in the range reported previously for perchlorate biodegradation by pure microbial cultures. The proposed analytical method may significantly simplify the procedure for isotope analysis of perchlorate currently applied in environmental studies. Copyright © 2017. Published by Elsevier Ltd.
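The reported enrichment factor follows from the Rayleigh equation, and its extraction from degradation data can be sketched as below. The fractions and δ values are synthetic, generated with ε = -13.3‰ so the fit simply recovers the input; a real data set would come from the IC/MC-ICPMS measurements.

```python
import math

def enrichment_factor(fractions, delta_vals):
    # Rayleigh model: ln((delta + 1000)/(delta0 + 1000)) = (eps/1000) * ln(f)
    # Least-squares slope of y vs x gives eps (in per mil).
    d0 = delta_vals[0]
    xs = [math.log(f) for f in fractions]
    ys = [math.log((d + 1000.0) / (d0 + 1000.0)) for d in delta_vals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return 1000.0 * num / den

# synthetic degradation series generated with eps = -13.3 per mil (delta0 = 0)
eps_true = -13.3
fracs = [1.0, 0.8, 0.6, 0.4, 0.2]  # remaining perchlorate fraction
deltas = [1000.0 * (f ** (eps_true / 1000.0) - 1.0) for f in fracs]
eps_fit = enrichment_factor(fracs, deltas)
```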
Green Chemistry Metrics with Special Reference to Green Analytical Chemistry.
Tobiszewski, Marek; Marć, Mariusz; Gałuszka, Agnieszka; Namieśnik, Jacek
2015-06-12
The concept of green chemistry is widely recognized in chemical laboratories. To properly measure an environmental impact of chemical processes, dedicated assessment tools are required. This paper summarizes the current state of knowledge in the field of development of green chemistry and green analytical chemistry metrics. The diverse methods used for evaluation of the greenness of organic synthesis, such as eco-footprint, E-Factor, EATOS, and Eco-Scale are described. Both the well-established and recently developed green analytical chemistry metrics, including NEMI labeling and analytical Eco-scale, are presented. Additionally, this paper focuses on the possibility of the use of multivariate statistics in evaluation of environmental impact of analytical procedures. All the above metrics are compared and discussed in terms of their advantages and disadvantages. The current needs and future perspectives in green chemistry metrics are also discussed.
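One of the metrics named above is easy to state concretely. As a minimal sketch, the E-factor is the mass of waste generated per mass of product isolated; the batch figures below are hypothetical.

```python
def e_factor(total_input_mass_kg, product_mass_kg):
    # E-factor = (total mass in - mass of isolated product) / product mass;
    # everything that is not isolated product counts as waste.
    waste_kg = total_input_mass_kg - product_mass_kg
    return waste_kg / product_mass_kg

# hypothetical batch: 120 kg of raw materials and solvents, 15 kg of product
ef = e_factor(120.0, 15.0)  # -> 7.0 kg waste per kg product
```

Metrics such as EATOS or the analytical Eco-Scale extend this idea by weighting waste streams and hazards rather than summing raw mass.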
Suba, Dávid; Urbányi, Zoltán; Salgó, András
2016-10-01
Capillary electrophoresis techniques are widely used in analytical biotechnology. Different electrophoretic techniques are very adequate tools to monitor size- and charge-heterogeneities of protein drugs. Method descriptions and development studies of capillary zone electrophoresis (CZE) have been described in the literature. Most of them are performed based on the classical one-factor-at-a-time (OFAT) approach. In this study, a very simple method development approach is described for capillary zone electrophoresis: a "two-phase-four-step" approach is introduced which allows a rapid, iterative method development process and can be a good platform for CZE methods. In every step, the current analytical target profile and an appropriate control strategy were established to monitor the current stage of development. A very good platform was established to investigate intact and digested protein samples. A commercially available monoclonal antibody was chosen as the model protein for the method development study. The CZE method was qualified after the development process and the results are presented. The analytical system stability was represented by the calculated RSD% values of area percentage and migration time of the selected peaks (<0.8% and <5%) during the intermediate precision investigation. Copyright © 2016 Elsevier B.V. All rights reserved.
Manickum, Thavrin; John, Wilson
2015-07-01
The availability of national test centers to offer a routine service for analysis and quantitation of some selected steroid hormones [natural estrogens (17-β-estradiol, E2; estrone, E1; estriol, E3), synthetic estrogen (17-α-ethinylestradiol, EE2), androgen (testosterone), and progestogen (progesterone)] in wastewater matrix was investigated; corresponding internationally used chemical- and immuno-analytical test methods were reviewed. The enzyme-linked immunosorbent assay (ELISA) (immuno-analytical technique) was also assessed for its suitability as a routine test method to quantitate the levels of these hormones at a sewage/wastewater treatment plant (WTP) (Darvill, Pietermaritzburg, South Africa), over a 2-year period. The method performance and other relevant characteristics of the immuno-analytical ELISA method were compared to the conventional chemical-analytical methodology, like gas/liquid chromatography-mass spectrometry (GC/LC-MS), and GC-LC/tandem mass spectrometry (MSMS), for quantitation of the steroid hormones in wastewater and environmental waters. The national immuno-analytical ELISA technique was found to be sensitive (LOQ 5 ng/L, LOD 0.2-5 ng/L), accurate (mean recovery 96%), precise (RSD 7-10%), and cost-effective for screening and quantitation of these steroid hormones in wastewater and environmental water matrix. A survey of the most current international literature indicates a fairly equal use of the LC-MS/MS, GC-MS/MS (chemical-analytical), and ELISA (immuno-analytical) test methods for screening and quantitation of the target steroid hormones in both water and wastewater matrix. Internationally, the observed sensitivity, based on LOQ (ng/L), for the steroid estrogens E1, E2, EE2, is, in decreasing order: LC-MSMS (0.08-9.54) > GC-MS (1) > ELISA (5) (chemical-analytical > immuno-analytical). 
At the national level, the routine, unoptimized chemical-analytical LC-MSMS method was found to lack the required sensitivity for meeting environmental requirements for steroid hormone quantitation. Further optimization of the sensitivity of the chemical-analytical LC-tandem mass spectrometry methods, especially for wastewater screening, in South Africa is required. Risk assessment studies showed that it was not practical to propose standards or allowable limits for the steroid estrogens E1, E2, EE2, and E3; the use of predicted-no-effect concentration values of the steroid estrogens appears to be appropriate for use in their risk assessment in relation to aquatic organisms. For raw water sources, drinking water, raw and treated wastewater, the use of bioassays, with trigger values, is a useful screening tool option to decide whether further examination of specific endocrine activity may be warranted, or whether concentrations of such activity are of low priority, with respect to health concerns in the human population. The achievement of improved quantitation limits for immuno-analytical methods, like ELISA, used for compound quantitation, and standardization of the method for measuring E2 equivalents (EEQs) used for biological activity (endocrine: e.g., estrogenic) are some areas for future EDC research.
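The performance figures quoted above (recovery, RSD) come from standard formulas that can be sketched directly; the replicate values below are hypothetical, not data from the Darvill plant study.

```python
import statistics

def recovery_pct(mean_measured, spiked):
    # recovery (%) = measured concentration / spiked concentration * 100
    return 100.0 * mean_measured / spiked

def rsd_pct(values):
    # relative standard deviation (%) = sample SD / mean * 100
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# hypothetical ELISA replicates (ng/L) for a 10 ng/L E2 spike
reps = [9.5, 9.8, 9.2, 10.1, 9.6]
rec = recovery_pct(statistics.mean(reps), 10.0)
rsd = rsd_pct(reps)
```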
Wind-induced vibration of stay cables : brief
DOT National Transportation Integrated Search
2005-02-01
The objectives of this project were to: identify gaps in the current knowledge base; conduct analytical and experimental research in critical areas; study the performance of existing cable-stayed bridges; and study current mitigation methods...
Zhang, Baohong; Pan, Xiaoping; Venne, Louise; Dunnum, Suzy; McMurry, Scott T; Cobb, George P; Anderson, Todd A
2008-05-30
A reliable, sensitive, and reproducible method was developed for quantitative determination of nine new-generation pesticides currently used in cotton agriculture. Injector temperature significantly affected analyte response as indicated by electron capture detector (ECD) chromatograms. A majority of the analytes had an enhanced response at injector temperatures between 240 and 260 °C, especially analytes such as acephate that overall had a poor response on the ECD. The method detection limits (MDLs) were 0.13, 0.05, 0.29, 0.35, 0.08, 0.10, 0.32, 0.05, and 0.59 ng/mL for acephate, trifluralin, malathion, thiamethoxam, pendimethalin, DEF6, acetamiprid, bifenthrin, and lambda-cyhalothrin, respectively. This study provides a precise (0.17-13.1% RSD), accurate (recoveries = 88-107%), and reproducible method for the analytes of interest. At relatively high concentrations, only lambda-cyhalothrin was unstable at room temperature (20-25 °C) and 4 °C over 10 days. At relatively low concentrations, acephate and acetamiprid were also unstable regardless of temperature. After 10 days of storage at room temperature, 30-40% degradation of lambda-cyhalothrin was observed. It is recommended that acephate, acetamiprid, and lambda-cyhalothrin be stored at -20 °C or analyzed immediately after extraction.
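MDLs such as those quoted above are conventionally derived from replicate low-level spikes. A sketch of the common EPA-style calculation follows; the replicate values are hypothetical.

```python
import statistics

# one-sided Student's t at the 99% level, indexed by degrees of freedom
T99 = {6: 3.143, 7: 2.998, 8: 2.896}

def mdl(replicates):
    # EPA-style method detection limit: MDL = t(n-1, 0.99) * s,
    # where s is the standard deviation of n replicate spike analyses.
    s = statistics.stdev(replicates)
    return T99[len(replicates) - 1] * s

# hypothetical replicate results (ng/mL) for a low-level spike
reps = [0.11, 0.14, 0.12, 0.15, 0.13, 0.12, 0.14, 0.13]
detection_limit = mdl(reps)
```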
USDA-ARS?s Scientific Manuscript database
Current methods for generating malting quality metrics have been developed largely to support commercial malting and brewing operations, providing accurate, reproducible analytical data to guide malting and brewing production. Infrastructure to support these analytical operations often involves sub...
NASA Astrophysics Data System (ADS)
Pekşen, Ertan; Yas, Türker; Kıyak, Alper
2014-09-01
We examine the one-dimensional direct current method in anisotropic earth formations. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also consider a horizontally layered anisotropic earth response with respect to the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the model parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities and thickness. Particle swarm optimization is a nature-inspired meta-heuristic algorithm. The proposed method finds the model parameters quite successfully based on synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity in the values of the model parameters. For this reason, the results should be controlled by a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation at each iteration, and the frequency distribution of the model parameters to reduce the ambiguity. The results are promising, and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
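Particle swarm optimization itself is generic and can be sketched compactly. The code below is a minimal PSO applied to a toy two-parameter misfit with a known minimum; an actual inversion would replace the toy objective with the 1D anisotropic DC forward model, and the parameter names (a resistivity and a thickness) are placeholders.

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Minimal particle swarm optimizer: each particle tracks its best
    # position (pbest); the swarm tracks the global best (gbest).
    random.seed(0)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy misfit with minimum at rho_h = 100 ohm-m, thickness = 5 m (placeholders)
misfit = lambda p: (p[0] - 100.0) ** 2 + (p[1] - 5.0) ** 2
best, best_val = pso(misfit, [(1.0, 500.0), (0.1, 50.0)])
```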
Cho, Il-Hoon; Ku, Seockmo
2017-09-30
The development of novel and high-tech solutions for rapid, accurate, and non-laborious microbial detection methods is imperative to improve the global food supply. Such solutions have begun to address the need for microbial detection that is faster and more sensitive than existing methodologies (e.g., classic culture enrichment methods). Multiple reviews report the technical functions and structures of conventional microbial detection tools. These tools, used to detect pathogens in food and food homogenates, were designed via qualitative analysis methods. The inherent disadvantage of these analytical methods is the necessity for specimen preparation, which is a time-consuming process. While some literature describes the challenges and opportunities to overcome the technical issues related to food industry legal guidelines, there is a lack of reviews of the current trials to overcome technological limitations related to sample preparation and microbial detection via nano and micro technologies. In this review, we primarily explore current analytical technologies, including metallic and magnetic nanomaterials, optics, electrochemistry, and spectroscopy. These techniques rely on the early detection of pathogens via enhanced analytical sensitivity and specificity. In order to introduce the potential combination and comparative analysis of various advanced methods, we also reference a novel sample preparation protocol that uses microbial concentration and recovery technologies. This technology has the potential to expedite the pre-enrichment step that precedes the detection process.
2013-01-01
Influenza virus-like particle (VLP) vaccines are one of the most promising ways to respond to the threat of future influenza pandemics. VLPs are composed of viral antigens but lack nucleic acids, making them non-infectious, which limits the risk of recombination with wild-type strains. By taking advantage of advancements in cell culture technologies, the process from strain identification to manufacturing has the potential to be completed rapidly and easily at large scales. After closely reviewing the current research done on influenza VLPs, it is evident that the development of quantification methods has been consistently overlooked. VLP quantification at all stages of the production process has been left to rely on current influenza quantification methods (i.e., hemagglutination assay (HA), single radial immunodiffusion assay (SRID), NA enzymatic activity assays, Western blot, electron microscopy). These are analytical methods developed decades ago for influenza virions and final bulk influenza vaccines. Although these methods are time-consuming and cumbersome, they have been sufficient for the characterization of final purified material. Nevertheless, these analytical methods are impractical for in-line process monitoring because VLP concentrations in crude samples generally fall out of the range of detection for these methods. This consequently impedes the development of robust influenza-VLP production and purification processes. Thus, development of functional process analytical techniques, applicable at every stage during production and compatible with different production platforms, is in great need to assess, optimize, and exploit the full potential of novel manufacturing platforms. PMID:23642219
Modern Instrumental Methods in Forensic Toxicology*
Smith, Michael L.; Vorce, Shawn P.; Holler, Justin M.; Shimomura, Eric; Magluilo, Joe; Jacobs, Aaron J.; Huestis, Marilyn A.
2009-01-01
This article reviews modern analytical instrumentation in forensic toxicology for identification and quantification of drugs and toxins in biological fluids and tissues. A brief description of the theory and inherent strengths and limitations of each methodology is included. The focus is on new technologies that address current analytical limitations. A goal of this review is to encourage innovations to improve our technological capabilities and to encourage use of these analytical techniques in forensic toxicology practice. PMID:17579968
Gasoline and Diesel Fuel Test Methods Additional Resources
Supporting documents on the Direct Final Rule that allows refiners and laboratories to use more current and improved fuel testing procedures for twelve American Society for Testing and Materials analytical test methods.
Do Premarital Education Programs Really Work? A Meta-Analytic Study
ERIC Educational Resources Information Center
Fawcett, Elizabeth B.; Hawkins, Alan J.; Blanchard, Victoria L.; Carroll, Jason S.
2010-01-01
Previous studies (J. S. Carroll & W. J. Doherty, 2003) have asserted that premarital education programs have a positive effect on program participants. Using meta-analytic methods of current best practices to look across the entire body of published and unpublished evaluation research on premarital education, we found a more complex pattern of…
Mega-Analysis of School Psychology Blueprint for Training and Practice Domains
ERIC Educational Resources Information Center
Burns, Matthew K.; Kanive, Rebecca; Zaslofsky, Anne F.; Parker, David C.
2013-01-01
Meta-analytic research is an effective method for synthesizing existing research and for informing practice and policy. Hattie (2009) suggested that meta-analytic procedures could be employed to existing meta-analyses to create a mega-analysis. The current mega-analysis examined a sample of 47 meta-analyses according to the "School…
Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech
2015-01-01
Nowadays, studies of the distribution of metallic elements in biological samples are among the most important issues. There are many articles dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging of metallic elements in various kinds of biological samples. However, in such literature there is a lack of articles reviewing the calibration strategies, their problems, nomenclature, definitions, and the ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize analytical calibration in the (bio)imaging/mapping of metallic elements in biological samples, including (1) nomenclature; (2) definitions; and (3) selected, sophisticated examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials, such as LA-ICP-MS, SIMS, EDS, XRF, and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, this work aims to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure, and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from those hitherto used. This article is the first work in the literature that refers to and emphasizes the many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping of metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.
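The simplest calibration strategy underlying the approaches discussed above, an external standard curve, reduces to a least-squares line and its inversion. The standards below are made-up numbers chosen to lie exactly on a line, purely to show the mechanics.

```python
def linear_calibration(concs, signals):
    # Fit signal = a + b * conc by ordinary least squares and return
    # (a, b) plus a function mapping a measured signal back to concentration.
    n = len(concs)
    mx, my = sum(concs) / n, sum(signals) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(concs, signals))
         / sum((x - mx) ** 2 for x in concs))
    a = my - b * mx
    return a, b, lambda s: (s - a) / b

# hypothetical standards: concentration (ug/g) vs instrument counts
a, b, to_conc = linear_calibration([0, 1, 2, 5, 10], [3, 52, 101, 248, 493])
sample_conc = to_conc(150)  # concentration corresponding to a 150-count signal
```

Imaging calibration (e.g., for LA-ICP-MS maps) is more involved, since matrix-matched standards and per-pixel normalization are needed, but the same signal-to-concentration inversion sits at its core.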
Analytical evaluation of current starch methods used in the international sugar industry: Part I.
Cole, Marsha; Eggleston, Gillian; Triplett, Alexa
2017-08-01
Several analytical starch methods exist in the international sugar industry to mitigate starch-related processing challenges and assess the quality of traded end-products. These methods use iodometric chemistry, mostly potato starch standards, and similar solubilization strategies, but they had not been comprehensively compared. In this study, industrial starch methods were compared to the USDA Starch Research method using simulated raw sugars. The type of starch standard, solubilization approach, iodometric reagents, and detection wavelength affected total starch determination in simulated raw sugars. Simulated sugars containing potato starch were more accurately detected by the industrial methods, whereas those containing corn starch, a better model for sugarcane starch, were only accurately measured by the USDA Starch Research method. Use of a potato starch standard curve over-estimated starch concentrations. Among the variables studied, the starch standard, solubilization approach, and detection wavelength most affected the sensitivity, accuracy/precision, and limits of detection/quantification of the current industry starch methods. Published by Elsevier Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, Qiao; Liang, WanZhen, E-mail: liangwz@xmu.edu.cn; Liu, Jie
2014-05-14
This work extends our previous works [J. Liu and W. Z. Liang, J. Chem. Phys. 135, 014113 (2011); J. Liu and W. Z. Liang, J. Chem. Phys. 135, 184111 (2011)] on analytical excited-state energy Hessian within the framework of time-dependent density functional theory (TDDFT) to couple with molecular mechanics (MM). The formalism, implementation, and applications of analytical first and second energy derivatives of TDDFT/MM excited state with respect to the nuclear and electric perturbations are presented. Their performances are demonstrated by the calculations of adiabatic excitation energies, and excited-state geometries, harmonic vibrational frequencies, and infrared intensities for a number of benchmark systems. The consistent results with the full quantum mechanical method and other hybrid theoretical methods indicate the reliability of the current numerical implementation of developed algorithms. The computational accuracy and efficiency of the current analytical approach are also checked, and computationally efficient strategies are suggested to speed up the calculations of complex systems with many MM degrees of freedom. Finally, we apply the current analytical approach in TDDFT/MM to a realistic system, a red fluorescent protein chromophore together with part of its nearby protein matrix. The calculated results indicate that the rearrangement of the hydrogen bond interactions between the chromophore and the protein matrix is responsible for the large Stokes shift.
Three-dimensional eddy current solution of a polyphase machine test model (abstract)
NASA Astrophysics Data System (ADS)
Pahner, Uwe; Belmans, Ronnie; Ostovic, Vlado
1994-05-01
This abstract describes a three-dimensional (3D) finite element solution of a test model that has been reported in the literature. The model is a basis for calculating the current redistribution effects in the end windings of turbogenerators. The aim of the study is to see whether the analytical results of the test model can be found using a general purpose finite element package, thus indicating that the finite element model is accurate enough to treat real end winding problems. The real end winding problems cannot be solved analytically, as the geometry is far too complicated. The model consists of a polyphase coil set, containing 44 individual coils. This set generates a two pole mmf distribution on a cylindrical surface. The rotating field causes eddy currents to flow in the inner massive and conducting rotor. In the analytical solution a perfect sinusoidal mmf distribution is put forward. The finite element model contains 85824 tetrahedra and 16451 nodes. A complex single scalar potential representation is used in the nonconducting parts. The computation time required was 3 h and 42 min. The flux plots show that the field distribution is acceptable. Furthermore, the induced currents are calculated and compared with the values found from the analytical solution. The distribution of the eddy currents is very close to the distribution of the analytical solution. The most important results are the losses, both local and global. The value of the overall losses is less than 2% away from those of the analytical solution. Also the local distribution of the losses is at any given point less than 7% away from the analytical solution. The deviations of the results are acceptable and are partially due to the fact that the sinusoidal mmf distribution was not modeled perfectly in the finite element method.
Technique for determining the amount of hydrogen diffusing through a steel membrane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kardash, N.V.; Batrakov, V.V.
1995-07-01
Hydrogen diffusion through steel membranes still attracts much attention from scientists, and during recent years new results have been reported. Hydrogen diffusion is usually studied in the cell designed by M.A. Devanathan, but there are also other techniques for determining hydrogen permeability, namely: from the change in the solution volume in a horizontal or gas microburette; from the hydrogen ionization current; from the penetration current; and from the buckling of the cathode. The authors developed an analytical method using autocatalytic titration for determining the amount of hydrogen passed through a steel membrane. The method is based on permanganatometry, which is widely used in analytical chemistry.
Propfan experimental data analysis
NASA Technical Reports Server (NTRS)
Vernon, David F.; Page, Gregory S.; Welge, H. Robert
1984-01-01
A data reduction method, which is consistent with the performance prediction methods used for analysis of new aircraft designs, is defined and compared to the method currently used by NASA using data obtained from an Ames Research Center 11 foot transonic wind tunnel test. Pressure and flow visualization data from the Ames test for both the powered straight underwing nacelle and an unpowered contoured overwing nacelle installation is used to determine the flow phenomena present for a wing-mounted turboprop installation. The test data is compared to analytic methods, showing the analytic methods to be suitable for design and analysis of new configurations. The data analysis indicated that designs with zero interference drag levels are achievable with proper wing and nacelle tailoring. A new overwing contoured nacelle design and a modification to the wing leading edge extension for the current wind tunnel model design are evaluated. Hardware constraints of the current model parts prevent obtaining any significant performance improvement from the modified nacelle contouring. A new aspect ratio wing design for an up-outboard rotation turboprop installation is defined, and an advanced contoured nacelle is provided.
NASA Astrophysics Data System (ADS)
Ivanova, V.; Surleva, A.; Koleva, B.
2018-06-01
An ion chromatographic method for the determination of fluoride, chloride, nitrate and sulphate in untreated and treated drinking waters is described. An automated Metrohm 850 IC Professional system equipped with a conductivity detector and a Metrosep A Supp 7-250 (250 x 4 mm) column was used. The method was validated for simultaneous determination of all studied analytes, and the results showed that the validated method meets the requirements of the current water legislation. The main analytical characteristics were estimated for each of the studied analytes: limits of detection, limits of quantification, working and linear ranges, repeatability and intermediate precision, and recovery. The trueness of the method was assessed by analysis of a certified reference material for soft drinking water, and a recovery test was performed on spiked drinking water samples. The measurement uncertainty was estimated. The method was applied to the analysis of drinking waters before and after chlorination.
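Two of the validation figures mentioned, recovery from spiked samples and repeatability expressed as relative standard deviation, reduce to short calculations. The concentrations below are invented placeholders, not values from the study:

```python
import statistics

# Illustrative spiked-recovery check for one analyte (values are invented).
unspiked = [24.8, 25.1, 24.9]   # mg/L chloride found in the sample
spiked = [34.7, 35.2, 34.9]     # mg/L found after adding a 10.0 mg/L spike
added = 10.0

# Recovery: fraction of the known spike actually found, in percent
recovery = 100.0 * (statistics.mean(spiked) - statistics.mean(unspiked)) / added

# Repeatability: relative standard deviation (RSD) of replicate injections
replicates = [25.0, 24.9, 25.2, 24.8, 25.1]
rsd = 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

print(f"recovery: {recovery:.1f}%  RSD: {rsd:.2f}%")
```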
Contribution of Electrochemistry to the Biomedical and Pharmaceutical Analytical Sciences.
Kauffmann, Jean-Michel; Patris, Stephanie; Vandeput, Marie; Sarakbi, Ahmad; Sakira, Abdul Karim
2016-01-01
All analytical techniques have experienced major progress over the last ten years, and electroanalysis is part of this trend. The unique characteristics of the phenomena occurring at the electrode-solution interface, along with the variety of electrochemical methods currently available, allow for a broad spectrum of applications. Potentiometric, conductometric, voltammetric and amperometric methods are briefly reviewed with a critical view of the performance of the developed instrumentation, with special emphasis on pharmaceutical and biomedical applications.
A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.
Yang, Harry; Zhang, Jianchun
2015-01-01
The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics, such as accuracy and precision, are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity while taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, and thus have the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on the β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method.
It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure the safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
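As a rough sketch of the idea behind such a test, assuming normally distributed results with unknown bias and precision, one can Monte Carlo sample the standard generalized pivotal quantities for the bias and standard deviation, propagate them into a GPQ for the proportion of future results within a total-error limit, and accept the method only if the generalized lower confidence bound on that proportion clears a threshold. All settings below (the limit `lam`, the threshold `p0`, the significance level `alpha`, and the data) are illustrative, and this is not the authors' exact procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def gpq_total_error_test(y, true_value, lam, p0=0.90, alpha=0.05, n_mc=20000):
    """Sketch of a GPQ test that a high proportion of future results falls
    within +/- lam of the true value; lam, p0, alpha are illustrative."""
    n = len(y)
    delta_hat = np.mean(y) - true_value      # observed bias
    s = np.std(y, ddof=1)                    # observed precision

    # Monte Carlo draws of the pivotal quantities for sigma and bias:
    # (n-1)s^2/sigma^2 ~ chi2(n-1), and the mean pivot is standard normal
    u = rng.chisquare(n - 1, size=n_mc)
    z = rng.standard_normal(n_mc)
    r_sigma = s * np.sqrt((n - 1) / u)
    r_delta = delta_hat - z * r_sigma / np.sqrt(n)

    # GPQ for the proportion of results within the total-error limit
    r_p = (stats.norm.cdf((lam - r_delta) / r_sigma)
           - stats.norm.cdf((-lam - r_delta) / r_sigma))

    lower_bound = np.quantile(r_p, alpha)    # generalized lower confidence bound
    return lower_bound, bool(lower_bound >= p0)

# Illustrative validation run: 20 results around a true value of 100
y = rng.normal(loc=100.5, scale=1.0, size=20)
bound, accept = gpq_total_error_test(y, true_value=100.0, lam=4.0)
print(f"lower bound on within-limit proportion: {bound:.3f}, accept: {accept}")
```

Controlling the consumer's risk then corresponds to the alpha-quantile construction: a method whose true within-limit proportion is below `p0` is accepted with probability of roughly at most `alpha`.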
An improved 3D MoF method based on analytical partial derivatives
NASA Astrophysics Data System (ADS)
Chen, Xiang; Zhang, Xiong
2016-12-01
The MoF (Moment of Fluid) method is one of the most accurate approaches among various surface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate surface. Therefore, the partial derivatives of the objective function have to be involved during the iteration for efficiency and accuracy. However, to the best of our knowledge, the derivatives are currently estimated numerically by finite difference approximation, because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost, but also deteriorates the convergence rate and robustness of the iteration due to their numerical error. In this paper, the analytical first-order partial derivatives of the objective function are deduced for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency and robustness. Numerical studies show that by using the analytical derivatives the iterations converge in all mixed cells, with an efficiency improvement of 3 to 4 times.
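The cost argument, that finite-difference gradients multiply the number of objective evaluations inside the optimization loop, can be reproduced on any smooth test problem. The sketch below uses the Rosenbrock function purely as a stand-in objective (the actual MoF interface functional is not reproduced here) and compares BFGS with and without an exact gradient:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the MoF objective: the Rosenbrock function, chosen only
# to illustrate the analytic-vs-numerical-derivative trade-off.
def f(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad_f(x):
    # Exact partial derivatives, analogous to the paper's analytical gradient
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

x0 = np.array([-1.2, 1.0])

numeric = minimize(f, x0, method="BFGS")               # gradient by finite differences
analytic = minimize(f, x0, method="BFGS", jac=grad_f)  # exact gradient supplied

print("objective evaluations: numeric =", numeric.nfev, " analytic =", analytic.nfev)
```

The finite-difference run charges extra objective evaluations for every gradient estimate, so its `nfev` count is several times larger for the same minimizer, which mirrors the efficiency improvement reported above.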
Zhang, Lei; Yue, Hong-Shui; Ju, Ai-Chun; Ye, Zheng-Liang
2016-10-01
Currently, near infrared spectroscopy (NIRS) is considered an efficient tool for achieving process analytical technology (PAT) in the manufacture of traditional Chinese medicine (TCM) products. In this article, the NIRS-based process analytical system for the production of salvianolic acid for injection is introduced. The design of the process analytical system is described in detail, including the selection of monitored processes and testing mode, and potential risks that should be avoided. Moreover, the development of related technologies is also presented, covering the establishment of monitoring methods for the elution steps of the polyamide resin and macroporous resin chromatography processes, as well as the rapid analysis method for finished products. Based on the authors' experience of research and work, several issues in the application of NIRS to process monitoring and control in TCM production are then raised, and some potential solutions are discussed. The issues include building the technical team for the process analytical system, the design of the process analytical system in the manufacture of TCM products, standardization of the NIRS-based analytical methods, and improving the management of the process analytical system. Finally, the prospects for the application of NIRS in the TCM industry are put forward. Copyright© by the Chinese Pharmaceutical Association.
Analytical methods for quantitation of prenylated flavonoids from hops.
Nikolić, Dejan; van Breemen, Richard B
2013-01-01
The female flowers of hops (Humulus lupulus L.) are used as a flavoring agent in the brewing industry. There is growing interest in possible health benefits of hops, particularly as estrogenic and chemopreventive agents. Among the possible active constituents, most of the attention has focused on prenylated flavonoids, which can chemically be classified as prenylated chalcones and prenylated flavanones. Among chalcones, xanthohumol (XN) and desmethylxanthohumol (DMX) have been the most studied, while among flavanones, 8-prenylnaringenin (8-PN) and 6-prenylnaringenin (6-PN) have received the most attention. Because of the interest in medicinal properties of prenylated flavonoids, there is demand for accurate, reproducible and sensitive analytical methods to quantify these compounds in various matrices. Such methods are needed, for example, for quality control and standardization of hop extracts, measurement of the content of prenylated flavonoids in beer, and to determine pharmacokinetic properties of prenylated flavonoids in animals and humans. This review summarizes currently available analytical methods for quantitative analysis of the major prenylated flavonoids, with an emphasis on the LC-MS and LC-MS-MS methods and their recent applications to biomedical research on hops. This review covers all methods in which prenylated flavonoids have been measured, either as the primary analytes or as a part of a larger group of analytes. The review also discusses methodological issues relating to the quantitative analysis of these compounds regardless of the chosen analytical approach.
METHOD DEVELOPMENT FOR THE DETERMINATION OF FORMALDEHYDE IN SAMPLES OF ENVIRONMENTAL ORIGIN
An analytical method was developed for the determination of formaldehyde in samples of environmental origin. After a review of the current literature, five candidate methods involving chemical derivatization were chosen for evaluation. The five derivatization reagents studied wer...
Glycoprotein Enrichment Analytical Techniques: Advantages and Disadvantages.
Zhu, R; Zacharias, L; Wooding, K M; Peng, W; Mechref, Y
2017-01-01
Protein glycosylation is one of the most important posttranslational modifications. Numerous biological functions are related to protein glycosylation. However, analytical challenges remain in the glycoprotein analysis. To overcome the challenges associated with glycoprotein analysis, many analytical techniques were developed in recent years. Enrichment methods were used to improve the sensitivity of detection, while HPLC and mass spectrometry methods were developed to facilitate the separation of glycopeptides/proteins and enhance detection, respectively. Fragmentation techniques applied in modern mass spectrometers allow the structural interpretation of glycopeptides/proteins, while automated software tools started replacing manual processing to improve the reliability and throughput of the analysis. In this chapter, the current methodologies of glycoprotein analysis were discussed. Multiple analytical techniques are compared, and advantages and disadvantages of each technique are highlighted. © 2017 Elsevier Inc. All rights reserved.
CHAPTER 7: Glycoprotein Enrichment Analytical Techniques: Advantages and Disadvantages
Zhu, Rui; Zacharias, Lauren; Wooding, Kerry M.; Peng, Wenjing; Mechref, Yehia
2017-01-01
Protein glycosylation is one of the most important posttranslational modifications. Numerous biological functions are related to protein glycosylation. However, analytical challenges remain in the glycoprotein analysis. To overcome the challenges associated with glycoprotein analysis, many analytical techniques were developed in recent years. Enrichment methods were used to improve the sensitivity of detection while HPLC and mass spectrometry methods were developed to facilitate the separation of glycopeptides/proteins and enhance detection, respectively. Fragmentation techniques applied in modern mass spectrometers allow the structural interpretation of glycopeptides/proteins while automated software tools started replacing manual processing to improve the reliability and throughput of the analysis. In this chapter, the current methodologies of glycoprotein analysis were discussed. Multiple analytical techniques are compared, and advantages and disadvantages of each technique are highlighted. PMID:28109440
Perspectives on Using Video Recordings in Conversation Analytical Studies on Learning in Interaction
ERIC Educational Resources Information Center
Rusk, Fredrik; Pörn, Michaela; Sahlström, Fritjof; Slotte-Lüttge, Anna
2015-01-01
Video is currently used in many studies to document the interaction in conversation analytical (CA) studies on learning. The discussion on the method used in these studies has primarily focused on the analysis or the data construction, whereas the relation between data construction and analysis is rarely brought to attention. The aim of this…
Simulation of a model nanopore sensor: Ion competition underlies device behavior.
Mádai, Eszter; Valiskó, Mónika; Dallos, András; Boda, Dezső
2017-12-28
We study a model nanopore sensor with which a very low concentration of analyte molecules can be detected on the basis of the selective binding of the analyte molecules to the binding sites on the pore wall. The bound analyte ions partially replace the current-carrier cations in a thermodynamic competition. This competition depends both on the properties of the nanopore and the concentrations of the competing ions (through their chemical potentials). The output signal given by the device is the current reduction caused by the presence of the analyte ions. The concentration of the analyte ions can be determined through calibration curves. We model the binding site with the square-well potential and the electrolyte as charged hard spheres in an implicit background solvent. We study the system with a hybrid method in which we compute the ion flux with the Nernst-Planck (NP) equation coupled with the Local Equilibrium Monte Carlo (LEMC) simulation technique. The resulting NP+LEMC method is able to handle both strong ionic correlations inside the pore (including finite size of ions) and bulk concentrations as low as micromolar. We analyze the effect of bulk ion concentrations, pore parameters, binding site parameters, electrolyte properties, and voltage on the behavior of the device.
Simulation of a model nanopore sensor: Ion competition underlies device behavior
NASA Astrophysics Data System (ADS)
Mádai, Eszter; Valiskó, Mónika; Dallos, András; Boda, Dezső
2017-12-01
We study a model nanopore sensor with which a very low concentration of analyte molecules can be detected on the basis of the selective binding of the analyte molecules to the binding sites on the pore wall. The bound analyte ions partially replace the current-carrier cations in a thermodynamic competition. This competition depends both on the properties of the nanopore and the concentrations of the competing ions (through their chemical potentials). The output signal given by the device is the current reduction caused by the presence of the analyte ions. The concentration of the analyte ions can be determined through calibration curves. We model the binding site with the square-well potential and the electrolyte as charged hard spheres in an implicit background solvent. We study the system with a hybrid method in which we compute the ion flux with the Nernst-Planck (NP) equation coupled with the Local Equilibrium Monte Carlo (LEMC) simulation technique. The resulting NP+LEMC method is able to handle both strong ionic correlations inside the pore (including finite size of ions) and bulk concentrations as low as micromolar. We analyze the effect of bulk ion concentrations, pore parameters, binding site parameters, electrolyte properties, and voltage on the behavior of the device.
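The ion-competition picture above can be caricatured with a competitive-Langmuir occupancy model: analyte and carrier ions compete for the binding site through concentration-weighted Boltzmann factors, and bound analyte suppresses the carrier current. The binding energies and concentrations below are invented, and this algebraic toy stands in for, but does not reproduce, the NP+LEMC transport calculation:

```python
import math

# Energies in units of kT; both depths are illustrative assumptions.
KT = 1.0
EPS_ANALYTE = -10.0   # deep binding site for the analyte
EPS_CARRIER = -2.0    # much weaker binding for the current-carrier cation

def site_occupancy_by_analyte(c_analyte, c_carrier):
    """Fraction of binding sites holding an analyte ion (competitive Langmuir)."""
    a = c_analyte * math.exp(-EPS_ANALYTE / KT)
    b = c_carrier * math.exp(-EPS_CARRIER / KT)
    return a / (1.0 + a + b)

def relative_current(c_analyte, c_carrier=0.1):
    """Current relative to the analyte-free pore; bound analyte blocks carriers."""
    theta = site_occupancy_by_analyte(c_analyte, c_carrier)
    theta0 = site_occupancy_by_analyte(0.0, c_carrier)
    return (1.0 - theta) / (1.0 - theta0)

# Calibration curve: the current reduction grows with analyte concentration
for c in (1e-6, 1e-5, 1e-4):
    print(f"{c:.0e} M -> I/I0 = {relative_current(c):.3f}")
```

Even at micromolar analyte levels the deep binding site produces a measurable current reduction in this toy, which is the calibration-curve behavior the abstract describes.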
Vincent, Ursula; Serano, Federica; von Holst, Christoph
2017-08-01
Carotenoids are used in animal nutrition mainly as sensory additives that favourably affect the colour of fish, birds and food of animal origin. Various analytical methods exist for their quantification in compound feed, reflecting the different physico-chemical characteristics of the carotenoid and the corresponding feed additives. They may be natural products or specific formulations containing the target carotenoids produced by chemical synthesis. In this study a multi-analyte method was developed that can be applied to the determination of all 10 carotenoids currently authorised within the European Union for compound feedingstuffs. The method functions regardless of whether the carotenoids have been added to the compound feed via natural products or specific formulations. It is comprised of three steps: (1) digestion of the feed sample with an enzyme; (2) pressurised liquid extraction; and (3) quantification of the analytes by reversed-phase HPLC coupled to a photodiode array detector in the visible range. The method was single-laboratory validated for poultry and fish feed covering a mass fraction range of the target analyte from 2.5 to 300 mg kg-1. The following method performance characteristics were obtained: the recovery rate varied from 82% to 129% and precision expressed as the relative standard deviation of intermediate precision varied from 1.6% to 15%. Based on the acceptable performance obtained in the validation study, the multi-analyte method is considered fit for the intended purpose.
Mirski, Tomasz; Bartoszcze, Michał; Bielawska-Drózd, Agata; Cieślik, Piotr; Michalski, Aleksander J; Niemcewicz, Marcin; Kocik, Janusz; Chomiczewski, Krzysztof
2014-01-01
Modern bioterrorism threats create the need for methods for the rapid and accurate identification of dangerous biological agents. Currently, there are many types of methods used in this field of study that are based on immunological or genetic techniques, or constitute a combination of both (immuno-genetic). There are also methods that have been developed on the basis of the physical and chemical properties of the analytes. Each group of these analytical assays can be further divided into conventional methods (e.g. simple antigen-antibody reactions, classical PCR, real-time PCR) and modern technologies (e.g. microarray technology, aptamers, phosphors, etc.). Nanodiagnostics constitute another group of methods, utilizing objects at the nanoscale (below 100 nm). There are also integrated and automated diagnostic systems, which combine different methods and allow simultaneous sampling, extraction of genetic material, and detection and identification of the analyte using genetic as well as immunological techniques.
Stochastic sensing through covalent interactions
Bayley, Hagan; Shin, Seong-Ho; Luchian, Tudor; Cheley, Stephen
2013-03-26
A system and method for stochastic sensing in which the analyte covalently bonds to the sensor element or an adaptor element. If such bonding is irreversible, the bond may be broken by a chemical reagent. The sensor element may be a protein, such as the engineered P-SH type or αHL protein pore. The analyte may be any reactive analyte, including chemical weapons, environmental toxins and pharmaceuticals. The analyte covalently bonds to the sensor element to produce a detectable signal. Possible signals include a change in electrical current, a change in force, and a change in fluorescence. Detection of the signal allows identification of the analyte and determination of its concentration in a sample solution. Multiple analytes present in the same solution may be detected.
Current Status of Mycotoxin Analysis: A Critical Review.
Shephard, Gordon S
2016-07-01
It is over 50 years since the discovery of aflatoxins focused the attention of food safety specialists on fungal toxins in the feed and food supply. Since then, analysis of this important group of natural contaminants has advanced in parallel with general developments in analytical science, and current MS methods are capable of simultaneously analyzing hundreds of compounds, including mycotoxins, pesticides, and drugs. This profusion of data may advance our understanding of human exposure, yet constitutes an interpretive challenge to toxicologists and food safety regulators. Despite these advances in analytical science, the basic problem of the extreme heterogeneity of mycotoxin contamination, although now well understood, cannot be circumvented. The real health challenges posed by mycotoxin exposure occur in the developing world, especially among small-scale and subsistence farmers. Addressing these problems requires innovative approaches in which analytical science must also play a role in providing suitable out-of-laboratory analytical techniques.
METHOD DEVELOPMENT, EVALUATION, REFINEMENT, AND ANALYSIS FOR FIELD STUDIES
Manufacturers routinely introduce new pesticides into the marketplace and discontinue manufacturing older pesticides that may be more toxic to humans. Analytical methods and environmental data are needed for current use residential pesticides (e.g., pyrethrins, synthetic pyrethr...
ERIC Educational Resources Information Center
Smith, Walter T., Jr.; Patterson, John M.
1980-01-01
Discusses analytical methods selected from current research articles. Groups information by topics of general interest, including acids, aldehydes and ketones, nitro compounds, phenols, and thiols. Cites 97 references. (CS)
Modern analytical methods for the detection of food fraud and adulteration by food category.
Hong, Eunyoung; Lee, Sang Yoo; Jeong, Jae Yun; Park, Jung Min; Kim, Byung Hee; Kwon, Kisung; Chun, Hyang Sook
2017-09-01
This review provides current information on the analytical methods used to identify food adulteration in the six most adulterated food categories: animal origin and seafood, oils and fats, beverages, spices and sweet foods (e.g. honey), grain-based food, and others (organic food and dietary supplements). The analytical techniques (both conventional and emerging) used to identify adulteration in these six food categories involve sensory, physicochemical, DNA-based, chromatographic and spectroscopic methods, and have been combined with chemometrics, making these techniques more convenient and effective for the analysis of a broad variety of food products. Despite recent advances, the need remains for suitably sensitive and widely applicable methodologies that encompass all the various aspects of food adulteration. © 2017 Society of Chemical Industry.
NASA Technical Reports Server (NTRS)
Hylton, L. D.; Mihelc, M. S.; Turner, E. R.; Nealy, D. A.; York, R. E.
1983-01-01
Three airfoil data sets were selected for use in evaluating currently available analytical models for predicting airfoil surface heat transfer distributions in a 2-D flow field. Two additional airfoils, representative of highly loaded, low solidity airfoils currently being designed, were selected for cascade testing at simulated engine conditions. Some 2-D analytical methods were examined and a version of the STAN5 boundary layer code was chosen for modification. The final form of the method utilized a time dependent, transonic inviscid cascade code coupled to a modified version of the STAN5 boundary layer code featuring zero order turbulence modeling. The boundary layer code is structured to accommodate a full spectrum of empirical correlations addressing the coupled influences of pressure gradient, airfoil curvature, and free-stream turbulence on airfoil surface heat transfer distribution and boundary layer transitional behavior. Comparison of predictions made with the model to the data base indicates a significant improvement in predictive capability.
NASA Astrophysics Data System (ADS)
Hylton, L. D.; Mihelc, M. S.; Turner, E. R.; Nealy, D. A.; York, R. E.
1983-05-01
Three airfoil data sets were selected for use in evaluating currently available analytical models for predicting airfoil surface heat transfer distributions in a 2-D flow field. Two additional airfoils, representative of highly loaded, low solidity airfoils currently being designed, were selected for cascade testing at simulated engine conditions. Some 2-D analytical methods were examined and a version of the STAN5 boundary layer code was chosen for modification. The final form of the method utilized a time dependent, transonic inviscid cascade code coupled to a modified version of the STAN5 boundary layer code featuring zero order turbulence modeling. The boundary layer code is structured to accommodate a full spectrum of empirical correlations addressing the coupled influences of pressure gradient, airfoil curvature, and free-stream turbulence on airfoil surface heat transfer distribution and boundary layer transitional behavior. Comparison of predictions made with the model to the data base indicates a significant improvement in predictive capability.
Detection methods and performance criteria for genetically modified organisms.
Bertheau, Yves; Diolez, Annick; Kobilinsky, André; Magin, Kimberly
2002-01-01
Detection methods for genetically modified organisms (GMOs) are necessary for many applications, from seed purity assessment to compliance of food labeling in several countries. Numerous analytical methods are currently used or under development to support these needs. The currently used methods are bioassays and protein- and DNA-based detection protocols. To avoid discrepancy of results between such largely different methods and, for instance, the potential resulting legal actions, compatibility of the methods is urgently needed. Performance criteria of methods allow evaluation against a common standard. The more common performance criteria for detection methods are precision, accuracy, sensitivity, and specificity, which together specifically address other terms used to describe the performance of a method, such as applicability, selectivity, calibration, trueness, precision, recovery, operating range, limit of quantitation, limit of detection, and ruggedness. Performance criteria should provide objective tools to accept or reject specific methods, to validate them, to ensure compatibility between validated methods, and be used on a routine basis to reject data outside an acceptable range of variability. When selecting a method of detection, it is also important to consider its applicability, its field of applications, and its limitations, by including factors such as its ability to detect the target analyte in a given matrix, the duration of the analyses, its cost effectiveness, and the necessary sample sizes for testing. Thus, the current GMO detection methods should be evaluated against a common set of performance criteria.
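Several of the criteria listed above, notably limit of detection and limit of quantitation, have conventional calibration-based estimators (the common 3.3σ/slope and 10σ/slope rules). The calibration data below are invented for illustration; they do not come from any GMO assay:

```python
import numpy as np

# Illustrative LOD/LOQ estimate from a calibration line, using the common
# 3.3*sigma/slope and 10*sigma/slope conventions. All numbers are invented.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])          # % GM content of standards
signal = np.array([0.02, 0.54, 1.01, 2.05, 3.98])   # normalized assay signal

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = np.std(residuals, ddof=2)    # residual standard deviation of the fit

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"LOD = {lod:.3f}%  LOQ = {loq:.3f}%")
```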
ERIC Educational Resources Information Center
Nivens, Delana A.; Padgett, Clifford W.; Chase, Jeffery M.; Verges, Katie J.; Jamieson, Deborah S.
2010-01-01
Case studies and current literature are combined with spectroscopic analysis to provide a unique chemistry experience for art history students and to provide a unique inquiry-based laboratory experiment for analytical chemistry students. The XRF analysis method was used to demonstrate to nonscience majors (art history students) a powerful…
Validating Analytical Protocols to Determine Selected Pesticides and PCBs Using Routine Samples.
Pindado Jiménez, Oscar; García Alonso, Susana; Pérez Pastor, Rosa María
2017-01-01
This study aims to provide recommendations concerning the validation of analytical protocols using routine samples. It is intended as a case study on how to validate analytical methods in different environmental matrices. In order to analyze the selected compounds (pesticides and polychlorinated biphenyls) in two different environmental matrices, the current work developed and validated two analytical procedures by GC-MS. A description is given of the validation of the two protocols through the analysis of more than 30 samples of water and sediments collected over nine months. The present work also addresses the uncertainty associated with both analytical protocols. In detail, the uncertainty for the water samples was estimated through a conventional approach, whereas for the sediment matrices the estimation of proportional/constant bias was also included because of their inhomogeneity. Results for the sediment matrix are reliable, showing a range of 25-35% analytical variability associated with intermediate conditions. The analytical methodology for the water matrix determines the selected compounds with acceptable recoveries, and the combined uncertainty ranges between 20 and 30%. Analyzing routine samples is rarely used to assess the trueness of novel analytical methods, and until now this approach had not been applied to organochlorine compounds in environmental matrices.
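Combined uncertainties like the 20-30% quoted above for the water matrix are typically built by combining relative uncertainty components in quadrature. The sketch below shows that standard bottom-up combination; the function name and the component values are purely illustrative assumptions, not figures from the study.

```python
import math

def combined_uncertainty(rsd_precision, bias_rel, u_bias_rel, k=2):
    """Combine relative uncertainty components in quadrature:
    intermediate-precision RSD, relative bias, and the uncertainty
    of the bias estimate. Returns (u_c, expanded uncertainty k * u_c)."""
    u_c = math.sqrt(rsd_precision**2 + bias_rel**2 + u_bias_rel**2)
    return u_c, k * u_c

# Illustrative components: 12% precision, 8% bias, 5% bias uncertainty
u_c, U = combined_uncertainty(0.12, 0.08, 0.05)
print(f"u_c = {u_c:.1%}, expanded U (k=2) = {U:.1%}")
```

With components of this size the combined relative uncertainty lands in the 15-30% range, consistent with the figures reported in the abstract.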
NBOMe: new potent hallucinogens--pharmacology, analytical methods, toxicities, fatalities: a review.
Kyriakou, C; Marinelli, E; Frati, P; Santurro, A; Afxentiou, M; Zaami, S; Busardo, F P
2015-09-01
NBOMe is a class of emerging new psychoactive substances that has recently gained prominence in the drug abuse market. NBOMes are N-2-methoxybenzyl-substituted 2C-class hallucinogens, currently marketed online as "research chemicals" under various names: N-bomb, Smiles, Solaris, and Cimbi. This article reviews the available literature on the pharmacology of NBOMes; the analytical methods currently used for their detection and quantification in biological matrices and blotters; and intoxication cases and NBOMe-related fatalities. Relevant scientific articles were identified from Medline, Cochrane Central, Scopus, Web of Science, Science Direct, EMBASE, and Google Scholar through June 2015 using the following keywords: "NBOMe", "Nbomb", "Smiles", "intoxication", "toxicity", "fatalities", "death", "pharmacology", "5-HT2A receptor", "analysis" and "analytical methods". The main keyword "NBOMe" was searched individually in association with each of the others. The review of the literature identified 43 citations on pharmacology, analytical methods, and NBOMe-related toxicities and fatalities. The high potency of NBOMes (potent agonists of the 5-HT2A receptor) has led to several severe intoxications, overdoses, and traumatic fatalities; thus, their increasing use raises significant public health concerns. Moreover, owing to their high potency and ease of synthesis, it is likely that their recreational use will become more widespread in the future. The publication of new data, case reports, and evaluations of NBOMe metabolites is necessary in order to improve knowledge and awareness within the forensic community.
Briot, T; Robelet, A; Morin, N; Riou, J; Lelièvre, B; Lebelle-Dehaut, A-V
2016-07-01
In this study, a novel analytical method was developed to quantify a prion-inactivating detergent in rinsing waters coming from the washer-disinfector of a hospital sterilization unit. The final aim was to obtain an easy and functional method for routine hospital use that does not depend on the cleaning-product manufacturer's services. An ICP-MS method based on the determination of potassium in the washer-disinfector's rinsing waters was developed. Potassium hydroxide is present in the composition of the three prion-inactivating detergents currently on the French market. The detergent used in this study was Actanios LDI(®) (Anios laboratories). A Passing and Bablok regression was used to compare concentrations measured with this developed method and with the manufacturer's HPLC-UV method. According to the results obtained, the developed method is easy to use in a routine hospital process. The Passing and Bablok regression showed no statistical difference between the two analytical methods during the second rinsing step. Both methods were also linear on the third rinsing step, with a 1.5 ppm difference between the concentrations measured by each method. This study shows that the ICP-MS method developed is nonspecific for the detergent but specific for the potassium element, which is present in all prion-inactivating detergents currently on the French market. The method should therefore work for any prion-inactivating detergent containing potassium, provided the sensitivity of the method is sufficient when the potassium concentration in the detergent formulation is very low. Copyright © 2016. Published by Elsevier Masson SAS.
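The Passing and Bablok regression used for the method comparison above is a non-parametric technique: the slope is a shifted median of all pairwise slopes and the intercept is the median of the residuals. The sketch below illustrates the point estimates only; a full implementation would also report the confidence intervals actually used to judge agreement, and this is not the authors' code.

```python
import numpy as np

def passing_bablok(x, y):
    """Passing-Bablok regression: slope = shifted median of all pairwise
    slopes; intercept = median of y - slope * x. Pairs with dx == 0 and
    slopes equal to -1 are skipped, as in the original procedure."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = []
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[j] - x[i], y[j] - y[i]
            if dx == 0:
                continue  # simplification: ties get special handling in full versions
            s = dy / dx
            if s != -1:
                slopes.append(s)
    slopes = np.sort(np.array(slopes))
    big_n = len(slopes)
    k = int(np.sum(slopes < -1))  # offset making the median estimator unbiased
    if big_n % 2:
        slope = slopes[(big_n - 1) // 2 + k]
    else:
        slope = 0.5 * (slopes[big_n // 2 - 1 + k] + slopes[big_n // 2 + k])
    intercept = np.median(y - slope * x)
    return float(slope), float(intercept)
```

For two concordant methods the estimates should lie near slope 1 and intercept 0; "no statistical difference" corresponds to those values falling inside the confidence intervals of the estimates.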
Measuring solids concentration in stormwater runoff: comparison of analytical methods.
Clark, Shirley E; Siu, Christina Y S
2008-01-15
Stormwater suspended solids typically are quantified using one of two methods: aliquot/subsample analysis (total suspended solids [TSS]) or whole-sample analysis (suspended solids concentration [SSC]). Interproject comparisons are difficult because of inconsistencies in the methods and in their application. To address this concern, the suspended solids content has been measured using both methodologies in many current projects, but the question remains about how to compare these values with historical water-quality data where the analytical methodology is unknown. This research was undertaken to determine the effect of analytical methodology on the relationship between these two methods of determination of the suspended solids concentration, including the effect of aliquot selection/collection method and of particle size distribution (PSD). The results showed that SSC was best able to represent the known sample concentration and that the results were independent of the sample's PSD. Correlations between the results and the known sample concentration could be established for TSS samples, but they were highly dependent on the sample's PSD and on the aliquot collection technique. These results emphasize the need to report not only the analytical method but also the particle size information on the solids in stormwater runoff.
NASA Technical Reports Server (NTRS)
Brinson, H. F.
1985-01-01
The utilization of adhesive bonding for composite structures is briefly assessed. The need for a method to determine damage initiation and propagation for such joints is outlined. Methods currently in use to analyze both adhesive joints and fiber-reinforced plastics are described, and it is noted that all methods require as input the mechanical properties of the polymeric adhesive and composite matrix material. The mechanical properties of polymers are viscoelastic and sensitive to environmental effects. A method to analytically characterize environmentally dependent linear and nonlinear viscoelastic properties is given. The methodology can be used to extrapolate short-term data to long-term design lifetimes; that is, it can be used for long-term durability predictions. Experimental results for neat adhesive resins, polymers used as composite matrices, and unidirectional composite laminates are given. The data are fitted well by the analytical durability methodology. Finally, suggestions are outlined for the development of an analytical methodology for the durability prediction of adhesively bonded composite structures.
Wroble, Julie; Frederick, Timothy; Frame, Alicia; Vallero, Daniel
2017-01-01
Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. Environmental Protection Agency (EPA)'s Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than other methods. Samples were collected using both traditional discrete ("grab") samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than discrete samples.
PMID:28759607
Development of Mass Spectrometric Ionization Methods for Fullerenes and Fullerene Derivatives
Currently investigations into the environmental behavior of fullerenes and fullerene derivatives is hampered by the lack of well characterized standards and by the lack of readily available quantitative analytical methods. Reported herein are investigations into the utility of ma...
Detection of heavy metal by paper-based microfluidics.
Lin, Yang; Gritsenko, Dmitry; Feng, Shaolong; Teh, Yi Chen; Lu, Xiaonan; Xu, Jie
2016-09-15
Heavy metal pollution poses a serious threat to the environment and public health worldwide. Current methods for the detection of heavy metals require expensive instrumentation and laborious operation, which can only be accomplished in centralized laboratories. Various microfluidic paper-based analytical devices have been developed recently as simple, cheap and disposable alternatives to conventional ones for on-site detection of heavy metals. In this review, we first summarize the current development of paper-based analytical devices and discuss the selection of paper substrates, methods of device fabrication, and relevant theories in these devices. We then compare and categorize recent reports on the detection of heavy metals using paper-based microfluidic devices on the basis of various detection mechanisms, such as colorimetric, fluorescent, and electrochemical methods. Finally, future developments and trends in this field are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
Quality of Big Data in Healthcare
Sukumar, Sreenivas R.; Ramachandran, Natarajan; Ferrell, Regina Kay
2015-01-01
The current trend in Big Data Analytics and in particular Health information technology is towards building sophisticated models, methods and tools for business, operational and clinical intelligence, but the critical issue of data quality required for these models is not getting the attention it deserves. The objective of the paper is to highlight the issues of data quality in the context of Big Data Healthcare Analytics.
Two-dimensional free-surface flow under gravity: A new benchmark case for SPH method
NASA Astrophysics Data System (ADS)
Wu, J. Z.; Fang, L.
2018-02-01
Currently there are few free-surface benchmark cases with analytical results for the Smoothed Particle Hydrodynamics (SPH) method. In the present contribution we introduce a two-dimensional free-surface flow under gravity, and obtain an analytical expression for the surface height difference and a theoretical estimate of the surface fractal dimension. These results are preliminarily validated and supported by SPH calculations.
Lim, Wei Yin; Goh, Boon Tong; Khor, Sook Mei
2017-08-15
Clinicians, working in the health-care diagnostic systems of developing countries, currently face the challenges of rising costs, increased number of patient visits, and limited resources. A significant trend is using low-cost substrates to develop microfluidic devices for diagnostic purposes. Various fabrication techniques, materials, and detection methods have been explored to develop these devices. Microfluidic paper-based analytical devices (μPADs) have gained attention for sensing multiplex analytes, confirming diagnostic test results, rapid sample analysis, and reducing the volume of samples and analytical reagents. μPADs, which can provide accurate and reliable direct measurement without sample pretreatment, can reduce patient medical burden and yield rapid test results, aiding physicians in choosing appropriate treatment. The objectives of this review are to provide an overview of the strategies used for developing paper-based sensors with enhanced analytical performances and to discuss the current challenges, limitations, advantages, disadvantages, and future prospects of paper-based microfluidic platforms in clinical diagnostics. μPADs, with validated and justified analytical performances, can potentially improve the quality of life by providing inexpensive, rapid, portable, biodegradable, and reliable diagnostics. Copyright © 2017 Elsevier B.V. All rights reserved.
The impact of capillary backpressure on spontaneous counter-current imbibition in porous media
NASA Astrophysics Data System (ADS)
Foley, Amir Y.; Nooruddin, Hasan A.; Blunt, Martin J.
2017-09-01
We investigate the impact of capillary backpressure on spontaneous counter-current imbibition. For such displacements in strongly water-wet systems, the non-wetting phase is forced out through the inlet boundary as the wetting phase imbibes into the rock, creating a finite capillary backpressure. Under the assumption that capillary backpressure depends on the water saturation applied at the inlet boundary of the porous medium, its impact is determined using the continuum modelling approach by varying the imposed inlet saturation in the analytical solution. We present analytical solutions for the one-dimensional incompressible horizontal displacement of a non-wetting phase by a wetting phase in a porous medium. There exists an inlet saturation value above which any change in capillary backpressure has a negligible impact on the solutions. Above this threshold value, imbibition rates and front positions are largely invariant. A method for identifying this inlet saturation is proposed using an analytical procedure and we explore how varying multiphase flow properties affects the analytical solutions and this threshold saturation. We show the value of this analytical approach through the analysis of previously published experimental data.
Analytical ground state for the Jaynes-Cummings model with ultrastrong coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Yuanwei; Institute of Theoretical Physics, Shanxi University, Taiyuan 030006; Chen Gang
2011-06-15
We present a generalized variational method to analytically obtain the ground-state properties of the Jaynes-Cummings model with ultrastrong coupling. An explicit expression for the ground-state energy, which agrees well with numerical simulation over a wide range of the experimental parameters, is given. In particular, the introduced method can successfully solve the Jaynes-Cummings model with positive detuning (the atomic resonant level larger than the photon frequency), which cannot be treated in the adiabatic approximation or the generalized rotating-wave approximation. Finally, we also demonstrate analytically how to control the mean photon number by means of the current experimental parameters, including the photon frequency, the coupling strength, and especially the atomic resonant level.
NASA Astrophysics Data System (ADS)
Schlager, Kenneth J.; Ruchti, Timothy L.
1995-04-01
TAMM (Transcutaneous Analyte Measuring Method) is a near-infrared spectroscopic technique for the noninvasive measurement of human blood chemistry. A near-infrared indium gallium arsenide (InGaAs) photodiode array spectrometer has been developed and tested on over 1,000 patients as part of an SBIR program sponsored by the Naval Medical Research and Development Command. Nine blood analytes have been measured and evaluated during pre-clinical testing: sodium, chloride, calcium, potassium, bicarbonate, BUN, glucose, hematocrit and hemoglobin. A reflective rather than a transmissive approach to measurement was taken to avoid variations resulting from skin color and sensor positioning. The current status of the instrumentation, neural network pattern recognition algorithms and test results are discussed.
NASA Astrophysics Data System (ADS)
Bervillier, C.; Boisseau, B.; Giacomini, H.
2008-02-01
The relation between the Wilson-Polchinski and the Litim optimized ERGEs in the local potential approximation is studied with high accuracy using two different analytical approaches based on a field expansion: a recently proposed genuine analytical approximation scheme for two-point boundary value problems of ordinary differential equations, and a new one based on approximating the solution by generalized hypergeometric functions. A comparison with the numerical results obtained with the shooting method is made. A similar accuracy is reached in each case. Both methods appear to be more efficient than the usual field expansions frequently used in current studies of ERGEs (in particular for the Wilson-Polchinski case, in the study of which they fail).
Annual banned-substance review: Analytical approaches in human sports drug testing.
Thevis, Mario; Kuuranne, Tiia; Geyer, Hans
2018-01-01
Several high-profile revelations concerning anti-doping rule violations over the past 12 months have outlined the importance of tackling prevailing challenges and reducing the limitations of the current anti-doping system. At this time, the necessity to enhance, expand, and improve analytical test methods in response to the substances outlined in the World Anti-Doping Agency's (WADA) Prohibited List represents an increasingly crucial task for modern sports drug-testing programs. The ability to improve analytical testing methods often relies on the expedient application of novel information regarding superior target analytes for sports drug-testing assays, drug elimination profiles, alternative test matrices, together with recent advances in instrumental developments. This annual banned-substance review evaluates literature published between October 2016 and September 2017 offering an in-depth evaluation of developments in these arenas and their potential application to substances reported in WADA's 2017 Prohibited List. Copyright © 2017 John Wiley & Sons, Ltd.
The science of visual analysis at extreme scale
NASA Astrophysics Data System (ADS)
Nowell, Lucy T.
2011-01-01
Driven by market forces and spanning the full spectrum of computational devices, computer architectures are changing in ways that present tremendous opportunities and challenges for data analysis and visual analytic technologies. Leadership-class high performance computing systems will have as many as a million cores by 2020 and support 10 billion-way concurrency, while laptop computers are expected to have as many as 1,000 cores by 2015. At the same time, data of all types are increasing exponentially, and automated analytic methods are essential for all disciplines. Many existing analytic technologies do not scale to make full use of current platforms, and fewer still are likely to scale to the systems that will be operational by the end of this decade. Furthermore, on the new architectures and for data at extreme scales, validating the accuracy and effectiveness of analytic methods, including visual analysis, will be increasingly important.
Simonzadeh, Ninus
2009-04-01
Phospholipids, such as 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC), and 1,1',2,2'-tetramyristoyl cardiolipin, along with cholesterol, form liposomes in aqueous media and have been investigated at NeoPharm (Lake Bluff, IL) as drug-delivery systems. To accurately assess the effectiveness of various formulations involving the use of aforementioned phospholipids and cholesterol, their quantitative determination is essential. An isocratic high-performance liquid chromatographic method for the simultaneous determination of cholesterol, cardiolipin, and DOPC in various pharmaceutical formulations containing the active drug substance has consequently been developed and is presented here. The current method utilizes an ASTEC-diol analytical column and is shown to be stability-indicating and free from interference from any of the formulation excipients, such as sucrose, sodium chloride, and sodium lactate. The analytes are detected using an evaporative light scattering detector (Alltech or Polymer Laboratories). The quantitation of each lipid component is performed using non-linear regression analysis. The retention characteristics of the analytes are examined as a function of eluent composition (e.g., pH, salt content, organic to aqueous phase ratio) and column temperature. The method was validated and was found to be sensitive, specific, rugged, and cost-effective. The current method provides enhanced chromatographic separation for lipid components as well as degradation products as compared to similar methods reported in the literature. It is also inherently simpler than other similar methods reported in the literature that typically use complex gradient elution.
Study designs appropriate for the workplace.
Hogue, C J
1986-01-01
Carlo and Hearn have called for "refinement of old [epidemiologic] methods and an ongoing evaluation of where methods fit in the overall scheme as we address the multiple complexities of reproductive hazard assessment." This review is an attempt to bring together the current state-of-the-art methods for problem definition and hypothesis testing available to the occupational epidemiologist. For problem definition, meta-analysis can be utilized to narrow the field of potential causal hypotheses. Passive and active surveillance may further refine issues for analytic research. Within analytic epidemiology, several methods may be appropriate for the workplace setting. Those discussed here may be used to estimate the risk ratio in either a fixed or dynamic population.
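For a fixed cohort, the risk ratio mentioned above can be estimated directly from a 2x2 exposure-outcome table. A minimal sketch with a Wald-type confidence interval follows; the counts are hypothetical, not data from the review.

```python
import math

def risk_ratio(a, b, c, d, z=1.96):
    """Risk ratio for a cohort 2x2 table.
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    # Wald standard error of log(RR)
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, (lo, hi)

# Hypothetical cohort: 20/100 exposed cases vs 10/100 unexposed cases
rr, (lo, hi) = risk_ratio(20, 80, 10, 90)
```

For a dynamic population the analogous estimate would be a rate ratio based on person-time rather than counts of persons at risk.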
McLain, B.J.
1993-01-01
Graphite furnace atomic absorption spectrophotometry is a sensitive, precise, and accurate method for the determination of chromium in natural water samples. The detection limit for this analytical method is 0.4 microg/L with a working linear limit of 25.0 microg/L. The precision at the detection limit ranges from 20 to 57 percent relative standard deviation (RSD), with an improvement to 4.6 percent RSD for concentrations greater than 3 microg/L. Accuracy of this method was determined for a variety of reference standards representative of the analytical range. The results were within the established standard deviations. Samples were spiked with known concentrations of chromium, with recoveries ranging from 84 to 122 percent. In addition, a comparison of data between graphite furnace atomic absorption spectrophotometry and direct-current plasma atomic emission spectrometry resulted in suitable agreement between the two methods, with an average deviation of +/- 2.0 microg/L throughout the analytical range.
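The precision and recovery figures above are standard summary statistics. A minimal sketch of how percent RSD and spike recovery are computed, with purely illustrative values rather than the study's data:

```python
import statistics

def percent_rsd(values):
    """Relative standard deviation (sample SD) as a percentage of the mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

def percent_recovery(spiked_result, unspiked_result, spike_added):
    """Spike recovery: percentage of the added analyte actually measured."""
    return 100 * (spiked_result - unspiked_result) / spike_added

# Illustrative replicate chromium results near 10 microg/L
print(percent_rsd([9.0, 10.0, 11.0]))
# Illustrative spike: native 2.0 microg/L, 10.0 microg/L added, 12.0 measured
print(percent_recovery(12.0, 2.0, 10.0))
```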
Modelling of nanoscale quantum tunnelling structures using algebraic topology method
NASA Astrophysics Data System (ADS)
Sankaran, Krishnaswamy; Sairam, B.
2018-05-01
We have modelled nanoscale quantum tunnelling structures using Algebraic Topology Method (ATM). The accuracy of ATM is compared to the analytical solution derived based on the wave nature of tunnelling electrons. ATM provides a versatile, fast, and simple model to simulate complex structures. We are currently expanding the method for modelling electrodynamic systems.
Determination of mycotoxins in foods: current state of analytical methods and limitations.
Köppen, Robert; Koch, Matthias; Siegel, David; Merkel, Stefan; Maul, Ronald; Nehls, Irene
2010-05-01
Mycotoxins are natural contaminants produced by a range of fungal species. Their common occurrence in food and feed poses a threat to the health of humans and animals. This threat is caused either by the direct contamination of agricultural commodities or by a "carry-over" of mycotoxins and their metabolites into animal tissues, milk, and eggs after feeding of contaminated hay or corn. As a consequence of their diverse chemical structures and varying physical properties, mycotoxins exhibit a wide range of biological effects. Individual mycotoxins can be genotoxic, mutagenic, carcinogenic, teratogenic, and oestrogenic. To protect consumer health and to reduce economic losses, surveillance and control of mycotoxins in food and feed has become a major objective for producers, regulatory authorities and researchers worldwide. However, the variety of chemical structures makes it impossible to use one single technique for mycotoxin analysis. Hence, a vast number of analytical methods has been developed and validated. The heterogeneity of food matrices combined with the demand for a fast, simultaneous and accurate determination of multiple mycotoxins creates enormous challenges for routine analysis. The most crucial issues will be discussed in this review. These are (1) the collection of representative samples, (2) the performance of classical and emerging analytical methods based on chromatographic or immunochemical techniques, (3) the validation of official methods for enforcement, and (4) the limitations and future prospects of the current methods.
Martinez, Ramon; Ordunez, Pedro; Soliz, Patricia N; Ballesteros, Michael F
2016-01-01
Background: The complexity of current injury-related health issues demands the use of diverse and massive data sets for comprehensive analyses, and the application of novel methods to communicate data effectively to the public health community, decision-makers and the public. Recent advances in information visualisation, the availability of new visual analytic methods and tools, and progress in information technology provide an opportunity for shaping the next generation of injury surveillance. Objective: To introduce the conceptual bases of data visualisation, and to propose a visual analytic and visualisation platform in public health surveillance for injury prevention and control. Methods: The paper introduces the conceptual bases of data visualisation, describes a visual analytic and visualisation platform, and presents two real-world case studies illustrating their application in public health surveillance for injury prevention and control. Results: Application of the visual analytic and visualisation platform is presented as a solution for improved access to heterogeneous data sources, enhanced data exploration and analysis, effective communication of data, and support for decision-making. Conclusions: Applications of data visualisation concepts and a visual analytic platform could play a key role in shaping the next generation of injury surveillance. A visual analytic and visualisation platform could improve data use, analytic capacity, and the ability to effectively communicate findings and key messages. The public health surveillance community is encouraged to identify opportunities to develop and expand its use in injury prevention and control. PMID:26728006
Reference intervals for selected serum biochemistry analytes in cheetahs Acinonyx jubatus.
Hudson-Lamb, Gavin C; Schoeman, Johan P; Hooijberg, Emma H; Heinrich, Sonja K; Tordiffe, Adrian S W
2016-02-26
Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs and even more so for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs. Samples were collected from 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity status, age, sex and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes that were measured were sodium, potassium, magnesium, chloride, urea and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L - 166 mmol/L), potassium (3.9 mmol/L - 5.2 mmol/L), magnesium (0.8 mmol/L - 1.2 mmol/L), chloride (97 mmol/L - 130 mmol/L), urea (8.2 mmol/L - 25.1 mmol/L) and creatinine (88 µmol/L - 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined. Captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.
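The non-parametric approach described above takes the central 95% of the sample as the reference interval and bootstraps a confidence interval for each limit. A minimal sketch of that procedure follows; the 90% width of the bootstrap CIs matches the abstract, but the resampling details (2000 resamples, percentile method) are assumptions, not the authors' exact protocol.

```python
import numpy as np

def reference_interval(values, n_boot=2000, seed=0):
    """Non-parametric 95% reference interval (2.5th and 97.5th sample
    percentiles) with 90% bootstrap confidence intervals for each limit."""
    rng = np.random.default_rng(seed)
    v = np.asarray(values, float)
    lower, upper = np.percentile(v, [2.5, 97.5])
    # Resample with replacement and recompute both limits each time
    boot = rng.choice(v, size=(n_boot, v.size), replace=True)
    boot_lower = np.percentile(boot, 2.5, axis=1)
    boot_upper = np.percentile(boot, 97.5, axis=1)
    lower_ci = tuple(np.percentile(boot_lower, [5, 95]))  # 90% CI
    upper_ci = tuple(np.percentile(boot_upper, [5, 95]))
    return (lower, upper), lower_ci, upper_ci
```

With only 66 animals the bootstrap CIs of the limits are wide, which is why international guidelines recommend reporting them alongside the interval itself.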
NASA Astrophysics Data System (ADS)
Qin, Ting; Liao, Congwei; Huang, Shengxiang; Yu, Tianbao; Deng, Lianwen
2018-01-01
An analytical drain current model based on the surface potential is proposed for amorphous indium gallium zinc oxide (a-InGaZnO) thin-film transistors (TFTs) with a synchronized symmetric dual-gate (DG) structure. Solving the electric field, surface potential (φS), and central potential (φ0) of the InGaZnO film using the Poisson equation with the Gaussian method and Lambert function is demonstrated in detail. The compact analytical model of current-voltage behavior, which consists of drift and diffusion components, is investigated by regional integration, and voltage-dependent effective mobility is taken into account. Comparison results demonstrate that the calculation results obtained using the derived models match well with the simulation results obtained using a technology computer-aided design (TCAD) tool. Furthermore, the proposed model is incorporated into SPICE simulations using Verilog-A to verify the feasibility of using DG InGaZnO TFTs for high-performance circuit designs.
Modeling and analysis of a magnetically levitated synchronous permanent magnet planar motor
NASA Astrophysics Data System (ADS)
Kou, Baoquan; Zhang, Lu; Li, Liyi; Zhang, Hailin
2012-04-01
In this paper, a new magnetically levitated synchronous permanent magnet planar motor (MLSPMPM) driven by composite current is proposed, in which the mover consists of a copper coil array and the stator consists of magnets and a magnetic conductor. The coil pitch τt and permanent magnet pole pitch τp satisfy the relationship 3nτt = (3n ± 1)τp. First, an analytical model of the planar motor is established, and the flux density distribution of the two-dimensional magnet array is obtained by solving the equations of the scalar magnetic potential. Second, the expressions for the electromagnetic forces induced by the magnetic field and the composite current are derived. To verify the analytical model and the electromagnetic forces, the finite element method (FEM) is used to calculate the flux density and electromagnetic forces of the MLSPMPM. The results from FEM are in good agreement with those from the analytical equations, indicating that the analytical model is reasonable.
An Analytical Assessment of NASA's N+1 Subsonic Fixed Wing Project Noise Goal
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.; Envia, Edmane; Burley, Casey L.
2009-01-01
The Subsonic Fixed Wing Project of NASA's Fundamental Aeronautics Program has adopted a noise reduction goal for new, subsonic, single-aisle, civil aircraft expected to replace current 737 and A320 airplanes. These so-called 'N+1' aircraft - designated in NASA vernacular as such since they will follow the current, in-service, 'N' airplanes - are hoped to achieve certification noise goal levels of 32 cumulative EPNdB under current Stage 4 noise regulations. A notional, N+1, single-aisle, twinjet transport with ultrahigh bypass ratio turbofan engines is analyzed in this study using NASA software and methods. Several advanced noise-reduction technologies are analytically applied to the propulsion system and airframe. Certification noise levels are predicted and compared with the NASA goal.
Advances in spectroscopic methods for quantifying soil carbon
Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco; Hively, W. Dean
2012-01-01
The current gold standard for soil carbon (C) determination is elemental C analysis using dry combustion. However, this method requires expensive consumables, is limited by the number of samples that can be processed (~100/d), and is restricted to the determination of total carbon. With increased interest in soil C sequestration, faster methods of analysis are needed, and there is growing interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared spectral ranges. These spectral methods can decrease analytical requirements and speed sample processing, be applied to large landscape areas using remote sensing imagery, and be used to predict multiple analytes simultaneously. However, the methods require localized calibrations to establish the relationship between spectral data and reference analytical data, and also have additional, specific problems. For example, remote sensing is capable of scanning entire watersheds for soil carbon content but is limited to the surface layer of tilled soils and may require difficult and extensive field sampling to obtain proper localized calibration reference values. The objective of this chapter is to discuss the present state of spectroscopic methods for determination of soil carbon.
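The localized-calibration step mentioned above amounts to regressing reference carbon values on spectral data. A minimal sketch with synthetic six-band "spectra" and an ordinary least-squares fit is shown below; real soil work typically uses partial least squares over hundreds of wavelengths, and all values here are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic local calibration set: 40 soils, reflectance at 6 bands,
# reference total carbon (%) from dry combustion (all simulated).
spectra = rng.normal(0.5, 0.1, size=(40, 6))
true_coefs = np.array([2.0, -1.0, 0.5, 0.0, 1.5, -0.5])
carbon = 1.0 + spectra @ true_coefs + rng.normal(0, 0.02, size=40)

# Fit a linear calibration (a simple stand-in for the PLS/PCR models
# normally used in soil spectroscopy).
design = np.column_stack([np.ones(len(spectra)), spectra])
coefs, *_ = np.linalg.lstsq(design, carbon, rcond=None)

def predict_carbon(spectrum):
    """Predict soil carbon (%) for a new spectrum using the calibration."""
    return coefs[0] + spectrum @ coefs[1:]
```

The key practical point from the text survives even in this toy version: the calibration is only as good as the reference values and the locality of the soils used to build it.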
Peraman, R.; Bhadraya, K.; Reddy, Y. Padmanabha; Reddy, C. Surayaprakash; Lokesh, T.
2015-01-01
By considering the current regulatory requirements for analytical method development, a reversed-phase high performance liquid chromatographic method for routine analysis of etofenamate in dosage form has been optimized using the analytical quality by design approach. Unlike the routine approach, the present study began with an understanding of the quality target product profile, the analytical target profile and a risk assessment for the method variables that affect the method response. A liquid chromatography system equipped with a C18 column (250×4.6 mm, 5 μ), a binary pump and a photodiode array detector was used in this work. The experiments were planned using a central composite design, which saves time, reagents and other resources. Sigma Tech software was used to plan and analyse the experimental observations and to obtain the quadratic process model. The process model was used to predict the retention time. The retention times predicted from the contour diagram were verified experimentally and agreed with the observed data. The optimized method used a flow rate of 1.2 ml/min with a mobile phase of methanol and 0.2% triethylamine in water (85:15, % v/v), pH adjusted to 6.5. The method was validated and verified for targeted method performance, robustness and system suitability during method transfer. PMID:26997704
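The design-of-experiments step in such a study reduces to fitting a quadratic (second-order) process model to central-composite-design runs and using it to predict the response. The sketch below uses two coded factors and illustrative retention times; the factors, levels and responses are assumptions, not the paper's actual design or data.

```python
import numpy as np

# Central composite design in two coded factors (e.g. % organic, pH) with
# illustrative retention-time responses (min) -- not the paper's data.
X = np.array([
    [-1, -1], [1, -1], [-1, 1], [1, 1],                 # factorial points
    [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],   # axial points
    [0, 0], [0, 0], [0, 0],                             # center replicates
])
y = np.array([8.2, 5.1, 7.6, 4.9, 9.0, 4.5, 6.8, 6.5, 6.1, 6.2, 6.0])

def quadratic_terms(x):
    """Expand factor settings into second-order model terms."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Fit y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
coef, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)

def predict_rt(x1, x2):
    """Predicted retention time at a point in the design space."""
    return quadratic_terms(np.array([[x1, x2]]))[0] @ coef
```

Contour plots of `predict_rt` over the coded factor space are what yield the "predictive solution for retention time" that the study then verified experimentally.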
A Meta-Analytic Review of Research on Gender Differences in Sexuality, 1993-2007
ERIC Educational Resources Information Center
Petersen, Jennifer L.; Hyde, Janet Shibley
2010-01-01
In 1993 Oliver and Hyde conducted a meta-analysis on gender differences in sexuality. The current study updated that analysis with current research and methods. Evolutionary psychology, cognitive social learning theory, social structural theory, and the gender similarities hypothesis provided predictions about gender differences in sexuality. We…
Recent advances in immunosensor for narcotic drug detection
Gandhi, Sonu; Suman, Pankaj; Kumar, Ashok; Sharma, Prince; Capalash, Neena; Suri, C. Raman
2015-01-01
Introduction: Immunosensors for illicit drugs have gained immense interest and have found several applications in drug abuse monitoring. This technology offers low-cost detection of narcotics, thereby providing a confirmatory platform to complement existing analytical methods. Methods: In this minireview, we define the basic concept of a transducer for immunosensor development that utilizes antibodies and low molecular mass hapten (opiate) molecules. Results: This article emphasizes recent advances in immunoanalytical techniques for the monitoring of opiate drugs. Our results demonstrate that high-quality antibodies can be used for immunosensor development against a target analyte with greater sensitivity, specificity and precision than other available analytical methods. Conclusion: In this review we highlight the fundamentals of different transducer technologies and their applications for immunosensor development currently under way in our laboratory, including rapid screening via an immunochromatographic kit and label-free optical detection via enzyme-, fluorescence-, gold nanoparticle- and carbon nanotube-based immunosensing for sensitive and specific monitoring of opiates. PMID:26929925
Calculated and measured fields in superferric wiggler magnets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blum, E.B.; Solomon, L.
1995-02-01
Although Klaus Halbach is widely known and appreciated as the originator of the computer program POISSON for electromagnetic field calculation, Klaus has always believed that analytical methods can give much more insight into the performance of a magnet than numerical simulation. Analytical approximations readily show how the different aspects of a magnet's design, such as pole dimensions, current, and coil configuration, contribute to the performance. These methods yield accuracies of better than 10%. Analytical methods should therefore be used when conceptualizing a magnet design. Computer analysis can then be used for refinement. A simple model is presented for the peak on-axis field of an electromagnetic wiggler with iron poles and superconducting coils. The model is applied to the radiator section of the superconducting wiggler for the BNL Harmonic Generation Free Electron Laser. The predictions of the model are compared to the measured field and the results from POISSON.
Microscale Concentration Measurements Using Laser Light Scattering Methods
NASA Technical Reports Server (NTRS)
Niederhaus, Charles; Miller, Fletcher
2004-01-01
The development of lab-on-a-chip devices for microscale biochemical assays has led to the need for microscale concentration measurements of specific analytes. While fluorescence methods are the current choice, they require developing fluorophore-tagged conjugates for each analyte of interest. In addition, fluorescent imaging is a volume-based method, and can be limiting as smaller detection regions are required.
A comparison of several techniques for imputing tree level data
David Gartner
2002-01-01
As Forest Inventory and Analysis (FIA) changes from periodic surveys to the multipanel annual survey, new analytical methods become available. The current official statistic is the moving average. One alternative is an updated moving average. Several methods of updating plot per acre volume have been discussed previously. However, these methods may not be appropriate...
77 FR 14814 - Tobacco Product Analysis; Scientific Workshop; Request for Comments
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-13
... work to develop tobacco reference products that are not currently available for laboratory use. Discuss... methods used to analyze tobacco products. FDA will invite speakers to address scientific and technical matters relating to the testing of tobacco reference products and the analytical methods used to measure...
Hao, Zhi-hong; Yao, Jian-zhen; Tang, Rui-ling; Zhang, Xue-mei; Li, Wen-ge; Zhang, Qin
2015-02-01
A method for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples by direct current arc full spectrum direct reading atomic emission spectroscopy (DC-Arc-AES) was established. The direct current arc full spectrum direct reading atomic emission spectrometer, with a large-area solid-state detector, provides full spectrum direct reading and real-time background correction. New electrodes and a new buffer recipe were proposed in this paper, and a national patent has been applied for. Suitable analytical line pairs and background correction points for the elements were selected, and the internal standard method was used with Ge as the internal standard. A multistage current program was adopted, with each current stage held for a different time, to ensure that each element has a good signal-to-noise ratio. A continuously rising current mode was chosen to effectively eliminate splashing of the sample. Argon as a shielding gas eliminates CN band formation, reduces the spectral background, and helps stabilize the arc; an argon flow of 3.5 L x min(-1) was selected. The evaporation curve of each element was measured, showing that the evaporation behavior of the elements is consistent; combined with the effects of different spectrographic times on intensity and background, a spectrographic time of 35 s was selected. National standard substances were selected as the standard series, covering the different matrices and contents needed for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples. Under the optimum experimental conditions, the detection limits for B, Mo, Ag, Sn and Pb are 1.1, 0.09, 0.01, 0.41, and 0.56 microg x g(-1) respectively, and the precisions (RSD, n=12) for B, Mo, Ag, Sn and Pb are 4.57%-7.63%, 5.14%-7.75%, 5.48%-12.30%, 3.97%-10.46%, and 4.26%-9.21% respectively.
The analytical accuracy was validated with national standards, and the results are in agreement with the certified values. The method is simple and rapid, and is a practical, advanced analytical method for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples.
Convergence analysis of two-node CMFD method for two-group neutron diffusion eigenvalue problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, Yongjin; Park, Jinsu; Lee, Hyun Chul
2015-12-01
In this paper, the nonlinear coarse-mesh finite difference method with two-node local problem (CMFD2N) is proven to be unconditionally stable for neutron diffusion eigenvalue problems. The explicit current correction factor (CCF) is derived based on the two-node analytic nodal method (ANM2N), and a Fourier stability analysis is applied to the linearized algorithm. It is shown that the analytic convergence rate obtained by the Fourier analysis compares very well with the numerically measured convergence rate. It is also shown that the theoretical convergence rate is only governed by the converged second harmonic buckling and the mesh size. It is also noted that the convergence rate of the CCF of the CMFD2N algorithm is dependent on the mesh size, but not on the total problem size. This is contrary to expectation for an eigenvalue problem. The novel points of this paper are the analytical derivation of the convergence rate of the CMFD2N algorithm for the eigenvalue problem, and the convergence analysis based on the analytic derivations.
Control/structure interaction conceptual design tool
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
1990-01-01
The JPL Control/Structure Interaction Program is developing new analytical methods for designing micro-precision spacecraft with controlled structures. One of these, the Conceptual Design Tool, will illustrate innovative new approaches to the integration of multi-disciplinary analysis and design methods. The tool will be used to demonstrate homogeneity of presentation, uniform data representation across analytical methods, and integrated systems modeling. The tool differs from current 'integrated systems' that support design teams most notably in its support for the new CSI multi-disciplinary engineer. The design tool will utilize a three dimensional solid model of the spacecraft under design as the central data organization metaphor. Various analytical methods, such as finite element structural analysis, control system analysis, and mechanical configuration layout, will store and retrieve data from a hierarchical, object oriented data structure that supports assemblies of components with associated data and algorithms. In addition to managing numerical model data, the tool will assist the designer in organizing, stating, and tracking system requirements.
NASA Astrophysics Data System (ADS)
Lu, Zheng; Huang, Biao; Zhang, Qi; Lu, Xilin
2018-05-01
Eddy-current tuned mass dampers (EC-TMDs) are non-contacting passive control devices and are developed on the basis of conventional tuned mass dampers. They comprise a solid mass, a stiffness element, and a damping element, wherein the damping mechanism originates from eddy currents. By relative motion between a non-magnetic conductive metal and a permanent magnet in a dynamic system, a time-varying magnetic field is induced in the conductor, thereby generating eddy currents. The eddy currents induce a magnetic field with opposite polarity, causing repulsive forces, i.e., damping forces. This technology can overcome the drawbacks of conventional tuned mass dampers, such as limited service life, deterioration of mechanical properties, and undesired additional stiffness. The experimental and analytical study of this system installed on a multi-degree-of-freedom structure is presented in this paper. A series of shaking table tests were conducted on a five-story steel-frame model with/without an EC-TMD to evaluate the effectiveness and performance of the EC-TMD in suppressing the vibration of the model under seismic excitations. The experimental results show that the EC-TMD can effectively reduce the displacement response, acceleration response, interstory drift ratio, and maximum strain of the columns under different earthquake excitations. Moreover, an analytical method was proposed on the basis of electromagnetic and structural dynamic theories. A comparison between the test and simulation results shows that the simulation method can be used to estimate the response of structures with an EC-TMD under earthquake excitations with acceptable accuracy.
Perich, C; Ricós, C; Alvarez, V; Biosca, C; Boned, B; Cava, F; Doménech, M V; Fernández-Calle, P; Fernández-Fernández, P; García-Lario, J V; Minchinela, J; Simón, M; Jansen, R
2014-05-15
Current external quality assurance schemes have been classified into six categories, according to their ability to verify the degree of standardization of the participating measurement procedures. SKML (Netherlands) is a Category 1 EQA scheme (commutable EQA materials with values assigned by reference methods), whereas SEQC (Spain) is a Category 5 scheme (replicate analyses of non-commutable materials with no values assigned by reference methods). The results obtained by a group of Spanish laboratories participating in a pilot study organized by SKML are examined, with the aim of pointing out the improvements over our current scheme that a Category 1 program could provide. Imprecision and bias are calculated for each analyte and laboratory, and compared with quality specifications derived from biological variation. Of the 26 analytes studied, 9 had results comparable with those from reference methods, and 10 analytes did not have comparable results. The remaining 7 analytes measured did not have available reference method values, and in these cases, comparison with the peer group showed comparable results. The reasons for disagreement in the second group can be summarized as: use of non-standard methods (IFCC without exogenous pyridoxal phosphate for AST and ALT, Jaffé kinetic at low-normal creatinine concentrations and with eGFR); non-commutability of the reference material used to assign values to the routine calibrator (calcium, magnesium and sodium); use of reference materials without established commutability instead of reference methods for AST and GGT, and lack of a systematic effort by manufacturers to harmonize results. Results obtained in this work demonstrate the important role of external quality assurance programs using commutable materials with values assigned by reference methods to correctly monitor the standardization of laboratory tests with consequent minimization of risk to patients. Copyright © 2013 Elsevier B.V. All rights reserved.
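The per-analyte evaluation described above can be illustrated with a small computation: imprecision as the coefficient of variation of a laboratory's replicate results, bias against a reference-method assigned value, and a check against desirable specifications derived from biological variation. All numbers below are illustrative, not the study's data.

```python
import numpy as np

# One laboratory's results for one analyte on a commutable EQA material
# with a reference-method assigned value (all values illustrative).
reported = np.array([4.1, 4.3, 3.9, 4.2, 4.0])
assigned = 4.0

cv_pct = 100 * reported.std(ddof=1) / reported.mean()     # imprecision (%)
bias_pct = 100 * (reported.mean() - assigned) / assigned  # bias (%)

# Desirable performance specifications derived from biological
# variation (illustrative limits for this hypothetical analyte).
cv_spec, bias_spec = 4.5, 3.0
meets_spec = cv_pct <= cv_spec and abs(bias_pct) <= bias_spec
```

The point of a Category 1 scheme is that `assigned` comes from a reference method on a commutable material, so `bias_pct` reflects true trueness rather than agreement with a peer group.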
2014-01-01
In current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, one common method for searching for the critical slip surface is the Genetic Algorithm (GA), while the method used to calculate the slope safety factor is Fellenius' slices method. However, GA needs to be validated with more numerical tests, and Fellenius' slices method is only an approximate method, like the finite element method. This paper proposes a new approach to determining the minimum slope safety factor: the safety factor is determined with an analytical solution, and the critical slip surface is searched for with a Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' slices method. The Genetic-Traversal Random Method uses random picks to implement mutation. A computer program that automates the search was developed for the Genetic-Traversal Random Method. Comparison with other methods, such as the slope/w software, indicates that the Genetic-Traversal Random Search Method can give a very low safety factor, about half that of the other methods. However, the minimum safety factor obtained with the Genetic-Traversal Random Search Method is very close to the lower-bound solutions of the slope safety factor given by the Ansys software. PMID:24782679
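The search strategy can be sketched as follows. The safety-factor function here is a smooth placeholder standing in for the paper's analytical solution, and the loop mixes global random sampling (the "traversal" part) with random perturbation of the best circle found so far (the "mutation" part); all geometry and values are hypothetical.

```python
import random

def safety_factor(xc, yc, radius):
    """Placeholder for an analytical safety-factor solution: a smooth
    dummy function whose minimum (1.2) lies at the circle (5, 12, 9)."""
    return 1.2 + 0.01 * ((xc - 5) ** 2 + (yc - 12) ** 2 + (radius - 9) ** 2)

def genetic_traversal_random_search(n_iter=5000, seed=0):
    """Search circle parameters (xc, yc, radius) for the minimum safety factor."""
    rng = random.Random(seed)
    best, best_fs = None, float("inf")
    for _ in range(n_iter):
        if best is None or rng.random() < 0.5:
            # traversal step: sample the whole search space at random
            cand = (rng.uniform(0, 10), rng.uniform(5, 20), rng.uniform(1, 15))
        else:
            # mutation step: random perturbation of the best circle so far
            cand = tuple(v + rng.gauss(0, 0.5) for v in best)
        fs = safety_factor(*cand)
        if fs < best_fs:
            best, best_fs = cand, fs
    return best, best_fs

best_circle, min_fs = genetic_traversal_random_search()
```

Because each candidate is scored by a closed-form safety factor rather than a slices summation, the search can afford many more evaluations than a GA wrapped around Fellenius' method.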
Long, H. Keith; Daddow, Richard L.; Farrar, Jerry W.
1998-01-01
Since 1962, the U.S. Geological Survey (USGS) has operated the Standard Reference Sample Project to evaluate the performance of USGS, cooperator, and contractor analytical laboratories that analyze chemical constituents of environmental samples. The laboratories are evaluated by using performance evaluation samples, called Standard Reference Samples (SRSs). SRSs are submitted to laboratories semi-annually for round-robin laboratory performance comparison purposes. Currently, approximately 100 laboratories are evaluated for their analytical performance on six SRSs for inorganic and nutrient constituents. As part of the SRS Project, a surplus of homogeneous, stable SRSs is maintained for purchase by USGS offices and participating laboratories for use in continuing quality-assurance and quality-control activities. Statistical evaluation of the laboratories' results provides information to compare the analytical performance of the laboratories and to determine possible analytical deficiencies and problems. SRS results also provide information on the bias and variability of the different analytical methods used in the SRS analyses.
Yanagisawa, Naoki; Dutta, Debashis
2012-08-21
In this Article, we describe a microfluidic enzyme-linked immunosorbent assay (ELISA) method whose sensitivity can be substantially enhanced through preconcentration of the target analyte around a semipermeable membrane. The reported preconcentration has been accomplished in our current work via electrokinetic means allowing a significant increase in the amount of captured analyte relative to nonspecific binding in the trapping/detection zone. Upon introduction of an enzyme substrate into this region, the rate of generation of the ELISA reaction product (resorufin) was observed to increase by over a factor of 200 for the sample and 2 for the corresponding blank compared to similar assays without analyte trapping. Interestingly, in spite of nonuniformities in the amount of captured analyte along the surface of our analysis channel, the measured fluorescence signal in the preconcentration zone increased linearly with time over an enzyme reaction period of 30 min and at a rate that was proportional to the analyte concentration in the bulk sample. In our current study, the reported technique has been shown to reduce the smallest detectable concentration of the tumor marker CA 19-9 and Blue Tongue Viral antibody by over 2 orders of magnitude compared to immunoassays without analyte preconcentration. When compared to microwell based ELISAs, the reported microfluidic approach not only yielded a similar improvement in the smallest detectable analyte concentration but also reduced the sample consumption in the assay by a factor of 20 (5 μL versus 100 μL).
Review of spectral imaging technology in biomedical engineering: achievements and challenges.
Li, Qingli; He, Xiaofu; Wang, Yiting; Liu, Hongying; Xu, Dongrong; Guo, Fangmin
2013-10-01
Spectral imaging is a technology that integrates conventional imaging and spectroscopy to get both spatial and spectral information from an object. Although this technology was originally developed for remote sensing, it has been extended to the biomedical engineering field as a powerful analytical tool for biological and biomedical research. This review introduces the basics of spectral imaging, imaging methods, current equipment, and recent advances in biomedical applications. The performance and analytical capabilities of spectral imaging systems for biological and biomedical imaging are discussed. In particular, the current achievements and limitations of this technology in biomedical engineering are presented. The benefits and development trends of biomedical spectral imaging are highlighted to provide the reader with an insight into the current technological advances and its potential for biomedical research.
Parrinello, Christina M.; Grams, Morgan E.; Couper, David; Ballantyne, Christie M.; Hoogeveen, Ron C.; Eckfeldt, John H.; Selvin, Elizabeth; Coresh, Josef
2016-01-01
Background Equivalence of laboratory tests over time is important for longitudinal studies. Even a small systematic difference (bias) can result in substantial misclassification. Methods We selected 200 Atherosclerosis Risk in Communities Study participants attending all 5 study visits over 25 years. Eight analytes were re-measured in 2011–13 from stored blood samples from multiple visits: creatinine, uric acid, glucose, total cholesterol, HDL-cholesterol, LDL-cholesterol, triglycerides, and high-sensitivity C-reactive protein. Original values were recalibrated to re-measured values using Deming regression. Differences >10% were considered to reflect substantial bias, and correction equations were applied to affected analytes in the total study population. We examined trends in chronic kidney disease (CKD) pre- and post-recalibration. Results Repeat measures were highly correlated with original values (Pearson’s r>0.85 after removing outliers [median 4.5% of paired measurements]), but 2 of 8 analytes (creatinine and uric acid) had differences >10%. Original values of creatinine and uric acid were recalibrated to current values using correction equations. CKD prevalence differed substantially after recalibration of creatinine (visits 1, 2, 4 and 5 pre-recalibration: 21.7%, 36.1%, 3.5%, 29.4%; post-recalibration: 1.3%, 2.2%, 6.4%, 29.4%). For HDL-cholesterol, the current direct enzymatic method differed substantially from magnesium dextran precipitation used during visits 1–4. Conclusions Analytes re-measured in samples stored for ~25 years were highly correlated with original values, but two of the 8 analytes showed substantial bias at multiple visits. Laboratory recalibration improved reproducibility of test results across visits and resulted in substantial differences in CKD prevalence. We demonstrate the importance of consistent recalibration of laboratory assays in a cohort study. PMID:25952043
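Recalibration of original values to re-measured values by Deming regression, which allows measurement error in both variables, can be sketched as below. The creatinine-like numbers are synthetic and noise-free purely for illustration; they are not the study's data.

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression of y on x with errors in both variables;
    delta is the assumed ratio of the y to x error variances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    slope = ((syy - delta * sxx)
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

# Synthetic example: original values vs. re-measured values on the current assay
original = np.linspace(50, 300, 20)   # hypothetical creatinine, micromol/L
remeasured = 0.9 * original + 5.0     # noise-free line for illustration
slope, intercept = deming(original, remeasured)
recalibrated = slope * original + intercept  # originals mapped to current scale
```

In a cohort setting, the fitted line would be estimated from the re-measured subset and then applied to all original results, with a correction only triggered when the systematic difference exceeds a pre-set threshold such as the 10% used above.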
An analytical drain current model for symmetric double-gate MOSFETs
NASA Astrophysics Data System (ADS)
Yu, Fei; Huang, Gongyi; Lin, Wei; Xu, Chuanzhong
2018-04-01
An analytical surface-potential-based drain current model of symmetric double-gate (sDG) MOSFETs is described as a SPICE-compatible model in this paper. The continuous surface and central potentials from the accumulation to the strong inversion regions are solved from the 1-D Poisson's equation in sDG MOSFETs. Furthermore, the drain current is derived from the charge sheet model as a function of the surface potential. Over a wide range of terminal voltages, doping concentrations, and device geometries, the surface potential calculation scheme and drain current model are verified by solving the 1-D Poisson's equation with the least squares method and by using Silvaco Atlas simulation results and experimental data, respectively. Such a model can be adopted as a useful platform for developing circuit simulators and provides a clear understanding of sDG MOSFET device physics.
NASA Astrophysics Data System (ADS)
Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min
2017-09-01
The Halbach type hollow cylindrical permanent magnet array (HCPMA) is a volume-compact and energy-conserving field source, which has attracted intense interest for many practical applications. Here, using the complex variable integration method based on the Biot-Savart Law (including current distributions inside the body and on the surfaces of the magnet), we derive analytical field solutions for an ideal multipole HCPMA in the entire space, including the interior of the magnet. The analytic field expression inside the array material is used to construct an analytic demagnetization function, with which we can explain the origin of demagnetization phenomena in the HCPMA by taking into account an ideal magnetic hysteresis loop with finite coercivity. These analytical field expressions and demagnetization functions provide deeper insight into the nature of such permanent magnet array systems and offer guidance in designing optimized array systems.
e-Research and Learning Theory: What Do Sequence and Process Mining Methods Contribute?
ERIC Educational Resources Information Center
Reimann, Peter; Markauskaite, Lina; Bannert, Maria
2014-01-01
This paper discusses the fundamental question of how data-intensive e-research methods could contribute to the development of learning theories. Using methodological developments in research on self-regulated learning as an example, it argues that current applications of data-driven analytical techniques, such as educational data mining and its…
A Review of Biological Agent Sampling Methods and ...
This study was conducted to evaluate current sampling and analytical capabilities, from a time and resource perspective, for a large-scale biological contamination incident. The analysis will be useful for strategically directing future research investment.
The sweet tooth of biopharmaceuticals: importance of recombinant protein glycosylation analysis.
Lingg, Nico; Zhang, Peiqing; Song, Zhiwei; Bardor, Muriel
2012-12-01
Biopharmaceuticals currently represent the fastest growing sector of the pharmaceutical industry, mainly driven by a rapid expansion in the manufacture of recombinant protein-based drugs. Glycosylation is the most prominent post-translational modification occurring on these protein drugs. It constitutes one of the critical quality attributes that requires thorough analysis for optimal efficacy and safety. This review examines the functional importance of glycosylation of recombinant protein drugs, illustrated using three examples of protein biopharmaceuticals: IgG antibodies, erythropoietin and glucocerebrosidase. Current analytical methods are reviewed as solutions for qualitative and quantitative measurements of glycosylation to monitor quality target product profiles of recombinant glycoprotein drugs. Finally, we propose a framework for designing the quality target product profile of recombinant glycoproteins and planning workflow for glycosylation analysis with the selection of available analytical methods and tools. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A pipette-based calibration system for fast-scan cyclic voltammetry with fast response times.
Ramsson, Eric S
2016-01-01
Fast-scan cyclic voltammetry (FSCV) is an electrochemical technique that utilizes the oxidation and/or reduction of an analyte of interest to infer rapid changes in concentrations. In order to calibrate the resulting oxidative or reductive current, known concentrations of an analyte must be introduced under controlled settings. Here, I describe a simple and cost-effective method, using a Petri dish and pipettes, for the calibration of carbon fiber microelectrodes (CFMs) using FSCV.
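In practice such a calibration reduces to fitting the peak current against the known bulk concentrations and inverting the fitted line for unknowns. The sketch below uses made-up dopamine-like numbers, not measured data from the described pipette system.

```python
import numpy as np

# Known analyte concentrations (micromolar) introduced at the electrode,
# and the peak oxidative currents (nA) they produced (illustrative values).
conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
current = np.array([2.6, 5.1, 10.2, 19.8, 40.5])

# Least-squares calibration line: current = slope * concentration + intercept
slope, intercept = np.polyfit(conc, current, 1)

def to_concentration(i_na):
    """Invert the calibration: infer concentration from a measured current."""
    return (i_na - intercept) / slope
```

The speed of the solution exchange matters precisely because the calibration assumes the electrode sees the stated bulk concentration; a slow-responding flow cell blurs the step and biases the fitted slope.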
Recent Advances in Paper-Based Sensors
Liana, Devi D.; Raguse, Burkhard; Gooding, J. Justin; Chow, Edith
2012-01-01
Paper-based sensors are a new alternative technology for fabricating simple, low-cost, portable and disposable analytical devices for many application areas including clinical diagnosis, food quality control and environmental monitoring. The unique properties of paper, which allow passive liquid transport and compatibility with chemicals/biochemicals, are the main advantages of using paper as a sensing platform. Depending on the main goal to be achieved in paper-based sensors, the fabrication methods and the analysis techniques can be tuned to fulfill the needs of the end-user. Current paper-based sensors are focused on microfluidic delivery of solution to the detection site, whereas more advanced designs involve complex 3-D geometries based on the same microfluidic principles. Although paper-based sensors are very promising, they still suffer from certain limitations in accuracy and sensitivity. However, it is anticipated that in the future, with advances in fabrication and analytical techniques, there will be more new and innovative developments in paper-based sensors. These sensors could better meet the current objectives of a viable low-cost and portable device, in addition to offering high sensitivity and selectivity and multiple analyte discrimination. This paper is a review of recent advances in paper-based sensors and covers the following topics: existing fabrication techniques, analytical methods and application areas. Finally, the present challenges and future outlooks are discussed. PMID:23112667
Analytical methods for determination of mycotoxins: a review.
Turner, Nicholas W; Subrahmanyam, Sreenath; Piletsky, Sergey A
2009-01-26
Mycotoxins are small (MW approximately 700), toxic chemical products formed as secondary metabolites by a few fungal species that readily colonise crops and contaminate them with toxins in the field or after harvest. Ochratoxins and aflatoxins are mycotoxins of major significance, and hence there has been significant research on a broad range of analytical and detection techniques that could be useful and practical. Due to the variety of structures of these toxins, it is impossible to use one standard technique for analysis and/or detection. Practical requirements for high-sensitivity analysis and the need for a specialist laboratory setting create challenges for routine analysis. Several existing analytical techniques, which offer flexible and broad-based methods of analysis and in some cases detection, are discussed in this manuscript. Many of the methods in use are lab-based, and to our knowledge no single technique stands out above the rest, although analytical liquid chromatography, commonly coupled with mass spectrometry, is likely to remain popular. This review discusses (a) sample pre-treatment methods such as liquid-liquid extraction (LLE), supercritical fluid extraction (SFE) and solid phase extraction (SPE); (b) separation methods such as thin-layer chromatography (TLC), high performance liquid chromatography (HPLC), gas chromatography (GC) and capillary electrophoresis (CE); and (c) others such as ELISA. Current trends, advantages and disadvantages, and future prospects of these methods are also discussed.
Benefits and Limitations of DNA Barcoding and Metabarcoding in Herbal Product Authentication
Raclariu, Ancuta Cristina; Heinrich, Michael; Ichim, Mihael Cristin
2017-01-01
Introduction: Herbal medicines play an important role globally in the health care sector and in industrialised countries they are often considered as an alternative to mono-substance medicines. Current quality and authentication assessment methods rely mainly on morphology and analytical phytochemistry-based methods detailed in pharmacopoeias. Herbal products, however, are often highly processed with numerous ingredients, and even if these analytical methods are accurate for quality control of specific lead or marker compounds, they are of limited suitability for the authentication of biological ingredients. Objective: To review the benefits and limitations of DNA barcoding and metabarcoding in complementing current herbal product authentication. Method: Recent literature relating to DNA-based authentication of medicinal plants, herbal medicines and products is summarised to provide a basic understanding of how DNA barcoding and metabarcoding can be applied to this field. Results: Different methods of quality control and authentication have varying resolution and usefulness along the value chain of these products. DNA barcoding can be used for authenticating products based on single herbal ingredients and DNA metabarcoding for assessment of species diversity in processed products, and both methods should be used in combination with appropriate hyphenated chemical methods for quality control. Conclusions: DNA barcoding and metabarcoding have potential in the context of quality control of both well and poorly regulated supply systems. Standardisation of protocols for DNA barcoding and DNA sequence-based identification is necessary before DNA-based biological methods can be implemented as routine analytical approaches and approved by the competent authorities for use in regulated procedures. © 2017 The Authors. Phytochemical Analysis Published by John Wiley & Sons Ltd. PMID:28906059
Danezis, G P; Anagnostopoulos, C J; Liapis, K; Koupparis, M A
2016-10-26
One of the recent trends in analytical chemistry is the development of economical, quick and easy hyphenated methods applicable to analytes of different classes and physicochemical properties. In this work a multi-residue method was developed for the simultaneous determination of 28 xenobiotics (polar and hydrophilic) using hydrophilic interaction liquid chromatography (HILIC) coupled with triple quadrupole mass spectrometry (LC-MS/MS). The scope of the method includes plant growth regulators (chlormequat, daminozide, diquat, maleic hydrazide, mepiquat, paraquat), pesticides (cyromazine, the metabolite of the fungicide propineb PTU (propylenethiourea), amitrole), various multiclass antibiotics (tetracyclines, sulfonamides, quinolones, kasugamycin) and mycotoxins (aflatoxin B1, B2, fumonisin B1 and ochratoxin A). Isolation of the analytes from the matrix was achieved with a fast and effective technique. The validation of the multi-residue method was performed at two levels, 10 μg/kg and 100 μg/kg, in the following representative substrates: fruits-vegetables (apples, apricots, lettuce and onions), cereals and pulses (flour and chickpeas), animal products (milk and meat) and cereal-based baby foods. The method was validated taking into consideration EU guidelines and showed acceptable linearity (r ≥ 0.99), accuracy with recoveries between 70 and 120% and precision with RSD ≤ 20% for the majority of the analytes studied. For analytes with accuracy and precision values outside the acceptable limits, the method can still serve as a semi-quantitative method. The matrix effect and the limits of detection and quantification were also estimated and compared with the current EU MRLs (Maximum Residue Levels) and FAO/WHO MLs (Maximum Levels) or CXLs (Codex Maximum Residue Limits). The combined and expanded uncertainty of the method for each analyte per substrate was also estimated. Copyright © 2016 Elsevier B.V. All rights reserved.
Jáčová, Jaroslava; Gardlo, Alžběta; Friedecký, David; Adam, Tomáš; Dimandja, Jean-Marie D
2017-08-18
Orthogonality is a key parameter that is used to evaluate the separation power of chromatography-based two-dimensional systems. It is necessary to scale the separation data before the assessment of the orthogonality. Current scaling approaches are sample-dependent, and the extent of the retention space that is converted into a normalized retention space is set according to the retention times of the first and last analytes contained in a unique sample to elute. The presence or absence of a highly retained analyte in a sample can thus significantly influence the amount of information (in terms of the total amount of separation space) contained in the normalized retention space considered for the calculation of the orthogonality. We propose a Whole Separation Space Scaling (WOSEL) approach that accounts for the whole separation space delineated by the analytical method, and not the sample. This approach enables an orthogonality-based evaluation of the efficiency of the analytical system that is independent of the sample selected. The WOSEL method was compared to two currently used orthogonality approaches through the evaluation of in silico-generated chromatograms and real separations of human biofluids and petroleum samples. WOSEL exhibits sample-to-sample stability values of 3.8% on real samples, compared to 7.0% and 10.1% for the two other methods, respectively. Using real analyses, we also demonstrate that some previously developed approaches can provide misleading conclusions on the overall orthogonality of a two-dimensional chromatographic system. Copyright © 2017 Elsevier B.V. All rights reserved.
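The core idea can be illustrated by contrasting sample-dependent scaling with scaling over the whole method-defined separation window. A toy one-dimensional sketch (the paper works with two-dimensional retention data; the function names and retention values here are mine, not the authors'):

```python
def scale_by_sample(rts):
    """Sample-dependent normalization: the first and last eluting analytes
    of this particular sample define the [0, 1] range."""
    lo, hi = min(rts), max(rts)
    return [(t - lo) / (hi - lo) for t in rts]

def scale_by_method(rts, t_start, t_end):
    """WOSEL-style normalization: the separation window delineated by the
    analytical method (not the sample) defines the [0, 1] range."""
    return [(t - t_start) / (t_end - t_start) for t in rts]

# A late-eluting analyte changes sample-based scaling of every other peak,
# but leaves method-based scaling untouched.
sample_a = [2.0, 4.0, 6.0]
sample_b = sample_a + [10.0]          # same peaks plus one highly retained analyte
by_sample_a = scale_by_sample(sample_a)
by_sample_b = scale_by_sample(sample_b)
by_method_a = scale_by_method(sample_a, 0.0, 10.0)
by_method_b = scale_by_method(sample_b, 0.0, 10.0)
```

Because the method's own window anchors the normalization, the scaled positions of shared peaks are identical across the two samples, which is the sample-independence the abstract reports.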
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cong, Yongzheng; Rausch, Sarah J.; Geng, Tao
2014-10-27
Here we show that a closed pneumatic microvalve on a PDMS chip can serve as a semipermeable membrane under an applied potential, enabling current to pass through while blocking the passage of charged analytes. Enrichment of both anionic and cationic species has been demonstrated, and concentration factors of ~70 have been achieved in just 8 s. Once analytes are concentrated, the valve is briefly opened and the sample is hydrodynamically injected onto an integrated microchip or capillary electrophoresis (CE) column. In contrast to existing preconcentration approaches, the membrane-based method described here enables both rapid analyte concentration as well as high resolution separations.
Analytical Glycobiology at High Sensitivity: Current Approaches and Directions
Novotny, Milos V.; Alley, William R.; Mann, Benjamin F.
2013-01-01
This review summarizes the analytical advances made during the last several years in the structural and quantitative determinations of glycoproteins in complex biological mixtures. The main analytical techniques used in the fields of glycomics and glycoproteomics involve different modes of mass spectrometry and their combinations with capillary separation methods such as microcolumn liquid chromatography and capillary electrophoresis. The need for high-sensitivity measurements has been emphasized in the oligosaccharide profiling used in the field of biomarker discovery through MALDI mass spectrometry. High-sensitivity profiling of both glycans and glycopeptides from biological fluids and tissue extracts has been aided significantly through lectin preconcentration and the use of affinity chromatography. PMID:22945852
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, S.S.; Attari, A.
1995-01-01
The discovery of arsenic compounds, as alkylarsines, in natural gas prompted this research program to develop reliable measurement techniques needed to assess the efficiency of removal processes for these environmentally sensitive substances. These techniques include sampling, speciation, quantitation and on-line instrumental methods for monitoring the total arsenic concentration. The current program has yielded many products, including calibration standards, arsenic-specific sorbents, sensitive analytical methods and instrumentation. Four laboratory analytical methods have been developed and successfully employed for arsenic determination in natural gas. These methods use GC-AED and GC-MS instruments to speciate alkylarsines, and peroxydisulfate extraction with FIAS, a special carbon sorbent with XRF, and an IGT-developed sorbent with GFAA for total arsenic measurement.
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1992-01-01
Research conducted during the period from July 1991 through December 1992 is covered. A method based upon the quasi-analytical approach was developed for computing the aerodynamic sensitivity coefficients of three dimensional wings in transonic and subsonic flow. In addition, the method computes for comparison purposes the aerodynamic sensitivity coefficients using the finite difference approach. The accuracy and validity of the methods are currently under investigation.
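The finite-difference branch of the comparison can be sketched in a few lines. Here `lift_coefficient` is an invented smooth surrogate standing in for the flow solver, not the wing model of the report:

```python
def central_difference(f, x, h=1e-5):
    """Second-order central finite-difference approximation to df/dx."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Hypothetical surrogate: lift coefficient vs. one design variable (illustrative).
def lift_coefficient(alpha_deg):
    return 0.1 * alpha_deg + 0.002 * alpha_deg ** 2

dCL_dalpha = central_difference(lift_coefficient, 4.0)
# analytic derivative at alpha = 4 is 0.1 + 0.004 * 4.0 = 0.116
```

The quasi-analytical approach replaces this repeated-solve differencing with derivatives of the governing equations themselves, which is why the finite-difference result serves only as the comparison baseline.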
Pasin, Daniel; Cawley, Adam; Bidny, Sergei; Fu, Shanlin
2017-10-01
The proliferation of new psychoactive substances (NPS) in recent years has resulted in the development of numerous analytical methods for the detection and identification of known and unknown NPS derivatives. High-resolution mass spectrometry (HRMS) has been identified as the method of choice for broad screening of NPS in a wide range of analytical contexts because of its ability to measure accurate masses using data-independent acquisition (DIA) techniques. Additionally, it has shown promise for non-targeted screening strategies that have been developed in order to detect and identify novel analogues without the need for certified reference materials (CRMs) or comprehensive mass spectral libraries. This paper reviews the applications of HRMS for the analysis of NPS in forensic drug chemistry and analytical toxicology. It provides an overview of the sample preparation procedures in addition to data acquisition, instrumental analysis, and data processing techniques. Furthermore, it gives an overview of the current state of non-targeted screening strategies with discussion on future directions and perspectives of this technique. Graphical Abstract: Missing the bullseye - a graphical representation of non-targeted screening. Image courtesy of Christian Alonzo.
On the analytic and numeric optimisation of airplane trajectories under real atmospheric conditions
NASA Astrophysics Data System (ADS)
Gonzalo, J.; Domínguez, D.; López, D.
2014-12-01
From the beginning of the aviation era, economic constraints have forced operators to continuously improve the planning of flights, since revenue depends on the cost per flight and on airspace occupancy. Many methods, the first dating from the middle of the last century, have explored analytical, numerical and artificial intelligence resources to reach optimal flight planning. In parallel, advances in meteorology and communications allow almost real-time knowledge of atmospheric conditions and a reliable, error-bounded forecast for the near future. Thus, apart from weather risks to be avoided, airplanes can dynamically adapt their trajectories to minimise their costs. International regulators are aware of these capabilities, so it is reasonable to envisage changes that will soon allow this dynamic planning negotiation to become operational. Moreover, current unmanned airplanes, very popular and often small, suffer the impact of winds and other weather conditions in the form of dramatic changes in their performance. The present paper reviews analytic and numeric solutions for typical trajectory planning problems. Analytic methods are those that try to solve the problem using the Pontryagin principle, where influence parameters are added to the state variables to form a differential equation problem with split boundary conditions. The system can be solved numerically (indirect optimisation) or using parameterised functions (direct optimisation). Numerical methods, on the other hand, are based on Bellman's dynamic programming (or Dijkstra algorithms), which exploit the fact that two optimal trajectories can be concatenated to form a new optimal one if the joint point belongs to the final optimal solution. There are no a priori conditions selecting the best method: traditionally, analytic methods have been employed more for continuous problems and numeric methods for discrete ones.
In the current problem, airplane behaviour is defined by continuous equations, while wind fields are given in a discrete grid at certain time intervals. The research demonstrates advantages and disadvantages of each method as well as performance figures of the solutions found for typical flight conditions under static and dynamic atmospheres. This provides significant parameters to be used in the selection of solvers for optimal trajectories.
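The dynamic-programming family of planners described above rests on the concatenation property of optimal sub-trajectories, which is exactly what Dijkstra's algorithm exploits. A minimal sketch over an invented waypoint graph whose edge costs stand in for wind-dependent fuel or time penalties (all names and numbers are illustrative):

```python
import heapq

def dijkstra(edges, start, goal):
    """Least-cost path over a graph {node: [(neighbor, cost), ...]}.
    Optimal sub-paths concatenate into optimal paths, so expanding nodes
    in order of accumulated cost is safe."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in edges.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Invented waypoint graph: costs mimic a wind field making some legs cheaper.
route = {
    "DEP": [("A", 3.0), ("B", 1.0)],   # the DEP->B leg enjoys a tailwind
    "A":   [("ARR", 1.0)],
    "B":   [("A", 1.0), ("ARR", 3.5)],
}
best_cost = dijkstra(route, "DEP", "ARR")
```

In a real planner the graph would be the discrete wind grid at successive time steps, with edge costs evaluated from the airplane's continuous performance model.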
The Water-Energy-Food Nexus: Advancing Innovative, Policy-Relevant Methods
NASA Astrophysics Data System (ADS)
Crootof, A.; Albrecht, T.; Scott, C. A.
2017-12-01
The water-energy-food (WEF) nexus is rapidly expanding in scholarly literature and policy settings as a novel way to address complex Anthropocene challenges. The nexus approach aims to identify tradeoffs and synergies of water, energy, and food systems, internalize social and environmental impacts, and guide development of cross-sectoral policies. However, a primary limitation of the nexus approach is the absence - or gaps and inconsistent use - of adequate methods to advance an innovative and policy-relevant nexus approach. This paper presents an analytical framework to identify robust nexus methods that align with nexus thinking and highlights innovative nexus methods at the frontier. The current state of nexus methods was assessed with a systematic review of 245 journal articles and book chapters. This review revealed (a) use of specific and reproducible methods for nexus assessment is uncommon - less than one-third of the reviewed studies present explicit methods; (b) nexus methods frequently fall short of capturing interactions among water, energy, and food - the very concept they purport to address; (c) assessments strongly favor quantitative approaches - 70% use primarily quantitative tools; (d) use of social science methods is limited (26%); and (e) many nexus methods are confined to disciplinary silos - only about one-quarter combine methods from diverse disciplines and less than one-fifth utilize both quantitative and qualitative approaches. Despite some pitfalls of current nexus methods, there are a host of studies that offer innovative approaches to help quantify nexus linkages and interactions among sectors, conceptualize dynamic feedbacks, and support mixed method approaches to better understand WEF systems. Applying our analytical framework to all 245 studies, we identify, and analyze herein, seventeen studies that implement innovative multi-method and cross-scalar tools to demonstrate promising advances toward improved nexus assessment. 
This paper finds that, to make the WEF nexus effective as a policy-relevant analytical tool, methods are needed that incorporate social and political dimensions of water, energy, and food; utilize multiple and interdisciplinary approaches; and engage stakeholders and policy-makers.
THEORETICAL METHODS FOR COMPUTING ELECTRICAL CONDITIONS IN WIRE-PLATE ELECTROSTATIC PRECIPITATORS
The paper describes a new semi-empirical, approximate theory for predicting electrical conditions. In the approximate theory, analytical expressions are derived for calculating voltage-current characteristics and electric potential, electric field, and space charge density distributions.
Sun, Qinqin; Yan, Fei; Su, Bin
2018-05-15
3,3',5,5'-Tetramethylbenzidine (TMB) has been frequently used as an indicator in G-quadruplex/hemin DNAzyme (G4zyme)-based chemical and biochemical analysis, and its oxidation products are usually monitored by electrochemical or optical methods to quantify G4zyme formation-related analytes. Herein we report a simple electrochemical approach based on an isoporous silica-micelle membrane (iSMM) to measure TMB itself, instead of its oxidation products, in G4zyme-based detection of specific analytes. The iSMM was grown on an indium tin oxide (ITO) electrode and was composed of highly ordered, vertically oriented silica nanochannels and cylindrical micelles of cetyltrimethylammonium. The iSMM-ITO electrode was selectively responsive to neutral TMB but not its oxidation products, thanks to the sieving and pre-concentration capacity of the micellar structures in terms of molecular charge and lipophilicity. In other words, only TMB could be extracted and enriched into the micelles and subsequently oxidized at the underlying ITO electrode surface (namely the micelle/ITO interface), generating an amplified anodic current. Since the depletion of TMB was catalyzed by G4zymes formed in the presence of a specific analyte, the decrease of this anodic current enabled quantitative detection of that analyte. The current variation relative to its initial value, (j0 - j)/j0, termed the current attenuation ratio, showed a clear dependence on the analyte concentration. As proof-of-concept experiments, four substances, i.e., potassium cation (K+), adenosine triphosphate, thrombin and a nucleic acid, were detected in aqueous media, and the analysis of K+ in pre-treated human serum was also performed. Copyright © 2018 Elsevier B.V. All rights reserved.
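The current attenuation ratio defined in the abstract is straightforward to compute. A small sketch with invented current readings (the linear-looking trend is illustrative, not the paper's data):

```python
def attenuation_ratio(j0, j):
    """(j0 - j)/j0: fractional drop of the anodic TMB current from its initial value."""
    return (j0 - j) / j0

# Invented readings (nA) at three analyte levels, mapping concentration -> current.
j_initial = 120.0
currents = {0.0: 120.0, 0.5: 96.0, 1.0: 72.0}   # analyte concentration (mM) -> j
ratios = {c: attenuation_ratio(j_initial, j) for c, j in currents.items()}
```

A calibration curve of this ratio against standard concentrations would then let an unknown sample's ratio be converted into a concentration estimate.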
NASA Astrophysics Data System (ADS)
Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.
2017-02-01
CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model-based) crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration time curves (CTC). The second is a fast, accurate deconvolution method which we call Analytical Fourier Filtering (AFF). The third is another fast, accurate deconvolution technique using Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
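A toy version of frequency-domain deconvolution can clarify the idea: the tissue curve is (approximately) the convolution of the arterial input with the IRF, so a regularized division in the Fourier domain recovers the IRF. This pure-Python sketch uses a simple Wiener-style eps regularizer, which stands in for, and is not, the paper's AFF/ASSF filters; all data are synthetic:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for a toy-sized signal)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * f * k / n) for k in range(n))
            for f in range(n)]

def idft(X):
    """Inverse DFT, returning the real parts."""
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * k / n) for f in range(n)).real / n
            for k in range(n)]

def fourier_deconvolve(tissue, aif, eps=1e-6):
    """Regularized frequency-domain division:
    R(f) = T(f) * conj(A(f)) / (|A(f)|^2 + eps)."""
    T, A = dft(tissue), dft(aif)
    R = [t * a.conjugate() / (abs(a) ** 2 + eps) for t, a in zip(T, A)]
    return idft(R)

# Synthetic example: build the tissue curve by circular convolution, then recover.
irf_true = [1.0, 0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0]   # toy residue function
aif = [0.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0]          # toy arterial input
n = len(irf_true)
tissue = [sum(aif[m] * irf_true[(k - m) % n] for m in range(n)) for k in range(n)]
irf_est = fourier_deconvolve(tissue, aif)
```

On noisy clinical data the choice of the spectral filter (oscillation limits, Showalter iteration, etc.) is what distinguishes the methods the paper compares; the division itself is the cheap part.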
S-curve networks and an approximate method for estimating degree distributions of complex networks
NASA Astrophysics Data System (ADS)
Guo, Jin-Li
2010-12-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve), and the growing trend of IPv4 addresses in China is forecast. The results provide reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the observed laws of IPv4 growth, namely bulk growth and a finite growth limit, the paper proposes a finite network model with bulk growth, called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. The paper develops an approximate method to predict the growth dynamics of the individual nodes, and uses this to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulation, obeying an approximately power-law form. This method can overcome a shortcoming of the Barabási-Albert method commonly used in current network research.
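The S-curve (logistic) fit underlying such a forecast can be sketched as follows. When the saturation level K is fixed, the model linearizes exactly, so ordinary least squares recovers the remaining parameters; the synthetic "address count" data below are illustrative, not the paper's statistics:

```python
import math

def logistic(t, K, r, t0):
    """S curve saturating at carrying capacity K."""
    return K / (1.0 + math.exp(-r * (t - t0)))

def fit_logistic_known_K(ts, ys, K):
    """With K fixed, z = log((K - y)/y) = -r*(t - t0) is linear in t,
    so ordinary least squares recovers r and t0."""
    zs = [math.log((K - y) / y) for y in ys]
    n = len(ts)
    mt, mz = sum(ts) / n, sum(zs) / n
    slope = (sum((t - mt) * (z - mz) for t, z in zip(ts, zs))
             / sum((t - mt) ** 2 for t in ts))
    r = -slope
    t0 = (mz + r * mt) / r   # intercept of the line is r * t0
    return r, t0

# Noiseless synthetic growth data from an assumed S curve (illustrative only).
ts = [4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0]
ys = [logistic(t, 100.0, 0.5, 10.0) for t in ts]
r_fit, t0_fit = fit_logistic_known_K(ts, ys, 100.0)
```

In practice K itself must also be estimated (e.g. by a grid search over K with this linearized fit inside), since the finite growth limit is precisely the quantity of interest for the forecast.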
Benefits and Limitations of DNA Barcoding and Metabarcoding in Herbal Product Authentication.
Raclariu, Ancuta Cristina; Heinrich, Michael; Ichim, Mihael Cristin; de Boer, Hugo
2018-03-01
Herbal medicines play an important role globally in the health care sector and in industrialised countries they are often considered as an alternative to mono-substance medicines. Current quality and authentication assessment methods rely mainly on morphology and analytical phytochemistry-based methods detailed in pharmacopoeias. Herbal products however are often highly processed with numerous ingredients, and even if these analytical methods are accurate for quality control of specific lead or marker compounds, they are of limited suitability for the authentication of biological ingredients. To review the benefits and limitations of DNA barcoding and metabarcoding in complementing current herbal product authentication. Recent literature relating to DNA-based authentication of medicinal plants, herbal medicines and products is summarised to provide a basic understanding of how DNA barcoding and metabarcoding can be applied to this field. Different methods of quality control and authentication have varying resolution and usefulness along the value chain of these products. DNA barcoding can be used for authenticating products based on single herbal ingredients and DNA metabarcoding for assessment of species diversity in processed products, and both methods should be used in combination with appropriate hyphenated chemical methods for quality control. DNA barcoding and metabarcoding have potential in the context of quality control of both well and poorly regulated supply systems. Standardisation of protocols for DNA barcoding and DNA sequence-based identification is necessary before DNA-based biological methods can be implemented as routine analytical approaches and approved by the competent authorities for use in regulated procedures. © 2017 The Authors. Phytochemical Analysis Published by John Wiley & Sons Ltd.
The NIST Quantitative Infrared Database
Chu, P. M.; Guenther, F. R.; Rhoderick, G. C.; Lafferty, W. J.
1999-01-01
With the recent developments in Fourier transform infrared (FTIR) spectrometers it is becoming more feasible to place these instruments in field environments. As a result, there has been an enormous increase in the use of FTIR techniques for a variety of qualitative and quantitative chemical measurements. These methods offer the possibility of fully automated real-time quantitation of many analytes; therefore FTIR has great potential as an analytical tool. Recently, the U.S. Environmental Protection Agency (U.S. EPA) has developed protocol methods for emissions monitoring using both extractive and open-path FTIR measurements. Depending upon the analyte, the experimental conditions and the analyte matrix, approximately 100 of the hazardous air pollutants (HAPs) listed in the 1990 U.S. EPA Clean Air Act amendment (CAAA) can be measured. The National Institute of Standards and Technology (NIST) has initiated a program to provide quality-assured infrared absorption coefficient data based on NIST-prepared primary gas standards. Currently, absorption coefficient data have been acquired for approximately 20 of the HAPs. For each compound, the absorption coefficient spectrum was calculated using nine transmittance spectra at 0.12 cm−1 resolution and the Beer's law relationship. The uncertainties in the absorption coefficient data were estimated from the linear regressions of the transmittance data and consideration of other error sources such as the nonlinear detector response. For absorption coefficient values greater than 1 × 10−4 (μmol/mol)−1 m−1 the average relative expanded uncertainty is 2.2%. This quantitative infrared database is currently an ongoing project at NIST. Additional spectra will be added to the database as they are acquired. Current plans include continued data acquisition for the compounds listed in the CAAA, as well as compounds that contribute to global warming and ozone depletion.
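The absorption coefficient computation described above follows Beer's law: absorbance -log10(T) is proportional to concentration times path length. A simplified sketch with invented standards (NIST's actual procedure regresses nine spectra and propagates uncertainties; this zero-intercept fit only shows the core relationship):

```python
import math

def absorption_coefficient(concs, transmittances, path_m):
    """Beer's law: -log10(T) = alpha * c * L. A zero-intercept least-squares
    slope of absorbance vs. concentration, divided by L, estimates alpha
    in (concentration unit)^-1 m^-1."""
    absorbances = [-math.log10(t) for t in transmittances]
    slope = sum(c * a for c, a in zip(concs, absorbances)) / sum(c * c for c in concs)
    return slope / path_m

# Invented standards: concentrations in umol/mol, 10 m path, true alpha = 2e-4.
concs = [50.0, 100.0, 200.0]
trans = [10.0 ** (-2e-4 * c * 10.0) for c in concs]
alpha = absorption_coefficient(concs, trans, 10.0)
```

With real spectra the same regression is done wavenumber by wavenumber, and the residuals of the fit feed the uncertainty estimates the abstract mentions.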
Calculation method of spin accumulations and spin signals in nanostructures using spin resistors
NASA Astrophysics Data System (ADS)
Torres, Williams Savero; Marty, Alain; Laczkowski, Piotr; Jamet, Matthieu; Vila, Laurent; Attané, Jean-Philippe
2018-02-01
Determination of spin accumulations and spin currents is essential for a deep understanding of spin transport in nanostructures and for further optimization of spintronic devices. So far, they are easily obtained using different approaches in nanostructures composed of a few elements; however, their calculation becomes complicated as the number of elements increases. Here, we propose a 1-D spin resistor approach to analytically calculate spin accumulations, spin currents and magneto-resistances in heterostructures. Our method, particularly applied to multi-terminal metallic nanostructures, provides a fast and systematic means to determine such spin properties in structures where conventional methods remain complex.
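In the 1-D picture, each diffusive element can be assigned a characteristic spin resistance r = rho * l_sf / A, and the abstract's analogy suggests combining elements like ordinary resistors. A sketch under that assumption; the combination rules and the copper-like numbers are illustrative, not taken from the paper:

```python
def spin_resistance(resistivity, spin_diffusion_length, cross_section):
    """Characteristic spin resistance of a diffusive channel: r = rho * l_sf / A
    (SI units: ohm*m, m, m^2 -> ohm)."""
    return resistivity * spin_diffusion_length / cross_section

def series(*rs):
    """Series combination, as for ordinary resistors."""
    return sum(rs)

def parallel(*rs):
    """Parallel combination, as for ordinary resistors."""
    return 1.0 / sum(1.0 / r for r in rs)

# Illustrative copper-like channel: rho = 1.7e-8 ohm*m, l_sf = 350 nm,
# cross-section 100 nm x 50 nm (values assumed for the example).
r_cu = spin_resistance(1.7e-8, 350e-9, 100e-9 * 50e-9)
```

In a multi-terminal device, each arm contributes such a resistance and the network is solved node by node, which is what makes the approach systematic as the element count grows.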
NASA Technical Reports Server (NTRS)
Israelsson, Ulf E. (Inventor); Strayer, Donald M. (Inventor)
1992-01-01
A contactless method for determining transport critical current density and flux penetration depth in bulk superconductor material. A compressor having a hollow interior and a plunger for selectively reducing the free space available for distribution of the magnetic flux therein are formed of superconductor material. Analytical expressions, based upon the critical state model, Maxwell's equations and geometrical relationships, define transport critical current density and flux penetration depth in terms of the initial trapped magnetic flux density and the ratio between initial and final magnetic flux densities, whereby these quantities may be reliably determined by means of the simple test apparatus.
Mattarozzi, Monica; Suman, Michele; Cascio, Claudia; Calestani, Davide; Weigel, Stefan; Undas, Anna; Peters, Ruud
2017-01-01
Estimating consumer exposure to nanomaterials (NMs) in food products and predicting their toxicological properties are necessary steps in the assessment of the risks of this technology. To this end, analytical methods have to be available to detect, characterize and quantify NMs in food and materials related to food, e.g. food packaging and biological samples following metabolization of food. The challenge for the analytical sciences is that the characterization of NMs requires chemical as well as physical information. This article offers a comprehensive analysis of methods available for the detection and characterization of NMs in food and related products. Special attention was paid to the crucial role of sample preparation methods, since these have been partially neglected in the scientific literature so far. The currently available instrumental methods are grouped as fractionation, counting and ensemble methods, and their advantages and limitations are discussed. We conclude that much progress has been made over the last 5 years but that many challenges still exist. Future perspectives and priority research needs are pointed out. Graphical Abstract: Two possible analytical strategies for the sizing and quantification of nanoparticles: Asymmetric Flow Field-Flow Fractionation with multiple detectors (allows determination of the true size and a mass-based particle size distribution); Single Particle Inductively Coupled Plasma Mass Spectrometry (allows determination of a spherical-equivalent diameter of the particle and a number-based particle size distribution).
López-Serna, Rebeca; Marín-de-Jesús, David; Irusta-Mata, Rubén; García-Encina, Pedro Antonio; Lebrero, Raquel; Fdez-Polanco, María; Muñoz, Raúl
2018-08-15
The work presented here aimed at developing an analytical method for the simultaneous determination of 22 pharmaceuticals and personal care products, including 3 transformation products, in sewage and sludge. A meticulous method optimization, involving an experimental design, was carried out. The developed method was fully automated and consisted of the online extraction of 17 mL of water sample by Direct Immersion Solid Phase MicroExtraction followed by On-fiber Derivatization coupled to Gas Chromatography - Mass Spectrometry (DI-SPME - On-fiber Derivatization - GC - MS). This methodology was validated for 12 of the initial compounds as a reliable (relative recoveries above 90% for sewage and 70% for sludge; repeatability as %RSD below 10% in all cases), sensitive (LODs below 20 ng L−1 in sewage and 10 ng g−1 in sludge), versatile (sewage and sewage-sludge samples up to 15,000 ng L−1 and 900 ng g−1, respectively) and green analytical alternative for many medium-tech routine laboratories around the world to keep up with both current and forecast environmental regulatory requirements. The remaining 10 analytes initially considered showed insufficient suitability to be included in the final method. The methodology was successfully applied to real samples generated in a pilot-scale sewage treatment reactor. Copyright © 2018 Elsevier B.V. All rights reserved.
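The validation figures quoted above (relative recovery and %RSD) are simple statistics. A sketch of how they might be computed; the acceptance thresholds shown are the commonly cited EU-guideline 70-120% recovery / 20% RSD window, while this particular study reports stricter performance:

```python
def relative_recovery(measured, spiked):
    """Measured concentration as a percentage of the spiked (known) amount."""
    return 100.0 * measured / spiked

def rsd_percent(values):
    """Relative standard deviation: sample SD over mean, in percent."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return 100.0 * sd / mean

def acceptable(recoveries, rsd, rec_lo=70.0, rec_hi=120.0, rsd_max=20.0):
    """Guideline-style acceptance check (thresholds are the commonly cited
    70-120% recovery and RSD <= 20%; adjust to the applicable guideline)."""
    return all(rec_lo <= r <= rec_hi for r in recoveries) and rsd <= rsd_max
```

For example, replicate recoveries of 92% and 95% with an RSD of 5% pass this window, while a 60% recovery would fail it.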
NASA Astrophysics Data System (ADS)
Safouhi, Hassan; Hoggan, Philip
2003-01-01
This review on molecular integrals for large electronic systems (MILES) places the problem of analytical integration over exponential-type orbitals (ETOs) in a historical context. After reference to the pioneering work, particularly by Barnett, Shavitt and Yoshimine, it focuses on recent progress towards rapid and accurate analytic solutions of MILES over ETOs. Software such as the hydrogen-like wavefunction package Alchemy by Yoshimine and collaborators is described. The review focuses on convergence acceleration of these highly oscillatory integrals and in particular highlights suitable nonlinear transformations. Work by Levin and Sidi is described and applied to MILES. A step-by-step description of progress in the use of nonlinear transformation methods to obtain efficient codes is provided. The recent approach developed by Safouhi is also presented. The current state of the art in this field is summarized to show that ab initio analytical work over ETOs is now a viable option.
Current Protocols in Pharmacology
2016-01-01
Determination of drug or drug metabolite concentrations in biological samples, particularly in serum or plasma, is fundamental to describing the relationships between administered dose, route of administration, and time after dose to the drug concentrations achieved and to the observed effects of the drug. A well-characterized, accurate analytical method is needed, but it must also be established that the analyte concentration in the sample at the time of analysis is the same as the concentration at sample acquisition. Drugs and metabolites may be susceptible to degradation in samples due to metabolism or to physical and chemical processes, resulting in a lower measured concentration than was in the original sample. Careful examination of analyte stability during processing and storage and adjustment of procedures and conditions to maximize that stability are a critical part of method validation for the analysis, and can ensure the accuracy of the measured concentrations. PMID:27960029
Fourier analysis of polar cap electric field and current distributions
NASA Technical Reports Server (NTRS)
Barbosa, D. D.
1984-01-01
A theoretical study of high-latitude electric fields and currents, using analytic Fourier analysis methods, is conducted. A two-dimensional planar model of the ionosphere with an enhanced conductivity auroral belt and field-aligned currents at the edges is employed. Two separate topics are treated. A field-aligned current element near the cusp region of the polar cap is included to investigate the modifications to the convection pattern by the east-west component of the interplanetary magnetic field. It is shown that a sizable one-cell structure is induced near the cusp which diverts equipotential contours to the dawnside or duskside, depending on the sign of the cusp current. This produces characteristic dawn-dusk asymmetries to the electric field that have been previously observed over the polar cap. The second topic is concerned with the electric field configuration obtained in the limit of perfect shielding, where the field is totally excluded equatorward of the auroral oval. When realistic field-aligned current distributions are used, the result is to produce severely distorted, crescent-shaped equipotential contours over the cap. Exact, analytic formulae applicable to this case are also provided.
Application of the superposition principle to solar-cell analysis
NASA Technical Reports Server (NTRS)
Lindholm, F. A.; Fossum, J. G.; Burgess, E. L.
1979-01-01
The superposition principle of differential-equation theory - which applies if and only if the relevant boundary-value problems are linear - is used to derive the widely used shifting approximation that the current-voltage characteristic of an illuminated solar cell is the dark current-voltage characteristic shifted by the short-circuit photocurrent. Analytical methods are presented to treat cases where shifting is not strictly valid. Well-defined conditions necessary for superposition to apply are established. For high injection in the base region, the method of analysis accurately yields the dependence of the open-circuit voltage on the short-circuit current (or the illumination level).
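The shifting approximation described above can be sketched numerically: the illuminated I-V curve is the dark diode characteristic shifted by the short-circuit photocurrent. All device parameter values below are assumptions for illustration, not taken from the paper:

```python
import math

q = 1.602e-19   # elementary charge, C
k = 1.381e-23   # Boltzmann constant, J/K
T = 300.0       # cell temperature, K
n = 1.0         # diode ideality factor (assumed)
I0 = 1e-12      # dark saturation current, A (assumed)
IL = 0.035      # short-circuit photocurrent, A (assumed)

def dark_current(v):
    """Dark diode current at terminal voltage v (volts)."""
    return I0 * (math.exp(q * v / (n * k * T)) - 1.0)

def illuminated_current(v):
    """Shifting approximation: I(V) = I_dark(V) - I_L."""
    return dark_current(v) - IL

# Open-circuit voltage follows from setting I(V) = 0:
Voc = (n * k * T / q) * math.log(IL / I0 + 1.0)   # ≈ 0.63 V
```

The closed form for Voc shows the logarithmic dependence of open-circuit voltage on short-circuit current that the paper analyzes more carefully for the high-injection case, where superposition is not strictly valid.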
Computational dosimetry for grounded and ungrounded human models due to contact current
NASA Astrophysics Data System (ADS)
Chan, Kwok Hung; Hattori, Junya; Laakso, Ilkka; Hirata, Akimasa; Taki, Masao
2013-08-01
This study presents the computational dosimetry of contact currents for grounded and ungrounded human models. The uncertainty of the quasi-static (QS) approximation of the in situ electric field induced in a grounded/ungrounded human body due to the contact current is first estimated. Different scenarios of cylindrical and anatomical human body models are considered, and the results are compared with the full-wave analysis. In the QS analysis, the induced field in the grounded cylindrical model is calculated by the QS finite-difference time-domain (QS-FDTD) method, and compared with the analytical solution. Because no analytical solution is available for the grounded/ungrounded anatomical human body model, the results of the QS-FDTD method are then compared with those of the conventional FDTD method. The upper frequency limit for the QS approximation in the contact current dosimetry is found to be 3 MHz, with a relative local error of less than 10%. The error increases above this frequency, which can be attributed to the neglect of the displacement current. The QS or conventional FDTD method is used for the dosimetry of induced electric field and/or specific absorption rate (SAR) for a contact current injected into the index finger of a human body model in the frequency range from 10 Hz to 100 MHz. The in situ electric fields or SAR are compared with the basic restrictions in the international guidelines/standards. The maximum electric field or the 99th percentile value of the electric fields appear not only in the fat and muscle tissues of the finger, but also around the wrist, forearm, and the upper arm. Some discrepancies are observed between the basic restrictions for the electric field and SAR and the reference levels for the contact current, especially in the extremities. These discrepancies are shown by an equation that relates the current density, tissue conductivity, and induced electric field in the finger with a cross-sectional area of 1 cm2.
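The closing relation between injected current, tissue conductivity, and induced electric field can be sketched as a uniform-current-density estimate; the conductivity and current values below are assumptions, not the paper's dosimetry results:

```python
def induced_field(current_a, area_m2, conductivity_s_per_m):
    """In situ electric field (V/m) for a uniform current density J = I/A."""
    current_density = current_a / area_m2        # A/m^2
    return current_density / conductivity_s_per_m

# Example: 10 mA injected through a 1 cm^2 finger cross-section with a
# muscle-like conductivity of 0.35 S/m (assumed value):
E = induced_field(10e-3, 1e-4, 0.35)   # ≈ 285.7 V/m
```

This simple E = J/σ relation is what links the guideline reference levels for contact current to the basic restrictions on in situ electric field discussed in the abstract.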
Evolution of microbiological analytical methods for dairy industry needs
Sohier, Danièle; Pavan, Sonia; Riou, Armelle; Combrisson, Jérôme; Postollec, Florence
2014-01-01
Traditionally, culture-based methods have been used to enumerate microbial populations in dairy products. Recent developments in molecular methods now enable faster and more sensitive analyses than classical microbiology procedures. These molecular tools allow a detailed characterization of cell physiological states and bacterial fitness and thus, offer new perspectives to integration of microbial physiology monitoring to improve industrial processes. This review summarizes the methods described to enumerate and characterize physiological states of technological microbiota in dairy products, and discusses the current deficiencies in relation to the industry’s needs. Recent studies show that Polymerase chain reaction-based methods can successfully be applied to quantify fermenting microbes and probiotics in dairy products. Flow cytometry and omics technologies also show interesting analytical potentialities. However, they still suffer from a lack of validation and standardization for quality control analyses, as reflected by the absence of performance studies and official international standards. PMID:24570675
On the superposition principle in interference experiments.
Sinha, Aninda; H Vijay, Aravind; Sinha, Urbasi
2015-05-14
The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the application of the principle. We have found excellent agreement between the analytic distribution and those that have been earlier estimated by numerical integration as well as resource intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations as well as the non-relativistic Schrödinger equation.
An Overview of the Analysis of Trace Organics in Water.
ERIC Educational Resources Information Center
Trussell, Albert R.; Umphres, Mark D.
1978-01-01
Summarized are current analytical techniques used to classify, isolate, resolve, identify, and quantify organic compounds present in drinking water. A variety of methods are described, then drawbacks and advantages are listed, and research needs and future trends are noted. (CS)
HANDBOOK: CONTINUOUS EMISSION MONITORING SYSTEMS FOR NON-CRITERIA POLLUTANTS
This Handbook provides a description of the methods used to continuously monitor non-criteria pollutants emitted from stationary sources. The Handbook contains a review of current regulatory programs, the state-of-the-art sampling system design, analytical techniques, and the use...
A REVIEW OF APPLICATIONS OF LUMINESCENCE TO MONITORING OF CHEMICAL CONTAMINANTS IN THE ENVIRONMENT
The recent analytical literature on the application of luminescence techniques to the measurement of various classes of environmentally significant chemicals has been reviewed. Luminescent spectroscopy based methods are compared to other current techniques. Also, examples of rece...
da Rocha, Leticia; Sloane, Elliot; M Bassani, Jose
2005-01-01
This study describes a framework to support the choice of the maintenance service (in-house or third party contract) for each category of medical equipment based on: a) the real medical equipment maintenance management system currently used by the biomedical engineering group of the public health system of the Universidade Estadual de Campinas located in Brazil to control the medical equipment maintenance service, b) the Activity Based Costing (ABC) method, and c) the Analytic Hierarchy Process (AHP) method. Results show the cost and performance related to each type of maintenance service. Decision-makers can use these results to evaluate possible strategies for the categories of equipment.
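The AHP step of such a framework derives priority weights from a pairwise comparison matrix; a minimal power-iteration sketch follows, with a hypothetical 2x2 comparison (the paper's actual criteria and judgments are not reproduced here):

```python
def ahp_weights(matrix, iterations=100):
    """Priority weights: normalized principal eigenvector via power iteration."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Example judgment: in-house maintenance is considered 3x preferable to a
# third-party contract on a single criterion (hypothetical value):
weights = ahp_weights([[1.0, 3.0],
                       [1.0 / 3.0, 1.0]])   # ≈ [0.75, 0.25]
```

In a full AHP application, such weights are computed per criterion and combined across the hierarchy; ABC costing then supplies the cost side of the comparison.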
Mass Spectrometry for Paper-Based Immunoassays: Toward On-Demand Diagnosis.
Chen, Suming; Wan, Qiongqiong; Badu-Tawiah, Abraham K
2016-05-25
Current analytical methods, either point-of-care or centralized detection, are not able to meet recent demands of patient-friendly testing and increased reliability of results. Here, we describe a two-point separation on-demand diagnostic strategy based on a paper-based mass spectrometry immunoassay platform that adopts stable and cleavable ionic probes as mass reporter; these probes make possible sensitive, interruptible, storable, and restorable on-demand detection. In addition, a new touch paper spray method was developed for on-chip, sensitive, and cost-effective analyte detection. This concept is successfully demonstrated via (i) the detection of Plasmodium falciparum histidine-rich protein 2 antigen and (ii) multiplexed and simultaneous detection of cancer antigen 125 and carcinoembryonic antigen.
Shaaban, Heba; Górecki, Tadeusz
2015-01-01
Green analytical chemistry is an aspect of green chemistry that was introduced in the late 1990s. Its main objectives are to develop new analytical technologies or to modify existing methods so that they use less hazardous chemicals. There are several approaches to achieving this goal, such as using environmentally benign solvents and reagents, reducing chromatographic separation times, and miniaturizing analytical devices. Traditional methods used for the analysis of pharmaceutically active compounds require large volumes of organic solvents and generate large amounts of waste. Most of these solvents are volatile and harmful to the environment. With growing environmental awareness, the development of green technologies has been receiving increasing attention, aiming at eliminating or reducing the amount of organic solvents consumed every day worldwide without loss of chromatographic performance. This review provides the state of the art of green analytical methodologies for the environmental analysis of pharmaceutically active compounds in the aquatic environment, with special emphasis on strategies for greening liquid chromatography (LC). The current trends of fast LC applied to environmental analysis, including elevated mobile phase temperature, as well as different column technologies such as monolithic columns, fully porous sub-2 μm and superficially porous particles, are presented. In addition, green aspects of gas chromatography (GC) and supercritical fluid chromatography (SFC) are discussed. We pay special attention to new green approaches such as automation, miniaturization, direct analysis, and the possibility of locating the chromatograph on-line or at-line as a step forward in reducing the environmental impact of chromatographic analyses. Copyright © 2014 Elsevier B.V. All rights reserved.
Brasca, Milena; Morandi, Stefano; Silvetti, Tiziana; Rosi, Veronica; Cattaneo, Stefano; Pellegrino, Luisa
2013-05-21
Hen egg-white lysozyme (LSZ) is currently used in the food industry to limit the proliferation of spoilage lactic acid bacteria in the production of wine and beer, and to inhibit butyric acid fermentation in hard and extra hard cheeses (late blowing) caused by the outgrowth of clostridial spores. The aim of this work was to evaluate how the enzyme activity in commercial preparations correlates to the enzyme concentration and can be affected by the presence of process-related impurities. Different analytical approaches, including turbidimetric assay, SDS-PAGE, and HPLC, were used to analyse 17 commercial preparations of LSZ marketed in different countries. The HPLC method adopted by ISO allowed the true LSZ concentration to be determined with accuracy. The turbidimetric assay was the most suitable method to evaluate LSZ activity, whereas SDS-PAGE allowed the presence of other egg proteins, which are potential allergens, to be detected. The analytical results showed that the purity of commercially available enzyme preparations can vary significantly, and evidenced the effectiveness of combining different analytical approaches in this type of control.
Llorente-Mirandes, Toni; Rubio, Roser; López-Sánchez, José Fermín
2017-01-01
Here we review recent developments in analytical proposals for the assessment of inorganic arsenic (iAs) content in food products. Interest in the determination of iAs in products for human consumption such as food commodities, wine, and seaweed among others is fueled by the wide recognition of its toxic effects on humans, even at low concentrations. Currently, the need for robust and reliable analytical methods is recognized by various international safety and health agencies, and by organizations in charge of establishing acceptable tolerance levels of iAs in food. This review summarizes the state of the art of analytical methods while highlighting tools for quality assessment of the results, such as the production and evaluation of certified reference materials (CRMs) and the availability of specific proficiency testing (PT) programmes. Because the number of studies dedicated to the subject of this review has increased considerably over recent years, the sources consulted and cited here are limited to those from 2010 to the end of 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, F.; Nehl, T.W.
1998-09-01
Because of its high efficiency and power density, the PM brushless dc motor is a strong candidate for electric and hybrid vehicle propulsion systems. An analytical approach is developed to predict the eddy-current losses caused by inverter high-frequency pulse width modulation (PWM) switching in a permanent magnet brushless dc motor. The model uses polar coordinates to take curvature effects into account, and is also capable of including the space harmonic effect of the stator magnetic field and the stator lamination effect on the losses. The model was applied to an existing motor design and was verified with the finite element method. Good agreement was achieved between the two approaches. Hence, the model is expected to be very helpful in predicting PWM switching losses in permanent magnet machine design.
Annual banned-substance review: analytical approaches in human sports drug testing.
Thevis, Mario; Kuuranne, Tiia; Geyer, Hans; Schänzer, Wilhelm
2010-04-01
The annual update of the list of prohibited substances and doping methods as issued by the World Anti-Doping Agency (WADA) allows the implementation of most recent considerations of performance manipulation and emerging therapeutics into human sports doping control programmes. The annual banned-substance review for human doping controls critically summarizes recent innovations in analytical approaches that support the efforts of convicting cheating athletes by improved or newly established methods that focus on known as well as newly outlawed substances and doping methods. In the current review, literature published between October 2008 and September 2009 reporting on new and/or enhanced procedures and techniques for doping analysis, as well as aspects relevant to the doping control arena, was considered to complement the 2009 annual banned-substance review.
The Shock and Vibration Bulletin. Part 2. Invited Papers, Structural Dynamics
1974-08-01
Invited papers include Viking Lander Dynamics (Joseph C. Pohlen, Martin Marietta Aerospace, Denver, Colorado) and Performance of Statistical Energy Analysis, under the heading Structural Dynamics, addressing aerospace structures. Analytical prediction of these environments is beyond the current scope of classical modal techniques. Statistical energy analysis methods have been developed that circumvent the difficulties of high-frequency modal analysis; these methods are evaluated.
El-Yazbi, Amira F
2017-07-01
Sofosbuvir (SOFO) was approved by the U.S. Food and Drug Administration in 2013 for the treatment of hepatitis C virus infection with enhanced antiviral potency compared with earlier analogs. Nevertheless, current editions of the pharmacopeias still do not include any analytical method for the quantification of SOFO. Thus, rapid, simple, and ecofriendly methods for the routine analysis of commercial formulations of SOFO are desirable. In this study, five accurate methods for the determination of SOFO in pharmaceutical tablets were developed and validated. These methods include HPLC, capillary zone electrophoresis, HPTLC, and UV spectrophotometric and derivative spectrometry methods. The proposed methods proved to be rapid, simple, sensitive, selective, and accurate analytical procedures that were suitable for the reliable determination of SOFO in pharmaceutical tablets. An analysis of variance test with P-value > 0.05 confirmed that there were no significant differences between the proposed assays. Thus, any of these methods can be used for the routine analysis of SOFO in commercial tablets.
NASA Astrophysics Data System (ADS)
Lovrić, Milivoj
Electrochemical stripping means the oxidative or reductive removal of atoms, ions, or compounds from an electrode surface (or from the electrode body, as in the case of liquid mercury electrodes with dissolved metals) [1-5]. In general, these atoms, ions, or compounds have been preliminarily immobilized on the surface of an inert electrode (or within it) as the result of a preconcentration step, while the products of the electrochemical stripping will dissolve in the electrolytic solution. Often the product of the electrochemical stripping is identical to the analyte before the preconcentration. However, there are exceptions to these rules. Electroanalytical stripping methods comprise two steps: first, the accumulation of a dissolved analyte onto, or in, the working electrode, and, second, the subsequent stripping of the accumulated substance by a voltammetric [3, 5], potentiometric [6, 7], or coulometric [8] technique. In stripping voltammetry, the condition is that there are two independent linear relationships: the first between the activity of the accumulated substance and the concentration of the analyte in the sample, and the second between the maximum stripping current and the activity of the accumulated substance. Hence, a cumulative linear relationship between the maximum response and the analyte concentration exists. However, the electrode capacity for analyte accumulation is limited, and the condition of linearity is satisfied only well below electrode saturation. For this reason, stripping voltammetry is used mainly in trace analysis. The limit of detection depends on the factor of proportionality between the activity of the accumulated substance and the bulk concentration of the analyte. This factor is a constant in the case of chemical accumulation, but for electrochemical accumulation it depends on the electrode potential. The factor of proportionality between the maximum stripping current and the analyte concentration is rarely known exactly.
In fact, it is frequently ignored. For the analysis it suffices to establish the linear relationship empirically. The slope of this relationship may vary from one sample to another because of different influences of the matrix. In this case the concentration of the analyte is determined by the method of standard additions [1]. After measuring the response of the sample, the concentration of the analyte is deliberately increased by adding a certain volume of its standard solution. The response is measured again, and this procedure is repeated three or four times. The unknown concentration is determined by extrapolation of the regression line to the concentration axis [9]. However, in many analytical methods, the final measurement is performed in a standard matrix that allows the construction of a calibration plot. Still, the slope of this plot depends on the active area of the working electrode surface. Each solid electrode needs a separate calibration plot, and that plot must be checked from time to time because of possible deterioration of the electrode surface [2].
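The method of standard additions described above reduces to a linear regression of response against added standard, with the unknown concentration read off as the magnitude of the x-intercept. A minimal sketch follows; the data values are hypothetical:

```python
def standard_additions(added, response):
    """Unknown concentration from a standard-additions regression.

    Fits response = slope*added + intercept by least squares and returns
    intercept/slope, i.e. the magnitude of the x-intercept extrapolation.
    """
    n = len(added)
    mx = sum(added) / n
    my = sum(response) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(added, response))
             / sum((x - mx) ** 2 for x in added))
    intercept = my - slope * mx
    return intercept / slope

added = [0.0, 1.0, 2.0, 3.0]       # standard added (e.g. µg/L), hypothetical
response = [2.0, 4.0, 6.0, 8.0]    # measured peak stripping current, hypothetical
c_unknown = standard_additions(added, response)   # → 1.0
```

Because the slope is re-established within each sample's own matrix, this procedure compensates for the matrix effects on sensitivity that the text describes.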
Multi-Intelligence Analytics for Next Generation Analysts (MIAGA)
NASA Astrophysics Data System (ADS)
Blasch, Erik; Waltz, Ed
2016-05-01
Current analysts are inundated with large volumes of data from which extraction, exploitation, and indexing are required. A future need for next-generation analysts is an appropriate balance between machine analytics from raw data and the ability of the user to interact with information through automation. Many quantitative intelligence tools and techniques have been developed, which are examined here for matching analyst opportunities with recent technical trends such as big data, access to information, and visualization. The concepts and techniques summarized are derived from discussions with real analysts, documented trends of technical developments, and methods to engage future analysts with multi-intelligence services. For example, qualitative techniques should be matched against physical, cognitive, and contextual quantitative analytics for intelligence reporting. Future trends include enabling knowledge search, collaborative situational sharing, and agile support for empirical decision-making and analytical reasoning.
Prioritizing pesticide compounds for analytical methods development
Norman, Julia E.; Kuivila, Kathryn; Nowell, Lisa H.
2012-01-01
The U.S. Geological Survey (USGS) has a periodic need to re-evaluate pesticide compounds in terms of priorities for inclusion in monitoring and studies and, thus, must also assess the current analytical capabilities for pesticide detection. To meet this need, a strategy has been developed to prioritize pesticides and degradates for analytical methods development. Screening procedures were developed to separately prioritize pesticide compounds in water and sediment. The procedures evaluate pesticide compounds in existing USGS analytical methods for water and sediment and compounds for which recent agricultural-use information was available. Measured occurrence (detection frequency and concentrations) in water and sediment, predicted concentrations in water and predicted likelihood of occurrence in sediment, potential toxicity to aquatic life or humans, and priorities of other agencies or organizations, regulatory or otherwise, were considered. Several existing strategies for prioritizing chemicals for various purposes were reviewed, including those that identify and prioritize persistent, bioaccumulative, and toxic compounds, and those that determine candidates for future regulation of drinking-water contaminants. The systematic procedures developed and used in this study rely on concepts common to many previously established strategies. The evaluation of pesticide compounds resulted in the classification of compounds into three groups: Tier 1 for high priority compounds, Tier 2 for moderate priority compounds, and Tier 3 for low priority compounds. For water, a total of 247 pesticide compounds were classified as Tier 1 and, thus, are high priority for inclusion in analytical methods for monitoring and studies. Of these, about three-quarters are included in some USGS analytical method; however, many of these compounds are included in research methods that are expensive and for which there are few data on environmental samples.
The remaining quarter of Tier 1 compounds are high priority as new analytes. The objective for analytical methods development is to design an integrated analytical strategy that includes as many of the Tier 1 pesticide compounds as possible in a relatively few, cost-effective methods. More than 60 percent of the Tier 1 compounds are high priority because they are anticipated to be present at concentrations approaching levels that could be of concern to human health or aquatic life in surface water or groundwater. An additional 17 percent of Tier 1 compounds were frequently detected in monitoring studies, but either were not measured at levels potentially relevant to humans or aquatic organisms, or do not have benchmarks available with which to compare concentrations. The remaining 21 percent are pesticide degradates that were included because their parent pesticides were in Tier 1. Tier 1 pesticide compounds for water span all major pesticide use groups and a diverse range of chemical classes, with herbicides and their degradates composing half of compounds. Many of the high priority pesticide compounds also are in several national regulatory programs for water, including those that are regulated in drinking water by the U.S. Environmental Protection Agency under the Safe Drinking Water Act and those that are on the latest Contaminant Candidate List. For sediment, a total of 175 pesticide compounds were classified as Tier 1 and, thus, are high priority for inclusion in analytical methods available for monitoring and studies. More than 60 percent of these compounds are included in some USGS analytical method; however, some are spread across several research methods that are expensive to perform, and monitoring data are not extensive for many compounds. The remaining Tier 1 compounds for sediment are high priority as new analytes. 
The objective for analytical methods development for sediment is to enhance an existing analytical method that currently includes nearly half of the pesticide compounds in Tier 1 by adding as many additional Tier 1 compounds as are analytically compatible. About 35 percent of the Tier 1 compounds for sediment are high priority on the basis of measured occurrence. A total of 74 compounds, or 42 percent, are high priority on the basis of predicted likelihood of occurrence according to physical-chemical properties, and either have potential toxicity to aquatic life, high pesticide usage, or both. The remaining 22 percent of Tier 1 pesticide compounds were either degradates of Tier 1 parent compounds or included for other reasons. As with water, the Tier 1 pesticide compounds for sediment are distributed across the major pesticide-use groups; insecticides and their degradates are the largest fraction, making up 45 percent of Tier 1. In contrast to water, organochlorines, at 17 percent, are the largest chemical class for Tier 1 in sediment, which is to be expected because there is continued widespread detection in sediments of persistent organochlorine pesticides and their degradates at concentrations high enough for potential effects on aquatic life. Compared to water, there are fewer available benchmarks with which to compare contaminant concentrations in sediment, but a total of 19 Tier 1 compounds have at least one sediment benchmark or screening value for aquatic organisms. Of the 175 compounds in Tier 1, 77 percent have high aquatic-life toxicity, as defined for this process. This evaluation of pesticides and degradates resulted in two lists of compounds that are priorities for USGS analytical methods development, one for water and one for sediment.
These lists will be used as the basis for redesigning and enhancing USGS analytical capabilities for pesticides in order to capture as many high-priority pesticide compounds as possible using an economically feasible approach.
Smalling, K.L.; Kuivila, K.M.
2008-01-01
A multi-residue method was developed for the simultaneous determination of 85 current-use and legacy organochlorine pesticides in a single sediment sample. After microwave-assisted extraction, clean-up of samples was optimized using gel permeation chromatography and either stacked carbon and alumina solid-phase extraction cartridges or a deactivated Florisil column. Analytes were determined by gas chromatography with ion-trap mass spectrometry and electron capture detection. Method detection limits ranged from 0.6 to 8.9 µg/kg dry weight. Bed and suspended sediments from a variety of locations were analyzed to validate the method and 29 pesticides, including at least 1 from every class, were detected.
Analytical difficulties facing today's regulatory laboratories: issues in method validation.
MacNeil, James D
2012-08-01
The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.
Asymptotic analysis of corona discharge from thin electrodes
NASA Technical Reports Server (NTRS)
Durbin, P. A.
1986-01-01
The steady discharge of a high-voltage corona is analyzed as a singular perturbation problem. The small parameter is the ratio of the length of the ionization region to the total gap length. By this method, current versus voltage characteristics can be calculated analytically.
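Analyses of this kind typically recover, at leading order, the classical quadratic current-voltage law for corona discharge attributed to Townsend, I = C·V·(V − V0) above the onset voltage V0. The sketch below evaluates that textbook law; the onset voltage and geometry constant are illustrative assumptions, not results from the paper.

```python
# Townsend's quadratic corona law (textbook form, not the paper's
# asymptotic result): I = C * V * (V - V0) for V >= V0, else no
# discharge current. V0 and C below are illustrative.
def corona_current(V, V0, C):
    """Corona current in amperes for applied voltage V (volts)."""
    return C * V * (V - V0) if V >= V0 else 0.0

onset = 5.0e3   # corona onset voltage V0 in volts (assumed)
k = 1.0e-12     # geometry-dependent constant C (assumed)
```

The law captures the two qualitative features such analyses predict: zero current below onset and faster-than-linear growth above it.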
Acetanilide herbicides are frequently applied in the U.S. on crops (corn, soybeans, popcorn, etc.) to control broadleaf and annual weeds. The acetanilide herbicides currently registered for use in the U.S. are: alachlor, acetochlor, metolachlor, propachlor, dimethenamid and fluf...
Appendix 3 Summary of Field Sampling and Analytical Methods with Bibliography
Conductivity and specific conductance are measures of the ability of water to conduct an electric current, and are a general measure of stream-water quality. Conductivity is affected by temperature, with warmer water having a greater conductivity. Specific conductance is the te...
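The temperature dependence noted above is why field conductivity readings are usually normalized to 25 °C as specific conductance. A common linear compensation uses a coefficient of roughly 2% per degree Celsius; the sketch below applies that standard correction, with the coefficient value an assumption rather than a constant from this appendix.

```python
# Linear temperature compensation of a raw conductivity reading to
# specific conductance at 25 C. The ~2%/degree coefficient is a
# commonly used default, assumed here.
def specific_conductance(conductivity, temp_c, alpha=0.02):
    """Specific conductance (same units as input, e.g. uS/cm)."""
    return conductivity / (1.0 + alpha * (temp_c - 25.0))
```

Warmer samples read high, so the correction divides by a factor greater than one above 25 °C and less than one below it.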
Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.
2013-01-01
There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With tight budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis only from computational data. The method uses current CFD uncertainty techniques coupled with the Student-t distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied from a specified tolerance or bias error, and the difference in the results is used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations, and conclusions are drawn about the feasibility of using CFD without experimental data. The results provide a tactic for analytically estimating the uncertainty in a CFD model when experimental data are unavailable.
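The core idea — perturb each CFD input by its tolerance, collect the outputs, and form a Student-t interval from computational samples alone — can be sketched as follows. The run values and the t-statistic below are illustrative assumptions, not the paper's data.

```python
# Student-t uncertainty on an output quantity estimated purely from
# repeated CFD runs with perturbed inputs (no experimental data).
import statistics

def t_uncertainty(samples, t_value):
    """Half-width of the t confidence interval on the sample mean."""
    n = len(samples)
    return t_value * statistics.stdev(samples) / n ** 0.5

# Heat transfer coefficients (W/m^2-K) from five runs, each with one
# input perturbed within its tolerance (hypothetical values):
h_runs = [102.0, 98.5, 101.2, 99.8, 100.6]
T95_DF4 = 2.776  # two-tailed 95% t value for 4 degrees of freedom
u_h = t_uncertainty(h_runs, T95_DF4)
```

Ranking the per-input spread, as the paper describes, then amounts to computing this half-width once per perturbed input and sorting.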
Measurement of toroidal vessel eddy current during plasma disruption on J-TEXT.
Liu, L J; Yu, K X; Zhang, M; Zhuang, G; Li, X; Yuan, T; Rao, B; Zhao, Q
2016-01-01
In this paper, we have employed a thin, printed circuit board eddy current array in order to determine the radial distribution of the azimuthal component of the eddy current density at the surface of a steel plate. The eddy current in the steel plate can be calculated by analytical methods under the simplifying assumptions that the steel plate is infinitely large and the exciting current is of uniform distribution. The measurement on the steel plate shows that this method has high spatial resolution. Then, we extended this methodology to a toroidal geometry with the objective of determining the poloidal distribution of the toroidal component of the eddy current density associated with plasma disruption in a fusion reactor called J-TEXT. The preliminary measured result is consistent with the analysis and calculation results on the J-TEXT vacuum vessel.
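Analytical treatments of eddy currents in a conducting plate are governed by the electromagnetic skin depth, δ = √(2/(μσω)), which sets how deeply the induced current penetrates. The sketch below evaluates that standard formula; the stainless-steel material values are illustrative assumptions, not parameters from the J-TEXT work.

```python
# Electromagnetic skin depth delta = sqrt(2 / (mu * sigma * omega)).
# Material values below are rough, assumed numbers for stainless steel.
import math

def skin_depth(sigma, mu_r, freq_hz):
    """Skin depth in metres for conductivity sigma (S/m), relative
    permeability mu_r, and frequency freq_hz (Hz)."""
    mu0 = 4e-7 * math.pi
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (mu_r * mu0 * sigma * omega))

# Non-magnetic stainless steel, roughly sigma ~ 1.4e6 S/m, mu_r ~ 1:
delta_50hz = skin_depth(1.4e6, 1.0, 50.0)
```

At 50 Hz this gives a skin depth of a few centimetres, which is why thin-plate approximations are often reasonable for slow disruption transients.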
2011-09-01
project research addresses our long-term goal to develop an analytical suite of the Advanced Laser Fluorescence (ALF) methods and instruments to improve...demonstrated ALF utility as an integrated tool for aquatic research and observations. The ALF integration into the major oceanographic programs is...currently in progress, including the California Current Ecosystem Long Term Ecological Research (CCE LTER, NSF) and California Cooperative Oceanic
Experimental and Analytical Determinations of Spiral Bevel Gear-Tooth Bending Stress Compared
NASA Technical Reports Server (NTRS)
Handschuh, Robert F.
2000-01-01
Spiral bevel gears are currently used in all main-rotor drive systems for rotorcraft produced in the United States. Applications such as these need spiral bevel gears to turn the corner from the horizontal gas turbine engine to the vertical rotor shaft. These gears must typically operate at extremely high rotational speeds and carry high power levels. With these difficult operating conditions, an improved analytical capability is paramount to increasing aircraft safety and reliability. Also, literature on the analysis and testing of spiral bevel gears has been very sparse in comparison to that for parallel axis gears. This is due to the complex geometry of this type of gear and to the specialized test equipment necessary to test these components. To develop an analytical model of spiral bevel gears, researchers use differential geometry methods to model the manufacturing kinematics. A three-dimensional spiral bevel gear modeling method was developed that uses finite elements for the structural analysis. This method was used to analyze the three-dimensional contact pattern between the test pinion and gear used in the Spiral Bevel Gear Test Facility at the NASA Glenn Research Center at Lewis Field. Results of this analysis are illustrated in the preceding figure. The development of the analytical method was a joint endeavor between NASA Glenn, the U.S. Army Research Laboratory, and the University of North Dakota.
Martinez, Ramon; Ordunez, Pedro; Soliz, Patricia N; Ballesteros, Michael F
2016-04-01
The complexity of current injury-related health issues demands the use of diverse and massive data sets for comprehensive analyses, and the application of novel methods to communicate data effectively to the public health community, decision-makers and the public. Recent advances in information visualisation, the availability of new visual analytic methods and tools, and progress in information technology provide an opportunity for shaping the next generation of injury surveillance. To introduce data visualisation conceptual bases and propose a visual analytic and visualisation platform in public health surveillance for injury prevention and control. The paper introduces data visualisation conceptual bases, describes a visual analytic and visualisation platform, and presents two real-world case studies illustrating their application in public health surveillance for injury prevention and control. Application of the visual analytic and visualisation platform is presented as a solution for improving access to heterogeneous data sources, enhancing data exploration and analysis, communicating data effectively, and supporting decision-making. Applications of data visualisation concepts and a visual analytic platform could play a key role in shaping the next generation of injury surveillance. A visual analytic and visualisation platform could improve data use, analytic capacity, and the ability to communicate findings and key messages effectively. The public health surveillance community is encouraged to identify opportunities to develop and expand its use in injury prevention and control. Published by the BMJ Publishing Group Limited.
Magnetic shielding of 3-phase current by a composite material at low frequencies
NASA Astrophysics Data System (ADS)
Livesey, K. L.; Camley, R. E.; Celinski, Z.; Maat, S.
2017-05-01
Electromagnetic shielding at microwave frequencies (MHz and GHz) can be accomplished by attenuating the waves using ferromagnetic resonance and eddy currents in conductive materials. This method is not as effective at shielding the quasi-static magnetic fields produced by low-frequency (kHz) currents. We explore theoretically the use of composite materials - magnetic nanoparticles embedded in a polymer matrix - as a shielding material surrounding a 3-phase current source. We develop several methods to estimate the permeability of a single magnetic nanoparticle at low frequencies, several hundred kHz, and find that the relative permeability can be as high as 5,000-20,000. We then use two analytic effective medium theories to find the effective permeability of a collection of nanoparticles as a function of the volume filling fraction. The analytic calculations provide upper and lower bounds on the composite permeability, and we use a numerical solution to calculate the effective permeability for specific cases. The field-pattern for the 3-phase current is calculated using a magnetic scalar potential for each of the three wires surrounded by a cylinder with the effective permeability found above. For a cylinder with an inner radius of 1 cm and an outer radius of 1.5 cm and an effective permeability of 50, one finds a reduction factor of about 8 in the field strength outside the cylinder.
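The quoted reduction factor can be cross-checked against the textbook magnetostatic shielding estimate for a long cylindrical shell, S ≈ 1 + (μr/4)(1 − (a/b)²) for μr ≫ 1. The sketch below applies that standard approximation (not the paper's scalar-potential field calculation) to the stated geometry and the assumed effective permeability of 50.

```python
# Textbook magnetostatic shielding factor for a long cylindrical
# shell of relative permeability mu_r, inner radius a, outer radius b.
# This is a cross-check approximation, not the paper's computation.
def cylindrical_shield_factor(mu_r, a, b):
    """Transverse-field attenuation factor, valid for mu_r >> 1."""
    return 1.0 + (mu_r / 4.0) * (1.0 - (a / b) ** 2)

# mu_r = 50, inner radius 1 cm, outer radius 1.5 cm (from the abstract):
S = cylindrical_shield_factor(50.0, 0.01, 0.015)
```

The estimate comes out near 8, consistent with the reduction factor reported above.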
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kazakevich, G.; Johnson, R.; Lebedev, V.
A simplified analytical model of the resonant interaction of the beam of Larmor electrons drifting in the crossed constant fields of a magnetron with a synchronous wave providing a phase grouping of the drifting charge was developed to optimize the parameters of an rf resonant injected signal driving the magnetrons for management of phase and power of rf sources with a rate required for superconducting high-current accelerators. The model, which considers the impact of the rf resonant signal injected into the magnetron on the operation of the injection-locked tube, substantiates the recently developed method of fast power control of magnetrons in the range up to 10 dB at the highest generation efficiency, with low noise, precise stability of the carrier frequency, and the possibility of wideband phase control. Experiments with continuous wave 2.45 GHz, 1 kW microwave oven magnetrons have verified the correspondence of the behavior of these tubes to the analytical model. A proof of the principle of the novel method of power control in magnetrons, based on the developed model, was demonstrated in the experiments. The method is attractive for high-current superconducting rf accelerators. This study also discusses vector methods of power control with the rates required for superconducting accelerators, the impact of the rf resonant signal injected into the magnetron on the rate of phase control of the injection-locked tubes, and a conceptual scheme of the magnetron transmitter with highest efficiency for high-current accelerators.
2018-06-14
NASA Astrophysics Data System (ADS)
Chen, Jui-Sheng; Li, Loretta Y.; Lai, Keng-Hsin; Liang, Ching-Ping
2017-11-01
A novel solution method is presented which leads to an analytical model for advective-dispersive transport in a semi-infinite domain involving a wide spectrum of boundary inputs, initial distributions, and zero-order productions. The novel solution method applies the Laplace transform in combination with the generalized integral transform technique (GITT) to obtain the generalized analytical solution. Based on this generalized analytical expression, we derive a comprehensive set of special-case solutions for some time-dependent boundary distributions and zero-order productions, described by the Dirac delta, constant, Heaviside, exponentially-decaying, or periodically sinusoidal functions, as well as some position-dependent initial conditions and zero-order productions specified by the Dirac delta, constant, Heaviside, or exponentially-decaying functions. The developed solutions are tested against an analytical solution from the literature. The excellent agreement between the analytical solutions confirms that the new model can serve as an effective tool for investigating transport behaviors under different scenarios. Several examples of applications are given to explore transport behaviors which are rarely noted in the literature. The results show that the concentration waves resulting from the periodically sinusoidal input are sensitive to the dispersion coefficient. The implication of this new finding is that a tracer test with a periodic input may provide additional information for identifying the dispersion coefficient. Moreover, the solution strategy presented in this study can be extended to derive analytical models for handling more complicated problems of solute transport in multi-dimensional media subjected to sequential decay chain reactions, for which analytical solutions are not currently available.
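The constant-concentration (Heaviside) inlet case that generalized models of this type reduce to is the classic Ogata-Banks solution for 1-D advection-dispersion in a semi-infinite domain. The sketch below evaluates that well-known special case; the velocity and dispersion values are illustrative assumptions.

```python
# Ogata-Banks solution for 1-D advection-dispersion with a constant
# inlet concentration c0 on a semi-infinite domain (a classic
# special case, not the paper's generalized solution).
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Concentration at position x (m) and time t (s) for seepage
    velocity v (m/s) and dispersion coefficient D (m^2/s)."""
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return 0.5 * c0 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))

c = ogata_banks(x=1.0, t=10.0, v=0.1, D=0.01)  # illustrative parameters
```

At the inlet (x = 0) the expression collapses to c0 exactly, which is a convenient sanity check on any implementation.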
Toxicologic evaluation of analytes from Tank 241-C-103
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahlum, D.D.; Young, J.Y.; Weller, R.E.
1994-11-01
Westinghouse Hanford Company requested PNL to assemble a toxicology review panel (TRP) to evaluate analytical data compiled by WHC and provide advice concerning potential health effects associated with exposure to tank-vapor constituents. The team's objectives were to (1) review procedures used for sampling vapors from tanks, (2) identify constituents in tank-vapor samples that could be related to symptoms reported by workers, (3) evaluate the toxicological implications of those constituents by comparison to established toxicological databases, (4) provide advice for additional analytical efforts, and (5) support other activities as requested by WHC. The TRP represents a wide range of expertise, including toxicology, industrial hygiene, and occupational medicine. The TRP prepared a list of target analytes that chemists at the Oregon Graduate Institute/Sandia (OGI), Oak Ridge National Laboratory (ORNL), and PNL used to establish validated methods for quantitative analysis of head-space vapors from Tank 241-C-103. This list was used by the analytical laboratories to develop appropriate analytical methods for samples from Tank 241-C-103. Target compounds on the list included acetone, acetonitrile, ammonia, benzene, 1,3-butadiene, butanal, n-butanol, hexane, 2-hexanone, methylene chloride, nitric oxide, nitrogen dioxide, nitrous oxide, dodecane, tridecane, propanenitrile, sulfur oxide, tributyl phosphate, and vinylidene chloride. The TRP considered constituent concentrations, current exposure limits, reliability of data relative to toxicity, consistency of the analytical data, and whether the material was carcinogenic or teratogenic. A final consideration in the analyte selection process was to include representative chemicals for each class of compounds found.
Recent Work in Hybrid Radiation Transport Methods with Applications to Commercial Nuclear Power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulesza, Joel A.
This talk will begin with an overview of hybrid radiation transport methods followed by a discussion of the author’s work to advance current capabilities. The talk will then describe applications for these methods in commercial nuclear power reactor analyses and techniques for experimental validation. When discussing these analytical and experimental activities, the importance of technical standards such as those created and maintained by ASTM International will be demonstrated.
Biosensors for hepatitis B virus detection.
Yao, Chun-Yan; Fu, Wei-Ling
2014-09-21
A biosensor is an analytical device used for the detection of analytes, which combines a biological component with a physicochemical detector. Recently, an increasing number of biosensors have been used in clinical research, for example, the blood glucose biosensor. This review focuses on the current state of biosensor research with respect to efficient, specific and rapid detection of hepatitis B virus (HBV). The biosensors developed based on different techniques, including optical methods (e.g., surface plasmon resonance), acoustic wave technologies (e.g., quartz crystal microbalance), electrochemistry (amperometry, voltammetry and impedance) and novel nanotechnology, are also discussed.
NASA Astrophysics Data System (ADS)
Pomata, Donatella; Di Filippo, Patrizia; Riccardi, Carmela; Buiarelli, Francesca; Gallo, Valentina
2014-02-01
The organic component of airborne particulate matter originates from both natural and anthropogenic sources, whose contributions can be identified through the analysis of chemical markers. The validation of analytical methods for compounds used as chemical markers is of great importance, especially when they must be determined in rather complex matrices. Currently, standard reference materials (SRM) with certified values for all those analytes are not available. In this paper, we report a method for the simultaneous determination of levoglucosan and xylitol as tracers for biomass burning emissions, and arabitol, mannitol and ergosterol as biomarkers for airborne fungi, in SRM 1649a by GC/MS. Their quantitative analysis in SRM 1649a was carried out using both internal standard calibration curves and the standard addition method. A matrix effect was observed for all analytes, minor for levoglucosan and major for the polyols and ergosterol. The results for levoglucosan, around 160 μg/g, agreed with those reported by other authors, while no comparison was possible for xylitol (120 μg/g), arabitol (15 μg/g), mannitol (18 μg/g), and ergosterol (0.5 μg/g). The analytical method used for SRM 1649a was also applied to PM10 samples collected in Rome during four seasonal sampling campaigns. The ratios between annual analyte concentrations in PM10 samples and in SRM 1649a were of the same order of magnitude, although the particulate matter samples analyzed were collected at two different sites and in two different periods.
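The standard-addition quantification mentioned above works by spiking the sample with known analyte amounts, fitting signal versus added concentration, and extrapolating to the x-intercept; the magnitude of that intercept is the native concentration. A minimal sketch, with hypothetical spike levels and responses:

```python
# Quantification by the method of standard additions: fit a least-
# squares line through (added concentration, signal) and return
# intercept/slope, i.e. the magnitude of the x-intercept, as the
# estimated native concentration. Data values are hypothetical.
def standard_addition(added, signal):
    n = len(added)
    mx = sum(added) / n
    my = sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in added)
    sxy = sum((x - mx) * (y - my) for x, y in zip(added, signal))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept / slope

# Spike levels (ug/g) and instrument responses (arbitrary units):
conc = standard_addition([0.0, 50.0, 100.0, 150.0],
                         [40.0, 60.0, 80.0, 100.0])
```

Because calibration happens in the sample itself, the approach compensates for exactly the kind of matrix effect the abstract reports for the polyols and ergosterol.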
NASA Technical Reports Server (NTRS)
Edmonds, Larry D.
1987-01-01
The steady state current distribution in a three dimensional integrated circuit is presented. A device physics approach, based on a perturbation method rather than an equivalent lumped circuit approach, is used. The perturbation method allows the various currents to be expressed in terms of elementary solutions which are solutions to very simple boundary value problems. A Simple Steady State Theory is the subtitle because the most obvious limitation of the present version of the analysis is that all depletion region boundary surfaces are treated as equipotential surfaces. This may be an adequate approximation in some applications but it is an obvious weakness in the theory when applied to latched states. Examples that illustrate the use of these analytical methods are not given because they will be presented in detail in the future.
Development of a CZE method for the quantification of pseudoephedrine and cetirizine.
Alnajjar, Ahmed O; Idris, Abubakr M
2014-10-01
Pseudoephedrine and cetirizine have been combined in dosage forms that offer greater therapeutic benefit than single-drug treatment. The current manuscript reports the development of the first capillary zone electrophoresis (CZE) assay method for that combination. The effects of pH and buffer concentration on resolution, noise, migration time and peak area were examined employing an experimental design approach. The analytes were separated in a 50.2 cm-long, 50 µm i.d. fused-silica capillary column using 10 mmol/L borate at pH 8.3 with a potential of 25 kV at 25°C and UV detection at 214 nm. The method was successfully validated in order to verify its suitability for pharmaceutical analysis for the purposes of quality control. Compared with previous high-performance liquid chromatographic methods, the current CZE method offers the benefits of a cost-effective electrolyte and high sample throughput (11 samples/h). Furthermore, other analytical results, including linear dynamic ranges, recovery (96.9-98.1%), intra- and interday precision (relative standard deviation ≤ 1.70%) and the limits of detection and quantification (≤2.65 µg/mL), were all satisfactory for the intended purpose. © The Author [2013]. Published by Oxford University Press. All rights reserved.
Multi-analytical Approaches Informing the Risk of Sepsis
NASA Astrophysics Data System (ADS)
Gwadry-Sridhar, Femida; Lewden, Benoit; Mequanint, Selam; Bauer, Michael
Sepsis is a significant cause of mortality and morbidity and is often associated with increased hospital resource utilization and prolonged intensive care unit (ICU) and hospital stays. The economic burden associated with sepsis is huge. With advances in medicine, there are now aggressive, goal-oriented treatments that can be used to help these patients. If we were able to predict which patients may be at risk for sepsis, we could start treatment early and potentially reduce the risk of mortality and morbidity. Analytic methods currently used in clinical research to determine the risk of a patient developing sepsis may be further enhanced by multi-modal analytic methods that together provide greater precision. Researchers commonly use univariate and multivariate regressions to develop predictive models. We hypothesized that such models could be enhanced by using multiple analytic methods that together could provide greater insight. In this paper, we analyze data about patients with and without sepsis using a decision tree approach and a cluster analysis approach. A comparison with a regression approach shows strong similarity among the variables identified, though not an exact match. We compare the variables identified by the different approaches and draw conclusions about their respective predictive capabilities, while considering their clinical significance.
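One simple way to quantify the "strong similarity, though not an exact match" between the variable sets selected by two approaches is the Jaccard index. The sketch below uses that metric on placeholder variable names; neither the metric nor the variables are taken from the study.

```python
# Jaccard similarity between the variable sets selected by two
# analytic approaches. Variable names are illustrative placeholders,
# not the study's actual predictors.
def jaccard(a, b):
    """|intersection| / |union| of two collections, in [0, 1]."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

regression_vars = {"lactate", "wbc", "age", "temperature"}
tree_vars = {"lactate", "wbc", "age", "heart_rate"}
similarity = jaccard(regression_vars, tree_vars)
```

A score near 1 indicates the methods largely agree on which predictors matter; disagreements flag variables whose clinical relevance deserves a closer look.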
Analytical method for measuring cosmogenic 35S in natural waters
Uriostegui, Stephanie H.; Bibby, Richard K.; Esser, Bradley K.; ...
2015-05-18
Here, cosmogenic sulfur-35 in water as dissolved sulfate (35SO4) has successfully been used as an intrinsic hydrologic tracer in low-SO4, high-elevation basins. Its application in environmental waters containing high SO4 concentrations has been limited because only small amounts of SO4 can be analyzed using current liquid scintillation counting (LSC) techniques. We present a new analytical method for analyzing large amounts of BaSO4 for 35S. We quantify efficiency gains when suspending BaSO4 precipitate in Insta-Gel Plus cocktail, purify BaSO4 precipitate to remove dissolved organic matter, mitigate interference of radium-226 and its daughter products by selection of high-purity barium chloride, and optimize LSC counting parameters for 35S determination in larger masses of BaSO4. Using this improved procedure, we achieved counting efficiencies comparable to published LSC techniques despite a 10-fold increase in the SO4 sample load. 35SO4 was successfully measured in high-SO4 surface waters and groundwaters containing low ratios of 35S activity to SO4 mass, demonstrating that this new analytical method expands the analytical range of 35SO4 and broadens the utility of 35SO4 as an intrinsic tracer in hydrologic settings.
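Because 35S is short-lived (half-life about 87.4 days), measured count rates in work like this are routinely decay-corrected back to the sample collection time. A minimal sketch of that standard correction, with illustrative activity values:

```python
# Radioactive decay correction for sulfur-35 (half-life ~87.4 days):
# back-calculate activity at collection time from a later measurement.
# The activity value and elapsed time below are illustrative.
import math

S35_HALF_LIFE_DAYS = 87.4

def decay_correct(activity_measured, days_elapsed):
    """Activity at collection time, same units as the measurement."""
    lam = math.log(2.0) / S35_HALF_LIFE_DAYS
    return activity_measured * math.exp(lam * days_elapsed)

# A sample counted exactly one half-life after collection:
a0 = decay_correct(10.0, 87.4)
```

The short half-life is also why counting efficiency matters so much here: waiting for cleaner chemistry costs signal exponentially.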
Poitevin, Eric
2016-01-01
The minerals and trace elements that account for about 4% of total human body mass serve as materials and regulators in numerous biological activities in body structure building. Infant formula and milk products are important sources of endogenous and added minerals and trace elements and hence must comply with regulatory as well as nutritional and safety requirements. In addition, reliable analytical data are necessary to support product content and innovation, health claims or declarations, and specific safety issues. Adequate analytical platforms and methods must be implemented to demonstrate both the compliance and safety assessment of all declared and regulated minerals and trace elements, especially trace-element contaminant surveillance. The first part of this paper presents general information on the mineral composition of infant formula and milk products and their regulatory status. In the second part, a survey describes the main techniques and related current official methods for determining minerals and trace elements in infant formula and milk products used by various international organizations (AOAC INTERNATIONAL, the International Organization for Standardization, the International Dairy Federation, and the European Committee for Standardization). The third part summarizes method officialization activities by the Stakeholder Panel on Infant Formula and Adult Nutritionals and the Stakeholder Panel on Strategic Food Analytical Methods. The final part covers a general discussion focusing on analytical gaps and future trends in inorganic analysis applied in infant formula and milk-based products.
Extracting laboratory test information from biomedical text
Kang, Yanna Shen; Kayaalp, Mehmet
2013-01-01
Background: No previous study reported the efficacy of current natural language processing (NLP) methods for extracting laboratory test information from narrative documents. This study investigates the pathology informatics question of how accurately such information can be extracted from text with the current tools and techniques, especially machine learning and symbolic NLP methods. The study data came from a text corpus maintained by the U.S. Food and Drug Administration, containing a rich set of information on laboratory tests and test devices. Methods: The authors developed a symbolic information extraction (SIE) system to extract device and test specific information about four types of laboratory test entities: Specimens, analytes, units of measures and detection limits. They compared the performance of SIE and three prominent machine learning based NLP systems, LingPipe, GATE and BANNER, each implementing a distinct supervised machine learning method, hidden Markov models, support vector machines and conditional random fields, respectively. Results: Machine learning systems recognized laboratory test entities with moderately high recall, but low precision rates. Their recall rates were relatively higher when the number of distinct entity values (e.g., the spectrum of specimens) was very limited or when lexical morphology of the entity was distinctive (as in units of measures), yet SIE outperformed them with statistically significant margins on extracting specimen, analyte and detection limit information in both precision and F-measure. Its high recall performance was statistically significant on analyte information extraction. Conclusions: Despite its shortcomings against machine learning methods, a well-tailored symbolic system may better discern relevancy among a pile of information of the same type and may outperform a machine learning system by tapping into lexically non-local contextual information such as the document structure.
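The precision, recall, and F-measure comparisons above follow directly from true-positive, false-positive, and false-negative counts. A minimal sketch of those standard metrics, with illustrative counts for a high-recall, low-precision extractor:

```python
# Standard information-extraction evaluation metrics from raw counts.
# The counts below are illustrative, not from the study.
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# An extractor with moderately high recall but low precision:
p, r, f = precision_recall_f1(tp=80, fp=80, fn=20)
```

F1 is the harmonic mean of precision and recall, so it penalizes exactly the lopsided profile the machine learning systems showed.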
NASA Astrophysics Data System (ADS)
Noda, Isao
2014-07-01
Noteworthy experimental practices, which are advancing the frontiers of the field of two-dimensional (2D) correlation spectroscopy, are reviewed with a focus on the various perturbation methods currently practiced to induce spectral changes, pertinent examples of applications in various fields, and the types of analytical probes employed. The types of perturbation methods found in the published literature are very diverse, encompassing both dynamic and static effects. Although a sizable portion of publications report the use of dynamic perturbations, a much greater number of studies employ static effects, especially that of temperature. The fields of application covered by the literature are also very broad, ranging from fundamental research to practical applications in a number of physical, chemical and biological systems, such as synthetic polymers, composites and biomolecules. Aside from IR spectroscopy, which is the most commonly used tool, many other analytical probes are used in 2D correlation analysis. The ever-expanding trend in the depth, breadth and versatility of 2D correlation spectroscopy techniques and their broad applications all point to the robust and healthy state of the field.
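At the core of 2D correlation spectroscopy is the synchronous correlation map: for m spectra recorded along the perturbation variable, with the perturbation-mean spectrum removed, Φ(ν1, ν2) = Σₖ ỹ(ν1, k)·ỹ(ν2, k)/(m − 1). The sketch below computes only this synchronous part (the asynchronous map, which needs the Hilbert-Noda transform, is omitted), on a tiny illustrative data set.

```python
# Synchronous 2D correlation intensity for a set of m spectra taken
# along a perturbation variable. Only the synchronous map is shown;
# the asynchronous map (Hilbert-Noda transform) is omitted.
def synchronous_2d(spectra):
    """spectra: list of m spectra, each a list of intensities at the
    same n spectral positions. Returns the n-by-n synchronous map."""
    m = len(spectra)
    n = len(spectra[0])
    means = [sum(s[j] for s in spectra) / m for j in range(n)]
    dyn = [[s[j] - means[j] for j in range(n)] for s in spectra]
    return [[sum(dyn[k][i] * dyn[k][j] for k in range(m)) / (m - 1)
             for j in range(n)] for i in range(n)]

# Two bands responding in phase to the perturbation (illustrative):
phi = synchronous_2d([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]])
```

A positive off-diagonal peak, as here, indicates the two bands change in phase under the perturbation — the basic interpretive rule of the method.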
Scheven, U M
2013-12-01
This paper describes a new variant of established stimulated echo pulse sequences, and an analytical method for determining diffusion or dispersion coefficients for Gaussian or non-Gaussian displacement distributions. The unipolar displacement encoding PFGSTE sequence uses trapezoidal gradient pulses of equal amplitude g and equal ramp rates throughout while sampling positive and negative halves of q-space. Usefully, the equal gradient amplitudes and gradient ramp rates help to reduce the impact of experimental artefacts caused by residual amplifier transients, eddy currents, or ferromagnetic hysteresis in components of the NMR magnet. The pulse sequence was validated with measurements of diffusion in water and of dispersion in flow through a packing of spheres. The analytical method introduced here permits the robust determination of the variance of non-Gaussian, dispersive displacement distributions. The noise sensitivity of the analytical method is shown to be negligible, using a demonstration experiment with a non-Gaussian longitudinal displacement distribution, measured on flow through a packing of mono-sized spheres. Copyright © 2013 Elsevier Inc. All rights reserved.
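PFG stimulated-echo measurements of this kind rest on the Stejskal-Tanner attenuation E = exp(−bD) with b = (γgδ)²(Δ − δ/3), from which a diffusion coefficient follows directly. The sketch below inverts that standard relation for a Gaussian case; the gradient and timing parameters are illustrative assumptions, not the paper's settings, and trapezoidal-ramp corrections to b are ignored.

```python
# Diffusion coefficient from the Stejskal-Tanner relation
# E(g)/E(0) = exp(-b D), b = (gamma*g*delta)^2 * (Delta - delta/3),
# for rectangular gradient pulses (ramp corrections neglected).
import math

def stejskal_tanner_D(E_ratio, gamma, g, delta, Delta):
    """D in m^2/s from the echo attenuation ratio E(g)/E(0)."""
    b = (gamma * g * delta) ** 2 * (Delta - delta / 3.0)
    return -math.log(E_ratio) / b

GAMMA_H = 2.675e8  # 1H gyromagnetic ratio, rad s^-1 T^-1
# Illustrative: 50% attenuation at g = 0.3 T/m, delta = 2 ms, Delta = 20 ms
D = stejskal_tanner_D(E_ratio=0.5, gamma=GAMMA_H,
                      g=0.3, delta=2e-3, Delta=20e-3)
```

For non-Gaussian dispersive displacement distributions, as the paper stresses, a single-exponential fit like this is no longer sufficient and the variance must be extracted more carefully.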
NASA Astrophysics Data System (ADS)
Yttri, K. E.; Schnelle-Kreiss, J.; Maenhaut, W.; Alves, C.; Bossi, R.; Bjerke, A.; Claeys, M.; Dye, C.; Evtyugina, M.; García-Gacio, D.; Gülcin, A.; Hillamo, R.; Hoffer, A.; Hyder, M.; Iinuma, Y.; Jaffrezo, J.-L.; Kasper-Giebl, A.; Kiss, G.; López-Mahia, P. L.; Pio, C.; Piot, C.; Ramirez-Santa-Cruz, C.; Sciare, J.; Teinilä, K.; Vermeylen, R.; Vicente, A.; Zimmermann, R.
2014-07-01
The monosaccharide anhydrides (MAs) levoglucosan, galactosan and mannosan are products of incomplete combustion and pyrolysis of cellulose and hemicelluloses, and are found to be major constituents of biomass burning aerosol particles. Hence, ambient aerosol particle concentrations of levoglucosan are commonly used to study the influence of residential wood burning, agricultural waste burning and wild fire emissions on ambient air quality. A European-wide intercomparison on the analysis of the three monosaccharide anhydrides was conducted based on ambient aerosol quartz fiber filter samples collected at a Norwegian urban background site during winter. Thus, the samples' content of MAs is representative of biomass burning particles originating from residential wood burning. The purpose of the intercomparison was to examine the comparability of the great diversity of analytical methods used for analysis of levoglucosan, mannosan and galactosan in ambient aerosol filter samples. Thirteen laboratories participated, of which three applied High-Performance Anion-Exchange Chromatography (HPAEC), four used High-Performance Liquid Chromatography (HPLC) or Ultra-Performance Liquid Chromatography (UPLC), and six resorted to Gas Chromatography (GC). The analytical methods used were of such diversity that they should be considered as thirteen different analytical methods. All of the thirteen laboratories reported levels of levoglucosan, whereas nine reported data for mannosan and/or galactosan. Eight of the thirteen laboratories reported levels for all three isomers. The accuracy for levoglucosan, presented as the mean percentage error (PE) for each participating laboratory, varied from -63 to 23%; however, for 62% of the laboratories the mean PE was within ±10%, and for 85% the mean PE was within ±20%.
For mannosan, the corresponding range was -60 to 69%, but as for levoglucosan, the range was substantially smaller for a subselection of the laboratories; i.e., for 33% of the laboratories the mean PE was within ±10%. For galactosan, the mean PE for the participating laboratories ranged from -84 to 593%, and as for mannosan 33% of the laboratories reported a mean PE within ±10%. The variability of the various analytical methods, as defined by their minimum and maximum PE value, was typically better for levoglucosan than for mannosan and galactosan, ranging from 3.2 to 41% for levoglucosan, from 10 to 67% for mannosan, and from 6 to 364% for galactosan. For the levoglucosan to mannosan ratio, which may be used to assess the relative importance of softwood vs. hardwood burning, the variability only ranged from 3.5 to 24%. To our knowledge, this is the first major intercomparison on analytical methods used to quantify monosaccharide anhydrides in ambient aerosol filter samples conducted and reported in the scientific literature. The results show that for levoglucosan the accuracy is only slightly lower than that reported for analysis of SO₄²⁻ on filter samples, a constituent that has been analyzed by numerous laboratories for several decades, typically by ion chromatography, and which is considered a fairly easy constituent to measure. Hence, the results obtained for levoglucosan with respect to accuracy are encouraging and suggest that levels of levoglucosan, and to a lesser extent mannosan and galactosan, obtained by most of the analytical methods currently used to quantify monosaccharide anhydrides in ambient aerosol filter samples, are comparable. Finally, the various analytical methods used in the current study should be tested for other aerosol matrices and concentrations as well, the most obvious being summertime aerosol samples affected by wild fires and/or agricultural fires.
NASA Astrophysics Data System (ADS)
Yttri, K. E.; Schnelle-Kreis, J.; Maenhaut, W.; Abbaszade, G.; Alves, C.; Bjerke, A.; Bonnier, N.; Bossi, R.; Claeys, M.; Dye, C.; Evtyugina, M.; García-Gacio, D.; Hillamo, R.; Hoffer, A.; Hyder, M.; Iinuma, Y.; Jaffrezo, J.-L.; Kasper-Giebl, A.; Kiss, G.; López-Mahia, P. L.; Pio, C.; Piot, C.; Ramirez-Santa-Cruz, C.; Sciare, J.; Teinilä, K.; Vermeylen, R.; Vicente, A.; Zimmermann, R.
2015-01-01
The monosaccharide anhydrides (MAs) levoglucosan, galactosan and mannosan are products of incomplete combustion and pyrolysis of cellulose and hemicelluloses, and are found to be major constituents of biomass burning (BB) aerosol particles. Hence, ambient aerosol particle concentrations of levoglucosan are commonly used to study the influence of residential wood burning, agricultural waste burning and wildfire emissions on ambient air quality. A European-wide intercomparison on the analysis of the three monosaccharide anhydrides was conducted based on ambient aerosol quartz fiber filter samples collected at a Norwegian urban background site during winter. Thus, the samples' content of MAs is representative for BB particles originating from residential wood burning. The purpose of the intercomparison was to examine the comparability of the great diversity of analytical methods used for analysis of levoglucosan, mannosan and galactosan in ambient aerosol filter samples. Thirteen laboratories participated, of which three applied high-performance anion-exchange chromatography (HPAEC), four used high-performance liquid chromatography (HPLC) or ultra-performance liquid chromatography (UPLC) and six resorted to gas chromatography (GC). The analytical methods used were of such diversity that they should be considered as thirteen different analytical methods. All of the thirteen laboratories reported levels of levoglucosan, whereas nine reported data for mannosan and/or galactosan. Eight of the thirteen laboratories reported levels for all three isomers. The accuracy for levoglucosan, presented as the mean percentage error (PE) for each participating laboratory, varied from -63 to 20%; however, for 62% of the laboratories the mean PE was within ±10%, and for 85% the mean PE was within ±20%. For mannosan, the corresponding range was -60 to 69%, but as for levoglucosan, the range was substantially smaller for a subselection of the laboratories; i.e. 
for 33% of the laboratories the mean PE was within ±10%. For galactosan, the mean PE for the participating laboratories ranged from -84 to 593%, and as for mannosan 33% of the laboratories reported a mean PE within ±10%. The variability of the various analytical methods, as defined by their minimum and maximum PE value, was typically better for levoglucosan than for mannosan and galactosan, ranging from 3.2 to 41% for levoglucosan, from 10 to 67% for mannosan and from 6 to 364% for galactosan. For the levoglucosan to mannosan ratio, which may be used to assess the relative importance of softwood versus hardwood burning, the variability only ranged from 3.5 to 24%. To our knowledge, this is the first major intercomparison on analytical methods used to quantify monosaccharide anhydrides in ambient aerosol filter samples conducted and reported in the scientific literature. The results show that for levoglucosan the accuracy is only slightly lower than that reported for analysis of SO₄²⁻ (sulfate) on filter samples, a constituent that has been analysed by numerous laboratories for several decades, typically by ion chromatography and which is considered a fairly easy constituent to measure. Hence, the results obtained for levoglucosan with respect to accuracy are encouraging and suggest that levels of levoglucosan, and to a lesser extent mannosan and galactosan, obtained by most of the analytical methods currently used to quantify monosaccharide anhydrides in ambient aerosol filter samples, are comparable. Finally, the various analytical methods used in the current study should be tested for other aerosol matrices and concentrations as well, the most obvious being summertime aerosol samples affected by wildfires and/or agricultural fires.
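The accuracy and variability statistics reported above reduce to simple arithmetic on per-sample percentage errors. A minimal sketch, using made-up laboratory and reference values rather than the study's data:

```python
# Per-laboratory mean percentage error (PE) and variability (min-max PE spread),
# as used in the intercomparison. All numbers below are illustrative only.

def percentage_error(measured, reference):
    return 100.0 * (measured - reference) / reference

def lab_statistics(measured_values, reference_values):
    pes = [percentage_error(m, r) for m, r in zip(measured_values, reference_values)]
    mean_pe = sum(pes) / len(pes)
    variability = max(pes) - min(pes)  # spread between minimum and maximum PE
    return mean_pe, variability

# Hypothetical levoglucosan results (ng per filter) for one lab vs. reference
reference = [100.0, 250.0, 400.0]
lab = [95.0, 240.0, 410.0]
mean_pe, spread = lab_statistics(lab, reference)
```

A lab would fall in the "mean PE within ±10%" group of the study whenever `abs(mean_pe) <= 10`.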
New method to monitor RF safety in MRI-guided interventions based on RF induced image artefacts.
van den Bosch, Michiel R; Moerland, Marinus A; Lagendijk, Jan J W; Bartels, Lambertus W; van den Berg, Cornelis A T
2010-02-01
Serious tissue heating may occur at the tips of elongated metallic structures used in MRI-guided interventions, such as vascular guidewires, catheters, biopsy needles, and brachytherapy needles. This heating is due to resonating electromagnetic radiofrequency (RF) waves along the structure. Since it is hard to predict the exact length at which resonance occurs under in vivo conditions, there is a need for methods to monitor this resonance behavior. In this study, the authors propose a method based on the RF induced image artefacts and demonstrate its applicability in two phantom experiments. The authors developed an analytical model that describes the RF induced image artefacts as a function of the induced current in an elongated metallic structure placed parallel to the static magnetic field. It describes the total RF field as a sum of the RF fields produced by the transmit coil of the MR scanner and by the elongated metallic structure. Several spoiled gradient echo images with different nominal flip angle settings were acquired to map the B1+ field, which is a quantitative measure for the RF distortion around the structure. From this map, the current was extracted by fitting the analytical model. To investigate the sensitivity of our method we performed two phantom experiments with different setup parameters: One that mimics a brachytherapy needle insertion and one that resembles a guidewire intervention. In the first experiment, a short needle was placed centrally in the MR bore to ensure that the induced currents would be small. In the second experiment, a longer wire was placed in an off-center position to mimic a worst case scenario for the patient. In both experiments, a Luxtron (Santa Clara, CA) fiberoptic temperature sensor was positioned at the structure tip to record the temperature. 
In the first experiment, no significant temperature increases were measured, while the RF image artefacts and the induced currents in the needle increased with the applied insertion depth. The maximum induced current in the needle was 44 mA. Furthermore, a standing wave pattern became clearly visible for larger insertion depths. In the second experiment, significant temperature increases up to 2.4 degrees C in 1 min were recorded during the image acquisitions. The maximum current value was 1.4 A. In both experiments, a proper estimation of the current in the metallic structure could be made using our analytical model. The authors have developed a method to quantitatively determine the induced current in an elongated metallic structure from its RF distortion. This creates a powerful and sensitive method to investigate the resonant behavior of RF waves along elongated metallic structures used for MRI-guided interventions, for example, to monitor the RF safety or to inspect the influence of coating on the resonance length. Principally, it can be applied under in vivo conditions and for noncylindrical metallic structures such as hip implants by taking their geometry into account.
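The current-fitting step can be illustrated with a strongly simplified field model: assuming a long straight wire parallel to B0 and in-phase coil and wire fields (so magnitudes add), the wire contributes μ0·I/(2πr) to the background B1+, and the induced current I follows from a linear least-squares fit. This is only a sketch of the idea with illustrative numbers; the authors' actual analytical model is more detailed.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T m / A)

def b1_model(r, b1_coil, current):
    """Total RF field magnitude at distance r from the wire (simplified, in-phase)."""
    return b1_coil + MU0 * current / (2.0 * np.pi * r)

def fit_current(r, b1_measured, b1_coil):
    """Least-squares estimate of the induced wire current from a B1+ map."""
    # b1_measured - b1_coil = (MU0 / 2 pi) * I * (1/r)  ->  linear in I
    x = MU0 / (2.0 * np.pi * r)
    y = b1_measured - b1_coil
    return float(np.dot(x, y) / np.dot(x, x))

# Synthetic check: generate a B1+ profile with a known 44 mA current
# (the value reported for the needle experiment) and recover it.
r = np.linspace(0.005, 0.05, 20)  # 5-50 mm from the wire
b1 = b1_model(r, b1_coil=2e-6, current=0.044)
i_est = fit_current(r, b1, b1_coil=2e-6)
```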
Development of variable LRFD φ factors for deep foundation design due to site variability.
DOT National Transportation Integrated Search
2012-04-01
The current design guidelines of Load and Resistance Factor Design (LRFD) specify constant values for deep foundation design, based on the analytical method selected and the degree of redundancy of the pier. However, investigation of multiple sites in ...
Analysis of multiple mycotoxins in food.
Hajslova, Jana; Zachariasova, Milena; Cajka, Tomas
2011-01-01
Mycotoxins are secondary metabolites of microscopic filamentous fungi. With regard to the widespread distribution of fungi in the environment, mycotoxins are considered to be among the most important natural contaminants in foods and feeds. To protect consumers' health and reduce economic losses, surveillance and control of mycotoxins in food and feed have become a major objective for producers, regulatory authorities, and researchers worldwide. In this context, the availability of reliable analytical methods applicable for this purpose is essential. Since the variety of chemical structures of mycotoxins makes it impossible to use one single technique for their analysis, a vast number of analytical methods have been developed and validated. Both the large variability of food matrices and the growing demand for fast, cost-saving, and accurate determination of multiple mycotoxins by a single method outline new challenges for analytical research. This effort is facilitated by technical developments in mass spectrometry that reduce the influence of matrix effects even when the sample clean-up step is omitted. The current state of the art together with future trends is presented in this chapter. Attention is focused mainly on instrumental methods; advances in biosensors and other bioanalytical screening approaches enabling analysis of multiple mycotoxins are not discussed in detail.
Microplastics in the environment: Challenges in analytical chemistry - A review.
Silva, Ana B; Bastos, Ana S; Justino, Celine I L; da Costa, João P; Duarte, Armando C; Rocha-Santos, Teresa A P
2018-08-09
Microplastics can be present in the environment as manufactured microplastics (known as primary microplastics) or resulting from the continuous weathering of plastic litter, which yields progressively smaller plastic fragments (known as secondary microplastics). Herein, we discuss the numerous issues associated with the analysis of microplastics, and to a lesser extent of nanoplastics, in environmental samples (water, sediments, and biological tissues), from their sampling and sample handling to their identification and quantification. The analytical quality control and quality assurance associated with the validation of analytical methods and use of reference materials for the quantification of microplastics are also discussed, as well as the current challenges within this field of research and possible routes to overcome such limitations.
2014-01-01
This review aims to highlight the recent advances and methodological improvements in instrumental techniques applied for the analysis of different brominated flame retardants (BFRs). The literature search strategy was based on the recent analytical reviews published on BFRs. The main selection criteria involved the successful development and application of analytical methods for determination of the target compounds in various environmental matrices. Different factors affecting chromatographic separation and mass spectrometric detection of brominated analytes were evaluated and discussed. Techniques using advanced instrumentation to achieve outstanding results in quantification of different BFRs and their metabolites/degradation products were highlighted. Finally, research gaps in the field of BFR analysis were identified and recommendations for future research were proposed. PMID:27433482
Ermacora, Alessia; Hrnčiřík, Karel
2014-01-01
Substantial progress has been recently made in the development and optimisation of analytical methods for the quantification of 2-MCPD, 3-MCPD and glycidyl esters in oils and fats, and there are a few methods currently available that allow a reliable quantification of these contaminants in bulk oils and fats. On the other hand, no standard method for the analysis of foodstuffs has yet been established. The aim of this study was the development and validation of a new method for the simultaneous quantification of 2-MCPD, 3-MCPD and glycidyl esters in oil-based food products. The developed protocol includes a first step of liquid-liquid extraction and purification of the lipophilic substances of the sample, followed by the application of a previously developed procedure based on acid transesterification, for the indirect quantification of these contaminants in oils and fats. The method validation was carried out on food products (fat-based spreads, creams, margarine, mayonnaise) manufactured in-house, in order to control the manufacturing process and account for any food matrix-analyte interactions (the sample spiking was carried out on the single components used for the formulations rather than the final products). The method showed good accuracy (the recoveries ranged from 97% to 106% for bound 3-MCPD and 2-MCPD and from 88% to 115% for bound glycidol) and sensitivity (the LOD was 0.04 and 0.05 mg kg⁻¹ for bound MCPD and glycidol, respectively). Repeatability and reproducibility were satisfactory (RSD below 2% and 5%, respectively) for all analytes. The levels of salts and surface-active compounds in the formulation were found to have no impact on the accuracy and the other parameters of the method.
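The validation figures quoted above (recovery, RSD) are standard spiked-sample statistics. A minimal sketch with hypothetical replicate data, not the study's measurements:

```python
# Recovery and relative standard deviation (RSD) for a spiked-sample validation.
# The replicate values and spike level below are illustrative only.

def recovery_percent(measured, spiked):
    """Recovery of a spiked analyte, in percent."""
    return 100.0 * measured / spiked

def rsd_percent(values):
    """Relative standard deviation (sample SD / mean), in percent."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 100.0 * var ** 0.5 / mean

# Hypothetical bound 3-MCPD replicates (mg/kg) for a spread spiked at 1.0 mg/kg
replicates = [0.98, 1.00, 1.02]
rec = recovery_percent(sum(replicates) / len(replicates), 1.0)
rsd = rsd_percent(replicates)
```

A result like `rec = 100%` and `rsd = 2%` would sit inside the accuracy and repeatability ranges the abstract reports.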
Walker, M J; Burns, D T; Elliott, C T; Gowland, M H; Mills, E N Clare
2016-01-07
Food allergy is an increasing problem for those affected, their families or carers, the food industry and for regulators. The food supply chain is highly vulnerable to fraud involving food allergens, risking fatalities and severe reputational damage to the food industry. Many facets are being pursued to ameliorate the difficulties including better food labelling and the concept of thresholds of elicitation of allergy symptoms as risk management tools. These efforts depend to a high degree on the ability reliably to detect and quantify food allergens; yet all current analytical approaches exhibit severe deficiencies that jeopardise accurate results being produced particularly in terms of the risks of false positive and false negative reporting. If we fail to realise the promise of current risk assessment and risk management of food allergens through lack of the ability to measure food allergens reproducibly and with traceability to an international unit of measurement, the analytical community will have failed a significant societal challenge. Three distinct but interrelated areas of analytical work are urgently needed to address the substantial gaps identified: (a) a coordinated international programme for the production of properly characterised clinically relevant reference materials and calibrants for food allergen analysis; (b) an international programme to widen the scope of proteomics and genomics bioinformatics for the genera containing the major allergens to address problems in ELISA, MS and DNA methods; (c) the initiation of a coordinated international programme leading to reference methods for allergen proteins that provide results traceable to the SI. This article describes in more detail food allergy, the risks of inapplicable or flawed allergen analyses with examples and a proposed framework, including clinically relevant incurred allergen concentrations, to address the currently unmet and urgently required analytical requirements. 
Support for the above recommendations from food authorities, business organisations and National Measurement Institutes is important; however, transparent international coordination is essential. Thus our recommendations are primarily addressed to the European Commission, the Health and Food Safety Directorate, DG Santé. A global multidisciplinary consortium is required to provide a curated suite of data, including genomic and proteomic data on key allergenic food sources, made publicly available online.
Boonyasit, Yuwadee; Laiwattanapaisal, Wanida
2015-01-01
A method for acquiring albumin-corrected fructosamine values from whole blood using a microfluidic paper-based analytical system that offers substantial improvement over previous methods is proposed. The time required to quantify both serum albumin and fructosamine is shortened to 10 min with detection limits of 0.50 g dL⁻¹ and 0.58 mM, respectively (S/N = 3). The proposed system also exhibited good within-run and run-to-run reproducibility. The results of the interference study revealed that the acceptable recoveries ranged from 95.1 to 106.2%. The system was compared with currently used large-scale methods (n = 15), and the results demonstrated good agreement among the techniques. The microfluidic paper-based system has the potential to continuously monitor glycemic levels in low-resource settings.
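The S/N = 3 detection limits quoted above follow the common convention LOD = 3·σ(blank)/slope. A sketch with invented calibration numbers (not the device's actual calibration data):

```python
# Limit of detection from the S/N = 3 convention: LOD = k * sigma_blank / slope.
# blank_sd and calibration_slope below are hypothetical values chosen so the
# result matches the 0.58 mM fructosamine LOD reported above.

def limit_of_detection(blank_sd, calibration_slope, k=3.0):
    """LOD as k times the blank standard deviation divided by the calibration slope."""
    return k * blank_sd / calibration_slope

# Hypothetical calibration: signal = slope * [fructosamine]
lod = limit_of_detection(blank_sd=0.29, calibration_slope=1.5)
```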
Graphite nanocomposites sensor for multiplex detection of antioxidants in food.
Ng, Khan Loon; Tan, Guan Huat; Khor, Sook Mei
2017-12-15
Butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT), and tert-butylhydroquinone (TBHQ) are synthetic antioxidants used in the food industry. Herein, we describe the development of a novel graphite nanocomposite-based electrochemical sensor for the multiplex detection and measurement of BHA, BHT, and TBHQ levels in complex food samples using a linear sweep voltammetry (LSV) technique. Moreover, our newly established analytical method exhibited good sensitivity, limit of detection, limit of quantitation, and selectivity. The accuracy and reliability of the analytical results were tested by method validation and by comparison with the results of a liquid chromatography method, where a linear correlation of more than 0.99 was achieved. The addition of sodium dodecyl sulfate as a supporting additive further enhanced the LSV response (anodic peak current, Ipa) of BHA and BHT by 2 and 20 times, respectively.
Web-based Visual Analytics for Extreme Scale Climate Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Evans, Katherine J; Harney, John F
In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.
Freitag, Ruth; Hilbrig, Frank
2007-07-01
CEC is defined as an analytical method in which the analytes are separated on a chromatographic column in the presence of an applied voltage. The separation of charged analytes in CEC is complex, since chromatographic interaction, electroosmosis and electrophoresis all contribute to the experimentally observed behavior. Contributions from additional effects, such as surface electrodiffusion, have also been suggested. A sound theoretical treatment incorporating all effects is currently not available. The question of whether the different effects contribute in an independent or an interdependent manner is still under discussion. In this contribution, the state of the art in the theoretical description of the individual contributions, as well as models for the retention behavior and in particular possible dimensionless 'retention factors', is discussed, together with the experimental database for the separation of charged analytes, in particular proteins and peptides, by CEC and related techniques.
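One commonly cited simplification of CEC migration treats the analyte as moving with the combined electroosmotic and electrophoretic velocity, slowed by chromatographic retention through a retention factor k. This is exactly the kind of single-mechanism model the review describes as incomplete; the sketch below uses illustrative parameter values.

```python
# Simplified CEC migration model: apparent velocity is the sum of the
# electroosmotic and electrophoretic velocities, divided by (1 + k),
# where k is the chromatographic retention factor. Values are illustrative.

def cec_velocity(u_eo, u_ep, k):
    """Apparent analyte velocity: (electroosmotic + electrophoretic) / (1 + k)."""
    return (u_eo + u_ep) / (1.0 + k)

# A hypothetical peptide whose electrophoretic mobility opposes the EOF,
# with moderate chromatographic retention (k = 0.5)
u = cec_velocity(u_eo=2.0e-3, u_ep=-0.5e-3, k=0.5)  # m/s
```

Note that this model assumes the three contributions are independent, which is precisely the open question the review discusses.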
Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition
NASA Technical Reports Server (NTRS)
Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd
2015-01-01
Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
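The core quantity in the metric-construction step above is the edge length measured in a symmetric positive-definite metric tensor M, L_M(e) = sqrt(eᵀMe); adaptation drives every edge toward unit length in the metric. A minimal sketch with an illustrative diagonal metric (the function and values here are not from the paper):

```python
import numpy as np

def metric_edge_length(edge, metric):
    """Length of the edge vector under the metric tensor: sqrt(e^T M e)."""
    return float(np.sqrt(edge @ metric @ edge))

# A metric requesting spacing h_x = 0.1, h_y = 0.01 (eigenvalues are 1/h^2),
# i.e. strong anisotropy with fine resolution in y
M = np.diag([1.0 / 0.1**2, 1.0 / 0.01**2])

e = np.array([0.1, 0.0])            # an x-aligned edge at the requested spacing
length = metric_edge_length(e, M)   # close to 1: unit length in the metric
```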
ERIC Educational Resources Information Center
Ahmed, Abdelrahman M.; AbdelAlmuniem, Arwa; Almabhouh, Ahmed A.
2016-01-01
This study aimed to identify the current status of using Web 2.0 tools in university teaching by the faculty members of the College of Education at Sudan University of Science and Technology. The study used a descriptive analytical method based on the use of questionnaires and interviews. The questionnaire was administered to a sample of 40…
Ideal evolution of magnetohydrodynamic turbulence when imposing Taylor-Green symmetries.
Brachet, M E; Bustamante, M D; Krstulovic, G; Mininni, P D; Pouquet, A; Rosenberg, D
2013-01-01
We investigate the ideal and incompressible magnetohydrodynamic (MHD) equations in three space dimensions for the development of potentially singular structures. The methodology consists in implementing the fourfold symmetries of the Taylor-Green vortex generalized to MHD, leading to substantial computer time and memory savings at a given resolution; we also use a regridding method that allows for lower-resolution runs at early times, with no loss of spectral accuracy. One magnetic configuration is examined at an equivalent resolution of 6144³ points and three different configurations on grids of 4096³ points. At the highest resolution, two different current and vorticity sheet systems are found to collide, producing two successive accelerations in the development of small scales. At the latest time, a convergence of magnetic field lines to the location of maximum current is probably leading locally to a strong bending and directional variability of such lines. A novel analytical method, based on sharp analysis inequalities, is used to assess the validity of the finite-time singularity scenario. This method allows one to rule out spurious singularities by evaluating the rate at which the logarithmic decrement of the analyticity-strip method goes to zero. The result is that the finite-time singularity scenario cannot be ruled out, and the singularity time could be somewhere between t=2.33 and t=2.70. More robust conclusions will require higher resolution runs and grid-point interpolation measurements of maximum current and vorticity.
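The analyticity-strip diagnostic mentioned above fits the logarithmic decrement δ from the tail of the energy spectrum, E(k) ≈ C·k⁻ⁿ·exp(−2δk); a finite-time singularity corresponds to δ(t) reaching zero. A sketch of the fit on synthetic data (the function names and spectrum parameters are illustrative, not the paper's):

```python
import numpy as np

def fit_decrement(k, E, n):
    """Least-squares fit of C and delta in E(k) = C * k**(-n) * exp(-2*delta*k),
    given the power-law index n, done in log space."""
    # log E + n log k = log C - 2 delta k  ->  linear in k
    y = np.log(E) + n * np.log(k)
    A = np.vstack([np.ones_like(k), -2.0 * k]).T
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    log_c, delta = coeffs
    return float(np.exp(log_c)), float(delta)

# Synthetic spectrum with known parameters: C = 3, n = 4, delta = 0.05
k = np.arange(1.0, 101.0)
E = 3.0 * k**-4.0 * np.exp(-2.0 * 0.05 * k)
C, delta = fit_decrement(k, E, n=4.0)
```

Tracking `delta` over successive output times and extrapolating its decay toward zero is what bounds the candidate singularity time.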
ERIC Educational Resources Information Center
Kou, Xiaojing
2011-01-01
Various formats of online discussion have proven valuable for enhancing learning and collaboration in distance and blended learning contexts. However, despite their capacity to reveal essential processes in collaborative inquiry, current mainstream analytical frameworks, such as the cognitive presence framework (Garrison, Anderson, & Archer,…
Chemical monitoring strategies are most effective for those chemicals whose hazards are well understood and for which sensitive and cost effective analytical methods are available. Unfortunately, such chemicals represent a minor fraction of those that may currently occur in the e...
Droplet digital PCR quantifies host inflammatory transcripts in feces reliably and reproducibly
USDA-ARS?s Scientific Manuscript database
The gut is the most extensive, interactive, and complex interface between the human host and the environment and therefore a critical site of immunological activity. Non-invasive methods to assess the host response in this organ are currently lacking. Feces are the available analyte which have been ...
Quantitative Literacy for Undergraduate Business Students in the 21st Century
ERIC Educational Resources Information Center
McClure, Richard; Sircar, Sumit
2008-01-01
The current business environment is awash in vast amounts of data that ongoing transactions continually generate. Leading-edge corporations are using business analytics to achieve competitive advantage. However, educators are not adequately preparing business school students in quantitative methods to meet this challenge. For more than half a…
The identification and quantitation of non-method-specific target analytes have greater importance with respect to EPA's current combustion strategy. The risk associated with combustion process emissions must now be characterized. EPA has recently released draft guidance on pr...
USDA-ARS?s Scientific Manuscript database
Market demands for cotton varieties with improved fiber properties also call for the development of fast, reliable analytical methods for monitoring fiber development and measuring their properties. Currently, cotton breeders rely on instrumentation that can require significant amounts of sample, w...
The recent discovery of the pollution of the environment with Kepone has resulted in a tremendous interest in the development of residue methodology for the compound. Current multiresidue methods for the determination of the common organochlorinated pesticides do not yield good q...
A Graphical Approach to Teaching Amplifier Design at the Undergraduate Level
ERIC Educational Resources Information Center
Assaad, R. S.; Silva-Martinez, J.
2009-01-01
Current methods of teaching basic amplifier design at the undergraduate level need further development to match today's technological advances. The general class approach to amplifier design is analytical and heavily based on mathematical manipulations. However, the students' mathematical abilities are generally modest, creating a void in which…
The Latent Structure of Child Depression: A Taxometric Analysis
ERIC Educational Resources Information Center
Richey, J. Anthony; Schmidt, Norman B.; Lonigan, Christopher J.; Phillips, Beth M.; Catanzaro, Salvatore J.; Laurent, Jeff; Gerhardstein, Rebecca R.; Kotov, Roman
2009-01-01
Background: The current study examined the categorical versus continuous nature of child and adolescent depression among three samples of children and adolescents ranging from 5 to 19 years. Methods: Depression was measured using the Children's Depression Inventory (CDI). Indicators derived from the CDI were based on factor analytic research on…
DOT National Transportation Integrated Search
2010-01-01
Current AASHTO provisions for the conventional load rating of flat slab bridges rely on the equivalent strip method of analysis for determining live load effects; this is generally regarded as overly conservative by many professional engineers. A...
Measurement of toroidal vessel eddy current during plasma disruption on J-TEXT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, L. J.; Yu, K. X.; Zhang, M., E-mail: zhangming@hust.edu.cn
2016-01-15
In this paper, we have employed a thin, printed circuit board eddy current array in order to determine the radial distribution of the azimuthal component of the eddy current density at the surface of a steel plate. The eddy current in the steel plate can be calculated by analytical methods under the simplifying assumptions that the steel plate is infinitely large and the exciting current is of uniform distribution. The measurement on the steel plate shows that this method has high spatial resolution. Then, we extended this methodology to a toroidal geometry with the objective of determining the poloidal distribution of the toroidal component of the eddy current density associated with plasma disruption in a fusion reactor called J-TEXT. The preliminary measured result is consistent with the analysis and calculation results on the J-TEXT vacuum vessel.
Clemons, Kristina; Wiley, Rachel; Waverka, Kristin; Fox, James; Dziekonski, Eric; Verbeck, Guido F
2013-07-01
Here, we present a method of extracting drug residues from fingerprints via Direct Analyte-Probed Nanoextraction coupled to nanospray ionization-mass spectrometry (DAPNe-NSI-MS). This instrumental technique provides higher selectivity and lower detection limits than current methods, greatly reduces sample preparation, and does not compromise the integrity of latent fingerprints. Coupled with Raman microscopy, it is an advantageous supplement for the location and identification of trace particles. DAPNe uses a nanomanipulator for extraction and differing microscopies for localization of chemicals of interest. A capillary tip with the solvent of choice is placed in a nanopositioner. The surface to be analyzed is placed under a microscope, and a particle of interest is located. Using a pressure injector, the solvent is injected onto the surface, where it dissolves the analyte, and is then extracted back into the capillary tip. The solution is then directly analyzed via NSI-MS. Analyses of caffeine, cocaine, crystal methamphetamine, and ecstasy have been performed successfully.
Evaluation of Analysis Techniques for Fluted-Core Sandwich Cylinders
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.; Schultz, Marc R.
2012-01-01
Buckling-critical launch-vehicle structures require structural concepts that have high bending stiffness and low mass. Fluted-core, also known as truss-core, sandwich construction is one such concept. In an effort to identify an analysis method appropriate for the preliminary design of fluted-core cylinders, the current paper presents and compares results from several analysis techniques applied to a specific composite fluted-core test article. The analysis techniques are evaluated in terms of their ease of use and for their appropriateness at certain stages throughout a design analysis cycle (DAC). Current analysis techniques that provide accurate determination of the global buckling load are not readily applicable early in the DAC, such as during preliminary design, because they are too costly to run. An analytical approach that neglects transverse-shear deformation is easily applied during preliminary design, but the lack of transverse-shear deformation results in global buckling load predictions that are significantly higher than those from more detailed analysis methods. The current state of the art is either too complex to be applied for preliminary design, or is incapable of the accuracy required to determine global buckling loads for fluted-core cylinders. Therefore, it is necessary to develop an analytical method for calculating global buckling loads of fluted-core cylinders that includes transverse-shear deformations, and that can be easily incorporated in preliminary design.
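The role of transverse-shear deformation described above can be illustrated with the classical column analogy: the Euler buckling load is knocked down by shear flexibility (the Engesser correction), so a shear-rigid analysis always overpredicts the buckling load. This is an analogy with illustrative numbers, not an analysis of the paper's test article.

```python
import math

def euler_load(E, I, L):
    """Classical pinned-pinned Euler buckling load (no shear deformation)."""
    return math.pi**2 * E * I / L**2

def engesser_load(p_euler, shear_stiffness):
    """Shear-corrected buckling load (Engesser): P_cr = P_E / (1 + P_E / (k*G*A))."""
    return p_euler / (1.0 + p_euler / shear_stiffness)

# Hypothetical column: aluminum-like modulus, arbitrary section properties
p_e = euler_load(E=70e9, I=1e-6, L=2.0)          # shear-rigid prediction
p_cr = engesser_load(p_e, shear_stiffness=5e5)   # with finite shear stiffness
reduction = 1.0 - p_cr / p_e                     # fraction lost to shear flexibility
```

The same trend, a significantly lower buckling load once transverse shear is included, is what the paper reports for fluted-core cylinders.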
TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuemann, J; Grassberger, C; Paganetti, H
2014-06-15
Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated, and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1-2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain).
However, we recommend treatment-plan verification using Monte Carlo simulations for patients with complex geometries.
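The coverage and range metrics named above (D95 as a target coverage index, RMSD over matched dose arrays) are simple to compute. A minimal Python sketch with invented toy voxel doses; the 1.5% Monte Carlo offset below is an assumption for demonstration, not a result from the study:

```python
import math

def dose_percentile(doses, coverage):
    """Dose received by at least `coverage` fraction of voxels (e.g. D95).

    D95 is the minimum dose delivered to the best-covered 95% of the
    target volume, i.e. roughly the 5th percentile of the dose values.
    """
    ordered = sorted(doses)
    idx = int(math.floor((1.0 - coverage) * (len(ordered) - 1)))
    return ordered[idx]

def rmsd(analytical, monte_carlo):
    """Root mean square difference between two matched dose arrays."""
    n = len(analytical)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(analytical, monte_carlo)) / n)

# Toy target-voxel doses (Gy); MC assumed ~1.5% lower than analytical
analytical = [60.0, 61.0, 59.5, 60.5, 62.0, 58.0, 60.2, 59.8, 61.5, 60.8]
mc = [d * 0.985 for d in analytical]

d95_analytical = dose_percentile(analytical, 0.95)
d95_mc = dose_percentile(mc, 0.95)
range_metric = rmsd(analytical, mc)
```

The same percentile helper gives D90 and D50 by changing the coverage argument.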
Reagen, William K; Lindstrom, Kent R; Thompson, Kathy L; Flaherty, John M
2004-09-01
The widespread use of semi- and nonvolatile organofluorochemicals in industrial facilities, concern about their persistence, and relatively recent advancements in liquid chromatography/mass spectrometry (LC/MS) technology have led to the development of new analytical methods to assess potential worker exposure to airborne organofluorochemicals. Techniques were evaluated for the determination of 19 organofluorochemicals and for total fluorine in ambient air samples. Due to the potential biphasic nature of most of these fluorochemicals when airborne, Occupational Safety and Health Administration (OSHA) versatile sampler (OVS) tubes were used to simultaneously trap fluorochemical particulates and vapors from workplace air. Analytical methods were developed for OVS air samples to quantitatively analyze for total fluorine using oxygen bomb combustion/ion selective electrode and for 17 organofluorochemicals using LC/MS and gas chromatography/mass spectrometry (GC/MS). The experimental design for this validation was based on the National Institute of Occupational Safety and Health (NIOSH) Guidelines for Air Sampling and Analytical Method Development and Evaluation, with some revisions. The study design incorporated experiments to determine analytical recovery and stability, sampler capacity, the effect of some environmental parameters on recoveries, storage stability, limits of detection, precision, and accuracy. Fluorochemical mixtures were spiked onto each OVS tube over a range of 0.06-6 microg for each of 12 compounds analyzed by LC/MS and 0.3-30 microg for 5 compounds analyzed by GC/MS. These ranges allowed reliable quantitation at 0.001-0.1 mg/m3 in general for LC/MS analytes and 0.005-0.5 mg/m3 for GC/MS analytes when 60 L of air are sampled. The organofluorochemical exposure guideline (EG) is currently 0.1 mg/m3 for many analytes, with one exception being ammonium perfluorooctanoate (EG is 0.01 mg/m3).
Total fluorine results may be used to determine if the individual compounds quantified provide a suitable mass balance of total airborne organofluorochemicals based on known fluorine content. Improvements in precision and/or recovery as well as some additional testing would be needed to meet all NIOSH validation criteria. This study provided valuable information about the accuracy of this method for organofluorochemical exposure assessment.
Analytical methods in sphingolipidomics: Quantitative and profiling approaches in food analysis.
Canela, Núria; Herrero, Pol; Mariné, Sílvia; Nadal, Pedro; Ras, Maria Rosa; Rodríguez, Miguel Ángel; Arola, Lluís
2016-01-08
In recent years, sphingolipidomics has emerged as an interesting omic science that encompasses the study of the full sphingolipidome characterization, content, structure and activity in cells, tissues or organisms. Like other omics, it has the potential to impact biomarker discovery, drug development and systems biology knowledge. In particular, dietary food sphingolipids have gained considerable importance due to their extensively reported bioactivity. Because of the complexity of this lipid family and their diversity among foods, powerful analytical methodologies are needed for their study. The analytical tools developed in the past have been improved with the enormous advances made in recent years in mass spectrometry (MS) and chromatography, which allow the convenient and sensitive identification and quantitation of sphingolipid classes and form the basis of current sphingolipidomics methodologies. In addition, novel hyphenated nuclear magnetic resonance (NMR) strategies, new ionization strategies, and MS imaging are outlined as promising technologies to shape the future of sphingolipid analyses. This review traces the analytical methods of sphingolipidomics in food analysis concerning sample extraction, chromatographic separation, the identification and quantification of sphingolipids by MS and their structural elucidation by NMR. Copyright © 2015 Elsevier B.V. All rights reserved.
El-Yazbi, Amira F
2017-01-20
Sofosbuvir (SOFO) was approved by the U.S. Food and Drug Administration in 2013 for the treatment of hepatitis C virus infection, with enhanced antiviral potency compared with earlier analogs. Notwithstanding, all current editions of the pharmacopeias still do not present any analytical methods for the quantification of SOFO. Thus, rapid, simple, and ecofriendly methods for the routine analysis of commercial formulations of SOFO are desirable. In this study, five accurate methods for the determination of SOFO in pharmaceutical tablets were developed and validated. These methods include HPLC, capillary zone electrophoresis, HPTLC, and UV spectrophotometric and derivative spectrometry methods. The proposed methods proved to be rapid, simple, sensitive, selective, and accurate analytical procedures suitable for the reliable determination of SOFO in pharmaceutical tablets. An analysis of variance test with P-value > 0.05 confirmed that there were no significant differences between the proposed assays. Thus, any of these methods can be used for the routine analysis of SOFO in commercial tablets.
Ongay, Sara; Hendriks, Gert; Hermans, Jos; van den Berge, Maarten; ten Hacken, Nick H T; van de Merbel, Nico C; Bischoff, Rainer
2014-01-24
In spite of the data suggesting the potential of urinary desmosine (DES) and isodesmosine (IDS) as biomarkers for elevated lung elastic fiber turnover, further validation in large-scale studies of COPD populations, as well as the analysis of longitudinal samples, is required. Validated analytical methods that allow the accurate and precise quantification of DES and IDS in human urine are mandatory in order to properly evaluate the outcome of such clinical studies. In this work, we present the development and full validation of two methods that allow DES and IDS measurement in human urine, one for the free and one for the total (free+peptide-bound) forms. To this end we compared the two principal approaches that are used for the absolute quantification of endogenous compounds in biological samples: analysis against calibrators containing authentic analyte in surrogate matrix or containing surrogate analyte in authentic matrix. The validated methods were employed for the analysis of a small set of samples including healthy never-smokers, healthy current-smokers and COPD patients. This is the first time that the analysis of urinary free DES, free IDS, total DES, and total IDS has been fully validated and that the surrogate analyte approach has been evaluated for their quantification in biological samples. Results indicate that the presented methods have the necessary quality and level of validation to assess the potential of urinary DES and IDS levels as biomarkers for the progression of COPD and the effect of therapeutic interventions. Copyright © 2014 Elsevier B.V. All rights reserved.
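Either calibration approach above reduces, computationally, to fitting a calibration line and reading unknown responses back off it. A minimal sketch with hypothetical calibrator levels and detector responses (all numeric values invented for illustration):

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for a calibration curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return slope, intercept

def back_calculate(response, slope, intercept):
    """Concentration read back from the calibration line."""
    return (response - intercept) / slope

# Hypothetical calibrators: surrogate-analyte concentration (ng/mL) vs. response
conc = [0.0, 5.0, 10.0, 25.0, 50.0]
resp = [0.02, 1.01, 2.03, 5.00, 10.05]  # roughly 0.2 response units per ng/mL

m, b = fit_line(conc, resp)
unknown = back_calculate(4.1, m, b)  # response measured in a urine sample
```

The surrogate-analyte variant differs only in what the calibrators contain, not in this arithmetic.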
Parameter Analysis for Arc Snubber of EAST Neutral Beam Injector
NASA Astrophysics Data System (ADS)
Wang, Haitian; Li, Ge; Cao, Liang; Dang, Xiaoqiang; Fu, Peng
2010-08-01
According to the B-H curve and the structural dimensions of the snubber given by the Fink-Baker method, the inductive voltage and the eddy current of any core tape, together with the thickness of its saturated regions, are derived for the moment when accelerator breakdown occurs. By Ampère's law, the eddy current in each core lamination equals the arc current, so the relation between the thicknesses of the saturated regions of different laminations can be deduced and the total equivalent resistance of the snubber obtained. The transient eddy current model based on the stray capacitance and the equivalent resistance is analyzed, and the solving process is given in detail. The exponential time constant and the arc current are obtained. Then, the maximum width of the lamination and the minimum thickness of the core tape are determined. The experimental time constant of the eddy current, obtained with or without the bias current, is approximately the same as that given by the analytical method, which confirms the accuracy of the adopted assumptions and the analysis method.
Analysis and numerical modelling of eddy current damper for vibration problems
NASA Astrophysics Data System (ADS)
Irazu, L.; Elejabarrieta, M. J.
2018-07-01
This work discusses a contactless eddy current damper, which is used to attenuate structural vibration. Eddy currents can remove energy from dynamic systems without any contact and, thus, without adding mass or modifying the rigidity of the structure. An experimental modal analysis of a cantilever beam in the absence of and under a partial magnetic field is conducted in the 0-1 kHz bandwidth. The results show that the eddy current phenomenon can attenuate the vibration of the entire structure without modifying its natural frequencies or mode shapes. In this study, a new inverse method to numerically determine the dynamic properties of the contactless eddy current damper is proposed. The proposed inverse method and the eddy current model, based on a linear viscous force, are validated by a practical application. The numerically obtained transfer function correlates with the experimental one, showing good agreement over the entire 0-1 kHz bandwidth. The proposed method provides an easy and quick tool to model and predict the dynamic behaviour of the contactless eddy current damper, thereby avoiding the use of complex analytical models.
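The linear-viscous-force model mentioned above has a simple single-degree-of-freedom analogue: adding a viscous damping coefficient lowers the resonance peak of the receptance while leaving the undamped natural frequency untouched. A sketch with invented parameter values (not those of the beam in the study):

```python
import math

def receptance(omega, m, k, c):
    """Magnitude of the displacement/force transfer function of a
    single-degree-of-freedom system: |1 / (k - m*omega^2 + i*c*omega)|."""
    return 1.0 / math.hypot(k - m * omega ** 2, c * omega)

# Hypothetical beam mode: m = 0.1 kg, k = 4000 N/m
m, k = 0.1, 4000.0
c_structural = 0.5   # light inherent damping (N*s/m)
c_eddy = 4.0         # additional linear viscous eddy current damping

wn = math.sqrt(k / m)  # undamped natural frequency (rad/s); unchanged by c
peak_off = receptance(wn, m, k, c_structural)            # damper absent
peak_on = receptance(wn, m, k, c_structural + c_eddy)    # damper active
```

At resonance the stiffness and inertia terms cancel, so the peak magnitude is 1/(c*wn): increasing c attenuates the peak without shifting wn, mirroring the experimental observation above.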
[Sample preparation and bioanalysis in mass spectrometry].
Bourgogne, Emmanuel; Wagner, Michel
2015-01-01
The quantitative analysis of compounds of clinical interest of low molecular weight (<1000 Da) in biological fluids is currently, in most cases, performed by liquid chromatography-mass spectrometry (LC-MS). Analysis of these compounds in biological fluids (plasma, urine, saliva, hair...) is a difficult task requiring sample preparation. Sample preparation is a crucial part of chemical/biological analysis and is, in a sense, considered the bottleneck of the whole analytical process. The main objectives of sample preparation are the removal of potential interferences, analyte preconcentration, and converting (if needed) the analyte into a form more suitable for detection or separation. Without chromatographic separation, endogenous compounds and co-eluted products may degrade the performance of a quantitative mass spectrometry method. This work focuses on three distinct parts. First, quantitative bioanalysis is defined, along with the different matrices and sample preparation techniques currently used in mass spectrometric bioanalysis of small molecules of clinical interest in biological fluids. Second, the goals of sample preparation are described. Finally, sample preparation strategies are discussed, whether applied directly ("dilute and shoot") or after precipitation.
Rota, Paola; Anastasia, Luigi; Allevi, Pietro
2015-05-07
The current analytical protocol used for the GC-MS determination of free or 1,7-lactonized natural sialic acids (Sias), as heptafluorobutyrates, overlooks several transformations. Using authentic reference standards and by combining GC-MS and NMR analyses, flaws in the analytical protocol were pinpointed and elucidated, thus establishing the scope and limitations of the method. It was demonstrated that (a) Sias 1,7-lactones, even if present in biological samples, decompose under the acidic hydrolysis conditions used for their release; (b) Sias 1,7-lactones are unpredicted artifacts, accidentally generated from their parent acids; (c) the N-acetyl group is quantitatively exchanged with that of the derivatizing perfluorinated anhydride; (d) the partial or complete failure of the Sias esterification-step with diazomethane leads to the incorrect quantification and structure attribution of all free Sias. While these findings prompt an urgent correction and improvement of the current analytical protocol, they could be instrumental for a critical revision of many incorrect claims reported in the literature.
Ogawa, Kuniyasu; Sasaki, Tatsuyoshi; Yoneda, Shigeki; Tsujinaka, Kumiko; Asai, Ritsuko
2018-05-17
In order to increase the current density generated in a PEFC (polymer electrolyte fuel cell), a method for measuring the spatial distribution of both the current and the water content of the MEA (membrane electrode assembly) is necessary. Based on the frequency shifts of NMR (nuclear magnetic resonance) signals acquired from the water contained in the MEA using 49 NMR coils in a 7 × 7 arrangement inserted in the PEFC, a method for measuring the two-dimensional spatial distribution of electric current generated in a unit cell with a power generation area of 140 mm × 160 mm was devised. We also developed an inverse analysis method to determine the two-dimensional electric current distribution that can be applied to actual PEFC connections. Two analytical techniques, namely coarse graining of segments and stepwise search, were used to shorten the calculation time required for inverse analysis of the electric current map. Using this method and techniques, spatial distributions of electric current and water content in the MEA were obtained when the PEFC generated electric power at 100 A. Copyright © 2018 Elsevier Inc. All rights reserved.
Finley, Anna J; Tang, David; Schmeichel, Brandon J
2015-01-01
Prior research has found that persons who favor more analytic modes of thought are less religious. We propose that individual differences in analytic thought are associated with reduced religious beliefs particularly when analytic thought is measured (hence, primed) first. The current study provides a direct replication of prior evidence that individual differences in analytic thinking are negatively related to religious beliefs when analytic thought is measured before religious beliefs. When religious belief is measured before analytic thinking, however, the negative relationship is reduced to non-significance, suggesting that the link between analytic thought and religious belief is more tenuous than previously reported. The current study suggests that whereas inducing analytic processing may reduce religious belief, more analytic thinkers are not necessarily less religious. The potential for measurement order to inflate the inverse correlation between analytic thinking and religious beliefs deserves additional consideration.
Advances in Assays and Analytical Approaches for Botulinum Toxin Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grate, Jay W.; Ozanich, Richard M.; Warner, Marvin G.
2010-08-04
Methods to detect botulinum toxin, the most poisonous substance known, are reviewed. Current assays are being developed with two main objectives in mind: 1) to obtain sufficiently low detection limits to replace the mouse bioassay with an in vitro assay, and 2) to develop rapid assays for screening purposes that are as sensitive as possible while requiring an hour or less to process the sample and obtain the result. This review emphasizes the diverse analytical approaches and devices that have been developed over the last decade, while also briefly reviewing representative older immunoassays to provide background and context.
Comparative study of I- V methods to extract Au/FePc/p-Si Schottky barrier diode parameters
NASA Astrophysics Data System (ADS)
Oruç, Çiğdem; Altındal, Ahmet
2018-01-01
So far, various methods have been proposed to extract Schottky diode parameters from measured current-voltage characteristics. In this work, a Schottky barrier diode with the structure Au/2(3),9(10),16(17),23(24)-tetra(4-(4-methoxyphenyl)-8-methylcoumarin-7 oxy) phthalocyaninatoiron(II) (FePc)/p-Si was fabricated and current-voltage measurements were carried out on it. In addition, current-voltage measurements were also performed on an Au/p-Si structure, without FePc, to clarify the influence of the presence of an interface layer on device performance. The measured current-voltage characteristics indicate that the interface properties of a Schottky barrier diode can be controlled by the presence of an organic interface layer. It is found that the room-temperature barrier height of the Au/FePc/p-Si structure is larger than that of the Au/p-Si structure. The obtained forward-bias current-voltage characteristics of the Au/FePc/p-Si device were analysed by five different analytical methods. It is found that the extracted values of the SBD parameters strongly depend on the method used.
A LITERATURE REVIEW OF WIPE SAMPLING METHODS ...
Wipe sampling is an important technique for the estimation of contaminant deposition in buildings, homes, or outdoor surfaces as a source of possible human exposure. Numerous methods of wipe sampling exist, and each method has its own specification for the type of wipe, wetting solvent, and determinative step to be used, depending upon the contaminant of concern. The objective of this report is to concisely summarize the findings of a literature review that was conducted to identify the state-of-the-art wipe sampling techniques for a target list of compounds. This report describes the methods used to perform the literature review; a brief review of wipe sampling techniques in general; an analysis of physical and chemical properties of each target analyte; an analysis of wipe sampling techniques for the target analyte list; and a summary of the wipe sampling techniques for the target analyte list, including existing data gaps. In general, no overwhelming consensus can be drawn from the current literature on how to collect a wipe sample for the chemical warfare agents, organophosphate pesticides, and other toxic industrial chemicals of interest to this study. Different methods, media, and wetting solvents have been recommended and used by various groups and different studies. For many of the compounds of interest, no specific wipe sampling methodology has been established for their collection. Before a wipe sampling method (or methods) can be established for the compounds of interest…
An update on pharmaceutical film coating for drug delivery.
Felton, Linda A; Porter, Stuart C
2013-04-01
Pharmaceutical coating processes have generally been transformed from what was essentially an art form in the mid-twentieth century to a much more technology-driven process. This review article provides a basic overview of current film coating processes, including a discussion on polymer selection, coating formulation additives and processing equipment. Substrate considerations for pharmaceutical coating processes are also presented. While polymeric coating operations are commonplace in the pharmaceutical industry, film coating processes are still not fully understood, which presents serious challenges with current regulatory requirements. Novel analytical technologies and various modeling techniques that are being used to better understand film coating processes are discussed. This review article also examines the challenges of implementing process analytical technologies in coating operations, active pharmaceutical ingredients in polymer film coatings, the use of high-solids coating systems and continuous coating and other novel coating application methods.
NASA Astrophysics Data System (ADS)
Fontchastagner, Julien; Lubin, Thierry; Mezani, Smaïl; Takorabet, Noureddine
2018-03-01
This paper presents a design optimization of an axial-flux eddy-current magnetic coupling. The design procedure is based on a torque formula derived from a 3D analytical model and a population algorithm method. The main objective of this paper is to determine the best design in terms of magnets volume in order to transmit a torque between two movers, while ensuring a low slip speed and a good efficiency. The torque formula is very accurate and computationally efficient, and is valid for any slip speed values. Nevertheless, in order to solve more realistic problems, and then, take into account the thermal effects on the torque value, a thermal model based on convection heat transfer coefficients is also established and used in the design optimization procedure. Results show the effectiveness of the proposed methodology.
Sharma, Teenu; Khurana, Rajneet Kaur; Jain, Atul; Katare, O P; Singh, Bhupinder
2018-05-01
The current research work envisages an analytical quality by design-enabled development of a simple, rapid, sensitive, specific, robust and cost-effective stability-indicating reversed-phase high-performance liquid chromatographic method for determining stress-induced forced-degradation products of sorafenib tosylate (SFN). An Ishikawa fishbone diagram was constructed to embark upon the analytical target profile and critical analytical attributes, i.e. peak area, theoretical plates, retention time and peak tailing. Factor screening using Taguchi orthogonal arrays and quality risk assessment studies carried out using failure mode effect analysis aided the selection of critical method parameters, i.e. mobile phase ratio and flow rate, potentially affecting the chosen critical analytical attributes. Systematic optimization of the chosen critical method parameters using response surface methodology was carried out employing a two-factor, three-level, 13-run face-centered cubic design. A method operable design region was earmarked providing optimum method performance using numerical and graphical optimization. The optimum method employed a mobile phase composition of acetonitrile and water (containing orthophosphoric acid, pH 4.1) at 65:35 v/v at a flow rate of 0.8 mL/min with UV detection at 265 nm using a C18 column. Validation studies confirmed good efficiency and sensitivity of the developed method for analysis of SFN in mobile phase as well as in human plasma matrix. The forced degradation studies were conducted under different recommended stress conditions as per ICH Q1A (R2). Mass spectroscopy studies showed that SFN degrades in strongly acidic, alkaline and oxidative hydrolytic conditions at elevated temperature, while the drug was per se found to be photostable. Oxidative hydrolysis using 30% H2O2 showed maximum degradation with products at retention times of 3.35, 3.65, 4.20 and 5.67 min.
The absence of any significant change in the retention time of SFN and degradation products, formed under different stress conditions, ratified selectivity and specificity of the systematically developed method. Copyright © 2017 John Wiley & Sons, Ltd.
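The two-factor, three-level, 13-run face-centered design used above has a simple coded-run structure: 4 factorial corners, 4 face centres, and 5 replicated centre points. A sketch that enumerates the coded runs (the mapping of -1/0/+1 to actual mobile phase ratios and flow rates is not reproduced here):

```python
from itertools import product

def face_centered_design(n_factors=2, center_runs=5):
    """Coded runs of a face-centered central composite design.

    Corners (+/-1 in every factor), face centres (+/-1 in one factor,
    0 elsewhere), and replicated centre points. For two factors with
    five centre replicates this yields the 13-run layout.
    """
    corners = list(product((-1, 1), repeat=n_factors))
    faces = []
    for i in range(n_factors):
        for level in (-1, 1):
            pt = [0] * n_factors
            pt[i] = level
            faces.append(tuple(pt))
    centers = [(0,) * n_factors] * center_runs
    return corners + faces + centers

runs = face_centered_design()  # 13 coded (mobile phase ratio, flow rate) runs
```

Each coded run is then translated to real factor settings and executed; the responses feed the quadratic response-surface model.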
Irregular analytical errors in diagnostic testing - a novel concept.
Vogeser, Michael; Seger, Christoph
2018-02-23
In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control; the quality control measurements act only at the batch level. Many effects and interferences associated with an individual diagnostic sample can compromise any analyte, whether the data derived are quantitative or qualitative. It is obvious that a quality assurance process based on quality control samples is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term: the irregular (individual) analytical error. Practically, this term can be applied to any analytical assay that is traceable to a reference measurement system. For an individual sample, an irregular analytical error is defined as an inaccuracy (the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by the measurement uncertainty of the routine assay operating within the accepted limitations of the associated process quality control measurements. The tolerated deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or by an individual, single-sample-associated processing error in the analytical process.
Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.
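The definition above (a deviation exceeding a linear combination of process measurement uncertainty and method bias) can be turned into a simple screening rule. A sketch under the assumption of a coverage factor k = 2 for the uncertainty term; all numeric values are invented:

```python
def is_irregular_error(result, reference, u_process, bias, k=2.0):
    """Flag a result whose deviation from the reference measurement
    procedure exceeds what routine-assay performance can explain.

    The tolerated deviation combines the expanded process measurement
    uncertainty (coverage factor k, an assumption here) with the known
    method bias versus the reference measurement system.
    """
    tolerated = k * u_process + abs(bias)
    return abs(result - reference) > tolerated

# Hypothetical routine assay: u = 0.05 mmol/L, bias +0.02 mmol/L vs. reference
flag_ok = is_irregular_error(5.10, 5.00, 0.05, 0.02)   # within tolerated limits
flag_bad = is_irregular_error(5.60, 5.00, 0.05, 0.02)  # irregular analytical error
```

Such a check presupposes a reference measurement procedure result for the sample, which, as noted below, is rarely available in routine practice.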
Analyzing Matrices of Meta-Analytic Correlations: Current Practices and Recommendations
ERIC Educational Resources Information Center
Sheng, Zitong; Kong, Wenmo; Cortina, Jose M.; Hou, Shuofei
2016-01-01
Researchers have become increasingly interested in conducting analyses on meta-analytic correlation matrices. Methodologists have provided guidance and recommended practices for the application of this technique. The purpose of this article is to review current practices regarding analyzing meta-analytic correlation matrices, to identify the gaps…
Quantitative Determination of Caffeine in Beverages Using a Combined SPME-GC/MS Method
NASA Astrophysics Data System (ADS)
Pawliszyn, Janusz; Yang, Min J.; Orton, Maureen L.
1997-09-01
Solid-phase microextraction (SPME) combined with gas chromatography/mass spectrometry (GC/MS) has been applied to the analysis of various caffeinated beverages. Unlike the current methods, this technique is solvent free and requires no pH adjustments. The simplicity of the SPME-GC/MS method lends itself to a good undergraduate laboratory practice. This publication describes the analytical conditions and presents the data for determination of caffeine in coffee, tea, and coke. Quantitation by isotopic dilution is also illustrated.
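Quantitation by isotopic dilution, as mentioned above, amounts to scaling a known spiked amount of labelled standard by the analyte/label peak-area ratio. A minimal sketch with invented peak areas, assuming a caffeine-d3 spike and a response factor near unity (both assumptions for illustration):

```python
def isotope_dilution_amount(area_analyte, area_label, label_amount_ug, rf=1.0):
    """Analyte amount from the analyte/internal-standard peak-area ratio.

    A known amount of isotopically labelled analyte is spiked into the
    sample; because both forms behave nearly identically through SPME
    and GC/MS, the area ratio times the spiked amount gives the analyte
    amount (rf = relative response factor, ~1 for an isotopologue).
    """
    return (area_analyte / area_label) * label_amount_ug / rf

# Hypothetical run: 10 ug caffeine-d3 spiked; extracted-ion peak areas below
amount = isotope_dilution_amount(120000.0, 50000.0, 10.0)
```

Because analyte and label co-extract and co-elute, matrix effects largely cancel in the ratio, which is why the method needs no pH adjustment or solvent cleanup.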
EvoGraph: On-The-Fly Efficient Mining of Evolving Graphs on GPU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Song, Shuaiwen
With the prevalence of the World Wide Web and social networks, there has been a growing interest in high performance analytics for constantly-evolving dynamic graphs. Modern GPUs provide a massive amount of parallelism for efficient graph processing, but challenges remain due to their lack of support for the near real-time streaming nature of dynamic graphs. Specifically, due to the current high volume and velocity of graph data combined with the complexity of user queries, traditional processing methods that first store the updates and then repeatedly run static graph analytics on a sequence of versions or snapshots are deemed undesirable and computationally infeasible on GPUs. We present EvoGraph, a highly efficient and scalable GPU-based dynamic graph analytics framework.
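The contrast drawn above between snapshot recomputation and on-the-fly processing can be illustrated in miniature, on the CPU, with a metric as simple as vertex degree: the streaming version touches only the endpoints of each new edge instead of rescanning the whole graph per snapshot. This toy is not EvoGraph's algorithm, only the general idea:

```python
from collections import defaultdict

class StreamingDegree:
    """Maintain vertex degrees incrementally over a stream of edge
    insertions, instead of re-running a static pass per snapshot."""

    def __init__(self):
        self.degree = defaultdict(int)
        self.edges = []

    def insert(self, u, v):
        # O(1) work per update: only the two endpoints are touched
        self.edges.append((u, v))
        self.degree[u] += 1
        self.degree[v] += 1

def static_degree(edges):
    """Snapshot-style recomputation from scratch: O(|E|) per query."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

g = StreamingDegree()
for edge in [(0, 1), (1, 2), (2, 0), (2, 3)]:
    g.insert(*edge)
```

The two approaches agree on the final answer; the difference is the per-update cost, which is what makes streaming analytics attractive on high-velocity graph data.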
Revisiting the positive DC corona discharge theory: Beyond Peek's and Townsend's law
NASA Astrophysics Data System (ADS)
Monrolin, Nicolas; Praud, Olivier; Plouraboué, Franck
2018-06-01
The classical positive corona discharge theory in a cylindrical axisymmetric configuration is revisited in order to find analytically the influence of gas properties and thermodynamic conditions on the corona current. The matched asymptotic expansion of Durbin and Turyn [J. Phys. D: Appl. Phys. 20, 1490-1495 (1987)] of a simplified but self-consistent problem is performed and explicit analytical solutions are derived. The mathematical derivation enables us to express a new positive DC corona current-voltage characteristic, choosing either a dimensionless or dimensional formulation. In dimensional variables, the current-voltage law and the corona inception voltage explicitly depend on the electrode size and physical gas properties such as ionization and photoionization parameters. The analytical predictions are successfully compared with experiments and with Peek's and Townsend's laws. An analytical expression of the corona inception voltage φ_on is proposed, which depends on the known values of physical parameters without adjustable parameters. As a proof of consistency, the classical Townsend current-voltage law I = Cφ(φ − φ_on) is retrieved by linearizing the non-dimensional analytical solution. A brief parametric study showcases the interest of this analytical current model, especially for exploring small corona wires or considering various thermodynamic conditions.
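Since Townsend's law I = Cφ(φ − φ_on) becomes linear in φ once the current is divided by the voltage (I/φ = Cφ − Cφ_on), both parameters can be recovered from current-voltage data by ordinary least squares. A sketch on synthetic data with invented parameter values:

```python
def fit_townsend(voltages, currents):
    """Estimate C and phi_on in Townsend's law I = C*phi*(phi - phi_on)
    by linear least squares on y = I/phi = C*phi - C*phi_on."""
    y = [i / v for i, v in zip(currents, voltages)]
    n = len(voltages)
    mx, my = sum(voltages) / n, sum(y) / n
    slope = (sum((v - mx) * (yi - my) for v, yi in zip(voltages, y))
             / sum((v - mx) ** 2 for v in voltages))
    intercept = my - slope * mx
    return slope, -intercept / slope  # C, phi_on

# Synthetic data generated from assumed parameters C = 2e-9 A/kV^2, phi_on = 9 kV
C_true, phi_on_true = 2e-9, 9.0
phis = [10.0, 12.0, 14.0, 16.0, 18.0]
currents = [C_true * p * (p - phi_on_true) for p in phis]

C_est, phi_on_est = fit_townsend(phis, currents)
```

On noiseless synthetic data the fit is exact; on measured data the recovered φ_on can be compared against the analytical inception-voltage expression proposed in the paper.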
Analytical characterization of wine and its precursors by capillary electrophoresis.
Gomez, Federico J V; Monasterio, Romina P; Vargas, Verónica Carolina Soto; Silva, María F
2012-08-01
The accurate determination of marker chemical species in grapes, musts, and wines presents a unique analytical challenge with high impact on diverse areas of knowledge such as health, plant physiology, and economics. Capillary electromigration techniques have emerged as a powerful tool, allowing the separation and identification of highly polar compounds that cannot be easily separated by traditional HPLC methods, providing complementary information and permitting the simultaneous analysis of analytes of differing nature in a single run. The main advantage of CE over traditional methods for wine analysis is that in most cases samples require no treatment other than filtration. The purpose of this article is to present a review of capillary electromigration methods applied to the analysis of wine and its precursors over the last decade. The current state of the art of the topic is evaluated, with special emphasis on the natural compounds that have allowed wine to be considered a functional food. The most representative compounds reviewed are phenolic compounds, amino acids, proteins, elemental species, mycotoxins, and organic acids. Finally, a discussion of future trends in the role of capillary electrophoresis in the analytical characterization of wines for routine analysis and wine classification, as well as multidisciplinary aspects of the so-called "from soil to glass" chain, is presented. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Review: visual analytics of climate networks
NASA Astrophysics Data System (ADS)
Nocke, T.; Buschmann, S.; Donges, J. F.; Marwan, N.; Schulz, H.-J.; Tominski, C.
2015-09-01
Network analysis has become an important approach in studying complex spatiotemporal behaviour within geophysical observation and simulation data. This new field produces increasing numbers of large geo-referenced networks to be analysed. Particular focus lies currently on the network analysis of the complex statistical interrelationship structure within climatological fields. The standard procedure for such network analyses is the extraction of network measures in combination with static standard visualisation methods. Existing interactive visualisation methods and tools for geo-referenced network exploration are often either not known to the analyst or their potential is not fully exploited. To fill this gap, we illustrate how interactive visual analytics methods in combination with geovisualisation can be tailored for visual climate network investigation. Therefore, the paper provides a problem analysis relating the multiple visualisation challenges to a survey undertaken with network analysts from the research fields of climate and complex systems science. Then, as an overview for the interested practitioner, we review the state-of-the-art in climate network visualisation and provide an overview of existing tools. As a further contribution, we introduce the visual network analytics tools CGV and GTX, providing tailored solutions for climate network analysis, including alternative geographic projections, edge bundling, and 3-D network support. Using these tools, the paper illustrates the application potentials of visual analytics for climate networks based on several use cases including examples from global, regional, and multi-layered climate networks.
Lee, Hyunyeol; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je; Park, Jaeseok
2016-05-01
To develop a novel, current-controlled alternating steady-state free precession (SSFP)-based conductivity imaging method and corresponding MR signal models to estimate current-induced magnetic flux density (Bz ) and conductivity distribution. In the proposed method, an SSFP pulse sequence, which is in sync with alternating current pulses, produces dual oscillating steady states while yielding nonlinear relation between signal phase and Bz . A ratiometric signal model between the states was analytically derived using the Bloch equation, wherein Bz was estimated by solving a nonlinear inverse problem for conductivity estimation. A theoretical analysis on the signal-to-noise ratio of Bz was given. Numerical and experimental studies were performed using SSFP-FID and SSFP-ECHO with current pulses positioned either before or after signal encoding to investigate the feasibility of the proposed method in conductivity estimation. Given all SSFP variants herein, SSFP-FID with alternating current pulses applied before signal encoding exhibits the highest Bz signal-to-noise ratio and conductivity contrast. Additionally, compared with conventional conductivity imaging, the proposed method benefits from rapid SSFP acquisition without apparent loss of conductivity contrast. We successfully demonstrated the feasibility of the proposed method in estimating current-induced Bz and conductivity distribution. It can be a promising, rapid imaging strategy for quantitative conductivity imaging. © 2015 Wiley Periodicals, Inc.
Dietary fibre: challenges in production and use of food composition data.
Westenbrink, Susanne; Brunt, Kommer; van der Kamp, Jan-Willem
2013-10-01
Dietary fibre is a heterogeneous group of components for which several definitions and analytical methods were developed over the past decades, causing confusion among users and producers of dietary fibre data in food composition databases. An overview is given of current definitions and analytical methods. Some of the issues related to maintaining dietary fibre values in food composition databases are discussed. Newly developed AOAC methods (2009.01 or modifications) yield higher dietary fibre values, due to the inclusion of low molecular weight dietary fibre and resistant starch. For food composition databases procedures need to be developed to combine 'classic' and 'new' dietary fibre values since re-analysing all foods on short notice is impossible due to financial restrictions. Standardised value documentation procedures are important to evaluate dietary fibre values from several sources before exchanging and using the data, e.g. for dietary intake research. Copyright © 2012 Elsevier Ltd. All rights reserved.
Screening of 23 β-lactams in foodstuffs by LC-MS/MS using an alkaline QuEChERS-like extraction.
Bessaire, Thomas; Mujahid, Claudia; Beck, Andrea; Tarres, Adrienne; Savoy, Marie-Claude; Woo, Pei-Mun; Mottier, Pascal; Desmarchelier, Aurélien
2018-04-01
A fast and robust high performance LC-MS/MS screening method was developed for the analysis of β-lactam antibiotics in foods of animal origin: eggs, raw milk, processed dairy ingredients, infant formula, and meat- and fish-based products including baby foods. QuEChERS extraction with some adaptations enabled 23 drugs to be simultaneously monitored. Screening target concentrations were set at levels adequate to ensure compliance with current European, Chinese, US and Canadian regulations. The method was fully validated according to the European Community Reference Laboratories Residues Guidelines using 93 food samples of different composition. False-negative and false-positive rates were below 5% for all analytes. The method is adequate for use in high-routine laboratories. A 1-year study was additionally conducted to assess the stability of the 23 analytes in the working standard solution.
NASA Technical Reports Server (NTRS)
Mudgett, Paul D.; Schultz, John R.; Sauer, Richard L.
1992-01-01
Until 1989, ion chromatography (IC) was the baseline technology selected for the Specific Ion Analyzer, an in-flight inorganic water quality monitor being designed for Space Station Freedom. Recent developments in capillary electrophoresis (CE) may offer significant savings in consumables, power consumption, and weight/volume allocation relative to IC technology. A thorough evaluation of CE's analytical capability, however, is necessary before one of the two techniques is chosen. Unfortunately, analytical methods currently available for inorganic CE are unproven for NASA's target list of anions and cations. Thus, CE electrolyte chemistry and methods to measure the target contaminants must first be identified and optimized. This paper reports the status of a study to evaluate CE's capability with regard to inorganic and carboxylate anions, alkali and alkaline earth cations, and transition metal cations. Preliminary results indicate that CE has impressive selectivity and trace sensitivity, although considerable methods development remains to be performed.
Jordan, Gregor; Onami, Ichio; Heinrich, Julia; Staack, Roland F
2017-11-01
Assessment of active drug exposure of biologics may be crucial for drug development. Typically, ligand-binding assay methods are used to provide free/active drug concentrations. To what extent hybrid LC-MS/MS procedures enable correct 'active' drug quantification is currently under consideration. Experimental & results: The relevance of appropriate extraction conditions was evaluated by a hybrid target-capture immuno-affinity LC-MS/MS method using total and free/active quality controls (QCs). The rapid extraction (10 min) provided correct results, whereas overnight incubation resulted in significant overestimation of the free/active drug (monoclonal antibody) concentration. Conventional total QCs were inappropriate for determining optimal method conditions, in contrast to free/active QCs. The 'free/active analyte QC concept' enables development of appropriate extraction conditions for correct active drug quantification by hybrid LC-MS/MS.
Consistent approach to describing aircraft HIRF protection
NASA Technical Reports Server (NTRS)
Rimbey, P. R.; Walen, D. B.
1995-01-01
The high intensity radiated fields (HIRF) certification process as currently implemented comprises an inconsistent combination of factors that tend to emphasize worst-case scenarios in assessing commercial airplane certification requirements. By examining these factors, which include the process definition, the external HIRF environment, the aircraft coupling and corresponding internal fields, and methods of measuring equipment susceptibilities, activities leading to an approach for appraising airplane vulnerability to HIRF are proposed. This approach utilizes technically based criteria to evaluate the nature of the threat, including the probability of encountering the external HIRF environment. No single test or analytic method comprehensively addresses the full HIRF threat frequency spectrum. Additional tools such as statistical methods must be adopted to arrive at more realistic requirements reflecting commercial aircraft vulnerability to the HIRF threat. Test and analytic data are provided to support the conclusions of this report. This work was performed under NASA contract NAS1-19360, Task 52.
Current matrix element in HAL QCD's wavefunction-equivalent potential method
NASA Astrophysics Data System (ADS)
Watanabe, Kai; Ishii, Noriyoshi
2018-04-01
We give a formula for calculating a matrix element of a conserved current in the effective quantum mechanics defined by the wavefunction-equivalent potentials proposed by the HAL QCD collaboration. As a first step, a non-relativistic field theory with two-channel coupling is considered as the original theory, for which a wavefunction-equivalent HAL QCD potential is obtained in closed analytic form. The external field method is used to derive the formula by demanding that the result agree with the original theory. With this formula, the matrix element is obtained by sandwiching the effective current operator between the left and right eigenfunctions of the effective Hamiltonian associated with the HAL QCD potential. In addition to the naive one-body current, the effective current operator contains an additional two-body term emerging from the degrees of freedom that have been integrated out.
NASA Technical Reports Server (NTRS)
Papazian, Peter B.; Perala, Rodney A.; Curry, John D.; Lankford, Alan B.; Keller, J. David
1988-01-01
Using three different current injection methods and a simple voltage probe, transfer impedances for Solid Rocket Motor (SRM) joints, wire meshes, aluminum foil, Thorstrand, and a graphite composite motor case were measured. In each case, the surface current distribution for the particular current injection device was calculated analytically or by finite difference methods. The results of these calculations were used to generate a geometric factor, the ratio of total injected current to surface current density. The results were validated in several ways. For wire mesh measurements, results showed good agreement with calculated results for a 14 by 18 Al screen. SRM joint impedances were independently verified. The filament-wound case measurements were validated only to the extent that their curve shape agrees with the expected form of transfer impedance for a homogeneous slab excited by a plane-wave source.
Duester, Lars; Fabricius, Anne-Lena; Jakobtorweihen, Sven; Philippe, Allan; Weigl, Florian; Wimmer, Andreas; Schuster, Michael; Nazar, Muhammad Faizan
2016-11-01
Coacervate-based techniques are intensively used in environmental analytical chemistry to enrich and extract different kinds of analytes. Most methods focus on the total content or the speciation of inorganic and organic substances; size fractionation is less commonly addressed. Within coacervate-based techniques, cloud point extraction (CPE) is characterized by a phase separation of non-ionic surfactants dispersed in an aqueous solution when the respective cloud point temperature is exceeded. In this context, the feature article raises the following question: May CPE in future studies serve as a key tool (i) to enrich and extract nanoparticles (NPs) from complex environmental matrices prior to analyses and (ii) to preserve the colloidal status of unstable environmental samples? With respect to engineered NPs, a significant gap between environmental concentrations and size- and element-specific analytical capabilities is still visible. CPE may support efforts to overcome this "concentration gap" via analyte enrichment. In addition, most environmental colloidal systems are known to be unstable, dynamic, and sensitive to changes in environmental conditions during sampling and sample preparation. This poses a so-far-unsolved "sample preparation dilemma" in the analytical process. The authors are of the opinion that CPE-based methods have the potential to preserve the colloidal status of these unstable samples. Focusing on NPs, this feature article aims to support the discussion on creating a convention called the "CPE extractable fraction" by connecting current knowledge of CPE mechanisms and available applications, via the visible uncertainties and available modeling approaches, with the potential future benefits of CPE protocols.
Description and Recognition of the Concept of Social Capital in Higher Education System
ERIC Educational Resources Information Center
Tonkaboni, Forouzan; Yousefy, Alireza; Keshtiaray, Narges
2013-01-01
The current research is intended to describe and recognize the concept of social capital in higher education, based on a theoretical method with a descriptive-analytical approach. Description and recognition of the data, gathered from theoretical and experimental studies, indicated that social capital is one of the most important indices for…
Methods for Measuring Student Response to Stimulant Medication: A Meta-Analytic Review
ERIC Educational Resources Information Center
Someki, Fumio; Burns, Matthew K.
2009-01-01
Measuring student response to interventions has become an important role for school psychologists. Children with Attention-Deficit/Hyperactivity Disorder (ADHD) are frequently treated with stimulant medication, but the response to the treatment is rarely adequately assessed. The current study examined the sensitivity of measures used to assess the…
An analytical method was developed for the trace analysis of 98 semi-volatile organic compounds (SOCs) in remote, high elevation lake sediment. Sediment cores from Lone Pine Lake (West of the Continental Divide) and Mills Lake (East of the Continental Divide) in Rocky Mountain Na...
Background: Numerous indicators have been used to assess the presence of fecal pollution, many relying on molecular methods such as qPCR. One of the targets frequently used, the human-associated Bacteroides 16s rRNA region, has several assays in current usage. These assays vary...
Developing Systemic Theories Requires Formal Methods
ERIC Educational Resources Information Center
Gobet, Fernand
2012-01-01
Ziegler and Phillipson (Z&P) advance an interesting and ambitious proposal, whereby current analytical/mechanistic theories of gifted education are replaced by systemic theories. In this commentary, the author focuses on the pros and cons of using systemic theories. He argues that Z&P's proposal both goes too far and not far enough. The future of…
Chemical monitoring strategies are most effective for those chemicals whose hazards are well understood and for which sensitive and cost effective analytical methods are available. Unfortunately, such chemicals represent a minor fraction of those that may currently occur in the e...
Choosing and Using Introns in Molecular Phylogenetics
Creer, Simon
2007-01-01
Introns are now commonly used in molecular phylogenetics in an attempt to recover gene trees that are concordant with species trees, but there are a range of genomic, logistical and analytical considerations that are infrequently discussed in empirical studies that utilize intron data. This review outlines expedient approaches for locus selection, overcoming paralogy problems, recombination detection methods and the identification and incorporation of LVHs in molecular systematics. A range of parsimony and Bayesian analytical approaches are also described in order to highlight the methods that can currently be employed to align sequences and treat indels in subsequent analyses. By covering the main points associated with the generation and analysis of intron data, this review aims to provide a comprehensive introduction to using introns (or any non-coding nuclear data partition) in contemporary phylogenetics. PMID:19461984
Tarasov, Andrii; Rauhut, Doris; Jung, Rainer
2017-12-01
Analytical methods for the quantification of haloanisoles and halophenols in cork matrix are summarized in the current review. Sample-preparation and sample-treatment techniques are compared and discussed from the perspective of their efficiency, time and extractant optimization, and ease of performance. The primary interest of these analyses is usually 2,4,6-trichloroanisole (TCA), the major wine contaminant among the haloanisoles. Two concepts of TCA determination are described in the review: releasable TCA and total TCA analyses. Chromatographic, bioanalytical, and sensorial methods are compared according to their application in the cork industry and in scientific investigations. Finally, it is shown that modern analytical techniques are able to provide the required sensitivity, selectivity, and repeatability for haloanisole and halophenol determination. Copyright © 2017 Elsevier B.V. All rights reserved.
Galaxy formation through hierarchical clustering
NASA Astrophysics Data System (ADS)
White, Simon D. M.; Frenk, Carlos S.
1991-09-01
Analytic methods for studying the formation of galaxies by gas condensation within massive dark halos are presented. The present scheme applies to cosmogonies where structure grows through hierarchical clustering of a mixture of gas and dissipationless dark matter. The simplest models consistent with the current understanding of N-body work on dissipationless clustering, and that of numerical and analytic work on gas evolution and cooling are adopted. Standard models for the evolution of the stellar population are also employed, and new models for the way star formation heats and enriches the surrounding gas are constructed. Detailed results are presented for a cold dark matter universe with Omega = 1 and H(0) = 50 km/s/Mpc, but the present methods are applicable to other models. The present luminosity functions contain significantly more faint galaxies than are observed.
Targeted methods for quantitative analysis of protein glycosylation
Goldman, Radoslav; Sanda, Miloslav
2018-01-01
Quantification of proteins by LC-MS/MS-MRM has become a standard method with broad projected clinical applicability. MRM quantification of protein modifications is, however, far less utilized, especially in the case of glycoproteins. This review summarizes current methods for quantitative analysis of protein glycosylation with a focus on MRM methods. We describe advantages of this quantitative approach, analytical parameters that need to be optimized to achieve reliable measurements, and point out the limitations. Differences between major classes of N- and O-glycopeptides are described and class-specific glycopeptide assays are demonstrated. PMID:25522218
Chapter A5. Section 6.1.F. Wastewater, Pharmaceutical, and Antibiotic Compounds
Lewis, Michael Edward; Zaugg, Steven D.
2003-01-01
The USGS differentiates between samples collected for analysis of wastewater compounds and those collected for analysis of pharmaceutical and antibiotic compounds, based on the analytical schedule for the laboratory method. Currently, only the wastewater laboratory method for field-filtered samples (SH1433) is an approved, routine (production) method. (The unfiltered wastewater method LC 8033 also is available but requires a proposal for custom analysis.) At this time, analysis of samples for pharmaceutical and antibiotic compounds is confined to research studies and is available only on a custom basis.
Recent activities within the Aeroservoelasticity Branch at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Noll, Thomas E.; Perry, Boyd, III; Gilbert, Michael G.
1989-01-01
The objective of research in aeroservoelasticity at the NASA Langley Research Center is to enhance the modeling, analysis, and multidisciplinary design methodologies for obtaining multifunction digital control systems for application to flexible flight vehicles. Recent accomplishments are discussed, and a status report on current activities within the Aeroservoelasticity Branch is presented. In the area of modeling, improvements to the Minimum-State Method of approximating unsteady aerodynamics are shown to provide precise, low-order aeroservoelastic models for design and simulation activities. Analytical methods based on Matched Filter Theory and Random Process Theory to provide efficient and direct predictions of the critical gust profile and the time-correlated gust loads for linear structural design considerations are also discussed. Two research projects leading towards improved design methodology are summarized. The first program is developing an integrated structure/control design capability based on hierarchical problem decomposition, multilevel optimization and analytical sensitivities. The second program provides procedures for obtaining low-order, robust digital control laws for aeroelastic applications. In terms of methodology validation and application the current activities associated with the Active Flexible Wing project are reviewed.
NASA Astrophysics Data System (ADS)
Melnikov, A. A.; Kostishin, V. G.; Alenkov, V. V.
2017-05-01
In real operating conditions, a thermoelectric cooling device works in the presence of thermal resistances between the thermoelectric material and the heat medium or cooled object. These resistances limit the performance of the device and should be accounted for in modeling. Here we propose a dimensionless steady-state mathematical model that takes them into account. Analytical equations are given for the dimensionless cooling capacity, voltage, and coefficient of performance (COP) as functions of the dimensionless current. For improved accuracy a device can be modeled using numerical or combined analytical-numerical methods. The modeling results are in acceptable accordance with experimental results. The case of zero temperature difference between the hot and cold heat mediums, at which the maximum cooling capacity mode appears, is considered in detail. Optimal device parameters for maximal cooling capacity, such as the fraction of thermal conductance on the cold side y and the fraction of current relative to the maximum j', are estimated to lie in the ranges 0.38-0.44 and 0.48-0.95, respectively, for dimensionless conductance K' = 5-100. A method for determining the thermal resistances of a thermoelectric cooling system is also proposed.
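For orientation, the textbook ideal-stage balance that such a model generalizes can be sketched as follows. This is the standard Peltier-module relation without the contact thermal resistances the paper accounts for, and the coefficients α, R, K and the operating point are purely illustrative assumptions:

```python
# Standard ideal thermoelectric-cooler (Peltier) relations, sketched for
# context. Contact thermal resistances are deliberately omitted; alpha
# (Seebeck coefficient, V/K), R (electrical resistance, ohm), K (thermal
# conductance, W/K) and the operating point are illustrative values only.

def tec_performance(I, Th, Tc, alpha, R, K):
    """Cooling capacity Qc (W) and COP of an ideal Peltier module."""
    dT = Th - Tc
    Qc = alpha * Tc * I - 0.5 * I**2 * R - K * dT   # Peltier - Joule - conduction
    P = alpha * dT * I + I**2 * R                   # electrical input power
    return Qc, (Qc / P if P > 0 else float("inf"))

if __name__ == "__main__":
    Qc, cop = tec_performance(I=3.0, Th=300.0, Tc=280.0,
                              alpha=0.05, R=2.0, K=0.5)
    print(Qc, cop)
```

Half of the Joule heat is conventionally assigned to the cold side, which is what makes the cooling capacity quadratic in the current and gives rise to a current of maximum Qc.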
Surface Plasmon Resonance: New Biointerface Designs and High-Throughput Affinity Screening
NASA Astrophysics Data System (ADS)
Linman, Matthew J.; Cheng, Quan Jason
Surface plasmon resonance (SPR) is a surface optical technique that measures minute changes in refractive index at a metal-coated surface. It has become increasingly popular in the study of biological and chemical analytes because of its label-free measurement feature. In addition, SPR allows for both quantitative and qualitative assessment of binding interactions in real time, making it ideally suited for probing weak interactions that are often difficult to study with other methods. This chapter presents the biosensor development in the last 3 years or so utilizing SPR as the principal analytical technique, along with a concise background of the technique itself. While SPR has demonstrated many advantages, it is a nonselective method and so, building reproducible and functional interfaces is vital to sensing applications. This chapter, therefore, focuses mainly on unique surface chemistries and assay approaches to examine biological interactions with SPR. In addition, SPR imaging for high-throughput screening based on microarrays and novel hyphenated techniques involving the coupling of SPR to other analytical methods is discussed. The chapter concludes with a commentary on the current state of SPR biosensing technology and the general direction of future biosensor research.
Stamm, H; Gibson, N; Anklam, E
2012-08-01
This paper describes the requirements and resulting challenges for the implementation of current and upcoming European Union legislation referring to the use of nanomaterials in food, cosmetics and other consumer products. The European Commission has recently adopted a recommendation for the definition of nanomaterials. There is now an urgent need for appropriate and fit-for-purpose analytical methods in order to identify nanomaterials properly according to this definition and to assess whether or not a product contains nanomaterials. Considering the lack of such methods to date, this paper elaborates on the challenges of the legislative framework and the type of methods needed, not only to facilitate implementation of labelling requirements, but also to ensure the safety of products coming to the market. Considering the many challenges in the analytical process itself, such as interaction of nanoparticles with matrix constituents, potential agglomeration and aggregation due to matrix environment, broad variety of matrices, etc., there is a need for integrated analytical approaches, not only for sample preparation (e.g. separation from matrix), but also for the actual characterisation. Furthermore, there is an urgent need for quality assurance tools such as validated methods and (certified) reference materials, including materials containing nanoparticles in a realistic matrix (food products, cosmetics, etc.).
An automated protocol for performance benchmarking a widefield fluorescence microscope.
Halter, Michael; Bier, Elianna; DeRose, Paul C; Cooksey, Gregory A; Choquette, Steven J; Plant, Anne L; Elliott, John T
2014-11-01
Widefield fluorescence microscopy is a highly used tool for visually assessing biological samples and for quantifying cell responses. Despite its widespread use in high content analysis and other imaging applications, few published methods exist for evaluating and benchmarking the analytical performance of a microscope. Easy-to-use benchmarking methods would facilitate the use of fluorescence imaging as a quantitative analytical tool in research applications, and would aid the determination of instrumental method validation for commercial product development applications. We describe and evaluate an automated method to characterize a fluorescence imaging system's performance by benchmarking the detection threshold, saturation, and linear dynamic range to a reference material. The benchmarking procedure is demonstrated using two different materials as the reference material, uranyl-ion-doped glass and Schott 475 GG filter glass. Both are suitable candidate reference materials that are homogeneously fluorescent and highly photostable, and the Schott 475 GG filter glass is currently commercially available. In addition to benchmarking the analytical performance, we also demonstrate that the reference materials provide for accurate day to day intensity calibration. Published 2014 Wiley Periodicals Inc. This article is a US government work and, as such, is in the public domain in the United States of America.
Solving time-dependent two-dimensional eddy current problems
NASA Technical Reports Server (NTRS)
Lee, Min Eig; Hariharan, S. I.; Ida, Nathan
1988-01-01
Results of transient eddy current calculations are reported. For simplicity, a two-dimensional transverse magnetic field which is incident on an infinitely long conductor is considered. The conductor is assumed to be a good but not perfect conductor. The resulting problem is an interface initial boundary value problem with the boundary of the conductor being the interface. A finite difference method is used to march the solution explicitly in time. The method is shown. Treatment of appropriate radiation conditions is given special consideration. Results are validated with approximate analytic solutions. Two stringent test cases of high and low frequency incident waves are considered to validate the results.
Methods And Devices For Characterizing Duplex Nucleic Acid Molecules
Akeson, Mark; Vercoutere, Wenonah; Haussler, David; Winters-Hilt, Stephen
2005-08-30
Methods and devices are provided for characterizing a duplex nucleic acid, e.g., a duplex DNA molecule. In the subject methods, a fluid conducting medium that includes a duplex nucleic acid molecule is contacted with a nanopore under the influence of an applied electric field, and the resulting changes in current through the nanopore caused by the duplex nucleic acid molecule are monitored. The observed changes in current through the nanopore are then employed as a set of data values to characterize the duplex nucleic acid, where the set of data values may be employed in raw form or manipulated, e.g., into a current blockade profile. Also provided are nanopore devices for practicing the subject methods, where the subject nanopore devices are characterized by the presence of an algorithm which directs a processing means to employ monitored changes in current through a nanopore to characterize the duplex nucleic acid molecule responsible for the current changes. The subject methods and devices find use in a variety of applications, including, among others, the identification of an analyte duplex DNA molecule in a sample, the determination of the specific base sequence at a single nucleotide polymorphism (SNP), and the sequencing of duplex DNA molecules.
Koyama, Kazuo; Miyazaki, Kinuko; Abe, Kousuke; Egawa, Yoshitsugu; Fukazawa, Toru; Kitta, Tadashi; Miyashita, Takashi; Nezu, Toru; Nohara, Hidenori; Sano, Takashi; Takahashi, Yukinari; Taniguchi, Hideji; Yada, Hiroshi; Yamazaki, Kumiko; Watanabe, Yomi
2017-06-01
An indirect enzymatic analysis method for the quantification of fatty acid esters of 2-/3-monochloro-1,2-propanediol (2/3-MCPD) and glycidol was developed, using a deuterated internal standard for each free-form component. Because 2-MCPD-d5 is difficult to obtain, a statistical method was devised in which 3-MCPD-d5, the standard used for calculation of 3-MCPD, is substituted for it in calibration and quantification. Using data from a previous collaborative study, the current method for determining 2-MCPD content with 2-MCPD-d5 was compared to three alternative new methods using 3-MCPD-d5. Regression analysis showed that the alternative methods were unbiased compared to the current method. The relative standard deviation (RSDR) among the testing laboratories was ≤ 15% and the Horwitz ratio was ≤ 1.0, a satisfactory value.
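The two acceptance statistics quoted above are straightforward to compute. A minimal sketch with hypothetical interlaboratory data follows; the Horwitz predicted RSD, PRSD_R = 2·C^(-0.1505) with C a mass fraction, is the standard formula, but the result values are invented.

```python
import numpy as np

# Hypothetical interlaboratory results (mg/kg) used to illustrate the
# acceptance criteria quoted in the study: RSD_R <= 15% and HorRat <= 1.0.
lab_results_mg_kg = np.array([1.02, 0.95, 1.10, 0.99, 1.05, 0.93, 1.08])

mean = lab_results_mg_kg.mean()
rsd_r = 100 * lab_results_mg_kg.std(ddof=1) / mean   # reproducibility RSD, %

# Horwitz predicted RSD: PRSD_R(%) = 2 * C**(-0.1505), with C a mass fraction.
c_mass_fraction = mean * 1e-6            # mg/kg -> g/g
prsd_r = 2 * c_mass_fraction ** -0.1505
horrat = rsd_r / prsd_r                  # Horwitz ratio

print(f"RSD_R = {rsd_r:.1f}%, PRSD_R = {prsd_r:.1f}%, HorRat = {horrat:.2f}")
```

A HorRat at or below 1.0 indicates reproducibility at least as good as the Horwitz curve predicts for that concentration level.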
Development of the Diabetes Technology Society Blood Glucose Monitor System Surveillance Protocol
Klonoff, David C.; Lias, Courtney; Beck, Stayce; Parkes, Joan Lee; Kovatchev, Boris; Vigersky, Robert A.; Arreaza-Rubin, Guillermo; Burk, Robert D.; Kowalski, Aaron; Little, Randie; Nichols, James; Petersen, Matt; Rawlings, Kelly; Sacks, David B.; Sampson, Eric; Scott, Steve; Seley, Jane Jeffrie; Slingerland, Robbert; Vesper, Hubert W.
2015-01-01
Background: Inaccurate blood glucose monitoring systems (BGMSs) can lead to adverse health effects. The Diabetes Technology Society (DTS) Surveillance Program for cleared BGMSs is intended to protect people with diabetes from inaccurate, unreliable BGMS products that are currently on the market in the United States. The Surveillance Program will provide an independent assessment of the analytical performance of cleared BGMSs. Methods: The DTS BGMS Surveillance Program Steering Committee included experts in glucose monitoring, surveillance testing, and regulatory science. Over one year, the committee engaged in meetings and teleconferences aiming to describe how to conduct BGMS surveillance studies in a scientifically sound manner that is in compliance with good clinical practice and all relevant regulations. Results: A clinical surveillance protocol was created that contains performance targets and analytical accuracy-testing studies with marketed BGMS products conducted by qualified clinical and laboratory sites. This protocol, entitled "Protocol for the Diabetes Technology Society Blood Glucose Monitor System Surveillance Program," is attached as supplementary material. Conclusion: This program is needed because currently, once a BGMS product has been cleared for use by the FDA, no systematic postmarket surveillance program exists that can monitor analytical performance and detect potential problems. This protocol will allow identification of inaccurate and unreliable BGMSs currently available on the US market. The DTS Surveillance Program will provide BGMS manufacturers a benchmark to understand the postmarket analytical performance of their products. Furthermore, patients, health care professionals, payers, and regulatory agencies will be able to use the results of the study to make informed decisions to, respectively, select, prescribe, finance, and regulate BGMSs on the market. PMID:26481642
An analytical solution for Dean flow in curved ducts with rectangular cross section
NASA Astrophysics Data System (ADS)
Norouzi, M.; Biglari, N.
2013-05-01
In this paper, a full analytical solution for incompressible flow inside curved ducts with rectangular cross section is presented for the first time. The perturbation method is applied to solve the governing equations, with the curvature ratio as the perturbation parameter. Previous perturbation solutions are usually restricted to flow in curved circular or annular pipes, owing to the overly complex form of the solutions or to singularities that arise for flow in curved ducts with non-circular cross sections. This underscores the importance of analytical studies of Dean flow inside non-circular ducts. In this study, the main flow velocity, the stream function of the lateral velocities (secondary flows), and the flow resistance ratio in rectangular curved ducts are obtained analytically. The effect of duct curvature and aspect ratio on the flow field is investigated as well. Moreover, the current analytical solution is able to capture the Taylor-Görtler and Dean vortices (vortices in stable and unstable situations) in curved channels.
Kuan, Da-Han; Wang, I-Shun; Lin, Jiun-Rue; Yang, Chao-Han; Huang, Chi-Hsien; Lin, Yen-Hung; Lin, Chih-Ting; Huang, Nien-Tsu
2016-08-02
The hemoglobin-A1c test, measuring the ratio of glycated hemoglobin (HbA1c) to hemoglobin (Hb) levels, has been a standard assay in diabetes diagnosis that removes the day-to-day glucose level variation. Currently, the HbA1c test is restricted to hospitals and central laboratories due to the laborious, time-consuming whole blood processing and bulky instruments. In this paper, we have developed a microfluidic device integrating dual CMOS polysilicon nanowire sensors (MINS) for on-chip whole blood processing and simultaneous detection of multiple analytes. The micromachined polymethylmethacrylate (PMMA) microfluidic device consisted of a serpentine microchannel with multiple dam structures designed for non-lysed cells or debris trapping, uniform plasma/buffer mixing and dilution. The CMOS-fabricated polysilicon nanowire sensors integrated with the microfluidic device were designed for the simultaneous, label-free electrical detection of multiple analytes. Our study first measured the Hb and HbA1c levels in 11 clinical samples via these nanowire sensors. The results were compared with those of standard Hb and HbA1c measurement methods (Hb: the sodium lauryl sulfate hemoglobin detection method; HbA1c: cation-exchange high-performance liquid chromatography) and showed comparable outcomes. Finally, we successfully demonstrated the efficacy of the MINS device's on-chip whole blood processing followed by simultaneous Hb and HbA1c measurement in a clinical sample. Compared to current Hb and HbA1c sensing instruments, the MINS platform is compact and can simultaneously detect two analytes with only 5 μL of whole blood, which corresponds to a 300-fold blood volume reduction. The total assay time, including the in situ sample processing and analyte detection, was just 30 minutes. 
Based on its on-chip whole blood processing and simultaneous multiple analyte detection functionalities with a lower sample volume requirement and shorter process time, the MINS device can be effectively applied to real-time diabetes diagnostics and monitoring in point-of-care settings.
Richardson, Magnus J E
2007-08-01
Integrate-and-fire models are mainstays of the study of single-neuron response properties and emergent states of recurrent networks of spiking neurons. They also provide an analytical base for perturbative approaches that treat important biological details, such as synaptic filtering, synaptic conductance increase, and voltage-activated currents. Steady-state firing rates of both linear and nonlinear integrate-and-fire models, receiving fluctuating synaptic drive, can be calculated from the time-independent Fokker-Planck equation. The dynamic firing-rate response is less easy to extract, even at the first-order level of a weak modulation of the model parameters, but is an important determinant of neuronal response and network stability. For the linear integrate-and-fire model the response to modulations of current-based synaptic drive can be written in terms of hypergeometric functions. For the nonlinear exponential and quadratic models no such analytical forms for the response are available. Here it is demonstrated that a rather simple numerical method can be used to obtain the steady-state and dynamic response for both linear and nonlinear models to parameter modulation in the presence of current-based or conductance-based synaptic fluctuations. To complement the full numerical solution, generalized analytical forms for the high-frequency response are provided. A special case is also identified--time-constant modulation--for which the response to an arbitrarily strong modulation can be calculated exactly.
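Steady-state rates obtained from the Fokker-Planck machinery described above can always be cross-checked by direct simulation. The sketch below is a minimal Euler-Maruyama simulation of a leaky integrate-and-fire neuron with current-based white-noise drive; the parameters are illustrative choices, not a reimplementation of the paper's numerical method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Leaky integrate-and-fire neuron with current-based white-noise drive:
#   tau dV/dt = -(V - E) + mu + sigma * sqrt(tau) * xi(t)
# All parameter values are illustrative, not taken from the paper.
tau, E, Vth, Vre = 20.0, -70.0, -50.0, -60.0   # ms, mV
mu, sigma = 19.0, 4.0                          # drive mean and noise (mV)
dt, T = 0.05, 20_000.0                         # time step and duration (ms)

n_steps = int(T / dt)
noise = sigma * np.sqrt(dt / tau) * rng.standard_normal(n_steps)
V, spikes = E, 0
for xi in noise:
    # Euler-Maruyama step of the membrane equation
    V += (-(V - E) + mu) * dt / tau + xi
    if V >= Vth:                 # threshold crossing
        spikes += 1
        V = Vre                  # reset (no refractory period)

rate_hz = spikes / (T / 1000.0)
print(f"steady-state firing rate ~ {rate_hz:.1f} Hz")
```

Such a Monte Carlo estimate converges slowly, which is precisely why the steady-state and dynamic Fokker-Planck solutions discussed in the abstract are valuable.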
About an Extreme Achievable Current in Plasma Focus Installation of Mather Type
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nikulin, V. Ya.; Polukhin, S. N.; Vikhrev, V. V.
A computer simulation and analytical analysis of the discharge process in a Plasma Focus has shown that there is an upper limit to the current which can be achieved in a Plasma Focus installation of Mather type by increasing only the capacity of the condenser bank. The maximum current achieved for various plasma focus installations of the 1 MJ level is discussed. For example, for the PF-1000 (IFPiLM) and the 1 MJ Frascati PF, the maximum current is near 2 MA. Thus, the commonly used method of increasing the energy of the PF installation by increasing its capacity has no merit. Alternative options to increase the current are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, J. P.; Sun, Y.; Harris, J. R.
In this paper we derive analytical expressions for the output current of an un-gated thermionic cathode RF gun in the presence of back-bombardment heating. We provide a brief overview of back-bombardment theory and compare the analytical back-bombardment predictions with simulation models. We then derive an expression for the output current as a function of the RF repetition rate and discuss the relationships between back-bombardment, field enhancement, and output current. We discuss the relevant approximations in detail and then provide predictions of how the output current should vary with repetition rate for given system configurations.
Zimmerman, Christian E.; Nielsen, Roger L.
2003-01-01
The use of strontium-to-calcium (Sr/Ca) ratios in otoliths is becoming a standard method to describe life history type and the chronology of migrations between freshwater and seawater habitats in teleosts (e.g. Kalish, 1990; Radtke et al., 1990; Secor, 1992; Rieman et al., 1994; Radtke, 1995; Limburg, 1995; Tzeng et al. 1997; Volk et al., 2000; Zimmerman, 2000; Zimmerman and Reeves, 2000, 2002). This method provides critical information concerning the relationship and ecology of species exhibiting phenotypic variation in migratory behavior (Kalish, 1990; Secor, 1999). Methods and procedures, however, vary among laboratories because a standard method or protocol for measurement of Sr in otoliths does not exist. In this note, we examine the variations in analytical conditions in an effort to increase precision of Sr/Ca measurements. From these findings we argue that precision can be maximized with higher beam current (although there is specimen damage) than previously recommended by Gunn et al. (1992).
Floré, Katelijne M J; Delanghe, Joris R
2009-01-01
Current point-of-care testing (POCT) glucometers are based on various test principles. Two major method groups dominate the market: glucose oxidase-based systems and glucose dehydrogenase-based systems using pyrroloquinoline quinone (GDH-PQQ) as a cofactor. The GDH-PQQ-based glucometers are replacing the older glucose oxidase-based systems because of their lower sensitivity to oxygen. On the other hand, the GDH-PQQ test method yields falsely elevated blood glucose levels in peritoneal dialysis patients receiving solutions containing icodextrin (e.g., Extraneal; Baxter, Brussels, Belgium). Icodextrin is metabolized in the systemic circulation into different glucose polymers, mainly maltose, which interferes with the GDH-PQQ-based method. Clinicians should be aware of this analytical interference. POCT glucometers based on the GDH-PQQ method should preferably not be used in this high-risk population, and POCT glucose results inconsistent with clinical suspicion of hypoglycemic coma should be retested with another testing system.
Exact analytical solution of a classical Josephson tunnel junction problem
NASA Astrophysics Data System (ADS)
Kuplevakhsky, S. V.; Glukhov, A. M.
2010-10-01
We give an exact and complete analytical solution of the classical problem of a Josephson tunnel junction of arbitrary length W ∈ (0, ∞) in the presence of external magnetic fields and transport currents. Contrary to a widespread belief, the exact analytical solution unambiguously proves that there is no qualitative difference between so-called "small" (W ≪ 1) and "large" junctions (W ≫ 1). Another unexpected physical implication of the exact analytical solution is the existence (in the current-carrying state) of unquantized Josephson vortices carrying fractional flux and located near one of the edges of the junction. We also refine the mathematical definition of the critical transport current.
NASA Astrophysics Data System (ADS)
Sauvé, Alexandre; Montier, Ludovic
2016-12-01
Context: Bolometers are high-sensitivity detectors commonly used in infrared astronomy. The HFI instrument of the Planck satellite makes extensive use of them, but after the satellite launch two electronics-related problems proved critical: first, an unexpected excess response of the detectors at low optical excitation frequencies (ν < 1 Hz), and second, insufficient on-ground characterization of the analog-to-digital converter (ADC) component. Both problems require an exquisite knowledge of the detector response. Bolometers, however, have highly nonlinear characteristics arising from their electrical and thermal coupling, which makes them very difficult to model. Goal: We present a method to build an analytical transfer function in the frequency domain describing the voltage response of an alternating current (AC) biased bolometer to optical excitation, based on the standard bolometer model. The model is built for the setup of the Planck/HFI instrument and offers the major improvement of being based on a physical model rather than the ad-hoc model currently in use, which is based on direct current (DC) bolometer theory. Method: The analytical transfer function is presented in matrix form. For this purpose, we build linearized versions of the bolometer electrothermal equilibrium. A custom description of signals in frequency is used to solve the problem with linear algebra. The model's performance is validated using time-domain simulations. Results: The provided expression is suitable for calibration and data processing. It can also be used to provide constraints for fitting the optical transfer function using real data from the steady-state electronic response and the optical response. The accurate description of the electronic response can also be used to improve the ADC nonlinearity correction for quickly varying optical signals.
Status of Rifaximin: A Review of Characteristics, Uses and Analytical Methods.
Kogawa, Ana Carolina; Salgado, Hérida Regina Nunes
2018-11-02
Rifaximin, an oral antimicrobial, has many advantages: it is selective for the intestine, has minimal adverse effects, and is used for the treatment of diseases such as hepatic encephalopathy, irritable bowel syndrome, travelers' diarrhea, ulcerative colitis, Clostridium difficile infection and acute diarrhea. Rifaximin is commercially available in the form of 200 mg tablets. The crystalline α form is therapeutically safe and effective. In most of the official compendia rifaximin has no monograph, and in none of them is there a monograph for rifaximin tablets. The literature, however, fills this gap with varied methods for the evaluation of rifaximin in both biological fluids and pharmaceutical products, among which high-performance liquid chromatography stands out. Most of the methods reported in the literature are for pharmaceutical products, and they use (1) toxic organic solvents, harmful to the operator and the environment, and/or (2) buffer solutions, which have a shorter service life and require time-consuming washes of the chromatographic system, generating more waste. This work therefore discusses (i) the properties, (ii) applications, (iii) polymorphism and (iv) analytical methods of rifaximin from the perspective of green chemistry. This review addresses an extremely current topic of great importance to the chemical-pharmaceutical area, from the analytical process through to its impact on the environment.
Speciated arsenic in air: measurement methodology and risk assessment considerations.
Lewis, Ari S; Reid, Kim R; Pollock, Margaret C; Campleman, Sharan L
2012-01-01
Accurate measurement of arsenic (As) in air is critical to providing a more robust understanding of arsenic exposures and associated human health risks. Although there is extensive information available on total arsenic in air, less is known on the relative contribution of each arsenic species. To address this data gap, the authors conducted an in-depth review of available information on speciated arsenic in air. The evaluation included the type of species measured and the relative abundance, as well as an analysis of the limitations of current analytical methods. Despite inherent differences in the procedures, most techniques effectively separated arsenic species in the air samples. Common analytical techniques such as inductively coupled plasma mass spectrometry (ICP-MS) and/or hydride generation (HG)- or quartz furnace (GF)-atomic absorption spectrometry (AAS) were used for arsenic measurement in the extracts, and provided some of the most sensitive detection limits. The current analysis demonstrated that, despite limited comparability among studies due to differences in seasonal factors, study duration, sample collection methods, and analytical methods, research conducted to date is adequate to show that arsenic in air is mainly in the inorganic form. Reported average concentrations of As(III) and As(V) ranged up to 7.4 and 10.4 ng/m3, respectively, with As(V) being more prevalent than As(III) in most studies. Concentrations of the organic methylated arsenic compounds are negligible (in the pg/m3 range). However because of the variability in study methods and measurement methodology, the authors were unable to determine the variation in arsenic composition as a function of source or particulate matter (PM) fraction. In this work, the authors include the implications of arsenic speciation in air on potential exposure and risks. 
The authors conclude that it is important to synchronize sample collection, preparation, and analytical techniques in order to generate data more useful for arsenic inhalation risk assessment, and a more robust documentation of quality assurance/quality control (QA/QC) protocols is necessary to ensure accuracy, precision, representativeness, and comparability.
Large exchange-dominated domain wall velocities in antiferromagnetically coupled nanowires
NASA Astrophysics Data System (ADS)
Kuteifan, Majd; Lubarda, M. V.; Fu, S.; Chang, R.; Escobar, M. A.; Mangin, S.; Fullerton, E. E.; Lomakin, V.
2016-04-01
Magnetic nanowires supporting field- and current-driven domain wall motion are envisioned for methods of information storage and processing. A major obstacle for their practical use is the domain-wall velocity, which is traditionally limited for low fields and currents due to the Walker breakdown occurring when the driving component reaches a critical threshold value. We show through numerical and analytical modeling that the Walker breakdown limit can be extended or completely eliminated in antiferromagnetically coupled magnetic nanowires. These coupled nanowires allow for large domain-wall velocities driven by field and/or current as compared to conventional nanowires.
NASA Astrophysics Data System (ADS)
Liu, Guoxi; Zhang, Chunli; Chen, Weiqiu; Dong, Shuxiang
2013-07-01
An analytical model of resonant magnetoelectric (ME) coupling in magnetostrictive (MS)-piezoelectric (PE) laminated composites, taking into account the eddy-current effect in the MS layer, is presented using an equivalent circuit method. Numerical calculations show that (1) the eddy current has a strong effect on ME coupling in MS-PE laminated composites at the resonant frequency; and (2) the resonant ME coupling is significantly dependent on the size of the ME laminated composite, which was neglected in most previous theoretical analyses. The achieved results provide theoretical guidance for the practical engineering design, manufacture, and application of ME laminated composites and devices.
NASA Astrophysics Data System (ADS)
Iwamoto, Mitsumasa; Taguchi, Dai
2018-03-01
Thermally stimulated current (TSC) measurement is widely used in a variety of research fields, i.e., physics, electronics, electrical engineering, chemistry, ceramics, and biology. TSC is short-circuit current that flows owing to the displacement of charges in samples during heating. TSC measurement is very simple, but TSC curves give very important information on charge behaviors. In the 1970s, TSC measurement contributed greatly to the development of electrical insulation engineering, semiconductor device technology, and so forth. Accordingly, the TSC experimental technique and its analytical method advanced. Over the past decades, many new molecules and advanced functional materials have been discovered and developed. Along with this, TSC measurement has attracted much attention in industries and academic laboratories as a way of characterizing newly discovered materials and devices. In this review, we report the latest research trend in the TSC method for the development of materials and devices in Japan.
Analytical studies on the instabilities of heterogeneous intelligent traffic flow
NASA Astrophysics Data System (ADS)
Ngoduy, D.
2013-10-01
It has been widely reported in the literature that a small perturbation in traffic flow, such as a sudden deceleration of a vehicle, can lead to the formation of traffic jams without a clear bottleneck. These traffic jams are usually related to instabilities in traffic flow. The application of intelligent traffic systems is a potential solution to reduce the amplitude of such instabilities or to eliminate their formation. Much research has been conducted to theoretically study the effect of intelligent vehicles, for example adaptive cruise control vehicles, using either computer simulation or analytical methods. However, most current analytical research applies only to single-class traffic flow. The main topic of this paper is therefore a linear stability analysis to find the stability threshold of heterogeneous traffic flow using microscopic models, in particular the effect of intelligent vehicles on heterogeneous (or multi-class) traffic flow instabilities. The analytical results show how the percentage of intelligent vehicles affects the stability of multi-class traffic flow.
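As a single-class illustration of the kind of stability threshold this paper generalizes to multi-class flow, the sketch below checks the classic optimal-velocity-model condition V'(h*) < a/2 numerically; the optimal-velocity function and the sensitivity a are hypothetical choices, not the paper's model.

```python
import numpy as np

# Single-class special case: for the optimal velocity (OV) model
#   dv/dt = a * (V(h) - v),
# uniform flow at headway h* is linearly stable iff V'(h*) < a / 2.
def V(h, v0=30.0, hc=25.0, w=10.0):
    """Illustrative OV function (m/s vs headway in m); zero at h = 0,
    saturating at v0 for large headway."""
    return v0 * (np.tanh((h - hc) / w) + np.tanh(hc / w)) / (1 + np.tanh(hc / w))

a = 1.0                        # driver sensitivity (1/s), hypothetical
h = np.linspace(5.0, 60.0, 1000)
dV = np.gradient(V(h), h)      # numerical V'(h)
unstable = dV > a / 2          # headways violating the stability condition

if unstable.any():
    print(f"unstable headways: {h[unstable][0]:.1f} - {h[unstable][-1]:.1f} m")
else:
    print("uniform flow stable at all sampled headways")
```

In the heterogeneous setting the scalar condition becomes a condition on a characteristic equation mixing the classes, which is what the paper's analysis derives.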
Calibrant-Free Analyte Quantitation via a Variable Velocity Flow Cell.
Beck, Jason G; Skuratovsky, Aleksander; Granger, Michael C; Porter, Marc D
2017-01-17
In this paper, we describe a novel method for analyte quantitation that does not rely on calibrants, internal standards, or calibration curves but, rather, leverages the relationship between disparate and predictable surface-directed analyte flux to an array of sensing addresses and a measured resultant signal. To reduce this concept to practice, we fabricated two flow cells such that the mean linear fluid velocity, U, was varied systematically over an array of electrodes positioned along the flow axis. This resulted in a predictable variation of the address-directed flux of a redox analyte, ferrocenedimethanol (FDM). The resulting limiting currents, measured at a series of these electrodes and accurately described by a convective-diffusive transport model, provided a means to calculate an "unknown" concentration without the use of calibrants, internal standards, or a calibration curve. Furthermore, the experiment and concentration calculation take only minutes to perform. Deviation of the calculated FDM concentrations from the true values was minimized to less than 0.5% when empirically derived values of U were employed.
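The quantitation idea can be sketched with a Levich-type channel-electrode law, i_lim = B·C·U^(1/3). This relation, the value of B, the velocities, and the currents below are all assumptions made for illustration; the paper's actual convective-diffusive model is more detailed.

```python
import numpy as np

# Assumed Levich-type law for a channel electrode: i_lim = B * C * U**(1/3),
# with B fixed by geometry and the diffusion coefficient. Because a transport
# model supplies B, the measured currents yield C with no calibration curve.
B = 2.5e-3          # A / (M * (cm/s)^(1/3)), assumed known from the model
U = np.array([0.5, 1.0, 2.0, 4.0, 8.0])                    # velocities, cm/s
i_lim = np.array([0.99, 1.26, 1.58, 1.99, 2.52]) * 1e-4    # measured, A

# Least-squares estimate of concentration: C = sum(x*i) / sum(x*x),
# where x = B * U**(1/3) is the model-predicted sensitivity per electrode.
x = B * U ** (1.0 / 3.0)
C = float(x @ i_lim / (x @ x))
print(f"estimated concentration: {C * 1e3:.2f} mM")
```

Because every electrode supplies an independent model-predicted sensitivity, agreement across the array also serves as an internal consistency check on the transport model.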
NASA Astrophysics Data System (ADS)
Takeuchi, Toshie; Nakagawa, Takafumi; Tsukima, Mitsuru; Koyama, Kenichi; Tohya, Nobumoto; Yano, Tomotaka
A new electromagnetically actuated vacuum circuit breaker (VCB) has been designed and developed on the basis of transient electromagnetic analysis coupled with motion. The VCB has three advanced bi-stable electromagnetic actuators, which control each phase independently, so the VCB serves as a synchronous circuit breaker as well as a standard circuit breaker. In this work, the flux delay due to the eddy current is analytically formulated using the delay time constant of the actuator coil current, enabling accurate prediction of the driving behavior. With this analytical method, the electromagnetic mechanism for a 24 kV rated VCB has been optimized; as a result, the driving energy is reduced to one fifth of that of a conventional VCB employing a spring mechanism, and the number of parts is significantly decreased. The developed VCB is therefore compact, highly reliable and highly durable.
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
SAM Radiochemical Methods Query
Laboratories measuring target radiochemical analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select radiochemical analytes.
Electrosynthesis of nanofibers and nano-composite films
Lin, Yuehe; Liang, Liang; Liu, Jun
2006-10-17
A method for producing an array of oriented nanofibers that involves forming a solution that includes at least one electroactive species. An electrode substrate is brought into contact with the solution. A current density is applied to the electrode substrate that includes at least a first step of applying a first substantially constant current density for a first time period and a second step of applying a second substantially constant current density for a second time period. The first and second time periods are of sufficient duration to electrically deposit on the electrode substrate an array of oriented nanofibers produced from the electroactive species. Also disclosed are films that include arrays or networks of oriented nanofibers and a method for amperometrically detecting or measuring at least one analyte in a sample.
Contreras, María Del Mar; Bribi, Noureddine; Gómez-Caravaca, Ana María; Gálvez, Julio; Segura-Carretero, Antonio
2017-01-01
Two analytical platforms, gas chromatography (GC) coupled to quadrupole-time-of-flight (QTOF) mass spectrometry (MS) and reversed-phase ultrahigh performance liquid chromatography (UHPLC) coupled to diode array (DAD) and QTOF detection, were applied in order to study the alkaloid profile of Fumaria capreolata . The use of these mass analyzers enabled tentatively identifying the alkaloids by matching their accurate mass signals and suggested molecular formulae with those previously reported in libraries and databases. Moreover, the proposed structures were corroborated by studying their fragmentation pattern obtained by both platforms. In this way, 8 and 26 isoquinoline alkaloids were characterized using GC-QTOF-MS and RP-UHPLC-DAD-QTOF-MS, respectively, and they belonged to the following subclasses: protoberberine, protopine, aporphine, benzophenanthridine, spirobenzylisoquinoline, morphinandienone, and benzylisoquinoline. Moreover, the latter analytical method was selected to determine at 280 nm the concentration of protopine (9.6 ± 0.7 mg/g), a potential active compound of the extract. In conclusion, although GC-MS has been commonly used for the analysis of this type of phytochemicals, RP-UHPLC-DAD-QTOF-MS provided essential complementary information. This analytical method can be applied for the quality control of phytopharmaceuticals containing Fumaria extracts currently found in the market.
ERIC Educational Resources Information Center
Hetzel-Riggin, Melanie D.; Brausch, Amy M.; Montgomery, Brad S.
2007-01-01
Objective: The purpose of the current study was to investigate the independent effects of different treatment elements on a number of secondary problems related to childhood and adolescent sexual abuse, as well as investigate a number of different moderators of treatment effectiveness. Method: Twenty-eight studies that provided treatment outcome…
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level
ERIC Educational Resources Information Center
Savalei, Victoria; Rhemtulla, Mijke
2017-01-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately…
Disciplinary Identity as Analytic Construct and Design Goal: Making Learning Sciences Matter
ERIC Educational Resources Information Center
Carlone, Heidi B.
2017-01-01
Bent Flyvbjerg (2001), in his book "Making Social Science Matter: Why Social Inquiry Fails and How It Can Succeed Again," argues that social science's aims and methods are currently, and perhaps always will be, ill suited to the type of cumulative and predictive theory that characterizes inquiry and knowledge generation in the natural…
Diamond, Dermot; Lau, King Tong; Brady, Sarah; Cleary, John
2008-05-15
Rapid developments in wireless communications are opening up opportunities for new ways to perform many types of analytical measurements that up to now have been restricted in scope due to the need to have access to centralised facilities. This paper will address both the potential for new applications and the challenges that currently inhibit more widespread integration of wireless communications with autonomous sensors and analytical devices. Key issues are identified and strategies for closer integration of analytical information and wireless communications systems discussed.
A Review of Current Methods for Analysis of Mycotoxins in Herbal Medicines
Zhang, Lei; Dou, Xiao-Wen; Zhang, Cheng; Logrieco, Antonio F.; Yang, Mei-Hua
2018-01-01
The presence of mycotoxins in herbal medicines is an established problem throughout the entire world. The sensitive and accurate analysis of mycotoxin in complicated matrices (e.g., herbs) typically involves challenging sample pretreatment procedures and an efficient detection instrument. However, although numerous reviews have been published regarding the occurrence of mycotoxins in herbal medicines, few of them provided a detailed summary of related analytical methods for mycotoxin determination. This review focuses on analytical techniques including sampling, extraction, cleanup, and detection for mycotoxin determination in herbal medicines established within the past ten years. Dedicated sections of this article address the significant developments in sample preparation, and highlight the importance of this procedure in the analytical technology. This review also summarizes conventional chromatographic techniques for mycotoxin qualification or quantitation, as well as recent studies regarding the development and application of screening assays such as enzyme-linked immunosorbent assays, lateral flow immunoassays, aptamer-based lateral flow assays, and cytometric bead arrays. The present work provides good insight into the advanced research that has been done and closes with an indication of future demand for the emerging technologies. PMID:29393905
Immobilization of Fab' fragments onto substrate surfaces: A survey of methods and applications.
Crivianu-Gaita, Victor; Thompson, Michael
2015-08-15
Antibody immobilization onto surfaces has widespread applications in many different fields. It is desirable to bind antibodies such that their fragment-antigen-binding (Fab) units are oriented away from the surface in order to maximize analyte binding. The immobilization of only Fab' fragments yields benefits over the more traditional whole antibody immobilization technique. Bound Fab' fragments display higher surface densities, yielding a higher binding capacity for the analyte. The nucleophilic sulfide of the Fab' fragments allows for specific orientations to be achieved. For biosensors, this indicates a higher sensitivity and lower detection limit for a target analyte. The last thirty years have shown tremendous progress in the immobilization of Fab' fragments onto gold, Si-based, polysaccharide-based, plastic-based, magnetic, and inorganic surfaces. This review will show the current scope of Fab' immobilization techniques available and illustrate methods employed to minimize non-specific adsorption of undesirables. Furthermore, a variety of examples will be given to show the versatility of immobilized Fab' fragments in different applications and future directions of the field will be addressed, especially regarding biosensors. Copyright © 2015 Elsevier B.V. All rights reserved.
Crew appliance computer program manual, volume 1
NASA Technical Reports Server (NTRS)
Russell, D. J.
1975-01-01
Trade studies of numerous appliance concepts for advanced spacecraft galley, personal hygiene, housekeeping, and other areas were made to determine which best satisfy the space shuttle orbiter and modular space station mission requirements. Analytical models of selected appliance concepts not currently included in the G-189A Generalized Environmental/Thermal Control and Life Support Systems (ETCLSS) Computer Program subroutine library were developed. The new appliance subroutines are given along with complete analytical model descriptions, solution methods, user's input instructions, and validation run results. The appliance components modeled were integrated with G-189A ETCLSS models for shuttle orbiter and modular space station, and results from computer runs of these systems are presented.
A Method for Direct-Measurement of the Energy of Rupture of Impact Specimens
1953-01-01
CONTENTS: SECTION A - Foreword; SECTION B - Objectives of the Current Investigation; SECTION C - Basic Elements of an Impact Testing System; ...SECTION D - Discussion (1. Linear System; 2. Rotary System; 3. Methods for Measuring the Energy of Rupture); SECTION E - The Energy Measuring System ...has followed and to summarize our technical findings. C. BASIC ELEMENTS OF AN IMPACT TESTING SYSTEM: For the analytical purposes of this
A Newton method for the magnetohydrodynamic equilibrium equations
NASA Astrophysics Data System (ADS)
Oliver, Hilary James
We have developed and implemented a (J, B) space Newton method to solve the full nonlinear three dimensional magnetohydrodynamic equilibrium equations in toroidal geometry. Various cases have been run successfully, demonstrating significant improvement over Picard iteration, including a 3D stellarator equilibrium at β = 2%. The algorithm first solves the equilibrium force balance equation for the current density J, given a guess for the magnetic field B. This step is taken from the Picard-iterative PIES 3D equilibrium code. Next, we apply Newton's method to Ampere's Law by expansion of the functional J(B), which is defined by the first step. An analytic calculation in magnetic coordinates of how the Pfirsch-Schlüter currents vary in the plasma in response to a small change in the magnetic field yields the Newton gradient term (analogous to ∇f · δx in Newton's method for f(x) = 0). The algorithm is computationally feasible because we do this analytically, and because the gradient term is flux-surface local when expressed in terms of a vector potential in an A_r = 0 gauge. The equations are discretized by a hybrid spectral/offset grid finite difference technique, and leading order radial dependence is factored from Fourier coefficients to improve finite-difference accuracy near the polar-like origin. After calculating the Newton gradient term we transfer the equation from the magnetic grid to a fixed background grid, which greatly improves the code's performance.
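The Newton gradient term described above plays the role of ∇f · δx in the generic root-finding scheme f(x) = 0. A minimal scalar sketch of that generic scheme (illustrative only, not the PIES/3D implementation):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Generic Newton iteration for f(x) = 0: solve f'(x) * dx = -f(x), update x."""
    x = x0
    for _ in range(max_iter):
        dx = -f(x) / fprime(x)   # Newton step from the local gradient term
        x += dx
        if abs(dx) < tol:
            break
    return x

# Example: the root of x^2 - 2 starting from x0 = 1.0 converges quadratically
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
```

The same structure carries over to the functional setting: the analytically computed response of the currents to a field perturbation supplies the "fprime" information that Picard iteration lacks.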
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture... Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MREs are listed as follows: (1) Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
Electrodialytic in-line preconcentration for ionic solute analysis.
Ohira, Shin-Ichi; Yamasaki, Takayuki; Koda, Takumi; Kodama, Yuko; Toda, Kei
2018-04-01
Preconcentration is an effective way to improve analytical sensitivity. Many types of methods are used for enrichment of ionic solute analytes. However, current methods are batchwise and include procedures such as trapping and elution. In this manuscript, we propose in-line electrodialytic enrichment of ionic solutes. The method can enrich ionic solutes within seconds by quantitative transfer of analytes from the sample solution to the acceptor solution under an electric field. Because of quantitative ion transfer, the enrichment factor (the ratio of the analyte concentration in the obtained acceptor solution to that in the sample) depends only on the flow rate ratio of the sample solution to the acceptor solution. The concentration and flow rate ratios remained equal for enrichment factors of up to 70, 20, and 70 for the tested inorganic cations, inorganic anions, and heavy metal ions, respectively. The sensitivity of ionic solute determinations is also improved in proportion to the enrichment factor. The method can also simultaneously achieve matrix isolation and enrichment. The method was successfully applied to determine the concentrations of trace amounts of chloroacetic acids in tap water. The regulated concentration levels cannot be determined by conventional high-performance liquid chromatography with ultraviolet detection (HPLC-UV) without enrichment. However, enrichment with the present method is effective for determination of tap water quality by improving the limits of detection of HPLC-UV. The standard addition test with real tap water samples shows good recoveries (94.9-109.6%). Copyright © 2017 Elsevier B.V. All rights reserved.
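Because the ion transfer is quantitative, the enrichment bookkeeping reduces to a flow-rate ratio. A sketch of that relationship (the flow rates and concentration below are hypothetical, not the authors' operating conditions):

```python
def enrichment_factor(sample_flow_uL_min, acceptor_flow_uL_min):
    """With quantitative ion transfer, all analyte carried by the sample stream
    ends up in the acceptor stream, so the concentration ratio equals the
    flow-rate ratio of sample to acceptor."""
    return sample_flow_uL_min / acceptor_flow_uL_min

# e.g. a 700 uL/min sample stream into a 10 uL/min acceptor -> 70-fold enrichment
ef = enrichment_factor(700, 10)
c_acceptor = 0.5 * ef  # a 0.5 ug/L analyte reports as 35 ug/L after enrichment
```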
Recent developments in nickel electrode analysis
NASA Technical Reports Server (NTRS)
Whiteley, Richard V.; Daman, M. E.; Kaiser, E. Q.
1991-01-01
Three aspects of nickel electrode analysis for Nickel-Hydrogen and Nickel-Cadmium battery cell applications are addressed: (1) the determination of active material; (2) charged state nickel (as NiOOH + CoOOH); and (3) potassium ion content in the electrode. Four deloading procedures are compared for completeness of active material removal, and deloading conditions for efficient active material analyses are established. Two methods for charged state nickel analysis are compared: the current NASA procedure and a new procedure based on the oxidation of sodium oxalate by the charged material. Finally, a method for determining potassium content in an electrode sample by flame photometry is presented along with analytical results illustrating differences in potassium levels from vendor to vendor and the effects of stress testing on potassium content in the electrode. The relevance of these analytical procedures to electrode performance is reviewed.
Nie, Jinfang; Liang, Yuanzhi; Zhang, Yun; Le, Shangwang; Li, Dunnan; Zhang, Songbai
2013-01-21
In this paper, we report a simple, low-cost method for rapid, highly reproducible fabrication of paper-based microfluidics by using a commercially available, minitype CO2 laser cutting/engraving machine. This method involves only one operation of cutting a piece of paper by laser according to a predesigned pattern. The hollow microstructures formed in the paper are used as the 'hydrophobic barriers' to define the hydrophilic flowing paths. A typical paper device on a 4 cm × 4 cm piece of paper can be fabricated within ∼7-20 s; it is ready for use once the cutting process is finished. The main fabrication parameters such as the applied current and cutting rate of the laser were optimized. The fabrication resolution and multiplexed analytical capability of the hollow microstructure-patterned paper were also characterized.
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 161.180... DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements § 161.180 Enforcement analytical method. An analytical method suitable for enforcement purposes must be...
Reference Intervals of Common Clinical Chemistry Analytes for Adults in Hong Kong.
Lo, Y C; Armbruster, David A
2012-04-01
Defining reference intervals is a major challenge because of the difficulty in recruiting volunteers to participate and testing samples from a significant number of healthy reference individuals. Intervals cited from the historical literature are often suboptimal because they may be based on obsolete methods and/or only a small number of poorly defined reference samples. Blood donors in Hong Kong gave permission for additional blood to be collected for reference interval testing. The samples were tested for twenty-five routine analytes on the Abbott ARCHITECT clinical chemistry system. Results were analyzed using the Rhoads EP Evaluator software program, which is based on the CLSI/IFCC C28-A guideline and defines the reference interval as the 95% central range. Method-specific reference intervals were established for twenty-five common clinical chemistry analytes for a Chinese ethnic population. The intervals were defined for each gender separately and for both genders combined; gender-specific or combined-gender intervals were adopted as appropriate for each analyte. A large number of healthy, apparently normal blood donors from a local ethnic population were tested to provide current reference intervals for a new clinical chemistry system. Intervals were determined following an accepted international guideline. Laboratories using the same or similar methodologies may adopt these intervals if validated and deemed suitable for their patient population. Laboratories using different methodologies may be able to successfully adapt the intervals for their facilities using the reference interval transference technique based on a method comparison study.
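The nonparametric definition used here, the central 95% of reference results, can be sketched with simulated data. The analyte, mean, and SD below are illustrative placeholders, not the Hong Kong values:

```python
import numpy as np

# Simulated serum sodium results (mmol/L) from 500 healthy reference individuals
rng = np.random.default_rng(7)
results = rng.normal(loc=140.0, scale=2.0, size=500)

# Nonparametric 95% central range: the 2.5th and 97.5th percentiles
lower, upper = np.percentile(results, [2.5, 97.5])
print(f"reference interval: {lower:.1f}-{upper:.1f} mmol/L")
```

In practice the guideline also calls for outlier inspection and a minimum number of reference individuals (typically 120 per partition) before the percentiles are reported.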
NASA Astrophysics Data System (ADS)
Song, K.; Song, H. P.; Gao, C. F.
2018-03-01
It is well known that the key factor determining the performance of thermoelectric materials is the figure of merit, which depends on the thermal conductivity (TC), electrical conductivity, and Seebeck coefficient (SC). The electric current must be zero when measuring the TC and SC to avoid the occurrence of measurement errors. In this study, the complex-variable method is used to analyze the thermoelectric field near an elliptic inhomogeneity in an open circuit, and the field distributions are obtained in closed form. Our analysis shows that an electric current inevitably exists in both the matrix and the inhomogeneity even though the circuit is open. This unexpected electric current seriously affects the accuracy with which the TC and SC are measured. These measurement errors, both overall and local, are analyzed in detail. In addition, an error correction method is proposed based on the analytical results.
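The figure of merit referenced above combines the three transport properties as ZT = S²σT/κ, which is why measurement errors in the TC and SC propagate directly into the material ranking. A minimal sketch with textbook-scale values (illustrative, not from this paper):

```python
def figure_of_merit(seebeck_V_per_K, elec_cond_S_per_m, thermal_cond_W_per_mK, temp_K):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck_V_per_K**2 * elec_cond_S_per_m * temp_K / thermal_cond_W_per_mK

# Bi2Te3-like room-temperature values: S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m K)
zt = figure_of_merit(200e-6, 1.0e5, 1.5, 300.0)  # -> 0.8
```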
Methods for describing the electromagnetic properties of silver and gold nanoparticles.
Zhao, Jing; Pinchuk, Anatoliy O; McMahon, Jeffrey M; Li, Shuzhou; Ausman, Logan K; Atkinson, Ariel L; Schatz, George C
2008-12-01
This Account provides an overview of the methods that are currently being used to study the electromagnetics of silver and gold nanoparticles, with an emphasis on the determination of extinction and surface-enhanced Raman scattering (SERS) spectra. These methods have proven to be immensely useful in recent years for interpreting a wide range of nanoscience experiments and providing the capability to describe optical properties of particles up to several hundred nanometers in dimension, including arbitrary particle structures and complex dielectric environments (adsorbed layers of molecules, nearby metal films, and other particles). While some of the methods date back to Mie's celebrated work a century ago, others are still at the forefront of algorithm development in computational electromagnetics. This Account gives a qualitative description of the physical and mathematical basis behind the most commonly used methods, including both analytical and numerical methods, as well as representative results of applications that are relevant to current experiments. The analytical methods that we discuss are either derived from Mie theory for spheres or from the quasistatic (Gans) model as applied to spheres and spheroids. In this discussion, we describe the use of Mie theory to determine electromagnetic contributions to SERS enhancements that include retarded dipole emission effects, and the use of the quasistatic approximation for spheroidal particles interacting with dye adsorbate layers. The numerical methods include the discrete dipole approximation (DDA), the finite difference time domain (FDTD) method, and the finite element method (FEM) based on Whitney forms.
We discuss applications such as using DDA to describe the interaction of two gold disks to define electromagnetic hot spots, FDTD for light interacting with metal wires that go from particle-like plasmonic response to the film-like transmission as wire dimension is varied, and FEM studies of electromagnetic fields near cubic particles.
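The quasistatic limit discussed above has a compact closed form for a sphere much smaller than the wavelength: the dipole polarizability is α = r³(ε − ε_m)/(ε + 2ε_m) and the extinction cross section is C_ext ≈ 4πk Im(α). A minimal sketch (the dielectric values are illustrative placeholders, not tabulated silver or gold data):

```python
import numpy as np

def quasistatic_extinction(radius_nm, wavelength_nm, eps_particle, eps_medium=1.0):
    """Dipole-limit extinction cross section (nm^2) of a small sphere:
    C_ext = 4*pi*k*Im(alpha), with alpha = r^3 * (e - em) / (e + 2*em)."""
    k = 2 * np.pi * np.sqrt(eps_medium) / wavelength_nm
    alpha = radius_nm**3 * (eps_particle - eps_medium) / (eps_particle + 2 * eps_medium)
    return 4 * np.pi * k * np.imag(alpha)

# Near the dipole plasmon condition Re(eps) = -2*eps_medium the response is resonant
on_res = quasistatic_extinction(20, 400, -2.0 + 0.5j)
off_res = quasistatic_extinction(20, 400, 2.0 + 0.5j)
```

The resonance condition Re(ε) = −2ε_m is what places the silver and gold dipole plasmons in the visible; the full Mie and numerical treatments in the Account extend this picture to larger particles and complex environments.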
Pharmaceutical cocrystals, salts and polymorphs: Advanced characterization techniques.
Pindelska, Edyta; Sokal, Agnieszka; Kolodziejski, Waclaw
2017-08-01
The main goal of novel drug development is to obtain a drug with optimal physicochemical, pharmaceutical, and biological properties. Pharmaceutical companies and scientists modify active pharmaceutical ingredients (APIs), which often are cocrystals, salts or carefully selected polymorphs, to improve the properties of a parent drug. To find the best form of a drug, various advanced characterization methods should be used. In this review, we have described such analytical methods, dedicated to solid drug forms. Thus, diffraction, spectroscopic, thermal and also pharmaceutical characterization methods are discussed. They all are necessary to study a solid API in its intrinsic complexity from bulk down to the molecular level, gain information on its structure, properties, purity and possible transformations, and make the characterization efficient, comprehensive and complete. Furthermore, these methods can be used to monitor and investigate physical processes, involved in the drug development, in situ and in real time. The main aim of this paper is to gather information on the current advancements in the analytical methods and highlight their pharmaceutical relevance. Copyright © 2017 Elsevier B.V. All rights reserved.
Wu, Shimin; Anumol, Tarun; Gandhi, Jay; Snyder, Shane A
2017-03-03
The addition of oxidants for disinfecting water can lead to the formation of potentially carcinogenic compounds referred to as disinfection byproducts (DBPs). Haloacetic acids (HAAs) are among the most widely detected DBPs in US water utilities, and some of them are regulated by the US Environmental Protection Agency (USEPA). The present study developed a method to analyze all the compounds in USEPA method 557 (nine HAAs, bromate, and dalapon) plus four potentially more toxic iodinated HAAs in water by coupling ion chromatography with tandem mass spectrometry (IC-MS/MS). This aqueous direct-injection method has significant advantages over traditional GC methods, which require derivatization and sample extraction steps that are laborious and time-consuming and can negatively impact reproducibility. The method developed in this study requires half the time of the current USEPA method 557 on IC-MS/MS while including more compounds and achieving sub-μg/L method detection limits (MDLs) for all 15 target analytes. The single-laboratory lowest concentration minimum reporting level (LCMRL) was also determined in reagent water and ranged from 0.011 to 0.62 μg/L across the analytes. The mean recoveries of the analytes during matrix spike recovery tests were 77-125% in finished drinking water and 81-112% in surface water. This method was then applied to untreated, chlorinated, and chloraminated groundwater and surface water samples. Bromate and 9 HAAs were detected at different levels in some of these samples. Copyright © 2017 Elsevier B.V. All rights reserved.
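MDLs of the kind quoted above are conventionally derived with the EPA replicate-based procedure (40 CFR Part 136, Appendix B): MDL = t(n−1, 0.99) × s for n low-level spiked replicates. A sketch with hypothetical replicate data, not the study's measurements:

```python
import statistics

# Hypothetical seven low-level spike replicates for one HAA analyte, ug/L
replicates = [0.052, 0.049, 0.055, 0.050, 0.047, 0.053, 0.051]

# One-sided Student's t at 99% confidence for n - 1 = 6 degrees of freedom
T_99_DF6 = 3.143

s = statistics.stdev(replicates)
mdl = T_99_DF6 * s
print(f"MDL = {mdl:.4f} ug/L")
```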
NASA Astrophysics Data System (ADS)
Clark, A. E.; Yoon, S.; Sheesley, R. J.; Usenko, S.
2014-12-01
DISCOVER-AQ is a NASA mission seeking to better understand air quality in cities across the United States. In September 2013, flight, satellite, and ground-based data were collected in Houston, TX and the surrounding metropolitan area. Over 300 particulate matter filter samples were collected as part of the ground-based sampling efforts, at four sites across Houston. Samples include total suspended particulate matter (TSP) and fine particulate matter (less than 2.5 μm in aerodynamic diameter; PM2.5). For this project, an analytical method has been developed for the pressurized liquid extraction (PLE) of a wide variety of organic tracers and contaminants from quartz fiber filters (QFFs). Over 100 compounds were selected including polycyclic aromatic hydrocarbons (PAHs), hopanes, levoglucosan, organochlorine pesticides, polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs), and organophosphate flame retardants (OPFRs). Currently, there is no analytical method validated for the reproducible extraction of all seven compound classes in a single automated technique. Prior to extraction, QFF samples were spiked with known amounts of target analyte standards and isotopically-labeled surrogate standards. The QFFs were then extracted with methylene chloride:acetone at high temperature (100 °C) and pressure (1500 psi) using a Thermo Dionex Accelerated Solvent Extractor system (ASE 350). Extracts were concentrated, spiked with known amounts of isotopically-labeled internal standards, and analyzed by gas chromatography coupled with mass spectrometry utilizing electron ionization and electron capture negative ionization. Target analytes were surrogate recovery-corrected to account for analyte loss during sample preparation. Ambient concentrations of over 100 organic tracers and contaminants will be presented for four sites in Houston during DISCOVER-AQ.
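Surrogate recovery correction, as described, scales each measured analyte by the fractional recovery of its isotopically labeled surrogate spiked before extraction. A sketch of that arithmetic (the numbers are hypothetical):

```python
def recovery_correct(measured_ng, surrogate_recovery):
    """Correct a measured analyte mass for losses during extraction/cleanup,
    using the fractional recovery of an isotopically labeled surrogate
    spiked before extraction (recovery as a fraction, e.g. 0.80 = 80%)."""
    if not 0 < surrogate_recovery <= 1.5:
        raise ValueError("implausible surrogate recovery")
    return measured_ng / surrogate_recovery

# e.g. 80 ng measured with an 80% surrogate recovery -> 100 ng corrected
corrected = recovery_correct(80.0, 0.80)
```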
NASA Astrophysics Data System (ADS)
Castiglione, Steven Louis
As scientific research trends towards trace levels and smaller architectures, the analytical chemist is often faced with the challenge of quantitating such species in a variety of matrices. The challenge is heightened when the analytes prove to be potentially toxic or possess physical or chemical properties that make traditional analytical methods problematic. In such cases, the successful development of an acceptable quantitative method plays a critical role in the ability to further develop the species under study. This is particularly true for pharmaceutical impurities and nanoparticles (NP). The first portion of the research focuses on the development of a part-per-billion level HPLC method for a substituted phenazine-class pharmaceutical impurity. The development of this method was required due to the need for a rapid methodology to quantitatively determine levels of a potentially toxic phenazine moiety in order to ensure patient safety. As the synthetic pathway for the active ingredient was continuously refined to produce progressively lower amounts of the phenazine impurity, the approach for increasingly sensitive quantitative methods was required. The approaches evolved across four discrete methods, each employing a unique scheme for analyte detection. All developed methods were evaluated with regards to accuracy, precision and linear adherence as well as ancillary benefits and detriments -- e.g., one method in this evolution demonstrated the ability to resolve and detect other species from the phenazine class. The second portion of the research focuses on the development of an HPLC method for the quantitative determination of NP size distributions. The current methodology for the determination of NP sizes employs transmission electron microscopy (TEM), which requires sample drying without particle size alteration and which, in many cases, may prove infeasible due to cost or availability.
The feasibility of an HPLC method for NP size characterizations evolved across three methods, each employing a different approach for size resolution. These methods were evaluated primarily for sensitivity, which proved to be a substantial hurdle to further development, but does not appear to deter future research efforts.
Hoseini, Bibi Leila; Mazloum, Seyed Reza; Jafarnejad, Farzaneh; Foroughipour, Mohsen
2013-03-01
The clinical evaluation, as one of the most important elements in medical education, must measure students' competencies and abilities. The implementation of any assessment tool is basically dependent on the acceptance of students. This study tried to assess midwifery students' satisfaction with Direct Observation of Procedural Skills (DOPS) and current clinical evaluation methods. This quasi-experimental study was conducted in the university hospitals affiliated to Mashhad University of Medical Sciences. The subjects comprised 67 undergraduate midwifery students selected by convenience sampling and allocated to control and intervention groups according to the training rotation. The current method was used in the control group, and DOPS was conducted in the intervention group. The applied tools included DOPS rating scales, a logbook, and satisfaction questionnaires on the clinical evaluation methods. Validity and reliability of these tools were approved. At the end of training, students' satisfaction with the evaluation methods was assessed by the mentioned tools. The data were analyzed by descriptive and analytical statistics. Satisfaction mean scores of midwifery students with DOPS and the current method were 76.7 ± 12.9 and 62.6 ± 14.7 (out of 100), respectively. The DOPS students' satisfaction mean score was significantly higher than the score obtained with the current method (P < 0.001). The most satisfactory domains were "consistency with learning objectives" in the current method (71.2 ± 14.9) and "objectiveness" in DOPS (87.9 ± 15.0). In contrast, the least satisfactory domains were "interest in applying the method" in the current method (57.8 ± 26.5) and "number of assessments for each skill" in DOPS (58.8 ± 25.9). This study showed that the DOPS method is associated with greater student satisfaction.
Since the students' satisfaction with the current method was also acceptable, we recommend combining this new clinical evaluation method with the current method, which covers its weaknesses, to promote the students' satisfaction with clinical evaluation methods in a perfect manner.
NASA Astrophysics Data System (ADS)
Amerian, Z.; Salem, M. K.; Salar Elahi, A.; Ghoranneviss, M.
2017-03-01
Equilibrium reconstruction consists of identifying, from experimental measurements, a distribution of the plasma current density that satisfies the pressure balance constraint. Numerous methods exist to solve the Grad-Shafranov equation, describing the equilibrium of plasma confined by an axisymmetric magnetic field. In this paper, we have proposed a new numerical solution to the Grad-Shafranov equation (for an axisymmetric magnetic field, transformed to cylindrical coordinates and solved with the Chebyshev collocation method) when the source term (current density function) on the right-hand side is linear. The Chebyshev collocation method is a method for computing highly accurate numerical solutions of differential equations. We describe a circular cross-section of the tokamak, present numerical results for the magnetic surfaces of the IR-T1 tokamak, and then compare the results with an analytical solution.
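Chebyshev collocation, as used here, replaces derivatives with a dense differentiation matrix evaluated at Chebyshev points. A minimal 1D sketch on a model two-point boundary-value problem (not the Grad-Shafranov operator), using the standard differentiation-matrix construction:

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and points x on [-1, 1]
    (standard spectral-collocation construction)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dx = x[:, None] - x[None, :]
    d = np.outer(c, 1.0 / c) / (dx + np.eye(n + 1))
    d -= np.diag(d.sum(axis=1))       # diagonal from the row-sum (negative-sum) trick
    return d, x

# Model problem: u'' = exp(4x) on [-1, 1] with u(-1) = u(1) = 0
n = 16
d, x = cheb(n)
d2 = (d @ d)[1:-1, 1:-1]              # interior rows/cols enforce the boundary values
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(d2, np.exp(4 * x[1:-1]))

exact = (np.exp(4 * x) - x * np.sinh(4) - np.cosh(4)) / 16
err = np.max(np.abs(u - exact))       # spectrally accurate already at modest n
```

The same idea extends to the 2D Grad-Shafranov problem by collocating each coordinate direction; with a linear current density function the discretized system stays linear, as the abstract notes.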
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Enforcement analytical method. 158.355... DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An analytical method suitable for enforcement purposes must be provided for each active ingredient in the...
NASA Technical Reports Server (NTRS)
Wiebe, D. T.; Zigmond, M. J.; Tufts, C. A.
2002-01-01
The National Aeronautics and Space Administration (NASA) White Sands Test Facility (WSTF) was established in 1963 primarily to provide rocket engine testing services for several NASA programs. The groundwater underlying the site has been contaminated as a result of historical operations. Groundwater contaminants include several volatile organic compounds (VOCs) and two semi-volatile compounds: N-nitrosodimethylamine (NDMA) and N-nitrodimethylamine (DMN). This paper discusses some of the technical, analytical, regulatory, and health risk issues associated with the contaminant plume. The plume has moved approximately 2.5 miles downgradient of the facility industrial boundary, with evidence of continued migration. As a result, NASA has proposed a pump-and-treat system using air strippers and ultraviolet (UV) oxidation to stabilize future movement of the contaminant plume. The system has been designed to treat 1,076 gallons (4,073 liters) per minute, with provisions for future expansion. The UV oxidation process was selected to treat NDMA-contaminated groundwater based on successes at other NDMA-contaminated sites. Bench- and pilot-scale testing of WSTF groundwater confirmed the ability of UV oxidation to destroy NDMA and generated sufficient data to design the proposed full-scale treatment system. NDMA is acutely toxic and is a probable human carcinogen. EPA-recommended health risk criteria for the residential consumption of NDMA/DMN-contaminated groundwater were used to determine that a 1.0 x 10(exp -6) excess cancer risk corresponds to 1.7 parts per trillion (ppt). EPA analytical methods are unable to detect NDMA and DMN in the low ppt range. EPA's current Appendix IX analytical method used to screen for NDMA, Method 8270, can detect NDMA only at levels that are orders of magnitude greater than the recommended health risk level. Additionally, EPA Method 607, the most sensitive EPA-approved method, has a detection limit of 150 ppt.
This corresponds to an excess cancer risk of 9.0 x 10(exp -5), which exceeds the State of New Mexico's water quality standard of a cancer risk less than 1 x 10(exp -5). The treatment system has been engineered to treat contaminated groundwater to levels significantly below the New Mexico standard. However, the inability of EPA-approved analytical methods to detect NDMA and DMN at low ppt levels, and to provide verification of compliance with the 1 x 10(exp -5) cancer risk, introduces a notable risk to the long-term operation of the system. WSTF has been working with Southwest Research Institute to develop a non-EPA analytical method that can achieve a reporting limit of 1 ppt, which corresponds to an excess cancer risk of 7.6 x 10(exp -7). WSTF is currently developing a proposal to obtain approval from the New Mexico Environment Department (NMED) of this non-EPA method.
Methods for detection of GMOs in food and feed.
Marmiroli, Nelson; Maestri, Elena; Gullì, Mariolina; Malcevschi, Alessio; Peano, Clelia; Bordoni, Roberta; De Bellis, Gianluca
2008-10-01
This paper reviews aspects relevant to detection and quantification of genetically modified (GM) material within the feed/food chain. The GM crop regulatory framework at the international level is evaluated with reference to traceability and labelling. Current analytical methods for the detection, identification, and quantification of transgenic DNA in food and feed are reviewed. These methods include quantitative real-time PCR, multiplex PCR, and multiplex real-time PCR. Particular attention is paid to methods able to identify multiple GM events in a single reaction and to the development of microdevices and microsensors, though they have not been fully validated for application.
Unlocking Proteomic Heterogeneity in Complex Diseases through Visual Analytics
Bhavnani, Suresh K.; Dang, Bryant; Bellala, Gowtham; Divekar, Rohit; Visweswaran, Shyam; Brasier, Allan; Kurosky, Alex
2015-01-01
Despite years of preclinical development, biological interventions designed to treat complex diseases like asthma often fail in phase III clinical trials. These failures suggest that current methods to analyze biomedical data might be missing critical aspects of biological complexity, such as the assumption that cases and controls come from homogeneous distributions. Here we discuss why and how methods from the rapidly evolving field of visual analytics can help translational teams (consisting of biologists, clinicians, and bioinformaticians) to address the challenge of modeling and inferring heterogeneity in the proteomic and phenotypic profiles of patients with complex diseases. Because a primary goal of visual analytics is to amplify the cognitive capacities of humans for detecting patterns in complex data, we begin with an overview of the cognitive foundations of the field of visual analytics. Next, we organize the primary ways in which a specific form of visual analytics called networks has been used to model and infer biological mechanisms, which helps to identify the properties of networks that are particularly useful for the discovery and analysis of proteomic heterogeneity in complex diseases. We describe one such approach, called subject-protein networks, and demonstrate its application on two proteomic datasets. This demonstration provides insights to help translational teams overcome theoretical, practical, and pedagogical hurdles to the widespread use of subject-protein networks for analyzing molecular heterogeneities, with the translational goal of designing biomarker-based clinical trials and accelerating the development of personalized approaches to medicine. PMID:25684269
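The heterogeneity argument can be made concrete with a toy subject-by-protein abundance matrix: if subjects' protein profiles fall into distinct similarity clusters, treating all cases as one homogeneous distribution is misleading. A minimal sketch (made-up abundances, not the paper's datasets):

```python
import numpy as np

# Toy subject-by-protein abundance matrix (rows: subjects, cols: proteins).
X = np.array([
    [5.0, 0.2, 4.8],   # subject A
    [4.7, 0.1, 5.1],   # subject B, profile similar to A
    [0.3, 6.0, 0.2],   # subject C, a distinct molecular subtype
])

# Cosine similarity between subject profiles; a subject-protein network
# analysis would draw edges where this similarity is high.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
sim = Xn @ Xn.T

# Subjects A and B cluster tightly; C stands apart, i.e. the "cases" are
# not drawn from a single homogeneous distribution.
```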
Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C
2014-01-01
Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (SBM) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or QCP), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows predictive analytic techniques to be used to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz), with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology in identifying clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP.
With the slow development of the tamponade, the SBM model and the simulated biosignals are seen to diverge in the early stages of physiologic deterioration, while the variables are still within normal ranges. Thus, the SBM system was found to identify pathophysiologic conditions in a timeframe in which they would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to identify a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that SBM could provide early identification of clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamics, monitoring.
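The essence of similarity-based modeling can be caricatured in a few lines: reconstruct each new observation as a similarity-weighted combination of stored "normal" memory vectors, and flag observations whose reconstruction residual grows. The sketch below is our toy illustration with made-up vital-sign statistics and a Gaussian kernel, not the SBM implementation used in the study:

```python
import numpy as np

def sbm_estimate(memory, x, h=5.0):
    # memory: (n_samples, n_vars) matrix of "normal" training vectors.
    # Returns a kernel-weighted reconstruction of observation x from memory.
    d2 = ((memory - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * h ** 2))
    w /= w.sum()
    return w @ memory

# Toy "normal" memory: heart rate, systolic BP, SpO2 (hypothetical stats).
rng = np.random.default_rng(0)
normal = rng.normal([80.0, 120.0, 98.0], [3.0, 5.0, 1.0], size=(200, 3))

obs_ok = np.array([81.0, 118.0, 98.2])    # within normal variation
obs_bad = np.array([110.0, 85.0, 91.0])   # decompensation-like pattern

res_ok = np.linalg.norm(obs_ok - sbm_estimate(normal, obs_ok))
res_bad = np.linalg.norm(obs_bad - sbm_estimate(normal, obs_bad))
# res_bad >> res_ok: the residual flags deterioration even before any
# single variable crosses a conventional alarm threshold.
```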
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
Adaptive Variable Bias Magnetic Bearing Control
NASA Technical Reports Server (NTRS)
Johnson, Dexter; Brown, Gerald V.; Inman, Daniel J.
1998-01-01
Most magnetic bearing control schemes use a bias current with a superimposed control current to linearize the relationship between the control current and the force it delivers. With the existence of the bias current, even in no load conditions, there is always some power consumption. In aerospace applications, power consumption becomes an important concern. In response to this concern, an alternative magnetic bearing control method, called Adaptive Variable Bias Control (AVBC), has been developed and its performance examined. The AVBC operates primarily as a proportional-derivative controller with a relatively slow, bias current dependent, time-varying gain. The AVBC is shown to reduce electrical power loss, be nominally stable, and provide control performance similar to conventional bias control. Analytical, computer simulation, and experimental results are presented in this paper.
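The linearization that the bias current provides, and that AVBC trades against coil power loss, follows directly from the opposed-coil force law: with the two electromagnets driven at i_b + i_c and i_b - i_c, the quadratic terms cancel and the net force is 4k·i_b·i_c/g², linear in the control current for a fixed bias. A toy numerical check (generic textbook bearing model with hypothetical constants, not NASA's AVBC controller):

```python
def bearing_force(i_bias, i_ctrl, k=1.0e-5, gap=1.0e-3):
    # Net force of two opposed electromagnets driven at (ib + ic), (ib - ic):
    # F = k[(ib + ic)^2 - (ib - ic)^2] / g^2 = 4 k ib ic / g^2
    return k * ((i_bias + i_ctrl) ** 2 - (i_bias - i_ctrl) ** 2) / gap ** 2

# For a fixed bias, the force is exactly linear in the control current:
f1 = bearing_force(1.0, 0.1)
f2 = bearing_force(1.0, 0.2)   # doubling i_ctrl doubles the force
```

Shrinking the bias reduces coil power (which scales with current squared) but also shrinks the force gain 4k·i_b/g², which is why AVBC must adapt its proportional-derivative gains as the bias varies.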
ERIC Educational Resources Information Center
MacNeill, Sheila; Campbell, Lorna M.; Hawksey, Martin
2014-01-01
This article presents an overview of the development and use of analytics in the context of education. Using Buckingham Shum's three levels of analytics, the authors present a critical analysis of current developments in the domain of learning analytics, and contrast the potential value of analytics research and development with real world…
Grelewska-Nowotko, Katarzyna; Żurawska-Zajfert, Magdalena; Żmijewska, Ewelina; Sowa, Sławomir
2018-05-01
In recent years, digital polymerase chain reaction (dPCR), a new molecular biology technique, has been gaining in popularity. Among many other applications, this technique can be used for the detection and quantification of genetically modified organisms (GMOs) in food and feed. It might replace the currently widely used real-time PCR method (qPCR) by overcoming problems related to PCR inhibition and the requirement for certified reference materials to be used as calibrants. In theory, validated qPCR methods can be easily transferred to the dPCR platform; however, optimization of the PCR conditions might be necessary. In this study, we report the transfer of two validated qPCR methods for quantification of maize DAS1507 and NK603 events to the droplet dPCR (ddPCR) platform. After some optimization, both methods were verified according to the guidance of the European Network of GMO Laboratories (ENGL) on analytical method verification (ENGL working group on "Method Verification" (2011), Verification of Analytical Methods for GMO Testing When Implementing Interlaboratory Validated Methods). The digital PCR methods performed as well as or better than the qPCR methods, confirming their suitability for GMO determination in food and feed.
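The absolute quantification underlying ddPCR rests on Poisson statistics over the droplets: from the fraction of negative partitions, the mean target copies per droplet is λ = -ln(N_neg/N). The sketch below illustrates that arithmetic and a copy-ratio definition of GM content; the droplet counts and the ratio-based GM% are illustrative assumptions, not the validated DAS1507/NK603 methods:

```python
import math

def copies_per_partition(negative, total):
    # Poisson estimate of mean target copies per droplet: lambda = -ln(Nneg/N)
    return -math.log(negative / total)

def gm_percent(neg_gm, neg_ref, total):
    # GM content as the copy ratio of the event-specific target to the
    # endogenous reference gene (both measured on the same droplet panel).
    return 100.0 * copies_per_partition(neg_gm, total) / \
                   copies_per_partition(neg_ref, total)

# e.g. 18000 of 20000 droplets negative for the GM target, 10000 of 20000
# negative for the reference gene:
content = gm_percent(18000, 10000, 20000)
```

Because λ comes from counting partitions, no calibration curve or conversion factor is needed, which is the advantage over qPCR noted above.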
Li, Tingting; Wang, Wei; Zhao, Haijian; He, Falin; Zhong, Kun; Yuan, Shuai; Wang, Zhiguo
2017-09-07
This study aimed to investigate the status of internal quality control (IQC) for cardiac biomarkers from 2011 to 2016, to provide an overall picture of the precision level of measurements in China and to set appropriate precision specifications. Internal quality control data of cardiac biomarkers, including creatine kinase MB (CK-MB) (μg/L), CK-MB (U/L), myoglobin (Mb), cardiac troponin I (cTnI), cardiac troponin T (cTnT), and homocysteine (HCY), were collected by a web-based external quality assessment (EQA) system. Percentages of laboratories meeting five precision quality specifications for current coefficients of variation (CVs) were calculated. Then, appropriate precision specifications were chosen for these six analytes. Finally, the CVs and IQC practice were further analyzed with different grouping methods. The current CVs remained nearly constant over the 6 years. cTnT had the highest pass rates every year against all five specifications, whereas HCY had the lowest. Overall, most analytes had a satisfactory performance (pass rates >80%), except for HCY, if one-third TEa or the minimum specification was employed. When the optimal specification was applied, the performance of most analytes was unsatisfactory (pass rates <60%), except for cTnT. The appropriate precision specifications for CK-MB (μg/L), CK-MB (U/L), Mb, cTnI, cTnT, and HCY were set as current CVs of less than 9.20%, 9.90%, 7.50%, 10.54%, 7.63%, and 6.67%, respectively. The data on IQC practices indicated wide variation and substantial progress. The precision performance of cTnT was already satisfactory, while the other five analytes, especially HCY, were still unsatisfactory; thus, ongoing investigation and continuous improvement of IQC are still needed. © 2017 Wiley Periodicals, Inc.
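The pass rates reported above reduce to counting the laboratories whose current CV meets a given precision specification. A minimal sketch (hypothetical CV values, not the study's data):

```python
def pass_rate(cvs, spec):
    # Percentage of laboratories whose current CV (%) meets the precision
    # specification, i.e. CV <= spec.
    return 100.0 * sum(cv <= spec for cv in cvs) / len(cvs)

# Illustrative current CVs (%) for one analyte, checked against a 10% spec:
labs = [4.1, 6.8, 7.2, 9.9, 12.5]
rate = pass_rate(labs, 10.0)   # 4 of 5 laboratories pass
```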
Rahmani, Meisam; Ahmadi, Mohammad Taghi; Abadi, Hediyeh Karimi Feiz; Saeidmanesh, Mehdi; Akbari, Elnaz; Ismail, Razali
2013-01-30
Recent development of trilayer graphene nanoribbon Schottky-barrier field-effect transistors (FETs) will be governed by transistor electrostatics and quantum effects that impose scaling limits like those of Si metal-oxide-semiconductor field-effect transistors. The current-voltage characteristic of a Schottky-barrier FET has been studied as a function of physical parameters such as effective mass, graphene nanoribbon length, gate insulator thickness, and electrical parameters such as Schottky barrier height and applied bias voltage. In this paper, the scaling behaviors of a Schottky-barrier FET using trilayer graphene nanoribbon are studied and analytically modeled. A novel analytical method is also presented for describing a switch in a Schottky-contact double-gate trilayer graphene nanoribbon FET. In the proposed model, different stacking arrangements of trilayer graphene nanoribbon are assumed as metal and semiconductor contacts to form a Schottky transistor. Based on this assumption, an analytical model and numerical solution of the junction current-voltage are presented in which the applied bias voltage and channel length dependence characteristics are highlighted. The model is then compared with other types of transistors. The developed model can assist in comprehending experiments involving graphene nanoribbon Schottky-barrier FETs. It is demonstrated that the proposed structure exhibits negligible short-channel effects, an improved on-current, realistic threshold voltage, and opposite subthreshold slope and meets the International Technology Roadmap for Semiconductors near-term guidelines. Finally, the results showed that there is a fast transient between on-off states. In other words, the suggested model can be used as a high-speed switch where the value of subthreshold slope is small and thus leads to less power consumption.
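For readers outside the field, the usual starting point for Schottky-barrier current-voltage modeling is the thermionic-emission diode equation. The sketch below implements that textbook form with hypothetical parameter values; it is not the authors' trilayer graphene nanoribbon model:

```python
import math

def schottky_current(V, phi_b=0.3, T=300.0, area=1.0e-12, A_star=1.2e6):
    # Thermionic emission over a Schottky barrier (textbook form):
    #   I = area * A* * T^2 * exp(-q*phi_b / kT) * (exp(q*V / kT) - 1)
    # phi_b: barrier height (V), A*: Richardson constant (A m^-2 K^-2),
    # area: junction area (m^2) -- all illustrative values here.
    q = 1.602e-19   # C
    k = 1.381e-23   # J/K
    I_s = area * A_star * T ** 2 * math.exp(-q * phi_b / (k * T))
    return I_s * math.expm1(q * V / (k * T))
```

Raising the barrier height phi_b suppresses the saturation current exponentially, which is the lever a gate exerts in a Schottky-barrier FET.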
Lim, Chee Wei; Tai, Siew Hoon; Lee, Lin Min; Chan, Sheot Harn
2012-07-01
The current food crisis demands unambiguous determination of mycotoxin contamination in staple foods to achieve safer food for consumption. This paper describes the first accurate LC-MS/MS method developed to analyze trichothecenes in grains by applying multiple reaction monitoring (MRM) transition and MS(3) quantitation strategies in tandem. The trichothecenes are nivalenol, deoxynivalenol, deoxynivalenol-3-glucoside, fusarenon X, 3-acetyl-deoxynivalenol, 15-acetyl-deoxynivalenol, diacetoxyscirpenol, and HT-2 and T-2 toxins. Acetic acid and ammonium acetate were used to convert the analytes into their respective acetate adducts and ammonium adducts under negative and positive MS polarity conditions, respectively. The mycotoxins were separated by reversed-phase LC in a 13.5-min run, ionized using electrospray ionization, and detected by tandem mass spectrometry. Analyte-specific mass-to-charge (m/z) ratios were used to perform quantitation under MRM transition and MS(3) (linear ion trap) modes. Three experiments were performed for each quantitation mode and matrix in batches over 6 days for the recovery studies. The matrix effect was investigated at concentration levels of 20, 40, 80, 120, 160, and 200 μg kg(-1) (n = 3) in 5 g corn flour and rice flour. Extraction with acetonitrile provided a good overall recovery range of 90-108% (n = 3) at three spiking concentration levels of 40, 80, and 120 μg kg(-1). A quantitation limit of 2-6 μg kg(-1) was achieved by applying the MRM transition quantitation strategy. Under MS(3) mode, a quantitation limit of 4-10 μg kg(-1) was achieved. Relative standard deviations of 2-10% and 2-11% were reported for MRM transition and MS(3) quantitation, respectively.
The successful utilization of MS(3) enabled accurate analyte fragmentation pattern matching and its quantitation, leading to the development of analytical methods in fields that demand both analyte specificity and fragmentation fingerprint-matching capabilities that are unavailable under MRM transition.
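The 90-108% recoveries quoted above come from a standard spike-recovery calculation. A minimal sketch (illustrative numbers, not data from the study):

```python
def percent_recovery(measured, spiked, blank=0.0):
    # Spike-recovery check: fraction of the spiked amount actually found,
    # after subtracting any native (blank) level in the matrix.
    return (measured - blank) / spiked * 100.0

# e.g. 80 ug/kg spiked into blank corn flour, 76 ug/kg measured back:
r = percent_recovery(76.0, 80.0)   # 95% recovery, within the 90-108% range
```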
Zimpl, Milan; Skopalova, Jana; Jirovsky, David; Bartak, Petr; Navratil, Tomas; Sedonikova, Jana; Kotoucek, Milan
2012-01-01
Derivatives of quinoxalin-2-one are interesting compounds with potential pharmacological activity. From this point of view, understanding their electrochemical behavior is of great importance. In the present paper, a mechanism for the electrochemical reduction of quinoxalin-2-one derivatives at a dropping mercury electrode is proposed. The pyrazine ring was found to be the main electroactive center, undergoing a pH-dependent two-electron reduction process. Protonation of the nitrogen in position 4 precedes electron acceptance, forming a semiquinone radical intermediate that is relatively stable in acidic solutions. Its further reduction is manifested by a separate current signal. A positive mesomeric effect of the non-protonated amino group in position 7 of derivative III accelerates the semiquinone reduction, yielding a single current wave. The suggested reaction mechanism was verified by means of direct current polarography; differential pulse, cyclic, and elimination voltammetry; and coulometry with subsequent GC/MS analysis. This understanding of the mechanism was applied in the development of an analytical method for the determination of the studied compounds. PMID:22666117
An Efficient and Effective Design of InP Nanowires for Maximal Solar Energy Harvesting.
Wu, Dan; Tang, Xiaohong; Wang, Kai; He, Zhubing; Li, Xianqiang
2017-11-25
Solar cells based on subwavelength-dimension semiconductor nanowire (NW) arrays promise comparable or better performance than their planar counterparts by taking advantage of strong light coupling and light trapping. In this paper, we present an accurate and time-saving analytical design for the optimal geometrical parameters of vertically aligned InP NWs for maximal solar energy absorption. Short-circuit current densities are calculated for each NW array with different geometrical dimensions under solar illumination. Optimal geometrical dimensions are quantitatively presented for single, double, and multiple diameters of the NW arrays, arranged in both square and hexagonal lattices, achieving a maximal short-circuit current density of 33.13 mA/cm2. At the same time, intensive finite-difference time-domain numerical simulations are performed to investigate the same NW arrays for the highest light absorption. Compared with time-consuming simulations and experimental results, the predicted maximal short-circuit current densities deviate by less than 2.2% in all cases. These results unambiguously demonstrate that this analytical method provides a fast and accurate route to guide high-performance InP NW-based solar cell design.
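Under the usual assumption that every absorbed photon yields one collected carrier, the short-circuit current density being maximized is the charge times the absorbed photon flux: J_sc = q ∫ Φ(λ) A(λ) dλ. The sketch below evaluates that integral with a deliberately toy flat spectrum and flat absorptance (both assumptions, normalized so the in-band photon count is roughly AM1.5-like up to the InP band edge), not the paper's simulated spectra:

```python
import numpy as np

q = 1.602e-19                            # elementary charge, C
lam = np.linspace(300e-9, 925e-9, 626)   # wavelengths up to the InP band edge, m

# Toy flat spectral photon flux, normalized so ~2.3e21 photons m^-2 s^-1
# fall in-band (roughly AM1.5-like); a real calculation uses the tabulated
# solar spectrum and the wavelength-dependent absorptance of the NW array.
phi = np.full_like(lam, 2.3e21 / (925e-9 - 300e-9))
absorptance = np.full_like(lam, 0.9)     # assumed flat A(lambda)

dlam = lam[1] - lam[0]
j_sc = q * np.sum(phi * absorptance) * dlam   # A/m^2; divide by 10 for mA/cm^2
```

With these toy inputs J_sc comes out near 33 mA/cm2, the same order as the optimum reported above; the paper's optimization varies the NW geometry to push A(λ) toward this near-complete absorption.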
An Efficient and Effective Design of InP Nanowires for Maximal Solar Energy Harvesting
NASA Astrophysics Data System (ADS)
Wu, Dan; Tang, Xiaohong; Wang, Kai; He, Zhubing; Li, Xianqiang
2017-11-01
Solar cells based on subwavelength-dimension semiconductor nanowire (NW) arrays promise comparable or better performance than their planar counterparts by taking advantage of strong light coupling and light trapping. In this paper, we present an accurate and time-saving analytical design for the optimal geometrical parameters of vertically aligned InP NWs for maximal solar energy absorption. Short-circuit current densities are calculated for each NW array with different geometrical dimensions under solar illumination. Optimal geometrical dimensions are quantitatively presented for single, double, and multiple diameters of the NW arrays, arranged in both square and hexagonal lattices, achieving a maximal short-circuit current density of 33.13 mA/cm2. At the same time, intensive finite-difference time-domain numerical simulations are performed to investigate the same NW arrays for the highest light absorption. Compared with time-consuming simulations and experimental results, the predicted maximal short-circuit current densities deviate by less than 2.2% in all cases. These results unambiguously demonstrate that this analytical method provides a fast and accurate route to guide high-performance InP NW-based solar cell design.
PCR technology for screening and quantification of genetically modified organisms (GMOs).
Holst-Jensen, Arne; Rønning, Sissel B; Løvseth, Astrid; Berdal, Knut G
2003-04-01
Although PCR technology has obvious limitations, the potentially high degree of sensitivity and specificity explains why it has been the first choice of most analytical laboratories interested in detection of genetically modified (GM) organisms (GMOs) and derived materials. Because the products that laboratories receive for analysis are often processed and refined, the quality and quantity of target analyte (e.g. protein or DNA) frequently challenges the sensitivity of any detection method. Among the currently available methods, PCR methods are generally accepted as the most sensitive and reliable methods for detection of GM-derived material in routine applications. The choice of target sequence motif is the single most important factor controlling the specificity of the PCR method. The target sequence is normally a part of the modified gene construct, for example a promoter, a terminator, a gene, or a junction between two of these elements. However, the elements may originate from wildtype organisms, they may be present in more than one GMO, and their copy number may also vary from one GMO to another. They may even be combined in a similar way in more than one GMO. Thus, the choice of method should fit the purpose. Recent developments include event-specific methods, particularly useful for identification and quantification of GM content. Thresholds for labelling are now in place in many countries including those in the European Union. The success of the labelling schemes is dependent upon the efficiency with which GM-derived material can be detected. We will present an overview of currently available PCR methods for screening and quantification of GM-derived DNA, and discuss their applicability and limitations. In addition, we will discuss some of the major challenges related to determination of the limits of detection (LOD) and quantification (LOQ), and to validation of methods.
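One common way to put numbers on the LOD and LOQ discussed above is the ICH-style rule of thumb based on the calibration slope S and the standard deviation σ of blank or low-level responses: LOD ≈ 3.3σ/S and LOQ ≈ 10σ/S. A minimal sketch (illustrative values; GMO-testing guidelines may prescribe other empirical procedures):

```python
def lod_loq(sd_blank, slope):
    # ICH Q2(R1)-style estimates from the calibration curve:
    #   LOD ~ 3.3 * sigma / S,  LOQ ~ 10 * sigma / S
    return 3.3 * sd_blank / slope, 10.0 * sd_blank / slope

# e.g. sigma = 0.02 response units, slope = 0.5 units per %-GM:
lod, loq = lod_loq(0.02, 0.5)
```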
Calculation of AC loss in two-layer superconducting cable with equal currents in the layers
NASA Astrophysics Data System (ADS)
Erdogan, Muzaffer
2016-12-01
A new method for calculating the AC loss of two-layer superconducting (SC) power transmission cables using the commercial software Comsol Multiphysics, relying on the assumption of equal partition of current between the layers, is proposed. Applying the method to a cable composed of two coaxial cylindrical SC tubes, the results are in good agreement with the analytical ones of the duoblock model. The method is also applied to a cable composed of a cylindrical copper former surrounded by two coaxial cylindrical layers of superconducting tapes embedded in an insulating medium, and the tape-on-tape and tape-on-gap configurations are compared. Good agreement between the duoblock model and the numerical results for the tape-on-gap cable is observed.
The current role of on-line extraction approaches in clinical and forensic toxicology.
Mueller, Daniel M
2014-08-01
In today's clinical and forensic toxicological laboratories, automation is of interest because of its ability to optimize processes, reduce manual workload and handling errors, and minimize exposure to potentially infectious samples. Extraction is usually the most time-consuming step; therefore, automation of this step is reasonable. Currently, from the field of clinical and forensic toxicology, methods using the following on-line extraction techniques have been published: on-line solid-phase extraction, turbulent flow chromatography, solid-phase microextraction, microextraction by packed sorbent, single-drop microextraction, and on-line desorption of dried blood spots. Most of these published methods are either single-analyte or multicomponent procedures; methods intended for systematic toxicological analysis are relatively scarce. However, the use of on-line extraction will certainly increase in the near future.
Noguchi, Akio; Nakamura, Kosuke; Sakata, Kozue; Sato-Fukuda, Nozomi; Ishigaki, Takumi; Mano, Junichi; Takabatake, Reona; Kitta, Kazumi; Teshima, Reiko; Kondo, Kazunari; Nishimaki-Mogami, Tomoko
2016-04-19
A number of genetically modified (GM) maize events have been developed and approved worldwide for commercial cultivation. A screening method is needed to monitor GM maize approved for commercialization in countries that mandate the labeling of foods containing a specified threshold level of GM crops. In Japan, a screening method has been implemented to monitor approved GM maize since 2001. However, the screening method currently used in Japan is time-consuming and requires generation of a calibration curve and an experimental conversion factor (C(f)) value. We developed a simple screening method that avoids the need for a calibration curve and C(f) value. In this method, ΔC(q) values between the target sequences and the endogenous gene are calculated using multiplex real-time PCR, and the ΔΔC(q) value between the analytical and control samples is used as the criterion for identifying analytical samples in which the GM organism content is below the threshold level for labeling of GM crops. An interlaboratory study indicated that the method is applicable on at least the two models of PCR instrument used in this study.
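The ΔΔC(q) criterion described above reduces to two subtractions per run. The sketch below is a schematic of that comparison; the zero cutoff and its direction (a sample with less GM target amplifies later, giving a larger ΔC(q) than the threshold-level control and hence a positive ΔΔC(q)) are our reading of the described logic, not the validated Japanese protocol:

```python
def delta_delta_cq(cq_target_sample, cq_endog_sample,
                   cq_target_control, cq_endog_control):
    # Per-run normalization: dCq = Cq(target) - Cq(endogenous gene),
    # then compare the analytical sample against the threshold-level control.
    d_sample = cq_target_sample - cq_endog_sample
    d_control = cq_target_control - cq_endog_control
    return d_sample - d_control

def below_labeling_threshold(ddcq, cutoff=0.0):
    # Hypothetical decision rule: a positive ddCq means the sample holds
    # less GM target than the threshold-level control.
    return ddcq > cutoff

# Sample's GM target amplifies 2 cycles later than the control's:
ddcq = delta_delta_cq(30.0, 20.0, 28.0, 20.0)
```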
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture... POULTRY AND EGG PRODUCTS Mandatory Analyses of Egg Products § 94.4 Analytical methods. The majority of analytical methods used by the USDA laboratories to perform mandatory analyses for egg products are listed as...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra
2018-02-01
The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. Until now, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma-hydroxybutyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances are discussed, which may be used as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), and matrix effects and recovery have to be approached differently. The highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.
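One practical answer to the calibration problem for endogenous analytes, where no analyte-free matrix exists, is the standard-addition approach: spike known amounts into the sample and extrapolate the response line back to zero added analyte. A minimal sketch (illustrative numbers; the paper discusses validation parameters generally and does not prescribe this exact procedure):

```python
import numpy as np

# Standard addition: the magnitude of the x-intercept of response vs.
# added concentration estimates the endogenous level already present.
added = np.array([0.0, 1.0, 2.0, 4.0])      # spiked concentration (units)
resp = np.array([0.50, 0.75, 1.00, 1.50])   # instrument response

slope, intercept = np.polyfit(added, resp, 1)
endogenous = intercept / slope              # = |x-intercept| of the fit
```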
Analytical chemistry in water quality monitoring during manned space missions
NASA Astrophysics Data System (ADS)
Artemyeva, Anastasia A.
2016-09-01
Water quality monitoring during human spaceflights is essential. However, most of the traditional methods require sample collection with subsequent ground analysis because of limitations in volume, power, safety and gravity. As space missions become longer lasting, methods suitable for in-flight monitoring are needed. Since 2009, water quality has been monitored in-flight with colorimetric methods allowing for detection of iodine and ionic silver. Organic compounds in water have been monitored with a second-generation total organic carbon analyzer, which has provided information on the amount of carbon in water at both the U.S. and Russian segments of the International Space Station since 2008. The disadvantage of this approach is the lack of compound-specific information. Recently developed methods and tools may allow more detailed water quality information to be obtained in flight. In particular, microanalyzers based on potentiometric measurements have been designed for online detection of chloride, potassium, nitrate ions and ammonia. Applying the current, highly developed air quality monitoring system to water analysis was a logical step because most of the target analytes are the same in air and water. An electro-thermal vaporizer was designed, manufactured and coupled with the air quality control system. This development allows the analytes to be liberated from the aqueous matrix and then analyzed compound-specifically in the gas phase.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean; Burtner, Edwin R.; Cook, Kristin A.
This course will introduce the field of Visual Analytics to HCI researchers and practitioners, highlighting the contributions they can make to this field. Topics will include a definition of visual analytics along with examples of current systems, types of tasks and end users, issues in defining user requirements, design of visualizations and interactions, guidelines and heuristics, the current state of user-centered evaluations, and metrics for evaluation. We encourage designers, HCI researchers, and HCI practitioners to attend to learn how their skills can contribute to advancing the state of the art of visual analytics.
Schindler, Birgit K; Koslitz, Stephan; Meier, Swetlana; Belov, Vladimir N; Koch, Holger M; Weiss, Tobias; Brüning, Thomas; Käfferlein, Heiko U
2012-04-17
N-Methyl- and N-ethyl-2-pyrrolidone (NMP and NEP) are frequently used industrial solvents and were shown to be embryotoxic in animal experiments. We developed a sensitive, specific, and robust analytical method based on cooled-injection (CIS) gas chromatography and isotope dilution mass spectrometry to analyze 5-hydroxy-N-ethyl-2-pyrrolidone (5-HNEP) and 2-hydroxy-N-ethylsuccinimide (2-HESI), two newly identified presumed metabolites of NEP, and their corresponding methyl counterparts (5-HNMP, 2-HMSI) in human urine. The urine was spiked with deuterium-labeled analogues of these metabolites. The analytes were separated from urinary matrix by solid-phase extraction and silylated prior to quantification. Validation of this method was carried out using both spiked pooled urine samples and urine samples from 56 individuals of the general population with no known occupational exposure to NMP and NEP. Interday and intraday imprecision was better than 8% for all metabolites, while the limits of detection were between 5 and 20 μg/L depending on the analyte. The high sensitivity of the method enables us to quantify NMP and NEP metabolites at current environmental exposures by human biomonitoring.
Janiszewski, J; Schneider, P; Hoffmaster, K; Swyden, M; Wells, D; Fouda, H
1997-01-01
The development and application of membrane solid phase extraction (SPE) in 96-well microtiter plate format is described for the automated analysis of drugs in biological fluids. The small bed volume of the membrane allows elution of the analyte in a very small solvent volume, permitting direct HPLC injection and negating the need for the time-consuming solvent evaporation step. A programmable liquid handling station (Quadra 96) was modified to automate all SPE steps. To avoid drying of the SPE bed and to enhance the analytical precision, a novel protocol for performing the condition, load and wash steps in rapid succession was utilized. A block of 96 samples can now be extracted in 10 min, about 30 times faster than manual solvent extraction or single cartridge SPE methods. This processing speed complements the high-throughput speed of contemporary high performance liquid chromatography mass spectrometry (HPLC/MS) analysis. The quantitative analysis of a test analyte (Ziprasidone) in plasma demonstrates the utility and throughput of membrane SPE in combination with HPLC/MS. The results obtained with the current automated procedure compare favorably with those obtained using solvent and traditional solid phase extraction methods. The method has been used for the analysis of numerous drug prototypes in biological fluids to support drug discovery efforts.
Kaufmann, Anton; Widmer, Mirjam; Maden, Kathryn; Butcher, Patrick; Walker, Stephan
2018-03-05
A reversed-phase ion-pairing chromatographic method was developed for the detection and quantification of inorganic and organic anionic food additives. A single-stage high-resolution mass spectrometer (orbitrap ion trap, Orbitrap) was used to detect the accurate masses of the unfragmented analyte ions. The developed ion-pairing chromatography method was based on a dibutylamine/hexafluoro-2-propanol buffer. Dibutylamine can be charged to serve as a chromatographic ion-pairing agent. This ensures sufficient retention of inorganic and organic anions. Yet, unlike quaternary amines, it can be de-charged in the electrospray to prevent the formation of neutral analyte ion-pairing agent adducts. This process is significantly facilitated by the added hexafluoro-2-propanol. This approach permits the sensitive detection and quantification of additives like nitrate and mono-, di-, and triphosphate as well as citric acid, a number of artificial sweeteners like cyclamate and aspartame, flavor enhancers like glutamate, and preservatives like sorbic acid. This is a major advantage, since the currently used analytical methods as utilized in food safety laboratories are only capable of monitoring a few compounds or a particular category of food additives. Graphical abstract: Deprotonation of the ion-pairing agent in the electrospray interface.
ERIC Educational Resources Information Center
Alowaydhi, Wafa Hafez
2016-01-01
The current study aimed at standardizing the program of learning Arabic for non-native speakers in Saudi Electronic University according to certain standards of total quality. To achieve its purpose, the study adopted the descriptive analytical method. The author prepared a measurement tool for evaluating the electronic learning programs in light…
ERIC Educational Resources Information Center
Homem, Vera; Alves, Arminda; Santos, Lúcia
2014-01-01
A laboratory application with a strong component in analytical chemistry was designed for undergraduate students, in order to introduce a current problem in the environmental science field, the water contamination by antibiotics. Therefore, a simple and rapid method based on direct injection and high performance liquid chromatography-tandem mass…
Acoustic fatigue life prediction for nonlinear structures with multiple resonant modes
NASA Technical Reports Server (NTRS)
Miles, R. N.
1992-01-01
This report documents an effort to develop practical and accurate methods for estimating the fatigue lives of complex aerospace structures subjected to intense random excitations. The emphasis of the current program is to construct analytical schemes for performing fatigue life estimates for structures that exhibit nonlinear vibration behavior and that have numerous resonant modes contributing to the response.
ERIC Educational Resources Information Center
Alqahtani, Abdulmuhsen Ayedh; Almutairi, Yousef B.
2013-01-01
The purpose of the current study is to examine, in retrospect, trainees' perceptions of the reasons some of their peers dropped out of the vocational education at the Industrial Institute-Shuwaikh (IIS), Kuwait. Using the descriptive-analytical method, a reliable questionnaire was developed to achieve this purpose. Results show that: (a) the…
Shared decision-making – transferring research into practice: the Analytic Hierarchy Process (AHP)
Dolan, James G.
2008-01-01
Objective To illustrate how the Analytic Hierarchy Process (AHP) can be used to promote shared decision-making and enhance clinician-patient communication. Methods Tutorial review. Results The AHP promotes shared decision making by creating a framework that is used to define the decision, summarize the information available, prioritize information needs, elicit preferences and values, and foster meaningful communication among decision stakeholders. Conclusions The AHP and related multi-criteria methods have the potential for improving the quality of clinical decisions and overcoming current barriers to implementing shared decision making in busy clinical settings. Further research is needed to determine the best way to implement these tools and to determine their effectiveness. Practice Implications Many clinical decisions involve preference-based trade-offs between competing risks and benefits. The AHP is a well-developed method that provides a practical approach for improving patient-provider communication, clinical decision-making, and the quality of patient care in these situations. PMID:18760559
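A minimal illustrative sketch (not from the paper) of the core AHP computation the abstract describes: priority weights are derived from a pairwise-comparison matrix, here via the common geometric-mean approximation. The criteria and judgment values below are invented for illustration.

```python
# Hypothetical AHP priority calculation; criteria and judgments are invented.
import numpy as np

def ahp_priorities(pairwise):
    """Derive priority weights from a pairwise-comparison matrix
    using the geometric-mean (row) approximation."""
    A = np.asarray(pairwise, dtype=float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[1])  # geometric mean of each row
    return gm / gm.sum()                        # normalise so weights sum to 1

# Example: a patient compares three criteria on Saaty's 1-9 scale.
# A[i, j] = how much more important criterion i is than criterion j.
A = [[1,   3,   5],    # efficacy vs. side effects vs. cost
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_priorities(A)  # efficacy gets the largest weight
```

In a full AHP, the same procedure is repeated for the alternatives under each criterion, and a consistency ratio is checked before the weights are trusted.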
Probabilistic dual heuristic programming-based adaptive critic
NASA Astrophysics Data System (ADS)
Herzallah, Randa
2010-02-01
Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct from current approaches, the proposed probabilistic (DHP) AC method takes uncertainties of forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.
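As a point of reference for the linear quadratic benchmark mentioned above (not the paper's probabilistic derivation): for a scalar deterministic LQ problem, the Bellman equation reduces to the discrete algebraic Riccati equation, which can be solved by fixed-point iteration. The system and cost parameters below are invented.

```python
# Illustrative scalar LQ sketch: solve the discrete algebraic Riccati
# equation by iterating the Bellman backup until it converges.
def solve_dare(a, b, q, r, iters=200):
    """Iterate P <- q + a*P*a - (a*P*b)^2 / (r + b*P*b) (scalar case)."""
    p = q
    for _ in range(iters):
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    return p

a, b, q, r = 0.9, 1.0, 1.0, 1.0   # dynamics x' = a*x + b*u, cost q*x^2 + r*u^2
p = solve_dare(a, b, q, r)
k = a * p * b / (r + b * p * b)   # optimal feedback gain, u = -k*x
# Residual of the Riccati equation: ~0 means the Bellman equation is satisfied.
residual = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b) - p
```

The fixed point `p` defines the quadratic cost-to-go `J(x) = p*x**2`, which is the "analytically derived correct value" a critic network is trained to match in such benchmarks.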
Evaluation of analytical procedures for prediction of turbulent boundary layers on a porous wall
NASA Technical Reports Server (NTRS)
Towne, C. E.
1974-01-01
An analytical study has been made to determine how well current boundary layer prediction techniques work when there is mass transfer normal to the wall. The data that were considered in this investigation were for two-dimensional, incompressible, turbulent boundary layers with suction and blowing. Some of the bleed data were taken in an adverse pressure gradient. An integral prediction method was used with three different porous wall skin friction relations, in addition to a solid-surface relation for the suction cases. A numerical prediction method was also used. Comparisons were made between theoretical and experimental skin friction coefficients, displacement and momentum thicknesses, and velocity profiles. The integral method with one of the porous wall skin friction laws gave very good agreement with data for most of the cases considered. The use of the solid-surface skin friction law caused the integral method to overpredict the effectiveness of the bleed. The numerical techniques also worked well for most of the cases.
NASA Astrophysics Data System (ADS)
Claycomb, James Ronald
1998-10-01
Several High-Tc Superconducting (HTS) eddy current probes have been developed for applications in electromagnetic nondestructive evaluation (NDE) of conducting materials. The probes utilize high-Tc Superconducting Quantum Interference Device (SQUID) magnetometers to detect the fields produced by the perturbation of induced eddy currents resulting from subsurface flaws. Localized HTS shields are incorporated to selectively screen out environmental electromagnetic interference and enable movement of the instrument in the Earth's magnetic field. High permeability magnetic shields are employed to focus flux into, and thereby increase the eddy current density in, the metallic test samples. NDE test results are presented, in which machined flaws in aluminum alloy are detected by probes of different design. A novel current injection technique for performing NDE of wires using SQUIDs is also discussed. The HTS and high permeability shields are designed based on analytical and numerical finite element method (FEM) calculations presented here. Superconducting and high permeability magnetic shields are modeled in uniform noise fields and in the presence of dipole fields characteristic of flaw signals. Several shield designs are characterized in terms of (1) their ability to screen out uniform background noise fields; (2) the resultant improvement in signal-to-noise ratio and (3) the extent to which dipole source fields are distorted. An analysis of eddy current induction is then presented for low frequency SQUID NDE. Analytical expressions are developed for the induced eddy currents and resulting magnetic fields produced by excitation sources above conducting plates of varying thickness. The expressions derived here are used to model the SQUID's response to material thinning. An analytical defect model is also developed, taking into account the attenuation of the defect field through the conducting material, as well as the current flow around the edges of the flaw.
Time harmonic FEM calculations are then used to model the electromagnetic response of eight probe designs, consisting of an eddy current drive coil coupled to a SQUID surrounded by superconducting and/or high permeability magnetic shielding. Simulations are carried out with the eddy current probes located a finite distance above a conducting surface. Results are quantified in terms of shielding and focus factors for each probe design.
Technology advancement for integrative stem cell analyses.
Jeong, Yoon; Choi, Jonghoon; Lee, Kwan Hyi
2014-12-01
Scientists have endeavored to use stem cells for a variety of applications ranging from basic science research to translational medicine. Population-based characterization of such stem cells, while providing an important foundation to further development, often disregards the heterogeneity inherent among individual constituents within a given population. The population-based analysis and characterization of stem cells and the problems associated with such a blanket approach only underscore the need for the development of new analytical technology. In this article, we review current stem cell analytical technologies, along with the advantages and disadvantages of each, followed by applications of these technologies in the field of stem cells. Furthermore, while recent advances in micro/nano technology have led to a growth in the stem cell analytical field, underlying architectural concepts allow only for a vertical analytical approach, in which different desirable parameters are obtained from multiple individual experiments, and there are many technical challenges that limit vertically integrated analytical tools. Therefore, we propose, by introducing the concept of vertical and horizontal approaches, that adequate methods are needed to integrate information such that multiple descriptive parameters from a stem cell can be obtained from a single experiment.
Streamflow variability and optimal capacity of run-of-river hydropower plants
NASA Astrophysics Data System (ADS)
Basso, S.; Botter, G.
2012-10-01
The identification of the capacity of a run-of-river plant which allows for the optimal utilization of the available water resources is a challenging task, mainly because of the inherent temporal variability of river flows. This paper proposes an analytical framework to describe the energy production and the economic profitability of small run-of-river power plants on the basis of the underlying streamflow regime. We provide analytical expressions for the capacity which maximizes the produced energy as a function of the underlying flow duration curve and minimum environmental flow requirements downstream of the plant intake. Similar analytical expressions are derived for the capacity which maximizes the economic return deriving from construction and operation of a new plant. The analytical approach is applied to a minihydro plant recently proposed in a small Alpine catchment in northeastern Italy, demonstrating the potential of the method as a flexible and simple design tool for practical application. The analytical model provides useful insight on the major hydrologic and economic controls (e.g., streamflow variability, energy price, costs) on the optimal plant capacity and helps in identifying policy strategies to reduce the current gap between the economic and energy optimizations of run-of-river plants.
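A minimal numerical sketch of the idea behind this abstract: given a flow duration curve, produced energy is capped by plant capacity, and an optimal capacity can be found by scanning candidates. The exponential flow distribution, environmental release, and minimum-operational-flow fraction below are illustrative assumptions, not the paper's analytical model.

```python
# Toy capacity optimisation for a run-of-river plant (synthetic flows).
import numpy as np

def mean_energy(capacity, flows, q_env=0.2, min_frac=0.2):
    """Mean usable flow (a proxy for energy at fixed head): flow left after
    the environmental release, capped at capacity, with the turbine off
    whenever the available flow falls below min_frac * capacity."""
    usable = flows - q_env
    producing = usable >= min_frac * capacity
    return np.where(producing, np.minimum(usable, capacity), 0.0).mean()

rng = np.random.default_rng(0)
flows = rng.exponential(scale=1.0, size=100_000)  # synthetic daily flows

candidates = np.linspace(0.1, 5.0, 200)
energies = [mean_energy(c, flows) for c in candidates]
best_capacity = candidates[int(np.argmax(energies))]
```

The trade-off is the one the paper formalizes analytically: a small plant wastes high flows, while an oversized plant sits idle below its minimum operational flow, so the energy-optimal capacity lies in between.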
Achieving optimal SERS through enhanced experimental design
Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.
2016-01-01
One of the current limitations surrounding surface‐enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal‐based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd. PMID:27587905
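The multivariate optimisation this review advocates can start from a simple full-factorial design of experiments, enumerated here with the standard library. The factor names and levels are illustrative only, not conditions from the review.

```python
# Enumerate a full-factorial SERS experimental design (illustrative factors).
from itertools import product

factors = {
    "colloid": ["Ag", "Au"],
    "aggregating_salt_mM": [5, 10, 20],
    "analyte_pH": [3, 7, 10],
}

# Every combination of factor levels is one SERS experiment to run.
design = [dict(zip(factors, combo)) for combo in product(*factors.values())]
n_runs = len(design)  # 2 * 3 * 3 = 18 runs
```

Unlike one-factor-at-a-time searching, such a design covers the whole experimental landscape and exposes interactions between factors; fractional designs or evolutionary methods reduce the run count when the full factorial is too large.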
Validation of a BOTDR-based system for the detection of smuggling tunnels
NASA Astrophysics Data System (ADS)
Elkayam, Itai; Klar, Assaf; Linker, Raphael; Marshall, Alec M.
2010-04-01
Cross-border smuggling tunnels enable unmonitored movement of people, drugs and weapons and pose a very serious threat to homeland security. Recently, Klar and Linker (2009) [SPIE paper No. 731603] presented an analytical study of the feasibility of a Brillouin Optical Time Domain Reflectometry (BOTDR) based system for the detection of small-sized smuggling tunnels. The current study extends this work by validating the analytical models against real strain measurements in soil obtained from small scale experiments in a geotechnical centrifuge. The soil strains were obtained using an image analysis method that tracked the displacement of discrete patches of soil through a sequence of digital images of the soil around the tunnel during the centrifuge test. The results of the present study are in agreement with those of a previous study which was based on synthetic signals generated using empirical and analytical models from the literature.
Pre-analytical method for NMR-based grape metabolic fingerprinting and chemometrics.
Ali, Kashif; Maltese, Federica; Fortes, Ana Margarida; Pais, Maria Salomé; Verpoorte, Robert; Choi, Young Hae
2011-10-10
Although metabolomics aims at profiling all the metabolites in organisms, data quality is quite dependent on the pre-analytical methods employed. In order to evaluate current methods, different pre-analytical methods were compared and used for the metabolic profiling of grapevine as a model plant. Five grape cultivars from Portugal were analyzed in this study in combination with chemometrics. A common extraction method with deuterated water and methanol was found effective in the case of amino acids, organic acids, and sugars. For secondary metabolites like phenolics, solid phase extraction with C-18 cartridges showed good results. Principal component analysis, in combination with NMR spectroscopy, was applied and showed clear distinction among the cultivars. Primary metabolites such as choline, sucrose, and leucine were found discriminating for 'Alvarinho', while elevated levels of alanine, valine, and acetate were found in 'Arinto' (white varieties). Among the red cultivars, higher signals for citrate and GABA in 'Touriga Nacional', succinate and fumarate in 'Aragonês', and malate, ascorbate, fructose and glucose in 'Trincadeira', were observed. Based on the phenolic profile, 'Arinto' was found to have higher levels of phenolics as compared to 'Alvarinho'. 'Trincadeira' showed the lowest phenolics content, while higher levels of flavonoids and phenylpropanoids were found in 'Aragonês' and 'Touriga Nacional', respectively. It is shown that the metabolite composition of the extract is highly affected by the extraction procedure, and this consideration has to be taken into account for metabolomics studies. Copyright © 2011 Elsevier B.V. All rights reserved.
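A minimal sketch of the chemometric step this abstract uses: principal component analysis of spectral data separating sample groups. The tiny synthetic "binned spectra" below are invented for illustration; real NMR metabolomics data would have hundreds of bins per sample.

```python
# Toy PCA on binned "spectra": two groups differing in one metabolite bin.
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centred rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)  # rows of vt = loadings
    return Xc @ vt[:n_components].T

# Six samples x four bins; the first bin discriminates the two "cultivars".
X = np.array([[9., 1., 2., 1.],
              [8., 1., 2., 1.],
              [9., 2., 2., 1.],
              [1., 1., 2., 1.],
              [2., 1., 2., 1.],
              [1., 2., 2., 1.]])
scores = pca_scores(X)  # PC1 separates samples 0-2 from samples 3-5
```

Inspecting the loadings (`vt`) then identifies which bins, and hence which metabolites, drive the separation between cultivars.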
Campone, Luca; Piccinelli, Anna Lisa; Celano, Rita; Russo, Mariateresa; Valdés, Alberto; Ibáñez, Clara; Rastrelli, Luca
2015-04-01
According to current demands and future perspectives in food safety, this study reports a fast and fully automated analytical method for the simultaneous analysis of the mycotoxins with high toxicity and wide spread, aflatoxins (AFs) and ochratoxin A (OTA), in dried fruits, a high-risk foodstuff. The method is based on pressurized liquid extraction (PLE), with aqueous methanol (30%) at 110 °C, of the slurried dried fruit and online solid-phase extraction (online SPE) cleanup of the PLE extracts with a C18 cartridge. The purified sample was directly analysed by ultra-high-pressure liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) for sensitive and selective determination of AFs and OTA. The proposed analytical procedure was validated for different dried fruits (vine fruit, fig and apricot), providing method detection and quantification limits much lower than the AFs and OTA maximum levels imposed by EU regulation in dried fruit for direct human consumption. Also, recoveries (83-103%) and repeatability (RSD < 8, n = 3) meet the performance criteria required by EU regulation for the determination of the levels of mycotoxins in foodstuffs. The main advantage of the proposed method is full automation of the whole analytical procedure, which reduces the time and cost of the analysis, sample manipulation and solvent consumption, enabling high-throughput analysis and highly accurate and precise results.
Fast solver for large scale eddy current non-destructive evaluation problems
NASA Astrophysics Data System (ADS)
Lei, Naiguang
Eddy current testing plays a very important role in non-destructive evaluations of conducting test samples. Based on Faraday's law, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically conducting test specimen. The eddy currents generate induced magnetic fields that oppose the direction of the inducing magnetic field in accordance with Lenz's law. In the presence of discontinuities in material property or defects in the test specimen, the induced eddy current paths are perturbed and the associated magnetic fields can be detected by coils or magnetic field sensors, such as Hall elements or magneto-resistance sensors. Due to the complexity of the test specimen and the inspection environments, the availability of theoretical simulation models is extremely valuable for studying the basic field/flaw interactions in order to obtain a fuller understanding of non-destructive testing phenomena. Theoretical models of the forward problem are also useful for training and validation of automated defect detection systems. Theoretical models generate defect signatures that are expensive to replicate experimentally. In general, modelling methods can be classified into two categories: analytical and numerical. Although analytical approaches offer closed form solutions, these are generally not obtainable, largely due to the complex sample and defect geometries, especially in three-dimensional space. Numerical modelling has become popular with advances in computer technology and computational methods. However, because of the long computation times involved in large scale problems, accelerations/fast solvers are needed to enhance numerical models. This dissertation describes a numerical simulation model for eddy current problems using finite element analysis. Validation of the accuracy of this model is demonstrated via comparison with experimental measurements of steam generator tube wall defects.
These simulations, which generate two-dimensional raster scan data, typically take one to two days on a dedicated eight-core PC. A novel direct integral solver for eddy current problems and a GPU-based implementation are also investigated in this research to reduce the computational time.
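A back-of-the-envelope calculation related to the eddy current modelling above: the standard penetration (skin) depth sets how deep eddy currents probe a conductor, which is why deep-flaw NDE favours low excitation frequencies. The material values below are textbook numbers for aluminium, used for illustration.

```python
# Skin depth delta = sqrt(2 / (mu * sigma * omega)) for a plane conductor.
import math

MU0 = 4e-7 * math.pi          # vacuum permeability (H/m)

def skin_depth(sigma, freq_hz, mu_r=1.0):
    """Skin depth in metres for conductivity sigma (S/m) at freq_hz."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (mu_r * MU0 * sigma * omega))

sigma_al = 3.5e7                           # aluminium conductivity, S/m
d_100hz = skin_depth(sigma_al, 100.0)      # ~8.5 mm
d_10khz = skin_depth(sigma_al, 10_000.0)   # ~0.85 mm
```

The inverse-square-root frequency dependence means dropping the excitation frequency by a factor of 100 extends the penetration depth tenfold, at the cost of weaker induced signals, hence the appeal of very sensitive SQUID magnetometers at low frequency.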
Masters, Andrea R; McCoy, Michael; Jones, David R; Desta, Zeruesenay
2016-03-15
Bupropion metabolites formed via oxidation and reduction exhibit pharmacological activity, but little is known regarding their stereoselective disposition. A novel stereoselective liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed to separate and quantify enantiomers of bupropion, 4-hydroxybupropion, and erythro- and threo-dihydrobupropion. Liquid-liquid extraction was implemented to extract all analytes from 50 μL human plasma. Acetaminophen (APAP) was used as an internal standard. The analytes were separated on a Lux 3 μ Cellulose-3 250×4.6 mm column by methanol: acetonitrile: ammonium bicarbonate: ammonium hydroxide gradient elution and monitored using an ABSciex 5500 QTRAP triple-quadrupole mass spectrometer equipped with electrospray ionization probe in positive mode. Extraction efficiency for all analytes was ≥70%. The stability at a single non-extracted concentration for over 48 h at ambient temperature resulted in less than 9.8% variability for all analytes. The limit of quantification (LOQ) for enantiomers of bupropion and 4-hydroxybupropion was 0.3 ng/mL, while the LOQ for enantiomers of erythro- and threo-hydrobupropion was 0.15 ng/mL. The intra-day precision and accuracy estimates for enantiomers of bupropion and its metabolites ranged from 3.4% to 15.4% and from 80.6% to 97.8%, respectively, while the inter-day precision and accuracy ranged from 6.1% to 19.9% and from 88.5% to 99.9%, respectively. The current method was successfully implemented to determine the stereoselective pharmacokinetics of bupropion and its metabolites in 3 healthy volunteers administered a single 100 mg oral dose of racemic bupropion. This novel, accurate, and precise HPLC-MS/MS method should enhance further research into bupropion stereoselective metabolism and drug interactions. Copyright © 2016 Elsevier B.V. All rights reserved.
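A sketch of how precision and accuracy figures like those quoted above are typically computed from replicate quality-control (QC) measurements. The sample values and nominal concentration below are invented for illustration.

```python
# Precision (%RSD) and accuracy (%) from replicate QC measurements.
import statistics

def precision_rsd(values):
    """Relative standard deviation (%CV) of replicate measurements."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def accuracy_pct(values, nominal):
    """Mean measured concentration as a percentage of the nominal value."""
    return 100.0 * statistics.mean(values) / nominal

qc = [9.6, 10.1, 9.8, 10.4, 9.9]   # ng/mL replicates at a 10 ng/mL QC level
rsd = precision_rsd(qc)            # intra-day imprecision, ~3%
acc = accuracy_pct(qc, 10.0)       # accuracy, 99.6%
```

Intra-day figures use replicates from one analytical run; inter-day figures pool QC results across runs on different days, which is why the inter-day ranges in the abstract are wider.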
Determination of vitamin C in foods: current state of method validation.
Spínola, Vítor; Llorent-Martínez, Eulogio J; Castilho, Paula C
2014-11-21
Vitamin C is one of the most important vitamins, so reliable information about its content in foodstuffs is of concern to both consumers and quality control agencies. However, the heterogeneity of food matrixes and the potential degradation of this vitamin during its analysis create enormous challenges. This review addresses the development and validation of high-performance liquid chromatography methods for vitamin C analysis in food commodities, during the period 2000-2014. The main characteristics of vitamin C are mentioned, along with the strategies adopted by most authors during sample preparation (freezing and acidification) to avoid vitamin oxidation. After that, the advantages and drawbacks of different analytical methods are discussed. Finally, the main aspects concerning method validation for vitamin C analysis are critically discussed. Parameters such as selectivity, linearity, limit of quantification, and accuracy were studied by most authors. Recovery experiments during accuracy evaluation were in general satisfactory, with usual values between 81 and 109%. However, few methods considered vitamin C stability during the analytical process, and the study of the precision was not always clear or complete. Potential future improvements regarding proper method validation are indicated to conclude this review. Copyright © 2014. Published by Elsevier B.V.
ICP-MS: Analytical Method for Identification and Detection of Elemental Impurities.
Mittal, Mohini; Kumar, Kapil; Anghore, Durgadas; Rawal, Ravindra K
2017-01-01
The aim of this article is to review and discuss ICP-MS, a quantitative analytical method currently used for quality control of pharmaceutical products. The ICP-MS technique has several applications, such as determination of single elements, multi-element analysis of synthetic drugs, heavy metals in environmental water, and the trace element content of selected fertilizers and dairy manures. ICP-MS is also used to determine toxic and essential elements in different varieties of food samples and metal pollutants present in the environment. Pharmaceuticals may generate impurities at various stages of development, transportation, and storage, which makes them risky to administer. Thus, it is essential that these impurities be detected and quantified. ICP-MS plays an important role in the identification and detection of elemental impurities. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Greenhouse Gas Analysis by GC/MS
NASA Astrophysics Data System (ADS)
Bock, E. M.; Easton, Z. M.; Macek, P.
2015-12-01
Current methods to analyze greenhouse gases rely on dedicated, complex, multiple-column, multiple-detector gas chromatographs. A novel method was developed in partnership with Shimadzu for simultaneous quantification of carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) in environmental gas samples. Gas bulbs were used to make custom standard mixtures by injecting small volumes of pure analyte into a nitrogen-filled bulb. The resulting calibration curves were validated against a certified gas standard. The use of GC/MS systems to perform this analysis has the potential to move greenhouse gas analysis from expensive, custom GC systems to standard single-quadrupole GC/MS systems that are available in most laboratories and have a wide variety of applications beyond greenhouse gas analysis. Additionally, mass spectrometry can confirm the identity of target analytes and assist in the identification of unknown peaks should they be present in the chromatogram.
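A calibration workflow of this kind (custom standard mixtures, a linear curve, then validation against a certified standard) can be sketched as follows; the concentrations and peak areas below are hypothetical, not data from the study:

```python
import numpy as np

# Hypothetical GC/MS calibration points: CO2 standard concentration (ppm)
# prepared in a nitrogen-filled gas bulb vs. integrated detector peak area.
conc = np.array([100.0, 250.0, 500.0, 1000.0, 2000.0])
area = np.array([1.05e4, 2.61e4, 5.20e4, 1.04e5, 2.09e5])

# Least-squares linear calibration: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)

# Coefficient of determination (R^2) as a linearity check.
pred = slope * conc + intercept
r2 = 1.0 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

# Invert the calibration to quantify an unknown sample from its peak area.
unknown_conc = (7.8e4 - intercept) / slope
```

A certified gas standard would then be run as an independent check that its back-calculated concentration falls within an acceptable tolerance.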
Stabilization of glucose-oxidase in the graphene paste for screen-printed glucose biosensor
NASA Astrophysics Data System (ADS)
Pepłowski, Andrzej; Janczak, Daniel; Jakubowska, Małgorzata
2015-09-01
Various methods and materials for enzyme stabilization within a screen-printed graphene sensor were analyzed. The main goal was to develop a technology allowing immediate printing of the biosensors in a single printing process. The factors considered were the toxicity of the materials used, the ability of the material to be screen-printed (squeezed through the printing mesh), and the temperatures required in the fabrication process. The performance of the examined sensors was measured by amperometry, and the measurements were then analyzed and compared with medical requirements. The parameters calculated were the correlation coefficient between the analyte concentration and the measured electrical current (0.986) and the variation coefficient for each analyte concentration used as a calibration point. Variation of the measured values was significant only in ranges close to 0, decreasing for concentrations of clinical importance. These outcomes justify further development of graphene-based biosensors fabricated by printing techniques.
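The two figures of merit reported above, the concentration-current correlation coefficient and the per-point variation coefficient, can be computed as in this sketch; the glucose concentrations and replicate currents are invented for illustration:

```python
import numpy as np

# Hypothetical calibration data: glucose concentration (mM) with three
# replicate amperometric current readings (uA) per calibration point.
conc = np.array([2.0, 4.0, 8.0, 12.0, 16.0])
replicates = np.array([
    [0.42, 0.45, 0.40],
    [0.83, 0.80, 0.86],
    [1.61, 1.65, 1.58],
    [2.40, 2.44, 2.37],
    [3.22, 3.18, 3.25],
])

mean_current = replicates.mean(axis=1)

# Correlation coefficient between concentration and measured current.
r = np.corrcoef(conc, mean_current)[0, 1]

# Variation coefficient (%) at each calibration point: sd / mean * 100.
cv = replicates.std(axis=1, ddof=1) / mean_current * 100.0
```

Note that in this invented data, as in the abstract, relative variation is largest near the low end of the range and shrinks at clinically relevant concentrations.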
Visualization of Microfloral Metabolism for Marine Waste Recycling
Ogura, Tatsuki; Hoshino, Reona; Date, Yasuhiro; Kikuchi, Jun
2016-01-01
Marine biomass, including fishery products, is a precious protein resource for human food and an alternative to livestock animals that can help reduce the virtual water problem. However, a large amount of marine waste is generated from fishery products, and it is not currently recycled. We evaluated the metabolism of digested marine waste using integrated analytical methods, under anaerobic conditions and after fertilization of abandoned agricultural soils. The dynamics of fish waste digestion revealed that samples of meat and bony parts behaved similarly under anaerobic conditions in spite of large chemical variations in the input marine wastes. Abandoned agricultural soils fertilized with fish waste accumulated some amino acids derived from the waste, and accumulation of l-arginine and l-glutamine was higher in plant seedlings. We therefore propose an analytical method to visualize metabolic dynamics in fishery waste recycling processes. PMID:26828528
Advances in Molecular Rotational Spectroscopy for Applied Science
NASA Astrophysics Data System (ADS)
Harris, Brent; Fields, Shelby S.; Pulliam, Robin; Muckle, Matt; Neill, Justin L.
2017-06-01
Advances in chemical sensitivity and robust, solid-state designs for microwave/millimeter-wave instrumentation compel the expansion of molecular rotational spectroscopy as a research tool in applied science. Molecular rotational spectroscopy is familiar in air analysis; those techniques are included in our presentation of a broader application space for materials analysis using Fourier transform molecular rotational resonance (FT-MRR) spectrometers. There are potentially transformative advantages for direct gas analysis of complex mixtures, determination of unknown evolved gases in solid materials with parts-per-trillion detection limits, and unambiguous chiral determination. The introduction of FT-MRR as an alternative detection principle for analytical chemistry has created a ripe research space for the development of new analytical methods and sampling equipment to fully enable FT-MRR. We present the current state of purpose-built FT-MRR instrumentation and the latest application measurements that make use of new sampling methods.
Analytical modeling and tolerance analysis of a linear variable filter for spectral order sorting.
Ko, Cheng-Hao; Chang, Kuei-Ying; Huang, You-Min
2015-02-23
This paper proposes an innovative method to overcome the low production rate of current linear variable filter (LVF) fabrication. During the fabrication process, a commercial coater is combined with a local mask on a substrate. The proposed analytical thin film thickness model, which is based on the geometry of the commercial coater, is developed to more effectively calculate the profiles of LVFs. Thickness tolerance, LVF zone width, thin film layer structure, transmission spectrum and the effects of variations in critical parameters of the coater are analyzed. Profile measurements demonstrate the efficacy of local mask theory in the prediction of evaporation profiles with a high degree of accuracy.
Uniform GTD solution for the diffraction by metallic tapes on panelled compact-range reflectors
NASA Technical Reports Server (NTRS)
Somers, G. A.; Pathak, P. H.
1992-01-01
Metallic tape is commonly used to cover the interpanel gaps which occur in paneled compact-range reflectors. It is therefore of interest to study the effect of the scattering by the tape on the field in the target zone of the range. An analytical solution is presented for the target zone fields scattered by 2D metallic tapes. It is formulated by the generalized scattering matrix technique in conjunction with the Wiener-Hopf procedure. An extension to treat 3D tapes can be accomplished using the 2D solution via the equivalent current concept. The analytical solution is compared with a reference moment method solution to confirm the accuracy of the former.
Current Trends in Nanomaterial-Based Amperometric Biosensors
Hayat, Akhtar; Catanante, Gaëlle; Marty, Jean Louis
2014-01-01
The last decade has witnessed an intensive research effort in the field of electrochemical sensors, with a particular focus on the design of amperometric biosensors for diverse analytical applications. In this context, nanomaterial integration in the construction of amperometric biosensors may constitute one of the most exciting approaches. The attractive properties of nanomaterials have paved the way for the design of a wide variety of biosensors based on various electrochemical detection methods to enhance the analytical characteristics. However, most of these nanostructured materials have not yet been explored in the design of amperometric biosensors. This review aims to provide insight into the diverse properties of nanomaterials that could be explored in the construction of amperometric biosensors. PMID:25494347
Chylewska, Agnieszka; Ogryzek, M; Makowski, Mariusz
2017-10-23
New analytical and molecular methods for microorganisms are being developed based on various identification features, i.e. selectivity, specificity, sensitivity, rapidity, and discrimination of viable cells. This review follows current trends in improved pathogen separation and detection methods and their subsequent use in medical diagnosis. The contribution also focuses on the development of analytical and biological methods for the analysis of microorganisms, with special attention paid to bio-samples containing microbes (blood, urine, lymph, wastewater). First, the paper discusses the characterization of microbes, their structure, surface, properties, and size; it then describes pivotal points in the bacteria, virus, and fungus separation procedures obtained by researchers over the last 30 years. Accordingly, detection techniques can be classified into three categories that were, in our opinion, examined and modified most intensively during this period: electrophoretic, nucleic-acid-based, and immunological methods. The review also covers the progress, limitations, and challenges of these approaches and emphasizes the advantages of new separative techniques in the selective fractionation of microorganisms. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Hill, Ryan C; Oman, Trent J; Shan, Guomin; Schafer, Barry; Eble, Julie; Chen, Cynthia
2015-08-26
Currently, traditional immunochemistry technologies such as enzyme-linked immunosorbent assays (ELISA) are the predominant analytical tool used to measure levels of recombinant proteins expressed in genetically engineered (GE) plants. Recent advances in agricultural biotechnology have created a need to develop methods capable of selectively detecting and quantifying multiple proteins in complex matrices because of increasing numbers of transgenic proteins being coexpressed or "stacked" to achieve tolerance to multiple herbicides or to provide multiple modes of action for insect control. A multiplexing analytical method utilizing liquid chromatography with tandem mass spectrometry (LC-MS/MS) has been developed and validated to quantify three herbicide-tolerant proteins in soybean tissues: aryloxyalkanoate dioxygenase (AAD-12), 5-enol-pyruvylshikimate-3-phosphate synthase (2mEPSPS), and phosphinothricin acetyltransferase (PAT). Results from the validation showed high recovery and precision over multiple analysts and laboratories. Results from this method were comparable to those obtained with ELISA with respect to protein quantitation, and the described method was demonstrated to be suitable for multiplex quantitation of transgenic proteins in GE crops.
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2011 CFR
2011-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2011-04-01 2011-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2014 CFR
2014-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2014-04-01 2014-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2012 CFR
2012-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2012-04-01 2012-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2013 CFR
2013-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2013-04-01 2013-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
First Order Reliability Application and Verification Methods for Semistatic Structures
NASA Technical Reports Server (NTRS)
Verderaime, Vincent
1994-01-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises the performance of high-strength materials. A reliability method is proposed which combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application reduces to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the pace of semistatic structural designs.
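As a minimal illustration of the first-order reliability principle behind the proposed method, the classical safety index for independent, normally distributed strength and load can be computed as below (hypothetical values; the paper's factor additionally compensates for design uncertainty errors):

```python
import math

# Hypothetical normal strength and load statistics (e.g., in ksi).
mu_strength, sd_strength = 60.0, 3.0
mu_load, sd_load = 40.0, 4.0

# Classical safety (reliability) index for the margin M = strength - load:
# beta = (mu_S - mu_L) / sqrt(sd_S^2 + sd_L^2).
beta = (mu_strength - mu_load) / math.sqrt(sd_strength**2 + sd_load**2)

# Reliability = P(strength > load), from the standard normal CDF.
reliability = 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))
```

In the paper's approach, one would instead fix the required reliability (hence beta) and solve for a design factor that achieves it while absorbing the uncertainty errors.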
An interlaboratory transfer of a multi-analyte assay between continents.
Georgiou, Alexandra; Dong, Kelly; Hughes, Stephen; Barfield, Matthew
2015-01-01
Alex has worked at GlaxoSmithKline for the past 15 years and currently works within the bioanalytical and toxicokinetic group in the United Kingdom. In previous years, Alex's role has been the in-house support of preclinical and clinical bioanalysis, from method development through to sample analysis, as well as acting as PI for GLP bioanalysis and toxicokinetics. For the past two years, Alex has applied this analytical and regulatory experience to the outsourcing of preclinical bioanalysis, toxicokinetics, and clinical bioanalysis, working closely with multiple bioanalytical and in-life CRO partners worldwide. Alex supports DMPK and Safety Assessment outsourcing activities for GSK across multiple therapeutic areas, from the first GLP study through to late-stage clinical PK studies. Transfer and cross-validation of an existing analytical assay between a laboratory providing current analytical support and a laboratory needed for new or additional support can present the bioanalyst with numerous challenges. These challenges can be technical or logistical in nature and may prove significant when transferring an assay between laboratories on different continents. Part of GlaxoSmithKline's strategy to improve confidence in providing quality data is to cross-validate between laboratories. If the cross-validation fails predefined acceptance criteria, a subsequent investigation follows, which may also prove challenging. The importance of thorough planning and good communication throughout assay transfer, cross-validation, and any subsequent investigations is illustrated in this case study.
Andra, Syam S; Austin, Christine; Yang, Juan; Patel, Dhavalkumar; Arora, Manish
2016-12-01
Human exposure to bisphenol A (BPA) has attracted considerable global health attention; BPA is one of the leading environmental contaminants with potential adverse health effects, including endocrine disruption. Current practice for measuring exposure to BPA includes measuring unconjugated (aglycone) BPA and total (both conjugated and unconjugated) BPA; the difference between the two measurements gives an estimate of the conjugated forms. However, measuring BPA as the end analyte leads to inaccurate estimates because of potential interference from background sources during sample collection and analysis. BPA glucuronides (BPAG) and sulfates (BPAS) are better candidates for biomarkers of BPA exposure, since they require in vivo metabolism and are not prone to external contamination. In this work, the primary focus was to review the current state of the art in analytical methods available to quantify BPA conjugates. The entire analytical procedure for the simultaneous extraction and detection of aglycone BPA and its conjugates is covered, from sample pre-treatment through extraction, separation, ionization, and detection. Solid-phase extraction coupled with liquid chromatography-tandem mass spectrometry provides the most sensitive detection and quantification of BPA conjugates. Also discussed are applications of BPA conjugate analysis in human exposure assessment studies. Measuring these potential biomarkers of BPA exposure has only recently become analytically feasible, and there are limitations and challenges to overcome in biomonitoring studies. Copyright © 2016 Elsevier B.V. All rights reserved.
Applications of Raman Spectroscopy in Biopharmaceutical Manufacturing: A Short Review.
Buckley, Kevin; Ryder, Alan G
2017-06-01
The production of active pharmaceutical ingredients (APIs) is currently undergoing its biggest transformation in a century. The changes are based on the rapid and dramatic introduction of protein- and macromolecule-based drugs (collectively known as biopharmaceuticals) and can be traced back to the huge investment in biomedical science (in particular in genomics and proteomics) that has been ongoing since the 1970s. Biopharmaceuticals (or biologics) are manufactured using biological-expression systems (such as mammalian, bacterial, insect cells, etc.) and have spawned a large (>€35 billion sales annually in Europe) and growing biopharmaceutical industry (BioPharma). The structural and chemical complexity of biologics, combined with the intricacy of cell-based manufacturing, imposes a huge analytical burden to correctly characterize and quantify both processes (upstream) and products (downstream). In small molecule manufacturing, advances in analytical and computational methods have been extensively exploited to generate process analytical technologies (PAT) that are now used for routine process control, leading to more efficient processes and safer medicines. In the analytical domain, biologic manufacturing is considerably behind and there is both a huge scope and need to produce relevant PAT tools with which to better control processes, and better characterize product macromolecules. Raman spectroscopy, a vibrational spectroscopy with a number of useful properties (nondestructive, non-contact, robustness) has significant potential advantages in BioPharma. Key among them are intrinsically high molecular specificity, the ability to measure in water, the requirement for minimal (or no) sample pre-treatment, the flexibility of sampling configurations, and suitability for automation. Here, we review and discuss a representative selection of the more important Raman applications in BioPharma (with particular emphasis on mammalian cell culture). 
The review shows that the properties of Raman have been successfully exploited to deliver unique and useful analytical solutions, particularly for online process monitoring. However, it also shows that its inherent susceptibility to fluorescence interference and the weakness of the Raman effect mean that it can never be a panacea. In particular, Raman-based methods are intrinsically limited by the chemical complexity and wide analyte-concentration-profiles of cell culture media/bioprocessing broths which limit their use for quantitative analysis. Nevertheless, with appropriate foreknowledge of these limitations and good experimental design, robust analytical methods can be produced. In addition, new technological developments such as time-resolved detectors, advanced lasers, and plasmonics offer potential of new Raman-based methods to resolve existing limitations and/or provide new analytical insights.
3-MCPD in food other than soy sauce or hydrolysed vegetable protein (HVP).
Baer, Ines; de la Calle, Beatriz; Taylor, Philip
2010-01-01
This review gives an overview of current knowledge about 3-monochloropropane-1,2-diol (3-MCPD) formation and detection. Although 3-MCPD is often mentioned with regard to soy sauce and acid-hydrolysed vegetable protein (HVP), and much research has been done in that area, the emphasis here is placed on other foods. This contaminant can be found in a great variety of foodstuffs and is difficult to avoid in our daily nutrition. Despite its low concentration in most foods, its carcinogenic properties are of general concern. Its formation is a multivariate problem influenced by factors such as heat, moisture and sugar/lipid content, depending on the type of food and respective processing employed. Understanding the formation of this contaminant in food is fundamental to not only preventing or reducing it, but also developing efficient analytical methods of detecting it. Considering the differences between 3-MCPD-containing foods, and the need to test for the contaminant at different levels of food processing, one would expect a variety of analytical approaches. In this review, an attempt is made to provide an up-to-date list of available analytical methods and to highlight the differences among these techniques. Finally, the emergence of 3-MCPD esters and analytical techniques for them are also discussed here, although they are not the main focus of this review.
Idder, Salima; Ley, Laurent; Mazellier, Patrick; Budzinski, Hélène
2013-12-17
One of the current environmental issues concerns the presence and fate of pharmaceuticals in water bodies, as these compounds may represent a potential environmental problem. The characterization of pharmaceutical contamination requires powerful analytical methods able to quantify these pollutants at very low concentrations (a few ng/L). In this work, a multi-residue analytical methodology (on-line solid-phase extraction-liquid chromatography-triple quadrupole mass spectrometry using positive and negative electrospray ionization) has been developed and validated for 40 multi-class pharmaceuticals and metabolites in tap and surface waters. This on-line SPE method was very convenient and efficient compared with the classical off-line SPE method because of its shorter total run time, including sample preparation, and its smaller sample volume (1 mL vs. up to 1 L). The optimized method covers several therapeutic classes with various physicochemical properties, such as lipid regulators, antibiotics, beta-blockers, non-steroidal anti-inflammatories, and antineoplastics. Quantification was achieved with internal standards. The limits of detection are between 0.7 and 15 ng/L for drinking waters and 2-15 ng/L for surface waters. The inter-day precision values are below 20% for each studied level. The robustness of the analytical method was verified during a monitoring campaign for these 40 pharmaceuticals in the Isle River, a stream located in southwestern France. During this survey, 16 pharmaceutical compounds were detected. Copyright © 2013 Elsevier B.V. All rights reserved.
Laboratories measuring target chemical, radiochemical, pathogens, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation
Al-Kinani, Ali Athab; Naughton, Declan P; Calabrese, Gianpiero; Vangala, Anil; Smith, James R; Pierscionek, Barbara K; Alany, Raid G
2015-03-01
Oxidative damage due to low levels of glutathione (GSH) is one of the main causes of cataract formation. It has been reported that 2-oxothiazolidine-4-carboxylic acid (OTZ), a cysteine prodrug, can increase the cellular level of GSH. Currently, there is no analytical method to separate and quantify OTZ from aqueous humour samples for cataract research. The present study aims to develop and validate a hydrophilic interaction liquid chromatography (HILIC) method for the quantification of OTZ in simulated aqueous humour (SAH). The developed method was validated according to FDA guidelines. Accuracy, precision, selectivity, sensitivity, linearity, lower limit of quantification (LLOQ), lower limit of detection (LLOD), and stability were the parameters assessed in the method validation. The method was found to be accurate and precise, with an LLOQ and LLOD of 200 and 100 ng/mL, respectively; method selectivity was confirmed by the absence of any matrix interference with the analyte peak. The calibration curve was linear over the range 0.2-10 μg/mL, with a regression coefficient of 0.999. In addition, OTZ was found to be stable in SAH after three freeze/thaw cycles. Chitosan nanoparticles loaded with OTZ were formulated by the ionic gelation method. The nanoparticles were uniform in shape and well dispersed, with an average size of 153 nm. The in vitro release of OTZ from the nanoparticles was quantified using the developed analytical method over 96 h. Permeation of OTZ through excised bovine cornea was measured using HILIC. The lag time and the flux were 0.2 h and 3.05 μg/(cm² h), respectively.
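Steady-state flux and lag time in a permeation experiment like this one are typically obtained from the linear portion of the cumulative-permeation curve: the slope is the flux and the x-intercept is the lag time. The time points below are hypothetical, chosen only so the fit reproduces the reported flux (3.05 μg/cm² h) and lag time (0.2 h):

```python
import numpy as np

# Hypothetical cumulative OTZ permeated per unit area (ug/cm^2) vs. time (h),
# sampled on the steady-state (linear) portion of the permeation profile.
t = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
q = np.array([2.44, 5.49, 11.59, 17.69, 23.79])

# Linear fit: slope = flux (ug/cm^2/h); x-intercept = lag time (h).
flux, intercept = np.polyfit(t, q, 1)
lag_time = -intercept / flux
```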
Covaci, Adrian; Voorspoels, Stefan; Abdallah, Mohamed Abou-Elwafa; Geens, Tinne; Harrad, Stuart; Law, Robin J
2009-01-16
The present article reviews the available literature on the analytical and environmental aspects of tetrabromobisphenol-A (TBBP-A), a currently intensively used brominated flame retardant (BFR). Analytical methods, including sample preparation, chromatographic separation, detection techniques, and quality control, are discussed. An important recent development in the analysis of TBBP-A is the growing tendency toward liquid chromatographic techniques. At the detection stage, mass spectrometry is a well-established and reliable technology for the identification and quantification of TBBP-A. Although interlaboratory exercises for BFRs have grown in popularity in the last 10 years, only a few participating laboratories report concentrations for TBBP-A. Environmental levels of TBBP-A in abiotic and biotic matrices are low, probably due to the major use of TBBP-A as a reactive FR. As a consequence, the expected human exposure is low. This is in agreement with the EU risk assessment, which concluded that there is no risk for humans concerning TBBP-A exposure. Much less analytical and environmental information exists for the various groups of TBBP-A derivatives, which are largely used as additive flame retardants.
NASA Astrophysics Data System (ADS)
Ranamukhaarachchi, Sahan A.; Padeste, Celestino; Häfeli, Urs O.; Stoeber, Boris; Cadarso, Victor J.
2018-02-01
A hollow metallic microneedle is integrated with microfluidics and photonic components to form a microneedle-optofluidic biosensor suitable for therapeutic drug monitoring (TDM) in biological fluids, like interstitial fluid, that can be collected in a painless and minimally-invasive manner. The microneedle inner lumen surface is bio-functionalized to trap and bind target analytes on-site in a sample volume as small as 0.6 nl, and houses an enzyme-linked assay on its 0.06 mm2 wall. The optofluidic components are designed to rapidly quantify target analytes present in the sample and collected in the microneedle using a simple and sensitive absorbance scheme. This contribution describes how the biosensor components were optimized to detect in vitro streptavidin-horseradish peroxidase (Sav-HRP) as a model analyte over a large detection range (0-7.21 µM) and a very low limit of detection (60.2 nM). This biosensor utilizes the lowest analyte volume reported for TDM with microneedle technology, and presents significant avenues to improve current TDM methods for patients, by potentially eliminating blood draws for several drug candidates.
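An absorbance read-out like the one used by the optofluidic components ultimately maps a measured absorbance to a concentration. A minimal Beer-Lambert sketch is shown below; the molar absorptivity and path length are assumed values for illustration, not the device's actual calibration:

```python
# Beer-Lambert law: A = epsilon * l * c, so c = A / (epsilon * l).
def concentration(absorbance, molar_absorptivity, path_length_cm):
    """Analyte concentration in mol/L from a measured absorbance."""
    return absorbance / (molar_absorptivity * path_length_cm)

# Assumed values: epsilon ~ 3.9e4 L/(mol cm), 1 cm optical path.
c = concentration(0.25, molar_absorptivity=3.9e4, path_length_cm=1.0)
```

In practice, an enzyme-linked assay such as the one described is calibrated empirically against known analyte concentrations rather than from epsilon alone.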
Koerbin, Gus; Liu, Jiakai; Eigenstetter, Alex; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping
2018-02-15
A product recall was issued for the Roche/Hitachi Cobas Gentamicin II assays on 25th May 2016 in Australia, after a 15-20% positive analytical shift was discovered. Laboratories were advised to employ the Thermo Fisher Gentamicin assay as an alternative. Following the reintroduction of the revised assay on 12th September 2016, a second reagent recall was made on 20th March 2017 after the discovery of a 20% negative analytical shift due to an erroneous instrument adjustment factor. The practices of an index laboratory were examined to determine how the analytical shifts evaded detection by routine internal quality control (IQC) and external quality assurance (EQA) systems. The ability of patient-result-based approaches, including the moving average (MovAvg) and moving sum of outliers (MovSO), to detect these shifts was also examined. Internal quality control data of the index laboratory were acceptable prior to the product recall. The practice of adjusting the IQC target following a change in assay method meant that the negative shift was missed when the revised Roche assay was reintroduced. While the EQA data of the Roche subgroup showed a clear negative bias relative to other laboratory methods, the results were attributed to a possible 'matrix effect'. The MovAvg method detected the positive shift before the product recall. The MovSO did not detect the negative shift in the index laboratory but did so in another laboratory 5 days before the second product recall. There are gaps in current laboratory quality practices that leave room for analytical errors to evade detection.
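A moving-average scheme of the kind mentioned can be sketched as below; the result distributions, window size, and control limit are invented for illustration (real MovAvg implementations often truncate outliers or use exponential weighting):

```python
import numpy as np

def moving_average(results, window):
    # Simple moving average over consecutive patient results.
    return np.convolve(results, np.ones(window) / window, mode="valid")

rng = np.random.default_rng(0)
baseline = rng.normal(5.0, 0.5, 200)         # in-control results (e.g., mg/L)
shifted = rng.normal(5.0, 0.5, 200) * 1.15   # a 15% positive analytical shift

ma = moving_average(np.concatenate([baseline, shifted]), window=50)

# Flag when the moving average exceeds a limit set from the baseline period.
limit = baseline.mean() + 3.0 * baseline.std() / np.sqrt(50)
flagged = np.where(ma > limit)[0]
```

Because the averaging window mixes baseline and shifted results at the transition, detection lags the onset of the shift by up to one window length, which is part of the trade-off in choosing the window size.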
Laboratories measuring target pathogen analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select pathogens.
Acoustic Radiation From Rotating Blades: The Kirchhoff Method in Aeroacoustics
NASA Technical Reports Server (NTRS)
Farassat, F.
2000-01-01
This paper reviews the current status of discrete frequency noise prediction for rotating blade machinery in the time domain. There are two major approaches, both of which can be classified as Kirchhoff methods. These methods depend on the solution of two linear wave equations, known as the K and FW-H equations. The solutions of these equations for subsonic and supersonic surfaces are discussed, and some important results of research over the past years are presented. This paper is analytical in nature and emphasizes the work of the author and coworkers at NASA Langley Research Center.
On Statistical Approaches for Demonstrating Analytical Similarity in the Presence of Correlation.
Yang, Harry; Novick, Steven; Burdick, Richard K
Analytical similarity is the foundation for demonstration of biosimilarity between a proposed product and a reference product. For this assessment, the U.S. Food and Drug Administration (FDA) currently recommends a tiered system in which quality attributes are categorized into three tiers commensurate with their risk, and approaches of varying statistical rigor are subsequently used for the three tiers of quality attributes. Key to the analyses of Tier 1 and Tier 2 quality attributes is the establishment of an equivalence acceptance criterion and a quality range. For particular licensure applications, the FDA has provided advice on statistical methods for demonstration of analytical similarity. For example, for Tier 1 assessment, an equivalence test can be used based on an equivalence margin of 1.5σ_R, where σ_R is the reference product variability, estimated by the sample standard deviation S_R from a sample of reference lots. The quality range for demonstrating Tier 2 analytical similarity is of the form X̄_R ± K × σ_R, where the constant K is appropriately justified. To demonstrate Tier 2 analytical similarity, a large percentage (e.g., 90%) of test product lots must fall within the quality range. In this paper, through both theoretical derivations and simulations, we show that when the reference drug product lots are correlated, the sample standard deviation S_R underestimates the true reference product variability σ_R. As a result, substituting S_R for σ_R in the Tier 1 equivalence acceptance criterion and the Tier 2 quality range inappropriately reduces the statistical power and the ability to declare analytical similarity. Also explored is the impact of correlation among drug product lots on the Type I error rate and power. Three methods based on generalized pivotal quantities are introduced, and their performance is compared against a two one-sided tests (TOST) approach. Finally, strategies to mitigate the risk of correlation among the reference product lots are discussed.
A biosimilar is a generic version of the original biological drug product. A key component of a biosimilar development is the demonstration of analytical similarity between the biosimilar and the reference product. Such demonstration relies on application of statistical methods to establish a similarity margin and appropriate test for equivalence between the two products. This paper discusses statistical issues with demonstration of analytical similarity and provides alternate approaches to potentially mitigate these problems. © PDA, Inc. 2016.
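The Tier 1 equivalence test and Tier 2 quality range described above can be sketched numerically. The following Python sketch is illustrative only: the pooled-variance TOST form and the default multipliers (k = 1.5 for the margin, K = 3 for the quality range) are conventional choices, not necessarily the exact procedures the paper evaluates.

```python
import numpy as np
from scipy import stats

def tost_tier1(test_lots, ref_lots, k=1.5, alpha=0.05):
    """Two one-sided t-tests (TOST) for Tier 1 analytical similarity.
    The margin is delta = k * S_R, with S_R the sample SD of the reference
    lots; as the abstract notes, S_R underestimates sigma_R when the
    reference lots are correlated, which shrinks this margin."""
    t_arr = np.asarray(test_lots, float)
    r_arr = np.asarray(ref_lots, float)
    n_t, n_r = len(t_arr), len(r_arr)
    delta = k * r_arr.std(ddof=1)
    diff = t_arr.mean() - r_arr.mean()
    # pooled-variance standard error of the mean difference
    sp2 = ((n_t - 1) * t_arr.var(ddof=1) + (n_r - 1) * r_arr.var(ddof=1)) / (n_t + n_r - 2)
    se = np.sqrt(sp2 * (1.0 / n_t + 1.0 / n_r))
    df = n_t + n_r - 2
    p_lower = stats.t.sf((diff + delta) / se, df)   # H0: diff <= -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)  # H0: diff >= +delta
    p = max(p_lower, p_upper)
    return p, p < alpha  # similar if both one-sided tests reject

def tier2_in_range(test_lots, ref_lots, k=3.0):
    """Fraction of test lots inside the Tier 2 quality range X_bar_R +/- k*S_R."""
    r_arr = np.asarray(ref_lots, float)
    lo = r_arr.mean() - k * r_arr.std(ddof=1)
    hi = r_arr.mean() + k * r_arr.std(ddof=1)
    t_arr = np.asarray(test_lots, float)
    return float(np.mean((t_arr >= lo) & (t_arr <= hi)))
```

Substituting a correlation-corrected estimate of σ_R for `S_R` in either function is the kind of adjustment the paper's generalized-pivotal-quantity methods aim to provide.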
NASA Technical Reports Server (NTRS)
Dobrinskaya, Tatiana
2015-01-01
This paper suggests a new method for optimizing yaw maneuvers on the International Space Station (ISS). Yaw rotations are the most common large maneuvers on the ISS, often used for docking and undocking operations, as well as for other activities. With maneuver optimization, large maneuvers that previously required thrusters can be performed either using control moment gyroscopes (CMGs) or with significantly reduced thruster firings. Maneuver optimization helps to save expensive propellant and reduce structural loads - an important factor for the ISS service life. In addition, optimized maneuvers reduce contamination of critical elements of the vehicle structure, such as the solar arrays. This paper presents an analytical solution for optimizing yaw attitude maneuvers. Equations describing the pitch and roll motion needed to counteract the major torques during a yaw maneuver are obtained, and a yaw rate profile is proposed. The paper also describes the physical basis of the suggested optimization approach. In the optimized case obtained, the torques are significantly reduced. This torque reduction was compared to the existing optimization method, which relies on a computational solution. It was shown that the attitude profiles and the torque reduction match well for the two methods of optimization, and simulations using the ISS flight software showed similar propellant consumption for both. The analytical solution proposed in this paper has major benefits with respect to the computational approach. In contrast to the current computational solution, which can only be calculated on the ground, the analytical solution does not require extensive computational resources and can be implemented in the onboard software, thus making the maneuver execution automatic. An automatic maneuver significantly simplifies operations and, if necessary, allows a maneuver to be performed without communication with the ground.
It also reduces the probability of command errors. The suggested analytical solution thus provides a method of maneuver optimization that is simpler, automatic, and more universal. The maneuver optimization approach presented in this paper can be used not only for the ISS but also for other orbiting space vehicles.
Fluorescence metrology used for analytics of high-quality optical materials
NASA Astrophysics Data System (ADS)
Engel, Axel; Haspel, Rainer; Rupertus, Volker
2004-09-01
Optical glasses, glass ceramics and crystals are used for various specialized applications in telecommunication, biomedical, optical, and microlithography technology. In order to qualify and control the material quality during the research and production processes, several specialized ultra-trace analysis methods are applied at Schott Glas. One focus of the activities is the determination of impurities in the sub-ppb regime, because such an impurity level is required, for example, for the pure materials used in microlithography. Such impurities are determined using analytical methods like LA-ICP-MS or neutron activation analysis. On the other hand, direct and non-destructive optical analysis is attractive because it additionally reflects the requirements of the optical applications. Typical examples are absorption and laser resistivity measurements of optical material with precision spectral photometers, or in-situ transmission measurements using lamps or UV lasers. The chemical methods have the drawback of being time consuming and rather expensive, whereas the sensitivity of the absorption method will not be sufficient to characterize future needs (absorption coefficients well below 10-3 cm-1). For non-destructive qualification against current and future quality requirements, a Jobin Yvon FLUOROLOG 3.22 fluorescence spectrometer is employed to enable fast and precise qualification and analysis. The main advantage of this setup is the combination of highest sensitivity (more than one order of magnitude higher than state-of-the-art UV absorption spectroscopy) with fast measurement and evaluation cycles (several minutes compared to the several hours necessary for chemical analysis). An overview is given of the spectral characteristics measured with specified standards, which are necessary to establish the analytical system.
The elementary fluorescence and absorption of rare earth element impurities, as well as luminescence induced by crystal defects and impurities, were investigated. Quantitative numbers are given for the relative quantum yield as well as for the excitation cross section for doped glass and calcium fluoride.
Alvarez-Fernández, Ana; Orera, Irene; Abadía, Javier; Abadía, Anunciación
2007-01-01
A high-performance liquid chromatography-electrospray ionization/mass spectrometry (time of flight) method has been developed for the simultaneous determination of synthetic Fe(III)-chelates used as fertilizers. Analytes included the seven major Fe(III)-chelates used in agriculture, Fe(III)-EDTA, Fe(III)-DTPA, Fe(III)-HEDTA, Fe(III)-CDTA, Fe(III)-o,oEDDHA, Fe(III)-o,pEDDHA, and Fe(III)-EDDHMA, and the method was validated using isotope-labeled (57)Fe(III)-chelates as internal standards. Calibration curves had R values in the range 0.9962-0.9997. Limits of detection and quantification were in the ranges 3-164 and 14-945 pmol, respectively. Analyte concentrations could be determined between the limits of quantification and 25 μM (racemic and meso Fe(III)-o,oEDDHA and Fe(III)-EDDHMA) or 50 μM (Fe(III)-EDTA, Fe(III)-HEDTA, Fe(III)-DTPA, Fe(III)-CDTA and Fe(III)-o,pEDDHA). The average intraday repeatability values were approximately 0.5 and 5% for retention time and peak area, respectively, whereas the interday repeatability values were approximately 0.7 and 8% for retention time and peak area, respectively. The method was validated using four different agricultural matrices, including nutrient solution, irrigation water, soil solution, and plant xylem exudates, spiked with Fe(III)-chelate standards and their stable isotope-labeled corresponding chelates. Analyte recoveries found were in the ranges 92-101% (nutrient solution), 89-102% (irrigation water), 82-100% (soil solution), and 70-111% (plant xylem exudates). Recoveries depended on the analyte, with Fe(III)-EDTA and Fe(III)-DTPA showing the lowest recoveries (average values of 87 and 88%, respectively, for all agricultural matrices used), whereas for other analytes recoveries were between 91 and 101%. The method was also used to determine the real concentrations of Fe(III)-chelates in commercial fertilizers.
Furthermore, the method is also capable of resolving two more synthetic Fe(III)-chelates, Fe(III)-EDDHSA and Fe(III)-EDDCHA, whose exact quantification is not currently possible because of lack of commercial standards.
Filamentation instability of a fast electron beam in a dielectric target.
Debayle, A; Tikhonchuk, V T
2008-12-01
High-intensity laser-matter interaction is an efficient method for high-current relativistic electron beam production. At current densities exceeding several kA μm⁻², the beam propagation is maintained by an almost complete current neutralization by the target electrons. In such a geometry of two oppositely directed flows, beam instabilities can develop, depending on the target and beam parameters. The present paper proposes an analytical description of the filamentation instability of an electron beam propagating through an insulator target. It is shown that the collisionless and resistive instabilities enter into competition with the ionization instability. The latter process is dominant in insulator targets, where field ionization by the fast beam provides free electrons for the neutralization current.
Evaluation of ion collection area in Faraday probes.
Brown, Daniel L; Gallimore, Alec D
2010-06-01
A Faraday probe with three concentric rings was designed and fabricated to assess the effect of gap width and collector diameter in a systematic study of the diagnostic ion collection area. The nested Faraday probe consisted of two concentric collector rings and an outer guard ring, which enabled simultaneous current density measurements on the inner and outer collectors. Two versions of the outer collector were fabricated to create gaps of 0.5 and 1.5 mm between the rings. Distribution of current density in the plume of a low-power Hall thruster ion source was measured in azimuthal sweeps at constant radius from 8 to 20 thruster diameters downstream of the exit plane with variation in facility background pressure. A new analytical technique is proposed to account for ions collected in the gap between the Faraday probe collector and guard ring. This method is shown to exhibit excellent agreement between all nested Faraday probe configurations, and to reduce the magnitude of integrated ion beam current to levels consistent with Hall thruster performance analyses. The technique is further studied by varying the guard ring bias potential with a fixed collector bias potential, thereby controlling ion collection in the gap. Results are in agreement with predictions based on the proposed analytical technique. The method is applied to a past study comparing the measured ion current density profiles of two Faraday probe designs. These findings provide new insight into the nature of ion collection in Faraday probe diagnostics, and lead to improved accuracy with a significant reduction in measurement uncertainty.
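The two quantities at the heart of the study above, the effective ion collection area and the integrated beam current from an azimuthal sweep at constant radius, can be sketched as follows. The gap-apportioning factor `kappa` is a hypothetical parameter for illustration (the paper's analytical correction may apportion the gap differently); the hemispherical integration of the measured current density is standard.

```python
import numpy as np

def effective_area(d_collector, gap, kappa=0.5):
    """Effective ion collection area of a guarded Faraday probe: collector
    area plus an assumed fraction kappa of the annular gap area between the
    collector and the guard ring. kappa is illustrative, not the paper's value."""
    r_c = d_collector / 2.0
    a_collector = np.pi * r_c**2
    a_gap = np.pi * ((r_c + gap)**2 - r_c**2)
    return a_collector + kappa * a_gap

def _trapz(y, x):
    """Trapezoidal rule (kept explicit for self-containment)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def beam_current(theta, j, R):
    """Integrate current density j(theta) [A/m^2], measured in an azimuthal
    sweep at constant radius R [m], over the hemisphere to get the total
    beam current [A]: I = 2*pi*R^2 * integral of j(theta)*sin(theta) dtheta."""
    return 2.0 * np.pi * R**2 * _trapz(np.asarray(j) * np.sin(theta), theta)
```

With `kappa = 1` the effective area reduces to the full disc of radius (collector radius + gap), and with `kappa = 0` to the bare collector, bracketing the correction the paper derives.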
Adulterants in Urine Drug Testing.
Fu, S
Urine drug testing plays an important role in monitoring licit and illicit drug use for both medico-legal and clinical purposes. One of the major challenges of urine drug testing is adulteration, a practice involving manipulation of a urine specimen with chemical adulterants to produce a false negative test result. This problem is compounded by the number of easily obtained chemicals that can effectively adulterate a urine specimen. Common adulterants include some household chemicals such as hypochlorite bleach, laundry detergent, table salt, and toilet bowl cleaner and many commercial products such as UrinAid (glutaraldehyde), Stealth® (containing peroxidase and peroxide), Urine Luck (pyridinium chlorochromate, PCC), and Klear® (potassium nitrite) available through the Internet. These adulterants can invalidate a screening test result, a confirmatory test result, or both. To counteract urine adulteration, drug testing laboratories have developed a number of analytical methods to detect adulterants in a urine specimen. While these methods are useful in detecting urine adulteration when such activities are suspected, they do not reveal what types of drugs are being concealed. This is particularly the case when oxidizing urine adulterants are involved as these oxidants are capable of destroying drugs and their metabolites in urine, rendering the drug analytes undetectable by any testing technology. One promising approach to address this current limitation has been the use of unique oxidation products formed from reaction of drug analytes with oxidizing adulterants as markers for monitoring drug misuse and urine adulteration. This novel approach will ultimately improve the effectiveness of the current urine drug testing programs. © 2016 Elsevier Inc. All rights reserved.
2017-01-01
This work focuses on the design of transmitting coils in weakly coupled magnetic induction communication systems. We propose several optimization methods that reduce the active, reactive and apparent power consumption of the coil. These problems are formulated as minimization problems, in which the power consumed by the transmitting coil is minimized under the constraint of providing a required magnetic field at the receiver location. We develop efficient numeric and analytic methods to solve the resulting problems, which are of high dimension and, in certain cases, non-convex. For the objective of minimal reactive power, an analytic solution for the optimal current distribution in flat disc transmitting coils is provided. This problem is extended to general three-dimensional coils, for which we develop an expression for the optimal current distribution. Considering the objective of minimal apparent power, a method is developed to reduce the computational complexity of the problem by transforming it to an equivalent problem of lower dimension, allowing a quick and accurate numeric solution. These results are verified experimentally by testing a number of coil geometries. The results obtained allow reduced power consumption and improved performance in magnetic induction communication systems. Specifically, for wideband systems, an optimal design of the transmitter coil reduces the peak instantaneous power provided by the transmitter circuitry, and thus reduces its size, complexity and cost. PMID:28192463
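The constrained-minimization structure described above can be illustrated with the simplest member of this problem family: the minimum-norm current vector satisfying linear field constraints. This is only a stand-in sketch; minimizing ||x||² corresponds to ohmic loss for equal-resistance turns, whereas the paper's objectives weight active, reactive, and apparent power differently.

```python
import numpy as np

def min_norm_current(A, b):
    """Minimum-norm current vector x satisfying the field constraints A @ x = b,
    where row i of A maps turn currents to the field at constraint point i.
    Closed form for an underdetermined system: x = A^T (A A^T)^{-1} b."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    return A.T @ np.linalg.solve(A @ A.T, b)
```

For two identical turns contributing equally to one field constraint, the optimum splits the current evenly between them, which is the intuition behind distributing current to lower power at fixed field.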
Wang, Xiang-Hua; Yin, Wen-Yan; Chen, Zhi Zhang David
2013-09-09
The one-step leapfrog alternating-direction-implicit finite-difference time-domain (ADI-FDTD) method is reformulated for simulating general electrically dispersive media. It models material dispersive properties with equivalent polarization currents. These currents are solved with the auxiliary differential equation (ADE) method and then incorporated into the one-step leapfrog ADI-FDTD scheme. The final equations are presented in a form similar to that of the conventional FDTD method but with a second-order perturbation. The adapted method is then applied to characterize (a) electromagnetic wave propagation in a rectangular waveguide loaded with a magnetized plasma slab, (b) the transmission coefficient of a plane wave normally incident on a monolayer graphene sheet biased by a magnetostatic field, and (c) surface plasmon polariton (SPP) propagation along a monolayer graphene sheet biased by an electrostatic field. The numerical results verify the stability, accuracy and computational efficiency of the proposed one-step leapfrog ADI-FDTD algorithm in comparison with analytical results and results obtained with other methods.
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...
Spectral methods in edge-diffraction theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold, J.M.
Spectral methods for the construction of uniform asymptotic representations of the field diffracted by an aperture in a plane screen are reviewed. These are separated into contrasting approaches, roughly described as physical and geometrical. It is concluded that the geometrical methods provide a direct route to the construction of uniform representations that are formally identical to the equivalent-edge-current concept. Some interpretive and analytical difficulties that complicate the physical methods of obtaining uniform representations are analyzed. Spectral synthesis proceeds directly from the ray geometry and diffraction coefficients, without any intervening current representation, and the representation is uniform at shadow boundaries and caustics of the diffracted field. The physical theory of diffraction postulates currents on the diffracting screen that give rise to the diffracted field. The difficulties encountered in evaluating the current integrals are thoroughly examined, and it is concluded that the additional data provided by the physical theory of diffraction (diffraction coefficients off the Keller diffraction cone) are not actually required for obtaining uniform asymptotics at the leading order. A new diffraction representation that generalizes to arbitrary plane-convex apertures a formula given by Knott and Senior [Proc. IEEE 62, 1468 (1974)] for circular apertures is deduced. 34 refs., 1 fig.
Laboratories measuring target biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select biotoxins.
Laboratories measuring target chemical, radiochemical, pathogen, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery.
Behavior analytic approaches to problem behavior in intellectual disabilities.
Hagopian, Louis P; Gregory, Meagan K
2016-03-01
The purpose of the current review is to summarize recent behavior analytic research on problem behavior in individuals with intellectual disabilities. We have focused our review on studies published from 2013 to 2015, but also included earlier studies that were relevant. Behavior analytic research on problem behavior continues to focus on the use and refinement of functional behavioral assessment procedures and function-based interventions. During the review period, a number of studies reported on procedures aimed at making functional analysis procedures more time efficient. Behavioral interventions continue to evolve, and there were several larger scale clinical studies reporting on multiple individuals. There was increased attention on the part of behavioral researchers to develop statistical methods for analysis of within subject data and continued efforts to aggregate findings across studies through evaluative reviews and meta-analyses. Findings support continued utility of functional analysis for guiding individualized interventions and for classifying problem behavior. Modifications designed to make functional analysis more efficient relative to the standard method of functional analysis were reported; however, these require further validation. Larger scale studies on behavioral assessment and treatment procedures provided additional empirical support for effectiveness of these approaches and their sustainability outside controlled clinical settings.
Evolution of accelerometer methods for physical activity research.
Troiano, Richard P; McClain, James J; Brychta, Robert J; Chen, Kong Y
2014-07-01
The technology and application of current accelerometer-based devices in physical activity (PA) research allow the capture and storage or transmission of large volumes of raw acceleration signal data. These rich data not only provide opportunities to improve PA characterisation, but also bring logistical and analytic challenges. We discuss how researchers and developers from multiple disciplines are responding to the analytic challenges and how advances in data storage, transmission and big data computing will minimise logistical challenges. These new approaches also bring the need for several paradigm shifts for PA researchers, including a shift from count-based approaches and regression calibrations for PA energy expenditure (PAEE) estimation to activity characterisation and EE estimation based on features extracted from raw acceleration signals. Furthermore, a collaborative approach towards analytic methods is proposed to facilitate PA research, which requires a shift away from multiple independent calibration studies. Finally, we make the case for a distinction between PA represented by accelerometer-based devices and PA assessed by self-report. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Analytical model for describing ion guiding through capillaries in insulating polymers
NASA Astrophysics Data System (ADS)
Liu, Shi-Dong; Zhao, Yong-Tao; Wang, Yu-Yu; Stolterfoht, N.; Cheng, Rui; Zhou, Xian-Ming; Xu, Hu-Shan; Xiao, Guo-Qing
2015-08-01
An analytical description for guiding of ions through nanocapillaries is given on the basis of previous work. The current entering into the capillary is assumed to be divided into a current fraction transmitted through the capillary, a current fraction flowing away via the capillary conductivity and a current fraction remaining within the capillary, which is responsible for its charge-up. The discharging current is assumed to be governed by the Frenkel-Poole process. At higher conductivities the analytical model shows a blocking of the ion transmission, which is in agreement with recent simulations. Also, it is shown that ion blocking observed in experiments is well reproduced by the analytical formula. Furthermore, the asymptotic fraction of transmitted ions is determined. Apart from the key controlling parameter (charge-to-energy ratio), the ratio of the capillary conductivity to the incident current is included in the model. Differences resulting from the nonlinear and linear limits of the Frenkel-Poole discharge are pointed out. Project supported by the Major State Basic Research Development Program of China (Grant No. 2010CB832902) and the National Natural Science Foundation of China (Grant Nos. 11275241, 11275238, 11105192, and 11375034).
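The current-balance picture in the abstract above (injected current split into transmitted, conducted, and charge-accumulating fractions, with discharge governed by the Frenkel-Poole process) can be caricatured with a toy charge-up model. The functional form below is an assumption for illustration, not the paper's equations: the leakage conductance is taken to grow as exp(β√q) with the deposited charge, mimicking Frenkel-Poole field-enhanced conduction.

```python
import numpy as np

def charge_evolution(i_in, g0, beta, q0=0.0, dt=1e-3, steps=5000):
    """Toy capillary charge-up model (assumed form, not the paper's):
    dq/dt = i_in - g0 * q * exp(beta * sqrt(q)),
    i.e. deposited charge q grows with the injected current and leaks away
    through a Frenkel-Poole-like conductivity. Forward-Euler integration."""
    q = q0
    history = []
    for _ in range(steps):
        i_leak = g0 * q * np.exp(beta * np.sqrt(max(q, 0.0)))
        q += dt * (i_in - i_leak)
        history.append(q)
    return np.array(history)
```

The qualitative behavior matches the abstract's narrative: the charge rises monotonically until the discharge current balances the injected current, and a larger conductivity-to-current ratio (g0/i_in) lowers the steady-state charge, which is the regime where the model predicts blocking of ion transmission.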
ERIC Educational Resources Information Center
Lopez Flores, Emily
2014-01-01
Research has been conducted to identify and analyze how schools are determining that the activities of their Professional Learning Community (PLC) are directly tied to student achievement as there is currently a gap in the existing literature with regards to this topic. For the purpose of this study, a "successful" PLC was defined as one…
ERIC Educational Resources Information Center
Khasawneh, Kholoud Ahmed Saleem
2014-01-01
This study aimed to identify the attitudes of public secondary school students in the State of Hail towards modern educational concepts, and the differences between them. The descriptive analytical method was used in the study. The study was conducted on a sample of 400 male and female students, chosen randomly according to the…
Phonon scattering in nanoscale systems: lowest order expansion of the current and power expressions
NASA Astrophysics Data System (ADS)
Paulsson, Magnus; Frederiksen, Thomas; Brandbyge, Mads
2006-04-01
We use the non-equilibrium Green's function method to describe the effects of phonon scattering on the conductance of nano-scale devices. Useful and accurate approximations are developed that both provide (i) computationally simple formulas for large systems and (ii) simple analytical models. In addition, the simple models can be used to fit experimental data and provide physical parameters.
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...
NASA Technical Reports Server (NTRS)
DeChant, Lawrence Justin
1998-01-01
In spite of rapid advances in both scalar and parallel computational tools, the large number of variables involved in both design and inverse problems makes the use of sophisticated fluid flow models impractical. With this restriction, it is concluded that an important family of methods for mathematical/computational development is reduced or approximate fluid flow models. In this study a combined perturbation/numerical modeling methodology is developed which provides a rigorously derived family of solutions. The mathematical model is computationally more efficient than classical boundary layer methods but provides important two-dimensional information not available using quasi-1-d approaches. An additional strength of the current methodology is its ability to locally predict static pressure fields in a manner analogous to more sophisticated parabolized Navier-Stokes (PNS) formulations. To resolve singular behavior, the model utilizes classical analytical solution techniques. Hence, analytical methods have been combined with efficient numerical methods to yield an efficient hybrid fluid flow model. In particular, the main objective of this research has been to develop a system of analytical and numerical ejector/mixer nozzle models which require minimal empirical input. A computer code, DREA (Differential Reduced Ejector/mixer Analysis), has been developed with the ability to run sufficiently fast that it may be used either as a subroutine or called by a design optimization routine. The models are of direct use to the High Speed Civil Transport Program (a joint government/industry project seeking to develop an economically viable U.S. commercial supersonic transport vehicle) and are currently being adopted by both NASA and industry. Experimental validation of these models is provided by comparison to results obtained from the open literature and Limited Exclusive Right Distribution (LERD) sources, as well as dedicated experiments performed at Texas A&M.
These experiments have been performed using a hydraulic/gas flow analog. Results of comparisons of DREA computations with experimental data, which include entrainment, thrust, and local profile information, are overall good. Computational time studies indicate that DREA provides considerably more information at a lower computational cost than contemporary ejector nozzle design models. Finally, physical limitations of the method, deviations from experimental data, potential improvements, and alternative formulations are described. This report represents closure to the NASA Graduate Researchers Program. Versions of the DREA code and a user's guide may be obtained from the NASA Lewis Research Center.
NASA Astrophysics Data System (ADS)
Asadpour-Zeynali, Karim; Bastami, Mohammad
2010-02-01
In this work a new modification of the standard addition method, called the "net analyte signal standard addition method (NASSAM)", is presented for simultaneous spectrofluorimetric and spectrophotometric analysis. The proposed method combines the advantages of the standard addition method with those of the net analyte signal concept. The method can be applied for the determination of an analyte in the presence of known interferents. Unlike the H-point standard addition method, the accuracy of the predictions does not depend on the shape of the analyte and interferent spectra. The method was successfully applied to the simultaneous spectrofluorimetric and spectrophotometric determination of pyridoxine (PY) and melatonin (MT) in synthetic mixtures and in a pharmaceutical formulation.
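The classical standard-addition step underlying the NASSAM method can be sketched in a few lines: known amounts of analyte are spiked into the sample, the response is fit linearly, and the original concentration is read off the x-intercept. The net-analyte-signal projection that removes known interferents is omitted here; this sketch covers only the plain standard-addition part.

```python
import numpy as np

def standard_addition(added_conc, signal):
    """Classical standard addition: fit signal = m * (c0 + c_added) + noise
    and recover the sample concentration c0 = intercept / slope
    (the magnitude of the x-intercept of the fitted line)."""
    m, b = np.polyfit(np.asarray(added_conc, float),
                      np.asarray(signal, float), 1)
    return b / m
```

In NASSAM, the same regression would be applied to the net analyte signal rather than the raw response, which is what makes the calibration insensitive to the known interferent spectra.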
Transverse oscillations and stability of prominences in a magnetic field dip
NASA Astrophysics Data System (ADS)
Kolotkov, D. Y.; Nisticò, G.; Nakariakov, V. M.
2016-05-01
Aims: We developed an analytical model of the global transverse oscillations and mechanical stability of a quiescent prominence in the magnetised environment with a magnetic field dip that accounts for the mirror current effect. Methods: The model is based on the interaction of line currents through the Lorentz force. Within this concept the prominence is treated as a straight current-carrying wire, and the magnetic dip is provided by two photospheric current sources. Results: Properties of both vertical and horizontal oscillations are determined by the value of the prominence current, its density and height above the photosphere, and the parameters of the magnetic dip. The prominence can be stable in both horizontal and vertical directions simultaneously when the prominence current dominates in the system and its height is less than the half-distance between the photospheric sources.
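Since the model above treats the prominence as a straight current-carrying wire interacting with two photospheric line currents, its building block is the standard magnetostatic force between parallel line currents. The sketch below gives just that pairwise force; assembling the full dip potential and oscillation frequencies from it follows the paper's line-current construction and is not reproduced here.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def force_per_length(i1, i2, d):
    """Magnitude of the force per unit length between two parallel line
    currents i1, i2 [A] separated by distance d [m] (attractive for
    co-directed currents): F/L = mu0 * i1 * i2 / (2 * pi * d)."""
    return MU0 * i1 * i2 / (2.0 * np.pi * d)
```

Summing this interaction over the prominence current and the two photospheric sources (plus gravity) gives the equilibrium and restoring forces whose stability conditions the abstract states.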
NASA Occupant Protection Standards Development
NASA Technical Reports Server (NTRS)
Somers, Jeffrey; Gernhardt, Michael; Lawrence, Charles
2012-01-01
Historically, spacecraft landing systems have been tested with human volunteers because analytical methods for estimating injury risk were insufficient. These tests were conducted with flight-like suits and seats to verify the safety of the landing systems. Currently, NASA uses the Brinkley Dynamic Response Index to estimate injury risk, although applying it to the NASA environment has drawbacks: (1) it does not indicate the severity or anatomical location of injury, and (2) it is unclear whether the model applies to NASA applications. Because of these limitations, a new validated analytical approach was desired. Leveraging the current state of the art in automotive safety and racing, a new approach was developed. The approach has several aspects: (1) define the acceptable level of injury risk by injury severity, (2) determine the appropriate human surrogate for testing and modeling, (3) mine existing human injury data to determine appropriate Injury Assessment Reference Values (IARVs), (4) rigorously validate the IARVs with sub-injurious human testing, and (5) use the validated IARVs to update standards and vehicle requirements.
De Dominicis, Emiliano; Commissati, Italo; Suman, Michele
2012-09-01
In the food industry, it is frequently necessary to check the quality of an ingredient to decide whether to use it in production and/or to have an idea of the final possible contamination of the finished product. The current need to quickly separate and identify relevant contaminants within different classes, often with legal residue limits on the order of 1-100 µg kg(-1), has led to the need for more effective analytical methods. With thousands of organic compounds present in complex food matrices, the development of new analytical solutions has leaned towards simplified extraction/clean-up procedures and chromatography coupled with mass spectrometry. Efforts must also be made regarding the instrumental phase to overcome sensitivity/selectivity limits and interferences. For this purpose, high-resolution full scan analysis in mass spectrometry is an interesting alternative to the traditional tandem mass approach. A fast method for extracting and purifying bakery matrices was therefore developed and combined with the exploitation of ultra-high-pressure liquid chromatography (UHPLC) coupled to an Orbitrap Exactive™ high-resolution mass spectrometer (HRMS). Extracts of blank, naturally contaminated and fortified minicakes, prepared through a combined use of industrial and pilot plant production lines, were analyzed at different concentration levels (1-100 µg kg(-1)) of various contaminants: a limit of detection of 10 µg kg(-1) was possible for most of the analytes within all the categories analyzed, including pesticides, aflatoxins, trichothecene toxins and veterinary drugs. The application of accurate mass targeted screening described in this article demonstrates that current single-stage HRMS analytical instrumentation is well equipped to meet the challenges posed by chemical contaminants in the screening of both bakery raw materials and finished products. Copyright © 2012 John Wiley & Sons, Ltd.
Efficient Radiative Transfer for Dynamically Evolving Stratified Atmospheres
NASA Astrophysics Data System (ADS)
Judge, Philip G.
2017-12-01
We present a fast multi-level and multi-atom non-local thermodynamic equilibrium radiative transfer method for dynamically evolving stratified atmospheres, such as the solar atmosphere. The preconditioning method of Rybicki & Hummer (RH92) is adopted. To meet the need for speed and stability, a “second-order escape probability” scheme is implemented within the framework of the RH92 method, in which frequency- and angle-integrals are carried out analytically. This minimizes the computational work needed, at the expense of some numerical accuracy. The iteration scheme is local; the formal solutions for the intensities are the only non-local component. At present the methods have been coded for vertical transport, applicable to atmospheres that are highly stratified. The probabilistic method seems adequately fast, stable, and sufficiently accurate for exploring dynamical interactions between the evolving MHD atmosphere and radiation using current computer hardware. Current 2D and 3D dynamics codes do not include this interaction as consistently as the current method does. The solutions generated may ultimately serve as initial conditions for dynamical calculations including full 3D radiative transfer. The National Center for Atmospheric Research is sponsored by the National Science Foundation.
Dielectrophoretic label-free immunoassay for rare-analyte quantification in biological samples
NASA Astrophysics Data System (ADS)
Velmanickam, Logeeshan; Laudenbach, Darrin; Nawarathna, Dharmakeerthi
2016-10-01
The current gold standard for detecting or quantifying target analytes from blood samples is the ELISA (enzyme-linked immunosorbent assay). The detection limit of ELISA is about 250 pg/ml. However, quantifying analytes related to various stages of tumors, including early detection, requires detecting well below the current limit of the ELISA test. For example, Interleukin 6 (IL-6) levels of early oral cancer patients are <100 pg/ml, and the prostate-specific antigen level in the early stage of prostate cancer is about 1 ng/ml. Further, it has been reported that there are significantly less than 1 pg/mL of analytes in the early stage of tumors. Therefore, depending on the tumor type and the stage of the tumor, it is necessary to quantify various levels of analytes ranging from ng/ml to pg/ml. To address these critical needs in current diagnostics, there is a need for a technique that has a large dynamic range with an ability to detect extremely low levels of target analytes (
Failure of Standard Training Sets in the Analysis of Fast-Scan Cyclic Voltammetry Data.
Johnson, Justin A; Rodeberg, Nathan T; Wightman, R Mark
2016-03-16
The use of principal component regression, a multivariate calibration method, in the analysis of in vivo fast-scan cyclic voltammetry data allows for separation of overlapping signal contributions, permitting evaluation of the temporal dynamics of multiple neurotransmitters simultaneously. To accomplish this, the technique relies on information about current-concentration relationships across the scan-potential window gained from analysis of training sets. The ability of the constructed models to resolve analytes depends critically on the quality of these data. Recently, the use of standard training sets obtained under conditions other than those of the experimental data collection (e.g., with different electrodes, animals, or equipment) has been reported. This study evaluates the analyte resolution capabilities of models constructed using this approach from both a theoretical and experimental viewpoint. A detailed discussion of the theory of principal component regression is provided to inform this discussion. The findings demonstrate that the use of standard training sets leads to misassignment of the current-concentration relationships across the scan-potential window. This directly results in poor analyte resolution and, consequently, inaccurate quantitation, which may lead to erroneous conclusions being drawn from experimental data. Thus, it is strongly advocated that training sets be obtained under the experimental conditions to allow for accurate data analysis.
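The calibration step at the heart of this analysis can be sketched as a minimal principal component regression in Python. This is a generic illustration with synthetic, noiseless voltammograms, not the authors' code; the array shapes and the two-analyte example are assumptions:

```python
import numpy as np

def pcr_fit(currents, concs, n_components=2):
    """Fit PCR: project centered voltammograms onto the leading principal
    components, then regress centered concentrations on the scores."""
    xmean = currents.mean(axis=0)
    ymean = concs.mean(axis=0)
    X = currents - xmean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:n_components].T               # loadings: (n_potentials, k)
    scores = X @ V                        # (n_samples, k)
    B, *_ = np.linalg.lstsq(scores, concs - ymean, rcond=None)
    return xmean, ymean, V, B

def pcr_predict(currents, xmean, ymean, V, B):
    return (currents - xmean) @ V @ B + ymean

# Synthetic training set: two analytes with distinct current profiles
rng = np.random.default_rng(0)
potentials = np.linspace(0.0, 1.0, 50)
pure = np.vstack([np.exp(-(potentials - 0.3) ** 2 / 0.01),
                  np.exp(-(potentials - 0.7) ** 2 / 0.02)])
concs = rng.uniform(0.1, 1.0, size=(10, 2))
currents = concs @ pure                   # linear mixing, no noise

model = pcr_fit(currents, concs)
pred = pcr_predict(currents, *model)
```

With noiseless, linearly mixed training data, two components recover the concentrations essentially exactly. The failure mode the study documents corresponds to fitting the loadings `V` and coefficients `B` on data recorded under different conditions, so that they no longer match the experimental current-concentration relationships.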
Predicting Student Success using Analytics in Course Learning Management Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olama, Mohammed M; Thakur, Gautam; McNair, Wade
Educational data analytics is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both the student and educational institutions. It can support timely intervention to prevent students from failing a course, increase the efficacy of advising functions, and improve course completion rates. In this paper, we present the efforts carried out at Oak Ridge National Laboratory (ORNL) toward conducting predictive analytics on academic data collected from 2009 through 2013 and available in one of the most commonly used learning management systems, called Moodle. First, we have identified the data features useful for predicting student outcomes, such as students' scores in homework assignments, quizzes, and exams, in addition to their activities in discussion forums and their total GPA in the same term they enrolled in the course. Then, Logistic Regression and Neural Network predictive models are used to identify, as early as possible, students who are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy.
Predicting student success using analytics in course learning management systems
NASA Astrophysics Data System (ADS)
Olama, Mohammed M.; Thakur, Gautam; McNair, Allen W.; Sukumar, Sreenivas R.
2014-05-01
Educational data analytics is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both the student and educational institutions. It can support timely intervention to prevent students from failing a course, increase the efficacy of advising functions, and improve course completion rates. In this paper, we present the efforts carried out at Oak Ridge National Laboratory (ORNL) toward conducting predictive analytics on academic data collected from 2009 through 2013 and available in one of the most commonly used learning management systems, called Moodle. First, we have identified the data features useful for predicting student outcomes, such as students' scores in homework assignments, quizzes, and exams, in addition to their activities in discussion forums and their total GPA in the same term they enrolled in the course. Then, Logistic Regression and Neural Network predictive models are used to identify, as early as possible, students who are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy.
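As a generic illustration of the modeling step described above (not the ORNL implementation; the feature set and data below are synthetic assumptions), a minimal logistic-regression pass/fail predictor might look like:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, epochs=3000):
    """Batch gradient descent on the logistic loss; X gains a bias column."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def failure_risk(X, w):
    """Estimated probability that each student fails the current course."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Hypothetical features: [quiz average, forum activity (scaled), term GPA (scaled)]
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 3))
# Assumed ground truth: low quiz scores and low GPA drive failure
y = ((0.7 * X[:, 0] + 0.3 * X[:, 2]) < 0.35).astype(float)

w = fit_logistic(X, y)
risk = failure_risk(X, w)
```

Thresholding `risk` (e.g., flag students with risk above 0.5) gives the early-warning list; in practice the model would be trained on past terms and applied to the current one.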
Tidally induced residual current over the Malin Sea continental slope
NASA Astrophysics Data System (ADS)
Stashchuk, Nataliya; Vlasenko, Vasiliy; Hosegood, Phil; Nimmo-Smith, W. Alex M.
2017-05-01
Tidally induced residual currents generated over shelf-slope topography are investigated analytically and numerically using the Massachusetts Institute of Technology general circulation model. Observational support for the presence of such a slope current was recorded over the Malin Sea continental slope during the 88th cruise of the RRS James Cook in July 2013. A simple analytical formula developed here in the framework of time-averaged shallow water equations has been validated against a fully nonlinear nonhydrostatic numerical solution. Good agreement between the analytical and numerical solutions is found for a wide range of input parameters of the tidal flow and bottom topography. In application to the Malin Shelf area, both the numerical model and the analytical solution predicted a northward-moving current confined to the slope, with its core located above the 400 m isobath and with vertically averaged maximum velocities up to 8 cm s(-1), which is consistent with the in-situ data recorded at three moorings and along cross-slope transects.
NASA Astrophysics Data System (ADS)
Bescond, Marc; Li, Changsheng; Mera, Hector; Cavassilas, Nicolas; Lannoo, Michel
2013-10-01
We present a one-shot current-conserving approach to model the influence of electron-phonon scattering in nano-transistors using the non-equilibrium Green's function formalism. The approach is based on the lowest order approximation (LOA) to the current and its simplest analytic continuation (LOA+AC). By means of a scaling argument, we show how both LOA and LOA+AC can be easily obtained from the first iteration of the usual self-consistent Born approximation (SCBA) algorithm. Both LOA and LOA+AC are then applied to model n-type silicon nanowire field-effect-transistors and are compared to SCBA current characteristics. In this system, the LOA fails to describe electron-phonon scattering, mainly because of the interactions with acoustic phonons at the band edges. In contrast, the LOA+AC still well approximates the SCBA current characteristics, thus demonstrating the power of analytic continuation techniques. The limits of validity of LOA+AC are also discussed, and more sophisticated and general analytic continuation techniques are suggested for more demanding cases.
ERIC Educational Resources Information Center
Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette
2017-01-01
Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…
NASA Astrophysics Data System (ADS)
Oudini, N.; Sirse, N.; Taccogna, F.; Ellingboe, A. R.; Bendib, A.
2018-05-01
We propose a new technique for diagnosing negative ion properties using Langmuir probe assisted pulsed laser photo-detachment. While the classical technique uses a laser pulse to convert negative ions into electron-atom pairs and a positively biased Langmuir probe tracking the change of electron saturation current, the proposed method uses a negatively biased Langmuir probe to track the temporal evolution of positive ion current. The negative bias aims to avoid the parasitic electron current inherent to probe tip surface ablation. In this work, we show through analytical and numerical approaches that, by knowing electron temperature and performing photo-detachment at two different laser wavelengths, it is possible to deduce plasma electronegativity (ratio of negative ion to electron densities) α, and anisothermicity (ratio of electron to negative ion temperatures) γ-. We present an analytical model that links the change in the collected positive ion current to plasma electronegativity and anisothermicity. Particle-In-Cell simulation is used as a numerical experiment covering a wide range of α and γ- to test the new analysis technique. The new technique is sensitive to α in the range 0.5 < α < 10 and yields γ- for large α, where negative ion flux affects the probe sheath behavior, typically α > 1.
(Bio)Sensing Using Nanoparticle Arrays: On the Effect of Analyte Transport on Sensitivity.
Lynn, N Scott; Homola, Jiří
2016-12-20
There has recently been an extensive amount of work regarding the development of optical, electrical, and mechanical (bio)sensors employing planar arrays of surface-bound nanoparticles. The sensor output for these systems is dependent on the rate at which analyte is transported to, and interacts with, each nanoparticle in the array. There has so far been little discussion on the relationship between the design parameters of an array and the interplay of convection, diffusion, and reaction. Moreover, current methods providing such information require extensive computational simulation. Here we demonstrate that the rate of analyte transport to a nanoparticle array can be quantified analytically. We show that such rates are bound by both the rate to a single NP and that to a planar surface (having equivalent size as the array), with the specific rate determined by the fill fraction: the ratio between the total surface area used for biomolecular capture with respect to the entire sensing area. We characterize analyte transport to arrays with respect to changes in numerous parameters relevant to experiment, including variation of the nanoparticle shape and size, packing density, flow conditions, and analyte diffusivity. We also explore how analyte capture is dependent on the kinetic parameters related to an affinity-based biosensor, and furthermore, we classify the conditions under which the array might be diffusion- or reaction-limited. The results obtained herein are applicable toward the design and optimization of all (bio)sensors based on nanoparticle arrays.
Collier, J W; Shah, R B; Bryant, A R; Habib, M J; Khan, M A; Faustino, P J
2011-02-20
A rapid, selective, and sensitive gradient HPLC method was developed for the analysis of dissolution samples of levothyroxine sodium tablets. Current USP methodology for levothyroxine (L-T(4)) was not adequate to resolve co-elutants from a variety of levothyroxine drug product formulations. The USP method for analyzing dissolution samples of the drug product has shown significant intra- and inter-day variability. The sources of method variability include chromatographic interferences introduced by the dissolution media and the formulation excipients. In the present work, chromatographic separation of levothyroxine was achieved on an Agilent 1100 Series HPLC with a Waters Nova-pak column (250 mm × 3.9 mm) using a 0.01 M phosphate buffer (pH 3.0)-methanol (55:45, v/v) in a gradient elution mobile phase at a flow rate of 1.0 mL/min and detection UV wavelength of 225 nm. The injection volume was 800 μL and the column temperature was maintained at 28°C. The method was validated according to USP Category I requirements. The validation characteristics included accuracy, precision, specificity, linearity, and analytical range. The standard curve was found to have a linear relationship (r(2)>0.99) over the analytical range of 0.08-0.8 μg/mL. Accuracy ranged from 90 to 110% for low quality control (QC) standards and 95 to 105% for medium and high QC standards. Precision was <2% at all QC levels. The method was found to be accurate, precise, selective, and linear for L-T(4) over the analytical range. The HPLC method was successfully applied to the analysis of dissolution samples of marketed levothyroxine sodium tablets. Published by Elsevier B.V.
Collier, J.W.; Shah, R.B.; Bryant, A.R.; Habib, M.J.; Khan, M.A.; Faustino, P.J.
2011-01-01
A rapid, selective, and sensitive gradient HPLC method was developed for the analysis of dissolution samples of levothyroxine sodium tablets. Current USP methodology for levothyroxine (l-T4) was not adequate to resolve co-elutants from a variety of levothyroxine drug product formulations. The USP method for analyzing dissolution samples of the drug product has shown significant intra- and inter-day variability. The sources of method variability include chromatographic interferences introduced by the dissolution media and the formulation excipients. In the present work, chromatographic separation of levothyroxine was achieved on an Agilent 1100 Series HPLC with a Waters Nova-pak column (250mm × 3.9mm) using a 0.01 M phosphate buffer (pH 3.0)–methanol (55:45, v/v) in a gradient elution mobile phase at a flow rate of 1.0 mL/min and detection UV wavelength of 225 nm. The injection volume was 800 µL and the column temperature was maintained at 28 °C. The method was validated according to USP Category I requirements. The validation characteristics included accuracy, precision, specificity, linearity, and analytical range. The standard curve was found to have a linear relationship (r2 > 0.99) over the analytical range of 0.08–0.8 µg/mL. Accuracy ranged from 90 to 110% for low quality control (QC) standards and 95 to 105% for medium and high QC standards. Precision was <2% at all QC levels. The method was found to be accurate, precise, selective, and linear for l-T4 over the analytical range. The HPLC method was successfully applied to the analysis of dissolution samples of marketed levothyroxine sodium tablets. PMID:20947276
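The linearity validation described in the two records above (a standard curve with r² > 0.99 over 0.08-0.8 µg/mL) can be illustrated with a generic least-squares calibration; the standard concentrations and peak areas below are made-up numbers, not data from the study:

```python
import numpy as np

# Hypothetical calibration standards (µg/mL) and detector peak areas
conc = np.array([0.08, 0.2, 0.4, 0.6, 0.8])
area = np.array([10.1, 25.3, 50.2, 75.6, 100.9])   # assumed responses

# Ordinary least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
fit = slope * conc + intercept
ss_res = np.sum((area - fit) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                          # coefficient of determination

# Back-calculate an unknown's concentration from its measured peak area
unknown_area = 62.0
unknown_conc = (unknown_area - intercept) / slope
```

In a validation exercise, quality-control standards would be back-calculated the same way and checked against acceptance windows (e.g., 90-110% accuracy at the low QC level).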
Green analytical chemistry--theory and practice.
Tobiszewski, Marek; Mechlińska, Agata; Namieśnik, Jacek
2010-08-01
This tutorial review summarises the current state of green analytical chemistry with special emphasis on environmentally friendly sample preparation techniques. Green analytical chemistry is a part of the sustainable development concept; its history and origins are described. Miniaturisation of analytical devices and shortening the time elapsing between performing analysis and obtaining reliable analytical results are important aspects of green analytical chemistry. Solventless extraction techniques, the application of alternative solvents and assisted extractions are considered to be the main approaches complying with green analytical chemistry principles.
Digital barcodes of suspension array using laser induced breakdown spectroscopy
He, Qinghua; Liu, Yixi; He, Yonghong; Zhu, Liang; Zhang, Yilong; Shen, Zhiyuan
2016-01-01
We show a coding method for suspension arrays based on laser-induced breakdown spectroscopy (LIBS), which promotes the barcodes from analog to digital. As the foundation of the digital optical barcodes, nanocrystal-encoded microspheres are prepared with a self-assembly encapsulation method. We confirm that digital multiplexing of the LIBS-based coding method becomes feasible, since the microsphere can be coded with direct read-out data of wavelengths, and the method can avoid fluorescence signal crosstalk between barcodes and analyte tags, which leads to overall advantages in accuracy and stability over the current fluorescent multicolor coding method. This demonstration increases the capability of multiplexed detection and accurate filtering, expanding more extensive applications of suspension arrays in life science. PMID:27808270
NASA Astrophysics Data System (ADS)
Li, Jiangui; Wang, Junhua; Zhigang, Zhao; Yan, Weili
2012-04-01
In this paper, an analytical analysis of the permanent magnet vernier (PMV) machine is presented. The key is to analytically solve the governing Laplacian/quasi-Poissonian field equations in the motor regions. The analytical method is verified using the time-stepping finite element method, and the performance of the PMV machine predicted analytically is quantitatively compared with the finite element results; the two agree well. Finally, experimental results are given to further show the validity of the analysis.
Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards
Smith, Justin D.
2013-01-01
This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874
A Review of Numerical Simulation and Analytical Modeling for Medical Devices Safety in MRI
Kabil, J.; Belguerras, L.; Trattnig, S.; Pasquier, C.; Missoffe, A.
2016-01-01
Objectives: To review past and present challenges and ongoing trends in numerical simulation for MRI (Magnetic Resonance Imaging) safety evaluation of medical devices. Methods: A wide literature review on numerical and analytical simulation of simple or complex medical devices in MRI electromagnetic fields shows the evolution through time and a growing concern for MRI safety over the years. Major issues and achievements are described, as well as current trends and perspectives in this research field. Results: Numerical simulation of medical devices is constantly evolving, supported by calculation methods that are now well established. Implants with simple geometry can often be simulated in a computational human model, but one issue remaining today is the experimental validation of these human models. A great concern is to assess RF heating in implants too complex to be traditionally simulated, like pacemaker leads. Thus, ongoing research focuses on alternative hybrid methods, both numerical and experimental, for example a transfer function method. For the static and gradient fields, analytical models can be used for dimensioning simple implant shapes, but they are limited for complex geometries that cannot be studied with simplifying assumptions. Conclusions: Numerical simulation is an essential tool for MRI safety testing of medical devices. The main issues remain the accuracy of simulations compared to real life and the study of complex devices; but as the research field is constantly evolving, some promising ideas are now under investigation to take up the challenges. PMID:27830244
NASA Technical Reports Server (NTRS)
Littell, Justin D.; Binienda, Wieslaw K.; Arnold, William A.; Roberts, Gary D.; Goldberg, Robert K.
2010-01-01
The reliability of impact simulations for aircraft components made with triaxial-braided carbon-fiber composites is currently limited by inadequate material property data and lack of validated material models for analysis. Methods to characterize the material properties used in the analytical models from a systematically obtained set of test data are also lacking. A macroscopic finite element based analytical model to analyze the impact response of these materials has been developed. The stiffness and strength properties utilized in the material model are obtained from a set of quasi-static in-plane tension, compression and shear coupon level tests. Full-field optical strain measurement techniques are applied in the testing, and the results are used to help in characterizing the model. The unit cell of the braided composite is modeled as a series of shell elements, where each element is modeled as a laminated composite. The braided architecture can thus be approximated within the analytical model. The transient dynamic finite element code LS-DYNA is utilized to conduct the finite element simulations, and an internal LS-DYNA constitutive model is utilized in the analysis. Methods to obtain the stiffness and strength properties required by the constitutive model from the available test data are developed. Simulations of quasi-static coupon tests and impact tests of a represented braided composite are conducted. Overall, the developed method shows promise, but improvements that are needed in test and analysis methods for better predictive capability are examined.
Nanopore with Transverse Nanoelectrodes for Electrical Characterization and Sequencing of DNA
Gierhart, Brian C.; Howitt, David G.; Chen, Shiahn J.; Zhu, Zhineng; Kotecki, David E.; Smith, Rosemary L.; Collins, Scott D.
2009-01-01
A DNA sequencing device which integrates transverse conducting electrodes for the measurement of electrode currents during DNA translocation through a nanopore has been nanofabricated and characterized. A focused electron beam (FEB) milling technique, capable of creating features on the order of 1 nm in diameter, was used to create the nanopore. The device was characterized electrically using gold nanoparticles as an artificial analyte with both DC and AC measurement methods. Single nanoparticle/electrode interaction events were recorded. A low-noise, high-speed transimpedance current amplifier for the detection of nano to picoampere currents at microsecond time scales was designed, fabricated and tested for future integration with the nanopore device. PMID:19584949
Nanopore with Transverse Nanoelectrodes for Electrical Characterization and Sequencing of DNA.
Gierhart, Brian C; Howitt, David G; Chen, Shiahn J; Zhu, Zhineng; Kotecki, David E; Smith, Rosemary L; Collins, Scott D
2008-06-16
A DNA sequencing device which integrates transverse conducting electrodes for the measurement of electrode currents during DNA translocation through a nanopore has been nanofabricated and characterized. A focused electron beam (FEB) milling technique, capable of creating features on the order of 1 nm in diameter, was used to create the nanopore. The device was characterized electrically using gold nanoparticles as an artificial analyte with both DC and AC measurement methods. Single nanoparticle/electrode interaction events were recorded. A low-noise, high-speed transimpedance current amplifier for the detection of nano to picoampere currents at microsecond time scales was designed, fabricated and tested for future integration with the nanopore device.
Schänzer, Wilhelm
2008-01-01
Doping and manipulation are undesirable companions of professional and amateur sport. Numerous adverse analytical findings as well as confessions of athletes have demonstrated the variety of doping agents and methods, as well as the inventiveness of cheating sportsmen. Besides ‘conventional’ misuse of drugs such as erythropoietin and insulins, experts fear that therapeutics currently undergoing clinical trials might be part of current or future doping regimens, which aim at increased functionality and performance of organs and tissues. Emerging drugs such as selective androgen receptor modulators (SARMs), hypoxia-inducible factor (HIF) complex stabilizers or modulators of muscle fiber calcium channels are considered relevant for current and future doping controls due to their high potential for misuse in sports. PMID:19337407
Electric current focusing efficiency in a graphene electric lens.
Mu, Weihua; Zhang, Gang; Tang, Yunqing; Wang, Wei; Ou-Yang, Zhongcan
2011-12-14
In the present work, we study theoretically the electron-wave focusing phenomenon in a single-layered graphene pn junction (PNJ) and obtain the electric current density distribution of the graphene PNJ, which is in good agreement with the qualitative result of previous numerical calculations (Cheianov et al 2007 Science 315, 1252). In addition, we find that, for a symmetric PNJ, 1/4 of the total electric current radiated from the source electrode can be collected by the drain electrode. Furthermore, this ratio reduces to 3/16 in a symmetric graphene npn junction. Our results, obtained by the present analytical method, provide a general design rule for an electric lens based on negative-refractive-index systems. © 2011 IOP Publishing Ltd
A sample preparation method for recovering suppressed analyte ions in MALDI TOF MS.
Lou, Xianwen; de Waal, Bas F M; Milroy, Lech-Gustav; van Dongen, Joost L J
2015-05-01
In matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS), analyte signals can be substantially suppressed by other compounds in the sample. In this technical note, we describe a modified thin-layer sample preparation method that significantly reduces the analyte suppression effect (ASE). In our method, analytes are deposited on top of the surface of matrix preloaded on the MALDI plate. To prevent embedding of analyte into the matrix crystals, the sample solutions were prepared without matrix, and efforts were taken not to re-dissolve the preloaded matrix. The results with model mixtures of peptides, synthetic polymers and lipids show that detection of analyte ions, which were completely suppressed using the conventional dried-droplet method, could be effectively recovered by using our method. Our findings suggest that the incorporation of analytes in the matrix crystals has an important contributory effect on ASE. By reducing ASE, our method should be useful for the direct MALDI MS analysis of multicomponent mixtures. Copyright © 2015 John Wiley & Sons, Ltd.
Zeleny, Reinhard; Harbeck, Stefan; Schimmel, Heinz
2009-01-09
A liquid chromatography-electrospray ionisation tandem mass spectrometry method for the simultaneous detection and quantitation of 5-nitroimidazole veterinary drugs in lyophilised pork meat, the chosen format of a candidate certified reference material, has been developed and validated. Six analytes have been included in the scope of validation, i.e. dimetridazole (DMZ), metronidazole (MNZ), ronidazole (RNZ), hydroxymetronidazole (MNZOH), hydroxyipronidazole (IPZOH), and 2-hydroxymethyl-1-methyl-5-nitroimidazole (HMMNI). The analytes were extracted from the sample with ethyl acetate, chromatographically separated on a C(18) column, and finally identified and quantified by tandem mass spectrometry in the multiple reaction monitoring mode (MRM) using matrix-matched calibration and (2)H(3)-labelled analogues of the analytes (except for MNZOH, where [(2)H(3)]MNZ was used). The method was validated in accordance with Commission Decision 2002/657/EC, by determining selectivity, linearity, matrix effect, apparent recovery, repeatability and intermediate precision, decision limits and detection capabilities, robustness of sample preparation method, and stability of extracts. Recovery at 1 microg/kg level was at 100% (estimates in the range of 101-107%) for all analytes, repeatabilities and intermediate precisions at this level were in the range of 4-12% and 2-9%, respectively. Linearity of calibration curves in the working range 0.5-10 microg/kg was confirmed, with r values typically >0.99. Decision limits (CCalpha) and detection capabilities (CCbeta) according to ISO 11843-2 (calibration curve approach) were 0.29-0.44 and 0.36-0.54 microg/kg, respectively. 
The method reliably identifies and quantifies the selected nitroimidazoles in the reconstituted pork meat in the low and sub-microg/kg range and will be applied in an interlaboratory comparison for determining the mass fraction of the selected nitroimidazoles in the candidate reference material currently developed at IRMM.
NASA Astrophysics Data System (ADS)
Chen, Jui-Sheng; Liu, Chen-Wuing; Liang, Ching-Ping; Lai, Keng-Hsin
2012-08-01
Summary: Multi-species advective-dispersive transport equations sequentially coupled with first-order decay reactions are widely used to describe the transport and fate of decay-chain contaminants such as radionuclides, chlorinated solvents, and nitrogen. Although researchers have attempted to present various methods for analytically solving this transport equation system, the currently available solutions are mostly limited to an infinite or a semi-infinite domain. A generalized analytical solution for the coupled multi-species transport problem in a finite domain with an arbitrary time-dependent source boundary is not available in the published literature. In this study, we first derive generalized analytical solutions for this transport problem in a finite domain involving an arbitrary number of species subject to an arbitrary time-dependent source boundary. Subsequently, we adopt these generalized solutions to obtain explicit analytical solutions for a special-case transport scenario involving an exponentially decaying, Bateman-type time-dependent source boundary. We test the derived special-case solutions against a previously published coupled 4-species transport solution and against the corresponding numerical solution for coupled 10-species transport to verify the solutions. Finally, we compare the new analytical solutions derived for a finite domain against published analytical solutions derived for a semi-infinite domain to illustrate the effect of the exit boundary condition on coupled multi-species transport with an exponentially decaying source boundary. The results show noticeable discrepancies between the breakthrough curves of all species in the immediate vicinity of the exit boundary obtained from the finite-domain and semi-infinite-domain solutions under the dispersion-dominated condition.
STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, S.
2010-09-02
Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL95%) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
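The UCL95% described above depends only on the sample count, mean, and standard deviation; the standard one-sided Student-t form of such a limit can be sketched as follows (hypothetical function name; the t quantile for n = 6, i.e. 5 degrees of freedom, is about 2.015):

```python
import math
from statistics import mean, stdev

def ucl95(results, t95=2.015):
    """One-sided upper 95% confidence limit on the mean concentration.

    UCL95 = xbar + t * s / sqrt(n), with t95 the one-sided 95%
    Student-t quantile for n-1 degrees of freedom (2.015 for n = 6).
    """
    n = len(results)
    return mean(results) + t95 * stdev(results) / math.sqrt(n)
```

For six identical results the limit equals the mean; any scatter pushes the limit above the mean in proportion to s/sqrt(n).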
STATISTICAL ANALYSIS OF TANK 19F FLOOR SAMPLE RESULTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, S.
2010-09-02
Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by a UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
Molognoni, Luciano; Daguer, Heitor; de Sá Ploêncio, Leandro Antunes; De Dea Lindner, Juliano
2018-02-01
This paper describes an innovative, fast, multipurpose method for the chemical inspection of meat and fish products by liquid chromatography-tandem mass spectrometry. Solid-liquid extraction and low-temperature partitioning were applied to 17 analytes, which included large bacteriocins (3.5 kDa) and small molecules (organic acids, heterocyclic compounds, polyene macrolides, alkyl esters of p-hydroxybenzoic acid, aromatic and aliphatic biogenic amines, and polyamines). Chromatographic separation was achieved in 10 min using a stationary phase of di-isopropyl-3-aminopropyl silane bound to hydroxylated silica. Method validation was in accordance with Commission Decision 2002/657/EC. Linear ranges were 1.25-10.0 mg kg-1 (natamycin and parabens), 2.50-10.0 mg kg-1 (sorbate and nisin), 25.0-200 mg kg-1 (biogenic amines, hexamethylenetetramine, benzoic and lactic acids), and 50.0-400 mg kg-1 (citric acid). Expanded measurement uncertainty (U) was estimated by single-laboratory validation combined with modeling in two calculation approaches: internal (U = 5%) and external standardization (U = 24%). Method applicability was checked on 89 real samples among raw, cooked, dry-fermented, and cured products, yielding acceptable recoveries. Many regulatory issues were revealed, corroborating the need for enhancement of the current analytical methods. This simple method dispenses with additional extraction procedures and therefore reduces costs over time. It is suitable for routine analysis as a screening or confirmatory tool for both qualitative and quantitative results, replacing many time-consuming analytical procedures. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi
This paper proposes a classification method based on Bayesian analysis for time-series data from an agent-based simulation of the international emissions trading market, and compares it with a discrete Fourier transform (DFT) analytical method. The purpose is to demonstrate analytical methods that map time-series data such as market prices into a feature space. These analytical methods revealed the following results: (1) the classification methods express the time-series data as distances in the mapped space, which are easier to understand and draw inferences from than the raw time series; (2) the methods can analyze uncertain time-series data, including both stationary and non-stationary processes, using distances obtained via agent-based simulation; and (3) the Bayesian analytical method can resolve a 1% difference in the agents' emission reduction targets.
Constrained Burn Optimization for the International Space Station
NASA Technical Reports Server (NTRS)
Brown, Aaron J.; Jones, Brandon A.
2017-01-01
In long-term trajectory planning for the International Space Station (ISS), translational burns are currently targeted sequentially to meet the immediate trajectory constraints, rather than simultaneously to meet all constraints; the process does not employ gradient-based search techniques and is not optimized for a minimum total delta-v (Δv) solution. An analytic formulation of the constraint gradients is developed and used in an optimization solver to overcome these obstacles. Two trajectory examples are explored, highlighting the advantage of the proposed method over the current approach, as well as the potential Δv and propellant savings in the event of propellant shortages.
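The general pattern above (minimize total Δv subject to trajectory constraints, supplying analytic gradients to the solver) can be sketched with a toy problem; the linear constraint and quadratic Δv surrogate below are stand-ins for illustration, not the ISS dynamics or the paper's formulation:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in: pick two burn magnitudes x so a linear "trajectory
# constraint" a.x = c is met while a smooth total-delta-v surrogate
# (sum of squares) is minimized. Analytic gradients are supplied to
# the solver, mirroring the analytic constraint-gradient idea.
a, c = np.array([1.0, 2.0]), 3.0

cost = lambda x: float(x @ x)        # delta-v surrogate
grad = lambda x: 2.0 * x             # analytic cost gradient
cons = {"type": "eq",
        "fun": lambda x: a @ x - c,  # constraint residual
        "jac": lambda x: a}          # analytic constraint gradient

res = minimize(cost, x0=np.zeros(2), jac=grad,
               method="SLSQP", constraints=[cons])
```

For this toy case the minimum-norm solution is known in closed form, x = c·a/||a||² = [0.6, 1.2], which the solver recovers; with analytic Jacobians the solver avoids finite-difference gradient estimates entirely.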
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
77 FR 41336 - Analytical Methods Used in Periodic Reporting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-13
... Methods Used in Periodic Reporting AGENCY: Postal Regulatory Commission. ACTION: Notice of filing. SUMMARY... proceeding to consider changes in analytical methods used in periodic reporting. This notice addresses... informal rulemaking proceeding to consider changes in the analytical methods approved for use in periodic...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
DOT National Transportation Integrated Search
1974-10-01
The author has brought the review of published analytical methods for determining alcohol in body materials up to date. The review deals with analytical methods for alcohol in blood and other body fluids and tissues; breath alcohol methods; factors ...
Zhang, Yuanchao; Liu, Jingquan; Li, Da; Dai, Xing; Yan, Fuhua; Conlan, Xavier A; Zhou, Ruhong; Barrow, Colin J; He, Jin; Wang, Xin; Yang, Wenrong
2016-05-24
Chirality sensing is a very challenging task. Here, we report a method for ultrasensitive detection of the chiral molecule l/d-carnitine based on changes in the recognition tunneling current across self-assembled core-satellite gold nanoparticle (GNP) networks. The recognition tunneling technique has been demonstrated to work at the single-molecule level, where binding between the reader molecules and the analytes occurs in a nanojunction. This process was observed to generate a unique and sensitive change in tunneling current, which can be used to identify the analytes of interest. The molecular recognition mechanism between the amino acid l-cysteine and l/d-carnitine was studied with the aid of SERS. The difference in binding strength between homo- and heterochiral pairs can be effectively probed by copper-ion replacement fracture. The device resistance was measured before and after sequential exposures to l/d-carnitine and copper ions. The normalized resistance change was found to be extremely sensitive to the chirality of the carnitine molecule. The results suggest that a GNP-network device optimized for recognition tunneling was successfully built and that such a device can be used for ultrasensitive detection of chiral molecules.
Campione, Salvatore; Warne, Larry K.; Basilio, Lorena I.; ...
2017-01-13
This study details a model for the response of a finite- or an infinite-length wire interacting with a conducting ground to an electromagnetic pulse excitation. We develop a frequency-domain method based on transmission line theory that we name ATLOG (Analytic Transmission Line Over Ground). This method is developed as an alternative to full-wave methods, as it delivers a fast and reliable solution. It allows for the treatment of finite or infinite lossy, coated wires, and lossy grounds. The cases of a wire above the ground, resting on the ground, and buried beneath the ground are treated. The reported method is general, and the time response of the induced current is obtained using an inverse Fourier transform of the current in the frequency domain. The focus is on the characteristics and propagation of the transmission line mode. Comparisons with full-wave simulations strengthen the validity of the proposed method.
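The final step described above, recovering the time response of the induced current from its frequency-domain solution by an inverse Fourier transform, can be sketched as follows. The single-pole spectrum is a hypothetical stand-in for the ATLOG line current, chosen only to show the transform step:

```python
import numpy as np

# Recover a time-domain induced current from a frequency-domain one
# via an inverse real FFT. The single-pole (low-pass) spectrum below
# is an assumed placeholder for the actual line-current solution.
fs = 1024.0                                 # sampling rate, Hz
n = 1024                                    # number of time samples
freqs = np.fft.rfftfreq(n, d=1 / fs)        # one-sided frequency grid
I_f = 1.0 / (1.0 + 1j * freqs / 50.0)       # assumed I(f) of the line
i_t = np.fft.irfft(I_f, n=n)                # time-domain current i(t)
```

Because `irfft` assumes Hermitian symmetry of the spectrum, the result is a purely real time series, as a physical current must be; the low-pass spectrum yields a pulse concentrated near t = 0 that decays over the record.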