Sample records for analytic continuation method

  1. Stabilizing potentials in bound state analytic continuation methods for electronic resonances in polyatomic molecules

    DOE PAGES

    White, Alec F.; Head-Gordon, Martin; McCurdy, C. William

    2017-01-30

    The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the ²Πg shape resonance of N₂⁻, which has been studied extensively by other methods. Both the choice of stabilizing potential and method of analytic continuation prove to be important to the accuracy of the results. We then conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.
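    As a generic illustration of the idea behind such bound-state analytic continuation (a minimal sketch, not the authors' implementation): bound-state energies E(λ) computed for several strengths λ of a stabilizing potential are interpolated by a rational (Padé-type) approximant, which is then evaluated at a complex argument to estimate a Siegert-type energy. The sketch below uses a Thiele continued-fraction interpolant and purely synthetic data with a square-root branch point standing in for computed bound-state energies; no quantity here comes from an electronic-structure calculation.

```python
import numpy as np

def thiele_coeffs(x, f):
    """Inverse differences a_k of a Thiele continued-fraction interpolant."""
    x = np.asarray(x, dtype=complex)
    g = np.asarray(f, dtype=complex)
    a = [g[0]]
    for k in range(1, len(x)):
        g = (x[k:] - x[k - 1]) / (g[1:] - g[0])
        a.append(g[0])
    return np.array(a)

def thiele_eval(a, x, z):
    """Evaluate the rational interpolant at a (possibly complex) point z."""
    val = a[-1]
    for k in range(len(a) - 2, -1, -1):
        val = a[k] + (z - x[k]) / val
    return val

# Synthetic "bound-state energies" E(lam) with a branch point at lam = 0.3,
# sampled only where the state would be bound (real values).
lam = np.linspace(0.4, 1.2, 9)
E = np.sqrt(lam - 0.3)

a = thiele_coeffs(lam, E)
z = 0.3 - 0.2j                      # continue to a complex coupling strength
print("continued value:", thiele_eval(a, lam, z))
print("exact value:    ", np.sqrt(z - 0.3 + 0j))
```

    In the actual method, λ would be the strength of the Coulomb, Gaussian, or attenuated Coulomb stabilizing potential, and the continuation target would be the physical (unstabilized) Hamiltonian, where the complex value gives the resonance position and width.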

  2. Stabilizing potentials in bound state analytic continuation methods for electronic resonances in polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Alec F.; Head-Gordon, Martin; McCurdy, C. William

    The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the ²Πg shape resonance of N₂⁻, which has been studied extensively by other methods. Both the choice of stabilizing potential and method of analytic continuation prove to be important to the accuracy of the results. We then conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.

  3. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  4. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  5. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  6. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  7. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...

  8. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...

  9. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...

  10. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...

  11. Probabilistic assessment methodology for continuous-type petroleum accumulations

    USGS Publications Warehouse

    Crovelli, R.A.

    2003-01-01

    The analytic resource assessment method, called ACCESS (Analytic Cell-based Continuous Energy Spreadsheet System), was developed to calculate estimates of petroleum resources for the geologic assessment model, called FORSPAN, in continuous-type petroleum accumulations. The ACCESS method is based upon mathematical equations derived from probability theory in the form of a computer spreadsheet system. © 2003 Elsevier B.V. All rights reserved.

  12. A numerical test of the topographic bias

    NASA Astrophysics Data System (ADS)

    Sjöberg, L. E.; Joud, M. S. S.

    2018-02-01

    In 1962 A. Bjerhammar introduced the method of analytical continuation in physical geodesy, implying that surface gravity anomalies are downward continued into the topographic masses down to an internal sphere (the Bjerhammar sphere). The method also includes analytical upward continuation of the potential to the surface of the Earth to obtain the quasigeoid. One can show that the common remove-compute-restore technique for geoid determination also includes an analytical continuation as long as the complete density distribution of the topography is not known. The analytical continuation implies that the downward continued gravity anomaly and/or potential are/is in error by the so-called topographic bias, for which L. E. Sjöberg postulated a simple formula in 2007. Here we numerically test the postulated formula by comparing it with the bias obtained by analytical downward continuation of the external potential of a homogeneous ellipsoid to an inner sphere. The result shows that the postulated formula holds: at the equator of the ellipsoid, where the external potential is downward continued 21 km, the computed and postulated topographic biases agree to less than a millimetre (when the potential is scaled to the unit of metre).
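    For orientation, the postulated topographic bias referred to above is usually quoted in the geodesy literature in a form along the lines of (stated here for context; the paper should be consulted for the exact expression tested)

    $$ V_{\mathrm{bias}} = 2\pi G \rho \left( H^2 + \frac{2H^3}{3R} \right), $$

    where G is the gravitational constant, ρ the topographic density, H the topographic height at the computation point, and R the mean Earth radius; dividing by normal gravity converts this potential bias into a geoid-height bias.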

  13. Analytical Assessment for Transient Stability Under Stochastic Continuous Disturbances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ju, Ping; Li, Hongyu; Gan, Chun

    Here, with the growing integration of renewable power generation, plug-in electric vehicles, and other sources of uncertainty, increasing stochastic continuous disturbances are brought to power systems. The impact of stochastic continuous disturbances on power system transient stability attracts significant attention. To address this problem, this paper proposes an analytical assessment method for transient stability of multi-machine power systems under stochastic continuous disturbances. In the proposed method, a probability measure of transient stability is presented and analytically solved by stochastic averaging. Compared with the conventional method (Monte Carlo simulation), the proposed method is many orders of magnitude faster, which makes it very attractive in practice when many plans for transient stability must be compared or when transient stability must be analyzed quickly. Also, it is found that the evolution of system energy over time is almost a simple diffusion process by the proposed method, which explains the impact mechanism of stochastic continuous disturbances on transient stability in theory.

  14. On the calculation of resonances by analytic continuation of eigenvalues from the stabilization graph

    NASA Astrophysics Data System (ADS)

    Haritan, Idan; Moiseyev, Nimrod

    2017-07-01

    Resonances play a major role in a large variety of fields in physics and chemistry. Accordingly, there is a growing interest in methods designed to calculate them. Recently, Landau et al. proposed a new approach to analytically dilate a single eigenvalue from the stabilization graph into the complex plane. This approach, termed Resonances Via Padé (RVP), utilizes the Padé approximant and is based on a unique analysis of the stabilization graph. Yet, analytic continuation of eigenvalues from the stabilization graph into the complex plane is not a new idea. In 1975, Jordan suggested an analytic continuation method based on the branch point structure of the stabilization graph. The method was later modified by McCurdy and McNutt, and it is still being used today. We refer to this method as the Truncated Characteristic Polynomial (TCP) method. In this manuscript, we perform an in-depth comparison between the RVP and the TCP methods. We demonstrate that while both methods are important and complementary, the advantage of one method over the other is problem-dependent. Illustrative examples are provided in the manuscript.

  15. Analytical resource assessment method for continuous (unconventional) oil and gas accumulations - The "ACCESS" Method

    USGS Publications Warehouse

    Crovelli, Robert A.; revised by Charpentier, Ronald R.

    2012-01-01

    The U.S. Geological Survey (USGS) periodically assesses petroleum resources of areas within the United States and the world. The purpose of this report is to explain the development of an analytic probabilistic method and spreadsheet software system called Analytic Cell-Based Continuous Energy Spreadsheet System (ACCESS). The ACCESS method is based upon mathematical equations derived from probability theory. The ACCESS spreadsheet can be used to calculate estimates of the undeveloped oil, gas, and NGL (natural gas liquids) resources in a continuous-type assessment unit. An assessment unit is a mappable volume of rock in a total petroleum system. In this report, the geologic assessment model is defined first, the analytic probabilistic method is described second, and the spreadsheet ACCESS is described third. In this revised version of Open-File Report 00-044, the text has been updated to reflect modifications that were made to the ACCESS program. Two versions of the program are added as appendixes.
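    The record does not reproduce the ACCESS equations themselves, but the flavor of a cell-based analytic aggregation can be sketched with basic probability identities (a hypothetical illustration, not the actual ACCESS formulas): if each cell is productive with some probability and, when productive, yields a volume with a given mean and standard deviation, the mean and variance of the assessment-unit total follow from summing per-cell moments, here assuming independent cells.

```python
import numpy as np

# Hypothetical per-cell inputs: success probability, and mean / std. dev. of
# the recoverable volume given success (units arbitrary; values made up).
p = np.array([0.3, 0.5, 0.2, 0.4])
m = np.array([1.0, 2.0, 0.5, 1.5])
s = np.array([0.5, 1.0, 0.2, 0.7])

# Moments of X_i = Bernoulli(p_i) * V_i with E[V_i] = m_i, Var[V_i] = s_i^2.
mean_cell = p * m
second_moment_cell = p * (s**2 + m**2)
var_cell = second_moment_cell - mean_cell**2

# Assessment-unit total, assuming independent cells (correlated cells would
# require additional covariance terms).
total_mean = mean_cell.sum()
total_sd = np.sqrt(var_cell.sum())
print(f"total resource: mean = {total_mean:.2f}, sd = {total_sd:.2f}")
```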

  16. Creating analytically divergence-free velocity fields from grid-based data

    NASA Astrophysics Data System (ADS)

    Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.

    2016-10-01

    We present a method, based on B-splines, to calculate a C² continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10], this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method results in qualitatively and quantitatively superior trajectories, which in turn yield more accurate identification of Lagrangian coherent structures.
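    The construction can be illustrated in two dimensions, where a scalar streamfunction plays the role of the vector potential (a simplified sketch of the same principle, not the authors' 3D B-spline code): differentiating a spline representation of the potential yields a velocity field whose divergence vanishes identically.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Synthetic gridded streamfunction psi standing in for flow data on a grid.
x = np.linspace(0.0, 1.0, 21)
y = np.linspace(0.0, 1.0, 21)
X, Y = np.meshgrid(x, y, indexing="ij")
psi = np.sin(np.pi * X) * np.sin(np.pi * Y)

# Cubic-spline representation of the potential.
spline = RectBivariateSpline(x, y, psi, kx=3, ky=3)

def velocity(xp, yp):
    """Divergence-free velocity (u, v) = (dpsi/dy, -dpsi/dx) from the spline."""
    u = spline.ev(xp, yp, dx=0, dy=1)
    v = -spline.ev(xp, yp, dx=1, dy=0)
    return u, v

# Check that the field is (numerically) divergence-free at an off-grid point.
h, xp, yp = 1e-5, 0.37, 0.61
dudx = (velocity(xp + h, yp)[0] - velocity(xp - h, yp)[0]) / (2 * h)
dvdy = (velocity(xp, yp + h)[1] - velocity(xp, yp - h)[1]) / (2 * h)
print("velocity:", velocity(xp, yp), " divergence ~", dudx + dvdy)
```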

  17. Maximum entropy formalism for the analytic continuation of matrix-valued Green's functions

    NASA Astrophysics Data System (ADS)

    Kraberger, Gernot J.; Triebl, Robert; Zingl, Manuel; Aichhorn, Markus

    2017-10-01

    We present a generalization of the maximum entropy method to the analytic continuation of matrix-valued Green's functions. To treat off-diagonal elements correctly based on Bayesian probability theory, the entropy term has to be extended for spectral functions that are possibly negative in some frequency ranges. In that way, all matrix elements of the Green's function matrix can be analytically continued; we introduce a computationally cheap element-wise method for this purpose. However, this method cannot ensure important constraints on the mathematical properties of the resulting spectral functions, namely positive semidefiniteness and Hermiticity. To improve on this, we present a full matrix formalism, where all matrix elements are treated simultaneously. We show the capabilities of these methods using insulating and metallic dynamical mean-field theory (DMFT) Green's functions as test cases. Finally, we apply the methods to realistic material calculations for LaTiO3, where off-diagonal matrix elements in the Green's function appear due to the distorted crystal structure.
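    For orientation, the scalar maximum-entropy continuation that the paper generalizes typically maximizes a functional of the schematic form (standard in the field; the paper's contribution is an entropy term that remains valid when the spectral function can be negative, and a full-matrix treatment)

    $$ Q[A] = \alpha S[A] - \tfrac{1}{2}\chi^2[A], \qquad \chi^2[A] = \sum_n \frac{\bigl| G_n - \int d\omega\, K(n,\omega)\, A(\omega) \bigr|^2}{\sigma_n^2}, \qquad S[A] = \int d\omega \left[ A(\omega) - D(\omega) - A(\omega)\ln\frac{A(\omega)}{D(\omega)} \right], $$

    where G_n are the imaginary-time or Matsubara data, K is the analytic-continuation kernel, D(ω) is the default model, and α is the regularization weight fixed by Bayesian criteria.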

  18. 40 CFR 161.180 - Enforcement analytical method.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements...

  19. 40 CFR 161.180 - Enforcement analytical method.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements...

  20. 40 CFR 161.180 - Enforcement analytical method.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements...

  1. Shape resonances of Be- and Mg- investigated with the method of analytic continuation

    NASA Astrophysics Data System (ADS)

    Čurík, Roman; Paidarová, I.; Horáček, J.

    2018-05-01

    The regularized method of analytic continuation is used to study the low-energy negative-ion states of beryllium (configuration 2s²εp ²P) and magnesium (configuration 3s²εp ²P) atoms. The method applies an additional perturbation potential and requires only routine bound-state multi-electron quantum calculations. Such computations are accessible by most of the free or commercial quantum chemistry software available for atoms and molecules. The perturbation potential is implemented as a spherical Gaussian function with a fixed width. Stability of the analytic continuation technique with respect to the width and with respect to the input range of electron affinities is studied in detail. The computed resonance parameters Er = 0.282 eV, Γ = 0.316 eV for the 2p state of Be⁻ and Er = 0.188 eV, Γ = 0.167 eV for the 3p state of Mg⁻ agree well with the best results obtained by much more elaborate and computationally demanding present-day methods.

  2. 40 CFR Appendix B to Part 136 - Definition and Procedure for the Determination of the Method Detection Limit-Revision 1.11

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES... to a wide variety of sample types ranging from reagent (blank) water containing analyte to wastewater... times the standard deviation of replicate instrumental measurements of the analyte in reagent water. (c...

  3. 21 CFR 530.24 - Procedure for announcing analytical methods for drug residue quantification.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) ANIMAL DRUGS, FEEDS, AND RELATED PRODUCTS EXTRALABEL DRUG USE IN ANIMALS Specific Provisions Relating to Extralabel Use of Animal and Human Drugs in Food-Producing Animals § 530.24 Procedure for announcing analytical methods for drug residue quantification. (a...

  4. 40 CFR 260.21 - Petitions for equivalent testing or analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.21... method; (2) A description of the types of wastes or waste matrices for which the proposed method may be... will be incorporated by reference in § 260.11 and added to “Test Methods for Evaluating Solid Waste...

  5. 40 CFR 260.21 - Petitions for equivalent testing or analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.21... method; (2) A description of the types of wastes or waste matrices for which the proposed method may be... will be incorporated by reference in § 260.11 and added to “Test Methods for Evaluating Solid Waste...

  6. 40 CFR 260.21 - Petitions for equivalent testing or analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.21... method; (2) A description of the types of wastes or waste matrices for which the proposed method may be... will be incorporated by reference in § 260.11 and added to “Test Methods for Evaluating Solid Waste...

  7. 40 CFR 260.21 - Petitions for equivalent testing or analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.21... will be incorporated by reference in § 260.11 and added to “Test Methods for Evaluating Solid Waste... method; (2) A description of the types of wastes or waste matrices for which the proposed method may be...

  8. 21 CFR 320.29 - Analytical methods for an in vivo bioavailability or bioequivalence study.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 5 2010-04-01 2010-04-01 false Analytical methods for an in vivo bioavailability..., DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS FOR HUMAN USE BIOAVAILABILITY AND BIOEQUIVALENCE REQUIREMENTS Procedures for Determining the Bioavailability or Bioequivalence of Drug Products § 320.29...

  9. Method for Continuous Monitoring of Electrospray Ion Formation

    NASA Astrophysics Data System (ADS)

    Metzler, Guille; Crathern, Susan; Bachmann, Lorin; Fernández-Metzler, Carmen; King, Richard

    2017-10-01

    A method for continuously monitoring the performance of electrospray ionization without the addition of hardware or chemistry to the system is demonstrated. In the method, which we refer to as SprayDx, cluster ions with solvent vapor natively formed by electrospray are followed throughout the collection of liquid chromatography-selected reaction monitoring data. The cluster ion extracted ion chromatograms report on the consistency of the ion formation and detection system. The data collected by the SprayDx method resemble the data collected for postcolumn infusion of analyte. The response of the cluster ions monitored reports on changes in the physical parameters of the ion source such as voltage and gas flow. SprayDx is also observed to report on ion suppression in a fashion very similar to a postcolumn infusion of analyte. We anticipate the method finding utility as a continuous readout on the performance of electrospray and other atmospheric pressure ionization processes.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, W.W.; Sullivan, H.H.

    Electroless nickel-plate characteristics are substantially influenced by percent phosphorus concentrations. Available ASTM analytical methods are designed for phosphorus concentrations of less than one percent, compared to the 4.0 to 20.0% concentrations common in electroless nickel plate. A variety of analytical adaptations are applied throughout the industry, resulting in poor data continuity. This paper presents a statistical comparison of five analytical methods and recommends accurate and precise procedures for use in percent phosphorus determinations in electroless nickel plate. 2 figures, 1 table.

  11. One-dimensional backreacting holographic superconductors with exponential nonlinear electrodynamics

    NASA Astrophysics Data System (ADS)

    Ghotbabadi, B. Binaei; Zangeneh, M. Kord; Sheykhi, A.

    2018-05-01

    In this paper, we investigate the effects of nonlinear exponential electrodynamics as well as backreaction on the properties of one-dimensional s-wave holographic superconductors. We carry out our study both analytically and numerically. In the analytical study we employ the Sturm-Liouville method, while in the numerical approach we use the shooting method. We obtain a relation between the critical temperature and chemical potential analytically. Our results show good agreement between the analytical and numerical methods. We observe that increasing the strength of both the nonlinearity and the backreaction parameters makes the formation of condensation in the black hole background harder and lowers the critical temperature. These results are consistent with those obtained for two-dimensional s-wave holographic superconductors.

  12. New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe 2 new tools for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems, simple_ace.pl and simple_ace_mg.pl.

  13. The calculation of transport properties in quantum liquids using the maximum entropy numerical analytic continuation method: Application to liquid para-hydrogen

    PubMed Central

    Rabani, Eran; Reichman, David R.; Krilov, Goran; Berne, Bruce J.

    2002-01-01

    We present a method based on augmenting an exact relation between a frequency-dependent diffusion constant and the imaginary time velocity autocorrelation function, combined with the maximum entropy numerical analytic continuation approach to study transport properties in quantum liquids. The method is applied to the case of liquid para-hydrogen at two thermodynamic state points: a liquid near the triple point and a high-temperature liquid. Good agreement for the self-diffusion constant and for the real-time velocity autocorrelation function is obtained in comparison to experimental measurements and other theoretical predictions. Improvement of the methodology and future applications are discussed. PMID:11830656

  14. Thermodynamics of Newman-Unti-Tamburino charged spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, Robert; Department of Physics, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, N2L 3G1; Stelea, Cristian

    We discuss and compare at length the results of two methods used recently to describe the thermodynamics of Taub-Newman-Unti-Tamburino (NUT) solutions in a de Sitter background. In the first approach (C approach), one deals with an analytically continued version of the metric while in the second approach (R approach), the discussion is carried out using the unmodified metric with Lorentzian signature. No analytic continuation is performed on the coordinates and/or the parameters that appear in the metric. We find that the results of both these approaches are completely equivalent modulo analytic continuation and we provide the exact prescription that relates the results in both methods. The extension of these results to the AdS/flat cases aims to give a physical interpretation of the thermodynamics of NUT-charged spacetimes in the Lorentzian sector. We also briefly discuss the higher-dimensional spaces and note that, analogous with the absence of hyperbolic NUTs in AdS backgrounds, there are no spherical Taub-NUT-dS solutions.

  15. Approximate analytic solutions to 3D unconfined groundwater flow within regional 2D models

    NASA Astrophysics Data System (ADS)

    Luther, K.; Haitjema, H. M.

    2000-04-01

    We present methods for finding approximate analytic solutions to three-dimensional (3D) unconfined steady state groundwater flow near partially penetrating and horizontal wells, and for combining those solutions with regional two-dimensional (2D) models. The 3D solutions use distributed singularities (analytic elements) to enforce boundary conditions on the phreatic surface and seepage faces at vertical wells, and to maintain fixed-head boundary conditions, obtained from the 2D model, at the perimeter of the 3D model. The approximate 3D solutions are analytic (continuous and differentiable) everywhere, including on the phreatic surface itself. While continuity of flow is satisfied exactly in the infinite 3D flow domain, water balance errors can occur across the phreatic surface.

  16. 40 CFR 141.25 - Analytical methods for radioactivity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements... obtaining these documents can be obtained from the Safe Drinking Water Hotline at 800-426-4791. Documents may be inspected at EPA's Drinking Water Docket, EPA West, 1301 Constitution Avenue, NW., Room 3334...

  17. 40 CFR 141.25 - Analytical methods for radioactivity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements... obtaining these documents can be obtained from the Safe Drinking Water Hotline at 800-426-4791. Documents may be inspected at EPA's Drinking Water Docket, EPA West, 1301 Constitution Avenue, NW., Room 3334...

  18. 40 CFR 141.25 - Analytical methods for radioactivity.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements... obtaining these documents can be obtained from the Safe Drinking Water Hotline at 800-426-4791. Documents may be inspected at EPA's Drinking Water Docket, EPA West, 1301 Constitution Avenue, NW., Room 3334...

  19. 40 CFR 141.25 - Analytical methods for radioactivity.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements... obtaining these documents can be obtained from the Safe Drinking Water Hotline at 800-426-4791. Documents may be inspected at EPA's Drinking Water Docket, EPA West, 1301 Constitution Avenue, NW., Room 3334...

  20. On-Line Ion Exchange Liquid Chromatography as a Process Analytical Technology for Monoclonal Antibody Characterization in Continuous Bioprocessing.

    PubMed

    Patel, Bhumit A; Pinto, Nuno D S; Gospodarek, Adrian; Kilgore, Bruce; Goswami, Kudrat; Napoli, William N; Desai, Jayesh; Heo, Jun H; Panzera, Dominick; Pollard, David; Richardson, Daisy; Brower, Mark; Richardson, Douglas D

    2017-11-07

    Combining process analytical technology (PAT) with continuous production provides a powerful tool to observe and control monoclonal antibody (mAb) fermentation and purification processes. This work demonstrates on-line liquid chromatography (on-line LC) as a PAT tool for monitoring a continuous biologics process and forced degradation studies. Specifically, this work focused on ion exchange chromatography (IEX), which is a critical separation technique to detect charge variants. Product-related impurities, including charge variants, that impact function are classified as critical quality attributes (CQAs). First, we confirmed no significant differences were observed in the charge heterogeneity profile of a mAb through both at-line and on-line sampling and that the on-line method has the ability to rapidly detect changes in protein quality over time. The robustness and versatility of the PAT methods were tested by sampling from two purification locations in a continuous mAb process. The PAT IEX methods used with on-line LC were a weak cation exchange (WCX) separation and a newly developed shorter strong cation exchange (SCX) assay. Both methods provided similar results with the distribution of percent acidic, main, and basic species remaining unchanged over a 2 week period. Second, a forced degradation study showed an increase in acidic species and a decrease in basic species when sampled on-line over 7 days. These applications further strengthen the use of on-line LC to monitor CQAs of a mAb continuously with various PAT IEX analytical methods. Implementation of on-line IEX will enable faster decision making during process development and could potentially be applied to control in biomanufacturing.

  1. Behavior analytic approaches to problem behavior in intellectual disabilities.

    PubMed

    Hagopian, Louis P; Gregory, Meagan K

    2016-03-01

    The purpose of the current review is to summarize recent behavior analytic research on problem behavior in individuals with intellectual disabilities. We have focused our review on studies published from 2013 to 2015, but also included earlier studies that were relevant. Behavior analytic research on problem behavior continues to focus on the use and refinement of functional behavioral assessment procedures and function-based interventions. During the review period, a number of studies reported on procedures aimed at making functional analysis procedures more time efficient. Behavioral interventions continue to evolve, and there were several larger scale clinical studies reporting on multiple individuals. There was increased attention on the part of behavioral researchers to develop statistical methods for analysis of within subject data and continued efforts to aggregate findings across studies through evaluative reviews and meta-analyses. Findings support continued utility of functional analysis for guiding individualized interventions and for classifying problem behavior. Modifications designed to make functional analysis more efficient relative to the standard method of functional analysis were reported; however, these require further validation. Larger scale studies on behavioral assessment and treatment procedures provided additional empirical support for effectiveness of these approaches and their sustainability outside controlled clinical settings.

  2. Rapid and continuous analyte processing in droplet microfluidic devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strey, Helmut; Kimmerling, Robert; Bakowski, Tomasz

    The compositions and methods described herein are designed to introduce functionalized microparticles into droplets that can be manipulated in microfluidic devices by fields, including electric (dielectrophoretic) or magnetic fields, and extracted by splitting a droplet to separate the portion of the droplet that contains the majority of the microparticles from the part that is largely devoid of the microparticles. Within the device, channels are variously configured at Y- or T junctions that facilitate continuous, serial isolation and dilution of analytes in solution. The devices can be limited in the sense that they can be designed to output purified analytes that are then further analyzed in separate machines or they can include additional channels through which purified analytes can be further processed and analyzed.

  3. Comparative spectral analysis of veterinary powder product by continuous wavelet and derivative transforms

    NASA Astrophysics Data System (ADS)

    Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru

    2007-10-01

    Comparative simultaneous determination of chlortetracycline and benzocaine in the commercial veterinary powder product was carried out by continuous wavelet transform (CWT) and classical derivative transform (or classical derivative spectrophotometry). In this quantitative spectral analysis, the two proposed analytical methods do not require any chemical separation process. In the first step, several wavelet families were tested to find an optimal CWT for the overlapping signal processing of the analyzed compounds. Subsequently, we observed that the coiflets (COIF-CWT) method with dilation parameter a = 400 gives suitable results for this analytical application. For a comparison, the classical derivative spectrophotometry (CDS) approach was also applied to the simultaneous quantitative resolution of the same analytical problem. Calibration functions were obtained by measuring the transform amplitudes corresponding to zero-crossing points for both the CWT and CDS methods. The utility of these two analytical approaches was verified by analyzing various synthetic mixtures consisting of chlortetracycline and benzocaine, and they were applied to real samples consisting of the veterinary powder formulation. The experimental results obtained from the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry and successful results were reported.

  4. Direct current electrical potential measurement of the growth of small cracks

    NASA Technical Reports Server (NTRS)

    Gangloff, Richard P.; Slavik, Donald C.; Piascik, Robert S.; Van Stone, Robert H.

    1992-01-01

    The analytical and experimental aspects of the direct-current electrical potential difference (dcEPD) method for continuous monitoring of the growth kinetics of short (50 to 500 microns) fatigue cracks are reviewed, and successful applications of the dcEPD method to study fatigue crack propagation in a variety of metallic alloys exposed to various environments are described. Particular attention is given to the principle of the dcEPD method, the analytical electrical potential calibration relationships, and the experimental procedures and equipment.

  5. Analytical interferences of mercuric chloride preservative in environmental water samples: Determination of organic compounds isolated by continuous liquid-liquid extraction or closed-loop stripping

    USGS Publications Warehouse

    Foreman, W.T.; Zaugg, S.D.; Falres, L.M.; Werner, M.G.; Leiker, T.J.; Rogerson, P.F.

    1992-01-01

    Analytical interferences were observed during the determination of organic compounds in groundwater samples preserved with mercuric chloride. The nature of the interference was different depending on the analytical isolation technique employed. (1) Water samples extracted with dichloromethane by continuous liquid-liquid extraction (CLLE) and analyzed by gas chromatography/mass spectrometry revealed a broad HgCl2 'peak' eluting over a 3-5-min span which interfered with the determination of coeluting organic analytes. Substitution of CLLE for separatory funnel extraction in EPA method 508 also resulted in analytical interferences from the use of HgCl2 preservative. (2) Mercuric chloride was purged, along with organic contaminants, during closed-loop stripping (CLS) of groundwater samples and absorbed onto the activated charcoal trap. Competitive sorption of the HgCl2 by the trap appeared to contribute to the observed poor recoveries for spiked organic contaminants. The HgCl2 was not displaced from the charcoal with the dichloromethane elution solvent and required strong nitric acid to achieve rapid, complete displacement. Similar competitive sorption mechanisms might also occur in other purge and trap methods when this preservative is used.

  6. Life cycle management of analytical methods.

    PubMed

    Parr, Maria Kristina; Schmidt, Alexander H

    2018-01-05

    In modern process management, the life cycle concept is gaining more and more importance. It focuses on the total costs of the process from investment through operation to final retirement. In recent years, an increasing interest in this concept has also emerged for analytical procedures. The life cycle of an analytical method consists of design, development, validation (including instrumental qualification, continuous method performance verification and method transfer) and finally retirement of the method. Regulatory bodies also appear to have increased their awareness of life cycle management for analytical methods. Thus, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH), as well as the United States Pharmacopeial Forum, discuss the introduction of new guidelines that include life cycle management of analytical methods. The US Pharmacopeia (USP) Validation and Verification expert panel has already proposed a new General Chapter 〈1220〉 "The Analytical Procedure Lifecycle" for integration into the USP. Furthermore, a growing interest in life cycle management is also seen in the non-regulated environment. Quality-by-design based method development results in increased method robustness. This reduces the effort needed for method performance verification and post-approval changes, and minimizes the risk of method-related out-of-specification results, which strongly contributes to reduced costs of the method during its life cycle. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Learning Analytics in Higher Education Development: A Roadmap

    ERIC Educational Resources Information Center

    Adejo, Olugbenga; Connolly, Thomas

    2017-01-01

    The increase in education data and advance in technology are bringing about enhanced teaching and learning methodology. The emerging field of Learning Analytics (LA) continues to seek ways to improve the different methods of gathering, analysing, managing and presenting learners' data with the sole aim of using it to improve the student learning…

  8. HANDBOOK: CONTINUOUS EMISSION MONITORING SYSTEMS FOR NON-CRITERIA POLLUTANTS

    EPA Science Inventory

    This Handbook provides a description of the methods used to continuously monitor non-criteria pollutants emitted from stationary sources. The Handbook contains a review of current regulatory programs, the state-of-the-art sampling system design, analytical techniques, and the use...

  9. Extrapolation of scattering data to the negative-energy region. II. Applicability of effective range functions within an exactly solvable model

    DOE PAGES

    Blokhintsev, L. D.; Kadyrov, A. S.; Mukhamedzhanov, A. M.; ...

    2018-02-05

    A problem of analytical continuation of scattering data to the negative-energy region to obtain information about bound states is discussed within an exactly solvable potential model. This work is a continuation of the previous one by the same authors [L. D. Blokhintsev et al., Phys. Rev. C 95, 044618 (2017)]. The goal of this paper is to determine the most effective way of analytic continuation for different systems. The d + α and α + ¹²C systems are considered and, for comparison, an effective-range function approach and a recently suggested Δ method [O. L. Ramírez Suárez and J.-M. Sparenberg, Phys. Rev. C 96, 034601 (2017)] are applied. We conclude that the Δ method is more effective for heavier systems with large values of the Coulomb parameter, whereas for light systems with small values of the Coulomb parameter the effective-range function method might be preferable.
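    For orientation (standard scattering theory, not specific to this record), the neutral s-wave effective-range function is

    $$ K(k^2) \equiv k\cot\delta_0(k) = -\frac{1}{a} + \frac{1}{2} r_0 k^2 + O(k^4), $$

    which is analytic in the energy variable k² near threshold and can therefore be continued to negative energies k² = -κ²; a bound state corresponds to K(-κ²) = -κ. For the charged systems considered here (d + α, α + ¹²C), a Coulomb-modified effective-range function plays the same role, and the size of the Coulomb parameter is what governs which continuation procedure works best.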

  10. Extrapolation of scattering data to the negative-energy region. II. Applicability of effective range functions within an exactly solvable model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blokhintsev, L. D.; Kadyrov, A. S.; Mukhamedzhanov, A. M.

    A problem of analytical continuation of scattering data to the negative-energy region to obtain information about bound states is discussed within an exactly solvable potential model. This work is a continuation of the previous one by the same authors [L. D. Blokhintsev et al., Phys. Rev. C 95, 044618 (2017)]. The goal of this paper is to determine the most effective way of analytic continuation for different systems. The d + α and α + ¹²C systems are considered and, for comparison, an effective-range function approach and a recently suggested Δ method [O. L. Ramírez Suárez and J.-M. Sparenberg, Phys. Rev. C 96, 034601 (2017)] are applied. We conclude that the Δ method is more effective for heavier systems with large values of the Coulomb parameter, whereas for light systems with small values of the Coulomb parameter the effective-range function method might be preferable.

  11. Semantic Interaction for Visual Analytics: Toward Coupling Cognition and Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander

    2014-07-01

    The dissertation discussed in this article [1] was written in the midst of an era of digitization. The world is becoming increasingly instrumented with sensors, monitoring, and other methods for generating data describing social, physical, and natural phenomena. Thus, data exist with the potential of being analyzed to uncover, or discover, the phenomena from which it was created. However, as the analytic models leveraged to analyze these data continue to increase in complexity and computational capability, how can visualizations and user interaction methodologies adapt and evolve to continue to foster discovery and sensemaking?

  12. Method and apparatus for continuous fluid leak monitoring and detection in analytical instruments and instrument systems

    DOEpatents

    Weitz, Karl K [Pasco, WA; Moore, Ronald J [West Richland, WA

    2010-07-13

    A method and device are disclosed that provide for detection of fluid leaks in analytical instruments and instrument systems. The leak detection device includes a collection tube, a fluid absorbing material, and a circuit that electrically couples to an indicator device. When assembled, the leak detection device detects and monitors for fluid leaks, providing a preselected response in conjunction with the indicator device when contacted by a fluid.

  13. Approaching near real-time biosensing: microfluidic microsphere based biosensor for real-time analyte detection.

    PubMed

    Cohen, Noa; Sabhachandani, Pooja; Golberg, Alexander; Konry, Tania

    2015-04-15

    In this study we describe a simple lab-on-a-chip (LOC) biosensor approach utilizing a well-mixed microfluidic device and a microsphere-based assay capable of performing near real-time diagnostics of clinically relevant analytes such as cytokines and antibodies. We were able to overcome the adsorption kinetics reaction rate-limiting mechanism, which is diffusion-controlled in standard immunoassays, by introducing the microsphere-based assay into a well-mixed yet simple microfluidic device with turbulent flow profiles in the reaction regions. The integrated microsphere-based LOC device performs dynamic detection of the analyte in a minimal amount of biological specimen by continuously sampling micro-liter volumes of sample per minute to detect dynamic changes in target analyte concentration. Furthermore, we developed a mathematical model for the well-mixed reaction to describe the near real-time detection mechanism observed in the developed LOC method. To demonstrate the specificity and sensitivity of the developed real-time monitoring LOC approach, we applied the device to clinically relevant analytes: the Tumor Necrosis Factor (TNF)-α cytokine and its clinically used inhibitor, anti-TNF-α antibody. Based on the results reported herein, the developed LOC device provides a continuous, sensitive, and specific near real-time monitoring method for analytes such as cytokines and antibodies, reduces reagent volumes by nearly three orders of magnitude, and eliminates the washing steps required by standard immunoassays. Copyright © 2014 Elsevier B.V. All rights reserved.
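    The mathematical model for the well-mixed reaction is not reproduced in the abstract; the sketch below shows the kind of well-mixed Langmuir binding kinetics such models are commonly built on (the rate constants and the concentration step are hypothetical, chosen only to illustrate how the bound fraction tracks a change in analyte concentration in near real time).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic well-mixed Langmuir binding kinetics (illustrative only; not the
# authors' specific model, and the rate constants below are made up).
k_on, k_off, B_max = 1e5, 1e-3, 1.0       # M^-1 s^-1, s^-1, normalized sites

def analyte_conc(t):
    """Step change in sample analyte concentration at t = 600 s."""
    return 1e-9 if t < 600 else 5e-9       # molar

def rhs(t, y):
    B = y[0]                               # fraction of occupied binding sites
    return [k_on * analyte_conc(t) * (B_max - B) - k_off * B]

sol = solve_ivp(rhs, (0.0, 1800.0), [0.0], max_step=1.0)
# The bound fraction B(t) tracks the concentration step, which is the
# behavior the well-mixed LOC format exploits for near real-time readout.
print(sol.y[0, -1])
```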

  14. Methods and devices for high-throughput dielectrophoretic concentration

    DOEpatents

    Simmons, Blake A.; Cummings, Eric B.; Fiechtner, Gregory J.; Fintschenko, Yolanda; McGraw, Gregory J.; Salmi, Allen

    2010-02-23

    Disclosed herein are methods and devices for assaying and concentrating analytes in a fluid sample using dielectrophoresis. As disclosed, the methods and devices utilize substrates having a plurality of pores through which analytes can be selectively prevented from passing, or inhibited, on application of an appropriate electric field waveform. The pores of the substrate produce a nonuniform electric field having local extrema located near the pores. These nonuniform fields drive dielectrophoresis, which produces the inhibition. Arrangements of electrodes and porous substrates support continuous, bulk, multi-dimensional, and staged selective concentration.

  15. Using PAT to accelerate the transition to continuous API manufacturing.

    PubMed

    Gouveia, Francisca F; Rahbek, Jesper P; Mortensen, Asmus R; Pedersen, Mette T; Felizardo, Pedro M; Bro, Rasmus; Mealy, Michael J

    2017-01-01

    Significant improvements can be realized by converting conventional batch processes into continuous ones. The main drivers include reduction of cost and waste, increased safety, and simpler scale-up and tech transfer activities. Re-designing the process layout offers the opportunity to incorporate a set of process analytical technologies (PAT) embraced in the Quality-by-Design (QbD) framework. These tools are used for process state estimation, providing enhanced understanding of the underlying variability in the process impacting quality and yield. This work describes a road map for identifying the best technology to speed up the development of continuous processes while providing the basis for developing analytical methods for monitoring and controlling the continuous full-scale reaction. The suitability of in-line Raman, FT-infrared (FT-IR), and near-infrared (NIR) spectroscopy for real-time process monitoring was investigated in the production of 1-bromo-2-iodobenzene. The synthesis consists of three consecutive reaction steps including the formation of an unstable diazonium salt intermediate, which is critical to secure high yield and avoid formation of by-products. All spectroscopic methods were able to capture critical information related to the accumulation of the intermediate with very similar accuracy. NIR spectroscopy proved to be satisfactory in terms of performance, ease of installation, full-scale transferability, and stability to very adverse process conditions. As such, in-line NIR was selected to monitor the continuous full-scale production. The quantitative method was developed against theoretical concentration values of the intermediate since representative sampling for off-line reference analysis cannot be achieved. The rapid and reliable analytical system allowed both speeding up the design of the continuous process and a better understanding of the manufacturing requirements to ensure optimal yield and avoid unreacted raw materials and by-products in the continuous reactor effluent.

  16. Universal analytical scattering form factor for shell-, core-shell, or homogeneous particles with continuously variable density profile shape.

    PubMed

    Foster, Tobias

    2011-09-01

    A novel analytical and continuous density distribution function with a widely variable shape is reported and used to derive an analytical scattering form factor that allows us to universally describe the scattering from particles with the radial density profile of homogeneous spheres, shells, or core-shell particles. Composed by the sum of two Fermi-Dirac distribution functions, the shape of the density profile can be altered continuously from step-like via Gaussian-like or parabolic to asymptotically hyperbolic by varying a single "shape parameter", d. Using this density profile, the scattering form factor can be calculated numerically. An analytical form factor can be derived using an approximate expression for the original Fermi-Dirac distribution function. This approximation is accurate for sufficiently small rescaled shape parameters, d/R (R being the particle radius), up to values of d/R ≈ 0.1, and thus captures step-like, Gaussian-like, and parabolic as well as asymptotically hyperbolic profile shapes. It is expected that this form factor is particularly useful in a model-dependent analysis of small-angle scattering data since the applied continuous and analytical function for the particle density profile can be compared directly with the density profile extracted from the data by model-free approaches like the generalized inverse Fourier transform method. © 2011 American Chemical Society
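    Numerically, a form factor of this kind can be obtained from any spherically symmetric profile via the standard Fourier-Bessel integral; the sketch below uses a difference of two Fermi-Dirac steps as one plausible realization of the profile described (the exact parameterization and shape parameter used in the paper may differ).

```python
import numpy as np
from scipy.integrate import trapezoid

def fermi_step(r, R, d):
    """Smoothed step: ~1 for r < R, ~0 for r > R, wall softness set by d."""
    return 1.0 / (np.exp((r - R) / d) + 1.0)

def density(r, R_out=50.0, R_in=30.0, d=2.0):
    # Combination of two Fermi-Dirac steps giving a shell with diffuse walls;
    # letting R_in -> 0 recovers a (smoothed) homogeneous sphere.  This exact
    # parameterization is an illustrative guess, not necessarily the paper's.
    return fermi_step(r, R_out, d) - fermi_step(r, R_in, d)

def form_factor(q, r_max=200.0, n=4000):
    """F(q) = 4*pi * integral of rho(r) * sin(qr)/(qr) * r^2 dr."""
    r = np.linspace(1e-6, r_max, n)
    integrand = density(r) * np.sinc(q * r / np.pi) * r**2  # np.sinc(x) = sin(pi x)/(pi x)
    return 4.0 * np.pi * trapezoid(integrand, r)

for q in (0.02, 0.05, 0.1, 0.2):
    print(f"q = {q:4.2f}  F(q) = {form_factor(q):10.2f}")
```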

  17. High-throughput liquid-absorption preconcentrator sampling methods

    DOEpatents

    Zaromb, Solomon

    1994-01-01

    A system for detecting trace concentrations of an analyte in air includes a preconcentrator for the analyte and an analyte detector. The preconcentrator includes an elongated tubular container comprising a wettable material. The wettable material is continuously wetted with an analyte-sorbing liquid which flows from one part of the container to a lower end. Sampled air flows through the container in contact with the wetted material with a swirling motion which results in efficient transfer of analyte vapors or aerosol particles to the sorbing liquid and preconcentration of traces of analyte in the liquid. The preconcentrated traces of analyte may be either detected within the container or removed therefrom for injection into a separate detection means or for subsequent analysis.

  18. High-throughput liquid-absorption preconcentrator sampling methods

    DOEpatents

    Zaromb, S.

    1994-07-12

    A system for detecting trace concentrations of an analyte in air includes a preconcentrator for the analyte and an analyte detector. The preconcentrator includes an elongated tubular container comprising a wettable material. The wettable material is continuously wetted with an analyte-sorbing liquid which flows from one part of the container to a lower end. Sampled air flows through the container in contact with the wetted material with a swirling motion which results in efficient transfer of analyte vapors or aerosol particles to the sorbing liquid and preconcentration of traces of analyte in the liquid. The preconcentrated traces of analyte may be either detected within the container or removed therefrom for injection into a separate detection means or for subsequent analysis. 12 figs.

  19. Learning Analytics Methods, Benefits, and Challenges in Higher Education: A Systematic Literature Review

    ERIC Educational Resources Information Center

    Avella, John T.; Kebritchi, Mansureh; Nunn, Sandra G.; Kanai, Therese

    2016-01-01

    Higher education for the 21st century continues to promote discoveries in the field through learning analytics (LA). The problem is that the rapid embrace of LA diverts educators' attention from clearly identifying requirements and implications of using LA in higher education. LA is a promising emerging field, yet higher education stakeholders…

  20. Implementation and application of moving average as continuous analytical quality control instrument demonstrated for 24 routine chemistry assays.

    PubMed

    Rossum, Huub H van; Kemperman, Hans

    2017-07-26

    General application of a moving average (MA) as continuous analytical quality control (QC) for routine chemistry assays has failed due to lack of a simple method that allows optimization of MAs. A new method was applied to optimize the MA for routine chemistry and was evaluated in daily practice as a continuous analytical QC instrument. MA procedures were optimized using an MA bias detection simulation procedure. Optimization was graphically supported by bias detection curves. Next, all optimal MA procedures that contributed to the quality assurance were run for 100 consecutive days and MA alarms generated during working hours were investigated. Optimized MA procedures were applied for 24 chemistry assays. During this evaluation, 303,871 MA values and 76 MA alarms were generated. Of all alarms, 54 (71%) were generated during office hours. Of these, 41 were further investigated and were caused by ion selective electrode (ISE) failure (1), calibration failure not detected by QC due to improper QC settings (1), possible bias (significant difference with the other analyzer) (10), non-human materials analyzed (2), extreme result(s) of a single patient (2), pre-analytical error (1), no cause identified (20), and no conclusion possible (4). MA was implemented in daily practice as a continuous QC instrument for 24 routine chemistry assays. In our setup, in which each MA alarm required follow-up, a manageable number of MA alarms was generated, and these alarms proved valuable. For the management of MA alarms, several applications/requirements in the MA management software will simplify the use of MA procedures.
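    A minimal sketch of such a moving-average QC procedure on patient results is shown below (the window length, target, and control limit are illustrative placeholders; in practice they are optimized per assay with bias-detection simulations as described above).

```python
import numpy as np

def moving_average_qc(results, window=20, target=140.0, limit=2.0):
    """Yield (index, MA value, alarm flag) once the window is filled."""
    buf = []
    for i, x in enumerate(results):
        buf.append(x)
        if len(buf) > window:
            buf.pop(0)
        if len(buf) == window:
            ma = float(np.mean(buf))
            yield i, ma, abs(ma - target) > limit

# Example: sodium-like results with a simulated assay bias after sample 150.
rng = np.random.default_rng(0)
data = rng.normal(140.0, 3.0, 300)
data[150:] += 4.0                      # synthetic bias the MA should catch
alarms = [i for i, ma, alarm in moving_average_qc(data) if alarm]
print(alarms[:5])                      # first samples at which the MA alarms
```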

  1. Implementation of structural response sensitivity calculations in a large-scale finite-element analysis system

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Rogers, J. L., Jr.

    1982-01-01

    The implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of the system are also discussed.
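    The advantage referred to can be seen even in a toy problem (a stand-in example of my own, not the paper's finite-element system): for an axial bar with displacement u = PL/(EA) and cross-sectional area A as the design variable, the analytic derivative du/dA is exact, while a finite-difference estimate depends on the step size.

```python
# Toy stand-in for a structural response sensitivity (not the paper's system).
P, L, E = 1.0e4, 2.0, 70.0e9            # load [N], length [m], modulus [Pa]

def u(A):
    return P * L / (E * A)               # axial displacement

def du_dA_analytic(A):
    return -P * L / (E * A**2)           # exact design derivative

A0 = 1.0e-4                              # cross-sectional area [m^2]
exact = du_dA_analytic(A0)
for h in (1e-5, 1e-7, 1e-9, 1e-12):      # finite-difference step in A
    fd = (u(A0 + h) - u(A0 - h)) / (2.0 * h)
    print(f"h = {h:.0e}   relative error = {abs(fd - exact) / abs(exact):.2e}")
# The analytic derivative is exact for any A; the finite-difference estimate
# degrades when h is too large (truncation) or too small (round-off).
```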

  2. Validation of the replica trick for simple models

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
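    The replica trick discussed here rests on the standard identity

    $$ \langle \ln Z \rangle = \lim_{n \to 0} \frac{\langle Z^n \rangle - 1}{n} = \lim_{n \to 0} \frac{\partial}{\partial n} \ln \langle Z^n \rangle, $$

    in which ⟨Z^n⟩ is evaluated for integer numbers of replicas n and then continued analytically to n → 0; the validity of that continuation is precisely what the paper examines for its simple models.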

  3. Ammonia Monitor

    NASA Technical Reports Server (NTRS)

    Sauer, Richard L. (Inventor); Akse, James R. (Inventor); Thompson, John O. (Inventor); Atwater, James E. (Inventor)

    1999-01-01

    An ammonia monitor and a method of use are disclosed. A continuous, real-time determination of the concentration of ammonia in an aqueous process stream is possible over a wide dynamic range of concentrations. No reagents are required because pH is controlled by an in-line solid-phase base. Ammonia is selectively transported across a membrane from the process stream to an analytical stream under pH control. The specific electrical conductance of the analytical stream is measured and used to determine the concentration of ammonia.

  4. Analytical approximation and numerical simulations for periodic travelling water waves

    NASA Astrophysics Data System (ADS)

    Kalimeris, Konstantinos

    2017-12-01

    We present recent analytical and numerical results for two-dimensional periodic travelling water waves with constant vorticity. The analytical approach is based on novel asymptotic expansions. We obtain numerical results in two different ways: the first is based on the solution of a constrained optimization problem, and the second is realized as a numerical continuation algorithm. Both methods are applied on some examples of non-constant vorticity. This article is part of the theme issue 'Nonlinear water waves'.

  5. On the analytic and numeric optimisation of airplane trajectories under real atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Gonzalo, J.; Domínguez, D.; López, D.

    2014-12-01

    From the beginning of the aviation era, economic constraints have forced operators to continuously improve flight planning. Revenue is proportional to the cost per flight and to airspace occupancy. Many methods, the first dating from the middle of the last century, have explored analytical, numerical, and artificial-intelligence resources to reach optimal flight planning. In parallel, advances in meteorology and communications allow almost real-time knowledge of atmospheric conditions and a reliable, error-bounded forecast for the near future. Thus, apart from avoiding weather risks, airplanes can dynamically adapt their trajectories to minimise their costs. International regulators are aware of these capabilities, so it is reasonable to envisage changes that allow this dynamic planning negotiation to become operational soon. Moreover, current unmanned airplanes, very popular and often small, suffer the impact of winds and other weather conditions in the form of dramatic changes in their performance. The present paper reviews analytic and numeric solutions for typical trajectory planning problems. Analytic methods attempt to solve the problem using the Pontryagin principle, where influence parameters are added to the state variables to form a split-condition differential equation problem; the system can be solved numerically (indirect optimisation) or using parameterised functions (direct optimisation). Numerical methods, on the other hand, are based on Bellman's dynamic programming (or Dijkstra algorithms), which exploit the fact that two optimal trajectories can be concatenated to form a new optimal one if the joint point is demonstrated to belong to the final optimal solution. There are no a priori conditions identifying the best method: traditionally, analytic methods have been employed more for continuous problems and numeric methods for discrete ones. In the current problem, airplane behaviour is defined by continuous equations, while wind fields are given on a discrete grid at certain time intervals. The research demonstrates advantages and disadvantages of each method, as well as performance figures for the solutions found for typical flight conditions under static and dynamic atmospheres. This provides significant parameters to be used in the selection of solvers for optimal trajectories.
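
    A minimal sketch of the numeric (Bellman dynamic-programming) family of methods on a discrete wind grid; the airspeed, grid spacing, allowed moves, and wind field are illustrative assumptions, not data from the paper:

      import numpy as np

      def dp_min_time(wind, airspeed=230.0, dx=50_000.0, dy=20_000.0):
          """Minimum-time path across a discrete wind grid by dynamic programming.
          wind[i, j] is the assumed along-track wind (m/s) at row i, column j. From
          each node the airplane may keep its row or move one row up or down while
          advancing one column; the leg time is distance / (airspeed + wind)."""
          rows, cols = wind.shape
          time = np.full((rows, cols), np.inf)
          pred = np.full((rows, cols), -1, dtype=int)   # for backtracking the path
          time[:, 0] = 0.0
          for j in range(1, cols):
              for i in range(rows):
                  for di in (-1, 0, 1):
                      k = i + di
                      if 0 <= k < rows:
                          leg = np.hypot(dx, abs(di) * dy)
                          t = time[k, j - 1] + leg / (airspeed + wind[k, j - 1])
                          if t < time[i, j]:
                              time[i, j], pred[i, j] = t, k
          return time, pred

      rng = np.random.default_rng(0)
      wind = rng.uniform(-30.0, 40.0, size=(7, 12))     # illustrative wind field
      times, pred = dp_min_time(wind)
      print(times[:, -1].min() / 3600.0, "hours")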

  6. Free vibration analysis of a robotic fish based on a continuous and non-uniform flexible backbone with distributed masses

    NASA Astrophysics Data System (ADS)

    Coral, W.; Rossi, C.; Curet, O. M.

    2015-12-01

    This paper presents a Differential Quadrature Element Method for free transverse vibration of a robotic fish based on a continuous and non-uniform flexible backbone with distributed masses (fish ribs). The proposed method is based on the theory of a Timoshenko cantilever beam. The effects of the masses (number, magnitude and position) on the value of natural frequencies are investigated. Governing equations, compatibility and boundary conditions are formulated according to the Differential Quadrature rules. The convergence, efficiency and accuracy are compared to other analytical solutions proposed in the literature. Moreover, the proposed method has been validated against a physical prototype of a flexible fish backbone. The main advantages of this method, compared to the exact solutions available in the literature, are twofold: first, a smaller computational cost; second, it allows analysing the free vibration of beams whose cross-section is an arbitrary function, which is normally difficult or even impossible with other analytical methods.

  7. IR spectroscopic studies in microchannel structures

    NASA Astrophysics Data System (ADS)

    Guber, A. E.; Bier, W.

    1998-06-01

    By means of the various microengineering methods available, microreaction systems, among other devices, can be produced. These microreactors consist of microchannels in which chemical reactions take place under defined conditions. For optimum process control, continuous online analytics in the microchannels is envisaged. For this purpose, a special analytical module has been developed. It may be applied for IR spectroscopic studies at any point of the microchannel.

  8. Formal and physical equivalence in two cases in contemporary quantum physics

    NASA Astrophysics Data System (ADS)

    Fraser, Doreen

    2017-08-01

    The application of analytic continuation in quantum field theory (QFT) is juxtaposed to T-duality and mirror symmetry in string theory. Analytic continuation, a mathematical transformation that takes the time variable t to negative imaginary time (-it), was initially used as a mathematical technique for solving perturbative Feynman diagrams, and was subsequently the basis for the Euclidean approaches within mainstream QFT (e.g., Wilsonian renormalization group methods, lattice gauge theories) and the Euclidean field theory program for rigorously constructing non-perturbative models of interacting QFTs. A crucial difference between theories related by duality transformations and those related by analytic continuation is that the former are judged to be physically equivalent while the latter are regarded as physically inequivalent. There are other similarities between the two cases that make comparing and contrasting them a useful exercise for clarifying the type of argument that is needed to support the conclusion that dual theories are physically equivalent. In particular, T-duality and analytic continuation in QFT share the criterion for predictive equivalence that two theories agree on the complete set of expectation values and the mass spectra and the criterion for formal equivalence that there is a "translation manual" between the physically significant algebras of observables and sets of states in the two theories. The analytic continuation case study illustrates how predictive and formal equivalence are compatible with physical inequivalence, but not in the manner of standard underdetermination cases. Arguments for the physical equivalence of dual theories must cite considerations beyond predictive and formal equivalence. The analytic continuation case study is an instance of the strategy of developing a physical theory by extending the formal or mathematical equivalence with another physical theory as far as possible. That this strategy has resulted in developments in pure mathematics as well as theoretical physics is another feature that this case study has in common with dualities in string theory.

  9. The role of analytical science in natural resource decision making

    NASA Astrophysics Data System (ADS)

    Miller, Alan

    1993-09-01

    There is a continuing debate about the proper role of analytical (positivist) science in natural resource decision making. Two diametrically opposed views are evident, arguing for and against a more extended role for scientific information. The debate takes on a different complexion if one recognizes that certain kinds of problem, referred to here as “wicked” or “trans-science” problems, may not be amenable to the analytical process. Indeed, the mistaken application of analytical methods to trans-science problems may not only be a waste of time and money but also serve to hinder policy development. Since many environmental issues are trans-science in nature, then it follows that alternatives to analytical science need to be developed. In this article, the issues involved in the debate are clarified by examining the impact of the use of analytical methods in a particular case, the spruce budworm controversy in New Brunswick. The article ends with some suggestions about a “holistic” approach to the problem.

  10. Performance Evaluation of an Improved GC-MS Method to Quantify Methylmercury in Fish.

    PubMed

    Watanabe, Takahiro; Kikuchi, Hiroyuki; Matsuda, Rieko; Hayashi, Tomoko; Akaki, Koichi; Teshima, Reiko

    2015-01-01

    Here, we set out to improve our previously developed methylmercury analytical method, involving phenyl derivatization and gas chromatography-mass spectrometry (GC-MS). In the improved method, phenylation of methylmercury with sodium tetraphenylborate was carried out in a toluene/water two-phase system, instead of in water alone. The modification enabled derivatization at optimum pH, and the formation of by-products was dramatically reduced. In addition, adsorption of methyl phenyl mercury in the GC system was suppressed by co-injection of PEG200, enabling continuous analysis without loss of sensitivity. The performance of the improved analytical method was independently evaluated by three analysts using certified reference materials and methylmercury-spiked fresh fish samples. The present analytical method was validated as suitable for determination of compliance with the provisional regulation value for methylmercury in fish, set in the Food Sanitation Law.

  11. [Continual improvement of quantitative analytical method development of Panax notoginseng saponins based on quality by design].

    PubMed

    Dai, Sheng-Yun; Xu, Bing; Shi, Xin-Yuan; Xu, Xiang; Sun, Ying-Qiang; Qiao, Yan-Jiang

    2017-03-01

    This study aims to propose a continual improvement strategy based on quality by design (QbD). As an example, an ultra high performance liquid chromatography (UPLC) method was developed to accomplish the method transfer from HPLC to UPLC for Panax notoginseng saponins (PNS) and to achieve the continual improvement of the PNS method based on QbD. A Plackett-Burman screening design and a Box-Behnken optimization design were employed to further understand the relationship between the critical method parameters (CMPs) and critical method attributes (CMAs), and the Bayesian design space was then built. The separation degree of the critical peaks (ginsenoside Rg₁ and ginsenoside Re) was over 2.0 and the analysis time was less than 17 min for a method chosen from the design space, with 20% initial acetonitrile concentration, 10 min isocratic time and a gradient slope of 6%•min⁻¹. Finally, the optimal method was validated by an accuracy profile. Based on the same analytical target profile (ATP), the comparison of HPLC and UPLC, including the chromatographic method, CMA identification, the CMP-CMA model and the system suitability test (SST), indicated that the UPLC method could shorten the analysis time, improve the critical separation and satisfy the requirements of the SST. In all, the HPLC method could be replaced by UPLC for the quantitative analysis of PNS. Copyright© by the Chinese Pharmaceutical Association.

  12. Detection, Occurrence and Fate of Emerging Contaminants in Agricultural Environments

    PubMed Central

    Cassada, David A.; Bartelt–Hunt, Shannon L.; Li, Xu; D’Alessio, Matteo; Zhang, Yun; Zhang, Yuping; Sallach, J. Brett

    2018-01-01

    A total of 59 papers published in 2015 were reviewed, ranging from detailed descriptions of analytical methods, to fate and occurrence studies, to ecological effects and sampling techniques for a wide variety of emerging contaminants likely to occur in agricultural environments. New methods and studies on veterinary pharmaceuticals, steroids, and antibiotic resistance genes in agricultural environments continue to expand our knowledge base on the occurrence and potential impacts of these compounds. This review is divided into the following sections: Introduction, Analytical Methods, Steroid Hormones, Pharmaceutical Contaminants, Transformation Products, and "Antibiotic Resistance, Drugs, Bugs and Genes". PMID:27620078

  13. Entropy generation in Gaussian quantum transformations: applying the replica method to continuous-variable quantum information theory

    NASA Astrophysics Data System (ADS)

    Gagatsos, Christos N.; Karanikas, Alexandros I.; Kordas, Georgios; Cerf, Nicolas J.

    2016-02-01

    In spite of their simple description in terms of rotations or symplectic transformations in phase space, quadratic Hamiltonians such as those modelling the most common Gaussian operations on bosonic modes remain poorly understood in terms of entropy production. For instance, determining the quantum entropy generated by a Bogoliubov transformation is notably a hard problem, with generally no known analytical solution, while it is vital to the characterisation of quantum communication via bosonic channels. Here we overcome this difficulty by adapting the replica method, a tool borrowed from statistical physics and quantum field theory. We exhibit a first application of this method to continuous-variable quantum information theory, where it enables accessing entropies in an optical parametric amplifier. As an illustration, we determine the entropy generated by amplifying a binary superposition of the vacuum and a Fock state, which yields a surprisingly simple, yet unknown analytical expression.

  14. Recovering Paleo-Records from Antarctic Ice-Cores by Coupling a Continuous Melting Device and Fast Ion Chromatography.

    PubMed

    Severi, Mirko; Becagli, Silvia; Traversi, Rita; Udisti, Roberto

    2015-11-17

    Recently, the increasing interest in the understanding of global climatic changes and of natural processes related to climate has yielded the development and improvement of new analytical methods for the analysis of environmental samples. The determination of trace chemical species is a useful tool in paleoclimatology, and the techniques for the analysis of ice cores have evolved during the past few years from laborious measurements on discrete samples to continuous techniques allowing higher temporal resolution, higher sensitivity and, above all, higher throughput. Two fast ion chromatographic (FIC) methods are presented. The first method was able to measure Cl⁻, NO₃⁻ and SO₄²⁻ in a melter-based continuous flow system, separating the three analytes in just 1 min. The second method (called Ultra-FIC) was able to perform a single chromatographic analysis in just 30 s, and the resulting sampling resolution was 1.0 cm at a typical melting rate of 4.0 cm min⁻¹. Both methods combine the accuracy, precision, and low detection limits of ion chromatography with the enhanced speed and high depth resolution of continuous melting systems. Both methods have been tested and validated with the analysis of several hundred meters of different ice cores. In particular, the Ultra-FIC method was used to reconstruct the high-resolution SO₄²⁻ profile of the last 10,000 years for the EDML ice core, allowing the counting of the annual layers, which represents a key point in dating this kind of natural archive.

  15. Improved full analytical polygon-based method using Fourier analysis of the three-dimensional affine transformation.

    PubMed

    Pan, Yijie; Wang, Yongtian; Liu, Juan; Li, Xin; Jia, Jia

    2014-03-01

    Previous research [Appl. Opt. 52, A290 (2013)] has revealed that Fourier analysis of three-dimensional affine transformation theory can be used to improve the computation speed of the traditional polygon-based method. In this paper, we continue our research and propose an improved full analytical polygon-based method developed upon this theory. Vertex vectors of primitive and arbitrary triangles and the pseudo-inverse matrix were used to obtain an affine transformation matrix representing the spatial relationship between the two triangles. With this relationship and the primitive spectrum, we analytically obtained the spectrum of the arbitrary triangle. This algorithm discards low-level angular dependent computations. In order to add diffusive reflection to each arbitrary surface, we also propose a whole matrix computation approach that takes advantage of the affine transformation matrix and uses matrix multiplication to calculate shifting parameters of similar sub-polygons. The proposed method improves hologram computation speed for the conventional full analytical approach. Optical experimental results are demonstrated which prove that the proposed method can effectively reconstruct three-dimensional scenes.
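
    A small sketch of the vertex-based affine estimation step: the pseudo-inverse of the primitive triangle's homogeneous vertex matrix gives an affine transform that maps it onto an arbitrary triangle. The vertex coordinates are invented for illustration, and this is not the hologram-specific formulation of the paper:

      import numpy as np

      def affine_between_triangles(src, dst):
          """Return a 3x4 affine matrix A such that, in homogeneous coordinates,
          each row of `src` (primitive triangle, shape (3, 3)) is mapped onto the
          corresponding row of `dst`. With only three vertices the 3-D system is
          underdetermined, so the pseudo-inverse picks the minimum-norm solution."""
          src_h = np.hstack([src, np.ones((3, 1))])   # homogeneous vertices, 3x4
          A = np.linalg.pinv(src_h) @ dst             # 4x3, so that src_h @ A = dst
          return A.T

      src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
      dst = np.array([[2.0, 1.0, 0.5], [3.0, 1.2, 0.7], [2.1, 2.0, 0.4]])
      A = affine_between_triangles(src, dst)
      print(np.hstack([src, np.ones((3, 1))]) @ A.T)  # reproduces the dst vertices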

  16. Multi-crosswell profile 3D imaging and method

    DOEpatents

    Washbourne, John K.; Rector, III, James W.; Bube, Kenneth P.

    2002-01-01

    Characterizing the value of a particular property, for example, seismic velocity, of a subsurface region of ground is described. In one aspect, the value of the particular property is represented using at least one continuous analytic function such as a Chebychev polynomial. The seismic data may include data derived from at least one crosswell dataset for the subsurface region of interest and may also include other data. In either instance, data may simultaneously be used from a first crosswell dataset in conjunction with one or more other crosswell datasets and/or with the other data. In another aspect, the value of the property is characterized in three dimensions throughout the region of interest using crosswell and/or other data. In still another aspect, crosswell datasets for highly deviated or horizontal boreholes are inherently useful. The method is performed, in part, by fitting a set of vertically spaced layer boundaries, represented by an analytic function such as a Chebychev polynomial, within and across the region encompassing the boreholes such that a series of layers is defined between the layer boundaries. Initial values of the particular property are then established between the layer boundaries and across the subterranean region using a series of continuous analytic functions. The continuous analytic functions are then adjusted to more closely match the value of the particular property across the subterranean region of ground to determine the value of the particular property for any selected point within the region.
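
    A minimal sketch of representing a depth-dependent property with a continuous Chebyshev series, in the spirit of the patent; the synthetic velocity profile and polynomial degree are assumptions for illustration only:

      import numpy as np

      depth = np.linspace(0.0, 1000.0, 200)                          # m
      velocity = 1800.0 + 0.6 * depth + 40.0 * np.sin(depth / 90.0)  # m/s, synthetic
      cheb = np.polynomial.Chebyshev.fit(depth, velocity, deg=12)    # continuous model
      print(np.max(np.abs(cheb(depth) - velocity)))                  # fit residual, m/s
      print(cheb(123.4))                                             # evaluate anywhere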

  17. Analytical method for the fast time-domain reconstruction of fluorescent inclusions in vitro and in vivo.

    PubMed

    Han, Sung-Ho; Farshchi-Heydari, Salman; Hall, David J

    2010-01-20

    A novel time-domain optical method to reconstruct the relative concentration, lifetime, and depth of a fluorescent inclusion is described. We establish an analytical method for the estimation of these parameters for a localized fluorescent object directly from simple evaluations of the continuous wave intensity, the exponential decay, and the temporal position of the maximum of the fluorescence temporal point-spread function. Since the more complex full inversion process is not involved, this method permits robust and fast processing in exploring the properties of a fluorescent inclusion. This method is confirmed by in vitro and in vivo experiments. Copyright 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.
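
    A minimal sketch of the tail-fitting step, recovering the lifetime from the exponential decay of a temporal point-spread function (TPSF) and the continuous-wave intensity from its time integral; the synthetic TPSF and fit window are assumptions, not the in vitro/in vivo processing chain of the paper:

      import numpy as np
      from scipy.optimize import curve_fit

      def tail(t, a, tau):
          # Mono-exponential decay of the late part of the TPSF.
          return a * np.exp(-t / tau)

      t = np.linspace(0.0, 10.0, 500)                          # ns
      true_tau = 1.2                                           # ns
      tpsf = (1.0 - np.exp(-t / 0.2)) * np.exp(-t / true_tau)  # toy TPSF shape
      tpsf += 0.005 * np.random.default_rng(0).normal(size=t.size)

      t_max = t[np.argmax(tpsf)]                               # related to inclusion depth
      late = t > t_max + 1.0                                   # fit the decaying tail only
      (a_fit, tau_fit), _ = curve_fit(tail, t[late], tpsf[late], p0=(1.0, 1.0))
      cw_intensity = np.sum(tpsf) * (t[1] - t[0])              # CW intensity = time integral
      print(tau_fit, t_max, cw_intensity)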

  18. Development and Preparation of Lead-Containing Paint Films and Diagnostic Test Materials

    EPA Science Inventory

    Lead in paint continues to be a threat to children’s health in cities across the United States, which means there is an ongoing need for testing and analysis of paint. This ongoing analytical effort, and especially the development of new methods, continues to drive the need for diagnostic test materials.

  19. Numerical implementation of complex orthogonalization, parallel transport on Stiefel bundles, and analyticity

    NASA Astrophysics Data System (ADS)

    Avitabile, Daniele; Bridges, Thomas J.

    2010-06-01

    Numerical integration of complex linear systems of ODEs depending analytically on an eigenvalue parameter is considered. Complex orthogonalization, which is required to stabilize the numerical integration, results in non-analytic systems. It is shown that properties of eigenvalues are still efficiently recoverable by extracting information from a non-analytic characteristic function. The orthonormal systems are constructed using the geometry of Stiefel bundles. Different forms of continuous orthogonalization in the literature are shown to correspond to different choices of connection one-form on the Stiefel bundle. For the numerical integration, Gauss-Legendre Runge-Kutta algorithms are the principal choice for preserving orthogonality, and performance results are shown for a range of GLRK methods. The theory and methods are tested by application to example boundary value problems including the Orr-Sommerfeld equation in hydrodynamic stability.

  20. Comparison of a discrete steepest ascent method with the continuous steepest ascent method for optimal programing

    NASA Technical Reports Server (NTRS)

    Childs, A. G.

    1971-01-01

    A discrete steepest ascent method which allows controls that are not piecewise constant (for example, it allows all continuous piecewise linear controls) was derived for the solution of optimal programming problems. This method is based on the continuous steepest ascent method of Bryson and Denham and new concepts introduced by Kelley and Denham in their development of compatible adjoints for taking into account the effects of numerical integration. The method is a generalization of the algorithm suggested by Canon, Cullum, and Polak, with the details of the gradient computation given. The discrete method was compared with the continuous method for an aerodynamics problem for which an analytic solution is given by Pontryagin's maximum principle, and numerical results are presented. The discrete method converges more rapidly than the continuous method at first, but then, for some undetermined reason, loses its exponential convergence rate. A comparison was also made with the algorithm of Canon, Cullum, and Polak using piecewise constant controls. This algorithm is very competitive with the continuous algorithm.

  1. In-line Raman spectroscopic monitoring and feedback control of a continuous twin-screw pharmaceutical powder blending and tableting process.

    PubMed

    Nagy, Brigitta; Farkas, Attila; Gyürkés, Martin; Komaromy-Hiller, Szofia; Démuth, Balázs; Szabó, Bence; Nusser, Dávid; Borbás, Enikő; Marosi, György; Nagy, Zsombor Kristóf

    2017-09-15

    The integration of the Process Analytical Technology (PAT) initiative into the continuous production of pharmaceuticals is indispensable for reliable production. The present paper reports the implementation of in-line Raman spectroscopy in a continuous blending and tableting process of a three-component model pharmaceutical system, containing caffeine as model active pharmaceutical ingredient (API), glucose as model excipient and magnesium stearate as lubricant. The real-time analysis of API content, blend homogeneity, and tablet content uniformity was performed using a Partial Least Squares (PLS) quantitative method. The in-line Raman spectroscopic monitoring showed that the continuous blender was capable of producing blends with high homogeneity, and that technological malfunctions can be detected by the proposed PAT method. The Raman spectroscopy-based feedback control of the API feeder was also established, creating a 'Process Analytically Controlled Technology' (PACT), which guarantees the required API content in the produced blend. This is, to the best of the authors' knowledge, the first ever application of Raman spectroscopy in continuous blending and the first Raman-based feedback control in the formulation technology of solid pharmaceuticals. Copyright © 2017 Elsevier B.V. All rights reserved.
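
    A minimal sketch of a PLS quantitative model relating spectra to API content, in the spirit of the in-line monitoring described above; the synthetic spectra, band shape, and concentration range are assumptions, not the calibration data of the paper:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      n_samples, n_channels = 40, 300
      api = rng.uniform(80.0, 120.0, n_samples)                 # % of label claim
      band = np.exp(-0.5 * ((np.arange(n_channels) - 150) / 8.0) ** 2)
      X = np.outer(api, band) + rng.normal(0.0, 0.5, (n_samples, n_channels))

      pls = PLSRegression(n_components=3)
      pls.fit(X, api)

      new_spectrum = 100.0 * band + rng.normal(0.0, 0.5, n_channels)
      print(pls.predict(new_spectrum.reshape(1, -1)))           # ~100 % of label claim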

  2. 40 CFR 141.74 - Analytical and monitoring requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., Volume 54, p. 3197, December, 1988), may be obtained from the American Water Works Association Research... method for use with a continuous monitoring instrument provided the chemistry, accuracy, and precision...

  3. 40 CFR 141.74 - Analytical and monitoring requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., Volume 54, p. 3197, December, 1988), may be obtained from the American Water Works Association Research... method for use with a continuous monitoring instrument provided the chemistry, accuracy, and precision...

  4. 40 CFR 141.74 - Analytical and monitoring requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., Volume 54, p. 3197, December, 1988), may be obtained from the American Water Works Association Research... method for use with a continuous monitoring instrument provided the chemistry, accuracy, and precision...

  5. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1991-01-01

    Continuing studies associated with the development of the quasi-analytical (QA) sensitivity method for three dimensional transonic flow about wings are presented. Furthermore, initial results using the quasi-analytical approach were obtained and compared to those computed using the finite difference (FD) approach. The basic goals achieved were: (1) carrying out various debugging operations pertaining to the quasi-analytical method; (2) addition of section design variables to the sensitivity equation in the form of multiple right hand sides; (3) reconfiguring the analysis/sensitivity package in order to facilitate the execution of analysis/FD/QA test cases; and (4) enhancing the display of output data to allow careful examination of the results and to permit various comparisons of sensitivity derivatives obtained using the FD/QA methods to be conducted easily and quickly. In addition to discussing the above goals, the results of executing subcritical and supercritical test cases are presented.

  6. A monolithic homotopy continuation algorithm with application to computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Brown, David A.; Zingg, David W.

    2016-09-01

    A new class of homotopy continuation methods is developed suitable for globalizing quasi-Newton methods for large sparse nonlinear systems of equations. The new continuation methods, described as monolithic homotopy continuation, differ from the classical predictor-corrector algorithm in that the predictor and corrector phases are replaced with a single phase which includes both a predictor and corrector component. Conditional convergence and stability are proved analytically. Using a Laplacian-like operator to construct the homotopy, the new algorithm is shown to be more efficient than the predictor-corrector homotopy continuation algorithm as well as an implementation of the widely-used pseudo-transient continuation algorithm for some inviscid and turbulent, subsonic and transonic external aerodynamic flows over the ONERA M6 wing and the NACA 0012 airfoil using a parallel implicit Newton-Krylov finite-difference flow solver.
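
    For contrast with the monolithic variant, the classical predictor-corrector idea can be sketched on a small algebraic system: a convex homotopy H(x, lam) = lam*F(x) + (1 - lam)*(x - x0) is traced from lam = 0 to lam = 1, with Newton corrections at each step. The target system, step count, and Newton settings are illustrative assumptions, not a CFD residual:

      import numpy as np

      def F(x):
          # Illustrative target nonlinear system (not a flow-solver residual).
          return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,
                           np.exp(x[0]) + x[1] - 1.0])

      def J_F(x):
          return np.array([[2.0 * x[0], 2.0 * x[1]],
                           [np.exp(x[0]), 1.0]])

      def homotopy_solve(x0, steps=50, newton_iters=5):
          """Trace H(x, lam) = lam*F(x) + (1-lam)*(x - x0) = 0 from lam=0 to lam=1."""
          x = x0.copy()
          for lam in np.linspace(0.0, 1.0, steps + 1)[1:]:
              for _ in range(newton_iters):          # corrector: Newton on H(., lam)
                  H = lam * F(x) + (1.0 - lam) * (x - x0)
                  JH = lam * J_F(x) + (1.0 - lam) * np.eye(2)
                  x = x - np.linalg.solve(JH, H)
          return x

      root = homotopy_solve(np.array([1.0, 1.0]))
      print(root, F(root))                           # residual should be near zero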

  7. Methods of analysis by the U.S. Geological Survey National Water Quality Laboratory; determination of inorganic and organic constituents in water and fluvial sediments

    USGS Publications Warehouse

    Fishman, M. J.

    1993-01-01

    Methods to be used to analyze samples of water, suspended sediment and bottom material for their content of inorganic and organic constituents are presented. Technology continually changes, and so this laboratory manual includes new and revised methods for determining the concentration of dissolved constituents in water, whole water recoverable constituents in water-suspended sediment samples, and recoverable concentration of constituents in bottom material. For each method, the general topics covered are the application, the principle of the method, interferences, the apparatus and reagents required, a detailed description of the analytical procedure, reporting results, units and significant figures, and analytical precision data. Included in this manual are 30 methods.

  8. 40 CFR 799.9410 - TSCA chronic toxicity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... should be used, if possible, throughout the duration of the study, and the research sample should be... continuously or intermittently depending on the method of analysis. Chamber concentration may be measured using gravimetric or analytical methods, as appropriate. If trial run measurements are reasonably consistent (±10...

  9. 40 CFR 799.9410 - TSCA chronic toxicity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... should be used, if possible, throughout the duration of the study, and the research sample should be... continuously or intermittently depending on the method of analysis. Chamber concentration may be measured using gravimetric or analytical methods, as appropriate. If trial run measurements are reasonably consistent (±10...

  10. 40 CFR 799.9410 - TSCA chronic toxicity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... should be used, if possible, throughout the duration of the study, and the research sample should be... continuously or intermittently depending on the method of analysis. Chamber concentration may be measured using gravimetric or analytical methods, as appropriate. If trial run measurements are reasonably consistent (±10...

  11. 40 CFR 799.9410 - TSCA chronic toxicity.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... should be used, if possible, throughout the duration of the study, and the research sample should be... continuously or intermittently depending on the method of analysis. Chamber concentration may be measured using gravimetric or analytical methods, as appropriate. If trial run measurements are reasonably consistent (±10...

  12. 40 CFR 799.9410 - TSCA chronic toxicity.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... should be used, if possible, throughout the duration of the study, and the research sample should be... continuously or intermittently depending on the method of analysis. Chamber concentration may be measured using gravimetric or analytical methods, as appropriate. If trial run measurements are reasonably consistent (±10...

  13. Approximate analytical description of the elastic strain field due to an inclusion in a continuous medium with cubic anisotropy

    NASA Astrophysics Data System (ADS)

    Nenashev, A. V.; Koshkarev, A. A.; Dvurechenskii, A. V.

    2018-03-01

    We suggest an approach to the analytical calculation of the strain distribution due to an inclusion in elastically anisotropic media for the case of cubic anisotropy. The idea consists in the approximate reduction of the anisotropic problem to a (simpler) isotropic problem. This gives, for typical semiconductors, an improvement in accuracy by an order of magnitude, compared to the isotropic approximation. Our method allows using, in the case of elastically anisotropic media, analytical solutions obtained for isotropic media only, such as analytical formulas for the strain due to polyhedral inclusions. The present work substantially extends the applicability of analytical results, making them more suitable for describing real systems, such as epitaxial quantum dots.

  14. Evaluation of the matrix effect on gas chromatography--mass spectrometry with carrier gas containing ethylene glycol as an analyte protectant.

    PubMed

    Fujiyoshi, Tomoharu; Ikami, Takahito; Sato, Takashi; Kikukawa, Koji; Kobayashi, Masato; Ito, Hiroshi; Yamamoto, Atsushi

    2016-02-19

    The consequences of matrix effects in GC are a major issue of concern in pesticide residue analysis. The aim of this study was to evaluate the applicability of an analyte protectant generator in pesticide residue analysis using a GC-MS system. The technique is based on continuous introduction of ethylene glycol into the carrier gas. Ethylene glycol as an analyte protectant effectively compensated for the matrix effects in agricultural product extracts. All peak intensities were increased by this technique without affecting the GC-MS performance. Calibration curves for ethylene glycol in GC-MS systems with various degrees of pollution were compared, and similar response enhancements were observed. This result suggests a convenient multi-residue GC-MS method using an analyte protectant generator instead of the conventional compensation method for matrix-induced response enhancement, in which a mixture of analyte protectants is added to both neat and sample solutions. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. 40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 15 2013-07-01 2013-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0How do I conduct tests at similar sources? Optional Requirements 14.0How do I use and...

  16. 40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 15 2014-07-01 2014-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0How do I conduct tests at similar sources? Optional Requirements 14.0How do I use and...

  17. 40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 14 2011-07-01 2011-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0How do I conduct tests at similar sources? Optional Requirements 14.0How do I use and...

  18. 40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 15 2012-07-01 2012-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0How do I conduct tests at similar sources? Optional Requirements 14.0How do I use and...

  19. Analytical capillary isotachophoresis after 50 years of development: Recent progress 2014-2016.

    PubMed

    Malá, Zdena; Gebauer, Petr; Boček, Petr

    2017-01-01

    This review presents a survey of papers on analytical ITP published from 2014 through the first quarter of 2016. The 50th anniversary of ITP as a modern analytical method offers the opportunity to present a brief view on its beginnings and to discuss the present state of the art from the viewpoint of the history of its development. Reviewed papers from the field of theory and principles confirm the continuing importance of computer simulations in the discovery of new and unexpected phenomena. The strongly developing field of instrumentation and techniques shows novel channel methodologies, including the use of porous media and new on-chip assays, where ITP is often included in a preseparative or even preparative function. A number of new analytical applications are reported, with ITP appearing almost exclusively in combination with other principles and methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Process analytical technology in continuous manufacturing of a commercial pharmaceutical product.

    PubMed

    Vargas, Jenny M; Nielsen, Sarah; Cárdenas, Vanessa; Gonzalez, Anthony; Aymat, Efrain Y; Almodovar, Elvin; Classe, Gustavo; Colón, Yleana; Sanchez, Eric; Romañach, Rodolfo J

    2018-03-01

    The implementation of process analytical technology and continuous manufacturing at an FDA approved commercial manufacturing site is described. In this direct compaction process, the blends produced were monitored with a Near Infrared (NIR) spectroscopic calibration model developed with partial least squares (PLS) regression. The authors understand that this is the first study where the continuous manufacturing (CM) equipment was used as a gravimetric reference method for the calibration model. A principal component analysis (PCA) model was also developed to identify the powder blend and determine whether it was similar to the calibration blends. An air diagnostic test was developed to assure that powder was present within the interface when the NIR spectra were obtained. The air diagnostic test, as well as the PCA and PLS calibration models, was integrated into an industrial software platform that collects the real-time NIR spectra and applies the calibration models. The PCA test successfully detected an equipment malfunction. Variographic analysis was also performed to estimate the sampling and analytical errors that affect the results from the NIR spectroscopic method during commercial production. The system was used to monitor and control a 28 h continuous manufacturing run, where the average drug concentration determined by the NIR method was 101.17% of label claim with a standard deviation of 2.17%, based on 12,633 spectra collected. The average drug concentration for the tablets produced from these blends was 100.86% of label claim with a standard deviation of 0.4%, for 500 tablets analyzed by Fourier Transform Near Infrared (FT-NIR) transmission spectroscopy. The excellent agreement between the mean drug concentration values in the blends and tablets produced provides further evidence of the suitability of the validation strategy that was followed. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. 21 CFR 160.180 - Egg yolks.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 2 2012-04-01 2012-04-01 false Egg yolks. 160.180 Section 160.180 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN... the method prescribed in “Official Methods of Analysis of the Association of Official Analytical...

  2. Numerical optimization using flow equations.

    PubMed

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.

  3. Numerical optimization using flow equations

    NASA Astrophysics Data System (ADS)

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
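
    Numerical analytic continuation from imaginary to real frequencies can also be sketched with a different, widely used technique, a Thiele continued-fraction (Padé) interpolation; this is not the flow-equation method of the paper, and the single-band test function below is an illustrative assumption:

      import numpy as np

      def pade_continuation(z, f, w):
          """Thiele continued-fraction interpolation of f known at points z,
          evaluated at points w (e.g. just above the real axis). Agreement is
          approximate and degrades quickly for noisy input data."""
          n = len(z)
          g = np.zeros((n, n), dtype=complex)
          g[0] = f
          for i in range(1, n):
              g[i, i:] = (g[i - 1, i - 1] - g[i - 1, i:]) / ((z[i:] - z[i - 1]) * g[i - 1, i:])
          a = np.diag(g)
          A = np.zeros((n + 1,) + w.shape, dtype=complex)
          B = np.zeros_like(A)
          A[0], B[0] = 0.0, 1.0
          A[1], B[1] = a[0], 1.0
          for i in range(2, n + 1):
              A[i] = A[i - 1] + (w - z[i - 2]) * a[i - 1] * A[i - 2]
              B[i] = B[i - 1] + (w - z[i - 2]) * a[i - 1] * B[i - 2]
          return A[n] / B[n]

      beta = 10.0
      iw = 1j * np.pi * (2 * np.arange(16) + 1) / beta       # Matsubara-like grid
      G_iw = 0.5 * np.log((iw + 1.0) / (iw - 1.0))           # flat band on [-1, 1]
      w = np.linspace(-2.0, 2.0, 5) + 0.05j                  # just above the real axis
      print(pade_continuation(iw, G_iw, w))
      print(0.5 * np.log((w + 1.0) / (w - 1.0)))             # exact values for comparison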

  4. Determination of the aerosol size distribution by analytic inversion of the extinction spectrum in the complex anomalous diffraction approximation.

    PubMed

    Franssens, G; De Maziére, M; Fonteyn, D

    2000-08-20

    A new derivation is presented for the analytical inversion of aerosol spectral extinction data to size distributions. It is based on the complex analytic extension of the anomalous diffraction approximation (ADA). We derive inverse formulas that are applicable to homogeneous nonabsorbing and absorbing spherical particles. Our method simplifies, generalizes, and unifies a number of results obtained previously in the literature. In particular, we clarify the connection between the ADA transform and the Fourier and Laplace transforms. Also, the effect of the particle refractive-index dispersion on the inversion is examined. It is shown that, when Lorentz's model is used for this dispersion, the continuous ADA inverse transform is mathematically well posed, whereas with a constant refractive index it is ill posed. Further, a condition is given, in terms of Lorentz parameters, for which the continuous inverse operator does not amplify the error.

  5. Annual banned-substance review: analytical approaches in human sports drug testing.

    PubMed

    Thevis, Mario; Kuuranne, Tiia; Geyer, Hans; Schänzer, Wilhelm

    2017-01-01

    There has been an immense amount of visibility of doping issues on the international stage over the past 12 months with the complexity of doping controls reiterated on various occasions. Hence, analytical test methods continuously being updated, expanded, and improved to provide specific, sensitive, and comprehensive test results in line with the World Anti-Doping Agency's (WADA) 2016 Prohibited List represent one of several critical cornerstones of doping controls. This enterprise necessitates expediting the (combined) exploitation of newly generated information on novel and/or superior target analytes for sports drug testing assays, drug elimination profiles, alternative test matrices, and recent advances in instrumental developments. This paper is a continuation of the series of annual banned-substance reviews appraising the literature published between October 2015 and September 2016 concerning human sports drug testing in the context of WADA's 2016 Prohibited List. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY11 Status Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulisek, Jonathan A.; Anderson, Kevin K.; Bowyer, Sonya M.

    2011-09-30

    Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory assay methods. This document is a progress report for FY2011 PNNL analysis and algorithm development. Progress made by PNNL in FY2011 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model accounts for self-shielding effects using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the true self-shielding functions of the used-fuel assembly models. The potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used fuel parameter space was demonstrated. Also, in FY2011, PNNL continued to develop an analytical model. Such efforts included adding six more non-fissile absorbers to the analytical shielding function and accounting for the non-uniformity of the neutron flux across the LSDS assay chamber. A hybrid analytical-empirical approach was developed to determine the mass of total Pu (sum of the masses of 239Pu, 240Pu, and 241Pu), which is an important quantity in safeguards. Results using this hybrid method were of approximately the same accuracy as the pure empirical approach. In addition, total Pu was determined with much better accuracy by the hybrid approach than by the pure analytical approach. In FY2012, PNNL will continue efforts to optimize its empirical model and minimize its reliance on calibration data. In addition, PNNL will continue to develop an analytical model, considering effects such as neutron-scattering in the fuel and cladding, as well as neutrons streaming through gaps between fuel pins in the fuel assembly.
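
    A minimal sketch of extracting empirical basis vectors from a library of self-shielding curves by singular value decomposition; the synthetic curve family is an assumption for illustration, not characterized used-fuel data:

      import numpy as np

      rng = np.random.default_rng(0)
      energy = np.linspace(0.1, 10.0, 200)
      library = np.array([np.exp(-a * energy) / (1.0 + b * energy)
                          for a, b in rng.uniform(0.1, 1.0, size=(30, 2))])

      U, s, Vt = np.linalg.svd(library, full_matrices=False)
      print(s[:5] / s[0])                    # rapid singular-value decay
      basis = Vt[:4]                         # first few empirical basis vectors

      coeffs = library @ basis.T             # each curve reduces to a few coefficients
      reconstruction = coeffs @ basis
      print(np.max(np.abs(reconstruction - library)))   # small reconstruction error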

  7. Corn leaf nitrate reductase - A nontoxic alternative to cadmium for photometric nitrate determinations in water samples by air-segmented continuous-flow analysis

    USGS Publications Warehouse

    Patton, C.J.; Fischer, A.E.; Campbell, W.H.; Campbell, E.R.

    2002-01-01

    Development, characterization, and operational details of an enzymatic, air-segmented continuous-flow analytical method for colorimetric determination of nitrate + nitrite in natural-water samples are described. This method is similar to U.S. Environmental Protection Agency method 353.2 and U.S. Geological Survey method 1-2545-90 except that nitrate is reduced to nitrite by soluble nitrate reductase (NaR, EC 1.6.6.1) purified from corn leaves rather than a packed-bed cadmium reactor. A three-channel, air-segmented continuous-flow analyzer, configured for simultaneous determination of nitrite (0.020-1.000 mg-N/L) and nitrate + nitrite (0.05-5.00 mg-N/L) by the nitrate reductase and cadmium reduction methods, was used to characterize the analytical performance of the enzymatic reduction method. At a sampling rate of 90 h⁻¹, sample interaction was less than 1% for all three methods. Method detection limits were 0.001 mg of NO₂⁻-N/L for nitrite, 0.003 mg of (NO₃⁻ + NO₂⁻)-N/L for nitrate + nitrite by the cadmium-reduction method, and 0.006 mg of (NO₃⁻ + NO₂⁻)-N/L for nitrate + nitrite by the enzymatic-reduction method. Reduction of nitrate to nitrite by both methods was greater than 95% complete over the entire calibration range. The difference between the means of nitrate + nitrite concentrations in 124 natural-water samples determined simultaneously by the two methods was not significantly different from zero at the p = 0.05 level.

  8. Dark soliton dynamics and interactions in continuous-wave-induced lattices.

    PubMed

    Tsopelas, Ilias; Kominis, Yannis; Hizanidis, Kyriakos

    2007-10-01

    The dynamics of dark spatial soliton beams and their interaction under the presence of a continuous wave (CW), which dynamically induces a photonic lattice, are investigated. It is shown that appropriate selection of the characteristic parameters of the CW results in controllable steering of a single soliton as well as controllable interaction between two solitons. Depending on the CW parameters, the soliton angle of propagation can be changed drastically, while two-soliton interaction can be either enhanced or reduced, suggesting a reconfigurable soliton control mechanism. Our analytical approach, based on the variational perturbation method, provides a dynamical system for the dark soliton evolution parameters. Analytical results are shown to be in good agreement with direct numerical simulations.

  9. Green's functions in equilibrium and nonequilibrium from real-time bold-line Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cohen, Guy; Gull, Emanuel; Reichman, David R.; Millis, Andrew J.

    2014-03-01

    Green's functions for the Anderson impurity model are obtained within a numerically exact formalism. We investigate the limits of analytical continuation for equilibrium systems, and show that with real time methods even sharp high-energy features can be reliably resolved. Continuing to an Anderson impurity in a junction, we evaluate two-time correlation functions, spectral properties, and transport properties, showing how the correspondence between the spectral function and the differential conductance breaks down when nonequilibrium effects are taken into account. Finally, a long-standing dispute regarding this model has involved the voltage splitting of the Kondo peak, an effect which was predicted over a decade ago by approximate analytical methods but never successfully confirmed by numerics. We settle the issue by demonstrating in an unbiased manner that this splitting indeed occurs. Yad Hanadiv-Rothschild Foundation, TG-DMR120085, TG-DMR130036, NSF CHE-1213247, NSF DMR 1006282, DOE ER 46932.

  10. Acidified pressurized hot water for the continuous extraction of cadmium and lead from plant materials prior to ETAAS

    NASA Astrophysics Data System (ADS)

    Morales-Muñoz, S.; Luque-García, J. L.; Luque de Castro, M. D.

    2003-01-01

    Acidified and pressurized hot water is proposed for the continuous leaching of Cd and Pb from plants prior to determination by electrothermal atomic absorption spectrometry. Beech leaves (certified reference material CRM 100, in which the analytes were not certified) were used for optimizing the method by a multivariate approach. The samples (0.5 g) were subjected to dynamic extraction with water modified with 1% v/v HNO₃ at 250 °C as leachant. A kinetic study was performed to determine the pattern of the extraction process. The method was validated with a CRM (olive leaves, 062 from the BCR) in which the analytes had been certified. The agreement between the certified values and those found using the proposed method demonstrates its usefulness. The repeatability and within-laboratory reproducibility were 3.7 and 2.3% for Cd and 1.04% and 6.3% for Pb, respectively. The precision of the method, together with its efficiency, rapidity, and environmental acceptability, makes it a good alternative for the determination of trace metals in plant material.

  11. Applications of computer algebra to distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Storch, Joel A.

    1993-01-01

    In the analysis of vibrations of continuous elastic systems, one often encounters complicated transcendental equations with roots directly related to the system's natural frequencies. Typically, these equations contain system parameters whose values must be specified before a numerical solution can be obtained. The present paper presents a method whereby the fundamental frequency can be obtained in analytical form to any desired degree of accuracy. The method is based upon truncation of rapidly converging series involving inverse powers of the system natural frequencies. A straightforward method to developing these series and summing them in closed form is presented. It is demonstrated how Computer Algebra can be exploited to perform the intricate analytical procedures which otherwise would render the technique difficult to apply in practice. We illustrate the method by developing two analytical approximations to the fundamental frequency of a vibrating cantilever carrying a rigid tip body. The results are compared to the numerical solution of the exact (transcendental) frequency equation over a range of system parameters.

  12. Analytical Approaches to Verify Food Integrity: Needs and Challenges.

    PubMed

    Stadler, Richard H; Tran, Lien-Anh; Cavin, Christophe; Zbinden, Pascal; Konings, Erik J M

    2016-09-01

    A brief overview of the main analytical approaches and practices to determine food authenticity is presented, addressing, as well, food supply chain and future requirements to more effectively mitigate food fraud. Food companies are introducing procedures and mechanisms that allow them to identify vulnerabilities in their food supply chain under the umbrella of a food fraud prevention management system. A key step and first line of defense is thorough supply chain mapping and full transparency, assessing the likelihood of fraudsters to penetrate the chain at any point. More vulnerable chains, such as those where ingredients and/or raw materials are purchased through traders or auctions, may require a higher degree of sampling, testing, and surveillance. Access to analytical tools is therefore pivotal, requiring continuous development and possibly sophistication in identifying chemical markers, data acquisition, and modeling. Significant progress in portable technologies is evident already today, for instance, as in the rapid testing now available at the agricultural level. In the near future, consumers may also have the ability to scan products in stores or at home to authenticate labels and food content. For food manufacturers, targeted analytical methods complemented by untargeted approaches are end control measures at the factory gate when the material is delivered. In essence, testing for food adulterants is an integral part of routine QC, ideally tailored to the risks in the individual markets and/or geographies or supply chains. The development of analytical methods is a first step in verifying the compliance and authenticity of food materials. A next, more challenging step is the successful establishment of global consensus reference methods as exemplified by the AOAC Stakeholder Panel on Infant Formula and Adult Nutritionals initiative, which can serve as an approach that could also be applied to methods for contaminants and adulterants in food. The food industry has taken these many challenges aboard, working closely with all stakeholders and continuously communicating on progress in a fully transparent manner.

  13. A simple analytical aerodynamic model of Langley Winged-Cone Aerospace Plane concept

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.

    1994-01-01

    A simple three-DOF analytical aerodynamic model of the Langley Winged-Cone Aerospace Plane concept is presented in a form suitable for simulation, trajectory optimization, and guidance and control studies. The analytical model is especially suitable for methods based on variational calculus. Analytical expressions are presented for lift, drag, and pitching moment coefficients from subsonic to hypersonic Mach numbers and angles of attack up to +/- 20 deg. This analytical model has break points at Mach numbers of 1.0, 1.4, 4.0, and 6.0. Across these Mach number break points, the lift, drag, and pitching moment coefficients are made continuous, but their derivatives are not. There are no break points in angle of attack. The effect of control surface deflection is not considered. The present analytical model compares well with the APAS calculations and wind tunnel test data for most angles of attack and Mach numbers.
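
    The continuity-without-smoothness property at the Mach break points can be illustrated with piecewise-linear interpolation; the break points follow the abstract, but the slope values are invented and are not the Winged-Cone data:

      import numpy as np

      mach_breaks = np.array([0.3, 1.0, 1.4, 4.0, 6.0, 10.0])
      cl_alpha    = np.array([2.8, 3.6, 3.1, 2.0, 1.6, 1.2])   # per rad, illustrative

      def lift_curve_slope(mach):
          # Piecewise-linear in Mach: continuous everywhere, kinked at the breaks.
          return np.interp(mach, mach_breaks, cl_alpha)

      def lift_coefficient(mach, alpha_rad):
          return lift_curve_slope(mach) * alpha_rad

      print(lift_coefficient(0.99, 0.1), lift_coefficient(1.01, 0.1))  # continuous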

  14. U.S. Geological Survey Standard Reference Sample Project: Performance Evaluation of Analytical Laboratories

    USGS Publications Warehouse

    Long, H. Keith; Daddow, Richard L.; Farrar, Jerry W.

    1998-01-01

    Since 1962, the U.S. Geological Survey (USGS) has operated the Standard Reference Sample Project to evaluate the performance of USGS, cooperator, and contractor analytical laboratories that analyze chemical constituents of environmental samples. The laboratories are evaluated by using performance evaluation samples, called Standard Reference Samples (SRSs). SRSs are submitted to laboratories semi-annually for round-robin laboratory performance comparison purposes. Currently, approximately 100 laboratories are evaluated for their analytical performance on six SRSs for inorganic and nutrient constituents. As part of the SRS Project, a surplus of homogeneous, stable SRSs is maintained for purchase by USGS offices and participating laboratories for use in continuing quality-assurance and quality-control activities. Statistical evaluation of the laboratories' results provides information to compare the analytical performance of the laboratories and to determine possible analytical deficiencies and problems. SRS results also provide information on the bias and variability of different analytical methods used in the SRS analyses.

  15. Comparison of three methods for wind turbine capacity factor estimation.

    PubMed

    Ditkovich, Y; Kuperman, A

    2014-01-01

    Three approaches to calculating the capacity factor of fixed-speed wind turbines are reviewed and compared using a case study. The first, "quasiexact" approach uses discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to calculate the capacity factor numerically. The second, "analytic" approach employs a continuous probability distribution function fitted to the wind data together with a continuous turbine power curve obtained by double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while an approximation, can be solved analytically, providing valuable insight into the factors affecting the capacity factor. Moreover, several other measures of wind turbine performance may be derived from the analytical approach. The third, "approximate" approach, valid for Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve and requires only the rated power and rotor diameter of the turbine. It is shown that the results obtained with the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
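    A minimal sketch of the "quasiexact" style of calculation under stated assumptions: a generic piecewise power curve (cut-in, rated, and cut-out speeds chosen for illustration) is integrated numerically against a Rayleigh wind-speed probability density. The analytic approach in the paper replaces this quadrature with a closed-form expression; none of the numbers below come from the case study.

```python
import numpy as np

def power_curve(v, p_rated=2.0e6, v_in=3.0, v_rated=12.0, v_out=25.0):
    """Turbine power (W) vs wind speed (m/s): zero below cut-in and above
    cut-out, cubic rise between cut-in and rated, flat at rated power."""
    p = np.where((v >= v_in) & (v < v_rated),
                 p_rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3), 0.0)
    return np.where((v >= v_rated) & (v <= v_out), p_rated, p)

def rayleigh_pdf(v, v_mean):
    return (np.pi * v / (2.0 * v_mean**2)) * np.exp(-np.pi * v**2 / (4.0 * v_mean**2))

def capacity_factor(v_mean, p_rated=2.0e6):
    v = np.linspace(0.0, 30.0, 3001)
    return np.trapz(power_curve(v, p_rated) * rayleigh_pdf(v, v_mean), v) / p_rated

print(capacity_factor(v_mean=7.0))  # roughly 0.3 for these illustrative inputs
```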

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daley, P F

    The overall objective of this project is the continued development, installation, and testing of continuous water sampling and analysis technologies for application to on-site monitoring of groundwater treatment systems and remediation sites. In a previous project, an on-line analytical system (OLAS) for multistream water sampling was installed at the Fort Ord Operable Unit 2 Groundwater Treatment System, with the objective of developing a simplified analytical method for detection of Compounds of Concern at that plant, and continuous sampling of up to twelve locations in the treatment system, from raw influent waters to treated effluent. Earlier implementations of the water sampling and processing system (Analytical Sampling and Analysis Platform, A+RT, Milpitas, CA) depended on off-line integrators that produced paper plots of chromatograms and sent summary tables to a host computer for archiving. We developed a basic LabVIEW-based (National Instruments, Inc., Austin, TX) gas chromatography control and data acquisition system that was the foundation for further development and integration with the ASAP system. Advantages of this integration include electronic archiving of all raw chromatographic data and a flexible programming environment to support development of improved ASAP operation and automated reporting. The initial goals of integrating the preexisting LabVIEW chromatography control system with the ASAP and demonstrating a simplified, site-specific analytical method were successfully achieved. However, although the principal objective of this system was assembly of an analytical system that would allow plant operators an up-to-the-minute view of the plant's performance, several obstacles remained. Data reduction with the base LabVIEW system was limited to peak detection and simple tabular output, patterned after commercial chromatography integrators, with compound retention times and peak areas. Preparation of calibration curves, method detection limit estimates, and trend plotting were performed with spreadsheets and statistics software. Moreover, the analytical method developed was very limited in compound coverage and unable to closely mirror the standard analytical methods promulgated by the EPA. To address these deficiencies, during this award the original equipment was operated at the OU 2-GTS to further evaluate the use of columns, commercial standard blends, and other components to broaden the compound coverage of the chromatography system. A second-generation ASAP was designed and built to replace the original system at the OU 2-GTS, with provision for introduction of internal standard compounds and surrogates into each sample analyzed. An enhanced LabVIEW-based chromatogram analysis application was written that manages and archives chemical standards information and provides a basis for NIST traceability for all analyses. Within this same package, all compound calibration response curves are managed, and different report formats were incorporated that simplify trend analysis. Test results focus on operation of the original system at the OU 1 Integrated Chemical and Flow Monitoring System, at the OU 1 Fire Drill Area remediation site.

  17. Dynamic analysis and numerical experiments for balancing of the continuous single-disc and single-span rotor-bearing system

    NASA Astrophysics Data System (ADS)

    Wang, Aiming; Cheng, Xiaohan; Meng, Guoying; Xia, Yun; Wo, Lei; Wang, Ziyi

    2017-03-01

    Identification of rotor unbalance is critical for normal operation of rotating machinery. The single-disc and single-span rotor, as the most fundamental rotor-bearing system, has attracted research attention over a long time. In this paper, the continuous single-disc and single-span rotor is modeled as a homogeneous and elastic Euler-Bernoulli beam, and the forces applied by bearings and disc on the shaft are considered as point forces. A fourth-order non-homogeneous partial differential equation set with homogeneous boundary condition is solved for analytical solution, which expresses the unbalance response as a function of position, rotor unbalance and the stiffness and damping coefficients of bearings. Based on this analytical method, a novel Measurement Point Vector Method (MPVM) is proposed to identify rotor unbalance while operating. Only a measured unbalance response registered for four selected cross-sections of the rotor-shaft under steady-state operating conditions is needed when using the method. Numerical simulation shows that the detection error of the proposed method is very small when measurement error is negligible. The proposed method provides an efficient way for rotor balancing without test runs and external excitations.

  18. Implementation of structural response sensitivity calculations in a large-scale finite-element analysis system

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Rogers, J. L., Jr.

    1982-01-01

    The methodology used to implement structural sensitivity calculations into a major, general-purpose finite-element analysis system (SPAR) is described. This implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of SPAR are also discussed.
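    A toy example, not the SPAR implementation, of the comparison mentioned above: the analytic design derivative of a simple structural response versus a forward finite-difference estimate, using the tip displacement u = PL/(EA) of an axial bar with the cross-sectional area A as the design variable.

```python
P, L, E, A = 1.0e4, 2.0, 2.0e11, 1.0e-4   # load, length, modulus, area (illustrative)

def tip_displacement(area):
    return P * L / (E * area)

analytic = -P * L / (E * A**2)                                            # exact du/dA
h = 1.0e-8 * A
finite_difference = (tip_displacement(A + h) - tip_displacement(A)) / h  # step-size dependent

print(analytic, finite_difference)
```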

  19. Fast and Efficient Stochastic Optimization for Analytic Continuation

    DOE PAGES

    Bao, Feng; Zhang, Guannan; Webster, Clayton G; ...

    2016-09-28

    The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra with those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is high, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. Therefore, we believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.
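    A highly simplified sketch of the general idea behind stochastic-optimization analytic continuation (it is not the FESOM or Mishchenko algorithm): represent the spectrum on a grid and keep random weight-shifting updates only when they reduce the misfit to the imaginary-time data, using a fermionic kernel. The synthetic data and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, n_tau, n_w = 10.0, 64, 200
tau = np.linspace(0.0, beta, n_tau)
w = np.linspace(-8.0, 8.0, n_w)
dw = w[1] - w[0]
K = np.exp(-tau[:, None] * w[None, :]) / (1.0 + np.exp(-beta * w[None, :]))  # fermionic kernel

# Synthetic "data" from a two-peak spectrum plus noise (illustrative only).
A_true = np.exp(-(w - 2.0)**2) + np.exp(-(w + 2.0)**2)
A_true /= A_true.sum() * dw
G_data = K @ A_true * dw + 1e-4 * rng.standard_normal(n_tau)

def chi2(A):
    return np.sum((K @ A * dw - G_data)**2)

A = np.full(n_w, 1.0 / (n_w * dw))       # flat, normalized starting spectrum
best = chi2(A)
for _ in range(20000):
    i, j = rng.integers(n_w, size=2)
    shift = min(A[i], 0.1 / (n_w * dw)) * rng.random()
    trial = A.copy()
    trial[i] -= shift                     # move weight from bin i to bin j,
    trial[j] += shift                     # preserving positivity and normalization
    c = chi2(trial)
    if c < best:
        A, best = trial, c
print(best)                               # final chi^2; A now approximates the spectrum
```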

  20. Design and Validation of In-Source Atmospheric Pressure Photoionization Hydrogen/Deuterium Exchange Mass Spectrometry with Continuous Feeding of D2O.

    PubMed

    Acter, Thamina; Lee, Seulgidaun; Cho, Eunji; Jung, Maeng-Joon; Kim, Sunghwan

    2018-01-01

    In this study, continuous in-source hydrogen/deuterium exchange (HDX) atmospheric pressure photoionization (APPI) mass spectrometry (MS) with continuous feeding of D2O was developed and validated. D2O was continuously fed using a capillary line placed on the center of a metal plate positioned between the UV lamp and nebulizer. The proposed system overcomes the limitations of previously reported APPI HDX-MS approaches where deuterated solvents were premixed with sample solutions before ionization. This is particularly important for APPI because solvent composition can greatly influence ionization efficiency as well as the solubility of analytes. The experimental parameters for APPI HDX-MS with continuous feeding of D2O were optimized, and the optimized conditions were applied for the analysis of nitrogen-, oxygen-, and sulfur-containing compounds. The developed method was also applied for the analysis of the polar fraction of a petroleum sample. Thus, the data presented in this study clearly show that the proposed HDX approach can serve as an effective analytical tool for the structural analysis of complex mixtures.

  1. Analytical Chemistry Laboratory Progress Report for FY 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Boparai, A.S.; Bowers, D.L.

    The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 1994 (October 1993 through September 1994). This annual report is the eleventh for the ACL and describes continuing effort on projects, work on new projects, and contributions of the ACL staff to various programs at ANL. The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. The ACL also has a research program in analytical chemistry, conducts instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems. Some routine or standard analyses are done, but it is common for the Argonne programs to generate unique problems that require significant development of methods and adaptation of techniques to obtain useful analytical data. The ACL has four technical groups -- Chemical Analysis, Instrumental Analysis, Organic Analysis, and Environmental Analysis -- which together include about 45 technical staff members. Talents and interests of staff members cross the group lines, as do many projects within the ACL. The Chemical Analysis Group uses wet-chemical and instrumental methods for elemental, compositional, and isotopic determinations in solid, liquid, and gaseous samples and provides specialized analytical services. Major instruments in this group include an ion chromatograph (IC), an inductively coupled plasma/atomic emission spectrometer (ICP/AES), spectrophotometers, mass spectrometers (including gas-analysis and thermal-ionization mass spectrometers), emission spectrographs, autotitrators, sulfur and carbon determinators, and a kinetic phosphorescence uranium analyzer.

  2. Three-Dimensional Piecewise-Continuous Class-Shape Transformation of Wings

    NASA Technical Reports Server (NTRS)

    Olson, Erik D.

    2015-01-01

    Class-Shape Transformation (CST) is a popular method for creating analytical representations of the surface coordinates of various components of aerospace vehicles. A wide variety of two- and three-dimensional shapes can be represented analytically using only a modest number of parameters, and the surface representation is smooth and continuous to as fine a degree as desired. This paper expands upon the original two-dimensional representation of airfoils to develop a generalized three-dimensional CST parametrization scheme that is suitable for a wider range of aircraft wings than previous formulations, including wings with significant non-planar shapes such as blended winglets and box wings. The method uses individual functions for the spanwise variation of airfoil shape, chord, thickness, twist, and reference axis coordinates to build up the complete wing shape. An alternative formulation parameterizes the slopes of the reference axis coordinates in order to relate the spanwise variation to the tangents of the sweep and dihedral angles. Also discussed are methods for fitting existing wing surface coordinates, including the use of piecewise equations to handle discontinuities, and mathematical formulations of geometric continuity constraints. A subsonic transport wing model is used as an example problem to illustrate the application of the methodology and to quantify the effects of piecewise representation and curvature constraints.
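    For readers unfamiliar with CST, the sketch below implements the underlying two-dimensional (airfoil) representation that the paper generalizes to wings: a class function psi^N1 (1 - psi)^N2 multiplied by a Bernstein-polynomial shape function. The exponent choices N1 = 0.5, N2 = 1.0 correspond to a round-nose, sharp-trailing-edge airfoil; the coefficient values are illustrative, not taken from the paper.

```python
import numpy as np
from math import comb

def cst_surface(psi, coeffs, n1=0.5, n2=1.0, z_te=0.0):
    """Nondimensional surface ordinate zeta(psi) for psi = x/c in [0, 1]."""
    n = len(coeffs) - 1
    class_fn = psi**n1 * (1.0 - psi)**n2
    shape_fn = sum(a * comb(n, i) * psi**i * (1.0 - psi)**(n - i)
                   for i, a in enumerate(coeffs))                  # Bernstein expansion
    return class_fn * shape_fn + psi * z_te

psi = np.linspace(0.0, 1.0, 101)
upper = cst_surface(psi, coeffs=[0.17, 0.16, 0.14, 0.14])          # illustrative coefficients
lower = cst_surface(psi, coeffs=[-0.14, -0.12, -0.10, -0.08])
print(float(upper.max()), float(lower.min()))
```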

  3. Using meta-differential evolution to enhance a calculation of a continuous blood glucose level.

    PubMed

    Koutny, Tomas

    2016-09-01

    We developed a new model of glucose dynamics. The model calculates blood glucose level as a function of transcapillary glucose transport. In previous studies, we validated the model with animal experiments. We used an analytical method to determine model parameters. In this study, we validate the model with subjects with type 1 diabetes. In addition, we combine the analytic method with meta-differential evolution. To validate the model with human patients, we obtained a data set from a type 1 diabetes study coordinated by the Jaeb Center for Health Research. We calculated a continuous blood glucose level from continuously measured interstitial fluid glucose level. We used 6 different scenarios to ensure robust validation of the calculation. Over 96% of calculated blood glucose levels fit A+B zones of the Clarke Error Grid. No data set required any correction of model parameters during the time course of measuring. We successfully verified the possibility of calculating a continuous blood glucose level of subjects with type 1 diabetes. This study signals a successful transition of our research from an animal experiment to a human patient. Researchers can test our model with their data on-line at https://diabetes.zcu.cz. Copyright © 2016 The Author. Published by Elsevier Ireland Ltd. All rights reserved.

  4. 40 CFR 141.131 - Analytical requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Disinfectant Residuals, Disinfection Byproducts, and... Constitution Avenue, NW., EPA West, Room B102, Washington, DC 20460, or at the National Archives and Records...: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr_locations.html. EPA Method 552...

  5. Recent developments in cyanide detection: A review

    PubMed Central

    Ma, Jian; Dasgupta, Purnendu K.

    2010-01-01

    The extreme toxicity of cyanide and environmental concerns from its continued industrial use continue to generate interest in facile and sensitive methods for cyanide detection. In recent years there is also additional recognition of HCN toxicity from smoke inhalation and potential use of cyanide as a weapon of terrorism. This review summarizes the literature since 2005 on cyanide measurement in different matrices ranging from drinking water and wastewater, to cigarette smoke and exhaled breath to biological fluids like blood, urine and saliva. The dramatic increase in the number of publications on cyanide measurement is indicative of the great interest in this field not only from analytical chemists, but also researchers from diverse environmental, medical, forensic and clinical arena. The recent methods cover both established and emerging analytical disciplines and include naked eye visual detection, spectrophotometry/colorimetry, capillary electrophoresis with optical absorbance detection, fluorometry, chemiluminescence, near-infrared cavity ring down spectroscopy, atomic absorption spectrometry, electrochemical methods (potentiometry/amperometry/ion chromatography-pulsed amperometry), mass spectrometry (selected ion flow tube mass spectrometry, electrospray ionization mass spectrometry, gas chromatography-mass spectrometry), gas chromatography (nitrogen phosphorus detector, electron capture detector) and quartz crystal mass monitors. PMID:20599024

  6. Projected regression method for solving Fredholm integral equations arising in the analytic continuation problem of quantum physics

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-François; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    2017-11-01

    We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well as or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved.
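    A schematic version of the database-plus-regression idea under simplified assumptions: solve the forward Fredholm problem for many randomly generated but physical spectra, fit a regression (here plain linear ridge regression rather than the paper's model) from data to spectrum, and project predictions onto the constraint set of non-negative, normalized functions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tau, n_w, n_train = 40, 80, 2000
tau = np.linspace(0.0, 5.0, n_tau)
w = np.linspace(0.0, 10.0, n_w)
dw = w[1] - w[0]
K = np.exp(-np.outer(tau, w))                         # simple Laplace-type kernel

def random_spectrum():
    """Random positive, normalized sum of Gaussians (the 'physical' inputs)."""
    a = np.zeros(n_w)
    for _ in range(rng.integers(1, 4)):
        a += rng.random() * np.exp(-((w - 10.0 * rng.random()) / (0.5 + rng.random()))**2)
    return a / (a.sum() * dw)

A_train = np.array([random_spectrum() for _ in range(n_train)])
G_train = A_train @ K.T * dw + 1e-4 * rng.standard_normal((n_train, n_tau))

# Linear ridge regression G -> A via regularized normal equations.
lam = 1e-3
W = np.linalg.solve(G_train.T @ G_train + lam * np.eye(n_tau), G_train.T @ A_train)

def predict(G):
    A = np.clip(G @ W, 0.0, None)                     # project onto non-negativity
    return A / (A.sum() * dw)                         # project onto unit normalization

A_test = random_spectrum()
G_test = K @ A_test * dw + 1e-4 * rng.standard_normal(n_tau)
print(np.abs(predict(G_test) - A_test).max())
```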

  7. A combined analytical formulation and genetic algorithm to analyze the nonlinear damage responses of continuous fiber toughened composites

    NASA Astrophysics Data System (ADS)

    Jeon, Haemin; Yu, Jaesang; Lee, Hunsu; Kim, G. M.; Kim, Jae Woo; Jung, Yong Chae; Yang, Cheol-Min; Yang, B. J.

    2017-09-01

    Continuous fiber-reinforced composites are important materials with some of the highest near-term commercial potential among existing advanced materials. Despite their wide use and value, their theoretical mechanisms have not been fully established due to the complexity of the compositions and their unrevealed failure mechanisms. This study proposes an effective three-dimensional damage modeling of a fibrous composite by combining analytical micromechanics and evolutionary computation. The interface characteristics, debonding damage, and micro-cracks are considered to be the most influential factors on the toughness and failure behaviors of composites, and a constitutive equation considering these factors was explicitly derived in accordance with the micromechanics-based ensemble volume averaged method. The optimal set of various model parameters in the analytical model were found using modified evolutionary computation that considers human-induced error. The effectiveness of the proposed formulation was validated by comparing a series of numerical simulations with experimental data from available studies.

  8. Mass spectrometric directed system for the continuous-flow synthesis and purification of diphenhydramine.

    PubMed

    Loren, Bradley P; Wleklinski, Michael; Koswara, Andy; Yammine, Kathryn; Hu, Yanyang; Nagy, Zoltan K; Thompson, David H; Cooks, R Graham

    2017-06-01

    A highly integrated approach to the development of a process for the continuous synthesis and purification of diphenhydramine is reported. Mass spectrometry (MS) is utilized throughout the system for on-line reaction monitoring, off-line yield quantitation, and as a reaction screening module that exploits reaction acceleration in charged microdroplets for high throughput route screening. This effort has enabled the discovery and optimization of multiple routes to diphenhydramine in glass microreactors using MS as a process analytical tool (PAT). The ability to rapidly screen conditions in charged microdroplets was used to guide optimization of the process in a microfluidic reactor. A quantitative MS method was developed and used to measure the reaction kinetics. Integration of the continuous-flow reactor/on-line MS methodology with a miniaturized crystallization platform for continuous reaction monitoring and controlled crystallization of diphenhydramine was also achieved. Our findings suggest a robust approach for the continuous manufacture of pharmaceutical drug products, exemplified in the particular case of diphenhydramine, optimized for efficiency and crystal size, and guided by real-time analytics to produce the agent in a form that is readily adapted to continuous synthesis.

  9. Introduction: Ecological knowledge, theory and information in space and time [Chapter 1

    Treesearch

    Samuel A. Cushman; Falk Huettmann

    2010-01-01

    A central theme of this book is that there is a strong mutual dependence between explanatory theory, available data and analytical method in determining the lurching progress of ecological knowledge (Fig. 1.1). The two central arguments are first that limits in each of theory, data and method have continuously constrained advances in understanding ecological systems...

  10. A new dynamic method for the rapid determination of the biodegradable dissolved organic carbon in drinking water.

    PubMed

    Ribas, F; Frias, J; Lucena, F

    1991-10-01

    A new method for the determination of biodegradable dissolved organic carbon (BDOC), which may be useful to the water industry, is proposed. It is a dynamic method that measures the BDOC of circulating water continuously pumped across a biofilm attached to a special support that fills a system of two glass columns. The BDOC value corresponds to the difference in DOC between inlet and outlet water samples. The sampling may be intermittent or continuous, but the process is continuous. The biofilms give good performances over periods of at least 1 year. The analytical results are not significantly different from those of other bioassays based on the use of indigenous bacteria and the total duration of analysis is between 2 and 3 h.

  11. Marker-based reconstruction of the kinematics of a chain of segments: a new method that incorporates joint kinematic constraints.

    PubMed

    Klous, Miriam; Klous, Sander

    2010-07-01

    The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from noisy measured motion of skin markers. Existing kinematic models for reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method for a chain of rigid bodies is presented, interconnected in spherical joints (chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category) regarding the effects of continuous noise simulating skin movement artifacts and regarding systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that accuracy for the chain-method is higher than the Veldpaus-method and similar to the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence in this method is substantially higher than in the Lu-method. With only three segments, the average number of required iterations with the chain-method is 3.0+/-0.2 times lower than with the Lu-method when skin movement artifacts are simulated by applying a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor 5 lower than the number of iterations for the Lu-method. However, the Lu-method performs slightly better than the chain-method. The RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method compared with 59% for the chain-method.
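    A minimal sketch of the single-segment least-squares pose estimate that methods of the first category build on: given model (local) marker coordinates and noisy measured positions, find the rotation and translation minimizing the sum of squared marker residuals. The SVD construction used here solves the same least-squares problem but is not the specific algorithm of Veldpaus et al.

```python
import numpy as np

def fit_rigid_transform(x_local, y_measured):
    """Least-squares R (rotation) and t (translation) with y ~ R @ x + t,
    via the standard SVD construction on the cross-covariance matrix."""
    xc = x_local - x_local.mean(axis=0)
    yc = y_measured - y_measured.mean(axis=0)
    U, _, Vt = np.linalg.svd(xc.T @ yc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = y_measured.mean(axis=0) - R @ x_local.mean(axis=0)
    return R, t

# Synthetic check: recover a known pose from six noisy markers.
rng = np.random.default_rng(2)
x = rng.random((6, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
y = x @ R_true.T + np.array([0.1, -0.2, 0.3]) + 1e-3 * rng.standard_normal((6, 3))
R_est, t_est = fit_rigid_transform(x, y)
print(np.abs(R_est - R_true).max())
```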

  12. International Council for Standardization in Haematology (ICSH) Recommendations for Laboratory Measurement of Direct Oral Anticoagulants.

    PubMed

    Gosselin, Robert C; Adcock, Dorothy M; Bates, Shannon M; Douxfils, Jonathan; Favaloro, Emmanuel J; Gouin-Thibault, Isabelle; Guillermo, Cecilia; Kawai, Yohko; Lindhoff-Last, Edelgard; Kitchen, Steve

    2018-03-01

    This guidance document was prepared on behalf of the International Council for Standardization in Haematology (ICSH) for providing haemostasis-related guidance documents for clinical laboratories. This inaugural coagulation ICSH document was developed by an ad hoc committee, comprised of international clinical and laboratory direct acting oral anticoagulant (DOAC) experts. The committee developed consensus recommendations for laboratory measurement of DOACs (dabigatran, rivaroxaban, apixaban and edoxaban), which would be germane for laboratories assessing DOAC anticoagulation. This guidance document addresses all phases of laboratory DOAC measurements, including pre-analytical (e.g. preferred time of sample collection, preferred sample type, sample stability), analytical (gold standard method, screening and quantifying methods) and post-analytical (e.g. reporting units, quality assurance). The committee addressed the use and limitations of screening tests such as prothrombin time and activated partial thromboplastin time as well as viscoelastic measurements of clotting blood and point of care methods. Additionally, the committee provided recommendations for the proper validation or verification of performance of laboratory assays prior to implementation for clinical use, and external quality assurance to provide continuous assessment of testing and reporting methods. Schattauer GmbH Stuttgart.

  13. Developments in mycotoxin analysis: an update for 2013 – 2014

    USDA-ARS?s Scientific Manuscript database

    This review highlights developments in the determination of mycotoxins over a period between mid-2013 and mid-2014. It continues in the format of the previous articles of this series, emphasising on analytical methods to determine aflatoxins, Alternaria toxins, ergot alkaloids, fumonisins, ochratoxi...

  14. Density functional theory for molecular and periodic systems using density fitting and continuous fast multipole method: Analytical gradients.

    PubMed

    Łazarski, Roman; Burow, Asbjörn Manfred; Grajciar, Lukáš; Sierka, Marek

    2016-10-30

    A full implementation of analytical energy gradients for molecular and periodic systems is reported in the TURBOMOLE program package within the framework of Kohn-Sham density functional theory using Gaussian-type orbitals as basis functions. Its key component is a combination of the density fitting (DF) approximation and the continuous fast multipole method (CFMM) that allows for an efficient calculation of the Coulomb energy gradient. For the exchange-correlation part, the hierarchical numerical integration scheme (Burow and Sierka, Journal of Chemical Theory and Computation 2011, 7, 3097) is extended to energy gradients. Computational efficiency and asymptotic O(N) scaling behavior of the implementation are demonstrated for various molecular and periodic model systems, with the largest unit cell of hematite containing 640 atoms and 19,072 basis functions. The overall computational effort of the energy gradient is comparable to that of the Kohn-Sham matrix formation. © 2016 Wiley Periodicals, Inc.

  15. Analyzing chromatographic data using multilevel modeling.

    PubMed

    Wiczling, Paweł

    2018-06-01

    It is relatively easy to collect chromatographic measurements for a large number of analytes, especially with gradient chromatographic methods coupled with mass spectrometry detection. Such data often have a hierarchical or clustered structure. For example, analytes with similar hydrophobicity and dissociation constant tend to be more alike in their retention than a randomly chosen set of analytes. Multilevel models recognize the existence of such data structures by assigning a model for each parameter, with its parameters also estimated from data. In this work, a multilevel model is proposed to describe retention time data obtained from a series of wide linear organic modifier gradients of different gradient duration and different mobile phase pH for a large set of acids and bases. The multilevel model consists of (1) the same deterministic equation describing the relationship between retention time and analyte-specific and instrument-specific parameters, (2) covariance relationships relating various physicochemical properties of the analyte to chromatographically specific parameters through quantitative structure-retention relationship based equations, and (3) stochastic components of intra-analyte and interanalyte variability. The model was implemented in Stan, which provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods. Graphical abstract Relationships between log k and MeOH content for acidic, basic, and neutral compounds with different log P. CI credible interval, PSA polar surface area.
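    A schematic generative sketch of the hierarchical structure described above (plain NumPy simulation, not the Stan model): each analyte's retention parameters in a linear-solvent-strength relation log k = log kw - S·φ are drawn around population-level predictions from its logP, and observed retention adds residual intra-analyte noise. All coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_analytes, n_obs = 30, 8
logP = rng.uniform(0.0, 5.0, n_analytes)

# Population level (QSRR-like): analyte parameter means depend linearly on logP,
# with inter-analyte spread around those means.
logkw = 0.8 * logP + 0.2 + 0.3 * rng.standard_normal(n_analytes)
S     = 0.6 * logP + 2.0 + 0.2 * rng.standard_normal(n_analytes)

# Observation level: retention at several organic-modifier fractions phi,
# with intra-analyte residual noise.
phi = np.linspace(0.2, 0.9, n_obs)
log_k = (logkw[:, None] - S[:, None] * phi[None, :]
         + 0.05 * rng.standard_normal((n_analytes, n_obs)))

print(log_k.shape)   # (30, 8): one simulated retention curve per analyte
```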

  16. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
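    A toy example of what an analytic criticality benchmark provides: for a one-group, infinite homogeneous medium the multiplication factor is exactly k_inf = νΣ_f/Σ_a, so a transport code's estimate can be checked against it to arbitrary precision. The cross sections and the stand-in "code" result below are illustrative, not taken from the benchmark suite.

```python
nu      = 2.5     # neutrons per fission (illustrative)
sigma_f = 0.05    # macroscopic fission cross section, 1/cm (illustrative)
sigma_a = 0.12    # macroscopic absorption cross section, 1/cm (illustrative)

k_inf_analytic = nu * sigma_f / sigma_a    # exact one-group, infinite-medium answer
k_inf_code     = 1.0419                    # stand-in for a Monte Carlo code's estimate

print(k_inf_analytic, abs(k_inf_code - k_inf_analytic))
```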

  17. An analysis of hypercritical states in elastic and inelastic systems

    NASA Astrophysics Data System (ADS)

    Kowalczk, Maciej

    The author raises a wide range of problems whose common characteristic is an analysis of hypercritical states in elastic and inelastic systems. The article consists of two basic parts. The first part primarily discusses problems of modelling hypercritical states, while the second analyzes numerical methods (so-called continuation methods) used to solve non-linear problems. The original approaches for modelling hypercritical states found in this article include the combination of plasticity theory and an energy condition for cracking, accounting for the variability and cyclical nature of the forms of fracture of a brittle material under a die, and the combination of plasticity theory and a simplified description of the phenomenon of localization along a discontinuity line. The author presents analytical solutions of three non-linear problems for systems made of elastic/brittle/plastic and elastic/ideally plastic materials. The author proceeds to discuss the analytical basics of continuation methods and analyzes the significance of the parameterization of non-linear problems, provides a method for selecting control parameters based on an analysis of the rank of a rectangular matrix of a uniform system of increment equations, and also provides a new method for selecting an equilibrium path originating from a bifurcation point. The author provides a general outline of continuation methods based on an analysis of the rank of a matrix of a corrective system of equations. The author supplements his theoretical solutions with numerical solutions of non-linear problems for rod systems and problems of the plastic disintegration of a notched rectangular plastic plate.

  18. Computer modeling of a two-junction, monolithic cascade solar cell

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.; Abbott, D.

    1979-01-01

    The theory and design criteria for monolithic, two-junction cascade solar cells are described. The departure from the conventional solar cell analytical method and the reasons for using the integral form of the continuity equations are briefly discussed. The results of design optimization are presented. The energy conversion efficiency that is predicted for the optimized structure is greater than 30% at 300 K, AM0, and one sun. The analytical method predicts device performance characteristics as a function of temperature. The range is restricted to 300 to 600 K. While the analysis is capable of determining most of the physical processes occurring in each of the individual layers, only the more significant device performance characteristics are presented.

  19. A study of cell electrophoresis as a means of purifying growth hormone secreting cells

    NASA Technical Reports Server (NTRS)

    Plank, Lindsay D.; Hymer, W. C.; Kunze, M. Elaine; Marks, Gary M.; Lanham, J. Wayne

    1983-01-01

    Growth hormone secreting cells of the rat anterior pituitary are heavily laden with granules of growth hormone and can be partially purified on the basis of their resulting high density. Two methods of preparative cell electrophoresis were investigated as methods of enhancing the purification of growth hormone producing cells: density gradient electrophoresis and continuous flow electrophoresis. Both methods provided a two- to four-fold enrichment in growth hormone production per cell relative to that achieved by previous methods. Measurements of electrophoretic mobilities by two analytical methods, microscopic electrophoresis and laser-tracking electrophoresis, revealed very little distinction between unpurified anterior pituitary cell suspensions and somatotroph-enriched cell suspensions. Predictions calculated on the basis of analytical electrophoretic data are consistent with the hypothesis that sedimentation plays a significant role in both types of preparative electrophoresis, and the electrophoretic mobility of the growth hormone secreting subpopulation of cells remains unknown.

  20. Method for near-real-time continuous air monitoring of phosgene, hydrogen cyanide, and cyanogen chloride

    NASA Astrophysics Data System (ADS)

    Lattin, Frank G.; Paul, Donald G.

    1996-11-01

    A sorbent-based gas chromatographic method provides continuous quantitative measurement of phosgene, hydrogen cyanide, and cyanogen chloride in ambient air. These compounds are subject to workplace exposure limits as well as regulation under terms of the Chemical Arms Treaty and Title III of the 1990 Clean Air Act amendments. The method was developed for on-site use in a mobile laboratory during remediation operations. Incorporated into the method are automated multi-level calibrations at time weighted average concentrations, or lower. Gaseous standards are prepared in fused silica lined air sampling canisters, then transferred to the analytical system through dynamic spiking. Precision and accuracy studies performed to validate the method are described. Also described are system deactivation and passivation techniques critical to optimum method performance.

  1. Big data science: A literature review of nursing research exemplars.

    PubMed

    Westra, Bonnie L; Sylvia, Martha; Weinfurter, Elizabeth F; Pruinelli, Lisiane; Park, Jung In; Dodd, Dianna; Keenan, Gail M; Senk, Patricia; Richesson, Rachel L; Baukner, Vicki; Cruz, Christopher; Gao, Grace; Whittenburg, Luann; Delaney, Connie W

    Big data and cutting-edge analytic methods in nursing research challenge nurse scientists to extend the data sources and analytic methods used for discovering and translating knowledge. The purpose of this study was to identify, analyze, and synthesize exemplars of big data nursing research applied to practice and disseminated in key nursing informatics, general biomedical informatics, and nursing research journals. A literature review of studies published between 2009 and 2015. There were 650 journal articles identified in 17 key nursing informatics, general biomedical informatics, and nursing research journals in the Web of Science database. After screening for inclusion and exclusion criteria, 17 studies published in 18 articles were identified as big data nursing research applied to practice. Nurses clearly are beginning to conduct big data research applied to practice. These studies represent multiple data sources and settings. Although numerous analytic methods were used, the fundamental issue remains to define the types of analyses consistent with big data analytic methods. There are needs to increase the visibility of big data and data science research conducted by nurse scientists, further examine the use of state of the science in data analytics, and continue to expand the availability and use of a variety of scientific, governmental, and industry data resources. A major implication of this literature review is whether nursing faculty and preparation of future scientists (PhD programs) are prepared for big data and data science. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Apparatus for rapid measurement of aerosol bulk chemical composition

    DOEpatents

    Lee, Yin-Nan E.; Weber, Rodney J.

    2003-01-01

    An apparatus and method for continuous on-line measurement of chemical composition of aerosol particles with a fast time resolution are provided. The apparatus includes a modified particle size magnifier for producing activated aerosol particles and a collection device which collects the activated aerosol particles into a liquid stream for quantitative analysis by analytical methods. The method provided for on-line measurement of chemical composition of aerosol particles includes exposing aerosol carrying sample air to hot saturated steam thereby forming activated aerosol particles; collecting the activated aerosol particles by a collection device for delivery as a jet stream onto an impaction surface; flushing off the activated aerosol particles from the impaction surface into a liquid stream for delivery of the collected liquid stream to an analytical instrument for quantitative measurement.

  3. 40 CFR 141.131 - Analytical requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Disinfectant Residuals, Disinfection Byproducts, and... 5 U.S.C. 552(a) and 1 CFR part 51. Copies may be inspected at EPA's Drinking Water Docket, 1301....1 is in Methods for the Determination of Organic Compounds in Drinking Water-Supplement II, USEPA...

  4. Shock and vibration response of multistage structure

    NASA Technical Reports Server (NTRS)

    Lee, S. Y.; Liyeos, J. G.; Tang, S. S.

    1968-01-01

    A study of the shock and vibration response of a multistage structure employed analytical, lumped-mass, continuous-beam, multimode, and matrix-iteration methods. The study examined the load paths, transmissibility, and attenuation properties along the longitudinal axis of a long, slender structure of increasing complexity.

  5. Application of enhanced gas chromatography/triple quadrupole mass spectrometry for monitoring petroleum weathering and forensic source fingerprinting in samples impacted by the Deepwater Horizon oil spill.

    PubMed

    Adhikari, Puspa L; Wong, Roberto L; Overton, Edward B

    2017-10-01

    Accurate characterization of petroleum hydrocarbons in complex and weathered oil residues is analytically challenging. This is primarily due to chemical compositional complexity of both the oil residues and environmental matrices, and the lack of instrumental selectivity due to co-elution of interferences with the target analytes. To overcome these analytical selectivity issues, we used enhanced resolution gas chromatography coupled with triple quadrupole mass spectrometry in Multiple Reaction Monitoring (MRM) mode (GC/MS/MS-MRM) to eliminate interferences within the ion chromatograms of target analytes found in environmental samples. This new GC/MS/MS-MRM method was developed and used for forensic fingerprinting of deep-water and marsh sediment samples containing oily residues from the Deepwater Horizon oil spill. The results showed that the GC/MS/MS-MRM method increases selectivity, eliminates interferences, and provides more accurate quantitation and characterization of trace levels of alkyl-PAHs and biomarker compounds, from weathered oil residues in complex sample matrices. The higher selectivity of the new method, even at low detection limits, provides greater insights on isomer and homolog compositional patterns and the extent of oil weathering under various environmental conditions. The method also provides flat chromatographic baselines for accurate and unambiguous calculation of petroleum forensic biomarker compound ratios. Thus, this GC/MS/MS-MRM method can be a reliable analytical strategy for more accurate and selective trace level analyses in petroleum forensic studies, and for tracking continuous weathering of oil residues. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Big–deep–smart data in imaging for guiding materials design

    DOE PAGES

    Kalinin, Sergei V.; Sumpter, Bobby G.; Archibald, Richard K.

    2015-09-23

    Harnessing big data, deep data, and smart data from state-of-the-art imaging might accelerate the design and realization of advanced functional materials. Here we discuss new opportunities in materials design enabled by the availability of big data in imaging and data analytics approaches, including their limitations, in material systems of practical interest. We specifically focus on how these tools might help realize new discoveries in a timely manner. Such methodologies are particularly appropriate to explore in light of continued improvements in atomistic imaging, modelling and data analytics methods.

  7. Analysis of some types of intermediate orbits used in the theory of artificial Earth satellite motion for the purposes of geodesy.

    NASA Astrophysics Data System (ADS)

    Kotseva, V. I.

    A survey, analysis, and comparison of 15 types of intermediate orbits used in satellite motion theories for the purposes of both geodesy and geodynamics have been made. The paper is a continuation of investigations directed toward the practical realization of both analytical and semi-analytical methods for satellite orbit determination. It is indicated that the intermediate orbit proposed and elaborated by Aksenov, Grebenikov and Demin has certain advantages over all the other intermediate orbits.

  8. Comment on "Classification of aerosol properties derived from AERONET direct sun data" by Gobbi et al. (2007)

    NASA Astrophysics Data System (ADS)

    O'Neill, N. T.

    2010-10-01

    It is pointed out that the graphical aerosol classification method of Gobbi et al. (2007) can be interpreted as a manifestation of fundamental analytical relations whose existence depends on the simple assumption that the optical effects of aerosols are essentially bimodal in nature. The families of contour lines in their "Ada" curvature space are essentially empirical and discretized illustrations of analytical parabolic forms in (α, α') space (the space formed by the continuously differentiable Angstrom exponent and its spectral derivative).
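    For concreteness, a small sketch of the quantities behind that curvature space: the Angstrom exponent α = -d ln τ / d ln λ and an estimate of its spectral derivative α', computed by finite differences from aerosol optical depths at three typical AERONET wavelengths. The optical-depth values are illustrative.

```python
import numpy as np

wl  = np.array([440.0, 675.0, 870.0])   # nm, typical AERONET wavelengths
aod = np.array([0.35, 0.20, 0.15])      # illustrative aerosol optical depths

ln_wl, ln_aod = np.log(wl), np.log(aod)
alpha_short = -(ln_aod[1] - ln_aod[0]) / (ln_wl[1] - ln_wl[0])            # 440-675 nm
alpha_long  = -(ln_aod[2] - ln_aod[1]) / (ln_wl[2] - ln_wl[1])            # 675-870 nm
alpha_prime = (alpha_long - alpha_short) / (0.5 * (ln_wl[2] - ln_wl[0]))  # curvature estimate

print(alpha_short, alpha_long, alpha_prime)
```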

  9. Big-deep-smart data in imaging for guiding materials design.

    PubMed

    Kalinin, Sergei V; Sumpter, Bobby G; Archibald, Richard K

    2015-10-01

    Harnessing big data, deep data, and smart data from state-of-the-art imaging might accelerate the design and realization of advanced functional materials. Here we discuss new opportunities in materials design enabled by the availability of big data in imaging and data analytics approaches, including their limitations, in material systems of practical interest. We specifically focus on how these tools might help realize new discoveries in a timely manner. Such methodologies are particularly appropriate to explore in light of continued improvements in atomistic imaging, modelling and data analytics methods.

  10. Big-deep-smart data in imaging for guiding materials design

    NASA Astrophysics Data System (ADS)

    Kalinin, Sergei V.; Sumpter, Bobby G.; Archibald, Richard K.

    2015-10-01

    Harnessing big data, deep data, and smart data from state-of-the-art imaging might accelerate the design and realization of advanced functional materials. Here we discuss new opportunities in materials design enabled by the availability of big data in imaging and data analytics approaches, including their limitations, in material systems of practical interest. We specifically focus on how these tools might help realize new discoveries in a timely manner. Such methodologies are particularly appropriate to explore in light of continued improvements in atomistic imaging, modelling and data analytics methods.

  11. Big–deep–smart data in imaging for guiding materials design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinin, Sergei V.; Sumpter, Bobby G.; Archibald, Richard K.

    Harnessing big data, deep data, and smart data from state-of-the-art imaging might accelerate the design and realization of advanced functional materials. Here we discuss new opportunities in materials design enabled by the availability of big data in imaging and data analytics approaches, including their limitations, in material systems of practical interest. We specifically focus on how these tools might help realize new discoveries in a timely manner. Such methodologies are particularly appropriate to explore in light of continued improvements in atomistic imaging, modelling and data analytics methods.

  12. Reflecting Solutions of High Order Elliptic Differential Equations in Two Independent Variables Across Analytic Arcs. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Carleton, O.

    1972-01-01

    Consideration is given specifically to sixth order elliptic partial differential equations in two independent real variables x, y such that the coefficients of the highest order terms are real constants. It is assumed that the differential operator has distinct characteristics and that it can be factored as a product of second order operators. By analytically continuing into the complex domain and using the complex characteristic coordinates of the differential equation, it is shown that its solutions, u, may be reflected across analytic arcs on which u satisfies certain analytic boundary conditions. Moreover, a method is given whereby one can determine a region into which the solution is extensible. It is seen that this region of reflection is dependent on the original domain of definition of the solution, the arc, and the coefficients of the highest order terms of the equation and not on any sufficiently small quantities; i.e., the reflection is global in nature. The method employed may be applied to similar differential equations of order 2n.

  13. Annual banned-substance review: analytical approaches in human sports drug testing.

    PubMed

    Thevis, Mario; Kuuranne, Tiia; Walpurgis, Katja; Geyer, Hans; Schänzer, Wilhelm

    2016-01-01

    The aim of improving anti-doping efforts is predicated on several different pillars, including, amongst others, optimized analytical methods. These commonly result from exploiting most recent developments in analytical instrumentation as well as research data on elite athletes' physiology in general, and pharmacology, metabolism, elimination, and downstream effects of prohibited substances and methods of doping, in particular. The need for frequent and adequate adaptations of sports drug testing procedures has been incessant, largely due to the uninterrupted emergence of new chemical entities but also due to the apparent use of established or even obsolete drugs for reasons other than therapeutic means, such as assumed beneficial effects on endurance, strength, and regeneration capacities. Continuing the series of annual banned-substance reviews, literature concerning human sports drug testing published between October 2014 and September 2015 is summarized and reviewed in reference to the content of the 2015 Prohibited List as issued by the World Anti-Doping Agency (WADA), with particular emphasis on analytical approaches and their contribution to enhanced doping controls. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Continuous Metabolic Monitoring Based on Multi-Analyte Biomarkers to Predict Exhaustion

    PubMed Central

    Kastellorizios, Michail; Burgess, Diane J.

    2015-01-01

    This work introduces the concept of multi-analyte biomarkers for continuous metabolic monitoring. The importance of using more than one marker lies in the ability to obtain a holistic understanding of the metabolism. This is showcased for the detection and prediction of exhaustion during intense physical exercise. The findings presented here indicate that when glucose and lactate changes over time are combined into multi-analyte biomarkers, their monitoring trends are more sensitive in the subcutaneous tissue, an implantation-friendly peripheral tissue, compared to the blood. This unexpected observation was confirmed in normal as well as type 1 diabetic rats. This study was designed to be of direct value to continuous monitoring biosensor research, where single analytes are typically monitored. These findings can be implemented in new multi-analyte continuous monitoring technologies for more accurate insulin dosing, as well as for exhaustion prediction studies based on objective data rather than the subject’s perception. PMID:26028477

  15. Continuous metabolic monitoring based on multi-analyte biomarkers to predict exhaustion.

    PubMed

    Kastellorizios, Michail; Burgess, Diane J

    2015-06-01

    This work introduces the concept of multi-analyte biomarkers for continuous metabolic monitoring. The importance of using more than one marker lies in the ability to obtain a holistic understanding of the metabolism. This is showcased for the detection and prediction of exhaustion during intense physical exercise. The findings presented here indicate that when glucose and lactate changes over time are combined into multi-analyte biomarkers, their monitoring trends are more sensitive in the subcutaneous tissue, an implantation-friendly peripheral tissue, compared to the blood. This unexpected observation was confirmed in normal as well as type 1 diabetic rats. This study was designed to be of direct value to continuous monitoring biosensor research, where single analytes are typically monitored. These findings can be implemented in new multi-analyte continuous monitoring technologies for more accurate insulin dosing, as well as for exhaustion prediction studies based on objective data rather than the subject's perception.

  16. Rapid and high-resolution stable isotopic measurement of biogenic accretionary carbonate using an online CO2 laser ablation system: Standardization of the analytical protocol.

    PubMed

    Sreemany, Arpita; Bera, Melinda Kumar; Sarkar, Anindya

    2017-12-30

    The elaborate sampling and analytical protocol associated with conventional dual-inlet isotope ratio mass spectrometry has long hindered high-resolution climate studies from biogenic accretionary carbonates. Laser-based on-line systems, in comparison, produce rapid data, but suffer from unresolvable matrix effects. It is, therefore, necessary to resolve these matrix effects to take advantage of the automated laser-based method. Two marine bivalve shells (one aragonite and one calcite) and one fish otolith (aragonite) were first analysed using a CO2 laser ablation system attached to a continuous flow isotope ratio mass spectrometer under different experimental conditions (different laser power, sample untreated vs vacuum roasted). The shells and the otolith were then micro-drilled and the isotopic compositions of the powders were measured in a dual-inlet isotope ratio mass spectrometer following the conventional acid digestion method. The vacuum-roasted samples (both aragonite and calcite) produced mean isotopic ratios (with a reproducibility of ±0.2 ‰ for both δ18O and δ13C values) almost identical to the values obtained using the conventional acid digestion method. As the isotopic ratios of the acid digested samples fall within the analytical precision (±0.2 ‰) of the laser ablation system, this suggests the usefulness of the method for studying the biogenic accretionary carbonate matrix. When using laser-based continuous flow isotope ratio mass spectrometry for the high-resolution isotopic measurements of biogenic carbonates, the employment of a vacuum-roasting step will reduce the matrix effect. This method will be of immense help to geologists and sclerochronologists in exploring short-term changes in climatic parameters (e.g. seasonality) in geological times. Copyright © 2017 John Wiley & Sons, Ltd.
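    For reference, the delta notation used above can be computed as follows; the sample ratio is illustrative, and the standard ratio is the commonly quoted 18O/16O value for VPDB.

```python
R_standard = 0.0020672    # 18O/16O of the VPDB standard (commonly quoted value)
R_sample   = 0.0020630    # measured 18O/16O of the carbonate sample (illustrative)

delta_18O = (R_sample / R_standard - 1.0) * 1000.0    # per mil
print(round(delta_18O, 2))
```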

  17. Design and Validation of In-Source Atmospheric Pressure Photoionization Hydrogen/Deuterium Exchange Mass Spectrometry with Continuous Feeding of D2O

    NASA Astrophysics Data System (ADS)

    Acter, Thamina; Lee, Seulgidaun; Cho, Eunji; Jung, Maeng-Joon; Kim, Sunghwan

    2018-01-01

    In this study, continuous in-source hydrogen/deuterium exchange (HDX) atmospheric pressure photoionization (APPI) mass spectrometry (MS) with continuous feeding of D2O was developed and validated. D2O was continuously fed using a capillary line placed on the center of a metal plate positioned between the UV lamp and nebulizer. The proposed system overcomes the limitations of previously reported APPI HDX-MS approaches where deuterated solvents were premixed with sample solutions before ionization. This is particularly important for APPI because solvent composition can greatly influence ionization efficiency as well as the solubility of analytes. The experimental parameters for APPI HDX-MS with continuous feeding of D2O were optimized, and the optimized conditions were applied for the analysis of nitrogen-, oxygen-, and sulfur-containing compounds. The developed method was also applied for the analysis of the polar fraction of a petroleum sample. Thus, the data presented in this study clearly show that the proposed HDX approach can serve as an effective analytical tool for the structural analysis of complex mixtures.
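
    For orientation, the sketch below shows how an HDX mass shift is commonly translated into a count of exchanged labile hydrogens; it is a hedged illustration, not code from this paper, and it assumes singly charged ions while ignoring isotope-pattern overlap and exchange of the charge-carrying proton.

      # Illustrative sketch (assumptions noted above).
      MASS_H = 1.007825                       # u, 1H
      MASS_D = 2.014102                       # u, 2H
      DELTA_PER_EXCHANGE = MASS_D - MASS_H    # ~1.006277 u per H -> D exchange

      def exchanged_hydrogens(mz_unlabeled, mz_labeled, charge=1):
          """Estimate the number of labile hydrogens exchanged for deuterium."""
          mass_shift = (mz_labeled - mz_unlabeled) * charge
          return round(mass_shift / DELTA_PER_EXCHANGE)

      # Example: a singly charged ion shifting from m/z 180.102 to 183.121
      print(exchanged_hydrogens(180.102, 183.121))   # -> 3 exchanges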

  18. Vortex-Lattice Utilization. [in aeronautical engineering and aircraft design

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The many novel, innovative, and unique implementations and applications of the vortex-lattice method to aerodynamic design and analysis which have been performed by Industry, Government, and Universities were presented. Although this analytical tool is not new, it continues to be utilized and refined in the aeronautical community.

  19. Quantitative Literacy for Undergraduate Business Students in the 21st Century

    ERIC Educational Resources Information Center

    McClure, Richard; Sircar, Sumit

    2008-01-01

    The current business environment is awash in vast amounts of data that ongoing transactions continually generate. Leading-edge corporations are using business analytics to achieve competitive advantage. However, educators are not adequately preparing business school students in quantitative methods to meet this challenge. For more than half a…

  20. Finite element modeling of light propagation in fruit under illumination of continuous-wave beam

    USDA-ARS?s Scientific Manuscript database

    Spatially-resolved spectroscopy provides a means for measuring the optical properties of biological tissues, based on analytical solutions to diffusion approximation for semi-infinite media under the normal illumination of infinitely small size light beam. The method is, however, prone to error in m...

  1. Structural Equations and Path Analysis for Discrete Data.

    ERIC Educational Resources Information Center

    Winship, Christopher; Mare, Robert D.

    1983-01-01

    Presented is an approach to causal models in which some or all variables are discretely measured, showing that path analytic methods permit quantification of causal relationships among variables with the same flexibility and power of interpretation as is feasible in models including only continuous variables. Examples are provided. (Author/IS)

  2. The Latent Structure of Child Depression: A Taxometric Analysis

    ERIC Educational Resources Information Center

    Richey, J. Anthony; Schmidt, Norman B.; Lonigan, Christopher J.; Phillips, Beth M.; Catanzaro, Salvatore J.; Laurent, Jeff; Gerhardstein, Rebecca R.; Kotov, Roman

    2009-01-01

    Background: The current study examined the categorical versus continuous nature of child and adolescent depression among three samples of children and adolescents ranging from 5 to 19 years. Methods: Depression was measured using the Children's Depression Inventory (CDI). Indicators derived from the CDI were based on factor analytic research on…

  3. Applying the Bootstrap to Taxometric Analysis: Generating Empirical Sampling Distributions to Help Interpret Results

    ERIC Educational Resources Information Center

    Ruscio, John; Ruscio, Ayelet Meron; Meron, Mati

    2007-01-01

    Meehl's taxometric method was developed to distinguish categorical and continuous constructs. However, taxometric output can be difficult to interpret because expected results for realistic data conditions and differing procedural implementations have not been derived analytically or studied through rigorous simulations. By applying bootstrap…

  4. Microplastics in the environment: Challenges in analytical chemistry - A review.

    PubMed

    Silva, Ana B; Bastos, Ana S; Justino, Celine I L; da Costa, João P; Duarte, Armando C; Rocha-Santos, Teresa A P

    2018-08-09

    Microplastics can be present in the environment as manufactured microplastics (known as primary microplastics) or resulting from the continuous weathering of plastic litter, which yields progressively smaller plastic fragments (known as secondary microplastics). Herein, we discuss the numerous issues associated with the analysis of microplastics, and to a less extent of nanoplastics, in environmental samples (water, sediments, and biological tissues), from their sampling and sample handling to their identification and quantification. The analytical quality control and quality assurance associated with the validation of analytical methods and use of reference materials for the quantification of microplastics are also discussed, as well as the current challenges within this field of research and possible routes to overcome such limitations. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Quantitative 1H NMR: Development and Potential of an Analytical Method – an Update

    PubMed Central

    Pauli, Guido F.; Gödecke, Tanja; Jaki, Birgit U.; Lankin, David C.

    2012-01-01

    Covering the literature from mid-2004 until the end of 2011, this review continues a previous literature overview on quantitative 1H NMR (qHNMR) methodology and its applications in the analysis of natural products (NPs). Among the foremost advantages of qHNMR is its accurate function with external calibration, the lack of any requirement for identical reference materials, a high precision and accuracy when properly validated, and an ability to quantitate multiple analytes simultaneously. As a result of the inclusion of over 170 new references, this updated review summarizes a wealth of detailed experimental evidence and newly developed methodology that supports qHNMR as a valuable and unbiased analytical tool for natural product and other areas of research. PMID:22482996
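
    For orientation, the quantitation step in qHNMR with an internal calibrant is commonly written as the textbook relation below (not reproduced from this review, which also treats external calibration); I is the integrated signal area, N the number of nuclei contributing to the signal, M the molar mass, m the weighed mass, and P the purity of the analyte x and the calibrant cal:

      P_x = \frac{I_x}{I_{\mathrm{cal}}} \cdot \frac{N_{\mathrm{cal}}}{N_x} \cdot \frac{M_x}{M_{\mathrm{cal}}} \cdot \frac{m_{\mathrm{cal}}}{m_x} \cdot P_{\mathrm{cal}}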

  6. Quantum decay model with exact explicit analytical solution

    NASA Astrophysics Data System (ADS)

    Marchewka, Avi; Granot, Er'El

    2009-01-01

    A simple decay model is introduced. The model comprises a point potential well, which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, as classical dynamics predicts. Moreover, at short times the decay has a fractional power law, which differs from the predictions of quantum perturbation methods. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model where the final state can be either continuous or localized, and that has an exact analytical solution.

  7. Estimation of the biserial correlation and its sampling variance for use in meta-analysis.

    PubMed

    Jacobs, Perke; Viechtbauer, Wolfgang

    2017-06-01

    Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
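
    A minimal sketch of the conversion involved, assuming the classical textbook relation between the point-biserial and biserial coefficients (the estimator and sampling-variance formulas recommended in the article are not reproduced here):

      # r_b = r_pb * sqrt(p*q) / phi(z_p), where p and q are the group proportions
      # and phi(z_p) is the standard normal ordinate at the dichotomization point.
      from scipy.stats import norm

      def biserial_from_point_biserial(r_pb, p):
          q = 1.0 - p
          y = norm.pdf(norm.ppf(p))        # normal ordinate at the cut point
          return r_pb * (p * q) ** 0.5 / y

      # Example: r_pb = 0.30 with a 50/50 split corresponds to r_b ~ 0.376.
      print(round(biserial_from_point_biserial(0.30, 0.5), 3))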

  8. The magnetic particle in a box: Analytic and micromagnetic analysis of probe-localized spin wave modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adur, Rohan, E-mail: adur@physics.osu.edu; Du, Chunhui; Manuilov, Sergei A.

    2015-05-07

    The dipole field from a probe magnet can be used to localize a discrete spectrum of standing spin wave modes in a continuous ferromagnetic thin film without lithographic modification to the film. Obtaining the resonance field for a localized mode is not trivial due to the effect of the confined and inhomogeneous magnetization precession. We compare the results of micromagnetic and analytic methods to find the resonance field of localized modes in a ferromagnetic thin film, and investigate the accuracy of these methods by comparing with a numerical minimization technique that assumes Bessel function modes with pinned boundary conditions. We find that the micromagnetic technique, while computationally more intensive, reveals that the true magnetization profiles of localized modes are similar to Bessel functions with gradually decaying dynamic magnetization at the mode edges. We also find that an analytic solution, which is simple to implement and computationally much faster than other methods, accurately describes the resonance field of localized modes when exchange fields are negligible, demonstrating the accessibility of localized mode analysis.

  9. Analysis of high-aspect-ratio jet-flap wings of arbitrary geometry

    NASA Technical Reports Server (NTRS)

    Lissaman, P. B. S.

    1973-01-01

    An analytical technique to compute the performance of an arbitrary jet-flapped wing is developed. The solution technique is based on the method of Maskell and Spence in which the well-known lifting-line approach is coupled with an auxiliary equation providing the extra function needed in jet-flap theory. The present method is generalized to handle straight, uncambered wings of arbitrary planform, twist, and blowing (including unsymmetrical cases). An analytical procedure is developed for continuous variations in the above geometric data with special functions to exactly treat discontinuities in any of the geometric and blowing data. A rational theory for the effect of finite wing thickness is introduced as well as simplified concepts of effective aspect ratio for rapid estimation of performance.

  10. Recent developments in urinalysis of metabolites of new psychoactive substances using LC-MS.

    PubMed

    Peters, Frank T

    2014-08-01

    In the last decade, an ever-increasing number of new psychoactive substances (NPSs) have appeared on the recreational drug market. To account for this development, analytical toxicologists have to continuously adapt their methods to encompass the latest NPSs. Urine is the preferred biological matrix for screening analysis in different areas of analytical toxicology. However, the development of urinalysis procedures for NPSs is complicated by the fact that generally little or no information on urinary excretion patterns of such drugs exists when they first appear on the market. Metabolism studies are therefore a prerequisite in the development of urinalysis methods for NPSs. In this article, the literature on the urinalysis of NPS metabolites will be reviewed, focusing on articles published after 2008.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 1993 (October 1992 through September 1993). This annual report is the tenth for the ACL and describes continuing effort on projects, work on new projects, and contributions of the ACL staff to various programs at ANL. The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. The ACL also has research programs in analytical chemistry, conducts instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems. Some routine or standard analyses are done, but it is common for the Argonne programs to generate unique problems that require development or modification of methods and adaptation of techniques to obtain useful analytical data. The ACL is administratively within the Chemical Technology Division (CMT), its principal ANL client, but provides technical support for many of the technical divisions and programs at ANL. The ACL has four technical groups--Chemical Analysis, Instrumental Analysis, Organic Analysis, and Environmental Analysis--which together include about 45 technical staff members. Talents and interests of staff members cross the group lines, as do many projects within the ACL.

  12. Ionization Suppression and Recovery in Direct Biofluid Analysis Using Paper Spray Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Vega, Carolina; Spence, Corina; Zhang, Chengsen; Bills, Brandon J.; Manicke, Nicholas E.

    2016-04-01

    Paper spray mass spectrometry is a method for the direct analysis of biofluid samples in which extraction of analytes from dried biofluid spots and electrospray ionization occur from the paper on which the dried sample is stored. We examined matrix effects in the analysis of small molecule drugs from urine, plasma, and whole blood. The general method was to spike stable isotope labeled analogs of each analyte into the spray solvent, while the analyte itself was in the dried biofluid. Intensity of the labeled analog is proportional to ionization efficiency, whereas the ratio of the analyte intensity to the labeled analog in the spray solvent is proportional to recovery. Ion suppression and recovery were found to be compound- and matrix-dependent. Highest levels of ion suppression were obtained for poor ionizers (e.g., analytes lacking basic aliphatic amine groups) in urine and approached -90%. Ion suppression was much lower or even absent for good ionizers (analytes with aliphatic amines) in dried blood spots. Recovery was generally highest in urine and lowest in blood. We also examined the effect of two experimental parameters on ion suppression and recovery: the spray solvent and the sample position (how far away from the paper tip the dried sample was spotted). Finally, the change in ion suppression and analyte elution as a function of time was examined by carrying out a paper spray analysis of dried plasma spots for 5 min by continually replenishing the spray solvent.
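
    A minimal sketch of the bookkeeping described above, with hypothetical intensity values: the stable isotope labeled (SIL) analog in the spray solvent tracks ionization efficiency, and the analyte-to-SIL ratio tracks recovery relative to a matrix-free reference run.

      # Illustrative only; variable names and the reference-run convention are assumptions.
      def ion_suppression_percent(i_sil_matrix, i_sil_neat):
          """Negative values mean suppression, e.g. about -90% for poor ionizers in urine."""
          return (i_sil_matrix / i_sil_neat - 1.0) * 100.0

      def relative_recovery(i_analyte_matrix, i_sil_matrix, i_analyte_ref, i_sil_ref):
          """Analyte/SIL ratio in the matrix relative to a reference taken as full recovery."""
          return (i_analyte_matrix / i_sil_matrix) / (i_analyte_ref / i_sil_ref)

      print(ion_suppression_percent(1.0e4, 1.0e5))   # -> -90.0 (strong suppression)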

  13. Monte Carlo calculation of dynamical properties of the two-dimensional Hubbard model

    NASA Technical Reports Server (NTRS)

    White, S. R.; Scalapino, D. J.; Sugar, R. L.; Bickers, N. E.

    1989-01-01

    A new method is introduced for analytically continuing imaginary-time data from quantum Monte Carlo calculations to the real-frequency axis. The method is based on a least-squares-fitting procedure with constraints of positivity and smoothness on the real-frequency quantities. Results are shown for the single-particle spectral-weight function and density of states for the half-filled, two-dimensional Hubbard model.
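
    A minimal sketch of this class of procedure, assuming the standard fermionic kernel and a simple second-difference smoothness penalty; this is not the authors' implementation, and the sign convention assumed for the input data is stated in the comments.

      # Constrained least-squares continuation of imaginary-time data (sketch).
      # Kernel: K(tau, w) = exp(-tau*w) / (1 + exp(-beta*w)); convention assumed here
      # is g_tau = integral dw K(tau, w) A(w), so pass -G(tau) if your Green's
      # function carries the opposite sign. Positivity of A(w) is enforced by NNLS.
      import numpy as np
      from scipy.optimize import nnls

      def continue_to_real_axis(tau, g_tau, omega, beta, smooth=1.0):
          dw = omega[1] - omega[0]
          K = np.exp(-np.outer(tau, omega)) / (1.0 + np.exp(-beta * omega)) * dw
          L = np.diff(np.eye(len(omega)), n=2, axis=0)      # smoothness operator
          a_w, _ = nnls(np.vstack([K, smooth * L]),
                        np.concatenate([g_tau, np.zeros(L.shape[0])]))
          return a_w

      # Synthetic check: generate data from a known positive spectrum and refit it.
      beta = 10.0
      tau = np.linspace(0.0, beta, 41)
      omega = np.linspace(-6.0, 6.0, 121)
      a_true = np.exp(-(omega - 1.0) ** 2)
      kernel = np.exp(-np.outer(tau, omega)) / (1.0 + np.exp(-beta * omega))
      g_tau = kernel @ (a_true * (omega[1] - omega[0]))
      a_fit = continue_to_real_axis(tau, g_tau, omega, beta, smooth=0.1)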

  14. A microfluidic paper-based analytical device for the assay of albumin-corrected fructosamine values from whole blood samples.

    PubMed

    Boonyasit, Yuwadee; Laiwattanapaisal, Wanida

    2015-01-01

    A method for acquiring albumin-corrected fructosamine values from whole blood using a microfluidic paper-based analytical system that offers substantial improvement over previous methods is proposed. The time required to quantify both serum albumin and fructosamine is shortened to 10 min with detection limits of 0.50 g dl(-1) and 0.58 mM, respectively (S/N = 3). The proposed system also exhibited good within-run and run-to-run reproducibility. The results of the interference study revealed that the acceptable recoveries ranged from 95.1 to 106.2%. The system was compared with currently used large-scale methods (n = 15), and the results demonstrated good agreement among the techniques. The microfluidic paper-based system has the potential to continuously monitor glycemic levels in low resource settings.

  15. Development and Applications of Liquid Sample Desorption Electrospray Ionization Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Zheng, Qiuling; Chen, Hao

    2016-06-01

    Desorption electrospray ionization mass spectrometry (DESI-MS) is a recent advance in the field of analytical chemistry. This review surveys the development of liquid sample DESI-MS (LS-DESI-MS), a variant form of DESI-MS that focuses on fast analysis of liquid samples, and its novel analytical applications in bioanalysis, proteomics, and reaction kinetics. Due to the capability of directly ionizing liquid samples, liquid sample DESI (LS-DESI) has been successfully used to couple MS with various analytical techniques, such as microfluidics, microextraction, electrochemistry, and chromatography. This review also covers these hyphenated techniques. In addition, several closely related ionization methods, including transmission mode DESI, thermally assisted DESI, and continuous flow-extractive DESI, are briefly discussed. The capabilities of LS-DESI extend and/or complement the utilities of traditional DESI and electrospray ionization and will find extensive and valuable analytical application in the future.

  16. Flow chemistry vs. flow analysis.

    PubMed

    Trojanowicz, Marek

    2016-01-01

    The flow mode of conducting chemical syntheses facilitates chemical processes through the use of on-line analytical monitoring of occurring reactions, the application of solid-supported reagents to minimize downstream processing and computerized control systems to perform multi-step sequences. These are exactly the same attributes as those of flow analysis, which has held a solid place in modern analytical chemistry for the last several decades. The following review paper, based on 131 references to original papers as well as pre-selected reviews, presents basic aspects, selected instrumental achievements and developmental directions of the rapidly growing field of continuous flow chemical synthesis. Interestingly, many of them might be potentially employed in the development of new methods in flow analysis too. In this paper, examples of application of flow analytical measurements for on-line monitoring of flow syntheses have been indicated and perspectives for a wider application of real-time analytical measurements have been discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Immunoassay and antibody microarray analysis of the HUPO Plasma Proteome Project reference specimens: Systematic variation between sample types and calibration of mass spectrometry data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haab, Brian B.; Geierstanger, Bernhard H.; Michailidis, George

    2005-08-01

    Four different immunoassay and antibody microarray methods performed at four different sites were used to measure the levels of a broad range of proteins (N = 323 assays; 39, 88, 168, and 28 assays at the respective sites; 237 unique analytes) in the human serum and plasma reference specimens distributed by the Plasma Proteome Project (PPP) of the HUPO. The methods provided a means to (1) assess the level of systematic variation in protein abundances associated with blood preparation methods (serum, citrate-anticoagulated-plasma, EDTA-anticoagulated-plasma, or heparin-anticoagulated-plasma) and (2) evaluate the dependence on concentration of MS-based protein identifications from data sets using the HUPO specimens. Some proteins, particularly cytokines, had highly variable concentrations between the different sample preparations, suggesting specific effects of certain anticoagulants on the stability or availability of these proteins. The linkage of antibody-based measurements from 66 different analytes with the combined MS/MS data from 18 different laboratories showed that protein detection and the quality of MS data increased with analyte concentration. The conclusions from these initial analyses are that the optimal blood preparation method is variable between analytes and that the discovery of blood proteins by MS can be extended to concentrations below the ng/mL range under certain circumstances. Continued developments in antibody-based methods will further advance the scientific goals of the PPP.

  18. Trace analysis of endocrine disrupting compounds in environmental water samples by use of solid-phase extraction and gas chromatography with mass spectrometry detection.

    PubMed

    Azzouz, Abdelmonaim; Ballesteros, Evaristo

    2014-09-19

    A novel analytical method using a continuous solid-phase extraction system in combination with gas chromatography-mass spectrometry for the simultaneous separation and determination of endocrine disrupting compounds (EDCs) is reported. The method was applied to major EDCs of various types including parabens, alkylphenols, phenylphenols, bisphenol A and triclosan in water. Samples were preconcentrated by using an automatic solid-phase extraction module containing a sorbent column, and retained analytes eluted with acetonitrile for derivatization with a mixture of N,O-bis(trimethylsilyl)trifluoroacetamide and trimethylchlorosilane. A number of variables potentially influencing recovery of the target compounds such as the type of SPE sorbent (silica gel, Florisil, RP-C18, Amberlite XAD-2 and XAD-4, Oasis HLB and LiChrolut EN), eluent and properties of the water including pH and ionic strength, were examined. LiChrolut EN was found to be the most efficient sorbent for retaining the analytes, with ∼100% efficiency. The ensuing method was validated with good analytical results including low limits of detection (0.01-0.08 ng/L for 100 mL of sample) and good linearity (r(2) > 0.997) throughout the studied concentration ranges. The method exhibited good accuracy (recoveries of 90-101%) and precision (relative standard deviations less than 7%) in the determination of EDCs in drinking, river, pond, well, swimming pool and waste water. Waste water samples were found to contain the largest number and highest concentrations of analytes (3.2-390 ng/L). Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Cavity master equation for the continuous time dynamics of discrete-spin models.

    PubMed

    Aurell, E; Del Ferraro, G; Domínguez, E; Mulet, R

    2017-05-01

    We present an alternate method to close the master equation representing the continuous time dynamics of interacting Ising spins. The method makes use of the theory of random point processes to derive a master equation for local conditional probabilities. We analytically test our solution studying two known cases, the dynamics of the mean-field ferromagnet and the dynamics of the one-dimensional Ising system. We present numerical results comparing our predictions with Monte Carlo simulations in three different models on random graphs with finite connectivity: the Ising ferromagnet, the random field Ising model, and the Viana-Bray spin-glass model.

  20. Cavity master equation for the continuous time dynamics of discrete-spin models

    NASA Astrophysics Data System (ADS)

    Aurell, E.; Del Ferraro, G.; Domínguez, E.; Mulet, R.

    2017-05-01

    We present an alternate method to close the master equation representing the continuous time dynamics of interacting Ising spins. The method makes use of the theory of random point processes to derive a master equation for local conditional probabilities. We analytically test our solution studying two known cases, the dynamics of the mean-field ferromagnet and the dynamics of the one-dimensional Ising system. We present numerical results comparing our predictions with Monte Carlo simulations in three different models on random graphs with finite connectivity: the Ising ferromagnet, the random field Ising model, and the Viana-Bray spin-glass model.
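
    For context, the sketch below is the kind of continuous-time Monte Carlo baseline such predictions are compared against, assuming Glauber single-spin-flip rates for the Ising ferromagnet on a random regular graph; the graph, rates, and parameters are illustrative and not taken from the paper.

      # Continuous-time Glauber dynamics for an Ising ferromagnet (illustrative sketch).
      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      N, c, J, beta, t_max = 500, 3, 1.0, 0.8, 5.0

      A = nx.to_numpy_array(nx.random_regular_graph(c, N, seed=0))
      s = rng.choice([-1.0, 1.0], size=N)

      t = 0.0
      while t < t_max:
          h = J * (A @ s)                           # local fields
          w = 0.5 * (1.0 - s * np.tanh(beta * h))   # Glauber flip rate of each spin
          total = w.sum()
          t += rng.exponential(1.0 / total)         # waiting time to the next flip
          i = rng.choice(N, p=w / total)            # which spin flips
          s[i] *= -1.0

      print("magnetization per spin:", s.mean())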

  1. Robust Adaptive Dynamic Programming of Two-Player Zero-Sum Games for Continuous-Time Linear Systems.

    PubMed

    Fu, Yue; Fu, Jun; Chai, Tianyou

    2015-12-01

    In this brief, an online robust adaptive dynamic programming algorithm is proposed for two-player zero-sum games of continuous-time unknown linear systems with matched uncertainties, which are functions of system outputs and states of a completely unknown exosystem. The online algorithm is developed using the policy iteration (PI) scheme with only one iteration loop. A new analytical method is proposed for convergence proof of the PI scheme. The sufficient conditions are given to guarantee globally asymptotic stability and suboptimal property of the closed-loop system. Simulation studies are conducted to illustrate the effectiveness of the proposed method.

  2. Generalized model of electromigration with 1:1 (analyte:selector) complexation stoichiometry: part I. Theory.

    PubMed

    Dubský, Pavel; Müllerová, Ludmila; Dvořák, Martin; Gaš, Bohuslav

    2015-03-06

    The model of electromigration of a multivalent weak acidic/basic/amphoteric analyte that undergoes complexation with a mixture of selectors is introduced. The model provides an extension of the series of models starting with the single-selector model without dissociation by Wren and Rowe in 1992, continuing with the monovalent weak analyte/single-selector model by Rawjee, Williams and Vigh in 1993 and that by Lelièvre in 1994, and ending with the multi-selector overall model without dissociation developed by our group in 2008. The new multivalent analyte multi-selector model shows that the effective mobility of the analyte obeys the original Wren and Rowe's formula. The overall complexation constant, mobility of the free analyte and mobility of complex can be measured and used in a standard way. The mathematical expressions for the overall parameters are provided. We further demonstrate mathematically that the pH dependent parameters for weak analytes can be simply used as an input into the multi-selector overall model and, in reverse, the multi-selector overall parameters can serve as an input into the pH-dependent models for the weak analytes. These findings can greatly simplify rational method development in analytical electrophoresis, specifically enantioseparations. Copyright © 2015 Elsevier B.V. All rights reserved.
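
    In the single-selector limit referred to above, the effective mobility is usually written in the following standard form (generic symbols, not copied from the paper), with mu_A the mobility of the free analyte, mu_AC that of the complex, K the complexation constant, and [C] the selector concentration:

      \mu_{\mathrm{eff}} = \frac{\mu_{A} + \mu_{AC}\,K\,[C]}{1 + K\,[C]}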

  3. Field demonstration of on-site analytical methods for TNT and RDX in ground water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craig, H.; Ferguson, G.; Markos, A.

    1996-12-31

    A field demonstration was conducted to assess the performance of eight commercially-available and emerging colorimetric, immunoassay, and biosensor on-site analytical methods for explosives 2,4,6-trinitrotoluene (TNT) and hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) in ground water and leachate at the Umatilla Army Depot Activity, Hermiston, Oregon and US Naval Submarine Base, Bangor, Washington, Superfund sites. Ground water samples were analyzed by each of the on-site methods and results compared to laboratory analysis using high performance liquid chromatography (HPLC) with EPA SW-846 Method 8330. The commercial methods evaluated include the EnSys, Inc., TNT and RDX colorimetric test kits (EPA SW-846 Methods 8515 and 8510) with a solid phase extraction (SPE) step, the DTECH/EM Science TNT and RDX immunoassay test kits (EPA SW-846 Methods 4050 and 4051), and the Ohmicron TNT immunoassay test kit. The emerging methods tested include the antibody-based Naval Research Laboratory (NRL) Continuous Flow Immunosensor (CFI) for TNT and RDX, and the Fiber Optic Biosensor (FOB) for TNT. Accuracy of the on-site methods was evaluated using linear regression analysis and relative percent difference (RPD) comparison criteria. Over the range of conditions tested, the colorimetric methods for TNT and RDX showed the highest accuracy of the methods evaluated. The colorimetric method was selected for routine ground water monitoring at the Umatilla site, and further field testing on the NRL CFI and FOB biosensors will continue at both Superfund sites.
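
    A minimal sketch of the relative percent difference (RPD) criterion used in such field-versus-laboratory comparisons (function names and example values are illustrative; acceptance limits are program specific and not taken from this study):

      def rpd(field_value, lab_value):
          """RPD between a field result and the laboratory (HPLC Method 8330) result, in percent."""
          return abs(field_value - lab_value) / ((field_value + lab_value) / 2.0) * 100.0

      print(round(rpd(42.0, 50.0), 1))   # -> 17.4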

  4. Time dependence of breakdown in a global fiber-bundle model with continuous damage.

    PubMed

    Moral, L; Moreno, Y; Gómez, J B; Pacheco, A F

    2001-06-01

    A time-dependent global fiber-bundle model of fracture with continuous damage is formulated in terms of a set of coupled nonlinear differential equations. A first integral of this set is analytically obtained. The time evolution of the system is studied by applying a discrete probabilistic method. Several results are discussed emphasizing their differences with the standard time-dependent model. The results obtained show that with this simple model a variety of experimental observations can be qualitatively reproduced.

  5. Integrated sudomotor axon reflex sweat stimulation for continuous sweat analyte analysis with individuals at rest.

    PubMed

    Sonner, Zachary; Wilder, Eliza; Gaillard, Trudy; Kasting, Gerald; Heikenfeld, Jason

    2017-07-25

    Eccrine sweat has rapidly emerged as a non-invasive, ergonomic, and rich source of chemical analytes with numerous technological demonstrations now showing the ability for continuous electrochemical sensing. However, beyond active perspirers (athletes, workers, etc.), continuous sweat access in individuals at rest has hindered the advancement of both sweat sensing science and technology. Reported here is integration of sudomotor axon reflex sweat stimulation for continuous wearable sweat analyte analysis, including the ability for side-by-side integration of chemical stimulants & sensors without cross-contamination. This integration approach is uniquely compatible with sensors which consume the analyte (enzymatic) or sensors which equilibrate with analyte concentrations. In vivo validation is performed using iontophoretic delivery of carbachol with ion-selective and impedance sensors for sweat analysis. Carbachol has shown prolonged sweat stimulation in directly stimulated regions for five hours or longer. This work represents a significant leap forward in sweat sensing technology, and may be of broader interest to those interested in on-skin sensing integrated with drug-delivery.

  6. Finite element modeling of light propagation in turbid media under illumination of a continuous-wave beam

    USDA-ARS?s Scientific Manuscript database

    Spatially-resolved spectroscopy provides a means for measuring the optical properties of biological tissues, based on analytical solutions to diffusion approximation for semi-infinite media under the normal illumination of infinitely small size light beam. The method is, however, prone to error in m...

  7. 21 CFR 530.24 - Procedure for announcing analytical methods for drug residue quantification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) ANIMAL DRUGS, FEEDS, AND RELATED PRODUCTS EXTRALABEL DRUG USE IN ANIMALS Specific Provisions Relating to Extralabel Use of Animal and Human Drugs in Food... approved human drug or an approved animal drug. The agency will publish in the Federal Register a notice of...

  8. Analytical surveillance of emerging drugs of abuse and drug formulations

    PubMed Central

    Thomas, Brian F.; Pollard, Gerald T.; Grabenauer, Megan

    2012-01-01

    Uncontrolled recreational drugs are proliferating in number and variety. Effects of long-term use are unknown, and regulation is problematic, as efforts to control one chemical often lead to several other structural analogs. Advanced analytical instrumentation and methods are continuing to be developed to identify drugs, chemical constituents of products, and drug substances and metabolites in biological fluids. Several mass spectrometry based approaches appear promising, particularly those that involve high resolution chromatographic and mass spectrometric methods that allow unbiased data acquisition and sophisticated data interrogation. Several of these techniques are shown to facilitate both targeted and broad spectrum analysis, which is often of particular benefit when dealing with misleadingly labeled products or assessing a biological matrix for illicit drugs and metabolites. The development and application of novel analytical approaches such as these will help to assess the nature and degree of exposure and risk and, where necessary, inform forensics and facilitate implementation of specific regulation and control measures. PMID:23154240

  9. The case for visual analytics of arsenic concentrations in foods.

    PubMed

    Johnson, Matilda O; Cohly, Hari H P; Isokpehi, Raphael D; Awofolu, Omotayo R

    2010-05-01

    Arsenic is a naturally occurring toxic metal and its presence in food could be a potential risk to the health of both humans and animals. Prolonged ingestion of arsenic contaminated water may result in manifestations of toxicity in all systems of the body. Visual Analytics is a multidisciplinary field that is defined as the science of analytical reasoning facilitated by interactive visual interfaces. The concentrations of arsenic vary in foods making it impractical and impossible to provide regulatory limit for each food. This review article presents a case for the use of visual analytics approaches to provide comparative assessment of arsenic in various foods. The topics covered include (i) metabolism of arsenic in the human body; (ii) arsenic concentrations in various foods; (iii) factors affecting arsenic uptake in plants; (iv) introduction to visual analytics; and (v) benefits of visual analytics for comparative assessment of arsenic concentration in foods. Visual analytics can provide an information superstructure of arsenic in various foods to permit insightful comparative risk assessment of the diverse and continually expanding data on arsenic in food groups in the context of country of study or origin, year of study, method of analysis and arsenic species.

  10. The Case for Visual Analytics of Arsenic Concentrations in Foods

    PubMed Central

    Johnson, Matilda O.; Cohly, Hari H.P.; Isokpehi, Raphael D.; Awofolu, Omotayo R.

    2010-01-01

    Arsenic is a naturally occurring toxic metal and its presence in food could be a potential risk to the health of both humans and animals. Prolonged ingestion of arsenic contaminated water may result in manifestations of toxicity in all systems of the body. Visual Analytics is a multidisciplinary field that is defined as the science of analytical reasoning facilitated by interactive visual interfaces. The concentrations of arsenic vary in foods making it impractical and impossible to provide regulatory limit for each food. This review article presents a case for the use of visual analytics approaches to provide comparative assessment of arsenic in various foods. The topics covered include (i) metabolism of arsenic in the human body; (ii) arsenic concentrations in various foods; (iii) factors affecting arsenic uptake in plants; (iv) introduction to visual analytics; and (v) benefits of visual analytics for comparative assessment of arsenic concentration in foods. Visual analytics can provide an information superstructure of arsenic in various foods to permit insightful comparative risk assessment of the diverse and continually expanding data on arsenic in food groups in the context of country of study or origin, year of study, method of analysis and arsenic species. PMID:20623005

  11. 40 CFR 87.82 - Sampling and analytical procedures for measuring smoke exhaust emissions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) Definitions. Test Procedures for Engine Smoke Emissions (Aircraft Gas Turbine Engines) § 87.82 Sampling and analytical procedures for measuring smoke exhaust...

  12. Modal ring method for the scattering of sound

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.; Kreider, Kevin L.

    1993-01-01

    The modal element method for acoustic scattering can be simplified when the scattering body is rigid. In this simplified method, called the modal ring method, the scattering body is represented by a ring of triangular finite elements forming the outer surface. The acoustic pressure is calculated at the element nodes. The pressure in the infinite computational region surrounding the body is represented analytically by an eigenfunction expansion. The two solution forms are coupled by the continuity of pressure and velocity on the body surface. The modal ring method effectively reduces the two-dimensional scattering problem to a one-dimensional problem capable of handling very high frequency scattering. In contrast to the boundary element method or the method of moments, which perform a similar reduction in problem dimension, the modal ring method has the added advantage of having a highly banded solution matrix requiring considerably less computer storage. The method shows excellent agreement with analytic results for scattering from rigid circular cylinders over a wide frequency range (1 ≤ ka ≤ 100) in the near and far fields.
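
    The rigid-cylinder benchmark mentioned above has a classical partial-wave solution; the sketch below evaluates that textbook series (it is the analytic reference solution, not the modal ring method itself, and the truncation and example values are illustrative):

      # Plane wave scattered by a rigid circular cylinder of radius a (textbook series).
      # Rigid-wall condition: d(p_inc + p_sc)/dr = 0 at r = a.
      import numpy as np
      from scipy.special import jvp, hankel1, h1vp

      def scattered_pressure(ka, r_over_a, theta, n_terms=80):
          kr = ka * r_over_a
          p = 0.0 + 0.0j
          for n in range(n_terms):
              eps = 1.0 if n == 0 else 2.0                     # Neumann factor
              a_n = -eps * (1j ** n) * jvp(n, ka) / h1vp(n, ka)
              p += a_n * hankel1(n, kr) * np.cos(n * theta)
          return p

      # Example: back-scattered pressure magnitude at r = 5a for ka = 10.
      print(abs(scattered_pressure(10.0, 5.0, np.pi)))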

  13. Analytical, Characterization, and Stability Studies of Organic Chemical, Drugs, and Drug Formulation

    DTIC Science & Technology

    2014-05-21

    stability studies was maintained over the entire contract period to ensure the continued integrity of the drug in its clinical use. Because our...facile automation. We demonstrated the method in principle, but were unable to remove the residual t-butanol to <0.5%. With additional research using ...to its use of ethylene oxide for sterilization, which is done in small batches. The generally recognized method of choice to produce a parenteral

  14. Whispering Gallery Optical Resonator Spectroscopic Probe and Method

    NASA Technical Reports Server (NTRS)

    Anderson, Mark S. (Inventor)

    2014-01-01

    Disclosed herein is a spectroscopic probe comprising at least one whispering gallery mode optical resonator disposed on a support, the whispering gallery mode optical resonator comprising a continuous outer surface having a cross section comprising a first diameter and a second diameter, wherein the first diameter is greater than the second diameter. A method of measuring a Raman spectrum and an Infra-red spectrum of an analyte using the spectroscopic probe is also disclosed.

  15. Selectivity in analytical chemistry: two interpretations for univariate methods.

    PubMed

    Dorkó, Zsanett; Verbić, Tatjana; Horvai, George

    2015-01-01

    Selectivity is extremely important in analytical chemistry but its definition is elusive despite continued efforts by professional organizations and individual scientists. This paper shows that the existing selectivity concepts for univariate analytical methods broadly fall in two classes: selectivity concepts based on measurement error and concepts based on response surfaces (the response surface being the 3D plot of the univariate signal as a function of analyte and interferent concentration, respectively). The strengths and weaknesses of the different definitions are analyzed and contradictions between them unveiled. The error based selectivity is very general and very safe but its application to a range of samples (as opposed to a single sample) requires the knowledge of some constraint about the possible sample compositions. The selectivity concepts based on the response surface are easily applied to linear response surfaces but may lead to difficulties and counterintuitive results when applied to nonlinear response surfaces. A particular advantage of this class of selectivity is that with linear response surfaces it can provide a concentration independent measure of selectivity. In contrast, the error based selectivity concept allows only yes/no type decision about selectivity. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Chemometric applications to assess quality and critical parameters of virgin and extra-virgin olive oil. A review.

    PubMed

    Gómez-Caravaca, Ana M; Maggio, Rubén M; Cerretani, Lorenzo

    2016-03-24

    Today virgin and extra-virgin olive oil (VOO and EVOO) are foods subject to a large number of analytical tests intended to ensure their quality and genuineness. Almost all official methods demand heavy use of reagents and manpower. Because of that, analytical development in this area is continuously evolving. Therefore, this review focuses on analytical methods for EVOO/VOO which use fast and smart approaches based on chemometric techniques in order to reduce time of analysis, reagent consumption, high cost equipment and manpower. Experimental approaches of chemometrics coupled with fast analytical techniques such as UV-Vis spectroscopy, fluorescence, vibrational spectroscopies (NIR, MIR and Raman fluorescence), NMR spectroscopy, and other more complex techniques like chromatography, calorimetry and electrochemical techniques applied to EVOO/VOO production and analysis have been discussed throughout this work. The advantages and drawbacks of this association have also been highlighted. Chemometrics has been evidenced as a powerful tool for the oil industry. In fact, it has been shown how chemometrics can be implemented all along the different steps of EVOO/VOO production: raw material input control, monitoring during process and quality control of final product. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Detection, characterization and quantification of inorganic engineered nanomaterials: A review of techniques and methodological approaches for the analysis of complex samples.

    PubMed

    Laborda, Francisco; Bolea, Eduardo; Cepriá, Gemma; Gómez, María T; Jiménez, María S; Pérez-Arantegui, Josefina; Castillo, Juan R

    2016-01-21

    The increasing demand for analytical information related to inorganic engineered nanomaterials requires the adaptation of existing techniques and methods, or the development of new ones. The challenge for the analytical sciences has been to consider the nanoparticles as a new sort of analytes, involving both chemical (composition, mass and number concentration) and physical information (e.g. size, shape, aggregation). Moreover, information about the species derived from the nanoparticles themselves and their transformations must also be supplied. Whereas techniques commonly used for nanoparticle characterization, such as light scattering techniques, show serious limitations when applied to complex samples, other well-established techniques, like electron microscopy and atomic spectrometry, can provide useful information in most cases. Furthermore, separation techniques, including flow field flow fractionation, capillary electrophoresis and hydrodynamic chromatography, are moving to the nano domain, mostly hyphenated to inductively coupled plasma mass spectrometry as element specific detector. Emerging techniques based on the detection of single nanoparticles by using ICP-MS, but also coulometry, are on their way to gaining a foothold. Chemical sensors selective to nanoparticles are in their early stages, but they are very promising considering their portability and simplicity. Although the field is in continuous evolution, at this moment it is moving from proofs-of-concept in simple matrices to methods dealing with matrices of higher complexity and relevant analyte concentrations. To achieve this goal, sample preparation methods are essential to manage such complex situations. Apart from size fractionation methods, matrix digestion, extraction and concentration methods capable of preserving the nature of the nanoparticles are being developed. This review presents and discusses the state-of-the-art analytical techniques and sample preparation methods suitable for dealing with complex samples. Single- and multi-method approaches applied to solve the nanometrological challenges posed by a variety of stakeholders are also presented. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Analytical solution for vacuum preloading considering the nonlinear distribution of horizontal permeability within the smear zone.

    PubMed

    Peng, Jie; He, Xiang; Ye, Hanming

    2015-01-01

    The vacuum preloading is an effective method which is widely used in ground treatment. In consolidation analysis, the soil around a prefabricated vertical drain (PVD) is traditionally divided into smear zone and undisturbed zone, both with constant permeability. In reality, the permeability of soil changes continuously within the smear zone. In this study, the horizontal permeability coefficient of soil within the smear zone is described by an exponential function of radial distance. A solution for vacuum preloading consolidation that considers the nonlinear distribution of horizontal permeability within the smear zone is presented and compared with previous analytical results as well as a numerical solution; the results show that the presented solution correlates well with the numerical solution and is more precise than the previous analytical solution.

  19. Analytical solution for vacuum preloading considering the nonlinear distribution of horizontal permeability within the smear zone

    PubMed Central

    Peng, Jie; He, Xiang; Ye, Hanming

    2015-01-01

    The vacuum preloading is an effective method which is widely used in ground treatment. In consolidation analysis, the soil around a prefabricated vertical drain (PVD) is traditionally divided into smear zone and undisturbed zone, both with constant permeability. In reality, the permeability of soil changes continuously within the smear zone. In this study, the horizontal permeability coefficient of soil within the smear zone is described by an exponential function of radial distance. A solution for vacuum preloading consolidation that considers the nonlinear distribution of horizontal permeability within the smear zone is presented and compared with previous analytical results as well as a numerical solution; the results show that the presented solution correlates well with the numerical solution and is more precise than the previous analytical solution. PMID:26447973

  20. Development and Evaluation of an Online CO2 Evolution Test and a Multicomponent Biodegradation Test System

    PubMed Central

    Strotmann, Uwe; Reuschenbach, Peter; Schwarz, Helmut; Pagga, Udo

    2004-01-01

    Well-established biodegradation tests use biogenously evolved carbon dioxide (CO2) as an analytical parameter to determine the ultimate biodegradability of substances. A newly developed analytical technique based on the continuous online measurement of conductivity showed its suitability over other techniques. It could be demonstrated that the method met all criteria of established biodegradation tests, gave continuous biodegradation curves, and was more reliable than other tests. In parallel experiments, only small variations in the biodegradation pattern occurred. When comparing the new online CO2 method with existing CO2 evolution tests, growth rates and lag periods were similar and only the final degree of biodegradation of aniline was slightly lower. A further test development was the unification and parallel measurement of all three important summary parameters for biodegradation—i.e., CO2 evolution, determination of the biochemical oxygen demand (BOD), and removal of dissolved organic carbon (DOC)—in a multicomponent biodegradation test system (MCBTS). The practicability of this test method was demonstrated with aniline. This test system had advantages for poorly water-soluble and highly volatile compounds and allowed the determination of the carbon fraction integrated into biomass (heterotrophic yield). The integrated online measurements of CO2 and BOD systems produced continuous degradation curves, which better met the stringent criteria of ready biodegradability (60% biodegradation in a 10-day window). Furthermore the data could be used to calculate maximal growth rates for the modeling of biodegradation processes. PMID:15294794
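
    A minimal sketch of how such CO2 evolution data are normally evaluated, using only generic chemistry rather than measured values from this study: the theoretical CO2 yield (ThCO2) follows from the carbon content of the test substance, and percent ultimate biodegradation is reported relative to it, here with aniline as the example substance.

      # Generic ThCO2 / percent-biodegradation arithmetic (illustrative values).
      M_CO2 = 44.009
      M_ANILINE = 6 * 12.011 + 7 * 1.008 + 14.007     # C6H7N, ~93.13 g/mol, 6 carbons

      def th_co2_per_gram(n_carbon, molar_mass):
          """Theoretical CO2 yield (g CO2 per g substance) at complete mineralization."""
          return n_carbon * M_CO2 / molar_mass

      def percent_biodegradation(co2_evolved_mg, substance_mg, n_carbon, molar_mass):
          return 100.0 * co2_evolved_mg / (substance_mg * th_co2_per_gram(n_carbon, molar_mass))

      print(round(th_co2_per_gram(6, M_ANILINE), 2))                        # ~2.84 g CO2 / g aniline
      print(round(percent_biodegradation(170.0, 100.0, 6, M_ANILINE), 1))   # ~60 %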

  1. Can NMR solve some significant challenges in metabolomics?

    PubMed Central

    Gowda, G.A. Nagana; Raftery, Daniel

    2015-01-01

    The field of metabolomics continues to witness rapid growth driven by fundamental studies, methods development, and applications in a number of disciplines that include biomedical science, plant and nutrition sciences, drug development, energy and environmental sciences, toxicology, etc. NMR spectroscopy is one of the two most widely used analytical platforms in the metabolomics field, along with mass spectrometry (MS). NMR's excellent reproducibility and quantitative accuracy, its ability to identify structures of unknown metabolites, its capacity to generate metabolite profiles using intact biospecimens with no need for separation, and its capabilities for tracing metabolic pathways using isotope labeled substrates offer unique strengths for metabolomics applications. However, NMR's limited sensitivity and resolution continue to pose a major challenge and have restricted both the number and the quantitative accuracy of metabolites analyzed by NMR. Further, the analysis of highly complex biological samples has increased the demand for new methods with improved detection, better unknown identification, and more accurate quantitation of larger numbers of metabolites. Recent efforts have contributed significant improvements in these areas, and have thereby enhanced the pool of routinely quantifiable metabolites. Additionally, efforts focused on combining NMR and MS promise opportunities to exploit the combined strength of the two analytical platforms for direct comparison of the metabolite data, unknown identification and reliable biomarker discovery that continue to challenge the metabolomics field. This article presents our perspectives on the emerging trends in NMR-based metabolomics and NMR's continuing role in the field with an emphasis on recent and ongoing research from our laboratory. PMID:26476597

  2. Can NMR solve some significant challenges in metabolomics?

    NASA Astrophysics Data System (ADS)

    Nagana Gowda, G. A.; Raftery, Daniel

    2015-11-01

    The field of metabolomics continues to witness rapid growth driven by fundamental studies, methods development, and applications in a number of disciplines that include biomedical science, plant and nutrition sciences, drug development, energy and environmental sciences, toxicology, etc. NMR spectroscopy is one of the two most widely used analytical platforms in the metabolomics field, along with mass spectrometry (MS). NMR's excellent reproducibility and quantitative accuracy, its ability to identify structures of unknown metabolites, its capacity to generate metabolite profiles using intact bio-specimens with no need for separation, and its capabilities for tracing metabolic pathways using isotope labeled substrates offer unique strengths for metabolomics applications. However, NMR's limited sensitivity and resolution continue to pose a major challenge and have restricted both the number and the quantitative accuracy of metabolites analyzed by NMR. Further, the analysis of highly complex biological samples has increased the demand for new methods with improved detection, better unknown identification, and more accurate quantitation of larger numbers of metabolites. Recent efforts have contributed significant improvements in these areas, and have thereby enhanced the pool of routinely quantifiable metabolites. Additionally, efforts focused on combining NMR and MS promise opportunities to exploit the combined strength of the two analytical platforms for direct comparison of the metabolite data, unknown identification and reliable biomarker discovery that continue to challenge the metabolomics field. This article presents our perspectives on the emerging trends in NMR-based metabolomics and NMR's continuing role in the field with an emphasis on recent and ongoing research from our laboratory.

  3. Analytic-continuation approach to the resummation of divergent series in Rayleigh-Schrödinger perturbation theory

    NASA Astrophysics Data System (ADS)

    Mihálka, Zsuzsanna É.; Surján, Péter R.

    2017-12-01

    The method of analytic continuation is applied to estimate eigenvalues of linear operators from finite order results of perturbation theory even in cases when the latter is divergent. Given a finite number of terms E^(k), k = 1, 2, ..., M, resulting from a Rayleigh-Schrödinger perturbation calculation, scaling these numbers by μ^k (μ being the perturbation parameter) we form the sum E(μ) = Σ_k μ^k E^(k) for small μ values for which the finite series is convergent to a certain numerical accuracy. Extrapolating the function E(μ) to μ = 1 yields an estimation of the exact solution of the problem. For divergent series, this procedure may serve as a resummation tool provided the perturbation problem has a nonzero radius of convergence. As illustrations, we treat the anharmonic (quartic) oscillator and an example from the many-electron correlation problem.
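
    A minimal sketch of the procedure with synthetic coefficients; for brevity the extrapolation of E(μ) to μ = 1 is carried out here with a Padé approximant built from the same coefficients, a standard and closely related resummation choice rather than the authors' exact fitting scheme.

      # Toy series: E(mu) = ln(1 + mu), i.e. E_k = (-1)**(k+1) / k, so the exact
      # value at mu = 1 is ln 2 = 0.693147...
      import numpy as np
      from scipy.interpolate import pade

      coeffs = [(-1) ** (k + 1) / k for k in range(1, 7)]     # E_1 ... E_6

      def E(mu):
          """Truncated perturbation sum E(mu) = sum_k mu**k * E_k."""
          return sum(c * mu ** k for k, c in enumerate(coeffs, start=1))

      # For small mu the truncated series is well converged ...
      print(E(0.1), np.log(1.1))          # nearly identical
      # ... while the raw partial sum at mu = 1 is still poor (~0.617); the
      # continuation to mu = 1 is done with a [3/3] Pade approximant instead.
      p, q = pade([0.0] + coeffs, 3)      # prepend the zeroth-order term E_0 = 0
      print(p(1.0) / q(1.0))              # ~0.6931, close to ln 2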

  4. An analytical probabilistic model of the quality efficiency of a sewer tank

    NASA Astrophysics Data System (ADS)

    Balistrocchi, Matteo; Grossi, Giovanna; Bacchi, Baldassare

    2009-12-01

    The assessment of the efficiency of a storm water storage facility devoted to the sewer overflow control in urban areas strictly depends on the ability to model the main features of the rainfall-runoff routing process and the related wet weather pollution delivery. In this paper the possibility of applying the analytical probabilistic approach to develop a tank design method, whose potential is comparable to that of continuous simulations, is demonstrated. In the model derivation, the water quality behaviour of such devices was incorporated. The formulation is based on a Weibull probabilistic model of the main characteristics of the rainfall process and on a power law describing the relationship between the dimensionless storm water cumulative runoff volume and the dimensionless cumulative pollutograph. Following this approach, efficiency indexes were established. The proposed model was verified by comparing its results to those obtained by continuous simulations; satisfactory agreement is shown for the proposed efficiency indexes.

  5. A histogram-free multicanonical Monte Carlo algorithm for the construction of analytical density of states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisenbach, Markus; Li, Ying Wai

    We report a new multicanonical Monte Carlo (MC) algorithm to obtain the density of states (DOS) for physical systems with continuous state variables in statistical mechanics. Our algorithm is able to obtain an analytical form for the DOS expressed in a chosen basis set, instead of a numerical array of finite resolution as in previous variants of this class of MC methods such as the multicanonical (MUCA) sampling and Wang-Landau (WL) sampling. This is enabled by storing the visited states directly in a data set and avoiding the explicit collection of a histogram. This practice also has the advantage of avoiding undesirable artificial errors caused by the discretization and binning of continuous state variables. Our results show that this scheme is capable of obtaining converged results with a much reduced number of Monte Carlo steps, leading to a significant speedup over existing algorithms.

  6. [Analysis of hot spots and trend of molecular pharmacognosy research based on project supported by National Natural Science Foundation of 1995-2014].

    PubMed

    Wang, Jun-Wen; Liu, Yang; Tong, Yuan-Yuan; Yang, Ce; Li, Hai-Yan

    2016-05-01

    This study collected the molecular pharmacognosy projects funded by the Natural Science Foundation of China (NSFC) during 1995-2014, a total of 595 items. TDA and Excel software were used to analyze the general situation and research hot spots of the projects with rank and correlation analytic methods. The number of NSFC-supported molecular pharmacognosy projects and the associated funding increased gradually, while the proportion of pharmaceutical research funding tended to be stable; the projects mainly supported the application of molecular biology methods to genuine medicinal materials, secondary metabolism, and germplasm resources research; frequently studied drugs included Radix Salviae Miltiorrhizae, Radix Rehmanniae, and Cordyceps sinensis, and frequently studied topics included tanshinone biosynthesis and the Rehmannia glutinosa continuous cropping obstacle. Copyright© by the Chinese Pharmaceutical Association.

  7. Apparatus for rapid measurement of aerosol bulk chemical composition

    DOEpatents

    Lee, Yin-Nan E.; Weber, Rodney J.; Orsini, Douglas

    2006-04-18

    An apparatus for continuous on-line measurement of chemical composition of aerosol particles with a fast time resolution is provided. The apparatus includes an enhanced particle size magnifier for producing activated aerosol particles and an enhanced collection device which collects the activated aerosol particles into a liquid stream for quantitative analysis by analytical means. Methods for on-line measurement of chemical composition of aerosol particles are also provided, the method including exposing aerosol carrying sample air to hot saturated steam thereby forming activated aerosol particles; collecting the activated aerosol particles by a collection device for delivery as a jet stream onto an impaction surface; and flushing off the activated aerosol particles from the impaction surface into a liquid stream for delivery of the collected liquid stream to an analytical instrument for quantitative measurement.

  8. Plate and butt-weld stresses beyond elastic limit, material and structural modeling

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1991-01-01

    Ultimate safety factors of high performance structures depend on stress behavior beyond the elastic limit, a region not well understood. An analytical modeling approach was developed to gain fundamental insights into the inelastic responses of simple structural elements. Nonlinear material properties were expressed in terms of engineering stress and strain variables and combined with strength-of-materials stress and strain equations, similar to a numerical piecewise-linear method. Integrations are continuous, which allows for more detailed solutions. Among the interesting results presented are the classical combined axial tension and bending load model and the conversion of strain gauge readings to stress beyond the elastic limit. Material discontinuity stress factors in butt-welds were derived. This is a working-type document with analytical methods and results applicable to all industries involving high reliability structures.

  9. Mass spectrometric directed system for the continuous-flow synthesis and purification of diphenhydramine

    PubMed Central

    Loren, Bradley P.; Wleklinski, Michael; Koswara, Andy; Yammine, Kathryn; Hu, Yanyang

    2017-01-01

    A highly integrated approach to the development of a process for the continuous synthesis and purification of diphenhydramine is reported. Mass spectrometry (MS) is utilized throughout the system for on-line reaction monitoring, off-line yield quantitation, and as a reaction screening module that exploits reaction acceleration in charged microdroplets for high throughput route screening. This effort has enabled the discovery and optimization of multiple routes to diphenhydramine in glass microreactors using MS as a process analytical tool (PAT). The ability to rapidly screen conditions in charged microdroplets was used to guide optimization of the process in a microfluidic reactor. A quantitative MS method was developed and used to measure the reaction kinetics. Integration of the continuous-flow reactor/on-line MS methodology with a miniaturized crystallization platform for continuous reaction monitoring and controlled crystallization of diphenhydramine was also achieved. Our findings suggest a robust approach for the continuous manufacture of pharmaceutical drug products, exemplified in the particular case of diphenhydramine, and optimized for efficiency and crystal size, and guided by real-time analytics to produce the agent in a form that is readily adapted to continuous synthesis. PMID:28979759

  10. Fall Velocities of Hydrometeors in the Atmosphere: Refinements to a Continuous Analytical Power Law.

    NASA Astrophysics Data System (ADS)

    Khvorostyanov, Vitaly I.; Curry, Judith A.

    2005-12-01

    This paper extends the previous research of the authors on the unified representation of fall velocities for both liquid and crystalline particles as a power law over the entire size range of hydrometeors observed in the atmosphere. The power-law coefficients are determined as continuous analytical functions of the Best or Reynolds number or of the particle size. Here, analytical expressions are formulated for the turbulent corrections to the Reynolds number and to the power-law coefficients that describe the continuous transition from the laminar to the turbulent flow around a falling particle. A simple analytical expression is found for the correction of fall velocities for temperature and pressure. These expressions and the resulting fall velocities are compared with observations and other calculations for a range of ice crystal habits and sizes. This approach provides a continuous analytical power-law description of the terminal velocities of liquid and crystalline hydrometeors with sufficiently high accuracy and can be directly used in bin-resolving models or incorporated into parameterizations for cloud- and large-scale models and remote sensing techniques.
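    A minimal sketch of the idea of continuous power-law coefficients follows; it is not the authors' parameterization. It assumes an Abraham-type drag closure C_D = C0 (1 + δ0/√Re)^2 with commonly quoted constants treated here as assumptions, inverts it to obtain the Reynolds number as a continuous function of the Best number X, and then evaluates the local power-law coefficients a_Re and b_Re numerically; the particle and fluid properties are illustrative.

        import numpy as np

        # Illustrative Abraham-type drag closure C_D = C0*(1 + delta0/sqrt(Re))**2,
        # with commonly quoted constants (treated here as assumptions).
        C0, DELTA0 = 0.292, 9.06
        C1 = 4.0 / (DELTA0 ** 2 * np.sqrt(C0))

        def reynolds(X):
            """Reynolds number as a continuous function of the Best number X = C_D*Re**2."""
            return (DELTA0 ** 2 / 4.0) * (np.sqrt(1.0 + C1 * np.sqrt(X)) - 1.0) ** 2

        def power_law_coeffs(X, rel_step=1e-4):
            """Local power-law representation Re = a_Re * X**b_Re (continuous in X)."""
            lo, hi = X * (1.0 - rel_step), X * (1.0 + rel_step)
            b = (np.log(reynolds(hi)) - np.log(reynolds(lo))) / (np.log(hi) - np.log(lo))
            a = reynolds(X) / X ** b
            return a, b

        # Example: a drop of diameter D falling in air (illustrative fluid properties).
        D = 1.0e-3            # m
        nu = 1.5e-5           # kinematic viscosity of air, m^2/s
        rho_a, rho_w, g = 1.2, 1000.0, 9.81
        m = rho_w * np.pi / 6.0 * D ** 3                   # particle mass, kg
        A = np.pi / 4.0 * D ** 2                           # projected area, m^2
        X = 2.0 * m * g * D ** 2 / (A * rho_a * nu ** 2)   # Best (Davies) number

        a_Re, b_Re = power_law_coeffs(X)
        v_t = reynolds(X) * nu / D                          # terminal fall speed, m/s
        print(f"X = {X:.3e}, Re = {reynolds(X):.1f}, a_Re = {a_Re:.3f}, b_Re = {b_Re:.3f}")
        print(f"terminal velocity ~ {v_t:.2f} m/s")

    For a 1 mm water drop in air this yields a terminal speed of roughly 4 m/s, a physically reasonable value; the turbulent and temperature-pressure corrections developed in the paper are not reproduced here.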

  11. A green analytical method using ultrasound in sample preparation for the flow injection determination of iron, manganese, and zinc in soluble solid samples by flame atomic absorption spectrometry.

    PubMed

    Yebra, M Carmen

    2012-01-01

    A simple and rapid analytical method was developed for the determination of iron, manganese, and zinc in soluble solid samples. The method is based on continuous ultrasonic water dissolution of the sample (5-30 mg) at room temperature followed by flow injection flame atomic absorption spectrometric determination. A good precision of the whole procedure (1.2-4.6%) and a sample throughput of ca. 25 samples h(-1) were obtained. The proposed green analytical method has been successfully applied for the determination of iron, manganese, and zinc in soluble solid food samples (soluble cocoa and soluble coffee) and pharmaceutical preparations (multivitamin tablets). The ranges of concentrations found were 21.4-25.61 μg g(-1) for iron, 5.74-18.30 μg g(-1) for manganese, and 33.27-57.90 μg g(-1) for zinc in soluble solid food samples and 3.75-9.90 μg g(-1) for iron, 0.47-5.05 μg g(-1) for manganese, and 1.55-15.12 μg g(-1) for zinc in multivitamin tablets. The accuracy of the proposed method was established by a comparison with the conventional wet acid digestion method using a paired t-test, indicating the absence of systematic errors.

  12. Improved apparatus for continuous culture of hydrogen-fixing bacteria

    NASA Technical Reports Server (NTRS)

    Foster, J. F.; Litchfield, J. H.

    1970-01-01

    Improved apparatus permits the continuous culture of Hydrogenomonas eutropha. System incorporates three essential subsystems - /1/ environmentally isolated culture vessel, /2/ analytical system with appropriate sensors and readout devices, /3/ control system with feedback responses to each analytical measurement.

  13. Lagrangian methods in the analysis of nonlinear wave interactions in plasma

    NASA Technical Reports Server (NTRS)

    Galloway, J. J.

    1972-01-01

    An averaged-Lagrangian method is developed for obtaining the equations which describe the nonlinear interactions of the wave (oscillatory) and background (nonoscillatory) components which comprise a continuous medium. The method applies to monochromatic waves in any continuous medium that can be described by a Lagrangian density, but is demonstrated in the context of plasma physics. The theory is presented in a more general and unified form by way of a new averaged-Lagrangian formalism which simplifies the perturbation ordering procedure. Earlier theory is extended to deal with a medium distributed in velocity space and to account for the interaction of the background with the waves. The analytic steps are systematized, so as to maximize calculational efficiency. An assessment of the applicability and limitations of the method shows that it has some definite advantages over other approaches in efficiency and versatility.

  14. Immobilized aptamer paper spray ionization source for ion mobility spectrometry.

    PubMed

    Zargar, Tahereh; Khayamian, Taghi; Jafari, Mohammad T

    2017-01-05

    A selective thin-film microextraction based on an aptamer immobilized on cellulose paper was used as a paper spray ionization source for ion mobility spectrometry (PSI-IMS), for the first time. In this method, the paper is not only used as an ionization source but is also utilized for the selective extraction of the analyte, based on the immobilized aptamer. This combination integrates both sample preparation and analyte ionization in a Whatman paper. To that end, an appropriate sample introduction system with a novel design was constructed for the paper spray ionization source. Using this system, a continuous solvent flow works as an elution and spray solvent simultaneously. In this method, the analyte is adsorbed on a triangular paper with immobilized aptamer and is then desorbed and ionized by the elution solvent and the high voltage applied to the paper, respectively. The effects of different experimental parameters such as applied voltage, angle of paper tip, distance between paper tip and counter electrode, elution solvent type, and solvent flow rate were optimized. The proposed method was exhaustively validated in terms of sensitivity and reproducibility by analyzing standard solutions of codeine and acetamiprid. The analytical results obtained are promising enough to support the use of immobilized aptamer paper spray as both the extraction and ionization technique in IMS for the direct analysis of biomedicines. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Projected Regression Methods for Inverting Fredholm Integrals: Formalism and Application to Analytical Continuation

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-Francois; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    We present a machine learning-based statistical regression approach to the inversion of Fredholm integrals of the first kind by studying an important example for the quantum materials community, the analytical continuation problem of quantum many-body physics. It involves reconstructing the frequency dependence of physical excitation spectra from data obtained at specific points in the complex frequency plane. The approach provides a natural regularization in cases where the inverse of the Fredholm kernel is ill-conditioned and yields robust error metrics. The stability of the forward problem permits the construction of a large database of input-output pairs. Machine learning methods applied to this database generate approximate solutions which are projected onto the subspace of functions satisfying relevant constraints. We show that for low input noise the method performs as well or better than Maximum Entropy (MaxEnt) under standard error metrics, and is substantially more robust to noise. We expect the methodology to be similarly effective for any problem involving a formally ill-conditioned inversion, provided that the forward problem can be efficiently solved. AJM was supported by the Office of Science of the U.S. Department of Energy under Subcontract No. 3F-3138 and LFA by the Columbia University IDS-ROADS project, UR009033-05, which also provided partial support to RN and LH.
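    The workflow described in the abstract, a stable forward problem used to build a database of input-output pairs, a regression trained on that database, and a projection of predictions onto constrained spectra, can be illustrated with a deliberately simplified stand-in. The sketch below uses a generic Laplace-type kernel, a linear ridge regression in place of the authors' machine learning model, and projection onto non-negative, normalized spectra; the grids, noise levels, and regularization strength are all assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Grids: "data" variable tau and "spectrum" variable omega (illustrative choices).
        tau = np.linspace(0.05, 5.0, 40)
        omega = np.linspace(0.0, 10.0, 200)
        dw = omega[1] - omega[0]
        K = np.exp(-np.outer(tau, omega)) * dw   # Laplace-type Fredholm kernel (ill-conditioned inverse)

        def random_spectrum():
            """Normalized, non-negative mixture of Gaussians as a synthetic spectrum."""
            s = np.zeros_like(omega)
            for _ in range(rng.integers(1, 4)):
                mu, sig, w = rng.uniform(1, 8), rng.uniform(0.3, 1.5), rng.uniform(0.2, 1.0)
                s += w * np.exp(-0.5 * ((omega - mu) / sig) ** 2)
            return s / (s.sum() * dw)

        # Database of forward-problem solutions (the forward map is cheap and stable).
        n_train = 2000
        S = np.array([random_spectrum() for _ in range(n_train)])
        G = S @ K.T + 1e-4 * rng.standard_normal((n_train, tau.size))   # noisy "measurements"

        # Ridge regression from data G back to spectra S (a simple stand-in for the ML model).
        lam = 1e-3
        W = np.linalg.solve(G.T @ G + lam * np.eye(tau.size), G.T @ S)

        def predict(g):
            """Regression prediction projected onto non-negative, normalized spectra."""
            s = g @ W
            s = np.clip(s, 0.0, None)                 # non-negativity constraint
            return s / (s.sum() * dw)                 # normalization constraint

        s_true = random_spectrum()
        g_obs = K @ s_true + 1e-4 * rng.standard_normal(tau.size)
        s_hat = predict(g_obs)
        print("L2 error of projected regression estimate:", np.linalg.norm(s_hat - s_true) * np.sqrt(dw))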

  16. A probabilistic and multi-objective analysis of lexicase selection and ε-lexicase selection.

    PubMed

    Cava, William La; Helmuth, Thomas; Spector, Lee; Moore, Jason H

    2018-05-10

    Lexicase selection is a parent selection method that considers training cases individually, rather than in aggregate, when performing parent selection. Whereas previous work has demonstrated the ability of lexicase selection to solve difficult problems in program synthesis and symbolic regression, the central goal of this paper is to develop the theoretical underpinnings that explain its performance. To this end, we derive an analytical formula that gives the expected probabilities of selection under lexicase selection, given a population and its behavior. In addition, we expand upon the relation of lexicase selection to many-objective optimization methods to describe the behavior of lexicase selection, which is to select individuals on the boundaries of Pareto fronts in high-dimensional space. We show analytically why lexicase selection performs more poorly for certain sizes of population and training cases, and show why it has been shown to perform more poorly in continuous error spaces. To address this last concern, we propose new variants of ε-lexicase selection, a method that modifies the pass condition in lexicase selection to allow near-elite individuals to pass cases, thereby improving selection performance with continuous errors. We show that ε-lexicase outperforms several diversity-maintenance strategies on a number of real-world and synthetic regression problems.
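    As a concrete reference for the selection procedure analyzed in the paper, the following is a minimal sketch of one common ε-lexicase variant, in which the per-case threshold ε is the median absolute deviation of the population's errors on that case; the function names and the toy error matrix are illustrative.

        import random
        import numpy as np

        def mad(x):
            """Median absolute deviation, a common choice of epsilon in epsilon-lexicase."""
            x = np.asarray(x, dtype=float)
            return np.median(np.abs(x - np.median(x)))

        def epsilon_lexicase_select(errors, rng=random):
            """
            Select one parent index.
            errors: array of shape (n_individuals, n_cases) of non-negative errors.
            """
            errors = np.asarray(errors, dtype=float)
            n_ind, n_cases = errors.shape
            eps = np.array([mad(errors[:, j]) for j in range(n_cases)])   # per-case threshold
            candidates = list(range(n_ind))
            cases = list(range(n_cases))
            rng.shuffle(cases)                       # cases considered in random order
            for j in cases:
                best = min(errors[i, j] for i in candidates)
                candidates = [i for i in candidates if errors[i, j] <= best + eps[j]]
                if len(candidates) == 1:
                    break
            return rng.choice(candidates)            # break remaining ties at random

        # Tiny usage example: 5 individuals evaluated on 4 training cases.
        err = np.array([[0.1, 2.0, 0.3, 1.0],
                        [0.2, 0.1, 0.2, 1.1],
                        [1.5, 0.2, 0.1, 0.9],
                        [0.1, 0.1, 0.4, 2.0],
                        [0.3, 0.3, 0.3, 0.2]])
        parents = [epsilon_lexicase_select(err) for _ in range(10)]
        print("selected parent indices:", parents)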

  17. A method for direct, semi-quantitative analysis of gas phase samples using gas chromatography-inductively coupled plasma-mass spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, Kimberly E; Gerdes, Kirk

    2013-07-01

    A new and complete GC–ICP-MS method is described for direct analysis of trace metals in a gas phase process stream. The proposed method is derived from standard analytical procedures developed for ICP-MS, which are regularly exercised in standard ICP-MS laboratories. In order to implement the method, a series of empirical factors were generated to calibrate detector response with respect to a known concentration of an internal standard analyte. Calibrated responses are ultimately used to determine the concentration of metal analytes in a gas stream using a semi-quantitative algorithm. The method was verified using a traditional gas injection from a GC sampling valve and a standard gas mixture containing either a 1 ppm Xe + Kr mix with helium balance or 100 ppm Xe with helium balance. Data collected for Xe and Kr gas analytes revealed that agreement of 6–20% with the actual concentration can be expected for various experimental conditions. To demonstrate the method using a relevant “unknown” gas mixture, experiments were performed for continuous 4 and 7 hour periods using a Hg-containing sample gas that was co-introduced into the GC sample loop with the xenon gas standard. System performance and detector response to the dilute concentration of the internal standard were pre-determined, which allowed semi-quantitative evaluation of the analyte. The calculated analyte concentrations varied during the course of the 4 hour experiment, particularly during the first hour of the analysis, where the actual Hg concentration was underpredicted by up to 72%. Calculated concentrations improved to within 30–60% for data collected after the first hour of the experiment. Similar results were seen during the 7 hour test, with the deviation from the actual concentration being 11–81% during the first hour and then decreasing for the remaining period. The method detection limit (MDL) was determined for mercury by injecting the sample gas into the system following a period of equilibration. The MDL for Hg was calculated as 6.8 μg·m−3. This work describes the first complete GC–ICP-MS method to directly analyze gas phase samples, and detailed sample calculations and comparisons to conventional ICP-MS methods are provided.
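    The core of the semi-quantitative step, relating the analyte signal to the known concentration of the internal standard through a pre-calibrated relative response factor, reduces to a one-line calculation. The sketch below is illustrative only; the intensities, the RRF value, and the function name are assumptions, not values from the study.

        # Minimal sketch of a semi-quantitative calculation (all names/values illustrative).
        # Relative response factor calibrated beforehand from a standard gas mixture:
        #   RRF = (I_analyte / C_analyte) / (I_IS / C_IS)

        def semi_quant_concentration(i_analyte, i_internal_std, c_internal_std, rrf):
            """Estimate analyte concentration from intensities and a calibrated RRF."""
            sensitivity_is = i_internal_std / c_internal_std      # counts per concentration unit
            return i_analyte / (rrf * sensitivity_is)

        # Example: Xe internal standard at a known 1.0 ppm, hypothetical intensities and RRF.
        c_hg = semi_quant_concentration(i_analyte=4.2e4, i_internal_std=6.0e5,
                                        c_internal_std=1.0, rrf=0.8)
        print(f"estimated analyte concentration ~ {c_hg:.3f} ppm")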

  18. Comparison of Three Methods for Wind Turbine Capacity Factor Estimation

    PubMed Central

    Ditkovich, Y.; Kuperman, A.

    2014-01-01

    Three approaches to calculating capacity factor of fixed speed wind turbines are reviewed and compared using a case study. The first “quasiexact” approach utilizes discrete wind raw data (in the histogram form) and manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. On the other hand, the second “analytic” approach employs a continuous probability distribution function, fitted to the wind data as well as continuous turbine power curve, resulting from double polynomial fitting of manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically thus providing a valuable insight into aspects, affecting the capacity factor. Moreover, several other merits of wind turbine performance may be derived based on the analytical approach. The third “approximate” approach, valid in case of Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, only requiring rated power and rotor diameter of the turbine. It is shown that the results obtained by employing the three approaches are very close, enforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation. PMID:24587755
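    The "quasiexact" and Rayleigh-based estimates can be sketched as follows. The cubic-ramp power curve, the cut-in/rated/cut-out speeds, and the toy wind histogram are illustrative assumptions; the paper's analytic approach additionally fits the power curve with polynomials to obtain a closed-form capacity factor, which is not reproduced here (the Rayleigh case below is simply evaluated by quadrature).

        import numpy as np

        # Illustrative fixed-speed turbine parameters (assumptions, not from the paper).
        V_CI, V_R, V_CO, P_R = 3.0, 12.0, 25.0, 2.0e6   # cut-in, rated, cut-out [m/s]; rated power [W]

        def power_curve(v):
            """Simplified power curve: cubic ramp between cut-in and rated, flat to cut-out."""
            v = np.asarray(v, dtype=float)
            p = np.where((v >= V_CI) & (v < V_R), P_R * (v**3 - V_CI**3) / (V_R**3 - V_CI**3), 0.0)
            return np.where((v >= V_R) & (v <= V_CO), P_R, p)

        def capacity_factor_from_histogram(bin_centers, frequencies):
            """'Quasiexact' capacity factor from a wind-speed histogram and the power curve."""
            f = np.asarray(frequencies, dtype=float)
            f = f / f.sum()                                   # normalize to probabilities
            return float(np.sum(f * power_curve(bin_centers)) / P_R)

        def capacity_factor_rayleigh(v_mean, dv=0.01):
            """Capacity factor for Rayleigh winds with mean v_mean, evaluated by quadrature."""
            v = np.arange(0.0, V_CO + dv, dv)
            pdf = (np.pi * v / (2.0 * v_mean**2)) * np.exp(-np.pi * v**2 / (4.0 * v_mean**2))
            return float(np.trapz(pdf * power_curve(v), v) / P_R)

        # Usage: a hypothetical measured histogram (1 m/s bins) vs. a Rayleigh fit with the same mean.
        centers = np.arange(0.5, 25.5, 1.0)
        counts = np.exp(-((centers - 7.0) / 4.0) ** 2)        # toy wind-speed distribution
        v_mean = float(np.sum(centers * counts / counts.sum()))
        print("CF (histogram):", round(capacity_factor_from_histogram(centers, counts), 3))
        print("CF (Rayleigh) :", round(capacity_factor_rayleigh(v_mean), 3))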

  19. Design and fabrication of planar structures with graded electromagnetic properties

    NASA Astrophysics Data System (ADS)

    Good, Brandon Lowell

    Successfully integrating electromagnetic properties in planar structures offers numerous benefits to the microwave and optical communities. This work aims at formulating new analytic and optimized design methods, creating new fabrication techniques for achieving those methods, and matching appropriate implementation of methods to fabrication techniques. The analytic method consists of modifying an approach that realizes perfect antireflective properties from graded profiles. This method is shown for all-dielectric and magneto-dielectric grading profiles. The optimized design methods are applied to transformer (discrete) or taper (continuous) designs. From these methods, a subtractive and an additive manufacturing technique were established and are described. The additive method, dry powder dot deposition, enables three dimensional varying electromagnetic properties in a structural composite. Combining the methods and fabrication is shown in two applied methodologies. The first uses dry powder dot deposition to design one dimensionally graded electromagnetic profiles in a planar fiberglass composite. The second method simultaneously applies antireflective properties and adjusts directivity through a slab through the use of subwavelength structures to achieve a flat antireflective lens. The end result of this work is a complete set of methods, formulations, and fabrication techniques to achieve integrated electromagnetic properties in planar structures.

  20. Continuous real-time measurement of aqueous cyanide

    DOEpatents

    Rosentreter, Jeffrey J.; Gering, Kevin L.

    2007-03-06

    This invention provides a method and system capable of the continuous, real-time measurement of low concentrations of aqueous free cyanide (CN) using an on-line, flow through system. The system is based on the selective reactivity of cyanide anions and the characteristically nonreactive nature of metallic gold films, wherein this selective reactivity is exploited as an indirect measurement for aqueous cyanide. In the present invention the dissolution of gold, due to the solubilization reaction with the analyte cyanide anion, is monitored using a piezoelectric microbalance contained within a flow cell.

  1. 40 CFR 140.5 - Analytical procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 140.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) MARINE SANITATION DEVICE STANDARD § 140.5 Analytical procedures. In determining the composition and quality of effluent discharge from marine sanitation devices, the procedures contained in 40 CFR part 136...

  2. Analytical methods to predict liquid congealing in ram air heat exchangers during cold operation

    NASA Astrophysics Data System (ADS)

    Coleman, Kenneth; Kosson, Robert

    1989-07-01

    Ram air heat exchangers used to cool liquids such as lube oils or Ethylene-Glycol/water solutions can be subject to congealing in very cold ambients, resulting in a loss of cooling capability. Two-dimensional, transient analytical models have been developed to explore this phenomenon with both continuous and staggered fin cores. Staggered fin predictions are compared to flight test data from the E-2C Allison T56 engine lube oil system during winter conditions. For simpler calculations, a viscosity ratio correction was introduced and found to provide reasonable cold ambient performance predictions for the staggered fin core, using a one-dimensional approach.

  3. A field study of selected U.S. Geological Survey analytical methods for measuring pesticides in filtered stream water, June - September 2012

    USGS Publications Warehouse

    Martin, Jeffrey D.; Norman, Julia E.; Sandstrom, Mark W.; Rose, Claire E.

    2017-09-06

    U.S. Geological Survey monitoring programs extensively used two analytical methods, gas chromatography/mass spectrometry and liquid chromatography/mass spectrometry, to measure pesticides in filtered water samples during 1992–2012. In October 2012, the monitoring programs began using direct aqueous-injection liquid chromatography tandem mass spectrometry as a new analytical method for pesticides. The change in analytical methods, however, has the potential to inadvertently introduce bias in analysis of datasets that span the change. A field study was designed to document performance of the new method in a variety of stream-water matrices and to quantify any potential changes in measurement bias or variability that could be attributed to changes in analytical methods. The goals of the field study were to (1) summarize performance (bias and variability of pesticide recovery) of the new method in a variety of stream-water matrices; (2) compare performance of the new method in laboratory blank water (laboratory reagent spikes) to that in a variety of stream-water matrices; (3) compare performance (analytical recovery) of the new method to that of the old methods in a variety of stream-water matrices; (4) compare pesticide detections and concentrations measured by the new method to those of the old methods in a variety of stream-water matrices; (5) compare contamination measured by field blank water samples in old and new methods; (6) summarize the variability of pesticide detections and concentrations measured by the new method in field duplicate water samples; and (7) identify matrix characteristics of environmental water samples that adversely influence the performance of the new method. Stream-water samples and a variety of field quality-control samples were collected at 48 sites in the U.S. Geological Survey monitoring networks during June–September 2012. Stream sites were located across the United States and included sites in agricultural and urban land-use settings, as well as sites on major rivers. The results of the field study identified several challenges for the analysis and interpretation of data analyzed by both old and new methods, particularly when data span the change in methods and are combined for analysis of temporal trends in water quality. The main challenges identified are large (greater than 30 percent), statistically significant differences in analytical recovery, detection capability, and (or) measured concentrations for selected pesticides. These challenges are documented and discussed, but specific guidance or statistical methods to resolve these differences in methods are beyond the scope of the report. The results of the field study indicate that the implications of the change in analytical methods must be assessed individually for each pesticide and method. Understanding the possible causes of the systematic differences in concentrations between methods that remain after recovery adjustment might be necessary to determine how to account for the differences in data analysis. Because recoveries for each method are independently determined from separate reference standards and spiking solutions, the differences might be due to an error in one of the reference standards or solutions or some other basic aspect of standard procedure in the analytical process. Further investigation of the possible causes is needed, which will lead to specific decisions on how to compensate for these differences in concentrations in data analysis.
In the event that further investigations do not provide insight into the causes of systematic differences in concentrations between methods, the authors recommend continuing to collect and analyze paired environmental water samples by both old and new methods. This effort should be targeted to seasons, sites, and expected concentrations to supplement those concentrations already assessed and to compare the ongoing analytical recovery of old and new methods to those observed in the summer and fall of 2012.

  4. Developing strategies to enhance loading efficiency of erythrosensors

    NASA Astrophysics Data System (ADS)

    Bustamante Lopez, Sandra C.; Ritter, Sarah C.; Meissner, Kenith E.

    2014-02-01

    For diabetics, continuous glucose monitoring and the resulting tighter control of glucose levels ameliorate serious complications from hypoglycemia and hyperglycemia. Diabetics measure their blood glucose levels multiple times a day by finger pricks, or use implantable monitoring devices. Still, glucose and other analytes in the blood fluctuate throughout the day, and the current monitoring methods are invasive, immunogenic, and/or present biodegradation problems. Using carrier erythrocytes loaded with a fluorescent sensor, we seek to develop a biodegradable, efficient, and potentially cost effective method to continuously sense blood analytes. We aim to reintroduce sensor-loaded erythrocytes to the bloodstream and preserve the erythrocytes' lifetime of 120 days in the circulatory system. Here, we compare the efficiency of two loading techniques: hypotonic dilution and electroporation. Hypotonic dilution employs a hypotonic buffer to create transient pores in the erythrocyte membrane, allowing dye entrance, and a hypertonic buffer to restore tonicity. Electroporation relies on controlled electrical pulses that result in reversible pore formation to allow cargo entrance, followed by incubation at 37°C to reseal the membrane. As part of the cellular characterization of loaded erythrocytes, we focus on cell size, shape, and hemoglobin content. Cell recovery, loading efficiency, and cargo release measurements are used to identify optimal loading conditions. The detected fluorescent signal from sensor-loaded erythrocytes can be translated into a direct measurement of analyte levels in the blood stream. The development of a suitable protocol to engineer carrier erythrocytes has profound and lasting implications for the erythrosensor's lifespan and sensing capabilities.

  5. Analytical Chemistry Laboratory. Progress report for FY 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Boparai, A.S.; Bowers, D.L.

    The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 1996. This annual report is the thirteenth for the ACL. It describes effort on continuing and new projects and contributions of the ACL staff to various programs at ANL. The ACL operates in the ANL system as a full-cost-recovery service center, but has a mission that includes a complementary research and development component: The Analytical Chemistry Laboratory will provide high-quality, cost-effective chemical analysis and related technical support to solve research problems of our clients -- Argonne National Laboratory, the Department of Energy, and others -- and will conduct world-class research and development in analytical chemistry and its applications. Because of the diversity of research and development work at ANL, the ACL handles a wide range of analytical chemistry problems. Some routine or standard analyses are done, but the ACL usually works with commercial laboratories if our clients require high-volume, production-type analyses. It is common for ANL programs to generate unique problems that require significant development of methods and adaptation of techniques to obtain useful analytical data. Thus, much of the support work done by the ACL is very similar to our applied analytical chemistry research.

  6. Annual banned-substance review: analytical approaches in human sports drug testing.

    PubMed

    Thevis, Mario; Kuuranne, Tiia; Geyer, Hans; Schänzer, Wilhelm

    2014-01-01

    Monitoring the misuse of drugs and the abuse of substances and methods potentially or evidently improving athletic performance by analytical chemistry strategies is one of the main pillars of modern anti-doping efforts. Owing to the continuously growing knowledge in medicine, pharmacology, and (bio)chemistry, new chemical entities are frequently established and developed, many of which present a temptation for sportsmen and women due to assumed/attributed beneficial effects of such substances and preparations on, for example, endurance, strength, and regeneration. By means of new technologies, expanded existing test protocols, and new insights into the metabolism, distribution, and elimination of compounds prohibited by the World Anti-Doping Agency (WADA), analytical assays have been further improved in agreement with the content of the 2013 Prohibited List. In this annual banned-substance review, literature concerning human sports drug testing that was published between October 2012 and September 2013 is summarized and reviewed with particular emphasis on analytical approaches and their contribution to enhanced doping controls. Copyright © 2013 John Wiley & Sons, Ltd.

  7. General design method for three-dimensional potential flow fields. 1: Theory

    NASA Technical Reports Server (NTRS)

    Stanitz, J. D.

    1980-01-01

    A general design method was developed for steady, three dimensional, potential, incompressible or subsonic-compressible flow. In this design method, the flow field, including the shape of its boundary, was determined for arbitrarily specified, continuous distributions of velocity as a function of arc length along the boundary streamlines. The method applied to the design of both internal and external flow fields, including, in both cases, fields with planar symmetry. The analytic problems associated with stagnation points, closure of bodies in external flow fields, and prediction of turning angles in three dimensional ducts were reviewed.

  8. Recent Trends in Analytical Methods to Determine New Psychoactive Substances in Hair

    PubMed Central

    Kyriakou, Chrystalla; Pellegrini, Manuela; García-Algar, Oscar; Marinelli, Enrico; Zaami, Simona

    2017-01-01

    New Psychoactive Substances (NPS) belong to several chemical classes, including phenethylamines, piperazines, synthetic cathinones and synthetic cannabinoids. Development and validation of analytical methods for the determination of NPS, both in traditional and alternative matrices, is of crucial importance for studying drug metabolism and for associating consumption with clinical outcomes and eventual intoxication symptoms. Among different biological matrices, hair is the one with the widest time window to investigate drug-related history and demonstrate past intake. The aim of this paper was to overview the trends of the rapidly evolving analytical methods for the determination of NPS in hair and the usefulness of these methods when applied to real cases. A number of rapid and sensitive methods for the determination of NPS in the hair matrix have recently been published, most of them using liquid chromatography coupled to mass spectrometry. Hair digestion and subsequent solid phase extraction or liquid-liquid extraction were described, as well as extraction in organic solvents. For most of the methods, limits of quantification at the picogram per milligram of hair level were obtained. The measured concentrations for most of the NPS in real samples were in the range of picograms of drug per milligram of hair. Interpretation of the results and the lack of cut-off values for the discrimination between chronic consumption and occasional use or external contamination are still challenging. Methods for the determination of NPS in hair are continually emerging to include as many NPS as possible due to the great demand for their detection. PMID:27834146

  9. Analytical Solutions for Rumor Spreading Dynamical Model in a Social Network

    NASA Astrophysics Data System (ADS)

    Fallahpour, R.; Chakouvari, S.; Askari, H.

    2015-03-01

    In this paper, the Laplace Adomian decomposition method (LADM) is utilized to evaluate a rumor-spreading model. First, a succinct review is given of the use of analytical methods such as the Adomian decomposition method, the variational iteration method, and the homotopy analysis method for epidemic models and biomathematics. A rumor-spreading model incorporating a forgetting mechanism is then considered, and LADM is applied to solve it. By means of this method, a general solution is obtained for this problem that can readily be employed to assess the rumor model without any computer program. In addition, the results obtained for this problem are discussed for different cases and parameters. Furthermore, it is shown that the method is straightforward and effective for analyzing equations with complicated terms, such as the rumor model. Comparison with numerical methods reveals that LADM is powerful and accurate for obtaining solutions of this model. It is concluded that this method is well suited to this problem and can provide researchers with a powerful vehicle for scrutinizing rumor models in diverse kinds of social networks such as Facebook, YouTube, Flickr, LinkedIn and Twitter.

  10. New Coke, Rosetta Stones, and Functional Data Analysis: Recommendations for Developing and Validating New Measures of Depression

    ERIC Educational Resources Information Center

    Santor, Darcy A.

    2006-01-01

    In this article, the author outlines six recommendations that may guide the continued development and validation of measures of depression. These are (a) articulate and revise a formal theory of signs and symptoms; (b) differentiate complex theoretical goals from pragmatic evaluation needs; (c) invest heavily in new methods and analytic models;…

  11. Journal Benchmarking for Strategic Publication Management and for Improving Journal Positioning in the World Ranking Systems

    ERIC Educational Resources Information Center

    Moskovkin, Vladimir M.; Bocharova, Emilia A.; Balashova, Oksana V.

    2014-01-01

    Purpose: The purpose of this paper is to introduce and develop the methodology of journal benchmarking. Design/Methodology/ Approach: The journal benchmarking method is understood to be an analytic procedure of continuous monitoring and comparing of the advance of specific journal(s) against that of competing journals in the same subject area,…

  12. Linear diffusion-wave channel routing using a discrete Hayami convolution method

    Treesearch

    Li Wang; Joan Q. Wu; William J. Elliot; Fritz R. Feidler; Sergey Lapin

    2014-01-01

    The convolution of an input with a response function has been widely used in hydrology as a means to solve various problems analytically. Due to the high computation demand in solving the functions using numerical integration, it is often advantageous to use the discrete convolution instead of the integration of the continuous functions. This approach greatly reduces...
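    A short sketch of the discrete-convolution routing idea follows, using the commonly cited Hayami impulse response h(t) = x / (2 t √(π D t)) · exp(−(x − c t)² / (4 D t)) as the response function; the reach length, celerity, diffusivity, and inflow hydrograph are illustrative assumptions rather than values from the study.

        import numpy as np

        def hayami_kernel(t, x, c, D):
            """Unit impulse response of the linear diffusion-wave equation (Hayami solution)."""
            t = np.asarray(t, dtype=float)
            h = np.zeros_like(t)
            pos = t > 0
            h[pos] = (x / (2.0 * t[pos] * np.sqrt(np.pi * D * t[pos]))
                      * np.exp(-(x - c * t[pos]) ** 2 / (4.0 * D * t[pos])))
            return h

        def route_discrete_convolution(inflow, dt, x, c, D):
            """Route an inflow hydrograph by discrete convolution with the Hayami kernel."""
            t = np.arange(inflow.size) * dt
            h = hayami_kernel(t, x, c, D)
            h = h / max(np.sum(h) * dt, 1e-30)              # renormalize the truncated kernel
            return np.convolve(inflow, h)[: inflow.size] * dt

        # Usage with illustrative channel parameters: reach length x, celerity c, diffusivity D.
        dt = 600.0                                           # s
        t = np.arange(0, 48 * 3600, dt)
        inflow = 5.0 + 20.0 * np.exp(-((t - 6 * 3600) / 7200.0) ** 2)   # synthetic storm hydrograph, m^3/s
        outflow = route_discrete_convolution(inflow, dt, x=20000.0, c=1.0, D=2000.0)
        print("peak inflow %.1f m^3/s at hour %.1f" % (inflow.max(), t[inflow.argmax()] / 3600))
        print("peak outflow %.1f m^3/s at hour %.1f" % (outflow.max(), t[outflow.argmax()] / 3600))

    The discrete convolution replaces the continuous integral with a sum over the sampled kernel, which is the computational saving the abstract refers to; renormalizing the truncated kernel keeps the routed volume approximately conserved.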

  13. 40 CFR 141.22 - Turbidity sampling and analytical requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... requirements. 141.22 Section 141.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements... suppliers of water for both community and non-community water systems at a representative entry point(s) to...

  14. 40 CFR 141.22 - Turbidity sampling and analytical requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... requirements. 141.22 Section 141.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements... suppliers of water for both community and non-community water systems at a representative entry point(s) to...

  15. 40 CFR 141.22 - Turbidity sampling and analytical requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... requirements. 141.22 Section 141.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements... suppliers of water for both community and non-community water systems at a representative entry point(s) to...

  16. 40 CFR 141.22 - Turbidity sampling and analytical requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... requirements. 141.22 Section 141.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements... suppliers of water for both community and non-community water systems at a representative entry point(s) to...

  17. 40 CFR 141.22 - Turbidity sampling and analytical requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... requirements. 141.22 Section 141.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements... suppliers of water for both community and non-community water systems at a representative entry point(s) to...

  18. Modeling of phonon scattering in n-type nanowire transistors using one-shot analytic continuation technique

    NASA Astrophysics Data System (ADS)

    Bescond, Marc; Li, Changsheng; Mera, Hector; Cavassilas, Nicolas; Lannoo, Michel

    2013-10-01

    We present a one-shot current-conserving approach to model the influence of electron-phonon scattering in nano-transistors using the non-equilibrium Green's function formalism. The approach is based on the lowest order approximation (LOA) to the current and its simplest analytic continuation (LOA+AC). By means of a scaling argument, we show how both LOA and LOA+AC can be easily obtained from the first iteration of the usual self-consistent Born approximation (SCBA) algorithm. Both LOA and LOA+AC are then applied to model n-type silicon nanowire field-effect-transistors and are compared to SCBA current characteristics. In this system, the LOA fails to describe electron-phonon scattering, mainly because of the interactions with acoustic phonons at the band edges. In contrast, the LOA+AC still well approximates the SCBA current characteristics, thus demonstrating the power of analytic continuation techniques. The limits of validity of LOA+AC are also discussed, and more sophisticated and general analytic continuation techniques are suggested for more demanding cases.

  19. QSPR studies on the photoinduced-fluorescence behaviour of pharmaceuticals and pesticides.

    PubMed

    López-Malo, D; Bueso-Bordils, J I; Duart, M J; Alemán-López, P A; Martín-Algarra, R V; Antón-Fos, G M; Lahuerta-Zamora, L; Martínez-Calatayud, J

    2017-07-01

    Fluorimetric analysis is still a growing line of research in the determination of a wide range of organic compounds, including pharmaceuticals and pesticides, which makes necessary the development of new strategies aimed at improving the performance of fluorescence determinations as well as the sensitivity and, especially, the selectivity of the newly developed analytical methods. This paper presents applications of a useful and growing tool for fostering and improving research in the analytical field. Experimental screening, molecular connectivity and discriminant analysis are applied to organic compounds to predict their fluorescent behaviour after their photodegradation by UV irradiation in a continuous flow manifold (multicommutation flow assembly). The screening was based on online fluorimetric measurement and comprised pre-selected compounds with different molecular structures (pharmaceuticals and some pesticides with known 'native' fluorescent behaviour) to study their changes in fluorescent behaviour after UV irradiation. Theoretical predictions agree with the results from the experimental screening and could be used to develop selective analytical methods, as well as helping to reduce the need for expensive, time-consuming and trial-and-error screening procedures.

  20. Can NMR solve some significant challenges in metabolomics?

    PubMed

    Nagana Gowda, G A; Raftery, Daniel

    2015-11-01

    The field of metabolomics continues to witness rapid growth driven by fundamental studies, methods development, and applications in a number of disciplines that include biomedical science, plant and nutrition sciences, drug development, energy and environmental sciences, toxicology, etc. NMR spectroscopy is one of the two most widely used analytical platforms in the metabolomics field, along with mass spectrometry (MS). NMR's excellent reproducibility and quantitative accuracy, its ability to identify structures of unknown metabolites, its capacity to generate metabolite profiles using intact bio-specimens with no need for separation, and its capabilities for tracing metabolic pathways using isotope labeled substrates offer unique strengths for metabolomics applications. However, NMR's limited sensitivity and resolution continue to pose a major challenge and have restricted both the number and the quantitative accuracy of metabolites analyzed by NMR. Further, the analysis of highly complex biological samples has increased the demand for new methods with improved detection, better unknown identification, and more accurate quantitation of larger numbers of metabolites. Recent efforts have contributed significant improvements in these areas, and have thereby enhanced the pool of routinely quantifiable metabolites. Additionally, efforts focused on combining NMR and MS promise opportunities to exploit the combined strength of the two analytical platforms for direct comparison of the metabolite data, unknown identification and reliable biomarker discovery that continue to challenge the metabolomics field. This article presents our perspectives on the emerging trends in NMR-based metabolomics and NMR's continuing role in the field with an emphasis on recent and ongoing research from our laboratory. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Monitoring the Presence of 13 Active Compounds in Surface Water Collected from Rural Areas in Northwestern Spain

    PubMed Central

    Iglesias, Alejandra; Nebot, Carolina; Vázquez, Beatriz I.; Coronel-Olivares, Claudia; Franco Abuín, Carlos M.; Cepeda, Alberto

    2014-01-01

    Drug residues are considered environmental contaminants, and their occurrence has recently become a matter of concern. Analytical methods and monitoring systems are therefore required to control the continuous input of these drug residues into the environment. This article presents a suitable HPLC-ESI-MS/MS method for the simultaneous extraction, detection and quantification of residues of 13 drugs (antimicrobials, glucocorticosteroids, anti-inflammatories, anti-hypertensives, anti-cancer drugs and triphenylmethane dyes) in surface water. A monitoring study with 549 water samples was carried out in northwestern Spain to detect the presence of drug residues over two sampling periods during 2010, 2011 and 2012. Samples were collected from rural areas with and without farming activity and from urban areas. The 13 analytes were detected, and 18% of the samples collected showed positive results for the presence of at least one analyte. More collection sites were located in rural areas than in urban areas. However, more positive samples with higher concentrations and a larger number of analytes were detected in samples collected from sites located after the discharge of a WWTP. Results indicated that WWTPs seem to act as concentration points. Positive samples were also detected at a site located near a drinking water treatment plant. PMID:24837665

  2. Monitoring the presence of 13 active compounds in surface water collected from rural areas in northwestern Spain.

    PubMed

    Iglesias, Alejandra; Nebot, Carolina; Vázquez, Beatriz I; Coronel-Olivares, Claudia; Abuín, Carlos M Franco; Cepeda, Alberto

    2014-05-15

    Drug residues are considered environmental contaminants, and their occurrence has recently become a matter of concern. Analytical methods and monitoring systems are therefore required to control the continuous input of these drug residues into the environment. This article presents a suitable HPLC-ESI-MS/MS method for the simultaneous extraction, detection and quantification of residues of 13 drugs (antimicrobials, glucocorticosteroids, anti-inflammatories, anti-hypertensives, anti-cancer drugs and triphenylmethane dyes) in surface water. A monitoring study with 549 water samples was carried out in northwestern Spain to detect the presence of drug residues over two sampling periods during 2010, 2011 and 2012. Samples were collected from rural areas with and without farming activity and from urban areas. The 13 analytes were detected, and 18% of the samples collected showed positive results for the presence of at least one analyte. More collection sites were located in rural areas than in urban areas. However, more positive samples with higher concentrations and a larger number of analytes were detected in samples collected from sites located after the discharge of a WWTP. Results indicated that WWTPs seem to act as concentration points. Positive samples were also detected at a site located near a drinking water treatment plant.

  3. Methods for the behavioral, educational, and social sciences: an R package.

    PubMed

    Kelley, Ken

    2007-11-01

    Methods for the Behavioral, Educational, and Social Sciences (MBESS; Kelley, 2007b) is an open source package for R (R Development Core Team, 2007b), an open source statistical programming language and environment. MBESS implements methods that are not widely available elsewhere, yet are especially helpful for the idiosyncratic techniques used within the behavioral, educational, and social sciences. The major categories of functions are those that relate to confidence interval formation for noncentral t, F, and chi2 parameters, confidence intervals for standardized effect sizes (which require noncentral distributions), and sample size planning issues from the power analytic and accuracy in parameter estimation perspectives. In addition, MBESS contains collections of other functions that should be helpful to substantive researchers and methodologists. MBESS is a long-term project that will continue to be updated and expanded so that important methods can continue to be made available to researchers in the behavioral, educational, and social sciences.

  4. On the Solution of the Continuity Equation for Precipitating Electrons in Solar Flares

    NASA Technical Reports Server (NTRS)

    Emslie, A. Gordon; Holman, Gordon D.; Litvinenko, Yuri E.

    2014-01-01

    Electrons accelerated in solar flares are injected into the surrounding plasma, where they are subjected to the influence of collisional (Coulomb) energy losses. Their evolution is modeled by a partial differential equation describing continuity of electron number. In a recent paper, Dobranskis & Zharkova claim to have found an "updated exact analytical solution" to this continuity equation. Their solution contains an additional term that drives an exponential decrease in electron density with depth, leading them to assert that the well-known solution derived by Brown, Syrovatskii & Shmeleva, and many others is invalid. We show that the solution of Dobranskis & Zharkova results from a fundamental error in the application of the method of characteristics and is hence incorrect. Further, their comparison of the "new" analytical solution with numerical solutions of the Fokker-Planck equation fails to lend support to their result. We conclude that Dobranskis & Zharkova's solution of the universally accepted and well-established continuity equation is incorrect, and that their criticism of the correct solution is unfounded. We also demonstrate the formal equivalence of the approaches of Syrovatskii & Shmeleva and Brown, with particular reference to the evolution of the electron flux and number density (both differential in energy) in a collisional thick target. We strongly urge use of these long-established, correct solutions in future works.

  5. Meta-analysis in evidence-based healthcare: a paradigm shift away from random effects is overdue.

    PubMed

    Doi, Suhail A R; Furuya-Kanamori, Luis; Thalib, Lukman; Barendregt, Jan J

    2017-12-01

    Each year up to 20 000 systematic reviews and meta-analyses are published whose results influence healthcare decisions, thus making the robustness and reliability of meta-analytic methods one of the world's top clinical and public health priorities. The evidence synthesis makes use of either fixed-effect or random-effects statistical methods. The fixed-effect method has largely been replaced by the random-effects method as heterogeneity of study effects led to poor error estimation. However, despite the widespread use and acceptance of the random-effects method to correct this, it too remains unsatisfactory and continues to suffer from defective error estimation, posing a serious threat to decision-making in evidence-based clinical and public health practice. We discuss here the problem with the random-effects approach and demonstrate that there exist better estimators under the fixed-effect model framework that can achieve optimal error estimation. We argue for an urgent return to the earlier framework with updates that address these problems and conclude that doing so can markedly improve the reliability of meta-analytical findings and thus decision-making in healthcare.
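    For reference, the two estimators contrasted in the abstract, the inverse-variance fixed-effect estimate and the classic DerSimonian-Laird random-effects estimate, can be written compactly as below; the study effect sizes and variances are hypothetical, and the authors' proposed alternative estimators are not implemented here.

        import numpy as np

        def fixed_effect(y, v):
            """Inverse-variance fixed-effect pooled estimate and its variance."""
            w = 1.0 / np.asarray(v, dtype=float)
            theta = np.sum(w * y) / np.sum(w)
            return theta, 1.0 / np.sum(w)

        def dersimonian_laird(y, v):
            """Classic random-effects estimate with the DerSimonian-Laird tau^2."""
            y, v = np.asarray(y, float), np.asarray(v, float)
            w = 1.0 / v
            theta_fe = np.sum(w * y) / np.sum(w)
            Q = np.sum(w * (y - theta_fe) ** 2)                       # Cochran's Q
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (Q - (len(y) - 1)) / c)                   # between-study variance
            w_star = 1.0 / (v + tau2)
            return np.sum(w_star * y) / np.sum(w_star), 1.0 / np.sum(w_star), tau2

        # Hypothetical study effect sizes (e.g., log odds ratios) and within-study variances.
        y = np.array([-0.3, -0.1, -0.5, 0.1, -0.4])
        v = np.array([0.04, 0.02, 0.09, 0.03, 0.06])
        print("fixed effect  :", fixed_effect(y, v))
        print("random effects:", dersimonian_laird(y, v))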

  6. Analytical solutions for solute transport in groundwater and riverine flow using Green's Function Method and pertinent coordinate transformation method

    NASA Astrophysics Data System (ADS)

    Sanskrityayn, Abhishek; Suk, Heejun; Kumar, Naveen

    2017-04-01

    In this study, analytical solutions of one-dimensional pollutant transport originating from instantaneous and continuous point sources were developed in groundwater and riverine flow using both the Green's Function Method (GFM) and a pertinent coordinate transformation method. The dispersion coefficient and flow velocity are considered spatially and temporally dependent. The spatial dependence of the velocity is linear and non-homogeneous, and that of the dispersion coefficient is the square of that of the velocity, while the temporal dependence is considered linear, exponentially decelerating and accelerating, and asymptotically decelerating and accelerating. Our proposed analytical solutions are derived for three different situations depending on the variations of the dispersion coefficient and velocity, which can represent real physical processes occurring in groundwater and riverine systems. The first case refers to steady solute transport in steady flow, in which the dispersion coefficient and velocity are only spatially dependent. The second case represents transient solute transport in steady flow, in which the dispersion coefficient is spatially and temporally dependent while the velocity is spatially dependent. Finally, the third case represents transient solute transport in unsteady flow, in which both the dispersion coefficient and velocity are spatially and temporally dependent. The present paper demonstrates the concentration distribution behavior from a point source in realistically occurring flow domains of hydrological systems, including groundwater and riverine water, in which the dispersivity of the pollutant mass is affected by the heterogeneity of the medium as well as by other factors like velocity fluctuations, while the velocity is influenced by the water table slope and recharge rate. Such capabilities give the proposed solutions an advantage over previously existing analytical solutions in their applicability to a wide range of hydrological problems. In particular, to the authors' knowledge, no other solution exists for a dispersion coefficient and velocity that vary in both space and time. In this study, existing analytical solutions from widely known previous studies are used as validation tools to verify the proposed analytical solutions, as well as the numerical Two-Dimensional Subsurface Flow, Fate and Transport of Microbes and Chemicals (2DFATMIC) code and the developed 1D finite difference (FDM) code. All such solutions show excellent agreement with the respective proposed solutions.
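    For orientation, the constant-coefficient special case that such solutions reduce to is the familiar instantaneous point-source solution of the one-dimensional advection-dispersion equation, sketched below; the velocity, dispersion coefficient, and injected mass are illustrative, and the spatially and temporally variable coefficients treated in the study are not handled by this simple formula.

        import numpy as np

        def instantaneous_point_source(x, t, M, v, D, x0=0.0):
            """
            1-D advection-dispersion, instantaneous point source of mass M at x0, t = 0:
                C(x, t) = M / sqrt(4*pi*D*t) * exp(-(x - x0 - v*t)**2 / (4*D*t))
            (constant velocity v and dispersion coefficient D; unit cross-section/porosity).
            """
            x = np.asarray(x, dtype=float)
            return M / np.sqrt(4.0 * np.pi * D * t) * np.exp(-(x - x0 - v * t) ** 2 / (4.0 * D * t))

        # Usage with illustrative groundwater-like parameters.
        x = np.linspace(0.0, 200.0, 401)          # m
        for t in (10.0, 50.0, 100.0):             # days
            c = instantaneous_point_source(x, t, M=100.0, v=1.0, D=5.0)
            print(f"t = {t:5.1f} d: plume centre at x ~ {x[c.argmax()]:.1f} m, peak C = {c.max():.3f}")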

  7. The Analytical Solution of the Transient Radial Diffusion Equation with a Nonuniform Loss Term.

    NASA Astrophysics Data System (ADS)

    Loridan, V.; Ripoll, J. F.; De Vuyst, F.

    2017-12-01

    Much work has been done during the past 40 years to derive the analytical solution of the radial diffusion equation that models the transport and loss of electrons in the magnetosphere, considering a diffusion coefficient proportional to a power law in shell and a constant loss term. Here, we propose an original analytical method to address this challenge with a nonuniform loss term. The strategy is to match any L-dependent electron losses with a piecewise constant function on M subintervals, i.e., dealing with a constant lifetime on each subinterval. Applying an eigenfunction expansion method, the eigenvalue problem becomes a Sturm-Liouville problem with M interfaces. Assuming the continuity of both the distribution function and its first spatial derivative, we are able to deal with a well-posed problem and to find the full analytical solution. We further show excellent agreement between the analytical solutions and the solutions obtained directly from numerical simulations for different loss terms of various shapes and with a diffusion coefficient D_LL ∝ L^6. We also give two expressions for the required number of eigenmodes N needed to get an accurate snapshot of the analytical solution, highlighting that N is proportional to 1/√t0, where t0 is a time of interest, and that N increases with the diffusion power. Finally, the equilibrium time, defined as the time to nearly reach the steady solution, is estimated by a closed-form expression and discussed. Applications to Earth and also Jupiter and Saturn are discussed.
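    The kind of numerical cross-check mentioned in the abstract can be sketched with a simple explicit finite-difference solver for the radial diffusion equation with D_LL = D0 L^6 and a piecewise-constant lifetime; all parameter values, grids, and the initial profile below are assumptions, and the eigenfunction-expansion analytical solution itself is not reproduced here.

        import numpy as np

        # Illustrative finite-difference cross-check (all values are assumptions):
        #   df/dt = L^2 d/dL( D_LL / L^2 * df/dL ) - f / tau(L),   with D_LL = D0 * L^6.
        L = np.linspace(1.0, 6.0, 101)
        dL = L[1] - L[0]
        D0 = 1e-5                                   # diffusion coefficient at L = 1 (1/day)
        D_LL = D0 * L ** 6
        tau = np.where(L < 4.0, 5.0, 1.0)           # piecewise-constant lifetime (days)

        f = np.exp(-0.5 * ((L - 4.5) / 0.5) ** 2)   # initial phase-space-density-like profile
        f[0] = f[-1] = 0.0                          # fixed boundary values

        dt = 0.25 * dL ** 2 / np.max(D_LL)          # conservative explicit-scheme time step
        t, t_end = 0.0, 10.0                        # days
        while t < t_end:
            D_half = 0.5 * (D_LL[:-1] / L[:-1] ** 2 + D_LL[1:] / L[1:] ** 2)
            flux = D_half * (f[1:] - f[:-1]) / dL                     # (D_LL/L^2) df/dL at midpoints
            f[1:-1] += dt * (L[1:-1] ** 2 * (flux[1:] - flux[:-1]) / dL - f[1:-1] / tau[1:-1])
            f[0] = f[-1] = 0.0
            t += dt

        print("profile after %.1f days at L = 1, 2, ..., 6:" % t_end, np.round(f[::20], 4))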

  8. Bicubic uniform B-spline wavefront fitting technology applied in computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Sun, Jun-qiang; Chen, Guo-jie

    2006-02-01

    This paper presents a bicubic uniform B-spline wavefront fitting technique for obtaining the analytical expression of the object wavefront used in computer-generated holograms (CGHs). In many cases, to reduce the difficulty of optical fabrication, off-axis CGHs rather than complex aspherical surface elements are used in modern advanced military optical systems. In order to design and fabricate an off-axis CGH, the analytical expression for the object wavefront must first be fitted. Zernike polynomials are well suited to fitting the wavefronts of centrosymmetric optical systems, but not those of off-axis systems. Although a high-degree polynomial fit achieves high precision at all fitting nodes, its main shortcoming is that any departure from the fitting nodes can produce large fitting errors, the so-called pulsation phenomenon. Furthermore, high-degree polynomial fitting increases the calculation time when coding the computer-generated hologram and solving the basic equation. Based on the basis functions of the cubic uniform B-spline and the character mesh of the bicubic uniform B-spline wavefront, the wavefront is described as the product of a series of matrices. Employing standard MATLAB routines, four different analytical expressions for the object wavefront were fitted using bicubic uniform B-splines as well as high-degree polynomials. The results indicate that, compared with high-degree polynomials, the bicubic uniform B-spline is a more competitive method for fitting the analytical expression of the object wavefront used in off-axis CGHs, owing to its higher fitting precision and C2 continuity.
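
    For readers unfamiliar with the matrix form mentioned above, the sketch below evaluates a single bicubic uniform B-spline patch from a 4 x 4 grid of control values using the standard uniform cubic B-spline basis matrix; the control values and evaluation parameters are illustrative assumptions rather than data from the paper, and the paper's MATLAB implementation is not reproduced here.

        import numpy as np

        # Standard uniform cubic B-spline basis matrix.
        MB = (1.0 / 6.0) * np.array([[-1.0,  3.0, -3.0, 1.0],
                                     [ 3.0, -6.0,  3.0, 0.0],
                                     [-3.0,  0.0,  3.0, 0.0],
                                     [ 1.0,  4.0,  1.0, 0.0]])

        def bicubic_bspline_patch(G, u, v):
            """Evaluate S(u, v) = U * MB * G * MB^T * V^T for one patch,
            where G is a 4x4 grid of control values and u, v are in [0, 1]."""
            U = np.array([u**3, u**2, u, 1.0])
            V = np.array([v**3, v**2, v, 1.0])
            return U @ MB @ G @ MB.T @ V

        # Hypothetical 4x4 grid of wavefront control values (in waves).
        G = np.array([[0.0, 0.1, 0.2, 0.1],
                      [0.1, 0.3, 0.4, 0.2],
                      [0.2, 0.4, 0.5, 0.3],
                      [0.1, 0.2, 0.3, 0.2]])

        print(bicubic_bspline_patch(G, 0.5, 0.5))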

  9. Exact analytical solutions of continuity equation for electron beams precipitating in Coulomb collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobranskis, R. R.; Zharkova, V. V., E-mail: valentina.zharkova@northumbria.ac.uk

    2014-06-10

    The original continuity equation (CE) used for the interpretation of the power-law energy spectra of beam electrons in flares was written and solved for the electron beam flux while ignoring an additional free term containing the electron density. In order to remedy this omission, the original CE for electron flux, accounting for the beam's energy losses in Coulomb collisions, is first differentiated with respect to the two independent variables, depth and energy, leading to a partial differential equation for the electron beam density, instead of the flux, with the additional free term. The analytical solution of this partial differential continuity equation (PDCE) is obtained using the method of characteristics. This solution is then used to derive analytical expressions for mean electron spectra for Coulomb collisions and to carry out numerical calculations of hard X-ray (HXR) photon spectra for beams with different parameters. The solutions reveal a significant departure of the electron densities at lower energies from the original results derived from the CE for the flux in Coulomb collisions. This departure is caused by an additional exponential term that appears in the updated solutions for the electron differential density, leading to its faster decrease at lower energies (below 100 keV) with every precipitation depth, similar to the results obtained with numerical Fokker-Planck solutions. The effects of these updated solutions for electron densities on mean electron spectra and HXR photon spectra are also discussed.

  10. A modified Dodge algorithm for the parabolized Navier-Stokes equation and compressible duct flows

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.

    1981-01-01

    A revised version of Dodge's split-velocity method for numerical calculation of compressible duct flow was developed. The revision incorporates balancing of mass flow rates on each marching step in order to maintain front-to-back continuity during the calculation. The (checkerboard) zebra algorithm is applied to the solution of the three-dimensional continuity equation in conservative form. A second-order A-stable linear multistep method is employed in effecting a marching solution of the parabolized momentum equations. A checkerboard iteration is used to solve the resulting implicit nonlinear systems of finite-difference equations which govern stepwise transition. Qualitative agreement with analytical predictions and experimental results was obtained for some flows with well-known solutions.
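
    The front-to-back continuity device described above, rescaling the marching-station velocity profile so that the integrated mass flow rate matches the inlet value, can be sketched as follows; this is a simplified illustration with hypothetical grid data, not the algorithm's actual implementation.

        import numpy as np

        def rebalance_mass_flow(rho, u, dA, mdot_target):
            """Scale the axial velocity profile u on one marching station so that
            the discrete mass flow rate sum(rho * u * dA) equals mdot_target.

            rho, u, dA  : arrays over the cross-sectional grid
            mdot_target : mass flow rate prescribed at the duct inlet
            """
            mdot = np.sum(rho * u * dA)
            return u * (mdot_target / mdot)

        # Hypothetical cross-section whose mass flow rate has drifted slightly.
        rho = np.full(10, 1.2)           # kg/m^3
        u = np.linspace(9.0, 11.0, 10)   # m/s
        dA = np.full(10, 1e-3)           # m^2
        u_corr = rebalance_mass_flow(rho, u, dA, mdot_target=0.118)
        print(np.sum(rho * u_corr * dA))  # -> 0.118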

  11. Exact solution of conductive heat transfer in cylindrical composite laminate

    NASA Astrophysics Data System (ADS)

    Kayhani, M. H.; Shariati, M.; Nourozi, M.; Karimi Demneh, M.

    2009-11-01

    This paper presents an exact solution for steady-state conduction heat transfer in cylindrical composite laminates. The laminate is cylindrical in shape, and in each lamina the fibers are wound around the cylinder. Heat transfer in the composite laminate is investigated using the separation of variables method, and an analytical relation for the temperature distribution in the laminate is obtained under specific boundary conditions. The Fourier coefficients in each layer are obtained by solving the set of equations arising from the thermal boundary conditions at the inside and outside of the cylinder, together with the temperature and heat flux continuity conditions between adjacent layers. The LU factorization method is used to solve this set of equations.
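
    The interface and boundary conditions couple the Fourier coefficients of all layers into one linear system that is solved by LU factorization; a generic sketch of that final step with SciPy is shown below, using a random filler matrix in place of the laminate equations.

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        # Hypothetical system A c = b linking the unknown Fourier coefficients c
        # of all layers through the boundary and interface conditions.
        rng = np.random.default_rng(0)
        n = 8                                          # e.g. 2 coefficients per layer, 4 layers
        A = rng.normal(size=(n, n)) + n * np.eye(n)    # well-conditioned filler matrix
        b = rng.normal(size=n)

        lu, piv = lu_factor(A)                         # factor once
        c = lu_solve((lu, piv), b)                     # back-substitute for this harmonic
        print(np.allclose(A @ c, b))                   # -> True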

  12. Emanation of radon from household granite.

    PubMed

    Kitto, Michael E; Haines, Douglas K; Arauzo, Hernando Diaz

    2009-04-01

    Emanation of radon (222Rn) from granite used for countertops and mantels was measured with continuous and integrating radon monitors. Each of the 24 granite samples emitted a measurable amount of radon. Of the two analytical methods that utilized electret-based detectors, one measured the flux of radon from the granite surfaces, and the other one measured radon levels in a glass jar containing granite cores. Additional methods that were applied utilized alpha-scintillation cells and a continuous radon monitor. Measured radon flux from the granites ranged from 2 to 310 mBq m-2 s-1, with most granites emitting <20 mBq m-2 s-1. Emanation of radon from granites encapsulated in airtight containers produced equilibrium concentrations ranging from <0.01 to 11 Bq kg-1 when alpha-scintillation cells were used, and from <0.01 to 4.0 Bq kg-1 when the continuous radon monitor was used.

  13. A Method for Continuous (239)Pu Determinations in Arctic and Antarctic Ice Cores.

    PubMed

    Arienzo, M M; McConnell, J R; Chellman, N; Criscitiello, A S; Curran, M; Fritzsche, D; Kipfstuhl, S; Mulvaney, R; Nolan, M; Opel, T; Sigl, M; Steffensen, J P

    2016-07-05

    Atmospheric nuclear weapons testing (NWT) resulted in the injection of plutonium (Pu) into the atmosphere and subsequent global deposition. We present a new method for continuous semiquantitative measurement of (239)Pu in ice cores, which was used to develop annual records of fallout from NWT in ten ice cores from Greenland and Antarctica. The (239)Pu was measured directly using an inductively coupled plasma-sector field mass spectrometer, thereby reducing analysis time and increasing depth-resolution with respect to previous methods. To validate this method, we compared our one year averaged results to published (239)Pu records and other records of NWT. The (239)Pu profiles from the Arctic ice cores reflected global trends in NWT and were in agreement with discrete Pu profiles from lower latitude ice cores. The (239)Pu measurements in the Antarctic ice cores tracked low latitude NWT, consistent with previously published discrete records from Antarctica. Advantages of the continuous (239)Pu measurement method are (1) reduced sample preparation and analysis time; (2) no requirement for additional ice samples for NWT fallout determinations; (3) measurements are exactly coregistered with all other chemical, elemental, isotopic, and gas measurements from the continuous analytical system; and (4) the long half-life means the (239)Pu record is stable through time.

  14. A Green Analytical Method Using Ultrasound in Sample Preparation for the Flow Injection Determination of Iron, Manganese, and Zinc in Soluble Solid Samples by Flame Atomic Absorption Spectrometry

    PubMed Central

    Yebra, M. Carmen

    2012-01-01

    A simple and rapid analytical method was developed for the determination of iron, manganese, and zinc in soluble solid samples. The method is based on continuous ultrasonic water dissolution of the sample (5–30 mg) at room temperature followed by flow injection flame atomic absorption spectrometric determination. A good precision of the whole procedure (1.2–4.6%) and a sample throughput of ca. 25 samples h–1 were obtained. The proposed green analytical method has been successfully applied for the determination of iron, manganese, and zinc in soluble solid food samples (soluble cocoa and soluble coffee) and pharmaceutical preparations (multivitamin tablets). The ranges of concentrations found were 21.4–25.61 μg g−1 for iron, 5.74–18.30 μg g−1 for manganese, and 33.27–57.90 μg g−1 for zinc in soluble solid food samples and 3.75–9.90 μg g−1 for iron, 0.47–5.05 μg g−1 for manganese, and 1.55–15.12 μg g−1 for zinc in multivitamin tablets. The accuracy of the proposed method was established by a comparison with the conventional wet acid digestion method using a paired t-test, indicating the absence of systematic errors. PMID:22567553

  15. 40 CFR 87.82 - Sampling and analytical procedures for measuring smoke exhaust emissions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Title 40 (Protection of Environment), § 87.82: Sampling and analytical procedures for measuring smoke exhaust emissions. Environmental Protection Agency (continued), Air Programs (continued), Control of Air Pollution from Aircraft and Aircraft Engines...

  16. 40 CFR 87.64 - Sampling and analytical procedures for measuring gaseous exhaust emissions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Title 40 (Protection of Environment), § 87.64: Sampling and analytical procedures for measuring gaseous exhaust emissions. Environmental Protection Agency (continued), Air Programs (continued), Control of Air Pollution from Aircraft and Aircraft Engines...

  17. 40 CFR 87.82 - Sampling and analytical procedures for measuring smoke exhaust emissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40 (Protection of Environment), § 87.82: Sampling and analytical procedures for measuring smoke exhaust emissions. Environmental Protection Agency (continued), Air Programs (continued), Control of Air Pollution from Aircraft and Aircraft Engines...

  18. 40 CFR 87.64 - Sampling and analytical procedures for measuring gaseous exhaust emissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40 (Protection of Environment), § 87.64: Sampling and analytical procedures for measuring gaseous exhaust emissions. Environmental Protection Agency (continued), Air Programs (continued), Control of Air Pollution from Aircraft and Aircraft Engines...

  19. Milky Way mass and potential recovery using tidal streams in a realistic halo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonaca, Ana; Geha, Marla; Küpper, Andreas H. W.

    2014-11-01

    We present a new method for determining the Galactic gravitational potential based on forward modeling of tidal stellar streams. We use this method to test the performance of smooth and static analytic potentials in representing realistic dark matter halos, which have substructure and are continually evolving by accretion. Our FAST-FORWARD method uses a Markov Chain Monte Carlo algorithm to compare, in six-dimensional phase space, an 'observed' stream to models created in trial analytic potentials. We analyze a large sample of streams that evolved in the Via Lactea II (VL2) simulation, which represents a realistic Galactic halo potential. The recovered potential parameters are in agreement with the best fit to the global, present-day VL2 potential. However, merely assuming an analytic potential limits the dark matter halo mass measurement to an accuracy of 5%-20%, depending on the choice of analytic parameterization. Collectively, the mass estimates using streams from our sample reach this fundamental limit, but individually they can be highly biased. Individual streams can both under- and overestimate the mass, and the bias is progressively worse for those with smaller perigalacticons, motivating the search for tidal streams at galactocentric distances larger than 70 kpc. We estimate that the assumption of a static and smooth dark matter potential in modeling of the GD-1- and Pal5-like streams introduces an error of up to 50% in the Milky Way mass estimates.

  20. [Recent Development of Atomic Spectrometry in China].

    PubMed

    Xiao, Yuan-fang; Wang, Xiao-hua; Hang, Wei

    2015-09-01

    As an important part of modern analytical techniques, atomic spectrometry occupies a decisive position in the analytical field as a whole, and its development reflects the continuous reform and innovation of analytical techniques. In the past fifteen years, atomic spectrometry has experienced rapid development and been applied widely in many fields in China. This review surveys that development and its remarkable achievements. It covers several branches of atomic spectrometry, including atomic emission spectrometry (AES), atomic absorption spectrometry (AAS), atomic fluorescence spectrometry (AFS), X-ray fluorescence spectrometry (XRF), and atomic mass spectrometry (AMS). Emphasis is placed on innovations in detection methods and their applications in related fields, including environmental samples, biological samples, food and beverages, and geological materials. There is also a brief introduction to the hyphenated techniques used in atomic spectrometry. Finally, the prospects of atomic spectrometry in China are forecast.

  1. Study of a vibrating plate: comparison between experimental (ESPI) and analytical results

    NASA Astrophysics Data System (ADS)

    Romero, G.; Alvarez, L.; Alanís, E.; Nallim, L.; Grossi, R.

    2003-07-01

    Real-time electronic speckle pattern interferometry (ESPI) was used for tuning and visualization of natural frequencies of a trapezoidal plate. The plate was excited to resonant vibration by a sinusoidal acoustical source, which provided a continuous range of audio frequencies. Fringe patterns produced during the time-average recording of the vibrating plate—corresponding to several resonant frequencies—were registered. From these interferograms, calculations of vibrational amplitudes by means of zero-order Bessel functions were performed in some particular cases. The system was also studied analytically. The analytical approach developed is based on the Rayleigh-Ritz method and on the use of non-orthogonal right triangular co-ordinates. The deflection of the plate is approximated by a set of beam characteristic orthogonal polynomials generated by using the Gram-Schmidt procedure. A high degree of correlation between computational analysis and experimental results was observed.

  2. Development of analytically capable time-of-flight mass spectrometer with continuous ion introduction

    NASA Astrophysics Data System (ADS)

    Hárs, György; Dobos, Gábor

    2010-03-01

    The present article describes the results and findings obtained in the course of developing an analytically capable prototype of a continuous time-of-flight (CTOF) mass spectrometer. Currently marketed pulsed TOF (PTOF) instruments use ion introduction with a pulse width of about 10 ns, followed by a waiting period of roughly 100 μs. Accordingly, the sample is under excitation for only about 10^-4 of the total measuring time. This very low duty cycle severely limits the sensitivity of the PTOF method. A possible approach to this problem is the linear sinusoidal dual modulation technique (CTOF) described in this article, which increases the sensitivity of the method owing to the 50% duty cycle of the excitation. All other types of TOF spectrometer use a secondary electron multiplier (SEM) for detection, which unfortunately discriminates in amplification in favor of the lighter ions. This discrimination effect is especially undesirable in a mass spectrometric method that targets the high mass range. In the CTOF method, the SEM is replaced with a Faraday cup detector, thus eliminating the mass discrimination effect. Omitting the SEM is made possible by the high ion intensity and the very slow ion detection, with a detection bandwidth of a few hundred hertz. The electrometer electronics of the Faraday cup detector operates with an amplification of 10^10 V/A. The primary ion beam is highly monoenergetic owing to the construction of the ion gun, which makes it possible to omit any electrostatic mirror configuration for bunching the ions. The measurement is controlled by a personal computer and an intelligent signal generator (Tabor WW 2571), which uses the direct digital synthesis technique to produce arbitrary waveforms. The data are collected by a LabJack interface board, and the fast Fourier transformation is performed in software. A noble gas mixture was used to test the analytical capabilities of the prototype setup. The measurements presented confirm the mathematical calculations as well as the future potential of the method for chemical analysis of gaseous mixtures.

  3. A new method for blood velocity measurements using ultrasound FMCW signals.

    PubMed

    Kunita, Masanori; Sudo, Masamitsu; Inoue, Shinya; Akahane, Mutsuhiro

    2010-05-01

    The low peak power of frequency-modulated continuous wave (FMCW) radar makes it attractive for various applications, including vehicle collision warning systems and airborne radio altimeters. This paper describes a new ultrasound Doppler measurement system that measures blood flow velocity based on principles similar to those of FMCW radar. We propose a sinusoidal wave for FM modulation and introduce a new demodulation technique for obtaining Doppler information with high SNR and range resolution. Doppler signals are demodulated with a reference FMCW signal to adjust delay times so that they are equal to propagation times between the transmitter and the receiver. Analytical results suggest that Doppler signals can be obtained from a selected position, as with a sample volume in pulse wave Doppler systems, and that the resulting SNR is nearly identical to that obtained with continuous wave (CW) Doppler systems. Additionally, clutter power is less than that of CW Doppler systems. The analytical results were verified by experiments involving electronic circuits and Doppler ultrasound phantoms.

  4. Simple and clean determination of tetracyclines by flow injection analysis

    NASA Astrophysics Data System (ADS)

    Rodríguez, Michael Pérez; Pezza, Helena Redigolo; Pezza, Leonardo

    2016-01-01

    An environmentally reliable analytical methodology was developed for direct quantification of tetracycline (TC) and oxytetracycline (OTC) using continuous flow injection analysis with spectrophotometric detection. The method is based on the diazo coupling reaction between the tetracyclines and diazotized sulfanilic acid in a basic medium, resulting in the formation of an intense orange azo compound that presents maximum absorption at 434 nm. Experimental design was used to optimize the analytical conditions. The proposed technique was validated over the concentration range of 1 to 40 μg mL−1, and was successfully applied to samples of commercial veterinary pharmaceuticals. The detection (LOD) and quantification (LOQ) limits were 0.40 and 1.35 μg mL−1, respectively. The samples were also analyzed by an HPLC method, and the results showed agreement with the proposed technique. The new flow injection method can be immediately used for quality control purposes in the pharmaceutical industry, facilitating monitoring in real time during the production processes of tetracycline formulations for veterinary use.

  5. Analytical solution for a class of network dynamics with mechanical and financial applications

    NASA Astrophysics Data System (ADS)

    Krejčí, P.; Lamba, H.; Melnik, S.; Rachinskii, D.

    2014-09-01

    We show that for a certain class of dynamics at the nodes the response of a network of any topology to arbitrary inputs is defined in a simple way by its response to a monotone input. The nodes may have either a discrete or continuous set of states and there is no limit on the complexity of the network. The results provide both an efficient numerical method and the potential for accurate analytic approximation of the dynamics on such networks. As illustrative applications, we introduce a quasistatic mechanical model with objects interacting via frictional forces and a financial market model with avalanches and critical behavior that are generated by momentum trading strategies.

  6. Tidal analysis of Met rocket wind data

    NASA Technical Reports Server (NTRS)

    Bedinger, J. F.; Constantinides, E.

    1976-01-01

    A method of analyzing Met Rocket wind data is described. Modern tidal theory and specialized analytical techniques were used to resolve specific tidal modes and prevailing components in observed wind data. A representation of the wind which is continuous in both space and time was formulated. Such a representation allows direct comparison with theory, allows the derivation of other quantities such as temperature and pressure which in turn may be compared with observed values, and allows the formation of a wind model which extends over a broader range of space and time. Significant diurnal tidal modes with wavelengths of 10 and 7 km were present in the data and were resolved by the analytical technique.
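
    A common way to resolve prevailing and tidal components of the kind described above is a linear least-squares fit of sinusoids at the known tidal periods; the sketch below, with synthetic hourly winds and assumed diurnal and semidiurnal periods, is illustrative only and is not the authors' analysis procedure.

        import numpy as np

        def fit_tidal_harmonics(t_hours, wind, periods=(24.0, 12.0)):
            """Least-squares fit of wind(t) = mean + sum_k [a_k cos + b_k sin](2*pi*t/T_k)."""
            cols = [np.ones_like(t_hours)]
            for T in periods:
                w = 2.0 * np.pi / T
                cols += [np.cos(w * t_hours), np.sin(w * t_hours)]
            A = np.column_stack(cols)
            coeffs, *_ = np.linalg.lstsq(A, wind, rcond=None)
            return coeffs

        # Synthetic test: 10 m/s prevailing wind + 5 m/s diurnal tide + noise.
        rng = np.random.default_rng(0)
        t = np.arange(0.0, 96.0, 1.0)
        wind = 10.0 + 5.0 * np.cos(2.0 * np.pi * t / 24.0 - 1.0) + rng.normal(0.0, 0.5, t.size)
        print(fit_tidal_harmonics(t, wind))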

  7. On the contact interaction of two identical stringers with an elastic semi-infinite continuous or vertically cracked plate

    NASA Astrophysics Data System (ADS)

    Grigoryan, M. S.

    2018-04-01

    This paper considers two connected contact problems on the interaction of stringers with an elastic semi-infinite plate. In the first problem, an elastic half-infinite continuous plate is reinforced on its boundary by two identical stringers exposed to a tensile external force. In the second problem, in the presence of the same stringers, the plate contains a collinear system of cracks on its vertical axis. The solution of both problems is reduced to the solution of singular integral equations (SIE) that are solved by a known numerical-analytical method.

  8. A modified Dodge algorithm for the parabolized Navier-Stokes equations and compressible duct flows

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.; Dwoyer, D. M.

    1983-01-01

    A revised version of Dodge's split-velocity method for numerical calculation of compressible duct flow has been developed. The revision incorporates balancing of massflow rates on each marching step in order to maintain front-to-back continuity during the calculation. Qualitative agreement with analytical predictions and experimental results has been obtained for some flows with well-known solutions.

  9. Digital Mapping Techniques '11–12 workshop proceedings

    USGS Publications Warehouse

    Soller, David R.

    2014-01-01

    At these meetings, oral and poster presentations and special discussion sessions emphasized: (1) methods for creating and publishing map products (here, "publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase formats; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Boparai, A.S.; Bowers, D.L.

    This report summarizes the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 2000 (October 1999 through September 2000). This annual progress report, which is the seventeenth in this series for the ACL, describes effort on continuing projects, work on new projects, and contributions of the ACL staff to various programs at ANL. The ACL operates within the ANL system as a full-cost-recovery service center, but it has a mission that includes a complementary research and development component: The Analytical Chemistry Laboratory will provide high-quality, cost-effective chemical analysis and related technical support to solve research problems of our clients--Argonne National Laboratory, the Department of Energy, and others--and will conduct world-class research and development in analytical chemistry and its applications. The ACL handles a wide range of analytical problems that reflects the diversity of research and development (R&D) work at ANL. Some routine or standard analyses are done, but the ACL operates more typically in a problem-solving mode in which development of methods is required or adaptation of techniques is needed to obtain useful analytical data. The ACL works with clients and commercial laboratories if a large number of routine analyses are required. Much of the support work done by the ACL is very similar to applied analytical chemistry research work.

  11. Advancing Continuous Predictive Analytics Monitoring: Moving from Implementation to Clinical Action in a Learning Health System.

    PubMed

    Keim-Malpass, Jessica; Kitzmiller, Rebecca R; Skeeles-Worley, Angela; Lindberg, Curt; Clark, Matthew T; Tai, Robert; Calland, James Forrest; Sullivan, Kevin; Randall Moorman, J; Anderson, Ruth A

    2018-06-01

    In the intensive care unit, clinicians monitor a diverse array of data inputs to detect early signs of impending clinical demise or improvement. Continuous predictive analytics monitoring synthesizes data from a variety of inputs into a risk estimate that clinicians can observe in a streaming environment. For this to be useful, clinicians must engage with the data in a way that makes sense for their clinical workflow in the context of a learning health system (LHS). This article describes the processes needed to evoke clinical action after initiation of continuous predictive analytics monitoring in an LHS. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Application of surface analytical methods for hazardous situation in the Adriatic Sea: monitoring of organic matter dynamics and oil pollution

    NASA Astrophysics Data System (ADS)

    Pletikapić, Galja; Ivošević DeNardis, Nadica

    2017-01-01

    Surface analytical methods are applied to examine the environmental status of seawaters. The present overview emphasizes the advantages of combining surface analytical methods, applied to hazardous situations in the Adriatic Sea, such as monitoring the first aggregation phases of dissolved organic matter in order to potentially predict massive mucilage formation, and testing of oil spill cleanup. Such an approach, based on fast and direct characterization of organic matter and its high-resolution visualization, provides a continuous-scale description of organic matter from micro- to nanometre scales. The electrochemical method of chronoamperometry at the dropping mercury electrode meets the requirements for monitoring purposes owing to the simple and fast analysis of a large number of natural seawater samples, enabling simultaneous differentiation of organic constituents. In contrast, atomic force microscopy allows direct visualization of biotic and abiotic particles and provides insight into the structural organization of marine organic matter at micro- and nanometre scales. In the future, merging data at different spatial scales, taking into account experimental input on the micrometre scale, observations on the metre scale, and modelling on the kilometre scale, will be important for developing sophisticated technological platforms for knowledge transfer, reports, and maps applicable to marine environmental protection and management of the coastal area, especially for tourism, fishery, and cruiser trafficking.

  13. HPLC method development for evolving applications in the pharmaceutical industry and nanoscale chemistry

    NASA Astrophysics Data System (ADS)

    Castiglione, Steven Louis

    As scientific research trends toward trace levels and smaller architectures, the analytical chemist is often faced with the challenge of quantitating such species in a variety of matrices. The challenge is heightened when the analytes prove to be potentially toxic or possess physical or chemical properties that make traditional analytical methods problematic. In such cases, the successful development of an acceptable quantitative method plays a critical role in the ability to further develop the species under study. This is particularly true for pharmaceutical impurities and nanoparticles (NP). The first portion of the research focuses on the development of a part-per-billion level HPLC method for a substituted phenazine-class pharmaceutical impurity. The development of this method was required because a rapid methodology was needed to quantitatively determine levels of a potentially toxic phenazine moiety in order to ensure patient safety. As the synthetic pathway for the active ingredient was continuously refined to produce progressively lower amounts of the phenazine impurity, increasingly sensitive quantitative methods were required. The approaches evolved across four discrete methods, each employing a unique scheme for analyte detection. All developed methods were evaluated with regard to accuracy, precision, and linear adherence as well as ancillary benefits and detriments; for example, one method in this evolution demonstrated the ability to resolve and detect other species from the phenazine class. The second portion of the research focuses on the development of an HPLC method for the quantitative determination of NP size distributions. The current methodology for the determination of NP sizes employs transmission electron microscopy (TEM), which requires sample drying without particle size alteration and which, in many cases, may prove infeasible due to cost or availability. The feasibility of an HPLC method for NP size characterization evolved across three methods, each employing a different approach for size resolution. These methods were evaluated primarily for sensitivity, which proved to be a substantial hurdle to further development but does not appear to deter future research efforts.

  14. From pixel to voxel: a deeper view of biological tissue by 3D mass spectral imaging

    PubMed Central

    Ye, Hui; Greer, Tyler; Li, Lingjun

    2011-01-01

    Three-dimensional mass spectral imaging (3D MSI) is an exciting field that grants the ability to study a broad mass range of molecular species, ranging from small molecules to large proteins, by creating lateral and vertical distribution maps of selected compounds. Although the general premise behind 3D MSI is simple, factors such as the choice of ionization method, sample handling, software considerations, and many others must be taken into account for the successful design of a 3D MSI experiment. This review provides a brief overview of ionization methods, sample preparation, software types, and technological advancements driving 3D MSI research on a wide range of low- to high-mass analytes. Future perspectives in this field are also provided, concluding that continued development of this powerful analytical tool promises ever-growing applications in the biomedical field. PMID:21320052

  15. Review of levoglucosan in glacier snow and ice studies: Recent progress and future perspectives.

    PubMed

    You, Chao; Xu, Chao

    2018-03-01

    Levoglucosan (LEV) in glacier snow and ice layers provides a fingerprint of fire activity, ranging from modern air pollution to ancient fire emissions. In this study, we review recent progress in our understanding and application of LEV in glaciers, including analytical methods, transport and post-depositional processes, and historical records. We firstly summarize progress in analytical methods for determination of LEV in glacier snow and ice. Then, we discuss the processes influencing the records of LEV in snow and ice layers. Finally, we make some recommendations for future work, such as assessing the stability of LEV and obtaining continuous records, to increase reliability of the reconstructed ancient fire activity. This review provides an update for researchers working with LEV and will facilitate the further use of LEV as a biomarker in paleo-fire studies based on ice core records. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Nonylphenol: Properties, legislation, toxicity and determination.

    PubMed

    Araujo, Frederico G DE; Bauerfeldt, Glauco F; Cid, Yara Peluso

    2017-08-07

    This paper gathers and discusses important information about nonylphenol, including its physicochemical properties, toxicity, and analytical methods in various matrices. As a degradation product of ethoxylated alkylphenols, nonylphenol presents a higher degree of reactivity than its precursors. Due to its harmful effects on the environment, the use and production of nonylphenol, together with its precursors, have been banned in European Union countries. The USEPA drinking water quality guidance recommends a maximum concentration of 28 µg L-1 for fresh water. In Brazil, there is no clear legislation setting a maximum concentration for nonylphenol. Because of this lack of regulation, continuous monitoring of this pollutant in environmental samples is necessary. This paper aims to encourage further studies on nonylphenol, which is seen as a critical environmental pollutant. Proper monitoring requires analytical methods that are reliable and easy to perform in routine analysis.

  17. Calculating mercury loading to the tidal Hudson River, New York, using rating curve and surrogate methodologies

    USGS Publications Warehouse

    Wall, G.R.; Ingleston, H.H.; Litten, S.

    2005-01-01

    Total mercury (THg) load in rivers is often calculated from a site-specific "rating-curve" based on the relation between THg concentration and river discharge along with a continuous record of river discharge. However, there is no physical explanation as to why river discharge should consistently predict THg or any other suspended analyte. THg loads calculated by the rating-curve method were compared with those calculated by a "continuous surrogate concentration" (CSC) method in which a relation between THg concentration and suspended-sediment concentration (SSC) is constructed; THg loads then can be calculated from the continuous record of SSC and river discharge. The rating-curve and CSC methods, respectively, indicated annual THg loads of 46.4 and 75.1 kg for the Mohawk River, and 52.9 and 33.1 kg for the upper Hudson River. Differences between the results of the two methods are attributed to the inability of the rating-curve method to adequately characterize atypical high flows such as an ice-dam release, or to account for hysteresis, which typically degrades the strength of the relation between stream discharge and concentration of material in suspension. © Springer 2005.
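
    The two load-estimation approaches compared above differ only in which variable predicts THg concentration: discharge (rating curve) or suspended-sediment concentration (surrogate). The sketch below illustrates both with hypothetical regression coefficients and synthetic daily records, not the study's data.

        import numpy as np

        def load_kg(conc_ng_per_L, q_m3_per_s, dt_s=86400.0):
            """Integrate load = sum(C * Q * dt); 1 ng/L equals 1e-9 kg per m^3."""
            return float(np.sum(conc_ng_per_L * 1e-9 * q_m3_per_s * dt_s))

        rng = np.random.default_rng(1)
        days = 365
        Q = rng.lognormal(mean=5.0, sigma=0.6, size=days)   # daily discharge, m^3/s
        SSC = 0.05 * Q ** 1.2                                # suspended sediment, mg/L

        thg_rating = 0.8 * Q ** 0.5   # hypothetical rating curve: THg from discharge, ng/L
        thg_csc = 0.15 * SSC + 0.5    # hypothetical surrogate relation: THg from SSC, ng/L

        print("rating-curve load   :", round(load_kg(thg_rating, Q), 1), "kg/yr")
        print("surrogate (CSC) load:", round(load_kg(thg_csc, Q), 1), "kg/yr")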

  18. COHERENT NETWORK ANALYSIS FOR CONTINUOUS GRAVITATIONAL WAVE SIGNALS IN A PULSAR TIMING ARRAY: PULSAR PHASES AS EXTRINSIC PARAMETERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Mohanty, Soumya D.; Jenet, Fredrick A., E-mail: ywang12@hust.edu.cn

    2015-12-20

    Supermassive black hole binaries are one of the primary targets of gravitational wave (GW) searches using pulsar timing arrays (PTAs). GW signals from such systems are well represented by parameterized models, allowing the standard Generalized Likelihood Ratio Test (GLRT) to be used for their detection and estimation. However, there is a dichotomy in how the GLRT can be implemented for PTAs: there are two possible ways in which one can split the set of signal parameters for semi-analytical and numerical extremization. The straightforward extension of the method used for continuous signals in ground-based GW searches, where the so-called pulsar phase parameters are maximized numerically, was addressed in an earlier paper. In this paper, we report the first study of the performance of the second approach where the pulsar phases are maximized semi-analytically. This approach is scalable since the number of parameters left over for numerical optimization does not depend on the size of the PTA. Our results show that for the same array size (9 pulsars), the new method performs somewhat worse in parameter estimation, but not in detection, than the previous method where the pulsar phases were maximized numerically. The origin of the performance discrepancy is likely to be in the ill-posedness that is intrinsic to any network analysis method. However, the scalability of the new method allows the ill-posedness to be mitigated by simply adding more pulsars to the array. This is shown explicitly by taking a larger array of pulsars.

  19. Recent Trends in Analytical Methods to Determine New Psychoactive Substances in Hair.

    PubMed

    Kyriakou, Chrystalla; Pellegrini, Manuela; García-Algar, Oscar; Marinelli, Enrico; Zaami, Simona

    2017-01-01

    New Psychoactive Substances (NPS) belong to several chemical classes, including phenethylamines, piperazines, synthetic cathinones, and synthetic cannabinoids. The development and validation of analytical methods for the determination of NPS in both traditional and alternative matrices is of crucial importance to study drug metabolism and to associate consumption with clinical outcomes and eventual intoxication symptoms. Among the different biological matrices, hair is the one with the widest time window to investigate drug-related history and demonstrate past intake. The aim of this paper was to review the trends of the rapidly evolving analytical methods for the determination of NPS in hair and the usefulness of these methods when applied to real cases. A number of rapid and sensitive methods for the determination of NPS in the hair matrix have recently been published, most of them using liquid chromatography coupled to mass spectrometry. Hair digestion followed by solid-phase extraction or liquid-liquid extraction has been described, as well as extraction in organic solvents. For most of the methods, limits of quantification of picograms per milligram of hair were obtained. The measured concentrations for most of the NPS in real samples were in the range of picograms of drug per milligram of hair. Interpretation of the results and the lack of cut-off values for discriminating between chronic consumption and occasional use or external contamination are still challenging. Methods for the determination of NPS in hair continue to emerge, aiming to include as many NPS as possible due to the great demand for their detection. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  20. Downward continuation of airborne gravity data by means of the change of boundary approach

    NASA Astrophysics Data System (ADS)

    Mansi, A. H.; Capponi, M.; Sampietro, D.

    2018-03-01

    Within the modelling of gravity data, a common practice is the upward/downward continuation of the signal, i.e. the process of continuing the gravitational signal in the vertical direction away from or closer to the sources, respectively. The gravity field, being a potential field, satisfies Laplace's equation outside the masses, which means that this analytical continuation can be performed unambiguously only in a source-free domain. The analytical continuation problem has been solved both in the space and spectral domains by exploiting different algorithms. As is well known, the downward continuation operator, unlike the upward one, is unstable, owing to its spectral characteristics similar to those of a high-pass filter, and several regularization methods have been proposed to stabilize it. In this work, an iterative procedure to downward/upward continue gravity field observations acquired at different altitudes is proposed. The methodology is based on the change of boundary principle and has been expressly designed for aerogravimetric observations acquired for geophysical exploration purposes. In this field of application several simplifications can usually be applied, owing mainly to the specific characteristics of airborne surveys, which are usually flown at almost constant altitude as close as possible to the terrain. As shown in the present work, these characteristics allow the downward continuation to be performed without the need for any regularization. The performance of the proposed methodology has been evaluated by means of a numerical test on real data acquired in the South of Australia. The test shows that it is possible to move aerogravimetric data, acquired along tracks with a maximum height difference of about 250 m, with accuracies of the order of 10^{-3} mGal.
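
    The spectral behaviour noted above, with upward continuation damping and downward continuation amplifying short wavelengths, can be made concrete with a one-dimensional FFT sketch. This is a generic illustration of the continuation factor exp(-|k| dz), not the change-of-boundary algorithm proposed in the paper; the profile and spacings are assumed values.

        import numpy as np

        def continue_profile(g, dx, dz):
            """Continue a 1D gravity profile g(x) by dz in the vertical.

            dz > 0 : upward continuation (stable, factor exp(-|k| dz))
            dz < 0 : downward continuation (unstable, factor exp(+|k| |dz|))
            """
            k = 2.0 * np.pi * np.fft.rfftfreq(g.size, d=dx)   # wavenumbers [rad/m]
            G = np.fft.rfft(g)
            return np.fft.irfft(G * np.exp(-np.abs(k) * dz), n=g.size)

        # Synthetic anomaly sampled every 50 m along a 20 km line.
        x = np.arange(0.0, 20000.0, 50.0)
        g = np.exp(-((x - 10000.0) / 1500.0) ** 2)             # mGal
        g_up = continue_profile(g, dx=50.0, dz=+250.0)          # 250 m upward
        g_back = continue_profile(g_up, dx=50.0, dz=-250.0)     # naive downward continuation
        print(np.max(np.abs(g_back - g)))                       # small only because data are noise-free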

  1. Computing the Evans function via solving a linear boundary value ODE

    NASA Astrophysics Data System (ADS)

    Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn

    2015-11-01

    Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.

  2. Analytic Approximations to the Free Boundary and Multi-dimensional Problems in Financial Derivatives Pricing

    NASA Astrophysics Data System (ADS)

    Lau, Chun Sing

    This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first-derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising the accuracy. The multi-stage scheme further allows the approximate results to converge systematically to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread options. Since the final formula is in closed form, all the hedging parameters can also be derived in closed form. Numerical examples demonstrate that the pricing and hedging errors are in general less than 1% relative to the benchmark prices obtained by numerical integration or Monte Carlo simulation. By exploiting an explicit relationship between the option price and the underlying probability distribution, we further derive an approximate distribution function for the general basket-spread variable. It can be used to approximate the transition probability distribution of any linear combination of correlated GBMs. Finally, an implicit perturbation is applied to reduce the pricing errors by factors of up to 100. When compared against the existing methods, the basket-spread option formula coupled with the implicit perturbation turns out to be one of the most robust and accurate approximation methods.
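
    For reference, the Kirk (1995) two-asset spread-option approximation that the thesis generalizes can be sketched as follows; the parameter values are illustrative, and the thesis's closed-form multi-asset extension and implicit perturbation are not reproduced here.

        import math
        from statistics import NormalDist

        N = NormalDist().cdf

        def kirk_spread_call(F1, F2, K, sigma1, sigma2, rho, T, r):
            """Kirk (1995) approximation for a European call on the spread S1 - S2
            with strike K, written in terms of the forward prices F1 and F2."""
            a = F2 / (F2 + K)
            sigma = math.sqrt(sigma1**2 - 2.0 * rho * sigma1 * sigma2 * a + (sigma2 * a)**2)
            d1 = (math.log(F1 / (F2 + K)) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
            d2 = d1 - sigma * math.sqrt(T)
            return math.exp(-r * T) * (F1 * N(d1) - (F2 + K) * N(d2))

        # Illustrative parameters only.
        print(kirk_spread_call(F1=110.0, F2=100.0, K=5.0, sigma1=0.3,
                               sigma2=0.25, rho=0.6, T=1.0, r=0.02))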

  3. Visual analytics in healthcare education: exploring novel ways to analyze and represent big data in undergraduate medical education

    PubMed Central

    Nilsson, Gunnar; Zary, Nabil

    2014-01-01

    Introduction. The big data present in the medical curriculum that informs undergraduate medical education is beyond human abilities to perceive and analyze. The medical curriculum is the main tool used by teachers and directors to plan, design, and deliver teaching and assessment activities and student evaluations in medical education in a continuous effort to improve it. Big data remains largely unexploited for medical education improvement purposes. The emerging research field of visual analytics has the advantage of combining data analysis and manipulation techniques, information and knowledge representation, and human cognitive strength to perceive and recognize visual patterns. Nevertheless, there is a lack of research on the use and benefits of visual analytics in medical education. Methods. The present study is based on analyzing the data in the medical curriculum of an undergraduate medical program as it concerns teaching activities, assessment methods and learning outcomes in order to explore visual analytics as a tool for finding ways of representing big data from undergraduate medical education for improvement purposes. Cytoscape software was employed to build networks of the identified aspects and visualize them. Results. After the analysis of the curriculum data, eleven aspects were identified. Further analysis and visualization of the identified aspects with Cytoscape resulted in building an abstract model of the examined data that presented three different approaches; (i) learning outcomes and teaching methods, (ii) examination and learning outcomes, and (iii) teaching methods, learning outcomes, examination results, and gap analysis. Discussion. This study identified aspects of medical curriculum that play an important role in how medical education is conducted. The implementation of visual analytics revealed three novel ways of representing big data in the undergraduate medical education context. It appears to be a useful tool to explore such data with possible future implications on healthcare education. It also opens a new direction in medical education informatics research. PMID:25469323

  4. Visual analytics in healthcare education: exploring novel ways to analyze and represent big data in undergraduate medical education.

    PubMed

    Vaitsis, Christos; Nilsson, Gunnar; Zary, Nabil

    2014-01-01

    Introduction. The big data present in the medical curriculum that informs undergraduate medical education is beyond human abilities to perceive and analyze. The medical curriculum is the main tool used by teachers and directors to plan, design, and deliver teaching and assessment activities and student evaluations in medical education in a continuous effort to improve it. Big data remains largely unexploited for medical education improvement purposes. The emerging research field of visual analytics has the advantage of combining data analysis and manipulation techniques, information and knowledge representation, and human cognitive strength to perceive and recognize visual patterns. Nevertheless, there is a lack of research on the use and benefits of visual analytics in medical education. Methods. The present study is based on analyzing the data in the medical curriculum of an undergraduate medical program as it concerns teaching activities, assessment methods and learning outcomes in order to explore visual analytics as a tool for finding ways of representing big data from undergraduate medical education for improvement purposes. Cytoscape software was employed to build networks of the identified aspects and visualize them. Results. After the analysis of the curriculum data, eleven aspects were identified. Further analysis and visualization of the identified aspects with Cytoscape resulted in building an abstract model of the examined data that presented three different approaches; (i) learning outcomes and teaching methods, (ii) examination and learning outcomes, and (iii) teaching methods, learning outcomes, examination results, and gap analysis. Discussion. This study identified aspects of medical curriculum that play an important role in how medical education is conducted. The implementation of visual analytics revealed three novel ways of representing big data in the undergraduate medical education context. It appears to be a useful tool to explore such data with possible future implications on healthcare education. It also opens a new direction in medical education informatics research.

  5. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.

  6. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
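
    The final linear step described above, solving for the spline's control points once the data parameterization and knot vector are fixed, can be sketched with an SVD-based least-squares solve; the data, knots, degree, and parameterization below are illustrative assumptions, not the paper's benchmark examples.

        import numpy as np
        from scipy.interpolate import BSpline

        def fit_spline_curve(u, points, knots, degree=3):
            """Least-squares fit of B-spline control points for a fixed parameterization u
            and fixed knot vector; np.linalg.lstsq solves the linear system via SVD."""
            n_ctrl = len(knots) - degree - 1
            A = np.zeros((len(u), n_ctrl))           # collocation (design) matrix
            for j in range(n_ctrl):
                coeffs = np.zeros(n_ctrl)
                coeffs[j] = 1.0                      # j-th basis function
                A[:, j] = BSpline(knots, coeffs, degree)(u)
            ctrl, *_ = np.linalg.lstsq(A, points, rcond=None)
            return ctrl

        # Noisy 2D data points with a simple uniform parameterization (illustrative only).
        rng = np.random.default_rng(0)
        u = np.linspace(0.0, 1.0, 50)
        pts = np.column_stack([np.cos(2 * np.pi * u), np.sin(2 * np.pi * u)])
        pts += rng.normal(0.0, 0.01, pts.shape)
        knots = np.concatenate(([0.0] * 4, np.linspace(0.0, 1.0, 6)[1:-1], [1.0] * 4))
        ctrl = fit_spline_curve(u, pts, knots)
        print(ctrl.shape)   # (8, 2) control points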

  7. Increasing productivity for the analysis of trace contaminants in food by gas chromatography-mass spectrometry using automated liner exchange, backflushing and heart-cutting.

    PubMed

    David, Frank; Tienpont, Bart; Devos, Christophe; Lerch, Oliver; Sandra, Pat

    2013-10-25

    Laboratories focusing on residue analysis in food are continuously seeking to increase sample throughput by minimizing sample preparation. Generic sample extraction methods such as QuEChERS lack selectivity and consequently extracts are not free from non-volatile material that contaminates the analytical system. Co-extracted matrix constituents interfere with target analytes, even if highly sensitive and selective GC-MS/MS is used. A number of GC approaches are described that can be used to increase laboratory productivity. These techniques include automated inlet liner exchange and column backflushing for preservation of the performance of the analytical system and heart-cutting two-dimensional GC for increasing sensitivity and selectivity. The application of these tools is illustrated by the analysis of pesticides in vegetables and fruits, PCBs in milk powder and coplanar PCBs in fish. It is demonstrated that considerable increase in productivity can be achieved by decreasing instrument down-time, while analytical performance is equal or better compared to conventional trace contaminant analysis. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Argentation chromatography coupled to ultrahigh-resolution mass spectrometry for the separation of a heavy crude oil.

    PubMed

    Molnárné Guricza, Lilla; Schrader, Wolfgang

    2017-02-10

    Simplification of highly complex mixtures such as crude oil by chromatographic methods makes it possible to obtain more detailed information about the composition of the analyte. Separation by argentation chromatography is achieved through interactions of different strength between silver ions (Ag+), immobilized through a spacer on the silica gel surface, and the π-bonds of the analytes. Heavy crude oils contain compounds with a high number of heteroatoms (N, O, S) and a high degree of unsaturation, making them an ideal analyte for argentation chromatography. The direct coupling of argentation chromatography and ultrahigh-resolution mass spectrometry allows the separation of the many different compounds to be tracked continuously by retention time and allows sensitive detection at the molecular level. Direct injection of a heavy crude oil into an ultrahigh-resolution mass spectrometer showed components with a double bond equivalent (DBE) of up to 25, whereas analytes with a DBE of up to 35 could be detected only after separation by argentation chromatography. The reduced complexity achieved by the separation helps increase the depth of information. Copyright © 2016. Published by Elsevier B.V.

  9. Ionic liquids in solid-phase microextraction: a review.

    PubMed

    Ho, Tien D; Canestraro, Anthony J; Anderson, Jared L

    2011-06-10

    Solid-phase microextraction (SPME) has undergone a surge in popularity within the field of analytical chemistry in the past two decades since its introduction. Owing to its nature of extraction, SPME has become widely known as a quick and cost-effective sample preparation technique. Although SPME has demonstrated extraordinary versatility in sampling capabilities, the technique continues to experience a tremendous growth in innovation. Presently, increasing efforts have been directed towards the engineering of novel sorbent material in order to expand the applicability of SPME for a wider range of analytes and matrices. This review highlights the application of ionic liquids (ILs) and polymeric ionic liquids (PILs) as innovative sorbent materials for SPME. Characterized by their unique physico-chemical properties, these compounds can be structurally-designed to selectively extract target analytes based on unique molecular interactions. To examine the advantages of IL and PIL-based sorbent coatings in SPME, the field is reviewed by gathering available experimental data and exploring the sensitivity, linear calibration range, as well as detection limits for a variety of target analytes in the methods that have been developed. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Solution of the advection-dispersion equation: Continuous load of finite duration

    USGS Publications Warehouse

    Runkel, R.L.

    1996-01-01

    Field studies of solute fate and transport in streams and rivers often involve an experimental release of solutes at an upstream boundary for a finite period of time. A review of several standard references on surface-water-quality modeling indicates that the analytical solution to the constant-parameter advection-dispersion equation for this type of boundary condition has been generally overlooked. Here an exact analytical solution that considers a continuous load of finite duration is compared to an approximate analytical solution presented elsewhere. Results indicate that the exact analytical solution should be used for verification of numerical solutions and other solute-transport problems wherein a high level of accuracy is required. © ASCE.
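
    For readers who want a concrete reference, a widely used closed form for this boundary condition is the step (Ogata-Banks-type) solution superposed with a delayed negative step, which is how a continuous load of finite duration is commonly handled for the constant-parameter equation. The sketch below implements that superposition; it is offered as an assumed illustration of the kind of formula discussed in this record, not as the exact solution derived by the author, whose boundary treatment may differ.

      import numpy as np
      from scipy.special import erfc

      def step_solution(x, t, v, D, c0):
          """Concentration from a continuous step load of strength c0 at x = 0 starting at t = 0
          (Ogata-Banks form for the 1-D constant-parameter advection-dispersion equation)."""
          t = np.asarray(t, dtype=float)
          with np.errstate(divide="ignore", invalid="ignore"):
              a = erfc((x - v * t) / (2.0 * np.sqrt(D * t)))
              b = np.exp(v * x / D) * erfc((x + v * t) / (2.0 * np.sqrt(D * t)))
              c = 0.5 * c0 * (a + b)
          return np.where(t > 0.0, c, 0.0)          # zero before the release starts

      def finite_duration_solution(x, t, v, D, c0, tau):
          """Superpose a positive step at t = 0 and a negative step at t = tau."""
          return step_solution(x, t, v, D, c0) - step_solution(x, np.asarray(t) - tau, v, D, c0)

      # Example: breakthrough curve 100 m downstream of a 2-hour tracer release.
      t = np.linspace(0.0, 6.0 * 3600.0, 500)                          # s
      c = finite_duration_solution(100.0, t, v=0.05, D=0.5, c0=1.0, tau=2.0 * 3600.0)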

  11. A modified Dodge algorithm for the parabolized Navier-Stokes equations and compressible duct flows

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.; Dwoyer, D. M.

    1983-01-01

    A revised version of Dodge's split-velocity method for numerical calculation of compressible duct flow was developed. The revision incorporates balancing of mass flow rates on each marching step in order to maintain front-to-back continuity during the calculation. The (checkerboard) zebra algorithm is applied to solution of the three dimensional continuity equation in conservative form. A second-order A-stable linear multistep method is employed in effecting a marching solution of the parabolized momentum equations. A checkerboard iteration is used to solve the resulting implicit nonlinear systems of finite-difference equations which govern stepwise transition. Qualitative agreement with analytical predictions and experimental results was obtained for some flows with well-known solutions. Previously announced in STAR as N82-16363

  12. Screening new psychoactive substances in urban wastewater using high resolution mass spectrometry.

    PubMed

    González-Mariño, Iria; Gracia-Lor, Emma; Bagnati, Renzo; Martins, Claudia P B; Zuccato, Ettore; Castiglioni, Sara

    2016-06-01

    Analysis of drug residues in urban wastewater could complement epidemiological studies in detecting the use of new psychoactive substances (NPS), a continuously changing group of drugs hard to monitor by classical methods. We initially selected 52 NPS potentially used in Italy based on seizure data and consumption alerts provided by the Antidrug Police Department and the National Early Warning System. Using a linear ion trap-Orbitrap high resolution mass spectrometer, we designed a suspect screening and a target method approach and compared them for the analysis of 24 h wastewater samples collected at the treatment plant influents of four Italian cities. This highlighted the main limitations of these two approaches, so we could propose requirements for future research. A library of MS/MS spectra of 16 synthetic cathinones and 19 synthetic cannabinoids, for which analytical standards were acquired, was built at different collision energies and is available on request. The stability of synthetic cannabinoids was studied in analytical standards and wastewater, identifying the best analytical conditions for future studies. To the best of our knowledge, these are the first stability data on NPS. Few suspects were identified in Italian wastewater samples, in accordance with recent epidemiological data reporting a very low prevalence of use of NPS in Italy. This study outlines an analytical approach for NPS identification and measurement in urban wastewater and for estimating their use in the population.

  13. Isolation and determination of ivermectin in post-mortem and in vivo tissues of dung beetles using a continuous solid phase extraction method followed by LC-ESI+-MS/MS

    PubMed Central

    Ortiz, Antonio J.; Cortez, Vieyle; Azzouz, Abdelmonaim

    2017-01-01

    A new analytical method based on solvent extraction, followed by continuous solid-phase extraction (SPE) clean-up using a polymeric sorbent, was demonstrated to be applicable for the detection of ivermectin in complex biological matrices of dung beetles (hemolymph, excreta or dry tissues) using liquid chromatography combined with positive electrospray ionization tandem mass spectrometry (LC/ESI+–MS/MS). Using a signal-to-noise ratio of 3:1, the limit of detection (LOD) in the insect matrices at trace levels was 0.01 ng g–1 and the limit of quantification (LOQ) was 0.1 ng g–1. The proposed method was successfully used to quantitatively determine the levels of ivermectin in the analysis of small samples in in vivo and post mortem samples, demonstrating the usefulness for quantitative analyses that are focused on future pharmacokinetic and bioavailability studies in insects and the establishment of a new protocol to study the impact of ivermectin on non-target arthropods such as dung beetles and other insects that are related with the “dung community”. Because satisfactory precision and accuracy values were obtained in both in vivo matrices, we suggest that the method can be consistently used for quantitative determinations that are focused on future pharmacokinetic and bioavailability studies in insects. Furthermore, this new analytical method was successfully applied to biological samples of dead dung beetles from the field suggesting that the method can be used to establish a new routine analysis of ivermectin residues in insect carcasses that is applied to complement typical mortality tests. PMID:28207908

  14. Photodiode design study. Final report, May--December 1977

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamorte, M.F.

    1977-12-01

    The purpose of this work was to apply the analytical method developed for single junction and multijunction solar cells, Contract No. F33615-76-C-1283, to photodiodes and avalanche photodiodes. It was anticipated that this analytical method would advance the state-of-the-art because of the following: (1) the analysis considers the total photodetector multilayer structure rather than just the depleted region; (2) a model of the complete band structure is analyzed; (3) application of the integral form of the continuity equation is used; (4) structures that reduce dark current and/or increase the ratio of photocurrent to dark current are obtained; and (5) structures that increase spectral response in the depleted region and reduce response in other regions of the diode are obtained. The integral form of the continuity equation developed for solar cells is the steady-state or time-independent form. The contract specified that the time-independent equation would only be employed to determine applicability to photodetectors. The GaAsSb photodiode under development at Rockwell International, Thousand Oaks, California was used to determine the applicability to photodetectors. The diode structure is composed of four layers grown on a substrate. The analysis presents calculations of spectral response. This parameter is used in this study to optimize the structure.

  15. Downward continuation of the free-air gravity anomalies to the ellipsoid using the gradient solution and terrain correction: An attempt of global numerical computations

    NASA Technical Reports Server (NTRS)

    Wang, Y. M.

    1989-01-01

    The formulas for the determination of the coefficients of the spherical harmonic expansion of the disturbing potential of the earth are defined for data given on a sphere. In order to determine the spherical harmonic coefficients, the gravity anomalies have to be analytically downward continued from the earth's surface to a sphere, or at least to the ellipsoid. The goal is to continue the gravity anomalies from the earth's surface downward to the ellipsoid using recent elevation models. The basic method for the downward continuation is the gradient solution (the g1 term). The terrain correction was also computed because of the role it can play as a correction term when calculating harmonic coefficients from surface gravity data. The fast Fourier transform was applied to the computations.
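
    As a rough companion to the record above, the sketch below shows the planar-approximation, wavenumber-domain form of downward continuation, in which the FFT of a gridded anomaly is multiplied by exp(+|k|h) and usually low-pass filtered, since the operation amplifies short wavelengths. This is only an assumed flat-earth illustration of FFT-based continuation; it is not the spherical gradient (g1) solution used in the report, and the function name downward_continue is mine.

      import numpy as np

      def downward_continue(anomaly, dx, dy, h, k_cut=None):
          """Downward continue a gridded anomaly by a height h in the planar approximation.
          Continuation multiplies the 2-D spectrum by exp(+|k| h); an optional cutoff
          wavenumber k_cut (radians per length unit) suppresses the amplified noise."""
          ny, nx = anomaly.shape
          kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
          ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
          k = np.sqrt(kx[np.newaxis, :] ** 2 + ky[:, np.newaxis] ** 2)
          spec = np.fft.fft2(anomaly) * np.exp(k * h)
          if k_cut is not None:
              spec[k > k_cut] = 0.0                 # crude regularization
          return np.real(np.fft.ifft2(spec))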

  16. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics

    PubMed Central

    2016-01-01

    Background We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. Objective To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. Methods The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Results Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. Conclusions IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise. PMID:27729304

  17. A QC approach to the determination of day-to-day reproducibility and robustness of LC-MS methods for global metabolite profiling in metabonomics/metabolomics.

    PubMed

    Gika, Helen G; Theodoridis, Georgios A; Earll, Mark; Wilson, Ian D

    2012-09-01

    An approach to the determination of day-to-day analytical robustness of LC-MS-based methods for global metabolic profiling using a pooled QC sample is presented for the evaluation of metabonomic/metabolomic data. A set of 60 urine samples were repeatedly analyzed on five different days and the day-to-day reproducibility of the data obtained was determined. Multivariate statistical analysis was performed with the aim of evaluating variability and selected peaks were assessed and validated in terms of retention time stability, mass accuracy and intensity. The methodology enables the repeatability/reproducibility of extended analytical runs in large-scale studies to be determined, allowing the elimination of analytical (as opposed to biological) variability, in order to discover true patterns and correlations within the data. The day-to-day variability of the data revealed by this process suggested that, for this particular system, 3 days continuous operation was possible without the need for maintenance and cleaning. Variation was generally based on signal intensity changes over the 7-day period of the study, and was mainly a result of source contamination.

  18. On the solution of the continuity equation for precipitating electrons in solar flares

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emslie, A. Gordon; Holman, Gordon D.; Litvinenko, Yuri E., E-mail: emslieg@wku.edu, E-mail: gordon.d.holman@nasa.gov

    2014-09-01

    Electrons accelerated in solar flares are injected into the surrounding plasma, where they are subjected to the influence of collisional (Coulomb) energy losses. Their evolution is modeled by a partial differential equation describing continuity of electron number. In a recent paper, Dobranskis and Zharkova claim to have found an 'updated exact analytical solution' to this continuity equation. Their solution contains an additional term that drives an exponential decrease in electron density with depth, leading them to assert that the well-known solution derived by Brown, Syrovatskii and Shmeleva, and many others is invalid. We show that the solution of Dobranskis and Zharkova results from a fundamental error in the application of the method of characteristics and is hence incorrect. Further, their comparison of the 'new' analytical solution with numerical solutions of the Fokker-Planck equation fails to lend support to their result. We conclude that Dobranskis and Zharkova's solution of the universally accepted and well-established continuity equation is incorrect, and that their criticism of the correct solution is unfounded. We also demonstrate the formal equivalence of the approaches of Syrovatskii and Shmeleva and Brown, with particular reference to the evolution of the electron flux and number density (both differential in energy) in a collisional thick target. We strongly urge use of these long-established, correct solutions in future works.

  19. Big Data Analytics Methodology in the Financial Industry

    ERIC Educational Resources Information Center

    Lawler, James; Joseph, Anthony

    2017-01-01

    Firms in industry continue to be attracted by the benefits of Big Data Analytics. The benefits of Big Data Analytics projects may not be as evident as frequently indicated in the literature. The authors of the study evaluate factors in a customized methodology that may increase the benefits of Big Data Analytics projects. Evaluating firms in the…

  20. Analytic Evolution of Singular Distribution Amplitudes in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tandogan Kunkel, Asli

    2014-08-01

    Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities such as nonzero values at the end points of the support region, jumps at some points inside the support region and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA, an antisymmetric flat DA, and then use the method for evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.

  1. LipidQC: Method Validation Tool for Visual Comparison to SRM 1950 Using NIST Interlaboratory Comparison Exercise Lipid Consensus Mean Estimate Values.

    PubMed

    Ulmer, Candice Z; Ragland, Jared M; Koelmel, Jeremy P; Heckert, Alan; Jones, Christina M; Garrett, Timothy J; Yost, Richard A; Bowden, John A

    2017-12-19

    As advances in analytical separation techniques, mass spectrometry instrumentation, and data processing platforms continue to spur growth in the lipidomics field, more structurally unique lipid species are detected and annotated. The lipidomics community is in need of benchmark reference values to assess the validity of various lipidomics workflows in providing accurate quantitative measurements across the diverse lipidome. LipidQC addresses the harmonization challenge in lipid quantitation by providing a semiautomated process, independent of analytical platform, for visual comparison of experimental results of National Institute of Standards and Technology Standard Reference Material (SRM) 1950, "Metabolites in Frozen Human Plasma", against benchmark consensus mean concentrations derived from the NIST Lipidomics Interlaboratory Comparison Exercise.

  2. Using design of experiments to optimize derivatization with methyl chloroformate for quantitative analysis of the aqueous phase from hydrothermal liquefaction of biomass.

    PubMed

    Madsen, René Bjerregaard; Jensen, Mads Mørk; Mørup, Anders Juul; Houlberg, Kasper; Christensen, Per Sigaard; Klemmer, Maika; Becker, Jacob; Iversen, Bo Brummerstedt; Glasius, Marianne

    2016-03-01

    Hydrothermal liquefaction is a promising technique for the production of bio-oil. The process produces an oil phase, a gas phase, a solid residue, and an aqueous phase. Gas chromatography coupled with mass spectrometry is used to analyze the complex aqueous phase. Especially small organic acids and nitrogen-containing compounds are of interest. The efficient derivatization reagent methyl chloroformate was used to make analysis of the complex aqueous phase from hydrothermal liquefaction of dried distillers grains with solubles possible. A circumscribed central composite design was used to optimize the responses of both derivatized and nonderivatized analytes, which included small organic acids, pyrazines, phenol, and cyclic ketones. Response surface methodology was used to visualize significant factors and identify optimized derivatization conditions (volumes of methyl chloroformate, NaOH solution, methanol, and pyridine). Twenty-nine analytes of small organic acids, pyrazines, phenol, and cyclic ketones were quantified. An additional three analytes were pseudoquantified with use of standards with similar mass spectra. Calibration curves with high correlation coefficients were obtained, in most cases R² > 0.991. Method validation was evaluated with repeatability, and spike recoveries of all 29 analytes were obtained. The 32 analytes were quantified in samples from the commissioning of a continuous flow reactor and in samples from recirculation experiments involving the aqueous phase. The results indicated when the steady-state condition of the flow reactor was obtained and the effects of recirculation. The validated method will be especially useful for investigations of the effect of small organic acids on the hydrothermal liquefaction process.
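
    For orientation, a circumscribed central composite design of the kind named in the abstract can be generated programmatically: two-level factorial points, axial points at ±α, and replicated center points. The sketch below uses the rotatability choice α = (2^k)^(1/4) for k = 4 factors (the four derivatization volumes); the actual run counts and factor levels used in the paper are not reproduced here, and the function name circumscribed_ccd is illustrative.

      import itertools
      import numpy as np

      def circumscribed_ccd(k, n_center=6):
          """Coded design matrix for a rotatable circumscribed central composite design."""
          factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
          alpha = (2.0 ** k) ** 0.25                 # rotatability criterion
          axial = np.zeros((2 * k, k))
          for j in range(k):
              axial[2 * j, j] = -alpha
              axial[2 * j + 1, j] = alpha
          center = np.zeros((n_center, k))
          return np.vstack([factorial, axial, center])

      # Four coded factors: volumes of methyl chloroformate, NaOH solution, methanol, pyridine.
      design = circumscribed_ccd(k=4, n_center=6)    # 16 factorial + 8 axial + 6 center = 30 runs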

  3. Thallium as a tracer for preindustrial volcanic eruptions in an ice core record from Illimani, Bolivia.

    PubMed

    Kellerhals, Thomas; Tobler, Leonhard; Brütsch, Sabina; Sigl, Michael; Wacker, Lukas; Gäggeler, Heinz W; Schwikowski, Margit

    2010-02-01

    Trace element records from glacier and ice sheet archives provide insights into biogeochemical cycles, atmospheric circulation changes, and anthropogenic pollution history. We present the first continuous high-resolution thallium (Tl) record, derived from an accurately dated ice core from tropical South America, and discuss Tl as a tracer for volcanic eruptions. We identify four prominent Tl peaks and propose that they represent signals from the massive explosive eruptions of the "unknown 1258" A.D. volcano, of Kuwae ( approximately 1450 A.D.), Tambora (1815 A.D.), and Krakatoa (1883 A.D.). The highly resolved record was obtained with an improved setup for the continuous analysis of trace elements in ice with inductively coupled plasma sector field mass spectrometry (ICP-SFMS). The new setup allowed for a stronger initial acidification of the meltwater and shorter tubing length, thereby reducing the risk of memory effects and losses of analytes to the capillary walls. With a comparison of the continuous method to the established conventional decontamination and analysis procedure for discrete samples, we demonstrate the accuracy of the continuous method for Tl analyses.

  4. Continuous Modeling of Calcium Transport Through Biological Membranes

    NASA Astrophysics Data System (ADS)

    Jasielec, J. J.; Filipek, R.; Szyszkiewicz, K.; Sokalski, T.; Lewenstam, A.

    2016-08-01

    In this work, an approach to the modeling of biological membranes in which the membrane is treated as a continuous medium is presented. The Nernst-Planck-Poisson model, including the Poisson equation for the electric potential, is used to describe the transport of ions in the mitochondrial membrane, the interface which joins the mitochondrial matrix with the cellular cytosol. The transport of calcium ions is considered. The concentration of calcium inside the mitochondrion is not known accurately because different analytical methods give dramatically different results. We explain these differences mathematically by assuming a complexing reaction inside the mitochondrion and the existence of a calcium set-point (the concentration of calcium in the cytosol below which calcium stops entering the mitochondrion).
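
    To make the transport part of the model concrete, the sketch below advances a one-dimensional Nernst-Planck (drift-diffusion) equation for a single ionic species with an explicit finite-difference update; the electric potential is taken as a prescribed profile rather than being solved self-consistently from the Poisson equation, so this is only a reduced, assumed illustration of the continuous-medium description, not the authors' full Nernst-Planck-Poisson model. The parameter values and the name nernst_planck_step are placeholders.

      import numpy as np

      def nernst_planck_step(c, phi, D, z, dx, dt, F=96485.0, R=8.314, T=310.0):
          """One explicit step of dc/dt = -dJ/dx with the Nernst-Planck flux
          J = -D (dc/dx + z F/(R T) c dphi/dx); boundary cells are held fixed."""
          c_face = 0.5 * (c[1:] + c[:-1])
          J = -D * (np.diff(c) / dx + z * F / (R * T) * c_face * np.diff(phi) / dx)
          c_new = c.copy()
          c_new[1:-1] -= dt / dx * np.diff(J)
          return c_new

      # Toy usage: Ca2+ (z = +2) relaxing across a 5 nm layer with a -150 mV potential drop.
      n = 200
      x = np.linspace(0.0, 5e-9, n)
      c = np.where(x < 2.5e-9, 1.0, 1e-4)            # arbitrary concentration units
      phi = np.linspace(0.0, -0.15, n)               # V
      dx = x[1] - x[0]
      for _ in range(2000):
          c = nernst_planck_step(c, phi, D=1e-10, z=2, dx=dx, dt=1e-13)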

  5. Analytical quality by design: a tool for regulatory flexibility and robust analytics.

    PubMed

    Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

    Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for quality by design (QbD) based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the analytical quality by design (AQbD) approach reduces the number of out-of-trend (OOT) results and out-of-specification (OOS) results owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement analytical quality by design (AQbD) in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper has been communicated to discuss different views of analytical scientists about the implementation of AQbD in the pharmaceutical quality system and also to correlate it with product quality by design and pharmaceutical analytical technology (PAT).

  6. Analytical Quality by Design: A Tool for Regulatory Flexibility and Robust Analytics

    PubMed Central

    Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

    Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for quality by design (QbD) based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the analytical quality by design (AQbD) approach reduces the number of out-of-trend (OOT) results and out-of-specification (OOS) results owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement analytical quality by design (AQbD) in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper has been communicated to discuss different views of analytical scientists about the implementation of AQbD in the pharmaceutical quality system and also to correlate it with product quality by design and pharmaceutical analytical technology (PAT). PMID:25722723

  7. A new approach to the Schrödinger equation with rational potentials

    NASA Astrophysics Data System (ADS)

    Dong, Ming-de; Chu, Jue-Hui

    1984-04-01

    A new analytic theory is established for the Schrödinger equation with a rational potential, including a complete classification of the regular eigenfunctions into three different types, an exact method of obtaining wavefunctions, an explicit formulation of the spectral equation (3 x 3 determinant) etc. All representations are exhibited in a unifying way via function-theoretic methods and therefore given in explicit form, in contrast to the prevailing discussion appealing to perturbation or variation methods or continued-fraction techniques. The irregular eigenfunctions at infinity can be obtained analogously and will be discussed separately as another solvable case for singular potentials.

  8. A continuous function model for path prediction of entities

    NASA Astrophysics Data System (ADS)

    Nanda, S.; Pray, R.

    2007-04-01

    As militaries across the world continue to evolve, the roles of humans in various theatres of operation are being increasingly targeted by military planners for substitution with automation. Forward observation and direction of supporting arms to neutralize threats from dynamic adversaries is one such example. However, contemporary tracking and targeting systems are incapable of serving autonomously for they do not embody the sophisticated algorithms necessary to predict the future positions of adversaries with the accuracy offered by the cognitive and analytical abilities of human operators. The need for these systems to incorporate methods characterizing such intelligence is therefore compelling. In this paper, we present a novel technique to achieve this goal by modeling the path of an entity as a continuous polynomial function of multiple variables expressed as a Taylor series with a finite number of terms. We demonstrate the method for evaluating the coefficient of each term to define this function unambiguously for any given entity, and illustrate its use to determine the entity's position at any point in time in the future.
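
    The core idea here (a truncated Taylor-series, i.e. polynomial, model of the path fitted to recent observations and then extrapolated) can be sketched in a few lines. The function below fits each coordinate independently by least squares and evaluates the polynomial at a future time; it is a hedged stand-in for the paper's multivariable formulation, and predict_position is an illustrative name rather than the authors' implementation.

      import numpy as np

      def predict_position(times, positions, degree, t_future):
          """Fit a degree-`degree` polynomial (truncated Taylor series) to each coordinate of an
          observed track and evaluate it at t_future. `positions` has shape (n_obs, n_dims)."""
          coeffs = [np.polyfit(times, positions[:, j], degree) for j in range(positions.shape[1])]
          return np.array([np.polyval(cj, t_future) for cj in coeffs])

      # Toy usage: an entity observed for 10 s, position predicted 3 s ahead.
      t_obs = np.linspace(0.0, 10.0, 21)
      track = np.column_stack([5.0 * t_obs + 0.3 * t_obs ** 2, 2.0 * t_obs - 0.1 * t_obs ** 2])
      print(predict_position(t_obs, track, degree=2, t_future=13.0))   # ~[115.7, 9.1]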

  9. Steroid hormones in environmental matrices: extraction method comparison.

    PubMed

    Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon

    2017-11-09

    The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines provided in US-EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. Furthermore, a solid phase extraction (SPE) method was also developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information regarding the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided better recovery efficiencies and lower variability for the steroid hormones, followed by SFE. The analytical methods developed in-house for extraction of biosolids showed a wide recovery range; however, the variability was low (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and have been shown to provide highly variable recovery efficiencies. The results of this study provide guidance for better sample preparation strategies in analytical methods for steroid hormone analysis, and SPE broadens the choice of methods for environmental sample analysis.

  10. Development of a highly sensitive methodology for quantitative determination of fexofenadine in a microdose study by multiple injection method using ultra-high performance liquid chromatography with tandem mass spectrometry.

    PubMed

    Tanaka, Yukari; Yoshikawa, Yutaka; Yasui, Hiroyuki

    2012-01-01

    An ultra-high-sensitivity method for quantifying fexofenadine concentrations in rat plasma samples by a multiple injection method (MIM) was developed for a microdose study. In this study, MIM involved continuous injections of multiple samples containing the single compound into a column of the ultra-HPLC (UHPLC) system, and then temporary trapping of the analyte at the column head. This was followed by elution of the compound from the column and detection by a mass spectrometer. Fexofenadine, used as a model compound in this study, was extracted from the plasma samples by a protein precipitation method. Chromatographic separation was achieved on a reversed-phase C18 column by using a gradient method with 0.1% formic acid and 0.1% formic acid in acetonitrile as the mobile phase. The analyte was quantified in the positive-ion electrospray ionization mode using selected reaction monitoring. In this study, the analytical time per fexofenadine sample was approximately 2 min with the UHPLC system. The method exhibited a linear dynamic range of 5-5000 pg/mL for fexofenadine in rat plasma. The intra-day precisions were from 3.2 to 8.7% and the accuracy range was 95.2-99.3%. The inter-day precisions and accuracies ranged from 3.5 to 8.4% and from 98.6 to 102.6%, respectively. The validated MIM was successfully applied to a microdose study in rats that received oral administration of 100 µg/kg fexofenadine. We suggest that this method might be beneficial for the quantification of fexofenadine concentrations in a microdose clinical study.

  11. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-05-15

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
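
    The patent abstract describes generating a time-evolving sequence of event counts by sampling fission chain distributions and spreading each chain in time. The sketch below is a loose Monte Carlo illustration of that idea under openly assumed ingredients (a user-supplied chain-size distribution standing in for the analytically computed one, Poisson-distributed chain start times, an exponential die-away, and a fixed detection efficiency); it is not the patented assay method itself, and simulate_event_times is an invented name.

      import numpy as np

      def simulate_event_times(rate, duration, chain_pmf, die_away, efficiency, seed=1):
          """Monte Carlo sketch of a time-tagged neutron count sequence built from fission chains."""
          rng = np.random.default_rng(seed)
          n_chains = rng.poisson(rate * duration)
          starts = rng.uniform(0.0, duration, n_chains)
          sizes = rng.choice(len(chain_pmf), size=n_chains, p=chain_pmf)   # neutrons per chain
          times = []
          for t0, n in zip(starts, sizes):
              detected = rng.binomial(n, efficiency)
              times.append(t0 + rng.exponential(die_away, detected))       # spread the chain in time
          return np.sort(np.concatenate(times)) if times else np.array([])

      # Example: build a count distribution by histogramming events into 1 ms gates.
      events = simulate_event_times(rate=200.0, duration=10.0,
                                    chain_pmf=[0.0, 0.5, 0.25, 0.15, 0.07, 0.03],
                                    die_away=50e-6, efficiency=0.2)
      counts_per_gate, _ = np.histogram(events, bins=np.arange(0.0, 10.0 + 1e-3, 1e-3))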

  12. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  13. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  14. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  15. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  16. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  17. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  18. SAM Radiochemical Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target radiochemical analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select radiochemical analytes.

  19. Cobalt adatoms on graphene: Effects of anisotropies on the correlated electronic structure

    NASA Astrophysics Data System (ADS)

    Mozara, R.; Valentyuk, M.; Krivenko, I.; Şaşıoǧlu, E.; Kolorenč, J.; Lichtenstein, A. I.

    2018-02-01

    Impurities on surfaces experience a geometric symmetry breaking induced not only by the on-site crystal-field splitting and the orbital-dependent hybridization, but also by different screening of the Coulomb interaction in different directions. We present a many-body study of the Anderson impurity model representing a Co adatom on graphene, taking into account all anisotropies of the effective Coulomb interaction, which we obtained by the constrained random-phase approximation. The most pronounced differences are naturally displayed by the many-body self-energy projected onto the single-particle states. For the solution of the Anderson impurity model and analytical continuation of the Matsubara data, we employed new implementations of the continuous-time hybridization expansion quantum Monte Carlo and the stochastic optimization method, and we verified the results in parallel with the exact diagonalization method.

  20. A simple method to calculate first-passage time densities with arbitrary initial conditions

    NASA Astrophysics Data System (ADS)

    Nyberg, Markus; Ambjörnsson, Tobias; Lizana, Ludvig

    2016-06-01

    Numerous applications all the way from biology and physics to economics depend on the density of first crossings over a boundary. Motivated by the lack of general purpose analytical tools for computing first-passage time densities (FPTDs) for complex problems, we propose a new simple method based on the independent interval approximation (IIA). We generalise previous formulations of the IIA to include arbitrary initial conditions as well as to deal with discrete time and non-smooth continuous time processes. We derive a closed form expression for the FPTD in z and Laplace-transform space to a boundary in one dimension. Two classes of problems are analysed in detail: discrete time symmetric random walks (Markovian) and continuous time Gaussian stationary processes (Markovian and non-Markovian). Our results are in good agreement with Langevin dynamics simulations.
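
    As a numerical reference against which a closed-form IIA expression can be checked after inverting the z-transform, the short Monte Carlo sketch below estimates the first-passage time density of a symmetric discrete-time random walk to an absorbing boundary. The setup (starting point, boundary, step distribution) is assumed for illustration; fptd_random_walk is not code from the paper.

      import numpy as np

      def fptd_random_walk(x0, boundary, n_steps, n_walkers, seed=0):
          """First-passage time density (per step) of a symmetric +/-1 random walk started at x0,
          with an absorbing boundary at `boundary`, estimated from n_walkers realizations."""
          rng = np.random.default_rng(seed)
          pos = np.full(n_walkers, float(x0))
          alive = np.ones(n_walkers, dtype=bool)
          density = np.zeros(n_steps)
          for step in range(n_steps):
              pos[alive] += rng.choice([-1.0, 1.0], size=alive.sum())
              crossed = alive & (pos >= boundary)
              density[step] = crossed.sum() / n_walkers
              alive &= ~crossed
          return density

      fptd = fptd_random_walk(x0=0.0, boundary=5.0, n_steps=2000, n_walkers=100_000)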

  1. Guidance to Achieve Accurate Aggregate Quantitation in Biopharmaceuticals by SV-AUC.

    PubMed

    Arthur, Kelly K; Kendrick, Brent S; Gabrielson, John P

    2015-01-01

    The levels and types of aggregates present in protein biopharmaceuticals must be assessed during all stages of product development, manufacturing, and storage of the finished product. Routine monitoring of aggregate levels in biopharmaceuticals is typically achieved by size exclusion chromatography (SEC) due to its high precision, speed, robustness, and simplicity to operate. However, SEC is error prone and requires careful method development to ensure accuracy of reported aggregate levels. Sedimentation velocity analytical ultracentrifugation (SV-AUC) is an orthogonal technique that can be used to measure protein aggregation without many of the potential inaccuracies of SEC. In this chapter, we discuss applications of SV-AUC during biopharmaceutical development and how characteristics of the technique make it better suited for some applications than others. We then discuss the elements of a comprehensive analytical control strategy for SV-AUC. Successful implementation of these analytical control elements ensures that SV-AUC provides continued value over the long time frames necessary to bring biopharmaceuticals to market. © 2015 Elsevier Inc. All rights reserved.

  2. Prediction of pressure and flow transients in a gaseous bipropellant reaction control rocket engine

    NASA Technical Reports Server (NTRS)

    Markowsky, J. J.; Mcmanus, H. N., Jr.

    1974-01-01

    An analytic model is developed to predict pressure and flow transients in a gaseous hydrogen-oxygen reaction control rocket engine feed system. The one-dimensional equations of momentum and continuity are reduced by the method of characteristics from partial derivatives to a set of total derivatives which describe the state properties along the feedline. System components, e.g., valves, manifolds, and injectors are represented by pseudo steady-state relations at discrete junctions in the system. Solutions were effected by a FORTRAN IV program on an IBM 360/65. The results indicate the relative effect of manifold volume, combustion lag time, feedline pressure fluctuations, propellant temperature, and feedline length on the chamber pressure transient. The analytical combustion model is verified by good correlation between predicted and observed chamber pressure transients. The developed model enables a rocket designer to vary the design parameters analytically to obtain stable combustion for a particular mode of operation which is prescribed by mission objectives.

  3. Annual banned-substance review: analytical approaches in human sports drug testing.

    PubMed

    Thevis, Mario; Kuuranne, Tiia; Geyer, Hans; Schänzer, Wilhelm

    2015-01-01

    Within the mosaic display of international anti-doping efforts, analytical strategies based on up-to-date instrumentation as well as the most recent information about the physiology, pharmacology, metabolism, etc., of prohibited substances and methods of doping are indispensable. The continuous emergence of new chemical entities and the identification of arguably beneficial effects of established or even obsolete drugs on endurance, strength, and regeneration necessitate frequent and adequate adaptations of sports drug testing procedures. These largely rely on exploiting new technologies, extending the substance coverage of existing test protocols, and generating new insights into the metabolism, distribution, and elimination of compounds prohibited by the World Anti-Doping Agency (WADA). In reference to the content of the 2014 Prohibited List, literature concerning human sports drug testing that was published between October 2013 and September 2014 is summarized and reviewed in this annual banned-substance review, with particular emphasis on analytical approaches and their contribution to enhanced doping controls. Copyright © 2014 John Wiley & Sons, Ltd.

  4. End-point detection in potentiometric titration by continuous wavelet transform.

    PubMed

    Jakubowska, Małgorzata; Baś, Bogusław; Kubiak, Władysław W

    2009-10-15

    The aim of this work was the construction of a new wavelet function and verification that a continuous wavelet transform with a specially defined, dedicated mother wavelet is a useful tool for precise detection of the end-point in a potentiometric titration. The proposed algorithm does not require any initial information about the nature or the type of analyte and/or the shape of the titration curve. Signal imperfections, as well as random noise or spikes, have no influence on the operation of the procedure. The optimization of the new algorithm was done using simulated curves, and experimental data were then considered. In the case of well-shaped and noise-free titration data, the proposed method gives the same accuracy and precision as commonly used algorithms. But in the case of noisy or badly shaped curves, the presented approach works well (relative error mainly below 2% and coefficients of variability below 5%) while traditional procedures fail. Therefore, the proposed algorithm may be useful in the interpretation of experimental data and also in the automation of typical titration analysis, especially in cases where random noise interferes with the analytical signal.
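
    The mechanism behind the method (an antisymmetric wavelet whose transform magnitude peaks at the inflection of a sigmoidal titration curve) can be illustrated with a generic first-derivative-of-Gaussian wavelet. The paper's specially constructed mother wavelet is not reproduced here, so the sketch below, with the assumed helper names dog_wavelet and cwt_endpoint and arbitrary scales, should be read only as a stand-in for the idea.

      import numpy as np

      def dog_wavelet(scale, n):
          """First derivative of a Gaussian: an antisymmetric wavelet that responds most strongly
          at the inflection point of a step-like signal."""
          x = np.arange(n) - (n - 1) / 2.0
          w = -x * np.exp(-0.5 * (x / scale) ** 2) / scale ** 2
          return w / np.sqrt(np.sum(w ** 2))

      def cwt_endpoint(volume, potential, scales):
          """Sum |CWT| responses over several scales and return the titrant volume of the peak."""
          response = np.zeros(len(potential))
          for s in scales:
              response += np.abs(np.convolve(potential, dog_wavelet(s, len(potential)), mode="same"))
          margin = int(3 * max(scales))              # ignore edge effects of the finite convolution
          inner = slice(margin, len(potential) - margin)
          return volume[inner][np.argmax(response[inner])]

      # Toy usage: noisy sigmoidal titration curve with the equivalence point at 12.5 mL.
      v = np.linspace(0.0, 25.0, 500)
      emf = 300.0 / (1.0 + np.exp(-(v - 12.5) / 0.3))
      emf += np.random.default_rng(2).normal(0.0, 2.0, v.size)
      print(cwt_endpoint(v, emf, scales=(5, 10, 20)))   # ~12.5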

  5. Continuous surface force based lattice Boltzmann equation method for simulating thermocapillary flow

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Zheng, Song; Zhai, Qinglan

    2016-02-01

    In this paper, we extend a lattice Boltzmann equation (LBE) with continuous surface force (CSF) to simulate thermocapillary flows. The model builds on our previous CSF LBE for athermal two-phase flow, in which the interfacial tension forces and the Marangoni stresses arising from the interactions between different phases are described through the CSF concept. In this model, the sharp interfaces between different phases are replaced by narrow transition layers, and the kinetics and morphology evolution of phase separation are characterized by an order parameter via the Cahn-Hilliard equation, which is solved in the framework of the LBE. The scalar convection-diffusion equation for the temperature field is solved by a thermal LBE. The models are validated against thermal two-layered Poiseuille flow and against two superimposed planar fluids at negligibly small Reynolds and Marangoni numbers for thermocapillary-driven convection, both of which have analytical solutions for the velocity and temperature. Thermocapillary migration of two- and three-dimensional deformable droplets is then simulated. Numerical results show that the predictions of the present LBE agree with the analytical solutions and with other numerical results.

  6. Application of the backward extrapolation method to pulsed neutron sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamo, Alberto; Gohar, Yousry

    We report that particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. from a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows both the dead-time value and the true detector counts to be obtained from the measured detector counts.
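
    For contrast with the pulsed-source case that motivates the backward extrapolation method, the standard steady-state correction for a non-paralyzable detector is a one-line formula, shown below. It applies only when the count rate is constant over the counting interval, which is exactly the assumption that fails for a pulsed source; the function name is illustrative and the 1 µs dead time in the example is an assumed value.

      def correct_nonparalyzable(measured_rate, dead_time):
          """Steady-state dead-time correction for a non-paralyzable detector:
          true_rate = measured_rate / (1 - measured_rate * dead_time)."""
          return measured_rate / (1.0 - measured_rate * dead_time)

      # Example: 8.0e4 counts/s observed with an assumed 1 microsecond dead time.
      print(correct_nonparalyzable(8.0e4, 1.0e-6))   # about 8.7e4 counts/s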

  7. Application of the backward extrapolation method to pulsed neutron sources

    DOE PAGES

    Talamo, Alberto; Gohar, Yousry

    2017-09-23

    We report that particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. from a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows both the dead-time value and the true detector counts to be obtained from the measured detector counts.

  8. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  9. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  10. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  11. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  12. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  13. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  14. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  15. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture... Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MREs are listed as follows: (1) Official Methods of Analysis of AOAC...

  16. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  17. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  18. Demonstrating Success: Web Analytics and Continuous Improvement

    ERIC Educational Resources Information Center

    Loftus, Wayne

    2012-01-01

    As free and low-cost Web analytics tools become more sophisticated, libraries' approach to user analysis can become more nuanced and precise. Tracking appropriate metrics with a well-formulated analytics program can inform design decisions, demonstrate the degree to which those decisions have succeeded, and thereby inform the next iteration in the…

  19. Real-Time Continuous Identification of Greenhouse Plant Pathogens Based on Recyclable Microfluidic Bioassay System.

    PubMed

    Qu, Xiangmeng; Li, Min; Zhang, Hongbo; Lin, Chenglie; Wang, Fei; Xiao, Mingshu; Zhou, Yi; Shi, Jiye; Aldalbahi, Ali; Pei, Hao; Chen, Hong; Li, Li

    2017-09-20

    The development of a real-time continuous analytical platform for pathogen detection is of great scientific importance for achieving better disease control and prevention. In this work, we report a rapid and recyclable microfluidic bioassay system constructed from oligonucleotide arrays for selective and sensitive continuous identification of DNA targets of fungal pathogens. We employ the thermal denaturation method to effectively regenerate the oligonucleotide arrays for multiple sample detection, which could considerably reduce the screening effort and costs. The combination of thermal denaturation and the laser-induced fluorescence detection technique enables real-time continuous identification of multiple samples (<10 min per sample). As a proof of concept, we have demonstrated that two DNA targets of fungal pathogens (Botrytis cinerea and Didymella bryoniae) can be sequentially analyzed using our rapid microfluidic bioassay system, which provides a new paradigm for the design of microfluidic bioassay systems and will be valuable for chemical and biomedical analysis.

  20. 40 CFR 161.180 - Enforcement analytical method.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 161.180... DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements § 161.180 Enforcement analytical method. An analytical method suitable for enforcement purposes must be...

  1. Lax-Wendroff and TVD finite volume methods for unidimensional thermomechanical numerical simulations of impacts on elastic-plastic solids

    NASA Astrophysics Data System (ADS)

    Heuzé, Thomas

    2017-10-01

    We present in this work two finite volume methods for the simulation of unidimensional impact problems, both for bars and plane waves, on elastic-plastic solid media within the small strain framework. First, an extension of Lax-Wendroff to elastic-plastic constitutive models with linear and nonlinear hardenings is presented. Second, a high order TVD method based on flux-difference splitting [1] and the Superbee flux limiter [2] is coupled with an approximate elastic-plastic Riemann solver for nonlinear hardenings, and follows that of Fogarty [3] for linear ones. Thermomechanical coupling is accounted for through dissipation heating and thermal softening, and adiabatic conditions are assumed. This paper essentially focuses on one-dimensional problems since analytical solutions exist or can easily be developed. Accordingly, these two numerical methods are compared to analytical solutions and to the explicit finite element method on test cases involving discontinuous and continuous solutions. This allows their respective performance during the loading, unloading and reloading stages to be studied in more detail. Particular emphasis is also paid to the accuracy of the computed plastic strains, some differences being found according to the numerical method used. A Lax-Wendroff two-dimensional discretization of a one-dimensional problem is also appended at the end to demonstrate the extensibility of such numerical schemes to multidimensional problems.
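
    To show the two ingredients named above in their simplest setting, the sketch below applies a flux-limited Lax-Wendroff update to the linear scalar advection equation with periodic boundaries: with the limiter switched off it is plain Lax-Wendroff, and with the Superbee limiter it becomes a TVD scheme of the same family as the one used in the paper. The elastic-plastic Riemann solver and the thermomechanical coupling are not reproduced; this is an assumed one-dimensional scalar reduction for illustration only.

      import numpy as np

      def superbee(r):
          """Superbee flux limiter."""
          return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0), np.minimum(r, 2.0)))

      def advect(u0, a, dx, dt, n_steps, limiter=None):
          """Flux-limited Lax-Wendroff update for u_t + a u_x = 0 with a > 0, periodic BCs.
          limiter=None gives plain Lax-Wendroff; passing superbee gives the TVD variant."""
          nu = a * dt / dx                                     # Courant number
          u = u0.astype(float).copy()
          for _ in range(n_steps):
              du = np.roll(u, -1) - u                          # u_{i+1} - u_i
              safe = np.where(np.abs(du) > 1e-14, du, 1e-14)
              r = (u - np.roll(u, 1)) / safe                   # smoothness ratio r_i
              phi = np.ones_like(u) if limiter is None else limiter(r)
              flux = a * u + 0.5 * a * (1.0 - nu) * phi * du   # numerical flux F_{i+1/2}
              u = u - (dt / dx) * (flux - np.roll(flux, 1))
          return u

      # Square-wave advection at CFL 0.8: Lax-Wendroff oscillates near the jumps, the TVD scheme does not.
      x = np.linspace(0.0, 1.0, 200, endpoint=False)
      dx = x[1] - x[0]
      u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)
      u_lw = advect(u0, a=1.0, dx=dx, dt=0.8 * dx, n_steps=100, limiter=None)
      u_tvd = advect(u0, a=1.0, dx=dx, dt=0.8 * dx, n_steps=100, limiter=superbee)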

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stershic, Andrew J.; Dolbow, John E.; Moës, Nicolas

    The Thick Level-Set (TLS) model is implemented to simulate brittle media undergoing dynamic fragmentation. This non-local model is discretized by the finite element method with damage represented as a continuous field over the domain. A level-set function defines the extent and severity of damage, and a length scale is introduced to limit the damage gradient. Numerical studies in one dimension demonstrate that the proposed method reproduces the rate-dependent energy dissipation and fragment length observations from analytical, numerical, and experimental approaches. In conclusion, additional studies emphasize the importance of appropriate bulk constitutive models and sufficient spatial resolution of the length scale.

  3. Method and Apparatus for Concentrating Vapors for Analysis

    DOEpatents

    Grate, Jay W.; Baldwin, David L.; Anheier, Jr., Norman C.

    2008-10-07

    An apparatus and method are disclosed for pre-concentrating gaseous vapors for analysis. The invention finds application in conjunction with, e.g., analytical instruments where low detection limits for gaseous vapors are desirable. Vapors sorbed and concentrated within the bed of the apparatus can be thermally desorbed achieving at least partial separation of vapor mixtures. The apparatus is suitable, e.g., for preconcentration and sample injection, and provides greater resolution of peaks for vapors within vapor mixtures, yielding detection levels that are 10-10,000 times better than for direct sampling and analysis systems. Features are particularly useful for continuous unattended monitoring applications.

  4. The estimation of the pollutant emissions on-board vessels by means of numerical methods

    NASA Astrophysics Data System (ADS)

    Jenaru, A.; Arsenie, P.; Hanzu-Pazara, R.

    2016-08-01

    Protection of the environment, especially in recent years, has become a constant concern of the states and governments of the world, which are increasingly worried about the serious problems caused by the continuous deterioration of the environment. The long-term effects of pollution on the environment, aggravated by the lack of penalty regulations, have directed the attention of statesmen to the need for normative acts that can be effective in the continuing fight against it. Maritime transportation generates approximately 4% of the total CO2 emissions produced by human activities. This paper presents two methods for estimating gaseous emissions on board a vessel, methods that are useful for the crews operating it. For the determination and validation of these methods we use measurements from a tank ship whose main propulsion engine is a Wärtsilä DU Sulzer RT Flex 50 with 6 cylinders, developing a maximum power of 9720 kW and fitted with a permanent monitoring system for pollutant emissions. The methods developed here use the values of the polluting components of the exhaust gases determined when the vessel leaves the shipyard, in the framework of the acceptance tests. These values have been entered into a matrix in the MATHCAD program; this matrix is the starting point of the two methods mentioned, the analytical method and the graphical method. During the study we also evaluate the development and validation of an analytical tool to be used to determine emission standards for thermal machines on ships. One of the main objectives of this article is an objective assessment of the expediency of using non-fuels for internal combustion engines in vessels.
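
    As a hedged illustration of the matrix-based idea described above, the sketch below fits a simple polynomial to hypothetical acceptance-test emission values recorded at a few engine loads and then estimates emissions at an intermediate load; the load points and NOx values are invented and are not measurements from the cited vessel, and the actual MATHCAD procedures of the paper are not reproduced.

      import numpy as np

      # Hypothetical acceptance-test matrix: emissions measured at a few engine loads.
      load = np.array([25.0, 50.0, 75.0, 100.0])    # % of maximum engine power
      nox = np.array([14.2, 12.8, 11.9, 12.5])      # g/kWh at those loads (invented values)

      coeffs = np.polyfit(load, nox, deg=2)         # simple "analytical" fit to the matrix
      estimate = np.polyval(coeffs, 60.0)           # emission estimate at 60 % load
      print(f"estimated NOx at 60% load: {estimate:.2f} g/kWh")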

  5. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Enforcement analytical method. 158.355... DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An analytical method suitable for enforcement purposes must be provided for each active ingredient in the...

  6. Analytical investigations in aircraft and spacecraft trajectory optimization and optimal guidance

    NASA Technical Reports Server (NTRS)

    Markopoulos, Nikos; Calise, Anthony J.

    1995-01-01

    A collection of analytical studies is presented related to unconstrained and constrained aircraft (a/c) energy-state modeling and to spacecraft (s/c) motion under continuous thrust. With regard to a/c unconstrained energy-state modeling, the physical origin of the singular perturbation parameter that accounts for the observed 2-time-scale behavior of a/c during energy climbs is identified and explained. With regard to the constrained energy-state modeling, optimal control problems are studied involving active state-variable inequality constraints. Departing from the practical deficiencies of the control programs for such problems that result from the traditional formulations, a complete reformulation is proposed for these problems which, in contrast to the old formulation, will presumably lead to practically useful controllers that can track an inequality constraint boundary asymptotically, even in the presence of 2-sided perturbations about it. Finally, with regard to s/c motion under continuous thrust, a thrust program is proposed for which the equations of 2-dimensional motion of a space vehicle in orbit, viewed as a point mass, afford an exact analytic solution. The thrust program arises under the assumption of tangential thrust from the costate system corresponding to minimum-fuel, power-limited, coplanar transfers between two arbitrary conics. The thrust program can be used not only with power-limited propulsion systems, but also with any propulsion system capable of generating continuous thrust of controllable magnitude. For propulsion types and classes of transfers for which it is sufficiently optimal, the results of this report suggest a method of maneuvering during planetocentric or heliocentric orbital operations that requires a minimum amount of computation and is thus uniquely suitable for real-time feedback guidance implementations.

  7. Back to Normal! Gaussianizing posterior distributions for cosmological probes

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2014-05-01

    We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
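
    A one-dimensional sketch of the Gaussianization step, assuming a skewed positive sample: scipy's Box-Cox routine picks the transformation parameter by maximum likelihood, and a normality test supplies the a posteriori quality check mentioned above. The multivariate generalization treated in the paper is not reproduced here.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      sample = rng.gamma(shape=2.0, scale=1.5, size=10_000)  # skewed stand-in for a posterior sample

      transformed, lam = stats.boxcox(sample)                # lambda chosen by maximum likelihood
      stat, pval = stats.normaltest(transformed)             # a posteriori Gaussianity check
      print(f"optimal lambda: {lam:.3f}, normality-test p-value: {pval:.3f}")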

  8. Electrospray Modifications for Advancing Mass Spectrometric Analysis

    PubMed Central

    Meher, Anil Kumar; Chen, Yu-Chie

    2017-01-01

    Generation of analyte ions in the gas phase is a primary requirement for mass spectrometric analysis. One of the ionization techniques that can be used to generate gas-phase ions is electrospray ionization (ESI). ESI is a soft ionization method that can be used to analyze analytes ranging from small organics to large biomolecules. Numerous ionization techniques derived from ESI have been reported in the past two decades. These ion sources aim to achieve simplicity and ease of operation. Many of these ionization methods allow the flexibility to eliminate or minimize sample preparation steps prior to mass spectrometric analysis. Such ion sources have opened up new possibilities for taking on scientific challenges that might be beyond the reach of the conventional ESI technique. Thus, the number of ESI variants continues to increase. This review provides an overview of electrospray-based ionization techniques reported in recent years, together with a brief discussion of the instrumentation, underlying processes, and selected applications. PMID:28573082

  9. "Dip-and-read" paper-based analytical devices using distance-based detection with color screening.

    PubMed

    Yamada, Kentaro; Citterio, Daniel; Henry, Charles S

    2018-05-15

    An improved paper-based analytical device (PAD) using color screening to enhance device performance is described. Current detection methods for PADs relying on the distance-based signalling motif can be slow due to the assay time being limited by capillary flow rates that wick fluid through the detection zone. For traditional distance-based detection motifs, analysis can take up to 45 min for a channel length of 5 cm. By using a color screening method, quantification with a distance-based PAD can be achieved in minutes through a "dip-and-read" approach. A colorimetric indicator line deposited onto a paper substrate using inkjet-printing undergoes a concentration-dependent colorimetric response for a given analyte. This color intensity-based response has been converted to a distance-based signal by overlaying a color filter with a continuous color intensity gradient matching the color of the developed indicator line. As a proof-of-concept, Ni quantification in welding fume was performed as a model assay. The results of multiple independent user testing gave mean absolute percentage error and average relative standard deviations of 10.5% and 11.2% respectively, which were an improvement over analysis based on simple visual color comparison with a read guide (12.2%, 14.9%). In addition to the analytical performance comparison, an interference study and a shelf life investigation were performed to further demonstrate practical utility. The developed system demonstrates an alternative detection approach for distance-based PADs enabling fast (∼10 min), quantitative, and straightforward assays.

  10. Characterization of spacecraft humidity condensate

    NASA Technical Reports Server (NTRS)

    Muckle, Susan; Schultz, John R.; Sauer, Richard L.

    1994-01-01

    When construction of Space Station Freedom reaches the Permanent Manned Capability (PMC) stage, the Water Recovery and Management Subsystem will be fully operational such that (distilled) urine, spent hygiene water, and humidity condensate will be reclaimed to provide water of potable quality. The reclamation technologies currently baselined to process these waste waters include adsorption, ion exchange, catalytic oxidation, and disinfection. To ensure that the baseline technologies will be able to effectively remove those compounds presenting a health risk to the crew, the National Research Council has recommended that additional information be gathered on specific contaminants in waste waters representative of those to be encountered on the Space Station. With the application of new analytical methods and the analysis of waste water samples more representative of the Space Station environment, advances in the identification of the specific contaminants continue to be made. Efforts by the Water and Food Analytical Laboratory at JSC were successful in enlarging the database of contaminants in humidity condensate. These efforts have not only included the chemical characterization of condensate generated during ground-based studies, but most significantly the characterization of cabin and Spacelab condensate generated during Shuttle missions. The analytical results presented in this paper will be used to show how the composition of condensate varies amongst enclosed environments and thus the importance of collecting condensate from an environment close to that of the proposed Space Station. Although advances were made in the characterization of space condensate, complete characterization, particularly of the organics, requires further development of analytical methods.

  11. Experimental study and analytical model of deformation of magnetostrictive films as applied to mirrors for x-ray space telescopes.

    PubMed

    Wang, Xiaoli; Knapp, Peter; Vaynman, S; Graham, M E; Cao, Jian; Ulmer, M P

    2014-09-20

    The desire for continuously gaining new knowledge in astronomy has pushed the frontier of engineering methods to deliver lighter, thinner, higher quality mirrors at an affordable cost for use in an x-ray observatory. To address these needs, we have been investigating the application of magnetic smart materials (MSMs) deposited as a thin film on mirror substrates. MSMs have some interesting properties that make the application of MSMs to mirror substrates a promising solution for making the next generation of x-ray telescopes. Due to the ability to hold a shape with an impressed permanent magnetic field, MSMs have the potential to be the method used to make light weight, affordable x-ray telescope mirrors. This paper presents the experimental setup for measuring the deformation of the magnetostrictive bimorph specimens under an applied magnetic field, and the analytical and numerical analysis of the deformation. As a first step in the development of tools to predict deflections, we deposited Terfenol-D on the glass substrates. We then made measurements that were compared with the results from the analytical and numerical analysis. The surface profiles of thin-film specimens were measured under an external magnetic field with white light interferometry (WLI). The analytical model provides good predictions of film deformation behavior under various magnetic field strengths. This work establishes a solid foundation for further research to analyze the full three-dimensional deformation behavior of magnetostrictive thin films.

  12. The spatial distribution patterns of condensed phase post-blast explosive residues formed during detonation.

    PubMed

    Abdul-Karim, Nadia; Blackman, Christopher S; Gill, Philip P; Karu, Kersti

    2016-10-05

    The continued use of explosive devices, as well as the ever-growing threat of 'dirty' bombs, necessitates a comprehensive understanding of particle dispersal during detonation events in order to develop effectual methods for targeting explosive and/or additive remediation efforts. Herein, the distribution of explosive analytes from controlled detonations of aluminised ammonium nitrate and an RDX-based explosive composition was established by systematically sampling sites positioned around each firing. This is the first experimental study to produce evidence that the post-blast residue mass can distribute according to an approximate inverse-square law model, while also demonstrating for the first time that distribution trends can vary depending on individual analytes. Furthermore, by incorporating blast-wave overpressure measurements, high-speed imaging for fireball volume recordings, and monitoring of environmental conditions, it was determined that the principal factor affecting all analyte dispersals was the wind direction, with other factors affecting specific analytes to varying degrees. The dispersal mechanism for explosive residue is primarily the smoke cloud, a finding which in itself has wider impacts on the environment and fundamental detonation theory. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
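
    A small illustration of the inverse-square-law reading of such data, with made-up distances and residue masses rather than the study's measurements: fitting m(r) = A / r^n in log space and checking that the exponent n comes out near two.

      import numpy as np

      r = np.array([2.0, 4.0, 6.0, 8.0, 10.0])       # distance from the charge (m), invented
      m = np.array([120.0, 31.0, 14.0, 7.5, 5.1])    # recovered residue mass (ug), invented

      slope, intercept = np.polyfit(np.log(r), np.log(m), 1)   # log m = log A - n log r
      print(f"fitted exponent n = {-slope:.2f}, amplitude A = {np.exp(intercept):.1f}")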

  13. How Can Visual Analytics Assist Investigative Analysis? Design Implications from an Evaluation.

    PubMed

    Youn-Ah Kang; Görg, Carsten; Stasko, John

    2011-05-01

    Despite the growing number of systems providing visual analytic support for investigative analysis, few empirical studies of the potential benefits of such systems have been conducted, particularly controlled, comparative evaluations. Determining how such systems foster insight and sensemaking is important for their continued growth and study, however. Furthermore, studies that identify how people use such systems and why they benefit (or not) can help inform the design of new systems in this area. We conducted an evaluation of the visual analytics system Jigsaw employed in a small investigative sensemaking exercise, and compared its use to three other more traditional methods of analysis. Sixteen participants performed a simulated intelligence analysis task under one of the four conditions. Experimental results suggest that Jigsaw assisted participants to analyze the data and identify an embedded threat. We describe different analysis strategies used by study participants and how computational support (or the lack thereof) influenced the strategies. We then illustrate several characteristics of the sensemaking process identified in the study and provide design implications for investigative analysis tools based thereon. We conclude with recommendations on metrics and techniques for evaluating visual analytics systems for investigative analysis.

  14. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  15. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  16. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  17. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  18. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  19. From analytic inversion to contemporary IMRT optimization: Radiation therapy planning revisited from a mathematical perspective

    PubMed Central

    Censor, Yair; Unkelbach, Jan

    2011-01-01

    In this paper we look at the development of radiation therapy treatment planning from a mathematical point of view. Historically, planning for Intensity-Modulated Radiation Therapy (IMRT) has been considered as an inverse problem. We discuss first the two fundamental approaches that have been investigated to solve this inverse problem: Continuous analytic inversion techniques on one hand, and fully-discretized algebraic methods on the other hand. In the second part of the paper, we review another fundamental question which has been subject to debate from the beginning of IMRT until the present day: The rotation therapy approach versus fixed angle IMRT. This builds a bridge from historic work on IMRT planning to contemporary research in the context of Intensity-Modulated Arc Therapy (IMAT). PMID:21616694

  20. The Wide-Field Imaging Interferometry Testbed: Enabling Techniques for High Angular Resolution Astronomy

    NASA Technical Reports Server (NTRS)

    Rinehart, S. A.; Armstrong, T.; Frey, Bradley J.; Jung, J.; Kirk, J.; Leisawitz, David T.; Leviton, Douglas B.; Lyon, R.; Maher, Stephen; Martino, Anthony J.

    2007-01-01

    The Wide-Field Imaging Interferometry Testbed (WIIT) was designed to develop techniques for wide-field-of-view imaging interferometry using "double-Fourier" methods. These techniques will be important for a wide range of future space-based interferometry missions. We have already provided simple demonstrations of the methodology, and continuing development of the testbed will lead to higher data rates, improved data quality, and refined algorithms for image reconstruction. At present, the testbed effort includes five lines of development: automation of the testbed, operation in an improved environment, acquisition of large high-quality datasets, development of image reconstruction algorithms, and analytical modeling of the testbed. We discuss the progress made towards the first four of these goals; the analytical modeling is discussed in a separate paper within this conference.

  1. Fermi gases with imaginary mass imbalance and the sign problem in Monte-Carlo calculations

    NASA Astrophysics Data System (ADS)

    Roscher, Dietrich; Braun, Jens; Chen, Jiunn-Wei; Drut, Joaquín E.

    2014-05-01

    Fermi gases in strongly coupled regimes are inherently challenging for many-body methods. Although progress has been made analytically, quantitative results require ab initio numerical approaches, such as Monte-Carlo (MC) calculations. However, mass-imbalanced and spin-imbalanced gases are not accessible to MC calculations due to the infamous sign problem. For finite spin imbalance, the problem can be circumvented using imaginary polarizations and analytic continuation, and large parts of the phase diagram then become accessible. We propose to apply this strategy to the mass-imbalanced case, which opens up the possibility to study the associated phase diagram with MC calculations. We perform a first mean-field analysis which suggests that zero-temperature studies, as well as detecting a potential (tri)critical point, are feasible.

  2. Complex formation of vanadium(V) with resorcylalhydrazides of carboxylic acids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dudarev, V.I.; Dolgorev, V.A.; Volkov, A.N.

    1986-08-01

    In this work, a previous investigation of hydrazine derivatives as analytical reagents for vanadium(V) was continued. The authors studied arylalhydrazones -- derivatives of resorcylalhydrazides of anisic (RHASA), anthranilic (RHANA), and benzoic (RHBA) acids. The reagents presented differ from those studied previously by the presence of a second hydroxy group in the para-position of the benzene ring -- the resorcinol fragment -- and substituents in the benzoin fragment. Such changes made it possible to increase the solubility of the reagents in aqueous medium and to estimate the change in the main spectrophotometric parameters of the analytical reaction. A rapid method was developed for the determination of vanadium in steels with the resorcylalhydrazide of anthranilic acid. The minimum determinable vanadium content is 0.18 micrograms/ml.

  3. Sponsor relationships, analyte stability in ligand-binding assays and critical reagent management: a bioanalytical CRO perspective.

    PubMed

    Lefor Bradford, Julia

    2015-01-01

    This perspective article discusses key points to address in the establishment of sound partnerships between sponsors and bioanalytical CROs to assure the timeliness, quality and consistency of bioanalysis throughout biological therapeutic development. The performance of ligand-binding assays can be greatly impacted by low-grade reagents, lot-to-lot variability and lack of stability of the analyte in matrix, impacting both timelines and cost. Thorough characterization of the biologic of interest and its assay-enabling critical reagents will lend itself well to conservation of materials and continuity of assay performance. When unplanned events occur, such as performance declines or premature depletion of material, structured procedures are paramount to supplement the current loosely defined regulatory guidance on critical reagent characterization and method bridging.

  4. Prioritizing pesticide compounds for analytical methods development

    USGS Publications Warehouse

    Norman, Julia E.; Kuivila, Kathryn; Nowell, Lisa H.

    2012-01-01

    The U.S. Geological Survey (USGS) has a periodic need to re-evaluate pesticide compounds in terms of priorities for inclusion in monitoring and studies and, thus, must also assess the current analytical capabilities for pesticide detection. To meet this need, a strategy has been developed to prioritize pesticides and degradates for analytical methods development. Screening procedures were developed to separately prioritize pesticide compounds in water and sediment. The procedures evaluate pesticide compounds in existing USGS analytical methods for water and sediment and compounds for which recent agricultural-use information was available. Measured occurrence (detection frequency and concentrations) in water and sediment, predicted concentrations in water and predicted likelihood of occurrence in sediment, potential toxicity to aquatic life or humans, and priorities of other agencies or organizations, regulatory or otherwise, were considered. Several existing strategies for prioritizing chemicals for various purposes were reviewed, including those that identify and prioritize persistent, bioaccumulative, and toxic compounds, and those that determine candidates for future regulation of drinking-water contaminants. The systematic procedures developed and used in this study rely on concepts common to many previously established strategies. The evaluation of pesticide compounds resulted in the classification of compounds into three groups: Tier 1 for high priority compounds, Tier 2 for moderate priority compounds, and Tier 3 for low priority compounds. For water, a total of 247 pesticide compounds were classified as Tier 1 and, thus, are high priority for inclusion in analytical methods for monitoring and studies. Of these, about three-quarters are included in some USGS analytical method; however, many of these compounds are included on research methods that are expensive and for which there are few data on environmental samples. The remaining quarter of Tier 1 compounds are high priority as new analytes. The objective for analytical methods development is to design an integrated analytical strategy that includes as many of the Tier 1 pesticide compounds as possible in a relatively few, cost-effective methods. More than 60 percent of the Tier 1 compounds are high priority because they are anticipated to be present at concentrations approaching levels that could be of concern to human health or aquatic life in surface water or groundwater. An additional 17 percent of Tier 1 compounds were frequently detected in monitoring studies, but either were not measured at levels potentially relevant to humans or aquatic organisms, or do not have benchmarks available with which to compare concentrations. The remaining 21 percent are pesticide degradates that were included because their parent pesticides were in Tier 1. Tier 1 pesticide compounds for water span all major pesticide use groups and a diverse range of chemical classes, with herbicides and their degradates composing half of compounds. Many of the high priority pesticide compounds also are in several national regulatory programs for water, including those that are regulated in drinking water by the U.S. Environmental Protection Agency under the Safe Drinking Water Act and those that are on the latest Contaminant Candidate List. For sediment, a total of 175 pesticide compounds were classified as Tier 1 and, thus, are high priority for inclusion in analytical methods available for monitoring and studies. 
    More than 60 percent of these compounds are included in some USGS analytical method; however, some are spread across several research methods that are expensive to perform, and monitoring data are not extensive for many compounds. The remaining Tier 1 compounds for sediment are high priority as new analytes. The objective for analytical methods development for sediment is to enhance an existing analytical method that currently includes nearly half of the pesticide compounds in Tier 1 by adding as many additional Tier 1 compounds as are analytically compatible. About 35 percent of the Tier 1 compounds for sediment are high priority on the basis of measured occurrence. A total of 74 compounds, or 42 percent, are high priority on the basis of predicted likelihood of occurrence according to physical-chemical properties, and either have potential toxicity to aquatic life, high pesticide usage, or both. The remaining 22 percent of Tier 1 pesticide compounds were either degradates of Tier 1 parent compounds or included for other reasons. As with water, the Tier 1 pesticide compounds for sediment are distributed across the major pesticide-use groups; insecticides and their degradates are the largest fraction, making up 45 percent of Tier 1. In contrast to water, organochlorines, at 17 percent, are the largest chemical class for Tier 1 in sediment, which is to be expected because there is continued widespread detection in sediments of persistent organochlorine pesticides and their degradates at concentrations high enough for potential effects on aquatic life. Compared to water, there are fewer available benchmarks with which to compare contaminant concentrations in sediment, but a total of 19 Tier 1 compounds have at least one sediment benchmark or screening value for aquatic organisms. Of the 175 compounds in Tier 1, 77 percent have high aquatic-life toxicity, as defined for this process. This evaluation of pesticides and degradates resulted in two lists of compounds that are priorities for USGS analytical methods development, one for water and one for sediment. These lists will be used as the basis for redesigning and enhancing USGS analytical capabilities for pesticides in order to capture as many high-priority pesticide compounds as possible using an economically feasible approach.
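
    The sketch below is a purely hypothetical caricature of such a tiering rule; the threshold names and values are invented for illustration and are much simpler than the USGS screening criteria summarized above.

      # Hypothetical tiering rule (illustration only; not the USGS procedure).
      def classify_compound(detect_freq, conc_to_benchmark, parent_in_tier1=False):
          """Return 1, 2, or 3 for high, moderate, or low priority."""
          if parent_in_tier1:              # degradates of Tier 1 parents stay in Tier 1
              return 1
          if conc_to_benchmark >= 0.1:     # concentrations approaching a level of concern
              return 1
          if detect_freq >= 0.05:          # frequently detected but below benchmarks
              return 2
          return 3

      print(classify_compound(detect_freq=0.02, conc_to_benchmark=0.3))  # -> 1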

  5. Therapeutic drug monitoring of beta-lactam antibiotics - Influence of sample stability on the analysis of piperacillin, meropenem, ceftazidime and flucloxacillin by HPLC-UV.

    PubMed

    Pinder, Nadine; Brenner, Thorsten; Swoboda, Stefanie; Weigand, Markus A; Hoppe-Tichy, Torsten

    2017-09-05

    Therapeutic drug monitoring (TDM) is a useful tool to optimize antibiotic therapy. Increasing interest in alternative dosing strategies of beta-lactam antibiotics, e.g. continuous or prolonged infusion, requires a feasible analytical method for quantification of these antimicrobial agents. However, pre-analytical issues including sample handling and stability are to be considered to provide valuable analytical results. For the simultaneous determination of piperacillin, meropenem, ceftazidime and flucloxacillin, a high performance liquid chromatography (HPLC) method including protein precipitation was established utilizing ertapenem as internal standard. Long-term stability of stock solutions and plasma samples was monitored. Furthermore, whole blood stability of the analytes in heparinized blood tubes was investigated comparing storage under ambient conditions and 2-8°C. A calibration range of 5-200 μg/ml (piperacillin, ceftazidime, flucloxacillin) and 2-200 μg/ml (meropenem) was linear with r² > 0.999; precision and inaccuracy were <9% and <11%, respectively. The successfully validated HPLC assay was applied to clinical samples and stability investigations. At -80°C, plasma samples were stable for 9 months (piperacillin, meropenem) or 13 months (ceftazidime, flucloxacillin). Concentrations of the four beta-lactam antibiotics in whole blood tubes were found to remain within specifications for 8 h when stored at 2-8°C but not at room temperature. The presented method is a rapid and simple option for routine TDM of piperacillin, meropenem, ceftazidime and flucloxacillin. Whereas long-term storage of beta-lactam samples at -80°C is possible for at least 9 months, whole blood tubes are recommended to be kept refrigerated until analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
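
    A minimal sketch of the calibration-linearity check reported above: an ordinary least-squares fit of peak-area ratio against nominal concentration with the coefficient of determination. The concentration levels and area ratios are invented illustration values, not data from the study.

      import numpy as np

      conc = np.array([5, 10, 25, 50, 100, 200], dtype=float)   # ug/ml calibration levels (assumed)
      ratio = np.array([0.11, 0.22, 0.54, 1.08, 2.15, 4.31])    # analyte/internal-standard area (invented)

      slope, intercept = np.polyfit(conc, ratio, 1)
      pred = slope * conc + intercept
      r2 = 1.0 - np.sum((ratio - pred) ** 2) / np.sum((ratio - ratio.mean()) ** 2)
      print(f"slope={slope:.4f}, intercept={intercept:.4f}, r^2={r2:.4f}")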

  6. An approximate analytical solution for describing surface runoff and sediment transport over hillslope

    NASA Astrophysics Data System (ADS)

    Tao, Wanghai; Wang, Quanjiu; Lin, Henry

    2018-03-01

    Soil and water loss from farmland causes land degradation and water pollution, thus continued efforts are needed to establish mathematical models for quantitative analysis of the relevant processes and mechanisms. In this study, an approximate analytical solution has been developed for an overland flow model and a sediment transport model, offering a simple and effective means to predict overland flow and erosion under natural rainfall conditions. In the overland flow model, the flow regime was considered to be transitional, with the value of the parameter β (in the kinematic wave model) taken to be approximately two. The rate of change of unit discharge with distance was assumed to be constant and equal to the runoff rate at the outlet of the plane. The excess rainfall was considered to be constant under uniform rainfall conditions. The overland flow model developed can be further applied to natural rainfall conditions by treating the excess rainfall intensity as constant over a small time interval. For the sediment model, recommended values of the runoff erosion calibration constant (cr) and the splash erosion calibration constant (cf) are given in this study so that the model is easier to use; these recommended values are 0.15 and 0.12, respectively. Comparisons with observed results were carried out to validate the proposed analytical solution. The results showed that the approximate analytical solution developed in this paper closely matches the observed data, thus providing an alternative method of predicting runoff generation and sediment yield, and offering a more convenient way of analyzing the quantitative relationships between variables. Furthermore, the model developed in this study can be used as a theoretical basis for developing runoff and erosion control methods.
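
    For orientation, a minimal explicit (upwind) kinematic-wave sketch for h_t + (alpha*h^beta)_x = i_e with beta near two, as assumed above; it is a numerical stand-in, not the paper's approximate analytical solution, and the parameter values are illustrative.

      import numpy as np

      L, nx = 10.0, 100                  # slope length (m) and number of cells
      dx = L / nx
      alpha, beta = 1.5, 2.0             # assumed kinematic-wave parameters
      i_e = 1.0e-5                       # constant excess rainfall rate (m/s)
      dt = 0.05                          # time step (satisfies the CFL-type bound here)
      h = np.zeros(nx)                   # flow depth (m)

      for _ in range(20_000):            # about 1000 s of simulated rainfall
          q = alpha * h ** beta                               # unit discharge
          dqdx = np.diff(np.concatenate(([0.0], q))) / dx     # upwind gradient, zero inflow at the top
          h = np.maximum(h + dt * (i_e - dqdx), 0.0)

      print(f"outlet unit discharge: {alpha * h[-1] ** beta:.3e} m^2/s")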

  7. Replica symmetric evaluation of the information transfer in a two-layer network in the presence of continuous and discrete stimuli.

    PubMed

    Del Prete, Valeria; Treves, Alessandro

    2002-04-01

    In a previous paper we have evaluated analytically the mutual information between the firing rates of N independent units and a set of multidimensional continuous and discrete stimuli, for a finite population size and in the limit of large noise. Here, we extend the analysis to the case of two interconnected populations, where input units activate output ones via Gaussian weights and a threshold linear transfer function. We evaluate the information carried by a population of M output units, again about continuous and discrete correlates. The mutual information is evaluated solving saddle-point equations under the assumption of replica symmetry, a method that, by taking into account only the term linear in N of the input information, is equivalent to assuming the noise to be large. Within this limitation, we analyze the dependence of the information on the ratio M/N, on the selectivity of the input units and on the level of the output noise. We show analytically, and confirm numerically, that in the limit of a linear transfer function and of a small ratio between output and input noise, the output information approaches asymptotically the information carried in input. Finally, we show that the information loss in output does not depend much on the structure of the stimulus, whether purely continuous, purely discrete or mixed, but only on the position of the threshold nonlinearity, and on the ratio between input and output noise.

  8. New methods for new questions: obstacles and opportunities.

    PubMed

    Foster, E Michael; Kalil, Ariel

    2008-03-01

    Two forces motivate this special section, "New Methods for New Questions in Developmental Psychology." First are recent developments in social science methodology and the increasing availability of those methods in common software packages. Second, at the same time psychologists' understanding of developmental phenomena has continued to grow. At their best, these developments in theory and methods work in tandem, fueling each other. Newer methods make it possible for scientists to better test their ideas; better ideas lead methodologists to techniques that better reflect, capture, and quantify the underlying processes. The articles in this special section represent a sampling of these new methods and new questions. The authors describe common themes in these articles and identify barriers to future progress, such as the lack of data sharing by and analytical training for developmentalists.

  9. 7 CFR 94.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture... POULTRY AND EGG PRODUCTS Mandatory Analyses of Egg Products § 94.4 Analytical methods. The majority of analytical methods used by the USDA laboratories to perform mandatory analyses for egg products are listed as...

  10. Estimating habitat volume of living resources using three-dimensional circulation and biogeochemical models

    NASA Astrophysics Data System (ADS)

    Smith, Katharine A.; Schlag, Zachary; North, Elizabeth W.

    2018-07-01

    Coupled three-dimensional circulation and biogeochemical models predict changes in water properties that can be used to define fish habitat, including physiologically important parameters such as temperature, salinity, and dissolved oxygen. However, methods for calculating the volume of habitat defined by the intersection of multiple water properties are not well established for coupled three-dimensional models. The objectives of this research were to examine multiple methods for calculating habitat volume from three-dimensional model predictions, select the most robust approach, and provide an example application of the technique. Three methods were assessed: the "Step," "Ruled Surface", and "Pentahedron" methods, the latter of which was developed as part of this research. Results indicate that the analytical Pentahedron method is exact, computationally efficient, and preserves continuity in water properties between adjacent grid cells. As an example application, the Pentahedron method was implemented within the Habitat Volume Model (HabVol) using output from a circulation model with an Arakawa C-grid and physiological tolerances of juvenile striped bass (Morone saxatilis). This application demonstrates that the analytical Pentahedron method can be successfully applied to calculate habitat volume using output from coupled three-dimensional circulation and biogeochemical models, and it indicates that the Pentahedron method has wide application to aquatic and marine systems for which these models exist and physiological tolerances of organisms are known.
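
    A hedged sketch of the simplest ("Step") approach that the Pentahedron method is compared against: sum the volumes of grid cells whose predicted temperature, salinity, and dissolved oxygen all lie within an organism's tolerances. The within-cell interpolation that makes the Pentahedron method exact is not reproduced, and the arrays and tolerance values are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      shape = (20, 30, 10)                         # (x, y, z) grid cells
      temp = rng.uniform(5, 30, shape)             # deg C
      salt = rng.uniform(0, 35, shape)             # psu
      do = rng.uniform(1, 10, shape)               # mg/L
      cell_vol = np.full(shape, 1.0e6)             # m^3 per cell (assumed uniform)

      suitable = (temp > 10) & (temp < 27) & (salt > 5) & (do > 3)   # hypothetical tolerances
      print(f"habitat volume: {cell_vol[suitable].sum():.3e} m^3")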

  11. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  12. A Survey of Mathematical Programming in the Soviet Union (Bibliography),

    DTIC Science & Technology

    1982-01-01

    ASTAFYEV, N. N., "METHOD OF LINEARIZATION IN CONVEX PROGRAMMING", TR4- Y ZIMN SHKOLY PO MAT PROGRAMMIR I XMEZHN VOPR DROGOBYCH, 72, VOL. 3, 54-73 2...AKADEMIYA KOMMUNLN’NOGO KHOZYAYSTVA (MOSCOW), 72, NO. 93, 70-77 19. GIMELFARB , G, V. MARCHENKO, V. RYBAK, "AUTOMATIC IDENTIFICATION OF IDENTICAL POINTS...DYNAMIC PROGRAMMING (CONTINUED) 25. KOLOSOV, G. Y , "ON ANALYTICAL SOLUTION OF DESIGN PROBLEMS FOR DISTRIBUTED OPTIMAL CONTROL SYSTEMS SUBJECTED TO RANDOM

  13. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  14. Considerations regarding the validation of chromatographic mass spectrometric methods for the quantification of endogenous substances in forensics.

    PubMed

    Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra

    2018-02-01

    The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. Until now, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma-hydroxybutyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances are discussed, which may be used as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), matrix effects, and recovery have to be approached differently. The highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Applying analytic hierarchy process to assess healthcare-oriented cloud computing service systems.

    PubMed

    Liao, Wen-Hwa; Qiu, Wan-Li

    2016-01-01

    Numerous differences exist between the healthcare industry and other industries. Difficulties in the business operation of the healthcare industry have continually increased because of the volatility and importance of health care, changes to and requirements of health insurance policies, and the statuses of healthcare providers, which are typically considered not-for-profit organizations. Moreover, because of the financial risks associated with constant changes in healthcare payment methods and constantly evolving information technology, healthcare organizations must continually adjust their business operation objectives; therefore, cloud computing presents both a challenge and an opportunity. As a response to aging populations and the prevalence of the Internet in fast-paced contemporary societies, cloud computing can be used to facilitate the task of balancing the quality and costs of health care. To evaluate cloud computing service systems for use in health care, providing decision makers with a comprehensive assessment method for prioritizing decision-making factors is highly beneficial. Hence, this study applied the analytic hierarchy process, compared items related to cloud computing and health care, executed a questionnaire survey, and then classified the critical factors influencing healthcare cloud computing service systems on the basis of statistical analyses of the questionnaire results. The results indicate that the primary factor affecting the design or implementation of optimal cloud computing healthcare service systems is cost effectiveness, with the secondary factors being practical considerations such as software design and system architecture.
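
    A compact sketch of the AHP step described above, assuming an illustrative three-criterion pairwise-comparison matrix rather than the study's survey data: criterion weights come from the principal eigenvector, and the consistency ratio checks the coherence of the judgments.

      import numpy as np

      # Illustrative pairwise comparisons, e.g. cost effectiveness vs. software design vs. system architecture.
      A = np.array([[1.0, 3.0, 5.0],
                    [1/3., 1.0, 2.0],
                    [1/5., 1/2., 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      weights = np.abs(eigvecs[:, k].real)
      weights /= weights.sum()               # normalized criterion weights

      n = A.shape[0]
      ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
      cr = ci / 0.58                         # Saaty random index for n = 3
      print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))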

  16. Towards European urinalysis guidelines. Introduction of a project under European Confederation of Laboratory Medicine.

    PubMed

    Kouri, T T; Gant, V A; Fogazzi, G B; Hofmann, W; Hallander, H O; Guder, W G

    2000-07-01

    Improved standardized performance is needed because urinalysis continues to be one of the most frequently requested laboratory tests. Since 1997, the European Confederation of Laboratory Medicine (ECLM) has been supporting an interdisciplinary project aiming to produce European urinalysis guidelines. More than seventy clinical chemists, microbiologists and ward-based clinicians, as well as representatives of manufacturers are taking part. These guidelines aim to improve the quality and consistency of chemical urinalysis, particle counting and bacterial culture by suggesting optimal investigative processes that could be applied in Europe. The approach is based on medical needs for urinalysis. The importance of the pre-analytical stage for total quality is stressed by detailed illustrative advice for specimen collection. Attention is also given to emerging automated technology. For cost containment reasons, both optimum (ideal) procedures and minimum analytical approaches are suggested. Since urinalysis mostly lacks genuine reference methods (primary reference measurement procedures; Level 4), a novel classification of the methods is proposed: comparison measurement procedures (Level 3), quantitative routine procedures (Level 2), and ordinal scale examinations (Level 1). Stepwise strategies are suggested to save costs, applying different rules for general and specific patient populations. New analytical quality specifications have been created. After a consultation period, the final written text will be published in full as a separate document.

  17. The influence of parametric and external noise in act-and-wait control with delayed feedback.

    PubMed

    Wang, Jiaxing; Kuske, Rachel

    2017-11-01

    We apply several novel semi-analytic approaches for characterizing and calculating the effects of noise in a system with act-and-wait control. For concrete illustration, we apply these to a canonical balance model for an inverted pendulum to study the combined effect of delay and noise within the act-and-wait setting. While the act-and-wait control facilitates strong stabilization through deadbeat control, a comparison of different models with continuous vs. discrete updating of the control strategy in the active period illustrates how delays combined with the imprecise application of the control can seriously degrade the performance. We give several novel analyses of a generalized act-and-wait control strategy, allowing flexibility in the updating of the control strategy, in order to understand the sensitivities to delays and random fluctuations. In both the deterministic and stochastic settings, we give analytical and semi-analytical results that characterize and quantify the dynamics of the system. These results include the size and shape of stability regions, densities for the critical eigenvalues that capture the rate of reaching the desired stable equilibrium, and amplification factors for sustained fluctuations in the context of external noise. They also provide the dependence of these quantities on the length of the delay and the active period. In particular, we see that the combined influence of delay, parametric error, or external noise and on-off control can qualitatively change the dynamics, thus reducing the robustness of the control strategy. We also capture the dependence on how frequently the control is updated, allowing an interpolation between continuous and frequent updating. In addition to providing insights for these specific models, the methods we propose are generalizable to other settings with noise, delay, and on-off control, where analytical techniques are otherwise severely scarce.
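
    A rough sketch, under invented parameter values, of act-and-wait control of a linearized inverted pendulum with delayed proportional-derivative feedback that is switched on only during the act phase of each period; whether the upright position is stabilized depends on the gains and the act/wait timing, and the paper's semi-analytic treatment of parametric and external noise is not reproduced.

      import numpy as np

      dt, T = 1e-3, 5.0
      omega2 = 1.0                   # g/l for the linearized pendulum (assumed)
      tau = 0.1                      # feedback delay (s)
      t_wait, t_act = 0.2, 0.05      # wait and act durations per period
      kp, kd = 3.0, 1.5              # feedback gains (illustrative)

      n = int(T / dt)
      d = int(tau / dt)
      theta = np.zeros(n); dtheta = np.zeros(n)
      theta[0] = 0.05                # small initial tilt

      for i in range(1, n):
          t = i * dt
          acting = (t % (t_wait + t_act)) >= t_wait
          u = -kp * theta[i - d] - kd * dtheta[i - d] if (acting and i > d) else 0.0
          ddtheta = omega2 * theta[i - 1] + u          # linearized dynamics plus delayed control
          dtheta[i] = dtheta[i - 1] + dt * ddtheta
          theta[i] = theta[i - 1] + dt * dtheta[i - 1]

      print(f"final |theta| = {abs(theta[-1]):.3e}")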

  18. Mesoscopic description of random walks on combs

    NASA Astrophysics Data System (ADS)

    Méndez, Vicenç; Iomin, Alexander; Campos, Daniel; Horsthemke, Werner

    2015-12-01

    Combs are a simple caricature of various types of natural branched structures, which belong to the category of loopless graphs and consist of a backbone and branches. We study continuous time random walks on combs and present a generic method to obtain their transport properties. The random walk along the branches may be biased, and we account for the effect of the branches by renormalizing the waiting time probability distribution function for the motion along the backbone. We analyze the overall diffusion properties along the backbone and find normal diffusion, anomalous diffusion, and stochastic localization (diffusion failure), respectively, depending on the characteristics of the continuous time random walk along the branches, and compare our analytical results with stochastic simulations.
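
    A Monte Carlo sketch of the idea, with invented parameters: between backbone steps the walker makes an unbiased excursion into a finite tooth, and the excursion duration plays the role of the renormalized waiting time; the analytical renormalization itself is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(2)
      tooth_len, n_steps, n_walkers = 25, 100, 200
      times, sq_disp = [], []

      for _ in range(n_walkers):
          x, t = 0, 0.0
          for _ in range(n_steps):
              y = 1                                  # enter a tooth
              t += 1.0
              while y != 0:                          # unbiased walk until return to the backbone
                  y = min(y + rng.choice((-1, 1)), tooth_len)
                  t += 1.0
              x += rng.choice((-1, 1))               # then one backbone step
              t += 1.0
          times.append(t); sq_disp.append(x * x)

      print(f"mean elapsed time {np.mean(times):.0f}, mean squared displacement {np.mean(sq_disp):.1f}")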

  19. Fabrication of TREAT Fuel with Increased Graphite Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luther, Erik Paul; Leckie, Rafael M.; Dombrowski, David E.

    2014-02-05

    As part of the feasibility study exploring the replacement of the HEU fuel core of the TREAT reactor at Idaho National Laboratory with LEU fuel, this study demonstrates that it is possible to increase the graphite content of extruded fuel by reformulation. The extrusion process was used to fabricate the “upgrade” core [1] for the TREAT reactor. The graphite content achieved is determined by calculation and has not been measured by any analytical method. In conjunction, a technique, Raman spectroscopy, has been investigated for measuring the graphite content. This method shows some promise in differentiating between carbon and graphite; however, standards that would allow the technique to be calibrated to quantify the graphite concentration have yet to be fabricated. Continued research into Raman spectroscopy is ongoing. As part of this study, cracking of graphite extrusions due to volatile evolution during heat treatment has been largely eliminated. Continued research to optimize this extrusion method is required.

  20. Simultaneous determination of 20 pharmacologically active substances in cow's milk, goat's milk, and human breast milk by gas chromatography-mass spectrometry.

    PubMed

    Azzouz, Abdelmonaim; Jurado-Sánchez, Beatriz; Souhail, Badredine; Ballesteros, Evaristo

    2011-05-11

    This paper reports a systematic approach to the development of a method that combines continuous solid-phase extraction and gas chromatography-mass spectrometry for the simultaneous determination of 20 pharmacologically active substances including antibacterials (chloramphenicol, florfenicol, pyrimethamine, thiamphenicol), nonsteroidal anti-inflammatories (diclofenac, flunixin, ibuprofen, ketoprofen, naproxen, mefenamic acid, niflumic acid, phenylbutazone), an antiseptic (triclosan), an antiepileptic (carbamazepine), a lipid regulator (clofibric acid), β-blockers (metoprolol, propranolol), and hormones (17α-ethinylestradiol, estrone, 17β-estradiol) in milk samples. The sample preparation procedure involves deproteination of the milk, followed by sample enrichment and cleanup by continuous solid-phase extraction. The proposed method provides a linear response over the range of 0.6-5000 ng/kg and features limits of detection from 0.2 to 1.2 ng/kg depending on the particular analyte. The method was successfully applied to the determination of pharmacologically active substance residues in food samples including whole, raw, half-skim, skim, and powdered milk from different sources (cow, goat, and human breast).

  1. Dissociable meta-analytic brain networks contribute to coordinated emotional processing.

    PubMed

    Riedel, Michael C; Yanes, Julio A; Ray, Kimberly L; Eickhoff, Simon B; Fox, Peter T; Sutherland, Matthew T; Laird, Angela R

    2018-06-01

    Meta-analytic techniques for mining the neuroimaging literature continue to exert an impact on our conceptualization of functional brain networks contributing to human emotion and cognition. Traditional theories regarding the neurobiological substrates contributing to affective processing are shifting from regional- towards more network-based heuristic frameworks. To elucidate differential brain network involvement linked to distinct aspects of emotion processing, we applied an emergent meta-analytic clustering approach to the extensive body of affective neuroimaging results archived in the BrainMap database. Specifically, we performed hierarchical clustering on the modeled activation maps from 1,747 experiments in the affective processing domain, resulting in five meta-analytic groupings of experiments demonstrating whole-brain recruitment. Behavioral inference analyses conducted for each of these groupings suggested dissociable networks supporting: (1) visual perception within primary and associative visual cortices, (2) auditory perception within primary auditory cortices, (3) attention to emotionally salient information within insular, anterior cingulate, and subcortical regions, (4) appraisal and prediction of emotional events within medial prefrontal and posterior cingulate cortices, and (5) induction of emotional responses within amygdala and fusiform gyri. These meta-analytic outcomes are consistent with a contemporary psychological model of affective processing in which emotionally salient information from perceived stimuli are integrated with previous experiences to engender a subjective affective response. This study highlights the utility of using emergent meta-analytic methods to inform and extend psychological theories and suggests that emotions are manifest as the eventual consequence of interactions between large-scale brain networks. © 2018 Wiley Periodicals, Inc.

  2. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Hodges, Dewey H.; Leung, Martin S.; Bless, Robert R.

    1991-01-01

    The proposed investigation on a Matched Asymptotic Expansion (MAE) method was carried out. It was concluded that the method of MAE is not applicable to launch vehicle ascent trajectory optimization due to a lack of a suitable stretched variable. More work was done on the earlier regular perturbation approach using a piecewise analytic zeroth order solution to generate a more accurate approximation. In the meantime, a singular perturbation approach using manifold theory is also under current investigation. Work on a general computational environment based on the use of MACSYMA and the weak Hamiltonian finite element method continued during this period. This methodology is capable of the solution of a large class of optimal control problems.

  3. Analytical approximation for the Einstein-dilaton-Gauss-Bonnet black hole metric

    NASA Astrophysics Data System (ADS)

    Kokkotas, K. D.; Konoplya, R. A.; Zhidenko, A.

    2017-09-01

    We construct an analytical approximation for the numerical black hole metric of P. Kanti et al. [Phys. Rev. D 54, 5049 (1996), 10.1103/PhysRevD.54.5049] in the four-dimensional Einstein-dilaton-Gauss-Bonnet (EdGB) theory. The continued fraction expansion in terms of a compactified radial coordinate, used here, converges slowly when the dilaton coupling approaches its extremal values, but for a black hole far from the extremal state, the analytical formula has a maximal relative error of a fraction of one percent already within the third order of the continued fraction expansion. The suggested analytical representation of the numerical black hole metric is relatively compact and a good approximation in the whole space outside the black hole event horizon. Therefore, it can serve in the same way as an exact solution when analyzing particles' motion, perturbations, quasinormal modes, Hawking radiation, accreting disks, and many other problems in the vicinity of a black hole. In addition, we construct the approximate analytical expression for the dilaton field.
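
    For reference, a generic continued-fraction ansatz of the Rezzolla-Zhidenko type for a metric function, written in the compactified coordinate x = 1 - r0/r with r0 the horizon radius; the exact coefficient conventions and the dilaton sector used in the paper may differ.

      \[
        N^{2}(x) = x\left[\,1 - \varepsilon(1-x) + (a_{0}-\varepsilon)(1-x)^{2}
                   + \tilde{A}(x)(1-x)^{3}\right],
        \qquad
        \tilde{A}(x) = \cfrac{a_{1}}{1 + \cfrac{a_{2}\,x}{1 + \cfrac{a_{3}\,x}{1 + \cdots}}} ,
      \]

    so that truncating the continued fraction at a given order leaves a finite set of coefficients to be fitted to the numerical metric.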

  4. Resonance-state properties from a phase shift analysis with the S-matrix pole method and the effective-range method

    NASA Astrophysics Data System (ADS)

    Irgaziev, B. F.; Orlov, Yu. V.

    2015-02-01

    Asymptotic normalization coefficients (ANCs) are fundamental nuclear constants playing an important role in nuclear physics and astrophysics. We derive a new useful relationship between ANCs of the Gamow radial wave function and the renormalized (due to the Coulomb interaction) Coulomb-nuclear partial scattering amplitude. We use an analytical approximation in the form of a series for the nonresonant part of the phase shift, which can be analytically continued to the point of an isolated resonance pole in the complex plane of the momentum. Earlier, this method, which we call the S-matrix pole method, was used by us to find the resonance pole energy. We find the corresponding fitting parameters for concrete resonance states of 5He, 5Li, and 16O. Additionally, based on effective-range theory, we calculate the parameters of the p3/2 and p1/2 resonance states of the nuclei 5He and 5Li and compare them with the results obtained by the S-matrix pole method. ANC values are found which can be used to calculate the reaction rate through the 16O resonances that lie slightly above the threshold of the α + 12C channel.
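
    For orientation, the standard effective-range expansion for a short-range (non-Coulomb) partial wave is

      \[
        k^{2\ell+1}\cot\delta_{\ell}(k) \;=\; -\frac{1}{a_{\ell}}
        \;+\; \frac{r_{\ell}}{2}\,k^{2}
        \;-\; P_{\ell}\,r_{\ell}^{3}\,k^{4} \;+\; \cdots ,
      \]

    with scattering "length" a_l, effective range r_l, and shape parameter P_l; the Coulomb-modified form needed for charged channels such as alpha + 12C involves additional Coulomb penetration factors and is not written out here.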

  5. On-Site Detection as a Countermeasure to Chemical Warfare/Terrorism.

    PubMed

    Seto, Y

    2014-01-01

    On-site monitoring and detection are necessary in the crisis and consequence management of wars and terrorism involving chemical warfare agents (CWAs) such as sarin. The analytical performance required for on-site detection is mainly determined by the fatal vapor concentration and volatility of the CWAs involved. This review interprets and compares the analytical performance of presently available on-site technologies and commercially available on-site equipment for detecting CWAs, including: classical manual methods, photometric methods, ion mobility spectrometry, vibrational spectrometry, gas chromatography, mass spectrometry, sensors, and other methods. Some of the data evaluated were obtained from our experiments using authentic CWAs. We concluded that (a) no technologies perfectly fulfill all of the on-site detection requirements and (b) adequate on-site detection requires (i) a combination of the monitoring-tape method and ion-mobility spectrometry for point detection and (ii) a combination of the monitoring-tape method, atmospheric pressure chemical ionization mass spectrometry with counterflow introduction, and gas chromatography with a trap and special detectors for continuous monitoring. The basic properties of CWAs, the concept of on-site detection, and the sarin gas attacks in Japan, as well as the forensic investigations thereof, are also explicated in this article. Copyright © 2014 Central Police University.

  6. Sorbent-based sampling methods for volatile and semi-volatile organic compounds in air. Part 2. Sorbent selection and other aspects of optimizing air monitoring methods.

    PubMed

    Woolfenden, Elizabeth

    2010-04-16

    Sorbent tubes/traps are widely used in combination with gas chromatographic (GC) analytical methods to monitor the vapour-phase fraction of organic compounds in air. Applications range from atmospheric research and ambient air monitoring (indoor and outdoor) to occupational hygiene (personal exposure assessment) and measuring chemical emission levels. Part 1 of this paper reviewed the main sorbent-based air sampling strategies including active (pumped) tube monitoring, diffusive (passive) sampling onto sorbent tubes/cartridges plus sorbent trapping/focusing of whole air samples that are either collected in containers (such as canisters or bags) or monitored online. Options for subsequent extraction and transfer to GC(MS) analysis were also summarised and the trend to thermal desorption (TD)-based methods and away from solvent extraction was explained. As a result of this trend, demand for TD-compatible sorbents (alternatives to traditional charcoal) is growing. Part 2 of this paper therefore continues with a summary of TD-compatible sorbents, their respective advantages and limitations and considerations for sorbent selection. Other analytical considerations for optimizing sorbent-based air monitoring methods are also discussed together with recent technical developments and sampling accessories which have extended the application range of sorbent trapping technology generally. Copyright 2010 Elsevier B.V. All rights reserved.

  7. A repeated measures model for analysis of continuous outcomes in sequential parallel comparison design studies.

    PubMed

    Doros, Gheorghe; Pencina, Michael; Rybin, Denis; Meisner, Allison; Fava, Maurizio

    2013-07-20

    Previous authors have proposed the sequential parallel comparison design (SPCD) to address the issue of high placebo response rate in clinical trials. The original use of SPCD focused on binary outcomes, but recent use has since been extended to continuous outcomes that arise more naturally in many fields, including psychiatry. Analytic methods proposed to date for analysis of SPCD trial continuous data included methods based on seemingly unrelated regression and ordinary least squares. Here, we propose a repeated measures linear model that uses all outcome data collected in the trial and accounts for data that are missing at random. An appropriate contrast formulated after the model has been fit can be used to test the primary hypothesis of no difference in treatment effects between study arms. Our extensive simulations show that when compared with the other methods, our approach preserves the type I error even for small sample sizes and offers adequate power and the smallest mean squared error under a wide variety of assumptions. We recommend consideration of our approach for analysis of data coming from SPCD trials. Copyright © 2013 John Wiley & Sons, Ltd.
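
    To illustrate the kind of phase-weighted contrast used to combine the two stages of an SPCD analysis, the sketch below fits a simple cluster-robust regression to simulated two-phase data and tests a weighted combination of the phase-specific treatment effects. It is a simplified stand-in for the authors' repeated-measures model; the data, the 0.6/0.4 weights, and all variable names are assumptions.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)

    # Simulated two-phase (SPCD-like) data; effect sizes and names are assumptions.
    n = 300
    trt1 = rng.integers(0, 2, n)                          # phase-1 randomization
    trt2 = np.where(trt1 == 0, rng.integers(0, 2, n), 0)  # phase-2 re-randomization of placebo arm
    y1 = 0.4 * trt1 + rng.normal(0.0, 1.0, n)
    y2 = 0.4 * trt2 + rng.normal(0.0, 1.0, n)

    long = pd.DataFrame({
        "subject": np.tile(np.arange(n), 2),
        "phase": np.repeat([1, 2], n),
        "trt": np.concatenate([trt1, trt2]),
        "y": np.concatenate([y1, y2]),
    })

    # Phase-specific treatment effects with cluster-robust (by subject) standard errors;
    # a stand-in for the repeated-measures mixed model proposed in the abstract.
    fit = smf.ols("y ~ C(phase) * trt", data=long).fit(
        cov_type="cluster", cov_kwds={"groups": long["subject"]})

    # Weighted contrast of the phase-1 and phase-2 treatment effects (weights assumed).
    w1, w2 = 0.6, 0.4
    params, cov = fit.params, fit.cov_params()
    inter = [name for name in params.index if ":" in name][0]  # interaction term name
    c = pd.Series(0.0, index=params.index)
    c["trt"] = w1 + w2      # the phase-1 coefficient enters both phase effects
    c[inter] = w2           # the interaction adds the extra phase-2 part
    est = float(c @ params)
    se = float(np.sqrt(c @ cov @ c))
    print(f"weighted treatment effect = {est:.3f}, z = {est / se:.2f}")
    ```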

  8. Topology based data analysis identifies a subgroup of breast cancers with a unique mutational profile and excellent survival.

    PubMed

    Nicolau, Monica; Levine, Arnold J; Carlsson, Gunnar

    2011-04-26

    High-throughput biological data, whether generated as sequencing, transcriptional microarrays, proteomic, or other means, continues to require analytic methods that address its high dimensional aspects. Because the computational part of data analysis ultimately identifies shape characteristics in the organization of data sets, the mathematics of shape recognition in high dimensions continues to be a crucial part of data analysis. This article introduces a method that extracts information from high-throughput microarray data and, by using topology, provides greater depth of information than current analytic techniques. The method, termed Progression Analysis of Disease (PAD), first identifies robust aspects of cluster analysis, then goes deeper to find a multitude of biologically meaningful shape characteristics in these data. Additionally, because PAD incorporates a visualization tool, it provides a simple picture or graph that can be used to further explore these data. Although PAD can be applied to a wide range of high-throughput data types, it is used here as an example to analyze breast cancer transcriptional data. This identified a unique subgroup of Estrogen Receptor-positive (ER(+)) breast cancers that express high levels of c-MYB and low levels of innate inflammatory genes. These patients exhibit 100% survival and no metastasis. No supervised step beyond distinction between tumor and healthy patients was used to identify this subtype. The group has a clear and distinct, statistically significant molecular signature, it highlights coherent biology but is invisible to cluster methods, and does not fit into the accepted classification of Luminal A/B, Normal-like subtypes of ER(+) breast cancers. We denote the group as c-MYB(+) breast cancer.

  9. The G-BHQ synergistic effect: Improved double quenching molecular beacons based on guanine and Black Hole Quencher for sensitive simultaneous detection of two DNAs.

    PubMed

    Xiang, Dongshan; Li, Fengquan; Wu, Chenyi; Shi, Boan; Zhai, Kun

    2017-11-01

    We designed two double-quenching molecular beacons (MBs) with a simple structure based on guanine (G base) and Black Hole Quencher (BHQ), and developed a new analytical method for sensitive simultaneous detection of two DNAs by synchronous fluorescence analysis. In this method, carboxyfluorescein (FAM) and tetramethyl-6-carboxyrhodamine (TAMRA) were selected as the fluorophores of the two MBs, Black Hole Quencher 1 (BHQ-1) and Black Hole Quencher 2 (BHQ-2) were selected as the respective organic quenchers, and three consecutive G-base nucleotides were connected to each organic quencher. In the presence of the target DNAs, the two MBs hybridize with the corresponding targets, the fluorophores are separated from the organic quenchers and G bases, and the fluorescence of FAM and TAMRA is recovered. Under the chosen conditions, the fluorescence intensities of FAM and TAMRA both exhibited good linear dependence on the concentrations of the target DNAs (T1 and T2) in the range from 4 × 10⁻¹⁰ to 4 × 10⁻⁸ mol L⁻¹ (M). The detection limits (3σ, n = 13) were 3 × 10⁻¹⁰ M for T1 and 2 × 10⁻¹⁰ M for T2. Compared with existing MB-based methods for multiplex DNA analysis, the proposed double-quenching MB method offers not only low fluorescence background, short analysis time, and low detection cost, but also easy synthesis and good stability of the MB probes. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Innovations in coating technology.

    PubMed

    Behzadi, Sharareh S; Toegel, Stefan; Viernstein, Helmut

    2008-01-01

    Despite representing one of the oldest pharmaceutical techniques, coating of dosage forms is still frequently used in pharmaceutical manufacturing. The aims of coating range from simply masking the taste or odour of drugs to the sophisticated controlling of site and rate of drug release. The high expectations for different coating technologies have required great efforts regarding the development of reproducible and controllable production processes. Basically, improvements in coating methods have focused on particle movement, spraying systems, and air and energy transport. Thereby, homogeneous distribution of coating material and increased drying efficiency should be accomplished in order to achieve high end product quality. Moreover, given the claim of the FDA to design the end product quality already during the manufacturing process (Quality by Design), the development of analytical methods for the analysis, management and control of coating processes has attracted special attention during recent years. The present review focuses on recent patents claiming improvements in pharmaceutical coating technology and intends to first familiarize the reader with the available procedures and to subsequently explain the application of different analytical tools. Aiming to structure this comprehensive field, coating technologies are primarily divided into pan and fluidized bed coating methods. Regarding pan coating procedures, pans rotating around inclined, horizontal and vertical axes are reviewed separately. On the other hand, fluidized bed technologies are subdivided into those involving fluidized and spouted beds. Then, continuous processing techniques and improvements in spraying systems are discussed in dedicated chapters. Finally, currently used analytical methods for the understanding and management of coating processes are reviewed in detail in the last section of the review.

  11. Analytical Description of Ascending Motion of Rockets in the Atmosphere

    ERIC Educational Resources Information Center

    Rodrigues, H.; de Pinho, M. O.; Portes, D., Jr.; Santiago, A.

    2009-01-01

    In continuation of a previous work, we present an analytic study of ascending vertical motion of a rocket subjected to a quadratic drag for the case where the mass-variation law is a linear function of time. We discuss the detailed analytical solution of the model differential equations in closed form. Examples of application are presented and…

  12. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2011-04-01 2011-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  13. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2014-04-01 2014-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  14. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2012-04-01 2012-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  15. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2013-04-01 2013-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  16. Prevalence of Pre-Analytical Errors in Clinical Chemistry Diagnostic Labs in Sulaimani City of Iraqi Kurdistan

    PubMed Central

    2017-01-01

    Background Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Materials and Methods Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician’s request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a Z test for proportions. In particular, the survey aimed to discover whether analytical errors were recorded and examine the types of platforms used in the selected diagnostic labs. Results The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results. PMID:28107395
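
    The comparison of error rates between labs described above can be reproduced in outline with a standard two-sample proportion Z test; the counts below are illustrative, not the study's data.

    ```python
    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical counts: rejected (inappropriate) samples out of samples observed
    # at two labs; the numbers are illustrative only.
    rejected = [140, 95]
    observed = [550, 520]

    z, p = proportions_ztest(count=rejected, nobs=observed)
    print(f"z = {z:.2f}, p = {p:.4f}")
    ```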

  17. Multimedia Analysis plus Visual Analytics = Multimedia Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chinchor, Nancy; Thomas, James J.; Wong, Pak C.

    2010-10-01

    Multimedia analysis has focused on images, video, and to some extent audio and has made progress in single channels excluding text. Visual analytics has focused on the user interaction with data during the analytic process plus the fundamental mathematics and has continued to treat text as did its precursor, information visualization. The general problem we address in this tutorial is the combining of multimedia analysis and visual analytics to deal with multimedia information gathered from different sources, with different goals or objectives, and containing all media types and combinations in common usage.

  18. Big data in sleep medicine: prospects and pitfalls in phenotyping

    PubMed Central

    Bianchi, Matt T; Russo, Kathryn; Gabbidon, Harriett; Smith, Tiaundra; Goparaju, Balaji; Westover, M Brandon

    2017-01-01

    Clinical polysomnography (PSG) databases are a rich resource in the era of “big data” analytics. We explore the uses and potential pitfalls of clinical data mining of PSG using statistical principles and analysis of clinical data from our sleep center. We performed retrospective analysis of self-reported and objective PSG data from adults who underwent overnight PSG (diagnostic tests, n=1835). Self-reported symptoms overlapped markedly between the two most common categories, insomnia and sleep apnea, with the majority reporting symptoms of both disorders. Standard clinical metrics routinely reported on objective data were analyzed for basic properties (missing values, distributions), pairwise correlations, and descriptive phenotyping. Of 41 continuous variables, including clinical and PSG derived, none passed testing for normality. Objective findings of sleep apnea and periodic limb movements were common, with 51% having an apnea–hypopnea index (AHI) >5 per hour and 25% having a leg movement index >15 per hour. Different visualization methods are shown for common variables to explore population distributions. Phenotyping methods based on clinical databases are discussed for sleep architecture, sleep apnea, and insomnia. Inferential pitfalls are discussed using the current dataset and case examples from the literature. The increasing availability of clinical databases for large-scale analytics holds important promise in sleep medicine, especially as it becomes increasingly important to demonstrate the utility of clinical testing methods in management of sleep disorders. Awareness of the strengths, as well as caution regarding the limitations, will maximize the productive use of big data analytics in sleep medicine. PMID:28243157

  19. Simulation of Trajectories for High Specific Impulse Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Polsgrove, Tara; Adams, Robert B.; Brady, Hugh J. (Technical Monitor)

    2002-01-01

    Difficulties in approximating flight times and deliverable masses for continuous thrust propulsion systems have complicated comparison and evaluation of proposed propulsion concepts. These continuous thrust propulsion systems are of interest to many groups, not the least of which are the electric propulsion and fusion communities. Several charts plotting the results of well-known trajectory simulation codes were developed and are contained in this paper. These charts illustrate the dependence of time of flight and payload ratio on jet power, initial mass, specific impulse and specific power. These charts are intended to be a tool by which people in the propulsion community can explore the possibilities of their propulsion system concepts. Trajectories were simulated using the tools VARITOP and IPOST. VARITOP is a well known trajectory optimization code that involves numerical integration based on calculus of variations. IPOST has several methods of trajectory simulation; the one used in this paper is Cowell's method for full integration of the equations of motion. The analytical method derived in the companion paper was also used to simulate the trajectory. The accuracy of this method is discussed in the paper.

  20. Analytic continuation of quantum Monte Carlo data by stochastic analytical inference.

    PubMed

    Fuchs, Sebastian; Pruschke, Thomas; Jarrell, Mark

    2010-05-01

    We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on principles of Bayesian statistical inference. Within this framework we are able to obtain an explicit expression for the calculation of a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding as by-product also the distribution function as function of the regularization parameter. Our algorithm thus avoids the usual ad hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum-entropy calculation.
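
    The inversion problem underlying such analytic continuation can be illustrated with a plain Tikhonov-regularized least-squares fit of a fermionic kernel to noisy imaginary-time data. This is not the Bayesian stochastic analytical inference algorithm of the paper, which averages over spectra and treats the regularization parameter probabilistically (and real methods additionally impose positivity and an entropy prior); the kernel discretization, synthetic spectrum, and regularization strength below are assumptions for the demonstration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Imaginary-time grid, frequency grid, and the fermionic kernel
    #   G(tau) = integral dw  exp(-tau*w) / (1 + exp(-beta*w)) * A(w)
    beta, n_tau, n_w = 10.0, 64, 200
    tau = np.linspace(0.0, beta, n_tau)
    w = np.linspace(-8.0, 8.0, n_w)
    dw = w[1] - w[0]
    K = np.exp(-np.outer(tau, w)) / (1.0 + np.exp(-beta * w))

    # Synthetic "true" spectrum (two Gaussian peaks, an assumption for the demo).
    A_true = 0.6 * np.exp(-(w - 1.5) ** 2 / 0.5) + 0.4 * np.exp(-(w + 2.0) ** 2 / 0.8)
    A_true /= A_true.sum() * dw

    # Noisy imaginary-time data, standing in for quantum Monte Carlo output.
    G = (K * dw) @ A_true
    G_noisy = G + 1e-4 * rng.normal(size=G.size)

    # Tikhonov-regularized least squares:  min_A || (K dw) A - G ||^2 + alpha ||A||^2
    alpha = 1e-3
    M = K * dw
    A_rec = np.linalg.solve(M.T @ M + alpha * np.eye(n_w), M.T @ G_noisy)
    print("recovered spectral weight:", A_rec.sum() * dw)
    ```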

  1. SAM Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target chemical, radiochemical, pathogens, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation

  2. A Review and Annotated Bibliography of Training Performance Measurement and Assessment Literature

    DTIC Science & Technology

    1988-10-01

    work environments and organizational climate questionnaires. Identifies empirical measures of Army unit effectiveness. Key Points: Looks at inspection reports, mission accomplishment results, efficiency measures, etc. ... effectiveness. Researchers should investigate means for developing more empirical data, better analytic methods, and standardized measurement. Increased

  3. Electromagnetic plane-wave pulse transmission into a Lorentz half-space.

    PubMed

    Cartwright, Natalie A

    2011-12-01

    The propagation of an electromagnetic plane-wave signal obliquely incident upon a Lorentz half-space is studied analytically. Time-domain asymptotic expressions that increase in accuracy with propagation distance are derived by application of uniform saddle point methods on the Fourier-Laplace integral representation of the transmitted field. The results are shown to be continuous in time and comparable with numerical calculations of the field. Arrival times and angles of refraction are given for prominent transient pulse features and the steady-state signal.

  4. Limited Qualities Evaluation of Longitudinal Flight Control Systems Designed Using Multiobjective Control Design Techniques (HAVE INFINITY II)

    DTIC Science & Technology

    1998-06-01

    analytical phase of this research. Finally, the mixed H2/H-Infinity method optimally trades off the different benefits offered by the separate H2 and H...potential benefits of the multiobjective design techniques used. Due to the HAVE INFINITY I test results, AFIT made the decision to continue the...sensitivity and complementary sensitivity weighting, and a mixed H2/H-Infinity design that compromised the benefits of both design techniques optimally. The

  5. Multi-model stereo restitution

    USGS Publications Warehouse

    Dueholm, K.S.

    1990-01-01

    Methods are described that permit simultaneous orientation of many small-frame photogrammetric models in an analytical plotter. The multi-model software program enables the operator to move freely between the oriented models during interpretation and mapping. Models change automatically when the measuring mark is moved from one frame to another, moving to the same ground coordinates in the neighboring model. Thus, data collection and plotting can be performed continuously across model boundaries. The orientation of the models is accomplished by a bundle block adjustment. -from Author

  6. Scattering of Lamb waves in a composite plate

    NASA Technical Reports Server (NTRS)

    Bratton, Robert; Datta, Subhendu; Shah, Arvind

    1991-01-01

    A combined analytical and finite element technique is developed to gain a better understanding of the scattering of elastic waves by defects. This hybrid method is capable of predicting scattered displacements from arbitrary shaped defects as well as inclusions of different material. The continuity of traction and displacements at the boundaries of the two areas provided the necessary equations to find the nodal displacements and expansion coefficients. Results clearly illustrate the influence of increasing crack depth on the scattered signal.

  7. [The applied of computer for analysis on contraceptive efficacy of IUD of rural women in Guangdong province].

    PubMed

    Jia, G H

    1989-12-01

    This paper discusses the use and effectiveness of the type-O IUD among rural married women in Guangdong province. The continuation rate of the type-O IUD is 71.7 per 100 women at one year. The main cause of failure was expulsion. The analysis used a combination of univariate and multivariate analytic methods. On the whole, the important factors were gravidity and parity, number of induced abortions, and the level of medical technique, among others.

  8. SAM Pathogen Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target pathogen analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select pathogens.

  9. High Precision Seawater Sr/Ca Measurements in the Florida Keys by Inductively Coupled Plasma Atomic Emission Spectrometry: Analytical Method and Implications for Coral Paleothermometry

    NASA Astrophysics Data System (ADS)

    Khare, A.; Kilbourne, K. H.; Schijf, J.

    2017-12-01

    Standard methods of reconstructing past sea surface temperatures (SSTs) with coral skeletal Sr/Ca ratios assume the seawater Sr/Ca ratio is constant. However, there is little data to support this assumption, in part because analytical techniques capable of determining seawater Sr/Ca with sufficient accuracy and precision are expensive and time consuming. We demonstrate a method to measure seawater Sr/Ca using inductively coupled plasma atomic emission spectrometry where we employ an intensity ratio calibration routine that reduces the self- matrix effects of calcium and cancels out the matrix effects that are common to both calcium and strontium. A seawater standard solution cross-calibrated with multiple instruments is used to correct for long-term instrument drift and any remnant matrix effects. The resulting method produces accurate seawater Sr/Ca determinations rapidly, inexpensively, and with a precision better than 0.2%. This method will make it easier for coral paleoclimatologists to quantify potentially problematic fluctuations in seawater Sr/Ca at their study locations. We apply our method to test for variability in surface seawater Sr/Ca along the Florida Keys Reef Tract. We are collecting winter and summer samples for two years in a grid with eleven nearshore to offshore transects across the reef, as well as continuous samples collected by osmotic pumps at four locations adjacent to our grid. Our initial analysis of the grid samples indicates a trend of decreasing Sr/Ca values offshore potentially due to a decreasing groundwater influence. The values differ by as much as 0.05 mmol/mol which could lead to an error of 1°C in mean SST reconstructions. Future work involves continued sampling in the Florida Keys to test for seasonal and interannual variability in seawater Sr/Ca, as well as collecting data from small reefs in the Virgin Islands to test the stability of seawater Sr/Ca under different geologic, hydrologic and hydrographic environments.
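
    A bare-bones sketch of an intensity-ratio calibration with a drift correction from a consistency standard is given below; the standard values, measured ratios, and drift factor are illustrative numbers, not data or the exact calibration routine from this study.

    ```python
    import numpy as np

    # Hypothetical intensity-ratio calibration for seawater Sr/Ca by ICP-AES.
    # Standards of known Sr/Ca and their measured Sr/Ca intensity ratios.
    std_true = np.array([8.0, 8.4, 8.8, 9.2])          # known Sr/Ca, mmol/mol
    std_meas = np.array([0.415, 0.436, 0.457, 0.478])  # measured I_Sr / I_Ca

    slope, intercept = np.polyfit(std_meas, std_true, 1)

    def srca_mmol_per_mol(intensity_ratio, drift_factor=1.0):
        """Convert a measured intensity ratio to Sr/Ca (mmol/mol).  drift_factor is the
        nominal/measured Sr/Ca of a seawater consistency standard, applied to correct
        long-term instrument drift and residual matrix effects."""
        return (slope * intensity_ratio + intercept) * drift_factor

    print(f"{srca_mmol_per_mol(0.4445, drift_factor=1.002):.3f} mmol/mol")
    ```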

  10. Continuous electrophoretic purification of individual analytes from multicomponent mixtures.

    PubMed

    McLaren, David G; Chen, David D Y

    2004-04-15

    Individual analytes can be isolated from multicomponent mixtures and collected in the outlet vial by carrying out electrophoretic purification through a capillary column. Desired analytes are allowed to migrate continuously through the column under the electric field while undesired analytes are confined to the inlet vial by application of a hydrodynamic counter pressure. Using pressure ramping and buffer replenishment techniques, 18% of the total amount present in a bulk sample can be purified when the resolution to the adjacent peak is approximately 3. With a higher resolution, the yield could be further improved. Additionally, by periodically introducing fresh buffer into the sample, changes in pH and conductivity can be mediated, allowing higher purity (>or=99.5%) to be preserved in the collected fractions. With an additional reversed cycle of flow counterbalanced capillary electrophoresis, any individual component in a sample mixture can be purified providing it can be separated in an electrophoresis system.

  11. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...

  12. SAM Biotoxin Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select biotoxins.

  13. SAM Chemical Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target chemical, radiochemical, pathogens, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery

  14. An update on pharmaceutical film coating for drug delivery.

    PubMed

    Felton, Linda A; Porter, Stuart C

    2013-04-01

    Pharmaceutical coating processes have generally been transformed from what was essentially an art form in the mid-twentieth century to a much more technology-driven process. This review article provides a basic overview of current film coating processes, including a discussion on polymer selection, coating formulation additives and processing equipment. Substrate considerations for pharmaceutical coating processes are also presented. While polymeric coating operations are commonplace in the pharmaceutical industry, film coating processes are still not fully understood, which presents serious challenges with current regulatory requirements. Novel analytical technologies and various modeling techniques that are being used to better understand film coating processes are discussed. This review article also examines the challenges of implementing process analytical technologies in coating operations, active pharmaceutical ingredients in polymer film coatings, the use of high-solids coating systems and continuous coating and other novel coating application methods.

  15. Northwest Trajectory Analysis Capability: A Platform for Enhancing Computational Biophysics Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Elena S.; Stephan, Eric G.; Corrigan, Abigail L.

    2008-07-30

    As computational resources continue to increase, the ability of computational simulations to effectively complement, and in some cases replace, experimentation in scientific exploration also increases. Today, large-scale simulations are recognized as an effective tool for scientific exploration in many disciplines including chemistry and biology. A natural side effect of this trend has been the need for an increasingly complex analytical environment. In this paper, we describe the Northwest Trajectory Analysis Capability (NTRAC), an analytical software suite developed to enhance the efficiency of computational biophysics analyses. Our strategy is to layer higher-level services and introduce improved tools within the user’s familiar environment without preventing researchers from using traditional tools and methods. Our desire is to share these experiences to serve as an example for effectively analyzing data-intensive, large-scale simulation data.

  16. Evaluating supplier quality performance using fuzzy analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Ahmad, Nazihah; Kasim, Maznah Mat; Rajoo, Shanmugam Sundram Kalimuthu

    2014-12-01

    Evaluating supplier quality performance is vital for ensuring continuous supply chain improvement and reducing operational costs and risks while meeting customer expectations. This paper illustrates an application of the Fuzzy Analytical Hierarchy Process to prioritize the evaluation criteria in the context of automotive manufacturing in Malaysia. Five main criteria were identified: quality, cost, delivery, customer service, and technology support. These criteria were arranged into a hierarchical structure and evaluated by an expert. The relative importance of each criterion was determined using linguistic variables represented as triangular fuzzy numbers. The Center of Gravity defuzzification method was used to convert the fuzzy evaluations into their corresponding crisp values. Such fuzzy evaluation can serve as a systematic tool to handle the uncertainty in assessing supplier performance that is usually associated with subjective human judgment.
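
    A minimal sketch of a fuzzy-AHP weighting step of this kind, using triangular fuzzy pairwise comparisons, fuzzy geometric-mean weights, and centre-of-gravity defuzzification, is given below. The comparison matrix, the three criteria shown, and the use of a geometric-mean (Buckley-style) aggregation are assumptions for illustration, not the study's actual judgments or procedure.

    ```python
    import numpy as np

    # Illustrative 3x3 fuzzy comparison matrix (criteria: quality, cost, delivery);
    # each entry is a triangular fuzzy number (l, m, u).
    one = (1.0, 1.0, 1.0)
    A = [
        [one,               (2.0, 3.0, 4.0), (4.0, 5.0, 6.0)],
        [(1/4, 1/3, 1/2),   one,             (1.0, 2.0, 3.0)],
        [(1/6, 1/5, 1/4),   (1/3, 1/2, 1.0), one            ],
    ]

    n = len(A)
    # Fuzzy geometric mean of each row, component-wise over (l, m, u).
    geo = np.array([[np.prod([A[i][j][k] for j in range(n)]) ** (1.0 / n)
                     for k in range(3)] for i in range(n)])
    # Fuzzy weights: divide each row's geometric mean by the column totals,
    # pairing l with the sum of u (and vice versa) as usual for fuzzy division.
    total = geo.sum(axis=0)
    fuzzy_w = np.column_stack([geo[:, 0] / total[2],
                               geo[:, 1] / total[1],
                               geo[:, 2] / total[0]])
    # Centre-of-gravity defuzzification (l + m + u) / 3, then normalization.
    crisp = fuzzy_w.mean(axis=1)
    crisp /= crisp.sum()
    print(dict(zip(["quality", "cost", "delivery"], crisp.round(3))))
    ```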

  17. On the Coplanar Integrable Case of the Twice-Averaged Hill Problem with Central Body Oblateness

    NASA Astrophysics Data System (ADS)

    Vashkov'yak, M. A.

    2018-01-01

    The twice-averaged Hill problem with the oblateness of the central planet is considered in the case where its equatorial plane coincides with the plane of its orbital motion relative to the perturbing body. A qualitative study of this so-called coplanar integrable case was begun by Y. Kozai in 1963 and continued by M.L. Lidov and M.V. Yarskaya in 1974. However, no rigorous analytical solution of the problem can be obtained due to the complexity of the integrals. In this paper we obtain some quantitative evolution characteristics and propose an approximate constructive-analytical solution of the evolution system in the form of explicit time dependences of satellite orbit elements. The methodical accuracy has been estimated for several orbits of artificial lunar satellites by comparison with the numerical solution of the evolution system.

  18. Analyzing data from open enrollment groups: current considerations and future directions.

    PubMed

    Morgan-Lopez, Antonio A; Fals-Stewart, William

    2008-07-01

    Difficulties in modeling turnover in treatment-group membership have been cited as one of the major impediments to ecological validity of substance abuse and alcoholism treatment research. In this review, our primary foci are on (a) the discussion of approaches that draw on state-of-the-science analytic methods for modeling open-enrollment group data and (b) highlighting emerging issues that are critical to this relatively new area of methodological research (e.g., quantifying membership change, modeling "holiday" effects, and modeling membership change among group members and leaders). Continuing refinement of new modeling tools to address these analytic complexities may ultimately lead to the development of more federally funded open-enrollment trials. These developments may also facilitate the building of a "community-friendly" treatment research portfolio for funding agencies that support substance abuse and alcoholism treatment research.

  19. Glycoprotein Disease Markers and Single Protein-omics*

    PubMed Central

    Chandler, Kevin; Goldman, Radoslav

    2013-01-01

    Glycoproteins are well represented among biomarkers for inflammatory and cancer diseases. Secreted and membrane-associated glycoproteins make excellent targets for noninvasive detection. In this review, we discuss clinically applicable markers of cancer diseases and methods for their analysis. High throughput discovery continues to supply marker candidates with unusual glycan structures, altered glycoprotein abundance, or distribution of site-specific glycoforms. Improved analytical methods are needed to unlock the potential of these discoveries in validated clinical assays. A new generation of targeted quantitative assays is expected to advance the use of glycoproteins in early detection of diseases, molecular disease classification, and monitoring of therapeutic interventions. PMID:23399550

  20. Digital Mapping Techniques '09-Workshop Proceedings, Morgantown, West Virginia, May 10-13, 2009

    USGS Publications Warehouse

    Soller, David R.

    2011-01-01

    As in the previous years' meetings, the objective was to foster informal discussion and exchange of technical information, principally in order to develop more efficient methods for digital mapping, cartography, GIS analysis, and information management. At this meeting, oral and poster presentations and special discussion sessions emphasized (1) methods for creating and publishing map products (here, "publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase format; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  1. The Thick Level-Set model for dynamic fragmentation

    DOE PAGES

    Stershic, Andrew J.; Dolbow, John E.; Moës, Nicolas

    2017-01-04

    The Thick Level-Set (TLS) model is implemented to simulate brittle media undergoing dynamic fragmentation. This non-local model is discretized by the finite element method with damage represented as a continuous field over the domain. A level-set function defines the extent and severity of damage, and a length scale is introduced to limit the damage gradient. Numerical studies in one dimension demonstrate that the proposed method reproduces the rate-dependent energy dissipation and fragment length observations from analytical, numerical, and experimental approaches. In conclusion, additional studies emphasize the importance of appropriate bulk constitutive models and sufficient spatial resolution of the length scale.
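
    The core idea of tying damage to a level-set field with a bounded gradient can be sketched in one dimension as below; the linear damage ramp and parameter values are illustrative choices, not the TLS damage profile or evolution law used in the paper.

    ```python
    import numpy as np

    # 1-D sketch: damage is a prescribed, bounded-gradient function of the level-set
    # field phi (signed distance to the damage front).
    l_c = 0.2                      # characteristic length limiting the damage gradient
    x = np.linspace(0.0, 1.0, 201)
    front = 0.5                    # current position of the damage front
    phi = front - x                # signed distance: phi > 0 inside the damaged zone

    damage = np.clip(phi / l_c, 0.0, 1.0)   # d = 0 undamaged, d = 1 fully damaged
    # The gradient of the damage field is bounded by 1/l_c by construction:
    print(np.abs(np.gradient(damage, x)).max(), 1.0 / l_c)
    ```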

  2. Method and apparatus for concentrating vapors for analysis

    DOEpatents

    Grate, Jay W [West Richland, WA; Baldwin, David L [Kennewick, WA; Anheier, Jr., Norman C.

    2012-06-05

    A pre-concentration device and a method are disclosed for concentrating gaseous vapors for analysis. Vapors sorbed and concentrated within the bed of the pre-concentration device are thermally desorbed, achieving at least partial separation of the vapor mixtures. The pre-concentration device is suitable, e.g., for pre-concentration and sample injection, and provides greater resolution of peaks for vapors within vapor mixtures, yielding detection levels that are 10-10,000 times better than direct sampling and analysis systems. Features are particularly useful for continuous unattended monitoring applications. The invention finds application in conjunction with, e.g., analytical instruments where low detection limits for gaseous vapors are desirable.

  3. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...

  4. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...

  5. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...

  6. Evaluation of the matrix exponential for use in ground-water-flow and solute-transport simulations; theoretical framework

    USGS Publications Warehouse

    Umari, A.M.; Gorelick, S.M.

    1986-01-01

    It is possible to obtain analytic solutions to the groundwater flow and solute transport equations if space variables are discretized but time is left continuous. From these solutions, hydraulic head and concentration fields for any future time can be obtained without ' marching ' through intermediate time steps. This analytical approach involves matrix exponentiation and is referred to as the Matrix Exponential Time Advancement (META) method. Two algorithms are presented for the META method, one for symmetric and the other for non-symmetric exponent matrices. A numerical accuracy indicator, referred to as the matrix condition number, was defined and used to determine the maximum number of significant figures that may be lost in the META method computations. The relative computational and storage requirements of the META method with respect to the time marching method increase with the number of nodes in the discretized problem. The potential greater accuracy of the META method and the associated greater reliability through use of the matrix condition number have to be weighed against this increased relative computational and storage requirements of this approach as the number of nodes becomes large. For a particular number of nodes, the META method may be computationally more efficient than the time-marching method, depending on the size of time steps used in the latter. A numerical example illustrates application of the META method to a sample ground-water-flow problem. (Author 's abstract)
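
    A minimal sketch of the META idea for a spatially discretized diffusion-type problem: once the system is written as dh/dt = A h, the state at any future time follows from a single matrix exponential, with no intermediate time steps. The grid, diffusivity, boundary treatment, and the use of a generic condition number as an accuracy proxy are assumptions for illustration, not the formulation or indicator defined in the report.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # 1-D diffusion-type (ground-water-flow-like) problem, discretized in space only:
    # dh/dt = A h, so h(t) = expm(A t) @ h(0) for any future time t.
    n, dx, D = 50, 1.0, 1.0
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) * D / dx**2          # Dirichlet boundaries

    h0 = np.zeros(n)
    h0[n // 2] = 1.0                                          # initial head perturbation

    for t in (1.0, 10.0, 100.0):
        h_t = expm(A * t) @ h0                                # jump directly to time t
        print(t, h_t.max())

    # A rough analogue of the accuracy concern discussed above: the conditioning of A.
    print("cond(A) = %.2e" % np.linalg.cond(A))
    ```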

  7. How Qualitative Methods Can be Used to Inform Model Development.

    PubMed

    Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna

    2017-06-01

    Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.

  8. A New Method of Obtaining High-Resolution Paleoclimate Records from Speleothem Fluid Inclusions

    NASA Astrophysics Data System (ADS)

    Logan, A. J.; Horton, T. W.

    2010-12-01

    We present a new method for stable hydrogen and oxygen isotope analysis of ancient drip water trapped within cave speleothems. Our method improves on existing fluid inclusion isotopic analytical techniques in that it decreases the sample size by a factor of ten or more, dramatically improving the spatial and temporal precision of fluid inclusion-based paleoclimatology. Published thermal extraction methods require large samples (c. 150 mg) and temperatures high enough (c. 500-900°C) to cause calcite decomposition, which is also associated with isotopic fractionation of the trapped fluids. Extraction by crushing faces similar challenges, where the failure to extract all the trapped fluid can result in isotopic fractionation, and samples in excess of 500 mg are required. Our new method combines the strengths of these published thermal and crushing methods using continuous-flow isotope ratio analytical techniques. Our method combines relatively low-temperature (~250°C) thermal decrepitation with cryogenic trapping across a switching valve sample loop. In brief, ~20 mg carbonate samples are dried (75°C for >1 hour) and heated (250°C for >1 hour) in a quartz sample chamber under a continuously flowing stream of ultra-high purity helium. Heating of the sample chamber is achieved by use of a tube furnace. Fluids released during the heating step are trapped in a coiled stainless steel cold trap (~ -98°C) serving as the sample loop in a 6-way switching valve. Trapped fluids are subsequently injected into a high-temperature conversion elemental analyzer by switching the valve and rapidly thawing the trap. This approach yielded accurate and precise measurements of injected liquid water IAEA reference materials (GISP; SMOW2; SLAP2) for both hydrogen and oxygen isotopic compositions. Blanking tests performed on the extraction line demonstrate extremely low line-blank peak heights (<50mv). Our tests also demonstrate that complete recovery of liquid water is possible and that a minimum quantity of ~100nL water was required. In contrast to liquid water analyses, carbonate inclusion waters gave highly variable results. As plenty of signal was produced from relatively small sample sizes (~20 mg), the observed isotopic variation most likely reflects fractionation during fluid extraction, or natural isotopic variability. Additional tests and modifications to the extraction procedure are in progress, using a recently collected New Zealand stalagmite from a West Coast cave (DOC collection permit WC-27462-GEO). U-Th age data will accompany a paleoclimate record from this stalagmite obtained using standard carbonate analytical techniques, and compared to the results from our new fluid inclusion analyses.

  9. Net analyte signal standard addition method (NASSAM) as a novel spectrofluorimetric and spectrophotometric technique for simultaneous determination, application to assay of melatonin and pyridoxine

    NASA Astrophysics Data System (ADS)

    Asadpour-Zeynali, Karim; Bastami, Mohammad

    2010-02-01

    In this work, a new modification of the standard addition method, called the "net analyte signal standard addition method (NASSAM)", is presented for simultaneous spectrofluorimetric and spectrophotometric analysis. The proposed method combines the advantages of the standard addition method with those of the net analyte signal concept. The method can be applied to the determination of an analyte in the presence of known interferents. Unlike the H-point standard addition method, the accuracy of the predictions does not depend on the shape of the analyte and interferent spectra. The method was successfully applied to the simultaneous spectrofluorimetric and spectrophotometric determination of pyridoxine (PY) and melatonin (MT) in synthetic mixtures and in a pharmaceutical formulation.
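
    For orientation, the sketch below shows the classical standard-addition estimate that NASSAM builds on: the response is regressed on the added concentration and the unknown concentration is read from the x-intercept. Computing the net analyte signal itself is not reproduced here, and all numbers are illustrative.

    ```python
    import numpy as np

    # Classical standard addition: spike the sample with known amounts of analyte,
    # regress the signal on the added concentration, and take the x-intercept magnitude.
    added = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # added standard, e.g. ug/mL
    signal = np.array([0.210, 0.305, 0.398, 0.492, 0.588])

    slope, intercept = np.polyfit(added, signal, 1)
    c_unknown = intercept / slope                       # x-intercept magnitude
    print(f"estimated analyte concentration = {c_unknown:.2f} ug/mL")
    ```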

  10. Amperometric Enzyme-Based Biosensors for Application in Food and Beverage Industry

    NASA Astrophysics Data System (ADS)

    Csöregi, Elisabeth; Gáspár, Szilveszter; Niculescu, Mihaela; Mattiasson, Bo; Schuhmann, Wolfgang

    Continuous, sensitive, selective, and reliable monitoring of a large variety of compounds in food and beverage samples is of increasing importance for assuring high quality and tracing any possible source of contamination. Most classical analytical methods in current use require expensive instrumentation, long analysis times, and well-trained staff. Amperometric enzyme-based biosensors, on the other hand, have emerged over the last decade from basic science to useful tools with promising application possibilities in the food and beverage industry. Amperometric biosensors are in general highly selective, sensitive, relatively cheap, and easy to integrate into continuous analysis systems. Successful application of such sensors for industrial purposes, however, requires a sensor design that satisfies the specific needs of monitoring the targeted analyte in the particular application. Since each application imposes different operational conditions and sensor characteristics, biosensors have to be tailored to the particular case. The characteristics of a biosensor depend on the biorecognition element (enzyme) used, the nature of the signal transducer (electrode material), and the communication between these two elements (the electron-transfer pathway).

  11. Ionic liquids: solvents and sorbents in sample preparation.

    PubMed

    Clark, Kevin D; Emaus, Miranda N; Varona, Marcelino; Bowers, Ashley N; Anderson, Jared L

    2018-01-01

    The applications of ionic liquids (ILs) and IL-derived sorbents are rapidly expanding. By careful selection of the cation and anion components, the physicochemical properties of ILs can be altered to meet the requirements of specific applications. Reports of IL solvents possessing high selectivity for specific analytes are numerous and continue to motivate the development of new IL-based sample preparation methods that are faster, more selective, and environmentally benign compared to conventional organic solvents. The advantages of ILs have also been exploited in solid/polymer formats in which ordinarily nonspecific sorbents are functionalized with IL moieties in order to impart selectivity for an analyte or analyte class. Furthermore, new ILs that incorporate a paramagnetic component into the IL structure, known as magnetic ionic liquids (MILs), have emerged as useful solvents for bioanalytical applications. In this rapidly changing field, this Review focuses on the applications of ILs and IL-based sorbents in sample preparation with a special emphasis on liquid phase extraction techniques using ILs and MILs, IL-based solid-phase extraction, ILs in mass spectrometry, and biological applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Teaching Analytical Method Transfer through Developing and Validating Then Transferring Dissolution Testing Methods for Pharmaceuticals

    ERIC Educational Resources Information Center

    Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette

    2017-01-01

    Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…

  13. Analytical analysis and implementation of a low-speed high-torque permanent magnet vernier in-wheel motor for electric vehicle

    NASA Astrophysics Data System (ADS)

    Li, Jiangui; Wang, Junhua; Zhigang, Zhao; Yan, Weili

    2012-04-01

    In this paper, an analytical analysis of the permanent magnet vernier (PMV) machine is presented. The key is to analytically solve the governing Laplacian/quasi-Poissonian field equations in the motor regions. The analytical method is verified against the time-stepping finite element method: the performance of the PMV machine is quantitatively compared with the analytical predictions, and the analytical results agree well with the finite element results. Finally, experimental results are given to further show the validity of the analysis.

  14. Compositional control of continuously graded anode functional layer

    NASA Astrophysics Data System (ADS)

    McCoppin, J.; Barney, I.; Mukhopadhyay, S.; Miller, R.; Reitz, T.; Young, D.

    2012-10-01

    In this work, solid oxide fuel cells (SOFCs) are fabricated with linear-compositionally graded anode functional layers (CGAFL) using a computer-controlled compound aerosol deposition (CCAD) system. Cells with different CGAFL thicknesses (30 µm and 50 µm) are prepared with a continuous compositionally graded interface deposited between the electrolyte and anode support current collecting regions. The compositional profile was characterized using energy dispersive X-ray spectroscopic mapping. An analytical model of the compound aerosol deposition was developed. The model predicted compositional profiles for both samples that closely matched the measured profiles, suggesting that aerosol-based deposition methods are capable of creating functional gradation on length scales suitable for solid oxide fuel cell structures. The electrochemical performance of the two cells is analyzed using electrochemical impedance spectroscopy (EIS).

  15. Analytical and experimental studies of an optimum multisegment phased liner noise suppression concept

    NASA Technical Reports Server (NTRS)

    Sawdy, D. T.; Beckemeyer, R. J.; Patterson, J. D.

    1976-01-01

    Results are presented from detailed analytical studies made to define methods for obtaining improved multisegment lining performance by taking advantage of relative placement of each lining segment. Properly phased liner segments reflect and spatially redistribute the incident acoustic energy and thus provide additional attenuation. A mathematical model was developed for rectangular ducts with uniform mean flow. Segmented acoustic fields were represented by duct eigenfunction expansions, and mode-matching was used to ensure continuity of the total field. Parametric studies were performed to identify attenuation mechanisms and define preliminary liner configurations. An optimization procedure was used to determine optimum liner impedance values for a given total lining length, Mach number, and incident modal distribution. Optimal segmented liners are presented and it is shown that, provided the sound source is well-defined and flow environment is known, conventional infinite duct optimum attenuation rates can be improved. To confirm these results, an experimental program was conducted in a laboratory test facility. The measured data are presented in the form of analytical-experimental correlations. Excellent agreement between theory and experiment verifies and substantiates the analytical prediction techniques. The results indicate that phased liners may be of immediate benefit in the development of improved aircraft exhaust duct noise suppressors.

  16. Technology-assisted psychoanalysis.

    PubMed

    Scharff, Jill Savege

    2013-06-01

    Teleanalysis-remote psychoanalysis by telephone, voice over internet protocol (VoIP), or videoteleconference (VTC)-has been thought of as a distortion of the frame that cannot support authentic analytic process. Yet it can augment continuity, permit optimum frequency of analytic sessions for in-depth analytic work, and enable outreach to analysands in areas far from specialized psychoanalytic centers. Theoretical arguments against teleanalysis are presented and countered and its advantages and disadvantages discussed. Vignettes of analytic process from teleanalytic sessions are presented, and indications, contraindications, and ethical concerns are addressed. The aim is to provide material from which to judge the authenticity of analytic process supported by technology.

  17. Asymptotic expansions of the kernel functions for line formation with continuous absorption

    NASA Technical Reports Server (NTRS)

    Hummer, D. G.

    1991-01-01

    Asymptotic expressions are obtained for the kernel functions M2(tau, a, beta) and K2(tau, a, beta) appearing in the theory of line formation with complete redistribution over a Voigt profile with damping parameter a, in the presence of a source of continuous opacity parameterized by beta. For a greater than 0, each coefficient in the asymptotic series is expressed as the product of analytic functions of a and eta. For Doppler broadening, only the leading term can be evaluated analytically.

  18. Analytical Methodology Used To Assess/Refine Observatory Thermal Vacuum Test Conditions For the Landsat 8 Data Continuity Mission

    NASA Technical Reports Server (NTRS)

    Fantano, Louis

    2015-01-01

    Thermal and Fluids Analysis Workshop, Silver Spring, MD, NCTS 21070-15. The Landsat 8 Data Continuity Mission, which is part of the United States Geologic Survey (USGS), launched February 11, 2013. A Landsat environmental test requirement mandated that test conditions bound worst-case flight thermal environments. This paper describes a rigorous analytical methodology applied to assess and refine proposed thermal vacuum test conditions and the issues encountered attempting to satisfy this requirement.

  19. A sample preparation method for recovering suppressed analyte ions in MALDI TOF MS.

    PubMed

    Lou, Xianwen; de Waal, Bas F M; Milroy, Lech-Gustav; van Dongen, Joost L J

    2015-05-01

    In matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS), analyte signals can be substantially suppressed by other compounds in the sample. In this technical note, we describe a modified thin-layer sample preparation method that significantly reduces the analyte suppression effect (ASE). In our method, analytes are deposited on top of the surface of matrix preloaded on the MALDI plate. To prevent embedding of analyte into the matrix crystals, the sample solutions were prepared without matrix, and care was taken not to re-dissolve the preloaded matrix. The results with model mixtures of peptides, synthetic polymers and lipids show that detection of analyte ions, which were completely suppressed using the conventional dried-droplet method, could be effectively recovered by using our method. Our findings suggest that the incorporation of analytes in the matrix crystals has an important contributory effect on ASE. By reducing ASE, our method should be useful for the direct MALDI MS analysis of multicomponent mixtures. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Discretization analysis of bifurcation based nonlinear amplifiers

    NASA Astrophysics Data System (ADS)

    Feldkord, Sven; Reit, Marco; Mathis, Wolfgang

    2017-09-01

    Recently, for modeling biological amplification processes, nonlinear amplifiers based on the supercritical Andronov-Hopf bifurcation have been widely analyzed analytically. For technical realizations, digital systems have become the most relevant systems in signal processing applications. The underlying continuous-time systems are transferred to the discrete-time domain using numerical integration methods. Within this contribution, effects on the qualitative behavior of the Andronov-Hopf bifurcation based systems concerning numerical integration methods are analyzed. It is shown exemplarily that explicit Runge-Kutta methods transform the truncated normalform equation of the Andronov-Hopf bifurcation into the normalform equation of the Neimark-Sacker bifurcation. Depending on the order of the integration method, higher order terms are added during this transformation. A rescaled normalform equation of the Neimark-Sacker bifurcation is introduced that allows a parametric design of a discrete-time system which corresponds to the rescaled Andronov-Hopf system. This system approximates the characteristics of the rescaled Hopf-type amplifier for a large range of parameters. The natural frequency and the peak amplitude are preserved for every set of parameters. The Neimark-Sacker bifurcation based systems avoid the large computational effort that would be caused by applying higher order integration methods to the continuous-time normalform equations.
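
    The qualitative effect described above can be illustrated in a few lines. The sketch below (a minimal illustration, not the authors' implementation; the step size h, frequency, and bifurcation parameter mu are assumed values) applies the explicit Euler method to the truncated complex normal form of the Andronov-Hopf bifurcation, dz/dt = (mu + i*omega)z - |z|^2 z. The resulting discrete-time map undergoes a Neimark-Sacker bifurcation: for mu > 0 the iterates settle onto an invariant circle whose radius approximates the continuous-time limit-cycle amplitude sqrt(mu).

```python
import numpy as np

# Truncated Andronov-Hopf normal form: dz/dt = (mu + 1j*omega)*z - |z|^2 * z.
# Explicit Euler discretization turns the flow into a map exhibiting a
# Neimark-Sacker bifurcation (illustrative, assumed parameter values).
mu, omega, h = 0.1, 2.0 * np.pi, 1.0e-3   # bifurcation parameter, angular frequency, step size

def euler_step(z):
    return z + h * ((mu + 1j * omega) * z - abs(z) ** 2 * z)

z = 0.01 + 0.0j                # small initial perturbation
for _ in range(200_000):       # iterate long enough to reach the invariant circle
    z = euler_step(z)

print(f"map amplitude:   {abs(z):.4f}")
print(f"sqrt(mu) target: {np.sqrt(mu):.4f}")   # continuous-time limit-cycle radius
```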

  1. Thin Polymer Films with Continuous Vertically Aligned 1 nm Pores Fabricated by Soft Confinement

    DOE PAGES

    Feng, Xunda; Nejati, Siamak; Cowan, Matthew G.; ...

    2015-12-03

    Membrane separations are critically important in areas ranging from health care and analytical chemistry to bioprocessing and water purification. An ideal nanoporous membrane would consist of a thin film with physically continuous and vertically aligned nanopores and would display a narrow distribution of pore sizes. However, the current state of the art departs considerably from this ideal and is beset by intrinsic trade-offs between permeability and selectivity. We demonstrate an effective and scalable method to fabricate polymer films with ideal membrane morphologies consisting of submicron thickness films with physically continuous and vertically aligned 1 nm pores. The approach is based on soft confinement to control the orientation of a cross-linkable mesophase in which the pores are produced by self-assembly. The scalability, exceptional ease of fabrication, and potential to create a new class of nanofiltration membranes stand out as compelling aspects.

  2. From empirical data to time-inhomogeneous continuous Markov processes.

    PubMed

    Lencastre, Pedro; Raischel, Frank; Rogers, Tim; Lind, Pedro G

    2016-03-01

    We present an approach for testing for the existence of continuous generators of discrete stochastic transition matrices. Typically, existing methods to ascertain the existence of continuous Markov processes are based on the assumption that only time-homogeneous generators exist. Here a systematic extension to time inhomogeneity is presented, based on new mathematical propositions incorporating necessary and sufficient conditions, which are then implemented computationally and applied to numerical data. A discussion concerning the bridging from rigorous mathematical results on the existence of generators to their computational implementation is presented. Our detection algorithm is shown to be effective in more than 60% of tested matrices, typically 80% to 90%, and for those an estimate of the (nonhomogeneous) generator matrix follows. We also solve the embedding problem analytically for the particular case of three-dimensional circulant matrices. Finally, a discussion of possible applications of our framework to problems in different fields is briefly addressed.
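
    For the time-homogeneous case that the authors generalize, the classical embedding check can be sketched compactly: take the principal matrix logarithm of the empirical transition matrix and test whether it satisfies the generator conditions (non-negative off-diagonal entries, rows summing to zero). The snippet below is a minimal illustration of that idea, not the authors' time-inhomogeneous algorithm, and the example matrix is invented.

```python
import numpy as np
from scipy.linalg import logm, expm

def homogeneous_generator(P, tol=1e-8):
    """Return a generator Q with expm(Q) ~= P if the principal matrix
    logarithm satisfies the generator conditions, else None."""
    Q = logm(P).real                                          # principal branch of the matrix log
    off_diag_ok = np.all(Q - np.diag(np.diag(Q)) >= -tol)     # off-diagonal entries >= 0
    rows_ok = np.allclose(Q.sum(axis=1), 0.0, atol=tol)       # every row sums to zero
    return Q if (off_diag_ok and rows_ok) else None

# Invented 3-state transition matrix for illustration.
P = np.array([[0.90, 0.07, 0.03],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])

Q = homogeneous_generator(P)
if Q is None:
    print("no valid time-homogeneous generator on the principal branch")
else:
    print("embeddable; max reconstruction error:", np.max(np.abs(expm(Q) - P)))
```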

  3. Big Data in Science and Healthcare: A Review of Recent Literature and Perspectives

    PubMed Central

    Miron-Shatz, T.; Lau, A. Y. S.; Paton, C.

    2014-01-01

    Objectives: As technology continues to evolve and rise in various industries, such as healthcare, science, education, and gaming, a sophisticated concept known as Big Data is surfacing. The concept of analytics aims to understand data. We set out to portray and discuss perspectives of the evolving use of Big Data in science and healthcare and to examine some of the opportunities and challenges. Methods: A literature review was conducted to highlight the implications associated with the use of Big Data in scientific research and healthcare innovations, both on a large and small scale. Results: Scientists and healthcare providers may learn from one another when it comes to understanding the value of Big Data and analytics. Small data, derived by patients and consumers, also requires analytics to become actionable. Connectivism provides a framework for the use of Big Data and analytics in the areas of science and healthcare. This theory assists individuals to recognize and synthesize how human connections are driving the increase in data. Despite the volume and velocity of Big Data, it is truly about technology connecting humans and assisting them to construct knowledge in new ways. Concluding Thoughts: The concept of Big Data and associated analytics are to be taken seriously when approaching the use of vast volumes of both structured and unstructured data in science and healthcare. Future exploration of issues surrounding data privacy, confidentiality, and education is needed. A greater focus on data from social media, the quantified self-movement, and the application of analytics to “small data” would also be useful. PMID:25123717

  4. High throughput liquid absorption preconcentrator sampling instrument

    DOEpatents

    Zaromb, Solomon; Bozen, Ralph M.

    1992-01-01

    A system for detecting trace concentrations of an analyte in air includes a preconcentrator for the analyte and an analyte detector. The preconcentrator includes an elongated tubular container comprising a wettable material. The wettable material is continuously wetted with an analyte-sorbing liquid which flows from one part of the container to a lower end. Sampled air flows through the container in contact with the wetted material with a swirling motion which results in efficient transfer of analyte vapors or aerosol particles to the sorbing liquid and preconcentration of traces of analyte in the liquid. The preconcentrated traces of analyte may be either detected within the container or removed therefrom for injection into a separate detection means or for subsequent analysis.

  5. High throughput liquid absorption preconcentrator sampling instrument

    DOEpatents

    Zaromb, S.; Bozen, R.M.

    1992-12-22

    A system for detecting trace concentrations of an analyte in air includes a preconcentrator for the analyte and an analyte detector. The preconcentrator includes an elongated tubular container comprising a wettable material. The wettable material is continuously wetted with an analyte-sorbing liquid which flows from one part of the container to a lower end. Sampled air flows through the container in contact with the wetted material with a swirling motion which results in efficient transfer of analyte vapors or aerosol particles to the sorbing liquid and preconcentration of traces of analyte in the liquid. The preconcentrated traces of analyte may be either detected within the container or removed therefrom for injection into a separate detection means or for subsequent analysis. 12 figs.

  6. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

    This paper proposes a classification method based on Bayesian analysis for time series data from an agent-based simulation of the international emissions trading market, and compares it with a classification method based on the discrete Fourier transform. The purpose is to demonstrate analytical methods for mapping time series data such as market prices. These analytical methods reveal the following: (1) the classification methods map the time series data to distances, which are easier to understand and draw inferences from than the raw time series; (2) the methods can analyze uncertain time series data obtained via agent-based simulation, including stationary and non-stationary processes, using these distances; and (3) the Bayesian analytical method can resolve a 1% difference in the agents' emission reduction targets.
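
    As a minimal illustration of the Fourier-based variant (not the authors' implementation; the price series here are synthetic), each time series can be mapped to a short vector of DFT magnitudes and then compared by distance in that feature space, which is the kind of "distance of mapping" referred to above.

```python
import numpy as np

def dft_features(x, n_coeff=16):
    """Map a time series to the magnitudes of its first n_coeff DFT coefficients."""
    x = (x - np.mean(x)) / (np.std(x) + 1e-12)   # normalize level and scale
    return np.abs(np.fft.rfft(x))[:n_coeff]

rng = np.random.default_rng(0)
t = np.arange(256)
trending = 10 + 0.02 * t + rng.normal(0, 0.5, t.size)                    # drifting market price
cyclic = 10 + np.sin(2 * np.pi * t / 32) + rng.normal(0, 0.5, t.size)    # oscillating market price
query = 10 + 0.018 * t + rng.normal(0, 0.5, t.size)                      # series to classify

d_trend = np.linalg.norm(dft_features(query) - dft_features(trending))
d_cycle = np.linalg.norm(dft_features(query) - dft_features(cyclic))
print("query classified as:", "trending" if d_trend < d_cycle else "cyclic")
```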

  7. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  8. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  9. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  10. 77 FR 41336 - Analytical Methods Used in Periodic Reporting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-13

    ... Methods Used in Periodic Reporting AGENCY: Postal Regulatory Commission. ACTION: Notice of filing. SUMMARY... proceeding to consider changes in analytical methods used in periodic reporting. This notice addresses... informal rulemaking proceeding to consider changes in the analytical methods approved for use in periodic...

  11. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  12. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  13. Recently published analytical methods for determining alcohol in body materials : alcohol countermeasures literature review

    DOT National Transportation Integrated Search

    1974-10-01

    The author has brought the review of published analytical methods for determining alcohol in body materials up-to- date. The review deals with analytical methods for alcohol in blood and other body fluids and tissues; breath alcohol methods; factors ...

  14. The in-line measurement of plant cell biomass using radio frequency impedance spectroscopy as a component of process analytical technology.

    PubMed

    Holland, Tanja; Blessing, Daniel; Hellwig, Stephan; Sack, Markus

    2013-10-01

    Radio frequency impedance spectroscopy (RFIS) is a robust method for the determination of cell biomass during fermentation. RFIS allows non-invasive in-line monitoring of the passive electrical properties of cells in suspension and can distinguish between living and dead cells based on their distinct behavior in an applied radio frequency field. We used continuous in situ RFIS to monitor batch-cultivated plant suspension cell cultures in stirred-tank bioreactors and compared the in-line data to conventional off-line measurements. RFIS-based analysis was more rapid and more accurate than conventional biomass determination, and was sensitive to changes in cell viability. The higher resolution of the in-line measurement revealed subtle changes in cell growth which were not accessible using conventional methods. Thus, RFIS is well suited for correlating such changes with intracellular states and product accumulation, providing unique opportunities for employing systems biotechnology and process analytical technology approaches to increase product yield and quality. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries

    DOE PAGES

    Lu, Zhiming

    2018-01-30

    Sensitivity analysis is an important component of many modeling activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g., hydraulic head) to parameters representing medium properties such as hydraulic conductivity, or to prescribed values such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach has been demonstrated through two examples, and the results compare favorably with those from analytical solutions or numerical finite difference methods with perturbed model domains, while the numerical shortcomings of the finite difference method are avoided.
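
    As a minimal illustration of the idea (not the authors' derivation), consider steady 1-D confined flow between fixed-head boundaries at x = 0 and x = L, for which h(x) = h0 + (hL - h0) x / L. The sensitivity of head to the location L of the right boundary, a shape parameter of the model domain, follows analytically as dh/dL = -(hL - h0) x / L^2; the sketch below compares this continuous-sensitivity result with a finite difference obtained by perturbing the boundary location.

```python
import numpy as np

h0, hL, L = 10.0, 5.0, 100.0        # boundary heads [m] and domain length [m] (assumed values)
x = np.linspace(0.0, L, 6)          # observation points

def head(x, L):
    """Steady 1-D confined flow between fixed heads at x = 0 and x = L."""
    return h0 + (hL - h0) * x / L

# Continuous-sensitivity (analytical) result: dh/dL = -(hL - h0) * x / L**2
sens_cse = -(hL - h0) * x / L**2

# Finite-difference sensitivity from perturbed model domains
dL = 1.0e-3
sens_fd = (head(x, L + dL) - head(x, L - dL)) / (2.0 * dL)

print("max |CSE - FD| =", np.max(np.abs(sens_cse - sens_fd)))
```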

  16. Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Zhiming

    Sensitivity analysis is an important component of many modeling activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g., hydraulic head) to parameters representing medium properties such as hydraulic conductivity, or to prescribed values such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach has been demonstrated through two examples, and the results compare favorably with those from analytical solutions or numerical finite difference methods with perturbed model domains, while the numerical shortcomings of the finite difference method are avoided.

  17. Load balancing prediction method of cloud storage based on analytic hierarchy process and hybrid hierarchical genetic algorithm.

    PubMed

    Zhou, Xiuze; Lin, Fan; Yang, Lvqing; Nie, Jing; Tan, Qian; Zeng, Wenhua; Zhang, Nian

    2016-01-01

    With the continuous expansion of the cloud computing platform scale and the rapid growth of users and applications, how to efficiently use system resources to improve the overall performance of cloud computing has become a crucial issue. To address this issue, this paper proposes a method that uses an analytic hierarchy process group decision (AHPGD) to evaluate the load state of server nodes. Training was carried out using a hybrid hierarchical genetic algorithm (HHGA) to optimize a radial basis function neural network (RBFNN). The AHPGD produces an aggregate load indicator for the virtual machines in the cloud, which serves as the input to the predictive RBFNN. This paper also proposes a new dynamic load balancing scheduling algorithm combined with a weighted round-robin algorithm, which uses the periodical load values of nodes predicted by the AHPGD and the HHGA-optimized RBFNN, calculates the corresponding node weights, and updates them continually. In this way, it keeps the advantages and avoids the shortcomings of the static weighted round-robin algorithm.
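
    The scheduling idea, refreshing node weights periodically from predicted load and then dispatching with a weighted round-robin, can be sketched as follows. This is a simplified illustration with invented values, not the paper's AHPGD/RBFNN pipeline; the load predictor is a placeholder standing in for the trained network.

```python
import itertools

def predict_load(node):
    """Placeholder for the RBFNN load prediction (invented values, range 0..1)."""
    return {"node-a": 0.30, "node-b": 0.60, "node-c": 0.85}[node]

def refresh_weights(nodes, scale=10):
    """Lower predicted load -> higher integer weight for the round-robin."""
    return {n: max(1, round(scale * (1.0 - predict_load(n)))) for n in nodes}

def weighted_round_robin(weights):
    """Yield nodes in proportion to their current weights."""
    schedule = [n for n, w in weights.items() for _ in range(w)]
    return itertools.cycle(schedule)

nodes = ["node-a", "node-b", "node-c"]
weights = refresh_weights(nodes)              # re-run periodically as predictions change
dispatcher = weighted_round_robin(weights)

print(weights)
print([next(dispatcher) for _ in range(12)])  # requests assigned in proportion to weights
```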

  18. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...

  19. 40 CFR 141.25 - Analytical methods for radioactivity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods for radioactivity... § 141.25 Analytical methods for radioactivity. (a) Analysis for the following contaminants shall be conducted to determine compliance with § 141.66 (radioactivity) in accordance with the methods in the...

  20. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...

  1. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...

  2. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...

  3. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...

  4. Generation of gas-phase ions from charged clusters: an important ionization step causing suppression of matrix and analyte ions in matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Lou, Xianwen; van Dongen, Joost L J; Milroy, Lech-Gustav; Meijer, E W

    2016-12-30

    Ionization in matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is a very complicated process. It has been reported that quaternary ammonium salts show extremely strong matrix and analyte suppression effects which cannot satisfactorily be explained by charge transfer reactions. Further investigation of the reasons causing these effects can be useful to improve our understanding of the MALDI process. The dried-droplet and modified thin-layer methods were used as sample preparation methods. In the dried-droplet method, analytes were co-crystallized with matrix, whereas in the modified thin-layer method analytes were deposited on the surface of matrix crystals. Model compounds, tetrabutylammonium iodide ([N(Bu)4]I), cesium iodide (CsI), trihexylamine (THA) and polyethylene glycol 600 (PEG 600), were selected as the test analytes given their ability to generate exclusively pre-formed ions, protonated ions and metal ion adducts respectively in MALDI. The strong matrix suppression effect (MSE) observed using the dried-droplet method might disappear using the modified thin-layer method, which suggests that the incorporation of analytes in matrix crystals contributes to the MSE. By depositing analytes on the matrix surface instead of incorporating in the matrix crystals, the competition for evaporation/ionization from charged matrix/analyte clusters could be weakened resulting in reduced MSE. Further supporting evidence for this inference was found by studying the analyte suppression effect using the same two sample deposition methods. By comparing differences between the mass spectra obtained via the two sample preparation methods, we present evidence suggesting that the generation of gas-phase ions from charged matrix/analyte clusters may induce significant suppression of matrix and analyte ions. The results suggest that the generation of gas-phase ions from charged matrix/analyte clusters is an important ionization step in MALDI-MS. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Equipment and Analytical Companies Meeting Continuous Challenges May 20-21 2014 Continuous Manufacturing Symposium.

    PubMed

    Page, Trevor; Dubina, Henry; Fillipi, Gabriele; Guidat, Roland; Patnaik, Saroj; Poechlauer, Peter; Shering, Phil; Guinn, Martin; Mcdonnell, Peter; Johnston, Craig

    2015-03-01

    This white paper focuses on equipment, and analytical manufacturers' perspectives, regarding the challenges of continuous pharmaceutical manufacturing across five prompt questions. In addition to valued input from several vendors, commentary was provided from experienced pharmaceutical representatives, who have installed various continuous platforms. Additionally, a small medium enterprise (SME) perspective was obtained through interviews. A range of technical challenges is outlined, including: the presence of particles, equipment scalability, fouling (and cleaning), technology derisking, specific analytical challenges, and the general requirement of improved technical training. Equipment and analytical companies can make a significant contribution to help the introduction of continuous technology. A key point is that many of these challenges exist in batch processing and are not specific to continuous processing. Backward compatibility of software is not a continuous issue per se. In many cases, there is available learning from other industries. Business models and opportunities through outsourced development partners are also highlighted. Agile smaller companies and academic groups have a key role to play in developing skills, working collaboratively in partnerships, and focusing on solving relevant industry challenges. The precompetitive space differs for vendor companies compared with large pharmaceuticals. Currently, there is no strong consensus around a dominant continuous design, partly because of business dynamics and commercial interests. A more structured common approach to process design and hardware and software standardization would be beneficial, with initial practical steps in modeling. Conclusions include a digestible systems approach, accessible and published business cases, and increased user, academic, and supplier collaboration. This mirrors US FDA direction. The concept of silos in pharmaceutical companies is a common theme throughout the white papers. In the equipment domain, this is equally prevalent among a broad range of companies, mainly focusing on discrete areas. As an example, the flow chemistry and secondary drug product communities are almost entirely disconnected. Control and Process Analytical Technologies (PAT) companies are active in both domains. The equipment actors are a very diverse group with a few major Original Equipment Manufacturers (OEM) players and a variety of SME, project providers, integrators, upstream downstream providers, and specialist PAT. In some cases, partnerships or alliances are formed to increase critical mass. This white paper has focused on small molecules; equipment associated with biopharmaceuticals is covered in a separate white paper. More specifics on equipment detail are provided in final dosage form and drug substance white papers. The equipment and analytical development from laboratory to pilot to production is important, with a variety of sensors and complexity reducing with scale. The importance of robust processing rather than overcomplex control strategy mitigation is important. A search of nonacademic literature highlights, with a few notable exceptions, a relative paucity of material. Much focuses on the economics and benefits of continuous, rather than specifics of equipment issues. The disruptive nature of continuous manufacturing represents either an opportunity or a threat for many companies, so the incentive to change equipment varies. 
Also, for many companies, the pharmaceutical sector is not actually the dominant sector in terms of sales. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  6. Equipment and analytical companies meeting continuous challenges. May 20-21, 2014 Continuous Manufacturing Symposium.

    PubMed

    Page, Trevor; Dubina, Henry; Fillipi, Gabriele; Guidat, Roland; Patnaik, Saroj; Poechlauer, Peter; Shering, Phil; Guinn, Martin; Mcdonnell, Peter; Johnston, Craig

    2015-03-01

    This white paper focuses on equipment, and analytical manufacturers' perspectives, regarding the challenges of continuous pharmaceutical manufacturing across five prompt questions. In addition to valued input from several vendors, commentary was provided from experienced pharmaceutical representatives, who have installed various continuous platforms. Additionally, a small medium enterprise (SME) perspective was obtained through interviews. A range of technical challenges is outlined, including: the presence of particles, equipment scalability, fouling (and cleaning), technology derisking, specific analytical challenges, and the general requirement of improved technical training. Equipment and analytical companies can make a significant contribution to help the introduction of continuous technology. A key point is that many of these challenges exist in batch processing and are not specific to continuous processing. Backward compatibility of software is not a continuous issue per se. In many cases, there is available learning from other industries. Business models and opportunities through outsourced development partners are also highlighted. Agile smaller companies and academic groups have a key role to play in developing skills, working collaboratively in partnerships, and focusing on solving relevant industry challenges. The precompetitive space differs for vendor companies compared with large pharmaceuticals. Currently, there is no strong consensus around a dominant continuous design, partly because of business dynamics and commercial interests. A more structured common approach to process design and hardware and software standardization would be beneficial, with initial practical steps in modeling. Conclusions include a digestible systems approach, accessible and published business cases, and increased user, academic, and supplier collaboration. This mirrors US FDA direction. The concept of silos in pharmaceutical companies is a common theme throughout the white papers. In the equipment domain, this is equally prevalent among a broad range of companies, mainly focusing on discrete areas. As an example, the flow chemistry and secondary drug product communities are almost entirely disconnected. Control and Process Analytical Technologies (PAT) companies are active in both domains. The equipment actors are a very diverse group with a few major Original Equipment Manufacturers (OEM) players and a variety of SME, project providers, integrators, upstream downstream providers, and specialist PAT. In some cases, partnerships or alliances are formed to increase critical mass. This white paper has focused on small molecules; equipment associated with biopharmaceuticals is covered in a separate white paper. More specifics on equipment detail are provided in final dosage form and drug substance white papers. The equipment and analytical development from laboratory to pilot to production is important, with a variety of sensors and complexity reducing with scale. The importance of robust processing rather than overcomplex control strategy mitigation is important. A search of nonacademic literature highlights, with a few notable exceptions, a relative paucity of material. Much focuses on the economics and benefits of continuous, rather than specifics of equipment issues. The disruptive nature of continuous manufacturing represents either an opportunity or a threat for many companies, so the incentive to change equipment varies. 
Also, for many companies, the pharmaceutical sector is not actually the dominant sector in terms of sales. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  7. Finite-analytic numerical solution of heat transfer in two-dimensional cavity flow

    NASA Technical Reports Server (NTRS)

    Chen, C.-J.; Naseri-Neshat, H.; Ho, K.-S.

    1981-01-01

    Heat transfer in cavity flow is numerically analyzed by a new numerical method called the finite-analytic method. The basic idea of the finite-analytic method is the incorporation of local analytic solutions in the numerical solutions of linear or nonlinear partial differential equations. In the present investigation, the local analytic solutions for temperature, stream function, and vorticity distributions are derived. When the local analytic solution is evaluated at a given nodal point, it gives an algebraic relationship between a nodal value in a subregion and its neighboring nodal points. A system of algebraic equations is solved to provide the numerical solution of the problem. The finite-analytic method is used to solve heat transfer in the cavity flow at high Reynolds number (1000) for Prandtl numbers of 0.1, 1, and 10.

  8. A Deep Learning Approach to on-Node Sensor Data Analytics for Mobile or Wearable Devices.

    PubMed

    Ravi, Daniele; Wong, Charence; Lo, Benny; Yang, Guang-Zhong

    2017-01-01

    The increasing popularity of wearable devices in recent years means that a diverse range of physiological and functional data can now be captured continuously for applications in sports, wellbeing, and healthcare. This wealth of information requires efficient methods of classification and analysis where deep learning is a promising technique for large-scale data analytics. While deep learning has been successful in implementations that utilize high-performance computing platforms, its use on low-power wearable devices is limited by resource constraints. In this paper, we propose a deep learning methodology, which combines features learned from inertial sensor data together with complementary information from a set of shallow features to enable accurate and real-time activity classification. The design of this combined method aims to overcome some of the limitations present in a typical deep learning framework where on-node computation is required. To optimize the proposed method for real-time on-node computation, spectral domain preprocessing is used before the data are passed onto the deep learning framework. The classification accuracy of our proposed deep learning approach is evaluated against state-of-the-art methods using both laboratory and real world activity datasets. Our results show the validity of the approach on different human activity datasets, outperforming other methods, including the two methods used within our combined pipeline. We also demonstrate that the computation times for the proposed method are consistent with the constraints of real-time on-node processing on smartphones and a wearable sensor platform.
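
    The spectral-domain preprocessing step mentioned above can be sketched briefly: a window of raw accelerometer samples is reduced to a normalized magnitude spectrum before being passed to the (omitted) deep network. This is a generic illustration rather than the authors' pipeline; the window length, sampling rate, and synthetic signal are assumed.

```python
import numpy as np

FS = 50          # sampling rate in Hz (assumed)
WINDOW = 128     # samples per analysis window (assumed)

def spectral_features(window):
    """Magnitude spectrum of one accelerometer window, normalized for scale."""
    window = window - np.mean(window)                                # remove gravity/DC offset
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

# Synthetic 2 Hz "walking" signal plus noise, standing in for a real sensor stream.
t = np.arange(WINDOW) / FS
accel = np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.default_rng(0).normal(size=WINDOW)

features = spectral_features(accel)          # feature vector handed to the classifier
print(features.shape, "dominant frequency bin:", int(np.argmax(features)))
```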

  9. Selected Analytical Methods for Environmental Remediation and Recovery (SAM) - Home

    EPA Pesticide Factsheets

    The SAM Home page provides access to all information provided in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM), and includes a query function allowing users to search methods by analyte, sample type and instrumentation.

  10. 40 CFR 141.74 - Analytical and monitoring requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical and monitoring requirements. 141.74 Section 141.74 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... listed below. Information regarding obtaining these documents can be obtained from the Safe Drinking...

  11. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  12. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  13. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  14. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  15. 75 FR 49930 - Stakeholder Meeting Regarding Re-Evaluation of Currently Approved Total Coliform Analytical Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-16

    ... Currently Approved Total Coliform Analytical Methods AGENCY: Environmental Protection Agency (EPA). ACTION... of currently approved Total Coliform Rule (TCR) analytical methods. At these meetings, stakeholders will be given an opportunity to discuss potential elements of a method re-evaluation study, such as...

  16. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  17. Analytical solution for the transient response of a fluid/saturated porous medium halfspace system subjected to an impulsive line source

    NASA Astrophysics Data System (ADS)

    Shan, Zhendong; Ling, Daosheng; Jing, Liping; Li, Yongqiang

    2018-05-01

    In this paper, transient wave propagation is investigated within a fluid/saturated porous medium halfspace system with a planar interface that is subjected to a cylindrical P-wave line source. Assuming the permeability coefficient is sufficiently large, analytical solutions for the transient response of the fluid/saturated porous medium halfspace system are developed. Moreover, the analytical solutions are presented in simple closed forms wherein each term represents a transient physical wave, especially the expressions for head waves. The methodology utilised to determine where the head wave can emerge within the system is also given. The wave fields within the fluid and porous medium are first defined considering the behaviour of two compressional waves and one tangential wave in the saturated porous medium and one compressional wave in the fluid. Substituting these wave fields into the interface continuity conditions, the analytical solutions in the Laplace domain are then derived. To transform the solutions into the time domain, a suitable distortion of the contour is provided to change the integration path of the solution, after which the analytical solutions in the Laplace domain are transformed into the time domain by employing Cagniard's method. Numerical examples are provided to illustrate some interesting features of the fluid/saturated porous medium halfspace system. In particular, the interface wave and head waves that propagate along the interface between the fluid and saturated porous medium can be observed.

  18. An analytical and experimental investigation of sandwich composites subjected to low-velocity impact

    NASA Astrophysics Data System (ADS)

    Anderson, Todd Alan

    1999-12-01

    This study involves an experimental and analytical investigation of low-velocity impact phenomenon in sandwich composite structures. The analytical solution of a three-dimensional finite-geometry multi-layer specially orthotropic panel subjected to static and transient transverse loading cases is presented. The governing equations of the static and dynamic formulations are derived from Reissner's functional and solved by enforcing the continuity of traction and displacement components between adjacent layers. For the dynamic loading case, the governing equations are solved by applying Fourier or Laplace transformation in time. Additionally, the static solution is extended to solve the contact problem between the sandwich laminate and a rigid sphere. An iterative method is employed to determine the sphere's unknown contact area and pressure distribution. A failure criterion is then applied to the sandwich laminate's stress and strain field to predict impact damage. The analytical accuracy of the present study is verified through comparisons with finite element models, other analyses, and through experimentation. Low-velocity impact tests were conducted to characterize the type and extent of the damage observed in a variety of sandwich configurations with graphite/epoxy face sheets and foam or honeycomb cores. Correlation of the residual indentation and cross-sectional views of the impacted specimens provides a criterion for the extent of damage. Quasi-static indentation tests are also performed and show excellent agreement when compared with the analytical predictions. Finally, piezoelectric polyvinylidene fluoride (PVF2) film sensors are found to be effective in detecting low-velocity impact.

  19. Design and performance of a new continuous-flow sample-introduction system for flame infrared-emission spectrometry: Applications in process analysis, flow injection analysis, and ion-exchange high-performance liquid chromatography.

    PubMed

    Lam, C K; Zhang, Y; Busch, M A; Busch, K W

    1993-06-01

    A new sample introduction system for the analysis of continuously flowing liquid streams by flame infrared-emission (FIRE) spectrometry has been developed. The system uses a specially designed purge cell to strip dissolved CO2 from solution into a hydrogen gas stream that serves as the fuel for a hydrogen/air flame. Vibrationally excited CO2 molecules present in the flame are monitored with a simple infrared filter (4.4 μm) photometer. The new system can be used to introduce analytes as a continuous liquid stream (process analysis mode) or on a discrete basis by sample injection (flow injection analysis mode). The key to the success of the method is the new purge-cell design. The small internal volume of the cell minimizes problems associated with purge-cell clean-out and produces sharp, reproducible signals. Spent analytical solution is continuously drained from the cell, making cell disconnection and cleaning between samples unnecessary. Under the conditions employed in this study, samples could be analyzed at a maximum rate of approximately 60/h. The new sample introduction system was successfully tested in both a process analysis and a flow injection analysis mode for the determination of total inorganic carbon in Waco tap water. For the first time, flame infrared-emission spectrometry was successfully extended to non-volatile organic compounds by using chemical pretreatment with peroxydisulfate in the presence of silver ion to convert the analytes into dissolved carbon dioxide, prior to purging and detection by the FIRE radiometer. A test of the peroxydisulfate/Ag+ reaction using six organic acids and five sugars indicated that all 11 compounds were oxidized to nearly the same extent. Finally, the new sample introduction system was used in conjunction with a simple filter FIRE radiometer as a detection system in ion-exchange high-performance liquid chromatography. Ion-exchange chromatograms are shown for two aqueous mixtures, one containing six organic acids and the second containing six mono-, di-, and trisaccharides.

  20. Wearable physiological systems and technologies for metabolic monitoring.

    PubMed

    Gao, Wei; Brooks, George A; Klonoff, David C

    2018-03-01

    Wearable sensors allow continuous monitoring of metabolites for diabetes, sports medicine, exercise science, and physiology research. These sensors can continuously detect target analytes in skin interstitial fluid (ISF), tears, saliva, and sweat. In this review, we will summarize developments on wearable devices and their potential applications in research, clinical practice, and recreational and sporting activities. Sampling skin ISF can require insertion of a needle into the skin, whereas sweat, tears, and saliva can be sampled by devices worn outside the body. The most widely sampled metabolite from a wearable device is glucose in skin ISF for monitoring diabetes patients. Continuous ISF glucose monitoring allows estimation of the glucose concentration in blood without the pain, inconvenience, and blood waste of fingerstick capillary blood glucose testing. This tool is currently used by diabetes patients to provide information for dosing insulin and determining a diet and exercise plan. Similar technologies for measuring concentrations of other analytes in skin ISF could be used to monitor athletes, emergency responders, warfighters, and others in states of extreme physiological stress. Sweat is a potentially useful substrate for sampling analytes for metabolic monitoring during exercise. Lactate, sodium, potassium, and hydrogen ions can be measured in sweat. Tools for converting the concentrations of these analytes sampled from sweat, tears, and saliva into blood concentrations are being developed. As understanding of the relationships between the concentrations of analytes in blood and in easily sampled body fluids increases, the benefits of new wearable devices for metabolic monitoring will also increase.

  1. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation.

    PubMed

    Bergeron, Dominic; Tremblay, A-M S

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ^{2} with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.
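
    The ill-conditioning that motivates the maximum-entropy machinery can be seen directly from the analytic-continuation kernel. The sketch below (an illustration only, not part of the authors' software; the inverse temperature and grids are assumed) discretizes the fermionic imaginary-time kernel K(tau, omega) = exp(-tau*omega) / (1 + exp(-beta*omega)) and prints its condition number, which is so large that naive inversion of G(tau) = integral of K(tau, omega) A(omega) d omega is hopeless without regularization such as maximum entropy.

```python
import numpy as np

beta = 10.0                              # inverse temperature (assumed)
tau = np.linspace(0.0, beta, 64)         # imaginary-time grid
omega = np.linspace(-10.0, 10.0, 200)    # real-frequency grid

# Fermionic kernel K(tau, omega) = exp(-tau*omega) / (1 + exp(-beta*omega))
K = np.exp(-np.outer(tau, omega)) / (1.0 + np.exp(-beta * omega))

# The singular values decay nearly exponentially, so the condition number explodes,
# which is why recovering A(omega) from noisy G(tau) requires regularization.
s = np.linalg.svd(K, compute_uv=False)
print(f"largest / smallest singular value: {s[0]:.3e} / {s[-1]:.3e}")
print(f"condition number ~ {s[0] / s[-1]:.3e}")
```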

  2. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation

    NASA Astrophysics Data System (ADS)

    Bergeron, Dominic; Tremblay, A.-M. S.

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ2 with respect to α , and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.

  3. Real-time determination of critical quality attributes using near-infrared spectroscopy: a contribution for Process Analytical Technology (PAT).

    PubMed

    Rosas, Juan G; Blanco, Marcel; González, Josep M; Alcalà, Manel

    2012-08-15

    Process Analytical Technology (PAT) is playing a central role in current regulations on pharmaceutical production processes. Proper understanding of all operations and variables connecting the raw materials to end products is one of the keys to ensuring quality of the products and continuous improvement in their production. Near infrared spectroscopy (NIRS) has been successfully used to develop faster, non-invasive quantitative methods for predicting critical quality attributes (CQA) of pharmaceutical granulates in real time (API content, pH, moisture, flowability, angle of repose and particle size). NIR spectra have been acquired from the bin blender after the granulation process in a non-classified area without the need for sample withdrawal. The methodology used for data acquisition, calibration modelling and method application in this context is relatively inexpensive and can be easily implemented by most pharmaceutical laboratories. For this purpose, the Partial Least-Squares (PLS) algorithm was used to calculate multivariate calibration models that provided acceptable Root Mean Square Error of Prediction (RMSEP) values (RMSEP(API)=1.0 mg/g; RMSEP(pH)=0.1; RMSEP(Moisture)=0.1%; RMSEP(Flowability)=0.6 g/s; RMSEP(Angle of repose)=1.7° and RMSEP(Particle size)=2.5%), allowing the method to be applied to routine analyses of production batches. The proposed method affords quality assessment of end products and the determination of important parameters with a view to understanding production processes used by the pharmaceutical industry. As shown here, the NIRS technique is a highly suitable tool for Process Analytical Technologies. Copyright © 2012 Elsevier B.V. All rights reserved.
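
    As a sketch of how such RMSEP figures are obtained (a generic illustration with synthetic spectra and invented dimensions, not the authors' calibration data), a PLS model is fitted to calibration spectra and RMSEP is computed on held-out samples as the root mean square of the prediction errors.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic "NIR spectra": 120 samples x 200 wavelengths, with the analyte content
# (e.g., API in mg/g) expressed through a Gaussian absorption band plus noise.
n_samples, n_wavelengths = 120, 200
y = rng.uniform(80.0, 120.0, n_samples)                       # reference API content
band = np.exp(-0.5 * ((np.arange(n_wavelengths) - 60) / 8.0) ** 2)
X = np.outer(y, band) + rng.normal(0.0, 1.0, (n_samples, n_wavelengths))

X_cal, X_test, y_cal, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=3).fit(X_cal, y_cal)         # multivariate calibration model
y_pred = pls.predict(X_test).ravel()

rmsep = np.sqrt(np.mean((y_pred - y_test) ** 2))              # root mean square error of prediction
print(f"RMSEP = {rmsep:.2f} mg/g")
```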

  4. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    PubMed

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on a β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
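
    One of the comparator approaches, the β-content tolerance interval, lends itself to a compact sketch. The snippet below computes a two-sided β-content tolerance interval using Howe's approximation for the k-factor and accepts the method if the interval falls inside prespecified total-error acceptance limits. It is a schematic of that general idea under invented data and assumed content, confidence, and acceptance limits, not the generalized pivotal quantity procedure proposed in the paper.

```python
import numpy as np
from scipy import stats

def beta_content_tolerance_interval(x, content=0.90, confidence=0.90):
    """Two-sided beta-content tolerance interval via Howe's k-factor approximation."""
    n = len(x)
    mean, sd = np.mean(x), np.std(x, ddof=1)
    z = stats.norm.ppf((1.0 + content) / 2.0)
    chi2 = stats.chi2.ppf(1.0 - confidence, df=n - 1)          # lower chi-square quantile
    k = z * np.sqrt((n - 1) * (1.0 + 1.0 / n) / chi2)
    return mean - k * sd, mean + k * sd

# Invented validation data: percent recovery of a sample whose true value is 100%.
rng = np.random.default_rng(7)
recovery = rng.normal(loc=100.5, scale=1.2, size=30)

low, high = beta_content_tolerance_interval(recovery)
limits = (95.0, 105.0)                                         # assumed acceptance limits (percent)
verdict = "accept" if (limits[0] <= low and high <= limits[1]) else "reject"
print(f"tolerance interval: [{low:.2f}, {high:.2f}] -> {verdict}")
```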

  5. Continuous Optimization on Constraint Manifolds

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1988-01-01

    This paper demonstrates continuous optimization on the differentiable manifold formed by continuous constraint functions. The first order tensor geodesic differential equation is solved on the manifold in both numerical and closed analytic form for simple nonlinear programs. Advantages and disadvantages with respect to conventional optimization techniques are discussed.

  6. From analytic inversion to contemporary IMRT optimization: radiation therapy planning revisited from a mathematical perspective.

    PubMed

    Censor, Yair; Unkelbach, Jan

    2012-04-01

    In this paper we look at the development of radiation therapy treatment planning from a mathematical point of view. Historically, planning for Intensity-Modulated Radiation Therapy (IMRT) has been considered as an inverse problem. We discuss first the two fundamental approaches that have been investigated to solve this inverse problem: Continuous analytic inversion techniques on one hand, and fully-discretized algebraic methods on the other hand. In the second part of the paper, we review another fundamental question which has been subject to debate from the beginning of IMRT until the present day: The rotation therapy approach versus fixed angle IMRT. This builds a bridge from historic work on IMRT planning to contemporary research in the context of Intensity-Modulated Arc Therapy (IMAT). Copyright © 2011 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. An overview on current fluid-inclusion research and applications

    USGS Publications Warehouse

    Chi, G.; Chou, I.-Ming; Lu, H.-Z.

    2003-01-01

    This paper provides an overview of some of the more important developments in fluid-inclusion research and applications in recent years, including fluid-inclusion petrography, PVTX studies, and analytical techniques. In fluid-inclusion petrography, the introduction of the concept of 'fluid-inclusion assemblage' has been a major advance. In PVTX studies, the use of synthetic fluid inclusions and hydrothermal diamond-anvil cells has greatly contributed to the characterization of the phase behaviour of geologically relevant fluid systems. Various analytical methods are being developed and refined rapidly, with the Laser-Raman and LA-ICP-MS techniques being particularly useful for volatile and solute analyses, respectively. Ore deposit research has been and will continue to be the main field of application of fluid inclusions. However, fluid inclusions have been increasingly applied to other fields of earth science, especially in petroleum geology and the study of magmatic and earth interior processes.

  8. An analytical drain current model for symmetric double-gate MOSFETs

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Huang, Gongyi; Lin, Wei; Xu, Chuanzhong

    2018-04-01

    An analytical surface-potential-based drain current model of symmetric double-gate (sDG) MOSFETs is described as a SPICE-compatible model in this paper. The continuous surface and central potentials from the accumulation to the strong inversion regions are solved from the 1-D Poisson's equation in sDG MOSFETs. Furthermore, the drain current is derived from the charge sheet model as a function of the surface potential. Over a wide range of terminal voltages, doping concentrations, and device geometries, the surface potential calculation scheme and drain current model are verified by solving the 1-D Poisson's equation with the least squares method and by comparison with Silvaco Atlas simulation results and experimental data, respectively. Such a model can be adopted as a useful platform for circuit simulator development and provides a clear understanding of sDG MOSFET device physics.

  9. The tightly bound nuclei in the liquid drop model

    NASA Astrophysics Data System (ADS)

    Sree Harsha, N. R.

    2018-05-01

    In this paper, we shall maximise the binding energy per nucleon function in the semi-empirical mass formula of the liquid drop model of the atomic nuclei to analytically prove that the mean binding energy per nucleon curve has local extrema at A ≈ 58.6960, Z ≈ 26.3908 and at A ≈ 62.0178, Z ≈ 27.7506. The method of Lagrange multipliers is used to arrive at these results, with A and Z allowed to take continuous (non-integer) values. The shell model, which shows why 62Ni is the most tightly bound nucleus, is outlined. A brief account of stellar nucleosynthesis is presented to show why 56Fe is more abundant than 62Ni and 58Fe. We believe that the analytical proof presented in this paper can be a useful tool for instructors to introduce the nucleus with the highest mean binding energy per nucleon.
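
    A numerical counterpart of the maximisation described above can be sketched as follows (Python with SciPy). The semi-empirical mass formula coefficients used are one common textbook fit and the pairing term is neglected, so the optimum will differ slightly from the values quoted in the abstract; the code is an illustration, not the paper's analytical derivation.

      # Maximise the binding energy per nucleon B(A, Z)/A of the semi-empirical
      # mass formula, letting A and Z take continuous values.  Coefficients (MeV)
      # are an assumed common fit; the pairing term is omitted for simplicity.
      from scipy.optimize import minimize

      aV, aS, aC, aA = 15.75, 17.80, 0.711, 23.70

      def binding_energy(A, Z):
          return (aV * A
                  - aS * A ** (2.0 / 3.0)
                  - aC * Z * (Z - 1.0) / A ** (1.0 / 3.0)
                  - aA * (A - 2.0 * Z) ** 2 / A)

      def neg_b_per_nucleon(x):
          A, Z = x
          return -binding_energy(A, Z) / A

      res = minimize(neg_b_per_nucleon, x0=[60.0, 27.0], method="Nelder-Mead")
      A_opt, Z_opt = res.x
      print(f"A ~ {A_opt:.2f}, Z ~ {Z_opt:.2f}, B/A ~ {-res.fun:.3f} MeV")

    With a fit of this kind the optimum should land in the iron-nickel region (A in the high fifties to low sixties), broadly consistent with the extrema quoted in the abstract.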

  10. A semi-analytical solution for elastic analysis of rotating thick cylindrical shells with variable thickness using disk form multilayers.

    PubMed

    Zamani Nejad, Mohammad; Jabbari, Mehdi; Ghannad, Mehdi

    2014-01-01

    Using disk form multilayers, a semi-analytical solution has been derived for the determination of displacements and stresses in a rotating cylindrical shell with variable thickness under uniform pressure. The thick cylinder is divided into disk form layers with thickness corresponding to the thickness of the cylinder. Due to the existence of shear stress in the thick cylindrical shell with variable thickness, the equations governing the disk layers are obtained based on first-order shear deformation theory (FSDT). These equations take the form of a set of general differential equations. Given that the cylinder is divided into n disks, n sets of differential equations are obtained. The solution of this set of equations, applying the boundary conditions and continuity conditions between the layers, yields displacements and stresses. A numerical solution using the finite element method (FEM) is also presented, and good agreement was found.

  11. A Semi-Analytical Solution for Elastic Analysis of Rotating Thick Cylindrical Shells with Variable Thickness Using Disk Form Multilayers

    PubMed Central

    Zamani Nejad, Mohammad; Jabbari, Mehdi; Ghannad, Mehdi

    2014-01-01

    Using disk form multilayers, a semi-analytical solution has been derived for the determination of displacements and stresses in a rotating cylindrical shell with variable thickness under uniform pressure. The thick cylinder is divided into disk form layers with thickness corresponding to the thickness of the cylinder. Due to the existence of shear stress in the thick cylindrical shell with variable thickness, the equations governing the disk layers are obtained based on first-order shear deformation theory (FSDT). These equations take the form of a set of general differential equations. Given that the cylinder is divided into n disks, n sets of differential equations are obtained. The solution of this set of equations, applying the boundary conditions and continuity conditions between the layers, yields displacements and stresses. A numerical solution using the finite element method (FEM) is also presented, and good agreement was found. PMID:24719582

  12. Acoustic impedance of micro perforated membranes: Velocity continuity condition at the perforation boundary.

    PubMed

    Li, Chenxi; Cazzolato, Ben; Zander, Anthony

    2016-01-01

    The classic analytical model for the sound absorption of micro perforated materials is well developed and is based on a boundary condition where the velocity of the material is assumed to be zero, which is accurate when the material vibration is negligible. This paper develops an analytical model for finite-sized circular micro perforated membranes (MPMs) by applying a boundary condition such that the velocity of air particles on the hole wall boundary is equal to the membrane vibration velocity (a zero-slip condition). The acoustic impedance of the perforation, which varies with its position, is investigated. A prediction method for the overall impedance of the holes and the combined impedance of the MPM is also provided. The experimental results for four different MPM configurations are used to validate the model and good agreement between the experimental and predicted results is achieved.

  13. A non-planar two-loop three-point function beyond multiple polylogarithms

    NASA Astrophysics Data System (ADS)

    von Manteuffel, Andreas; Tancredi, Lorenzo

    2017-06-01

    We consider the analytic calculation of a two-loop non-planar three-point function which contributes to the two-loop amplitudes for tt̄ production and γγ production in gluon fusion through a massive top-quark loop. All subtopology integrals can be written in terms of multiple polylogarithms over an irrational alphabet and we employ a new method for the integration of the differential equations which does not rely on the rationalization of the latter. The top topology integrals, instead, in spite of the absence of a massive three-particle cut, cannot be evaluated in terms of multiple polylogarithms and require the introduction of integrals over complete elliptic integrals and polylogarithms. We provide one-fold integral representations for the solutions and continue them analytically to all relevant regions of the phase space in terms of real functions, extracting all imaginary parts explicitly. The numerical evaluation of our expressions becomes straightforward in this way.

  14. Parallel-plate wet denuder coupled ion chromatograph for near-real-time detection of trace acidic gases in clean room air.

    PubMed

    Takeuchi, Masaki; Tsunoda, Hiromichi; Tanaka, Hideji; Shiramizu, Yoshimi

    2011-01-01

    This paper describes the performance of our automated monitor for acidic gases (CH₃COOH, HCOOH, HCl, HNO₂, SO₂, and HNO₃) utilizing a parallel-plate wet denuder (PPWD). The PPWD quantitatively collects gaseous contaminants at a high sample flow rate (∼8 dm³ min⁻¹) compared to the conventional methods used in a clean room. Rapid response to any variability in the sample concentration enables near-real-time monitoring. In the developed monitor, the analyte collected with the PPWD is pumped into one of two preconcentration columns for 15 min and determined by means of ion chromatography. While one preconcentration column is used for chromatographic separation, the other is used for loading the sample solution. The system allows continuous monitoring of the common acidic gases in an advanced semiconductor manufacturing clean room. 2011 © The Japan Society for Analytical Chemistry

  15. Calculations of the Electron Energy Distribution Function in a Uranium Plasma by Analytic and Monte Carlo Techniques. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bathke, C. G.

    1976-01-01

    Electron energy distribution functions were calculated in a U235 plasma at 1 atmosphere for various plasma temperatures and neutron fluxes. The distributions are assumed to be a summation of a high energy tail and a Maxwellian distribution. The sources of energetic electrons considered are the fission-fragment induced ionization of uranium and the electron induced ionization of uranium. The calculation of the high energy tail is reduced to an electron slowing down calculation, from the most energetic source to the energy where the electron is assumed to be incorporated into the Maxwellian distribution. The pertinent collisional processes are electron-electron scattering and electron induced ionization and excitation of uranium. Two distinct methods were employed in the calculation of the distributions. One method is based upon the assumption of continuous slowing and yields a distribution inversely proportional to the stopping power. An iteration scheme is utilized to include the secondary electron avalanche. In the other method, a governing equation is derived without assuming continuous electron slowing. This equation is solved by a Monte Carlo technique.
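
    The first of the two methods, the continuous-slowing-down approximation, has a compact numerical analogue: the high-energy tail of the distribution at energy E is proportional to the total source rate of electrons born above E divided by the stopping power at E. The toy sketch below (Python/NumPy) illustrates this; the Gaussian source and the power-law stopping power are placeholders, not the uranium cross-section data used in the thesis, and the secondary-electron iteration is omitted.

      # Continuous-slowing-down approximation: the slowing-down flux at energy E
      # is the cumulative source above E divided by the stopping power S(E).
      # Source and stopping power below are toy placeholders.
      import numpy as np

      E = np.linspace(1.0, 100.0, 400)              # energy grid (arbitrary units)
      source = np.exp(-((E - 80.0) / 5.0) ** 2)     # toy source of fast electrons
      stopping_power = 50.0 / E                     # toy S(E) ~ 1/E (Bethe-like)

      dE = E[1] - E[0]
      cum_source = np.flip(np.cumsum(np.flip(source))) * dE   # integral of the source from E upward

      flux = cum_source / stopping_power            # high-energy tail, up to normalisation
      print(flux[::100])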

  16. Rényi continuous entropy of DNA sequences.

    PubMed

    Vinga, Susana; Almeida, Jonas S

    2004-12-07

    Entropy measures of DNA sequences estimate their randomness or, inversely, their repeatability. L-block Shannon discrete entropy accounts for the empirical distribution of all length-L words and has convergence problems for finite sequences. A new entropy measure that extends Shannon's formalism is proposed. Rényi's quadratic entropy, calculated with the Parzen window density estimation method applied to CGR/USM continuous maps of DNA sequences, constitutes a novel technique to evaluate sequence global randomness without some of the former method's drawbacks. The asymptotic behaviour of this new measure was analytically deduced, and entropies were calculated for several synthetic and experimental biological sequences. The results obtained were compared with the distributions of the null model of randomness obtained by simulation. The biological sequences showed different p-values depending on the kernel resolution of Parzen's method, which might indicate an unknown level of organization of their patterns. This new technique can be very useful in the study of DNA sequence complexity and provides additional tools for DNA entropy estimation. The main MATLAB applications developed and additional material are available at the webpage. Specialized functions can be obtained from the authors.
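
    To make the construction concrete, the sketch below (Python/NumPy) maps a DNA sequence to its 2-D chaos game representation (CGR) and evaluates the Rényi quadratic entropy of the point cloud with a Gaussian Parzen window, using the closed-form pairwise-kernel expression for the information potential. The toy sequence, the bandwidth, and the corner assignment are illustrative assumptions and do not reproduce the authors' MATLAB implementation.

      # CGR map of a DNA sequence followed by Renyi quadratic entropy estimated
      # with a Gaussian Parzen window.  H2 = -log of the mean pairwise kernel
      # (the information potential); convolving two Gaussians of variance sigma^2
      # gives variance 2*sigma^2.  Sequence and bandwidth are illustrative.
      import numpy as np

      CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

      def cgr_points(seq):
          """Each CGR point is the midpoint between the previous point and the corner of the current base."""
          pts, x = [], np.array([0.5, 0.5])
          for base in seq:
              x = 0.5 * (x + np.array(CORNERS[base]))
              pts.append(x.copy())
          return np.array(pts)

      def renyi2_entropy(points, sigma=0.05):
          n, d = points.shape
          diff = points[:, None, :] - points[None, :, :]
          sq_dist = np.sum(diff ** 2, axis=-1)
          s2 = 2.0 * sigma ** 2
          kernel = np.exp(-sq_dist / (2.0 * s2)) / (2.0 * np.pi * s2) ** (d / 2.0)
          return -np.log(kernel.mean())

      seq = "ACGTACGTGGGCCCATATAT" * 10
      print(renyi2_entropy(cgr_points(seq)))

    At a given bandwidth, a highly repetitive sequence should yield a lower quadratic entropy than a shuffled one, which is the kind of contrast the paper evaluates against a simulated null model of randomness.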

  17. Numerical simulation of tunneling through arbitrary potential barriers applied on MIM and MIIM rectenna diodes

    NASA Astrophysics Data System (ADS)

    Abdolkader, Tarek M.; Shaker, Ahmed; Alahmadi, A. N. M.

    2018-07-01

    With the continuous miniaturization of electronic devices, quantum-mechanical effects such as tunneling become more significant in many device applications. In this paper, a numerical simulation tool is developed under a MATLAB environment to calculate the tunneling probability and current through an arbitrary potential barrier, comparing three different numerical techniques: the finite difference method, the transfer matrix method, and the transmission line method. For benchmarking, the tool is applied to several case studies such as the rectangular single barrier, the rectangular double barrier, and a continuous bell-shaped potential barrier, each compared to analytical solutions and giving the dependence of the error on the number of mesh points. In addition, a thorough study of the J-V characteristics of MIM and MIIM diodes, used as rectifiers for rectenna solar cells, is presented, and the simulations are compared with experimental results, showing satisfactory agreement. At the undergraduate level, the tool provides deeper insight for students to compare numerical techniques used to solve various tunneling problems and helps students choose a suitable technique for a given application.
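
    Of the three techniques compared here, the transfer matrix method is the easiest to sketch compactly. The code below (Python/NumPy rather than MATLAB, in natural units with ħ = m = 1) estimates the tunneling probability through an arbitrary barrier by discretising it into piecewise-constant segments; the discretisation scheme and the rectangular test barrier are illustrative assumptions, not the authors' implementation.

      # Transfer matrix method for tunneling through an arbitrary barrier V(x),
      # discretised into piecewise-constant segments.  Natural units hbar = m = 1,
      # so the wavevector in each region is k_j = sqrt(2*(E - V_j)), complex when E < V_j.
      import numpy as np

      def transmission(E, Vfunc, a, b, n_seg=200):
          """Transmission probability at energy E through the barrier Vfunc on [a, b];
          the leads outside [a, b] are assumed to sit at V = 0."""
          edges = np.linspace(a, b, n_seg + 1)
          mids = 0.5 * (edges[:-1] + edges[1:])
          V = np.concatenate(([0.0], Vfunc(mids), [0.0]))       # lead, segments, lead
          k = np.sqrt(2.0 * (E - V).astype(complex))            # wavevector per region

          def M(kj, X):
              # Plane-wave matching matrix at position X; first row enforces continuity
              # of psi, second row continuity of psi' (common factor i dropped).
              e = np.exp(1j * kj * X)
              return np.array([[e, 1.0 / e], [kj * e, -kj / e]])

          total = np.identity(2, dtype=complex)
          for j, X in enumerate(edges):                         # interface between regions j and j+1
              total = total @ np.linalg.solve(M(k[j], X), M(k[j + 1], X))
          return float(abs(1.0 / total[0, 0]) ** 2)             # same k in both leads

      # Rectangular barrier (height 1.0, width 2.0) as a quick benchmark against
      # the textbook analytical expression for a square barrier.
      Vrect = lambda x: np.ones_like(x)
      for E in (0.3, 0.6, 0.9):
          print(E, transmission(E, Vrect, 0.0, 2.0))

    For the square barrier the piecewise-constant description is exact, so the result should match the textbook formula; for smooth profiles the error decreases as n_seg grows, mirroring the mesh-dependence study reported in the paper.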

  18. Online Solution of Two-Player Zero-Sum Games for Continuous-Time Nonlinear Systems With Completely Unknown Dynamics.

    PubMed

    Fu, Yue; Chai, Tianyou

    2016-12-01

    Regarding two-player zero-sum games of continuous-time nonlinear systems with completely unknown dynamics, this paper presents an online adaptive algorithm for learning the Nash equilibrium solution, i.e., the optimal policy pair. First, for known systems, the simultaneous policy updating algorithm (SPUA) is reviewed, and a new analytical method to prove its convergence is presented. Then, based on the SPUA and without using a priori knowledge of any system dynamics, an online algorithm is proposed to simultaneously learn in real time either the minimal nonnegative solution of the Hamilton-Jacobi-Isaacs (HJI) equation or, for linear systems as a special case, the generalized algebraic Riccati equation, along with the optimal policy pair. The approximate solution to the HJI equation and the admissible policy pair are expressed via the approximation theorem, and the unknown constants or weights of each are identified simultaneously using the recursive least squares method. Convergence of the online algorithm to the optimal solutions is established. A practical online algorithm is also developed. Simulation results illustrate the effectiveness of the proposed method.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohimer, J.P.

    The use of laser-based analytical methods in nuclear-fuel processing plants is considered. The species and locations for accountability, process control, and effluent control measurements in the Coprocessing, Thorex, and reference Purex fuel processing operations are identified and the conventional analytical methods used for these measurements are summarized. The laser analytical methods based upon Raman, absorption, fluorescence, and nonlinear spectroscopy are reviewed and evaluated for their use in fuel processing plants. After a comparison of the capabilities of the laser-based and conventional analytical methods, the promising areas of application of the laser-based methods in fuel processing plants are identified.

  20. Temperature-controlled micro-TLC: a versatile green chemistry and fast analytical tool for separation and preliminary screening of steroids fraction from biological and environmental samples.

    PubMed

    Zarzycki, Paweł K; Slączka, Magdalena M; Zarzycka, Magdalena B; Bartoszuk, Małgorzata A; Włodarczyk, Elżbieta; Baran, Michał J

    2011-11-01

    This paper is a continuation of our previous research focusing on the development of micro-TLC methodology under temperature-controlled conditions. The main goal of the present paper is to demonstrate the separation and detection capability of the micro-TLC technique using simple analytical protocols without multi-step sample pre-purification. One of the advantages of planar chromatography over its column counterpart is that each TLC run can be performed on a previously unused stationary phase. Therefore, it is possible to fractionate or separate complex samples characterized by heavy biological matrix loading. In the present studies, the components of interest, mainly steroids, were isolated from biological samples such as fish bile using single pre-treatment steps involving direct organic liquid extraction and/or deproteinization by freeze-drying. Low-molecular-mass compounds with polarity ranging from estetrol to progesterone derived from environmental samples (lake water, untreated and treated sewage waters) were concentrated using optimized solid-phase extraction (SPE). Specific band patterns for samples derived from surface water of the Middle Pomerania region in the northern part of Poland can be easily observed on the obtained micro-TLC chromatograms. This approach can be useful as a simple and inexpensive complementary method for fast control and screening of treated sewage water discharged by municipal wastewater treatment plants. Moreover, our experimental results show the potential of micro-TLC as an efficient tool for retention measurements of a wide range of steroids under reversed-phase (RP) chromatographic conditions. These data can be used for further optimization of SPE or HPLC systems working under RP conditions. Furthermore, we also demonstrated that a micro-TLC-based analytical approach can be applied as an effective method for the internal standard (IS) substance search. Generally, the described methodology can be applied for fast fractionation or screening of a whole range of target substances as well as for chemo-taxonomic studies and fingerprinting of complex mixtures present in biological or environmental samples. Due to the low consumption of eluent (usually 0.3-1 mL/run), mainly composed of water-alcohol binary mixtures, this method can be considered an environmentally friendly, green-chemistry-focused analytical tool, supplementary to analytical protocols involving column chromatography or planar micro-fluidic devices. Copyright © 2011 Elsevier Ltd. All rights reserved.
