Sample records for general analytical method

  1. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    PubMed

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternative "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity while taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of them are not designed to protect against the risk of accepting unsuitable methods and thus have the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on the β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or the β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure the safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
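
    For orientation, one common way to formalize the total-error ("fit for purpose") criterion and the tolerance-interval test mentioned above is sketched below; the notation is generic and may differ from the paper's.

    ```latex
    % A method is deemed fit for purpose if a high fraction \pi_0 of future results Y
    % lies within \pm\lambda of the true value \mu_T:
    P\!\left(|Y-\mu_T|\le\lambda\right) \;\ge\; \pi_0 .
    % A \beta-content tolerance-interval test accepts the method when
    [\,\bar{y}-ks,\ \bar{y}+ks\,] \;\subseteq\; [\,\mu_T-\lambda,\ \mu_T+\lambda\,],
    % where k is chosen so the interval contains at least a fraction \beta of the
    % population with the stated confidence (e.g., 90% for the \beta-content (0.9) method).
    ```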

  2. Analytical capabilities and services of Lawrence Livermore Laboratory's General Chemistry Division. [Methods available at Lawrence Livermore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutmacher, R.; Crawford, R.

    This comprehensive guide to the analytical capabilities of Lawrence Livermore Laboratory's General Chemistry Division describes each analytical method in terms of its principle, field of application, and qualitative and quantitative uses. Also described are the state and quantity of sample required for analysis, processing time, available instrumentation, and responsible personnel.

  3. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...

  4. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...

  5. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...

  6. Simultaneous Spectrophotometric Determination of Rifampicin, Isoniazid and Pyrazinamide in a Single Step

    PubMed Central

    Asadpour-Zeynali, Karim; Saeb, Elhameh

    2016-01-01

    The three antituberculosis medications investigated in this work are rifampicin, isoniazid and pyrazinamide. The ultraviolet (UV) spectra of these compounds overlap; thus, suitable chemometric methods are helpful for their simultaneous spectrophotometric determination. A generalized version of the net analyte signal standard addition method (GNASSAM) was used for the determination of the three antituberculosis medications as a model system. In the generalized net analyte signal standard addition method, only one standard solution is prepared for all analytes. This standard solution contains a mixture of all analytes of interest, and its addition to the sample increases the net analyte signal of each analyte in proportion to the concentration of that analyte in the added standard solution. To determine the concentration of each analyte in some synthetic mixtures, the UV spectra of the pure analytes and of each sample were recorded in the range of 210-550 nm. The standard addition procedure was performed for each sample, the UV spectrum was recorded after each addition, and finally the results were analyzed by the net analyte signal method. The obtained concentrations show acceptable performance of the GNASSAM in these cases. PMID:28243267
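
    As background to the standard-addition idea that the GNASSAM generalizes, the sketch below shows the classical single-analyte standard-addition calculation; the absorbance values are hypothetical, and the multi-analyte, full-spectrum treatment of the paper is not reproduced here.

    ```python
    # Classical single-analyte standard addition (hypothetical data): the unknown
    # concentration is the magnitude of the x-intercept of the signal-vs-added line.
    import numpy as np

    added_conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])   # added standard, mg/L (hypothetical)
    signal = np.array([0.21, 0.35, 0.50, 0.64, 0.79])  # measured absorbance (hypothetical)

    slope, intercept = np.polyfit(added_conc, signal, 1)
    c_unknown = intercept / slope                       # x-intercept magnitude
    print(f"Estimated analyte concentration: {c_unknown:.2f} mg/L")
    ```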

  7. Bias Assessment of General Chemistry Analytes using Commutable Samples.

    PubMed

    Koerbin, Gus; Tate, Jillian R; Ryan, Julie; Jones, Graham Rd; Sikaris, Ken A; Kanowski, David; Reed, Maxine; Gill, Janice; Koumantakis, George; Yen, Tina; St John, Andrew; Hickman, Peter E; Simpson, Aaron; Graham, Peter

    2014-11-01

    Harmonisation of reference intervals for routine general chemistry analytes has been a goal for many years, but analytical bias may prevent this harmonisation. To determine whether analytical bias is present when comparing methods, commutable samples (samples that have the same properties as the clinical samples routinely analysed) should be used as reference samples to eliminate the possibility of matrix effects. The use of commutable samples has improved the identification of unacceptable analytical performance in the Netherlands and Spain. The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has undertaken a pilot study using commutable samples in an attempt not only to determine country-specific reference intervals but also to make them comparable between countries. Australia and New Zealand, through the Australasian Association of Clinical Biochemists (AACB), have also undertaken an assessment of analytical bias using commutable samples and determined that, of the 27 general chemistry analytes studied, 19 showed between-method biases sufficiently small not to prevent harmonisation of reference intervals. Application of evidence-based approaches, including the determination of analytical bias using commutable material, is necessary when seeking to harmonise reference intervals.

  8. Analytical approximate solutions for a general class of nonlinear delay differential equations.

    PubMed

    Căruntu, Bogdan; Bota, Constantin

    2014-01-01

    We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.
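
    A minimal sketch of the collocation-style polynomial least-squares idea follows, applied to an illustrative pantograph-type equation; the equation, polynomial degree, and collocation grid are assumptions chosen for illustration, not taken from the paper.

    ```python
    # Polynomial least-squares sketch for y'(t) = -y(t) + 0.5*y(t/2), y(0) = 1, t in [0, 1].
    # Approximate y(t) = 1 + sum_k c_k t^k and minimize the squared equation residual
    # on a collocation grid; the residual is linear in the coefficients.
    import numpy as np

    n = 6                               # polynomial degree (illustrative)
    t = np.linspace(0.0, 1.0, 50)       # collocation points

    A = np.column_stack([k * t**(k - 1) + t**k - 0.5 * (t / 2) ** k for k in range(1, n + 1)])
    b = -(1.0 - 0.5) * np.ones_like(t)  # contribution of the fixed c_0 = 1 term
    c, *_ = np.linalg.lstsq(A, b, rcond=None)

    def y(tt):
        return 1.0 + sum(ck * tt**k for k, ck in enumerate(c, start=1))

    residual = np.gradient(y(t), t) + y(t) - 0.5 * y(t / 2)
    print("y(1) ~", y(1.0), "; max |residual| ~", np.abs(residual).max())
    ```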

  9. Laboratory Environmental Sample Disposal Information Document - Companion to Standardized Analytical Methods for Environmental Restoration Following Homeland Security Events (SAM) – Revision 5.0

    EPA Pesticide Factsheets

    This document is intended to provide general guidelines for use by EPA and EPA-contracted laboratories when disposing of samples and associated analytical waste following use of the analytical methods listed in SAM.

  10. Safety and Waste Management for SAM Chemistry Methods

    EPA Pesticide Factsheets

    The General Safety and Waste Management page offers section-specific safety and waste management details for the chemical analytes included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM).

  11. Safety and Waste Management for SAM Radiochemical Methods

    EPA Pesticide Factsheets

    The General Safety and Waste Management page offers section-specific safety and waste management details for the radiochemical analytes included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM).

  12. Analytical close-form solutions to the elastic fields of solids with dislocations and surface stress

    NASA Astrophysics Data System (ADS)

    Ye, Wei; Paliwal, Bhasker; Ougazzaden, Abdallah; Cherkaoui, Mohammed

    2013-07-01

    The concept of eigenstrain is adopted to derive a general analytical framework for solving the elastic field of 3D anisotropic solids with general defects while considering surface stress. The formulation shows that the elastic constants and geometrical features of the surface play an important role in determining the elastic fields of the solid. As an application, analytical closed-form solutions for the stress fields of an infinite isotropic circular nanowire are obtained. The stress fields are compared with the classical solutions and with those of the complex variable method. The stress fields from this work demonstrate the impact of the surface stress as the size of the nanowire shrinks, an effect that becomes negligible at the macroscopic scale. Compared with the power series solutions of the complex variable method, the analytical solutions in this work provide a better platform and are more flexible in various applications. More importantly, the proposed analytical framework profoundly improves the study of general 3D anisotropic materials with surface effects.

  13. Luminescent detection of hydrazine and hydrazine derivatives

    DOEpatents

    Swager, Timothy M [Newton, MA; Thomas, III, Samuel W.

    2012-04-17

    The present invention generally relates to methods for modulating the optical properties of a luminescent polymer via interaction with a species (e.g., an analyte). In some cases, the present invention provides methods for determination of an analyte by monitoring a change in an optical signal of a luminescent polymer upon exposure to an analyte. Methods of the present invention may be useful for the vapor phase detection of analytes such as explosives and toxins. The present invention also provides methods for increasing the luminescence intensity of a polymer, such as a polymer that has been photobleached, by exposing the luminescent polymer to a species such as a reducing agent.

  14. Meta-Analytic Methods of Pooling Correlation Matrices for Structural Equation Modeling under Different Patterns of Missing Data

    ERIC Educational Resources Information Center

    Furlow, Carolyn F.; Beretvas, S. Natasha

    2005-01-01

    Three methods of synthesizing correlations for meta-analytic structural equation modeling (SEM) under different degrees and mechanisms of missingness were compared for the estimation of correlation and SEM parameters and goodness-of-fit indices by using Monte Carlo simulation techniques. A revised generalized least squares (GLS) method for…

  15. Contextual and Analytic Qualities of Research Methods Exemplified in Research on Teaching

    ERIC Educational Resources Information Center

    Svensson, Lennart; Doumas, Kyriaki

    2013-01-01

    The aim of the present article is to discuss contextual and analytic qualities of research methods. The arguments are specified in relation to research on teaching. A specific investigation is used as an example to illustrate the general methodological approach. It is argued that research methods should be carefully grounded in an understanding of…

  16. Analytical Solution of a Generalized Hirota-Satsuma Equation

    NASA Astrophysics Data System (ADS)

    Kassem, M.; Mabrouk, S.; Abd-el-Malek, M.

    A modified version of the generalized Hirota-Satsuma equation is solved here using a two-parameter group transformation method. This problem in three dimensions was reduced by Estevez [1] to a two-dimensional one through a Lie transformation method and left unsolved. In the present paper, through application of the symmetry transformation, the Lax pair has been reduced to a system of ordinary equations. Three transformation cases are investigated. The obtained analytical solutions are plotted and show a profile proper to deflagration processes, well described by the Degasperis-Procesi equation.

  17. Evaluation of selected methods for determining streamflow during periods of ice effect

    USGS Publications Warehouse

    Melcher, N.B.; Walker, J.F.

    1990-01-01

    The methods are classified into two general categories, subjective and analytical, depending on whether individual judgement is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods, and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used for streamflow-gaging stations where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice adjustment factor) may be appropriate for use for stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge ratio and multiple regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.

  18. Analytical methods in multivariate highway safety exposure data estimation

    DOT National Transportation Integrated Search

    1984-01-01

    Three general analytical techniques which may be of use in extending, enhancing, and combining highway accident exposure data are discussed. The techniques are log-linear modelling, iterative proportional fitting and the expectation maximization…

  19. Semi-analytical solution for the generalized absorbing boundary condition in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Lee, Chung-Shuo; Chen, Yan-Yu; Yu, Chi-Hua; Hsu, Yu-Chuan; Chen, Chuin-Shan

    2017-07-01

    We present a semi-analytical solution of a time-history kernel for the generalized absorbing boundary condition in molecular dynamics (MD) simulations. To facilitate the kernel derivation, the concept of virtual atoms in real space that can conform with an arbitrary boundary in an arbitrary lattice is adopted. The generalized Langevin equation is regularized using eigenvalue decomposition and, consequently, an analytical expression of an inverse Laplace transform is obtained. With construction of dynamical matrices in the virtual domain, a semi-analytical form of the time-history kernel functions for an arbitrary boundary in an arbitrary lattice can be found. The time-history kernel functions for different crystal lattices are derived to show the generality of the proposed method. Non-equilibrium MD simulations in a triangular lattice with and without the absorbing boundary condition are conducted to demonstrate the validity of the solution.

  20. Analytical ground state for the Jaynes-Cummings model with ultrastrong coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yuanwei; Institute of Theoretical Physics, Shanxi University, Taiyuan 030006; Chen Gang

    2011-06-15

    We present a generalized variational method to analytically obtain the ground-state properties of the Jaynes-Cummings model with ultrastrong coupling. An explicit expression for the ground-state energy, which agrees well with numerical simulation over a wide range of experimental parameters, is given. In particular, the introduced method can successfully treat the Jaynes-Cummings model with positive detuning (the atomic resonant level is larger than the photon frequency), which cannot be handled by the adiabatic approximation or the generalized rotating-wave approximation. Finally, we also demonstrate analytically how to control the mean photon number by means of the current experimental parameters, including the photon frequency, the coupling strength, and especially the atomic resonant level.
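
    For reference, the standard Jaynes-Cummings Hamiltonian and the sign convention behind "positive detuning" are written out below in a common notation (ħ = 1); the paper's conventions may differ.

    ```latex
    H_{\mathrm{JC}} \;=\; \omega\, a^{\dagger}a \;+\; \tfrac{\omega_{0}}{2}\,\sigma_{z}
                    \;+\; g\left(a^{\dagger}\sigma^{-} + a\,\sigma^{+}\right),
    % positive detuning: \Delta = \omega_{0} - \omega > 0
    % (atomic resonant level \omega_{0} above the photon frequency \omega).
    ```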

  1. [Manual therapy in general practice].

    PubMed

    Березуцкий, Владимир И

    2016-01-01

    The article is devoted to the use of manual therapy for the diagnosis and treatment of vertebrogenic pain syndrome in general practice. An analytical review of sources demonstrates the clinical benefit of general practitioners applying basic manual therapy methods.

  2. Analytical model for advective-dispersive transport involving flexible boundary inputs, initial distributions and zero-order productions

    NASA Astrophysics Data System (ADS)

    Chen, Jui-Sheng; Li, Loretta Y.; Lai, Keng-Hsin; Liang, Ching-Ping

    2017-11-01

    A novel solution method is presented which leads to an analytical model for advective-dispersive transport in a semi-infinite domain involving a wide spectrum of boundary inputs, initial distributions, and zero-order productions. The novel solution method applies the Laplace transform in combination with the generalized integral transform technique (GITT) to obtain the generalized analytical solution. Based on this generalized analytical expression, we derive a comprehensive set of special-case solutions for some time-dependent boundary distributions and zero-order productions, described by the Dirac delta, constant, Heaviside, exponentially-decaying, or periodically sinusoidal functions, as well as some position-dependent initial conditions and zero-order productions specified by the Dirac delta, constant, Heaviside, or exponentially-decaying functions. The developed solutions are tested against an analytical solution from the literature. The excellent agreement between the analytical solutions confirms that the new model can serve as an effective tool for investigating transport behaviors under different scenarios. Several examples of applications are given to explore transport behaviors which are rarely noted in the literature. The results show that the concentration waves resulting from the periodically sinusoidal input are sensitive to the dispersion coefficient. The implication of this new finding is that a tracer test with a periodic input may provide additional information for identifying the dispersion coefficient. Moreover, the solution strategy presented in this study can be extended to derive analytical models for handling more complicated problems of solute transport in multi-dimensional media subjected to sequential decay chain reactions, for which analytical solutions are not currently available.
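
    For concreteness, a generic form of the one-dimensional problem class addressed by such solutions is written below; the notation is illustrative and not necessarily the paper's.

    ```latex
    \frac{\partial C}{\partial t} \;=\; D\,\frac{\partial^{2} C}{\partial x^{2}}
      \;-\; v\,\frac{\partial C}{\partial x} \;+\; \gamma(x,t), \qquad 0 < x < \infty,
    % with a flexible initial distribution and time-dependent boundary input:
    C(x,0) = F(x), \qquad C(0,t) = f(t), \qquad
    \left.\frac{\partial C}{\partial x}\right|_{x\to\infty} = 0,
    % where \gamma(x,t) is the zero-order production term.
    ```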

  3. General Quality Control (QC) Guidelines for SAM Methods

    EPA Pesticide Factsheets

    Learn more about quality control guidelines and recommendations for the analysis of samples using the methods listed in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM).

  4. Solving the Schroedinger Equation of Atoms and Molecules without Analytical Integration Based on the Free Iterative-Complement-Interaction Wave Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakatsuji, H.; Nakashima, H.; Department of Synthetic Chemistry and Biological Chemistry, Graduate School of Engineering, Kyoto University, Nishikyo-ku, Kyoto 615-8510

    2007-12-14

    A local Schroedinger equation (LSE) method is proposed for solving the Schroedinger equation (SE) of general atoms and molecules without doing analytic integrations over the complement functions of the free ICI (iterative-complement-interaction) wave functions. Since the free ICI wave function is potentially exact, we can assume a flatness of its local energy. The variational principle is not applicable because the analytic integrations over the free ICI complement functions are very difficult for general atoms and molecules. The LSE method is applied to several 2- to 5-electron atoms and molecules, giving an accuracy of 10^-5 Hartree in total energy. The potential energy curves of the H2 and LiH molecules are calculated precisely with the free ICI LSE method. The results show the high potentiality of the free ICI LSE method for developing accurate predictive quantum chemistry with the solutions of the SE.

  5. Selection of Wavelengths for Optimum Precision in Simultaneous Spectrophotometric Determinations.

    ERIC Educational Resources Information Center

    DiTusa, Michael R.; Schilt, Alfred A.

    1985-01-01

    Although many textbooks include a description of simultaneous determinations employing absorption spectrophotometry and treat the mathematics necessary for analytical quantitations, treatment of analytical wavelength selection has been mostly qualitative. Therefore, a general method for selecting wavelengths for optimum precision in simultaneous…

  6. Sampling and analysis for radon-222 dissolved in ground water and surface water

    USGS Publications Warehouse

    DeWayne, Cecil L.; Gesell, T.F.

    1992-01-01

    Radon-222 is a naturally occurring radioactive gas in the uranium-238 decay series that has traditionally been called, simply, radon. The lung cancer risks associated with the inhalation of radon decay products have been well documented by epidemiological studies on populations of uranium miners. The realization that radon is a public health hazard has raised the need for sampling and analytical guidelines for field personnel. Several sampling and analytical methods are being used to document radon concentrations in ground water and surface water worldwide but no convenient, single set of guidelines is available. Three different sampling and analytical methods - bubbler, liquid scintillation, and field screening - are discussed in this paper. The bubbler and liquid scintillation methods have high accuracy and precision, and small analytical method detection limits of 0.2 and 10 pCi/l (picocuries per liter), respectively. The field screening method generally is used as a qualitative reconnaissance tool.

  7. Artificial Intelligence Methods in Pursuit Evasion Differential Games

    DTIC Science & Technology

    1990-07-30

    ...objectives, sometimes with fuzzy ones. Classical optimization, control, or game-theoretic methods are insufficient for their resolution. ... the Analytical Hierarchy Process originated by T.L. Saaty of the Wharton School. The Analytic Hierarchy Process (AHP) is a general theory of...

  8. A new tool for the evaluation of the analytical procedure: Green Analytical Procedure Index.

    PubMed

    Płotka-Wasylka, J

    2018-05-01

    A new means for assessing analytical protocols with respect to green analytical chemistry attributes has been developed. The new tool, called GAPI (Green Analytical Procedure Index), evaluates the green character of an entire analytical methodology, from sample collection to final determination, and was created using such tools as the National Environmental Methods Index (NEMI) and the Analytical Eco-Scale to provide not only general but also qualitative information. In GAPI, a specific symbol with five pentagrams is used to evaluate and quantify the environmental impact involved in each step of an analytical methodology, coloured from green through yellow to red to depict low, medium, and high impact, respectively. The proposed tool was used to evaluate analytical procedures applied in the determination of biogenic amines in wine samples and in polycyclic aromatic hydrocarbon determination by EPA methods. The GAPI tool not only provides an immediately perceptible overview to the user/reader but also offers exhaustive information on the evaluated procedures. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. An Analytical Investigation of Three General Methods of Calculating Chemical-Equilibrium Compositions

    NASA Technical Reports Server (NTRS)

    Zeleznik, Frank J.; Gordon, Sanford

    1960-01-01

    The Brinkley, Huff, and White methods for chemical-equilibrium calculations were modified and extended in order to permit an analytical comparison. The extended forms of these methods permit condensed species as reaction products, include temperature as a variable in the iteration, and permit arbitrary estimates for the variables. It is analytically shown that the three extended methods can be placed in a form that is independent of components. In this form the Brinkley iteration is computationally identical to the White method, while the modified Huff method differs only slightly from these two. The convergence rates of the modified Brinkley and White methods are identical; further, all three methods are guaranteed to converge and will ultimately converge quadratically. It is concluded that none of the three methods offers any significant computational advantage over the other two.

  10. General Safety and Waste Management Related to SAM

    EPA Pesticide Factsheets

    The General Safety and Waste Management page offers section-specific safety and waste management details for chemicals, radiochemicals, pathogens, and biotoxins included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM).

  11. Life cycle management of analytical methods.

    PubMed

    Parr, Maria Kristina; Schmidt, Alexander H

    2018-01-05

    In modern process management, the life cycle concept is gaining more and more importance. It focuses on the total costs of the process from investment to operation and finally retirement. In recent years, interest in this concept has also been growing for analytical procedures. The life cycle of an analytical method consists of design, development, validation (including instrumental qualification, continuous method performance verification and method transfer) and finally retirement of the method. Regulatory bodies also appear to have increased their awareness of life cycle management for analytical methods. Thus, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH), as well as the United States Pharmacopeial Forum, discuss the adoption of new guidelines that include life cycle management of analytical methods. The US Pharmacopeia (USP) Validation and Verification expert panel has already proposed a new General Chapter 〈1220〉 "The Analytical Procedure Lifecycle" for integration into the USP. Furthermore, a growing interest in life cycle management is also seen in the non-regulated environment. Quality-by-design based method development results in increased method robustness. This reduces the effort needed for method performance verification and post-approval changes, and minimizes the risk of method-related out-of-specification results, which strongly contributes to reduced costs of the method during its life cycle. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. General method of solving the Schroedinger equation of atoms and molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakatsuji, Hiroshi

    2005-12-15

    We propose a general method of solving the Schroedinger equation of atoms and molecules. We first construct the wave function having the exact structure, using the ICI (iterative configuration or complement interaction) method, and then optimize the variables involved by the variational principle. Based on the scaled Schroedinger equation and related principles, we can avoid the singularity problem of atoms and molecules and formulate a general method of calculating the exact wave functions in an analytical expansion form. We choose an initial function ψ0 and a scaling function g, and then the ICI method automatically generates the wave function that has the exact structure by using the Hamiltonian of the system. The Hamiltonian contains all the information of the system. The free ICI method provides a flexible and variationally favorable procedure for constructing the exact wave function. We explain the computational procedure of the analytical ICI method routinely performed in our laboratory. Simple examples are given using the hydrogen atom for the nuclear singularity case, Hooke's atom for the electron singularity case, and the helium atom for both cases.

  13. Implementation of structural response sensitivity calculations in a large-scale finite-element analysis system

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Rogers, J. L., Jr.

    1982-01-01

    The methodology used to implement structural sensitivity calculations into a major, general-purpose finite-element analysis system (SPAR) is described. This implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of SPAR are also discussed.

  14. Improving LC-MS sensitivity through increases in chromatographic performance: comparisons of UPLC-ES/MS/MS to HPLC-ES/MS/MS.

    PubMed

    Churchwell, Mona I; Twaddle, Nathan C; Meeker, Larry R; Doerge, Daniel R

    2005-10-25

    Recent technological advances have made available reverse phase chromatographic media with a 1.7 μm particle size, along with a liquid handling system that can operate such columns at much higher pressures. This technology, termed ultra performance liquid chromatography (UPLC), offers significant theoretical advantages in resolution, speed, and sensitivity for analytical determinations, particularly when coupled with mass spectrometers capable of high-speed acquisitions. This paper explores the differences in LC-MS performance through a side-by-side comparison of UPLC for several methods previously optimized for HPLC-based separation and quantification of multiple analytes with maximum throughput. In general, UPLC produced significant improvements in method sensitivity, speed, and resolution. Sensitivity increases with UPLC were analyte-dependent and as large as 10-fold, and improvements in method speed were as large as 5-fold under conditions of comparable peak separations. Improvements in chromatographic resolution with UPLC were apparent from generally narrower peak widths and from a separation of diastereomers not possible using HPLC. Overall, the improvements in LC-MS method sensitivity, speed, and resolution provided by UPLC show that further advances can be made in analytical methodology to add significant value to hypothesis-driven research.

  15. Generalized constitutive equations for piezo-actuated compliant mechanism

    NASA Astrophysics Data System (ADS)

    Cao, Junyi; Ling, Mingxiang; Inman, Daniel J.; Lin, Jin

    2016-09-01

    This paper formulates analytical models to describe the static displacement and force interactions between generic serial-parallel compliant mechanisms and their loads by employing the matrix method. In keeping with the familiar piezoelectric constitutive equations, the generalized constitutive equations of a compliant mechanism represent the input-output displacement and force relations in the form of a generalized Hooke's law and as analytical functions of physical parameters. Also significantly, a new model of output displacement for a compliant mechanism interacting with piezo-stacks and elastic loads is deduced based on the generalized constitutive equations. Some original findings differing from the well-known constitutive performance of piezo-stacks are also given. The feasibility of the proposed models is confirmed by finite element analysis and by experiments under various elastic loads. The analytical models can be an insightful tool for predicting and optimizing the performance of a wide class of compliant mechanisms that simultaneously consider the influence of loads and piezo-stacks.
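
    The "familiar piezoelectric constitutive equations" referred to above are, in one common (strain-charge) form, as follows; the compliant-mechanism relations of the paper are cast analogously, though in the paper's own variables.

    ```latex
    S = s^{E}\,T + d^{\,t}\,E, \qquad D = d\,T + \varepsilon^{T} E,
    % S: strain, T: stress, E: electric field, D: electric displacement,
    % s^{E}: compliance at constant field, d: piezoelectric coefficients,
    % \varepsilon^{T}: permittivity at constant stress.
    ```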

  16. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of a nonuniform grid and variable…

  17. Safety and Waste Management for SAM Pathogen Methods

    EPA Pesticide Factsheets

    The General Safety and Waste Management page offers section-specific safety and waste management details for the pathogens included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM).

  18. Safety and Waste Management for SAM Biotoxin Methods

    EPA Pesticide Factsheets

    The General Safety and Waste Management page offers section-specific safety and waste management details for the biotoxins included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM).

  19. Big data science: A literature review of nursing research exemplars.

    PubMed

    Westra, Bonnie L; Sylvia, Martha; Weinfurter, Elizabeth F; Pruinelli, Lisiane; Park, Jung In; Dodd, Dianna; Keenan, Gail M; Senk, Patricia; Richesson, Rachel L; Baukner, Vicki; Cruz, Christopher; Gao, Grace; Whittenburg, Luann; Delaney, Connie W

    Big data and cutting-edge analytic methods in nursing research challenge nurse scientists to extend the data sources and analytic methods used for discovering and translating knowledge. The purpose of this study was to identify, analyze, and synthesize exemplars of big data nursing research applied to practice and disseminated in key nursing informatics, general biomedical informatics, and nursing research journals. A literature review of studies published between 2009 and 2015 was conducted. There were 650 journal articles identified in 17 key nursing informatics, general biomedical informatics, and nursing research journals in the Web of Science database. After screening for inclusion and exclusion criteria, 17 studies published in 18 articles were identified as big data nursing research applied to practice. Nurses clearly are beginning to conduct big data research applied to practice. These studies represent multiple data sources and settings. Although numerous analytic methods were used, the fundamental issue remains how to define the types of analyses consistent with big data analytic methods. There is a need to increase the visibility of big data and data science research conducted by nurse scientists, to further examine the use of the state of the science in data analytics, and to continue expanding the availability and use of a variety of scientific, governmental, and industry data resources. A major implication of this literature review is the question of whether nursing faculty and the preparation of future scientists (PhD programs) are ready for big data and data science. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Generalized analytical solutions to sequentially coupled multi-species advective-dispersive transport equations in a finite domain subject to an arbitrary time-dependent source boundary condition

    NASA Astrophysics Data System (ADS)

    Chen, Jui-Sheng; Liu, Chen-Wuing; Liang, Ching-Ping; Lai, Keng-Hsin

    2012-08-01

    Multi-species advective-dispersive transport equations sequentially coupled with first-order decay reactions are widely used to describe the transport and fate of decay chain contaminants such as radionuclides, chlorinated solvents, and nitrogen. Although researchers have attempted to present various types of methods for analytically solving this transport equation system, the currently available solutions are mostly limited to an infinite or a semi-infinite domain. A generalized analytical solution for the coupled multi-species transport problem in a finite domain associated with an arbitrary time-dependent source boundary is not available in the published literature. In this study, we first derive generalized analytical solutions for this transport problem in a finite domain involving an arbitrary number of species subject to an arbitrary time-dependent source boundary. Subsequently, we adopt these derived generalized analytical solutions to obtain explicit analytical solutions for a special-case transport scenario involving an exponentially decaying Bateman-type time-dependent source boundary. We test the derived special-case solutions against the previously published coupled 4-species transport solution and against the corresponding numerical solution for coupled 10-species transport to verify the solutions. Finally, we compare the new analytical solutions derived for a finite domain against the published analytical solutions derived for a semi-infinite domain to illustrate the effect of the exit boundary condition on coupled multi-species transport with an exponentially decaying source boundary. The results show noticeable discrepancies, for the dispersion-dominated condition, between the breakthrough curves of all the species in the immediate vicinity of the exit boundary obtained from the analytical solutions for a finite domain and for a semi-infinite domain.
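
    A generic statement of the sequentially coupled system in question (first-order decay chain, retardation omitted) is given below for orientation; the notation is illustrative.

    ```latex
    \frac{\partial C_{i}}{\partial t} \;=\; D\,\frac{\partial^{2} C_{i}}{\partial x^{2}}
      \;-\; v\,\frac{\partial C_{i}}{\partial x} \;-\; k_{i} C_{i} \;+\; k_{i-1} C_{i-1},
    \qquad i = 1,\dots,N, \qquad k_{0}C_{0} \equiv 0,
    % finite domain 0 \le x \le L with an arbitrary time-dependent source boundary
    % C_{i}(0,t) = f_{i}(t).
    ```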

  1. Siewert solutions of transcendental equations, generalized Lambert functions and physical applications

    NASA Astrophysics Data System (ADS)

    Barsan, Victor

    2018-05-01

    Several classes of transcendental equations, mainly eigenvalue equations associated with non-relativistic quantum mechanical problems, are analyzed. Siewert's systematic approach to such equations is discussed from the perspective of new results recently obtained in the theory of generalized Lambert functions and of algebraic approximations of various special or elementary functions. Combining exact and approximate analytical methods, quite precise analytical results are obtained for apparently intractable problems. The results can be applied in quantum and classical mechanics, magnetism, elasticity, solar energy conversion, etc.
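
    For context only, the classical (unparameterized) case is solved by the Lambert W function, which SciPy exposes; the generalized Lambert functions discussed in the paper cover broader transcendental forms than this sketch.

    ```python
    # Classical Lambert W illustration (not the generalized functions of the paper):
    # solve x * exp(x) = a on the principal branch using scipy.special.lambertw.
    import numpy as np
    from scipy.special import lambertw

    a = 2.5
    x = lambertw(a).real            # principal branch W_0, real for a > -1/e
    print(x, x * np.exp(x))         # the second value reproduces a
    ```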

  2. Derivation of general analytic gradient expressions for density-fitted post-Hartree-Fock methods: An efficient implementation for the density-fitted second-order Møller–Plesset perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bozkaya, Uğur, E-mail: ugur.bozkaya@atauni.edu.tr

    General analytic gradient expressions (with the frozen-core approximation) are presented for density-fitted post-HF methods. An efficient implementation of frozen-core analytic gradients for second-order Møller-Plesset perturbation theory (MP2) with the density-fitting (DF) approximation (applied to both reference and correlation energies), denoted DF-MP2, is reported. The DF-MP2 method is applied to a set of alkanes, conjugated dienes, and noncovalent interaction complexes to compare the computational cost of single-point analytic gradients with MP2 with the resolution-of-the-identity approach (RI-MP2) [F. Weigend and M. Häser, Theor. Chem. Acc. 97, 331 (1997); R. A. Distasio, R. P. Steele, Y. M. Rhee, Y. Shao, and M. Head-Gordon, J. Comput. Chem. 28, 839 (2007)]. In the RI-MP2 method, the DF approach is used only for the correlation energy. Our results demonstrate that the DF-MP2 method substantially accelerates the RI-MP2 method for analytic gradient computations due to the reduced input/output (I/O) time. Because in the DF-MP2 method the DF approach is used for both reference and correlation energies, the storage of 4-index electron repulsion integrals (ERIs) is avoided; 3-index ERI tensors are employed instead. Further, as in the case of the integrals, our gradient equation completely avoids construction or storage of the 4-index two-particle density matrix (TPDM); instead, we use 2- and 3-index TPDMs. Hence, the I/O bottleneck of a gradient computation is significantly reduced. Therefore, the cost of the generalized-Fock matrix (GFM), the TPDM, the solution of the Z-vector equations, the back-transformation of the TPDM, and the integral derivatives is substantially reduced when the DF approach is used for the entire energy expression. Further application results show that the DF approach introduces negligible errors for closed-shell reaction energies and equilibrium bond lengths.

  3. Evaluation of generalized degrees of freedom for sparse estimation by replica method

    NASA Astrophysics Data System (ADS)

    Sakata, A.

    2016-12-01

    We develop a method to evaluate the generalized degrees of freedom (GDF) for linear regression with sparse regularization. The GDF is a key factor in model selection, and thus its evaluation is useful in many modelling applications. An analytical expression for the GDF is derived using the replica method in the large-system-size limit with random Gaussian predictors. The resulting formula has a universal form that is independent of the type of regularization, providing us with a simple interpretation. Within the framework of replica symmetric (RS) analysis, GDF has a physical meaning as the effective fraction of non-zero components. The validity of our method in the RS phase is supported by the consistency of our results with previous mathematical results. The analytical results in the RS phase are calculated numerically using the belief propagation algorithm.
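
    For readers unfamiliar with the quantity being evaluated, the usual definition of the generalized degrees of freedom (in the standard Gaussian-noise setting) is recalled below; the replica-method evaluation in the paper is not reproduced here.

    ```latex
    % For y \sim N(\mu, \sigma^{2} I) and a fit \hat{\mu}(y):
    \mathrm{GDF}(\hat{\mu}) \;=\; \frac{1}{\sigma^{2}}\sum_{i=1}^{n}\mathrm{Cov}\!\left(\hat{\mu}_{i}, y_{i}\right)
      \;=\; \sum_{i=1}^{n}\mathbb{E}\!\left[\frac{\partial \hat{\mu}_{i}}{\partial y_{i}}\right],
    % the second equality holding for almost-differentiable estimators (Stein's lemma).
    ```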

  4. Heuristics for Understanding the Concepts of Interaction, Polynomial Trend, and the General Linear Model.

    ERIC Educational Resources Information Center

    Thompson, Bruce

    The relationship between analysis of variance (ANOVA) methods and their analogs (analysis of covariance and multiple analyses of variance and covariance--collectively referred to as OVA methods) and the more general analytic case is explored. A small heuristic data set is used, with a hypothetical sample of 20 subjects, randomly assigned to five…

  5. A General and Flexible Approach to Estimating the Social Relations Model Using Bayesian Methods

    ERIC Educational Resources Information Center

    Ludtke, Oliver; Robitzsch, Alexander; Kenny, David A.; Trautwein, Ulrich

    2013-01-01

    The social relations model (SRM) is a conceptual, methodological, and analytical approach that is widely used to examine dyadic behaviors and interpersonal perception within groups. This article introduces a general and flexible approach to estimating the parameters of the SRM that is based on Bayesian methods using Markov chain Monte Carlo…

  6. Methods for Integrating Moderation and Mediation: A General Analytical Framework Using Moderated Path Analysis

    ERIC Educational Resources Information Center

    Edwards, Jeffrey R.; Lambert, Lisa Schurer

    2007-01-01

    Studies that combine moderation and mediation are prevalent in basic and applied psychology research. Typically, these studies are framed in terms of moderated mediation or mediated moderation, both of which involve similar analytical approaches. Unfortunately, these approaches have important shortcomings that conceal the nature of the moderated…

  7. Cotinine analytical workshop report: consideration of analytical methods for determining cotinine in human body fluids as a measure of passive exposure to tobacco smoke.

    PubMed Central

    Watts, R R; Langone, J J; Knight, G J; Lewtas, J

    1990-01-01

    A two-day technical workshop was convened November 10-11, 1986, to discuss analytical approaches for determining trace amounts of cotinine in human body fluids resulting from passive exposure to environmental tobacco smoke (ETS). The workshop, jointly sponsored by the U.S. Environmental Protection Agency and Centers for Disease Control, was attended by scientists with expertise in cotinine analytical methodology and/or conduct of human monitoring studies related to ETS. The workshop format included technical presentations, separate panel discussions on chromatography and immunoassay analytical approaches, and group discussions related to the quality assurance/quality control aspects of future monitoring programs. This report presents a consensus of opinion on general issues before the workshop panel participants and also a detailed comparison of several analytical approaches being used by the various represented laboratories. The salient features of the chromatography and immunoassay analytical methods are discussed separately. PMID:2190812

  8. Trace detection of analytes using portable raman systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alam, M. Kathleen; Hotchkiss, Peter J.; Martin, Laura E.

    Apparatuses and methods for in situ detection of a trace amount of an analyte are disclosed herein. In a general embodiment, the present disclosure provides a surface-enhanced Raman spectroscopy (SERS) insert including a passageway therethrough, where the passageway has a SERS surface positioned therein. The SERS surface is configured to adsorb molecules of an analyte of interest. A concentrated sample is caused to flow over the SERS surface. The SERS insert is then provided to a portable Raman spectroscopy system, where it is analyzed for the analyte of interest.

  9. A new multi-step technique with differential transform method for analytical solution of some nonlinear variable delay differential equations.

    PubMed

    Benhammouda, Brahim; Vazquez-Leal, Hector

    2016-01-01

    This work presents an analytical solution of some nonlinear delay differential equations (DDEs) with variable delays. Such DDEs are difficult to treat numerically and cannot be solved by existing general-purpose codes. A new method of steps combined with the differential transform method (DTM) is proposed as a powerful tool to solve these DDEs. This method reduces the DDEs to ordinary differential equations that are then solved by the DTM. Furthermore, we show that the solutions can be improved by the Laplace-Padé resummation method. Two examples are presented to show the efficiency of the proposed technique. The main advantage of this technique is that it possesses a simple procedure based on a few straightforward steps and can be combined with any analytical method, other than the DTM, such as the homotopy perturbation method.
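
    As a reminder of the building block being combined with the method of steps, the basic differential transform pair is written below; the delay handling and Laplace-Padé resummation of the paper are omitted.

    ```latex
    Y(k) \;=\; \frac{1}{k!}\left[\frac{d^{k} y(t)}{dt^{k}}\right]_{t=t_{0}},
    \qquad y(t) \;=\; \sum_{k=0}^{\infty} Y(k)\,(t-t_{0})^{k},
    % so that, for example, the transform of y'(t) is (k+1)\,Y(k+1).
    ```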

  10. Distribution of Steps with Finite-Range Interactions: Analytic Approximations and Numerical Results

    NASA Astrophysics Data System (ADS)

    GonzáLez, Diego Luis; Jaramillo, Diego Felipe; TéLlez, Gabriel; Einstein, T. L.

    2013-03-01

    While most Monte Carlo simulations assume that only nearest-neighbor steps interact elastically, most analytic frameworks (especially the generalized Wigner distribution) posit that each step elastically repels all others. In addition to the elastic repulsions, we allow for possible surface-state-mediated interactions. We investigate analytically and numerically how next-nearest-neighbor (NNN) interactions and, more generally, interactions out to the q-th nearest neighbor alter the form of the terrace-width distribution and of pair correlation functions (i.e., the sum over n-th neighbor distribution functions), which we investigated recently [2]. For physically plausible interactions, we find modest changes when NNN interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.
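
    For orientation, the generalized Wigner distribution mentioned above is commonly written in the following form, where s is the terrace width normalized to unit mean, the exponent is set by the step-step repulsion strength, and the constants are fixed by normalization:

    ```latex
    P_{\varrho}(s) \;=\; a_{\varrho}\, s^{\varrho}\, \exp\!\left(-b_{\varrho}\, s^{2}\right).
    ```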

  11. Molecular detection of Borrelia burgdorferi sensu lato – An analytical comparison of real-time PCR protocols from five different Scandinavian laboratories

    PubMed Central

    Faller, Maximilian; Wilhelmsson, Peter; Kjelland, Vivian; Andreassen, Åshild; Dargis, Rimtas; Quarsten, Hanne; Dessau, Ram; Fingerle, Volker; Margos, Gabriele; Noraas, Sølvi; Ornstein, Katharina; Petersson, Ann-Cathrine; Matussek, Andreas; Lindgren, Per-Eric; Henningsson, Anna J.

    2017-01-01

    Introduction: Lyme borreliosis (LB) is the most common tick-transmitted disease in Europe. The diagnosis of LB today is based on the patient's medical history, clinical presentation and laboratory findings. The laboratory diagnostics are mainly based on antibody detection, but in certain conditions molecular detection by polymerase chain reaction (PCR) may serve as a complement. Aim: The purpose of this study was to evaluate the analytical sensitivity, analytical specificity and concordance of eight different real-time PCR methods at five laboratories in Sweden, Norway and Denmark. Method: Each participating laboratory was asked to analyse three different sets of samples (reference panels; all blinded): i) cDNA extracted and transcribed from water spiked with cultured Borrelia strains, ii) cerebrospinal fluid spiked with cultured Borrelia strains, and iii) DNA dilution series extracted from cultured Borrelia and relapsing fever strains. The results and the method descriptions of each laboratory were systematically evaluated. Results and conclusions: The analytical sensitivities and the concordance between the eight protocols were in general high. The concordance was especially high between the protocols using 16S rRNA as the target gene; however, this concordance was mainly related to cDNA as the type of template. When comparing cDNA and DNA as the type of template, the analytical sensitivity was in general higher for the protocols using DNA as template, regardless of the target gene. The analytical specificity of all eight protocols was high. However, some protocols were not able to detect Borrelia spielmanii, Borrelia lusitaniae or Borrelia japonica. PMID:28937997

  12. Ionization Suppression and Recovery in Direct Biofluid Analysis Using Paper Spray Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Vega, Carolina; Spence, Corina; Zhang, Chengsen; Bills, Brandon J.; Manicke, Nicholas E.

    2016-04-01

    Paper spray mass spectrometry is a method for the direct analysis of biofluid samples in which extraction of analytes from dried biofluid spots and electrospray ionization occur from the paper on which the dried sample is stored. We examined matrix effects in the analysis of small molecule drugs from urine, plasma, and whole blood. The general method was to spike stable isotope labeled analogs of each analyte into the spray solvent, while the analyte itself was in the dried biofluid. Intensity of the labeled analog is proportional to ionization efficiency, whereas the ratio of the analyte intensity to the labeled analog in the spray solvent is proportional to recovery. Ion suppression and recovery were found to be compound- and matrix-dependent. Highest levels of ion suppression were obtained for poor ionizers (e.g., analytes lacking basic aliphatic amine groups) in urine and approached -90%. Ion suppression was much lower or even absent for good ionizers (analytes with aliphatic amines) in dried blood spots. Recovery was generally highest in urine and lowest in blood. We also examined the effect of two experimental parameters on ion suppression and recovery: the spray solvent and the sample position (how far away from the paper tip the dried sample was spotted). Finally, the change in ion suppression and analyte elution as a function of time was examined by carrying out a paper spray analysis of dried plasma spots for 5 min by continually replenishing the spray solvent.
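
    Read literally, the design described above reduces to simple ratio calculations; the sketch below shows one plausible way to express them, with hypothetical variable names, reference measurements, and numbers (not the authors' exact formulas).

    ```python
    # Hypothetical sketch for the paper-spray design described above: the isotope-labeled
    # analog sits in the spray solvent, the analyte in the dried biofluid spot.

    def ion_suppression_pct(labeled_in_matrix, labeled_in_neat_solvent):
        """Signal change of the labeled analog relative to a matrix-free reference."""
        return 100.0 * (labeled_in_matrix - labeled_in_neat_solvent) / labeled_in_neat_solvent

    def recovery_pct(analyte_to_labeled_ratio, ratio_at_full_recovery):
        """Analyte/labeled-analog ratio normalized to a 100%-recovery reference."""
        return 100.0 * analyte_to_labeled_ratio / ratio_at_full_recovery

    print(ion_suppression_pct(1.2e5, 1.0e6))   # about -88%, i.e. strong suppression
    print(recovery_pct(0.45, 0.90))            # 50% recovery
    ```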

  13. Official Methods for the Determination of Minerals and Trace Elements in Infant Formula and Milk Products: A Review.

    PubMed

    Poitevin, Eric

    2016-01-01

    The minerals and trace elements that account for about 4% of total human body mass serve as materials and regulators in numerous biological activities involved in building body structure. Infant formula and milk products are important sources of endogenous and added minerals and trace elements and hence must comply with regulatory as well as nutritional and safety requirements. In addition, reliable analytical data are necessary to support product content and innovation, health claims, or declarations and specific safety issues. Adequate analytical platforms and methods must be implemented to demonstrate both the compliance and the safety assessment of all declared and regulated minerals and trace elements, especially trace-element contaminant surveillance. The first part of this paper presents general information on the mineral composition of infant formula and milk products and their regulatory status. In the second part, a survey describes the main techniques and related current official methods for determining minerals and trace elements in infant formula and milk products applied for by various international organizations (AOAC INTERNATIONAL, the International Organization for Standardization, the International Dairy Federation, and the European Committee for Standardization). The third part summarizes method officialization activities by the Stakeholder Panels on Infant Formula and Adult Nutritionals and the Stakeholder Panel on Strategic Food Analytical Methods. The final part covers a general discussion focusing on analytical gaps and future trends in inorganic analysis applied to infant formula and milk-based products.

  14. Transfer Function of Multi-Stage Active Filters: A Solution Based on Pascal's Triangle and a General Expression

    ERIC Educational Resources Information Center

    Levesque, Luc

    2012-01-01

    A method is proposed to simplify analytical computations of the transfer function for electrical circuit filters, which are made from repetitive identical stages. A method based on the construction of Pascal's triangle is introduced and then a general solution from two initial conditions is provided for the repetitive identical stage. The present…

  15. Study report on guidelines and test procedures for investigating stability of nonlinear cardiovascular control system models

    NASA Technical Reports Server (NTRS)

    Fitzjerrell, D. G.

    1974-01-01

    A general study of the stability of nonlinear as compared to linear control systems is presented. The analysis is general and, therefore, applies to other types of nonlinear biological control systems as well as the cardiovascular control system models. Both inherent and numerical stability are discussed for corresponding analytical and graphic methods and numerical methods.

  16. The spectral method and the central limit theorem for general Markov chains

    NASA Astrophysics Data System (ADS)

    Nagaev, S. V.

    2017-12-01

    We consider Markov chains with an arbitrary phase space and develop a modification of the spectral method that enables us to prove the central limit theorem (CLT) for non-uniformly ergodic Markov chains. The conditions imposed on the transition function are more general than those by Athreya-Ney and Nummelin. Our proof of the CLT is purely analytical.

  17. HPV Genotyping of Modified General Primer-Amplicons Is More Analytically Sensitive and Specific by Sequencing than by Hybridization

    PubMed Central

    Meisal, Roger; Rounge, Trine Ballestad; Christiansen, Irene Kraus; Eieland, Alexander Kirkeby; Worren, Merete Molton; Molden, Tor Faksvaag; Kommedal, Øyvind; Hovig, Eivind; Leegaard, Truls Michael

    2017-01-01

    Sensitive and specific genotyping of human papillomaviruses (HPVs) is important for population-based surveillance of carcinogenic HPV types and for monitoring vaccine effectiveness. Here we compare HPV genotyping by Next Generation Sequencing (NGS) to an established DNA hybridization method. In DNA isolated from urine, the overall analytical sensitivity of NGS was found to be 22% higher than that of hybridization. NGS was also found to be the most specific method and expanded the detection repertoire beyond the 37 types of the DNA hybridization assay. Furthermore, NGS provided an increased resolution by identifying genetic variants of individual HPV types. The same Modified General Primers (MGP)-amplicon was used in both methods. The NGS method is described in detail to facilitate implementation in the clinical microbiology laboratory and includes suggestions for new standards for detection and calling of types and variants with improved resolution. PMID:28045981

  18. HPV Genotyping of Modified General Primer-Amplicons Is More Analytically Sensitive and Specific by Sequencing than by Hybridization.

    PubMed

    Meisal, Roger; Rounge, Trine Ballestad; Christiansen, Irene Kraus; Eieland, Alexander Kirkeby; Worren, Merete Molton; Molden, Tor Faksvaag; Kommedal, Øyvind; Hovig, Eivind; Leegaard, Truls Michael; Ambur, Ole Herman

    2017-01-01

    Sensitive and specific genotyping of human papillomaviruses (HPVs) is important for population-based surveillance of carcinogenic HPV types and for monitoring vaccine effectiveness. Here we compare HPV genotyping by Next Generation Sequencing (NGS) to an established DNA hybridization method. In DNA isolated from urine, the overall analytical sensitivity of NGS was found to be 22% higher than that of hybridization. NGS was also found to be the most specific method and expanded the detection repertoire beyond the 37 types of the DNA hybridization assay. Furthermore, NGS provided an increased resolution by identifying genetic variants of individual HPV types. The same Modified General Primers (MGP)-amplicon was used in both methods. The NGS method is described in detail to facilitate implementation in the clinical microbiology laboratory and includes suggestions for new standards for detection and calling of types and variants with improved resolution.

  19. Analytic strategies to evaluate the association of time-varying exposures to HIV-related outcomes: Alcohol consumption as an example.

    PubMed

    Cook, Robert L; Kelso, Natalie E; Brumback, Babette A; Chen, Xinguang

    2016-01-01

    As persons with HIV are living longer, there is a growing need to investigate factors associated with chronic disease, rate of disease progression and survivorship. Many risk factors for this high-risk population change over time, such as participation in treatment, alcohol consumption and drug abuse. Longitudinal datasets are increasingly available, particularly clinical data that contain multiple observations of health exposures and outcomes over time. Several analytic options are available for assessment of longitudinal data; however, it can be challenging to choose the appropriate analytic method for specific combinations of research questions and types of data. The purpose of this review is to help researchers choose the appropriate methods to analyze longitudinal data, using alcohol consumption as an example of a time-varying exposure variable. When selecting the optimal analytic method, one must consider aspects of exposure (e.g. timing, pattern, and amount) and outcome (fixed or time-varying), while also minimizing bias. In this article, we will describe several analytic approaches for longitudinal data, including developmental trajectory analysis, generalized estimating equations, and mixed effect models. For each analytic strategy, we describe appropriate situations to use the method and provide an example that demonstrates the use of the method. Clinical data related to alcohol consumption and HIV are used to illustrate these methods.
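
    As a minimal, hedged illustration of two of the approaches named above, the Python sketch below fits a generalized estimating equation with an exchangeable working correlation and a random-intercept mixed model to synthetic data; the variable names (subject, visit, alcohol, outcome) and the data-generating model are hypothetical, not the authors' analysis.

      # Minimal sketch with synthetic longitudinal data and hypothetical variable names.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n_subj, n_visits = 200, 4
      df = pd.DataFrame({
          "subject": np.repeat(np.arange(n_subj), n_visits),
          "visit": np.tile(np.arange(n_visits), n_subj),
          "alcohol": rng.binomial(1, 0.4, n_subj * n_visits),  # time-varying exposure
      })
      # Hypothetical continuous outcome (e.g., log viral load) with a subject-level effect.
      subj_eff = np.repeat(rng.normal(0, 1, n_subj), n_visits)
      df["outcome"] = (2.0 + 0.5 * df["alcohol"] - 0.1 * df["visit"]
                       + subj_eff + rng.normal(0, 1, len(df)))

      # Population-averaged association: GEE with exchangeable working correlation.
      gee = smf.gee("outcome ~ alcohol + visit", groups="subject", data=df,
                    cov_struct=sm.cov_struct.Exchangeable(),
                    family=sm.families.Gaussian()).fit()
      print(gee.params)

      # Subject-specific association: linear mixed model with random intercepts.
      lmm = smf.mixedlm("outcome ~ alcohol + visit", df, groups=df["subject"]).fit()
      print(lmm.params)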

  20. Quantifying the measurement uncertainty of results from environmental analytical methods.

    PubMed

    Moser, J; Wegscheider, W; Sperka-Gottlieb, C

    2001-07-01

    The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.
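
    As a small worked illustration of the bottom-up estimation the Guide describes, the Python sketch below combines a reproducibility standard deviation with two smaller relative components in quadrature and expands with a coverage factor of 2; the component values are assumed for illustration only.

      # Minimal sketch, with assumed relative uncertainty components.
      import math

      def combined_uncertainty(components):
          """Root-sum-of-squares combination of relative standard uncertainties."""
          return math.sqrt(sum(u ** 2 for u in components))

      u_reproducibility = 0.080  # dominant component: reproducibility standard deviation
      u_calibration = 0.015      # relative uncertainty of calibration standards
      u_recovery = 0.020         # relative uncertainty of a recovery correction

      u_c = combined_uncertainty([u_reproducibility, u_calibration, u_recovery])
      U = 2 * u_c                # expanded uncertainty, coverage factor k = 2
      print(f"combined relative standard uncertainty: {u_c:.3f}; expanded (k=2): {U:.3f}")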

  1. Drifting solutions with elliptic symmetry for the compressible Navier-Stokes equations with density-dependent viscosity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Hongli, E-mail: kaixinguoan@163.com; Yuen, Manwai, E-mail: nevetsyuen@hotmail.com

    2014-05-15

    In this paper, we investigate analytical solutions of the compressible Navier-Stokes equations with density-dependent viscosity. By using the characteristic method, we obtain a class of drifting solutions with elliptic symmetry for the Navier-Stokes model in which the velocity components are governed by a generalized Emden dynamical system. In particular, when the viscosity variables are taken to be the same as in Yuen [M. W. Yuen, “Analytical solutions to the Navier-Stokes equations,” J. Math. Phys. 49, 113102 (2008)], our solutions constitute a generalization of those obtained by Yuen. Interestingly, numerical simulations show that the analytical solutions can be used to explain the drifting phenomena of propagating waves such as tsunamis in the ocean.

  2. Implementation of structural response sensitivity calculations in a large-scale finite-element analysis system

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Rogers, J. L., Jr.

    1982-01-01

    The implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of the system are also discussed.

  3. Time-dependent inertia analysis of vehicle mechanisms

    NASA Astrophysics Data System (ADS)

    Salmon, James Lee

    Two methods for performing transient inertia analysis of vehicle hardware systems are developed in this dissertation. The analysis techniques can be used to predict the response of vehicle mechanism systems to the accelerations associated with vehicle impacts. General analytical methods for evaluating translational or rotational system dynamics are generated and evaluated for various system characteristics. The utility of the derived techniques is demonstrated by applying the generalized methods to two vehicle systems. Time-dependent accelerations measured during a vehicle-to-vehicle impact are used as input to perform a dynamic analysis of an automobile liftgate latch and outside door handle. Generalized Lagrange equations for a non-conservative system are used to formulate a second-order nonlinear differential equation defining the response of the components to the transient input. The differential equation is solved by employing the fourth-order Runge-Kutta method. The events are then analyzed using commercially available two-dimensional rigid-body dynamic analysis software. The results of the two analytical techniques are compared to experimental data generated by high-speed film analysis of tests of the two components performed on a high-g acceleration sled at Ford Motor Company.
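
    To illustrate the solution strategy (a second-order nonlinear equation of motion driven by a measured base acceleration, integrated with fourth-order Runge-Kutta), the Python sketch below uses a hypothetical single-degree-of-freedom latch model and a synthetic half-sine crash pulse; the parameter values are assumptions, not the dissertation's data.

      # Minimal sketch: reduce theta'' = f(t, theta, theta') to first order and step with RK4.
      import numpy as np

      def base_accel(t):
          """Stand-in for a measured crash pulse: half-sine, 100 ms duration."""
          return 300.0 * np.sin(np.pi * t / 0.1) if t < 0.1 else 0.0

      def rhs(t, y, m=0.05, c=2.0, k=500.0):
          """State y = [theta, omega]; nonlinear restoring term plus base excitation."""
          theta, omega = y
          return np.array([omega,
                           (-c * omega - k * np.sin(theta) + m * base_accel(t)) / m])

      def rk4(f, y0, t0, t1, dt):
          t, y, out = t0, np.asarray(y0, float), []
          while t < t1:
              k1 = f(t, y)
              k2 = f(t + dt / 2, y + dt / 2 * k1)
              k3 = f(t + dt / 2, y + dt / 2 * k2)
              k4 = f(t + dt, y + dt * k3)
              y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
              t += dt
              out.append((t, y.copy()))
          return out

      history = rk4(rhs, [0.0, 0.0], 0.0, 0.3, 1e-4)
      print("final angle (rad):", history[-1][1][0])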

  4. On analytic design of loudspeaker arrays with uniform radiation characteristics

    PubMed

    Aarts; Janssen

    2000-01-01

    Some notes on analytically derived loudspeaker arrays with uniform radiation characteristics are presented. The array coefficients are derived via analytical means and compared with so-called maximal flat sequences known from telecommunications and information theory. It appears that the newly derived array, i.e., the quadratic phase array, has a higher efficiency than the Bessel array and a flatter response than the Barker array. The method discussed admits generalization to the design of arrays with desired nonuniform radiating characteristics.

  5. 40 CFR 63.90 - Program overview.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calibration gases or test cells; (4) Use of an analytical technology that differs from that specified by a... “proven technology” (generally accepted by the scientific community as equivalent or better) that is... enforceable test method involving “proven technology” (generally accepted by the scientific community as...

  6. 40 CFR 63.90 - Program overview.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calibration gases or test cells; (4) Use of an analytical technology that differs from that specified by a... “proven technology” (generally accepted by the scientific community as equivalent or better) that is... enforceable test method involving “proven technology” (generally accepted by the scientific community as...

  7. 40 CFR 63.90 - Program overview.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... calibration gases or test cells; (4) Use of an analytical technology that differs from that specified by a... “proven technology” (generally accepted by the scientific community as equivalent or better) that is... enforceable test method involving “proven technology” (generally accepted by the scientific community as...

  8. Functional Group Analysis.

    ERIC Educational Resources Information Center

    Smith, Walter T., Jr.; Patterson, John M.

    1980-01-01

    Discusses analytical methods selected from current research articles. Groups information by topics of general interest, including acids, aldehydes and ketones, nitro compounds, phenols, and thiols. Cites 97 references. (CS)

  9. Assessment regarding the use of the computer aided analytical models in the calculus of the general strength of a ship hull

    NASA Astrophysics Data System (ADS)

    Hreniuc, V.; Hreniuc, A.; Pescaru, A.

    2017-08-01

    Solving a general strength problem of a ship hull may be done using analytical approaches, which are useful for deducing the distribution of buoyancy forces, the distribution of weight forces along the hull, and the geometrical characteristics of the sections. These data are used to draw the free-body diagrams and to compute the stresses. General strength problems require a large amount of calculation, so it is of interest how a computer may be used to solve them. Using computer programming, an engineer may conceive software instruments based on analytical approaches. However, before developing the computer code the research topic must be thoroughly analysed; in this way a meta-level of understanding of the problem is reached. The following stage is to conceive an appropriate development strategy for the original software instruments, useful for the rapid development of computer-aided analytical models. The geometrical characteristics of the sections may be computed using a Boolean algebra that operates on ‘simple’ geometrical shapes. By ‘simple’ we mean shapes for which direct calculation formulas are available. The set of ‘simple’ shapes also includes geometrical entities bounded by curves approximated as spline functions or as polygons. To conclude, computer programming offers the necessary support to solve general strength ship hull problems using analytical methods.
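
    The Python sketch below illustrates the composition idea in the simplest possible form: section properties obtained by signed composition of rectangles, with cut-outs entering with negative area. The composite-of-rectangles idealization and the dimensions are assumptions for illustration, not the authors' software.

      # Minimal sketch: signed composition of simple shapes for section properties.
      from dataclasses import dataclass

      @dataclass
      class Rect:
          width: float   # horizontal extent
          height: float  # vertical extent
          yc: float      # centroid height above the baseline
          sign: int = 1  # +1 for solid material, -1 for a cut-out

          @property
          def area(self):
              return self.sign * self.width * self.height

          @property
          def i_own(self):
              # Second moment of area about the rectangle's own centroidal axis.
              return self.sign * self.width * self.height ** 3 / 12.0

      def section_properties(parts):
          area = sum(p.area for p in parts)
          ybar = sum(p.area * p.yc for p in parts) / area
          # Parallel-axis theorem about the composite neutral axis.
          inertia = sum(p.i_own + p.area * (p.yc - ybar) ** 2 for p in parts)
          return area, ybar, inertia

      # Hypothetical box section: outer rectangle minus inner void.
      parts = [Rect(2.0, 1.0, 0.5, +1), Rect(1.8, 0.8, 0.5, -1)]
      print(section_properties(parts))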

  10. Møller-Plesset perturbation theory gradient in the generalized hybrid orbital quantum mechanical and molecular mechanical method

    NASA Astrophysics Data System (ADS)

    Jung, Jaewoon; Sugita, Yuji; Ten-no, S.

    2010-02-01

    An analytic gradient expression is formulated and implemented for the second-order Møller-Plesset perturbation theory (MP2) based on the generalized hybrid orbital QM/MM method. The method enables us to obtain an accurate geometry at a reasonable computational cost. The performance of the method is assessed for various isomers of alanine dipeptide. We also compare the optimized structures of fumaramide-derived [2]rotaxane and cAMP-dependent protein kinase with experiment.

  11. Exact Solutions for the Integrable Sixth-Order Drinfeld-Sokolov-Satsuma-Hirota System by the Analytical Methods.

    PubMed

    Manafian Heris, Jalil; Lakestani, Mehrdad

    2014-01-01

    We establish exact solutions, including periodic wave and solitary wave solutions, for the integrable sixth-order Drinfeld-Sokolov-Satsuma-Hirota system. We treat this system using the generalized (G'/G)-expansion and generalized tanh-coth methods, which are developed for finding exact travelling wave solutions of nonlinear partial differential equations. It is shown that these methods, with the help of symbolic computation, provide a straightforward and powerful mathematical tool for solving nonlinear partial differential equations.
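
    For orientation, the standard ansatz behind the (G'/G)-expansion method can be written as follows (generic form; the balancing order and coefficients specific to the Drinfeld-Sokolov-Satsuma-Hirota system are not reproduced here):

      u(\xi) = a_0 + \sum_{i=1}^{m} a_i \left( \frac{G'(\xi)}{G(\xi)} \right)^i,
      \qquad G''(\xi) + \lambda G'(\xi) + \mu G(\xi) = 0,

    where m is fixed by balancing the highest-order derivative against the strongest nonlinear term, and the constants a_i, \lambda, \mu are determined by substituting the ansatz into the travelling-wave reduction of the PDE.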

  12. An Efficient Numerical Method for Computing Synthetic Seismograms for a Layered Half-space with Sources and Receivers at Close or Same Depths

    NASA Astrophysics Data System (ADS)

    Zhang, H.-m.; Chen, X.-f.; Chang, S.

    - It is difficult to compute synthetic seismograms for a layered half-space with sources and receivers at close to or the same depths using the generalized R/T coefficient method (Kennett, 1983; Luco and Apsel, 1983; Yao and Harkrider, 1983; Chen, 1993), because the wavenumber integration converges very slowly. A semi-analytic method for accelerating the convergence, in which part of the integration is implemented analytically, was adopted by some authors (Apsel and Luco, 1983; Hisada, 1994, 1995). In this study, based on the principle of the Repeated Averaging Method (Dahlquist and Björck, 1974; Chang, 1988), we propose an alternative, efficient, numerical method, the peak-trough averaging method (PTAM), to overcome the difficulty mentioned above. Compared with the semi-analytic method, PTAM is not only much simpler mathematically and easier to implement in practice, but also more efficient. Using numerical examples, we illustrate the validity, accuracy and efficiency of the new method.
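
    The Python sketch below illustrates only the repeated-averaging principle underlying PTAM, applied to a simple alternating series whose partial sums oscillate about the limit much like the peak and trough values of a slowly converging oscillatory wavenumber integral; it is not the seismogram code itself.

      # Minimal sketch: repeated averaging of oscillating partial sums accelerates convergence.
      import math

      def partial_sums(n):
          """Partial sums of sum_{k>=0} (-1)^k / (k+1), which converges to ln 2."""
          s, out = 0.0, []
          for k in range(n):
              s += (-1) ** k / (k + 1)
              out.append(s)
          return out

      def repeated_average(seq, levels):
          """Repeatedly average neighbouring terms to damp the oscillation."""
          for _ in range(levels):
              seq = [(a + b) / 2.0 for a, b in zip(seq, seq[1:])]
          return seq

      raw = partial_sums(12)
      acc = repeated_average(raw, 6)
      print("raw error     :", abs(raw[-1] - math.log(2)))
      print("averaged error:", abs(acc[-1] - math.log(2)))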

  13. Analytical Dynamics and Nonrigid Spacecraft Simulation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.

    1974-01-01

    Applications to the simulation of idealized spacecraft are considered, both for multiple-rigid-body models and for models consisting of combinations of rigid bodies and elastic bodies, with the elastic bodies defined either as continua, as finite-element systems, or as a collection of given modal data. Several specific examples are developed in detail by alternative methods of analytical mechanics, and the results are compared to a Newton-Euler formulation. The following methods are developed from d'Alembert's principle in vector form: (1) Lagrange's form of d'Alembert's principle for independent generalized coordinates; (2) Lagrange's form of d'Alembert's principle for simply constrained systems; (3) Kane's quasi-coordinate formulation of d'Alembert's principle; (4) Lagrange's equations for independent generalized coordinates; (5) Lagrange's equations for simply constrained systems; (6) Lagrangian quasi-coordinate equations (or the Boltzmann-Hamel equations); (7) Hamilton's equations for simply constrained systems; and (8) Hamilton's equations for independent generalized coordinates.

  14. Methods for determination of inorganic substances in water and fluvial sediments

    USGS Publications Warehouse

    Fishman, Marvin J.; Friedman, Linda C.

    1989-01-01

    Chapter Al of the laboratory manual contains methods used by the U.S. Geological Survey to analyze samples of water, suspended sediments, and bottom material for their content of inorganic constituents. Included are methods for determining the concentration of dissolved constituents in water, the total recoverable and total of constituents in water-suspended sediment samples, and the recoverable and total concentrations of constituents in samples of bottom material. The introduction to the manual includes essential definitions and a brief discussion of the use of significant figures in calculating and reporting analytical results. Quality control in the water-analysis laboratory is discussed, including the accuracy and precision of analyses, the use of standard-reference water samples, and the operation of an effective quality-assurance program. Methods for sample preparation and pretreatment are given also. A brief discussion of the principles of the analytical techniques involved and their particular application to water and sediment analysis is presented. The analytical methods of these techniques are arranged alphabetically by constituent. For each method, the general topics covered are the application, the principle of the method, the interferences, the apparatus and reagents required, a detailed description of the analytical procedure, reporting results, units and significant figures, and analytical precision data, when available. More than 126 methods are given for the determination of 70 inorganic constituents and physical properties of water, suspended sediment, and bottom material.

  15. Methods for determination of inorganic substances in water and fluvial sediments

    USGS Publications Warehouse

    Fishman, Marvin J.; Friedman, Linda C.

    1985-01-01

    Chapter Al of the laboratory manual contains methods used by the Geological Survey to analyze samples of water, suspended sediments, and bottom material for their content of inorganic constituents. Included are methods for determining the concentration of dissolved constituents in water, total recoverable and total of constituents in water-suspended sediment samples, and recoverable and total concentrations of constituents in samples of bottom material. Essential definitions are included in the introduction to the manual, along with a brief discussion of the use of significant figures in calculating and reporting analytical results. Quality control in the water-analysis laboratory is discussed, including accuracy and precision of analyses, the use of standard reference water samples, and the operation of an effective quality assurance program. Methods for sample preparation and pretreatment are given also. A brief discussion of the principles of the analytical techniques involved and their particular application to water and sediment analysis is presented. The analytical methods involving these techniques are arranged alphabetically according to constituent. For each method given, the general topics covered are application, principle of the method, interferences, apparatus and reagents required, a detailed description of the analytical procedure, reporting results, units and significant figures, and analytical precision data, when available. More than 125 methods are given for the determination of 70 different inorganic constituents and physical properties of water, suspended sediment, and bottom material.

  16. High-resolution metabolomics assessment of military personnel: Evaluating analytical strategies for chemical detection

    PubMed Central

    Liu, Ken H.; Walker, Douglas I.; Uppal, Karan; Tran, ViLinh; Rohrbeck, Patricia; Mallon, Timothy M.; Jones, Dean P.

    2016-01-01

    Objective: To maximize detection of serum metabolites with high-resolution metabolomics (HRM). Methods: Department of Defense Serum Repository (DoDSR) samples were analyzed using ultra-high resolution mass spectrometry with three complementary chromatographic phases and four ionization modes. Chemical coverage was evaluated by number of ions detected and accurate mass matches to a human metabolomics database. Results: Individual HRM platforms provided accurate mass matches for up to 58% of the KEGG metabolite database. Combining two analytical methods increased matches to 72%, and included metabolites in most major human metabolic pathways and chemical classes. Detection and feature quality varied by analytical configuration. Conclusions: Dual chromatography HRM with positive and negative electrospray ionization provides an effective generalized method for metabolic assessment of military personnel. PMID:27501105
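
    As a small, hypothetical illustration of the accurate-mass annotation step mentioned above (matching detected m/z features against a metabolite database within a ppm tolerance), the Python sketch below uses a three-entry stand-in database; the masses and tolerance are illustrative, not the study's pipeline.

      # Minimal sketch: ppm-tolerance matching of m/z features to a reference list.
      def ppm_error(observed, theoretical):
          return (observed - theoretical) / theoretical * 1e6

      def match_features(features_mz, database, tol_ppm=10.0):
          """Return (m/z, metabolite, ppm error) for matches within the tolerance."""
          hits = []
          for mz in features_mz:
              for name, theo in database.items():
                  err = ppm_error(mz, theo)
                  if abs(err) <= tol_ppm:
                      hits.append((mz, name, err))
          return hits

      # Hypothetical [M+H]+ masses for a tiny stand-in database.
      database = {"caffeine": 195.0877, "phenylalanine": 166.0863, "glucose": 181.0707}
      features = [195.0880, 166.0850, 120.0444]
      for mz, name, err in match_features(features, database):
          print(f"{mz:.4f} -> {name} ({err:+.1f} ppm)")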

  17. The general 2-D moments via integral transform method for acoustic radiation and scattering

    NASA Astrophysics Data System (ADS)

    Smith, Jerry R.; Mirotznik, Mark S.

    2004-05-01

    The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and NSWCCD ILIR Board.]

  18. A Meta-Analytic Study Concerning the Effect of Computer-Based Teaching on Academic Success in Turkey

    ERIC Educational Resources Information Center

    Batdi, Veli

    2015-01-01

    This research aims to investigate the effect of computer-based teaching (CBT) on students' academic success. The research used a meta-analytic method to reach a general conclusion by statistically calculating the results of a number of independent studies. In total, 78 studies (62 master's theses, 4 PhD theses, and 12 articles) concerning this…

  19. 40 CFR 260.21 - Petitions for equivalent testing or analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.21... method; (2) A description of the types of wastes or waste matrices for which the proposed method may be... will be incorporated by reference in § 260.11 and added to “Test Methods for Evaluating Solid Waste...

  20. 40 CFR 260.21 - Petitions for equivalent testing or analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.21... method; (2) A description of the types of wastes or waste matrices for which the proposed method may be... will be incorporated by reference in § 260.11 and added to “Test Methods for Evaluating Solid Waste...

  1. 40 CFR 260.21 - Petitions for equivalent testing or analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.21... method; (2) A description of the types of wastes or waste matrices for which the proposed method may be... will be incorporated by reference in § 260.11 and added to “Test Methods for Evaluating Solid Waste...

  2. 40 CFR 260.21 - Petitions for equivalent testing or analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.21... will be incorporated by reference in § 260.11 and added to “Test Methods for Evaluating Solid Waste... method; (2) A description of the types of wastes or waste matrices for which the proposed method may be...

  3. A LITERATURE REVIEW OF WIPE SAMPLING METHODS ...

    EPA Pesticide Factsheets

    Wipe sampling is an important technique for the estimation of contaminant deposition in buildings, homes, or outdoor surfaces as a source of possible human exposure. Numerous methods of wipe sampling exist, and each method has its own specification for the type of wipe, wetting solvent, and determinative step to be used, depending upon the contaminant of concern. The objective of this report is to concisely summarize the findings of a literature review that was conducted to identify the state-of-the-art wipe sampling techniques for a target list of compounds. This report describes the methods used to perform the literature review; a brief review of wipe sampling techniques in general; an analysis of physical and chemical properties of each target analyte; an analysis of wipe sampling techniques for the target analyte list; and a summary of the wipe sampling techniques for the target analyte list, including existing data gaps. In general, no overwhelming consensus can be drawn from the current literature on how to collect a wipe sample for the chemical warfare agents, organophosphate pesticides, and other toxic industrial chemicals of interest to this study. Different methods, media, and wetting solvents have been recommended and used by various groups and different studies. For many of the compounds of interest, no specific wipe sampling methodology has been established for their collection. Before a wipe sampling method (or methods) can be established for the co

  4. Positive lists of cosmetic ingredients: Analytical methodology for regulatory and safety controls - A review.

    PubMed

    Lores, Marta; Llompart, Maria; Alvarez-Rivera, Gerardo; Guerra, Eugenia; Vila, Marlene; Celeiro, Maria; Lamas, J Pablo; Garcia-Jares, Carmen

    2016-04-07

    Cosmetic products placed on the market, and their ingredients, must be safe under reasonable conditions of use, in accordance with the current legislation. Therefore, regulated and allowed chemical substances must meet the regulatory criteria to be used as ingredients in cosmetics and personal care products, and adequate analytical methodology is needed to evaluate the degree of compliance. This article reviews the most recent methods (2005-2015) used for the extraction and analytical determination of the ingredients included in the positive lists of the European Regulation of Cosmetic Products (EC 1223/2009), comprising colorants, preservatives and UV filters. It summarizes the analytical properties of the most relevant analytical methods along with their ability to address current regulatory issues. Cosmetic legislation is frequently updated; consequently, the analytical methodology must be constantly revised and improved to meet safety requirements. The article highlights the most important advances in analytical methodology for cosmetics control, both in relation to sample pretreatment and extraction and to the different instrumental approaches developed to solve this challenge. Cosmetics are complex samples, and most of them require a sample pretreatment before analysis. In recent years, research covering this aspect has tended toward the use of green extraction and microextraction techniques. Analytical methods have generally been based on liquid chromatography with UV detection, and on gas and liquid chromatographic techniques hyphenated with single or tandem mass spectrometry; some interesting proposals based on electrophoresis have also been reported, together with some electroanalytical approaches. Regarding the number of ingredients considered for analytical control, single-analyte methods have been proposed, although the most useful ones in real-life cosmetic analysis are multianalyte approaches. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Analytic uncertainty and sensitivity analysis of models with input correlations

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With this method, it is straightforward to identify the importance of the independence and the correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
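
    The effect of input correlations on the propagated uncertainty is easiest to see for a linear model, where Var(Y) = a^T Sigma a; the Python sketch below (toy coefficients and correlation matrix, not the paper's HIV model) compares the correlated and independent cases and checks the analytic value by Monte Carlo.

      # Minimal sketch: variance propagation for Y = a^T X with correlated inputs.
      import numpy as np

      a = np.array([1.0, 2.0, -1.5])                 # model coefficients
      sigma = np.array([0.5, 0.3, 0.4])              # input standard deviations
      rho = np.array([[1.0, 0.6, 0.0],
                      [0.6, 1.0, -0.3],
                      [0.0, -0.3, 1.0]])             # input correlation matrix
      Sigma = np.outer(sigma, sigma) * rho           # covariance matrix

      var_correlated = a @ Sigma @ a
      var_independent = np.sum((a * sigma) ** 2)     # what independence would give

      rng = np.random.default_rng(7)
      X = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)
      print(var_correlated, var_independent, np.var(X @ a))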

  6. A Graphical Approach to Teaching Amplifier Design at the Undergraduate Level

    ERIC Educational Resources Information Center

    Assaad, R. S.; Silva-Martinez, J.

    2009-01-01

    Current methods of teaching basic amplifier design at the undergraduate level need further development to match today's technological advances. The general class approach to amplifier design is analytical and heavily based on mathematical manipulations. However, the students' mathematical abilities are generally modest, creating a void in which…

  7. Development of Aeroservoelastic Analytical Models and Gust Load Alleviation Control Laws of a SensorCraft Wind-Tunnel Model Using Measured Data

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Vartio, Eric; Shimko, Anthony; Kvaternik, Raymond G.; Eure, Kenneth W.; Scott, Robert C.

    2007-01-01

    Aeroservoelastic (ASE) analytical models of a SensorCraft wind-tunnel model are generated using measured data. The data was acquired during the ASE wind-tunnel test of the HiLDA (High Lift-to-Drag Active) Wing model, tested in the NASA Langley Transonic Dynamics Tunnel (TDT) in late 2004. Two time-domain system identification techniques are applied to the development of the ASE analytical models: the impulse response (IR) method and the Generalized Predictive Control (GPC) method. Using measured control surface inputs (frequency sweeps) and associated sensor responses, the IR method is used to extract corresponding input/output impulse response pairs. These impulse responses are then transformed into state-space models for use in ASE analyses. Similarly, the GPC method transforms measured random control surface inputs and associated sensor responses into an AutoRegressive with eXogenous input (ARX) model. The ARX model is then used to develop the gust load alleviation (GLA) control law. For the IR method, comparisons of measured and simulated responses are presented to investigate the accuracy of the ASE analytical models developed. For the GPC method, comparisons of simulated open-loop and closed-loop (GLA) time histories are presented.
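
    As a minimal, generic illustration of the ARX identification step used by the GPC approach (not the HiLDA data or NASA's implementation), the Python sketch below builds a regressor matrix of lagged outputs and inputs from synthetic single-input, single-output data and solves for the ARX coefficients by least squares.

      # Minimal sketch: least-squares fit of an ARX model
      # y[t] = sum_i a_i y[t-i] + sum_j b_j u[t-j] + e[t].
      import numpy as np

      rng = np.random.default_rng(3)
      N, na, nb = 500, 2, 2
      u = rng.standard_normal(N)                     # stand-in for a measured control input
      y = np.zeros(N)
      for t in range(2, N):                          # "true" system generating the data
          y[t] = (1.5 * y[t - 1] - 0.7 * y[t - 2]
                  + 0.5 * u[t - 1] + 0.25 * u[t - 2]
                  + 0.05 * rng.standard_normal())

      rows = []
      for t in range(max(na, nb), N):
          rows.append(np.r_[[y[t - i] for i in range(1, na + 1)],
                            [u[t - j] for j in range(1, nb + 1)]])
      Phi, Y = np.array(rows), y[max(na, nb):]
      theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
      print("estimated [a1, a2, b1, b2]:", np.round(theta, 3))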

  8. Development of Aeroservoelastic Analytical Models and Gust Load Alleviation Control Laws of a SensorCraft Wind-Tunnel Model Using Measured Data

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Shimko, Anthony; Kvaternik, Raymond G.; Eure, Kenneth W.; Scott, Robert C.

    2006-01-01

    Aeroservoelastic (ASE) analytical models of a SensorCraft wind-tunnel model are generated using measured data. The data was acquired during the ASE wind-tunnel test of the HiLDA (High Lift-to-Drag Active) Wing model, tested in the NASA Langley Transonic Dynamics Tunnel (TDT) in late 2004. Two time-domain system identification techniques are applied to the development of the ASE analytical models: the impulse response (IR) method and the Generalized Predictive Control (GPC) method. Using measured control surface inputs (frequency sweeps) and associated sensor responses, the IR method is used to extract corresponding input/output impulse response pairs. These impulse responses are then transformed into state-space models for use in ASE analyses. Similarly, the GPC method transforms measured random control surface inputs and associated sensor responses into an AutoRegressive with eXogenous input (ARX) model. The ARX model is then used to develop the gust load alleviation (GLA) control law. For the IR method, comparisons of measured and simulated responses are presented to investigate the accuracy of the ASE analytical models developed. For the GPC method, comparisons of simulated open-loop and closed-loop (GLA) time histories are presented.

  9. Visual Analytics of integrated Data Systems for Space Weather Purposes

    NASA Astrophysics Data System (ADS)

    Rosa, Reinaldo; Veronese, Thalita; Giovani, Paulo

    Analysis of information from multiple data sources obtained through high-resolution instrumental measurements has become a fundamental task in all scientific areas. The development of expert methods able to treat such multi-source data systems, with both large variability and large measurement extent, is key to studying complex scientific phenomena, especially those related to systemic analysis in space and environmental sciences. In this talk, we present a time series generalization introducing the concept of the generalized numerical lattice, which represents a discrete sequence of temporal measures for a given variable. In this representation, each generalized numerical lattice carries post-analytical information about the data. We define a generalized numerical lattice as a set of three parameters representing the following data properties: dimensionality, size, and a post-analytical measure (e.g., the autocorrelation or the Hurst exponent) [1]. From this generalization, any multi-source database can be reduced to a closed set of classified time series in generalized spatiotemporal dimensions. As a case study, we show a preliminary application to space science data, highlighting the possibility of a real-time analysis expert system. In this particular application, we have selected and analyzed, using detrended fluctuation analysis (DFA), several decimetric solar bursts associated with X-class flares. The association with geomagnetic activity is also reported. The DFA method is performed in the framework of an automatic radio-burst monitoring system. Our results may characterize the evolution of the variability pattern by computing the DFA scaling exponent over a short window scanning the time series before the extreme event [2]. For the first time, the application of systematic fluctuation analysis for space weather purposes is presented. The prototype for visual analytics is implemented in the Compute Unified Device Architecture (CUDA), using Nvidia K20 graphics processing units (GPUs) to reduce the integrated analysis runtime. [1] Veronese et al. doi: 10.6062/jcis.2009.01.02.0021, 2010. [2] Veronese et al. doi:http://dx.doi.org/10.1016/j.jastp.2010.09.030, 2011.
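
    For readers who want to see the core of the DFA computation mentioned above, the Python sketch below implements order-1 DFA on synthetic white noise (expected scaling exponent near 0.5); the input is not solar radio data and the scales are illustrative.

      # Minimal sketch: order-1 detrended fluctuation analysis of a 1-D series.
      import numpy as np

      def dfa(x, scales):
          """Return log2(scales) and log2(F(scale)) for order-1 DFA of series x."""
          y = np.cumsum(x - np.mean(x))            # integrated (profile) series
          flucts = []
          for s in scales:
              n_seg = len(y) // s
              rms = []
              for i in range(n_seg):
                  seg = y[i * s:(i + 1) * s]
                  t = np.arange(s)
                  trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
                  rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
              flucts.append(np.mean(rms))
          return np.log2(scales), np.log2(flucts)

      rng = np.random.default_rng(1)
      x = rng.normal(size=4096)                    # white noise: exponent should be ~0.5
      logs, logF = dfa(x, scales=[16, 32, 64, 128, 256, 512])
      alpha = np.polyfit(logs, logF, 1)[0]
      print("DFA scaling exponent:", round(alpha, 2))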

  10. Analytical derivatives of the individual state energies in ensemble density functional theory method. I. General formalism

    DOE PAGES

    Filatov, Michael; Liu, Fang; Martínez, Todd J.

    2017-07-21

    The state-averaged (SA) spin-restricted ensemble-referenced Kohn-Sham (REKS) method and its state-interaction (SI) extension, SI-SA-REKS, enable one to describe correctly the shape of the ground- and excited-state potential energy surfaces of molecules undergoing bond breaking/bond formation reactions, including features such as conical intersections that are crucial for theoretical modeling of non-adiabatic reactions. Until recently, application of the SA-REKS and SI-SA-REKS methods to modeling the dynamics of such reactions was hindered by the lack of analytical energy derivatives. Here, the analytical derivatives of the individual SA-REKS and SI-SA-REKS energies are derived. The final analytic gradient expressions are formulated entirely in terms of traces of matrix products and are presented in a form convenient for implementation in traditional quantum chemical codes employing basis set expansions of the molecular orbitals. The implementation and benchmarking of the derived formalism will be described in a subsequent article of this series.

  11. [Basic research on digital logistic management of hospital].

    PubMed

    Cao, Hui

    2010-05-01

    This paper analyzes and explores the possibilities of digital, information-based management by the equipment department, general services department, supply room, and other material-flow departments in different hospitals, in order to optimize the procedures of information-based asset management. Various analytical methods for medical-supply business models are discussed, providing analytical data for correct decisions by hospital departments, hospital leadership, and the governing authorities.

  12. Methods of analysis by the U.S. Geological Survey National Water Quality Laboratory; determination of inorganic and organic constituents in water and fluvial sediments

    USGS Publications Warehouse

    Fishman, M. J.

    1993-01-01

    Methods to be used to analyze samples of water, suspended sediment and bottom material for their content of inorganic and organic constituents are presented. Technology continually changes, and so this laboratory manual includes new and revised methods for determining the concentration of dissolved constituents in water, whole water recoverable constituents in water-suspended sediment samples, and recoverable concentration of constituents in bottom material. For each method, the general topics covered are the application, the principle of the method, interferences, the apparatus and reagents required, a detailed description of the analytical procedure, reporting results, units and significant figures, and analytical precision data. Included in this manual are 30 methods.

  13. Analytical solutions of the planar cyclic voltammetry process for two soluble species with equal diffusivities and fast electron transfer using the method of eigenfunction expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samin, Adib; Lahti, Erik; Zhang, Jinsuo, E-mail: zhang.3558@osu.edu

    Cyclic voltammetry is a powerful tool that is used for characterizing electrochemical processes. Models of cyclic voltammetry take into account the mass transport of species and the kinetics at the electrode surface. Analytical solutions of these models are not well known due to the complexity of the boundary conditions. In this study we present closed-form analytical solutions of the planar voltammetry model for two soluble species with fast electron transfer and equal diffusivities using the eigenfunction expansion method. Our solution methodology does not incorporate Laplace transforms and yields good agreement with the numerical solution. This solution method can be extended to more general cases and may be useful for benchmarking purposes.

  14. Functional Interfaces Constructed by Controlled/Living Radical Polymerization for Analytical Chemistry.

    PubMed

    Wang, Huai-Song; Song, Min; Hang, Tai-Jun

    2016-02-10

    The high-value applications of functional polymers in analytical science generally require well-defined interfaces, including precisely synthesized molecular architectures and compositions. Controlled/living radical polymerization (CRP) has been developed as a versatile and powerful tool for the preparation of polymers with narrow molecular weight distributions and predetermined molecular weights. Among the CRP systems, atom transfer radical polymerization (ATRP) and reversible addition-fragmentation chain transfer (RAFT) polymerization are widely used to develop new materials for analytical science, such as surface-modified core-shell particles, monoliths, MIP micro- or nanospheres, fluorescent nanoparticles, and multifunctional materials. In this review, we summarize the emerging functional interfaces constructed by RAFT and ATRP for applications in analytical science. Various polymers with precisely controlled architectures, including homopolymers, block copolymers, molecularly imprinted copolymers, and grafted copolymers, have been synthesized by CRP methods for molecular separation, retention, or sensing. We expect that CRP methods will become the most popular technique for preparing functional polymers that can be broadly applied in analytical chemistry.

  15. Multiple animal studies for medical chemical defense program in soldier/patient decontamination and drug development on task 85-17: Validation of an analytical method for the detection of soman (GD), mustard (HD), tabun (GA), and VX in wastewater samples. Final report, 13 October 1985-1 January 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joiner, R.L.; Hayes, L.; Rust, W.

    1989-05-01

    The following report summarizes the development and validation of an analytical method for the analyses of soman (GD), mustard (HD), VX, and tabun (GA) in wastewater. The need for an analytical method that can detect GD, HD, VX, and GA with the necessary sensitivity (<20 parts per billion (ppb)) and selectivity is essential to Medical Research and Evaluation Facility (MREF) operations. The analytical data were generated using liquid-liquid extraction of the wastewater, with the extract being concentrated and analyzed by gas chromatography (GC) methods. The sample preparation and analysis methods were developed in support of ongoing activities within the MREF. We have documented the precision and accuracy of the analytical method through an expected working calibration range (3.0 to 60 ppb). The analytical method was statistically evaluated over a range of concentrations to establish a detection limit and quantitation limit for the method. Whenever the true concentration is 8.5 ppb or above, the probability is at least 99.9 percent that the measured concentration will be 6 ppb or above. Thus, 6 ppb could be used as a lower reliability limit for detecting concentrations in excess of 8.5 ppb. In summary, the proposed sample extraction and analysis methods are suitable for quantitative analyses to determine the presence of GD, HD, VX, and GA in wastewater samples. Our findings indicate that these chemical surety materiel (CSM) can be detected in water at or below the established U.S. Army Surgeon General's safety levels in drinking water.

  16. Graphical Method for Determining Projectile Trajectory

    ERIC Educational Resources Information Center

    Moore, J. C.; Baker, J. C.; Franzel, L.; McMahon, D.; Songer, D.

    2010-01-01

    We present a nontrigonometric graphical method for predicting the trajectory of a projectile when the angle and initial velocity are known. Students enrolled in a general education conceptual physics course typically have weak backgrounds in trigonometry, making inaccessible the standard analytical calculation of projectile range. Furthermore,…

  17. VOFTools - A software package of calculation tools for volume of fluid methods using general convex grids

    NASA Astrophysics Data System (ADS)

    López, J.; Hernández, J.; Gómez, P.; Faura, F.

    2018-02-01

    The VOFTools library includes efficient analytical and geometrical routines for (1) area/volume computation, (2) truncation operations that typically arise in VOF (volume of fluid) methods, (3) area/volume conservation enforcement (VCE) in PLIC (piecewise linear interface calculation) reconstruction, and (4) computation of the distance from a given point to the reconstructed interface. The computation of a polyhedron volume uses an efficient formula based on a quadrilateral decomposition and a 2D projection of each polyhedron face. The analytical VCE method is based on coupling an interpolation procedure to bracket the solution with an improved final calculation step based on the above volume computation formula. Although the library was originally created to help develop highly accurate advection and reconstruction schemes in the context of VOF methods, it may have more general applications. To assess the performance of the supplied routines, different tests, which are provided in FORTRAN and C, were implemented for several 2D and 3D geometries.
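
    The Python sketch below shows the flavour of such a volume computation in its simplest form: the divergence-theorem formula V = (1/6) * sum of v0 · (v1 × v2) over outward-oriented triangular boundary faces, applied to a unit tetrahedron; VOFTools' own quadrilateral-decomposition formula is not reproduced here.

      # Minimal sketch: polyhedron volume from an outward-oriented triangulated boundary.
      import numpy as np

      def volume_from_faces(vertices, faces):
          v = np.asarray(vertices, float)
          vol = 0.0
          for i, j, k in faces:                    # each face: triangle with outward normal
              vol += np.dot(v[i], np.cross(v[j], v[k]))
          return vol / 6.0

      # Tetrahedron with vertices at the origin and the three unit axes (volume 1/6).
      verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
      faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
      print(volume_from_faces(verts, faces))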

  18. A new method for constructing analytic elements for groundwater flow.

    NASA Astrophysics Data System (ADS)

    Strack, O. D.

    2007-12-01

    The analytic element method is based upon the superposition of analytic functions that are defined throughout the infinite domain and can be used to meet a variety of boundary conditions. Analytic elements have been used successfully for a number of problems, mainly dealing with the Poisson equation (see, e.g., Theory and Applications of the Analytic Element Method, Reviews of Geophysics, 41, 2/1005, 2003, by O.D.L. Strack). The majority of these analytic elements consist of functions that exhibit jumps along lines or curves. Such linear analytic elements have also been developed for other partial differential equations, e.g., the modified Helmholtz equation and the heat equation, and were constructed by integrating elementary solutions, the point sink and the point doublet, along a line. This approach is limiting for two reasons. First, it requires the existence of the elementary solutions, and, second, the integration tends to limit the range of solutions that can be obtained. We present a procedure for generating analytic elements that requires merely the existence of a harmonic function with the desired properties; such functions exist in abundance. The procedure to be presented generalizes this harmonic function in such a way that the resulting expression satisfies the applicable differential equation. The approach is applied, along with numerical examples, to the modified Helmholtz equation and the heat equation, while it is noted that the method is in no way restricted to these equations. The procedure is carried out entirely in terms of complex variables, using Wirtinger calculus.
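
    The superposition idea in the opening sentence can be illustrated with the classical elements for the Laplace/Poisson case: the Python sketch below superposes a uniform flow and two wells as analytic functions of the complex coordinate z (illustrative parameters; this is not the new construction procedure proposed in the paper).

      # Minimal sketch: superposition of classical analytic elements (uniform flow + wells).
      import numpy as np

      def well(z, zw, Q):
          """Complex potential of a well with discharge Q located at zw."""
          return Q / (2 * np.pi) * np.log(z - zw)

      def uniform_flow(z, Qx0):
          """Complex potential of uniform flow with discharge Qx0 in the x-direction."""
          return -Qx0 * z

      def omega(z):
          # Superpose elements; all parameters are illustrative only.
          return (uniform_flow(z, 1.0)
                  + well(z, 50 + 40j, 80.0)
                  + well(z, 120 + 60j, -50.0))

      z = np.array([10 + 10j, 80 + 50j, 150 + 90j])
      print("discharge potential:", omega(z).real)
      print("stream function    :", omega(z).imag)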

  19. Analytical approximation schemes for solving exact renormalization group equations in the local potential approximation

    NASA Astrophysics Data System (ADS)

    Bervillier, C.; Boisseau, B.; Giacomini, H.

    2008-02-01

    The relation between the Wilson-Polchinski and the Litim optimized ERGEs in the local potential approximation is studied with high accuracy using two different analytical approaches based on a field expansion: a recently proposed genuine analytical approximation scheme for two-point boundary value problems of ordinary differential equations, and a new one based on approximating the solution by generalized hypergeometric functions. A comparison with the numerical results obtained with the shooting method is made, and a similar accuracy is reached in each case. Both methods appear to be more efficient than the usual field expansions frequently used in current studies of ERGEs (in particular for the Wilson-Polchinski case, in the study of which they fail).

  20. Evaluation of selected methods for determining streamflow during periods of ice effect

    USGS Publications Warehouse

    Melcher, Norwood B.; Walker, J.F.

    1992-01-01

    Seventeen methods for estimating ice-affected streamflow are evaluated for potential use with the U.S. Geological Survey streamflow-gaging station network. The methods evaluated were identified by written responses from U.S. Geological Survey field offices and by a comprehensive literature search. The methods selected and techniques used for applying the methods are described in this report. The methods are evaluated by comparing estimated results with data collected at three streamflow-gaging stations in Iowa during the winter of 1987-88. Discharge measurements were obtained at 1- to 5-day intervals during the ice-affected periods at the three stations to define an accurate baseline record. Discharge records were compiled for each method based on data available, assuming a 6-week field schedule. The methods are classified into two general categories, subjective and analytical, depending on whether individual judgment is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used at streamflow-gaging stations, where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice-adjustment factor) may be appropriate for use at stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge-ratio and multiple-regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.

  1. Maximum entropy formalism for the analytic continuation of matrix-valued Green's functions

    NASA Astrophysics Data System (ADS)

    Kraberger, Gernot J.; Triebl, Robert; Zingl, Manuel; Aichhorn, Markus

    2017-10-01

    We present a generalization of the maximum entropy method to the analytic continuation of matrix-valued Green's functions. To treat off-diagonal elements correctly based on Bayesian probability theory, the entropy term has to be extended for spectral functions that are possibly negative in some frequency ranges. In that way, all matrix elements of the Green's function matrix can be analytically continued; we introduce a computationally cheap element-wise method for this purpose. However, this method cannot ensure important constraints on the mathematical properties of the resulting spectral functions, namely positive semidefiniteness and Hermiticity. To improve on this, we present a full matrix formalism, where all matrix elements are treated simultaneously. We show the capabilities of these methods using insulating and metallic dynamical mean-field theory (DMFT) Green's functions as test cases. Finally, we apply the methods to realistic material calculations for LaTiO3, where off-diagonal matrix elements in the Green's function appear due to the distorted crystal structure.

  2. Development of Matched (migratory Analytical Time Change Easy Detection) Method for Satellite-Tracked Migratory Birds

    NASA Astrophysics Data System (ADS)

    Doko, Tomoko; Chen, Wenbo; Higuchi, Hiroyoshi

    2016-06-01

    Satellite tracking technology has been used to reveal the migration patterns and flyways of migratory birds. In general, bird migration can be classified according to migration status; these statuses include the wintering period, spring migration, breeding period, and autumn migration. To determine migration status, the periods of these statuses should be individually determined, but there is no objective method to define a 'threshold date' at which an individual bird changes its status. The research objective is to develop an effective and objective method to determine threshold dates of migration status based on satellite-tracked data. The developed method was named the "MATCHED (Migratory Analytical Time Change Easy Detection) method". In order to demonstrate the method, data acquired from satellite-tracked Tundra Swans were used. The MATCHED method is composed of six steps: 1) dataset preparation, 2) time frame creation, 3) automatic identification, 4) visualization of change points, 5) interpretation, and 6) manual correction. Accuracy was tested. In general, the MATCHED method proved powerful for identifying change points between migration statuses as well as stopovers. Nevertheless, identifying "exact" threshold dates is still challenging. Limitations and applications of this method are discussed.
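
    The Python sketch below is only a toy stand-in for the kind of change-point identification MATCHED automates: it flags days on which a synthetic daily-displacement series crosses a rolling-mean threshold between "stationary" and "migrating"; the data, window, and threshold are assumptions, not the authors' algorithm.

      # Minimal sketch: threshold-date candidates from synthetic daily displacements.
      import numpy as np

      rng = np.random.default_rng(11)
      # Hypothetical daily displacement (km): wintering, spring migration, breeding.
      disp = np.r_[rng.gamma(2, 2, 60), rng.gamma(20, 10, 20), rng.gamma(2, 2, 80)]

      window, threshold_km = 5, 50.0
      rolling = np.convolve(disp, np.ones(window) / window, mode="same")
      migrating = rolling > threshold_km
      change_days = np.flatnonzero(np.diff(migrating.astype(int))) + 1
      print("candidate threshold dates (day index):", change_days)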

  3. Evaluation of a reduced centrifugation time and higher centrifugal force on various general chemistry and immunochemistry analytes in plasma and serum.

    PubMed

    Møller, Mette F; Søndergaard, Tove R; Kristensen, Helle T; Münster, Anna-Marie B

    2017-09-01

    Background: Centrifugation of blood samples is an essential preanalytical step in the clinical biochemistry laboratory. Centrifugation settings are often altered to optimize sample flow and turnaround time. Few studies have addressed the effect of altering centrifugation settings on analytical quality, and almost all studies have been done using collection tubes with gel separator. Methods: In this study, we compared a centrifugation time of 5 min at 3000 × g to a standard protocol of 10 min at 2200 × g. Nine selected general chemistry and immunochemistry analytes and interference indices were studied in lithium heparin plasma tubes and serum tubes without gel separator. Results were evaluated using mean bias, difference plots and coefficient of variation, compared with the maximum allowable bias and coefficient of variation used in laboratory routine quality control. Results: For all analytes except lactate dehydrogenase, the results were within the predefined acceptance criteria, indicating that the analytical quality was not compromised. Lactate dehydrogenase showed higher values after centrifugation for 5 min at 3000 × g; the mean bias was 6.3 ± 2.2% and the coefficient of variation was 5%. Conclusions: We found that a centrifugation protocol of 5 min at 3000 × g can be used for the general chemistry and immunochemistry analytes studied, with the possible exception of lactate dehydrogenase, which requires further assessment.

  4. Dominating Scale-Free Networks Using Generalized Probabilistic Methods

    PubMed Central

    Molnár, F.; Derzsy, N.; Czabarka, É.; Székely, L.; Szymanski, B. K.; Korniss, G.

    2014-01-01

    We study ensemble-based graph-theoretical methods aiming to approximate the size of the minimum dominating set (MDS) in scale-free networks. We analyze both analytical upper bounds of dominating sets and numerical realizations for applications. We propose two novel probabilistic dominating set selection strategies that are applicable to heterogeneous networks. One of them obtains the smallest probabilistic dominating set and also outperforms the deterministic degree-ranked method. We show that a degree-dependent probabilistic selection method becomes optimal in its deterministic limit. In addition, we also find the precise limit where selecting high-degree nodes exclusively becomes inefficient for network domination. We validate our results on several real-world networks, and provide highly accurate analytical estimates for our methods. PMID:25200937
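
    As an illustrative (not the authors') version of a degree-dependent probabilistic selection, the Python sketch below samples nodes of a Barabasi-Albert graph with probability increasing in degree, patches any uncovered neighbourhoods, and compares the result with networkx's greedy dominating set.

      # Minimal sketch: probabilistic, degree-dependent dominating-set selection.
      import networkx as nx
      import numpy as np

      def probabilistic_dominating_set(G, exponent=1.0, rng=None):
          """Sample nodes with degree-dependent probability, then enforce domination."""
          if rng is None:
              rng = np.random.default_rng(0)
          deg = dict(G.degree())
          dmax = max(deg.values())
          chosen = {v for v in G if rng.random() < (deg[v] / dmax) ** (1.0 / exponent)}
          for v in G:  # add any node whose closed neighbourhood is still uncovered
              if v not in chosen and not chosen.intersection(G[v]):
                  chosen.add(v)
          return chosen

      G = nx.barabasi_albert_graph(1000, 3, seed=42)
      print("probabilistic:", len(probabilistic_dominating_set(G)),
            "greedy:", len(nx.dominating_set(G)))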

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tukey, J.W.; Bloomfield, P.

    In its most general terms, the work carried out under the contract consists of the development of new data analytic methods and the improvement of existing methods, their implementation on computer, especially minicomputers, and the development of non-statistical, systems-level software to support these activities. The work reported or completed is reviewed. (GHT)

  6. Shape sensitivity analysis of flutter response of a laminated wing

    NASA Technical Reports Server (NTRS)

    Bergen, Fred D.; Kapania, Rakesh K.

    1988-01-01

    A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.
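
    The analytic eigenvalue-sensitivity expression referred to in the second and third methods is, for a simple eigenvalue of a general complex matrix, the standard result below (written generically; the flutter-specific aerodynamic, mass, and stiffness matrices and their derivatives are not reproduced here):

      \frac{\partial \lambda}{\partial p}
        = \frac{y^{H}\,\dfrac{\partial A}{\partial p}\, x}{y^{H} x},
      \qquad A x = \lambda x, \qquad y^{H} A = \lambda\, y^{H},

    where x and y are the right and left eigenvectors of A(p) and p is the shape parameter. In the second method the matrix derivative is replaced by a finite-difference approximation, while the third evaluates the aerodynamic contribution analytically.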

  7. 21 CFR 177.1960 - Vinyl chloride-hexene-1 copolymers.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... determined by any suitable analytical procedure of generally accepted applicability. (ii) Inherent viscosity... D1243-79, “Standard Test Method for Dilute Solution Viscosity of Vinyl Chloride Polymers,” which is...

  8. Approximate bound-state solutions of the Dirac equation for the generalized yukawa potential plus the generalized tensor interaction

    NASA Astrophysics Data System (ADS)

    Ikot, Akpan N.; Maghsoodi, Elham; Hassanabadi, Hassan; Obu, Joseph A.

    2014-05-01

    In this paper, we obtain the approximate analytical bound-state solutions of the Dirac particle with the generalized Yukawa potential within the framework of spin and pseudospin symmetries for the arbitrary κ state with a generalized tensor interaction. The generalized parametric Nikiforov-Uvarov method is used to obtain the energy eigenvalues and the corresponding wave functions in closed form. We also report some numerical results and present figures to show the effect of the tensor interaction.

  9. Synthesized airfoil data method for prediction of dynamic stall and unsteady airloads

    NASA Technical Reports Server (NTRS)

    Gangwani, S. T.

    1983-01-01

    A detailed analysis of dynamic stall experiments has led to a set of relatively compact analytical expressions, called synthesized unsteady airfoil data, which accurately describe in the time-domain the unsteady aerodynamic characteristics of stalled airfoils. An analytical research program was conducted to expand and improve this synthesized unsteady airfoil data method using additional available sets of unsteady airfoil data. The primary objectives were to reduce these data to synthesized form for use in rotor airload prediction analyses and to generalize the results. Unsteady drag data were synthesized which provided the basis for successful expansion of the formulation to include computation of the unsteady pressure drag of airfoils and rotor blades. Also, an improved prediction model for airfoil flow reattachment was incorporated in the method. Application of this improved unsteady aerodynamics model has resulted in an improved correlation between analytic predictions and measured full scale helicopter blade loads and stress data.

  10. Role of chromatography in the development of Standard Reference Materials for organic analysis.

    PubMed

    Wise, Stephen A; Phinney, Karen W; Sander, Lane C; Schantz, Michele M

    2012-10-26

    The certification of chemical constituents in natural-matrix Standard Reference Materials (SRMs) at the National Institute of Standards and Technology (NIST) can require the use of two or more independent analytical methods. The independence among the methods is generally achieved by taking advantage of differences in extraction, separation, and detection selectivity. This review describes the development of the independent analytical methods approach at NIST, and its implementation in the measurement of organic constituents such as contaminants in environmental materials, nutrients and marker compounds in food and dietary supplement matrices, and health diagnostic and nutritional assessment markers in human serum. The focus of this review is the important and critical role that separation science techniques play in achieving the necessary independence of the analytical steps in the measurement of trace-level organic constituents in natural matrix SRMs. Published by Elsevier B.V.

  11. Generalized Mantel-Haenszel Methods for Differential Item Functioning Detection

    ERIC Educational Resources Information Center

    Fidalgo, Angel M.; Madeira, Jaqueline M.

    2008-01-01

    Mantel-Haenszel methods comprise a highly flexible methodology for assessing the degree of association between two categorical variables, whether they are nominal or ordinal, while controlling for other variables. The versatility of Mantel-Haenszel analytical approaches has made them very popular in the assessment of the differential functioning…

  12. Proceedings of the 6. international conference on stability and handling of liquid fuels. Volume 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giles, H.N.

    Volume 2 of these proceedings contains 42 papers arranged under the following topical sections: Fuel blending and compatibility; Middle distillates; Microbiology; Alternative fuels; General topics (analytical methods, tank remediation, fuel additives, storage stability); and Poster presentations (analysis methods, oxidation kinetics, health problems).

  13. A generalized theory for the design of contraction cones and other low speed ducts

    NASA Technical Reports Server (NTRS)

    Barger, R. L.; Bowen, J. T.

    1972-01-01

    A generalization of the Tsien method of contraction cone design is described. The design velocity distribution is expressed in such a form that the required high order derivatives can be obtained by recursion rather than by numerical or analytic differentiation. The method is applicable to the design of diffusers and converging-diverging ducts as well as contraction cones. The computer program is described and a FORTRAN listing of the program is provided.

  14. On the Gibbs phenomenon 5: Recovering exponential accuracy from collocation point values of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1994-01-01

    The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.

  15. Development of an integrated BEM for hot fluid-structure interaction

    NASA Technical Reports Server (NTRS)

    Banerjee, P. K.; Dargush, G. F.

    1989-01-01

    The Boundary Element Method (BEM) is chosen as a basic analysis tool principally because the definition of quantities like fluxes, temperature, displacements, and velocities is very precise on a boundary base discretization scheme. One fundamental difficulty is, of course, that the entire analysis requires a very considerable amount of analytical work which is not present in other numerical methods. During the last 18 months all of this analytical work was completed and a two-dimensional, general purpose code was written. Some of the early results are described. It is anticipated that within the next two to three months almost all two-dimensional idealizations will be examined. It should be noted that the analytical work for the three-dimensional case has also been done and numerical implementation will begin next year.

  16. High-Resolution Metabolomics Assessment of Military Personnel: Evaluating Analytical Strategies for Chemical Detection.

    PubMed

    Liu, Ken H; Walker, Douglas I; Uppal, Karan; Tran, ViLinh; Rohrbeck, Patricia; Mallon, Timothy M; Jones, Dean P

    2016-08-01

    The aim of this study was to maximize detection of serum metabolites with high-resolution metabolomics (HRM). Department of Defense Serum Repository (DoDSR) samples were analyzed using ultrahigh resolution mass spectrometry with three complementary chromatographic phases and four ionization modes. Chemical coverage was evaluated by number of ions detected and accurate mass matches to a human metabolomics database. Individual HRM platforms provided accurate mass matches for up to 58% of the KEGG metabolite database. Combining two analytical methods increased matches to 72% and included metabolites in most major human metabolic pathways and chemical classes. Detection and feature quality varied by analytical configuration. Dual chromatography HRM with positive and negative electrospray ionization provides an effective generalized method for metabolic assessment of military personnel.

  17. Explanation-based generalization of partially ordered plans

    NASA Technical Reports Server (NTRS)

    Kambhampati, Subbarao; Kedar, Smadar

    1991-01-01

    Most previous work in analytic generalization of plans dealt with totally ordered plans. These methods cannot be directly applied to generalizing partially ordered plans, since they do not capture all interactions among plan operators for all total orders of such plans. We introduce a new method for generalizing partially ordered plans. This method is based on providing explanation-based generalization (EBG) with explanations which systematically capture the interactions among plan operators for all the total orders of a partially-ordered plan. The explanations are based on the Modal Truth Criterion which states the necessary and sufficient conditions for ensuring the truth of a proposition at any point in a plan, for a class of partially ordered plans. The generalizations obtained by this method guarantee successful and interaction-free execution of any total order of the generalized plan. In addition, the systematic derivation of the generalization algorithms from the Modal Truth Criterion obviates the need for carrying out a separate formal proof of correctness of the EBG algorithms.

  18. General design method for three-dimensional potential flow fields. 1: Theory

    NASA Technical Reports Server (NTRS)

    Stanitz, J. D.

    1980-01-01

    A general design method was developed for steady, three dimensional, potential, incompressible or subsonic-compressible flow. In this design method, the flow field, including the shape of its boundary, was determined for arbitrarily specified, continuous distributions of velocity as a function of arc length along the boundary streamlines. The method applied to the design of both internal and external flow fields, including, in both cases, fields with planar symmetry. The analytic problems associated with stagnation points, closure of bodies in external flow fields, and prediction of turning angles in three dimensional ducts were reviewed.

  19. Analysis of polymeric phenolics in red wines using different techniques combined with gel permeation chromatography fractionation.

    PubMed

    Guadalupe, Zenaida; Soldevilla, Alberto; Sáenz-Navajas, María-Pilar; Ayestarán, Belén

    2006-04-21

    A multiple-step analytical method was developed to improve the analysis of polymeric phenolics in red wines. With a common initial step based on the fractionation of wine phenolics by gel permeation chromatography (GPC), different analytical techniques were used: high-performance liquid chromatography-diode array detection (HPLC-DAD), HPLC-mass spectrometry (MS), capillary zone electrophoresis (CZE) and spectrophotometry. This method proved to be valid for analyzing different families of phenolic compounds, such as monomeric phenolics and their derivatives, polymeric pigments and proanthocyanidins. The analytical characteristics of fractionation by GPC were studied and the method was fully validated, yielding satisfactory statistical results. GPC fractionation substantially improved the analysis of polymeric pigments by CZE, in terms of response, repeatability and reproducibility. It also represented an improvement in the traditional vanillin assay used for proanthocyanidin (PA) quantification. Astringent proanthocyanidins were also analyzed using a simple combined method that allowed these compounds, for which only general indexes were available, to be quantified.

  20. A robust and versatile signal-on fluorescence sensing strategy based on SYBR Green I dye and graphene oxide

    PubMed Central

    Qiu, Huazhang; Wu, Namei; Zheng, Yanjie; Chen, Min; Weng, Shaohuang; Chen, Yuanzhong; Lin, Xinhua

    2015-01-01

    A robust and versatile signal-on fluorescence sensing strategy was developed to provide label-free detection of various target analytes. The strategy used SYBR Green I dye and graphene oxide as signal reporter and signal-to-background ratio enhancer, respectively. Multidrug resistance protein 1 (MDR1) gene and mercury ion (Hg2+) were selected as target analytes to investigate the generality of the method. The linear relationship and specificity of the detections showed that the sensitive and selective analyses of target analytes could be achieved by the proposed strategy with low detection limits of 0.5 and 2.2 nM for MDR1 gene and Hg2+, respectively. Moreover, the strategy was used to detect real samples. Analytical results of MDR1 gene in the serum indicated that the developed method is a promising alternative approach for real applications in complex systems. Furthermore, the recovery of the proposed method for Hg2+ detection was acceptable. Thus, the developed label-free signal-on fluorescence sensing strategy exhibited excellent universality, sensitivity, and handling convenience. PMID:25565810

  1. Geochemical and analytical implications of extensive sulfur retention in ash from Indonesian peats

    USGS Publications Warehouse

    Kane, Jean S.; Neuzil, Sandra G.

    1993-01-01

    Sulfur is an analyte of considerable importance to the complete major element analysis of ash from low-sulfur, low-ash Indonesian peats. Most analytical schemes for major element peat- and coal-ash analyses, including the inductively coupled plasma atomic emission spectrometry method used in this work, do not permit measurement of sulfur in the ash. As a result, oxide totals cannot be used as a check on accuracy of analysis. Alternative quality control checks verify the accuracy of the cation analyses. Cation and sulfur correlations with percent ash yield suggest that silicon and titanium, and to a lesser extent, aluminum, generally originate as minerals, whereas magnesium and sulfur generally originate from organic matter. Cation correlations with oxide totals indicate that, for these Indonesian peats, magnesium dominates sulfur fixation during ashing because it is considerably more abundant in the ash than calcium, the next most important cation in sulfur fixation.

  2. SU-E-T-569: Neutron Shielding Calculation Using Analytical and Multi-Monte Carlo Method for Proton Therapy Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, S; Shin, E H; Kim, J

    2015-06-15

    Purpose: To evaluate the shielding wall design that protects patients, staff and members of the general public from secondary neutrons, using a simple analytic solution and the multi-Monte Carlo codes MCNPX, ANISN and FLUKA. Methods: Analytical and multi-Monte Carlo calculations were performed for the proton facility (Sumitomo Heavy Industries, Ltd.) at Samsung Medical Center in Korea. The NCRP-144 analytical evaluation methods, which produce conservative estimates of the dose equivalent values for the shielding, were used for the analytical evaluations. The radiation transport was then simulated with the multi-Monte Carlo codes. The neutron dose at each evaluation point was obtained as the product of the simulated value and the neutron dose coefficient introduced in ICRP-74. Results: The evaluation points at the accelerator control room and the control room entrance are mainly influenced by the point of proton beam loss. The neutron dose equivalent at the accelerator control room evaluation point is 0.651, 1.530, 0.912 and 0.943 mSv/yr, and at the entrance of the cyclotron room 0.465, 0.790, 0.522 and 0.453 mSv/yr, as calculated by the NCRP-144 formalism, ANISN, FLUKA and MCNPX, respectively. Most results from MCNPX and FLUKA, which use the complicated geometry, were smaller than the ANISN results. Conclusion: The neutron shielding for a proton therapy facility has been evaluated by the analytic model and multi-Monte Carlo methods. We confirmed that the shielding arrangement adequately protects the areas readily accessible to people when the proton facility is operated.
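
    The folding of the simulated neutron fluence with the ICRP-74 conversion coefficients mentioned above can be written generically as (a standard dosimetry relation; the facility-specific spectra are not reproduced here):

      H = \int \phi(E)\, h_{\Phi}(E)\, \mathrm{d}E
        \;\approx\; \sum_{i} \phi(E_i)\, h_{\Phi}(E_i)\, \Delta E_i ,

    where φ(E) is the neutron fluence per unit energy scored at the evaluation point and h_Φ(E) is the fluence-to-dose-equivalent conversion coefficient.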

  3. Properties of water as a novel stationary phase in capillary gas chromatography.

    PubMed

    Gallant, Jonathan A; Thurbide, Kevin B

    2014-09-12

    A novel method of separation that uses water as a stationary phase in capillary gas chromatography (GC) is presented. By applying a water phase to the interior walls of a stainless steel capillary, good separations were obtained for a large variety of analytes in this format. It was found that carrier gas humidification and backpressure were key factors in promoting stable operation over time at various temperatures. For example, with these measures in place, the retention time of an acetone test analyte decreased by only 44 s after 100 min of operation at a column temperature of 100°C. In terms of efficiency, under optimum conditions the method produced about 20,000 plates for an acetone test analyte on a 250 μm i.d. × 30 m column. Overall, retention on the stationary phase generally increased with analyte water solubility and polarity, but correlated relatively little with analyte volatility. Conversely, non-polar analytes were essentially unretained in the system. These features were applied to the direct analysis of different polar analytes in both aqueous and organic samples. Results suggest that this approach could provide an interesting alternative tool in capillary GC separations. Copyright © 2014 Elsevier B.V. All rights reserved.
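
    Plate counts of the kind quoted above are conventionally obtained from the retention time and peak width; one common half-height convention (stated here as the generic chromatographic formula, not necessarily the authors' exact procedure) is

      N = 5.54\left(\frac{t_R}{w_{1/2}}\right)^{2},

    with t_R the retention time and w_{1/2} the peak width at half height.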

  4. An Analysis of Prospective Chemistry Teachers' Cognitive Structures through Flow Map Method: The Subject of Oxidation and Reduction

    ERIC Educational Resources Information Center

    Temel, Senar

    2016-01-01

    This study aims to analyse prospective chemistry teachers' cognitive structures related to the subject of oxidation and reduction through a flow map method. Purposeful sampling method was employed in this study, and 8 prospective chemistry teachers from a group of students who had taken general chemistry and analytical chemistry courses were…

  5. Artificial neural network and classical least-squares methods for neurotransmitter mixture analysis.

    PubMed

    Schulze, H G; Greek, L S; Gorzalka, B B; Bree, A V; Blades, M W; Turner, R F

    1995-02-01

    Identification of individual components in biological mixtures can be a difficult problem regardless of the analytical method employed. In this work, Raman spectroscopy was chosen as a prototype analytical method due to its inherent versatility and applicability to aqueous media, making it useful for the study of biological samples. Artificial neural networks (ANNs) and the classical least-squares (CLS) method were used to identify and quantify the Raman spectra of the small-molecule neurotransmitters and mixtures of such molecules. The transfer functions used by a network, as well as the architecture of a network, played an important role in the ability of the network to identify the Raman spectra of individual neurotransmitters and the Raman spectra of neurotransmitter mixtures. Specifically, networks using sigmoid and hyperbolic tangent transfer functions generalized better from the mixtures in the training data set to those in the testing data sets than networks using sine functions. Networks with connections that permit the local processing of inputs generally performed better than other networks on all the testing data sets, and better than the CLS method of curve fitting on novel spectra of some neurotransmitters. The CLS method was found to perform well on noisy, shifted, and difference spectra.
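
    The classical least-squares comparison above amounts to fitting a linear mixture model, measured spectrum ≈ S·c, where the columns of S are the pure-component spectra and c the mixture coefficients. A minimal NumPy sketch is shown below; the Gaussian "spectra" are synthetic placeholders, not Raman data.

      # Classical least-squares (CLS) sketch: estimate mixture coefficients c from
      # a measured spectrum m = S @ c + noise, given pure-component spectra S.
      import numpy as np

      wavenumbers = np.linspace(400, 1800, 700)

      def band(center, width=15.0):
          # Synthetic Gaussian band standing in for a pure-component spectrum.
          return np.exp(-0.5 * ((wavenumbers - center) / width) ** 2)

      S = np.column_stack([band(650), band(1004), band(1340)])
      true_c = np.array([0.2, 0.5, 0.3])
      rng = np.random.default_rng(0)
      measured = S @ true_c + 0.01 * rng.standard_normal(S.shape[0])

      c_hat, *_ = np.linalg.lstsq(S, measured, rcond=None)
      print(np.round(c_hat, 3))   # recovered coefficients, close to true_c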

  6. Comparison of adjoint and analytical Bayesian inversion methods for constraining Asian sources of carbon monoxide using satellite (MOPITT) measurements of CO columns

    NASA Astrophysics Data System (ADS)

    Kopacz, Monika; Jacob, Daniel J.; Henze, Daven K.; Heald, Colette L.; Streets, David G.; Zhang, Qiang

    2009-02-01

    We apply the adjoint of an atmospheric chemical transport model (GEOS-Chem CTM) to constrain Asian sources of carbon monoxide (CO) with 2° × 2.5° spatial resolution using Measurement of Pollution in the Troposphere (MOPITT) satellite observations of CO columns in February-April 2001. Results are compared to the more common analytical method for solving the same Bayesian inverse problem and applied to the same data set. The analytical method is more exact but because of computational limitations it can only constrain emissions over coarse regions. We find that the correction factors to the a priori CO emission inventory from the adjoint inversion are generally consistent with those of the analytical inversion when averaged over the large regions of the latter. The adjoint solution reveals fine-scale variability (cities, political boundaries) that the analytical inversion cannot resolve, for example, in the Indian subcontinent or between Korea and Japan, and some of that variability is of opposite sign which points to large aggregation errors in the analytical solution. Upward correction factors to Chinese emissions from the prior inventory are largest in central and eastern China, consistent with a recent bottom-up revision of that inventory, although the revised inventory also sees the need for upward corrections in southern China where the adjoint and analytical inversions call for downward correction. Correction factors for biomass burning emissions derived from the adjoint and analytical inversions are consistent with a recent bottom-up inventory on the basis of MODIS satellite fire data.
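
    For reference, the analytical Bayesian solution being compared with the adjoint approach has, in the linear-Gaussian case, the standard closed form below (generic notation, not the paper's symbols):

      \hat{x} = x_a + \left(K^{T} S_{\epsilon}^{-1} K + S_a^{-1}\right)^{-1}
                K^{T} S_{\epsilon}^{-1}\,(y - K x_a),

    where x_a is the a priori emission vector with error covariance S_a, y the observed CO columns with error covariance S_ε, and K the Jacobian of the chemical transport model. The adjoint method minimizes the same Bayesian cost function iteratively, which is what allows the much finer state-vector resolution described above.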

  7. Numerical realization of the variational method for generating self-trapped beams

    NASA Astrophysics Data System (ADS)

    Duque, Erick I.; Lopez-Aguayo, Servando; Malomed, Boris A.

    2018-03-01

    We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.

  8. An algorithm for full parametric solution of problems on the statics of orthotropic plates by the method of boundary states with perturbations

    NASA Astrophysics Data System (ADS)

    Penkov, V. B.; Ivanychev, D. A.; Novikova, O. S.; Levina, L. V.

    2018-03-01

    The article substantiates the possibility of building full parametric analytical solutions of mathematical physics problems in arbitrary regions by means of computer systems. The suggested effective means for such solutions is the method of boundary states with perturbations, which aptly incorporates all parameters of an orthotropic medium in a general solution. We performed check calculations of elastic fields of an anisotropic rectangular region (test and calculation problems) for a generalized plane stress state.

  9. Tests of Measurement Invariance without Subgroups: A Generalization of Classical Methods

    ERIC Educational Resources Information Center

    Merkle, Edgar C.; Zeileis, Achim

    2013-01-01

    The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…

  10. Construction of RFIF using VVSFs with application

    NASA Astrophysics Data System (ADS)

    Katiyar, Kuldip; Prasad, Bhagwati

    2017-10-01

    A method of variable vertical scaling factors (VVSFs) is proposed to define the recurrent fractal interpolation function (RFIF) for fitting data sets. A generalization of one of the recent methods, using an analytic approach, is presented for finding variable vertical scaling factors. An application to the reconstruction of an EEG signal is also given.

  11. Fair and Square Computation of Inverse "Z"-Transforms of Rational Functions

    ERIC Educational Resources Information Center

    Moreira, M. V.; Basilio, J. C.

    2012-01-01

    All methods presented in textbooks for computing inverse "Z"-transforms of rational functions have some limitation: 1) the direct division method does not, in general, provide enough information to derive an analytical expression for the time-domain sequence "x"("k") whose "Z"-transform is "X"("z"); 2) computation using the inversion integral…

  12. Impact of Advanced Propeller Technology on Aircraft/Mission Characteristics of Several General Aviation Aircraft

    NASA Technical Reports Server (NTRS)

    Keiter, I. D.

    1982-01-01

    Studies of several General Aviation aircraft indicated that the application of advanced technologies to General Aviation propellers can reduce fuel consumption in future aircraft by a significant amount. Propeller blade weight reductions achieved through the use of composites, propeller efficiency and noise improvements achieved through the use of advanced concepts and improved propeller analytical design methods result in aircraft with lower operating cost, acquisition cost and gross weight.

  13. Unified semiclassical theory for the two-state system: an analytical solution for general nonadiabatic tunneling.

    PubMed

    Zhu, Chaoyuan; Lin, Sheng Hsien

    2006-07-28

    A unified semiclassical solution for general nonadiabatic tunneling between two adiabatic potential energy surfaces is established by employing the unified semiclassical solution for pure nonadiabatic transition [C. Zhu, J. Chem. Phys. 105, 4159 (1996)] together with a certain symmetry transformation. This symmetry comes from a detailed analysis of the reduced scattering matrix for Landau-Zener-type crossing as a special case of nonadiabatic transition and nonadiabatic tunneling. The traditional classification into crossing and noncrossing types of nonadiabatic transition can be quantitatively defined by the rotation angle of the adiabatic-to-diabatic transformation, and this rotation angle enters the analytical solution for general nonadiabatic tunneling. Two-state exponential potential models are employed for numerical tests, and the calculations from the present general nonadiabatic tunneling formula are demonstrated to be in very good agreement with the results from exact quantum mechanical calculations. The present general nonadiabatic tunneling formula can be incorporated with various mixed quantum-classical methods for modeling electronically nonadiabatic processes in photochemistry.

  14. Determination of transport wind speed in the gaussian plume diffusion equation for low-lying point sources

    NASA Astrophysics Data System (ADS)

    Wang, I. T.

    A general method for determining the effective transport wind speed, ū, in the Gaussian plume equation is discussed. Physical arguments are given for using the generalized ū instead of the often adopted release-level wind speed with the plume diffusion equation. Simple analytical expressions for ū applicable to low-level point releases and a wide range of atmospheric conditions are developed. A non-linear plume kinematic equation is derived using these expressions. Crosswind-integrated SF6 concentration data from the 1983 PNL tracer experiment are used to evaluate the proposed analytical procedures along with the usual approach of using the release-level wind speed. Results of the evaluation are briefly discussed.
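
    The role of ū can be seen in the crosswind-integrated form of the Gaussian plume equation (standard textbook form with ground reflection, for a source of strength Q at effective release height h):

      \overline{C}^{\,y}(x, z) = \frac{Q}{\sqrt{2\pi}\,\sigma_z(x)\,\overline{u}}
        \left\{ \exp\!\left[-\frac{(z-h)^2}{2\sigma_z^2}\right]
              + \exp\!\left[-\frac{(z+h)^2}{2\sigma_z^2}\right] \right\},

    so that any bias in the choice of ū maps directly into a proportional bias in the predicted crosswind-integrated concentration.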

  15. Generalized analytic solutions and response characteristics of magnetotelluric fields on anisotropic infinite faults

    NASA Astrophysics Data System (ADS)

    Bing, Xue; Yicai, Ji

    2018-06-01

    In order to understand directly and analyze accurately the detected magnetotelluric (MT) data on anisotropic infinite faults, two-dimensional partial differential equations of MT fields are used to establish a model of anisotropic infinite faults using the Fourier transform method. A multi-fault model is developed to expand the one-fault model. The transverse electric mode and transverse magnetic mode analytic solutions are derived using two-infinite-fault models. The infinite integral terms of the quasi-analytic solutions are discussed. The dual-fault model is computed using the finite element method to verify the correctness of the solutions. The MT responses of isotropic and anisotropic media are calculated to analyze the response functions by different anisotropic conductivity structures. The thickness and conductivity of the media, influencing MT responses, are discussed. The analytic principles are also given. The analysis results are significant to how MT responses are perceived and to the data interpretation of the complex anisotropic infinite faults.

  16. Short-time quantum dynamics of sharp boundaries potentials

    NASA Astrophysics Data System (ADS)

    Granot, Er'el; Marchewka, Avi

    2015-02-01

    Despite the high prevalence of singular potentials in general, and rectangular potentials in particular, in applied scattering models, little is known to date about their short-time effects. The reason is that singular potentials cause a mixture of complicated local as well as non-local effects. The object of this work is to derive a generic method to calculate analytically the short-time impact of any singular potential. In this paper it is shown that the scattering of a smooth wavefunction on a singular potential is totally equivalent, in the short-time regime, to the free propagation of a singular wavefunction. The latter problem, however, was fully addressed analytically in Ref. [7]. Therefore, this equivalence can be utilized to solve analytically the short-time dynamics of any smooth wavefunction in the presence of a singular potential. In particular, with this method the short-time dynamics of any problem in which a sharp-boundary potential (e.g., a rectangular barrier) is turned on instantaneously can easily be solved analytically.

  17. Rational quality assessment procedure for less-investigated herbal medicines: Case of a Congolese antimalarial drug with an analytical report.

    PubMed

    Tshitenge, Dieudonné Tshitenge; Ioset, Karine Ndjoko; Lami, José Nzunzu; Ndelo-di-Phanzu, Josaphat; Mufusama, Jean-Pierre Koy Sita; Bringmann, Gerhard

    2016-04-01

    Herbal medicines are the most globally used type of medical drugs. Their high cultural acceptability is due to the experienced safety and efficiency over centuries of use. Many of them are still phytochemically less-investigated, and are used without standardization or quality control. Choosing SIROP KILMA, an authorized Congolese antimalarial phytomedicine, as a model case, our study describes an interdisciplinary approach for a rational quality assessment of herbal drugs in general. It combines an authentication step of the herbal remedy prior to any fingerprinting, the isolation of the major constituents, the development and validation of an HPLC-DAD analytical method with internal markers, and the application of the method to several batches of the herbal medicine (here KILMA) thus permitting the establishment of a quantitative fingerprint. From the constitutive plants of KILMA, acteoside, isoacteoside, stachannin A, and pectolinarigenin-7-O-glucoside were isolated, and acteoside was used as the prime marker for the validation of an analytical method. This study contributes to the efforts of the WHO for the establishment of standards enabling the analytical evaluation of herbal materials. Moreover, the paper describes the first phytochemical and analytical report on a marketed Congolese phytomedicine. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Newer developments on self-modeling curve resolution implementing equality and unimodality constraints.

    PubMed

    Beyramysoltan, Samira; Abdollahi, Hamid; Rajkó, Róbert

    2014-05-27

    Analytical self-modeling curve resolution (SMCR) methods resolve data sets to a range of feasible solutions using only non-negativity constraints. The Lawton-Sylvestre method was the first direct method to analyze a two-component system. It was generalized as the Borgen plot for determining the feasible regions in three-component systems. A geometrical view seems to be required when considering curve resolution methods, because the complicated, purely algebraic treatment stalled the general study of Borgen's work for 20 years. Rajkó and István revised and elucidated the principles of the existing theory of SMCR methods and subsequently introduced computational geometry tools for developing an algorithm to draw Borgen plots for three-component systems. These developments are theoretical inventions, and the formulations cannot always be given in closed form or in a regularized formalism, especially for geometric descriptions; this is why several algorithms had to be developed and provided even for the theoretical deductions and determinations. In this study, analytical SMCR methods are revised and described using simple concepts. The details of a drawing algorithm for a developmental type of Borgen plot are given. Additionally, for the first time in the literature, equality and unimodality constraints are successfully implemented in the Lawton-Sylvestre method. To this end, a new state-of-the-art procedure is proposed to impose an equality constraint in Borgen plots. Two- and three-component HPLC-DAD data sets were simulated and analyzed by the new analytical curve resolution methods with and without additional constraints. Detailed descriptions and explanations are given based on the obtained abstract spaces. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Survey of NASA research on crash dynamics

    NASA Technical Reports Server (NTRS)

    Thomson, R. G.; Carden, H. D.; Hayduk, R. J.

    1984-01-01

    Ten years of structural crash dynamics research activities conducted on general aviation aircraft by the National Aeronautics and Space Administration (NASA) are described. Thirty-two full-scale crash tests were performed at Langley Research Center, and pertinent data on airframe and seat behavior were obtained. Concurrent with the experimental program, analytical methods were developed to help predict structural behavior during impact. The effects of flight parameters at impact on cabin deceleration pulses at the seat/occupant interface, experimental and analytical correlation of data on load-limiting subfloor and seat configurations, airplane section test results for computer modeling validation, and data from emergency-locator-transmitter (ELT) investigations to determine probable cause of false alarms and nonactivations are assessed. Computer programs which provide designers with analytical methods for predicting accelerations, velocities, and displacements of collapsing structures are also discussed.

  20. Analytical approaches to optimizing system "Semiconductor converter-electric drive complex"

    NASA Astrophysics Data System (ADS)

    Kormilicin, N. V.; Zhuravlev, A. M.; Khayatov, E. S.

    2018-03-01

    In the electric drives of the machine-building industry, the problem of optimizing the drive in terms of mass and size indicators is acute. The article offers analytical methods that ensure the minimization of the mass of a multiphase semiconductor converter. In multiphase electric drives, the phase-current waveform that makes the best possible use of the active materials of the "semiconductor converter-electric drive complex" differs from the sinusoidal form. It is shown that, under certain restrictions on the phase-current waveform, it is possible to obtain an analytical solution. In particular, if the phase current is assumed to be rectangular, the optimal shape of the control actions depends on the width of the interpolar gap. In the general case, the proposed algorithm can be used to solve the problem under consideration by numerical methods.

  1. Analytical and numerical analyses for a penny-shaped crack embedded in an infinite transversely isotropic multi-ferroic composite medium: semi-permeable electro-magnetic boundary condition

    NASA Astrophysics Data System (ADS)

    Zheng, R.-F.; Wu, T.-H.; Li, X.-Y.; Chen, W.-Q.

    2018-06-01

    The problem of a penny-shaped crack embedded in an infinite space of transversely isotropic multi-ferroic composite medium is investigated. The crack is assumed to be subjected to uniformly distributed mechanical, electric and magnetic loads applied symmetrically on the upper and lower crack surfaces. The semi-permeable (limited-permeable) electro-magnetic boundary condition is adopted. By virtue of the generalized method of potential theory and the general solutions, the boundary integro-differential equations governing the mode I crack problem, which are of nonlinear nature, are established and solved analytically. Exact and complete coupling magneto-electro-elastic field is obtained in terms of elementary functions. Important parameters in fracture mechanics on the crack plane, e.g., the generalized crack surface displacements, the distributions of generalized stresses at the crack tip, the generalized stress intensity factors and the energy release rate, are explicitly presented. To validate the present solutions, a numerical code by virtue of finite element method is established for 3D crack problems in the framework of magneto-electro-elasticity. To evaluate conveniently the effect of the medium inside the crack, several empirical formulae are developed, based on the numerical results.
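
    For orientation, in the purely elastic, isotropic limit the mode I stress intensity factor of a penny-shaped crack of radius a under a uniform normal traction σ is the classical result

      K_I = \frac{2}{\pi}\,\sigma\sqrt{\pi a} = 2\sigma\sqrt{\frac{a}{\pi}} ;

    the generalized stress intensity factors reported in the paper extend expressions of this type to the coupled electric and magnetic loads and to the semi-permeable crack-face condition.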

  2. Quantitative methods for analysing cumulative effects on fish migration success: a review.

    PubMed

    Johnson, J E; Patterson, D A; Martins, E G; Cooke, S J; Hinch, S G

    2012-07-01

    It is often recognized, but seldom addressed, that a quantitative assessment of the cumulative effects, both additive and non-additive, of multiple stressors on fish survival would provide a more realistic representation of the factors that influence fish migration. This review presents a compilation of analytical methods applied to a well-studied fish migration, a more general review of quantitative multivariable methods, and a synthesis on how to apply new analytical techniques in fish migration studies. A compilation of adult migration papers from Fraser River sockeye salmon Oncorhynchus nerka revealed a limited number of multivariable methods being applied and the sub-optimal reliance on univariable methods for multivariable problems. The literature review of fisheries science, general biology and medicine identified a large number of alternative methods for dealing with cumulative effects, with a limited number of techniques being used in fish migration studies. An evaluation of the different methods revealed that certain classes of multivariable analyses will probably prove useful in future assessments of cumulative effects on fish migration. This overview and evaluation of quantitative methods gathered from the disparate fields should serve as a primer for anyone seeking to quantify cumulative effects on fish migration survival. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.

  3. 40 CFR 79.11 - Information and assurances to be provided by the fuel manufacturer.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... description (or identification, in the case of a generally accepted method) of a suitable analytical technique... sold, offered for sale, or introduced into commerce for use in motor vehicles manufactured after model...

  4. High-throughput screening for new psychoactive substances (NPS) in whole blood by DLLME extraction and UHPLC-MS/MS analysis.

    PubMed

    Odoardi, Sara; Fisichella, Marco; Romolo, Francesco Saverio; Strano-Rossi, Sabina

    2015-09-01

    The increasing number of new psychoactive substances (NPS) present on the illicit market makes their identification in biological fluids/tissues a matter of great concern for clinical and forensic toxicology. Analytical methods able to detect the huge number of substances that can be used are sought, considering also that many NPS are not detected by the standard immunoassays generally used for routine drug screening. The aim of this work was to develop a method for the screening of different classes of NPS (a total of 78 analytes including cathinones, synthetic cannabinoids, phenethylamines, piperazines, ketamine and analogues, benzofurans, and tryptamines) in blood samples. The simultaneous extraction of analytes was performed by dispersive liquid/liquid microextraction (DLLME), a very rapid, cheap and efficient extraction technique that employs microliter amounts of organic solvents. Analyses were performed by a targeted ultrahigh-performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS) method in multiple reaction monitoring (MRM) mode. The method allowed the detection of the studied analytes with limits of detection (LODs) ranging from 0.2 to 2 ng/mL. The proposed DLLME method can be used as an alternative to classical liquid/liquid or solid-phase extraction techniques owing to its rapidity, its need for only microliter amounts of organic solvents, its low cost, and its ability to extract simultaneously a large number of analytes, including analytes from different chemical classes. The method was then applied to 60 authentic real samples from forensic cases, demonstrating its suitability for the screening of a wide range of NPS. Copyright © 2015 Elsevier B.V. All rights reserved.
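
    Limits of detection in the quoted range are commonly estimated from the calibration data; one widely used convention (ICH-style, given here only as a generic illustration, not necessarily the criterion applied by the authors) is

      \mathrm{LOD} = \frac{3.3\,\sigma}{S},

    with σ the standard deviation of the blank response (or of the calibration intercept) and S the slope of the calibration line; a signal-to-noise ratio of about 3:1 is an equivalent practical criterion.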

  5. Mathematical and computational studies of equilibrium capillary free surfaces

    NASA Technical Reports Server (NTRS)

    Albright, N.; Chen, N. F.; Concus, P.; Finn, R.

    1977-01-01

    The results of several independent studies are presented. The general question is considered of whether a wetting liquid always rises higher in a small capillary tube than in a larger one, when both are dipped vertically into an infinite reservoir. An analytical investigation is initiated to determine the qualitative behavior of the family of solutions of the equilibrium capillary free-surface equation that correspond to rotationally symmetric pendent liquid drops and the relationship of these solutions to the singular solution, which corresponds to an infinite spike of liquid extending downward to infinity. The block successive overrelaxation-Newton method and the generalized conjugate gradient method are investigated for solving the capillary equation on a uniform square mesh in a square domain, including the case for which the solution is unbounded at the corners. Capillary surfaces are calculated on the ellipse, on a circle with reentrant notches, and on other irregularly shaped domains using JASON, a general purpose program for solving nonlinear elliptic equations on a nonuniform quadrilaterial mesh. Analytical estimates for the nonexistence of solutions of the equilibrium capillary free-surface equation on the ellipse in zero gravity are evaluated.

  6. Purely numerical approach for analyzing flow to a well intercepting a vertical fracture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narasimhan, T.N.; Palen, W.A.

    1979-03-01

    A numerical method, based on an Integral Finite Difference approach, is presented to investigate wells intercepting fractures in general and vertical fractures in particular. Such features as finite conductivity, wellbore storage, damage, and fracture deformability and its influence on permeability are easily handled. The advantage of the numerical approach is that it is based on fewer assumptions than analytic solutions and hence has greater generality. Illustrative examples are given to validate the method against known solutions. New results are presented to demonstrate the applicability of the method to problems not apparently considered in the literature so far.

  7. General method for extracting the quantum efficiency of dispersive qubit readout in circuit QED

    NASA Astrophysics Data System (ADS)

    Bultink, C. C.; Tarasinski, B.; Haandbæk, N.; Poletto, S.; Haider, N.; Michalak, D. J.; Bruno, A.; DiCarlo, L.

    2018-02-01

    We present and demonstrate a general three-step method for extracting the quantum efficiency of dispersive qubit readout in circuit QED. We use active depletion of post-measurement photons and optimal integration weight functions on two quadratures to maximize the signal-to-noise ratio of the non-steady-state homodyne measurement. We derive analytically and demonstrate experimentally that the method robustly extracts the quantum efficiency for arbitrary readout conditions in the linear regime. We use the proven method to optimally bias a Josephson traveling-wave parametric amplifier and to quantify different noise contributions in the readout amplification chain.

  8. Building analytical three-field cosmological models

    NASA Astrophysics Data System (ADS)

    Santos, J. R. L.; Moraes, P. H. R. S.; Ferreira, D. A.; Neta, D. C. Vilar

    2018-02-01

    A difficult task to deal with is the analytical treatment of models composed of three real scalar fields, as their equations of motion are in general coupled and hard to integrate. In order to overcome this problem we introduce a methodology to construct three-field models based on the so-called "extension method". The fundamental idea of the procedure is to combine three one-field systems in a non-trivial way, to construct an effective three scalar field model. An interesting scenario where the method can be implemented is with inflationary models, where the Einstein-Hilbert Lagrangian is coupled with the scalar field Lagrangian. We exemplify how a new model constructed from our method can lead to non-trivial behaviors for cosmological parameters.

  9. Means and method of detection in chemical separation procedures

    DOEpatents

    Yeung, Edward S.; Koutny, Lance B.; Hogan, Barry L.; Cheung, Chan K.; Ma, Yinfa

    1993-03-09

    A means and method for indirect detection of constituent components of a mixture separated in a chemical separation process. Fluorescing ions are distributed across the area in which separation of the mixture will occur to provide a generally uniform background fluorescence intensity. For example, the mixture is comprised of one or more charged analytes which displace fluorescing ions where its constituent components separate to. Fluorescing ions of the same charge as the charged analyte components cause a displacement. The displacement results in the location of the separated components having a reduced fluorescence intensity to the remainder of the background. Detection of the lower fluorescence intensity areas can be visually, by photographic means and methods, or by automated laser scanning.

  10. Means and method of detection in chemical separation procedures

    DOEpatents

    Yeung, E.S.; Koutny, L.B.; Hogan, B.L.; Cheung, C.K.; Yinfa Ma.

    1993-03-09

    A means and method are described for indirect detection of constituent components of a mixture separated in a chemical separation process. Fluorescing ions are distributed across the area in which separation of the mixture will occur to provide a generally uniform background fluorescence intensity. For example, the mixture is comprised of one or more charged analytes which displace fluorescing ions where its constituent components separate to. Fluorescing ions of the same charge as the charged analyte components cause a displacement. The displacement results in the location of the separated components having a reduced fluorescence intensity to the remainder of the background. Detection of the lower fluorescence intensity areas can be visually, by photographic means and methods, or by automated laser scanning.

  11. Assessing and Analyzing Change in Attitudes in the Classroom

    ERIC Educational Resources Information Center

    Tractenberg, Rochelle E.; Chaterji, Ranjana; Haramati, Aviad

    2007-01-01

    We explore three analytic methods that can be used to quantify and qualify changes in attitude and similar outcomes that may be encountered in the educational context. These methods can be used or adapted whenever the outcome of interest is change in a generally unmeasurable attribute, such as attitude. The analyses we describe focus on: (1)…

  12. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    NASA Astrophysics Data System (ADS)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any computational method in particular, but valid in the context of simulation of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
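
    Although the paper's full error expression is not reproduced here, its leading behaviour follows from Poisson counting statistics: if N diffusion (hopping) events are observed in a simulation, then

      \frac{\sigma_N}{N} = \frac{1}{\sqrt{N}},
      \qquad\text{so}\qquad
      \frac{\sigma_D}{D} \sim \frac{1}{\sqrt{N}}

    for any diffusion coefficient or ionic conductivity D estimated in proportion to the event count, which is why simulations with few events give poor statistics regardless of the computational method used.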

  13. A transient laboratory method for determining the hydraulic properties of 'tight' rocks-I. Theory

    USGS Publications Warehouse

    Hsieh, P.A.; Tracy, J.V.; Neuzil, C.E.; Bredehoeft, J.D.; Silliman, Stephen E.

    1981-01-01

    Transient pulse testing has been employed increasingly in the laboratory to measure the hydraulic properties of rock samples with low permeability. Several investigators have proposed a mathematical model in terms of an initial-boundary value problem to describe fluid flow in a transient pulse test. However, the solution of this problem has not been available. In analyzing data from the transient pulse test, previous investigators have either employed analytical solutions that are derived with the use of additional, restrictive assumptions, or have resorted to numerical methods. In Part I of this paper, a general, analytical solution for the transient pulse test is presented. This solution is graphically illustrated by plots of dimensionless variables for several cases of interest. The solution is shown to contain, as limiting cases, the more restrictive analytical solutions that the previous investigators have derived. A method of computing both the permeability and specific storage of the test sample from experimental data will be presented in Part II. © 1981.

  14. A complete analytical solution for the inverse instantaneous kinematics of a spherical-revolute-spherical (7R) redundant manipulator

    NASA Technical Reports Server (NTRS)

    Podhorodeski, R. P.; Fenton, R. G.; Goldenberg, A. A.

    1989-01-01

    Using a method based upon resolving joint velocities using reciprocal screw quantities, compact analytical expressions are generated for the inverse solution of the joint rates of a seven revolute (spherical-revolute-spherical) manipulator. The method uses a sequential decomposition of screw coordinates to identify reciprocal screw quantities used in the resolution of a particular joint rate solution, and also to identify a Jacobian null-space basis used for the direct solution of optimal joint rates. The results of the screw decomposition are used to study special configurations of the manipulator, generating expressions for the inverse velocity solution for all non-singular configurations of the manipulator, and identifying singular configurations and their characteristics. Two functions are therefore served: a new general method for the solution of the inverse velocity problem is presented; and complete analytical expressions are derived for the resolution of the joint rates of a seven degree of freedom manipulator useful for telerobotic and industrial robotic application.
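
    For comparison with the screw-based resolution described above, the generic form of the inverse velocity solution for a kinematically redundant arm is

      \dot{q} = J^{+} v + \left(I - J^{+} J\right) z,

    where J is the 6×7 Jacobian, J⁺ its pseudoinverse, v the commanded end-effector twist, and z an arbitrary joint-rate vector projected into the null space for optimization; the reciprocal-screw decomposition in the paper provides compact analytical expressions playing the roles of these particular and null-space terms.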

  15. Analytical Support Capabilities of Turkish General Staff Scientific Decision Support Centre (SDSC) to Defence Transformation

    DTIC Science & Technology

    2005-04-01

    İpekkan, Z.; Özkil, A. (2005). Analytical Support Capabilities of Turkish General Staff Scientific Decision Support Centre (SDSC) to Defence Transformation, RTO-MP-SAS-055. ... the end failed to achieve anything commensurate with the effort. The analytical support capabilities of the Turkish Scientific Decision Support Center to ...

  16. A study of methods to predict and measure the transmission of sound through the walls of light aircraft. Integration of certain singular boundary element integrals for applications in linear acoustics

    NASA Technical Reports Server (NTRS)

    Zimmerle, D.; Bernhard, R. J.

    1985-01-01

    An alternative method for performing singular boundary element integrals for applications in linear acoustics is discussed. The method separates the integral of the characteristic solution into a singular and a nonsingular part. The singular portion is integrated with a combination of analytic and numerical techniques, while the nonsingular portion is integrated with standard Gaussian quadrature. The method may be generalized to many types of subparametric elements. The integrals over elements containing the root node are considered, and the characteristic solutions for linear acoustic problems are examined. The method may be generalized to most characteristic solutions.

  17. Environmental and Water Quality Operational Studies. General Guidelines for Monitoring Contaminants in Reservoirs

    DTIC Science & Technology

    1986-02-01

    ... especially true for the topics of sampling and analytical methods, statistical considerations, and the design of general water quality monitoring networks. ... and to the establishment and habitat differentiation of biological populations within reservoirs. Reservoir operation, especially the timing ... properties of bottom sediments, as well as specific habitat associations of biological populations of reservoirs. Thus, such heterogeneities ...

  18. Collapse of ultrashort spatiotemporal pulses described by the cubic generalized Kadomtsev-Petviashvili equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leblond, Herve; Kremer, David; Mihalache, Dumitru

    2010-03-15

    By using a reductive perturbation method, we derive from Maxwell-Bloch equations a cubic generalized Kadomtsev-Petviashvili equation for ultrashort spatiotemporal optical pulse propagation in cubic (Kerr-like) media without the use of the slowly varying envelope approximation. We calculate the collapse threshold for the propagation of few-cycle spatiotemporal pulses described by the generic cubic generalized Kadomtsev-Petviashvili equation by a direct numerical method and compare it to analytic results based on a rigorous virial theorem. Besides, typical evolution of the spectrum (integrated over the transverse spatial coordinate) is given and a strongly asymmetric spectral broadening of ultrashort spatiotemporal pulses during collapse is evidenced.

  19. 1 CFR 6.2 - Analytical subject indexes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 1 General Provisions 1 2010-01-01 2010-01-01 false Analytical subject indexes. 6.2 Section 6.2 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER THE FEDERAL REGISTER INDEXES AND ANCILLARIES § 6.2 Analytical subject indexes. Analytical subject indexes covering the contents of the Federal...

  20. 1 CFR 6.2 - Analytical subject indexes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 1 General Provisions 1 2011-01-01 2011-01-01 false Analytical subject indexes. 6.2 Section 6.2 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER THE FEDERAL REGISTER INDEXES AND ANCILLARIES § 6.2 Analytical subject indexes. Analytical subject indexes covering the contents of the Federal...

  1. 1 CFR 6.2 - Analytical subject indexes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 1 General Provisions 1 2014-01-01 2012-01-01 true Analytical subject indexes. 6.2 Section 6.2 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER THE FEDERAL REGISTER INDEXES AND ANCILLARIES § 6.2 Analytical subject indexes. Analytical subject indexes covering the contents of the Federal...

  2. 1 CFR 6.2 - Analytical subject indexes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 1 General Provisions 1 2012-01-01 2012-01-01 false Analytical subject indexes. 6.2 Section 6.2 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER THE FEDERAL REGISTER INDEXES AND ANCILLARIES § 6.2 Analytical subject indexes. Analytical subject indexes covering the contents of the Federal...

  3. 1 CFR 6.2 - Analytical subject indexes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 1 General Provisions 1 2013-01-01 2012-01-01 true Analytical subject indexes. 6.2 Section 6.2 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER THE FEDERAL REGISTER INDEXES AND ANCILLARIES § 6.2 Analytical subject indexes. Analytical subject indexes covering the contents of the Federal...

  4. Precise determination of N-acetylcysteine in pharmaceuticals by microchip electrophoresis.

    PubMed

    Rudašová, Marína; Masár, Marián

    2016-01-01

    A novel microchip electrophoresis method for the rapid and high-precision determination of N-acetylcysteine, a pharmaceutically active ingredient, in mucolytics has been developed. Isotachophoresis separations were carried out at pH 6.0 on a microchip with conductivity detection. The methods of external calibration and internal standard were used to evaluate the results. The internal standard method effectively eliminated variations in various working parameters, mainly run-to-run fluctuations of the injected volume. The repeatability and accuracy of N-acetylcysteine determination in all mucolytic preparations tested (Solmucol 90 and 200, and ACC Long 600) were more than satisfactory, with relative standard deviation and relative error values of <0.7% and <1.9%, respectively. A recovery range of 99-101% of N-acetylcysteine in the analyzed pharmaceuticals further qualifies the proposed method for accurate analysis. This work, in general, indicates the analytical possibilities of microchip isotachophoresis for the quantitative analysis of simplified samples such as pharmaceuticals that contain the analyte(s) at relatively high concentrations. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
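
    As a minimal illustration of why the internal standard cancels injection-volume fluctuations, the sketch below normalizes the analyte peak area by the internal-standard area before calibration; all peak areas and concentrations are invented for the example and are not the study's data.

```python
# Sketch of internal-standard calibration (illustrative numbers): dividing the
# analyte peak area by the internal-standard area cancels run-to-run
# fluctuations of the injected volume, because both signals scale together.
import numpy as np

# Calibration standards: known N-acetylcysteine concentrations (mg/L) with
# analyte and internal-standard responses measured in the same runs.
conc     = np.array([ 50.0, 100.0, 150.0, 200.0])
area_nac = np.array([ 980., 2050., 2960., 4020.])
area_is  = np.array([1010., 1040.,  995., 1025.])

ratio = area_nac / area_is                     # volume-independent response
slope, intercept = np.polyfit(conc, ratio, 1)  # linear calibration on the ratio

# Unknown sample measured later with a slightly different injected volume.
sample_ratio = 2150.0 / 1100.0
print("estimated concentration: %.1f mg/L" % ((sample_ratio - intercept) / slope))
```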

  5. Analytical Modeling for the Bending Resonant Frequency of Multilayered Microresonators with Variable Cross-Section

    PubMed Central

    Herrera-May, Agustín L.; Aguilera-Cortés, Luz A.; Plascencia-Mora, Hector; Rodríguez-Morales, Ángel L.; Lu, Jian

    2011-01-01

    Multilayered microresonators commonly use sensitive coating or piezoelectric layers for detection of mass and gas. Most of these microresonators have a variable cross-section that complicates the prediction of their fundamental resonant frequency (generally of the bending mode) through conventional analytical models. In this paper, we present an analytical model to estimate the first resonant frequency and deflection curve of single-clamped multilayered microresonators with variable cross-section. The analytical model is obtained using the Rayleigh and Macaulay methods, as well as the Euler-Bernoulli beam theory. Our model is applied to two multilayered microresonators with piezoelectric excitation reported in the literature. Both microresonators are composed of layers of seven different materials. The results of our analytical model agree very well with those obtained from finite element models (FEMs) and experimental data. Our analytical model can be used to determine the suitable dimensions of the microresonator’s layers in order to obtain a microresonator that operates at a resonant frequency necessary for a particular application. PMID:22164071
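
    A minimal sketch of the underlying Rayleigh-method idea, with an assumed trial shape and illustrative stepped properties rather than the paper's microresonators: the squared resonant frequency is estimated as the ratio of the bending strain-energy integral to the kinetic-energy integral for a clamped-free beam with position-dependent EI(x) and mass per unit length m(x).

```python
# Rayleigh-quotient sketch (assumed trial shape and illustrative properties):
# first bending frequency of a single-clamped beam whose bending stiffness
# EI(x) and mass per unit length m(x) change along the span, e.g. because the
# layered cross-section varies.
import numpy as np

L = 400e-6                                   # beam length (m), assumed
x = np.linspace(0.0, L, 2001)

# Assumed stepped properties: a stiffer, heavier multilayer root segment.
EI = np.where(x < 0.5 * L, 2.0e-9, 0.8e-9)   # bending stiffness (N m^2)
m  = np.where(x < 0.5 * L, 4.0e-4, 1.6e-4)   # mass per unit length (kg/m)

# Admissible clamped-free trial shape: zero deflection and slope at the root.
w   = 1.0 - np.cos(np.pi * x / (2.0 * L))
wpp = (np.pi / (2.0 * L)) ** 2 * np.cos(np.pi * x / (2.0 * L))

# Rayleigh quotient: omega^2 = int(EI * w''^2) / int(m * w^2); the uniform
# grid spacing cancels in the ratio, so plain sums are enough here.
omega_sq = (EI * wpp ** 2).sum() / (m * w ** 2).sum()
print("estimated f1 ~ %.1f kHz" % (np.sqrt(omega_sq) / (2.0 * np.pi) / 1e3))
```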

  6. Numerical realization of the variational method for generating self-trapped beams.

    PubMed

    Duque, Erick I; Lopez-Aguayo, Servando; Malomed, Boris A

    2018-03-19

    We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.

  7. Analytic theory of orbit contraction

    NASA Technical Reports Server (NTRS)

    Vinh, N. X.; Longuski, J. M.; Busemann, A.; Culp, R. D.

    1977-01-01

    The motion of a satellite in orbit subject to atmospheric forces and the motion of a reentry vehicle are both governed by gravitational and aerodynamic forces. This suggests the derivation of a uniform set of equations applicable to both cases. For the case of satellite motion, by a proper transformation and by the method of averaging, a technique appropriate for long-duration flight, the classical nonlinear differential equation describing the contraction of the major axis is derived. A rigorous analytic solution is used to integrate this equation with a high degree of accuracy, using Poincaré's method of small parameters and Lagrange's expansion to explicitly express the major axis as a function of the eccentricity. The solution is uniformly valid for moderate and small eccentricities. For highly eccentric orbits, the asymptotic equation is derived directly from the general equation. Numerical solutions were generated to display the accuracy of the analytic theory.

  8. Comparison of methods for determination of total oil sands-derived naphthenic acids in water samples.

    PubMed

    Hughes, Sarah A; Huang, Rongfu; Mahaffey, Ashley; Chelme-Ayala, Pamela; Klamerth, Nikolaus; Meshref, Mohamed N A; Ibrahim, Mohamed D; Brown, Christine; Peru, Kerry M; Headley, John V; Gamal El-Din, Mohamed

    2017-11-01

    There are several established methods for the determination of naphthenic acids (NAs) in waters associated with oil sands mining operations. Due to their highly complex nature, the measured concentration and composition of NAs vary depending on the method used. This study compared different common sample preparation techniques, analytical instrument methods, and analytical standards to measure NAs in groundwater and process water samples collected from an active oil sands operation. In general, the high- and ultrahigh-resolution methods, namely ultra-performance liquid chromatography time-of-flight mass spectrometry (UPLC-TOF-MS) and Orbitrap mass spectrometry (Orbitrap-MS), were within an order of magnitude of the Fourier transform infrared spectroscopy (FTIR) methods. The gas chromatography mass spectrometry (GC-MS) methods consistently had the highest NA concentrations and greatest standard error. Total NA concentration was not statistically different between solid phase extraction and liquid-liquid extraction sample preparations. Calibration standards influenced quantitation results. This work provided a comprehensive understanding of the inherent differences in the various techniques available to measure NAs and hence the potential differences in measured amounts of NAs in samples. Results from this study will contribute to analytical method standardization for NA analysis in oil sands related water samples. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. A novel optimization algorithm for MIMO Hammerstein model identification under heavy-tailed noise.

    PubMed

    Jin, Qibing; Wang, Hehe; Su, Qixin; Jiang, Beiyan; Liu, Qie

    2018-01-01

    In this paper, we study the system identification of multi-input multi-output (MIMO) Hammerstein processes under the typical heavy-tailed noise. To the best of our knowledge, there is no general analytical method to solve this identification problem. Motivated by this, we propose a general identification method to solve this problem based on a Gaussian-Mixture Distribution intelligent optimization algorithm (GMDA). The nonlinear part of Hammerstein process is modeled by a Radial Basis Function (RBF) neural network, and the identification problem is converted to an optimization problem. To overcome the drawbacks of analytical identification method in the presence of heavy-tailed noise, a meta-heuristic optimization algorithm, Cuckoo search (CS) algorithm is used. To improve its performance for this identification problem, the Gaussian-mixture Distribution (GMD) and the GMD sequences are introduced to improve the performance of the standard CS algorithm. Numerical simulations for different MIMO Hammerstein models are carried out, and the simulation results verify the effectiveness of the proposed GMDA. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  10. The analytic structure of conformal blocks and the generalized Wilson-Fisher fixed points

    DOE PAGES

    Gliozzi, Ferdinando; Guerrieri, Andrea L.; Petkou, Anastasios C.; ...

    2017-04-11

    Here, we describe in detail the method used in our previous work arXiv:1611.10344 to study the Wilson-Fisher critical points nearby generalized free CFTs, exploiting the analytic structure of conformal blocks as functions of the conformal dimension of the exchanged operator. Our method is equivalent to the mechanism of conformal multiplet recombination set up by null states. We also compute, to the first non-trivial order in the ε-expansion, the anomalous dimensions and the OPE coefficients of infinite classes of scalar local operators using just CFT data. We study single-scalar and O(N)-invariant theories, as well as theories with multiple deformations. When available, we agree with older results, but we also produce a wealth of new ones. Furthermore, unitarity and crossing symmetry are not used in our approach and we are able to apply our method to non-unitary theories as well. Some implications of our results for the study of the non-unitary theories containing partially conserved higher-spin currents are briefly mentioned.

  11. Methods for integrating moderation and mediation: a general analytical framework using moderated path analysis.

    PubMed

    Edwards, Jeffrey R; Lambert, Lisa Schurer

    2007-03-01

    Studies that combine moderation and mediation are prevalent in basic and applied psychology research. Typically, these studies are framed in terms of moderated mediation or mediated moderation, both of which involve similar analytical approaches. Unfortunately, these approaches have important shortcomings that conceal the nature of the moderated and the mediated effects under investigation. This article presents a general analytical framework for combining moderation and mediation that integrates moderated regression analysis and path analysis. This framework clarifies how moderator variables influence the paths that constitute the direct, indirect, and total effects of mediated models. The authors empirically illustrate this framework and give step-by-step instructions for estimation and interpretation. They summarize the advantages of their framework over current approaches, explain how it subsumes moderated mediation and mediated moderation, and describe how it can accommodate additional moderator and mediator variables, curvilinear relationships, and structural equation models with latent variables. (c) 2007 APA, all rights reserved.
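
    A minimal numerical sketch of the idea, with simulated data rather than the authors' example: a first-stage moderated mediation model is estimated with two ordinary least-squares regressions, and the conditional indirect effect of X on Y through M is evaluated at low and high values of the moderator W.

```python
# Sketch of first-stage moderated mediation with simulated data (not the
# authors' example): M is regressed on X, W and X*W; Y on M and X; the
# conditional indirect effect (a1 + a3*W) * b1 is evaluated at W = +/- 1 SD.
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = rng.standard_normal(n)
W = rng.standard_normal(n)                        # moderator
M = 0.4 * X + 0.2 * W + 0.3 * X * W + rng.standard_normal(n)
Y = 0.5 * M + 0.1 * X + rng.standard_normal(n)

def ols(design, y):
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta

ones = np.ones(n)
a = ols(np.column_stack([ones, X, W, X * W]), M)  # a[1]: X, a[3]: X*W
b = ols(np.column_stack([ones, M, X]), Y)         # b[1]: M, b[2]: direct effect

for w in (-1.0, 1.0):                             # +/- 1 SD of the moderator
    indirect = (a[1] + a[3] * w) * b[1]
    print("W = %+.0f SD: indirect = %.3f, direct = %.3f" % (w, indirect, b[2]))
```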

  12. Statistical analysis of loopy belief propagation in random fields

    NASA Astrophysics Data System (ADS)

    Yasuda, Muneki; Kataoka, Shun; Tanaka, Kazuyuki

    2015-10-01

    Loopy belief propagation (LBP), which is equivalent to the Bethe approximation in statistical mechanics, is a message-passing-type inference method that is widely used to analyze systems based on Markov random fields (MRFs). In this paper, we propose a message-passing-type method to analytically evaluate the quenched average of LBP in random fields by using the replica cluster variation method. The proposed analytical method is applicable to general pairwise MRFs with random fields whose distributions differ from each other and can give the quenched averages of the Bethe free energies over random fields, which are consistent with numerical results. The order of its computational cost is equivalent to that of standard LBP. In the latter part of this paper, we describe the application of the proposed method to Bayesian image restoration, in which we observed that our theoretical results are in good agreement with the numerical results for natural images.

  13. The Effectiveness of Circular Equating as a Criterion for Evaluating Equating.

    ERIC Educational Resources Information Center

    Wang, Tianyou; Hanson, Bradley A.; Harris, Deborah J.

    Equating a test form to itself through a chain of equatings, commonly referred to as circular equating, has been widely used as a criterion to evaluate the adequacy of equating. This paper uses both analytical methods and simulation methods to show that this criterion is in general invalid in serving this purpose. For the random groups design done…

  14. A research program to reduce interior noise in general aviation airplanes. [test methods and results

    NASA Technical Reports Server (NTRS)

    Roskam, J.; Muirhead, V. U.; Smith, H. W.; Peschier, T. D.; Durenberger, D.; Vandam, K.; Shu, T. C.

    1977-01-01

    Analytical and semi-empirical methods for determining the transmission of sound through isolated panels and predicting panel transmission loss are described. Test results presented include the influence of plate stiffness and mass and the effects of pressurization and vibration damping materials on sound transmission characteristics. Measured and predicted results are presented in tables and graphs.

  15. A general method for computing the total solar radiation force on complex spacecraft structures

    NASA Technical Reports Server (NTRS)

    Chan, F. K.

    1981-01-01

    The method circumvents many of the existing difficulties in computational logic presently encountered in the direct analytical or numerical evaluation of the appropriate surface integral. It may be applied to complex spacecraft structures for computing the total force arising from either specular or diffuse reflection or even from non-Lambertian reflection and re-radiation.

  16. Computing sensitivity and selectivity in parallel factor analysis and related multiway techniques: the need for further developments in net analyte signal theory.

    PubMed

    Olivieri, Alejandro C

    2005-08-01

    Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.

  17. A Generalized Michaelis-Menten Equation in Protein Synthesis: Effects of Mis-Charged Cognate tRNA and Mis-Reading of Codon.

    PubMed

    Dutta, Annwesha; Chowdhury, Debashish

    2017-05-01

    The sequence of amino acid monomers in the primary structure of a protein is decided by the corresponding sequence of codons (triplets of nucleic acid monomers) on the template messenger RNA (mRNA). The polymerization of a protein, by incorporation of the successive amino acid monomers, is carried out by a molecular machine called ribosome. We develop a stochastic kinetic model that captures the possibilities of mis-reading of mRNA codon and prior mis-charging of a tRNA. By a combination of analytical and numerical methods, we obtain the distribution of the times taken for incorporation of the successive amino acids in the growing protein in this mathematical model. The corresponding exact analytical expression for the average rate of elongation of a nascent protein is a 'biologically motivated' generalization of the Michaelis-Menten formula for the average rate of enzymatic reactions. This generalized Michaelis-Menten-like formula (and the exact analytical expressions for a few other quantities) that we report here display the interplay of four different branched pathways corresponding to selection of four different types of tRNA.

  18. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  19. A short history, principles, and types of ELISA, and our laboratory experience with peptide/protein analyses using ELISA.

    PubMed

    Aydin, Suleyman

    2015-10-01

    Playing a critical role in the metabolic homeostasis of living systems, the circulating concentrations of peptides/proteins are influenced by a variety of patho-physiological events. These peptide/protein concentrations in biological fluids are measured using various methods, the most common of which is enzymatic immunoassay (EIA/ELISA); these measurements guide clinicians in diagnosing and monitoring diseases that afflict biological systems. All techniques in which enzymes are employed to reveal antigen-antibody reactions are generally referred to as enzymatic immunoassay (EIA/ELISA) methods, since the basic principles of EIA and ELISA are the same. The main objective of this review is to present an overview of the historical journey that led to the invention of EIA/ELISA, an indispensable method for medical and research laboratories; the types of ELISA developed after its invention [direct (the first ELISA method invented), indirect, sandwich and competitive methods]; problems encountered during peptide/protein analyses (pre-analytical, analytical and post-analytical); rules to be followed to prevent these problems; and our laboratory experience of more than 15 years. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Quantitative evaluation of the matrix effect in bioanalytical methods based on LC-MS: A comparison of two approaches.

    PubMed

    Rudzki, Piotr J; Gniazdowska, Elżbieta; Buś-Kwaśnik, Katarzyna

    2018-06-05

    Liquid chromatography coupled to mass spectrometry (LC-MS) is a powerful tool for studying pharmacokinetics and toxicokinetics. Reliable bioanalysis requires the characterization of the matrix effect, i.e. the influence of endogenous or exogenous compounds on the analyte signal intensity. We have compared two methods for the quantitation of the matrix effect. The CVs(%) of internal standard normalized matrix factors recommended by the European Medicines Agency were evaluated against internal standard normalized relative matrix effects derived from Matuszewski et al. (2003). Both methods use post-extraction spiked samples, but matrix factors also require neat solutions. We have tested both approaches using analytes of diverse chemical structures. The study did not reveal relevant differences in the results obtained with the two calculation methods. After normalization with the internal standard, the CV(%) of the matrix factor was on average 0.5% higher than the corresponding relative matrix effect. The method adopted by the European Medicines Agency seems to be slightly more conservative in the analyzed datasets. Nine analytes of different structures enabled a general overview of the problem; still, further studies are encouraged to confirm our observations. Copyright © 2018 Elsevier B.V. All rights reserved.
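
    The two calculations compared here can be sketched side by side; note that with a single neat-solution response the two coefficients of variation coincide algebraically, while replicate neat solutions introduce small differences. The peak areas below are invented for illustration and are not the study's data.

```python
# Sketch of the two matrix-effect metrics (illustrative peak areas for six
# matrix lots): the EMA internal-standard-normalized matrix factor needs neat
# solutions, the Matuszewski-style relative matrix effect does not.
import numpy as np

def cv(values):
    return 100.0 * np.std(values, ddof=1) / np.mean(values)

# Post-extraction spiked samples, one per matrix lot.
analyte_matrix = np.array([1020., 1080.,  990., 1055., 1010., 1040.])
is_matrix      = np.array([ 505.,  540.,  492.,  530.,  498.,  520.])

# Neat-solution responses (required only for the matrix-factor approach).
analyte_neat, is_neat = 1000.0, 500.0

# EMA: CV of the internal-standard-normalized matrix factor across lots.
mf_norm = (analyte_matrix / analyte_neat) / (is_matrix / is_neat)
print("CV of IS-normalized matrix factor: %.1f%%" % cv(mf_norm))

# Relative matrix effect: CV of the IS-normalized response across lots.
# With a single pair of neat values the two CVs are identical by construction;
# replicate neat injections make them differ slightly in practice.
print("IS-normalized relative matrix effect: %.1f%%" % cv(analyte_matrix / is_matrix))
```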

  1. On analyticity of linear waves scattered by a layered medium

    NASA Astrophysics Data System (ADS)

    Nicholls, David P.

    2017-10-01

    The scattering of linear waves by periodic structures is a crucial phenomenon in many branches of applied physics and engineering. In this paper we establish rigorous analytic results necessary for the proper numerical analysis of a class of High-Order Perturbation of Surfaces methods for simulating such waves. More specifically, we prove a theorem on existence and uniqueness of solutions to a system of partial differential equations which model the interaction of linear waves with a multiply layered periodic structure in three dimensions. This result provides hypotheses under which a rigorous numerical analysis could be conducted for recent generalizations to the methods of Operator Expansions, Field Expansions, and Transformed Field Expansions.

  2. A general statistical test for correlations in a finite-length time series.

    PubMed

    Hanson, Jeffery A; Yang, Haw

    2008-06-07

    The statistical properties of the autocorrelation function from a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test the existence of correlations in a time series. The statistical test is verified by computer simulations and an application to single-molecule fluorescence spectroscopy is discussed.
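
    A minimal sketch of the two estimators being compared, applied to an uncorrelated series: both the moving-average (direct lag-product) estimate and the Fourier-transform estimate should scatter around zero for nonzero lags, with a standard deviation of order 1/sqrt(n). The data and normalization choices are illustrative.

```python
# Sketch: two common estimates of the autocorrelation of a finite i.i.d.
# series, computed directly and via the FFT (Wiener-Khinchin), for comparison
# of their scatter around zero at nonzero lags.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(4096)
x = x - x.mean()
n = x.size

# Direct (moving-average style) lag-product estimator.
acf_direct = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(n // 2)])
acf_direct /= acf_direct[0]

# FFT estimator, zero-padded to avoid circular wrap-around.
X = np.fft.rfft(x, 2 * n)
acf_fft = np.fft.irfft(np.abs(X) ** 2)[: n // 2] / n
acf_fft /= acf_fft[0]

print("direct, lags 1-4:", np.round(acf_direct[1:5], 4))
print("fft,    lags 1-4:", np.round(acf_fft[1:5], 4))
print("expected scatter ~ %.4f" % (1.0 / np.sqrt(n)))
```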

  3. Modeling and Analysis of Large Amplitude Flight Maneuvers

    NASA Technical Reports Server (NTRS)

    Anderson, Mark R.

    2004-01-01

    Analytical methods for stability analysis of large amplitude aircraft motion have been slow to develop because many nonlinear system stability assessment methods are restricted to a state-space dimension of less than three. The proffered approach is to create regional cell-to-cell maps for strategically located two-dimensional subspaces within the higher-dimensional model state-space. These regional solutions capture nonlinear behavior better than linearized point solutions. They also avoid the computational difficulties that emerge when attempting to create a cell map for the entire state-space. Example stability results are presented for a general aviation aircraft and a micro-aerial vehicle configuration. The analytical results are consistent with characteristics that were discovered during previous flight-testing.

  4. Analytical technologies for influenza virus-like particle candidate vaccines: challenges and emerging approaches

    PubMed Central

    2013-01-01

    Influenza virus-like particle vaccines are one of the most promising ways to respond to the threat of future influenza pandemics. VLPs are composed of viral antigens but lack nucleic acids, making them non-infectious and limiting the risk of recombination with wild-type strains. By taking advantage of the advancements in cell culture technologies, the process from strain identification to manufacturing has the potential to be completed rapidly and easily at large scales. After closely reviewing the current research done on influenza VLPs, it is evident that the development of quantification methods has been consistently overlooked. VLP quantification at all stages of the production process has been left to rely on current influenza quantification methods (i.e. hemagglutination assay (HA), single radial immunodiffusion assay (SRID), NA enzymatic activity assays, Western blot, electron microscopy). These are analytical methods developed decades ago for influenza virions and final bulk influenza vaccines. Although these methods are time-consuming and cumbersome, they have been sufficient for the characterization of final purified material. Nevertheless, these analytical methods are impractical for in-line process monitoring because VLP concentration in crude samples generally falls outside the range of detection of these methods. This consequently impedes the development of robust influenza-VLP production and purification processes. Thus, the development of functional process analytical techniques, applicable at every stage during production and compatible with different production platforms, is greatly needed to assess, optimize and exploit the full potential of novel manufacturing platforms. PMID:23642219

  5. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    USGS Publications Warehouse

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
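
    A minimal sketch of the conversion-model idea with simulated congener sums rather than the study's measurements: a regression fitted between the reduced-set total and the full 209-congener total (here on a log scale, consistent with roughly log-normal PCB concentrations) can then convert results from the reduced method.

```python
# Sketch of a conversion model between congener-set totals (simulated data):
# regress the full 209-congener sum on the sum over a reduced congener set,
# then apply the fit to samples analyzed only with the reduced method.
import numpy as np

rng = np.random.default_rng(3)
n = 60
sum_209 = rng.lognormal(mean=3.0, sigma=0.8, size=n)       # full-method totals
sum_119 = 0.93 * sum_209 * rng.normal(1.0, 0.05, size=n)   # reduced set ~93% of total

# Fit on log-transformed concentrations.
slope, intercept = np.polyfit(np.log(sum_119), np.log(sum_209), 1)

def convert(sum_reduced):
    """Estimate the full 209-congener total from a reduced-set total."""
    return np.exp(intercept + slope * np.log(sum_reduced))

print("reduced-set total 25.0 -> estimated full total %.1f" % convert(25.0))
```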

  6. Lie algebraic approach to the time-dependent quantum general harmonic oscillator and the bi-dimensional charged particle in time-dependent electromagnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibarra-Sierra, V.G.; Sandoval-Santana, J.C.; Cardoso, J.L.

    We discuss the one-dimensional, time-dependent general quadratic Hamiltonian and the bi-dimensional charged particle in time-dependent electromagnetic fields through the Lie algebraic approach. Such a method consists of finding a set of generators that form a closed Lie algebra in terms of which it is possible to express a quantum Hamiltonian and therefore the evolution operator. The evolution operator is then the starting point to obtain the propagator as well as the explicit form of the Heisenberg picture position and momentum operators. First, the set of generators forming a closed Lie algebra is identified for the general quadratic Hamiltonian. This algebra is later extended to study the Hamiltonian of a charged particle in electromagnetic fields, exploiting the similarities between the terms of these two Hamiltonians. These results are applied to the solution of five different examples: the linear potential, which is used to introduce the Lie algebraic method; a radio frequency ion trap; a Kanai–Caldirola-like forced harmonic oscillator; a charged particle in a time-dependent magnetic field; and a charged particle in a constant magnetic field and oscillating electric field. In particular we present exact analytical expressions that are fitting for the study of a rotating quadrupole field ion trap and magneto-transport in two-dimensional semiconductor heterostructures illuminated by microwave radiation. In these examples we show that this powerful method is suitable to treat quadratic Hamiltonians with time-dependent coefficients quite efficiently, yielding closed analytical expressions for the propagator and the Heisenberg picture position and momentum operators. -- Highlights: •We deal with the general quadratic Hamiltonian and a particle in electromagnetic fields. •The evolution operator is worked out through the Lie algebraic approach. •We also obtain the propagator and Heisenberg picture position and momentum operators. •Analytical expressions for a rotating quadrupole field ion trap are presented. •Exact solutions for magneto-transport in variable electromagnetic fields are shown.

  7. A generalization of random matrix theory and its application to statistical physics.

    PubMed

    Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H

    2017-02-01

    To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, known as autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first analytically and numerically determine how auto-correlations affect the eigenvalue distribution of the correlation matrix. Then we introduce ARRMT with a detailed procedure of how to implement the method. Finally, we illustrate the method using two examples: inflation rates and air pressure data for 95 US cities.

  8. Analytical modelling of Halbach linear generator incorporating pole shifting and piece-wise spring for ocean wave energy harvesting

    NASA Astrophysics Data System (ADS)

    Tan, Yimin; Lin, Kejian; Zu, Jean W.

    2018-05-01

    The Halbach permanent magnet (PM) array has attracted tremendous research attention in the development of electromagnetic generators for its unique properties. This paper proposes a generalized analytical model for linear generators. The slotted stator pole-shifting and the implementation of a Halbach array have been combined for the first time. Initially, the magnetization components of the Halbach array have been determined using Fourier decomposition. Then, based on the magnetic scalar potential method, the magnetic field distribution has been derived employing specially treated boundary conditions. FEM analysis has been conducted to verify the analytical model. A slotted linear PM generator with Halbach PMs has been constructed to validate the model and further improved using piece-wise springs to trigger full-range reciprocating motion. A dynamic model has been developed to characterize the dynamic behavior of the slider. This analytical method provides an effective tool in the development and optimization of Halbach PM generators. The experimental results indicate that piece-wise springs can be employed to improve generator performance under low excitation frequency.

  9. Annual banned-substance review: analytical approaches in human sports drug testing.

    PubMed

    Thevis, Mario; Kuuranne, Tiia; Walpurgis, Katja; Geyer, Hans; Schänzer, Wilhelm

    2016-01-01

    The aim of improving anti-doping efforts is predicated on several different pillars, including, amongst others, optimized analytical methods. These commonly result from exploiting most recent developments in analytical instrumentation as well as research data on elite athletes' physiology in general, and pharmacology, metabolism, elimination, and downstream effects of prohibited substances and methods of doping, in particular. The need for frequent and adequate adaptations of sports drug testing procedures has been incessant, largely due to the uninterrupted emergence of new chemical entities but also due to the apparent use of established or even obsolete drugs for reasons other than therapeutic means, such as assumed beneficial effects on endurance, strength, and regeneration capacities. Continuing the series of annual banned-substance reviews, literature concerning human sports drug testing published between October 2014 and September 2015 is summarized and reviewed in reference to the content of the 2015 Prohibited List as issued by the World Anti-Doping Agency (WADA), with particular emphasis on analytical approaches and their contribution to enhanced doping controls. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Appendix 3 Summary of Field Sampling and Analytical Methods with Bibliography

    EPA Science Inventory

    Conductivity and Specific conductance are measures of the ability of water to conduct an electric current, and are a general measure of stream-water quality. Conductivity is affected by temperature, with warmer water having a greater conductivity. Specific conductance is the te...

  11. ANALYTICAL METHOD DEVELOPMENT FOR ALACHLOR ESA AND OTHER ACETANILIDE HERBICIDE DEGRADATION PRODUCTS

    EPA Science Inventory

    In 1998, USEPA published a Drinking Water Contaminant Candidate List (CCL) of 50 chemicals and 10 microorganisms. "Alachlor ESA and other acetanilide herbicide degradation products" is listed on the 1998 CCL. Acetanilide degradation products are generally more water soluble...

  12. Importance sampling with imperfect cloning for the computation of generalized Lyapunov exponents

    NASA Astrophysics Data System (ADS)

    Anteneodo, Celia; Camargo, Sabrina; Vallejos, Raúl O.

    2017-12-01

    We revisit the numerical calculation of generalized Lyapunov exponents, L(q), in deterministic dynamical systems. The standard method consists of adding noise to the dynamics in order to use importance sampling algorithms. Then L(q) is obtained by taking the limit noise-amplitude → 0 after the calculation. We focus on a particular method that involves periodic cloning and pruning of a set of trajectories. However, instead of considering a noisy dynamics, we implement an imperfect (noisy) cloning. This alternative method is compared with the standard one and, when possible, with analytical results. As a workbench we use the asymmetric tent map, the standard map, and a system of coupled symplectic maps. The general conclusion of this study is that the imperfect-cloning method performs as well as the standard one, with the advantage of preserving the deterministic dynamics.
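
    A generic cloning/pruning estimator (not the authors' imperfect-cloning variant) can be sketched for the asymmetric tent map, for which the generalized exponent is known analytically: L(q) = ln(a^(1-q) + (1-a)^(1-q)) for slopes 1/a and 1/(1-a) and a uniform invariant density. The walker count, map parameter, and number of iterations below are illustrative.

```python
# Sketch of a cloning/pruning estimate of the generalized Lyapunov exponent
# L(q) for the asymmetric tent map, compared with the analytic value.
import numpy as np

def tent(x, a):
    return np.where(x < a, x / a, (1.0 - x) / (1.0 - a))

def tent_slope(x, a):
    return np.where(x < a, 1.0 / a, 1.0 / (1.0 - a))

def L_q_cloning(q, a=0.7, walkers=20000, steps=200, seed=4):
    rng = np.random.default_rng(seed)
    x = rng.random(walkers)                       # uniform = invariant density
    log_growth = 0.0
    for _ in range(steps):
        w = tent_slope(x, a) ** q                 # per-step weights |f'(x)|^q
        log_growth += np.log(w.mean())            # running log of <product of weights>
        idx = rng.choice(walkers, size=walkers, p=w / w.sum())   # clone/prune
        x = tent(x[idx], a)
    return log_growth / steps

q, a = 2.0, 0.7
print("cloning estimate:", L_q_cloning(q, a))
print("analytic value  :", np.log(a ** (1 - q) + (1 - a) ** (1 - q)))
```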

  13. Analytical representation for ephemeris with short time-span - Aplication to the longitude of Titan

    NASA Astrophysics Data System (ADS)

    XI, Xiaojin; Vienne, Alain

    2017-06-01

    Ephemerides of the natural satellites are generally presented in the form of tables or computed on line, as for example some of the best ones from JPL or IMCCE. In terms of fitting the most recent and best observations, analytical representations are less satisfactory, although they remain valid over a very long time-span. In some analytical studies, however, it would be beneficial to have both advantages. We present here the case of the study of the rotation of Titan, for which we need a representation of the true longitude of Titan. Frequency analysis can only be applied partially to the numerical ephemerides because of their limited time-span. To complete it, we use the form of the analytical representation to obtain its numerical parameters. The method is presented and some results are given.

  14. Label-free functional nucleic acid sensors for detecting target agents

    DOEpatents

    Lu, Yi; Xiang, Yu

    2015-01-13

    A general methodology to design label-free fluorescent functional nucleic acid sensors using a vacant site approach and an abasic site approach is described. In one example, a method for designing label-free fluorescent functional nucleic acid sensors (e.g., those that include a DNAzyme, aptamer or aptazyme) that have a tunable dynamic range through the introduction of an abasic site (e.g., dSpacer) or a vacant site into the functional nucleic acids. Also provided is a general method for designing label-free fluorescent aptamer sensors based on the regulation of malachite green (MG) fluorescence. A general method for designing label-free fluorescent catalytic and molecular beacons (CAMBs) is also provided. The methods demonstrated here can be used to design many other label-free fluorescent sensors to detect a wide range of analytes. Sensors and methods of using the disclosed sensors are also provided.

  15. A Fixed-point Scheme for the Numerical Construction of Magnetohydrostatic Atmospheres in Three Dimensions

    NASA Astrophysics Data System (ADS)

    Gilchrist, S. A.; Braun, D. C.; Barnes, G.

    2016-12-01

    Magnetohydrostatic models of the solar atmosphere are often based on idealized analytic solutions because the underlying equations are too difficult to solve in full generality. Numerical approaches, too, are often limited in scope and have tended to focus on the two-dimensional problem. In this article we develop a numerical method for solving the nonlinear magnetohydrostatic equations in three dimensions. Our method is a fixed-point iteration scheme that extends the method of Grad and Rubin ( Proc. 2nd Int. Conf. on Peaceful Uses of Atomic Energy 31, 190, 1958) to include a finite gravity force. We apply the method to a test case to demonstrate the method in general and our implementation in code in particular.

  16. Methods for Synthesizing Findings on Moderation Effects Across Multiple Randomized Trials

    PubMed Central

    Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana

    2011-01-01

    This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis, and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design. PMID:21360061

  17. Methods for synthesizing findings on moderation effects across multiple randomized trials.

    PubMed

    Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana

    2013-04-01

    This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design.

  18. Analytical performance specifications for changes in assay bias (Δbias) for data with logarithmic distributions as assessed by effects on reference change values.

    PubMed

    Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György

    2016-11-01

    Background: The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of the specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficient of variation for analytical imprecision (CVA): the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods: The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for the reference change value, combining Δbias and CVA based on log-Gaussian distributions of CVI expressed as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results: Analytical performance specifications for the reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion: The traditional and simple-to-apply model used to generate analytical performance specifications for the reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CVI and CVA, is generally useful.
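
    For orientation, the two reference change value (RCV) calculations can be sketched as below. The Gaussian form is the familiar z*sqrt(2)*sqrt(CVA^2 + CVI^2); the log-normal form shown is one standard formulation with asymmetric upward and downward limits and may differ in detail from the model developed in the paper. The CV values are illustrative.

```python
# Sketch comparing a Gaussian and a log-normal reference change value (RCV);
# CVs are given as fractions (0.05 = 5%) and are illustrative only.
import math

def rcv_gaussian(cv_a, cv_i, z=1.96):
    """Symmetric two-sided RCV assuming Gaussian distributions."""
    return z * math.sqrt(2.0) * math.sqrt(cv_a ** 2 + cv_i ** 2)

def rcv_lognormal(cv_a, cv_i, z=1.96):
    """Asymmetric RCV assuming log-Gaussian analytical + within-subject variation."""
    sigma = math.sqrt(math.log(1.0 + cv_a ** 2) + math.log(1.0 + cv_i ** 2))
    up = math.exp(z * math.sqrt(2.0) * sigma) - 1.0
    down = 1.0 - math.exp(-z * math.sqrt(2.0) * sigma)
    return up, down

cv_a, cv_i = 0.05, 0.15                       # illustrative analytical / within-subject CVs
print("Gaussian   : +/- %.1f%%" % (100 * rcv_gaussian(cv_a, cv_i)))
up, down = rcv_lognormal(cv_a, cv_i)
print("log-normal : +%.1f%% / -%.1f%%" % (100 * up, 100 * down))
```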

  19. Analytical applications of microbial fuel cells. Part II: Toxicity, microbial activity and quantification, single analyte detection and other uses.

    PubMed

    Abrevaya, Ximena C; Sacco, Natalia J; Bonetto, Maria C; Hilding-Ohlsson, Astrid; Cortón, Eduardo

    2015-01-15

    Microbial fuel cells were rediscovered twenty years ago and are now a very active research area. The reasons behind this new activity are the relatively recent discovery of electrogenic or electroactive bacteria and the vision of two important practical applications: wastewater treatment coupled with clean energy production, and power supply systems for isolated low-power sensor devices. Although some analytical applications of MFCs were proposed earlier (such as biochemical oxygen demand sensing), only lately has a myriad of new uses of this technology been presented by research groups around the world, combining both biological-microbiological and electroanalytical expertise. This is the second part of a review of MFC applications in the area of analytical sciences. In Part I, a general introduction to biologically based analytical methods (including bioassays and biosensors), MFC design and operating principles, as well as perhaps the main and earliest application, the use as a BOD sensor, was reviewed. In Part II, other proposed uses are presented and discussed. Like other microbially based analytical systems, MFCs are well suited to measuring and integrating complex parameters that are difficult or impossible to measure otherwise, such as water toxicity (where the toxic effect on aquatic organisms needs to be integrated). We explore here the methods proposed to measure toxicity, microbial metabolism and, of special interest to space exploration, life sensors. Also, some methods with higher specificity, proposed to detect a single analyte, are presented. Different possibilities for increasing selectivity and sensitivity, using molecular biology or other modern techniques, are also discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekechukwu, A

    Method validation is the process of evaluating whether an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP), International Conference on Harmonisation (ICH), and the United States Food and Drug Administration (USFDA) provide a framework for performing such validations. In general, methods for regulatory compliance must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness. Elements of these guidelines are readily adapted to the issue of validation for beryllium sampling and analysis. This document provides a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers and books reviewed is given in the Appendix. Available validation documents and guides are listed therein; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of the validation process at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all referenced documents were published in English.

  1. Recommendations for choosing an analysis method that controls Type I error for unbalanced cluster sample designs with Gaussian outcomes.

    PubMed

    Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H

    2015-11-30

    We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
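
    A simplified sketch of the best-performing two-stage approach described above, with simulated unbalanced clusters and moment estimators of the variance components (the estimators and degrees-of-freedom choices in the paper may differ): cluster means are analyzed with weights equal to the inverse of the estimated theoretical variance of a cluster mean, with the between-cluster component constrained to be non-negative.

```python
# Simplified sketch of a weighted two-stage analysis for an unbalanced
# cluster-randomized design (simulated data, moment estimators): cluster means
# are modeled by weighted least squares with weights 1 / (s2_b + s2_w / n_j).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
clusters_per_arm, sigma_b, sigma_w, effect = 8, 0.5, 1.0, 0.3

clusters = []
for arm in (0, 1):
    for _ in range(clusters_per_arm):
        n_j = int(rng.integers(5, 40))                 # unbalanced cluster sizes
        y = arm * effect + rng.normal(0, sigma_b) + rng.normal(0, sigma_w, n_j)
        clusters.append((arm, y))

means = np.array([y.mean() for _, y in clusters])
sizes = np.array([y.size for _, y in clusters])
arms  = np.array([arm for arm, _ in clusters])

# Moment estimates of the variance components (simplified).
s2_w = np.mean([y.var(ddof=1) for _, y in clusters])
m0, m1 = means[arms == 0], means[arms == 1]
pooled = (m0.var(ddof=1) * (m0.size - 1) + m1.var(ddof=1) * (m1.size - 1)) / (means.size - 2)
s2_b = max(pooled - s2_w * np.mean(1.0 / sizes), 0.0)  # constrained >= 0

w = 1.0 / (s2_b + s2_w / sizes)                        # inverse-variance weights

# Weighted least squares: intercept + treatment-arm indicator.
X = np.column_stack([np.ones_like(means), arms.astype(float)])
XtWX = X.T @ (w[:, None] * X)
beta = np.linalg.solve(XtWX, X.T @ (w * means))
se = np.sqrt(np.linalg.inv(XtWX)[1, 1])
t = beta[1] / se
df = means.size - 2
print("effect %.3f, t = %.2f, p = %.3f" % (beta[1], t, 2 * stats.t.sf(abs(t), df)))
```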

  2. Construction of measurement uncertainty profiles for quantitative analysis of genetically modified organisms based on interlaboratory validation data.

    PubMed

    Macarthur, Roy; Feinberg, Max; Bertheau, Yves

    2010-01-01

    A method is presented for estimating the size of uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called an accuracy profile that was developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods collected through a number of interlaboratory studies followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with European Network of GM Laboratories and the European Union (EU) Community Reference Laboratory validation criteria and can be used as a fitness for purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be < 0.45% to demonstrate compliance, and > 1.8% to demonstrate noncompliance with a labeling threshold of 0.9%.
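
    The decision logic implied by the 50-200% expected range can be sketched directly; the factor bounds and the 0.9% labeling threshold follow the figures quoted above, while the helper function itself is illustrative.

```python
# Sketch of the compliance decision implied by a 50-200% uncertainty range
# around a measured GMO content, against a 0.9% labeling threshold: a result
# demonstrates compliance only if its upper plausible bound stays below the
# threshold, and noncompliance only if its lower plausible bound exceeds it.
def assess(measured_pct, threshold=0.9, lower_factor=0.5, upper_factor=2.0):
    low, high = measured_pct * lower_factor, measured_pct * upper_factor
    if high <= threshold:
        return "compliant"
    if low >= threshold:
        return "non-compliant"
    return "inconclusive (within measurement uncertainty)"

for r in (0.3, 0.45, 0.9, 1.5, 1.8, 2.5):
    print("%4.2f%% GMO -> %s" % (r, assess(r)))
```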

  3. Analytical studies on holographic superconductor in the probe limit

    NASA Astrophysics Data System (ADS)

    Peng, Yan; Liu, Guohua

    2017-09-01

    We investigate the holographic superconductor model constructed in the (2+1)-dimensional AdS soliton background in the probe limit. With analytical methods, we obtain a formula for the critical phase transition points as a function of the scalar mass. We also generalize this formula to higher-dimensional space-time. These formulas are accurate when compared with numerical results. In addition, we find a correspondence between the value of the charged scalar field at the tip and the scalar operator at infinity around the phase transition points.

  4. Inversion of the anomalous diffraction approximation for variable complex index of refraction near unity. [numerical tests for water-haze aerosol model

    NASA Technical Reports Server (NTRS)

    Smith, C. B.

    1982-01-01

    The Fymat analytic inversion method for retrieving a particle-area distribution function from anomalous diffraction multispectral extinction data and total area is generalized to the case of a variable complex refractive index m(lambda) near unity, depending on the spectral wavelength lambda. Inversion tests are presented for a water-haze aerosol model. An upper phase-shift limit of 5π/2 retrieved an accurate peak area-distribution profile. Analytical corrections using both the total number and area improved the inversion.
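
    For context, the anomalous diffraction approximation gives a simple closed form for the extinction efficiency of a sphere, Q_ext(rho) = 2 - (4/rho)sin(rho) + (4/rho^2)(1 - cos(rho)), with phase-shift parameter rho = 4*pi*r*(m-1)/lambda. The sketch below evaluates this forward model over an assumed haze-like area distribution with a real refractive index; the distribution and index are illustrative, not the paper's water-haze model.

```python
# Sketch of the anomalous diffraction approximation (ADA) extinction
# efficiency for a sphere with real refractive index m near unity, and the
# forward problem of integrating it over an assumed size distribution.
import numpy as np

def q_ext_ada(r, wavelength, m):
    """ADA extinction efficiency; rho is the phase-shift parameter."""
    rho = 4.0 * np.pi * r * (m - 1.0) / wavelength
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho ** 2) * (1.0 - np.cos(rho))

# Forward model: spectral extinction from an assumed haze-like distribution.
r = np.linspace(0.05, 5.0, 2000)                       # radius (micrometres)
dr = r[1] - r[0]
n_r = r ** 2 * np.exp(-3.0 * r)                        # illustrative size distribution
for wavelength in (0.55, 1.0, 2.0):                    # micrometres
    q = q_ext_ada(r, wavelength, m=1.33)
    ext = np.sum(q * np.pi * r ** 2 * n_r) * dr        # rectangle-rule integral
    print("lambda = %.2f um: extinction ~ %.3g" % (wavelength, ext))
```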

  5. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and up-dating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method or on the design of a validation scheme for a complex multi-residue method require a well-considered strategy, based on a current knowledge of international guidance documents and regulatory requirements, as well the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  6. Two types of modes in finite size one-dimensional coaxial photonic crystals: General rules and experimental evidence

    NASA Astrophysics Data System (ADS)

    El Boudouti, E. H.; El Hassouani, Y.; Djafari-Rouhani, B.; Aynaou, H.

    2007-08-01

    We demonstrate analytically and experimentally the existence and behavior of two types of modes in finite-size one-dimensional coaxial photonic crystals made of N cells with vanishing magnetic field on both sides. We highlight the existence of N-1 confined modes in each band and one mode per gap associated with either one or the other of the two surfaces surrounding the structure. The latter modes are independent of N. These results generalize our previous findings on the existence of surface modes in two semi-infinite superlattices obtained from the cleavage of an infinite superlattice between two cells. The analytical results are obtained by means of the Green's function method, whereas the experiments are carried out using coaxial cables in the radio-frequency regime.

  7. Aerodynamic shape optimization of wing and wing-body configurations using control theory

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony

    1995-01-01

    This paper describes the implementation of optimization techniques based on control theory for wing and wing-body design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for airfoils and wings in which the shape and the surrounding body-fitted mesh are both generated analytically, and the control is the mapping function. Recently, the method has been implemented for both potential flows and flows governed by the Euler equations using an alternative formulation which employs numerically generated grids, so that it can more easily be extended to treat general configurations. Here results are presented both for the optimization of a swept wing using an analytic mapping, and for the optimization of wing and wing-body configurations using a general mesh.

  8. Child Development in Developing Countries: Introduction and Methods

    ERIC Educational Resources Information Center

    Bornstein, Marc H.; Britto, Pia Rebello; Nonoyama-Tarumi, Yuko; Ota, Yumiko; Petrovic, Oliver; Putnick, Diane L.

    2012-01-01

    The Multiple Indicator Cluster Survey (MICS) is a nationally representative, internationally comparable household survey implemented to examine protective and risk factors of child development in developing countries around the world. This introduction describes the conceptual framework, nature of the MICS3, and general analytic plan of articles…

  9. The ultrasound-enhanced bioscouring performance of four polygalacturonase enzymes obtained from rhizopus oryzae

    USDA-ARS?s Scientific Manuscript database

    An analytical and statistical method has been developed to measure the ultrasound-enhanced bioscouring performance of milligram quantities of endo- and exo-polygalacturonase enzymes obtained from Rhizopus oryzae fungi. UV-Vis spectrophotometric data and a general linear mixed models procedure indic...

  10. Analytic thinking reduces belief in conspiracy theories.

    PubMed

    Swami, Viren; Voracek, Martin; Stieger, Stefan; Tran, Ulrich S; Furnham, Adrian

    2014-12-01

    Belief in conspiracy theories has been associated with a range of negative health, civic, and social outcomes, requiring reliable methods of reducing such belief. Thinking dispositions have been highlighted as one possible factor associated with belief in conspiracy theories, but actual relationships have only been infrequently studied. In Study 1, we examined associations between belief in conspiracy theories and a range of measures of thinking dispositions in a British sample (N=990). Results indicated that a stronger belief in conspiracy theories was significantly associated with lower analytic thinking and open-mindedness and greater intuitive thinking. In Studies 2-4, we examined the causal role played by analytic thinking in relation to conspiracist ideation. In Study 2 (N=112), we showed that a verbal fluency task that elicited analytic thinking reduced belief in conspiracy theories. In Study 3 (N=189), we found that an alternative method of eliciting analytic thinking, which related to cognitive disfluency, was effective at reducing conspiracist ideation in a student sample. In Study 4, we replicated the results of Study 3 among a general population sample (N=140) in relation to generic conspiracist ideation and belief in conspiracy theories about the July 7, 2005, bombings in London. Our results highlight the potential utility of supporting attempts to promote analytic thinking as a means of countering the widespread acceptance of conspiracy theories. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Equilibrium Solutions of the Logarithmic Hamiltonian Leapfrog for the N-body Problem

    NASA Astrophysics Data System (ADS)

    Minesaki, Yukitaka

    2018-04-01

    We prove that a second-order logarithmic Hamiltonian leapfrog for the classical general N-body problem (CGNBP) designed by Mikkola and Tanikawa and some higher-order logarithmic Hamiltonian methods based on symmetric multicompositions of the logarithmic algorithm exactly reproduce the orbits of elliptic relative equilibrium solutions in the original CGNBP. These methods are explicit symplectic methods. Before this proof, only some implicit discrete-time CGNBPs proposed by Minesaki had been analytically shown to trace the orbits of elliptic relative equilibrium solutions. The proof is therefore the first existence proof for explicit symplectic methods. Such logarithmic Hamiltonian methods with a variable time step can also precisely retain periodic orbits in the classical general three-body problem, which generic numerical methods with a constant time step cannot do.

  12. A stochastic method for Brownian-like optical transport calculations in anisotropic biosuspensions and blood

    NASA Astrophysics Data System (ADS)

    Miller, Steven

    1998-03-01

    A generic stochastic method is presented that rapidly evaluates numerical bulk flux solutions to the one-dimensional integrodifferential radiative transport equation, for coherent irradiance of optically anisotropic suspensions of nonspheroidal bioparticles, such as blood. As Fermat rays or geodesics enter the suspension, they evolve into a bundle of random paths or trajectories due to scattering by the suspended bioparticles. Overall, this can be interpreted as a bundle of Markov trajectories traced out by a "gas" of Brownian-like point photons being scattered and absorbed by the homogeneous distribution of uncorrelated cells in suspension. By considering the cumulative vectorial intersections of a statistical bundle of random trajectories through sets of interior data planes in the space containing the medium, the effective equivalent information content and behavior of the (generally unknown) analytical flux solutions of the radiative transfer equation rapidly emerges. The fluxes match the analytical diffuse flux solutions in the diffusion limit, which verifies the accuracy of the algorithm. The method is not constrained by the diffusion limit and gives correct solutions for conditions where diffuse solutions are not viable. Unlike conventional Monte Carlo and numerical techniques adapted from neutron transport or nuclear reactor problems that compute scalar quantities, this vectorial technique is fast, easily implemented, adaptable, and viable for a wide class of biophotonic scenarios. By comparison, other analytical or numerical techniques generally become unwieldy, lack viability, or are more difficult to utilize and adapt. Illustrative calculations are presented for blood media at monochromatic wavelengths in the visible spectrum.
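
    A bare-bones illustration of the trajectory-bundle idea is sketched below: a 1-D Monte Carlo photon random walk through a scattering and absorbing slab. It is hedged heavily; the optical coefficients are invented, scattering is taken as isotropic, and the paper's vectorial flux estimator and anisotropic blood phase function are not reproduced.

```python
import numpy as np

# Hedged sketch: 1-D Monte Carlo photon random walk through a homogeneous
# scattering/absorbing slab (isotropic rescattering, illustrative parameters).
rng = np.random.default_rng(0)

mu_a, mu_s = 0.1, 10.0          # absorption / scattering coefficients (1/mm), assumed
mu_t = mu_a + mu_s
albedo = mu_s / mu_t
thickness = 1.0                 # slab thickness (mm)
n_photons = 50_000

reflected = transmitted = 0.0
for _ in range(n_photons):
    z, cos_theta, w = 0.0, 1.0, 1.0                   # depth, direction cosine, weight
    while True:
        z += cos_theta * rng.exponential(1.0 / mu_t)  # free flight to next event
        if z < 0.0:
            reflected += w
            break
        if z > thickness:
            transmitted += w
            break
        w *= albedo                                   # implicit absorption
        if w < 1e-4:                                  # Russian roulette
            if rng.random() > 0.1:
                break
            w *= 10.0
        cos_theta = 2.0 * rng.random() - 1.0          # isotropic rescattering

print("diffuse reflectance ~", reflected / n_photons,
      " transmittance ~", transmitted / n_photons)
```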

  13. Flows of Newtonian and Power-Law Fluids in Symmetrically Corrugated Capillary Fissures and Tubes

    NASA Astrophysics Data System (ADS)

    Walicka, A.

    2018-02-01

    In this paper, an analytical method is presented for deriving the relationships between the pressure drop and the volumetric flow rate in laminar flow regimes of Newtonian and power-law fluids through symmetrically corrugated capillary fissures and tubes. This method, which is general with regard to fluid and capillary shape, can also serve as a foundation for other fluids, fissures and tubes, and as a good basis for numerical integration when analytical expressions are hard to obtain due to mathematical complexities. Five converging-diverging or diverging-converging geometries, viz. wedge and cone, parabolic, hyperbolic, hyperbolic cosine and cosine curve, are used as examples to illustrate the application of this method. For the wedge and cone geometry the present results for the power-law fluid were compared with results obtained by another method; this comparison indicates good agreement between the two sets of results.
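
    The sketch below illustrates the numerical-integration route mentioned above: the local power-law Poiseuille gradient is integrated along an assumed parabolic converging-diverging radius profile (lubrication approximation; the geometry and parameters are illustrative, not taken from the paper).

```python
import numpy as np

# Hedged sketch: pressure drop of a power-law fluid (consistency k, index n)
# through a tube of slowly varying radius R(z), via the lubrication-type
# relation  dP/dz = (2 k / R) * [Q (3n+1) / (pi n R^3)]^n  (Poiseuille for n=1).

def pressure_drop(Q, R_of_z, L, k, n, num=2001):
    z = np.linspace(0.0, L, num)
    R = R_of_z(z)
    dPdz = 2.0 * k * (Q * (3.0 * n + 1.0) / (np.pi * n * R**3))**n / R
    return np.sum(0.5 * (dPdz[1:] + dPdz[:-1]) * np.diff(z))   # trapezoid rule

# assumed parabolic converging-diverging profile
R_max, R_min, L = 1.0e-3, 0.5e-3, 1.0e-2                        # metres
R_profile = lambda z: R_min + (R_max - R_min) * (2.0 * z / L - 1.0)**2

print("Newtonian      :", pressure_drop(1e-8, R_profile, L, k=1e-3, n=1.0), "Pa")
print("shear-thinning :", pressure_drop(1e-8, R_profile, L, k=1e-3, n=0.6), "Pa")
```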

  14. MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow

    NASA Astrophysics Data System (ADS)

    Samani, N.; Kompani-Zare, M.; Barry, D. A.

    2004-01-01

    Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry such as MODFLOW (in contrast to analytical models) generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion. The proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method, steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.
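
    For reference, a typical analytical benchmark for such axisymmetric simulations is the Theis drawdown solution for unsteady confined flow to a fully penetrating well; the hedged sketch below evaluates it with illustrative parameters (it is not the scaling method itself).

```python
import numpy as np
from scipy.special import exp1

# Hedged sketch: Theis analytical drawdown s(r, t) = Q / (4 pi T) * W(u),
# u = r^2 S / (4 T t), a classical benchmark for axisymmetric well flow.

def theis_drawdown(r, t, Q=1.0e-2, T=1.0e-3, S=1.0e-4):
    """r in m, t in s, Q in m^3/s, T in m^2/s, S dimensionless; returns metres."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)   # W(u) is the exponential integral

for r in (1.0, 10.0, 100.0):
    print(f"r = {r:6.1f} m, drawdown after 1 h = {theis_drawdown(r, 3600.0):.3f} m")
```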

  15. Phonon dispersion on Ag (100) surface: A modified analytic embedded atom method study

    NASA Astrophysics Data System (ADS)

    Xiao-Jun, Zhang; Chang-Le, Chen

    2016-01-01

    Within the harmonic approximation, the analytic expression of the dynamical matrix is derived based on the modified analytic embedded atom method (MAEAM) and the dynamics theory of surface lattices. The surface phonon dispersions along the three major symmetry directions Γ¯X¯, Γ¯M¯, and X¯M¯ are calculated for the clean Ag (100) surface using the derived formulas. We then discuss the polarization and localization of surface modes at the points X¯ and M¯ by plotting the squared polarization vectors as a function of the layer index. The phonon frequencies of the surface modes calculated by MAEAM are compared with the available experimental and other theoretical data. The present results are generally in agreement with the referenced experimental or theoretical results, with a maximum deviation of 10.4%. The agreement shows that the modified analytic embedded atom method is a reasonable many-body potential model for quickly describing surface lattice vibrations, and it lays a significant foundation for studying surface lattice vibrations in other metals. Project supported by the National Natural Science Foundation of China (Grant Nos. 61471301 and 61078057), the Scientific Research Program Funded by Shaanxi Provincial Education Department, China (Grant No. 14JK1301), and the Specialized Research Fund for the Doctoral Program of Higher Education, China (Grant No. 20126102110045).

  16. Estimating statistical isotropy violation in CMB due to non-circular beam and complex scan in minutes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pant, Nidhi; Das, Santanu; Mitra, Sanjit

    Mild, unavoidable deviations from circular symmetry of instrumental beams, together with the scan strategy, can give rise to measurable Statistical Isotropy (SI) violation in Cosmic Microwave Background (CMB) experiments. If not accounted for properly, this spurious signal can complicate the extraction of other SI violation signals (if any) in the data. However, estimation of this effect through exact numerical simulation is computationally intensive and time consuming. A generalized analytical formalism not only provides a quick way of estimating this signal, but also gives a detailed understanding connecting the leading beam anisotropy components to a measurable BipoSH characterisation of SI violation. In this paper, we provide an approximate generic analytical method for estimating the SI violation generated due to a non-circular (NC) beam and arbitrary scan strategy, in terms of the Bipolar Spherical Harmonic (BipoSH) spectra. Our analytical method can predict almost all the features introduced by a NC beam in a complex scan and thus reduces the need for extensive numerical simulation worth tens of thousands of CPU hours to minutes-long calculations. As an illustrative example, we use WMAP beams and scanning strategy to demonstrate the ease of use, usability and efficiency of our method. We test all our analytical results against those from exact numerical simulations.

  17. Direct high-performance liquid chromatography method with refractometric detection designed for stability studies of treosulfan and its biologically active epoxy-transformers.

    PubMed

    Główka, Franciszek K; Romański, Michał; Teżyk, Artur; Żaba, Czesław

    2013-01-01

    Treosulfan (TREO) is an alkylating agent registered for treatment of advanced platin-resistant ovarian carcinoma. Nowadays, TREO is increasingly applied intravenously in high doses as a promising myeloablative agent with low organ toxicity in children. Under physiological conditions it undergoes pH-dependent transformation into epoxy-transformers (S,S-EBDM and S,S-DEB). The mechanism of this reaction is generally known, but not its kinetic details. In order to investigate the kinetics of TREO transformation, an HPLC method with refractometric detection for simultaneous determination of the three analytes in one analytical run has been developed for the first time. The samples containing TREO, S,S-EBDM, S,S-DEB and acetaminophen (internal standard) were injected directly onto the reversed-phase column. To assure stability of the analytes and obtain their complete resolution, a mobile phase composed of acetate buffer pH 4.5 and acetonitrile was applied. The linear ranges of the calibration curves of TREO, S,S-EBDM and S,S-DEB spanned concentrations of 20-6000, 34-8600 and 50-6000 μM, respectively. Intra- and interday precision and accuracy of the developed method fulfilled analytical criteria. The stability of the analytes in experimental samples was also established. The validated HPLC method was successfully applied to the investigation of the kinetics of TREO activation to S,S-EBDM and S,S-DEB. At pH 7.4 and 37 °C the transformation of TREO followed first-order kinetics with a half-life of 1.5 h. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    NASA Astrophysics Data System (ADS)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far from the mainland, reefs account for more than 95% of the South Sea, and most reefs are scattered over sensitive disputed areas of interest. Methods for accurately obtaining reef bathymetry therefore urgently need to be developed. Commonly used methods, including sonar, airborne laser, and remote sensing estimation, are limited by the long distances, large areas, and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, by exploiting the relationship between spectral information and water depth. Aimed at the water quality of the South Sea of China, this paper develops a bathymetry estimation method that does not require measured water depths. First, the semi-analytical optimization model of the theoretical interpretation models is studied, with a genetic algorithm used to optimize the model. An OpenMP parallel computing algorithm is also introduced to greatly increase the speed of the semi-analytical optimization model. One island in the South Sea of China is selected as the study area, and measured water depths are used to evaluate the accuracy of bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in the study area, and the accuracy of the estimated bathymetry in the 0-20 m shallow-water zone is acceptable. The semi-analytical optimization model based on the genetic algorithm thus solves the problem of bathymetry estimation without water depth measurements. In general, this paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.

  19. GHM method for obtaining rational solutions of nonlinear differential equations.

    PubMed

    Vazquez-Leal, Hector; Sarmiento-Reyes, Arturo

    2015-01-01

    In this paper, we propose the application of the general homotopy method (GHM) to obtain rational solutions of nonlinear differential equations. It delivers a high-precision representation of the nonlinear differential equation using a few linear algebraic terms. In order to assess the benefits of this proposal, three nonlinear problems are solved and compared against other semi-analytic or numerical methods. The obtained results show that GHM is a powerful tool, capable of generating highly accurate rational solutions. AMS subject classification 34L30.

  20. Lie symmetry analysis, conservation laws and exact solutions of the time-fractional generalized Hirota-Satsuma coupled KdV system

    NASA Astrophysics Data System (ADS)

    Saberi, Elaheh; Reza Hejazi, S.

    2018-02-01

    In the present paper, Lie point symmetries of the time-fractional generalized Hirota-Satsuma coupled KdV (HS-cKdV) system based on the Riemann-Liouville derivative are obtained. Using the derived Lie point symmetries, we obtain similarity reductions and conservation laws of the considered system. Finally, some analytic solutions are furnished by means of the invariant subspace method in the Caputo sense.

  1. Theory of ground state factorization in quantum cooperative systems.

    PubMed

    Giampaolo, Salvatore M; Adesso, Gerardo; Illuminati, Fabrizio

    2008-05-16

    We introduce a general analytic approach to the study of factorization points and factorized ground states in quantum cooperative systems. The method allows us to determine rigorously the existence, location, and exact form of separable ground states in a large variety of, generally nonexactly solvable, spin models belonging to different universality classes. The theory applies to translationally invariant systems, irrespective of spatial dimensionality, and for spin-spin interactions of arbitrary range.

  2. New Analytical Solution of the Equilibrium Ampere's Law Using the Walker's Method: a Didactic Example

    NASA Astrophysics Data System (ADS)

    Sousa, A. N. Laurindo; Ojeda-González, A.; Prestes, A.; Klausner, V.; Caritá, L. A.

    2018-02-01

    This work aims to demonstrate the analytical solution of the Grad-Shafranov (GS) equation, or generalized Ampere's law, which is important in studies of self-consistent 2.5-D solutions for current sheet structures. A detailed mathematical development is presented to obtain the generating function as shown by Walker (RSPSA 91, 410, 1915). We therefore study the general solution of the GS equation in terms of the Walker generating function in detail, without omitting any steps. The Walker generating function g(ζ) is written in a new way, as the tangent of an unspecified function K(ζ). In this approach, the general solution of the GS equation is expressed as exp(-2Ψ) = 4|K′(ζ)|²/cos²[K(ζ) - K(ζ*)]. In order to investigate whether our proposal simplifies the mathematical effort of finding new generating functions, we use Harris's solution as a test; in this case K(ζ) = arctan(exp(iζ)). In summary, one of the purposes of the article is to present a review of the Harris solution. In an attempt to find a simplified solution, we propose a new way to write the GS solution using g(ζ) = tan(K(ζ)). We also present a new analytical solution to the equilibrium Ampere's law using g(ζ) = cosh(bζ), which includes a generalization of the Harris model and presents isolated magnetic islands.
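
    As a short consistency check (a standard calculation, not taken from the paper), substituting the Harris choice into the equivalent Walker form exp(-2Ψ) = 4|g′(ζ)|²/(1 + |g(ζ)|²)², with g = tan K, recovers the Harris sheet:

```latex
% Hedged check: the Harris current sheet from the Walker-form solution,
% assuming K has real Taylor coefficients so that K(\zeta^*) = \overline{K(\zeta)}.
\begin{align*}
  K(\zeta) &= \arctan\!\bigl(e^{i\zeta}\bigr)
    \quad\Longrightarrow\quad
    g(\zeta) = \tan K(\zeta) = e^{i\zeta}, \qquad \zeta = x + iy,\\
  \exp(-2\Psi)
    &= \frac{4\,\lvert i e^{i\zeta}\rvert^{2}}{\bigl(1+\lvert e^{i\zeta}\rvert^{2}\bigr)^{2}}
     = \frac{4e^{-2y}}{\bigl(1+e^{-2y}\bigr)^{2}}
     = \frac{1}{\cosh^{2} y}
    \quad\Longrightarrow\quad
    \Psi = \ln\cosh y .
\end{align*}
```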

  4. A Mathematica program for the approximate analytical solution to a nonlinear undamped Duffing equation by a new approximate approach

    NASA Astrophysics Data System (ADS)

    Wu, Dongmei; Wang, Zhongcheng

    2006-03-01

    According to Mickens [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563], the general HB (harmonic balance) method is an approximation to the convergent Fourier series representation of the periodic solution of a nonlinear oscillator, and not an approximation to an expansion in terms of a small parameter. Consequently, for a nonlinear undamped Duffing equation with a driving force Bcos(ωx), to find a periodic solution whose fundamental frequency is identical to ω, the corresponding Fourier series can be written as ỹ(x) = ∑_{n=1}^{m} a_n cos[(2n-1)ωx]. How to calculate the coefficients of this Fourier series efficiently with a computer program is still an open problem. In the HB method, by substituting the approximation ỹ(x) into the force equation, expanding the resulting expression into a trigonometric series, and then setting the coefficients of the resulting lowest-order harmonic to zero, one can obtain approximate coefficients of the approximation ỹ(x) [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563]. But for nonlinear differential equations such as the Duffing equation, it is very difficult to construct higher-order analytical approximations, because the HB method requires solving a set of algebraic equations for a large number of unknowns with very complex nonlinearities. To overcome this difficulty, forty years ago Urabe derived a computational method for the Duffing equation based on the Galerkin procedure [M. Urabe, A. Reiter, Numerical computation of nonlinear forced oscillations by Galerkin's procedure, J. Math. Anal. Appl. 14 (1966) 107-140]. Dooren obtained an approximate solution of the Duffing oscillator with a special set of parameters by using Urabe's method [R. van Dooren, Stabilization of Cowell's classic finite difference method for numerical integration, J. Comput. Phys. 16 (1974) 186-192]. In this paper, within the framework of the general HB method, we present a new iteration algorithm to calculate the coefficients of the Fourier series. With this new method, the iteration procedure starts with a(x)cos(ωx)+b(x)sin(ωx), and the accuracy may be improved gradually as new coefficients are determined automatically, one by one. At every stage of the calculation, we need only solve a cubic equation. Using this new algorithm, we have developed a Mathematica program, which demonstrates the following main advantages over the previous HB method: (1) it avoids solving a set of associated nonlinear equations; (2) it is easier to implement in a computer program, and it produces a highly accurate solution with an analytical expression efficiently. It is interesting to find that, generally, for a given set of parameters, a nonlinear Duffing equation can have three independent oscillation modes. For some sets of parameters, it can have two modes with complex displacement and one with real displacement, but in other cases it can have three modes, all of them with real displacement. Therefore, we can divide the parameters into two classes according to the solution property: those for which there is only one mode with real displacement, and those for which there are three modes with real displacement. This program should be useful for studying the dynamically periodic behavior of a Duffing oscillator and can provide an approximate analytical solution of high accuracy for testing the error behavior of newly developed numerical methods over a wide range of parameters.
Program summary
Title of program: AnalyDuffing.nb
Catalogue identifier: ADWR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWR_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: the program has been designed for a microcomputer and has been tested on the microcomputer. Computers: IBM PC
Installations: the address(es) of your computer(s)
Operating systems under which the program has been tested: Windows XP
Programming language used: Software Mathematica 4.2, 5.0 and 5.1
No. of lines in distributed program, including test data, etc.: 23 663
No. of bytes in distributed program, including test data, etc.: 152 321
Distribution format: tar.gz
Memory required to execute with typical data: 51 712 bytes
No. of bits in a word:
No. of processors used: 1
Has the code been vectorized?: no
Peripherals used: no
Program Library subprograms used: no
Nature of physical problem: To find an approximate solution with analytical expressions for the undamped nonlinear Duffing equation with a periodic driving force when the fundamental frequency is identical to that of the driving force.
Method of solution: In the framework of the general HB method, a new iteration algorithm is used to calculate the coefficients of the Fourier series, so that an approximate analytical solution of high accuracy can be obtained efficiently.
Restrictions on the complexity of the problem: For problems with a large driving frequency, the convergence may be somewhat slow, because more iterations are needed.
Typical running time: several seconds
Unusual features of the program: For an undamped Duffing equation, the program can provide all the solutions or oscillation modes with real displacement for any parameters of interest, to the required accuracy, efficiently. The program can be used to study the dynamically periodic behavior of a nonlinear oscillator and can provide a highly accurate approximate analytical solution for developing high-accuracy numerical methods.
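
    To make the "solve a cubic at every stage" remark concrete, the hedged sketch below carries out only the lowest-order harmonic-balance step, writing the equation as y'' + a1*y + a3*y^3 = B*cos(ωx) (the coefficient names a1, a3 are ours); assuming y ≈ A*cos(ωx) and balancing the cos(ωx) terms gives (3/4)a3*A^3 + (a1 - ω²)A - B = 0. The iterative refinement of higher harmonics in the Mathematica program is not reproduced.

```python
import numpy as np

# Hedged sketch: lowest-order harmonic balance for the undamped driven Duffing
# equation y'' + a1*y + a3*y**3 = B*cos(w*x). Balancing the cos(w*x) terms
# yields a cubic in the amplitude A; one or three real roots appear, echoing
# the one-vs-three real-displacement modes discussed above.

def hb_first_order_amplitudes(a1, a3, B, w):
    roots = np.roots([0.75 * a3, 0.0, a1 - w**2, -B])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# illustrative parameters (assumed, not from the paper)
print(hb_first_order_amplitudes(a1=1.0, a3=1.0, B=0.5, w=1.2))
```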

  5. Construction Method of Analytical Solutions to the Mathematical Physics Boundary Problems for Non-Canonical Domains

    NASA Astrophysics Data System (ADS)

    Mobarakeh, Pouyan Shakeri; Grinchenko, Victor T.

    2015-06-01

    The majority of practical acoustics problems require solving boundary problems in non-canonical domains. The construction of analytical solutions of mathematical physics boundary problems for non-canonical domains is therefore both attractive from the academic viewpoint and very useful for the development of efficient algorithms for quantitative estimation of the field characteristics under study. One of the main solution strategies for such problems is based on the superposition method, which allows one to analyze a wide class of specific problems whose domains can be constructed as the union of canonically shaped subdomains. It is also assumed that an analytical solution (or quasi-solution) can be constructed for each subdomain in one form or another. However, this approach entails some difficulties in the construction of calculation algorithms, insofar as the boundary conditions are incompletely defined on the intervals where the functions appearing in the general solution are orthogonal to each other. We discuss several typical examples of problems with such difficulties, study their nature, and identify the optimal methods to overcome them.

  6. On a class of integrals of Legendre polynomials with complicated arguments--with applications in electrostatics and biomolecular modeling.

    PubMed

    Yu, Yi-Kuo

    2003-08-15

    The exact analytical result for a class of integrals involving (associated) Legendre polynomials of complicated argument is presented. The method employed can in principle be generalized to integrals involving other special functions. This class of integrals also proves useful in electrostatic problems in which dielectric spheres are involved, which is of importance in modeling the dynamics of biological macromolecules. In fact, with this solution, a more robust foundation is laid for the Generalized Born method in modeling the dynamics of biomolecules. © 2003 Elsevier B.V. All rights reserved.

  7. Study of diatomic molecules. 2: Intensities. [optical emission spectroscopy of ScO]

    NASA Technical Reports Server (NTRS)

    Femenias, J. L.

    1978-01-01

    The theory of perturbations, which gives the diatomic effective Hamiltonian, is used for calculating actual molecular wave functions and the intensity factors involved in transitions between states arising from Hund's coupling cases a, b, intermediate a-b, and c tendency. The Herman and Wallis corrections are derived, without any knowledge of the analytical expressions of the wave functions, and generalized to transitions between electronic states of arbitrary symmetry and multiplicity. A general method for studying perturbed intensities is presented, drawing primarily on modern numerical spectroscopic approaches. The method is used in the study of the ScO optical emission spectrum.

  8. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface]

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  9. The use of an analytic Hamiltonian matrix for solving the hydrogenic atom

    NASA Astrophysics Data System (ADS)

    Bhatti, Mohammad

    2001-10-01

    The non-relativistic Hamiltonian corresponding to the Schrödinger equation is converted into an analytic Hamiltonian matrix using kth-order B-spline functions. The Galerkin method is applied to the solution of the Schrödinger equation for bound states of hydrogen-like systems. The program Mathematica is used to create the analytic matrix elements, exact integration is performed over the knot sequence of the B-splines, and the resulting generalized eigenvalue problem is solved on a specified numerical grid. The complete basis set and the energy spectrum are obtained for the Coulomb potential for hydrogenic systems with Z less than 100 with B-splines of order eight. A further application is given to test the Thomas-Reiche-Kuhn sum rule for the hydrogenic systems.
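
    As a rough cross-check of the spectrum such a solver should reproduce, the hedged sketch below uses a plain finite-difference radial grid (a stand-in, not the B-spline Galerkin basis of the paper) to recover the hydrogen bound-state energies E_n = -1/(2n²) hartree for l = 0.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Hedged sketch: finite-difference radial Schroedinger solver (atomic units),
#   -1/2 u'' + [ -Z/r + l(l+1)/(2 r^2) ] u = E u,   u(0) = u(r_max) = 0.
# A simple stand-in for the B-spline Galerkin approach described above.

def hydrogen_levels(Z=1, l=0, r_max=200.0, n_grid=4000, n_levels=4):
    r = np.linspace(r_max / n_grid, r_max, n_grid)
    h = r[1] - r[0]
    diag = 1.0 / h**2 - Z / r + l * (l + 1) / (2.0 * r**2)
    off = -0.5 / h**2 * np.ones(n_grid - 1)
    return eigh_tridiagonal(diag, off, eigvals_only=True)[:n_levels]

print(hydrogen_levels())   # approx. [-0.5, -0.125, -0.0556, -0.0312] hartree
```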

  10. Generalized model of electromigration with 1:1 (analyte:selector) complexation stoichiometry: part I. Theory.

    PubMed

    Dubský, Pavel; Müllerová, Ludmila; Dvořák, Martin; Gaš, Bohuslav

    2015-03-06

    The model of electromigration of a multivalent weak acidic/basic/amphoteric analyte that undergoes complexation with a mixture of selectors is introduced. The model provides an extension of the series of models starting with the single-selector model without dissociation by Wren and Rowe in 1992, continuing with the monovalent weak analyte/single-selector model by Rawjee, Williams and Vigh in 1993 and that by Lelièvre in 1994, and ending with the multi-selector overall model without dissociation developed by our group in 2008. The new multivalent analyte multi-selector model shows that the effective mobility of the analyte obeys the original Wren and Rowe formula. The overall complexation constant, the mobility of the free analyte and the mobility of the complex can be measured and used in a standard way. The mathematical expressions for the overall parameters are provided. We further demonstrate mathematically that the pH-dependent parameters for weak analytes can be used directly as an input into the multi-selector overall model and, in reverse, the multi-selector overall parameters can serve as an input into the pH-dependent models for the weak analytes. These findings can greatly simplify rational method development in analytical electrophoresis, specifically enantioseparations. Copyright © 2015 Elsevier B.V. All rights reserved.
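
    A minimal numeric illustration of the single-selector Wren-Rowe expression referred to above is sketched below; the mobilities, complexation constant and concentration range are assumed values chosen only for illustration.

```python
import numpy as np

# Hedged sketch of the Wren-Rowe effective-mobility expression,
#   mu_eff(c) = (mu_free + mu_complex * K * c) / (1 + K * c),
# where c is the selector concentration and K the complexation constant.

def effective_mobility(c, mu_free, mu_complex, K):
    return (mu_free + mu_complex * K * c) / (1.0 + K * c)

c = np.linspace(0.0, 20e-3, 5)     # selector concentration (M), assumed
print(effective_mobility(c, mu_free=20e-9, mu_complex=5e-9, K=300.0))  # m^2 V^-1 s^-1
```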

  11. Trace analysis of trimethoprim and sulfonamide, macrolide, quinolone, and tetracycline antibiotics in chlorinated drinking water using liquid chromatography electrospray tandem mass spectrometry

    USGS Publications Warehouse

    Ye, Z.; Weinberg, H.S.; Meyer, M.T.

    2007-01-01

    A multirun analytical method has been developed and validated for trace determination of 24 antibiotics including 7 sulfonamides, 3 macrolides, 7 quinolones, 6 tetracyclines, and trimethoprim in chlorine-disinfected drinking water using a single solid-phase extraction method coupled to liquid chromatography with positive electrospray tandem mass spectrometry detection. The analytes were extracted by a hydrophilic-lipophilic balanced resin and eluted with acidified methanol (0.1% formic acid), resulting in analyte recoveries generally above 90%. The limits of quantitation were mostly below 10 ng/L in drinking water. Since the concentrated sample matrix typically caused ion suppression during electrospray ionization, the method of standard addition was used for quantitation. Chlorine residuals in drinking water can react with some antibiotics, but ascorbic acid was found to be an effective chlorine quenching agent without affecting the analysis and stability of the antibiotics in water. A preliminary occurrence study using this method revealed the presence of some antibiotics in drinking waters, including sulfamethoxazole (3.0-3.4 ng/L), macrolides (1.4-4.9 ng/L), and quinolones (1.2-4.0 ng/L). © 2007 American Chemical Society.
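
    The standard-addition quantitation mentioned above can be summarized in a few lines; the sketch below (illustrative numbers, not the study's data) estimates the unspiked concentration from the x-intercept of the response versus added-concentration regression.

```python
import numpy as np

# Hedged sketch of standard-addition quantitation: fit response vs. spiked
# concentration and read the unspiked concentration off the x-intercept.

added  = np.array([0.0, 5.0, 10.0, 20.0])      # ng/L spiked into sample aliquots (illustrative)
signal = np.array([1.20, 3.15, 5.05, 9.10])    # detector response (illustrative)

slope, intercept = np.polyfit(added, signal, 1)
print(f"estimated analyte concentration: {intercept / slope:.2f} ng/L")
```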

  12. Verification of Decision-Analytic Models for Health Economic Evaluations: An Overview.

    PubMed

    Dasbach, Erik J; Elbasha, Elamin H

    2017-07-01

    Decision-analytic models for cost-effectiveness analysis are developed in a variety of software packages where the accuracy of the computer code is seldom verified. Although modeling guidelines recommend using state-of-the-art quality assurance and control methods for software engineering to verify models, the fields of pharmacoeconomics and health technology assessment (HTA) have yet to establish and adopt guidance on how to verify health and economic models. The objective of this paper is to introduce to our field the variety of methods the software engineering field uses to verify that software performs as expected. We identify how many of these methods can be incorporated in the development process of decision-analytic models in order to reduce errors and increase transparency. Given the breadth of methods used in software engineering, we recommend a more in-depth initiative to be undertaken (e.g., by an ISPOR-SMDM Task Force) to define the best practices for model verification in our field and to accelerate adoption. Establishing a general guidance for verifying models will benefit the pharmacoeconomics and HTA communities by increasing accuracy of computer programming, transparency, accessibility, sharing, understandability, and trust of models.
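
    As one concrete (and deliberately simple) example of the kind of automated check the paper argues should be borrowed from software engineering, the hedged sketch below verifies basic invariants of a hypothetical three-state Markov cohort model: transition probabilities are non-negative, rows sum to one, and the cohort is conserved over the time horizon.

```python
import numpy as np

# Hedged example of a model-verification check (illustrative 3-state model,
# not any published cost-effectiveness model).

P = np.array([[0.85, 0.10, 0.05],   # well -> well / sick / dead
              [0.00, 0.70, 0.30],   # sick -> sick / dead
              [0.00, 0.00, 1.00]])  # dead is absorbing

def verify_markov_model(P, n_cycles=40):
    assert np.all(P >= 0.0), "negative transition probability"
    assert np.allclose(P.sum(axis=1), 1.0), "rows must sum to 1"
    cohort = np.array([1.0, 0.0, 0.0])
    for _ in range(n_cycles):
        cohort = cohort @ P
        assert np.isclose(cohort.sum(), 1.0), "cohort not conserved"

verify_markov_model(P)
print("all verification checks passed")
```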

  13. Modal element method for potential flow in non-uniform ducts: Combining closed form analysis with CFD

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.; Baumeister, Joseph F.

    1994-01-01

    An analytical procedure is presented, called the modal element method, that combines numerical grid based algorithms with eigenfunction expansions developed by separation of variables. A modal element method is presented for solving potential flow in a channel with two-dimensional cylindrical like obstacles. The infinite computational region is divided into three subdomains; the bounded finite element domain, which is characterized by the cylindrical obstacle and the surrounding unbounded uniform channel entrance and exit domains. The velocity potential is represented approximately in the grid based domain by a finite element solution and is represented analytically by an eigenfunction expansion in the uniform semi-infinite entrance and exit domains. The calculated flow fields are in excellent agreement with exact analytical solutions. By eliminating the grid surrounding the obstacle, the modal element method reduces the numerical grid size, employs a more precise far field boundary condition, as well as giving theoretical insight to the interaction of the obstacle with the mean flow. Although the analysis focuses on a specific geometry, the formulation is general and can be applied to a variety of problems as seen by a comparison to companion theories in aeroacoustics and electromagnetics.

  14. Ray Tracing and Modal Methods for Modeling Radio Propagation in Tunnels With Rough Walls

    PubMed Central

    Zhou, Chenming

    2017-01-01

    At the ultrahigh frequencies common to portable radios, tunnels such as mine entries are often modeled by hollow dielectric waveguides. The roughness condition of the tunnel walls has an influence on radio propagation, and therefore should be taken into account when an accurate power prediction is needed. This paper investigates how wall roughness affects radio propagation in tunnels, and presents a unified ray tracing and modal method for modeling radio propagation in tunnels with rough walls. First, general analytical formulas for modeling the influence of the wall roughness are derived, based on the modal method and the ray tracing method, respectively. Second, the equivalence of the ray tracing and modal methods in the presence of wall roughnesses is mathematically proved, by showing that the ray tracing-based analytical formula can converge to the modal-based formula through the Poisson summation formula. The derivation and findings are verified by simulation results based on ray tracing and modal methods. PMID:28935995

  15. Analytical and numerical treatment of the heat conduction equation obtained via time-fractional distributed-order heat conduction law

    NASA Astrophysics Data System (ADS)

    Želi, Velibor; Zorica, Dušan

    2018-02-01

    A generalization of the heat conduction equation is obtained by considering the system of equations consisting of the energy balance equation and a fractional-order constitutive heat conduction law, assumed in the form of a distributed-order Cattaneo type. The Cauchy problem for the system of the energy balance equation and the constitutive heat conduction law is treated analytically through Fourier and Laplace integral transform methods, as well as numerically by the method of finite differences, through Adams-Bashforth and Grünwald-Letnikov schemes for approximating derivatives in the temporal domain and a leapfrog scheme for spatial derivatives. Numerical examples, showing the time evolution of temperature and heat flux spatial profiles, demonstrate the applicability and good agreement of both methods in cases of multi-term and power-type distributed-order heat conduction laws.
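
    For the temporal discretization named above, the hedged sketch below implements the basic Grünwald-Letnikov weights and checks them against the known Riemann-Liouville derivative of f(t) = t², namely 2t^(2-α)/Γ(3-α); it is the textbook scheme only, not the authors' full distributed-order solver.

```python
import numpy as np
from math import gamma

# Hedged sketch: basic Gruenwald-Letnikov fractional derivative of order alpha,
#   D^alpha f(t_i) ~ h**(-alpha) * sum_k w_k * f(t_{i-k}),
#   w_0 = 1,  w_k = w_{k-1} * (1 - (alpha + 1)/k).

def gl_derivative(f_vals, h, alpha):
    n = len(f_vals)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.empty(n)
    for i in range(n):
        out[i] = np.dot(w[:i + 1], f_vals[i::-1]) / h**alpha
    return out

alpha, h = 0.5, 1e-3
t = np.arange(0.0, 1.0 + h, h)
numeric = gl_derivative(t**2, h, alpha)
exact = 2.0 * t**(2.0 - alpha) / gamma(3.0 - alpha)
print("max abs error:", np.max(np.abs(numeric - exact)))
```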

  16. Analytical connection between thresholds and immunization strategies of SIS model in random networks

    NASA Astrophysics Data System (ADS)

    Zhou, Ming-Yang; Xiong, Wen-Man; Liao, Hao; Wang, Tong; Wei, Zong-Wen; Fu, Zhong-Qian

    2018-05-01

    Devising effective strategies for hindering the propagation of viruses and protecting the population against epidemics is critical for public security and health. Despite a number of studies based on the susceptible-infected-susceptible (SIS) model devoted to this topic, we still lack a general framework to compare different immunization strategies in completely random networks. Here, we address this problem by suggesting a novel method based on heterogeneous mean-field theory for the SIS model. Our method builds the relationship between the thresholds and different immunization strategies in completely random networks. Besides, we provide an analytical argument that the targeted large-degree strategy achieves the best performance in random networks with arbitrary degree distribution. Moreover, the experimental results demonstrate the effectiveness of the proposed method in both artificial and real-world networks.
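
    A crude numeric illustration of the heterogeneous mean-field threshold λ_c = ⟨k⟩/⟨k²⟩, and of why degree-targeted immunization helps, is sketched below; it simply truncates a synthetic degree sequence and ignores the edge-removal corrections and exact framework treated in the paper.

```python
import numpy as np

# Hedged illustration: heterogeneous mean-field SIS threshold <k>/<k^2> before
# and after immunizing the top 2% highest-degree nodes (the degree sequence and
# the truncation shortcut are illustrative only).

rng = np.random.default_rng(1)
degrees = np.round(rng.pareto(2.5, size=100_000) * 3 + 3).astype(int)

def hmf_threshold(k):
    k = k.astype(float)
    return k.mean() / (k**2).mean()

print("no immunization  :", hmf_threshold(degrees))
cut = np.quantile(degrees, 0.98)
print("targeted, top 2% :", hmf_threshold(degrees[degrees < cut]))
```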

  17. Analysis of high-aspect-ratio jet-flap wings of arbitrary geometry

    NASA Technical Reports Server (NTRS)

    Lissaman, P. B. S.

    1973-01-01

    An analytical technique to compute the performance of an arbitrary jet-flapped wing is developed. The solution technique is based on the method of Maskell and Spence in which the well-known lifting-line approach is coupled with an auxiliary equation providing the extra function needed in jet-flap theory. The present method is generalized to handle straight, uncambered wings of arbitrary planform, twist, and blowing (including unsymmetrical cases). An analytical procedure is developed for continuous variations in the above geometric data with special functions to exactly treat discontinuities in any of the geometric and blowing data. A rational theory for the effect of finite wing thickness is introduced as well as simplified concepts of effective aspect ratio for rapid estimation of performance.

  18. Approximate analytical solutions in the analysis of thin elastic plates

    NASA Astrophysics Data System (ADS)

    Goloskokov, Dmitriy P.; Matrosov, Alexander V.

    2018-05-01

    Two approaches to the construction of approximate analytical solutions for bending of a rectangular thin plate are presented: the superposition method based on the method of initial functions (MIF) and the one built using the Green's function in the form of orthogonal series. Comparison of two approaches is carried out by analyzing a square plate clamped along its contour. Behavior of the moment and the shear force in the neighborhood of the corner points is discussed. It is shown that both solutions give identical results at all points of the plate except for the neighborhoods of the corner points. There are differences in the values of bending moments and generalized shearing forces in the neighborhoods of the corner points.

  19. A chemodynamic approach for estimating losses of target organic chemicals from water during sample holding time

    USGS Publications Warehouse

    Capel, P.D.; Larson, S.J.

    1995-01-01

    Minimizing the loss of target organic chemicals from environmental water samples between the time of sample collection and isolation is important to the integrity of an investigation. During this sample holding time, there is a potential for analyte loss through volatilization from the water to the headspace, sorption to the walls and cap of the sample bottle, and transformation through biotic and/or abiotic reactions. This paper presents a chemodynamic-based, generalized approach to estimate the most probable loss processes for individual target organic chemicals. The basic premise is that the investigator must know which loss process(es) are important for a particular analyte, based on its chemodynamic properties, when choosing the appropriate method(s) to prevent loss.
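
    One screening-level estimate implied by this chemodynamic reasoning is the equilibrium fraction of analyte lost to the bottle headspace, computed from a dimensionless Henry's law constant H' (= C_air/C_water) and the headspace and water volumes; the sketch below is a hedged illustration with rough, approximate H' values.

```python
# Hedged sketch: equilibrium headspace partitioning during sample holding.
#   fraction_in_headspace = H' * V_air / (H' * V_air + V_water)

def headspace_fraction(H_dimensionless, v_air_mL, v_water_mL):
    return (H_dimensionless * v_air_mL) / (H_dimensionless * v_air_mL + v_water_mL)

# 1-L bottle with 5 mL of headspace; H' values are rough, illustrative figures
for name, H in [("volatile analyte (H' ~ 0.15, e.g. chloroform)", 0.15),
                ("non-volatile analyte (H' ~ 1e-7, e.g. atrazine)", 1e-7)]:
    print(name, "->", headspace_fraction(H, v_air_mL=5.0, v_water_mL=995.0))
```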

  20. Recent developments in urinalysis of metabolites of new psychoactive substances using LC-MS.

    PubMed

    Peters, Frank T

    2014-08-01

    In the last decade, an ever-increasing number of new psychoactive substances (NPSs) have appeared on the recreational drug market. To account for this development, analytical toxicologists have to continuously adapt their methods to encompass the latest NPSs. Urine is the preferred biological matrix for screening analysis in different areas of analytical toxicology. However, the development of urinalysis procedures for NPSs is complicated by the fact that generally little or no information on urinary excretion patterns of such drugs exists when they first appear on the market. Metabolism studies are therefore a prerequisite in the development of urinalysis methods for NPSs. In this article, the literature on the urinalysis of NPS metabolites will be reviewed, focusing on articles published after 2008.

  1. Quantitative DNA fiber mapping

    DOEpatents

    Gray, Joe W.; Weier, Heinz-Ulrich G.

    1998-01-01

    The present invention relates generally to the DNA mapping and sequencing technologies. In particular, the present invention provides enhanced methods and compositions for the physical mapping and positional cloning of genomic DNA. The present invention also provides a useful analytical technique to directly map cloned DNA sequences onto individual stretched DNA molecules.

  2. 21 CFR 570.35 - Affirmation of generally recognized as safe (GRAS) status.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... to the NAS-NRC GRAS list survey (36 FR 20546), shall submit a petition for GRAS affirmation pursuant... included where applicable.) (g) Quantitative compositions. (h) Manufacturing process (excluding any trade... quantitative methods for determining the substance(s) in food, including the type of analytical procedures used...

  3. 21 CFR 570.35 - Affirmation of generally recognized as safe (GRAS) status.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... to the NAS-NRC GRAS list survey (36 FR 20546), shall submit a petition for GRAS affirmation pursuant... included where applicable.) (g) Quantitative compositions. (h) Manufacturing process (excluding any trade... quantitative methods for determining the substance(s) in food, including the type of analytical procedures used...

  4. 21 CFR 570.35 - Affirmation of generally recognized as safe (GRAS) status.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... to the NAS-NRC GRAS list survey (36 FR 20546), shall submit a petition for GRAS affirmation pursuant... included where applicable.) (g) Quantitative compositions. (h) Manufacturing process (excluding any trade... quantitative methods for determining the substance(s) in food, including the type of analytical procedures used...

  5. 21 CFR 570.35 - Affirmation of generally recognized as safe (GRAS) status.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... to the NAS-NRC GRAS list survey (36 FR 20546), shall submit a petition for GRAS affirmation pursuant... included where applicable.) (g) Quantitative compositions. (h) Manufacturing process (excluding any trade... quantitative methods for determining the substance(s) in food, including the type of analytical procedures used...

  6. 21 CFR 570.35 - Affirmation of generally recognized as safe (GRAS) status.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... to the NAS-NRC GRAS list survey (36 FR 20546), shall submit a petition for GRAS affirmation pursuant... included where applicable.) (g) Quantitative compositions. (h) Manufacturing process (excluding any trade... quantitative methods for determining the substance(s) in food, including the type of analytical procedures used...

  7. Cholesterol and Plants

    ERIC Educational Resources Information Center

    Behrman, E. J.; Gopalan, Venkat

    2005-01-01

    There is a widespread belief among the public, and even among chemists, that plants do not contain cholesterol. This wrong belief is the result of the fact that plants generally contain only small quantities of cholesterol and that analytical methods for the detection of cholesterol in this range were not developed until recently.

  8. Advanced bridge safety initiative, task 1 : development of improved analytical load rating procedures for flat-slab concrete bridges - a thesis and guidelines.

    DOT National Transportation Integrated Search

    2010-01-01

    Current AASHTO provisions for the conventional load rating of flat-slab bridges rely on the equivalent strip method of analysis for determining live load effects; this is generally regarded as overly conservative by many professional engineers. A...

  9. Current Status of Mycotoxin Analysis: A Critical Review.

    PubMed

    Shephard, Gordon S

    2016-07-01

    It is over 50 years since the discovery of aflatoxins focused the attention of food safety specialists on fungal toxins in the feed and food supply. Since then, analysis of this important group of natural contaminants has advanced in parallel with general developments in analytical science, and current MS methods are capable of simultaneously analyzing hundreds of compounds, including mycotoxins, pesticides, and drugs. This profusion of data may advance our understanding of human exposure, yet constitutes an interpretive challenge to toxicologists and food safety regulators. Despite these advances in analytical science, the basic problem of the extreme heterogeneity of mycotoxin contamination, although now well understood, cannot be circumvented. The real health challenges posed by mycotoxin exposure occur in the developing world, especially among small-scale and subsistence farmers. Addressing these problems requires innovative approaches in which analytical science must also play a role in providing suitable out-of-laboratory analytical techniques.

  10. Analytical formulation of cellular automata rules using data models

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Handley, James W.

    2009-05-01

    We present a unique method for converting traditional cellular automata (CA) rules into analytical function form. CA rules have been successfully used for morphological image processing and volumetric shape recognition and classification. Further, the use of CA rules as analog models in the physical and biological sciences can be significantly extended if analytical (as opposed to discrete) models can be formulated. We show that such transformations are possible. We use as our example John Horton Conway's famous "Game of Life" rule set. We show that, using Data Modeling, we are able to derive both polynomial and bi-spectrum models of the IF-THEN rules that yield equivalent results. Further, we demonstrate that the "Game of Life" rule set can be modeled using the multi-fluxion, yielding a closed-form nth-order derivative and integral. All of the demonstrated analytical forms of the CA rule are general and applicable to real-time use.
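
    In the same spirit (though not the authors' Data Model machinery), the hedged sketch below writes the Game of Life IF-THEN rule as a function of the cell state c and the live-neighbour sum s, and reproduces it exactly with two closed-form interpolating polynomials.

```python
import numpy as np
from scipy.interpolate import lagrange

# Hedged sketch: the Game of Life rule as an analytical (polynomial) function
# of cell state c (0/1) and neighbour sum s (0..8), built by exact Lagrange
# interpolation of the rule table -- an illustration, not the paper's models.

s = np.arange(9)
rule = lambda c, s: np.where((s == 3) | ((c == 1) & (s == 2)), 1, 0)

p_dead = lagrange(s, rule(0, s))      # degree-8 interpolant for a dead cell
p_live = lagrange(s, rule(1, s))      # degree-8 interpolant for a live cell

def life_analytic(c, s):
    return (1 - c) * p_dead(s) + c * p_live(s)

for c in (0, 1):
    assert np.allclose(np.round(life_analytic(c, s)), rule(c, s))
print("closed-form polynomial representation matches the Game of Life rule table")
```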

  11. A Higher Order Iterative Method for Computing the Drazin Inverse

    PubMed Central

    Soleymani, F.; Stanimirović, Predrag S.

    2013-01-01

    A method with high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method could be used for finding the Drazin inverse. The application of the scheme on large sparse test matrices alongside the use in preconditioning of linear system of equations will be presented to clarify the contribution of the paper. PMID:24222747
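
    For context, the hedged sketch below implements only the classical second-order member of this family of iterations (Newton-Schulz), with the usual safe starting guess; the authors' higher-order scheme and its Drazin-inverse extension are not reproduced.

```python
import numpy as np

# Hedged sketch: second-order Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k)
# for an approximate inverse of a nonsingular A, started from
# X_0 = A^T / (||A||_1 * ||A||_inf), which guarantees convergence.

def newton_schulz_inverse(A, tol=1e-12, max_iter=100):
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(max_iter):
        X_next = X @ (2.0 * I - A @ X)
        if np.linalg.norm(X_next - X, np.inf) < tol:
            return X_next
        X = X_next
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.allclose(newton_schulz_inverse(A) @ A, np.eye(2)))   # True
```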

  12. Shape optimization using a NURBS-based interface-enriched generalized FEM

    DOE PAGES

    Najafi, Ahmad R.; Safdari, Masoud; Tortorelli, Daniel A.; ...

    2016-11-26

    This study presents a gradient-based shape optimization over a fixed mesh using a non-uniform rational B-splines-based interface-enriched generalized finite element method, applicable to multi-material structures. In the proposed method, non-uniform rational B-splines are used to parameterize the design geometry precisely and compactly by a small number of design variables. An analytical shape sensitivity analysis is developed to compute derivatives of the objective and constraint functions with respect to the design variables. Subtle but important new terms involve the sensitivity of shape functions and their spatial derivatives. As a result, verification and illustrative problems are solved to demonstrate the precision and capability of the method.

  13. 7 CFR 90.2 - General terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... agency, or other agency, organization or person that defines in the general terms the basis on which the... analytical data using proficiency check sample or analyte recovery techniques. In addition, the certainty.... Quality control. The system of close examination of the critical details of an analytical procedure in...

  14. Long-term variability in sugarcane bagasse feedstock compositional methods: Sources and magnitude of analytical variability

    DOE PAGES

    Templeton, David W.; Sluiter, Justin B.; Sluiter, Amie; ...

    2016-10-18

    In an effort to find economical, carbon-neutral transportation fuels, biomass feedstock compositional analysis methods are used to monitor, compare, and improve biofuel conversion processes. These methods are empirical, and the analytical variability seen in the feedstock compositional data propagates into variability in the conversion yields, component balances, mass balances, and ultimately the minimum ethanol selling price (MESP). We report the average composition and standard deviations of 119 individually extracted National Institute of Standards and Technology (NIST) bagasse [Reference Material (RM) 8491] run by seven analysts over 7 years. Two additional datasets, using bulk-extracted bagasse (containing 58 and 291 replicates each), were examined to separate out the effects of batch, analyst, sugar recovery standard calculation method, and extractions from the total analytical variability seen in the individually extracted dataset. We believe this is the world's largest NIST bagasse compositional analysis dataset and it provides unique insight into the long-term analytical variability. Understanding the long-term variability of the feedstock analysis will help determine the minimum difference that can be detected in yield, mass balance, and efficiency calculations. The long-term data show consistent bagasse component values through time and by different analysts. This suggests that the standard compositional analysis methods were performed consistently and that the bagasse RM itself remained unchanged during this time period. The long-term variability seen here is generally higher than short-term variabilities. It is worth noting that the effect of short-term or long-term feedstock compositional variability on MESP is small, about $0.03 per gallon. The long-term analysis variabilities reported here are plausible minimum values for these methods, though not necessarily average or expected variabilities. We must emphasize the importance of training and good analytical procedures needed to generate this data. As a result, when combined with a robust QA/QC oversight protocol, these empirical methods can be relied upon to generate high-quality data over a long period of time.

  16. Quantum networks in divergence-free circuit QED

    NASA Astrophysics Data System (ADS)

    Parra-Rodriguez, A.; Rico, E.; Solano, E.; Egusquiza, I. L.

    2018-04-01

    Superconducting circuits are one of the leading quantum platforms for quantum technologies. With growing system complexity, it is of crucial importance to develop scalable circuit models that contain the minimum information required to predict the behaviour of the physical system. Based on microwave engineering methods, divergent and non-divergent Hamiltonian models in circuit quantum electrodynamics have been proposed to explain the dynamics of superconducting quantum networks coupled to infinite-dimensional systems, such as transmission lines and general impedance environments. Here, we study systematically common linear coupling configurations between networks and infinite-dimensional systems. The main result is that the simple Lagrangian models for these configurations present an intrinsic natural length that provides a natural ultraviolet cutoff. This length is due to the unavoidable dressing of the environment modes by the network. In this manner, the coupling parameters between their components correctly manifest their natural decoupling at high frequencies. Furthermore, we show the requirements to correctly separate infinite-dimensional coupled systems in local bases. We also compare our analytical results with other analytical and approximate methods available in the literature. Finally, we propose several applications of these general methods to analogue quantum simulation of multi-spin-boson models in non-perturbative coupling regimes.

  17. Numerical evaluation of electromagnetic fields due to dipole antennas in the presence of stratified media

    NASA Technical Reports Server (NTRS)

    Tsang, L.; Brown, R.; Kong, J. A.; Simmons, G.

    1974-01-01

    Two numerical methods are used to evaluate the integrals that express the EM fields due to dipole antennas radiating in the presence of a stratified medium. The first method is a direct integration by means of Simpson's rule. The second method is indirect and approximates the kernel of the integral by means of the fast Fourier transform. In contrast to previous analytical methods that applied only to two-layer cases, the numerical methods can be used for an arbitrary number of layers with general properties.
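    As an illustration of the first (direct) approach, a composite Simpson's rule can be written in a few lines. The integrand below is a generic decaying oscillatory kernel chosen only for the example; it is not one of the stratified-medium integrals from the report.

```python
# Illustrative only: composite Simpson's rule applied to a generic oscillatory integrand.
import numpy as np

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with an even number of intervals n."""
    if n % 2:
        n += 1
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

# Example: a decaying oscillatory kernel; the exact value of the untruncated
# integral of exp(-t)*cos(t) over [0, inf) is 1/2.
approx = simpson(lambda t: np.exp(-t) * np.cos(t), 0.0, 50.0)
print(approx)  # ~ 0.5
```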

  18. Environmental monitoring of phenolic pollutants in water by cloud point extraction prior to micellar electrokinetic chromatography.

    PubMed

    Stege, Patricia W; Sombra, Lorena L; Messina, Germán A; Martinez, Luis D; Silva, María F

    2009-05-01

    Many aromatic compounds can be found in the environment as a result of anthropogenic activities, and some of them are highly toxic. The need to determine low concentrations of pollutants requires analytical methods with high sensitivity, selectivity, and resolution for application to soil, sediment, water, and other environmental samples. Complex sample preparation involving analyte isolation and enrichment is generally necessary before the final analysis. The present paper outlines a novel, simple, low-cost, and environmentally friendly method for the simultaneous determination of p-nitrophenol (PNP), p-aminophenol (PAP), and hydroquinone (HQ) by micellar electrokinetic capillary chromatography after preconcentration by cloud point extraction. Enrichment factors of 180 to 200 were achieved. The limits of detection of the analytes for the preconcentration of a 50-mL sample volume were 0.10 µg/L for PNP, 0.20 µg/L for PAP, and 0.16 µg/L for HQ. The optimized procedure was applied to the determination of phenolic pollutants in natural waters from San Luis, Argentina.

  19. A review of the occurrence, analyses, toxicity, and biodegradation of naphthenic acids.

    PubMed

    Clemente, Joyce S; Fedorak, Phillip M

    2005-07-01

    Naphthenic acids occur naturally in crude oils and in oil sands bitumens. They are toxic components in refinery wastewaters and in oil sands extraction waters. In addition, there are many industrial uses for naphthenic acids, so there is a potential for their release to the environment from a variety of activities. Studies have shown that naphthenic acids are susceptible to biodegradation, which decreases their concentration and reduces toxicity. This is a complex group of carboxylic acids with the general formula CnH(2n+Z)O2, where n indicates the carbon number and Z specifies the hydrogen deficiency resulting from ring formation. Measuring the concentrations of naphthenic acids in environmental samples and determining the chemical composition of a naphthenic acids mixture are huge analytical challenges. However, new analytical methods are being applied to these problems and progress is being made to better understand this mixture of chemically similar compounds. This paper reviews a variety of analytical methods and their application to assessing biodegradation of naphthenic acids.
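    The general formula CnH(2n+Z)O2 is easy to turn into a quick calculator for candidate molecular formulas and approximate monoisotopic masses; the small helper below is illustrative only and is not taken from the review.

```python
# Quick illustration of the naphthenic acid general formula C_n H_(2n+Z) O_2:
# return the molecular formula and approximate monoisotopic mass for chosen n, Z.
MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915}   # monoisotopic masses, u

def naphthenic_acid(n, z):
    """Formula and mass for C_n H_(2n+Z) O_2; Z is 0, -2, -4, ... (hydrogen deficiency)."""
    n_h = 2 * n + z
    mass = n * MASS["C"] + n_h * MASS["H"] + 2 * MASS["O"]
    return f"C{n}H{n_h}O2", round(mass, 3)

# e.g. a one-ring (Z = -2) acid with 12 carbons
print(naphthenic_acid(12, -2))   # ('C12H22O2', ~198.162)
```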

  20. Magnetic Nanoparticles for Antibiotics Detection

    PubMed Central

    Cristea, Cecilia; Tertis, Mihaela; Galatus, Ramona

    2017-01-01

    Widespread use of antibiotics has led to pollution of waterways, potentially creating resistance among freshwater bacterial communities. Microorganisms resistant to commonly prescribed antibiotics ("superbugs") have increased dramatically over the last decades. The presence of antibiotics in water, food, and beverages, in both their un-metabolized and metabolized forms, is of interest for humans. This is due to daily exposure in small quantities that, when accumulated, could lead to the development of drug resistance to antibiotics or multiply the risk of allergic reaction. Conventional analytical methods used to quantify antibiotics are relatively expensive and generally require long analysis times, together with the difficulty of performing field analyses. In this context, electrochemical and optical sensing devices are of interest, offering great potential for a broad range of analytical applications. This review will focus on the application of magnetic nanoparticles in the design of different analytical methods, mainly sensors, used for the detection of antibiotics in different matrices (human fluids, environmental samples, and food and beverage samples). PMID:28538684

  1. The "Forgotten" Pseudomomenta and Gauge Changes in Generalized Landau Level Problems: Spatially Nonuniform Magnetic and Temporally Varying Electric Fields

    NASA Astrophysics Data System (ADS)

    Konstantinou, Georgios; Moulopoulos, Konstantinos

    2017-05-01

    By perceiving gauge invariance as an analytical tool for gaining insight into the states of the "generalized Landau problem" (a charged quantum particle moving inside a magnetic, and possibly electric, field), and motivated by an early article that correctly warns against a naive use of gauge transformation procedures in the usual Landau problem (i.e. with the magnetic field being static and uniform), we first show how to bypass the complications pointed out in that article by solving the problem in full generality through gauge transformation techniques in a more appropriate manner. Our solution provides in simple and closed analytical forms all Landau-level wavefunctions without the need to specify a particular vector potential. This we do by proper handling of the so-called pseudomomentum K (or of a quantity that we term pseudo-angular momentum L_z), a method that is crucially different from the old warning argument, but also from standard treatments in textbooks and in the research literature (where the usual Landau wavefunctions are employed, labeled with canonical momenta quantum numbers). Most importantly, we go further by showing that a similar procedure can be followed in the more difficult case of spatially nonuniform magnetic fields: in such a case we define K and L_z as plausible generalizations of the ordinary case, namely as appropriate line integrals of the inhomogeneous magnetic field, our method providing closed analytical expressions for all stationary-state wavefunctions in an easy manner and in a broad set of geometries and gauges. It can thus be viewed as complementary to the few existing works on inhomogeneous magnetic fields, which have so far mostly focused on determining the energy eigenvalues rather than the corresponding eigenkets (for which they have claimed that, even in the simplest cases, it is not possible to obtain the associated wavefunctions in closed form). The analytical forms derived here for these wavefunctions enable us to also provide explicit Berry's phase calculations and a quick study of their connection to probability currents and to some recent interesting issues in elementary Quantum Mechanics and Condensed Matter Physics. As an added feature, we also show how the possible presence of an additional electric field can be treated through a further generalization of the pseudomomenta and their proper handling.

  2. A method of predicting flow rates required to achieve anti-icing performance with a porous leading edge ice protection system

    NASA Technical Reports Server (NTRS)

    Kohlman, D. L.; Albright, A. E.

    1983-01-01

    An analytical method was developed for predicting the minimum flow rates required to provide anti-ice protection with a porous leading edge fluid ice protection system. The predicted flow rates agree, with an average error of less than 10 percent, with six experimentally determined flow rates from tests in the NASA Icing Research Tunnel on a general aviation wing section.

  3. Exact test-based approach for equivalence test with parameter margin.

    PubMed

    Cassie Dong, Xiaoyu; Bian, Yuanyuan; Tsong, Yi; Wang, Tianhua

    2017-01-01

    The equivalence test has a wide range of applications in pharmaceutical statistics, where we need to test for the similarity between two groups. In recent years, the equivalence test has been used in assessing the analytical similarity between a proposed biosimilar product and a reference product. More specifically, the mean values of the two products for a given quality attribute are compared against an equivalence margin of the form ±f × σ_R, which is a function of the reference variability. In practice, this margin is unknown and is estimated from the sample as ±f × S_R. If we use this estimated margin with the classic t-test statistic for the equivalence test on the means, both the Type I and Type II error rates may be inflated. To resolve this issue, we develop an exact test-based method and compare it with other proposed methods, such as the Wald test, the constrained Wald test, and the Generalized Pivotal Quantity (GPQ), in terms of Type I error rate and power. Application of these methods to data analysis is also provided in this paper. This work focuses on the development and discussion of the general statistical methodology and is not limited to the application of analytical similarity.
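    For orientation, the sketch below shows the naive two-one-sided-tests (TOST) approach with the plug-in margin ±f × S_R, i.e. the setting whose Type I error inflation motivates the exact test. It is not the exact test-based procedure developed in the paper; the function name, the value of f, and the pooled degrees of freedom are arbitrary choices for the example.

```python
# Sketch of the naive TOST equivalence test with an estimated margin +/- f*S_R.
import numpy as np
from scipy import stats

def tost_estimated_margin(test, ref, f=1.5, alpha=0.05):
    test, ref = np.asarray(test, float), np.asarray(ref, float)
    n_t, n_r = len(test), len(ref)
    diff = test.mean() - ref.mean()
    margin = f * ref.std(ddof=1)                  # plug-in margin from reference SD
    se = np.sqrt(test.var(ddof=1) / n_t + ref.var(ddof=1) / n_r)
    dof = n_t + n_r - 2                           # crude df; Welch df is also common
    t_lower = (diff + margin) / se                # H0: diff <= -margin
    t_upper = (margin - diff) / se                # H0: diff >= +margin
    t_crit = stats.t.ppf(1 - alpha, dof)
    return bool(t_lower > t_crit and t_upper > t_crit)   # True = conclude equivalence

rng = np.random.default_rng(0)
print(tost_estimated_margin(rng.normal(0, 1, 10), rng.normal(0, 1, 10)))
```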

  4. Noise Certification Predictions for FJX-2-Powered Aircraft Using Analytic Methods

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    1999-01-01

    Williams International Co. is currently developing the 700-pound-thrust-class FJX-2 turbofan engine for the General Aviation Propulsion Program's Turbine Engine Element. As part of the 1996 NASA-Williams cooperative working agreement, NASA agreed to analytically calculate the noise certification levels of the FJX-2-powered V-Jet II test bed aircraft. Although the V-Jet II is a demonstration aircraft that is unlikely to be produced and certified, the noise results presented here may be considered representative of the noise levels of small, general aviation jet aircraft that the FJX-2 would power. A single-engine variant of the V-Jet II, the V-Jet I concept airplane, is also considered. Reported in this paper are the analytically predicted FJX-2/V-Jet noise levels appropriate for Federal Aviation Regulation certification. Also reported are FJX-2/V-Jet noise levels using noise metrics appropriate for the propeller-driven aircraft that will be its major market competition, as well as a sensitivity analysis of the certification noise levels to major system uncertainties.

  5. Pharmacokinetics of reduced iso-α-acids in volunteers following clear bottled beer consumption.

    PubMed

    Rodda, Luke N; Gerostamoulos, Dimitri; Drummer, Olaf H

    2015-05-01

    Reduced iso-α-acids (reduced IAA), consisting of the rho-, tetrahydro- and hexahydro-IAA groups (RIAA, TIAA and HIAA, respectively), are ingredient congeners specific to beer and generally found in clear, and occasionally green, bottled beer. Concentrations of reduced IAA were determined in the blood and urine of five volunteers over 6 h following the consumption of small volumes of beer containing each of the reduced IAA. The reduced IAA were absorbed and bioavailable, with peak concentrations at 0.5 h followed by a drop of generally fivefold by 2 h. Preliminary pharmacokinetics of these compounds in humans show relatively small inter-individual differences and an estimated short half-life varying between ∼38 and 46 min for the three groups. Comparison of RIAA analyte ratios within the group indicates that some analytes are eliminated relatively faster than others, and the formation of metabolite products was observed. Preliminary urine analysis showed that only unmodified RIAA analytes were detectable throughout 6 h, suggesting extensive phase I metabolism of the TIAA and HIAA analytes. In authentic forensic casework where clear or green bottled beers are consumed, the identification of reduced IAA groups may provide a novel method to target ingredient congeners consistent with beer ingestion and suggest the type of beer consumed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Local fields and effective conductivity tensor of ellipsoidal particle composite with anisotropic constituents

    NASA Astrophysics Data System (ADS)

    Kushch, Volodymyr I.; Sevostianov, Igor; Giraud, Albert

    2017-11-01

    An accurate semi-analytical solution of the conductivity problem for a composite with an anisotropic matrix and arbitrarily oriented anisotropic ellipsoidal inhomogeneities has been obtained. The developed approach combines the superposition principle with the multipole expansion of the perturbation fields of the inhomogeneities in terms of ellipsoidal harmonics and reduces the boundary value problem to an infinite system of linear algebraic equations for the induced multipole moments of the inhomogeneities. A complete full-field solution is obtained for multi-particle models comprising inhomogeneities of diverse shape, size, orientation and properties, which enables an adequate account of the microstructure parameters. The solution is valid for general-type anisotropy of the constituents and arbitrary orientation of the orthotropy axes. The effective conductivity tensor of the particulate composite with anisotropic constituents is evaluated in the framework of the generalized Maxwell homogenization scheme. Application of the developed method to composites with imperfect ellipsoidal interfaces is straightforward; their incorporation yields probably the most general model of a composite that may be considered within an analytical approach.
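    For reference, in the simplest special case (isotropic phases, spherical inhomogeneities of volume fraction φ, matrix conductivity k_m, inclusion conductivity k_i), the Maxwell homogenization scheme reduces to the classical Maxwell-Garnett estimate below; the anisotropic ellipsoidal treatment in the paper generalizes this through the induced multipole moments. The formula is quoted as background, not taken from the paper itself.

```latex
\[
  k_{\mathrm{eff}}
  \;=\;
  k_m\,\frac{k_i + 2k_m + 2\phi\,(k_i - k_m)}{k_i + 2k_m - \phi\,(k_i - k_m)}
\]
```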

  7. Linear stability and nonlinear analyses of traffic waves for the general nonlinear car-following model with multi-time delays

    NASA Astrophysics Data System (ADS)

    Sun, Dihua; Chen, Dong; Zhao, Min; Liu, Weining; Zheng, Linjiang

    2018-07-01

    In this paper, the general nonlinear car-following model with multi-time delays is investigated in order to describe the reactions of a vehicle to driving behavior. Platoon stability and string stability criteria are obtained for the general nonlinear car-following model. The Burgers equation and the Korteweg-de Vries (KdV) equation, together with their solitary wave solutions, are derived by adopting the reductive perturbation method. We investigate the properties of a typical optimal velocity model using both analytic and numerical methods, which estimates the impact of delays on the evolution of traffic congestion. The numerical results show that the stability of traffic flow is more sensitive to time delays in sensing relative movement than to time delays in sensing host motion.
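    As a reference point for the delay analysis, the classical no-delay result for the standard optimal velocity model is the long-wavelength string-stability condition below, where V is the optimal velocity function, h the steady-state headway, and a the driver sensitivity. This is quoted as standard background; the multi-time-delay criteria derived in the paper generalize this condition.

```latex
\[
  \dot{v}_n(t) \;=\; a\bigl[\,V(\Delta x_n(t)) - v_n(t)\,\bigr],
  \qquad
  \text{string stability (no delay):}\quad a \;>\; 2\,V'(h)
\]
```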

  8. Selectivity in analytical chemistry: two interpretations for univariate methods.

    PubMed

    Dorkó, Zsanett; Verbić, Tatjana; Horvai, George

    2015-01-01

    Selectivity is extremely important in analytical chemistry, but its definition is elusive despite continued efforts by professional organizations and individual scientists. This paper shows that the existing selectivity concepts for univariate analytical methods fall broadly into two classes: selectivity concepts based on measurement error and concepts based on response surfaces (the response surface being the 3D plot of the univariate signal as a function of analyte and interferent concentration, respectively). The strengths and weaknesses of the different definitions are analyzed and the contradictions between them unveiled. The error-based selectivity is very general and very safe, but its application to a range of samples (as opposed to a single sample) requires knowledge of some constraint on the possible sample compositions. The selectivity concepts based on the response surface are easily applied to linear response surfaces but may lead to difficulties and counterintuitive results when applied to nonlinear response surfaces. A particular advantage of this class of selectivity is that with linear response surfaces it can provide a concentration-independent measure of selectivity. In contrast, the error-based selectivity concept allows only a yes/no type decision about selectivity. Copyright © 2014 Elsevier B.V. All rights reserved.
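    A minimal sketch of the response-surface view for a linear univariate method: if signal = S_A·c_analyte + S_I·c_interferent + b, the sensitivity ratio S_A/S_I gives one concentration-independent selectivity measure. The notation and synthetic data below are illustrative and do not reproduce the paper's exact definitions.

```python
# Fit a linear response surface to synthetic calibration data and report the
# sensitivity ratio S_A / S_I as a concentration-independent selectivity measure.
import numpy as np

rng = np.random.default_rng(1)
c_a = rng.uniform(0, 10, 50)          # analyte concentrations (arbitrary units)
c_i = rng.uniform(0, 10, 50)          # interferent concentrations
signal = 2.0 * c_a + 0.1 * c_i + 0.5 + rng.normal(0, 0.05, 50)

X = np.column_stack([c_a, c_i, np.ones_like(c_a)])
(S_a, S_i, b), *_ = np.linalg.lstsq(X, signal, rcond=None)
print(f"S_A = {S_a:.3f}, S_I = {S_i:.3f}, selectivity S_A/S_I = {S_a / S_i:.1f}")
```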

  9. Olive oil authentication: A comparative analysis of regulatory frameworks with especial emphasis on quality and authenticity indices, and recent analytical techniques developed for their assessment. A review.

    PubMed

    Bajoub, Aadil; Bendini, Alessandra; Fernández-Gutiérrez, Alberto; Carrasco-Pancorbo, Alegría

    2018-03-24

    Over the last decades, olive oil quality and authenticity control has become an issue of great importance to consumers, suppliers, retailers, and regulators in both traditional and emerging olive oil producing countries, mainly due to the increasing worldwide popularity and the trade globalization of this product. Thus, in order to ensure olive oil authentication, various national and international laws and regulations have been adopted, although some of them are actually causing an enormous debate about the risk that they can represent for the harmonization of international olive oil trade standards. Within this context, this review was designed to provide a critical overview and comparative analysis of selected regulatory frameworks for olive oil authentication, with special emphasis on the quality and purity criteria considered by these regulation systems, their thresholds and the analytical methods employed for monitoring them. To complete the general overview, recent analytical advances to overcome drawbacks and limitations of the official methods to evaluate olive oil quality and to determine possible adulterations were reviewed. Furthermore, the latest trends on analytical approaches to assess the olive oil geographical and varietal origin traceability were also examined.

  10. Helios: Understanding Solar Evolution Through Text Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Randazzese, Lucien

    This proof-of-concept project focused on developing, testing, and validating a range of bibliometric, text analytic, and machine-learning based methods to explore the evolution of three photovoltaic (PV) technologies: Cadmium Telluride (CdTe), Dye-Sensitized solar cells (DSSC), and Multi-junction solar cells. The analytical approach to the work was inspired by previous work by the same team to measure and predict the scientific prominence of terms and entities within specific research domains. The goal was to create tools that could assist domain-knowledgeable analysts in investigating the history and path of technological developments in general, with a focus on analyzing step-function changes in performance, or "breakthroughs," in particular. The text-analytics platform developed during this project was dubbed Helios. The project relied on computational methods for analyzing large corpora of technical documents. For this project we ingested technical documents from the following sources into Helios: Thomson Scientific Web of Science (papers), the U.S. Patent & Trademark Office (patents), the U.S. Department of Energy (technical documents), the U.S. National Science Foundation (project funding summaries), and a hand-curated set of full-text documents from Thomson Scientific and other sources.

  11. Centrifugal ultrafiltration of human serum for improving immunoglobulin A quantification using attenuated total reflectance infrared spectroscopy.

    PubMed

    Elsohaby, Ibrahim; McClure, J Trenton; Riley, Christopher B; Bryanton, Janet; Bigsby, Kathryn; Shaw, R Anthony

    2018-02-20

    Attenuated total reflectance infrared (ATR-IR) spectroscopy is a simple, rapid and cost-effective method for the analysis of serum. However, the complex nature of serum remains a limiting factor to the reliability of this method. We investigated the benefits of coupling centrifugal ultrafiltration with ATR-IR spectroscopy for quantification of human serum IgA concentration. Human serum samples (n = 196) were analyzed for IgA using an immunoturbidimetric assay. ATR-IR spectra were acquired for whole serum samples and for the retentate (residue) reconstituted with saline following 300 kDa centrifugal ultrafiltration. IR-based analytical methods were developed for each of the two spectroscopic datasets, and the accuracy of the two methods was compared. The analytical methods were based upon partial least squares regression (PLSR) calibration models, one with 5 PLS factors (for whole serum) and the second with 9 PLS factors (for the reconstituted retentate). Comparison of the two sets of IR-based analytical results to reference IgA values revealed improvements in the Pearson correlation coefficient (from 0.66 to 0.76) and in the root mean squared error of prediction of the IR-based IgA concentrations (from 102 to 79 mg/dL) for the ultrafiltration retentate-based method as compared to the method built upon whole serum spectra. Depleting human serum of low molecular weight proteins using a 300 kDa centrifugal filter thus enhances the accuracy of IgA quantification by ATR-IR spectroscopy. Further evaluation and optimization of this general approach may ultimately lead to routine analysis of a range of high molecular-weight analytical targets that are otherwise unsuitable for IR-based analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
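    A minimal sketch of the kind of PLSR calibration described above, using scikit-learn on synthetic "spectra". The data, the single-peak spectral model, and the concentration range are made up for illustration; the real models used 5 and 9 PLS factors on measured ATR-IR spectra.

```python
# PLSR calibration sketch on synthetic spectra (assumed, not the study's data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
conc = rng.uniform(50, 400, 200)                          # "IgA", mg/dL (made up)
wavenumbers = np.linspace(0, 1, 300)
spectra = np.outer(conc, np.exp(-((wavenumbers - 0.5) ** 2) / 0.01))
spectra += rng.normal(0, 5.0, spectra.shape)              # measurement noise

X_tr, X_te, y_tr, y_te = train_test_split(spectra, conc, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)       # 5 latent factors
rmsep = mean_squared_error(y_te, pls.predict(X_te).ravel()) ** 0.5
print(f"RMSEP ~ {rmsep:.1f} mg/dL")
```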

  12. A Rapid Analytical Method for Determination of Aflatoxins in Plant-Derived Dietary Supplement and Cosmetic Oils

    PubMed Central

    Mahoney, Noreen; Molyneux, Russell J.

    2010-01-01

    Consumption of edible oils derived from conventional crop plants is increasing because they are generally regarded as more healthy alternatives to animal-based fats and oils. More recently there has been increased interest in the use of alternative specialty plant-derived oils, including those from tree nuts (almonds, pistachios and walnuts) and botanicals (borage, evening primrose and perilla), both for direct human consumption (e.g. as salad dressings) and for the preparation of cosmetics, soaps, and fragrance oils. This has raised the issue of whether or not exposure to aflatoxins can result from such oils. Although most crops are subject to analysis and control, it has generally been assumed that plant oils do not retain aflatoxins due to the high polarity and lipophobicity of these compounds. There is virtually no scientific evidence to support this supposition, and the available information is conflicting. To improve the safety and consistency of botanicals and dietary supplements, research is needed to establish whether or not oils used directly, or in the formulation of products, contain aflatoxins. A validated analytical method for the analysis of aflatoxins in plant-derived oils is essential in order to establish the safety of dietary supplements for consumption or cosmetic use that contain such oils. The aim of this research was therefore to develop an HPLC method applicable to a wide variety of oils from different plant sources spiked with aflatoxins, thereby providing a basis for a comprehensive project to establish an intra- and inter-laboratory validated analytical method for analysis of aflatoxins in dietary supplements and cosmetics formulated with plant oils. PMID:20235534

  13. A Generalized Approach to Forensic Dye Identification: Development and Utility of Reference Libraries.

    PubMed

    Groves, Ethan; Palenik, Skip; Palenik, Christopher S

    2018-04-18

    While color is arguably the most important optical property of evidential fibers, the actual dyestuffs responsible for its expression in them are, in forensic trace evidence examinations, rarely analyzed and still less often identified. This is due, primarily, to the exceedingly small quantities of dye present in a single fiber as well as to the fact that dye identification is a challenging analytical problem, even when large quantities are available for analysis. Among the practical reasons for this are the wide range of dyestuffs available (and the even larger number of trade names), the low total concentration of dyes in the finished product, the limited amount of sample typically available for analysis in forensic cases, and the complexity of the dye mixtures that may exist within a single fiber. Literature on the topic of dye analysis is often limited to a specific method, subset of dyestuffs, or an approach that is not applicable given the constraints of a forensic analysis. Here, we present a generalized approach to dye identification that (1) combines several robust analytical methods, (2) is broadly applicable to a wide range of dye chemistries, application classes, and fiber types, and (3) can be scaled down to forensic casework-sized samples. The approach is based on the development of a reference collection of 300 commercially relevant textile dyes that have been characterized by a variety of microanalytical methods (HPTLC, Raman microspectroscopy, infrared microspectroscopy, UV-Vis spectroscopy, and visible microspectrophotometry). Although there is no single approach that is applicable to all dyes on every type of fiber, a combination of these analytical methods has been applied using a reproducible approach that permits the use of reference libraries to constrain the identity of and, in many cases, identify the dye (or dyes) present in a textile fiber sample.

  14. Framework for event-based semidistributed modeling that unifies the SCS-CN method, VIC, PDM, and TOPMODEL

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.

    2016-09-01

    Hydrologists and engineers may choose from a range of semidistributed rainfall-runoff models such as VIC, PDM, and TOPMODEL, all of which predict runoff from a distribution of watershed properties. However, these models are not easily compared to event-based data and are missing ready-to-use analytical expressions analogous to the SCS-CN method. The SCS-CN method is an event-based model that describes the runoff response with a rainfall-runoff curve that is a function of the cumulative storm rainfall and the antecedent wetness condition. Here we develop an event-based probabilistic storage framework and distill semidistributed models into analytical, event-based expressions for describing the rainfall-runoff response. The event-based versions, called VICx, PDMx, and TOPMODELx, are also extended with a spatial description of the runoff concepts of "prethreshold" and "threshold-excess" runoff, which occur, respectively, before and after infiltration exceeds a storage capacity threshold. For given total storm rainfall and antecedent wetness conditions, the resulting ready-to-use analytical expressions define the source areas (fraction of the watershed) that produce runoff by each mechanism. They also define the probability density function (PDF) representing the spatial variability of runoff depths, which are cumulative values for the storm duration, and the average unit-area runoff, which describes the so-called runoff curve. These new event-based semidistributed models and the traditional SCS-CN method are unified by the same general expression for the runoff curve. Since the general runoff curve may incorporate different model distributions, it may ease the way for relating such distributions to land use, climate, topography, ecology, geology, and other characteristics.
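    For readers unfamiliar with the SCS-CN runoff curve referenced above, the classical form (with the common initial abstraction Ia = 0.2·S) is easy to state and compute; the sketch below uses SI units (mm) and a standard curve-number-to-retention conversion, and is background rather than the paper's unified expression.

```python
# Classical SCS-CN event runoff curve with Ia = 0.2*S (standard form, SI units).
def scs_cn_runoff(p_mm, cn):
    """Event runoff depth Q (mm) for storm rainfall P (mm) and curve number CN."""
    s = 25400.0 / cn - 254.0        # potential retention S, mm
    ia = 0.2 * s                    # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(80.0, 75))      # ~27 mm of runoff for an 80 mm storm at CN = 75
```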

  15. On analytic modeling of lunar perturbations of artificial satellites of the earth

    NASA Astrophysics Data System (ADS)

    Lane, M. T.

    1989-06-01

    Two different procedures for analytically modeling the effects of the moon's direct gravitational force on artificial earth satellites are discussed from theoretical and numerical viewpoints. One is developed using classical series expansions of inclination and eccentricity for both the satellite and the moon, and the other employs the method of averaging. Both solutions are seen to have advantages, but it is shown that while the former is more accurate in special situations, the latter is quicker and more practical for the general orbit determination problem where observed data are used to correct the orbit in near real time.

  16. Analytical solutions of the Dirac equation under Hellmann–Frost–Musulin potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onate, C.A., E-mail: oaclems14@physicist.net; Onyeaju, M.C.; Ikot, A.N.

    2016-12-15

    The approximate analytical solutions of the Dirac equation with the Hellmann–Frost–Musulin potential have been studied by using the generalized parametric Nikiforov–Uvarov (NU) method for arbitrary spin–orbit quantum number k under the spin and pseudospin symmetries. The Hellmann–Frost–Musulin potential is a superposition potential consisting of the Yukawa, Coulomb, and Frost–Musulin potentials. As a particular case, we found the energy levels of the non-relativistic limit of the spin symmetry. The energy equations of the Yukawa, Coulomb, Hellmann, and Frost–Musulin potentials are obtained. Energy values are generated for some diatomic molecules.

  17. Analytical techniques and method validation for the measurement of selected semivolatile and nonvolatile organofluorochemicals in air.

    PubMed

    Reagen, William K; Lindstrom, Kent R; Thompson, Kathy L; Flaherty, John M

    2004-09-01

    The widespread use of semi- and nonvolatile organofluorochemicals in industrial facilities, concern about their persistence, and relatively recent advancements in liquid chromatography/mass spectrometry (LC/MS) technology have led to the development of new analytical methods to assess potential worker exposure to airborne organofluorochemicals. Techniques were evaluated for the determination of 19 organofluorochemicals and for total fluorine in ambient air samples. Due to the potential biphasic nature of most of these fluorochemicals when airborne, Occupational Safety and Health Administration (OSHA) versatile sampler (OVS) tubes were used to simultaneously trap fluorochemical particulates and vapors from workplace air. Analytical methods were developed for OVS air samples to quantitatively analyze for total fluorine using oxygen bomb combustion/ion selective electrode and for 17 organofluorochemicals using LC/MS and gas chromatography/mass spectrometry (GC/MS). The experimental design for this validation was based on the National Institute of Occupational Safety and Health (NIOSH) Guidelines for Air Sampling and Analytical Method Development and Evaluation, with some revisions of the experimental design. The study design incorporated experiments to determine analytical recovery and stability, sampler capacity, the effect of some environmental parameters on recoveries, storage stability, limits of detection, precision, and accuracy. Fluorochemical mixtures were spiked onto each OVS tube over a range of 0.06-6 microg for each of 12 compounds analyzed by LC/MS and 0.3-30 microg for 5 compounds analyzed by GC/MS. These ranges allowed reliable quantitation at 0.001-0.1 mg/m3 in general for LC/MS analytes and 0.005-0.5 mg/m3 for GC/MS analytes when 60 L of air are sampled. The organofluorochemical exposure guideline (EG) is currently 0.1 mg/m3 for many analytes, with one exception being ammonium perfluorooctanoate (EG is 0.01 mg/m3). Total fluorine results may be used to determine if the individual compounds quantified provide a suitable mass balance of total airborne organofluorochemicals based on known fluorine content. Improvements in precision and/or recovery as well as some additional testing would be needed to meet all NIOSH validation criteria. This study provided valuable information about the accuracy of this method for organofluorochemical exposure assessment.
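    The quantitation ranges quoted above follow directly from the spiked masses and the sampled air volume. The back-of-the-envelope conversion below reproduces the stated LC/MS range (0.06-6 µg on the tube over 60 L of air gives 0.001-0.1 mg/m³); the helper name is arbitrary.

```python
# Airborne concentration = mass trapped on the OVS tube / volume of air sampled.
def air_conc_mg_per_m3(mass_ug, air_volume_l):
    return (mass_ug / 1000.0) / (air_volume_l / 1000.0)   # ug -> mg, L -> m^3

print(air_conc_mg_per_m3(0.06, 60))   # 0.001 mg/m^3 (low end of the LC/MS range)
print(air_conc_mg_per_m3(6.0, 60))    # 0.1 mg/m^3   (high end of the LC/MS range)
```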

  18. Development of an 19F NMR method for the analysis of fluorinated acids in environmental water samples.

    PubMed

    Ellis, D A; Martin, J W; Muir, D C; Mabury, S A

    2000-02-15

    This investigation was carried out to evaluate 19F NMR as an analytical tool for the measurement of trifluoroacetic acid (TFA) and other fluorinated acids in the aquatic environment. A method based upon strong anion exchange (SAX) chromatography was also optimized for the concentration of the fluoro acids prior to NMR analysis. Extraction of the analyte from the SAX column was carried out directly in the NMR solvent in the presence of the strong organic base DBU. The method allowed the analysis of the acid without any prior cleanup steps. Optimal NMR sensitivity based upon T1 relaxation times was investigated for seven fluorinated compounds in four different NMR solvents. The use of the relaxation agent chromium acetylacetonate, Cr(acac)3, within these solvent systems was also evaluated. Results show that the optimal NMR solvent differs for each fluorinated analyte. Cr(acac)3 was shown to have pronounced effects on the limits of detection of the analyte. Generally, the optimal sensitivity condition appears to be methanol-d4/2 M DBU in the presence of 4 mg/mL of Cr(acac)3. The method was validated through spike and recovery for five fluoro acids from environmentally relevant waters. Results are presented for the analysis of TFA in Toronto rainwater, which ranged from <16 to 850 ng/L. The NMR results were confirmed by GC-MS selected-ion monitoring of the fluoroanilide derivative.

  19. Analytical methods of the U.S. Geological Survey's New York District Water-Analysis Laboratory

    USGS Publications Warehouse

    Lawrence, Gregory B.; Lincoln, Tricia A.; Horan-Ross, Debra A.; Olson, Mark L.; Waldron, Laura A.

    1995-01-01

    The New York District of the U.S. Geological Survey (USGS) in Troy, N.Y., operates a water-analysis laboratory for USGS watershed-research projects in the Northeast that require analyses of precipitation and of dilute surface water and soil water for major ions; it also provides analyses of certain chemical constituents in soils and soil-gas samples. This report presents the methods for chemical analyses of water samples, soil-water samples, and soil-gas samples collected in watershed-research projects. The introduction describes the general materials and techniques for each method and explains the USGS quality-assurance program and data-management procedures; it also explains the use of cross-references to the three most commonly used methods manuals for analysis of dilute waters. The body of the report describes the analytical procedures for (1) solution analysis, (2) soil analysis, and (3) soil-gas analysis. The methods are presented in alphabetical order by constituent. The method for each constituent is preceded by (1) reference codes for pertinent sections of the three manuals mentioned above, (2) a list of the method's applications, and (3) a summary of the procedure. The methods section for each constituent contains the following categories: instrumentation and equipment, sample preservation and storage, reagents and standards, analytical procedures, quality control, maintenance, interferences, safety considerations, and references. Sufficient information is presented for each method to allow the resulting data to be appropriately used in environmental investigations.

  20. PROPOSED SIAM PROBLEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BAILEY, DAVID H.; BORWEIN, JONATHAN M.

    A recent paper by the present authors, together with mathematical physicists David Broadhurst and M. Larry Glasser, explored Bessel moment integrals, namely definite integrals of the general form ∫_0^∞ t^m f^n(t) dt, where the function f(t) is one of the classical Bessel functions. In that paper, numerous previously unknown analytic evaluations were obtained, using a combination of analytic methods together with some fairly high-powered numerical computations, often performed on highly parallel computers. In several instances, while we were able to numerically discover what appears to be a solid analytic identity, based on extremely high-precision numerical computations, we were unable to find a rigorous proof. Thus we present here a brief list of some of these unproven but numerically confirmed identities.
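    One member of this family that does have a simple closed form can be checked numerically in a couple of lines: ∫_0^∞ t K_0(t)^2 dt = 1/2, where K_0 is the modified Bessel function of the second kind. This is only a sanity check of the notation, not one of the unproven identities referred to above.

```python
# Numerical check of a simple Bessel moment: integral of t*K0(t)^2 over [0, inf) = 1/2.
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

value, err = quad(lambda t: t * kv(0, t) ** 2, 0, np.inf)
print(value, "vs exact", 0.5)
```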

  1. Flight and analytical investigations of a structural mode excitation system on the YF-12A airplane

    NASA Technical Reports Server (NTRS)

    Goforth, E. A.; Murphy, R. C.; Beranek, J. A.; Davis, R. A.

    1987-01-01

    A structural excitation system, using an oscillating canard vane to generate force, was mounted on the forebody of the YF-12A airplane. The canard vane was used to excite the airframe structural modes during flight in the subsonic, transonic, and supersonic regimes. Structural modal responses generated by the canard vane forces were measured at the flight test conditions by airframe-mounted accelerometers. Correlations of analytical and experimental aeroelastic results were made. Doublet lattice, steady-state doublet lattice with uniform lag, Mach box, and piston theory all produced acceptable analytical aerodynamic results within the restrictions that apply to each. In general, the aerodynamic theory methods, carefully applied, were found to predict the dynamic behavior of the YF-12A aircraft adequately.

  2. Analyzing Response Times in Tests with Rank Correlation Approaches

    ERIC Educational Resources Information Center

    Ranger, Jochen; Kuhn, Jorg-Tobias

    2013-01-01

    It is common practice to log-transform response times before analyzing them with standard factor analytical methods. However, sometimes the log-transformation is not capable of linearizing the relation between the response times and the latent traits. Therefore, a more general approach to response time analysis is proposed in the current…

  3. Oral Reading Fluency Growth: A Sample of Methodology and Findings. Research Brief 6

    ERIC Educational Resources Information Center

    Tindal, Gerald; Nese, Joseph F. T.

    2013-01-01

    For the past 20 years, the growth of students' oral reading fluency has been investigated by a number of researchers using curriculum-based measurement. These researchers have used varied methods (student samples, measurement procedures, and analytical techniques) and yet have converged on a relatively consistent finding: General education…

  4. Rapid Analytical Method for the Determination of Aflatoxins in Plant-Derived Dietary Supplement and Cosmetic Oils

    USDA-ARS?s Scientific Manuscript database

    Consumption of edible oils derived from conventional crop plants is increasing because they are generally regarded as more healthy alternatives to animal based fats and oils. More recently there has been increased interest in the use of alternative specialty plant-derived oils, including those from...

  5. Optimal convolution SOR acceleration of waveform relaxation with application to semiconductor device simulation

    NASA Technical Reports Server (NTRS)

    Reichelt, Mark

    1993-01-01

    In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
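    For orientation, the sketch below is ordinary, frequency-independent SOR applied to a small linear system; in the paper's convolution SOR the scalar relaxation parameter is replaced by a convolution kernel acting along the waveforms, which is not reproduced here. The matrix, right-hand side, and omega are arbitrary example values.

```python
# Ordinary SOR iteration for A x = b (illustrative system; omega chosen arbitrarily).
import numpy as np

def sor(a, b, omega=1.5, iters=200):
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            sigma = a[i, :i] @ x[:i] + a[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / a[i, i]
    return x

a = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(sor(a, b), "vs", np.linalg.solve(a, b))
```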

  6. Methods for the analysis of azo dyes employed in food industry--A review.

    PubMed

    Yamjala, Karthik; Nainar, Meyyanathan Subramania; Ramisetti, Nageswara Rao

    2016-02-01

    A wide variety of azo dyes are generally added for coloring food products, not only to make them visually aesthetic but also to reinstate the original appearance lost during the production process. However, many countries in the world have banned the use of most azo dyes in food, and their usage is highly regulated in domestic and export food supplies. Regulatory authorities and food analysts adopt highly sensitive and selective analytical methods for monitoring as well as assuring the quality and safety of food products. This manuscript presents a comprehensive review of the various analytical techniques used in the analysis of azo dyes employed in the food industries of different parts of the world. A brief description of the use of different extraction methods, such as liquid-liquid, solid-phase and membrane extraction, is also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Entropy generation in Gaussian quantum transformations: applying the replica method to continuous-variable quantum information theory

    NASA Astrophysics Data System (ADS)

    Gagatsos, Christos N.; Karanikas, Alexandros I.; Kordas, Georgios; Cerf, Nicolas J.

    2016-02-01

    In spite of their simple description in terms of rotations or symplectic transformations in phase space, quadratic Hamiltonians such as those modelling the most common Gaussian operations on bosonic modes remain poorly understood in terms of entropy production. For instance, determining the quantum entropy generated by a Bogoliubov transformation is notably a hard problem, with generally no known analytical solution, while it is vital to the characterisation of quantum communication via bosonic channels. Here we overcome this difficulty by adapting the replica method, a tool borrowed from statistical physics and quantum field theory. We exhibit a first application of this method to continuous-variable quantum information theory, where it enables accessing entropies in an optical parametric amplifier. As an illustration, we determine the entropy generated by amplifying a binary superposition of the vacuum and a Fock state, which yields a surprisingly simple, yet unknown analytical expression.
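    The non-Gaussian input treated above (a vacuum/Fock superposition) is what requires the replica machinery; for a Gaussian input the output entropy has the standard closed form in terms of the symplectic eigenvalue ν, written below as a small helper. The convention ν = 2n̄ + 1 for a thermal state with mean photon number n̄ is the usual one and is not taken from the paper.

```python
# von Neumann entropy (in nats) of a single-mode Gaussian state with symplectic
# eigenvalue nu; for a thermal state nu = 2*nbar + 1.
import numpy as np

def gaussian_entropy(nu):
    if nu <= 1.0:                       # pure state
        return 0.0
    return ((nu + 1) / 2) * np.log((nu + 1) / 2) - ((nu - 1) / 2) * np.log((nu - 1) / 2)

print(gaussian_entropy(2 * 0.5 + 1))    # thermal state with nbar = 0.5, ~0.95 nats
```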

  8. A General Simulation Method for Multiple Bodies in Proximate Flight

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    2003-01-01

    Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.

  9. Analytical estimation on divergence and flutter vibrations of symmetrical three-phase induction stator via field-synchronous coordinates

    NASA Astrophysics Data System (ADS)

    Xia, Ying; Wang, Shiyu; Sun, Wenjia; Xiu, Jie

    2017-01-01

    The electromagnetically induced parametric vibration of the symmetrical three-phase induction stator is examined. While it can be analyzed by an approximate analytical or numerical method, a more accurate and simpler analytical method is desirable. This work proposes a new method based on field-synchronous coordinates. A mechanical-electromagnetic coupling model is developed under this frame such that a time-invariant governing equation with a gyroscopic term can be obtained. With general vibration theory, the eigenvalue problem is formulated; the transition curves between the stable and unstable regions, and the response, are all determined as closed-form expressions of basic mechanical-electromagnetic parameters. The dependence of the instability behaviors on these parameters is demonstrated. The results imply that divergence and flutter instabilities can occur even for symmetrical motors with balanced, constant-amplitude, sinusoidal voltage. To verify the analytical predictions, this work also builds a time-variant model of the same system under the conventional inertial frame. Floquet theory is employed to predict the parametric instability, and numerical integration is used to obtain the parametric response. The parametric instability and response both compare well against those obtained under the field-synchronous coordinates. The proposed field-synchronous coordinate frame allows a quick estimation of the electromagnetically induced vibration. The convenience offered by body-fixed coordinates is discussed across various fields.
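    A generic way to classify divergence versus flutter for a small gyroscopic model M q'' + (C+G) q' + K q = 0 is to inspect the eigenvalues of its state-space matrix: a real eigenvalue with positive real part indicates divergence, a complex pair with positive real part indicates flutter. The matrices below are illustrative placeholders, not the stator model from the paper.

```python
# Stability check of a 2-DOF gyroscopic system via state-space eigenvalues.
import numpy as np

M = np.eye(2)
G = np.array([[0.0, 2.0], [-2.0, 0.0]])      # gyroscopic term (skew-symmetric)
C = 0.05 * np.eye(2)                          # light damping
K = np.array([[4.0, 0.0], [0.0, -1.0]])       # one negative stiffness entry

A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C + G)]])
eig = np.linalg.eigvals(A)
print(eig)
print("unstable" if np.any(eig.real > 1e-9) else "stable")
```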

  10. [Analysis of hot spots and trend of molecular pharmacognosy research based on project supported by National Natural Science Foundation of 1995-2014].

    PubMed

    Wang, Jun-Wen; Liu, Yang; Tong, Yuan-Yuan; Yang, Ce; Li, Hai-Yan

    2016-05-01

    This study collected the molecular pharmacognosy projects funded by the National Natural Science Foundation of China (NSFC) from 1995 to 2014, a total of 595 items. TDA and Excel software were used to analyze the general profile and research hot spots of the projects with ranking and correlation analysis methods. The number of NSFC-funded molecular pharmacognosy projects and the amount of funding increased gradually, while the proportion of these funds within pharmaceutical research funding tended to be stable. The projects mainly supported research on genuine medicinal materials, secondary metabolism, and germplasm resources using molecular biology methods. Frequently studied drugs included Radix Salviae Miltiorrhizae, Radix Rehmanniae, and Cordyceps sinensis, and hot topics included tanshinone biosynthesis and the continuous cropping obstacle of Rehmannia glutinosa. Copyright© by the Chinese Pharmaceutical Association.

  11. Two Approaches in the Lunar Libration Theory: Analytical vs. Numerical Methods

    NASA Astrophysics Data System (ADS)

    Petrova, Natalia; Zagidullin, Arthur; Nefediev, Yurii; Kosulin, Valerii

    2016-10-01

    Observation of the physical libration of the Moon and other celestial bodies is one of the astronomical methods to remotely evaluate the internal structure of a celestial body without resorting to expensive space experiments. A review of the results obtained through the study of physical libration is presented in the report. The main emphasis is placed on the description of successful lunar laser ranging for libration determination and on methods of simulating the physical libration. As a result, estimates of the viscoelastic and dissipative properties of the lunar body and of the lunar core parameters were obtained. The core's existence was confirmed by the recent reprocessing of seismic data from the Apollo missions. Attention is paid to the physical interpretation of the phenomenon of free libration and to methods of its determination. A significant part of the report is devoted to describing the practical application of the most accurate analytical tables of lunar libration to date, built by comprehensive analytical processing of the residual differences obtained when comparing long-term series of laser observations with the numerical ephemeris DE421 [1]. In general, the basic outline of the report reflects the effectiveness of two approaches in libration theory: the numerical and the analytical solution. It is shown that the two approaches complement each other in the study of the Moon in different aspects: the numerical approach provides the high accuracy of the theory necessary for adequate treatment of modern high-accuracy observations, while the analytical approach allows one to see the essence of the various manifestations in the lunar rotation and to predict and interpret new effects in observations of physical libration [2]. [1] Rambaux, N., J. G. Williams, 2011, The Moon's physical librations and determination of their free modes, Celest. Mech. Dyn. Astron., 109, 85-100. [2] Petrova, N., A. Zagidullin, Yu. Nefediev. Analysis of long-periodic variations of lunar libration parameters on the basis of analytical theory. The Russian-Japanese Workshop, 20-25 October 2014, Tokyo (Mitaka) - Mizusawa, Japan.

  12. The Abbott Architect c8000: analytical performance and productivity characteristics of a new analyzer applied to general chemistry testing.

    PubMed

    Pauli, Daniela; Seyfarth, Michael; Dibbelt, Leif

    2005-01-01

    Applying basic potentiometric and photometric assays, we evaluated the fully automated random access chemistry analyzer Architect c8000, a new member of the Abbott Architect system family, with respect to both its analytical and operational performance, and compared it to an established high-throughput chemistry platform, the Abbott Aeroset. Our results demonstrate that the intra- and inter-assay imprecision, inaccuracy, lower limit of detection and linear range of the c8000 generally meet the actual requirements of laboratory diagnosis; there were only rare exceptions, e.g. assays for plasma lipase or urine uric acid, which apparently need to be improved by additional rinsing of the reagent pipettors. Even with plasma exhibiting CK activities as high as 40,000 U/L, sample carryover by the c8000 could not be detected. Comparison of methods run on the c8000 and the Aeroset revealed correlation coefficients of 0.98-1.00; if identical chemistries were applied on both analyzers, the slopes of the regression lines approached unity. With typical laboratory workloads including 10-20% STAT samples and up to 10% samples with high analyte concentrations demanding dilutional reruns, steady-state throughput of 700 to 800 tests per hour was obtained with the c8000. The system generally responded to STAT orders within 2 minutes, yielding analytical STAT order completion times of 5 to 15 minutes depending on the type and number of assays requested per sample. Due to its extended test and sample processing capabilities and highly comfortable software, the c8000 may meet the varying needs of clinical laboratories rather well.

  13. Analytical, Characterization, and Stability Studies of Organic Chemical, Drugs, and Drug Formulation

    DTIC Science & Technology

    2014-05-21

    stability studies was maintained over the entire contract period to ensure the continued integrity of the drug in its clinical use. Because our ... facile automation. We demonstrated the method in principle, but were unable to remove the residual t-butanol to <0.5%. With additional research using ... to its use of ethylene oxide for sterilization, which is done in small batches. The generally recognized method of choice to produce a parenteral ...

  14. Automatic control systems satisfying certain general criterions on transient behavior

    NASA Technical Reports Server (NTRS)

    Boksenbom, Aaron S; Hood, Richard

    1952-01-01

    An analytic method for the design of automatic controls is developed that starts from certain arbitrary criterions on the behavior of the controlled system and gives those physically realizable equations that the control system can follow in order to realize this behavior. The criterions used are developed in the form of certain time integrals. General results are shown for systems of second order and of any number of degrees of freedom. Detailed examples for several cases in the control of a turbojet engine are presented.

  15. Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains

    NASA Astrophysics Data System (ADS)

    Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.

    2018-01-01

    We establish a link between the maximization of the Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of the KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics for computations that use random walks on graphs that can be represented as Markov chains.
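    Both quantities are straightforward to compute for a small chain: the KSE of a stationary Markov chain is h = -Σ_i π_i Σ_j P_ij log P_ij, and the spectral gap (one minus the second-largest eigenvalue modulus) is a standard proxy for the inverse mixing time. The transition matrix below is an arbitrary two-state example, not one from the paper.

```python
# Kolmogorov-Sinai entropy and spectral gap of a small Markov chain.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

log_p = np.log(P, where=P > 0, out=np.zeros_like(P))
kse = -np.sum(pi[:, None] * P * log_p)
gap = 1 - np.sort(np.abs(w))[-2]           # 1 - second largest eigenvalue modulus
print(f"KS entropy = {kse:.4f} nats/step, spectral gap = {gap:.3f}")
```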

  16. An evolution in listening: An analytical and critical study of structural, acoustic, and phenomenal aspects of selected works by Pauline Oliveros

    NASA Astrophysics Data System (ADS)

    Setar, Katherine Marie

    1997-08-01

    This dissertation analytically and critically examines composer Pauline Oliveros's philosophy of 'listening' as it applies to selected works created between 1961 and 1984. The dissertation is organized through the application of two criteria: three perspectives of listening (empirical, phenomenal, and, to a lesser extent, personal), and categories derived, in part, from her writings and interviews (improvisational, traditional, theatrical, electronic, meditational, and interactive). In general, Oliveros's works may be categorized by one of two listening perspectives. The 'empirical' listening perspective, which generally includes pure acoustic phenomenon, independent from human interpretation, is exemplified in the analyses of Sound Patterns (1961), OH HA AH (1968), and, to a lesser extent, I of IV (1966). The 'phenomenal' listening perspective, which involves the human interaction with the pure acoustic phenomenon, includes a critical examination of her post-1971 'meditation' pieces and an analytical and critical examination of her tonal 'interactive' improvisations in highly resonant space, such as Watertank Software (1984). The most pervasive element of Oliveros's stylistic evolution is her gradual change from the hierarchical aesthetic of the traditional composer, to one in which creative control is more equally shared by all participants. Other significant contributions by Oliveros include the probable invention of the 'meditation' genre, an emphasis on the subjective perceptions of musical participants as a means to greater musical awareness, her musical exploration of highly resonant space, and her pioneering work in American electronic music. Both analytical and critical commentary were applied to selective representative works from Oliveros's six compositional categories. The analytical methods applied to the Oliveros's works include Wayne Slawson's vowel/formant theory as described in his book, Sound Color, an original method of categorizing consonants as noise sources based upon the principles of the International Phonetic Association, traditional morphological analyses, linear-extrapolation analyses which are derived from Schenker's theory, and discussions of acoustic phenomena as they apply to such practices as 1960s electronic studio techniques and the dynamics of room acoustics.

  17. Kinematic synthesis of adjustable robotic mechanisms

    NASA Astrophysics Data System (ADS)

    Chuenchom, Thatchai

    1993-01-01

    Conventional hard automation, such as a linkage-based or cam-driven system, provides high-speed capability and repeatability but not the flexibility required in many industrial applications. Conventional mechanisms, which are typically single-degree-of-freedom systems, are increasingly being replaced by multi-degree-of-freedom, multi-actuator systems driven by logic controllers. Although this new trend in sophistication provides greatly enhanced flexibility, there are many instances where the flexibility needs are exaggerated and the associated complexity is unnecessary. Traditional mechanism-based hard automation, on the other hand, can neither fulfill multi-task requirements nor be cost-effective, mainly due to the lack of methods and tools to design in flexibility. This dissertation attempts to bridge this technological gap by developing Adjustable Robotic Mechanisms (ARMs), or 'programmable mechanisms', as a middle ground between high-speed hard automation and expensive serial jointed-arm robots. This research introduces the concept of adjustable robotic mechanisms for cost-effective manufacturing automation. A generalized analytical synthesis technique has been developed to support the computational design of ARMs and lays the theoretical foundation for the synthesis of adjustable mechanisms. The synthesis method developed in this dissertation, called generalized adjustable dyad and triad synthesis, advances the well-known Burmester theory in kinematics to a new level. While this method provides planar solutions, a novel patented scheme is utilized for converting prescribed three-dimensional motion specifications into sets of planar projections. This provides an analytical and computational tool for designing adjustable mechanisms that satisfy multiple sets of three-dimensional motion specifications. Several design issues were addressed, including adjustable parameter identification, branching defects, and mechanical errors. An efficient mathematical scheme for the identification of adjustable members was also developed. The analytical synthesis techniques developed in this dissertation were successfully implemented in a graphics-intensive, user-friendly computer program. A physical prototype of a general-purpose adjustable robotic mechanism has been constructed to serve as a proof-of-concept model.

  18. Nonlinear analysis of structures. [within framework of finite element method

    NASA Technical Reports Server (NTRS)

    Armen, H., Jr.; Levine, H.; Pifko, A.; Levy, A.

    1974-01-01

    The development of nonlinear analysis techniques within the framework of the finite-element method is reported. Although the emphasis is on nonlinearities associated with material behavior, a general treatment of geometric nonlinearity, alone or in combination with plasticity, is included, and applications are presented for a class of problems categorized as axisymmetric shells of revolution. The scope of the nonlinear analysis capabilities includes: (1) a membrane stress analysis, (2) bending and membrane stress analysis, (3) analysis of thick and thin axisymmetric bodies of revolution, (4) a general three-dimensional analysis, and (5) analysis of laminated composites. Applications of the methods are made to a number of sample structures. Correlation with available analytic or experimental data ranges from good to excellent.

  19. Determination of short chain carboxylic acids in vegetable oils and fats using ion exclusion chromatography electrospray ionization mass spectrometry.

    PubMed

    Viidanoja, Jyrki

    2015-02-27

    A new method for the quantification of short-chain C1-C6 carboxylic acids in vegetable oils and fats by liquid chromatography-mass spectrometry (LC-MS) has been developed. The method requires minor sample preparation and applies non-conventional electrospray ionization (ESI) liquid-phase chemistry. Samples are first dissolved in chloroform and then extracted using water that has been spiked with stable isotope-labeled internal standards, which are used for signal normalization and absolute quantification of selected acids. The analytes are separated using ion exclusion chromatography (IEC) and detected with electrospray ionization mass spectrometry (ESI-MS) as deprotonated molecules. Prior to ionization, the eluent, which contains hydrochloric acid, is modified post-column to ensure good ionization efficiency of the analytes. The average within-run and between-run precisions were generally below 8%. The accuracy was between 85% and 115% for most of the analytes. The lower limit of quantification (LLOQ) ranged from 0.006 to 7 mg/kg. It is shown that this method offers good selectivity in cases where UV detection fails to produce reliable results. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Surface Plasmon Resonance: New Biointerface Designs and High-Throughput Affinity Screening

    NASA Astrophysics Data System (ADS)

    Linman, Matthew J.; Cheng, Quan Jason

    Surface plasmon resonance (SPR) is a surface optical technique that measures minute changes in refractive index at a metal-coated surface. It has become increasingly popular in the study of biological and chemical analytes because of its label-free measurement capability. In addition, SPR allows for both quantitative and qualitative assessment of binding interactions in real time, making it ideally suited for probing weak interactions that are often difficult to study with other methods. This chapter presents biosensor developments of roughly the last three years that utilize SPR as the principal analytical technique, along with a concise background of the technique itself. While SPR has demonstrated many advantages, it is a nonselective method, and so building reproducible and functional interfaces is vital to sensing applications. This chapter therefore focuses mainly on unique surface chemistries and assay approaches for examining biological interactions with SPR. In addition, SPR imaging for high-throughput screening based on microarrays and novel hyphenated techniques involving the coupling of SPR to other analytical methods are discussed. The chapter concludes with a commentary on the current state of SPR biosensing technology and the general direction of future biosensor research.

  1. Convective heat transfer for a gaseous slip flow in micropipe and parallel-plate microchannel with uniform wall heat flux: effect of axial heat conduction

    NASA Astrophysics Data System (ADS)

    Haddout, Y.; Essaghir, E.; Oubarra, A.; Lahjomri, J.

    2017-12-01

    Thermally developing laminar slip flow through a micropipe and a parallel-plate microchannel, with axial heat conduction and uniform wall heat flux, is studied analytically by using a powerful self-adjoint formalism. This method results from a decomposition of the elliptic energy equation into a system of two first-order partial differential equations. The advantage of this method over other methods resides in the fact that the decomposition procedure leads to a self-adjoint problem, although the initial problem is apparently not a self-adjoint one. The solution is an extension of prior studies and considers first-order slip boundary conditions at the fluid-wall interface. The analytical expressions for the developing temperature and the local Nusselt number in the thermal entrance region are obtained in the general case. Therefore, the solution obtained can be extended easily to any hydrodynamically developed flow and arbitrary heat flux distribution. The analytical results are compared, for selected simplified cases, with available numerical calculations, and good agreement is found. The results show that the heat transfer characteristics of the flow in the thermal entrance region are strongly influenced by the axial heat conduction and rarefaction effects, which are characterized by the Péclet and Knudsen numbers, respectively.

  2. Application of isotope dilution mass spectrometry: determination of ochratoxin A in the Canadian Total Diet Study

    PubMed Central

    Tam, J.; Pantazopoulos, P.; Scott, P.M.; Moisey, J.; Dabeka, R.W.; Richard, I.D.K.

    2011-01-01

    Analytical methods are generally developed and optimized for specific commodities. Total Diet Studies, representing typical food products ‘as consumed’, pose an analytical challenge since every food product is different. In order to address this technical challenge, a selective and sensitive analytical method was developed that is suitable for the quantitation of ochratoxin A (OTA) in Canadian Total Diet Study composites. The method uses an acidified solvent extraction, an immunoaffinity column (IAC) for clean-up, liquid chromatography-tandem mass spectrometry (LC-MS/MS) for identification and quantification, and a uniformly stable-isotope-labelled OTA (U-[13C20]-OTA) as an internal recovery standard; results are corrected for this standard. The method is accurate (101% average recovery) and precise (5.5% relative standard deviation (RSD)) based on 17 duplicate analyses of various food products over 2 years. A total of 140 diet composites were analysed for OTA as part of the Canadian Total Diet Study. Samples were collected at the retail level in two Canadian cities, Quebec City and Calgary, in 2008 and 2009, respectively. The results indicate that 73% (102/140) of the samples had detectable levels of OTA, with some of the highest levels of OTA contamination found in the Canadian bread supply. PMID:21623499

  4. Application of vector-valued rational approximations to the matrix eigenvalue problem and connections with Krylov subspace methods

    NASA Technical Reports Server (NTRS)

    Sidi, Avram

    1992-01-01

    Let F(z) be a vector-valued function F: C → C^N that is analytic at z=0 and meromorphic in a neighborhood of z=0, and let its Maclaurin series be given. We use vector-valued rational approximation procedures for F(z), based on its Maclaurin series in conjunction with power iterations, to develop bona fide generalizations of the power method for an arbitrary N x N matrix that may or may not be diagonalizable. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding invariant subspaces, and a detailed convergence theory is presented for them. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory provides a set of completely new results and constructions for these Krylov subspace methods. At the same time, this theory suggests a new mode of usage for these Krylov subspace methods that was observed to possess computational advantages over their common mode of usage.
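
    Because the abstract identifies these generalized power methods with Krylov subspace methods such as Arnoldi's, a minimal textbook Arnoldi sketch is given below as a point of reference (it is not the paper's vector-valued rational approximation procedure); the test matrix and its spectrum are arbitrary choices made for illustration.

    ```python
    import numpy as np

    def arnoldi(A, v0, m):
        """Arnoldi iteration: orthonormal Krylov basis Q and upper-Hessenberg H with A @ Q[:, :m] = Q @ H."""
        n = A.shape[0]
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        Q[:, 0] = v0 / np.linalg.norm(v0)
        for j in range(m):
            w = A @ Q[:, j]
            for i in range(j + 1):                    # modified Gram-Schmidt orthogonalization
                H[i, j] = Q[:, i] @ w
                w = w - H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            Q[:, j + 1] = w / H[j + 1, j]             # assumes no breakdown for this random example
        return Q, H

    rng = np.random.default_rng(1)
    n = 200
    true_eigs = np.concatenate([[10.0, -8.0, 6.0, 5.0], rng.uniform(-1, 1, n - 4)])
    V = rng.standard_normal((n, n))
    A = V @ np.diag(true_eigs) @ np.linalg.inv(V)     # known spectrum, well-separated dominant eigenvalues

    m = 30
    Q, H = arnoldi(A, rng.standard_normal(n), m)
    ritz = np.linalg.eigvals(H[:m, :m])               # Ritz values approximate the outer eigenvalues
    print("largest Ritz values :", np.round(ritz[np.argsort(-np.abs(ritz))][:4], 4))
    print("true dominant values:", true_eigs[:4])
    ```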

  5. Reactive silica transport in fractured porous media: Analytical solutions for a system of parallel fractures

    NASA Astrophysics Data System (ADS)

    Yang, Jianwen

    2012-04-01

    A general analytical solution is derived by using the Laplace transformation to describe transient reactive silica transport in a conceptualized 2-D system involving a set of parallel fractures embedded in an impermeable host rock matrix, taking into account hydrodynamic dispersion and advection of silica transport along the fractures, molecular diffusion from each fracture to the intervening rock matrix, and dissolution of quartz. A special analytical solution is also developed by ignoring the longitudinal hydrodynamic dispersion term while keeping the other conditions the same. The general and special solutions are in the form of a double infinite integral and a single infinite integral, respectively, and can be evaluated using the Gauss-Legendre quadrature technique. A simple criterion is developed to determine under what conditions the general analytical solution can be approximated by the special analytical solution. It is proved analytically that the general solution always lags behind the special solution, unless a dimensionless parameter is less than a critical value. Several illustrative calculations are undertaken to demonstrate the effect of fracture spacing, fracture aperture and fluid flow rate on silica transport. The analytical solutions developed here can serve as a benchmark to validate numerical models that simulate reactive mass transport in fractured porous media.
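
    The sketch below shows only the generic evaluation step mentioned in the abstract, namely Gauss-Legendre quadrature of a (truncated) infinite integral; the integrand is a hypothetical placeholder, since the actual kernels of the general and special solutions are given in the paper.

    ```python
    import numpy as np

    def gauss_legendre_integral(f, a, b, n=200):
        """Approximate the integral of f on [a, b] with n-point Gauss-Legendre quadrature."""
        x, w = np.polynomial.legendre.leggauss(n)
        t = 0.5 * (b - a) * x + 0.5 * (b + a)     # map nodes from [-1, 1] to [a, b]
        return 0.5 * (b - a) * np.sum(w * f(t))

    def integrand(u, t=1.0, z=2.0):
        """Hypothetical decaying integrand standing in for the paper's single infinite integral."""
        return np.exp(-u * u * t) * np.cos(u * z) / (1.0 + u * u)

    # The semi-infinite range is truncated once the integrand has decayed to negligible values.
    value = gauss_legendre_integral(integrand, 0.0, 50.0)
    print(f"approximate integral = {value:.6f}")
    ```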

  6. Analytical Energy Gradients for Excited-State Coupled-Cluster Methods

    NASA Astrophysics Data System (ADS)

    Wladyslawski, Mark; Nooijen, Marcel

    The equation-of-motion coupled-cluster (EOM-CC) and similarity transformed equation-of-motion coupled-cluster (STEOM-CC) methods have been firmly established as accurate and routinely applicable extensions of single-reference coupled-cluster theory to describe electronically excited states. An overview of these methods is provided, with emphasis on the many-body similarity transform concept that is the key to a rationalization of their accuracy. The main topic of the paper is the derivation of analytical energy gradients for such non-variational electronic structure approaches, with an ultimate focus on obtaining their detailed algebraic working equations. A general theoretical framework using Lagrange's method of undetermined multipliers is presented, and the method is applied to formulate the EOM-CC and STEOM-CC gradients in abstract operator terms, following the previous work in [P.G. Szalay, Int. J. Quantum Chem. 55 (1995) 151] and [S.R. Gwaltney, R.J. Bartlett, M. Nooijen, J. Chem. Phys. 111 (1999) 58]. Moreover, the systematics of the Lagrange multiplier approach is suitable for automation by computer, enabling the derivation of the detailed derivative equations through a standardized and direct procedure. To this end, we have developed the SMART (Symbolic Manipulation and Regrouping of Tensors) package of automated symbolic algebra routines, written in the Mathematica programming language. The SMART toolkit provides the means to expand, differentiate, and simplify equations by manipulation of the detailed algebraic tensor expressions directly. The Lagrangian multiplier formulation establishes a uniform strategy to perform the automated derivation in a standardized manner: A Lagrange multiplier functional is constructed from the explicit algebraic equations that define the energy in the electronic method; the energy functional is then made fully variational with respect to all of its parameters, and the symbolic differentiations directly yield the explicit equations for the wavefunction amplitudes, the Lagrange multipliers, and the analytical gradient via the perturbation-independent generalized Hellmann-Feynman effective density matrix. This systematic automated derivation procedure is applied to obtain the detailed gradient equations for the excitation energy (EE-), double ionization potential (DIP-), and double electron affinity (DEA-) similarity transformed equation-of-motion coupled-cluster singles-and-doubles (STEOM-CCSD) methods. In addition, the derivatives of the closed-shell-reference excitation energy (EE-), ionization potential (IP-), and electron affinity (EA-) equation-of-motion coupled-cluster singles-and-doubles (EOM-CCSD) methods are derived. Furthermore, the perturbative EOM-PT and STEOM-PT gradients are obtained. The algebraic derivative expressions for these dozen methods are all derived here uniformly through the automated Lagrange multiplier process and are expressed compactly in a chain-rule/intermediate-density formulation, which facilitates a unified modular implementation of analytic energy gradients for CCSD/PT-based electronic methods. The working equations for these analytical gradients are presented in full detail, and their factorization and implementation into an efficient computer code are discussed.

  7. Multicomponent quantitative spectroscopic analysis without reference substances based on ICA modelling.

    PubMed

    Monakhova, Yulia B; Mushtakova, Svetlana P

    2017-05-01

    A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from the UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and the ICA resolution of the spectral profiles of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries, generally between 95% and 105%, were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on the analysis of vitamins and caffeine in energy drinks and of aromatic hydrocarbons in motor fuel, with errors of about 10%. The results demonstrate that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in cases of spectral overlap and the absence or inaccessibility of reference materials.
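
    A minimal sketch of this kind of ICA-based calibration is given below, under simplifying assumptions: synthetic two-component UV-vis-like spectra, FastICA from scikit-learn as the ICA implementation, and an ordinary least-squares step to link the ICA-resolved contributions to the known calibration concentrations; the exact chemometric workflow of the paper may differ.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    wl = np.linspace(250.0, 450.0, 300)                       # wavelength grid in nm (synthetic)
    pure = np.vstack([np.exp(-0.5 * ((wl - 300) / 15) ** 2),  # hypothetical pure-component spectra
                      np.exp(-0.5 * ((wl - 340) / 25) ** 2)])

    # Calibration mixtures with known concentrations; spectra add linearly (Beer-Lambert-Bouguer)
    c_cal = rng.uniform(0.1, 1.0, size=(12, 2))
    X_cal = c_cal @ pure + 0.002 * rng.standard_normal((12, wl.size))

    ica = FastICA(n_components=2, random_state=0)
    S = ica.fit_transform(X_cal.T)                            # columns: ICA-resolved component spectra

    # Link per-mixture ICA contribution coefficients to the known calibration concentrations
    A_cal, *_ = np.linalg.lstsq(S, X_cal.T, rcond=None)       # shape (2, n_cal)
    reg = LinearRegression().fit(A_cal.T, c_cal)

    # Quantify an "unknown" mixture from its spectrum alone
    c_true = np.array([[0.35, 0.70]])
    x_unknown = c_true @ pure + 0.002 * rng.standard_normal((1, wl.size))
    a_unknown, *_ = np.linalg.lstsq(S, x_unknown.T, rcond=None)
    print("true:", c_true.ravel(), " estimated:", np.round(reg.predict(a_unknown.T).ravel(), 3))
    ```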

  8. Assessment of technological level of stem cell research using principal component analysis.

    PubMed

    Do Cho, Sung; Hwan Hyun, Byung; Kim, Jae Kyeom

    2016-01-01

    Technological levels have generally been assessed from specialists' opinions through methods such as Delphi. In such cases, however, the results can be significantly biased by the study design and the individual experts. In this study, therefore, scientific literature and patents were analyzed by means of analytic indexes to provide a statistical approach to the technical assessment of stem cell fields. The analytic indexes, namely the numbers and impact indexes of scientific publications and patents, were weighted based on principal component analysis and then summed into a single value. Technological obsolescence was calculated from the cited half-life of patents issued by the United States Patent and Trademark Office and was reflected in the technological level assessment. Using the proposed method, the rank of each nation with respect to technological level was rated, and their strengths and weaknesses could be evaluated. Although this empirical study presents credible results, further work is needed to compare the suggested method with existing methods.
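
    One simple way to realize such a PCA-weighted composite index is sketched below with entirely hypothetical indicator data; the loading-based weighting and the half-life adjustment are illustrative assumptions, not the paper's exact procedure.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical per-nation indicators: paper count, paper impact, patent count, patent impact
    nations = ["A", "B", "C", "D", "E"]
    X = np.array([
        [1200, 8.1, 300, 5.2],
        [ 950, 6.4, 410, 6.0],
        [ 400, 9.0, 120, 4.1],
        [ 700, 5.5, 500, 7.3],
        [ 150, 4.2,  60, 2.5],
    ], dtype=float)

    Z = StandardScaler().fit_transform(X)                 # put all indexes on a common scale
    pca = PCA().fit(Z)

    # Weight each index by its loading on the first principal component, then sum into one value
    weights = np.abs(pca.components_[0])
    weights /= weights.sum()
    composite = Z @ weights

    # Optional obsolescence adjustment via a cited half-life factor (hypothetical values, years)
    half_life = np.array([6.0, 7.5, 5.0, 8.0, 4.5])
    adjusted = composite * (half_life / half_life.mean())

    for name, score in sorted(zip(nations, adjusted), key=lambda t: -t[1]):
        print(f"nation {name}: technology-level score {score:+.2f}")
    ```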

  9. Monte Carlo closure for moment-based transport schemes in general relativistic radiation hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Foucart, Francois

    2018-04-01

    General relativistic radiation hydrodynamic simulations are necessary to accurately model a number of astrophysical systems involving black holes and neutron stars. Photon transport plays a crucial role in radiatively dominated accretion discs, while neutrino transport is critical to core-collapse supernovae and to the modelling of electromagnetic transients and nucleosynthesis in neutron star mergers. However, evolving the full Boltzmann equations of radiative transport is extremely expensive. Here, we describe the implementation in the general relativistic SPEC code of a cheaper radiation hydrodynamic method that theoretically converges to a solution of Boltzmann's equation in the limit of infinite numerical resources. The algorithm is based on a grey two-moment scheme, in which we evolve the energy density and momentum density of the radiation. Two-moment schemes require a closure that fills in missing information about the energy spectrum and higher order moments of the radiation. Instead of the approximate analytical closure currently used in core-collapse and merger simulations, we complement the two-moment scheme with a low-accuracy Monte Carlo evolution. The Monte Carlo results can provide any or all of the missing information in the evolution of the moments, as desired by the user. As a first test of our methods, we study a set of idealized problems demonstrating that our algorithm performs significantly better than existing analytical closures. We also discuss the current limitations of our method, in particular open questions regarding the stability of the fully coupled scheme.

  10. Axi-symmetric generalized thermoelastic diffusion problem with two-temperature and initial stress under fractional order heat conduction

    NASA Astrophysics Data System (ADS)

    Deswal, Sunita; Kalkal, Kapil Kumar; Sheoran, Sandeep Singh

    2016-09-01

    A mathematical model of fractional-order two-temperature generalized thermoelasticity with diffusion and initial stress is proposed to analyze the transient wave phenomenon in an infinite thermoelastic half-space. The governing equations are derived in cylindrical coordinates for a two-dimensional axi-symmetric problem. The analytical solution is obtained by employing the Laplace and Hankel transforms for the time and space variables, respectively. The solutions are investigated in detail for a time-dependent heat source. By using a numerical inversion method for the integral transforms, we obtain the solutions for the displacement, stress, temperature and diffusion fields in the physical domain. Computations are carried out for copper material and displayed graphically. The effects of the fractional-order parameter, two-temperature parameter, diffusion, initial stress and time on the different thermoelastic and diffusion fields are analyzed on the basis of the analytical and numerical results. Some special cases have also been deduced from the present investigation.
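
    The abstract does not specify which numerical inversion scheme is used; as one common choice, the Gaver-Stehfest algorithm is sketched below and checked against a transform pair with a known inverse (it is an illustration, not the paper's implementation, and the Hankel inversion step is omitted).

    ```python
    import math
    import numpy as np

    def stehfest_coefficients(N=12):
        """Gaver-Stehfest weights V_k (N must be even)."""
        V = np.zeros(N)
        for k in range(1, N + 1):
            s = 0.0
            for j in range((k + 1) // 2, min(k, N // 2) + 1):
                s += (j ** (N // 2) * math.factorial(2 * j)
                      / (math.factorial(N // 2 - j) * math.factorial(j)
                         * math.factorial(j - 1) * math.factorial(k - j)
                         * math.factorial(2 * j - k)))
            V[k - 1] = (-1) ** (k + N // 2) * s
        return V

    def invert_laplace(F, t, N=12):
        """Approximate f(t) from its Laplace transform F(s) by the Stehfest formula."""
        V = stehfest_coefficients(N)
        ln2_t = math.log(2.0) / t
        return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

    # Check against a pair with a known inverse: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
    for t in (0.5, 1.0, 2.0):
        print(t, invert_laplace(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))
    ```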

  11. Riemannian geometry of Hamiltonian chaos: hints for a general theory.

    PubMed

    Cerruti-Sola, Monica; Ciraolo, Guido; Franzosi, Roberto; Pettini, Marco

    2008-10-01

    We aim at assessing the validity limits of some simplifying hypotheses that, within a Riemannian geometric framework, have provided an explanation of the origin of Hamiltonian chaos and have made it possible to develop a method of analytically computing the largest Lyapunov exponent of Hamiltonian systems with many degrees of freedom. Therefore, numerical hypothesis testing has been performed for the Fermi-Pasta-Ulam beta model and for a chain of coupled rotators. These models, for which analytic computations of the largest Lyapunov exponents have been carried out in the mentioned Riemannian geometric framework, appear as paradigmatic examples for unveiling the reason why the main hypothesis of quasi-isotropy of the mechanical manifolds sometimes breaks down. The breakdown is expected whenever the topology of the mechanical manifolds is nontrivial. This is an important step forward in view of developing a geometric theory of Hamiltonian chaos of general validity.

  12. Inclined Pulsar Magnetospheres in General Relativity: Polar Caps for the Dipole, Quadrudipole, and Beyond

    NASA Astrophysics Data System (ADS)

    Gralla, Samuel E.; Lupsasca, Alexandru; Philippov, Alexander

    2017-12-01

    In the canonical model of a pulsar, rotational energy is transmitted through the surrounding plasma via two electrical circuits, each connecting to the star over a small region known as a “polar cap.” For a dipole-magnetized star, the polar caps coincide with the magnetic poles (hence the name), but in general, they can occur at any place and take any shape. In light of their crucial importance to most models of pulsar emission (from radio to X-ray to wind), we develop a general technique for determining polar cap properties. We consider a perfectly conducting star surrounded by a force-free magnetosphere and include the effects of general relativity. Using a combined numerical-analytical technique that leverages the rotation rate as a small parameter, we derive a general analytic formula for the polar cap shape and charge-current distribution as a function of the stellar mass, radius, rotation rate, moment of inertia, and magnetic field. We present results for dipole and quadrudipole fields (superposed dipole and quadrupole) inclined relative to the axis of rotation. The inclined dipole polar cap results are the first to include general relativity, and they confirm its essential role in the pulsar problem. The quadrudipole pulsar illustrates the phenomenon of thin annular polar caps. More generally, our method lays a foundation for detailed modeling of pulsar emission with realistic magnetic fields.

  13. The Hazardous-Drums Project: A Multiweek Laboratory Exercise for General Chemistry Involving Environmental, Quality Control, and Cost Evaluation

    ERIC Educational Resources Information Center

    Hayes, David; Widanski, Bozena

    2013-01-01

    A laboratory experiment is described that introduces students to "real-world" hazardous waste management issues chemists face. The students are required to define an analytical problem, choose a laboratory analysis method, investigate cost factors, consider quality-control issues, interpret the meaning of results, and provide management…

  14. Statistics of stable marriages

    NASA Astrophysics Data System (ADS)

    Dzierzawa, Michael; Oméro, Marie-José

    2000-11-01

    In the stable marriage problem N men and N women have to be matched by pairs under the constraint that the resulting matching is stable. We study the statistical properties of stable matchings in the large N limit using both numerical and analytical methods. Generalizations of the model including singles and unequal numbers of men and women are also investigated.
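
    Stable matchings such as those studied here can be generated with the classic Gale-Shapley deferred-acceptance algorithm; the sketch below builds one matching for a small random instance and reports the proposers' mean partner rank, the kind of quantity whose large-N statistics the paper investigates (averaging over many random instances is left to the reader).

    ```python
    import random

    def gale_shapley(men_prefs, women_prefs):
        """Men-proposing deferred acceptance; returns a stable matching {man: woman}."""
        n = len(men_prefs)
        free_men = list(range(n))
        next_choice = [0] * n                   # next woman index each man will propose to
        fiance = [None] * n                     # fiance[w] = current partner of woman w
        rank = [[0] * n for _ in range(n)]      # rank[w][m] = woman w's ranking of man m
        for w in range(n):
            for r, m in enumerate(women_prefs[w]):
                rank[w][m] = r
        while free_men:
            m = free_men.pop()
            w = men_prefs[m][next_choice[m]]
            next_choice[m] += 1
            if fiance[w] is None:
                fiance[w] = m
            elif rank[w][m] < rank[w][fiance[w]]:
                free_men.append(fiance[w])      # woman trades up; her old partner becomes free
                fiance[w] = m
            else:
                free_men.append(m)              # proposal rejected
        return {m: w for w, m in enumerate(fiance)}

    random.seed(0)
    n = 6
    men = [random.sample(range(n), n) for _ in range(n)]
    women = [random.sample(range(n), n) for _ in range(n)]
    match = gale_shapley(men, women)
    avg_rank = sum(men[m].index(w) + 1 for m, w in match.items()) / n
    print(match, "mean partner rank for proposers:", avg_rank)
    ```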

  15. Determination of linear short chain aliphatic aldehyde and ketone vapors in air using a polystyrene-coated quartz crystal nanobalance sensor.

    PubMed

    Mirmohseni, Abdolreza; Olad, Ali

    2010-01-01

    A polystyrene-coated quartz crystal nanobalance (QCN) sensor was developed for the determination of a number of linear short-chain aliphatic aldehyde and ketone vapors in air. The quartz crystal was modified by a thin-layer coating of a commercial-grade general purpose polystyrene (GPPS) from Tabriz Petrochemical Company using a solution casting method. Determination was based on the frequency shifts of the modified quartz crystal due to the adsorption of analytes at the surface of the modified electrode upon exposure to various concentrations of the analytes. The frequency shift was found to be linearly related to the analyte concentration. Linear calibration curves were obtained for 7-70 mg l(-1) of the analytes, with correlation coefficients in the range of 0.9935-0.9989 and sensitivity factors in the range of 2.07-6.74 Hz/mg l(-1). A storage period of over three months showed no loss in the sensitivity and performance of the sensor.

  16. Predicting playing frequencies for clarinets: A comparison between numerical simulations and simplified analytical formulas.

    PubMed

    Coyle, Whitney L; Guillemain, Philippe; Kergomard, Jean; Dalmont, Jean-Pierre

    2015-11-01

    When designing a wind instrument such as a clarinet, it can be useful to be able to predict the playing frequencies. This paper presents an analytical method to deduce these playing frequencies using the input impedance curve. Specifically, two control parameters have a significant influence on the playing frequency: the blowing pressure and the reed opening. Four effects are known to alter the playing frequency and are examined separately: the flow rate due to the reed motion, the reed dynamics, the inharmonicity of the resonator, and the temperature gradient within the clarinet. The resulting playing frequencies for the first register of a particular professional-level clarinet are found using the analytical formulas presented in this paper. The analytical predictions are then compared to numerically simulated results to validate the prediction accuracy. The main conclusion is that, in general, the playing frequency decreases above the oscillation threshold because of inharmonicity, and then increases above the beating-reed regime threshold because of the decrease of the flow rate effect.

  17. Multi-residue analysis of legacy POPs and emerging organic contaminants in Singapore's coastal waters using gas chromatography-triple quadrupole tandem mass spectrometry.

    PubMed

    Zhang, Hui; Bayen, Stéphane; Kelly, Barry C

    2015-08-01

    A gas chromatography-triple quadrupole mass spectrometry (GC-MS/MS) based method was developed for the determination of 86 hydrophobic organic compounds in seawater. Solid-phase extraction (SPE) was employed for sequestration of target analytes in the dissolved phase. Ultrasound-assisted extraction (UAE) and florisil chromatography were utilized for determination of concentrations in suspended sediments (particulate phase). The target compounds included multi-class hydrophobic contaminants with a wide range of physical-chemical properties. This list includes several polycyclic and nitro-aromatic musks, brominated and chlorinated flame retardants, methyl triclosan, chlorobenzenes, organochlorine pesticides (OCPs) and polychlorinated biphenyls (PCBs). Spiked MilliQ water and seawater samples were used to evaluate the method performance. Analyte recoveries were generally good, with the exception of some of the more volatile target analytes (chlorobenzenes and bromobenzenes). The method is very sensitive, with method detection limits typically in the low parts-per-quadrillion (ppq) range. Analysis of 51 field-collected seawater samples (dissolved and particulate-bound phases) from four distinct coastal sites around Singapore showed trace detection of several polychlorinated biphenyl congeners and other legacy POPs, as well as several current-use emerging organic contaminants (EOCs). Polycyclic and nitro-aromatic musks, bromobenzenes, dechlorane plus isomers (syn-DP, anti-DP) and methyl triclosan were frequently detected at appreciable levels (2-20,000 pg L(-1)). The observed concentrations of the monitored contaminants in Singapore's marine environment were generally comparable to previously reported levels in other coastal marine systems. To our knowledge, these are the first measurements of these emerging contaminants of concern in Singapore or Southeast Asia. The developed method may prove beneficial for future environmental monitoring of hydrophobic organic contaminants in marine environments. Further, the study provides novel information regarding several potentially hazardous contaminants of concern in Singapore's marine environment, which will aid future risk assessment initiatives. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    PubMed

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, the slope of the molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results, and a parameter is introduced that can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorbance values generally considered 'safe' (i.e., absorbance < 1). Thus careful consideration of the instrumental spectral width, analyte concentration, and slope of the molecular extinction coefficient is required to ensure robust analytical methods.
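
    A minimal numerical sketch of this systematic error is given below, assuming a flat source profile over a finite spectral width and a linearly varying extinction coefficient; all numbers are illustrative rather than taken from the Note. The key point is that the detector averages transmitted intensity, not absorbance, so the apparent absorbance falls below the monochromatic Beer-Lambert value as concentration, bandwidth, or the slope of the extinction coefficient grows.

    ```python
    import numpy as np

    def apparent_absorbance(c, path=1.0, center=250.0, width=10.0, eps0=1.0e4, slope=500.0, npts=401):
        """Apparent absorbance when a band of finite spectral width is averaged by the detector.

        eps(lambda) varies linearly across the band; all parameter values are illustrative only.
        """
        lam = np.linspace(center - width / 2.0, center + width / 2.0, npts)
        eps = eps0 + slope * (lam - center)       # L mol^-1 cm^-1
        T = 10.0 ** (-eps * c * path)             # transmittance at each wavelength
        return -np.log10(T.mean())                # intensities, not absorbances, are averaged

    for c in (1e-5, 5e-5, 1e-4):
        ideal = 1.0e4 * c * 1.0                   # monochromatic Beer-Lambert value at band center
        meas = apparent_absorbance(c)
        print(f"c={c:.0e} M  A_ideal={ideal:.3f}  A_apparent={meas:.3f}  error={100*(meas-ideal)/ideal:+.2f}%")
    ```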

  19. A review of the analytical simulation of aircraft crash dynamics

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin L.; Carden, Huey D.; Boitnott, Richard L.; Hayduk, Robert J.

    1990-01-01

    A large number of full-scale tests of general aviation aircraft, helicopters, and one unique air-to-ground controlled impact of a transport aircraft were performed. Additionally, research was conducted on seat dynamic performance, load-limiting seats, load-limiting subfloor designs, and emergency locator transmitters (ELTs). Computer programs were developed to provide designers with methods for predicting the accelerations, velocities, and displacements of collapsing structure and for estimating the human response to crash loads. The results of full-scale aircraft and component tests were used to verify and guide the development of analytical simulation tools and to demonstrate impact-load-attenuating concepts. Analytical simulation of metal and composite aircraft crash dynamics is addressed. Finite element models are examined to determine their degree of corroboration by experimental data and to reveal deficiencies requiring further development.

  20. An extension of the Derrida-Lebowitz-Speer-Spohn equation

    NASA Astrophysics Data System (ADS)

    Bordenave, Charles; Germain, Pierre; Trogdon, Thomas

    2015-12-01

    We show how the derivation of the Derrida-Lebowitz-Speer-Spohn equation can be prolonged to obtain a new equation, generalizing the models obtained in the paper by these authors. We then investigate its properties from both an analytical and numerical perspective. Specifically, a numerical method is presented to approximate solutions of the prolonged equation. Using this method, we investigate the relationship between the solutions of the prolonged equation and the Tracy-Widom GOE distribution.

  1. An integrand reconstruction method for three-loop amplitudes

    NASA Astrophysics Data System (ADS)

    Badger, Simon; Frellesvig, Hjalte; Zhang, Yang

    2012-08-01

    We consider the maximal cut of a three-loop four-point function with massless kinematics. By applying Gröbner bases and primary decomposition we develop a method that extracts all ten propagator master integral coefficients for an arbitrary triple-box configuration via generalized unitarity cuts. As an example we present analytic results for the three-loop triple-box contribution to gluon-gluon scattering in Yang-Mills theory with adjoint fermions and scalars, in terms of three master integrals.

  2. Gaussianization for fast and accurate inference from cosmological data

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2016-06-01

    We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, such as a Markov chain, and simplifies the subsequent joint analysis with other experiments. In this way, a multivariate posterior density can be reported efficiently by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e., the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct, strongly non-Gaussian features and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing to values computed with the Savage-Dickey density ratio and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
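
    As a toy illustration of the Gaussianization step (one parameter only, using the plain Box-Cox transform rather than the paper's generalizations), the sketch below maximum-likelihood fits the Box-Cox parameter to a skewed sample standing in for an MCMC chain and reports the skewness before and after.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    # Skewed, non-Gaussian "posterior" sample standing in for a one-parameter MCMC chain
    sample = rng.gamma(shape=2.0, scale=1.5, size=20000)

    # Box-Cox: y = (x**lam - 1)/lam, with lam chosen by maximum likelihood
    y, lam = stats.boxcox(sample)
    print(f"optimal lambda = {lam:.3f}")
    print(f"skewness before/after: {stats.skew(sample):.3f} / {stats.skew(y):.3f}")

    # The transformed density is approximately Gaussian, so it can be summarized by mean and variance
    print(f"Gaussianized summary: mu={y.mean():.3f}, sigma={y.std(ddof=1):.3f}")
    ```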

  3. Bayesian inference based on stationary Fokker-Planck sampling.

    PubMed

    Berrones, Arturo

    2010-06-01

    A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the stationary Fokker-Planck (SFP) approach to sample from the posterior density. Stationary Fokker-Planck sampling generalizes the Gibbs sampler algorithm to arbitrary and unknown conditional densities. By the SFP procedure, approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which is convergent to the full joint posterior. Using the analytical marginals, efficient learning methods in the context of artificial neural networks are outlined. Offline and incremental Bayesian inference and maximum likelihood estimation from the posterior are performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies in the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump large low-probability regions without the need for careful tuning of any step-size parameter. In fact, the SFP method requires only a small set of meaningful parameters that can be selected following clear, problem-independent guidelines. The computational cost of SFP, measured in terms of loss function evaluations, grows linearly with the given model's dimension.

  4. Numeric kinetic energy operators for molecules in polyspherical coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadri, Keyvan; Meyer, Hans-Dieter; Lauvergnat, David

    Generalized curvilinear coordinates, such as polyspherical coordinates, are in general better adapted to the resolution of the nuclear Schrödinger equation than rectilinear ones like the normal mode coordinates. However, analytical expressions of the kinetic energy operators (KEOs) for molecular systems in polyspherical coordinates may be prohibitively complicated for large systems. In this paper we propose a method to generate a KEO numerically and bring it to a form practicable for dynamical calculations. To examine the new method we calculated vibrational spectra and eigenenergies for nitrous acid (HONO) and compared them with results obtained with an exact analytical KEO derived previously [F. Richter, P. Rosmus, F. Gatti, and H.-D. Meyer, J. Chem. Phys. 120, 6072 (2004)]. In a second example we calculated the π→π* photoabsorption spectrum and eigenenergies of ethene (C₂H₄) and compared them with previous work [M. R. Brill, F. Gatti, D. Lauvergnat, and H.-D. Meyer, Chem. Phys. 338, 186 (2007)]. In this ethene study the dimensionality was reduced from 12 to 6 by freezing six internal coordinates. Results for both molecules show that the proposed method for obtaining an approximate KEO is reliable for dynamical calculations. The error in eigenenergies was found to be below 1 cm⁻¹ for most states calculated.

  5. Analytical Solutions for Rumor Spreading Dynamical Model in a Social Network

    NASA Astrophysics Data System (ADS)

    Fallahpour, R.; Chakouvari, S.; Askari, H.

    2015-03-01

    In this paper, the Laplace Adomian decomposition method (LADM) is utilized to evaluate a model of rumor spreading. First, a succinct review is given of the use of analytical methods such as the Adomian decomposition method, the variational iteration method and the homotopy analysis method for epidemic models and biomathematics. A rumor-spreading model that incorporates a forgetting mechanism is then considered, and the LADM is applied to solve it. By means of this method, a general solution is obtained for the problem, which can readily be employed to assess the rumor model without running any computer program. The results obtained for this problem are discussed for different cases and parameters. Furthermore, it is shown that the method is straightforward and fruitful for analyzing equations that have complicated terms, such as the rumor model. Comparison with numerical methods reveals that the LADM is powerful and accurate for eliciting solutions of this model. It is concluded that this method is well suited to the problem and can provide researchers with a powerful vehicle for scrutinizing rumor models in diverse kinds of social networks such as Facebook, YouTube, Flickr, LinkedIn and Twitter.

  6. Crew appliance computer program manual, volume 1

    NASA Technical Reports Server (NTRS)

    Russell, D. J.

    1975-01-01

    Trade studies of numerous appliance concepts for advanced spacecraft galley, personal hygiene, housekeeping, and other areas were made to determine which best satisfy the space shuttle orbiter and modular space station mission requirements. Analytical models of selected appliance concepts not currently included in the G-189A Generalized Environmental/Thermal Control and Life Support Systems (ETCLSS) Computer Program subroutine library were developed. The new appliance subroutines are given along with complete analytical model descriptions, solution methods, user's input instructions, and validation run results. The appliance components modeled were integrated with G-189A ETCLSS models for shuttle orbiter and modular space station, and results from computer runs of these systems are presented.

  7. Brownian systems with spatially inhomogeneous activity

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Brader, J. M.

    2017-09-01

    We generalize the Green-Kubo approach, previously applied to bulk systems of spherically symmetric active particles [J. Chem. Phys. 145, 161101 (2016), 10.1063/1.4966153], to include spatially inhomogeneous activity. The method is applied to predict the spatial dependence of the average orientation per particle and of the density. The average orientation is given by an integral over the self part of the Van Hove function, and a simple Gaussian approximation to this quantity yields an accurate analytical expression. Taking this analytical result as input to a dynamic density functional theory yields the spatial dependence of the density in good agreement with simulation data. All theoretical predictions are validated using Brownian dynamics simulations.

  8. Solar neutrino masses and mixing from bilinear R-parity broken supersymmetry: Analytical versus numerical results

    NASA Astrophysics Data System (ADS)

    Díaz, M.; Hirsch, M.; Porod, W.; Romão, J.; Valle, J.

    2003-07-01

    We give an analytical calculation of solar neutrino masses and mixing at one-loop order within bilinear R-parity breaking supersymmetry, and compare our results to the exact numerical calculation. Our method is based on a systematic perturbative expansion of R-parity violating vertices to leading order. We find in general quite good agreement between the approximate and full numerical calculations, but the approximate expressions are much simpler to implement. Our formalism works especially well for the case of the large mixing angle Mikheyev-Smirnov-Wolfenstein solution, now strongly favored by the recent KamLAND reactor neutrino data.

  9. The transfer of analytical procedures.

    PubMed

    Ermer, J; Limberger, M; Lis, K; Wätzig, H

    2013-11-01

    Analytical method transfers are certainly among the most discussed topics in the GMP-regulated sector. However, they are surprisingly little regulated in detail. General information is provided by USP, WHO, and ISPE in particular. Most recently, the EU emphasized the importance of analytical transfer by including it in their draft of the revised GMP Guideline. In this article, an overview and comparison of these guidelines is provided. The key to success for method transfers is excellent communication between the sending and receiving units. In order to facilitate this communication, procedures, flow charts and checklists for responsibilities, success factors, transfer categories, the transfer plan and report, strategies in case of failed transfers, and tables with acceptance limits are provided here, together with a comprehensive glossary. Potential pitfalls are described such that they can be avoided. In order to assure an efficient and sustainable transfer of analytical procedures, a practically relevant and scientifically sound evaluation with corresponding acceptance criteria is crucial. Various strategies and statistical tools such as significance tests, absolute acceptance criteria, and equivalence tests are thoroughly described and compared in detail, with examples. Significance tests should be avoided. The success criterion is not statistical significance, but rather analytical relevance. Depending on a risk assessment of the analytical procedure in question, statistical equivalence tests are recommended, because they include both a practically relevant acceptance limit and a direct control of the statistical risks. However, for lower-risk procedures, a simple comparison of the transfer performance parameters to absolute limits is also regarded as sufficient. Copyright © 2013 Elsevier B.V. All rights reserved.
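
    As one concrete example of the recommended equivalence testing, the sketch below implements a standard two-one-sided-tests (TOST) comparison of the mean results from a sending and a receiving laboratory against a +/-2% acceptance limit; the data and the limit are hypothetical, and the guideline-specific acceptance criteria discussed in the article may differ.

    ```python
    import numpy as np
    from scipy import stats

    def tost_two_sample(x, y, theta):
        """Two one-sided t-tests: are the lab means equivalent within +/- theta?

        Returns the mean difference and the larger one-sided p-value; equivalence is
        claimed if that p-value is below 0.05.
        """
        nx, ny = len(x), len(y)
        diff = np.mean(x) - np.mean(y)
        sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
        se = np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
        df = nx + ny - 2
        p_lower = 1.0 - stats.t.cdf((diff + theta) / se, df)   # H0: diff <= -theta
        p_upper = stats.t.cdf((diff - theta) / se, df)         # H0: diff >= +theta
        return diff, max(p_lower, p_upper)

    # Hypothetical assay results (% label claim) from the sending and receiving laboratories
    sending   = np.array([99.8, 100.2, 99.5, 100.1, 99.9, 100.4])
    receiving = np.array([99.1, 99.6, 99.4, 99.9, 99.3, 99.7])
    diff, p = tost_two_sample(sending, receiving, theta=2.0)   # +/- 2% acceptance limit
    print(f"mean difference = {diff:.2f}%, TOST p = {p:.4f}, equivalent: {p < 0.05}")
    ```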

  10. A unified procedure for meta-analytic evaluation of surrogate end points in randomized clinical trials

    PubMed Central

    Dai, James Y.; Hughes, James P.

    2012-01-01

    The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448

  11. Applying Nyquist's method for stability determination to solar wind observations

    NASA Astrophysics Data System (ADS)

    Klein, Kristopher G.; Kasper, Justin C.; Korreck, K. E.; Stevens, Michael L.

    2017-10-01

    The role instabilities play in governing the evolution of solar and astrophysical plasmas is a matter of considerable scientific interest. The large number of sources of free energy accessible to such nearly collisionless plasmas makes general modeling of unstable behavior, accounting for the temperatures, densities, anisotropies, and relative drifts of a large number of populations, analytically difficult. We therefore seek a general method of stability determination that may be automated for future analysis of solar wind observations. This work describes an efficient application of the Nyquist instability method to the Vlasov dispersion relation appropriate for hot, collisionless, magnetized plasmas, including the solar wind. The algorithm recovers the familiar proton temperature anisotropy instabilities, as well as instabilities that had been previously identified using fits extracted from in situ observations in Gary et al. (2016). Future proposed applications of this method are discussed.
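
    The core of the Nyquist method is counting the zeros of a dispersion function in the upper half of the complex frequency plane from the winding number of its image along a closed contour. The sketch below demonstrates that counting step on toy polynomial dispersion functions; the actual hot-plasma Vlasov dispersion relation used in the paper is far more involved and is not reproduced here.

    ```python
    import numpy as np

    def unstable_root_count(D, contour):
        """Winding number of D(contour) about the origin = number of zeros of D inside the contour."""
        phase = np.unwrap(np.angle(D(contour)))
        return int(np.round((phase[-1] - phase[0]) / (2.0 * np.pi)))

    # Closed contour: the real frequency axis closed by a large semicircle in the upper half plane
    R, n = 50.0, 40000
    omega_real = np.linspace(-R, R, n) + 1e-6j
    arc = R * np.exp(1j * np.linspace(0.0, np.pi, n))
    contour = np.concatenate([omega_real, arc])

    # Toy polynomial dispersion functions standing in for the Vlasov dispersion relation
    D_stable = lambda w: (w + 1.0 + 0.5j) * (w - 2.0 + 0.3j)       # both roots damped (lower half plane)
    D_unstable = lambda w: (w - 1.0 - 0.5j) * (w + 2.0 + 0.3j)     # one growing root (upper half plane)

    print("stable case   ->", unstable_root_count(D_stable, contour), "unstable roots")
    print("unstable case ->", unstable_root_count(D_unstable, contour), "unstable roots")
    ```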

  12. Methods for delineating flood-prone areas in the Great Basin of Nevada and adjacent states

    USGS Publications Warehouse

    Burkham, D.E.

    1988-01-01

    The Great Basin is a region of about 210,000 square miles having no surface drainage to the ocean; it includes most of Nevada and parts of Utah, California, Oregon, Idaho, and Wyoming. The area is characterized by many parallel mountain ranges and valleys trending north-south. Stream channels usually are well defined and steep within the mountains, but on reaching the alluvial fan at the canyon mouth, they may diverge into numerous distributary channels, be discontinuous near the apex of the fan, or be deeply entrenched in the alluvial deposits. Larger rivers normally have well-defined channels to or across the valley floors, but all terminate at lakes or playas. Major floods occur in most parts of the Great Basin and result from snowmelt, frontal-storm rainfall, and localized convective rainfall. Snowmelt floods typically occur during April-June. Floods resulting from frontal rain and frontal rain on snow generally occur during November-March. Floods resulting from convective-type rainfall during localized thunderstorms occur most commonly during the summer months. Methods for delineating flood-prone areas are grouped into five general categories: Detailed, historical, analytical, physiographic, and reconnaissance. The detailed and historical methods are comprehensive methods; the analytical and physiographic are intermediate; and the reconnaissance method is only approximate. Other than the reconnaissance method, each method requires determination of a T-year discharge (the peak rate of flow during a flood with long-term average recurrence interval of T years) and T-year profile and the development of a flood-boundary map. The procedure is different, however, for each method. Appraisal of the applicability of each method included consideration of its technical soundness, limitations and uncertainties, ease of use, and costs in time and money. Of the five methods, the detailed method is probably the most accurate, though most expensive. It is applicable to hydraulic and topographic conditions found in many parts of the Great Basin. The historical method is also applicable over a wide range of conditions and is less expensive than the detailed method. However, it requires more historical flood data than are usually available, and experience and judgement are needed to obtain meaningful results. The analytical method is also less expensive than the detailed method and can be used over a wide range of conditions in which the T-year discharge can be determined directly. Experience, good judgement, and thorough knowledge of hydraulic principles are required to obtain adequate results, and the method has limited application in other than rigid-channel situations. The physiographic method is applicable to rigid-boundary channels and is less accurate than the detailed method. The reconnaissance method is relatively imprecise, but it may be the most rational method to use on alluvial fans or valley floors with discontinuous channels. In general, a comprehensive method is most suitable for use with rigid-bank streams in urban areas; only an approximate method seems justified in undeveloped areas.

  13. A Radical-Mediated Pathway for the Formation of [M + H](+) in Dielectric Barrier Discharge Ionization.

    PubMed

    Wolf, Jan-Christoph; Gyr, Luzia; Mirabelli, Mario F; Schaer, Martin; Siegenthaler, Peter; Zenobi, Renato

    2016-09-01

    Active capillary plasma ionization is a highly efficient ambient ionization method. Its general principle of ion formation is closely related to atmospheric pressure chemical ionization (APCI). The method is based on dielectric barrier discharge ionization (DBDI) and can be constructed in the form of a direct flow-through interface to a mass spectrometer. Protonated species ([M + H](+)) are predominantly formed, although in some cases radical cations are also observed. We investigated the underlying ionization mechanisms and reaction pathways for the formation of the protonated analyte ([M + H](+)). We found that ionization occurs both in the presence and in the absence of water vapor. Therefore, the mechanism cannot rely exclusively on hydronium clusters, as generally accepted for APCI. Based on isotope labeling experiments, protons were shown to originate from various solvents (other than water) and, to a minor extent, from gaseous impurities and/or self-protonation. By using CO2 instead of air or N2 as the plasma gas, additional species such as [M + OH](+) and [M - H](+) were observed. These gas-phase reaction products of CO2 with the analyte (tertiary amines) indicate the presence of a radical-mediated ionization pathway, which proceeds by direct reaction of the ionized plasma gas with the analyte. The proposed reaction pathway is supported by density functional theory (DFT) calculations. These findings add a new ionization pathway, leading to the protonated species, to those currently known for APCI.

  14. Collision problems treated with the Generalized Hyperspherical Sturmian method

    NASA Astrophysics Data System (ADS)

    Mitnik, D. M.; Gasaneo, G.; Ancarani, L. U.; Ambrosio, M. J.

    2014-04-01

    A hyperspherical Sturmian approach recently developed for three-body break-up processes is presented. To test several of its features, the method is applied to two simplified models. Excellent agreement is found when compared with the results of an analytically solvable problem. For the Temkin-Poet model of the double ionization of He by high-energy electron impact, the present method is compared with the spherical Sturmian approach, and again excellent agreement is found. Finally, a study of the channels appearing in the break-up three-body wave function is presented.

  15. Ribozyme-mediated signal augmentation on a mass-sensitive biosensor.

    PubMed

    Knudsen, Scott M; Lee, Joonhyung; Ellington, Andrew D; Savran, Cagri A

    2006-12-20

    Mass-based detection methods such as the quartz crystal microbalance (QCM) offer an attractive alternative to label-based methods; however, their sensitivity is generally lower by comparison. In particular, low-molecular-weight analytes can be difficult to detect based on mass addition alone. In this communication, we present the use of effector-dependent ribozymes (aptazymes) as reagents for augmenting small-ligand detection on a mass-sensitive device. Two distinct aptazymes were chosen: an L1-ligase-based aptazyme (L1-Rev), which is activated by a small peptide (MW approximately 2.4 kDa) from the HIV-1 Rev protein, and a hammerhead cleavase-based aptazyme (HH-theo3) activated by theophylline (MW = 180 Da). Aptazyme activity was observed in real time, and low-molecular-weight analyte detection has been successfully demonstrated with both aptazymes.

  16. Large space structure damping design

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Haviland, J. K.

    1983-01-01

    Several FORTRAN subroutines and programs were developed which compute complex eigenvalues of a damped system using different approaches, and which rescale mode shapes to unit generalized mass and make rigid bodies orthogonal to each other. An analytical proof of a Minimum Constrained Frequency Criterion (MCFC) for a single damper is presented. A method to minimize the effect of control spill-over for large space structures is proposed. The characteristic equation of an undamped system with a generalized control law is derived using reanalysis theory. This equation can be implemented in computer programs for efficient eigenvalue analysis or control quasi synthesis. Methods to control vibrations in large space structures are reviewed and analyzed. The resulting prototype, which uses an electromagnetic actuator, is described.
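
    The eigenvalue and normalization steps described above can be sketched compactly in modern tooling. The following Python/NumPy fragment is a minimal illustration, not the original FORTRAN subroutines: it forms the state-space matrix of a damped system to obtain its complex eigenvalues and rescales undamped mode shapes to unit generalized mass; the 2-DOF matrices are hypothetical.

        import numpy as np
        from scipy.linalg import eig, eigh

        # Complex eigenvalues of M x'' + C x' + K x = 0 via a first-order (state-space) form.
        def damped_eigs(M, C, K):
            n = M.shape[0]
            A = np.block([[np.zeros((n, n)), np.eye(n)],
                          [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
            lam, V = eig(A)
            return lam, V[:n, :]                      # displacement partition of each mode

        # Undamped mode shapes rescaled to unit generalized mass (diag(phi.T @ M @ phi) = 1).
        def unit_generalized_mass(M, K):
            w2, phi = eigh(K, M)                      # generalized symmetric eigenproblem
            gm = np.einsum('ij,jk,ki->i', phi.T, M, phi)
            return np.sqrt(w2), phi / np.sqrt(gm)

        # Hypothetical 2-DOF system with light proportional damping
        M = np.diag([2.0, 1.0])
        K = np.array([[6.0, -2.0], [-2.0, 4.0]])
        C = 0.05 * K
        lam, _ = damped_eigs(M, C, K)
        freqs, phi = unit_generalized_mass(M, K)
        print(np.round(lam, 4))
        print(np.round(phi.T @ M @ phi, 6))           # identity up to round-off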

  17. Task-based image quality evaluation of iterative reconstruction methods for low dose CT using computer simulations

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Fuld, Matthew K.; Fung, George S. K.; Tsui, Benjamin M. W.

    2015-04-01

    Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improving image quality or reducing the radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 to 25% D0. A fixed size and contrast lesion was inserted at different locations into the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20) in a signal known exactly (SKE), background known exactly but variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than with RS channels. The improvement of IR over analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low dose protocol, lower than the standard dose, owing to the use of IR methods. At 75% D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved.
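
    As a rough illustration of the evaluation pipeline described here, the Python sketch below uses placeholder random "channels" and synthetic images rather than the RS/RO channels and XCAT reconstructions of the study: it computes channel outputs, a Hotelling template from the pooled channel covariance, and the AUC as a Mann-Whitney statistic.

        import numpy as np

        def cho_auc(signal_imgs, noise_imgs, channels):
            # Channel outputs for signal-present and signal-absent images
            vs = signal_imgs @ channels
            vn = noise_imgs @ channels
            # Hotelling template from the pooled channel covariance
            S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
            w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))
            ts, tn = vs @ w, vn @ w                   # decision variables
            # AUC as the Mann-Whitney statistic (ties ignored for continuous data)
            return (ts[:, None] > tn[None, :]).mean()

        rng = np.random.default_rng(0)
        n_pix, n_ch = 32 * 32, 4
        channels = rng.normal(size=(n_pix, n_ch))     # placeholder channel matrix
        noise = rng.normal(size=(200, n_pix))
        signal = rng.normal(size=(200, n_pix)) + 0.15 * rng.normal(size=n_pix)  # weak fixed signal
        print(cho_auc(signal, noise, channels))       # AUC estimate for this synthetic task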

  18. Ascorbic Acid as a Standard for Iodometric Titrations. An Analytical Experiment for General Chemistry

    NASA Astrophysics Data System (ADS)

    Silva, Cesar R.; Simoni, Jose A.; Collins, Carol H.; Volpe, Pedro L. O.

    1999-10-01

    Ascorbic acid is suggested as the weighable compound for the standardization of iodine solutions in an analytical experiment in general chemistry. The experiment involves an iodometric titration in which iodine reacts with ascorbic acid, oxidizing it to dehydroascorbic acid. The redox titration endpoint is determined by the first iodine excess that is complexed with starch, giving a deep blue-violet color. The results of the titration of iodine solution using ascorbic acid as a calibration standard were compared with the results acquired by the classic method using a standardized solution of sodium thiosulfate. The standardization of the iodine solution using ascorbic acid was accurate and precise, with the advantages of saving time and avoiding mistakes due to solution preparation. The colorless ascorbic acid solution gives a very clear and sharp titration end point with starch. It was shown by thermogravimetric analysis that ascorbic acid can be dried at 393 K for 2 h without decomposition. This experiment allows general chemistry students to perform an iodometric titration during a single laboratory period, determining with precision the content of vitamin C in pharmaceutical formulations.
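
    A worked numerical example of the standardization step may help; the mass and volume below are invented for illustration, and the usual 1:1 ascorbic acid to iodine stoichiometry is assumed.

        # Standardizing an iodine solution against weighed ascorbic acid, assuming
        # the 1:1 stoichiometry  C6H8O6 + I2 -> C6H6O6 + 2 HI
        M_ASCORBIC = 176.12          # g/mol
        mass_aa = 0.0881             # g of dried ascorbic acid weighed out (hypothetical)
        v_iodine = 25.00e-3          # L of iodine solution at the starch endpoint (hypothetical)

        n_aa = mass_aa / M_ASCORBIC  # mol of ascorbic acid = mol of I2 consumed
        c_iodine = n_aa / v_iodine   # mol/L
        print(f"c(I2) = {c_iodine:.4f} mol/L")   # about 0.0200 mol/L for these numbers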

  19. The Effective-One-Body Approach to the General Relativistic Two Body Problem

    NASA Astrophysics Data System (ADS)

    Damour, Thibault; Nagar, Alessandro

    The two-body problem in General Relativity has been the subject of many analytical investigations. After reviewing some of the methods used to tackle this problem (and, more generally, the N-body problem), we focus on a new, recently introduced approach to the motion and radiation of (comparable mass) binary systems: the Effective One Body (EOB) formalism. We review the basic elements of this formalism, and discuss some of its recent developments. Several recent comparisons between EOB predictions and Numerical Relativity (NR) simulations have shown the aptitude of the EOB formalism to provide accurate descriptions of the dynamics and radiation of various binary systems (comprising black holes or neutron stars) in regimes that are inaccessible to other analytical approaches (such as the last orbits and the merger of comparable mass black holes). In synergy with NR simulations, post-Newtonian (PN) theory and Gravitational Self-Force (GSF) computations, the EOB formalism is likely to provide an efficient way of computing the very many accurate template waveforms that are needed for Gravitational Wave (GW) data analysis purposes.

  20. A multi-frequency iterative imaging method for discontinuous inverse medium problem

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Feng, Lixin

    2018-06-01

    The inverse medium problem with a discontinuous refractive index is a challenging class of inverse problem. We employ primal-dual theory and fast solution of integral equations, and propose a new iterative imaging method. The selection criterion for the regularization parameter is given by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented that proceeds from low to high frequency. We also discuss the initial guess selection strategy by semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.
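
    To make the regularization-parameter step concrete, here is a minimal Python sketch of generalized cross-validation for a Tikhonov-regularized linear problem; the test matrix and noise level are invented, and this is only the generic GCV recipe, not the authors' recursive linearization code.

        import numpy as np

        def gcv_tikhonov(A, b, lambdas):
            # GCV(lam) = n * ||(I - A_lam) b||^2 / tr(I - A_lam)^2, evaluated via the SVD of A
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            beta = U.T @ b
            n = len(b)
            scores = []
            for lam in lambdas:
                f = s**2 / (s**2 + lam**2)            # Tikhonov filter factors
                resid = np.sum(((1.0 - f) * beta)**2) + (b @ b - beta @ beta)
                scores.append(n * resid / (n - np.sum(f))**2)
            return lambdas[int(np.argmin(scores))]

        rng = np.random.default_rng(1)
        A = np.vander(np.linspace(0.0, 1.0, 40), 12, increasing=True)   # ill-conditioned test matrix
        x_true = rng.normal(size=12)
        b = A @ x_true + 1e-3 * rng.normal(size=40)
        print(gcv_tikhonov(A, b, np.logspace(-8, 0, 60)))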

  1. A method for calculating strut and splitter plate noise in exit ducts: Theory and verification

    NASA Technical Reports Server (NTRS)

    Fink, M. R.

    1978-01-01

    Portions of a four-year analytical and experimental investigation relative to noise radiation from engine internal components in turbulent flow are summarized. Spectra measured for such airfoils over a range of chord, thickness ratio, flow velocity, and turbulence level were compared with predictions made by an available rigorous thin-airfoil analytical method. This analysis included the effects of flow compressibility and source noncompactness. Generally good agreement was obtained. This noise calculation method for isolated airfoils in turbulent flow was combined with a method for calculating transmission of sound through a subsonic exit duct and with an empirical far-field directivity shape. These three elements were checked separately and were individually shown to give close agreement with data. This combination provides a method for predicting engine internally generated aft-radiated noise from radial struts and stators, and annular splitter rings. Calculated sound power spectra, directivity, and acoustic pressure spectra were compared with the best available data. These data were for noise caused by a fan exit duct annular splitter ring, larger-chord stator blades, and turbine exit struts.

  2. Sweeping as a multistep enrichment process in micellar electrokinetic chromatography: the retention factor gradient effect.

    PubMed

    El-Awady, Mohamed; Pyell, Ute

    2013-07-05

    The application of a new method developed for the assessment of sweeping efficiency in MEKC under homogeneous and inhomogeneous electric field conditions is extended to the general case, in which the distribution coefficient and the electric conductivity of the analyte in the sample zone and in the separation compartment are varied. As test analytes, p-hydroxybenzoates (parabens), benzamide, and some aromatic amines are studied under MEKC conditions with SDS as anionic surfactant. We show that in the general case - in contrast to the classical description - the obtainable enrichment factor is not only dependent on the retention factor of the analyte in the sample zone but also dependent on the retention factor in the background electrolyte (BGE). It is shown that in the general case sweeping is inherently a multistep focusing process. We describe an additional focusing/defocusing step (the retention factor gradient effect, RFGE) quantitatively by extending the classical equation employed for the description of the sweeping process with an additional focusing/defocusing factor. The validity of this equation is demonstrated experimentally (and theoretically) under variation of the organic solvent content (in the sample and/or the BGE), the type of organic solvent (in the sample and/or the BGE), the electric conductivity (in the sample), the pH (in the sample), and the concentration of surfactant (in the BGE). It is shown that very high enrichment factors can be obtained if the pH in the sample zone makes it possible to convert the analyte into a charged species that has a high distribution coefficient with respect to an oppositely charged micellar phase, while the pH in the BGE enables separation of the neutral species under moderate retention factor conditions. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. A general analytical platform and strategy in search for illegal drugs.

    PubMed

    Johansson, Monika; Fransson, Dick; Rundlöf, Torgny; Huynh, Ngoc-Hang; Arvidsson, Torbjörn

    2014-11-01

    An effective screening procedure to identify and quantify active pharmaceutical substances in suspected illegal medicinal products is described. The analytical platform, consisting of accurate mass determination with liquid chromatography time-of-flight mass spectrometry (LC-QTOF-MS) in combination with nuclear magnetic resonance (NMR) spectroscopy, provides an excellent analytical tool to screen for unknowns in medicinal products, food supplements and herbal formulations. This analytical approach has been successfully applied to analyze thousands of samples. The general screening method usually starts with a methanol extraction of tablets/capsules followed by liquid chromatographic separation on a Halo Phenyl-Hexyl column (2.7 μm; 100 mm × 2.1 mm) using an acetonitrile/0.1% formic acid gradient as eluent. The accurate mass of peaks of interest was recorded and a search made against an in-house database containing approximately 4200 substances, mostly pharmaceutical compounds. The search could be general or tailored against different classes of compounds. Hits were confirmed by analyzing a reference substance and/or by NMR. Quantification was normally performed with quantitative NMR (qNMR) spectroscopy. Applications for weight-loss substances like sibutramine and orlistat, sexual potency enhancement (PDE-5 inhibitors), and analgesic drugs are presented in this study. We have also identified prostaglandin analogues in eyelash growth serum, exemplified by isopropyl cloprostenate and bimatoprost. For creams and ointments, matrix solid-phase dispersion (MSPD) was found to give clean extracts with high recovery prior to LC-MS analyses. The structural elucidation of cetilistat, a new weight-loss substance recently found in illegal medicines purchased over the Internet, is also presented. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Analytical quality by design: a tool for regulatory flexibility and robust analytics.

    PubMed

    Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

    Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for a quality by design (QbD) based analytical approach. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within this region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists on the implementation of AQbD in the pharmaceutical quality system and also relates it to product quality by design and process analytical technology (PAT).

  5. Analytical Quality by Design: A Tool for Regulatory Flexibility and Robust Analytics

    PubMed Central

    Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

    Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for a quality by design (QbD) based analytical approach. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within this region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists on the implementation of AQbD in the pharmaceutical quality system and also relates it to product quality by design and process analytical technology (PAT). PMID:25722723

  6. Electromagnetic topology: Characterization of internal electromagnetic coupling

    NASA Technical Reports Server (NTRS)

    Parmantier, J. P.; Aparicio, J. P.; Faure, F.

    1991-01-01

    The main principles of a method dealing with the resolution of internal electromagnetic problems, Electromagnetic Topology, are presented. A very fruitful approach is to generalize multiconductor transmission line network theory to the basic equation of Electromagnetic Topology: the BLT equation. This generalization is illustrated by the treatment of an aperture as a four-port junction. Analytical and experimental derivations of the scattering parameters are presented. These concepts are used to study the electromagnetic coupling in a scale model of an aircraft and can be seen as a convenient means of assessing internal electromagnetic interference.

  7. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  8. Soliton and periodic solutions for time-dependent coefficient non-linear equation

    NASA Astrophysics Data System (ADS)

    Guner, Ozkan

    2016-01-01

    In this article, we establish exact solutions for the generalized (3+1)-dimensional variable coefficient Kadomtsev-Petviashvili (GVCKP) equation. Using a solitary wave ansatz in terms of ? functions and the modified sine-cosine method, we find exact analytical bright soliton solutions and exact periodic solutions for the considered model. The physical parameters in the soliton solutions are obtained as functions of the dependent model coefficients. The effectiveness and reliability of the method are shown by its application to the GVCKP equation.

  9. A performability solution method for degradable nonrepairable systems

    NASA Technical Reports Server (NTRS)

    Furchtgott, D. G.; Meyer, J. F.

    1984-01-01

    The present performability model-solving algorithm identifies performance with 'reward', representing the state behavior of a system S by a finite-state stochastic process and determining reward by means of reward rates that are associated with the states of the base model. A general method is obtained for determining the probability distribution function of the performance (reward) variable, and therefore the performability, of the corresponding system. This is done for bounded utilization periods, and the result is an integral expression which is either analytically or numerically solvable.

  10. General Procedure for the Easy Calculation of pH in an Introductory Course of General or Analytical Chemistry

    ERIC Educational Resources Information Center

    Cepriá, Gemma; Salvatella, Luis

    2014-01-01

    All pH calculations for simple acid-base systems used in introductory courses on general or analytical chemistry can be carried out by using a general procedure requiring the use of predominance diagrams. In particular, the pH is calculated as the sum of an independent term equaling the average pKa values of the acids involved in the…

  11. The Impact of Institutional Factors on the Relationship Between High School Mathematics Curricula and College Mathematics Course-Taking and Achievement

    ERIC Educational Resources Information Center

    Harwell, Michael

    2013-01-01

    Meta-analytic methods were used to examine the moderating effect of institutional factors on the relationship between high school mathematics curricula and college mathematics course-taking and achievement from a sample of 32 colleges. The findings suggest that the impact of curriculum on college mathematics outcomes is not generally moderated by…

  12. Do Specialty Courts Achieve Better Outcomes for Children in Foster Care than General Courts?

    ERIC Educational Resources Information Center

    Sloan, Frank A.; Gifford, Elizabeth J.; Eldred, Lindsey M.; Acquah, Kofi F.; Blevins, Claire E.

    2013-01-01

    Objective: This study assessed the effects of unified family and drug treatment courts (DTCs) on the resolution of cases involving foster care children and the resulting effects on school performance. Method: The first analytic step was to assess the impacts of presence of unified and DTCs in North Carolina counties on time children spent in…

  13. The Toda lattice as a forced integrable system

    NASA Technical Reports Server (NTRS)

    Hansen, P. J.; Kaup, D. J.

    1985-01-01

    The analytic properties of the Jost functions for the inverse scattering transform associated with the forced Toda lattice are shown to determine the time evolution of this particular boundary value problem. It is suggested that inverse scattering methods may be used generally to analyze forced integrable systems. Thus an extension of the applicability of the inverse scattering transform is indicated.

  14. Fast and Analytical EAP Approximation from a 4th-Order Tensor.

    PubMed

    Ghosh, Aurobrata; Deriche, Rachid

    2012-01-01

    Generalized diffusion tensor imaging (GDTI) was developed to model complex apparent diffusivity coefficient (ADC) using higher-order tensors (HOTs) and to overcome the inherent single-peak shortcoming of DTI. However, the geometry of a complex ADC profile does not correspond to the underlying structure of fibers. This tissue geometry can be inferred from the shape of the ensemble average propagator (EAP). Though interesting methods for estimating a positive ADC using 4th-order diffusion tensors were developed, GDTI in general was overtaken by other approaches, for example, the orientation distribution function (ODF), since it is considerably difficult to recuperate the EAP from a HOT model of the ADC in GDTI. In this paper, we present a novel closed-form approximation of the EAP using Hermite polynomials from a modified HOT model of the original GDTI-ADC. Since the solution is analytical, it is fast, differentiable, and the approximation converges well to the true EAP. This method also makes the effort of computing a positive ADC worthwhile, since now both the ADC and the EAP can be used and have closed forms. We demonstrate our approach with 4th-order tensors on synthetic data and in vivo human data.

  15. Analytical method for analysis of electromagnetic scattering from inhomogeneous spherical structures using duality principles

    NASA Astrophysics Data System (ADS)

    Kiani, M.; Abdolali, A.; Safari, M.

    2018-03-01

    In this article, an analytical approach is presented for the analysis of electromagnetic (EM) scattering from radially inhomogeneous spherical structures (RISSs) based on the duality principle. According to the spherical symmetry, similar angular dependencies in all the regions are considered using spherical harmonics. To extract the radial dependency, the system of differential equations of wave propagation toward the inhomogeneity direction is equated with the dual planar ones. A general duality between electromagnetic fields and parameters and scattering parameters of the two structures is introduced. The validity of the proposed approach is verified through a comprehensive example. The presented approach replaces a complicated problem in spherical coordinates with an easy, well-posed, and previously solved problem in planar geometry. This approach is valid for all continuously varying inhomogeneity profiles. One of the major advantages of the proposed method is the capability of studying two general and applicable types of RISSs. As an interesting application, a class of lens antenna based on the physical concept of the gradient refractive index material is introduced. The approach is used to analyze the EM scattering from the structure and to validate the strong performance of the lens.

  16. Fast and Analytical EAP Approximation from a 4th-Order Tensor

    PubMed Central

    Ghosh, Aurobrata; Deriche, Rachid

    2012-01-01

    Generalized diffusion tensor imaging (GDTI) was developed to model complex apparent diffusivity coefficient (ADC) using higher-order tensors (HOTs) and to overcome the inherent single-peak shortcoming of DTI. However, the geometry of a complex ADC profile does not correspond to the underlying structure of fibers. This tissue geometry can be inferred from the shape of the ensemble average propagator (EAP). Though interesting methods for estimating a positive ADC using 4th-order diffusion tensors were developed, GDTI in general was overtaken by other approaches, for example, the orientation distribution function (ODF), since it is considerably difficult to recuperate the EAP from a HOT model of the ADC in GDTI. In this paper, we present a novel closed-form approximation of the EAP using Hermite polynomials from a modified HOT model of the original GDTI-ADC. Since the solution is analytical, it is fast, differentiable, and the approximation converges well to the true EAP. This method also makes the effort of computing a positive ADC worthwhile, since now both the ADC and the EAP can be used and have closed forms. We demonstrate our approach with 4th-order tensors on synthetic data and in vivo human data. PMID:23365552

  17. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  18. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  19. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  20. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  1. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  2. SAM Radiochemical Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target radiochemical analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select radiochemical analytes.

  3. Analytical performance of 17 general chemistry analytes across countries and across manufacturers in the INPUtS project of EQA organizers in Italy, the Netherlands, Portugal, United Kingdom and Spain.

    PubMed

    Weykamp, Cas; Secchiero, Sandra; Plebani, Mario; Thelen, Marc; Cobbaert, Christa; Thomas, Annette; Jassam, Nuthar; Barth, Julian H; Perich, Carmen; Ricós, Carmen; Faria, Ana Paula

    2017-02-01

    Optimum patient care in relation to laboratory medicine is achieved when results of laboratory tests are equivalent, irrespective of the analytical platform used or the country where the laboratory is located. Standardization and harmonization minimize differences, and the success of efforts to achieve this can be monitored with international category 1 external quality assessment (EQA) programs. An EQA project with commutable samples, targeted with reference measurement procedures (RMPs), was organized by EQA institutes in Italy, the Netherlands, Portugal, the UK, and Spain. Results for 17 general chemistry analytes were evaluated across countries and across manufacturers according to performance specifications derived from biological variation (BV). For K, uric acid, glucose, cholesterol, and high-density lipoprotein (HDL) cholesterol, the minimum performance specification was met in all countries and by all manufacturers. For Na, Cl, and Ca, the minimum performance specifications were met by none of the countries and manufacturers. For enzymes, the situation was complicated, as standardization of results of enzymes toward RMPs was still not achieved in 20% of the laboratories and questionable in the remaining 80%. The overall performance of the measurement of 17 general chemistry analytes in European medical laboratories met the minimum performance specifications. In this general picture, there were no significant differences per country and no significant differences per manufacturer. There were major differences between the analytes. There were six analytes for which the minimum quality specifications were not met, and manufacturers should improve their performance for these analytes. Standardization of results of enzymes requires ongoing efforts.
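
    The "minimum performance specifications" mentioned here are conventionally derived from within-subject (CVI) and between-subject (CVG) biological variation. The Python sketch below uses the commonly cited Fraser-style multipliers and illustrative CV values; both the multipliers and the numbers are assumptions for illustration, not figures taken from this study.

        import math

        def minimum_specs(cv_i, cv_g):
            # Minimum specifications = 1.5 x the "desirable" specifications (assumed convention)
            cv_a = 0.75 * cv_i                                  # allowable imprecision, %
            bias = 0.375 * math.sqrt(cv_i**2 + cv_g**2)         # allowable bias, %
            tea = 1.65 * cv_a + bias                            # allowable total error, %
            return cv_a, bias, tea

        # Illustrative (approximate) biological-variation data for two analytes
        for name, cv_i, cv_g in [("glucose", 5.6, 7.5), ("sodium", 0.6, 0.7)]:
            cv_a, bias, tea = minimum_specs(cv_i, cv_g)
            print(f"{name}: CVa < {cv_a:.2f}%, bias < {bias:.2f}%, TEa < {tea:.2f}%")

    The very small allowable error that results for sodium illustrates why tightly regulated electrolytes such as Na, Cl, and Ca are the analytes for which the specifications are hardest to meet.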

  4. [Quality Management and Quality Specifications of Laboratory Tests in Clinical Studies--Challenges in Pre-Analytical Processes in Clinical Laboratories].

    PubMed

    Ishibashi, Midori

    2015-01-01

    The cost, speed, and quality are the three important factors recently indicated by the Ministry of Health, Labour and Welfare (MHLW) for the purpose of accelerating clinical studies. Based on this background, the importance of laboratory tests is increasing, especially in the evaluation of clinical study participants' entry and safety, and drug efficacy. To assure the quality of laboratory tests, providing high-quality laboratory tests is mandatory. For providing adequate quality assurance in laboratory tests, quality control in the three fields of pre-analytical, analytical, and post-analytical processes is extremely important. There are, however, no detailed written requirements concerning specimen collection, handling, preparation, storage, and shipping. Most laboratory tests for clinical studies are performed onsite in a local laboratory; however, some laboratory tests are done in offsite central laboratories after specimen shipping. As factors affecting laboratory tests, individual and inter-individual variations are well known. Besides these factors, standardizing specimen collection, handling, preparation, storage, and shipping may improve and maintain the high quality of clinical studies in general. Furthermore, the analytical method, units, and reference interval are also important factors. It is concluded that, to overcome the problems derived from pre-analytical processes, it is necessary to standardize specimen handling in a broad sense.

  5. Stability analysis of magnetized neutron stars - a semi-analytic approach

    NASA Astrophysics Data System (ADS)

    Herbrik, Marlene; Kokkotas, Kostas D.

    2017-04-01

    We implement a semi-analytic approach for stability analysis, addressing the ongoing uncertainty about stability and structure of neutron star magnetic fields. Applying the energy variational principle, a model system is displaced from its equilibrium state. The related energy density variation is set up analytically, whereas its volume integration is carried out numerically. This facilitates the consideration of more realistic neutron star characteristics within the model compared to analytical treatments. At the same time, our method retains the possibility to yield general information about neutron star magnetic field and composition structures that are likely to be stable. In contrast to numerical studies, classes of parametrized systems can be studied at once, finally constraining realistic configurations for interior neutron star magnetic fields. We apply the stability analysis scheme on polytropic and non-barotropic neutron stars with toroidal, poloidal and mixed fields testing their stability in a Newtonian framework. Furthermore, we provide the analytical scheme for dropping the Cowling approximation in an axisymmetric system and investigate its impact. Our results confirm the instability of simple magnetized neutron star models as well as a stabilization tendency in the case of mixed fields and stratification. These findings agree with analytical studies whose spectrum of model systems we extend by lifting former simplifications.

  6. SU-C-204-01: A Fast Analytical Approach for Prompt Gamma and PET Predictions in a TPS for Proton Range Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroniger, K; Herzog, M; Landry, G

    2015-06-15

    Purpose: We describe and demonstrate a fast analytical tool for prompt-gamma emission prediction based on filter functions applied to the depth dose profile. We present the implementation in a treatment planning system (TPS) of the same algorithm for positron emitter distributions. Methods: The prediction of the desired observable is based on the convolution of filter functions with the depth dose profile. For both prompt-gammas and positron emitters, the results of Monte Carlo simulations (MC) are compared with those of the analytical tool. For prompt-gamma emission from inelastic proton-induced reactions, homogeneous and inhomogeneous phantoms alongside patient data are used as irradiation targets of mono-energetic proton pencil beams. The accuracy of the tool is assessed in terms of the shape of the analytically calculated depth profiles and their absolute yields, compared to MC. For the positron emitters, the method is implemented in a research RayStation TPS and compared to MC predictions. Digital phantoms and patient data are used and positron emitter spatial density distributions are analyzed. Results: Calculated prompt-gamma profiles agree with MC within 3% in terms of absolute yield and reproduce the correct shape. Based on an arbitrary reference material and by means of 6 filter functions (one per chemical element), profiles in any other material composed of those elements can be predicted. The TPS-implemented algorithm is accurate enough to enable, via the analytically calculated positron emitter profiles, detection of range differences between the TPS and MC with errors of the order of 1–2 mm. Conclusion: The proposed analytical method predicts prompt-gamma and positron emitter profiles which generally agree with the distributions obtained by a full MC. The implementation of the tool in a TPS shows that reliable profiles can be obtained directly from the dose calculated by the TPS, without the need for a full MC simulation.
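
    The core of the analytical tool, convolving the depth dose profile with element-specific filter functions, can be sketched in a few lines of Python; the Gaussian "Bragg peak" and the exponential filter below are invented placeholders, not the filter functions derived in the paper.

        import numpy as np

        def emission_profile(depth_dose, filter_fn):
            # Predicted prompt-gamma (or positron-emitter) depth profile as a convolution
            return np.convolve(depth_dose, filter_fn, mode="same")

        z = np.linspace(0.0, 150.0, 601)                             # depth grid in mm
        dose = 0.3 + 0.7 * np.exp(-0.5 * ((z - 120.0) / 4.0) ** 2)   # toy Bragg-peak-like profile
        dose[z > 128.0] = 0.0
        filt = np.exp(-np.abs(z - z.mean()) / 10.0)                  # hypothetical, centered filter
        filt /= filt.sum()
        pg = emission_profile(dose, filt)
        print(z[np.argmax(pg)])                                      # emission maximum near the peak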

  7. Analytic energy gradients for orbital-optimized MP3 and MP2.5 with the density-fitting approximation: An efficient implementation.

    PubMed

    Bozkaya, Uğur

    2018-03-15

    Efficient implementations of analytic gradients for the orbital-optimized MP3 and MP2.5 and their standard versions with the density-fitting approximation, which are denoted as DF-MP3, DF-MP2.5, DF-OMP3, and DF-OMP2.5, are presented. The DF-MP3, DF-MP2.5, DF-OMP3, and DF-OMP2.5 methods are applied to a set of alkanes and noncovalent interaction complexes to compare the computational cost with the conventional MP3, MP2.5, OMP3, and OMP2.5. Our results demonstrate that the density-fitted perturbation theory (DF-MP) methods considered substantially reduce the computational cost compared to conventional MP methods. The efficiency of our DF-MP methods arises from the reduced input/output (I/O) time and the acceleration of gradient-related terms, such as computations of particle density and generalized Fock matrices (PDMs and GFM), solution of the Z-vector equation, back-transformations of PDMs and GFM, and evaluation of analytic gradients in the atomic orbital basis. Further, application results show that errors introduced by the DF approach are negligible. Mean absolute errors for bond lengths of a molecular set, with the cc-pCVQZ basis set, are 0.0001-0.0002 Å. © 2017 Wiley Periodicals, Inc.
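
    For readers unfamiliar with the MP2.5 model referenced here, it is commonly defined as the arithmetic mean of the MP2 and MP3 energies, i.e. MP2 plus half of the third-order correction (stated as general background, not taken from this paper):

        E_{\mathrm{MP2.5}} = E_{\mathrm{HF}} + E^{(2)} + \tfrac{1}{2}E^{(3)}
                           = \tfrac{1}{2}\left(E_{\mathrm{MP2}} + E_{\mathrm{MP3}}\right)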

  8. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  9. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  10. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  11. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  12. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  13. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  14. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  15. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture... Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MREs are listed as follows: (1) Official Methods of Analysis of AOAC...

  16. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  17. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  18. Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2007-01-01

    Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply-censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis" where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
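
    Although the software described is written in the S language, the same K-M treatment of left-censored (nondetect) data can be sketched in Python using the standard flipping trick; the concentrations, detection flags, and the lifelines/pandas dependencies below are all assumptions made for illustration, not part of the authors' package.

        import numpy as np
        from lifelines import KaplanMeierFitter

        # Left-censored data: 'values' holds the reporting limit where detected is False.
        values   = np.array([0.5, 0.5, 1.0, 1.2, 1.0, 2.3, 3.1, 4.8, 1.0, 0.8])
        detected = np.array([False, True, False, True, False, True, True, True, True, False])

        # Flip about a constant above the maximum so left-censoring becomes right-censoring,
        # then apply the ordinary Kaplan-Meier estimator.
        FLIP = values.max() + 1.0
        kmf = KaplanMeierFitter()
        kmf.fit(FLIP - values, event_observed=detected)

        # The survival function of the flipped data, re-flipped, is the ECDF of the original data.
        ecdf = kmf.survival_function_.iloc[::-1]
        ecdf.index = FLIP - ecdf.index
        print(ecdf)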

  19. General properties and analytical approximations of photorefractive solitons

    NASA Astrophysics Data System (ADS)

    Geisler, A.; Homann, F.; Schmidt, H.-J.

    2004-08-01

    We investigate general properties of spatial 1-dimensional bright photorefractive solitons and discuss various analytical approximations for the soliton profile and the half width, both depending on an intensity parameter r. The case of dark solitons is also briefly addressed.

  20. Analysis of Mathematical Modelling on Potentiometric Biosensors

    PubMed Central

    Mehala, N.; Rajendran, L.

    2014-01-01

    A mathematical model of potentiometric enzyme electrodes for a nonsteady condition has been developed. The model is based on the system of two coupled nonlinear time-dependent reaction diffusion equations for Michaelis-Menten formalism that describes the concentrations of substrate and product within the enzymatic layer. Analytical expressions for the concentration of substrate and product and the corresponding flux response have been derived for all values of parameters using the new homotopy perturbation method. Furthermore, the complex inversion formula is employed in this work to solve the boundary value problem. The analytical solutions obtained allow a full description of the response curves for only two kinetic parameters (unsaturation/saturation parameter and reaction/diffusion parameter). Theoretical descriptions are given for the two limiting cases (zero and first order kinetics) and relatively simple approaches for general cases are presented. All the analytical results are compared with simulation results using Scilab/Matlab program. The numerical results agree with the appropriate theories. PMID:25969765

  1. Analysis of mathematical modelling on potentiometric biosensors.

    PubMed

    Mehala, N; Rajendran, L

    2014-01-01

    A mathematical model of potentiometric enzyme electrodes for a nonsteady condition has been developed. The model is based on the system of two coupled nonlinear time-dependent reaction diffusion equations for Michaelis-Menten formalism that describes the concentrations of substrate and product within the enzymatic layer. Analytical expressions for the concentration of substrate and product and the corresponding flux response have been derived for all values of parameters using the new homotopy perturbation method. Furthermore, the complex inversion formula is employed in this work to solve the boundary value problem. The analytical solutions obtained allow a full description of the response curves for only two kinetic parameters (unsaturation/saturation parameter and reaction/diffusion parameter). Theoretical descriptions are given for the two limiting cases (zero and first order kinetics) and relatively simple approaches for general cases are presented. All the analytical results are compared with simulation results using Scilab/Matlab program. The numerical results agree with the appropriate theories.

  2. Determination of the aerosol size distribution by analytic inversion of the extinction spectrum in the complex anomalous diffraction approximation.

    PubMed

    Franssens, G; De Maziére, M; Fonteyn, D

    2000-08-20

    A new derivation is presented for the analytical inversion of aerosol spectral extinction data to size distributions. It is based on the complex analytic extension of the anomalous diffraction approximation (ADA). We derive inverse formulas that are applicable to homogeneous nonabsorbing and absorbing spherical particles. Our method simplifies, generalizes, and unifies a number of results obtained previously in the literature. In particular, we clarify the connection between the ADA transform and the Fourier and Laplace transforms. Also, the effect of the particle refractive-index dispersion on the inversion is examined. It is shown that, when Lorentz's model is used for this dispersion, the continuous ADA inverse transform is mathematically well posed, whereas with a constant refractive index it is ill posed. Further, a condition is given, in terms of Lorentz parameters, for which the continuous inverse operator does not amplify the error.

  3. Postbuckling behavior of axially compressed graphite-epoxy cylindrical panels with circular holes

    NASA Technical Reports Server (NTRS)

    Knight, N. F., Jr.; Starnes, J. H., Jr.

    1984-01-01

    The results of an experimental and analytical study of the effects of circular holes on the postbuckling behavior of graphite-epoxy cylindrical panels loaded in axial compression are presented. The STAGSC-1 general shell analysis computer code is used to determine the buckling and postbuckling response of the panels. The loaded, curved ends of the specimens were clamped by fixtures and the unloaded, straight edges were simply supported by knife-edge restraints. The panels are loaded by uniform end shortening to several times the end shortening at buckling. The unstable equilibrium path of the postbuckling response is obtained analytically by using a method based on controlling an equilibrium-path-arc-length parameter instead of the traditional load parameter. The effects of hole diameter, panel radius, and panel thickness on postbuckling response are considered in the study. Experimental results are compared with the analytical results and the failure characteristics of the graphite-epoxy panels are described.

  4. On the analytical modeling of the nonlinear vibrations of pretensioned space structures

    NASA Technical Reports Server (NTRS)

    Housner, J. M.; Belvin, W. K.

    1983-01-01

    Pretensioned structures are receiving considerable attention as candidate large space structures. A typical example is a hoop-column antenna. The large number of preloaded members requires efficient analytical methods for concept validation and design. Validation through analyses is especially important since ground testing may be limited due to gravity effects and structural size. The objective of the present investigation is to examine the analytical modeling of pretensioned members undergoing nonlinear vibrations. Two approximate nonlinear analyses are developed to model general structural arrangements which include beam-columns and pretensioned cables attached to a common nucleus, such as may occur at a joint of a pretensioned structure. Attention is given to structures undergoing nonlinear steady-state oscillations due to sinusoidal excitation forces. Three analyses (linear, quasi-linear, and nonlinear) are conducted and applied to study the response of a relatively simple cable-stiffened structure.

  5. Quantification of four major metabolites of embryotoxic N-methyl- and N-ethyl-2-pyrrolidone in human urine by cooled-injection gas chromatography and isotope dilution mass spectrometry.

    PubMed

    Schindler, Birgit K; Koslitz, Stephan; Meier, Swetlana; Belov, Vladimir N; Koch, Holger M; Weiss, Tobias; Brüning, Thomas; Käfferlein, Heiko U

    2012-04-17

    N-Methyl- and N-ethyl-2-pyrrolidone (NMP and NEP) are frequently used industrial solvents and were shown to be embryotoxic in animal experiments. We developed a sensitive, specific, and robust analytical method based on cooled-injection (CIS) gas chromatography and isotope dilution mass spectrometry to analyze 5-hydroxy-N-ethyl-2-pyrrolidone (5-HNEP) and 2-hydroxy-N-ethylsuccinimide (2-HESI), two newly identified presumed metabolites of NEP, and their corresponding methyl counterparts (5-HNMP, 2-HMSI) in human urine. The urine was spiked with deuterium-labeled analogues of these metabolites. The analytes were separated from the urinary matrix by solid-phase extraction and silylated prior to quantification. Validation of this method was carried out by using both spiked pooled urine samples and urine samples from 56 individuals of the general population with no known occupational exposure to NMP and NEP. Interday and intraday imprecision was better than 8% for all metabolites, while the limits of detection were between 5 and 20 μg/L depending on the analyte. The high sensitivity of the method enables us to quantify NMP and NEP metabolites at current environmental exposures by human biomonitoring.

  6. An active learning representative subset selection method using net analyte signal.

    PubMed

    He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi

    2018-05-05

    To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference of Euclidean norm of net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vector, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying projection matrix with spectra of samples. Scalar value of NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and samples with the largest distance are added to selected set sequentially. Last, the concentration of the analyte is measured such that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced. Copyright © 2018 Elsevier B.V. All rights reserved.
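
    A schematic Python implementation of the selection idea is given below. The NAS projection follows a common Lorber-style construction (deflate the analyte contribution, then project out the remaining interferent space), and the greedy rule operates on distances between NAS norms; both are simplifications of, not a transcription of, the authors' procedure.

        import numpy as np

        def nas_projection(X, y):
            # X: (n_samples, n_wavelengths) calibration spectra, y: (n_samples,) concentrations
            alpha = (X.T @ y) / (y @ y)                  # estimated pure-analyte spectral direction
            X_int = X - np.outer(y, alpha)               # interferent-only part of the spectra
            U, s, _ = np.linalg.svd(X_int, full_matrices=False)
            basis = X_int.T @ U[:, s > 1e-10 * s[0]]     # spans the interferent space
            Q, _ = np.linalg.qr(basis)
            return np.eye(X.shape[1]) - Q @ Q.T          # projector onto the net analyte signal

        def select_samples(P, X_selected, X_candidates, n_new):
            # Greedily add candidates whose NAS norm is farthest from the already-selected set
            sel = np.linalg.norm(X_selected @ P, axis=1).tolist()
            cand = np.linalg.norm(X_candidates @ P, axis=1)
            chosen = []
            for _ in range(n_new):
                dist = np.array([min(abs(c - s) for s in sel) for c in cand])
                dist[chosen] = -np.inf                   # do not re-pick earlier choices
                i = int(np.argmax(dist))
                chosen.append(i)
                sel.append(cand[i])
            return chosen

    Only the samples returned by select_samples would then be sent for reference concentration measurement, which is where the claimed savings over random selection come from.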

  7. An active learning representative subset selection method using net analyte signal

    NASA Astrophysics Data System (ADS)

    He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi

    2018-05-01

    To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference of Euclidean norm of net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vector, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying projection matrix with spectra of samples. Scalar value of NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and samples with the largest distance are added to selected set sequentially. Last, the concentration of the analyte is measured such that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced.

  8. Conducting Meta-Analyses Based on p Values

    PubMed Central

    van Aert, Robbie C. M.; Wicherts, Jelte M.; van Assen, Marcel A. L. M.

    2016-01-01

    Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform. PMID:27694466

  9. An analytical-numerical method for determining the mechanical response of a condenser microphone

    PubMed Central

    Homentcovschi, Dorel; Miles, Ronald N.

    2011-01-01

    The paper is based on determining the reaction pressure on the diaphragm of a condenser microphone by integrating numerically the frequency domain Stokes system describing the velocity and the pressure in the air domain beneath the diaphragm. Afterwards, the membrane displacement can be obtained analytically or numerically. The method is general and can be applied to any geometry of the backplate holes, slits, and backchamber. As examples, the method is applied to the Bruel & Kjaer (B&K) 4134 1/2-inch microphone determining the mechanical sensitivity and the mechano-thermal noise for a domain of frequencies and also the displacement field of the membrane for two specified frequencies. These elements compare well with the measured values published in the literature. Also a new design, completely micromachined (including the backvolume) of the B&K micro-electro-mechanical systems (MEM) 1/4-inch measurement microphone is proposed. It is shown that its mechanical performances are very similar to those of the B&K MEMS measurement microphone. PMID:22225026

  10. An analytical-numerical method for determining the mechanical response of a condenser microphone.

    PubMed

    Homentcovschi, Dorel; Miles, Ronald N

    2011-12-01

    The paper is based on determining the reaction pressure on the diaphragm of a condenser microphone by integrating numerically the frequency domain Stokes system describing the velocity and the pressure in the air domain beneath the diaphragm. Afterwards, the membrane displacement can be obtained analytically or numerically. The method is general and can be applied to any geometry of the backplate holes, slits, and backchamber. As examples, the method is applied to the Bruel & Kjaer (B&K) 4134 1/2-inch microphone determining the mechanical sensitivity and the mechano-thermal noise for a domain of frequencies and also the displacement field of the membrane for two specified frequencies. These elements compare well with the measured values published in the literature. Also a new design, completely micromachined (including the backvolume) of the B&K micro-electro-mechanical systems (MEM) 1/4-inch measurement microphone is proposed. It is shown that its mechanical performances are very similar to those of the B&K MEMS measurement microphone. © 2011 Acoustical Society of America

  11. Subtracting infrared renormalons from Wilson coefficients: Uniqueness and power dependences on ΛQCD

    NASA Astrophysics Data System (ADS)

    Mishima, Go; Sumino, Yukinari; Takaura, Hiromasa

    2017-06-01

    In the context of operator product expansion (OPE) and using the large-β0 approximation, we propose a method to define Wilson coefficients free from uncertainties due to IR renormalons. We first introduce a general observable X(Q^2) with an explicit IR cutoff, and then we extract a genuine UV contribution X_UV as a cutoff-independent part. X_UV includes power corrections ~(Λ_QCD^2/Q^2)^n which are independent of renormalons. Using the integration-by-regions method, we observe that X_UV coincides with the leading Wilson coefficient in the OPE and also clarify that the power corrections originate from the UV region. We examine the scheme dependence of X_UV and single out a specific scheme favorable in terms of analytical properties. Our method would be optimal with respect to systematicity, analyticity and stability. We test our formulation with the examples of the Adler function, the QCD force between a quark-antiquark pair (QQ̄), and the R-ratio in e+e- collisions.
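
    As a rough orientation for readers outside QCD, the OPE structure referred to above can be written schematically as below (a generic textbook form, not the authors' notation; the normalizations are illustrative):

    ```latex
    % Schematic OPE of a generic observable X(Q^2): the leading Wilson coefficient
    % C_0 carries the perturbative series, while power corrections are suppressed
    % by powers of Lambda_QCD^2 / Q^2.
    \[
    X(Q^2) \;\simeq\; C_0\bigl(\alpha_s(Q^2)\bigr)
      \;+\; \sum_{n \ge 1} C_n\bigl(\alpha_s(Q^2)\bigr)\,
            \frac{\langle O_n \rangle}{Q^{2n}},
    \qquad
    \langle O_n \rangle \sim \Lambda_{\mathrm{QCD}}^{2n},
    \]
    % so a renormalon-free definition of C_0 fixes how such power terms are split
    % between the Wilson coefficient and the condensates.
    ```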

  12. Prediction of true test scores from observed item scores and ancillary data.

    PubMed

    Haberman, Shelby J; Yao, Lili; Sinharay, Sandip

    2015-05-01

    In many educational tests which involve constructed responses, a traditional test score is obtained by adding together item scores obtained through holistic scoring by trained human raters. For example, this practice was used until 2008 in the case of GRE(®) General Analytical Writing and until 2009 in the case of TOEFL(®) iBT Writing. With use of natural language processing, it is possible to obtain additional information concerning item responses from computer programs such as e-rater(®). In addition, available information relevant to examinee performance may include scores on related tests. We suggest application of standard results from classical test theory to the available data to obtain best linear predictors of true traditional test scores. In performing such analysis, we require estimation of variances and covariances of measurement errors, a task which can be quite difficult in the case of tests with limited numbers of items and with multiple measurements per item. As a consequence, a new estimation method is suggested based on samples of examinees who have taken an assessment more than once. Such samples are typically not random samples of the general population of examinees, so that we apply statistical adjustment methods to obtain the needed estimated variances and covariances of measurement errors. To examine practical implications of the suggested methods of analysis, applications are made to GRE General Analytical Writing and TOEFL iBT Writing. Results obtained indicate that substantial improvements are possible both in terms of reliability of scoring and in terms of assessment reliability. © 2015 The British Psychological Society.
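
    For orientation, the best-linear-predictor machinery from classical test theory that the abstract invokes has the generic form sketched below (generic notation, not the authors'; the error covariances are exactly what the repeat-taker samples are used to estimate):

    ```latex
    % Best linear predictor of the true score T from a vector X of observed
    % quantities (item scores, automated-scoring features, related-test scores).
    \[
    \hat{T} \;=\; \mu_T + \Sigma_{TX}\,\Sigma_{XX}^{-1}\,(X - \mu_X),
    \qquad
    \Sigma_{XX} \;=\; \Sigma_{\mathrm{true}} + \Sigma_{\mathrm{error}},
    \]
    % so the variances and covariances of measurement errors must be estimated
    % (here, from examinees tested more than once, after statistical adjustment)
    % before the predictor can be formed.
    ```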

  13. 40 CFR 161.180 - Enforcement analytical method.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 161.180... DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements § 161.180 Enforcement analytical method. An analytical method suitable for enforcement purposes must be...

  14. The Green's functions for peridynamic non-local diffusion.

    PubMed

    Wang, L J; Xu, J F; Wang, J X

    2016-09-01

    In this work, we develop the Green's function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green's functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green's functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems.
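
    The representation referred to above has the familiar superposition form sketched below (generic notation for a linear problem on an unbounded domain; it is not the paper's exact statement of the volume-constrained case):

    ```latex
    % Once the Green's function G for a point source is known, the unsteady field
    % is a superposition over the initial data and the source history.
    \[
    u(\mathbf{x},t) \;=\; \int_{\Omega} G(\mathbf{x}-\mathbf{x}',\,t)\,
          u(\mathbf{x}',0)\,\mathrm{d}\mathbf{x}'
      \;+\; \int_{0}^{t}\!\!\int_{\Omega} G(\mathbf{x}-\mathbf{x}',\,t-t')\,
          s(\mathbf{x}',t')\,\mathrm{d}\mathbf{x}'\,\mathrm{d}t',
    \]
    % with the classical (local) solution recovered as the non-local length scale
    % (the horizon) tends to zero.
    ```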

  15. A general method for calculating three-dimensional compressible laminar and turbulent boundary layers on arbitrary wings

    NASA Technical Reports Server (NTRS)

    Cebeci, T.; Kaups, K.; Ramsey, J. A.

    1977-01-01

    The method described utilizes a nonorthogonal coordinate system for boundary-layer calculations. It includes a geometry program that represents the wing analytically, and a velocity program that computes the external velocity components from a given experimental pressure distribution when the external velocity distribution is not computed theoretically. The boundary layer method is general, however, and can also be used for an external velocity distribution computed theoretically. Several test cases were computed by this method and the results were checked with other numerical calculations and with experiments when available. A typical computation time (CPU) on an IBM 370/165 computer for one surface of a wing, which roughly consists of 30 spanwise stations and 25 streamwise stations with 30 points across the boundary layer, is less than 30 seconds for an incompressible flow and a little more for a compressible flow.

  16. Steady and Oscillatory, Subsonic and Supersonic, Aerodynamic Pressure and Generalized Forces for Complex Aircraft Configurations and Applications to Flutter. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, L. T.

    1975-01-01

    A general method for analyzing aerodynamic flows around complex configurations is presented. By applying the Green function method, a linear integral equation relating the unknown, small perturbation potential on the surface of the body, to the known downwash is obtained. The surfaces of the aircraft, wake and diaphragm (if necessary) are divided into small quadrilateral elements which are approximated with hyperboloidal surfaces. The potential and its normal derivative are assumed to be constant within each element. This yields a set of linear algebraic equations and the coefficients are evaluated analytically. By using Gaussian elimination method, equations are solved for the potentials at the centroids of elements. The pressure coefficient is evaluated by the finite different method; the lift and moment coefficients are evaluated by numerical integration. Numerical results are presented, and applications to flutter are also included.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahoo, Satiprasad; Dhar, Anirban, E-mail: anirban.dhar@gmail.com; Kar, Amlanjyoti

    Environmental management of an area describes a policy for its systematic and sustainable environmental protection. In the present study, regional environmental vulnerability assessment in Hirakud command area of Odisha, India is envisaged based on Grey Analytic Hierarchy Process method (Grey–AHP) using integrated remote sensing (RS) and geographic information system (GIS) techniques. Grey–AHP combines the advantages of classical analytic hierarchy process (AHP) and grey clustering method for accurate estimation of weight coefficients. It is a new method for environmental vulnerability assessment. Environmental vulnerability index (EVI) uses natural, environmental and human impact related factors, e.g., soil, geology, elevation, slope, rainfall, temperature, wind speed, normalized difference vegetation index, drainage density, crop intensity, agricultural DRASTIC value, population density and road density. The EVI map has been classified into four environmental vulnerability zones (EVZs), namely ‘low’, ‘moderate’, ‘high’, and ‘extreme’, encompassing 17.87%, 44.44%, 27.81% and 9.88% of the study area, respectively. The EVI map indicates that the northern part of the study area is more vulnerable from an environmental point of view. The EVI map shows close correlation with elevation. Effectiveness of the zone classification is evaluated by using the grey clustering method. General effectiveness lies between the “better” and “common” classes. This analysis demonstrates the potential applicability of the methodology. - Highlights: • Environmental vulnerability zone identification based on Grey Analytic Hierarchy Process (AHP) • The effectiveness evaluation by means of a grey clustering method with support from AHP • Use of grey approach eliminates the excessive dependency on the experience of experts.
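
    A minimal sketch of the classical AHP weighting step mentioned above is given below; it covers only the principal-eigenvector weights and Saaty's consistency check, not the grey-clustering refinement, and the pairwise judgments are illustrative, not the study's:

    ```python
    # Classical AHP weighting: factor weights are taken as the principal
    # eigenvector of a reciprocal pairwise-comparison matrix, with Saaty's
    # consistency ratio as a sanity check on the judgments.
    import numpy as np

    def ahp_weights(pairwise):
        """Return (weights, consistency_ratio) for a reciprocal pairwise matrix."""
        pairwise = np.asarray(pairwise, dtype=float)
        n = pairwise.shape[0]
        eigvals, eigvecs = np.linalg.eig(pairwise)
        k = np.argmax(eigvals.real)                   # principal eigenvalue
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                                  # normalized weights
        ci = (eigvals[k].real - n) / (n - 1)          # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # Saaty's random index (tabulated)
        return w, ci / ri

    # Illustrative example: three factors (slope, drainage density, population density)
    A = [[1,   3,   5],
         [1/3, 1,   2],
         [1/5, 1/2, 1]]
    weights, cr = ahp_weights(A)
    print(weights, cr)   # CR < 0.1 is conventionally considered acceptable
    ```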

  18. Light aircraft crash safety program

    NASA Technical Reports Server (NTRS)

    Thomson, R. G.; Hayduk, R. J.

    1974-01-01

    NASA is embarked upon research and development tasks aimed at providing the general aviation industry with a reliable crashworthy airframe design technology. The goals of the NASA program are: reliable analytical techniques for predicting the nonlinear behavior of structures; significant design improvements of airframes; and simulated full-scale crash test data. The analytical tools will include both simplified procedures for estimating energy absorption characteristics and more complex computer programs for analysis of general airframe structures under crash loading conditions. The analytical techniques being developed both in-house and under contract are described, and a comparison of some analytical predictions with experimental results is shown.

  19. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Enforcement analytical method. 158.355... DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An analytical method suitable for enforcement purposes must be provided for each active ingredient in the...

  20. Analytical and numerical study of electroosmotic slip flows of fractional second grade fluids

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoping; Qi, Haitao; Yu, Bo; Xiong, Zhen; Xu, Huanying

    2017-09-01

    This work investigates the unsteady electroosmotic slip flow of viscoelastic fluid through a parallel plate micro-channel under combined influence of electroosmotic and pressure gradient forcings with asymmetric zeta potentials at the walls. The generalized second grade fluid with fractional derivative was used for the constitutive equation. The Navier slip model with different slip coefficients at both walls was also considered. By employing the Debye-Hückel linearization and the Laplace and sin-cos-Fourier transforms, the analytical solutions for the velocity distribution are derived. And the finite difference method for this problem was also given. Finally, the influence of pertinent parameters on the generation of flow is presented graphically.

  1. Uniform GTD solution for the diffraction by metallic tapes on panelled compact-range reflectors

    NASA Technical Reports Server (NTRS)

    Somers, G. A.; Pathak, P. H.

    1992-01-01

    Metallic tape is commonly used to cover the interpanel gaps which occur in paneled compact-range reflectors. It is therefore of interest to study the effect of the scattering by the tape on the field in the target zone of the range. An analytical solution is presented for the target zone fields scattered by 2D metallic tapes. It is formulated by the generalized scattering matrix technique in conjunction with the Wiener-Hopf procedure. An extension to treat 3D tapes can be accomplished using the 2D solution via the equivalent current concept. The analytical solution is compared with a reference moment method solution to confirm the accuracy of the former.

  2. Constructing and predicting solitary pattern solutions for nonlinear time-fractional dispersive partial differential equations

    NASA Astrophysics Data System (ADS)

    Arqub, Omar Abu; El-Ajou, Ahmad; Momani, Shaher

    2015-07-01

    Building fractional mathematical models for specific phenomena and developing numerical or analytical solutions for these fractional mathematical models are crucial issues in mathematics, physics, and engineering. In this work, a new analytical technique for constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations is proposed based on the generalized Taylor series formula and residual error function. The new approach provides solutions in the form of a rapidly convergent series with easily computable components using symbolic computation software. For method evaluation and validation, the proposed technique was applied to three different models and compared with some of the well-known methods. The resulting simulations clearly demonstrate the superiority and potential of the proposed technique, both in the quality and accuracy with which substructure is preserved in the constructed solutions and in the prediction of solitary pattern solutions for time-fractional dispersive partial differential equations.
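
    For reference, a commonly used form of the generalized Taylor series underlying this class of residual power series techniques is sketched below (a textbook form with sequential Caputo-type derivatives; it is not claimed to be the authors' exact statement):

    ```latex
    % Generalized (fractional) Taylor expansion about t_0, for 0 < alpha <= 1,
    % with (D^alpha)^i denoting i-fold application of the Caputo derivative.
    \[
    f(t) \;=\; \sum_{i=0}^{\infty}
       \frac{\bigl((D^{\alpha}_{t_0})^{\,i} f\bigr)(t_0)}{\Gamma(i\alpha+1)}\,
       (t-t_0)^{\,i\alpha},
    \qquad 0<\alpha\le 1,\; t\ge t_0,
    \]
    % the residual error function is then minimized term by term to determine
    % the unknown series coefficients of the approximate solution.
    ```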

  3. Static penetration resistance of soils

    NASA Technical Reports Server (NTRS)

    Durgunoglu, H. T.; Mitchell, J. K.

    1973-01-01

    Model test results were used to define the failure mechanism associated with the static penetration resistance of cohesionless and low-cohesion soils. Knowledge of this mechanism has permitted the development of a new analytical method for calculating the ultimate penetration resistance which explicitly accounts for penetrometer base apex angle and roughness, soil friction angle, and the ratio of penetration depth to base width. Curves relating the bearing capacity factors to the soil friction angle are presented for failure in general shear. Strength parameters and penetrometer interaction properties of a fine sand were determined and used as the basis for prediction of the penetration resistance encountered by wedge, cone, and flat-ended penetrometers of different surface roughness using the proposed analytical method. Because of the close agreement between predicted values and values measured in laboratory tests, it appears possible to deduce in-situ soil strength parameters and their variation with depth from the results of static penetration tests.

  4. Emissive sensors and devices incorporating these sensors

    DOEpatents

    Swager, Timothy M; Zhang, Shi-Wei

    2013-02-05

    The present invention generally relates to luminescent and/or optically absorbing compositions and/or precursors to those compositions, including solid films incorporating these compositions/precursors, exhibiting increased luminescent lifetimes, quantum yields, enhanced stabilities and/or amplified emissions. The present invention also relates to sensors and methods for sensing analytes through luminescent and/or optically absorbing properties of these compositions and/or precursors. Examples of analytes detectable by the invention include electrophiles, alkylating agents, thionyl halides, and phosphate ester groups including phosphoryl halides, cyanides and thioates such as those found in certain chemical warfare agents. The present invention additionally relates to devices and methods for amplifying emissions, such as those produced using the above-described compositions and/or precursors, by incorporating the composition and/or precursor within a polymer having an energy migration pathway. In some cases, the compositions and/or precursors thereof include a compound capable of undergoing a cyclization reaction.

  5. From pixel to voxel: a deeper view of biological tissue by 3D mass spectral imaging

    PubMed Central

    Ye, Hui; Greer, Tyler; Li, Lingjun

    2011-01-01

    Three dimensional mass spectral imaging (3D MSI) is an exciting field that grants the ability to study a broad mass range of molecular species ranging from small molecules to large proteins by creating lateral and vertical distribution maps of select compounds. Although the general premise behind 3D MSI is simple, factors such as choice of ionization method, sample handling, software considerations and many others must be taken into account for the successful design of a 3D MSI experiment. This review provides a brief overview of ionization methods, sample preparation, software types and technological advancements driving 3D MSI research of a wide range of low- to high-mass analytes. Future perspectives in this field are also provided, concluding that this powerful analytical tool promises ever-growing applications in the biomedical field as it continues to develop. PMID:21320052

  6. Principal polynomial analysis.

    PubMed

    Laparra, Valero; Jiménez, Sandra; Tuia, Devis; Camps-Valls, Gustau; Malo, Jesus

    2014-11-01

    This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves, instead of straight lines. Contrary to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA shows a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, such an inverse can be obtained in closed form. Invertibility is an important advantage over other learning methods, because it makes it possible to understand the identified features in the input domain, where the data has physical meaning. Moreover, it allows the performance of dimensionality reduction to be evaluated in sensible (input-domain) units. Volume preservation also allows an easy computation of information theoretic quantities, such as the reduction in multi-information after the transform. Third, the analytical nature of PPA leads to a clear geometrical interpretation of the manifold: it allows the computation of Frenet-Serret frames (local features) and of generalized curvatures at any point of the space. And fourth, the analytical Jacobian allows the computation of the metric induced by the data, thus generalizing the Mahalanobis distance. These properties are demonstrated theoretically and illustrated experimentally. The performance of PPA is evaluated in dimensionality and redundancy reduction, in both synthetic and real datasets from the UCI repository.
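
    A minimal one-step sketch of the underlying idea is given below: project the data onto the leading principal direction and then model the orthogonal remainder as a polynomial of that projection. This is an illustrative simplification, not the authors' full volume-preserving PPA:

    ```python
    # One "principal polynomial" step on synthetic curved data: PCA supplies the
    # leading direction, and a univariate polynomial regression of the orthogonal
    # residual on the scores bends the straight PCA line into a curve.
    import numpy as np

    rng = np.random.default_rng(0)
    t = rng.uniform(-1, 1, 500)
    X = np.column_stack([t, 0.5 * t**2]) + 0.02 * rng.normal(size=(500, 2))

    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    v1 = Vt[0]                         # leading principal direction
    proj = Xc @ v1                     # scores along v1
    resid = Xc - np.outer(proj, v1)    # orthogonal residual

    deg = 2
    B = np.vander(proj, deg + 1)       # polynomial design matrix in the scores
    coef, *_ = np.linalg.lstsq(B, resid, rcond=None)

    curve = np.outer(proj, v1) + B @ coef    # fitted principal "curve"
    print("mean residual norm, PCA line vs. principal polynomial:",
          np.sqrt((resid**2).sum(1)).mean(),
          np.sqrt(((Xc - curve)**2).sum(1)).mean())
    ```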

  7. Determination of vitamin C in foods: current state of method validation.

    PubMed

    Spínola, Vítor; Llorent-Martínez, Eulogio J; Castilho, Paula C

    2014-11-21

    Vitamin C is one of the most important vitamins, so reliable information about its content in foodstuffs is a concern to both consumers and quality control agencies. However, the heterogeneity of food matrixes and the potential degradation of this vitamin during its analysis create enormous challenges. This review addresses the development and validation of high-performance liquid chromatography methods for vitamin C analysis in food commodities, during the period 2000-2014. The main characteristics of vitamin C are mentioned, along with the strategies adopted by most authors during sample preparation (freezing and acidification) to avoid vitamin oxidation. After that, the advantages and handicaps of different analytical methods are discussed. Finally, the main aspects concerning method validation for vitamin C analysis are critically discussed. Parameters such as selectivity, linearity, limit of quantification, and accuracy were studied by most authors. Recovery experiments during accuracy evaluation were in general satisfactory, with usual values between 81 and 109%. However, few methods considered vitamin C stability during the analytical process, and the study of the precision was not always clear or complete. Potential future improvements regarding proper method validation are indicated to conclude this review. Copyright © 2014. Published by Elsevier B.V.

  8. Validation of a Stability-Indicating Method for Methylseleno-l-Cysteine (l-SeMC)

    PubMed Central

    Canady, Kristin; Cobb, Johnathan; Deardorff, Peter; Larson, Jami; White, Jonathan M.; Boring, Dan

    2016-01-01

    Methylseleno-l-cysteine (l-SeMC) is a naturally occurring amino acid analogue used as a general dietary supplement and is being explored as a chemopreventive agent. As a known dietary supplement, l-SeMC is not regulated as a pharmaceutical and there is a paucity of analytical methods available. To address the lack of methodology, a stability-indicating method was developed and validated to evaluate l-SeMC as both the bulk drug and formulated drug product (400 µg Se/capsule). The analytical approach presented is a simple, nonderivatization method that utilizes HPLC with ultraviolet detection at 220 nm. A C18 column with a volatile ion-pair agent and methanol mobile phase was used for the separation. The method accuracy was 99–100% from 0.05 to 0.15 mg/mL l-SeMC for the bulk drug, and 98–99% from 0.075 to 0.15 mg/mL l-SeMC for the drug product. Method precision was <1% for the bulk drug and was 3% for the drug product. The LOQ was 0.1 µg/mL l-SeMC or 0.002 µg l-SeMC on column. PMID:26199341

  9. A coupled mode formulation by reciprocity and a variational principle

    NASA Technical Reports Server (NTRS)

    Chuang, Shun-Lien

    1987-01-01

    A coupled mode formulation for parallel dielectric waveguides is presented via two methods: a reciprocity theorem and a variational principle. In the first method, a generalized reciprocity relation for two sets of field solutions satisfying Maxwell's equations and the boundary conditions in two different media, respectively, is derived. Based on the generalized reciprocity theorem, the coupled mode equations can then be formulated. The second method using a variational principle is also presented for a general waveguide system which can be lossy. The results of the variational principle can also be shown to be identical to those from the reciprocity theorem. The exact relations governing the 'conventional' and the new coupling coefficients are derived. It is shown analytically that the present formulation satisfies the reciprocity theorem and power conservation exactly, while the conventional theory violates the power conservation and reciprocity theorem by as much as 55 percent and the Hardy-Streifer (1985, 1986) theory by 0.033 percent, for example.

  10. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  11. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  12. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  13. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  14. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  15. Dynamics of a prey-predator system under Poisson white noise excitation

    NASA Astrophysics Data System (ADS)

    Pan, Shan-Shan; Zhu, Wei-Qiu

    2014-10-01

    The classical Lotka-Volterra (LV) model is a well-known mathematical model for prey-predator ecosystems. In the present paper, the pulse-type version of the stochastic LV model, in which the effect of a random natural environment has been modeled as Poisson white noise, is investigated by using the stochastic averaging method. The averaged generalized Itô stochastic differential equation and Fokker-Planck-Kolmogorov (FPK) equation are derived for a prey-predator ecosystem driven by Poisson white noise. An approximate stationary solution for the averaged generalized FPK equation is obtained by using the perturbation method. The effect of the prey self-competition parameter ε²s on ecosystem behavior is evaluated. The analytical result is confirmed by corresponding Monte Carlo (MC) simulation.
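
    The deterministic skeleton that the stochastic model perturbs is the classical Lotka-Volterra system sketched below (written generically; the exact placement of the ε²s self-competition term and of the Poisson-pulse forcing follows the paper and is only indicated in the comments):

    ```latex
    % Classical prey-predator skeleton: x_1 = prey density, x_2 = predator density.
    \[
    \dot{x}_1 = x_1\,(a - b\,x_2), \qquad
    \dot{x}_2 = x_2\,(-c + d\,x_1),
    \]
    % the stochastic version adds a weak prey self-competition term of order
    % eps^2 s and a Poisson white noise (pulse-train) forcing representing the
    % random natural environment; stochastic averaging then yields the averaged
    % generalized Ito and FPK equations quoted in the abstract.
    ```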

  16. Transverse vibrations of non-uniform beams. [combined finite element and Rayleigh-Ritz methods

    NASA Technical Reports Server (NTRS)

    Klein, L.

    1974-01-01

    The free vibrations of elastic beams with nonuniform characteristics are investigated theoretically by a new method. The new method is seen to combine the advantages of a finite element approach and of a Rayleigh-Ritz analysis. Comparison with the known analytical results for uniform beams shows good convergence of the method for natural frequencies and modes. For internal shear forces and bending moments, the rate of convergence is less rapid. Results from experiments conducted with a cantilevered helicopter blade with strong nonuniformities and also from alternative theoretical methods, indicate that the theory adequately predicts natural frequencies and mode shapes. General guidelines for efficient use of the method are presented.

  17. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
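
    The flavour of the iteration can be sketched as follows: a Newton-Raphson/Gauss-Newton update for an overdetermined residual system solved in the least-squares sense, with the pseudo-inverse of the Jacobian obtained through SVD. The residual function below is a toy stand-in, not the paper's camera model:

    ```python
    # Gauss-Newton iteration p <- p - pinv(J(p)) @ r(p) for an overdetermined
    # nonlinear system; numpy's pinv computes the pseudo-inverse via SVD.
    import numpy as np

    def gauss_newton(residual, jacobian, p0, tol=1e-10, max_iter=50):
        p = np.asarray(p0, dtype=float)
        for _ in range(max_iter):
            r = residual(p)
            J = jacobian(p)
            step = np.linalg.pinv(J) @ r      # SVD-based least-squares step
            p = p - step
            if np.linalg.norm(step) < tol:
                break
        return p

    # Toy example: fit (a, b) in y = a*exp(b*x) from exact samples.
    x = np.linspace(0.0, 1.0, 20)
    y = 2.0 * np.exp(1.5 * x)

    res = lambda p: p[0] * np.exp(p[1] * x) - y
    jac = lambda p: np.column_stack([np.exp(p[1] * x),
                                     p[0] * x * np.exp(p[1] * x)])

    print(gauss_newton(res, jac, p0=[1.0, 1.0]))   # converges to ~[2.0, 1.5]
    ```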

  18. Applications and assessment of QM:QM electronic embedding using generalized asymmetric Mulliken atomic charges.

    PubMed

    Parandekar, Priya V; Hratchian, Hrant P; Raghavachari, Krishnan

    2008-10-14

    Hybrid QM:QM (quantum mechanics:quantum mechanics) and QM:MM (quantum mechanics:molecular mechanics) methods are widely used to calculate the electronic structure of large systems where a full quantum mechanical treatment at a desired high level of theory is computationally prohibitive. The ONIOM (our own N-layer integrated molecular orbital molecular mechanics) approximation is one of the more popular hybrid methods, where the total molecular system is divided into multiple layers, each treated at a different level of theory. In a previous publication, we developed a novel QM:QM electronic embedding scheme within the ONIOM framework, where the model system is embedded in the external Mulliken point charges of the surrounding low-level region to account for the polarization of the model system wave function. Therein, we derived and implemented a rigorous expression for the embedding energy as well as analytic gradients that depend on the derivatives of the external Mulliken point charges. In this work, we demonstrate the applicability of our QM:QM method with point charge embedding and assess its accuracy. We study two challenging systems--zinc metalloenzymes and silicon oxide cages--and demonstrate that electronic embedding shows significant improvement over mechanical embedding. We also develop a modified technique for the energy and analytic gradients using a generalized asymmetric Mulliken embedding method involving an unequal splitting of the Mulliken overlap populations to offer improvement in situations where the Mulliken charges may be deficient.

  19. A literature review of empirical research on learning analytics in medical education

    PubMed Central

    Saqr, Mohammed

    2018-01-01

    The number of publications in the field of medical education is still markedly low, despite recognition of the value of the discipline in the medical education literature, and exponential growth of publications in other fields. This necessitates raising awareness of the research methods and potential benefits of learning analytics (LA). The aim of this paper was to offer a methodological systemic review of empirical LA research in the field of medical education and a general overview of the common methods used in the field in general. Search was done in Medline database using the term “LA.” Inclusion criteria included empirical original research articles investigating LA using qualitative, quantitative, or mixed methodologies. Articles were also required to be written in English, published in a scholarly peer-reviewed journal and have a dedicated section for methods and results. A Medline search resulted in only six articles fulfilling the inclusion criteria for this review. Most of the studies collected data about learners from learning management systems or online learning resources. Analysis used mostly quantitative methods including descriptive statistics, correlation tests, and regression models in two studies. Patterns of online behavior and usage of the digital resources as well as predicting achievement was the outcome most studies investigated. Research about LA in the field of medical education is still in infancy, with more questions than answers. The early studies are encouraging and showed that patterns of online learning can be easily revealed as well as predicting students’ performance. PMID:29599699

  20. A literature review of empirical research on learning analytics in medical education.

    PubMed

    Saqr, Mohammed

    2018-01-01

    The number of publications in the field of medical education is still markedly low, despite recognition of the value of the discipline in the medical education literature, and exponential growth of publications in other fields. This necessitates raising awareness of the research methods and potential benefits of learning analytics (LA). The aim of this paper was to offer a methodological systemic review of empirical LA research in the field of medical education and a general overview of the common methods used in the field in general. Search was done in Medline database using the term "LA." Inclusion criteria included empirical original research articles investigating LA using qualitative, quantitative, or mixed methodologies. Articles were also required to be written in English, published in a scholarly peer-reviewed journal and have a dedicated section for methods and results. A Medline search resulted in only six articles fulfilling the inclusion criteria for this review. Most of the studies collected data about learners from learning management systems or online learning resources. Analysis used mostly quantitative methods including descriptive statistics, correlation tests, and regression models in two studies. Patterns of online behavior and usage of the digital resources as well as predicting achievement was the outcome most studies investigated. Research about LA in the field of medical education is still in infancy, with more questions than answers. The early studies are encouraging and showed that patterns of online learning can be easily revealed as well as predicting students' performance.

  1. One-step leapfrog ADI-FDTD method for simulating electromagnetic wave propagation in general dispersive media.

    PubMed

    Wang, Xiang-Hua; Yin, Wen-Yan; Chen, Zhi Zhang David

    2013-09-09

    The one-step leapfrog alternating-direction-implicit finite-difference time-domain (ADI-FDTD) method is reformulated for simulating general electrically dispersive media. It models material dispersive properties with equivalent polarization currents. These currents are then solved with the auxiliary differential equation (ADE) and then incorporated into the one-step leapfrog ADI-FDTD method. The final equations are presented in the form similar to that of the conventional FDTD method but with second-order perturbation. The adapted method is then applied to characterize (a) electromagnetic wave propagation in a rectangular waveguide loaded with a magnetized plasma slab, (b) transmission coefficient of a plane wave normally incident on a monolayer graphene sheet biased by a magnetostatic field, and (c) surface plasmon polaritons (SPPs) propagation along a monolayer graphene sheet biased by an electrostatic field. The numerical results verify the stability, accuracy and computational efficiency of the proposed one-step leapfrog ADI-FDTD algorithm in comparison with analytical results and the results obtained with the other methods.

  2. 7 CFR 94.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture... POULTRY AND EGG PRODUCTS Mandatory Analyses of Egg Products § 94.4 Analytical methods. The majority of analytical methods used by the USDA laboratories to perform mandatory analyses for egg products are listed as...

  3. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  4. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  5. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  6. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  7. Element Library for Three-Dimensional Stress Analysis by the Integrated Force Method

    NASA Technical Reports Server (NTRS)

    Kaljevic, Igor; Patnaik, Surya N.; Hopkins, Dale A.

    1996-01-01

    The Integrated Force Method, a recently developed method for analyzing structures, is extended in this paper to three-dimensional structural analysis. First, a general formulation is developed to generate the stress interpolation matrix in terms of complete polynomials of the required order. The formulation is based on definitions of the stress tensor components in terms of stress functions. The stress functions are written as complete polynomials and substituted into expressions for stress components. Then elimination of the dependent coefficients leaves the stress components expressed as complete polynomials whose coefficients are defined as generalized independent forces. Such derived components of the stress tensor identically satisfy the homogeneous Navier equations of equilibrium. The resulting element matrices are invariant with respect to coordinate transformation and are free of spurious zero-energy modes. The formulation provides a rational way to calculate the exact number of independent forces necessary to arrive at an approximation of the required order for complete polynomials. The influence of reducing the number of independent forces on the accuracy of the response is also analyzed. The stress fields derived are used to develop a comprehensive finite element library for three-dimensional structural analysis by the Integrated Force Method. Both tetrahedral- and hexahedral-shaped elements capable of modeling arbitrary geometric configurations are developed. A number of examples with known analytical solutions are solved by using the developments presented herein. The results are in good agreement with the analytical solutions. The responses obtained with the Integrated Force Method are also compared with those generated by the standard displacement method. In most cases, the performance of the Integrated Force Method is better overall.

  8. New method for estimating low-earth-orbit collision probabilities

    NASA Technical Reports Server (NTRS)

    Vedder, John D.; Tabor, Jill L.

    1991-01-01

    An unconventional but general method is described for estimating the probability of collision between an earth-orbiting spacecraft and orbital debris. This method uses a Monte Carlo simulation of the orbital motion of the target spacecraft and each discrete debris object to generate an empirical set of distances, each distance representing the separation between the spacecraft and the nearest debris object at random times. Using concepts from the asymptotic theory of extreme order statistics, an analytical density function is fitted to this set of minimum distances. From this function, it is possible to generate realistic collision estimates for the spacecraft.
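
    The statistical step can be sketched as follows: collect Monte Carlo samples of the minimum spacecraft-debris separation, fit an extreme-value (Weibull) density to them, and integrate its left tail below a collision radius. The distance sampling below is a crude placeholder for an orbit propagator, not the paper's simulation:

    ```python
    # Fit a Weibull law (a limiting form for minima bounded below by zero) to
    # Monte Carlo minimum-distance samples and read off the probability that the
    # miss distance falls below a collision radius.
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(1)

    # Placeholder: nearest-debris distances (km) at random epochs; in the real
    # method these come from propagating the spacecraft and each debris object.
    n_debris, n_epochs = 200, 2000
    dists = rng.gamma(shape=3.0, scale=5.0, size=(n_epochs, n_debris))
    d_min = dists.min(axis=1)                         # empirical minimum distances

    c, loc, scale = weibull_min.fit(d_min, floc=0.0)  # fit with lower bound at 0
    R = 0.01                                          # collision radius, km
    p_collision_per_epoch = weibull_min.cdf(R, c, loc=loc, scale=scale)
    print(c, scale, p_collision_per_epoch)
    ```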

  9. Considerations regarding the validation of chromatographic mass spectrometric methods for the quantification of endogenous substances in forensics.

    PubMed

    Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra

    2018-02-01

    The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. Until now, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma hydroxy butyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances will be discussed which may be used as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), and matrix effects and recovery have to be approached differently. Highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. General Methods for Analysis of Sequential “n-step” Kinetic Mechanisms: Application to Single Turnover Kinetics of Helicase-Catalyzed DNA Unwinding

    PubMed Central

    Lucius, Aaron L.; Maluf, Nasib K.; Fischer, Christopher J.; Lohman, Timothy M.

    2003-01-01

    Helicase-catalyzed DNA unwinding is often studied using “all or none” assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using “n-step” sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the “kinetic step size”, m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using “n-step” sequential mechanisms has previously been limited by an inability to float the number of “unwinding steps”, n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, fss(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain fss(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation. PMID:14507688

  11. General methods for analysis of sequential "n-step" kinetic mechanisms: application to single turnover kinetics of helicase-catalyzed DNA unwinding.

    PubMed

    Lucius, Aaron L; Maluf, Nasib K; Fischer, Christopher J; Lohman, Timothy M

    2003-10-01

    Helicase-catalyzed DNA unwinding is often studied using "all or none" assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using "n-step" sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the "kinetic step size", m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using "n-step" sequential mechanisms has previously been limited by an inability to float the number of "unwinding steps", n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f(ss)(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f(ss)(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation.
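
    For the special case of n identical rate-limiting steps with rate constant k, the "all or none" time course reduces to a regularized lower incomplete gamma function, which is what allows n (and hence the kinetic step size m = L/n) to float as a continuous fitting parameter. A minimal fitting sketch on synthetic data is given below; it is not the paper's data or its full model, which includes additional steps:

    ```python
    # All-or-none fraction unwound for n identical steps of rate k:
    # f_ss(t) = A * P(n, k t), with P the regularized lower incomplete gamma.
    import numpy as np
    from scipy.special import gammainc       # regularized lower incomplete gamma P(a, x)
    from scipy.optimize import curve_fit

    def f_ss(t, A, n, k):
        return A * gammainc(n, k * t)

    # Synthetic single-turnover time course (placeholder, not experimental data).
    t = np.linspace(0, 60, 200)
    y = f_ss(t, 0.85, 6.0, 0.4) + 0.01 * np.random.default_rng(2).normal(size=t.size)

    popt, _ = curve_fit(f_ss, t, y, p0=[1.0, 3.0, 0.2],
                        bounds=([0.0, 0.5, 0.01], [1.5, 50.0, 10.0]))
    A_fit, n_fit, k_fit = popt
    print(A_fit, n_fit, k_fit)        # n_fit need not be an integer
    # kinetic step size: m = L / n_fit for a duplex of L base pairs
    ```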

  12. Overdetermined elliptic problems in topological disks

    NASA Astrophysics Data System (ADS)

    Mira, Pablo

    2018-06-01

    We introduce a method, based on the Poincaré-Hopf index theorem, to classify solutions to overdetermined problems for fully nonlinear elliptic equations in domains diffeomorphic to a closed disk. Applications to some well-known nonlinear elliptic PDEs are provided. Our result can be seen as the analogue of Hopf's uniqueness theorem for constant mean curvature spheres, but for the general analytic context of overdetermined elliptic problems.

  13. Basic Concepts in Classical Test Theory: Tests Aren't Reliable, the Nature of Alpha, and Reliability Generalization as a Meta-analytic Method.

    ERIC Educational Resources Information Center

    Helms, LuAnn Sherbeck

    This paper discusses the fact that reliability is about scores and not tests and how reliability limits effect sizes. The paper also explores the classical reliability coefficients of stability, equivalence, and internal consistency. Stability is concerned with how stable test scores will be over time, while equivalence addresses the relationship…

  14. A new approach to exact optical soliton solutions for the nonlinear Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Morales-Delgado, V. F.; Gómez-Aguilar, J. F.; Baleanu, Dumitru

    2018-05-01

    By using the modified homotopy analysis transform method, we construct the analytical solutions of the space-time generalized nonlinear Schrödinger equation involving a new fractional conformable derivative in the Liouville-Caputo sense and the fractional-order derivative with the Mittag-Leffler law. Employing theoretical parameters, we present some numerical simulations and compare the solutions obtained.

  15. Model verification of large structural systems

    NASA Technical Reports Server (NTRS)

    Lee, L. T.; Hasselman, T. K.

    1977-01-01

    A methodology was formulated, and a general computer code implemented for processing sinusoidal vibration test data to simultaneously make adjustments to a prior mathematical model of a large structural system, and resolve measured response data to obtain a set of orthogonal modes representative of the test model. The derivation of estimator equations is shown along with example problems. A method for improving the prior analytic model is included.

  16. Application of artificial intelligence to impulsive orbital transfers

    NASA Technical Reports Server (NTRS)

    Burns, Rowland E.

    1987-01-01

    A generalized technique for the numerical solution of any given class of problems is presented. The technique requires the analytic (or numerical) solution of every applicable equation for all variables that appear in the problem. Conditional blocks are employed to rapidly expand the set of known variables from a minimum of input. The method is illustrated via the use of the Hohmann transfer problem from orbital mechanics.
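
    As a concrete anchor for the Hohmann example, the standard two-impulse delta-v computation is sketched below (textbook formulas; the LEO-to-GEO numbers are illustrative and not taken from the report):

    ```python
    # Total delta-v for a two-impulse Hohmann transfer between circular,
    # coplanar orbits of radii r1 and r2 about a body with gravitational
    # parameter mu.
    import math

    def hohmann_dv(mu, r1, r2):
        """Return (dv1, dv2) in the same velocity units as sqrt(mu/r)."""
        dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
        dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
        return dv1, dv2

    mu = 398600.4418           # km^3/s^2, Earth
    r1, r2 = 6678.0, 42164.0   # km: ~300 km LEO to GEO
    dv1, dv2 = hohmann_dv(mu, r1, r2)
    print(dv1, dv2, dv1 + dv2)   # roughly 2.43, 1.47, 3.9 km/s
    ```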

  17. McStas 1.1: a tool for building neutron Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Lefmann, K.; Nielsen, K.; Tennant, A.; Lake, B.

    2000-03-01

    McStas is a project to develop general tools for the creation of simulations of neutron scattering experiments. In this paper, we briefly introduce McStas and describe a particular application of the program: the Monte Carlo calculation of the resolution function of a standard triple-axis neutron scattering instrument. The method compares well with the analytical calculations of Popovici.

  18. Coupled rotor/fuselage dynamic analysis of the AH-1G helicopter and correlation with flight vibrations data

    NASA Technical Reports Server (NTRS)

    Corrigan, J. C.; Cronkhite, J. D.; Dompka, R. V.; Perry, K. S.; Rogers, J. P.; Sadler, S. G.

    1989-01-01

    Under a research program designated Design Analysis Methods for VIBrationS (DAMVIBS), existing analytical methods are used for calculating coupled rotor-fuselage vibrations of the AH-1G helicopter for correlation with flight test data from an AH-1G Operational Load Survey (OLS) test program. The analytical representation of the fuselage structure is based on a NASTRAN finite element model (FEM), which has been developed, extensively documented, and correlated with ground vibration test. One procedure that was used for predicting coupled rotor-fuselage vibrations using the advanced Rotorcraft Flight Simulation Program C81 and NASTRAN is summarized. Detailed descriptions of the analytical formulation of rotor dynamics equations, fuselage dynamic equations, coupling between the rotor and fuselage, and solutions to the total system of equations in C81 are included. Analytical predictions of hub shears for main rotor harmonics 2p, 4p, and 6p generated by C81 are used in conjunction with 2p OLS measured control loads and a 2p lateral tail rotor gearbox force, representing downwash impingement on the vertical fin, to excite the NASTRAN model. NASTRAN is then used to correlate with measured OLS flight test vibrations. Blade load comparisons predicted by C81 showed good agreement. In general, the fuselage vibration correlations show good agreement between analysis and test in vibration response through 15 to 20 Hz.

  19. 3-MCPD in food other than soy sauce or hydrolysed vegetable protein (HVP).

    PubMed

    Baer, Ines; de la Calle, Beatriz; Taylor, Philip

    2010-01-01

    This review gives an overview of current knowledge about 3-monochloropropane-1,2-diol (3-MCPD) formation and detection. Although 3-MCPD is often mentioned with regard to soy sauce and acid-hydrolysed vegetable protein (HVP), and much research has been done in that area, the emphasis here is placed on other foods. This contaminant can be found in a great variety of foodstuffs and is difficult to avoid in our daily nutrition. Despite its low concentration in most foods, its carcinogenic properties are of general concern. Its formation is a multivariate problem influenced by factors such as heat, moisture and sugar/lipid content, depending on the type of food and respective processing employed. Understanding the formation of this contaminant in food is fundamental to not only preventing or reducing it, but also developing efficient analytical methods of detecting it. Considering the differences between 3-MCPD-containing foods, and the need to test for the contaminant at different levels of food processing, one would expect a variety of analytical approaches. In this review, an attempt is made to provide an up-to-date list of available analytical methods and to highlight the differences among these techniques. Finally, the emergence of 3-MCPD esters and analytical techniques for them are also discussed here, although they are not the main focus of this review.

  20. Applications of He's semi-inverse method, ITEM and GGM to the Davey-Stewartson equation

    NASA Astrophysics Data System (ADS)

    Zinati, Reza Farshbaf; Manafian, Jalil

    2017-04-01

    We investigate the Davey-Stewartson (DS) equation. Travelling wave solutions were found. In this paper, we demonstrate the effectiveness of the analytical methods, namely, He's semi-inverse variational principle method (SIVPM), the improved tan(φ/2)-expansion method (ITEM) and the generalized G'/G-expansion method (GGM), for seeking more exact solutions of the DS equation. These methods are direct, concise and simple to implement compared to other existing methods. Exact solutions of four types have been obtained. The results demonstrate that the aforementioned methods are more efficient than the Ansatz method applied by Mirzazadeh (2015). Abundant exact travelling wave solutions including solitons, kink, periodic and rational solutions have been found by the improved tan(φ/2)-expansion and generalized G'/G-expansion methods. By He's semi-inverse variational principle we have obtained dark and bright soliton wave solutions. Also, the obtained semi-inverse variational formulation has implications for deeper physical understanding. These solutions might play an important role in engineering and physics. Moreover, by using Matlab, some graphical simulations were done to see the behavior of these solutions.

  1. Prediction of dynamical systems by symbolic regression

    NASA Astrophysics Data System (ADS)

    Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K.; Noack, Bernd R.

    2016-07-01

    We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast.
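
    A minimal sketch of the "fast function extraction" style of symbolic regression is given below: regress the signal on a library of candidate basis functions and keep a sparse subset. Plain least squares with hard thresholding stands in here for the method's actual pathwise regularized search:

    ```python
    # Library-based generalized linear regression: the model is sought as a
    # sparse linear combination of named candidate basis functions.
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0, 10, 400)
    x = np.cos(2.0 * t) + 0.05 * rng.normal(size=t.size)   # oscillator-like signal

    library = {
        "1":       np.ones_like(t),
        "t":       t,
        "sin(2t)": np.sin(2.0 * t),
        "cos(2t)": np.cos(2.0 * t),
        "t^2":     t**2,
    }
    names = list(library)
    Theta = np.column_stack([library[k] for k in names])

    coef, *_ = np.linalg.lstsq(Theta, x, rcond=None)
    coef[np.abs(coef) < 0.1] = 0.0          # crude sparsification step
    model = {k: c for k, c in zip(names, coef) if c != 0.0}
    print(model)                            # expect roughly {'cos(2t)': 1.0}
    ```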

  2. Development and application of a unified balancing approach with multiple constraints

    NASA Technical Reports Server (NTRS)

    Zorzi, E. S.; Lee, C. C.; Giordano, J. C.

    1985-01-01

    The development of a general analytic approach to constrained balancing that is consistent with past influence coefficient methods is described. The approach uses Lagrange multipliers to impose orbit and/or weight constraints; these constraints are combined with the least squares minimization process to provide a set of coupled equations that result in a single solution form for determining correction weights. Proper selection of constraints results in the capability to: (1) balance higher speeds without disturbing previously balanced modes, through the use of modal trial weight sets; (2) balance off-critical speeds; and (3) balance decoupled modes by use of a single balance plane. If no constraints are imposed, this solution form reduces to the general weighted least squares influence coefficient method. A test facility used to examine the use of the general constrained balancing procedure and application of modal trial weight ratios is also described.
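
    The mathematical core of such a constrained influence-coefficient solution can be written generically as the Lagrange-multiplier (KKT) system sketched below (generic notation, not the report's symbols):

    ```latex
    % Correction weights w minimize the residual vibration ||A w + v_0||^2
    % (A = influence-coefficient matrix, v_0 = initial vibration readings)
    % subject to linear orbit/weight constraints C w = d; the stationarity
    % conditions give one coupled linear system.
    \[
    \min_{w}\;\|A w + v_0\|^2 \quad \text{s.t.}\quad C w = d
    \;\;\Longrightarrow\;\;
    \begin{pmatrix} A^{\mathsf H} A & C^{\mathsf H} \\ C & 0 \end{pmatrix}
    \begin{pmatrix} w \\ \lambda \end{pmatrix}
    =
    \begin{pmatrix} -A^{\mathsf H} v_0 \\ d \end{pmatrix},
    \]
    % which reduces to the ordinary (weighted) least-squares influence-coefficient
    % solution when no constraints are imposed.
    ```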

  3. Evaluation of algorithms for point cloud surface reconstruction through the analysis of shape parameters

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Verbeek, Fons J.

    2012-03-01

    In computer graphics and visualization, reconstruction of a 3D surface from a point cloud is an important research area. As the surface contains information that can be measured, i.e. expressed in features, surface reconstruction is potentially important for applications in bio-imaging. Opportunities in this application area are the motivation for this study. In the past decade, a number of algorithms for surface reconstruction have been proposed. Generally speaking, these methods can be separated into two categories: i.e., explicit representation and implicit approximation. Most of the aforementioned methods are firmly based in theory; however, so far, no analytical comparison between these methods has been presented. The usual way of evaluation has been qualitative, relying on visual inspection. Through evaluation we search for a method that can precisely preserve the surface characteristics and that is robust in the presence of noise. The outcome will be used to improve reliability in surface reconstruction of biological models. We, therefore, use an analytical approach by selecting features as surface descriptors and measuring these features under varying conditions. We selected surface distance, surface area and surface curvature as three major features to compare the quality of the surface created by the different algorithms. Our starting point has been ground truth values obtained from analytical shapes such as the sphere and the ellipsoid. In this paper we present four classical surface reconstruction methods from the two categories mentioned above, i.e. the Power Crust, the Robust Cocone, the Fourier-based method and the Poisson reconstruction method. The results obtained from our experiments indicate that the Poisson reconstruction method performs the best in the presence of noise.

  4. Application of an Extended Parabolic Equation to the Calculation of the Mean Field and the Transverse and Longitudinal Mutual Coherence Functions Within Atmospheric Turbulence

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    2005-01-01

    Solutions are derived for the generalized mutual coherence function (MCF), i.e., the second order moment, of a random wave field propagating through a random medium within the context of the extended parabolic equation. Here, "generalized" connotes the consideration of both the transverse as well as the longitudinal second order moments (with respect to the direction of propagation). Such solutions will afford a comparison between the results of the parabolic equation within the paraxial approximation and those of the wide-angle extended theory. To this end, a statistical operator method is developed which gives a general equation for an arbitrary spatial statistical moment of the wave field. The generality of the operator method allows one to obtain an expression for the second order field moment in the direction longitudinal to the direction of propagation. Analytical solutions to these equations are derived for the Kolmogorov and Tatarskii spectra of atmospheric permittivity fluctuations within the Markov approximation.

  5. Roll Damping Derivatives from Generalized Lifting-Surface Theory and Wind Tunnel Forced-Oscillation Tests

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S; Murphy, Patrick C.

    2014-01-01

    Improving aerodynamic models for adverse loss-of-control conditions in flight is an area being researched under the NASA Aviation Safety Program. Aerodynamic models appropriate for loss-of-control conditions require a more general mathematical representation to predict nonlinear unsteady behaviors. As more general aerodynamic models that include nonlinear higher-order effects are studied, measurements that confound aerodynamic and structural responses become probable. In this study an initial step is taken toward including structural flexibility in the analysis of rigid-body forced-oscillation testing, accounting for dynamic rig, sting, and balance flexibility. Because of the significant testing required and the associated costs of a general study, it makes sense to capitalize on low-cost analytical methods where possible, especially where structural flexibility can be accounted for by a low-cost method. This paper provides an initial look at using linear lifting-surface theory applied to rigid-body aircraft roll forced-oscillation tests.

  6. Two-condition within-participant statistical mediation analysis: A path-analytic framework.

    PubMed

    Montoya, Amanda K; Hayes, Andrew F

    2017-03-01

    Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in both of 2 different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of or formal inference about the indirect effect. In this article we recast Judd et al.'s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis. By so doing, it is apparent how to estimate the indirect effect of a within-participant manipulation on some outcome through a mediator as the product of paths of influence. This path-analytic approach eliminates the need for discrete hypothesis tests about components of the model to support a claim of mediation, as Judd et al.'s method requires, because it relies only on an inference about the product of paths-the indirect effect. We generalize methods of inference for the indirect effect widely used in between-participant designs to this within-participant version of mediation analysis, including bootstrap confidence intervals and Monte Carlo confidence intervals. Using this path-analytic approach, we extend the method to models with multiple mediators operating in parallel and serially and discuss the comparison of indirect effects in these more complex models. We offer macros and code for SPSS, SAS, and Mplus that conduct these analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
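    As a rough illustration of the path-analytic approach with bootstrap inference, the sketch below estimates the indirect effect in a two-condition within-participant design as the product a*b, taking a as the mean mediator difference and b from a regression of the outcome difference on the mediator difference and the mean-centered mediator sum. The data are simulated and the model is a simplified reading of the framework, not the authors' SPSS/SAS/Mplus macros.

    ```python
    # Minimal sketch (simulated data; simplified reading of the path-analytic framework):
    # indirect effect a*b with a percentile bootstrap confidence interval.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 80
    M1 = rng.normal(0.0, 1.0, n)                       # mediator, condition 1
    M2 = rng.normal(0.5, 1.0, n)                       # mediator, condition 2 (shifted)
    Y1 = 0.4 * M1 + rng.normal(0.0, 1.0, n)            # outcome, condition 1
    Y2 = 0.4 * M2 + 0.3 + rng.normal(0.0, 1.0, n)      # outcome, condition 2

    def indirect_effect(M1, M2, Y1, Y2):
        dM, dY = M2 - M1, Y2 - Y1
        a = dM.mean()                                  # path a: condition -> mediator
        sM = (M1 + M2) / 2.0
        X = np.column_stack([np.ones(len(dM)), dM, sM - sM.mean()])
        b = np.linalg.lstsq(X, dY, rcond=None)[0][1]   # path b: mediator -> outcome
        return a * b

    point = indirect_effect(M1, M2, Y1, Y2)
    boot = []
    for _ in range(5000):
        idx = rng.integers(0, n, n)                    # resample participants with replacement
        boot.append(indirect_effect(M1[idx], M2[idx], Y1[idx], Y2[idx]))
    ci = np.percentile(boot, [2.5, 97.5])
    print(point, ci)                                   # claim mediation if the CI excludes 0
    ```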

  7. Analytical multiple scattering correction to the Mie theory: Application to the analysis of the lidar signal

    NASA Technical Reports Server (NTRS)

    Flesia, C.; Schwendimann, P.

    1992-01-01

    The contribution of multiple scattering to the lidar signal depends on the optical depth tau. Therefore, the analysis, based on the assumption that multiple scattering can be neglected, is limited to cases characterized by low values of the optical depth (tau less than or equal to 0.1) and hence excludes scattering from most clouds. Moreover, all inversion methods relating the lidar signal to number densities and particle sizes must be modified, since multiple scattering affects the direct analysis. The essential requirements of a realistic model for lidar measurements that includes multiple scattering and can be applied to practical situations are as follows. (1) What is needed is not merely a correction term or a rough approximation describing the results of a particular experiment, but a general theory of multiple scattering tying together the relevant physical parameters we seek to measure. (2) An analytical generalization of the lidar equation is needed that can be applied in the case of a realistic aerosol. A purely analytical formulation is important in order to avoid the convergence and stability problems which, in a numerical approach, are due to the large number of events that have to be taken into account in the presence of large optical depth and/or strong experimental noise.

  8. Analytical and experimental study of vibrations in a gear transmission

    NASA Technical Reports Server (NTRS)

    Choy, F. K.; Ruan, Y. F.; Zakrajsek, J. J.; Oswald, Fred B.; Coy, J. J.

    1991-01-01

    An analytical simulation of the dynamics of a gear transmission system is presented and compared to experimental results from a gear noise test rig at the NASA Lewis Research Center. The analytical procedure developed couples the dynamic behaviors of the rotor-bearing-gear system with the response of the gearbox structure. The modal synthesis method is used in solving the overall dynamics of the system. Locally, each rotor-gear stage is modeled as an individual rotor-bearing system using the matrix transfer technique. The dynamics of each individual rotor are coupled with other rotor stages through the nonlinear gear mesh forces and with the gearbox structure through the bearing support systems. The modal characteristics of the gearbox structure are evaluated using the finite element procedure. A variable time-stepping integration routine is used to calculate the overall time-transient behavior of the system in modal coordinates. The global dynamic behavior of the system is expressed in a generalized coordinate system. Transient and steady-state vibrations of the gearbox system are presented in the time and frequency domains. The vibration characteristics of a simple single-mesh gear noise test rig are modeled, and the numerical simulations are compared to experimental data measured under typical operating conditions. The system natural frequencies, peak vibration amplitudes, and gear mesh frequencies are generally in good agreement.

  9. Surface-Enhanced Raman Scattering (SERS) for Detection in Immunoassays. Applications, fundamentals, and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driskell, Jeremy Daniel

    2006-08-09

    Immunoassays have been utilized for the detection of biological analytes for several decades. Many formats and detection strategies have been explored, each having unique advantages and disadvantages. More recently, surface-enhanced Raman scattering (SERS) has been introduced as a readout method for immunoassays, and has shown great potential to meet many key analytical figures of merit. This technology is in its infancy and this dissertation explores the diversity of this method as well as the mechanism responsible for surface enhancement. Approaches to reduce assay times are also investigated. Implementing the knowledge gained from these studies will lead to a more sensitive immunoassay requiring less time than its predecessors. This dissertation is organized into six sections. The first section includes a literature review of the previous work that led to this dissertation. A general overview of the different approaches to immunoassays is given, outlining the strengths and weaknesses of each. Included is a detailed review of binding kinetics, which is central for decreasing assay times. Next, the theoretical underpinnings of SERS are reviewed at their current level of understanding. Past work has argued that surface plasmon resonance (SPR) of the enhancing substrate influences the SERS signal; therefore, the SPR of the extrinsic Raman labels (ERLs) utilized in our SERS-based immunoassay is discussed. Four original research chapters follow the Introduction, each presented as a separate manuscript. Chapter 2 modifies a SERS-based immunoassay previously developed in our group, extending it to the low-level detection of viral pathogens and demonstrating its versatility in terms of analyte type. Chapter 3 investigates the influence of ERL size, material composition, and separation distance between the ERLs and capture substrate on the SERS signal. This chapter links SPR with SERS enhancement factors and is consistent with many of the results from theoretical treatments of SPR and SERS. Chapter 4 introduces a novel method of reducing sample incubation time via capture substrate rotation. Moreover, this work led to a method of virus quantification without the use of standards. Chapter 5 extends the methodology developed in Chapter 4 to both the antigen and ERL labeling steps to perform assays with improved analytical performance in less time than can be accomplished in diffusion-controlled assays. This dissertation concludes with a general summary and speculates on the future of this exciting approach to carrying out immunoassays.

  10. Comparison of bipolar vs. tripolar concentric ring electrode Laplacian estimates.

    PubMed

    Besio, W; Aakula, R; Dai, W

    2004-01-01

    Potentials on the body surface arising from the heart are functions of both space and time. The 12-lead electrocardiogram (ECG) provides useful global temporal assessment, but it yields limited spatial information due to the smoothing effect caused by the volume conductor. The smoothing complicates identification of multiple simultaneous bioelectrical events. In an attempt to circumvent the smoothing problem, some researchers used a five-point method (FPM) to numerically estimate the analytical solution of the Laplacian with an array of monopolar electrodes. The FPM is generalized to develop a bipolar concentric ring electrode system. We have developed a new Laplacian ECG sensor, a tri-electrode sensor, based on a nine-point method (NPM) numerical approximation of the analytical Laplacian. For comparison, the NPM, FPM, and compact NPM were calculated over a 400 x 400 mesh with 1/400 spacing. Tri- and bi-electrode sensors were also simulated and their Laplacian estimates were compared against the analytical Laplacian. We found that tri-electrode sensors have much-improved accuracy, with significantly smaller relative and maximum errors in estimating the Laplacian operator. Apart from the higher accuracy, our new electrode configuration will allow better localization of the electrical activity of the heart than bi-electrode configurations.
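    For orientation, the sketch below compares a standard five-point Laplacian estimate with a nine-point variant that adds a second ring of sample points at twice the spacing (loosely mirroring the one-ring bipolar versus two-ring tripolar electrode geometry); the exact stencil used in the paper may differ. The test potential and spacing are arbitrary.

    ```python
    # Minimal sketch (standard finite-difference stencils, illustrative only): five-point
    # and two-ring nine-point Laplacian estimates compared with the analytical value.
    import numpy as np

    h = 1.0 / 400
    x, y = 0.3, 0.7
    f = lambda x, y: np.sin(2 * x) * np.cos(3 * y)        # test surface potential
    lap_exact = -13.0 * f(x, y)                           # analytical Laplacian of f

    # Five-point method: four neighbours at distance h
    fpm = (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

    # Nine-point method: adds four neighbours at distance 2h (fourth-order accurate)
    npm = (16 * (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h))
           - (f(x + 2 * h, y) + f(x - 2 * h, y) + f(x, y + 2 * h) + f(x, y - 2 * h))
           - 60 * f(x, y)) / (12 * h**2)

    print(abs(fpm - lap_exact), abs(npm - lap_exact))     # the nine-point error is smaller
    ```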

  11. A QC approach to the determination of day-to-day reproducibility and robustness of LC-MS methods for global metabolite profiling in metabonomics/metabolomics.

    PubMed

    Gika, Helen G; Theodoridis, Georgios A; Earll, Mark; Wilson, Ian D

    2012-09-01

    An approach to the determination of day-to-day analytical robustness of LC-MS-based methods for global metabolic profiling using a pooled QC sample is presented for the evaluation of metabonomic/metabolomic data. A set of 60 urine samples were repeatedly analyzed on five different days and the day-to-day reproducibility of the data obtained was determined. Multivariate statistical analysis was performed with the aim of evaluating variability and selected peaks were assessed and validated in terms of retention time stability, mass accuracy and intensity. The methodology enables the repeatability/reproducibility of extended analytical runs in large-scale studies to be determined, allowing the elimination of analytical (as opposed to biological) variability, in order to discover true patterns and correlations within the data. The day-to-day variability of the data revealed by this process suggested that, for this particular system, 3 days continuous operation was possible without the need for maintenance and cleaning. Variation was generally based on signal intensity changes over the 7-day period of the study, and was mainly a result of source contamination.

  12. A new analytical solution solved by triple series equations method for constant-head tests in confined aquifers

    NASA Astrophysics Data System (ADS)

    Chang, Ya-Chi; Yeh, Hund-Der

    2010-06-01

    Constant-head pumping tests are usually employed to determine aquifer parameters, and they can be performed in fully or partially penetrating wells. Generally, the Dirichlet condition is prescribed along the well screen and the Neumann-type no-flow condition is specified over the unscreened part of the test well. The mathematical model describing the aquifer response to a constant-head test performed in a fully penetrating well can be easily solved by the conventional integral transform technique under the uniform Dirichlet-type condition along the rim of the wellbore. However, the boundary condition for a test well with partial penetration should be considered as a mixed-type condition. This mixed boundary value problem in a confined aquifer system of infinite radial extent and finite vertical extent is solved by the Laplace and finite Fourier transforms in conjunction with the triple series equations method. This approach provides analytical results for the drawdown in a partially penetrating well for an arbitrary location of the well screen in a finite-thickness aquifer. The semi-analytical solutions are particularly useful for practical applications from a computational point of view.

  13. On the half-life of luminescence signals in dosimetric applications: A unified presentation

    NASA Astrophysics Data System (ADS)

    Pagonis, V.; Kitis, G.; Polymeris, G. S.

    2018-06-01

    Luminescence signals from natural and man-made materials are widely used in dosimetric and dating applications. In general, there are two types of half-lives of luminescence signals which are of importance to experimental and modeling work in this research area. The first type of half-life is the time required for the population of the trapped charge in a single trap to decay to half its initial value. The second type of half-life is the time required for the luminescence intensity to drop to half of its initial value. While there are a handful of analytical expressions available in the literature for the first type of half-life, there are no corresponding analytical expressions for the second type. In this work new analytical expressions are derived for the half-life of luminescence signals during continuous wave optical stimulation luminescence (CW-OSL) or isothermal luminescence (ITL) experiments. The analytical expressions are derived for several commonly used luminescence models which are based on delocalized transitions involving the conduction band: first and second order kinetics, empirical general order kinetics (GOK), mixed order kinetics (MOK) and the one-trap one-recombination center (OTOR) model. In addition, half-life expressions are derived for a different type of luminescence model, which is based on localized transitions in a random distribution of charges. The new half-life expressions contain two parts. The first part is inversely proportional to the thermal or optical excitation rate, and depends on the experimental conditions and on the cross section of the relevant luminescence process. The second part is characteristic of the optical and/or thermal properties of the material, as expressed by the parameters in the model. A new simple and quick method for analyzing luminescence signals is developed, and examples are given of applying the new method to a variety of dosimetric materials. The new test allows quick determination of whether a set of experimentally measured luminescence signals originates in a single trap or in multiple traps.
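    In the simplest, first-order case both kinds of half-life coincide and reduce to the standard result, which already shows the inverse proportionality to the excitation rate mentioned above. The sketch below illustrates only this textbook special case with assumed parameter values, not the more general GOK, MOK, OTOR, or localized-transition expressions derived in the paper.

    ```python
    # Minimal sketch (first-order kinetics only): for a CW-OSL signal
    # I(t) = n0 * p * exp(-p * t), both the trapped charge and the intensity halve
    # after t_half = ln(2) / p, inversely proportional to the excitation rate p.
    import numpy as np

    p = 0.25                     # effective optical excitation rate (s^-1), assumed value
    n0 = 1.0e6                   # initial trapped-charge population, assumed value
    t_half = np.log(2.0) / p

    t = np.array([0.0, t_half])
    I = n0 * p * np.exp(-p * t)
    print(t_half, I[1] / I[0])   # the intensity ratio is 0.5
    ```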

  14. Molcas 8: New capabilities for multiconfigurational quantum chemical calculations across the periodic table.

    PubMed

    Aquilante, Francesco; Autschbach, Jochen; Carlson, Rebecca K; Chibotaru, Liviu F; Delcey, Mickaël G; De Vico, Luca; Fdez Galván, Ignacio; Ferré, Nicolas; Frutos, Luis Manuel; Gagliardi, Laura; Garavelli, Marco; Giussani, Angelo; Hoyer, Chad E; Li Manni, Giovanni; Lischka, Hans; Ma, Dongxia; Malmqvist, Per Åke; Müller, Thomas; Nenov, Artur; Olivucci, Massimo; Pedersen, Thomas Bondo; Peng, Daoling; Plasser, Felix; Pritchard, Ben; Reiher, Markus; Rivalta, Ivan; Schapiro, Igor; Segarra-Martí, Javier; Stenrup, Michael; Truhlar, Donald G; Ungur, Liviu; Valentini, Alessio; Vancoillie, Steven; Veryazov, Valera; Vysotskiy, Victor P; Weingart, Oliver; Zapata, Felipe; Lindh, Roland

    2016-02-15

    In this report, we summarize and describe the recent unique updates and additions to the Molcas quantum chemistry program suite as contained in release version 8. These updates include natural and spin orbitals for studies of magnetic properties, local and linear scaling methods for the Douglas-Kroll-Hess transformation, the generalized active space concept in MCSCF methods, a combination of multiconfigurational wave functions with density functional theory in the MC-PDFT method, additional methods for computation of magnetic properties, methods for diabatization, analytical gradients of state average complete active space SCF in association with density fitting, methods for constrained fragment optimization, large-scale parallel multireference configuration interaction including analytic gradients via the interface to the Columbus package, and approximations of the CASPT2 method to be used for computations of large systems. In addition, the report includes the description of a computational machinery for nonlinear optical spectroscopy through an interface to the QM/MM package Cobramm. Further, a module to run molecular dynamics simulations is added, two surface hopping algorithms are included to enable nonadiabatic calculations, and the DQ method for diabatization is added. Finally, we report on improvements with respect to alternative file options and parallelization. © 2015 Wiley Periodicals, Inc.

  15. Two important limitations relating to the spiking of environmental samples with contaminants of emerging concern: How close to the real analyte concentrations are the reported recovered values?

    PubMed

    Michael, Costas; Bayona, Josep Maria; Lambropoulou, Dimitra; Agüera, Ana; Fatta-Kassinos, Despo

    2017-06-01

    Occurrence and effects of contaminants of emerging concern pose a special challenge to environmental scientists. The investigation of these effects requires reliable, valid, and comparable analytical data. To this effect, two critical aspects are raised herein concerning the limitations of the produced analytical data. The first relates to the inherent difficulty that exists in the analysis of environmental samples, namely the lack of knowledge (information), in many cases, of the form(s) in which the contaminant is present in the sample. Thus, the produced analytical data can only refer to the amount of the free contaminant, ignoring the amount that may be present in other forms, e.g., chelated or conjugated forms. The other important aspect refers to the way in which the spiking procedure is generally performed to determine the recovery of the analytical method. Spiking environmental samples, in particular solid samples, with standard solution followed by immediate extraction, as is the common practice, can lead to an overestimation of the recovery. This is so because no time is given to the system to establish possible equilibria between the solid matter (inorganic and/or organic) and the contaminant. Therefore, the spiking procedure needs to be reconsidered by including a study of the extractable amount of the contaminant versus the time elapsed between spiking and the extraction of the sample. This study can become an element of the validation package of the method.

  16. On the performance of piezoelectric harvesters loaded by finite width impulses

    NASA Astrophysics Data System (ADS)

    Doria, A.; Medè, C.; Desideri, D.; Maschio, A.; Codecasa, L.; Moro, F.

    2018-02-01

    The response of cantilevered piezoelectric harvesters loaded by finite-width impulses of base acceleration is studied analytically in the frequency domain in order to identify the parameters that influence the generated voltage. Experimental tests are then performed on harvesters loaded by hammer impacts. The latter are used to confirm the analytical results and to validate a linear finite element (FE) model of a unimorph harvester. The FE model is, in turn, used to extend the analytical results to more general harvesters (tapered, inverse tapered, triangular) and to more general impulses (heel strike in human gait). From the analytical and numerical results, design criteria for improving harvester performance are obtained.
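    A minimal frequency-domain sketch of this kind of calculation is given below, using a lumped single-degree-of-freedom base-excited oscillator as a stand-in for the cantilever and taking the generated voltage as proportional to the relative displacement. The natural frequency, damping ratio, and pulse width are assumed values; the electromechanical model in the paper is more detailed.

    ```python
    # Minimal sketch (lumped SDOF stand-in, not the paper's harvester model):
    # frequency-domain response to a half-sine base-acceleration pulse of finite width.
    import numpy as np

    fn, zeta = 60.0, 0.02                       # assumed natural frequency (Hz) and damping ratio
    wn = 2 * np.pi * fn

    T, N = 0.5, 2**14                           # record length (s) and number of samples
    t = np.linspace(0, T, N, endpoint=False)
    dt = t[1] - t[0]

    tau = 0.004                                 # pulse width (s): the "finite width" of the impulse
    a = np.where(t < tau, np.sin(np.pi * t / tau), 0.0)   # half-sine base acceleration

    A = np.fft.rfft(a)
    w = 2 * np.pi * np.fft.rfftfreq(N, dt)
    H = -1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)        # base-excited SDOF transfer function
    z = np.fft.irfft(H * A, n=N)                          # relative displacement ~ generated voltage

    print("peak response:", np.max(np.abs(z)))
    # Sweeping tau shows how the pulse width relative to 1/fn shapes the excited response.
    ```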

  17. On Statistical Approaches for Demonstrating Analytical Similarity in the Presence of Correlation.

    PubMed

    Yang, Harry; Novick, Steven; Burdick, Richard K

    Analytical similarity is the foundation for demonstration of biosimilarity between a proposed product and a reference product. For this assessment, the U.S. Food and Drug Administration (FDA) currently recommends a tiered system in which quality attributes are categorized into three tiers commensurate with their risk, and approaches of varying statistical rigor are subsequently used for the three tiers of quality attributes. Key to the analyses of Tier 1 and Tier 2 quality attributes is the establishment of the equivalence acceptance criterion and quality range. For particular licensure applications, the FDA has provided advice on statistical methods for demonstration of analytical similarity. For example, for Tier 1 assessment, an equivalence test can be used based on an equivalence margin of 1.5 σ_R, where σ_R is the reference product variability estimated by the sample standard deviation S_R from a sample of reference lots. The quality range for demonstrating Tier 2 analytical similarity is of the form X̄_R ± K × σ_R, where the constant K is appropriately justified. To demonstrate Tier 2 analytical similarity, a large percentage (e.g., 90%) of the test product must fall in the quality range. In this paper, through both theoretical derivations and simulations, we show that when the reference drug product lots are correlated, the sample standard deviation S_R underestimates the true reference product variability σ_R. As a result, substituting S_R for σ_R in the Tier 1 equivalence acceptance criterion and the Tier 2 quality range inappropriately reduces the statistical power and the ability to declare analytical similarity. Also explored is the impact of correlation among drug product lots on Type I error rate and power. Three methods based on generalized pivotal quantities are introduced, and their performance is compared against a two one-sided tests (TOST) approach. Finally, strategies to mitigate the risk of correlation among the reference product lots are discussed. A biosimilar is a generic version of the original biological drug product. A key component of a biosimilar development is the demonstration of analytical similarity between the biosimilar and the reference product. Such demonstration relies on application of statistical methods to establish a similarity margin and an appropriate test for equivalence between the two products. This paper discusses statistical issues with demonstration of analytical similarity and provides alternate approaches to potentially mitigate these problems. © PDA, Inc. 2016.
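    The central claim, that correlation among reference lots makes the sample standard deviation a downward-biased estimate of the true lot-to-lot variability, is easy to see in a small simulation. The sketch below uses an AR(1) correlation structure and illustrative parameter values; it is not the paper's derivation or simulation design.

    ```python
    # Minimal sketch (simulation, illustrative only): with positively correlated lots
    # (AR(1) here), the sample standard deviation S_R tends to underestimate sigma_R.
    import numpy as np

    rng = np.random.default_rng(2)
    sigma_R, rho, n_lots, n_sim = 1.0, 0.6, 10, 10000

    sds = []
    for _ in range(n_sim):
        x = np.empty(n_lots)
        x[0] = rng.normal(0.0, sigma_R)
        for i in range(1, n_lots):
            # stationary AR(1) with marginal standard deviation sigma_R
            x[i] = rho * x[i - 1] + rng.normal(0.0, sigma_R * np.sqrt(1 - rho**2))
        sds.append(np.std(x, ddof=1))

    print("mean S_R:", np.mean(sds), "true sigma_R:", sigma_R)   # mean S_R < sigma_R
    ```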

  18. Analytic Approximations to the Free Boundary and Multi-dimensional Problems in Financial Derivatives Pricing

    NASA Astrophysics Data System (ADS)

    Lau, Chun Sing

    This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising the accuracy. The multi-stage scheme further allows the approximate results to systematically converge to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread options. Since the final formula is in closed form, all the hedging parameters can also be derived in closed form. Numerical examples demonstrate that the pricing and hedging errors are in general less than 1% relative to the benchmark prices obtained by numerical integration or Monte Carlo simulation. By exploiting an explicit relationship between the option price and the underlying probability distribution, we further derive an approximate distribution function for the general basket-spread variable. It can be used to approximate the transition probability distribution of any linear combination of correlated GBMs. Finally, an implicit perturbation is applied to reduce the pricing errors by factors of up to 100. When compared against the existing methods, the basket-spread option formula coupled with the implicit perturbation turns out to be one of the most robust and accurate approximation methods.
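    The Kirk (1995) two-asset spread-option approximation that the thesis generalizes can be sketched in a few lines: the sum of the second forward price and the strike is treated as approximately lognormal, which yields a Black-style closed form. The parameter values below are illustrative only, and the thesis's basket-spread generalization and implicit perturbation are not reproduced here.

    ```python
    # Minimal sketch of Kirk's (1995) approximation for a European spread call on F1 - F2
    # with strike K, given forward prices F1 and F2 (illustrative inputs).
    from math import exp, log, sqrt, erf

    def norm_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def kirk_spread_call(F1, F2, K, sigma1, sigma2, rho, T, r):
        w = F2 / (F2 + K)                              # weight of the shifted second asset
        sigma = sqrt(sigma1**2 - 2 * rho * sigma1 * sigma2 * w + (sigma2 * w)**2)
        d1 = (log(F1 / (F2 + K)) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return exp(-r * T) * (F1 * norm_cdf(d1) - (F2 + K) * norm_cdf(d2))

    print(kirk_spread_call(F1=110.0, F2=100.0, K=5.0, sigma1=0.30, sigma2=0.25,
                           rho=0.5, T=1.0, r=0.02))
    ```

    Because the formula is in closed form, Greeks follow by direct differentiation, which is the property the thesis exploits for efficient hedging-parameter computation.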

  19. Serum levels of organochlorine pesticides in the general population of Thessaly, Greece, determined by HS-SPME GC-MS method.

    PubMed

    Koureas, Michalis; Karagkouni, Foteini; Rakitskii, Valerii; Hadjichristodoulou, Christos; Tsatsakis, Aristidis; Tsakalof, Andreas

    2016-07-01

    In this study, exposure levels of organochlorine pesticides (OCs) were determined in the general population residing in Larissa, central Greece. Serum samples from 103 volunteers were analyzed by optimized headspace solid-phase microextraction gas chromatography-mass spectrometry to detect and quantify OC levels. The most frequently detected analytes were p,p'-DDE (frequency 99%, median: 1.25 ng/ml) and hexachlorobenzene (HCB) (frequency 69%, median: 0.13 ng/ml). Statistical analysis revealed a significant relationship of p,p'-DDE and HCB levels with age. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Regge calculus and observations. II. Further applications.

    NASA Astrophysics Data System (ADS)

    Williams, Ruth M.; Ellis, G. F. R.

    1984-11-01

    The method developed in an earlier paper for tracing geodesics of particles and light rays through Regge calculus space-times is applied to a number of problems in the Schwarzschild geometry. It is possible to obtain accurate predictions of light bending by taking sufficiently small Regge blocks. Calculations of perihelion precession, Thomas precession, and the distortion of a ball of fluid moving on a geodesic can also show good agreement with the analytic solution. However, difficulties arise in obtaining accurate predictions for general orbits in these space-times. Applications to other problems in general relativity are discussed briefly.

  1. Some elements of a theory of multidimensional complex variables. I - General theory. II - Expansions of analytic functions and application to fluid flows

    NASA Technical Reports Server (NTRS)

    Martin, E. Dale

    1989-01-01

    The paper introduces a new theory of N-dimensional complex variables and analytic functions which, for N greater than 2, is both a direct generalization and a close analog of the theory of ordinary complex variables. The algebra in the present theory is a commutative ring, not a field. Functions of a three-dimensional variable are defined, and the definition of the derivative then leads to analytic functions.

  2. Nano-flow vs standard-flow: Which is the more suitable LC/MS method for quantifying hepcidin-25 in human serum in routine clinical settings?

    PubMed

    Vialaret, Jérôme; Picas, Alexia; Delaby, Constance; Bros, Pauline; Lehmann, Sylvain; Hirtz, Christophe

    2018-06-01

    The hepcidin-25 peptide is a biomarker known to have considerable clinical potential for diagnosing iron-related diseases. Developing analytical methods for the absolute quantification of hepcidin is still a real challenge, however, due to the sensitivity, specificity, and reproducibility issues involved. In this study, we compare and discuss two MS-based assays for quantifying hepcidin, which differ only in the type of liquid chromatography involved (nano LC/MS versus standard LC/MS). The same sample preparation, the same internal standards, and the same MS analyzer were used with both approaches. In the field of proteomics, nano LC chromatography is generally known to be more sensitive but less robust than standard LC methods. In this study, we established that the performance of the standard LC method is equivalent to that of our previously developed nano LC method. Since the analytical performances were very similar in both cases, the standard-flow platform provides the more suitable alternative for accurately determining hepcidin in routine clinical settings. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Analytic Evolution of Singular Distribution Amplitudes in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tandogan Kunkel, Asli

    2014-08-01

    Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities, such as nonzero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for the evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.

  4. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2011-04-01 2011-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  5. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2014-04-01 2014-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  6. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2012-04-01 2012-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  7. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2013-04-01 2013-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  8. How Do Gut Feelings Feature in Tutorial Dialogues on Diagnostic Reasoning in GP Traineeship?

    ERIC Educational Resources Information Center

    Stolper, C. F.; Van de Wiel, M. W. J.; Hendriks, R. H. M.; Van Royen, P.; Van Bokhoven, M. A.; Van der Weijden, T.; Dinant, G. J.

    2015-01-01

    Diagnostic reasoning is considered to be based on the interaction between analytical and non-analytical cognitive processes. Gut feelings, a specific form of non-analytical reasoning, play a substantial role in diagnostic reasoning by general practitioners (GPs) and may activate analytical reasoning. In GP traineeships in the Netherlands, trainees…

  9. Physical-geometric optics method for large size faceted particles.

    PubMed

    Sun, Bingqiang; Yang, Ping; Kattawar, George W; Zhang, Xiaodong

    2017-10-02

    A new physical-geometric optics method is developed to compute the single-scattering properties of faceted particles. It incorporates a general absorption vector to accurately account for inhomogeneous wave effects, and subsequently yields analytical formulas that are effective and computationally efficient for absorptive scattering particles. A bundle of rays incident on a certain facet can be traced as a single beam. For a beam incident on multiple facets, a systematic beam-splitting technique based on computer graphics is used to split the original beam into several sub-beams so that each sub-beam is incident only on an individual facet. The new beam-splitting technique significantly reduces the computational burden. The present physical-geometric optics method can be generalized to arbitrary faceted particles with either convex or concave shapes and with a homogeneous or an inhomogeneous (e.g., a particle with a core) composition. The single-scattering properties of irregular convex homogeneous and inhomogeneous hexahedra are simulated and compared to their counterparts from two other methods, including a numerically rigorous method.

  10. Free-form surface design method for a collimator TIR lens.

    PubMed

    Tsai, Chung-Yu

    2016-04-01

    A free-form (FF) surface design method is proposed for a general axial-symmetrical collimator system consisting of a light source and a total internal reflection lens with two coupled FF boundary surfaces. The profiles of the boundary surfaces are designed using a FF surface construction method such that each incident ray is directed (refracted and reflected) in such a way as to form a specified image pattern on the target plane. The light ray paths within the system are analyzed using an exact analytical model and a skew-ray tracing approach. In addition, the validity of the proposed FF design method is demonstrated by means of ZEMAX simulations. It is shown that the illumination distribution formed on the target plane is in good agreement with that specified by the user. The proposed surface construction method is mathematically straightforward and easily implemented in computer code. As such, it provides a useful tool for the design and analysis of general axial-symmetrical optical systems.

  11. A general method to determine sampling windows for nonlinear mixed effects models with an application to population pharmacokinetic studies.

    PubMed

    Foo, Lee Kien; McGree, James; Duffull, Stephen

    2012-01-01

    Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.

  12. Probing the space of toric quiver theories

    NASA Astrophysics Data System (ADS)

    Hewlett, Joseph; He, Yang-Hui

    2010-03-01

    We demonstrate a practical and efficient method for generating toric Calabi-Yau quiver theories, applicable to both D3 and M2 brane world-volume physics. A new analytic method is presented at low order parameters, and an algorithm for the general case is developed which has polynomial complexity in the number of edges in the quiver. Using this algorithm, carefully implemented, we classify the quiver diagrams and assign possible superpotentials for various small values of the number of edges and nodes. We examine some preliminary statistics on this space of toric quiver theories.

  13. Green's function calculations for semi-infinite carbon nanotubes

    NASA Astrophysics Data System (ADS)

    John, D. L.; Pulfrey, D. L.

    2006-02-01

    In the modeling of nanoscale electronic devices, the non-equilibrium Green's function technique is gaining increasing popularity. One complication in this method is the need for computation of the self-energy functions that account for the interactions between the active portion of a device and its leads. In the one-dimensional case, these functions may be computed analytically. In higher dimensions, a numerical approach is required. In this work, we generalize earlier methods that were developed for tight-binding Hamiltonians, and present results for the case of a carbon nanotube.
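    The one-dimensional case mentioned above has a well-known closed form: for a semi-infinite nearest-neighbour tight-binding chain with zero on-site energy and hopping t, the surface Green's function solves g = 1/(E - t^2 g), and the lead self-energy is the coupling squared times g. The sketch below evaluates the retarded branch numerically with illustrative parameters; it is the textbook 1D result, not the nanotube generalization developed in the paper.

    ```python
    # Minimal sketch (textbook 1D lead, illustrative only): retarded surface Green's
    # function and self-energy of a semi-infinite nearest-neighbour tight-binding chain.
    import numpy as np

    def surface_g(E, t=1.0, eta=1e-6):
        z = E + 1j * eta
        root = np.sqrt(z**2 - 4 * t**2)
        g_plus = (z + root) / (2 * t**2)
        g_minus = (z - root) / (2 * t**2)
        # Retarded branch: pick the root with non-positive imaginary part
        return np.where(g_minus.imag <= 0, g_minus, g_plus)

    E = np.linspace(-3.0, 3.0, 7)
    g = surface_g(E)
    sigma = 1.0**2 * g                 # self-energy = (coupling to the lead)^2 * g
    print(np.round(sigma, 3))          # Im(sigma) < 0 inside the band |E| < 2t, ~0 outside
    ```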

  14. A binomial stochastic kinetic approach to the Michaelis-Menten mechanism

    NASA Astrophysics Data System (ADS)

    Lente, Gábor

    2013-05-01

    This Letter presents a new method that gives an analytical approximation of the exact solution of the stochastic Michaelis-Menten mechanism without computationally demanding matrix operations. The method is based on solving the deterministic rate equations and then using the results as guiding variables for calculating probability values using binomial distributions. This principle can be generalized to a number of different kinetic schemes and is expected to be very useful in the evaluation of measurements focusing on the catalytic activity of one or a few individual enzyme molecules.
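    One plausible reading of that principle is sketched below: integrate the deterministic Michaelis-Menten rate equations, then use the deterministic conversion fraction as the success probability of a binomial distribution over the number of product molecules. The rate constants and molecule counts are illustrative, and the Letter's actual construction may differ in detail.

    ```python
    # Minimal sketch (one plausible reading of the principle, not the Letter's equations):
    # deterministic Michaelis-Menten solution used as the parameter of a binomial
    # distribution over the product count.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.stats import binom

    k1, km1, k2 = 1.0e-2, 1.0, 0.5        # rate constants (illustrative values)
    E0, S0 = 1, 50                        # one enzyme molecule, 50 substrate molecules

    def rhs(t, y):
        S, C = y                          # free substrate and enzyme-substrate complex
        dS = -k1 * S * (E0 - C) + km1 * C
        dC = k1 * S * (E0 - C) - (km1 + k2) * C
        return [dS, dC]

    sol = solve_ivp(rhs, (0.0, 50.0), [float(S0), 0.0], t_eval=[50.0], rtol=1e-8)
    S, C = sol.y[:, -1]
    p_conv = (S0 - S - C) / S0            # deterministic fraction converted to product

    k = np.arange(0, S0 + 1)
    pmf = binom.pmf(k, S0, p_conv)        # approximate distribution of the product count
    print("mean product count:", (k * pmf).sum())
    ```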

  15. Solitary wave solutions and their interactions for fully nonlinear water waves with surface tension in the generalized Serre equations

    NASA Astrophysics Data System (ADS)

    Dutykh, Denys; Hoefer, Mark; Mitsotakis, Dimitrios

    2018-04-01

    Some effects of surface tension on fully nonlinear, long, surface water waves are studied by numerical means. The differences between various solitary waves and their interactions in subcritical and supercritical surface tension regimes are presented. Analytical expressions for new peaked traveling wave solutions are presented in the dispersionless case of critical surface tension. Numerical experiments are performed using a highly accurate finite element method based on smooth cubic splines and the four-stage, classical, explicit Runge-Kutta method of order 4.

  16. Geometry of Thin Nematic Elastomer Sheets

    NASA Astrophysics Data System (ADS)

    Aharoni, Hillel; Sharon, Eran; Kupferman, Raz

    A thin sheet of nematic elastomer attains 3D configurations depending on the nematic director field upon heating. In this talk we describe the intrinsic geometry of such a sheet, and derive an expression for the metric induced by general smooth nematic director fields. Furthermore, we investigate the reverse problem of constructing a director field that induces a specified 2D geometry. We provide an explicit analytical recipe for constructing any surface of revolution using this method. We demonstrate how the design of an arbitrary 2D geometry is accessible using approximate numerical methods.

  17. Electronic response to nuclear breathing mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ludwig, Hendrik; Ruffini, Remo; ICRANet, University of Nice-Sophia Antipolis, 28 Av. de Valrose, 06103 Nice Cedex 2

    2015-12-17

    Based on our previous work on stationary oscillation modes of electrons around giant nuclei, we show how to treat a general driving force on the electron gas, such as the one generated by the breathing mode of the nucleus, by means of the spectral method. As an example we demonstrate this method for a system with Z = 10^4 in β-equilibrium, with the electrons compressed up to the nuclear radius. In this case the stationary modes can be obtained analytically, which allows for a very speedy numerical calculation of the final result.

  18. Marker-based reconstruction of the kinematics of a chain of segments: a new method that incorporates joint kinematic constraints.

    PubMed

    Klous, Miriam; Klous, Sander

    2010-07-01

    The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from noisy measured motion of skin markers. Existing kinematic models for reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account, and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method is presented for a chain of rigid bodies interconnected by spherical joints (chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category) regarding the effects of continuous noise simulating skin movement artifacts and regarding systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that the accuracy of the chain-method is higher than that of the Veldpaus-method and similar to that of the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence of this method is substantially higher than that of the Lu-method. With only three segments, the average number of required iterations with the chain-method is 3.0 ± 0.2 times lower than with the Lu-method when skin movement artifacts are simulated by applying a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor of 5 lower than the number of iterations for the Lu-method. However, the Lu-method performs slightly better than the chain-method: the RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method, compared with 59% for the chain-method.
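    The single-segment least-squares step that underlies all of these methods, estimating a segment's rotation and translation from noisy marker coordinates, is sketched below with an SVD-based (Kabsch-style) solution closely related to the Veldpaus-method; it is a simulated, illustrative example, not the chain-method itself.

    ```python
    # Minimal sketch (simulated markers; SVD/Kabsch-style least-squares pose estimate,
    # closely related to the Veldpaus-method): recover a segment's rotation R and
    # translation d from noisy marker coordinates.
    import numpy as np

    rng = np.random.default_rng(3)
    local = rng.uniform(-0.1, 0.1, (6, 3))                 # marker positions in the segment frame

    angle = 0.4                                            # ground-truth pose
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    d_true = np.array([0.5, -0.2, 1.0])
    measured = local @ R_true.T + d_true + rng.normal(0, 0.002, local.shape)

    def fit_pose(local, measured):
        pl, pm = local.mean(axis=0), measured.mean(axis=0)
        H = (local - pl).T @ (measured - pm)               # cross-dispersion matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)]) # guard against reflections
        R = Vt.T @ D @ U.T
        return R, pm - R @ pl

    R_est, d_est = fit_pose(local, measured)
    print(np.max(np.abs(R_est - R_true)), np.max(np.abs(d_est - d_true)))  # errors are small
    ```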

  19. Determination of selected neurotoxic insecticides in small amounts of animal tissue utilizing a newly constructed mini-extractor.

    PubMed

    Seifertová, Marta; Čechová, Eliška; Llansola, Marta; Felipo, Vicente; Vykoukalová, Martina; Kočan, Anton

    2017-10-01

    We developed a simple analytical method for the simultaneous determination of representatives of various groups of neurotoxic insecticides (carbaryl, chlorpyrifos, cypermethrin, and α-endosulfan and β-endosulfan and their metabolite endosulfan sulfate) in limited amounts of animal tissues containing different amounts of lipids. Selected tissues (rodent fat, liver, and brain) were extracted in a special in-house-designed mini-extractor constructed on the basis of the Soxhlet and Twisselmann extractors. A dried tissue sample placed in a small cartridge was extracted, while the nascent extract was simultaneously filtered through a layer of sodium sulfate. The extraction was followed by combined clean-up, including gel permeation chromatography (in case of high lipid content), ultrasonication, and solid-phase extraction chromatography using C18 on silica and aluminum oxide. Gas chromatography coupled with high-resolution mass spectrometry was used for analyte separation, detection, and quantification. Average recoveries for individual insecticides ranged from 82 to 111%. Expanded measurement uncertainties were generally lower than 35%. The developed method was successfully applied to rat tissue samples obtained from an animal model dealing with insecticide exposure during brain development. This method may also be applied to the analytical treatment of small amounts of various types of animal and human tissue samples. A significant advantage achieved using this method is high sample throughput due to the simultaneous treatment of many samples. Graphical abstract: Optimized workflow for the determination of selected insecticides in small amounts of animal tissue, including the newly developed mini-extractor.

  20. SAM Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target chemical, radiochemical, pathogens, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation

  1. Parameter Optimization for Feature and Hit Generation in a General Unknown Screening Method-Proof of Concept Study Using a Design of Experiment Approach for a High Resolution Mass Spectrometry Procedure after Data Independent Acquisition.

    PubMed

    Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas

    2018-03-06

    High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because the untargeted acquisition is often followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. An optimization of the parameters regulating the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are evaluated (crucial or noncrucial). Second, crucial parameters are optimized. The aim of this study was to reduce the number of hits without missing analytes. The parameter settings obtained from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization led to a marked reduction of workload (12.3 to 1.1% and 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (68.2 to 86.4% and 68.8 to 88.1%, respectively). This proof of concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
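    The screening step of a DoE approach, deciding which parameters are crucial before optimizing them, can be illustrated with a generic two-level full factorial design. The parameter names and the response function below are hypothetical stand-ins, not the software's actual settings or the study's data.

    ```python
    # Minimal sketch of a two-level full factorial screening step (hypothetical
    # parameter names and response): estimate the main effect of each factor on the
    # number of generated hits to separate crucial from non-crucial parameters.
    import itertools
    import numpy as np

    factors = ["mass_tolerance", "min_intensity", "rt_window"]   # hypothetical parameters
    design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))  # coded low/high

    rng = np.random.default_rng(0)
    def n_hits(x):
        # Stand-in for the measured number of hits at coded setting x
        return 500 - 150 * x[0] - 20 * x[1] + 5 * x[2] + rng.normal(0, 10)

    y = np.array([n_hits(x) for x in design])

    # Main effect of a factor = mean response at +1 minus mean response at -1
    effects = {f: y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
               for j, f in enumerate(factors)}
    print(effects)   # a large |effect| flags a crucial parameter to carry into optimization
    ```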

  2. SAM Pathogen Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target pathogen analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select pathogens.

  3. Bioluminescent Antibodies for Point‐of‐Care Diagnostics

    PubMed Central

    Xue, Lin; Yu, Qiuliyang; Griss, Rudolf; Schena, Alberto

    2017-01-01

    We introduce a general method to transform antibodies into ratiometric, bioluminescent sensor proteins for the no-wash quantification of analytes. Our approach is based on the genetic fusion of antibody fragments to NanoLuc luciferase and SNAP-tag, the latter being labeled with a synthetic fluorescent competitor of the antigen. Binding of the antigen, here synthetic drugs, by the sensor displaces the tethered fluorescent competitor from the antibody and disrupts bioluminescent resonance energy transfer (BRET) between the luciferase and fluorophore. The semisynthetic sensors display a tunable response range (submicromolar to submillimolar) and a large dynamic range (ΔR_max > 500%), and they permit the quantification of analytes through spotting of the samples onto paper followed by analysis with a digital camera. PMID:28510347

  4. Phenomenological model to fit complex permittivity data of water from radio to optical frequencies.

    PubMed

    Shubitidze, Fridon; Osterberg, Ulf

    2007-04-01

    A general factorized form of the dielectric function together with a fractional model-based parameter estimation method is used to provide an accurate analytical formula for the complex refractive index in water for the frequency range 10^8-10^16 Hz. The analytical formula is derived using a combination of a microscopic frequency-dependent rational function for adjusting zeros and poles of the dielectric dispersion together with the macroscopic statistical Fermi-Dirac distribution to provide a description of both the real and imaginary parts of the complex permittivity for water. The Fermi-Dirac distribution allows us to model the dramatic reduction in the imaginary part of the permittivity in the visible window of the water spectrum.

  5. ICCS/ESCCA consensus guidelines to detect GPI-deficient cells in paroxysmal nocturnal hemoglobinuria (PNH) and related disorders part 4 - assay validation and quality assurance.

    PubMed

    Oldaker, Teri; Whitby, Liam; Saber, Maryam; Holden, Jeannine; Wallace, Paul K; Litwin, Virginia

    2018-01-01

    Over the past six years, a diverse group of stakeholders have put forth recommendations regarding the analytical validation of flow cytometric methods and described in detail the differences between cell-based and traditional soluble analyte assay validations. This manuscript is based on these general recommendations as well as the published experience of experts in the area of PNH testing. The goal is to provide practical assay-specific guidelines for the validation of high-sensitivity flow cytometric PNH assays. Examples of the reports and validation data described herein are provided in Supporting Information. © 2017 International Clinical Cytometry Society.

  6. Methods for computing comet core temperatures

    NASA Astrophysics Data System (ADS)

    McKay, C. P.; Squyres, S. W.; Reynolds, R. T.

    1986-06-01

    The temperature profile within the comet nucleus provides the key to an understanding of the history of the volatiles within a comet. Certain difficulties arise in connection with current cometary temperature models. It is shown that the constraint of zero net heat flow can be used to derive general analytical expressions which will allow for the determination of comet core temperature for a spherically symmetric comet, taking into account information about the surface temperature and the thermal conductivity. The obtained results are compared with the expression for comet core temperatures considered by Klinger (1981). Attention is given to analytical results, an example case, and numerical models. The formalization developed makes it possible to determine the core temperature on the basis of the numerical models of the surface temperature.

  7. Interacting steps with finite-range interactions: Analytical approximation and numerical results

    NASA Astrophysics Data System (ADS)

    Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.

    2013-05-01

    We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.

  8. Analytical Solutions, Moments, and Their Asymptotic Behaviors for the Time-Space Fractional Cable Equation

    NASA Astrophysics Data System (ADS)

    Li, Can; Deng, Wei-Hua

    2014-07-01

    Following the fractional cable equation established in the letter [B.I. Henry, T.A.M. Langlands, and S.L. Wearne, Phys. Rev. Lett. 100 (2008) 128103], we present the time-space fractional cable equation which describes the anomalous transport of electrodiffusion in nerve cells. The derivation is based on the generalized fractional Ohm's law; and the temporal memory effects and spatial-nonlocality are involved in the time-space fractional model. With the help of integral transform method we derive the analytical solutions expressed by the Green's function; the corresponding fractional moments are calculated; and their asymptotic behaviors are discussed. In addition, the explicit solutions of the considered model with two different external current injections are also presented.

  9. Spacecraft drag-free technology development: On-board estimation and control synthesis

    NASA Technical Reports Server (NTRS)

    Key, R. W.; Mettler, E.; Milman, M. H.; Schaechter, D. B.

    1982-01-01

    Estimation and control methods for a drag-free spacecraft are discussed. The functional and analytical synthesis of on-board estimators and controllers for an integrated attitude and translation control system is presented, creating the framework for detailed definition and design of the baseline drag-free system. The techniques for solving the self-gravity and electrostatic charging problems are generally applicable, as is the control system development.
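
    As a generic, hedged illustration of on-board estimation and control synthesis (not the baseline design in the report), the sketch below pairs a steady-state Kalman filter with an LQR state-feedback law for a single translational axis of a drag-free proof mass modelled as a double integrator. All weights, noise levels, and dynamics are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# One translational axis of the proof-mass/spacecraft relative motion, modelled
# as a double integrator driven by thruster acceleration u and an unmeasured
# disturbance acceleration w. All numerical values are illustrative assumptions.
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # state: [relative position, velocity]
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])               # only relative position is sensed

# LQR state feedback u = -K x_hat
Q, R = np.diag([1.0e4, 1.0e2]), np.array([[1.0]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Steady-state Kalman gain for assumed noise intensities W (process), V (sensor)
W, V = np.diag([1.0e-9, 1.0e-8]), np.array([[1.0e-10]])
S = solve_continuous_are(A.T, C.T, W, V)
Lg = S @ C.T @ np.linalg.inv(V)

# Closed-loop simulation (simple Euler integration)
dt, steps = 0.01, 20000
rng = np.random.default_rng(1)
x = np.array([1.0e-4, 0.0])              # true state: 0.1 mm initial offset
xh = np.zeros(2)                         # on-board estimate
for _ in range(steps):
    w = 1.0e-7 * rng.standard_normal()                   # disturbance acceleration
    u = -(K @ xh).item()                                 # control from the estimate
    y = (C @ x).item() + 1.0e-5 * rng.standard_normal()  # noisy position measurement
    x = x + dt * (A @ x + B[:, 0] * (u + w))
    xh = xh + dt * (A @ xh + B[:, 0] * u + Lg[:, 0] * (y - (C @ xh).item()))

print(f"final position error: {abs(x[0]):.2e} m")
```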

  10. Image Analysis Using Quantum Entropy Scale Space and Diffusion Concepts

    DTIC Science & Technology

    2009-11-01

    images using a combination of analytic methods and prototype Matlab and Mathematica programs. We investigated concepts of generalized entropy and ... Schmidt strength from quantum logic gate decomposition. This form of entropy gives a measure of the nonlocal content of an entangling logic gate ... We recall that the Schmidt number is an indicator of entanglement, but not a measure of entanglement. For instance, let us compare

  11. Analysis of Vertiport Studies Funded by the Airport Improvement Program (AIP)

    DTIC Science & Technology

    1994-05-01

    the general population and travel behavior factors from surveys and other sources. FEASIBILITY The vertiport studies recognize the need to address the ... behavior factors obtained from surveys and other sources. All of the methods were dependent upon various secondary data and/or information sources that ... economic responses and of travel behavior. The five types, in order of increasing analytical sophistication, are briefly identified as follows.

  12. Algorithm for Surface of Translation Attached Radiators (A-STAR). Volume 1: Formulation of the analysis

    NASA Astrophysics Data System (ADS)

    Medgyesimitschang, L. N.; Putnam, J. M.

    1982-05-01

    A general analytical formulation, based on the method of moments (MM), is described for solving electromagnetic problems associated with off-surface (wire) and aperture radiators on finite-length cylinders of arbitrary cross section, denoted in this report as bodies of translation (BOT). This class of bodies can be used to model structures with noncircular cross sections such as wings, fins, and aircraft fuselages.
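
    The A-STAR body-of-translation formulation itself is not reproduced here. As a minimal illustration of the method-of-moments machinery it builds on, the sketch below solves a classic textbook problem, the charge distribution on a thin straight wire held at 1 V, using pulse basis functions and point matching; the wire dimensions and segment count are arbitrary assumptions.

```python
import numpy as np

eps0 = 8.854e-12
L, a, N = 1.0, 1.0e-3, 200     # wire length (m), wire radius (m), segments (assumed)
dz = L / N
zc = (np.arange(N) + 0.5) * dz          # match points at segment centres
ze = np.arange(N + 1) * dz              # segment edges

# Z[m, n] = potential at match point m due to a unit line-charge density on
# segment n, integrated in closed form with the thin-wire kernel of radius a.
zm = zc[:, None]
Z = (np.arcsinh((ze[1:][None, :] - zm) / a)
     - np.arcsinh((ze[:-1][None, :] - zm) / a)) / (4.0 * np.pi * eps0)

V = np.ones(N)                          # wire held at 1 volt
q = np.linalg.solve(Z, V)               # line charge density on each segment
Q = np.sum(q * dz)

print(f"total charge       : {Q:.3e} C")
print(f"MoM capacitance    : {Q / 1.0:.3e} F")
print(f"thin-wire estimate : {2 * np.pi * eps0 * L / (np.log(2 * L / a) - 1):.3e} F")
print("charge density peaks at the wire ends:", q[0] > q[N // 2])
```

    The same matrix-fill-and-solve structure carries over to electrodynamic MM codes; only the kernel and the basis and testing functions change.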

  13. Counterfeit drugs: analytical techniques for their identification.

    PubMed

    Martino, R; Malet-Martino, M; Gilard, V; Balayssac, S

    2010-09-01

    In recent years, the number of counterfeit drugs has increased dramatically, including not only "lifestyle" products but also vital medicines. Besides the threat to public health, the financial and reputational damage to pharmaceutical companies is substantial. The lack of robust information on the prevalence of fake drugs is an obstacle in the fight against drug counterfeiting. It is generally accepted that approximately 10% of drugs worldwide could be counterfeit, but it is also well known that this number covers very different situations depending on the country, the places where the drugs are purchased, and the definition of what constitutes a counterfeit drug. The chemical analysis of drugs suspected to be fake is a crucial step as counterfeiters are becoming increasingly sophisticated, rendering visual inspection insufficient to distinguish the genuine products from the counterfeit ones. This article critically reviews the recent analytical methods employed to control the quality of drug formulations, using as an example artemisinin derivatives, medicines particularly targeted by counterfeiters. Indeed, a broad panel of techniques have been reported for their analysis, ranging from simple and cheap in-field ones (colorimetry and thin-layer chromatography) to more advanced laboratory methods (mass spectrometry, nuclear magnetic resonance, and vibrational spectroscopies) through chromatographic methods, which remain the most widely used. The conclusion section of the article highlights the questions to be posed before selecting the most appropriate analytical approach.

  14. Dynamic Looping of a Free-Draining Polymer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Felix X. -F.; Stinis, Panos; Qian, Hong

    Here, we revisit the celebrated Wilemski--Fixman (WF) treatment for the looping time of a free-draining polymer. The WF theory introduces a sink term into the Fokker--Planck equation for the $3(N+1)$-dimensional Ornstein--Uhlenbeck process of polymer dynamics, which accounts for the appropriate boundary condition due to the formation of a loop. The assumptions of the WF theory are considerably relaxed. A perturbation method approach is developed that justifies and generalizes the previous results using either a delta sink or a Heaviside sink. For both types of sinks, we show that under the condition of a small dimensionless $\epsilon$, the ratio of capture radius to the Kuhn length, we are able to systematically produce all known analytical and asymptotic results obtained by other methods. This includes most notably the transition regime between the $N^2$ scaling of Doi and the $N\sqrt{N}/\epsilon$ scaling of Szabo, Schulten, and Schulten. The mathematical issue at play is the nonuniform convergence of $\epsilon\to 0$ and $N\to\infty$, the latter being an inherent part of the theory of a Gaussian polymer. Our analysis yields a novel term in the analytical expression for the looping time with small $\epsilon$, which was previously unknown. Monte Carlo numerical simulations corroborate the analytical findings. The systematic method developed here can be applied to other systems modeled by multidimensional Smoluchowski equations.
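
    A minimal Brownian dynamics sketch in the spirit of the corroborating simulations (not the authors' code): an overdamped free-draining (Rouse) bead-spring chain is integrated until its end beads first come within a capture radius, and the mean first-passage time over independent runs estimates the looping time. The chain length, time step, capture radius, and reduced units (with the thermal energy, friction, and Kuhn length set to one) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

Nb, runs = 16, 20      # beads per chain and number of trajectories (kept small)
k = 3.0                # spring constant 3 k_B T / b^2 in reduced units (b = 1)
dt = 2.0e-3            # time step
eps = 0.5              # capture radius for loop closure (assumed)
noise = np.sqrt(2.0 * dt)   # noise amplitude with k_B T = zeta = 1

def first_loop_time(max_steps=2_000_000):
    # Equilibrium Gaussian initial configuration: bond variance 1/3 per component
    bonds = rng.normal(0.0, np.sqrt(1.0 / 3.0), size=(Nb - 1, 3))
    x = np.vstack([np.zeros(3), np.cumsum(bonds, axis=0)])
    for step in range(max_steps):
        if np.linalg.norm(x[-1] - x[0]) < eps:
            return step * dt
        f = np.zeros_like(x)                       # harmonic (Rouse) spring forces
        f[1:-1] = k * (x[2:] - 2.0 * x[1:-1] + x[:-2])
        f[0] = k * (x[1] - x[0])
        f[-1] = k * (x[-2] - x[-1])
        x = x + dt * f + noise * rng.standard_normal(x.shape)
    return np.nan                                  # no closure within max_steps

times = np.array([first_loop_time() for _ in range(runs)])
times = times[np.isfinite(times)]
print(f"mean looping time over {times.size} runs: {times.mean():.1f} (reduced units)")
```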

  15. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...

  16. Performance characteristics of an ion chromatographic method for the quantitation of citrate and phosphate in pharmaceutical solutions.

    PubMed

    Jenke, Dennis; Sadain, Salma; Nunez, Karen; Byrne, Frances

    2007-01-01

    The performance of an ion chromatographic method for measuring citrate and phosphate in pharmaceutical solutions is evaluated. Performance characteristics examined include accuracy, precision, specificity, response linearity, robustness, and the ability to meet system suitability criteria. In general, the method is found to be robust within reasonable deviations from its specified operating conditions. Analytical accuracy is typically 100 +/- 3%, and short-term precision is not more than 1.5% relative standard deviation. The instrument response is linear over a range of 50% to 150% of the standard preparation target concentrations (12 mg/L for phosphate and 20 mg/L for citrate), and the results obtained using a single-point standard versus a calibration curve are essentially equivalent. A small analytical bias is observed and ascribed to the relative purity of the differing salts used as raw materials in tested finished products and as reference standards in the analytical method. The assay is specific in that no phosphate or citrate peaks are observed in a variety of method-related solutions and matrix blanks (with and without autoclaving). When the eluents are prepared manually, the assay is sensitive to eluent composition in the sense that the eluent must be effectively degassed and protected from CO2 ingress during use. For the assay to perform effectively, extensive system equilibration and conditioning are required. However, a properly conditioned and equilibrated system can be used to test a number of samples via chromatographic runs that include many (> 50) injections.

  17. Analytical approach to determine vertical dynamics of a semi-trailer truck from the point of view of goods protection

    NASA Astrophysics Data System (ADS)

    Pidl, Renáta

    2018-01-01

    The overwhelming majority of intercontinental long-haul transportation of goods is carried out by road with semi-trailer trucks. Vibration has a major effect on the safety of the transport, the load, and the transported goods. This paper considers the logistics goals from the point of view of vibration and summarizes the methods available to predict or measure the vibration load in order to design a proper system. Of these methods, the focus of this paper is on computer simulation of the vibration. An analytical method is presented to calculate the vertical dynamics of a semi-trailer truck with general viscous damping exposed to harmonic base excitation. For the purpose of a better understanding, the method is presented through a simplified four degrees-of-freedom (DOF) half-vehicle model, which neglects the stiffness and damping of the tires; the four degrees of freedom are thus the vertical and angular displacements of the truck and the trailer. From the vertical and angular accelerations of the trailer, the vertical acceleration of each point of the trailer platform can easily be determined, from which the forces acting on the transported goods follow. As a result, the response of the full platform-load-packaging system to any vehicle, any load, and any road condition can be analyzed, and the peak acceleration of any point on the platform can be determined by the presented analytical method.
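
    A minimal sketch of the core computation, the steady-state response of a linear M, C, K system to harmonic base excitation, is given below. To keep it self-contained it uses a generic two-mass chain (a suspended truck mass coupled to a trailer/platform mass over a moving base) rather than the paper's specific four-DOF half-vehicle model; every parameter value is an illustrative assumption.

```python
import numpy as np

# Assumed 2-DOF chain: moving base (road input) -> suspended mass m1 (truck)
# -> coupled mass m2 (trailer/load platform). Values are illustrative only.
m1, m2 = 6000.0, 12000.0          # kg
k1, k2 = 8.0e5, 4.0e5             # N/m
c1, c2 = 3.0e4, 2.0e4             # N s/m  (general viscous damping)

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])
C = np.array([[c1 + c2, -c2],
              [-c2,      c2]])

Y = 0.01                                         # 10 mm base displacement amplitude
freqs = np.linspace(0.2, 20.0, 500)              # excitation frequencies in Hz
acc2 = []                                        # acceleration amplitude of m2
for f in freqs:
    w = 2.0 * np.pi * f
    F = np.array([(k1 + 1j * w * c1) * Y, 0.0])  # base excitation enters via k1, c1
    X = np.linalg.solve(-w**2 * M + 1j * w * C + K, F)
    acc2.append(abs(-w**2 * X[1]))               # steady-state acceleration of m2

acc2 = np.array(acc2)
print(f"peak platform acceleration {acc2.max():.2f} m/s^2 "
      f"at {freqs[np.argmax(acc2)]:.2f} Hz")
```

    Once the mass, damping, and stiffness matrices of the actual four-DOF half-vehicle model are assembled, the same frequency sweep yields the vertical acceleration of any point on the platform and hence the loads on the transported goods.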

  18. SAM Biotoxin Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select biotoxins.

  19. SAM Chemical Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target chemical, radiochemical, pathogen, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery.

  20. Control theory based airfoil design using the Euler equations

    NASA Technical Reports Server (NTRS)

    Jameson, Antony; Reuther, James

    1994-01-01

    This paper describes the implementation of optimization techniques based on control theory for airfoil design. In our previous work it was shown that control theory could be employed to devise effective optimization procedures for two-dimensional profiles by using the potential flow equation with either a conformal mapping or a general coordinate system. The goal of our present work is to extend the development to treat the Euler equations in two dimensions by procedures that can readily be generalized to treat complex shapes in three dimensions. Therefore, we have developed methods which can address airfoil design through either an analytic mapping or an arbitrary grid perturbation method applied to a finite volume discretization of the Euler equations. Here the control law serves to provide computationally inexpensive gradient information to a standard numerical optimization method. Results are presented for both the inverse problem and the drag minimization problem.
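
    As a hedged toy illustration of the central idea, that a single adjoint solve supplies the complete design gradient at roughly the cost of one extra flow solution, the sketch below replaces the Euler equations with a small linear "flow" system A(alpha) u = b whose matrix depends on the design variables alpha, and checks the adjoint gradient of a quadratic cost against finite differences. The toy problem is entirely an assumption and is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 30, 4                       # state size and number of design variables

# Toy "flow solver": A(alpha) u = b, with A depending linearly on the design.
A0 = np.eye(n) * 4.0 + rng.normal(0.0, 0.1, (n, n))
D = [rng.normal(0.0, 0.2, (n, n)) for _ in range(p)]    # dA/dalpha_j
b = rng.normal(0.0, 1.0, n)
u_target = rng.normal(0.0, 0.5, n)

def solve_state(alpha):
    A = A0 + sum(a * Dj for a, Dj in zip(alpha, D))
    return A, np.linalg.solve(A, b)

def cost(alpha):
    _, u = solve_state(alpha)
    return 0.5 * np.sum((u - u_target) ** 2)

def adjoint_gradient(alpha):
    A, u = solve_state(alpha)
    lam = np.linalg.solve(A.T, u - u_target)            # one adjoint solve
    return np.array([-lam @ (Dj @ u) for Dj in D])      # dJ/dalpha_j = -lam^T (dA/dalpha_j) u

alpha = np.zeros(p)
g_adj = adjoint_gradient(alpha)

# Finite-difference check (needs extra state solves for every design variable)
h = 1.0e-6
g_fd = np.array([(cost(alpha + h * e) - cost(alpha - h * e)) / (2.0 * h)
                 for e in np.eye(p)])
print("adjoint gradient :", np.round(g_adj, 6))
print("finite difference:", np.round(g_fd, 6))
```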
