Sample records for series analytic method

  1. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

This paper proposes a classification method based on Bayesian analysis to classify time series data from the international emissions trading market, generated by agent-based simulation, and compares it with a classification method based on the discrete Fourier transform. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods revealed the following results: (1) the classification methods express the distance between mappings of the time series data, which is easier to understand and reason about than the raw time series; (2) the methods can analyze uncertain time series data, including stationary and non-stationary processes, using distances obtained via agent-based simulation; and (3) the Bayesian analytical method can resolve a 1% difference in the agents' emission reduction targets.
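
The record's core idea, mapping time series into a frequency-domain feature space and comparing distances there, can be sketched in a few lines. The function names and toy signals below are illustrative, not from the paper:

```python
import numpy as np

def dft_features(series, n_coeffs=8):
    """Magnitudes of the first n_coeffs DFT coefficients (DC excluded)."""
    spectrum = np.abs(np.fft.rfft(series - np.mean(series)))
    return spectrum[1:n_coeffs + 1]

def series_distance(a, b, n_coeffs=8):
    """Euclidean distance between two series in DFT-feature space."""
    return float(np.linalg.norm(dft_features(a, n_coeffs) - dft_features(b, n_coeffs)))

t = np.linspace(0, 1, 256, endpoint=False)
stationary = np.sin(2 * np.pi * 5 * t)          # periodic price-like path
trending = np.sin(2 * np.pi * 5 * t) + 3 * t    # same cycle plus a drift

# A series maps onto itself at distance zero; the drifting path maps farther away.
d_self = series_distance(stationary, stationary)
d_trend = series_distance(stationary, trending)
```

In this feature space, classification reduces to comparing distances between mapped points rather than raw sample paths.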

  2. Method of multiplexed analysis using ion mobility spectrometer

    DOEpatents

Belov, Mikhail E. [Richland, WA]; Smith, Richard D. [Richland, WA]

    2009-06-02

A method for analyzing analytes from a sample introduced into a spectrometer by generating a pseudo-random sequence of modulation bins, organizing each modulation bin as a series of submodulation bins, thereby forming an extended pseudo-random sequence of submodulation bins, releasing the analytes in a series of analyte packets into the spectrometer, thereby generating an unknown original ion signal vector, detecting the analytes at a detector, and characterizing the sample using the plurality of analyte signal subvectors. The method is advantageously applied to an Ion Mobility Spectrometer, and to an Ion Mobility Spectrometer interfaced with a Time of Flight Mass Spectrometer.

  3. Kapteyn series arising in radiation problems

    NASA Astrophysics Data System (ADS)

    Lerche, I.; Tautz, R. C.

    2008-01-01

    In discussing radiation from multiple point charges or magnetic dipoles, moving in circles or ellipses, a variety of Kapteyn series of the second kind arises. Some of the series have been known in closed form for a hundred years or more, others appear not to be available to analytic persuasion. This paper shows how 12 such generic series can be developed to produce either closed analytic expressions or integrals that are not analytically tractable. In addition, the method presented here may be of benefit when one has other Kapteyn series of the second kind to consider, thereby providing an additional reason to consider such series anew.

  4. A general statistical test for correlations in a finite-length time series.

    PubMed

    Hanson, Jeffery A; Yang, Haw

    2008-06-07

The statistical properties of the autocorrelation function from a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It was found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform across all time lags. Based on these analytical results, a statistically robust method is proposed to test for the existence of correlations in a time series. The statistical test is verified by computer simulations, and an application to single-molecule fluorescence spectroscopy is discussed.
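
The two estimators the record compares, direct moving-average sums versus the Fourier-transform (Wiener-Khinchin) route, can be sketched as follows; this is a generic illustration, not the authors' code:

```python
import numpy as np

def acf_direct(x, max_lag):
    """Biased autocorrelation estimate by direct (moving-average) sums,
    normalized to 1 at lag zero."""
    x = x - x.mean()
    n = len(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:n - k], x[k:]) / denom for k in range(max_lag + 1)])

def acf_fft(x):
    """Circular autocorrelation via the Fourier transform (Wiener-Khinchin):
    inverse transform of the power spectrum, normalized at lag zero."""
    x = x - x.mean()
    power = np.abs(np.fft.fft(x)) ** 2
    r = np.fft.ifft(power).real
    return r / r[0]

rng = np.random.default_rng(0)
x = rng.normal(size=4096)        # i.i.d. noise: true correlation is zero at all lags > 0
r_ma = acf_direct(x, 10)
r_ft = acf_fft(x)
```

For i.i.d. data both estimates fluctuate around zero at nonzero lags with standard deviation of order 1/sqrt(n), which is the scale the paper's variance expressions quantify.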

  5. Analytical methods for solving boundary value heat conduction problems with heterogeneous boundary conditions on lines. I - Review

    NASA Astrophysics Data System (ADS)

    Kartashov, E. M.

    1986-10-01

    Analytical methods for solving boundary value problems for the heat conduction equation with heterogeneous boundary conditions on lines, on a plane, and in space are briefly reviewed. In particular, the method of dual integral equations and summator series is examined with reference to stationary processes. A table of principal solutions to dual integral equations and pair summator series is proposed which presents the known results in a systematic manner. Newly obtained results are presented in addition to the known ones.

  6. Solving the Helmholtz equation in conformal mapped ARROW structures using homotopy perturbation method.

    PubMed

    Reck, Kasper; Thomsen, Erik V; Hansen, Ole

    2011-01-31

    The scalar wave equation, or Helmholtz equation, describes within a certain approximation the electromagnetic field distribution in a given system. In this paper we show how to solve the Helmholtz equation in complex geometries using conformal mapping and the homotopy perturbation method. The solution of the mapped Helmholtz equation is found by solving an infinite series of Poisson equations using two dimensional Fourier series. The solution is entirely based on analytical expressions and is not mesh dependent. The analytical results are compared to a numerical (finite element method) solution.
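
The building block of this approach, solving a Poisson equation by a two-dimensional Fourier sine series, can be illustrated on the unit square. This sketch omits the conformal mapping and homotopy steps and uses homogeneous Dirichlet conditions of our choosing:

```python
import numpy as np

def poisson_square(x, y, f=1.0, n_terms=49):
    """Series solution of -laplacian(u) = f (constant) on the unit square with
    u = 0 on the boundary, via a double Fourier sine expansion."""
    u = 0.0
    for m in range(1, n_terms + 1, 2):          # even-index coefficients vanish for constant f
        for n in range(1, n_terms + 1, 2):
            b = 16 * f / (np.pi ** 2 * m * n)               # sine coefficients of f
            lam = (m * np.pi) ** 2 + (n * np.pi) ** 2       # Laplacian eigenvalue
            u += b / lam * np.sin(m * np.pi * x) * np.sin(n * np.pi * y)
    return u

u_center = poisson_square(0.5, 0.5)   # known to converge to about 0.0737
```

Each Poisson problem in the paper's infinite series would be solved with exactly this kind of eigenfunction expansion, so the overall solution stays fully analytical and mesh-free.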

  7. Analytical solutions for systems of partial differential-algebraic equations.

    PubMed

    Benhammouda, Brahim; Vazquez-Leal, Hector

    2014-01-01

This work presents the application of the power series method (PSM) to find solutions of partial differential-algebraic equations (PDAEs). Two systems, of index one and index three, are solved to show that PSM can provide analytical solutions of PDAEs in convergent series form. Moreover, we present the post-treatment of the power series solutions with the Laplace-Padé (LP) resummation method as a useful strategy to find exact solutions. The main advantage of the proposed methodology is that the procedure is based on a few straightforward steps and does not generate secular terms or depend on a perturbation parameter.
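
The Laplace-Padé idea, resumming truncated power-series coefficients into a rational function, can be shown on a toy problem of our choosing (not one of the paper's PDAE systems): the power series method applied to y' = y^2, y(0) = 1 yields 1 + t + t^2 + ..., whose [1/1] Padé approximant recovers the exact solution 1/(1 - t):

```python
import numpy as np

def pade(coeffs, L, M):
    """[L/M] Padé approximant from Taylor coefficients c0..c_{L+M}.
    Assumes L >= M - 1 so all indices stay non-negative; b0 is fixed to 1."""
    c = np.asarray(coeffs, dtype=float)
    # Denominator b1..bM from the standard linear system.
    A = np.array([[c[L + i - j] for j in range(1, M + 1)] for i in range(1, M + 1)])
    rhs = -c[L + 1:L + M + 1]
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator by Cauchy products of c with the denominator.
    a = np.array([sum(c[k - j] * b[j] for j in range(min(k, M) + 1)) for k in range(L + 1)])
    return a, b

series = [1.0, 1.0, 1.0, 1.0]        # PSM coefficients for y' = y^2, y(0) = 1
a, b = pade(series, 1, 1)            # [1/1] approximant
t = 0.5
value = np.polyval(a[::-1], t) / np.polyval(b[::-1], t)   # exact solution gives 2.0
```

The truncated series diverges as t approaches 1, while the resummed rational form reproduces the pole of the exact solution, which is the point of the LP post-treatment.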

  8. 40 CFR 63.145 - Process wastewater provisions-test methods and procedures to determine compliance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 9 2011-07-01 2011-07-01 false Process wastewater provisions-test... Operations, and Wastewater § 63.145 Process wastewater provisions—test methods and procedures to determine... analytical method for wastewater which has that compound as a target analyte. (7) Treatment using a series of...

  9. 40 CFR 63.145 - Process wastewater provisions-test methods and procedures to determine compliance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 10 2013-07-01 2013-07-01 false Process wastewater provisions-test... Operations, and Wastewater § 63.145 Process wastewater provisions—test methods and procedures to determine... analytical method for wastewater which has that compound as a target analyte. (7) Treatment using a series of...

  10. Power Series Solution to the Pendulum Equation

    ERIC Educational Resources Information Center

    Benacka, Jan

    2009-01-01

This note gives a power series solution to the pendulum equation that enables one to investigate the system in a purely analytical way, i.e. to avoid numerical methods. A method of determining the number of terms needed to achieve a required relative error is presented, using larger and smaller bounding geometric series. The solution is suitable for modelling the…

  11. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, because, in order to simplify the expressions and subsequent computations, not all of the involved forces are taken into account and only low-order terms are considered, not to mention that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, formed by combining three different orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to capture the effect produced by the flattening of the Earth. The three analytical components considered are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
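
The prediction component named in the record, an additive Holt-Winters method, can be sketched generically. The minimal smoother below is our illustration, not the authors' implementation:

```python
import numpy as np

def holt_winters_additive(y, period, alpha=0.3, beta=0.1, gamma=0.2):
    """Additive Holt-Winters (level + trend + seasonal) one-step-ahead forecast."""
    y = np.asarray(y, dtype=float)
    level = y[:period].mean()                                   # initial level
    trend = (y[period:2 * period].mean() - y[:period].mean()) / period
    season = list(y[:period] - level)                           # initial seasonal terms
    for t in range(period, len(y)):
        s = season[t - period]
        new_level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season.append(gamma * (y[t] - new_level) + (1 - gamma) * s)
        level = new_level
    return level + trend + season[len(y) - period]

# Purely seasonal toy signal: the smoother should track the next value closely.
t = np.arange(60)
y = 10 * np.sin(2 * np.pi * (t + 3) / 12)
forecast = holt_winters_additive(y, period=12)   # next value is 10*sin(pi/2) = 10
```

In the hybrid scheme, such a smoother would be fitted to the residual time series between precise reference orbits and the analytical approximation, then added back to the propagated state.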

  12. Analytical Approach to (2+1)-Dimensional Boussinesq Equation and (3+1)-Dimensional Kadomtsev-Petviashvili Equation

    NASA Astrophysics Data System (ADS)

    Sarıaydın, Selin; Yıldırım, Ahmet

    2010-05-01

In this paper, we studied the solitary wave solutions of the (2+1)-dimensional Boussinesq equation u_tt - u_xx - u_yy - (u^2)_xx - u_xxxx = 0 and the (3+1)-dimensional Kadomtsev-Petviashvili (KP) equation u_xt - 6u_x^2 + 6u u_xx - u_xxxx - u_yy - u_zz = 0. Using the homotopy perturbation method, an explicit solution is calculated in the form of a convergent power series with easily computable components. To illustrate the application of this method, numerical results are derived using the calculated components of the homotopy perturbation series. The numerical solutions are compared with the known analytical solutions. Results derived from our method are shown graphically.

  13. A semi-analytical method of computation of oceanic tidal perturbations in the motion of artificial satellites

    NASA Technical Reports Server (NTRS)

    Musen, P.

    1973-01-01

The method of expanding the satellite's perturbations caused by the oceanic tides into Fourier series is discussed. The coefficients of the expansion are purely numerical and peculiar to each particular satellite. Such a method is termed semi-analytical in celestial mechanics. The Gaussian form of the differential equations for the variation of elements, with the right-hand sides averaged over the orbit of the satellite, is convenient to use with the semi-analytical expansion.

  14. Techniques for Forecasting Air Passenger Traffic

    NASA Technical Reports Server (NTRS)

    Taneja, N.

    1972-01-01

    The basic techniques of forecasting the air passenger traffic are outlined. These techniques can be broadly classified into four categories: judgmental, time-series analysis, market analysis and analytical. The differences between these methods exist, in part, due to the degree of formalization of the forecasting procedure. Emphasis is placed on describing the analytical method.

  15. Analytical solution for the transient wave propagation of a buried cylindrical P-wave line source in a semi-infinite elastic medium with a fluid surface layer

    NASA Astrophysics Data System (ADS)

    Shan, Zhendong; Ling, Daosheng

    2018-02-01

    This article develops an analytical solution for the transient wave propagation of a cylindrical P-wave line source in a semi-infinite elastic solid with a fluid layer. The analytical solution is presented in a simple closed form in which each term represents a transient physical wave. The Scholte equation is derived, through which the Scholte wave velocity can be determined. The Scholte wave is the wave that propagates along the interface between the fluid and solid. To develop the analytical solution, the wave fields in the fluid and solid are defined, their analytical solutions in the Laplace domain are derived using the boundary and interface conditions, and the solutions are then decomposed into series form according to the power series expansion method. Each item of the series solution has a clear physical meaning and represents a transient wave path. Finally, by applying Cagniard's method and the convolution theorem, the analytical solutions are transformed into the time domain. Numerical examples are provided to illustrate some interesting features in the fluid layer, the interface and the semi-infinite solid. When the P-wave velocity in the fluid is higher than that in the solid, two head waves in the solid, one head wave in the fluid and a Scholte wave at the interface are observed for the cylindrical P-wave line source.

  16. An analytical method for the inverse Cauchy problem of the Lamé equation in a rectangle

    NASA Astrophysics Data System (ADS)

    Grigor’ev, Yu

    2018-04-01

In this paper, we present an analytical computational method for the inverse Cauchy problem of the Lamé equation in elasticity theory. A rectangular domain is frequently used in engineering structures, and we only consider the analytical solution in a two-dimensional rectangle, wherein a missing boundary condition is recovered from the full measurement of stresses and displacements on an accessible boundary. The essence of the method consists in solving three independent Cauchy problems for the Laplace and Poisson equations. For each of them, the Fourier series is used to formulate a first-kind Fredholm integral equation for the unknown function. Then, we use a Lavrentiev regularization method, and the termwise separable property of the kernel function allows us to obtain a closed-form regularized solution. As a result, for the displacement components, we obtain solutions in the form of a sum of series with three regularization parameters. The uniform convergence and error estimation of the regularized solutions are proved.

  17. Applications of computer algebra to distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Storch, Joel A.

    1993-01-01

    In the analysis of vibrations of continuous elastic systems, one often encounters complicated transcendental equations with roots directly related to the system's natural frequencies. Typically, these equations contain system parameters whose values must be specified before a numerical solution can be obtained. The present paper presents a method whereby the fundamental frequency can be obtained in analytical form to any desired degree of accuracy. The method is based upon truncation of rapidly converging series involving inverse powers of the system natural frequencies. A straightforward method to developing these series and summing them in closed form is presented. It is demonstrated how Computer Algebra can be exploited to perform the intricate analytical procedures which otherwise would render the technique difficult to apply in practice. We illustrate the method by developing two analytical approximations to the fundamental frequency of a vibrating cantilever carrying a rigid tip body. The results are compared to the numerical solution of the exact (transcendental) frequency equation over a range of system parameters.

  18. Pedagogical Implications in the Thermal Analysis of Uniform Annular Fins: Alternative Analytic Solutions by Series.

    ERIC Educational Resources Information Center

    Campo, Antonio; Rodriguez, Franklin

    1998-01-01

    Presents two alternative computational procedures for solving the modified Bessel equation of zero order: the Frobenius method, and the power series method coupled with a curve fit. Students in heat transfer courses can benefit from these alternative procedures; a course on ordinary differential equations is the only mathematical background that…

  19. Application of differential transformation method for solving dengue transmission mathematical model

    NASA Astrophysics Data System (ADS)

    Ndii, Meksianis Z.; Anggriani, Nursanti; Supriatna, Asep K.

    2018-03-01

    The differential transformation method (DTM) is a semi-analytical numerical technique which depends on Taylor series and has application in many areas including Biomathematics. The aim of this paper is to employ the differential transformation method (DTM) to solve system of non-linear differential equations for dengue transmission mathematical model. Analytical and numerical solutions are determined and the results are compared to that of Runge-Kutta method. We found a good agreement between DTM and Runge-Kutta method.
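
The DTM recurrence can be illustrated on a toy equation rather than the dengue model: for y' = y with y(0) = 1, the differential transform Y(k) = y^(k)(0)/k! satisfies (k + 1) Y(k + 1) = Y(k), so the series reconstructs the exponential:

```python
def dtm_exponential(t, n_terms=15):
    """Solve y' = y, y(0) = 1 by the differential transformation method:
    build Y(k) from the transformed recurrence, then sum the Taylor series."""
    Y = [1.0]                          # Y(0) = y(0)
    for k in range(n_terms - 1):
        Y.append(Y[k] / (k + 1))       # (k+1) Y(k+1) = Y(k)
    return sum(Yk * t ** k for k, Yk in enumerate(Y))

approx = dtm_exponential(1.0)          # approaches e = 2.71828...
```

For a system like the dengue model, each equation contributes one such recurrence (with products handled by discrete convolution of the transforms), and the state is recovered by summing the resulting Taylor series, which is what the paper checks against Runge-Kutta.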

  20. Kinetic Titration Series with Biolayer Interferometry

    PubMed Central

    Frenzel, Daniel; Willbold, Dieter

    2014-01-01

    Biolayer interferometry is a method to analyze protein interactions in real-time. In this study, we illustrate the usefulness to quantitatively analyze high affinity protein ligand interactions employing a kinetic titration series for characterizing the interactions between two pairs of interaction patterns, in particular immunoglobulin G and protein G B1 as well as scFv IC16 and amyloid beta (1–42). Kinetic titration series are commonly used in surface plasmon resonance and involve sequential injections of analyte over a desired concentration range on a single ligand coated sensor chip without waiting for complete dissociation between the injections. We show that applying this method to biolayer interferometry is straightforward and i) circumvents problems in data evaluation caused by unavoidable sensor differences, ii) saves resources and iii) increases throughput if screening a multitude of different analyte/ligand combinations. PMID:25229647

  2. Horizontal lifelines - review of regulations and simple design method considering anchorage rigidity.

    PubMed

    Galy, Bertrand; Lan, André

    2018-03-01

Among the many occupational risks construction workers encounter every day, falling from a height is the most dangerous. The objective of this article is to propose a simple analytical design method for horizontal lifelines (HLLs) that considers anchorage flexibility. The article presents a short review of the standards and regulations/acts/codes concerning HLLs in Canada, the USA and Europe. A static analytical approach is proposed that accounts for anchorage flexibility. The analytical results are compared with a series of 42 dynamic fall tests and a SAP2000 numerical model. The experimental results show that the analytical method is slightly conservative and overestimates the line tension in most cases, by a maximum of 17%. The static SAP2000 results show a maximum 2.1% difference from the analytical method. The analytical method is accurate enough to safely design HLLs, and quick design abaci are provided to allow the engineer to make quick on-site verifications if needed.

  3. Approximate analytical solutions in the analysis of elastic structures of complex geometry

    NASA Astrophysics Data System (ADS)

    Goloskokov, Dmitriy P.; Matrosov, Alexander V.

    2018-05-01

A method of analytical decomposition for the analysis of plane structures of complex configuration is presented. For each part of the structure, in the form of a rectangle, all the components of the stress-strain state are constructed by the superposition method. The method is based on two solutions derived in the form of trigonometric series with unknown coefficients using the method of initial functions. The coefficients are determined from the system of linear algebraic equations obtained by satisfying the boundary conditions and the conditions for joining the structure parts. The components of the stress-strain state of a bent plate with holes are calculated using the analytical decomposition method.

  4. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  5. On improvement of the series convergence in the problem of the vibrations of an orthotropic rectangular prism

    NASA Astrophysics Data System (ADS)

    Lyashko, A. D.

    2017-11-01

A new analytical presentation of the solution for steady-state oscillations of an orthotropic rectangular prism is found. The corresponding infinite system of linear algebraic equations has been deduced by the superposition method. A countable set of exact eigenfrequencies and elementary eigenforms is found. Identities are found which make it possible to improve the convergence of all the infinite series in the solution of the problem. All the infinite series in the presentation of the solution are summed analytically. Numerical calculations of stresses in a rectangular orthotropic prism under a load, uniform along the boundary and harmonic in time, applied on two opposite faces have been performed.

  6. Analytical concepts for health management systems of liquid rocket engines

    NASA Technical Reports Server (NTRS)

    Williams, Richard; Tulpule, Sharayu; Hawman, Michael

    1990-01-01

    Substantial improvement in health management systems performance can be realized by implementing advanced analytical methods of processing existing liquid rocket engine sensor data. In this paper, such techniques ranging from time series analysis to multisensor pattern recognition to expert systems to fault isolation models are examined and contrasted. The performance of several of these methods is evaluated using data from test firings of the Space Shuttle main engines.

  7. Visual analytics techniques for large multi-attribute time series data

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Dayal, Umeshwar; Keim, Daniel A.

    2008-01-01

Time series data commonly occur when variables are monitored over time. Many real-world applications involve the comparison of long time series across multiple variables (multi-attributes). Often business people want to compare this year's monthly sales with last year's sales to make decisions. Data warehouse administrators (DBAs) want to know their daily data loading job performance, and need to detect outliers early enough to act upon them. In this paper, two new visual analytic techniques are introduced: the color cell-based Visual Time Series Line Charts and Maps highlight significant changes over time in long time series data, and the new Visual Content Query facilitates finding the contents and histories of interesting patterns and anomalies, which leads to root-cause identification. We have applied both methods to two real-world applications, mining enterprise data warehouse and customer credit card fraud data, to illustrate the wide applicability and usefulness of these techniques.

  8. Double power series method for approximating cosmological perturbations

    NASA Astrophysics Data System (ADS)

    Wren, Andrew J.; Malik, Karim A.

    2017-04-01

    We introduce a double power series method for finding approximate analytical solutions for systems of differential equations commonly found in cosmological perturbation theory. The method was set out, in a noncosmological context, by Feshchenko, Shkil' and Nikolenko (FSN) in 1966, and is applicable to cases where perturbations are on subhorizon scales. The FSN method is essentially an extension of the well known Wentzel-Kramers-Brillouin (WKB) method for finding approximate analytical solutions for ordinary differential equations. The FSN method we use is applicable well beyond perturbation theory to solve systems of ordinary differential equations, linear in the derivatives, that also depend on a small parameter, which here we take to be related to the inverse wave-number. We use the FSN method to find new approximate oscillating solutions in linear order cosmological perturbation theory for a flat radiation-matter universe. Together with this model's well-known growing and decaying Mészáros solutions, these oscillating modes provide a complete set of subhorizon approximations for the metric potential, radiation and matter perturbations. Comparison with numerical solutions of the perturbation equations shows that our approximations can be made accurate to within a typical error of 1%, or better. We also set out a heuristic method for error estimation. A Mathematica notebook which implements the double power series method is made available online.

  9. Accurate expressions for solar cell fill factors including series and shunt resistances

    NASA Astrophysics Data System (ADS)

    Green, Martin A.

    2016-02-01

    Together with open-circuit voltage and short-circuit current, fill factor is a key solar cell parameter. In their classic paper on limiting efficiency, Shockley and Queisser first investigated this factor's analytical properties showing, for ideal cells, it could be expressed implicitly in terms of the maximum power point voltage. Subsequently, fill factors usually have been calculated iteratively from such implicit expressions or from analytical approximations. In the absence of detrimental series and shunt resistances, analytical fill factor expressions have recently been published in terms of the Lambert W function available in most mathematical computing software. Using a recently identified perturbative relationship, exact expressions in terms of this function are derived in technically interesting cases when both series and shunt resistances are present but have limited impact, allowing a better understanding of their effect individually and in combination. Approximate expressions for arbitrary shunt and series resistances are then deduced, which are significantly more accurate than any previously published. A method based on the insights developed is also reported for deducing one-diode fits to experimental data.
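
For orientation, Green's earlier empirical fill-factor expression for an ideal cell without parasitic resistances (not the new Lambert-W results of this record) is easy to evaluate:

```python
import math

def fill_factor_ideal(voc_volts, n=1.0, temp_k=300.0):
    """Green's empirical ideal fill factor:
    FF0 = (voc - ln(voc + 0.72)) / (voc + 1), with voc = q*Voc / (n*k*T)."""
    kT_q = 1.380649e-23 * temp_k / 1.602176634e-19   # thermal voltage in volts
    voc = voc_volts / (n * kT_q)                      # normalized open-circuit voltage
    return (voc - math.log(voc + 0.72)) / (voc + 1)

ff = fill_factor_ideal(0.70)   # a typical silicon-cell open-circuit voltage
```

Series and shunt resistances then degrade this ideal value, which is the regime the record treats exactly via the Lambert W function and perturbatively for small parasitics.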

  10. Sampling and analysis for radon-222 dissolved in ground water and surface water

    USGS Publications Warehouse

    DeWayne, Cecil L.; Gesell, T.F.

    1992-01-01

    Radon-222 is a naturally occurring radioactive gas in the uranium-238 decay series that has traditionally been called, simply, radon. The lung cancer risks associated with the inhalation of radon decay products have been well documented by epidemiological studies on populations of uranium miners. The realization that radon is a public health hazard has raised the need for sampling and analytical guidelines for field personnel. Several sampling and analytical methods are being used to document radon concentrations in ground water and surface water worldwide but no convenient, single set of guidelines is available. Three different sampling and analytical methods - bubbler, liquid scintillation, and field screening - are discussed in this paper. The bubbler and liquid scintillation methods have high accuracy and precision, and small analytical method detection limits of 0.2 and 10 pCi/l (picocuries per liter), respectively. The field screening method generally is used as a qualitative reconnaissance tool.

  11. Quantitative evaluation of cross correlation between two finite-length time series with applications to single-molecule FRET.

    PubMed

    Hanson, Jeffery A; Yang, Haw

    2008-11-06

The statistical properties of the cross correlation between two time series have been studied. An analytical expression for the cross correlation function's variance has been derived. On the basis of these results, a statistically robust method has been proposed to detect the existence and determine the direction of cross correlation between two time series. The proposed method has been characterized by computer simulations. Applications to single-molecule fluorescence spectroscopy are discussed. The results may also find immediate applications in fluorescence correlation spectroscopy (FCS) and its variants.
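
A minimal sketch of detecting the existence and direction of cross correlation between two finite series, generic rather than the authors' variance-based test:

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Normalized cross correlation r(k) = <x[t] y[t+k]> for lags 0..max_lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    return np.array([np.dot(x[:n - k], y[k:]) / (n - k) for k in range(max_lag + 1)])

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = np.roll(x, 3) + 0.1 * rng.normal(size=2000)   # y is x delayed by 3 samples, plus noise
r = cross_correlation(x, y, max_lag=5)            # should peak at lag 3
```

The lag of the peak gives the direction (which signal leads); the paper's contribution is quantifying how large a finite-length peak must be before it is statistically significant.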

  12. Analytic few-photon scattering in waveguide QED

    NASA Astrophysics Data System (ADS)

    Hurst, David L.; Kok, Pieter

    2018-04-01

We develop an approach to light-matter coupling in waveguide QED based upon scattering amplitudes evaluated via Dyson series. For optical states containing more than single photons, terms in this series become increasingly complex, and we provide a diagrammatic recipe for their evaluation, which is capable of yielding analytic results. Our method fully specifies a combined emitter-optical state that permits investigation of light-matter entanglement generation protocols. We use our expressions to study two-photon scattering from a Λ-system and find that the pole structure of the transition amplitude is dramatically altered as the two ground states are tuned from degeneracy.

  13. Solutions for the diurnally forced advection-diffusion equation to estimate bulk fluid velocity and diffusivity in streambeds from temperature time series

    Treesearch

    Charles H. Luce; Daniele Tonina; Frank Gariglio; Ralph Applebee

    2013-01-01

    Work over the last decade has documented methods for estimating fluxes between streams and streambeds from time series of temperature at two depths in the streambed. We present substantial extension to the existing theory and practice of using temperature time series to estimate streambed water fluxes and thermal properties, including (1) a new explicit analytical...

  14. A new method for the determination of short-chain fatty acids from the aliphatic series in wines by headspace solid-phase microextraction-gas chromatography-ion trap mass spectrometry.

    PubMed

    Olivero, Sergio J Pérez; Trujillo, Juan P Pérez

    2011-06-24

A new analytical method for the determination of nine short-chain fatty acids (acetic, propionic, isobutyric, butyric, isovaleric, 2-methylbutyric, hexanoic, octanoic and decanoic acids) in wines using the automated HS/SPME-GC-ITMS technique was developed and optimised. Five different SPME fibers were tested, and the influence of factors such as extraction temperature and time, desorption temperature and time, pH, ionic strength, tannins, anthocyanins, SO(2), sugar and ethanol content was studied and optimised using model solutions. Some analytes showed a matrix effect, so a recovery study was performed. The proposed HS/SPME-GC-ITMS method, which covers the concentration range of the different analytes in wines, showed wide linear ranges, repeatability and reproducibility values lower than 4.0% RSD, and detection limits between 3 and 257 μgL(-1), lower than the olfactory thresholds. The optimised method is a suitable technique for the quantitative analysis of short-chain fatty acids from the aliphatic series in real samples of white, rosé and red wines. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Fourier series expansion for nonlinear Hamiltonian oscillators.

    PubMed

    Méndez, Vicenç; Sans, Cristina; Campos, Daniel; Llopis, Isaac

    2010-06-01

The problem of nonlinear Hamiltonian oscillators is one of the classical questions in physics. When an analytic solution is not possible, one can resort to obtaining a numerical solution or using perturbation theory around the linear problem. We apply the Fourier series expansion to find approximate solutions for the oscillator position as a function of time, as well as the period-amplitude relationship. We compare our results with other recent approaches, such as variational methods or heuristic approximations, in particular Ren-He's method. Based on its application to the Duffing oscillator, the nonlinear pendulum and the eardrum equation, it is shown that the Fourier series expansion method is the most accurate.

  16. Target analyte quantification by isotope dilution LC-MS/MS directly referring to internal standard concentrations--validation for serum cortisol measurement.

    PubMed

    Maier, Barbara; Vogeser, Michael

    2013-04-01

Isotope dilution LC-MS/MS methods used in the clinical laboratory typically involve multi-point external calibration in each analytical series. Our aim was to test the hypothesis that determination of target analyte concentrations directly derived from the relation of the target analyte peak area to the peak area of a corresponding stable-isotope-labelled internal standard compound [direct isotope dilution analysis (DIDA)] may not be inferior to conventional external calibration with respect to accuracy and reproducibility. Quality control samples and human serum pools were analysed in a comparative validation protocol for cortisol as an exemplary analyte by LC-MS/MS. Accuracy and reproducibility were compared between quantification either involving a six-point external calibration function, or a result calculation merely based on peak area ratios of unlabelled and labelled analyte. Both quantification approaches resulted in similar accuracy and reproducibility. For specified analytes, reliable analyte quantification directly derived from the ratio of peak areas of labelled and unlabelled analyte, without the need for a time-consuming multi-point calibration series, is possible. This DIDA approach is of considerable practical importance for the application of LC-MS/MS in the clinical laboratory, where short turnaround times often have high priority.
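The ratio-based quantification this record describes reduces to a one-line calculation. The sketch below is a minimal illustration under stated assumptions: the peak areas, spiked internal-standard concentration, and the optional response factor are all hypothetical values, not taken from the cited validation protocol.

```python
# Hedged sketch of direct isotope dilution analysis (DIDA) quantification:
# the analyte concentration is derived from the analyte/internal-standard
# peak-area ratio and the known spiked IS concentration. The response
# factor is a hypothetical single-point correction, not part of the
# cited protocol.

def dida_concentration(area_analyte, area_is, conc_is, response_factor=1.0):
    """Concentration from peak-area ratio to a stable-isotope-labelled IS."""
    if area_is <= 0:
        raise ValueError("internal-standard peak area must be positive")
    return (area_analyte / area_is) * conc_is * response_factor

# Example: a cortisol-like case with an IS spiked at 100 nmol/L.
c = dida_concentration(area_analyte=5.2e5, area_is=4.0e5, conc_is=100.0)
# c == 130.0 (nmol/L)
```

The appeal, as the abstract notes, is that no multi-point calibration series is needed; accuracy then rests entirely on the IS concentration and on the labelled/unlabelled response ratio being stable.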

  17. From Networks to Time Series

    NASA Astrophysics Data System (ADS)

    Shimada, Yutaka; Ikeguchi, Tohru; Shigehara, Takaomi

    2012-10-01

In this Letter, we propose a framework to transform a complex network to a time series. The transformation from complex networks to time series is realized by the classical multidimensional scaling. Applying the transformation method to a model proposed by Watts and Strogatz [Nature (London) 393, 440 (1998)], we show that ring lattices are transformed to periodic time series, small-world networks to noisy periodic time series, and random networks to random time series. We also show that these relationships hold analytically, by using circulant-matrix theory and the perturbation theory of linear operators. The results are generalized to several high-dimensional lattices.
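The ring-lattice case in this record can be reproduced in a few lines: embed the graph's shortest-path distances with classical MDS and read one embedding coordinate, node by node, as the "time series". This is a sketch under stated assumptions — the cycle-graph construction and the choice of the leading coordinate are illustrative, not the paper's exact protocol.

```python
import numpy as np

# Network-to-time-series sketch via classical multidimensional scaling.
N = 64
idx = np.arange(N)
# Shortest-path distances on a ring lattice (cycle graph).
diff = np.abs(idx[:, None] - idx[None, :])
D = np.minimum(diff, N - diff).astype(float)

# Classical MDS: double-center the squared distances, then eigendecompose.
J = np.eye(N) - np.ones((N, N)) / N
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
lam, V = eigvals[order], eigvecs[:, order]

# Leading embedding coordinate, traversed in node order: for a ring this
# is a sinusoid, i.e. a periodic "time series", matching the Letter's claim.
series = V[:, 0] * np.sqrt(max(lam[0], 0.0))
```

Because the double-centered squared-distance matrix of a cycle is circulant, its eigenvectors are Fourier modes, which is the analytical argument the abstract alludes to.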

  18. A Study on Predictive Analytics Application to Ship Machinery Maintenance

    DTIC Science & Technology

    2013-09-01

Looking at the nature of the time series forecasting method, it would be better applied to offline analysis. The application for real-time online...other system attributes in future. Two techniques of statistical analysis, mainly time series models and cumulative sum control charts, are discussed in...statistical tool employed for the two techniques of statistical analysis. Both time series forecasting as well as CUSUM control charts are shown to be
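One of the two techniques this record discusses, the cumulative sum (CUSUM) control chart, is simple to sketch. The target mean, slack value k, and decision threshold h below are illustrative choices, not values from the cited study.

```python
# Hedged sketch of a one-sided (upper) tabular CUSUM control chart for
# drift detection, in the spirit of the machinery-monitoring record above.

def cusum_upper(samples, target, k, h):
    """Return the upper CUSUM statistic per sample and the first alarm index (or None)."""
    c, stats, alarm = 0.0, [], None
    for i, x in enumerate(samples):
        c = max(0.0, c + (x - target - k))   # accumulate excursions above target+k
        stats.append(c)
        if alarm is None and c > h:
            alarm = i                        # chart signals a sustained shift
    return stats, alarm

# A small upward shift starting at index 6 triggers the chart at index 8.
data = [0.1, -0.2, 0.0, 0.2, -0.1, 0.1, 1.2, 1.0, 1.3, 1.1]
stats, alarm = cusum_upper(data, target=0.0, k=0.5, h=1.5)
```

The slack k makes the chart insensitive to noise around the target, while small persistent shifts accumulate until the threshold h is crossed.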

  19. Effective numerical method of spectral analysis of quantum graphs

    NASA Astrophysics Data System (ADS)

    Barrera-Figueroa, Víctor; Rabinovich, Vladimir S.

    2017-05-01

We present in the paper an effective numerical method for the determination of the spectra of periodic metric graphs equipped with Schrödinger operators with real-valued periodic electric potentials as Hamiltonians and with Kirchhoff and Neumann conditions at the vertices. Our method is based on the spectral parameter power series method, which leads to a series representation of the dispersion equation that is suitable for both analytical and numerical calculations. Several important examples demonstrate the effectiveness of our method for some periodic graphs of interest that possess potentials usually found in quantum mechanics.

  20. On accelerated flow of MHD powell-eyring fluid via homotopy analysis method

    NASA Astrophysics Data System (ADS)

    Salah, Faisal; Viswanathan, K. K.; Aziz, Zainal Abdul

    2017-09-01

The aim of this article is to obtain the approximate analytical solution for incompressible magnetohydrodynamic (MHD) flow of a Powell-Eyring fluid induced by an accelerated plate. Both constant and variable accelerated cases are investigated. The approximate analytical solution in each case is obtained by using the Homotopy Analysis Method (HAM). The resulting nonlinear analysis is carried out to generate the series solution. Finally, graphical outcomes for different values of the material constant parameters on the velocity flow field are discussed and analyzed.

  1. Big Data Analytics for Demand Response: Clustering Over Space and Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chelmis, Charalampos; Kolte, Jahanvi; Prasanna, Viktor K.

The pervasive deployment of advanced sensing infrastructure in Cyber-Physical systems, such as the Smart Grid, has resulted in an unprecedented data explosion. Such data exhibit both large volumes and high velocity characteristics, two of the three pillars of Big Data, and have a time-series notion, as datasets in this context typically consist of successive measurements made over a time interval. Time-series data can be valuable for data mining and analytics tasks, such as identifying the "right" customers among a diverse population to target for Demand Response programs. However, time series are challenging to mine due to their high dimensionality. In this paper, we motivate this problem using a real application from the smart grid domain. We explore novel representations of time-series data for Big Data analytics, and propose a clustering technique for determining natural segmentation of customers and identification of temporal consumption patterns. Our method is generalizable to large-scale, real-world scenarios, without making any assumptions about the data. We evaluate our technique using real datasets from smart meters, totaling ~18,200,000 data points, and show the efficacy of our technique in efficiently detecting the optimal number of clusters.
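The customer-segmentation task this record describes can be illustrated with a toy version: cluster daily load profiles into consumption patterns. This is a sketch under stated assumptions — the synthetic morning-peak/evening-peak meter data and the plain k-means (Lloyd's algorithm) are stand-ins for the paper's representation and clustering technique.

```python
import numpy as np

# Synthetic daily load profiles: 50 morning-peak and 50 evening-peak customers.
rng = np.random.default_rng(0)
hours = np.arange(24)
morning = np.exp(-0.5 * ((hours - 8) / 2.0) ** 2)
evening = np.exp(-0.5 * ((hours - 19) / 2.0) ** 2)
profiles = np.vstack([
    morning + 0.05 * rng.standard_normal((50, 24)),
    evening + 0.05 * rng.standard_normal((50, 24)),
])

def kmeans(X, k=2, iters=20):
    # Farthest-point initialization keeps the sketch deterministic.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(profiles)   # recovers the two consumption patterns
```

A real pipeline would also have to choose k, e.g. by a silhouette or gap criterion, which is the "optimal number of clusters" question the abstract raises.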

  2. Analysis of SRM model nozzle calibration test data in support of IA12B, IA12C and IA36 space shuttle launch vehicle aerodynamics tests

    NASA Technical Reports Server (NTRS)

    Baker, L. R., Jr.; Tevepaugh, J. A.; Penny, M. M.

    1973-01-01

    Variations of nozzle performance characteristics of the model nozzles used in the Space Shuttle IA12B, IA12C, IA36 power-on launch vehicle test series are shown by comparison between experimental and analytical data. The experimental data are nozzle wall pressure distributions and schlieren photographs of the exhaust plume shapes. The exhaust plume shapes were simulated experimentally with cold flow while the analytical data were generated using a method-of-characteristics solution. Exhaust plume boundaries, boundary shockwave locations and nozzle wall pressure measurements calculated analytically agree favorably with the experimental data from the IA12C and IA36 test series. For the IA12B test series condensation was suspected in the exhaust plumes at the higher pressure ratios required to simulate the prototype plume shapes. Nozzle calibration tests for the series were conducted at pressure ratios where condensation either did not occur or if present did not produce a noticeable effect on the plume shapes. However, at the pressure ratios required in the power-on launch vehicle tests condensation probably occurs and could significantly affect the exhaust plume shapes.

  3. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transformation of the kernel function involving this convolution integral is analytically performed using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage without any cost, compared with the numerical method using fast Fourier transform to Fourier transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.

  4. Magnitude and sign of long-range correlated time series: Decomposition and surrogate signal generation.

    PubMed

    Gómez-Extremera, Manuel; Carpena, Pedro; Ivanov, Plamen Ch; Bernaola-Galván, Pedro A

    2016-04-01

    We systematically study the scaling properties of the magnitude and sign of the fluctuations in correlated time series, which is a simple and useful approach to distinguish between systems with different dynamical properties but the same linear correlations. First, we decompose artificial long-range power-law linearly correlated time series into magnitude and sign series derived from the consecutive increments in the original series, and we study their correlation properties. We find analytical expressions for the correlation exponent of the sign series as a function of the exponent of the original series. Such expressions are necessary for modeling surrogate time series with desired scaling properties. Next, we study linear and nonlinear correlation properties of series composed as products of independent magnitude and sign series. These surrogate series can be considered as a zero-order approximation to the analysis of the coupling of magnitude and sign in real data, a problem still open in many fields. We find analytical results for the scaling behavior of the composed series as a function of the correlation exponents of the magnitude and sign series used in the composition, and we determine the ranges of magnitude and sign correlation exponents leading to either single scaling or to crossover behaviors. Finally, we obtain how the linear and nonlinear properties of the composed series depend on the correlation exponents of their magnitude and sign series. Based on this information we propose a method to generate surrogate series with controlled correlation exponent and multifractal spectrum.
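The decomposition step this record builds on, and the composition of a surrogate as a product of magnitude and sign series, can be sketched directly. Assumptions are labeled: the toy random-walk input and the independent shuffling are illustrative only (shuffling destroys the correlations; the paper's surrogates instead impose controlled correlation exponents).

```python
import numpy as np

# Magnitude/sign decomposition of the increments of a time series, and a
# toy surrogate composed as a product of independently permuted series.
rng = np.random.default_rng(42)
x = np.cumsum(rng.standard_normal(1001))   # a toy correlated series

dx = np.diff(x)        # consecutive increments of the original series
mag = np.abs(dx)       # magnitude series
sgn = np.sign(dx)      # sign series (+1/-1)

# Surrogate increments: product of independently permuted magnitude and
# sign series (a zero-order stand-in for correlated surrogate generation).
surrogate = rng.permutation(mag) * rng.permutation(sgn)
```

By construction `mag * sgn` recovers the original increments exactly, and the surrogate preserves the increment distribution while decoupling magnitude from sign.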

  5. Isotope-ratio-monitoring gas chromatography-mass spectrometry: methods for isotopic calibration

    NASA Technical Reports Server (NTRS)

    Merritt, D. A.; Brand, W. A.; Hayes, J. M.

    1994-01-01

In trial analyses of a series of n-alkanes, precise determinations of 13C contents were based on isotopic standards introduced by five different techniques and results were compared. Specifically, organic-compound standards were coinjected with the analytes and carried through chromatography and combustion with them; or CO2 was supplied from a conventional inlet and mixed with the analyte in the ion source, or CO2 was supplied from an auxiliary mixing volume and transmitted to the source without interruption of the analyte stream. Additionally, two techniques were investigated in which the analyte stream was diverted and CO2 standards were placed on a near-zero background. All methods provided accurate results. Where applicable, methods not involving interruption of the analyte stream provided the highest performance (σ = 0.00006 at.% 13C or 0.06% for 250 pmol C as CO2 reaching the ion source), but great care was required. Techniques involving diversion of the analyte stream were immune to interference from coeluting sample components and still provided high precision (0.0001 ≤ σ ≤ 0.0002 at.% or 0.1 ≤ σ ≤ 0.2%).

  6. 3D-MICE: integration of cross-sectional and longitudinal imputation for multi-analyte longitudinal clinical data.

    PubMed

    Luo, Yuan; Szolovits, Peter; Dighe, Anand S; Baron, Jason M

    2018-06-01

    A key challenge in clinical data mining is that most clinical datasets contain missing data. Since many commonly used machine learning algorithms require complete datasets (no missing data), clinical analytic approaches often entail an imputation procedure to "fill in" missing data. However, although most clinical datasets contain a temporal component, most commonly used imputation methods do not adequately accommodate longitudinal time-based data. We sought to develop a new imputation algorithm, 3-dimensional multiple imputation with chained equations (3D-MICE), that can perform accurate imputation of missing clinical time series data. We extracted clinical laboratory test results for 13 commonly measured analytes (clinical laboratory tests). We imputed missing test results for the 13 analytes using 3 imputation methods: multiple imputation with chained equations (MICE), Gaussian process (GP), and 3D-MICE. 3D-MICE utilizes both MICE and GP imputation to integrate cross-sectional and longitudinal information. To evaluate imputation method performance, we randomly masked selected test results and imputed these masked results alongside results missing from our original data. We compared predicted results to measured results for masked data points. 3D-MICE performed significantly better than MICE and GP-based imputation in a composite of all 13 analytes, predicting missing results with a normalized root-mean-square error of 0.342, compared to 0.373 for MICE alone and 0.358 for GP alone. 3D-MICE offers a novel and practical approach to imputing clinical laboratory time series data. 3D-MICE may provide an additional tool for use as a foundation in clinical predictive analytics and intelligent clinical decision support.
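The masking-based evaluation this record describes is easy to sketch: hide a random subset of measured values, impute them, and score the imputations with a normalized root-mean-square error. Assumptions are labeled in the code — the synthetic lab data and the trivial per-analyte mean imputer are stand-ins for real results and for MICE/GP/3D-MICE.

```python
import numpy as np

# Mask-and-score evaluation sketch for imputation of clinical time series.
rng = np.random.default_rng(7)
truth = rng.normal(loc=100.0, scale=15.0, size=(200, 13))   # 13 "analytes"

mask = rng.random(truth.shape) < 0.1      # hide ~10% of measured results
observed = truth.copy()
observed[mask] = np.nan

# Stand-in imputer: per-analyte mean of the observed values.
col_means = np.nanmean(observed, axis=0)
imputed = np.where(mask, col_means[None, :], observed)

# Normalized RMSE over the masked entries only, as in the record's protocol.
err = (imputed - truth)[mask]
nrmse = np.sqrt(np.mean(err ** 2)) / truth.std()
```

A better imputer (e.g. one that exploits the longitudinal structure, as 3D-MICE does) should drive this NRMSE below the mean-imputation baseline.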

  7. An experimental and analytical investigation of the effect of spanwise curvature on wing flutter at Mach number of 0.7

    NASA Technical Reports Server (NTRS)

    Rivera, Jose A., Jr.

    1989-01-01

An experimental and analytical study was conducted at Mach 0.7 to investigate the effects of spanwise curvature on flutter. Two series of rectangular planform wings of aspect ratio 1.5 and curvature ranging from zero (uncurved) to 1.04/ft were flutter tested in the NASA Langley Transonic Dynamics Tunnel (TDT). One series consisted of models with a NACA 65 A010 airfoil section and the other of flat plate cross section models. Flutter analyses were conducted for correlation with the experimental results by using structural finite element methods to perform vibration analysis and two aerodynamic theories to obtain unsteady aerodynamic load calculations. The experimental results showed that for one series of models the flutter dynamic pressure increased significantly with curvature while for the other series of models the flutter dynamic pressure decreased with curvature. The flutter analyses, which generally predicted the experimental results, indicated that the difference in behavior of the two series of models was primarily due to differences in their structural properties.

  8. Approximate method for calculating a thickwalled cylinder with rigidly clamped ends

    NASA Astrophysics Data System (ADS)

    Andreev, Vladimir

    2018-03-01

Numerous papers dealing with the calculations of cylindrical bodies [1-8 and others] have shown that analytic and numerical-analytical solutions for both homogeneous and inhomogeneous thick-walled shells can be obtained quite simply, using expansions in Fourier series in trigonometric functions, if the ends are hinged and movable (sliding support). It is much more difficult to solve the problem of calculating shells with built-in ends.

  9. Inorganic chemical analysis of environmental materials—A lecture series

    USGS Publications Warehouse

    Crock, J.G.; Lamothe, P.J.

    2011-01-01

At the request of the faculty of the Colorado School of Mines, Golden, Colorado, the authors prepared and presented a lecture series to the students of a graduate level advanced instrumental analysis class. The slides and text presented in this report are a compilation and condensation of this series of lectures. The purpose of this report is to present the slides and notes and to emphasize the thought processes that should be used by a scientist submitting samples for analyses in order to procure analytical data to answer a research question. First and foremost, the analytical data generated can be no better than the samples submitted. The questions to be answered must first be well defined and the appropriate samples collected from the population that will answer the question. The proper methods of analysis, including proper sample preparation and digestion techniques, must then be applied. Care must be taken to achieve the required limits of detection of the critical analytes to yield detectable analyte concentrations (above "action" levels) for the majority of the study's samples, and to address what portion of those analytes answers the research question: total or partial concentrations. To guarantee a robust analytical result that answers the research question(s), a well-defined quality assurance and quality control (QA/QC) plan must be employed. This QA/QC plan must include the collection and analysis of field and laboratory blanks, sample duplicates, and matrix-matched standard reference materials (SRMs). The proper SRMs may include in-house materials and/or a selection of widely available commercial materials. A discussion of the preparation and applicability of in-house reference materials is also presented. Only when all these analytical issues are sufficiently addressed can the research questions be answered with known certainty.

  10. Constructing and predicting solitary pattern solutions for nonlinear time-fractional dispersive partial differential equations

    NASA Astrophysics Data System (ADS)

    Arqub, Omar Abu; El-Ajou, Ahmad; Momani, Shaher

    2015-07-01

    Building fractional mathematical models for specific phenomena and developing numerical or analytical solutions for these fractional mathematical models are crucial issues in mathematics, physics, and engineering. In this work, a new analytical technique for constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations is proposed based on the generalized Taylor series formula and residual error function. The new approach provides solutions in the form of a rapidly convergent series with easily computable components using symbolic computation software. For method evaluation and validation, the proposed technique was applied to three different models and compared with some of the well-known methods. The resultant simulations clearly demonstrate the superiority and potentiality of the proposed technique in terms of the quality performance and accuracy of substructure preservation in the construct, as well as the prediction of solitary pattern solutions for time-fractional dispersive partial differential equations.

  11. From observational to analytical morphology of the stratum corneum: progress avoiding hazardous animal and human testings

    PubMed Central

    Piérard, Gérald E; Courtois, Justine; Ritacco, Caroline; Humbert, Philippe; Fanian, Ferial; Piérard-Franchimont, Claudine

    2015-01-01

Background: In cosmetic science, noninvasive sampling of the upper part of the stratum corneum is conveniently performed using strippings with adhesive-coated discs (SACD) and cyanoacrylate skin surface strippings (CSSSs). Methods: Under controlled conditions, it is possible to scrutinize SACD and CSSS with objectivity using appropriate methods of analytical morphology. These procedures apply to a series of clinical conditions including xerosis grading, comedometry, corneodynamics, corneomelametry, corneosurfametry, corneoxenometry, and dandruff assessment. Results: With any of the analytical evaluations, SACD and CSSS provide specific salient information that is useful in the field of cosmetology. In particular, both methods appear valuable and complementary in assessing the human skin compatibility of personal skincare products. Conclusion: A set of quantitative analytical methods applicable to the minimally invasive and low-cost SACD and CSSS procedures allow for a sound assessment of cosmetic effects on the stratum corneum. Under regular conditions, both methods are painless and do not induce adverse events. Globally, CSSS appears more precise and informative than the regular SACD stripping. PMID:25767402

  12. Analytic-continuation approach to the resummation of divergent series in Rayleigh-Schrödinger perturbation theory

    NASA Astrophysics Data System (ADS)

    Mihálka, Zsuzsanna É.; Surján, Péter R.

    2017-12-01

The method of analytic continuation is applied to estimate eigenvalues of linear operators from finite order results of perturbation theory even in cases when the latter is divergent. Given a finite number of terms E(k), k = 1, 2, …, M, resulting from a Rayleigh-Schrödinger perturbation calculation, scaling these numbers by μ^k (μ being the perturbation parameter) we form the sum E(μ) = Σ_k μ^k E(k) for small μ values for which the finite series is convergent to a certain numerical accuracy. Extrapolating the function E(μ) to μ = 1 yields an estimation of the exact solution of the problem. For divergent series, this procedure may serve as a resummation tool provided the perturbation problem has a nonzero radius of convergence. As illustrations, we treat the anharmonic (quartic) oscillator and an example from the many-electron correlation problem.
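The resummation idea in this record can be demonstrated on a toy series. Assumptions are labeled: the coefficients E(k) = (-2)^k and the [1/1] rational fit (a Padé-style stand-in for the paper's continuation procedure) are illustrative; the series diverges at μ = 1 while the underlying function 1/(1 + 2μ) is finite there, with value 1/3.

```python
import numpy as np

# Resummation sketch: form E(mu) where the series converges, fit a
# rational model, and evaluate the fit at mu = 1.
M = 40
k = np.arange(M + 1)
coeffs = (-2.0) ** k                      # toy E(k); radius of convergence 1/2

mus = np.linspace(0.0, 0.2, 21)           # well inside the convergence disk
E_mu = np.array([(coeffs * mu ** k).sum() for mu in mus])

# Fit E(mu) ~ (a0 + a1*mu) / (1 + b1*mu) by linearized least squares:
# E = a0 + a1*mu - b1*mu*E.
A = np.column_stack([np.ones_like(mus), mus, -mus * E_mu])
a0, a1, b1 = np.linalg.lstsq(A, E_mu, rcond=None)[0]

resummed = (a0 + a1 * 1.0) / (1.0 + b1 * 1.0)   # extrapolation to mu = 1
```

The truncated series itself is useless at μ = 1 (its partial sum is of order 2^M), while the continuation recovers the finite limit, which is the point the abstract makes for the anharmonic oscillator.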

  13. Fourier-based integration of quasi-periodic gait accelerations for drift-free displacement estimation using inertial sensors.

    PubMed

    Sabatini, Angelo Maria; Ligorio, Gabriele; Mannini, Andrea

    2015-11-23

In biomechanical studies Optical Motion Capture Systems (OMCS) are considered the gold standard for determining the orientation and the position (pose) of an object in a global reference frame. However, the use of OMCS can be difficult, which has prompted research on alternative sensing technologies, such as body-worn inertial sensors. We developed a drift-free method to estimate the three-dimensional (3D) displacement of a body part during cyclical motions using body-worn inertial sensors. We performed the Fourier analysis of the stride-by-stride estimates of the linear acceleration, which were obtained by transposing the specific forces measured by the tri-axial accelerometer into the global frame using a quaternion-based orientation estimation algorithm and detecting when each stride began using a gait-segmentation algorithm. The time integration was performed analytically using the Fourier series coefficients; the inverse Fourier series was then taken for reconstructing the displacement over each single stride. The displacement traces were concatenated and spline-interpolated to obtain the entire trace. The method was applied to estimate the motion of the lower trunk of healthy subjects who walked on a treadmill and it was validated using OMCS reference 3D displacement data; different approaches were tested for transposing the measured specific force into the global frame, segmenting the gait and performing time integration (numerically and analytically). The widths of the limits of agreement were computed between each tested method and the OMCS reference method for each anatomical direction: medio-lateral (ML), vertical (VT) and antero-posterior (AP). Using the proposed method, it was observed that the vertical component of displacement (VT) was within ±4 mm (±1.96 standard deviation) of OMCS data and each component of horizontal displacement (ML and AP) was within ±9 mm of OMCS data. Fourier harmonic analysis was applied to model stride-by-stride linear accelerations during walking and to perform their analytical integration. Our results showed that analytical integration based on Fourier series coefficients was a useful approach to accurately estimate 3D displacement from noisy acceleration data.
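The analytical integration step this record describes can be sketched directly: expand a periodic (stride-like) acceleration in a Fourier series via the FFT, divide each harmonic by (iω_k)², and invert. Assumptions are labeled — the synthetic two-harmonic acceleration is a stand-in for real accelerometer data, and the zero-frequency (drift) terms are set to zero, which is what makes the reconstruction drift-free.

```python
import numpy as np

# Analytical double integration of a periodic acceleration via its
# Fourier series coefficients.
T = 1.0                          # stride period, s (illustrative)
n = 256
t = np.linspace(0.0, T, n, endpoint=False)
w = 2 * np.pi / T
acc = 0.8 * np.cos(w * t) + 0.3 * np.sin(2 * w * t)   # zero-mean acceleration

A = np.fft.fft(acc)
freqs = np.fft.fftfreq(n, d=T / n)
omega = 2 * np.pi * freqs

D = np.zeros_like(A)
nz = omega != 0
D[nz] = A[nz] / (1j * omega[nz]) ** 2    # divide each harmonic by (i*w_k)^2
disp = np.fft.ifft(D).real               # periodic, drift-free by construction
```

Unlike cumulative numerical integration, there is no growing bias term: each harmonic is integrated exactly, and the integration constants (the DC terms) are fixed at zero.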

  14. Transformation between surface spherical harmonic expansion of arbitrary high degree and order and double Fourier series on sphere

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2018-02-01

In order to accelerate the spherical harmonic synthesis and/or analysis of an arbitrary function on the unit sphere, we developed a pair of procedures to transform between a truncated spherical harmonic expansion and the corresponding two-dimensional Fourier series. First, we obtained an analytic expression of the sine/cosine series coefficient of the 4π fully normalized associated Legendre function in terms of the rectangle values of the Wigner d function. Then, we elaborated the existing method to transform the coefficients of the surface spherical harmonic expansion to those of the double Fourier series so as to handle arbitrarily high degree and order. Next, we created a new method to transform inversely a given double Fourier series to the corresponding surface spherical harmonic expansion. The key of the new method is a couple of new recurrence formulas to compute the inverse transformation coefficients: a decreasing-order, fixed-degree, and fixed-wavenumber three-term formula for general terms, and an increasing-degree-and-order and fixed-wavenumber two-term formula for diagonal terms. Meanwhile, the two seed values are analytically prepared. Both the forward and inverse transformation procedures are confirmed to be sufficiently accurate and applicable to extremely high degrees/orders/wavenumbers, as high as 2^30 ≈ 10^9. The developed procedures will be useful not only in the synthesis and analysis of the spherical harmonic expansion of arbitrary high degree and order, but also in the evaluation of the derivatives and integrals of the spherical harmonic expansion.

  15. Improved vertical streambed flux estimation using multiple diurnal temperature methods in series

    USGS Publications Warehouse

    Irvine, Dylan J.; Briggs, Martin A.; Cartwright, Ian; Scruggs, Courtney; Lautz, Laura K.

    2017-01-01

    Analytical solutions that use diurnal temperature signals to estimate vertical fluxes between groundwater and surface water based on either amplitude ratios (Ar) or phase shifts (Δϕ) produce results that rarely agree. Analytical solutions that simultaneously utilize Ar and Δϕ within a single solution have more recently been derived, decreasing uncertainty in flux estimates in some applications. Benefits of combined (ArΔϕ) methods also include that thermal diffusivity and sensor spacing can be calculated. However, poor identification of either Ar or Δϕ from raw temperature signals can lead to erratic parameter estimates from ArΔϕ methods. An add-on program for VFLUX 2 is presented to address this issue. Using thermal diffusivity selected from an ArΔϕ method during a reliable time period, fluxes are recalculated using an Ar method. This approach maximizes the benefits of the Ar and ArΔϕ methods. Additionally, sensor spacing calculations can be used to identify periods with unreliable flux estimates, or to assess streambed scour. Using synthetic and field examples, the use of these solutions in series was particularly useful for gaining conditions where fluxes exceeded 1 m/d.
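The raw inputs to the Ar, Δϕ, and combined methods this record discusses are the diurnal amplitude ratio and phase shift extracted from paired shallow/deep temperature records. The sketch below shows that extraction step only, on synthetic signals with a known damping and lag; the actual flux solutions (Hatch-style formulas, VFLUX) are not reproduced here.

```python
import numpy as np

# Extract the diurnal amplitude ratio (Ar) and phase shift (dphi) from
# two streambed temperature records. Synthetic signals: amplitude 2.0 at
# the shallow sensor, 0.8 (damped) and lagged 1 h at the deep sensor.
P = 86400.0                                # diurnal period, s
dt = 900.0                                 # 15-min sampling
t = np.arange(0.0, 3 * P, dt)              # three days of record
w = 2 * np.pi / P

shallow = 10.0 + 2.0 * np.cos(w * t)
deep = 10.0 + 0.8 * np.cos(w * (t - 3600.0))

def diurnal_component(x, t, w):
    """Complex Fourier coefficient of x at angular frequency w."""
    return 2.0 * np.mean(x * np.exp(-1j * w * t))

cs, cd = diurnal_component(shallow, t, w), diurnal_component(deep, t, w)
Ar = np.abs(cd) / np.abs(cs)               # amplitude ratio (0.4 here)
dphi = np.angle(cs) - np.angle(cd)         # phase shift (w * 3600 s here)
```

Poor identification of Ar or Δϕ at this stage is exactly what the record says destabilizes the combined ArΔϕ solutions, motivating its hybrid use of the methods in series.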

  16. Analytical derivatives of the individual state energies in ensemble density functional theory method. I. General formalism

    DOE PAGES

    Filatov, Michael; Liu, Fang; Martínez, Todd J.

    2017-07-21

The state-averaged (SA) spin restricted ensemble referenced Kohn-Sham (REKS) method and its state interaction (SI) extension, SI-SA-REKS, enable one to describe correctly the shape of the ground and excited potential energy surfaces of molecules undergoing bond breaking/bond formation reactions, including features such as conical intersections crucial for theoretical modeling of non-adiabatic reactions. Until recently, application of the SA-REKS and SI-SA-REKS methods to modeling the dynamics of such reactions was obstructed due to the lack of the analytical energy derivatives. Here, the analytical derivatives of the individual SA-REKS and SI-SA-REKS energies are derived. The final analytic gradient expressions are formulated entirely in terms of traces of matrix products and are presented in a form convenient for implementation in traditional quantum chemical codes employing basis set expansions of the molecular orbitals. Finally, we will describe the implementation and benchmarking of the derived formalism in a subsequent article of this series.

  17. Applying advanced analytics to guide emergency department operational decisions: A proof-of-concept study examining the effects of boarding.

    PubMed

    Andrew Taylor, R; Venkatesh, Arjun; Parwani, Vivek; Chekijian, Sharon; Shapiro, Marc; Oh, Andrew; Harriman, David; Tarabar, Asim; Ulrich, Andrew

    2018-01-04

Emergency Department (ED) leaders are increasingly confronted with large amounts of data with the potential to inform and guide operational decisions. Routine use of advanced analytic methods may provide additional insights. To examine the practical application of available advanced analytic methods to guide operational decision making around patient boarding. Retrospective analysis of the effect of boarding on ED operational metrics from a single site between 1/2015 and 1/2017. Time series were visualized through decomposition techniques accounting for seasonal trends, to determine the effect of boarding on ED performance metrics and to determine the impact of boarding "shocks" to the system on operational metrics over several days. There were 226,461 visits, and the mean (IQR) number of visits per day was 273 (258-291). Decomposition of the boarding count time series illustrated an upward trend in the last 2-3 quarters as well as clear seasonal components. All performance metrics were significantly impacted (p<0.05) by boarding count, except for overall Press Ganey scores (p<0.65). For every additional increase in boarder count, overall length-of-stay (LOS) increased by 1.55 min (0.68, 1.50). Smaller effects were seen for waiting room LOS and treat-and-release LOS. The impulse responses indicate that the boarding shocks are characterized by changes in the performance metrics within the first day that fade out after 4-5 days. In this study regarding the use of advanced analytics in daily ED operations, time series analysis provided multiple useful insights into boarding and its impact on performance metrics.

  18. Analytical close-form solutions to the elastic fields of solids with dislocations and surface stress

    NASA Astrophysics Data System (ADS)

    Ye, Wei; Paliwal, Bhasker; Ougazzaden, Abdallah; Cherkaoui, Mohammed

    2013-07-01

The concept of eigenstrain is adopted to derive a general analytical framework to solve the elastic field for 3D anisotropic solids with general defects by considering the surface stress. The formulation shows that the elastic constants and geometrical features of the surface play an important role in determining the elastic fields of the solid. As an application, the analytical closed-form solutions to the stress fields of an infinite isotropic circular nanowire are obtained. The stress fields are compared with the classical solutions and those of the complex variable method. The stress fields from this work demonstrate the impact of the surface stress as the size of the nanowire shrinks, an effect that becomes negligible at the macroscopic scale. Compared with the power series solutions of the complex variable method, the analytical solutions in this work provide a better platform and are more flexible in various applications. More importantly, the proposed analytical framework profoundly improves the studies of general 3D anisotropic materials with surface effects.

  19. Developments in mycotoxin analysis: an update for 2014-2015

    USDA-ARS?s Scientific Manuscript database

    This review summarizes developments in the determination of mycotoxins over the period between mid-2014 and mid-2015. In keeping with previous articles of this series, analytical methods to determine aflatoxins, Alternaria toxins, ergot alkaloids, fumonisins, ochratoxins, patulin, trichothecenes, an...

  20. A new analytical solution solved by triple series equations method for constant-head tests in confined aquifers

    NASA Astrophysics Data System (ADS)

    Chang, Ya-Chi; Yeh, Hund-Der

    2010-06-01

    Constant-head pumping tests are usually employed to determine aquifer parameters, and they can be performed in fully or partially penetrating wells. Generally, the Dirichlet condition is prescribed along the well screen and the Neumann-type no-flow condition is specified over the unscreened part of the test well. The mathematical model describing the aquifer response to a constant-head test performed in a fully penetrating well can be easily solved by the conventional integral transform technique under the uniform Dirichlet-type condition along the rim of the wellbore. However, the boundary condition for a test well with partial penetration must be treated as a mixed-type condition. This mixed boundary value problem, in a confined aquifer system of infinite radial extent and finite vertical extent, is solved by the Laplace and finite Fourier transforms in conjunction with the triple series equations method. This approach provides analytical results for the drawdown in a partially penetrating well for an arbitrary location of the well screen in a finite-thickness aquifer. The semi-analytical solutions are particularly useful for practical applications from a computational point of view.

  1. Determination of fundamental asteroseismic parameters using the Hilbert transform

    NASA Astrophysics Data System (ADS)

    Kiefer, René; Schad, Ariane; Herzberg, Wiebke; Roth, Markus

    2015-06-01

    Context. Solar-like oscillations exhibit a regular pattern of frequencies. This pattern is dominated by the small and large frequency separations between modes. The accurate determination of these parameters is of great interest, because they give information about, e.g., the evolutionary state and the mass of a star. Aims: We want to develop a robust method to determine the large and small frequency separations for time series with low signal-to-noise ratio. For this purpose, we analyse a time series of the Sun from the GOLF instrument aboard SOHO and a time series of the star KIC 5184732 from the NASA Kepler satellite by employing a combination of Fourier and Hilbert transforms. Methods: We use the analytic signal of filtered stellar oscillation time series to compute the signal envelope. Spectral analysis of the signal envelope then reveals frequency differences of dominant modes in the periodogram of the stellar time series. Results: With the described method the large frequency separation Δν can be extracted from the envelope spectrum even for data of poor signal-to-noise ratio. A modification of the method allows for an overview of the regularities in the periodogram of the time series.
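    The envelope-spectrum idea can be illustrated in a few lines (this is a toy sketch, not the GOLF or Kepler pipeline): the beat between two modes separated by a known frequency difference shows up as a peak in the spectrum of the analytic-signal envelope. All numbers are invented.

```python
import numpy as np

# Two "modes" 5 Hz apart, sampled at 1 kHz for 10 s (illustrative values only)
fs, df = 1000.0, 5.0
t = np.arange(0, 10.0, 1 / fs)
x = np.cos(2 * np.pi * 100.0 * t) + np.cos(2 * np.pi * (100.0 + df) * t)

# Analytic signal via FFT: zero the negative frequencies, double the positives
X = np.fft.fft(x)
h = np.zeros(len(x))
h[0] = h[len(x) // 2] = 1.0            # DC and Nyquist bins (even length)
h[1:len(x) // 2] = 2.0
envelope = np.abs(np.fft.ifft(X * h))  # |analytic signal| = signal envelope

# The envelope beats at the mode separation; find that peak in its spectrum
spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
print(freqs[np.argmax(spec)])          # → 5.0, the mode separation
```

    With noisy data the peak broadens but survives, which is the property the abstract exploits for low signal-to-noise time series.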

  2. Automated integration of the Lagrange equations in orbital motion.

    NASA Astrophysics Data System (ADS)

    Abad, A.; San Juan, J. F.

    The new techniques of algebraic manipulation, especially the Poisson Series Processor, permit the analytical integration of increasingly complex problems in celestial mechanics. The authors are developing a new Poisson Series Processor, PSPC, and use it to solve the Lagrange equations of orbital motion. They integrate the Lagrange equations using the stroboscopic method and apply it to the main problem of artificial satellite theory.

  3. Wavelet-based analysis of circadian behavioral rhythms.

    PubMed

    Leise, Tanya L

    2015-01-01

    The challenging problems presented by noisy biological oscillators have led to the development of a great variety of methods for accurately estimating rhythmic parameters such as period and amplitude. This chapter focuses on wavelet-based methods, which can be quite effective for assessing how rhythms change over time, particularly if time series are at least a week in length. These methods can offer alternative views to complement more traditional methods of evaluating behavioral records. The analytic wavelet transform can estimate the instantaneous period and amplitude, as well as the phase of the rhythm at each time point, while the discrete wavelet transform can extract the circadian component of activity and measure the relative strength of that circadian component compared to those in other frequency bands. Wavelet transforms do not require the removal of noise or trend, and can, in fact, be effective at removing noise and trend from oscillatory time series. The Fourier periodogram and spectrogram are reviewed, followed by descriptions of the analytic and discrete wavelet transforms. Examples illustrate application of each method and their prior use in chronobiology is surveyed. Issues such as edge effects, frequency leakage, and implications of the uncertainty principle are also addressed. © 2015 Elsevier Inc. All rights reserved.
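    A minimal numpy sketch of the wavelet idea (using a Morlet wavelet directly rather than any particular package, with invented activity data): scan trial periods and pick the one with the highest mean wavelet power.

```python
import numpy as np

# Two weeks of simulated "activity" with a 24 h rhythm buried in noise
rng = np.random.default_rng(1)
dt = 0.1                                  # sampling interval, hours
t = np.arange(0, 24 * 14, dt)
x = np.sin(2 * np.pi * t / 24) + 0.5 * rng.normal(size=t.size)

def morlet_power(x, dt, period, w0=6.0):
    """Mean power of the continuous Morlet transform at one trial period."""
    scale = period * (w0 + np.sqrt(2 + w0**2)) / (4 * np.pi)
    k = np.arange(-3 * scale, 3 * scale + dt, dt)        # truncated support
    wavelet = np.pi**-0.25 * np.exp(1j * w0 * k / scale - (k / scale) ** 2 / 2)
    coef = np.convolve(x, wavelet, mode="same") * dt / np.sqrt(scale)
    return float(np.mean(np.abs(coef) ** 2))

periods = np.arange(18.0, 30.5, 0.5)
power = [morlet_power(x, dt, p) for p in periods]
print(periods[int(np.argmax(power))])     # dominant period in hours, expected near 24
```

    Keeping the full complex coefficients (rather than only their mean power) is what gives the instantaneous amplitude and phase at each time point mentioned above.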

  4. Local Modelling of Groundwater Flow Using the Analytic Element Method: Three-dimensional Transient Unconfined Groundwater Flow With Partially Penetrating Wells and Ellipsoidal Inhomogeneities

    NASA Astrophysics Data System (ADS)

    Jankovic, I.; Barnes, R. J.; Soule, R.

    2001-12-01

    The analytic element method is used to model local three-dimensional flow in the vicinity of partially penetrating wells. The flow domain is bounded by an impermeable horizontal base, a phreatic surface with recharge, and a cylindrical lateral boundary. The analytic element solution for this problem contains (1) a fictitious source technique to satisfy the head and the discharge conditions along the phreatic surface, (2) a fictitious source technique to satisfy specified head conditions along the cylindrical boundary, (3) a method of imaging to satisfy the no-flow condition across the impermeable base, (4) the classical analytic solution for a well, and (5) spheroidal harmonics to account for the influence of the inhomogeneities in hydraulic conductivity. Temporal variations of the flow system due to time-dependent recharge and pumping are represented by combining the analytic element method with a finite difference method: the analytic element method is used to represent spatial changes in head and discharge, while the finite difference method represents temporal variations. The solution provides a very detailed description of local groundwater flow with an arbitrary number of wells of any orientation and an arbitrary number of ellipsoidal inhomogeneities of any size and conductivity. These inhomogeneities may be used to model local hydrogeologic features (such as gravel packs and clay lenses) that significantly influence the flow in the vicinity of partially penetrating wells. Several options for specifying head values along the lateral domain boundary are available. These options allow for inclusion of the model into steady and transient regional groundwater models. The head values along the lateral domain boundary may be specified directly (as time series). The head values along the lateral boundary may also be assigned by specifying the water-table gradient and a head value at a single point (as time series).
A case study is included to demonstrate the application of the model in local modeling of the groundwater flow. Transient three-dimensional capture zones are delineated for a site on Prairie Island, MN. Prairie Island is located on the Mississippi River 40 miles south of the Twin Cities metropolitan area. The case study focuses on a well that has been known to contain viral DNA. The objective of the study was to assess the potential for pathogen migration toward the well.

  5. Zwitterionic, cationic, and anionic fluorinated chemicals in aqueous film forming foam formulations and groundwater from U.S. military bases by nonaqueous large-volume injection HPLC-MS/MS.

    PubMed

    Backe, Will J; Day, Thomas C; Field, Jennifer A

    2013-05-21

    A new analytical method was developed to quantify 26 newly identified and 21 legacy (e.g. perfluoroalkyl carboxylates, perfluoroalkyl sulfonates, and fluorotelomer sulfonates) per- and polyfluorinated alkyl substances (PFAS) in groundwater and aqueous film forming foam (AFFF) formulations. Prior to analysis, AFFF formulations were diluted into methanol, and PFAS in groundwater were extracted by micro liquid-liquid extraction. Methanolic dilutions of AFFF formulations and groundwater extracts were analyzed by large-volume injection (900 μL) high-performance liquid chromatography tandem mass spectrometry. Orthogonal chromatography was performed using cation exchange (silica) and anion exchange (propylamine) guard columns connected in series to a reverse-phase (C18) analytical column. Method detection limits for PFAS in groundwater ranged from 0.71 ng/L to 67 ng/L, and whole-method accuracy ranged from 96% to 106% for analytes for which matched authentic analytical standards were available. For analytes without authentic analytical standards, whole-method accuracy ranged from 78% to 144%, and whole-method precision was less than 15% relative standard deviation for all analytes. A demonstration of the method on groundwater samples from five military bases revealed eight of the 26 newly identified PFAS present at concentrations up to 6900 ng/L. The newly identified PFAS represent a minor fraction of the fluorinated chemicals in groundwater relative to legacy PFAS. The profiles of PFAS in groundwater differ from those found in fluorotelomer- and electrofluorination-based AFFF formulations, potentially indicating environmental transformation of PFAS.

  6. DETERMINATION OF CHLOROETHENES IN ENVIRONMENTAL BIOLOGICAL SAMPLES USING GAS CHROMATOGRAPHY COUPLED WITH SOLID PHASE MICRO EXTRACTION

    EPA Science Inventory

    An analytical method has been developed to determine the chloroethene series, tetrachloroethene (PCE), trichloroethene (TCE), cis-dichloroethene (cis-DCE) and trans-dichloroethene (trans-DCE), in environmental biotreatment studies using gas chromatography coupled with a solid phase mi...

  7. Interrupted time series analysis in drug utilization research is increasing: systematic review and recommendations.

    PubMed

    Jandoc, Racquel; Burden, Andrea M; Mamdani, Muhammad; Lévesque, Linda E; Cadarette, Suzanne M

    2015-08-01

    To describe the use and reporting of interrupted time series methods in drug utilization research. We completed a systematic search of MEDLINE, Web of Science, and reference lists to identify English language articles through to December 2013 that used interrupted time series methods in drug utilization research. We tabulated the number of studies by publication year and summarized methodological detail. We identified 220 eligible empirical applications since 1984. Only 17 (8%) were published before 2000, and 90 (41%) were published since 2010. Segmented regression was the most commonly applied interrupted time series method (67%). Most studies assessed drug policy changes (51%, n = 112); 22% (n = 48) examined the impact of new evidence, 18% (n = 39) examined safety advisories, and 16% (n = 35) examined quality improvement interventions. Autocorrelation was considered in 66% of studies, 31% reported adjusting for seasonality, and 15% accounted for nonstationarity. Use of interrupted time series methods in drug utilization research has increased, particularly in recent years. Despite methodological recommendations, there is large variation in reporting of analytic methods. Developing methodological and reporting standards for interrupted time series analysis is important to improve its application in drug utilization research, and we provide recommendations for consideration. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
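    Segmented regression, the most common interrupted time series method tallied above, can be sketched on simulated monthly data (the intervention point, effect sizes, and noise level are invented):

```python
import numpy as np

# Interrupted time series: level and slope may change at a known intervention
rng = np.random.default_rng(2)
n, t0 = 48, 24                      # 48 monthly points, intervention at month 24
t = np.arange(n)
post = (t >= t0).astype(float)
# true series: baseline level 100, slope 0.5; level drop 10, slope change -0.3
y = 100 + 0.5 * t - 10 * post - 0.3 * post * (t - t0) + rng.normal(0, 1, n)

# design matrix: intercept, time, level-change step, post-intervention slope term
X = np.column_stack([np.ones(n), t, post, post * (t - t0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(2))  # [baseline level, baseline slope, level change, slope change]
```

    Ordinary least squares is shown for brevity; as the review notes, a full analysis should also consider autocorrelation, seasonality, and nonstationarity.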

  8. Scattering from phase-separated vesicles. I. An analytical form factor for multiple static domains

    DOE PAGES

    Heberle, Frederick A.; Anghel, Vinicius N. P.; Katsaras, John

    2015-08-18

    This is the first in a series of studies considering elastic scattering from laterally heterogeneous lipid vesicles containing multiple domains. Unique among biophysical tools, small-angle neutron scattering can in principle give detailed information about the size, shape and spatial arrangement of domains. A general theory for scattering from laterally heterogeneous vesicles is presented, and the analytical form factor for static domains with arbitrary spatial configuration is derived, including a simplification for uniformly sized round domains. The validity of the model, including series truncation effects, is assessed by comparison with simulated data obtained from a Monte Carlo method. Several aspects of the analytical solution for scattering intensity are discussed in the context of small-angle neutron scattering data, including the effect of varying domain size and number, as well as solvent contrast. Finally, the analysis indicates that effects of domain formation are most pronounced when the vesicle's average scattering length density matches that of the surrounding solvent.

  9. Inverse scattering theory: Inverse scattering series method for one dimensional non-compact support potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Jie, E-mail: yjie2@uh.edu; Lesage, Anne-Cécile; Hussain, Fazle

    2014-12-15

    The reversion of the Born-Neumann series of the Lippmann-Schwinger equation is one of the standard ways to solve the inverse acoustic scattering problem. One limitation of the current inversion methods based on the reversion of the Born-Neumann series is that the velocity potential should have compact support. However, this assumption cannot be satisfied in certain cases, especially in seismic inversion. Based on the idea of distorted wave scattering, we explore an inverse scattering method for velocity potentials without compact support. The strategy is to decompose the actual medium as a known single interface reference medium, which has the same asymptotic form as the actual medium, and a perturbative scattering potential with compact support. After introducing the method to calculate the Green's function for the known reference potential, the inverse scattering series and Volterra inverse scattering series are derived for the perturbative potential. Analytical and numerical examples demonstrate the feasibility and effectiveness of this method. In addition, to ensure stability of the numerical computation, the Lanczos averaging method is employed as a filter to reduce the Gibbs oscillations for the truncated discrete inverse Fourier transform of each order. Our method provides a rigorous mathematical framework for inverse acoustic scattering with a non-compact support velocity potential.

  10. Phase measurement error in summation of electron holography series.

    PubMed

    McLeod, Robert A; Bergen, Michael; Malac, Marek

    2014-06-01

    Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double biprism configuration is examined. An analytical model of image and phase drift, composed of a combination of linear drift and Brownian random-walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS2 fullerenes. A metric is provided to measure the object phase error from experimental results and compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution less than 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of phase error differs from the experimental metrics by +7% inside the object and -5% in the vacuum, indicating that the model can provide reliable quantitative predictions. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.

  11. Analytic solution to variance optimization with no short positions

    NASA Astrophysics Data System (ADS)

    Kondor, Imre; Papp, Gábor; Caccioli, Fabio

    2017-12-01

    We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric \

  12. U/Th dating by SHRIMP RG ion-microprobe mass spectrometry using single ion-exchange beads

    NASA Astrophysics Data System (ADS)

    Bischoff, James L.; Wooden, Joe; Murphy, Fred; Williams, Ross W.

    2005-04-01

    We present a new analytical method for U-series isotopes using the SHRIMP RG (Sensitive High mass Resolution Ion MicroProbe) mass spectrometer that utilizes the preconcentration of the U-series isotopes from a sample onto a single ion-exchange bead. Ion-microprobe mass spectrometry is capable of producing Th ionization efficiencies in excess of 2%. Analytical precision is typically better than alpha spectroscopy, but not as good as thermal ionization mass spectroscopy (TIMS) and inductively coupled plasma multicollector mass spectrometry (ICP-MS). Like TIMS and ICP-MS the method allows analysis of small samples sizes, but also adds the advantage of rapidity of analysis. A major advantage of ion-microprobe analysis is that U and Th isotopes are analyzed in the same bead, simplifying the process of chemical separation. Analytical time on the instrument is ~60 min per sample, and a single instrument-loading can accommodate 15-20 samples to be analyzed in a 24-h day. An additional advantage is that the method allows multiple reanalyses of the same bead and that samples can be archived for reanalysis at a later time. Because the ion beam excavates a pit only a few μm deep, the mount can later be repolished and reanalyzed numerous times. The method described of preconcentrating a low concentration sample onto a small conductive substrate to allow ion-microprobe mass spectrometry is potentially applicable to many other systems.

  13. U/Th dating by SHRIMP RG ion-microprobe mass spectrometry using single ion-exchange beads

    USGS Publications Warehouse

    Bischoff, J.L.; Wooden, J.; Murphy, F.; Williams, Ross W.

    2005-01-01

    We present a new analytical method for U-series isotopes using the SHRIMP RG (Sensitive High mass Resolution Ion MicroProbe) mass spectrometer that utilizes the preconcentration of the U-series isotopes from a sample onto a single ion-exchange bead. Ion-microprobe mass spectrometry is capable of producing Th ionization efficiencies in excess of 2%. Analytical precision is typically better than alpha spectroscopy, but not as good as thermal ionization mass spectroscopy (TIMS) and inductively coupled plasma multicollector mass spectrometry (ICP-MS). Like TIMS and ICP-MS the method allows analysis of small samples sizes, but also adds the advantage of rapidity of analysis. A major advantage of ion-microprobe analysis is that U and Th isotopes are analyzed in the same bead, simplifying the process of chemical separation. Analytical time on the instrument is ~60 min per sample, and a single instrument-loading can accommodate 15-20 samples to be analyzed in a 24-h day. An additional advantage is that the method allows multiple reanalyses of the same bead and that samples can be archived for reanalysis at a later time. Because the ion beam excavates a pit only a few μm deep, the mount can later be repolished and reanalyzed numerous times. The method described of preconcentrating a low concentration sample onto a small conductive substrate to allow ion-microprobe mass spectrometry is potentially applicable to many other systems. Copyright © 2005 Elsevier Ltd.

  14. Developments in mycotoxin analysis: an update for 2013 – 2014

    USDA-ARS?s Scientific Manuscript database

    This review highlights developments in the determination of mycotoxins over the period between mid-2013 and mid-2014. It continues in the format of the previous articles of this series, with emphasis on analytical methods to determine aflatoxins, Alternaria toxins, ergot alkaloids, fumonisins, ochratoxi...

  15. Analytical Characterization on Pulse Propagation in a Semiconductor Optical Amplifier Based on Homotopy Analysis Method

    NASA Astrophysics Data System (ADS)

    Jia, Xiaofei

    2018-06-01

    Starting from the basic equations describing the evolution of carriers and photons inside a semiconductor optical amplifier (SOA), the equation governing pulse propagation in the SOA is derived. By employing the homotopy analysis method (HAM), a series solution for the pulse output by the SOA is obtained, which can effectively characterize the temporal features of the nonlinear process during pulse propagation inside the SOA. Moreover, the analytical solution is compared with numerical simulations and shows good agreement. The theoretical results will benefit future analysis of other problems related to pulse propagation in the SOA.

  16. Survey of Manual Methods of Measurements of Asbestos, Beryllium, Lead, Cadmium, Selenium, and Mercury in Stationary Source Emissions. Environmental Monitoring Series.

    ERIC Educational Resources Information Center

    Coulson, Dale M.; And Others

    The purpose of this study is to evaluate existing manual methods for analyzing asbestos, beryllium, lead, cadmium, selenium, and mercury, and from this evaluation to provide the best and most practical set of analytical methods for measuring emissions of these elements from stationary sources. The work in this study was divided into two phases.…

  17. Multilevel Dynamic Generalized Structured Component Analysis for Brain Connectivity Analysis in Functional Neuroimaging Data.

    PubMed

    Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S

    2016-06-01

    We extend dynamic generalized structured component analysis (GSCA) to enhance its data-analytic capability in structural equation modeling of multi-subject time series data. Time series data of multiple subjects are typically hierarchically structured, where time points are nested within subjects who are in turn nested within a group. The proposed approach, named multilevel dynamic GSCA, accommodates the nested structure in time series data. Explicitly taking the nested structure into account, the proposed method allows investigating subject-wise variability of the loadings and path coefficients by looking at the variance estimates of the corresponding random effects, as well as fixed loadings between observed and latent variables and fixed path coefficients between latent variables. We demonstrate the effectiveness of the proposed approach by applying the method to the multi-subject functional neuroimaging data for brain connectivity analysis, where time series data-level measurements are nested within subjects.

  18. Andy Walker | NREL

    Science.gov Websites

    efficiency and renewable energy projects. His patent on the Renewable Energy Optimization (REO) method of distribution function for time-series simulation Analytical and numerical optimization Project delivery with System Operations and Maintenance: 2nd Edition, 2016, NREL/Sandia/Sunspec Alliance SuNLaMP PV O&M

  19. Multi-crosswell profile 3D imaging and method

    DOEpatents

    Washbourne, John K.; Rector, III, James W.; Bube, Kenneth P.

    2002-01-01

    A method for characterizing the value of a particular property, for example seismic velocity, of a subsurface region of ground is described. In one aspect, the value of the particular property is represented using at least one continuous analytic function such as a Chebychev polynomial. The seismic data may include data derived from at least one crosswell dataset for the subsurface region of interest and may also include other data. In either instance, data may simultaneously be used from a first crosswell dataset in conjunction with one or more other crosswell datasets and/or with the other data. In another aspect, the value of the property is characterized in three dimensions throughout the region of interest using crosswell and/or other data. In still another aspect, crosswell datasets for highly deviated or horizontal boreholes are inherently useful. The method is performed, in part, by fitting a set of vertically spaced layer boundaries, represented by an analytic function such as a Chebychev polynomial, within and across the region encompassing the boreholes such that a series of layers is defined between the layer boundaries. Initial values of the particular property are then established between the layer boundaries and across the subterranean region using a series of continuous analytic functions. The continuous analytic functions are then adjusted to more closely match the value of the particular property across the subterranean region of ground to determine the value of the particular property for any selected point within the region.
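    The core idea of representing a property with a continuous analytic function can be illustrated with a low-order Chebyshev fit; the velocity-depth profile below is invented, and numpy's Chebyshev module stands in for the patent's own construction.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Invented velocity-depth profile: linear gradient plus a smooth undulation
depth = np.linspace(0.0, 2000.0, 200)                      # metres
velocity = 1500 + 0.6 * depth + 100 * np.sin(depth / 300)  # m/s

coefs = C.chebfit(depth, velocity, deg=8)   # continuous degree-8 representation
recon = C.chebval(depth, coefs)             # evaluate at any depth, not just samples
print(round(float(np.max(np.abs(recon - velocity))), 2))   # worst-case misfit, m/s
```

    A handful of coefficients reproduces the whole profile, which is what makes such functions convenient unknowns when fitting crosswell traveltime data.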

  20. A simple method for computing the relativistic Compton scattering kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Prasad, M. K.; Kershaw, D. S.; Beason, J. D.

    1986-01-01

    Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.

  1. Kinetics analysis and quantitative calculations for the successive radioactive decay process

    NASA Astrophysics Data System (ADS)

    Zhou, Zhiping; Yan, Deyue; Zhao, Yuliang; Chai, Zhifang

    2015-01-01

    The general radioactive decay kinetics equations with branching were developed, and the analytical solutions were derived by the Laplace transform method. The time dependence of all the nuclide concentrations can be easily obtained by applying the equations to any known radioactive decay series. Taking the example of the thorium radioactive decay series, the concentration evolution over time of the various nuclide members of the family is given by quantitative numerical calculations with a computer. The method can be applied to quantitative prediction and analysis of the daughter nuclides in successive decay with branching in complicated radioactive processes, such as natural radioactive decay series, nuclear reactors, nuclear waste disposal, nuclear spallation, the synthesis and identification of superheavy nuclides, radioactive ion beam physics and chemistry, etc.
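    For the simplest chain without branching, the Laplace-transform treatment reduces to the classical Bateman solution, sketched here for a hypothetical A → B → C chain with invented decay constants:

```python
import numpy as np

# Bateman solution for a two-step decay chain A -> B -> C (no branching)
lam_a, lam_b = 0.02, 0.005      # decay constants, arbitrary inverse-time units
N0 = 1000.0                     # initial atoms of A; B and C start empty
t = np.linspace(0, 400, 401)

NA = N0 * np.exp(-lam_a * t)
NB = N0 * lam_a / (lam_b - lam_a) * (np.exp(-lam_a * t) - np.exp(-lam_b * t))
NC = N0 - NA - NB               # conservation of the total number of atoms

# the daughter concentration NB peaks where dNB/dt = 0,
# i.e. at t = ln(lam_a / lam_b) / (lam_a - lam_b)
t_peak = np.log(lam_a / lam_b) / (lam_a - lam_b)
print(round(t_peak, 1))         # → 92.4
```

    Longer chains follow the same pattern, with each nuclide's concentration a sum of exponentials in the chain's decay constants.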

  2. The Parker-Sochacki Method of Solving Differential Equations: Applications and Limitations

    NASA Astrophysics Data System (ADS)

    Rudmin, Joseph W.

    2006-11-01

    The Parker-Sochacki method is a powerful but simple technique for solving systems of differential equations, giving either analytical or numerical results. It has been in use for about 10 years now since its discovery by G. Edgar Parker and James Sochacki of the James Madison University Dept. of Mathematics and Statistics. It is being presented here because it is still not widely known and can benefit the listeners. It is a method of rapidly generating the Maclaurin series to high order, non-iteratively. It has been successfully applied to more than a hundred systems of equations, including the classical many-body problem. Its advantages include its speed of calculation, its simplicity, and the fact that it uses only addition, subtraction and multiplication. It is not just a polynomial approximation, because it yields the Maclaurin series, and therefore exhibits the advantages and disadvantages of that series. A few applications will be presented.
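    The coefficient-generation idea can be sketched on a toy polynomial ODE, y' = y², y(0) = 1, whose exact solution 1/(1 − t) has Maclaurin coefficients all equal to 1; each new coefficient comes from a Cauchy product, using only addition and multiplication:

```python
# Parker-Sochacki-style generation of Maclaurin coefficients for y' = y^2, y(0) = 1
def maclaurin_coeffs(order):
    a = [1.0]                                 # a_0 = y(0)
    for n in range(order):
        # coefficient of t^n in y^2 is the Cauchy product of the series with itself
        cauchy = sum(a[k] * a[n - k] for k in range(n + 1))
        a.append(cauchy / (n + 1))            # integrate the series term by term
    return a

coeffs = maclaurin_coeffs(8)
print(coeffs)                                 # all 1.0: the series of 1/(1 - t)

# evaluating the truncated series at t = 0.5 approximates 1/(1 - 0.5) = 2
t = 0.5
approx = sum(c * t**n for n, c in enumerate(coeffs))
print(round(approx, 3))                       # → 1.996
```

    As the abstract notes, the result is the true Maclaurin series, so it converges only inside the series' radius of convergence (here |t| < 1).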

  3. Direct application of Padé approximant for solving nonlinear differential equations.

    PubMed

    Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario

    2014-01-01

    This work presents a direct procedure for applying the Padé method to find approximate solutions of nonlinear differential equations. Moreover, we present case studies showing the strength of the method in generating highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The highly accurate, handy approximations obtained by the direct application of the Padé method show the high potential of the proposed scheme to approximate a wide variety of problems. What is more, the direct application of the Padé approximant avoids the prior application of an approximative method, such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, among others, as a tool to obtain a power series solution to post-treat with the Padé approximant. 34L30.
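    The basic Padé construction the paper builds on can be shown on a standard worked example (a [2/2] approximant of exp(x) from its Taylor coefficients; this illustrates the general machinery, not the paper's direct procedure):

```python
import numpy as np

# [2/2] Padé approximant of exp(x) built from its Taylor coefficients at 0
c = np.array([1, 1, 1 / 2, 1 / 6, 1 / 24])

# denominator 1 + b1*x + b2*x^2: choose b so the x^3 and x^4 coefficients
# of den(x) * series(x) - num(x) vanish
A = np.array([[c[2], c[1]],
              [c[3], c[2]]])
b = np.linalg.solve(A, -np.array([c[3], c[4]]))
a = np.array([c[0],
              c[1] + b[0] * c[0],
              c[2] + b[0] * c[1] + b[1] * c[0]])  # numerator coefficients

def pade(x):
    return (a[0] + a[1] * x + a[2] * x**2) / (1 + b[0] * x + b[1] * x**2)

print(round(pade(1.0), 4))  # → 2.7143, close to e ≈ 2.7183
```

    The recovered approximant is (1 + x/2 + x²/12)/(1 − x/2 + x²/12), the classical [2/2] form for the exponential; rational forms like this often extend the useful range of a truncated power series.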

  4. Constructing analytic solutions on the Tricomi equation

    NASA Astrophysics Data System (ADS)

    Ghiasi, Emran Khoshrouye; Saleh, Reza

    2018-04-01

    In this paper, the homotopy analysis method (HAM) and the variational iteration method (VIM) are utilized to derive approximate solutions of the Tricomi equation. Afterwards, the HAM is optimized to accelerate the convergence of the series solution by minimizing its squared residual error at any order of the approximation. It is found that the effect of the optimal values of the auxiliary parameter on the convergence of the series solution is not negligible. Furthermore, the present results agree well with those obtained through a closed-form equation available in the literature. In conclusion, both methods are effective in achieving solutions of the partial differential equation.

  5. Conditions for synchronization in Josephson-junction arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chernikov, A.A.; Schmidt, G.

    An effective perturbation-theoretical method has been developed to study the dynamics of Josephson junction series arrays. It is shown that the inclusion of junction capacitances, often ignored, has a significant impact on synchronization. Comparison of analytic with computational results over a wide range of parameters shows excellent agreement.

  6. Measurement of Henry's Law Constants Using Internal Standards: A Quantitative GC Experiment for the Instrumental Analysis or Environmental Chemistry Laboratory

    ERIC Educational Resources Information Center

    Ji, Chang; Boisvert, Susanne M.; Arida, Ann-Marie C.; Day, Shannon E.

    2008-01-01

    An internal standard method applicable to undergraduate instrumental analysis or environmental chemistry laboratory has been designed and tested to determine the Henry's law constants for a series of alkyl nitriles. In this method, a mixture of the analytes and an internal standard is prepared and used to make a standard solution (organic solvent)…

  7. An analytical study of physical models with inherited temporal and spatial memory

    NASA Astrophysics Data System (ADS)

    Jaradat, Imad; Alquran, Marwan; Al-Khaled, Kamel

    2018-04-01

    Du et al. (Sci. Rep. 3, 3431 (2013)) demonstrated that the fractional derivative order can be physically interpreted as a memory index by fitting test data of memory phenomena. The aim of this work is to study analytically the joint effect of the memory index on the time and space coordinates simultaneously. For this purpose, we introduce a novel bivariate fractional power series expansion accompanied by twofold fractional derivatives of orders α, β ∈ (0, 1]. Further, some convergence criteria concerning our expansion are presented, and an analog of the well-known bivariate Taylor's formula in the sense of mixed fractional derivatives is obtained. Finally, in order to show the functionality and efficiency of this expansion, we employ the corresponding Taylor's series method to obtain closed-form solutions of various physical models with inherited time and space memory.

  8. Triangular dislocation: an analytical, artefact-free solution

    NASA Astrophysics Data System (ADS)

    Nikkhoo, Mehdi; Walter, Thomas R.

    2015-05-01

    Displacements and stress-field changes associated with earthquakes, volcanoes, landslides and human activity are often simulated using numerical models in an attempt to understand the underlying processes and their governing physics. The application of elastic dislocation theory to these problems, however, may be biased because of numerical instabilities in the calculations. Here, we present a new method that is free of artefact singularities and numerical instabilities in analytical solutions for triangular dislocations (TDs) in both full-space and half-space. We apply the method to both the displacement and the stress fields. The entire 3-D Euclidean space ℝ³ is divided into two complementary subspaces, in the sense that in each one, a particular analytical formulation fulfils the requirements for the ideal, artefact-free solution for a TD. The primary advantage of the presented method is that the development of our solutions involves neither numerical approximations nor series expansion methods. As a result, the final outputs are independent of the scale of the input parameters, including the size and position of the dislocation as well as its corresponding slip vector components. Our solutions are therefore well suited for application at various scales in geoscience, physics and engineering. We validate the solutions through comparison to other well-known analytical methods and provide the MATLAB codes.

  9. SociAL Sensor Analytics: Measuring Phenomenology at Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corley, Courtney D.; Dowling, Chase P.; Rose, Stuart J.

    The objective of this paper is to present a system for interrogating immense social media streams through analytical methodologies that characterize topics and events critical to tactical and strategic planning. First, we propose a conceptual framework for interpreting social media as a sensor network. Time-series models and topic clustering algorithms are used to implement this concept into a functioning analytical system. Next, we address two scientific challenges: 1) to understand, quantify, and baseline the phenomenology of social media at scale, and 2) to develop analytical methodologies to detect and investigate events of interest. This paper then documents computational methods and reports experimental findings that address these challenges. Ultimately, the ability to process billions of social media posts per week over a period of years enables the identification of patterns and predictors of tactical and strategic concerns at an unprecedented rate through SociAL Sensor Analytics (SALSA).

  10. Visual Analytics of integrated Data Systems for Space Weather Purposes

    NASA Astrophysics Data System (ADS)

    Rosa, Reinaldo; Veronese, Thalita; Giovani, Paulo

    Analysis of information from multiple data sources obtained through high-resolution instrumental measurements has become a fundamental task in all scientific areas. The development of expert methods able to treat such multi-source data systems, with both large variability and measurement extension, is a key to studying complex scientific phenomena, especially those related to systemic analysis in space and environmental sciences. In this talk, we present a time series generalization introducing the concept of a generalized numerical lattice, which represents a discrete sequence of temporal measures for a given variable. In this novel representation, each generalized numerical lattice carries post-analytical data information. We define a generalized numerical lattice as a set of three parameters representing the following data properties: dimensionality, size and post-analytical measure (e.g., the autocorrelation, Hurst exponent, etc.) [1]. From this generalization, any multi-source database can be reduced to a closed set of classified time series in spatiotemporal generalized dimensions. As a case study, we show a preliminary application in space science data, highlighting the possibility of a real-time analysis expert system. In this particular application, we have selected and analyzed, using detrended fluctuation analysis (DFA), several decimetric solar bursts associated with X-class flares. The association with geomagnetic activity is also reported. The DFA method is performed in the framework of a radio burst automatic monitoring system. Our results may characterize the evolution of the variability pattern by computing the DFA scaling exponent while scanning the time series with a short window before the extreme event [2]. For the first time, the application of systematic fluctuation analysis for space weather purposes is presented. The prototype for visual analytics is implemented in the Compute Unified Device Architecture (CUDA) by using K20 Nvidia graphics processing units (GPUs) to reduce the integrated analysis runtime. [1] Veronese et al. doi:10.6062/jcis.2009.01.02.0021, 2010. [2] Veronese et al. doi:10.1016/j.jastp.2010.09.030, 2011.
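    The DFA step the abstract relies on can be sketched in a few lines. This is a minimal first-order DFA, not the authors' monitoring system; the window sizes and white-noise test signal are illustrative choices (uncorrelated noise should give a scaling exponent near 0.5).

```python
import numpy as np

def dfa_exponent(x, scales):
    """First-order detrended fluctuation analysis: scaling exponent alpha
    from a log-log fit of the fluctuation function F(s) vs window size s."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)         # non-overlapping windows
        t = np.arange(s)
        ms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)       # local linear trend
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(0)
white = rng.standard_normal(20000)
print(dfa_exponent(white, [16, 32, 64, 128, 256]))   # ~0.5 for uncorrelated noise
```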

  11. On the comparison of perturbation-iteration algorithm and residual power series method to solve fractional Zakharov-Kuznetsov equation

    NASA Astrophysics Data System (ADS)

    Şenol, Mehmet; Alquran, Marwan; Kasmaei, Hamed Daei

    2018-06-01

    In this paper, we present an analytic-approximate solution of the time-fractional Zakharov-Kuznetsov equation. This model describes the behavior of weakly nonlinear ion acoustic waves in a plasma bearing cold ions and hot isothermal electrons in the presence of a uniform magnetic field. Basic definitions of fractional derivatives are given in the Caputo sense. The perturbation-iteration algorithm (PIA) and the residual power series method (RPSM) are applied to solve this equation successfully. The convergence analysis is also presented for both methods. Numerical results are given and compared with the exact solutions. The comparison reveals that both methods are competitive, powerful, reliable, simple to use and ready to apply to a wide range of fractional partial differential equations.

  12. A Mathematica program for the approximate analytical solution to a nonlinear undamped Duffing equation by a new approximate approach

    NASA Astrophysics Data System (ADS)

    Wu, Dongmei; Wang, Zhongcheng

    2006-03-01

    According to Mickens [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563], the general HB (harmonic balance) method is an approximation to the convergent Fourier series representation of the periodic solution of a nonlinear oscillator and not an approximation to an expansion in terms of a small parameter. Consequently, for a nonlinear undamped Duffing equation with a driving force B cos(ωx), to find a periodic solution when the fundamental frequency is identical to ω, the corresponding Fourier series can be written as y˜(x) = ∑_{n=1}^{m} a_{2n-1} cos[(2n-1)ωx]. How to calculate the coefficients of the Fourier series efficiently with a computer program is still an open problem. In the HB method, by substituting the approximation y˜(x) into the force equation, expanding the resulting expression into a trigonometric series, and then setting the coefficients of the resulting lowest-order harmonics to zero, one can obtain approximate coefficients of the approximation y˜(x) [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563]. But for nonlinear differential equations such as the Duffing equation, it is very difficult to construct higher-order analytical approximations, because the HB method requires solving a set of algebraic equations for a large number of unknowns with very complex nonlinearities. To overcome the difficulty, forty years ago Urabe derived a computational method for the Duffing equation based on the Galerkin procedure [M. Urabe, A. Reiter, Numerical computation of nonlinear forced oscillations by Galerkin's procedure, J. Math. Anal. Appl. 14 (1966) 107-140]. Dooren obtained an approximate solution of the Duffing oscillator with a special set of parameters by using Urabe's method [R. van Dooren, Stabilization of Cowell's classic finite difference method for numerical integration, J. Comput. Phys. 16 (1974) 186-192].
    In this paper, in the frame of the general HB method, we present a new iteration algorithm to calculate the coefficients of the Fourier series. With this new method, the iteration procedure starts with a(x)cos(ωx)+b(x)sin(ωx), and the accuracy may be improved gradually as new coefficients a1, a2, … are determined automatically in a one-by-one manner. At every stage of the calculation, we need only solve a cubic equation. Using this new algorithm, we develop a Mathematica program, which demonstrates the following main advantages over the previous HB method: (1) it avoids solving a set of associated nonlinear equations; (2) it is easier to implement in a computer program, and it produces a highly accurate solution with an analytical expression efficiently. It is interesting to find that, generally, for a given set of parameters, a nonlinear Duffing equation can have three independent oscillation modes. For some sets of parameters, it can have two modes with complex displacement and one with real displacement; in other cases, it can have three modes, all of them having real displacement. Therefore, we can divide the parameters into two classes according to the solution property: those for which there is only one mode with real displacement, and those for which there are three modes with real displacement. This program should be useful for studying the dynamically periodic behavior of a Duffing oscillator and can provide an approximate analytical solution with high accuracy for testing the error behavior of newly developed numerical methods over a wide range of parameters. Program summary Title of program: AnalyDuffing.nb Catalogue identifier: ADWR_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWR_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N.
    Ireland Licensing provisions: none Computer for which the program is designed and others on which it has been tested: the program has been designed for a microcomputer and has been tested on a microcomputer. Computers: IBM PC Operating systems under which the program has been tested: Windows XP Programming language used: Mathematica 4.2, 5.0 and 5.1 No. of lines in distributed program, including test data, etc.: 23 663 No. of bytes in distributed program, including test data, etc.: 152 321 Distribution format: tar.gz Memory required to execute with typical data: 51 712 bytes No. of processors used: 1 Has the code been vectorized?: no Peripherals used: none Program Library subprograms used: none Nature of physical problem: To find an approximate solution with analytical expressions for the undamped nonlinear Duffing equation with a periodic driving force when the fundamental frequency is identical to that of the driving force. Method of solution: In the frame of the general HB method, by using a new iteration algorithm to calculate the coefficients of the Fourier series, we can obtain an approximate analytical solution with high accuracy efficiently. Restrictions on the complexity of the problem: For problems with a large driving frequency, the convergence may be a little slow, because more iterations are needed. Typical running time: several seconds Unusual features of the program: For an undamped Duffing equation, it can provide all the solutions or oscillation modes with real displacement for any interesting parameters, to the required accuracy, efficiently. The program can be used to study the dynamically periodic behavior of a nonlinear oscillator, and can provide a high-accuracy approximate analytical solution for developing high-accuracy numerical methods.
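    The lowest-order harmonic-balance step behind the cubic equation mentioned above can be sketched as follows. For an undamped Duffing equation of the illustrative form y'' + y + εy³ = B cos(ωx) (the paper's exact equation form and parameter values may differ), substituting the trial solution y = A cos(ωx) and balancing the cos(ωx) harmonic gives a cubic in the amplitude A, whose real roots are the oscillation modes — one or three, matching the two parameter classes described in the abstract.

```python
import numpy as np

# Harmonic balance at lowest order for y'' + y + eps*y**3 = B*cos(w*x):
# with y = A*cos(w*x), using cos^3 = (3*cos + cos(3.))/4 and keeping only
# the cos(w*x) harmonic gives (3/4)*eps*A**3 + (1 - w**2)*A - B = 0.
def hb_amplitudes(eps, w, B):
    roots = np.roots([0.75 * eps, 0.0, 1.0 - w**2, -B])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# Parameters chosen arbitrarily for illustration; this set yields three
# real-amplitude modes.
print(hb_amplitudes(eps=1.0, w=2.0, B=1.0))
```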

  13. A globally convergent and closed analytical solution of the Blasius equation with beneficial applications

    NASA Astrophysics Data System (ADS)

    Zheng, Jun; Han, Xinyue; Wang, ZhenTao; Li, Changfeng; Zhang, Jiazhong

    2017-06-01

    For about a century, people have been trying to find a globally convergent and closed analytical solution (CAS) of the Blasius equation (BE). In this paper, we propose a solution that formally satisfies the equation and can be parametrically expressed by two power series. Some analytical results for the laminar boundary layer of a flat plate that were not given analytically in former studies, e.g. the thickness of the boundary layer and higher-order derivatives, can be obtained based on this solution. Besides, the heat transfer in the laminar boundary layer of a flat plate with constant temperature can also be formulated analytically. In particular, the solution of the singular situation with Prandtl number Pr = 0, which seemed impossible to analyze in prior studies, can be given analytically. The method for finding the CAS of the Blasius equation is also applied to the problem of boundary-layer regulation through wall injection and slip velocity on the wall surface.

  14. Volterra series truncation and kernel estimation of nonlinear systems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Billings, S. A.

    2017-02-01

    The Volterra series model is a direct generalisation of the linear convolution integral and is capable of displaying the intrinsic features of a nonlinear system in a simple and easy-to-apply way. Nonlinear system analysis using Volterra series is normally based on the analysis of its frequency-domain kernels and a truncated description. But the estimation of Volterra kernels and the truncation of the Volterra series are coupled with each other. In this paper, a novel complex-valued orthogonal least squares algorithm is developed. The new algorithm provides a powerful tool to determine which terms should be included in the Volterra series expansion and to estimate the kernels, and thus solves the two problems together. The estimated results are compared with those determined using the analytical expressions of the kernels to validate the method. To further evaluate the effectiveness of the method, the physical parameters of the system are also extracted from the measured kernels. Simulation studies demonstrate that the new approach not only can truncate the Volterra series expansion and estimate the kernels of a weakly nonlinear system, but can also indicate the applicability of Volterra series analysis in a severely nonlinear system case.
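    The kernel-estimation half of the problem can be sketched for a truncated discrete second-order Volterra model. Note this uses plain ordinary least squares, not the paper's complex-valued orthogonal least-squares algorithm (which additionally ranks and selects terms); the memory length M and test system are illustrative assumptions.

```python
import numpy as np

# Fit y[n] = sum_i h1[i] x[n-i] + sum_{i<=j} h2[i,j] x[n-i] x[n-j]
# by ordinary least squares over a regressor matrix of delayed inputs
# and their pairwise products.
def fit_volterra2(x, y, M):
    cols, names = [], []
    for i in range(M):
        cols.append(np.roll(x, i)); names.append(('h1', i))
    for i in range(M):
        for j in range(i, M):
            cols.append(np.roll(x, i) * np.roll(x, j)); names.append(('h2', i, j))
    X = np.column_stack(cols)[M:]            # drop wrapped-around samples
    theta, *_ = np.linalg.lstsq(X, y[M:], rcond=None)
    return dict(zip(names, theta))

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
y = 0.8 * x + 0.3 * np.roll(x, 1) + 0.5 * x * np.roll(x, 1)   # known test system
k = fit_volterra2(x, y, M=2)
print(round(k[('h1', 0)], 3), round(k[('h1', 1)], 3), round(k[('h2', 0, 1)], 3))
```

    With noise-free data the identification is exact, recovering 0.8, 0.3 and 0.5; the orthogonal variant matters when noise and over-parameterization make term selection nontrivial.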

  15. Joint multifractal analysis based on the partition function approach: analytical analysis, numerical simulation and empirical application

    NASA Astrophysics Data System (ADS)

    Xie, Wen-Jie; Jiang, Zhi-Qiang; Gu, Gao-Feng; Xiong, Xiong; Zhou, Wei-Xing

    2015-10-01

    Many complex systems generate multifractal time series which are long-range cross-correlated. Numerous methods have been proposed to characterize the multifractal nature of these long-range cross-correlations. However, several important issues about these methods are not well understood, and most methods consider only one moment order. We study the joint multifractal analysis based on the partition function with two moment orders, which was initially invented to investigate fluid fields, and derive several important properties analytically. We apply the method numerically to binomial measures with multifractal cross-correlations and to bivariate fractional Brownian motions without multifractal cross-correlations. For binomial multifractal measures, the explicit expressions of the mass function, singularity strength and multifractal spectrum of the cross-correlations are derived, which agree excellently with the numerical results. We also apply the method to stock market indexes and unveil intriguing multifractality in the cross-correlations of index volatilities.
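    The single-measure partition-function step underlying this analysis can be sketched on a binomial measure, for which the scaling exponent is known in closed form. This is an illustration, not the authors' code; the convention here is χ_q(s) ∝ s^τ(q) with box size s normalized by the series length, and p, the cascade depth and box sizes are arbitrary choices.

```python
import numpy as np

def binomial_measure(p, levels):
    """Deterministic binomial cascade: split mass p / (1-p) at each level."""
    mu = np.array([1.0])
    for _ in range(levels):
        mu = np.concatenate([p * mu, (1 - p) * mu])
    return mu

def tau(q, mu, box_sizes):
    """Scaling exponent of the partition function chi_q(s) = sum of box masses^q."""
    chi = [np.sum(mu.reshape(-1, s).sum(axis=1) ** q) for s in box_sizes]
    n = len(mu)
    return np.polyfit(np.log(np.array(box_sizes) / n), np.log(chi), 1)[0]

p, levels, q = 0.3, 12, 2.0
mu = binomial_measure(p, levels)
est = tau(q, mu, [1, 2, 4, 8, 16, 32])
exact = -np.log2(p**q + (1 - p)**q)     # closed form for the binomial measure
print(round(est, 3), round(exact, 3))
```

    The joint analysis of the paper generalizes this by raising two coupled measures to two moment orders inside the same partition sum.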

  16. Fluctuation of similarity (FLUS) to detect transitions between distinct dynamical regimes in short time series

    PubMed Central

    Malik, Nishant; Marwan, Norbert; Zou, Yong; Mucha, Peter J.; Kurths, Jürgen

    2016-01-01

    A method to identify distinct dynamical regimes and transitions between those regimes in a short univariate time series was recently introduced [1], employing the computation of fluctuations in a measure of nonlinear similarity based on local recurrence properties. In the present work, we describe the details of the analytical relationships between this newly introduced measure and the well-known concepts of attractor dimensions and Lyapunov exponents. We show that the new measure depends linearly on the effective dimension of the attractor and that it measures the variations in the sum of the Lyapunov spectrum. To illustrate the practical usefulness of the method, we identify various types of dynamical transitions in different nonlinear models. We present testbed examples for the new method's robustness against noise and missing values in the time series. We also use this method to analyze time series of social dynamics, specifically the U.S. crime record time series from 1975 to 1993. Using this method, we find that dynamical complexity in robberies was influenced by the unemployment rate until the late 1980s. We have also observed a dynamical transition in homicide and robbery rates in the late 1980s and early 1990s, leading to an increase in the dynamical complexity of these rates. PMID:25019852

  17. Optimal sampling for radiotelemetry studies of spotted owl habitat and home range.

    Treesearch

    Andrew B. Carey; Scott P. Horton; Janice A. Reid

    1989-01-01

    Radiotelemetry studies of spotted owl (Strix occidentalis) ranges and habitat-use must be designed efficiently to estimate parameters needed for a sample of individuals sufficient to describe the population. Independent data are required by analytical methods and provide the greatest return of information per effort. We examined time series of...

  18. Optimal Low-Thrust Limited-Power Transfers between Arbitrary Elliptic Coplanar Orbits

    NASA Technical Reports Server (NTRS)

    da Silva Fernandes, Sandro; das Chagas Carvalho, Francisco

    2007-01-01

    In this work, a complete first order analytical solution, which includes the short periodic terms, for the problem of optimal low-thrust limited-power transfers between arbitrary elliptic coplanar orbits in a Newtonian central gravity field is obtained through Hamilton-Jacobi theory and a perturbation method based on Lie series.

  19. A new multi-domain method based on an analytical control surface for linear and second-order mean drift wave loads on floating bodies

    NASA Astrophysics Data System (ADS)

    Liang, Hui; Chen, Xiaobo

    2017-10-01

    A novel multi-domain method based on an analytical control surface is proposed by combining the use of the free-surface Green function and the Rankine source function. A cylindrical control surface is introduced to subdivide the fluid domain into external and internal domains. Unlike the traditional domain decomposition strategy or multi-block method, the control surface here is not panelized; on it, the velocity potential and normal velocity components are analytically expressed as a series of base functions composed of Laguerre functions in the vertical coordinate and Fourier series in the circumferential direction. The free-surface Green function is applied in the external domain, and the boundary integral equation is constructed on the control surface in the sense of Galerkin collocation, via integrating test functions orthogonal to the base functions over the control surface. The external solution gives rise to the so-called Dirichlet-to-Neumann [DN2] and Neumann-to-Dirichlet [ND2] relations on the control surface. Irregular frequencies, which depend only on the radius of the control surface, are present in the external solution; they are removed by extending the boundary integral equation to the interior free surface (a circular disc) on which a null normal derivative of the potential is imposed, with the dipole distribution expressed as a Fourier-Bessel expansion on the disc. In the internal domain, where the Rankine source function is adopted, new boundary integral equations are formulated. Point collocation is imposed over the body surface and free surface, while collocation of the Galerkin type is applied on the control surface. The present method is valid for the computation of both linear and second-order mean drift wave loads. Furthermore, the second-order mean drift force based on the middle-field formulation can be calculated analytically by using the coefficients of the Fourier-Laguerre expansion.

  20. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  1. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    NASA Astrophysics Data System (ADS)

    Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.

    2016-12-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  2. Rational approximations from power series of vector-valued meromorphic functions

    NASA Technical Reports Server (NTRS)

    Sidi, Avram

    1992-01-01

    Let F(z) be a vector-valued function, F: C yields C(sup N), which is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. In this work we developed vector-valued rational approximation procedures for F(z) by applying vector extrapolation methods to the sequence of partial sums of its Maclaurin series. We analyzed some of the algebraic and analytic properties of the rational approximations thus obtained, and showed that they were akin to Padé approximations. In particular, we proved a Koenig-type theorem concerning their poles and a de Montessus-type theorem concerning their uniform convergence. We showed how optimal approximations to multiple poles and to Laurent expansions about these poles can be constructed. Extensions of the procedures above and the accompanying theoretical results to functions defined in arbitrary linear spaces were also considered. One of the most interesting and immediate applications of the results of this work is to the matrix eigenvalue problem. In a forthcoming paper we exploit the developments of the present work to devise bona fide generalizations of the classical power method that are especially suitable for very large and sparse matrices. These generalizations can be used to approximate simultaneously several of the largest distinct eigenvalues and corresponding eigenvectors and invariant subspaces of arbitrary matrices, which may or may not be diagonalizable, and are very closely related to known Krylov subspace methods.
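    A scalar analogue of the extrapolation idea can be sketched with the Shanks transformation applied to the partial sums of a slowly convergent Maclaurin series; the vector extrapolation methods of the paper (e.g. MPE) generalize this to C^N-valued sequences. The test series, log(1+x) at x = 1, is a choice made here for illustration.

```python
import numpy as np

def shanks(s):
    """One pass of the Shanks transformation:
    S_n = (s_{n+1} s_{n-1} - s_n^2) / (s_{n+1} - 2 s_n + s_{n-1})."""
    s = np.asarray(s, dtype=float)
    num = s[2:] * s[:-2] - s[1:-1] ** 2
    den = s[2:] - 2 * s[1:-1] + s[:-2]
    return num / den

x = 1.0   # the log(1+x) series converges very slowly at x = 1
terms = [(-1) ** (n + 1) * x ** n / n for n in range(1, 12)]
partial = np.cumsum(terms)           # raw partial sums
twice = shanks(shanks(partial))      # two passes of acceleration
print(abs(partial[-1] - np.log(2)), abs(twice[-1] - np.log(2)))
# the twice-transformed value is orders of magnitude closer to ln 2
```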

  3. Time averaging, ageing and delay analysis of financial time series

    NASA Astrophysics Data System (ADS)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
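    The time averaged MSD the authors analyze is straightforward to compute; here is a sketch on a plain Brownian path (the log of geometric Brownian motion under Black-Scholes-Merton, up to drift), not on the Dow Jones data itself. The path length, lags and unit variance are illustrative choices.

```python
import numpy as np

def tamsd(x, lag):
    """Time averaged mean squared displacement at a given lag:
    average of (x(t + lag) - x(t))^2 over the whole trajectory."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

rng = np.random.default_rng(2)
bm = np.cumsum(rng.standard_normal(100000))   # unit-variance Brownian path
for lag in (10, 100, 1000):
    # for Brownian motion the TAMSD grows linearly in the lag, so the
    # ratio below stays near 1 (with statistical scatter at large lags)
    print(lag, round(tamsd(bm, lag) / lag, 2))
```

    The ageing and delay variants of the paper restrict the averaging window or offset it from the start of the record, but reuse this same basic estimator.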

  4. A Multilevel Multiset Time-Series Model for Describing Complex Developmental Processes

    PubMed Central

    Ma, Xin; Shen, Jianping

    2017-01-01

    The authors sought to develop an analytical platform where multiple sets of time series can be examined simultaneously. This multivariate platform capable of testing interaction effects among multiple sets of time series can be very useful in empirical research. The authors demonstrated that the multilevel framework can readily accommodate this analytical capacity. Given their intention to use the multilevel multiset time-series model to pursue complicated research purposes, their resulting model is relatively simple to specify, to run, and to interpret. These advantages make the adoption of their model relatively effortless as long as researchers have the basic knowledge and skills in working with multilevel growth modeling. With multiple potential extensions of their model, the establishment of this analytical platform for analysis of multiple sets of time series can inspire researchers to pursue far more advanced research designs to address complex developmental processes in reality. PMID:29881094

  5. Number series of atoms, interatomic bonds and interface bonds defining zinc-blende nanocrystals as function of size, shape and surface orientation: Analytic tools to interpret solid state spectroscopy data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    König, Dirk, E-mail: dirk.koenig@unsw.edu.au

    2016-08-15

    Semiconductor nanocrystals (NCs) experience stress and charge transfer by embedding materials or ligands and impurity atoms. In return, the environment of NCs experiences a NC stress response which may lead to matrix deformation and propagated strain. Up to now, there is no universal gauge to evaluate the stress impact on NCs and their response as a function of NC size d_NC. I deduce geometrical number series as analytical tools to obtain the number of NC atoms N_NC(d_NC[i]), bonds between NC atoms N_bnd(d_NC[i]) and interface bonds N_IF(d_NC[i]) for seven high-symmetry zinc-blende (zb) NCs with low-index faceting: {001} cubes, {111} octahedra, {110} dodecahedra, {001}-{111} pyramids, {111} tetrahedra, {111}-{001} quatrodecahedra and {001}-{111} quadrodecahedra. The fundamental insights into NC structures revealed here allow for major advancements in data interpretation and understanding of zb- and diamond-lattice based nanomaterials. The analytical number series can serve as a standard procedure for stress evaluation in solid state spectroscopy due to their deterministic nature, easy use and general applicability over a wide range of spectroscopy methods as well as NC sizes, forms and materials.
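    Closed-form number series of this kind can be checked by brute-force enumeration of the lattice. The sketch below counts atoms and internal bonds of a zinc-blende {001} cube of n³ conventional cells, working in units of a/4 so all sites are integer points; note the boundary convention (all atoms with coordinates in [0, 4n] included) is an assumption of this sketch, so the counts need not match the paper's series term for term.

```python
from itertools import product

def zb_cube(n):
    """Atoms and internal nearest-neighbour bonds of a zinc-blende {001}
    cube spanning n^3 conventional cells (coordinates in units of a/4)."""
    fcc = [(0, 0, 0), (0, 2, 2), (2, 0, 2), (2, 2, 0)]   # anion sublattice
    atoms = set()
    for cell in product(range(n + 1), repeat=3):
        for site in fcc:
            for off in ((0, 0, 0), (1, 1, 1)):           # cation offset a/4(1,1,1)
                p = tuple(4 * c + s + o for c, s, o in zip(cell, site, off))
                if all(0 <= q <= 4 * n for q in p):
                    atoms.add(p)
    # Each anion-cation bond is reached exactly once by stepping from the
    # anion sublattice along the four tetrahedral directions.
    anion_dirs = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
    bonds = sum(1 for a in atoms for d in anion_dirs
                if tuple(x + y for x, y in zip(a, d)) in atoms)
    return len(atoms), bonds

for n in (1, 2, 3):
    print(n, zb_cube(n))   # (atoms, bonds) per cube size
```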

  6. Coherent and partially coherent dark hollow beams with rectangular symmetry and paraxial propagation properties

    NASA Astrophysics Data System (ADS)

    Cai, Yangjian; Zhang, Lei

    2006-07-01

    A theoretical model is proposed to describe coherent dark hollow beams (DHBs) with rectangular symmetry. The electric field of a coherent rectangular DHB is expressed as a superposition of a finite series of fundamental Gaussian beams. Analytical propagation formulas for a coherent rectangular DHB passing through paraxial optical systems are derived in a tensor form. Furthermore, for the more general case, we propose a theoretical model to describe a partially coherent rectangular DHB. Analytical propagation formulas for a partially coherent rectangular DHB passing through paraxial optical systems are derived. The beam propagation factor (M² factor) for both coherent and partially coherent rectangular DHBs is studied. Numerical examples are given by using the derived formulas. Our models and method provide an effective way to describe and treat the propagation of coherent and partially coherent rectangular DHBs.

  7. Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors

    PubMed Central

    Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech

    2011-01-01

    Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aided sensors or prior knowledge of motion characteristics to remove the position drift resulting from integration of acceleration or velocity so as to obtain accurate position estimation. A method based on analytical integration has previously been developed to obtain an accurate position estimate of periodic or quasi-periodic motion from inertial sensors using prior knowledge of the motion but without using aided sensors. In this paper, a new method is proposed which employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge of the motion the proposed method requires is an approximate band of frequencies of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate and compare the performance of the proposed method with the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
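    A minimal sketch of the Fourier-linear-combiner idea that WFLC and BMFLC build on: the amplitudes of a fixed bank of sine/cosine references spanning an assumed frequency band are adapted by an LMS rule to track a quasi-periodic signal. All parameter values (band, frequency spacing, step size `mu`) are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def bmflc(signal, fs, f_lo, f_hi, df=0.1, mu=0.01):
        """Band-limited multiple Fourier linear combiner (LMS sketch).

        Models the signal as a sum of sin/cos terms at fixed frequencies
        spanning [f_lo, f_hi]; only the amplitudes are adapted.
        """
        freqs = np.arange(f_lo, f_hi + df, df)
        w = np.zeros(2 * len(freqs))            # adaptive sin/cos weights
        est = np.zeros_like(signal)
        for k, s in enumerate(signal):
            t = k / fs
            x = np.concatenate([np.sin(2 * np.pi * freqs * t),
                                np.cos(2 * np.pi * freqs * t)])  # references
            est[k] = w @ x
            w += 2 * mu * x * (s - est[k])      # LMS weight update
        return est

    # track a 2 Hz sinusoid with a combiner spanning 1-3 Hz
    fs = 100.0
    t = np.arange(0.0, 20.0, 1.0 / fs)
    s = np.sin(2 * np.pi * 2.0 * t)
    est = bmflc(s, fs, 1.0, 3.0)
    err = np.mean(np.abs(s[-200:] - est[-200:]))   # error after adaptation
    ```

    In the tremor-compensation literature such combiners are typically run on band-pass-filtered sensor data, which is the role of the linear filtering stage the abstract describes.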

  8. Subsonic flutter analysis addition to NASTRAN. [for use with CDC 6000 series digital computers

    NASA Technical Reports Server (NTRS)

    Doggett, R. V., Jr.; Harder, R. L.

    1973-01-01

    A subsonic flutter analysis capability has been developed for NASTRAN, and a developmental version of the program has been installed on the CDC 6000 series digital computers at the Langley Research Center. The flutter analysis is of the modal type, uses doublet lattice unsteady aerodynamic forces, and solves the flutter equations by using the k-method. Surface and one-dimensional spline functions are used to transform from the aerodynamic degrees of freedom to the structural degrees of freedom. Some preliminary applications of the method to a beamlike wing, a platelike wing, and a platelike wing with a folded tip are compared with existing experimental and analytical results.

  9. Analyzing chromatographic data using multilevel modeling.

    PubMed

    Wiczling, Paweł

    2018-06-01

    It is relatively easy to collect chromatographic measurements for a large number of analytes, especially with gradient chromatographic methods coupled with mass spectrometry detection. Such data often have a hierarchical or clustered structure. For example, analytes with similar hydrophobicity and dissociation constant tend to be more alike in their retention than a randomly chosen set of analytes. Multilevel models recognize the existence of such data structures by assigning a model for each parameter, with its parameters also estimated from data. In this work, a multilevel model is proposed to describe retention time data obtained from a series of wide linear organic modifier gradients of different gradient duration and different mobile phase pH for a large set of acids and bases. The multilevel model consists of (1) the same deterministic equation describing the relationship between retention time and analyte-specific and instrument-specific parameters, (2) covariance relationships relating various physicochemical properties of the analyte to chromatographically specific parameters through quantitative structure-retention relationship based equations, and (3) stochastic components of intra-analyte and interanalyte variability. The model was implemented in Stan, which provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods. Graphical abstract: Relationships between log k and MeOH content for acidic, basic, and neutral compounds with different log P. CI credible interval, PSA polar surface area.

  10. Phase walk analysis of leptokurtic time series.

    PubMed

    Schreiber, Korbinian; Modest, Heike I; Räth, Christoph

    2018-06-01

    Fourier phase information plays a key role in the quantified description of nonlinear data. We present a novel tool for time series analysis that identifies nonlinearities by sensitively detecting correlations among the Fourier phases. The method, called phase walk analysis, is based on well-established measures from random walk analysis, which are here applied to the unwrapped Fourier phases of time series. We provide an analytical description of its functionality and demonstrate its capabilities on systematically controlled leptokurtic noise. Hereby, we investigate the properties of leptokurtic time series and their influence on the Fourier phases of time series. The phase walk analysis is applied to measured and simulated intermittent time series, whose probability density distributions are approximated by power laws. We use the day-to-day returns of the Dow-Jones industrial average, a synthetic time series with tailored nonlinearities mimicking the power-law behavior of the Dow-Jones, and the acceleration of the wind at an Atlantic offshore site. Testing for nonlinearities by means of surrogates shows that the new method yields strong significances for nonlinear behavior. Due to the drastically decreased computing time as compared to embedding space methods, the number of surrogate realizations can be increased by orders of magnitude. Thereby, the probability distribution of the test statistics can be derived and parameterized very accurately, which allows for much more precise tests on nonlinearities.
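    A rough sketch of the core construction, assuming only what the abstract states (a random-walk-style cumulation over the unwrapped Fourier phases); the paper's actual test statistics and surrogate procedure are not reproduced here.

    ```python
    import numpy as np

    def phase_walk(x):
        """Cumulative walk over the unwrapped Fourier phases of a series.

        For linear (Gaussian) data the phases are effectively i.i.d.
        uniform, so the walk behaves like an ordinary random walk;
        phase correlations caused by nonlinearity show up as systematic
        deviations from random-walk scaling.
        """
        phases = np.angle(np.fft.rfft(x))[1:]    # drop the DC phase
        steps = np.unwrap(phases)                 # unwrapped phase sequence
        incr = np.diff(steps)
        walk = np.cumsum(incr - incr.mean())      # zero-mean phase walk
        return walk

    rng = np.random.default_rng(0)
    gauss = rng.normal(size=4096)                 # linear reference data
    walk = phase_walk(gauss)
    # excursion of the walk relative to random-walk sqrt(n) scaling
    scale = np.abs(walk).max() / np.sqrt(len(walk))
    ```

    In a surrogate test, `scale` (or a related walk statistic) computed on the data would be compared against its distribution over phase-randomized surrogates.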

  11. Fluctuation of similarity to detect transitions between distinct dynamical regimes in short time series

    NASA Astrophysics Data System (ADS)

    Malik, Nishant; Marwan, Norbert; Zou, Yong; Mucha, Peter J.; Kurths, Jürgen

    2014-06-01

    A method to identify distinct dynamical regimes and transitions between those regimes in a short univariate time series was recently introduced [N. Malik et al., Europhys. Lett. 97, 40009 (2012), 10.1209/0295-5075/97/40009], employing the computation of fluctuations in a measure of nonlinear similarity based on local recurrence properties. In this work, we describe the details of the analytical relationships between this newly introduced measure and the well-known concepts of attractor dimensions and Lyapunov exponents. We show that the new measure has linear dependence on the effective dimension of the attractor and it measures the variations in the sum of the Lyapunov spectrum. To illustrate the practical usefulness of the method, we identify various types of dynamical transitions in different nonlinear models. We present testbed examples for the new method's robustness against noise and missing values in the time series. We also use this method to analyze time series of social dynamics, specifically an analysis of the US crime record time series from 1975 to 1993. Using this method, we find that dynamical complexity in robberies was influenced by the unemployment rate until the late 1980s. We have also observed a dynamical transition in homicide and robbery rates in the late 1980s and early 1990s, leading to an increase in the dynamical complexity of these rates.

  12. Analytical model of cracking due to rebar corrosion expansion in concrete considering the structure internal force

    NASA Astrophysics Data System (ADS)

    Lin, Xiangyue; Peng, Minli; Lei, Fengming; Tan, Jiangxian; Shi, Huacheng

    2017-12-01

    Based on the assumptions of uniform corrosion and linear elastic expansion, an analytical model of cracking due to rebar corrosion expansion in concrete was established that accounts for the internal force of the structure. Then, by means of the complex variable function theory and series expansion techniques established by Muskhelishvili, the corresponding stress component functions of the concrete around the reinforcement were obtained. A comparative analysis was also conducted between a numerical simulation model and the present model. The results of the two methods were consistent with each other, with a numerical deviation of less than 10%, showing that the analytical model established in this paper is reliable.

  13. Buckling Testing and Analysis of Space Shuttle Solid Rocket Motor Cylinders

    NASA Technical Reports Server (NTRS)

    Weidner, Thomas J.; Larsen, David V.; McCool, Alex (Technical Monitor)

    2002-01-01

    A series of full-scale buckling tests were performed on the space shuttle Reusable Solid Rocket Motor (RSRM) cylinders. The tests were performed to determine the buckling capability of the cylinders and to provide data for analytical comparison. A nonlinear ANSYS Finite Element Analysis (FEA) model was used to represent and evaluate the testing. Analytical results demonstrated excellent correlation to test results, predicting the failure load within 5%. The analytical value was on the conservative side, predicting a lower failure load than was applied to the test. The resulting study and analysis indicated the important parameters for FEA to accurately predict buckling failure. The resulting method was subsequently used to establish the pre-launch buckling capability of the space shuttle system.

  14. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  15. From analytical solutions of solute transport equations to multidimensional time-domain random walk (TDRW) algorithms

    NASA Astrophysics Data System (ADS)

    Bodin, Jacques

    2015-03-01

    In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
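    A hedged sketch of the 1-D TDRW idea the study builds on: for 1-D advection-dispersion, the time a particle needs to travel a distance L in a homogeneous medium follows an inverse-Gaussian (Wald) first-passage law with mean L/v and shape L²/(2D), so travel times can be drawn directly instead of stepping through space. Parameter values below are illustrative, not from the paper.

    ```python
    import numpy as np

    # Illustrative parameters: travel distance L, velocity v, dispersion D.
    L, v, D = 10.0, 1.0, 0.1
    mean = L / v               # mean travel time of the first-passage law
    shape = L**2 / (2 * D)     # inverse-Gaussian shape parameter

    rng = np.random.default_rng(1)
    # numpy's Wald distribution is the inverse Gaussian with these params
    times = rng.wald(mean, shape, size=100_000)

    # Peclet number L*v/D = 100: advection-dominated, so the sample mean
    # of the travel times should sit close to L/v.
    sample_mean = times.mean()
    ```

    In a heterogeneous medium, as the abstract describes, such draws would be chained across the intermediate checkpoints on the interfaces between homogeneous pieces.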

  16. CoinCalc-A new R package for quantifying simultaneities of event series

    NASA Astrophysics Data System (ADS)

    Siegmund, Jonatan F.; Siegmund, Nicole; Donner, Reik V.

    2017-01-01

    We present the new R package CoinCalc for performing event coincidence analysis (ECA), a novel statistical method to quantify the simultaneity of events contained in two series of observations, either as simultaneous or lagged coincidences within a user-specified temporal tolerance window. The package also provides different analytical as well as surrogate-based significance tests (valid under different assumptions about the nature of the observed event series) as well as an intuitive visualization of the identified coincidences. We demonstrate the usage of CoinCalc based on two typical geoscientific example problems addressing the relationship between meteorological extremes and plant phenology as well as that between soil properties and land cover.
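    The basic precursor coincidence rate of ECA can be sketched as follows. This is not CoinCalc's interface, and its significance tests are not reproduced; the function name and parameters here are illustrative.

    ```python
    import numpy as np

    def coincidence_rate(a, b, delta_t, tau=0.0):
        """Precursor coincidence rate of ECA (sketch).

        Fraction of events in series `a` that are preceded, within the
        lag window (tau, tau + delta_t], by at least one event in
        series `b`. Event series are given as arrays of event times.
        """
        a, b = np.asarray(a, float), np.asarray(b, float)
        hits = 0
        for t in a:
            lags = t - b                       # positive lag: b before a
            if np.any((lags >= tau) & (lags <= tau + delta_t)):
                hits += 1
        return hits / len(a)

    # every event in `a` follows an event in `b` by 0.5 time units
    b = np.arange(0.0, 10.0, 1.0)
    a = b + 0.5
    r = coincidence_rate(a, b, delta_t=1.0)        # window wide enough
    r_narrow = coincidence_rate(a, b, delta_t=0.1) # window too narrow
    ```

    Significance is then typically assessed against the rate expected for independent (e.g. Poisson) event series, analytically or via surrogates.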

  17. Analytical Modeling of a Double-Sided Flux Concentrating E-Core Transverse Flux Machine with Pole Windings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muljadi, Eduard; Hasan, Iftekhar; Husain, Tausif

    In this paper, a nonlinear analytical model based on the Magnetic Equivalent Circuit (MEC) method is developed for a double-sided E-Core Transverse Flux Machine (TFM). The proposed TFM has a cylindrical rotor, sandwiched between E-core stators on both sides. Ferrite magnets are used in the rotor with a flux concentrating design to attain high airgap flux density, better magnet utilization, and higher torque density. The MEC model was developed using a series-parallel combination of flux tubes to estimate the reluctance network for different parts of the machine, including air gaps, permanent magnets, and the stator and rotor ferromagnetic materials, in a two-dimensional (2-D) frame. An iterative Gauss-Seidel method is integrated with the MEC model to capture the effects of magnetic saturation. A single phase, 1 kW, 400 rpm E-Core TFM is analytically modeled and its results for flux linkage, no-load EMF, and generated torque are verified with Finite Element Analysis (FEA). The analytical model significantly reduces the computation time while estimating results with less than 10 percent error.
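    The linear core of such a solve is plain Gauss-Seidel iteration; in the machine model the nodal matrix would be rebuilt each pass as the reluctances saturate. A toy, diagonally dominant 3-node system stands in here for the real reluctance network.

    ```python
    import numpy as np

    def gauss_seidel(A, b, iters=200, tol=1e-10):
        """Plain Gauss-Seidel iteration for A x = b.

        Sweeps through the unknowns, updating each one in place using
        the latest values of the others; converges for diagonally
        dominant systems such as nodal reluctance networks.
        """
        x = np.zeros_like(b, dtype=float)
        for _ in range(iters):
            x_old = x.copy()
            for i in range(len(b)):
                s = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
                x[i] = (b[i] - s) / A[i, i]
            if np.max(np.abs(x - x_old)) < tol:
                break
        return x

    # toy 3-node network: diagonally dominant nodal matrix, source vector b
    A = np.array([[ 4.0, -1.0, -1.0],
                  [-1.0,  4.0, -1.0],
                  [-1.0, -1.0,  4.0]])
    b = np.array([1.0, 2.0, 3.0])
    x = gauss_seidel(A, b)
    ```

    Wrapping this in an outer loop that re-evaluates the saturable reluctances from the latest fluxes gives the nonlinear iteration the abstract describes.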

  18. Annual banned-substance review: analytical approaches in human sports drug testing.

    PubMed

    Thevis, Mario; Kuuranne, Tiia; Walpurgis, Katja; Geyer, Hans; Schänzer, Wilhelm

    2016-01-01

    The aim of improving anti-doping efforts is predicated on several different pillars, including, amongst others, optimized analytical methods. These commonly result from exploiting most recent developments in analytical instrumentation as well as research data on elite athletes' physiology in general, and pharmacology, metabolism, elimination, and downstream effects of prohibited substances and methods of doping, in particular. The need for frequent and adequate adaptations of sports drug testing procedures has been incessant, largely due to the uninterrupted emergence of new chemical entities but also due to the apparent use of established or even obsolete drugs for reasons other than therapeutic means, such as assumed beneficial effects on endurance, strength, and regeneration capacities. Continuing the series of annual banned-substance reviews, literature concerning human sports drug testing published between October 2014 and September 2015 is summarized and reviewed in reference to the content of the 2015 Prohibited List as issued by the World Anti-Doping Agency (WADA), with particular emphasis on analytical approaches and their contribution to enhanced doping controls. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Approximate analytical solutions in the analysis of thin elastic plates

    NASA Astrophysics Data System (ADS)

    Goloskokov, Dmitriy P.; Matrosov, Alexander V.

    2018-05-01

    Two approaches to the construction of approximate analytical solutions for bending of a rectangular thin plate are presented: the superposition method based on the method of initial functions (MIF) and the one built using the Green's function in the form of orthogonal series. Comparison of two approaches is carried out by analyzing a square plate clamped along its contour. Behavior of the moment and the shear force in the neighborhood of the corner points is discussed. It is shown that both solutions give identical results at all points of the plate except for the neighborhoods of the corner points. There are differences in the values of bending moments and generalized shearing forces in the neighborhoods of the corner points.
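    For a flavor of orthogonal-series plate solutions, here is the classical Navier double-sine series for a *simply supported* square plate under uniform load, a simpler boundary condition than the clamped plate analyzed above (context only, not the paper's MIF or Green's-function construction).

    ```python
    import numpy as np

    # Navier solution for a simply supported square plate, uniform load q:
    # w(x, y) = (16 q / (pi^6 D)) * sum over odd m, n of
    #           sin(m pi x / a) sin(n pi y / a) / (m n (m^2 + n^2)^2)
    a, q, D = 1.0, 1.0, 1.0     # side length, load, flexural rigidity

    w_center = 0.0
    for m in range(1, 32, 2):    # odd harmonics only (even ones vanish)
        for n in range(1, 32, 2):
            w_center += (np.sin(m * np.pi / 2) * np.sin(n * np.pi / 2)
                         / (m * n * (m**2 + n**2)**2))
    w_center *= 16 * q * a**4 / (np.pi**6 * D)

    # dimensionless center deflection; classical tabulated value ~ 0.00406
    coef = w_center / (q * a**4 / D)
    ```

    The rapid decay of the terms, as 1/(m n (m² + n²)²), is why a handful of harmonics already reproduces the tabulated coefficient; near corners and for clamped edges, convergence is the delicate issue the abstract discusses.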

  20. Extending existing structural identifiability analysis methods to mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Forecasting hotspots using predictive visual analytics approach

    DOEpatents

    Maciejewski, Ross; Hafen, Ryan; Rudolph, Stephen; Cleveland, William; Ebert, David

    2014-12-30

    A method for forecasting hotspots is provided. The method may include the steps of receiving input data at an input of the computational device, generating a temporal prediction based on the input data, generating a geospatial prediction based on the input data, and generating output data based on the temporal and geospatial predictions. The output data may be configured to display at least one user interface at an output of the computational device.

  2. Approximate Solution of Time-Fractional Advection-Dispersion Equation via Fractional Variational Iteration Method

    PubMed Central

    İbiş, Birol

    2014-01-01

    This paper aims to obtain the approximate solution of the time-fractional advection-dispersion equation (FADE) involving Jumarie's modification of the Riemann-Liouville derivative by the fractional variational iteration method (FVIM). FVIM provides an analytical approximate solution in the form of a convergent series. Some examples are given, and the results indicate that the FVIM is highly accurate, efficient, and convenient for solving time-fractional advection-dispersion equations. PMID:24578662

  3. Library Statistics of Colleges and Universities, 1963-1964. Analytic Report.

    ERIC Educational Resources Information Center

    Samore, Theodore

    The series of analytic reports on management and salary data of the academic libraries, paralleling the series titled "Library Statistics of Colleges and Universities, Institutional Data," is continued by this publication. The statistical tables of this report are of value to administrators, librarians, and others because: (1) they help…

  4. On analytic modeling of lunar perturbations of artificial satellites of the earth

    NASA Astrophysics Data System (ADS)

    Lane, M. T.

    1989-06-01

    Two different procedures for analytically modeling the effects of the moon's direct gravitational force on artificial earth satellites are discussed from theoretical and numerical viewpoints. One is developed using classical series expansions of inclination and eccentricity for both the satellite and the moon, and the other employs the method of averaging. Both solutions are seen to have advantages, but it is shown that while the former is more accurate in special situations, the latter is quicker and more practical for the general orbit determination problem where observed data are used to correct the orbit in near real time.

  5. Determination of gap solution and critical temperature in doped graphene superconductivity

    NASA Astrophysics Data System (ADS)

    Xu, Chenmei; Yang, Yisong

    2017-04-01

    It is shown that the gap solution and critical transition temperature are significantly enhanced by doping in a recently developed BCS formalism for graphene superconductivity, in such a way that a positive gap and transition temperature both occur for arbitrary pairing coupling as long as doping is present. The analytic construction of the BCS gap and transition temperature offers highly effective, globally convergent iterative methods for the computation of these quantities. A series of numerical examples are presented as illustrations, which are in agreement with the theoretical and experimental results obtained in the physics literature and consolidate the analytic understanding achieved.
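    To illustrate the kind of globally convergent iteration the abstract refers to, here is a fixed-point solve of the textbook zero-temperature BCS gap equation, 1 = λ·asinh(ω/Δ) in units of the energy cutoff. This is standard BCS, not the graphene-specific doped formalism of the paper.

    ```python
    import numpy as np

    lam, omega = 0.5, 1.0    # illustrative coupling and cutoff

    # fixed-point iteration: Delta <- lam * Delta * asinh(omega / Delta)
    delta = 1.0              # any positive starting value
    for _ in range(200):
        delta = lam * delta * np.arcsinh(omega / delta)

    # closed-form fixed point of the same equation for comparison
    exact = omega / np.sinh(1.0 / lam)
    ```

    The iteration is a contraction near the fixed point (the update map has slope below one there), which is the mechanism behind the global convergence claimed for the more elaborate doped-graphene equations.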

  6. Design of Passive Power Filter for Hybrid Series Active Power Filter using Estimation, Detection and Classification Method

    NASA Astrophysics Data System (ADS)

    Swain, Sushree Diptimayee; Ray, Pravat Kumar; Mohanty, K. B.

    2016-06-01

    This research paper presents the design of a shunt Passive Power Filter (PPF) in a Hybrid Series Active Power Filter (HSAPF) that employs a novel analytic methodology superior to FFT analysis. This novel approach consists of the estimation, detection, and classification of signals. The proposed method is applied to estimate, detect, and classify power quality (PQ) disturbances such as harmonics. The work deals with three methods: harmonic detection through the wavelet transform method, harmonic estimation by the Kalman filter algorithm, and harmonic classification by the decision tree method. Among the mother wavelets available for the wavelet transform method, db8 is selected as the most suitable because of its strength in capturing transient response and its compact oscillation in the frequency domain. In the harmonic compensation process, the detected harmonic is compensated through the Hybrid Series Active Power Filter (HSAPF) based on Instantaneous Reactive Power Theory (IRPT). The efficacy of the proposed method is verified in the MATLAB/SIMULINK environment as well as with an experimental setup. The obtained results confirm the superiority of the proposed methodology over FFT analysis. This newly proposed PPF makes the conventional HSAPF more robust and stable.

  7. Multi-center evaluation of analytical performance of the Beckman Coulter AU5822 chemistry analyzer.

    PubMed

    Zimmerman, M K; Friesen, L R; Nice, A; Vollmer, P A; Dockery, E A; Rankin, J D; Zmuda, K; Wong, S H

    2015-09-01

    Our three academic institutions, Indiana University, Northwestern Memorial Hospital, and Wake Forest, were among the first in the United States to implement the Beckman Coulter AU5822 series chemistry analyzers. We undertook this post-hoc multi-center study by merging our data to determine performance characteristics and the impact of methodology changes on analyte measurement. We independently completed performance validation studies including precision, linearity/analytical measurement range, method comparison, and reference range verification. Complete data sets were available from at least one institution for 66 analytes with the following groups: 51 from all three institutions, and 15 from 1 or 2 institutions for a total sample size of 12,064. Precision was similar among institutions. Coefficients of variation (CV) were <10% for 97%. Analytes with CVs >10% included direct bilirubin and digoxin. All analytes exhibited linearity over the analytical measurement range. Method comparison data showed slopes between 0.900-1.100 for 87.9% of the analytes. Slopes for amylase, tobramycin and urine amylase were <0.8; the slope for lipase was >1.5, due to known methodology or standardization differences. Consequently, reference ranges of amylase, urine amylase and lipase required only minor or no modification. The four AU5822 analyzers independently evaluated at three sites showed consistent precision, linearity, and correlation results. Since installation, the test results have been well received by clinicians from all three institutions. Copyright © 2015. Published by Elsevier Inc.

  8. A Review and Introduction to Higher Education Price Response Studies. Working Paper Series.

    ERIC Educational Resources Information Center

    Chisholm, Mark; Cohen, Bethaviva

    Background information needed to understand the literature on the impact of price on college attendance (i.e. price-response literature) is provided. After briefly introducing price theory and its use in demand studies in higher education, the major expository articles are reviewed, and major analytical methods used by researchers are examined.…

  9. Online Learning Era: Exploring the Most Decisive Determinants of MOOCs in Taiwanese Higher Education

    ERIC Educational Resources Information Center

    Hsieh, Ming-Yuan

    2016-01-01

    Because the development of Taiwanese Massive Open Online Course (MOOC) websites is currently flourishing, this research employs a series of analytical cross-measurements, combining the Quality Function Deployment House of Quality (QFD-HOQ) model and Multiple Criteria Decision Making (MCDM) methodology, to cross-evaluate the weighted…

  10. Detrended partial cross-correlation analysis of two nonstationary time series influenced by common external forces

    NASA Astrophysics Data System (ADS)

    Qian, Xi-Yuan; Liu, Ya-Min; Jiang, Zhi-Qiang; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene

    2015-06-01

    When common factors strongly influence two power-law cross-correlated time series recorded in complex natural or social systems, using detrended cross-correlation analysis (DCCA) without considering these common factors will bias the results. We use detrended partial cross-correlation analysis (DPXA) to uncover the intrinsic power-law cross correlations between two simultaneously recorded time series in the presence of nonstationarity after removing the effects of other time series acting as common forces. The DPXA method is a generalization of the detrended cross-correlation analysis that takes into account partial correlation analysis. We demonstrate the method by using bivariate fractional Brownian motions contaminated with a fractional Brownian motion. We find that the DPXA is able to recover the analytical cross Hurst indices, and thus the multiscale DPXA coefficients are a viable alternative to the conventional cross-correlation coefficient. We demonstrate the advantage of the DPXA coefficients over the DCCA coefficients by analyzing contaminated bivariate fractional Brownian motions. We calculate the DPXA coefficients and use them to extract the intrinsic cross correlation between crude oil and gold futures by taking into consideration the impact of the U.S. dollar index. We develop the multifractal DPXA (MF-DPXA) method in order to generalize the DPXA method and investigate multifractal time series. We analyze multifractal binomial measures masked with strong white noises and find that the MF-DPXA method quantifies the hidden multifractal nature while the multifractal DCCA method fails.
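    The DCCA cross-correlation coefficient that DPXA generalizes can be sketched as follows. This is a simplified version with non-overlapping windows and linear detrending; the partial-correlation step that removes the common external series in DPXA is omitted.

    ```python
    import numpy as np

    def dcca_coefficient(x, y, s):
        """DCCA cross-correlation coefficient at scale s (sketch).

        rho = F2_xy / (F_x * F_y), computed from linearly detrended
        integrated profiles in non-overlapping windows of length s.
        """
        X = np.cumsum(x - np.mean(x))            # integrated profiles
        Y = np.cumsum(y - np.mean(y))
        t = np.arange(s)
        f2x = f2y = f2xy = 0.0
        n = 0
        for i in range(0, len(X) - s + 1, s):
            xi, yi = X[i:i + s], Y[i:i + s]
            # residuals after removing each window's linear trend
            rx = xi - np.polyval(np.polyfit(t, xi, 1), t)
            ry = yi - np.polyval(np.polyfit(t, yi, 1), t)
            f2x += np.mean(rx * rx)
            f2y += np.mean(ry * ry)
            f2xy += np.mean(rx * ry)
            n += 1
        return (f2xy / n) / np.sqrt((f2x / n) * (f2y / n))

    # two series sharing a strong common component plus weak noise
    rng = np.random.default_rng(2)
    common = rng.normal(size=10_000)
    x = common + 0.1 * rng.normal(size=10_000)
    y = common + 0.1 * rng.normal(size=10_000)
    rho = dcca_coefficient(x, y, s=100)
    ```

    DPXA would additionally regress out the profile of the external series within each window before forming the covariances, isolating the intrinsic cross-correlation.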

  11. Cross-stream diffusion under pressure-driven flow in microchannels with arbitrary aspect ratios: a phase diagram study using a three-dimensional analytical model

    PubMed Central

    Song, Hongjun; Wang, Yi; Pant, Kapil

    2011-01-01

    This article presents a three-dimensional analytical model to investigate cross-stream diffusion transport in rectangular microchannels with arbitrary aspect ratios under pressure-driven flow. The Fourier series solution to the three-dimensional convection–diffusion equation is obtained using a double integral transformation method and associated eigensystem calculation. A phase diagram derived from the dimensional analysis is presented to thoroughly interrogate the characteristics in various transport regimes and examine the validity of the model. The analytical model is verified against both experimental and numerical models in terms of the concentration profile, diffusion scaling law, and mixing efficiency with excellent agreement (with <0.5% relative error). Quantitative comparison against other prior analytical models in extensive parameter space is also performed, which demonstrates that the present model accommodates much broader transport regimes with significantly enhanced applicability. PMID:22247719

  12. Cross-stream diffusion under pressure-driven flow in microchannels with arbitrary aspect ratios: a phase diagram study using a three-dimensional analytical model.

    PubMed

    Song, Hongjun; Wang, Yi; Pant, Kapil

    2012-01-01

    This article presents a three-dimensional analytical model to investigate cross-stream diffusion transport in rectangular microchannels with arbitrary aspect ratios under pressure-driven flow. The Fourier series solution to the three-dimensional convection-diffusion equation is obtained using a double integral transformation method and associated eigensystem calculation. A phase diagram derived from the dimensional analysis is presented to thoroughly interrogate the characteristics in various transport regimes and examine the validity of the model. The analytical model is verified against both experimental and numerical models in terms of the concentration profile, diffusion scaling law, and mixing efficiency with excellent agreement (with <0.5% relative error). Quantitative comparison against other prior analytical models in extensive parameter space is also performed, which demonstrates that the present model accommodates much broader transport regimes with significantly enhanced applicability.

  13. Maximum flow-based resilience analysis: From component to system

    PubMed Central

    Jin, Chong; Li, Ruiying; Kang, Rui

    2017-01-01

    Resilience, the ability to withstand disruptions and recover quickly, must be considered during system design because any disruption of the system may cause considerable economic and societal loss. This work develops analytic maximum flow-based resilience models for series and parallel systems using Zobel’s resilience measure. The two analytic models can be used to evaluate quantitatively and compare the resilience of the systems with the corresponding performance structures. For systems with identical components, the resilience of the parallel system increases with increasing number of components, while the resilience remains constant in the series system. A Monte Carlo-based simulation method is also provided to verify the correctness of our analytic resilience models and to analyze the resilience of networked systems based on that of components. A road network example is used to illustrate the analysis process, and the resilience comparison among networks with different topologies but the same components indicates that a system with redundant performance is usually more resilient than one without redundant performance. However, not all redundant capacities of components can improve the system resilience; the effectiveness of the capacity redundancy depends on where the redundant capacity is located. PMID:28545135
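    The flow structure behind the two analytic models is elementary: a series system delivers only what its weakest component passes, while a parallel system aggregates component flows. A minimal sketch (Zobel's resilience measure itself also involves loss magnitude and recovery time, which this omits):

    ```python
    # Maximum deliverable flow for the two elementary structures.
    # Capacities are illustrative component flow capacities.

    def series_flow(capacities):
        """Series system: limited by the bottleneck component."""
        return min(capacities)

    def parallel_flow(capacities):
        """Parallel system: component capacities add."""
        return sum(capacities)

    caps = [5.0, 3.0, 4.0]
    s_flow = series_flow(caps)     # bottleneck component governs
    p_flow = parallel_flow(caps)   # capacities aggregate
    ```

    This bottleneck-versus-aggregation asymmetry is why adding identical components raises the parallel system's resilience but leaves the series system's unchanged, as the abstract notes.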

  14. Application of Learning Analytics Using Clustering Data Mining for Students' Disposition Analysis

    ERIC Educational Resources Information Center

    Bharara, Sanyam; Sabitha, Sai; Bansal, Abhay

    2018-01-01

    Learning Analytics (LA) is an emerging field in which sophisticated analytic tools are used to improve learning and education. It draws from, and is closely tied to, a series of other fields of study like business intelligence, web analytics, academic analytics, educational data mining, and action analytics. The main objective of this research…

  15. Hydraulic modeling of riverbank filtration systems with curved boundaries using analytic elements and series solutions

    NASA Astrophysics Data System (ADS)

    Bakker, Mark

    2010-08-01

    A new analytic solution approach is presented for the modeling of steady flow to pumping wells near rivers in strip aquifers; all boundaries of the river and strip aquifer may be curved. The river penetrates the aquifer only partially and has a leaky stream bed. The water level in the river may vary spatially. Flow in the aquifer below the river is semi-confined while flow in the aquifer adjacent to the river is confined or unconfined and may be subject to areal recharge. Analytic solutions are obtained through superposition of analytic elements and Fourier series. Boundary conditions are specified at collocation points along the boundaries. The number of collocation points is larger than the number of coefficients in the Fourier series and a solution is obtained in the least squares sense. The solution is analytic while boundary conditions are met approximately. Very accurate solutions are obtained when enough terms are used in the series. Several examples are presented for domains with straight and curved boundaries, including a well pumping near a meandering river with a varying water level. The area of the river bottom where water infiltrates into the aquifer is delineated and the fraction of river water in the well water is computed for several cases.
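
    The collocation strategy described above — more collocation points than Fourier coefficients, so boundary conditions are met in the least-squares sense — can be illustrated with a small sketch. The boundary function and point counts below are hypothetical, and the solver is a generic normal-equations routine, not the paper's analytic-element code.

```python
import math

def fourier_design(thetas, n_terms):
    # Each row: [1, cos θ, sin θ, cos 2θ, sin 2θ, ...]
    rows = []
    for t in thetas:
        row = [1.0]
        for k in range(1, n_terms + 1):
            row.append(math.cos(k * t))
            row.append(math.sin(k * t))
        rows.append(row)
    return rows

def lstsq(A, b):
    # Solve the normal equations A^T A x = A^T b by Gaussian elimination
    # with partial pivoting (adequate for this small, well-conditioned case).
    n = len(A[0])
    M = [[sum(A[i][r] * A[i][c] for i in range(len(A))) for c in range(n)]
         for r in range(n)]
    v = [sum(A[i][r] * b[i] for i in range(len(A))) for r in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (v[r] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# 40 collocation points but only 5 unknown coefficients: the overdetermined
# system is solved in the least-squares sense, mirroring the paper's strategy.
thetas = [2 * math.pi * i / 40 for i in range(40)]
boundary = [1.0 + 2.0 * math.cos(t) for t in thetas]  # prescribed boundary values
coeffs = lstsq(fourier_design(thetas, 2), boundary)
```

Because the prescribed boundary values lie exactly in the span of the truncated series, the least-squares fit recovers the coefficients (1, 2, 0, 0, 0) to machine precision; for a general boundary the residual measures how well the conditions are met.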

  16. Online identification of chlorogenic acids, sesquiterpene lactones, and flavonoids in the Brazilian arnica Lychnophora ericoides Mart. (Asteraceae) leaves by HPLC-DAD-MS and HPLC-DAD-MS/MS and a validated HPLC-DAD method for their simultaneous analysis.

    PubMed

    Gobbo-Neto, Leonardo; Lopes, Norberto P

    2008-02-27

    Lychnophora ericoides Mart. (Asteraceae, Vernonieae) is a plant, endemic to Brazil, with occurrence restricted to the "cerrado" biome. Traditional medicine employs alcoholic and aqueous-alcoholic preparations of leaves from this species for the treatment of wounds, inflammation, and pain. Furthermore, leaves of L. ericoides are also widely used as flavorings for the Brazilian traditional spirit "cachaça". A method has been developed for the extraction and HPLC-DAD analysis of the secondary metabolites of L. ericoides leaves. This analytical method was validated with 11 secondary metabolites chosen to represent the different classes and polarities of secondary metabolites occurring in L. ericoides leaves, and good responses were obtained for each validation parameter analyzed. The same HPLC analytical method was also employed for online secondary metabolite identification by HPLC-DAD-MS and HPLC-DAD-MS/MS, leading to the identification of di- C-glucosylflavones, coumaroylglucosylflavonols, flavone, flavanones, flavonols, chalcones, goyazensolide, and eremantholide-type sesquiterpene lactones and positional isomeric series of chlorogenic acids possessing caffeic and/or ferulic moieties. Among the 52 chromatographic peaks observed, 36 were fully identified and 8 were attributed to compounds belonging to series of caffeoylferuloylquinic and diferuloylquinic acids that could not be individualized from each other.

  17. Reverse phase protein microarrays: fluorometric and colorimetric detection.

    PubMed

    Gallagher, Rosa I; Silvestri, Alessandra; Petricoin, Emanuel F; Liotta, Lance A; Espina, Virginia

    2011-01-01

    The Reverse Phase Protein Microarray (RPMA) is an array platform used to quantitate proteins and their posttranslationally modified forms. RPMAs are applicable for profiling key cellular signaling pathways and protein networks, allowing direct comparison of the activation state of proteins from multiple samples within the same array. The RPMA format consists of proteins immobilized directly on a nitrocellulose substratum. The analyte is subsequently probed with a primary antibody and a series of reagents for signal amplification and detection. Due to the diversity, low concentration, and large dynamic range of protein analytes, RPMAs require stringent signal amplification methods, high quality image acquisition, and software capable of precisely analyzing spot intensities on an array. Microarray detection strategies can be either fluorescent or colorimetric. The choice of a detection system depends on (a) the expected analyte concentration, (b) type of microarray imaging system, and (c) type of sample. The focus of this chapter is to describe RPMA detection and imaging using fluorescent and colorimetric (diaminobenzidine (DAB)) methods.

  18. An adaptive gridless methodology in one dimension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, N.T.; Hailey, C.E.

    1996-09-01

    Gridless numerical analysis offers great potential for accurately solving for flow about complex geometries or moving boundary problems. Because gridless methods do not require point connection, the mesh cannot twist or distort. The gridless method utilizes a Taylor series about each point to obtain the unknown derivative terms from the current field variable estimates. The governing equation is then numerically integrated to determine the field variables for the next iteration. Effects of point spacing and Taylor series order on accuracy are studied, and they follow trends similar to those of traditional numerical techniques. Introducing adaption by point movement using a spring analogy allows the solution method to track a moving boundary. The adaptive gridless method models linear, nonlinear, steady, and transient problems. Comparison with known analytic solutions is given for these examples. Although point movement adaption does not provide a significant increase in accuracy, it helps capture important features and provides an improved solution.
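
    The core idea — estimating derivative terms from a Taylor series about each point, with no mesh connectivity — can be sketched in one dimension. The point locations and test function below are hypothetical; a real solver would feed these derivative estimates into the integration of the governing equation.

```python
def taylor_derivatives(x, f, i):
    # Estimate f'(x_i) and f''(x_i) from the two neighbours of point i using
    # f(x_j) ≈ f(x_i) + f'(x_i) h_j + 0.5 f''(x_i) h_j², with h_j = x_j - x_i.
    # Only distances are needed, not connectivity (requires 0 < i < len(x)-1).
    hl, hr = x[i - 1] - x[i], x[i + 1] - x[i]
    dl, dr = f[i - 1] - f[i], f[i + 1] - f[i]
    # Solve the 2x2 system for (f', f'') by Cramer's rule.
    det = hl * 0.5 * hr * hr - hr * 0.5 * hl * hl
    d1 = (dl * 0.5 * hr * hr - dr * 0.5 * hl * hl) / det
    d2 = (hl * dr - hr * dl) / det
    return d1, d2

pts = [0.0, 0.3, 1.0]            # irregular, connectivity-free point cloud
vals = [p * p for p in pts]      # f(x) = x², so f' = 2x and f'' = 2
d1, d2 = taylor_derivatives(pts, vals, 1)
```

For a quadratic the second-order Taylor fit is exact even on unevenly spaced points, which is why the estimates at x = 0.3 come out as 0.6 and 2.0; truncation error appears for higher-order functions, consistent with the accuracy trends the abstract mentions.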

  19. Recursive linearization of multibody dynamics equations of motion

    NASA Technical Reports Server (NTRS)

    Lin, Tsung-Chieh; Yae, K. Harold

    1989-01-01

    The equations of motion of a multibody system are nonlinear in nature, and thus pose a difficult problem in linear control design. One approach is to obtain a first-order approximation through numerical perturbations at a given configuration, and to design a control law based on the linearized model. Here, a linearized model is generated analytically by following in the footsteps of the recursive derivation of the equations of motion. The equations of motion are first written in a Newton-Euler form, which is systematic and easy to construct; then, they are transformed into a relative coordinate representation, which is more efficient in computation. A new computational method for linearization is obtained by applying a series of first-order analytical approximations to the recursive kinematic relationships. The method has proved to be computationally more efficient because of its recursive nature. It has also turned out to be more accurate, because analytical perturbation circumvents numerical differentiation and other associated numerical operations that may accumulate computational error, requiring only analytical operations on matrices and vectors. The power of the proposed linearization algorithm is demonstrated, in comparison to a numerical perturbation method, with a two-link manipulator and a seven-degrees-of-freedom robotic manipulator. Its application to control design is also demonstrated.
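
    The contrast the abstract draws between numerical perturbation and analytical linearization can be illustrated on a toy system. The pendulum below is a hypothetical stand-in, not the paper's recursive multibody formulation; it only shows the two routes to the same Jacobian and why the analytical one avoids differencing error.

```python
import math

def pend(state, g=9.81, L=1.0):
    # Toy dynamics: pendulum state x = (theta, omega), xdot = f(x).
    theta, omega = state
    return [omega, -(g / L) * math.sin(theta)]

def numeric_jacobian(func, x0, eps=1e-6):
    # First-order approximation by numerically perturbing each state.
    f0 = func(x0)
    cols = []
    for i in range(len(x0)):
        xp = list(x0)
        xp[i] += eps
        fp = func(xp)
        cols.append([(fp[j] - f0[j]) / eps for j in range(len(f0))])
    # Transpose columns into rows so that J[r][c] = ∂f_r/∂x_c.
    return [[cols[c][r] for c in range(len(x0))] for r in range(len(f0))]

def analytic_jacobian(state, g=9.81, L=1.0):
    # Differentiating by hand avoids numerical differentiation entirely.
    theta, _ = state
    return [[0.0, 1.0], [-(g / L) * math.cos(theta), 0.0]]

x_eq = [0.0, 0.0]                       # hanging equilibrium
J_num = numeric_jacobian(pend, x_eq)    # carries finite-difference error
J_ana = analytic_jacobian(x_eq)         # exact
```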

  20. Transient dynamics of a flexible rotor with squeeze film dampers

    NASA Technical Reports Server (NTRS)

    Buono, D. F.; Schlitzer, L. D.; Hall, R. G., III; Hibner, D. H.

    1978-01-01

    A series of simulated blade loss tests are reported on a test rotor designed to operate above its second bending critical speed. A series of analyses were performed which predicted the transient behavior of the test rig for each of the blade loss tests. The scope of the program included the investigation of transient rotor dynamics of a flexible rotor system, similar to modern flexible jet engine rotors, both with and without squeeze film dampers. The results substantiate the effectiveness of squeeze film dampers and document the ability of available analytical methods to predict their effectiveness and behavior.

  1. van der Waals interactions between nanostructures: Some analytic results from series expansions

    NASA Astrophysics Data System (ADS)

    Stedman, T.; Drosdoff, D.; Woods, L. M.

    2014-01-01

    The van der Waals force between objects of nontrivial geometries is considered. A technique based on a perturbation series approach is formulated in the dilute limit. We show that the dielectric response and object size can be decoupled and dominant contributions in terms of object separations can be obtained. This is a powerful method, which enables straightforward calculations of the interaction for different geometries. Our results for planar structures, such as thin sheets, infinitely long ribbons, and ribbons with finite dimensions, may be applicable for nanostructured devices where the van der Waals interaction plays an important role.

  2. Shape Optimization of Cylindrical Shell for Interior Noise

    NASA Technical Reports Server (NTRS)

    Robinson, Jay H.

    1999-01-01

    In this paper an analytic method is used to solve for the cross spectral density of the interior acoustic response of a cylinder with nonuniform thickness subjected to turbulent boundary layer excitation. The cylinder is of honeycomb core construction with the thickness of the core material expressed as a cosine series in the circumferential direction. The coefficients of this series are used as the design variable in the optimization study. The objective function is the space and frequency averaged acoustic response. Results confirm the presence of multiple local minima as previously reported and demonstrate the potential for modest noise reduction.

  3. A theoretical study of alpha star populations in loaded nuclear emulsions

    USGS Publications Warehouse

    Senftle, F.E.; Farley, T.A.; Stieff, L.R.

    1954-01-01

    This theoretical study of the alpha star populations in loaded emulsions was undertaken in an effort to find a quantitative method for the analysis of less than microgram amounts of thorium in the presence of larger amounts of uranium. Analytical expressions for each type of star from each of the significantly contributing members of the uranium and thorium series, as well as summation formulas for the whole series, have been computed. The analysis for thorium may be made by determining the abundance of five-branched stars in a loaded nuclear emulsion and comparing observed and predicted star populations. The comparison may also be used to check the half-lives of several members of the uranium and thorium series. © 1954.

  4. Generalized model of electromigration with 1:1 (analyte:selector) complexation stoichiometry: part I. Theory.

    PubMed

    Dubský, Pavel; Müllerová, Ludmila; Dvořák, Martin; Gaš, Bohuslav

    2015-03-06

    The model of electromigration of a multivalent weak acidic/basic/amphoteric analyte that undergoes complexation with a mixture of selectors is introduced. The model provides an extension of the series of models starting with the single-selector model without dissociation by Wren and Rowe in 1992, continuing with the monovalent weak analyte/single-selector models by Rawjee, Williams and Vigh in 1993 and by Lelièvre in 1994, and ending with the multi-selector overall model without dissociation developed by our group in 2008. The new multivalent analyte multi-selector model shows that the effective mobility of the analyte obeys the original Wren and Rowe formula. The overall complexation constant, the mobility of the free analyte, and the mobility of the complex can be measured and used in the standard way. The mathematical expressions for the overall parameters are provided. We further demonstrate mathematically that the pH-dependent parameters for weak analytes can be used directly as input into the multi-selector overall model and, in reverse, that the multi-selector overall parameters can serve as input into the pH-dependent models for weak analytes. These findings can greatly simplify rational method development in analytical electrophoresis, specifically enantioseparations. Copyright © 2015 Elsevier B.V. All rights reserved.
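
    The Wren and Rowe expression that the effective mobility is said to obey has the familiar 1:1 complexation form below. The mobility and binding values in the example are hypothetical, chosen only to show the limiting behaviour.

```python
def effective_mobility(mu_free, mu_complex, K, c):
    # 1:1 complexation (Wren and Rowe form): a weighted average of the free
    # and complexed mobilities, weighted by the complexed fraction Kc/(1+Kc).
    return (mu_free + mu_complex * K * c) / (1.0 + K * c)

# Hypothetical values (arbitrary mobility units; K in mM⁻¹, c in mM):
mu0 = effective_mobility(20.0, 5.0, 100.0, 0.0)      # no selector → free mobility
mu_half = effective_mobility(20.0, 5.0, 100.0, 0.01) # Kc = 1 → midpoint, 12.5
```

At zero selector concentration the analyte migrates with its free mobility; as Kc grows the mobility approaches that of the complex, which is what makes the formula useful for fitting binding constants from mobility measurements.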

  5. Analyticity without Differentiability

    ERIC Educational Resources Information Center

    Kirillova, Evgenia; Spindler, Karlheinz

    2008-01-01

    In this article we derive all salient properties of analytic functions, including the analytic version of the inverse function theorem, using only the most elementary convergence properties of series. Not even the notion of differentiability is required to do so. Instead, analytical arguments are replaced by combinatorial arguments exhibiting…

  6. MDRC's Approach to Using Predictive Analytics to Improve and Target Social Services Based on Risk. Reflections on Methodology

    ERIC Educational Resources Information Center

    Porter, Kristin E.; Balu, Rekha; Hendra, Richard

    2017-01-01

    This post is one in a series highlighting MDRC's methodological work. Contributors discuss the refinement and practical use of research methods being employed across the organization. Across policy domains, practitioners and researchers are benefiting from a trend of greater access to both more detailed and frequent data and the increased…

  7. Effect of train carbody's parameters on vertical bending stiffness performance

    NASA Astrophysics Data System (ADS)

    Yang, Guangwu; Wang, Changke; Xiang, Futeng; Xiao, Shoune

    2016-10-01

    Finite element analysis (FEA) and modal testing are at present the main methods for obtaining the first-order vertical bending vibration frequency of a train carbody, but they are inefficient and time-consuming. Based on Timoshenko beam theory, the bending deformation, moment of inertia and shear deformation are considered. The carbody is divided into segments of equal length, its stiffness is calculated with the series principle, its cross-sectional area, moment of inertia and shear shape coefficient are equivalenced over the segment length, and the final corrected analytical formula for the first-order vertical bending vibration frequency is deduced. The formula is tested on six simple carbodies and one real carbody; all analytical frequencies are very close to their FEA frequencies, and for the real carbody the error between the analytical and experimental frequency is 0.75%. Based on the analytic formula, a sensitivity analysis of the real carbody's design parameters is performed and the main parameters are identified. The series principle of carbody stiffness is introduced into Timoshenko beam theory to deduce a formula that can quickly estimate the first-order vertical bending vibration frequency of a carbody without the traditional FEA method, providing a reference for design engineers.
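
    The series principle invoked for the carbody stiffness is the usual rule that compliances of segments joined end to end add. A minimal sketch with hypothetical segment stiffnesses:

```python
def series_stiffness(segment_stiffnesses):
    # Series principle: compliances (1/k) of end-to-end segments add,
    # so the equivalent stiffness is their harmonic combination.
    return 1.0 / sum(1.0 / k for k in segment_stiffnesses)

# Hypothetical segment stiffnesses (consistent units, e.g. N/m):
k_eq = series_stiffness([3.0, 6.0])   # 1/(1/3 + 1/6) = 2.0
```

The equivalent stiffness is always below the softest segment's, which is why local soft regions of a carbody dominate its first bending frequency.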

  8. Higher order alchemical derivatives from coupled perturbed self-consistent field theory.

    PubMed

    Lesiuk, Michał; Balawender, Robert; Zachara, Janusz

    2012-01-21

    We present an analytical approach for treating higher order derivatives of the Hartree-Fock (HF) and Kohn-Sham (KS) density functional theory energy in the Born-Oppenheimer approximation with respect to the nuclear charge distribution (so-called alchemical derivatives). Modified coupled perturbed self-consistent field theory is used to calculate the response of molecular systems to the applied perturbation. Working equations for the second and third derivatives of the HF/KS energy are derived. Similarly, analytical forms of the first and second derivatives of orbital energies are reported. The second derivative of the Kohn-Sham energy and up to the third derivative of the Hartree-Fock energy with respect to the nuclear charge distribution were calculated. Some issues of practical calculations, in particular the dependence of the basis set and Becke weighting functions on the perturbation, are considered. For selected series of isoelectronic molecules, values of the available alchemical derivatives were computed and a Taylor series expansion was used to predict the energies of the "surrounding" molecules. The predicted energies are in unexpectedly good agreement with those computed using HF/KS methods. The presented method allows one to predict orbital energies with an error of less than 1%, or even smaller for valence orbitals. © 2012 American Institute of Physics

  9. Simultaneous analysis of hydrochlorothiazide, triamterene and reserpine in rat plasma by high performance liquid chromatography and tandem solid-phase extraction.

    PubMed

    Li, Hang; He, Junting; Liu, Qin; Huo, Zhaohui; Liang, Si; Liang, Yong

    2011-03-01

    A tandem solid-phase extraction method (SPE) of connecting two different cartridges (C(18) and MCX) in series was developed as the extraction procedure in this article, which provided better extraction yields (>86%) for all analytes and more appropriate sample purification from endogenous interference materials compared with a single cartridge. Analyte separation was achieved on a C(18) reversed-phase column at the wavelength of 265 nm by high-performance liquid chromatography (HPLC). The method was validated in terms of extraction yield, precision and accuracy. These assays gave mean accuracy values higher than 89% with RSD values that were always less than 3.8%. The method has been successfully applied to plasma samples from rats after oral administration of target compounds. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. A systematic and feasible method for computing nuclear contributions to electrical properties of polyatomic molecules

    NASA Astrophysics Data System (ADS)

    Luis, Josep M.; Duran, Miquel; Andrés, José L.

    1997-08-01

    An analytic method to evaluate nuclear contributions to the electrical properties of polyatomic molecules is presented. Such contributions control the changes induced by an electric field on the equilibrium geometry (nuclear relaxation contribution) and vibrational motion (vibrational contribution) of a molecular system. Expressions to compute the nuclear contributions have been derived from a power series expansion of the potential energy. These contributions to the electrical properties are given in terms of energy derivatives with respect to normal coordinates, electric field intensity, or both. Only one calculation of such derivatives at the field-free equilibrium geometry is required. To demonstrate the efficiency of the analytical evaluation of electrical properties (the so-called AEEP method), results for calculations on water and pyridine at the SCF/TZ2P and MP2/TZ2P levels of theory are reported. The results obtained are compared with previous theoretical calculations and with experimental values.

  11. Designing Studies That Would Address the Multilayered Nature of Health Care

    PubMed Central

    Pennell, Michael; Rhoda, Dale; Hade, Erinn M.; Paskett, Electra D.

    2010-01-01

    We review design and analytic methods available for multilevel interventions in cancer research with particular attention to study design, sample size requirements, and potential to provide statistical evidence for causal inference. The most appropriate methods will depend on the stage of development of the research and whether randomization is possible. Early on, fractional factorial designs may be used to screen intervention components, particularly when randomization of individuals is possible. Quasi-experimental designs, including time-series and multiple baseline designs, can be useful once the intervention is designed because they require few sites and can provide the preliminary evidence to plan efficacy studies. In efficacy and effectiveness studies, group-randomized trials are preferred when randomization is possible and regression discontinuity designs are preferred otherwise if assignment based on a quantitative score is possible. Quasi-experimental designs may be used, especially when combined with recent developments in analytic methods to reduce bias in effect estimates. PMID:20386057

  12. Molecular detection of Borrelia burgdorferi sensu lato – An analytical comparison of real-time PCR protocols from five different Scandinavian laboratories

    PubMed Central

    Faller, Maximilian; Wilhelmsson, Peter; Kjelland, Vivian; Andreassen, Åshild; Dargis, Rimtas; Quarsten, Hanne; Dessau, Ram; Fingerle, Volker; Margos, Gabriele; Noraas, Sølvi; Ornstein, Katharina; Petersson, Ann-Cathrine; Matussek, Andreas; Lindgren, Per-Eric; Henningsson, Anna J.

    2017-01-01

    Introduction: Lyme borreliosis (LB) is the most common tick-transmitted disease in Europe. The diagnosis of LB today is based on the patient's medical history, clinical presentation and laboratory findings. The laboratory diagnostics are mainly based on antibody detection, but in certain conditions molecular detection by polymerase chain reaction (PCR) may serve as a complement. Aim: The purpose of this study was to evaluate the analytical sensitivity, analytical specificity and concordance of eight different real-time PCR methods at five laboratories in Sweden, Norway and Denmark. Method: Each participating laboratory was asked to analyse three different sets of blinded samples (reference panels): i) cDNA extracted and transcribed from water spiked with cultured Borrelia strains, ii) cerebrospinal fluid spiked with cultured Borrelia strains, and iii) DNA dilution series extracted from cultured Borrelia and relapsing fever strains. The results and the method descriptions of each laboratory were systematically evaluated. Results and conclusions: The analytical sensitivities and the concordance between the eight protocols were in general high. The concordance was especially high between the protocols using 16S rRNA as the target gene; however, this concordance was mainly related to cDNA as the type of template. When comparing cDNA and DNA as the type of template, the analytical sensitivity was in general higher for the protocols using DNA as template, regardless of the target gene used. The analytical specificity of all eight protocols was high. However, some protocols were not able to detect Borrelia spielmanii, Borrelia lusitaniae or Borrelia japonica. PMID:28937997

  13. Virial series expansion and Monte Carlo studies of equation of state for hard spheres in narrow cylindrical pores

    NASA Astrophysics Data System (ADS)

    Mon, K. K.

    2018-05-01

    In this paper, the virial series expansion and constant pressure Monte Carlo method are used to study the longitudinal pressure equation of state for hard spheres in narrow cylindrical pores. We invoke dimensional reduction and map the model into an effective one-dimensional fluid model with interacting internal degrees of freedom. The one-dimensional model is extensive. The Euler relation holds, and longitudinal pressure can be probed with the standard virial series expansion method. Virial coefficients B2 and B3 were obtained analytically, and numerical quadrature was used for B4. A range of narrow pore widths (2R_p), R_p < (√3 + 2)/4 = 0.9330… (in units of the hard sphere diameter), was used, corresponding to fluids in the important single-file formations. We have also computed the virial pressure series coefficients B2', B3', and B4' to compare a truncated virial pressure series equation of state with accurate constant pressure Monte Carlo data. We find very good agreement for a wide range of pressures for narrow pores. These results contribute toward increasing the rather limited understanding of virial coefficients and the equation of state of hard sphere fluids in narrow cylindrical pores.
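
    A truncated virial series of the kind compared against the Monte Carlo data evaluates the compressibility factor from the first few coefficients. The coefficient values below are hypothetical placeholders, not the B2-B4 computed in the paper:

```python
def compressibility_factor(rho, b_coeffs):
    # Truncated virial series: Z = P/(ρ k T) = 1 + B2·ρ + B3·ρ² + B4·ρ³ + ...
    # b_coeffs = [B2, B3, B4, ...]; truncation is accurate at low density.
    z = 1.0
    for n, b in enumerate(b_coeffs, start=2):
        z += b * rho ** (n - 1)
    return z

# Hypothetical coefficients in units of the hard-sphere diameter, low density:
z = compressibility_factor(0.1, [1.0, 0.5, 0.25])  # 1 + 0.1 + 0.005 + 0.00025
```

Each added coefficient extends the density range over which the truncated series tracks the exact equation of state, which is the comparison the paper carries out against constant pressure Monte Carlo results.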

  14. Accurate Estimate of Some Propagation Characteristics for the First Higher Order Mode in Graded Index Fiber with Simple Analytic Chebyshev Method

    NASA Astrophysics Data System (ADS)

    Dutta, Ivy; Chowdhury, Anirban Roy; Kumbhakar, Dharmadas

    2013-03-01

    Using a Chebyshev power series approach, accurate descriptions of the first higher order (LP11) mode of graded index fibers having three different profile shape functions are presented in this paper and applied to predict their propagation characteristics. These characteristics include the fractional power guided through the core, the excitation efficiency, and the Petermann I and II spot sizes, with approximate analytic formulations for each. We show that whereas two and three Chebyshev points in the LP11 mode approximation give fairly accurate results, the values based on our calculations involving four Chebyshev points match excellently with available exact numerical results.

  15. The analytical and numerical approaches to the theory of the Moon's librations: Modern analysis and results

    NASA Astrophysics Data System (ADS)

    Petrova, N.; Zagidullin, A.; Nefedyev, Y.; Kosulin, V.; Andreev, A.

    2017-11-01

    Observing the physical librations of celestial bodies such as the Moon is an astronomical method for remotely assessing the internal structure of a celestial body without conducting expensive space experiments. The paper contains a review of recent advances in studying the Moon's structure using various methods of obtaining and applying lunar physical libration (LPhL) data. In this article, LPhL simulation methods for assessing the viscoelastic and dissipative properties of the lunar body and the lunar core parameters, whose existence has recently been confirmed during reprocessing of the seismic data from the "Apollo" space missions, are described. Much attention is paid to the physical interpretation of the free libration phenomenon and the methods for its determination. The practical application of the most accurate analytical LPhL tables (Rambaux and Williams, 2011) is discussed. The tables were built on the basis of complex analytical processing of the residual differences obtained when comparing long-term series of laser observations with the numerical ephemeris DE421. An efficiency analysis of the two approaches to LPhL theory, numerical and analytical, is conducted. It is shown that in lunar investigations the two approaches complement each other: the numerical approach provides the high accuracy of the theory required for proper processing of modern observations, while the analytical approach makes it possible to comprehend the essence of the phenomena in lunar rotation and to predict and interpret new effects in observations of the lunar body and lunar core parameters.

  16. Developing a comprehensive time series of GDP per capita for 210 countries from 1950 to 2015

    PubMed Central

    2012-01-01

    Background Income has been extensively studied and utilized as a determinant of health. There are several sources of income expressed as gross domestic product (GDP) per capita, but there are no time series that are complete for the years between 1950 and 2015 for the 210 countries for which data exist. It is in the interest of population health research to establish a global time series that is complete from 1950 to 2015. Methods We collected GDP per capita estimates expressed in either constant US dollar terms or international dollar terms (corrected for purchasing power parity) from seven sources. We applied several stages of models, including ordinary least-squares regressions and mixed effects models, to complete each of the seven source series from 1950 to 2015. The three US dollar and four international dollar series were each averaged to produce two new GDP per capita series. Results and discussion Nine complete series from 1950 to 2015 for 210 countries are available for use. These series can serve various analytical purposes and can illustrate myriad economic trends and features. The derivation of the two new series allows for researchers to avoid any series-specific biases that may exist. The modeling approach used is flexible and will allow for yearly updating as new estimates are produced by the source series. Conclusion GDP per capita is a necessary tool in population health research, and our development and implementation of a new method has allowed for the most comprehensive known time series to date. PMID:22846561

  17. Stochastic optimization for modeling physiological time series: application to the heart rate response to exercise

    NASA Astrophysics Data System (ADS)

    Zakynthinaki, M. S.; Stirling, J. R.

    2007-01-01

    Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
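
    The ALOPEX-family update can be sketched as a correlation-driven stochastic search: each parameter keeps moving in a direction whose recent change correlated with an improvement in the objective, plus exploratory noise. This is a simplified variant with hypothetical gains and best-iterate tracking, not the ALOPEX IV implementation used in the study, and the toy quadratic stands in for the heart-rate model fit.

```python
import random

def alopex_minimize(cost, p0, steps=3000, gamma=2.0, noise=0.02, seed=7):
    # Correlation update in the ALOPEX spirit: the new step for parameter i
    # is -gamma * (previous step) * (change in cost), plus uniform noise.
    rng = random.Random(seed)
    p = list(p0)
    dp = [rng.uniform(-noise, noise) for _ in p]
    c_prev = cost(p)
    best_p, best_c = list(p), c_prev
    for _ in range(steps):
        p = [pi + di for pi, di in zip(p, dp)]
        c = cost(p)
        dc = c - c_prev
        c_prev = c
        if c < best_c:                      # keep the best iterate seen
            best_p, best_c = list(p), c
        dp = [-gamma * di * dc + rng.uniform(-noise, noise) for di in dp]
    return best_p

# Toy "model fit": recover two parameters of a quadratic cost surface.
quad = lambda q: (q[0] - 2.0) ** 2 + (q[1] + 1.0) ** 2
fit = alopex_minimize(quad, [0.0, 0.0])
```

Because the update uses only cost differences, no gradient of the model with respect to its parameters is ever needed, which is what makes this class of methods easy to wrap around an arbitrary physiological model.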

  18. Current antiviral drugs and their analysis in biological materials - Part II: Antivirals against hepatitis and HIV viruses.

    PubMed

    Nováková, Lucie; Pavlík, Jakub; Chrenková, Lucia; Martinec, Ondřej; Červený, Lukáš

    2018-01-05

    This review is a Part II of the series aiming to provide comprehensive overview of currently used antiviral drugs and to show modern approaches to their analysis. While in the Part I antivirals against herpes viruses and antivirals against respiratory viruses were addressed, this part concerns antivirals against hepatitis viruses (B and C) and human immunodeficiency virus (HIV). Many novel antivirals against hepatitis C virus (HCV) and HIV have been introduced into the clinical practice over the last decade. The recent broadening portfolio of these groups of antivirals is reflected in increasing number of developed analytical methods required to meet the needs of clinical terrain. Part II summarizes the mechanisms of action of antivirals against hepatitis B virus (HBV), HCV, and HIV, their use in clinical practice, and analytical methods for individual classes. It also provides expert opinion on state of art in the field of bioanalysis of these drugs. Analytical methods reflect novelty of these chemical structures and use by far the most current approaches, such as simple and high-throughput sample preparation and fast separation, often by means of UHPLC-MS/MS. Proper method validation based on requirements of bioanalytical guidelines is an inherent part of the developed methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. The effect of transverse wave vector and magnetic fields on resonant tunneling times in double-barrier structures

    NASA Astrophysics Data System (ADS)

    Wang, Hongmei; Zhang, Yafei; Xu, Huaizhe

    2007-01-01

    The effect of transverse wave vector and magnetic fields on resonant tunneling times in double-barrier structures, which is significant but has frequently been omitted in previous theoretical treatments, is reported in this paper. Analytical expressions for the longitudinal energies of quasibound levels (LEQBL) and the lifetimes of quasibound levels (LQBL) in symmetrical double-barrier (SDB) structures have been derived as functions of the transverse wave vector and of longitudinal magnetic fields perpendicular to the interfaces. Based on the derived analytical expressions, the dependence of the LEQBL and LQBL on transverse wave vector and longitudinal magnetic field has been explored numerically for an SDB structure. Model calculations show that the LEQBL decrease monotonically and the LQBL shorten with increasing transverse wave vector, and that each original LEQBL splits into a series of sub-LEQBL which shift nearly linearly toward the well bottom while the lifetimes of the quasibound level series (LQBLS) shorten with increasing Landau-level index and magnetic field.

  20. Magnus expansion method for two-level atom interacting with few-cycle pulse

    NASA Astrophysics Data System (ADS)

    Begzjav, T.; Ben-Benjamin, J. S.; Eleuch, H.; Nessler, R.; Rostovtsev, Y.; Shchedrin, G.

    2018-06-01

    Using the Magnus expansion to fourth order, we obtain analytic expressions for the atomic state of a two-level system driven by a laser pulse of arbitrary shape with small pulse area. We also determine the limitations of the obtained formulas due to the limited range of convergence of the Magnus series. We compare our method to the recently developed method of Rostovtsev et al. (PRA 2009, 79, 063833) for several detunings. Our analysis shows that the technique based on the Magnus expansion can be used as a complementary method to the one in PRA 2009.
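    For the special case of exact resonance, the driving Hamiltonian commutes with itself at different times, so the first-order Magnus term alone is exact and the excited-state population is sin²(A/2), where A is the pulse area. The sketch below checks that against direct RK4 integration of the Schrödinger equation (illustrative only; the Gaussian pulse and its parameters are assumptions, and the paper's fourth-order, finite-detuning formulas are not reproduced):

```python
import math

def pulse(t, omega0=1.0, tau=1.0):
    # assumed Gaussian Rabi envelope Omega(t)
    return omega0 * math.exp(-(t / tau) ** 2)

def deriv(t, cg, ce):
    # resonant two-level Schrodinger equation: i d/dt (cg, ce) = (Omega/2) sigma_x (cg, ce)
    h = 0.5 * pulse(t)
    return -1j * h * ce, -1j * h * cg

def excited_population(t0=-6.0, t1=6.0, steps=2400):
    """Integrate the amplitudes with classical RK4, starting in the ground state."""
    dt = (t1 - t0) / steps
    cg, ce = 1.0 + 0.0j, 0.0 + 0.0j
    t = t0
    for _ in range(steps):
        k1g, k1e = deriv(t, cg, ce)
        k2g, k2e = deriv(t + dt / 2, cg + dt / 2 * k1g, ce + dt / 2 * k1e)
        k3g, k3e = deriv(t + dt / 2, cg + dt / 2 * k2g, ce + dt / 2 * k2e)
        k4g, k4e = deriv(t + dt, cg + dt * k3g, ce + dt * k3e)
        cg += dt / 6 * (k1g + 2 * k2g + 2 * k3g + k4g)
        ce += dt / 6 * (k1e + 2 * k2e + 2 * k3e + k4e)
        t += dt
    return abs(ce) ** 2

area = math.sqrt(math.pi)              # pulse area of the Gaussian above
p_magnus = math.sin(area / 2.0) ** 2   # first-order Magnus, exact on resonance
p_numeric = excited_population()
```

Off resonance the two results would differ, which is where the higher-order Magnus terms discussed in the paper become necessary.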

  1. Information mining over heterogeneous and high-dimensional time-series data in clinical trials databases.

    PubMed

    Altiparmak, Fatih; Ferhatosmanoglu, Hakan; Erdal, Selnur; Trost, Donald C

    2006-04-01

    An effective analysis of clinical trials data involves analyzing different types of data, such as heterogeneous and high dimensional time series data. Current time series analysis methods generally assume that the series at hand are long enough for statistical techniques to be applied to them. Other ideal-case assumptions are that data are collected at equal-length intervals and that, when time series are compared, their lengths are equal to each other. However, these assumptions are not valid for many real data sets, especially clinical trials data sets. In addition, the data sources differ from one another, the data are heterogeneous, and the sensitivity of the experiments varies by source. Approaches for mining time series data need to be revisited, keeping this wide range of requirements in mind. In this paper, we propose a novel approach for information mining that involves two major steps: applying a data mining algorithm over homogeneous subsets of data, and identifying common or distinct patterns over the information gathered in the first step. Our approach is implemented specifically for heterogeneous and high dimensional time series clinical trials data. Using this framework, we propose a new way of utilizing frequent itemset mining, as well as clustering and declustering techniques with novel distance metrics for measuring similarity between time series data. By clustering the data, we find groups of analytes (substances in blood) that are most strongly correlated. Most of these relationships are already known and are verified by the clinical panels; in addition, we identify novel groups that need further biomedical analysis. A slight modification to our algorithm results in an effective declustering of high dimensional time series data, which is then used for "feature selection." 
Using industry-sponsored clinical trials data sets, we are able to identify a small set of analytes that effectively models the state of normal health.

  2. The examination of headache activity using time-series research designs.

    PubMed

    Houle, Timothy T; Remble, Thomas A; Houle, Thomas A

    2005-05-01

    The majority of research conducted on headache has utilized cross-sectional designs, which preclude the examination of dynamic factors and rely principally on group-level effects. The present article describes the application of an individual-oriented process model using time-series analytical techniques. Blending a time-series approach with an interactive process model allows consideration of the relationships among intra-individual dynamic processes, while not precluding the researcher from examining inter-individual differences. The authors explore the nature of time-series data and present two necessary assumptions underlying the time-series approach. The concept of shock and its contribution to headache activity is also presented. The time-series approach is not without its problems, and two such problems are specifically reported: autocorrelation and the distribution of daily observations. The article concludes with the presentation of several analytical techniques suited to examining the time-series interactive process model.
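    The autocorrelation problem mentioned above is easy to demonstrate: day-to-day ratings that evolve slowly violate the independence assumption of ordinary tests. A minimal sketch of the lag-k sample autocorrelation (the smooth synthetic series is an assumed stand-in for real daily headache-activity ratings):

```python
import math

def autocorr(series, lag=1):
    """Lag-k sample autocorrelation of a daily time series."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(n - lag))
    return cov / var

# a slowly varying series: adjacent days are strongly correlated,
# so i.i.d. significance tests applied to it would be invalid
daily = [math.sin(0.3 * t) + 1.0 for t in range(60)]
r1 = autocorr(daily, lag=1)
```

A lag-1 coefficient near 1 signals exactly the serial dependence that time-series models must account for.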

  3. Off-diagonal series expansion for quantum partition functions

    NASA Astrophysics Data System (ADS)

    Hen, Itay

    2018-05-01

    We derive an integral-free thermodynamic perturbation series expansion for quantum partition functions which enables an analytical term-by-term calculation of the series. The expansion is carried out around the partition function of the classical component of the Hamiltonian with the expansion parameter being the strength of the off-diagonal, or quantum, portion. To demonstrate the usefulness of the technique we analytically compute to third order the partition functions of the 1D Ising model with longitudinal and transverse fields, and the quantum 1D Heisenberg model.

  4. Visualizing frequent patterns in large multivariate time series

    NASA Astrophysics Data System (ADS)

    Hao, M.; Marwah, M.; Janetzko, H.; Sharma, R.; Keim, D. A.; Dayal, U.; Patnaik, D.; Ramakrishnan, N.

    2011-01-01

    The detection of previously unknown, frequently occurring patterns in time series, often called motifs, has been recognized as an important task. However, it is difficult to discover and visualize these motifs as their numbers increase, especially in large multivariate time series. To find frequent motifs, we use several temporal data mining and event encoding techniques to cluster and convert a multivariate time series to a sequence of events. Then we quantify the efficiency of the discovered motifs by linking them with a performance metric. To visualize frequent patterns in a large time series with potentially hundreds of nested motifs on a single display, we introduce three novel visual analytics methods: (1) motif layout, using colored rectangles for visualizing the occurrences and hierarchical relationships of motifs in a multivariate time series, (2) motif distortion, for enlarging or shrinking motifs as appropriate for easy analysis and (3) motif merging, to combine a number of identical adjacent motif instances without cluttering the display. Analysts can interactively optimize the degree of distortion and merging to get the best possible view. A specific motif (e.g., the most efficient or least efficient motif) can be quickly detected from a large time series for further investigation. We have applied these methods to two real-world data sets: data center cooling and oil well production. The results provide important new insights into the recurring patterns.

  5. IUS solid rocket motor contamination prediction methods

    NASA Technical Reports Server (NTRS)

    Mullen, C. R.; Kearnes, J. H.

    1980-01-01

    A series of computer codes were developed to predict solid rocket motor produced contamination to spacecraft sensitive surfaces. Subscale and flight test data have confirmed some of the analytical results. Application of the analysis tools to a typical spacecraft has provided early identification of potential spacecraft contamination problems and provided insight into their solution; e.g., flight plan modifications, plume or outgassing shields and/or contamination covers.

  6. Evaluation of Hydrologic and Meteorological Impacts on Dengue Fever Incidences in Southern Taiwan using Time-Frequency Method

    NASA Astrophysics Data System (ADS)

    Tsai, Christina; Yeh, Ting-Gu

    2017-04-01

    Extreme weather events are occurring more frequently as a result of climate change. Recently, dengue fever has become a serious issue in southern Taiwan. It may have characteristic temporal scales that can be identified. Some researchers have hypothesized that dengue fever incidences are related to climate change. This study applies time-frequency analysis to time series data concerning dengue fever and hydrologic and meteorological variables. Results of three time-frequency analytical methods - the Hilbert-Huang transform (HHT), the wavelet transform (WT) and the short-time Fourier transform (STFT) - are compared and discussed. A more effective time-frequency analysis method will be identified to analyze the relevant time series data. The most influential time scales of the hydrologic and meteorological variables associated with dengue fever are determined. Finally, the linkage between hydrologic/meteorological factors and dengue fever incidences can be established.
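    Of the three methods, the STFT is the simplest to sketch: slide a window along the series, taper it, and take a DFT. The pure-Python sketch below (the window length, hop, and synthetic 8-day cycle standing in for an epidemiological rhythm are assumptions) recovers the dominant period in every window:

```python
import math, cmath

def stft(x, win=64, hop=32):
    """Short-time Fourier transform magnitudes with a Hann taper (sketch)."""
    frames = []
    for start in range(0, len(x) - win + 1, hop):
        # Hann window suppresses spectral leakage at the segment edges
        seg = [x[start + n] * (0.5 - 0.5 * math.cos(2 * math.pi * n / (win - 1)))
               for n in range(win)]
        # naive O(win^2) DFT, magnitudes of the non-negative frequencies
        spec = [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                        for n in range(win)))
                for k in range(win // 2)]
        frames.append(spec)
    return frames

# synthetic daily series with an 8-sample cycle -> energy at bin win/8 = 8
x = [math.sin(2 * math.pi * t / 8.0) for t in range(256)]
frames = stft(x)
peak_bins = [spec.index(max(spec)) for spec in frames]
```

Real incidence data would show the peak bin drifting between windows, which is the time-frequency information the study exploits.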

  7. Mitigation of intra-channel nonlinearities using a frequency-domain Volterra series equalizer.

    PubMed

    Guiomar, Fernando P; Reis, Jacklyn D; Teixeira, António L; Pinto, Armando N

    2012-01-16

    We address the issue of intra-channel nonlinear compensation using a Volterra series nonlinear equalizer based on an analytical closed-form solution for the 3rd-order Volterra kernel in the frequency domain. The performance of the method is investigated through numerical simulations for a single-channel optical system using a 20 Gbaud NRZ-QPSK test signal propagated over 1600 km of both standard single-mode fiber and non-zero dispersion-shifted fiber. We carry out performance and computational-effort comparisons with the well-known backward-propagation split-step Fourier (BP-SSF) method. The alias-free frequency-domain implementation of the Volterra series nonlinear equalizer makes it an attractive approach at low sampling rates, enabling it to surpass the maximum performance of BP-SSF at 2× oversampling. Linear and nonlinear equalization can be treated independently, providing more flexibility to the equalization subsystem. The parallel structure of the algorithm is also a key advantage in terms of real-time implementation.

  8. Extended local similarity analysis (eLSA) of microbial community and other time series data with replicates.

    PubMed

    Xia, Li C; Steele, Joshua A; Cram, Jacob A; Cardon, Zoe G; Simmons, Sheri L; Vallino, Joseph J; Fuhrman, Jed A; Sun, Fengzhu

    2011-01-01

    The increasing availability of time series microbial community data from metagenomics and other molecular biological studies has enabled the analysis of large-scale microbial co-occurrence and association networks. Among the many analytical techniques available, the Local Similarity Analysis (LSA) method is unique in that it captures local and potentially time-delayed co-occurrence and association patterns in time series data that cannot otherwise be identified by ordinary correlation analysis. However, LSA, as originally developed, does not consider time series data with replicates, which hinders the full exploitation of the available information. With replicates, it is possible to understand the variability of the local similarity (LS) score and to obtain its confidence interval. We extended our LSA technique to time series data with replicates and termed it extended LSA, or eLSA. Simulations showed the capability of eLSA to capture subinterval and time-delayed associations. We implemented the eLSA technique in an easy-to-use analytic software package. The software pipeline integrates data normalization, statistical correlation calculation, statistical significance evaluation, and association network construction steps. We applied the eLSA technique to microbial community and gene expression datasets, where unique time-dependent associations were identified. The extended LSA analysis technique was demonstrated to reveal statistically significant local and potentially time-delayed association patterns in replicated time series data beyond those of ordinary correlation analysis. These statistically significant associations can provide insights into the real dynamics of biological systems. The newly designed eLSA software efficiently streamlines the analysis and is freely available from the eLSA homepage, which can be accessed at http://meta.usc.edu/softs/lsa.
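    The heart of LSA is a Smith-Waterman-like dynamic program over normalized series that accumulates the best locally co-varying, possibly time-shifted stretch. A much-reduced sketch of that score (illustrative only; eLSA's replicate handling, significance evaluation, and negative associations are omitted):

```python
import math

def local_similarity(x, y, max_delay=3):
    """Sketch of a local similarity (LS) score: the best contiguous,
    possibly time-shifted stretch where two z-normalized series
    rise and fall together, normalized by the series length."""
    def znorm(s):
        m = sum(s) / len(s)
        sd = (sum((v - m) ** 2 for v in s) / len(s)) ** 0.5
        return [(v - m) / sd for v in s]
    x, y, n = znorm(x), znorm(y), len(x)
    score = [[0.0] * (n + 1) for _ in range(n + 1)]
    best = 0.0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if abs(i - j) <= max_delay:   # bounded time shift
                # extend the aligned run, or restart when it turns negative
                score[i][j] = max(0.0, score[i - 1][j - 1] + x[i - 1] * y[j - 1])
                best = max(best, score[i][j])
    return best / n

a = [math.sin(0.4 * t) for t in range(40)]
b = [math.sin(0.4 * (t - 2)) for t in range(40)]     # 2-step delayed copy of a
noise = [((37 * t) % 11) - 5.0 for t in range(40)]   # unrelated series
ls_delayed = local_similarity(a, b)
ls_noise = local_similarity(a, noise)
```

The delayed copy scores far higher than the unrelated series even though its zero-lag Pearson correlation is reduced, which is exactly the time-delayed association LSA is built to detect.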

  9. Extended local similarity analysis (eLSA) of microbial community and other time series data with replicates

    PubMed Central

    2011-01-01

    Background The increasing availability of time series microbial community data from metagenomics and other molecular biological studies has enabled the analysis of large-scale microbial co-occurrence and association networks. Among the many analytical techniques available, the Local Similarity Analysis (LSA) method is unique in that it captures local and potentially time-delayed co-occurrence and association patterns in time series data that cannot otherwise be identified by ordinary correlation analysis. However, LSA, as originally developed, does not consider time series data with replicates, which hinders the full exploitation of the available information. With replicates, it is possible to understand the variability of the local similarity (LS) score and to obtain its confidence interval. Results We extended our LSA technique to time series data with replicates and termed it extended LSA, or eLSA. Simulations showed the capability of eLSA to capture subinterval and time-delayed associations. We implemented the eLSA technique in an easy-to-use analytic software package. The software pipeline integrates data normalization, statistical correlation calculation, statistical significance evaluation, and association network construction steps. We applied the eLSA technique to microbial community and gene expression datasets, where unique time-dependent associations were identified. Conclusions The extended LSA analysis technique was demonstrated to reveal statistically significant local and potentially time-delayed association patterns in replicated time series data beyond those of ordinary correlation analysis. These statistically significant associations can provide insights into the real dynamics of biological systems. The newly designed eLSA software efficiently streamlines the analysis and is freely available from the eLSA homepage, which can be accessed at http://meta.usc.edu/softs/lsa. PMID:22784572

  10. "Analytical" vector-functions I

    NASA Astrophysics Data System (ADS)

    Todorov, Vladimir Todorov

    2017-12-01

    In this note we try to give a new (or different) approach to the investigation of analytical vector functions. More precisely, a notion of a power x^n, n ∈ ℕ+, of a vector x ∈ ℝ³ is introduced, which allows one to define an "analytical" function f : ℝ³ → ℝ³. Let furthermore f(ξ) = Σ_{n=0}^∞ a_n ξ^n be an analytical function of the real variable ξ. Here we replace the power ξ^n of the number ξ with the power of a vector x ∈ ℝ³ to obtain a vector "power series" f(x) = Σ_{n=0}^∞ a_n x^n. We research some properties of the vector series as well as some applications of this idea. Note that an "analytical" vector function does not depend on any basis, which may be used in research into some problems in physics.

  11. Approximate series solution of multi-dimensional, time fractional-order (heat-like) diffusion equations using FRDTM.

    PubMed

    Singh, Brajesh K; Srivastava, Vineet K

    2015-04-01

    The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations.
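    The differential transform behind FRDTM turns the PDE into a recurrence on the time-series coefficients of the solution. As an illustration of the classical special case (integer order, where the Gamma functions of the fractional version reduce to factorials): for u_t = u_xx with u(x, 0) = sin x, the recurrence (k+1) U_{k+1} = U_k'' keeps every term proportional to sin x, so only a scalar coefficient per term is needed and the summed series reproduces the exact solution e^(-t) sin x. A sketch under those assumptions:

```python
import math

# RDTM for u_t = u_xx, u(x, 0) = sin(x): seek u = sum_k U_k(x) t^k.
# The transform gives (k+1) U_{k+1} = U_k''.  With U_0 = sin(x) every
# U_k stays proportional to sin(x) (since sin'' = -sin), so we only
# track the scalar c_k in U_k(x) = c_k sin(x).
def rdtm_heat_coeffs(terms=20):
    c = [1.0]                        # c_0 from the initial condition
    for k in range(terms - 1):
        c.append(-c[k] / (k + 1))    # (k+1) c_{k+1} = -c_k
    return c

def u_approx(x, t, coeffs):
    """Truncated RDTM series solution."""
    return sum(ck * t ** k for k, ck in enumerate(coeffs)) * math.sin(x)

coeffs = rdtm_heat_coeffs()
approx = u_approx(1.0, 0.5, coeffs)
exact = math.exp(-0.5) * math.sin(1.0)   # known closed-form solution
```

The coefficients come out as (-1)^k / k!, i.e., the series of e^(-t), which is why twenty terms already match the exact solution to machine precision.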

  12. Approximate series solution of multi-dimensional, time fractional-order (heat-like) diffusion equations using FRDTM

    PubMed Central

    Singh, Brajesh K.; Srivastava, Vineet K.

    2015-01-01

    The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations. PMID:26064639

  13. Calibrant-Free Analyte Quantitation via a Variable Velocity Flow Cell.

    PubMed

    Beck, Jason G; Skuratovsky, Aleksander; Granger, Michael C; Porter, Marc D

    2017-01-17

    In this paper, we describe a novel method for analyte quantitation that does not rely on calibrants, internal standards, or calibration curves but, rather, leverages the relationship between disparate and predictable surface-directed analyte flux to an array of sensing addresses and the measured resultant signal. To reduce this concept to practice, we fabricated two flow cells such that the mean linear fluid velocity, U, was varied systematically over an array of electrodes positioned along the flow axis. This resulted in a predictable variation of the address-directed flux of a redox analyte, ferrocenedimethanol (FDM). The resultant limiting currents, measured at a series of these electrodes and accurately described by a convective-diffusive transport model, provided a means to calculate an "unknown" concentration without the use of calibrants, internal standards, or a calibration curve. Furthermore, the experiment and concentration calculation take only minutes to perform. Deviation of the calculated FDM concentrations from the true values was minimized to less than 0.5% when empirically derived values of U were employed.

  14. Analytical approximations for the oscillators with anti-symmetric quadratic nonlinearity

    NASA Astrophysics Data System (ADS)

    Alal Hosen, Md.; Chowdhury, M. S. H.; Yeakub Ali, Mohammad; Faris Ismail, Ahmad

    2017-12-01

    A second-order ordinary differential equation involving an anti-symmetric quadratic nonlinearity changes sign with the direction of motion. Oscillators with an anti-symmetric quadratic nonlinearity are therefore assumed to oscillate differently in the positive and negative directions. For this reason, the Harmonic Balance Method (HBM) cannot be applied directly. The main purpose of the present paper is to propose an analytical approximation technique based on the HBM for obtaining approximate angular frequencies and the corresponding periodic solutions of oscillators with an anti-symmetric quadratic nonlinearity. After applying the HBM, a set of complicated nonlinear algebraic equations is found. An analytical approach is not always fruitful for solving such nonlinear algebraic equations. In this article, two small parameters are found for which the power series solution produces the desired results. Moreover, the amplitude-frequency relationship has also been determined in a novel analytical way. The presented technique gives excellent results compared with the corresponding numerical results and is better than existing ones.

  15. Insights into the varnishes of historical musical instruments using synchrotron micro-analytical methods

    NASA Astrophysics Data System (ADS)

    Echard, J.-P.; Cotte, M.; Dooryhee, E.; Bertrand, L.

    2008-07-01

    Though ancient violins and other stringed instruments are often revered for the beauty of their varnishes, the varnishing techniques are not well known. In particular, very few detailed varnish analyses have been published so far. Since 2002, a research program at the Musée de la musique (Paris) has been dedicated to a detailed description of the varnishes of famous ancient musical instruments using a series of novel analytical methods. For the first time, results are presented on the study of the varnish of a late 16th century Venetian lute, using synchrotron micro-analytical methods. Identification of both the organic and inorganic compounds distributed within the individual layers of a varnish microsample has been performed using spatially resolved synchrotron Fourier transform infrared microscopy. Unambiguous identification of the mineral phases is obtained through synchrotron powder X-ray diffraction. The materials identified may be of utmost importance for understanding the varnishing process and its similarities with some painting techniques. In particular, the proteinaceous binding medium and the calcium sulfate components (bassanite and anhydrite) identified in the lower layers of the varnish microsample could be related, to a certain extent, to the ground materials of earlier Italian paintings.

  16. The inverse Numerical Computer Program FLUX-BOT for estimating Vertical Water Fluxes from Temperature Time-Series.

    NASA Astrophysics Data System (ADS)

    Trauth, N.; Schmidt, C.; Munz, M.

    2016-12-01

    Heat as a natural tracer to quantify water fluxes between groundwater and surface water has evolved into a standard hydrological method. Typically, time series of temperatures in the surface water and in the sediment are observed and subsequently evaluated with a vertical 1D representation of heat transport by advection and dispersion. Several analytical solutions, as well as their implementations in user-friendly software, exist for estimating water fluxes from the observed temperatures. Analytical solutions can be implemented easily, but assumptions about the boundary conditions have to be made a priori, e.g. a sinusoidal upper temperature boundary. Numerical models offer more flexibility and can handle temperature data characterized by irregular variations, such as storm-event-induced temperature changes, which cannot readily be incorporated into analytical solutions. This also reduces the effort of data preprocessing, such as extracting the diurnal temperature variation. We developed a software tool to estimate water FLUXes Based On Temperatures - FLUX-BOT. FLUX-BOT is a numerical code written in MATLAB that calculates vertical water fluxes in saturated sediments based on the inversion of temperature time series measured at multiple depths. It applies a cell-centered Crank-Nicolson implicit finite difference scheme to solve the one-dimensional heat advection-conduction equation. Besides its core inverse numerical routines, FLUX-BOT includes functions for visualizing the results and for performing uncertainty analysis. We provide applications of FLUX-BOT to generic as well as measured temperature data to demonstrate its performance.
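    The forward model inside such an inversion is the one-dimensional heat advection-conduction equation advanced by a Crank-Nicolson step. A stripped-down, node-centered sketch of that step (not FLUX-BOT's MATLAB code; the grid, diffusivity, and boundary temperatures are illustrative, and the inversion loop around the forward model is omitted):

```python
def solve_tridiagonal(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal linear system."""
    n = len(rhs)
    d, r = diag[:], rhs[:]
    for i in range(1, n):
        w = sub[i] / d[i - 1]
        d[i] -= w * sup[i - 1]
        r[i] -= w * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (r[i] - sup[i] * x[i + 1]) / d[i]
    return x

def crank_nicolson_step(u, kappa, v, dz, dt):
    """One Crank-Nicolson step of u_t = kappa*u_zz - v*u_z with the
    boundary temperatures u[0] and u[-1] held fixed (Dirichlet)."""
    r = kappa * dt / (2.0 * dz * dz)   # half-weighted diffusion number
    s = v * dt / (4.0 * dz)            # half-weighted advection number
    n = len(u) - 2                     # number of interior nodes
    sub = [-(r + s)] * n
    diag = [1.0 + 2.0 * r] * n
    sup = [-(r - s)] * n
    rhs = [(r + s) * u[i - 1] + (1.0 - 2.0 * r) * u[i] + (r - s) * u[i + 1]
           for i in range(1, n + 1)]
    # fold the fixed boundary values into the implicit side's right-hand side
    rhs[0] += (r + s) * u[0]
    rhs[-1] += (r - s) * u[-1]
    return [u[0]] + solve_tridiagonal(sub, diag, sup, rhs) + [u[-1]]

# pure conduction (v = 0): the profile relaxes to a straight line
u = [20.0] + [10.0] * 9 + [10.0]   # 11 nodes over 10 cm, dz = 1 cm
for _ in range(200):
    u = crank_nicolson_step(u, kappa=1e-6, v=0.0, dz=0.01, dt=500.0)
mid = u[5]
```

With a nonzero velocity v the steady profile bends away from the straight line, and it is this bending that the inversion exploits to recover the water flux from measured temperatures.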

  17. Computation of type curves for flow to partially penetrating wells in water-table aquifers

    USGS Publications Warehouse

    Moench, Allen F.

    1993-01-01

    Evaluation of Neuman's analytical solution for flow to a well in a homogeneous, anisotropic, water-table aquifer commonly requires large amounts of computation time and can produce inaccurate results for selected combinations of parameters. Large computation times occur because the integrand of a semi-infinite integral involves the summation of an infinite series. Each term of the series requires evaluation of the roots of equations, and the series itself is sometimes slowly convergent. Inaccuracies can result from lack of computer precision or from the use of improper methods of numerical integration. In this paper it is proposed to use a method of numerical inversion of the Laplace transform solution, provided by Neuman, to overcome these difficulties. The solution in Laplace space is simpler in form than the real-time solution; that is, the integrand of the semi-infinite integral does not involve an infinite series or the need to evaluate roots of equations. Because the integrand is evaluated rapidly, advanced methods of numerical integration can be used to improve accuracy with an overall reduction in computation time. The proposed method of computing type curves, for which a partially documented computer program (WTAQ1) was written, was found to reduce computation time by factors of 2 to 20 over the time needed to evaluate the closed-form, real-time solution.
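    A standard choice for this kind of numerical Laplace-transform inversion in well hydraulics is the Gaver-Stehfest algorithm, which rebuilds f(t) from samples of the transform on the real s-axis; it is an assumption here that it stands in for the inversion used with WTAQ1, since the abstract does not name one. A sketch, checked against a transform pair with a known inverse:

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s).

    N must be even; N around 10-14 is typical in double precision.
    Works well for smooth, non-oscillatory f(t).
    """
    half = N // 2
    ln2 = math.log(2.0)
    total = 0.0
    for i in range(1, N + 1):
        # Stehfest weight V_i
        v = sum(k ** half * math.factorial(2 * k)
                / (math.factorial(half - k) * math.factorial(k)
                   * math.factorial(k - 1) * math.factorial(i - k)
                   * math.factorial(2 * k - i))
                for k in range((i + 1) // 2, min(i, half) + 1))
        v *= (-1) ** (half + i)
        total += v * F(i * ln2 / t)   # transform sampled on the real axis
    return total * ln2 / t

# known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0)
exact = math.exp(-1.0)
```

For the well-flow problem, `F` would be Neuman's Laplace-space drawdown evaluated by numerical integration, which is exactly what makes the inversion cheaper than the closed-form real-time solution.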

  18. GENERAL: The Analytic Solution of Schrödinger Equation with Potential Function Superposed by Six Terms with Positive-power and Inverse-power Potentials

    NASA Astrophysics Data System (ADS)

    Hu, Xian-Quan; Luo, Guang; Cui, Li-Peng; Li, Fang-Yu; Niu, Lian-Bin

    2009-03-01

    The analytic solution of the radial Schrödinger equation is studied by using the tight coupling condition of several positive-power and inverse-power potential functions in this article. Furthermore, the precise analytic solutions, and the conditions that decide the existence of an analytic solution, have been sought for the case in which the potential of the radial Schrödinger equation is V(r) = α1r^8 + α2r^3 + α3r^2 + β3r^(-1) + β2r^(-3) + β1r^(-4). Generally speaking, there is only an approximate solution, not an analytic solution, for the Schrödinger equation with a superposition of several potentials. However, the conditions that decide the existence of an analytic solution have been found, and the analytic solution and its energy level structure are obtained, for the Schrödinger equation with the potential mentioned above. According to the single-valued, finite and continuous standard for the wave function of a quantum system, the authors first solve for the asymptotic solutions as r → ∞ and r → 0; secondly, they match the asymptotic solutions with the series solutions in the neighborhood of the irregular singularities; then, by comparing the power series coefficients, they deduce a series of analytic solutions of the stationary-state wave function and the corresponding energy level structure through tight coupling among the coefficients of the potential functions for the radial Schrödinger equation; and lastly, they discuss the solutions and draw conclusions.

  19. Fast computation of the Gauss hypergeometric function with all its parameters complex with application to the Pöschl-Teller-Ginocchio potential wave functions

    NASA Astrophysics Data System (ADS)

    Michel, N.; Stoitsov, M. V.

    2008-04-01

    The fast computation of the Gauss hypergeometric function 2F1 with all its parameters complex is a difficult task. Although the 2F1 function verifies numerous analytical properties involving power series expansions whose implementation is apparently immediate, their use is thwarted by instabilities induced by cancellations between very large terms. Furthermore, small areas of the complex plane, in the vicinity of z = exp(±iπ/3), are inaccessible using 2F1 power series linear transformations. In order to solve these problems, a generalization of R.C. Forrey's transformation theory has been developed. The latter has been successful in treating the 2F1 function with real parameters. As in the real-case transformation theory, the large canceling terms occurring in 2F1 analytical formulas are rigorously dealt with, but by way of a new method directly applicable to the complex plane. Taylor series expansions are employed to enter complex areas outside the domain of validity of the power series analytical formulas. The proposed algorithm, however, becomes unstable in general when |a|, |b|, |c| are moderate or large. As a physical application, the calculation of the wave functions of the analytical Pöschl-Teller-Ginocchio potential involving 2F1 evaluations is considered. Program summary: Program title: hyp_2F1, PTG_wf. Catalogue identifier: AEAE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAE_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 6839. No. of bytes in distributed program, including test data, etc.: 63 334. Distribution format: tar.gz. Programming language: C++, Fortran 90. Computer: Intel i686. Operating system: Linux, Windows. Word size: 64 bits. Classification: 4.7. Nature of problem: The Gauss hypergeometric function 2F1, with all its parameters complex, is uniquely calculated in the frame of transformation theory with power series summations, thus providing a very fast algorithm. The evaluation of the wave functions of the analytical Pöschl-Teller-Ginocchio potential is treated as a physical application. Solution method: The Gauss hypergeometric function 2F1 verifies linear transformation formulas allowing consideration of arguments of small modulus which can then be handled by a power series. They, however, give rise to indeterminate or numerically unstable cases when b-a and c-a-b are equal or close to integers. These are properly dealt with through analytical manipulations of the Lanczos expression providing the Gamma function. The remaining zones of the complex plane uncovered by transformation formulas are dealt with by Taylor expansions of the 2F1 function around complex points where linear transformations can be employed. The Pöschl-Teller-Ginocchio potential wave functions are calculated directly with 2F1 evaluations. Restrictions: The algorithm provides full numerical precision in almost all cases for |a|, |b|, and |c| of the order of one or smaller, but starts to be less precise or unstable as they increase, especially through the imaginary parts of a, b, and c. While it is possible to run the code for moderate or large |a|, |b|, and |c| and obtain satisfactory results for some specified values, the code is very likely to be unstable in this regime. Unusual features: Two different codes, one for the hypergeometric function and one for the Pöschl-Teller-Ginocchio potential wave functions, are provided in C++ and Fortran 90 versions. Running time: 20,000 2F1 function evaluations take an average of one second.
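    Away from the troublesome regions that the transformation theory handles, 2F1 can be summed directly from its defining Gauss series for |z| < 1. A naive baseline sketch (none of the paper's transformation machinery, so it loses accuracy near |z| = 1 and near the problematic parameter combinations), checked against two classical closed forms:

```python
def hyp2f1_series(a, b, c, z, tol=1e-15, max_terms=500):
    """Gauss series for 2F1(a, b; c; z), valid for |z| < 1 (sketch).

    Each term is built from the previous one via the Pochhammer ratio
    (a+n)(b+n) / ((c+n)(n+1)); summation stops at relative tolerance tol.
    """
    term = 1.0 + 0j
    total = term
    for n in range(max_terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
        if abs(term) <= tol * abs(total):
            break
    return total

val = hyp2f1_series(1.0, 1.0, 2.0, 0.5)          # equals -ln(1-z)/z = 2 ln 2
val2 = hyp2f1_series(0.5, 1.5, 1.5, 0.3 + 0.2j)  # equals (1-z)**(-1/2)
```

The transformation formulas in the paper exist precisely to map arguments outside the disk of convergence back into a region where a sum like this converges quickly.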

  20. Application of vector-valued rational approximations to the matrix eigenvalue problem and connections with Krylov subspace methods

    NASA Technical Reports Server (NTRS)

    Sidi, Avram

    1992-01-01

    Let F(z) be a vector-valued function F: ℂ → ℂ^N which is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. We use vector-valued rational approximation procedures for F(z) that are based on its Maclaurin series, in conjunction with power iterations, to develop bona fide generalizations of the power method for an arbitrary N × N matrix that may or may not be diagonalizable. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding invariant subspaces, and we present a detailed convergence theory for them. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory provides a set of completely new results and constructions for these Krylov subspace methods. At the same time, this theory suggests a new mode of usage for these Krylov subspace methods that is observed to possess computational advantages over their common mode of usage.
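    The classical power method that the paper generalizes can be sketched in a few lines: repeatedly apply the matrix and renormalize, so the iterate aligns with the dominant eigenvector. A minimal sketch for a small symmetric matrix (the paper's vector-valued rational approximations extend this to several eigenvalues and invariant subspaces at once):

```python
def power_method(mat, iters=200):
    """Classical power iteration: dominant eigenvalue and eigenvector.

    Normalizes by the largest-magnitude component each step, so that
    component of the returned vector converges to 1.
    """
    n = len(mat)
    v = [1.0] + [0.0] * (n - 1)   # any vector not orthogonal to the dominant one
    lam = 0.0
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w, key=abs)      # eigenvalue estimate
        v = [wi / lam for wi in w]
    return lam, v

A = [[2.0, 1.0], [1.0, 2.0]]   # eigenvalues 3 and 1, dominant eigenvector (1, 1)
lam, v = power_method(A)
```

Convergence is geometric with ratio |λ₂/λ₁| (here 1/3 per step), which is the slowness the rational-approximation and Krylov accelerations in the paper are designed to overcome.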

  1. Beyond Antibodies as Binding Partners: The Role of Antibody Mimetics in Bioanalysis.

    PubMed

    Yu, Xiaowen; Yang, Yu-Ping; Dikici, Emre; Deo, Sapna K; Daunert, Sylvia

    2017-06-12

    The emergence of novel binding proteins or antibody mimetics capable of binding to ligand analytes in a manner analogous to that of the antigen-antibody interaction has spurred increased interest in the biotechnology and bioanalytical communities. The goal is to produce antibody mimetics designed to outperform antibodies with regard to binding affinities, cellular and tumor penetration, large-scale production, and temperature and pH stability. The generation of antibody mimetics with tailored characteristics involves the identification of a naturally occurring protein scaffold as a template that binds to a desired ligand. This scaffold is then engineered to create a superior binder by first creating a library that is then subjected to a series of selection steps. Antibody mimetics have been successfully used in the development of binding assays for the detection of analytes in biological samples, as well as in separation methods, cancer therapy, targeted drug delivery, and in vivo imaging. This review describes recent advances in the field of antibody mimetics and their applications in bioanalytical chemistry, specifically in diagnostics and other analytical methods.

  2. Nuclear magnetic resonance signal dynamics of liquids in the presence of distant dipolar fields, revisited

    PubMed Central

    Barros, Wilson; Gochberg, Daniel F.; Gore, John C.

    2009-01-01

    The description of the nuclear magnetic resonance magnetization dynamics in the presence of long-range dipolar interactions, which is based upon approximate solutions of Bloch–Torrey equations including the effect of a distant dipolar field, has been revisited. New experiments show that approximate analytic solutions have a broader regime of validity as well as dependencies on pulse-sequence parameters that seem to have been overlooked. In order to explain these experimental results, we developed a new method consisting of calculating the magnetization via an iterative formalism where both diffusion and distant dipolar field contributions are treated as integral operators incorporated into the Bloch–Torrey equations. The solution can be organized as a perturbative series, whereby access to higher order terms allows one to set better boundaries on validity regimes for analytic first-order approximations. Finally, the method legitimizes the use of simple analytic first-order approximations under less demanding experimental conditions, it predicts new pulse-sequence parameter dependencies for the range of validity, and clarifies weak points in previous calculations. PMID:19425789

  3. Recent Advances in Laplace Transform Analytic Element Method (LT-AEM) Theory and Application to Transient Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Kuhlman, K. L.; Neuman, S. P.

    2006-12-01

Furman and Neuman (2003) proposed a Laplace Transform Analytic Element Method (LT-AEM) for transient groundwater flow. LT-AEM applies the traditionally steady-state AEM to the Laplace-transformed groundwater flow equation, and back-transforms the resulting solution to the time domain using a Fourier series numerical inverse Laplace transform method (de Hoog et al., 1982). We have extended the method so it can compute hydraulic head and flow velocity distributions due to any two-dimensional combination and arrangement of point, line, circular and elliptical area sinks and sources, nested circular or elliptical regions having different hydraulic properties, and areas of specified head, flux or initial condition. The strengths of all sinks and sources, and the specified head and flux values, can all vary in both space and time in an independent and arbitrary fashion. Initial conditions may vary from one area element to another. A solution is obtained by matching heads and normal fluxes along the boundary of each element. The effect which each element has on the total flow is expressed in terms of generalized Fourier series which converge rapidly (<20 terms) in most cases. As there are more matching points than unknown Fourier terms, the matching is accomplished in Laplace space using least squares. The method is illustrated by calculating the resulting transient head and flow velocities due to an arrangement of elements in both finite and infinite domains. The 2D LT-AEM elements already developed and implemented are currently being extended to solve the 3D groundwater flow equation.
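The time-domain recovery step can be illustrated with a simpler relative of the de Hoog algorithm. The sketch below uses the Gaver-Stehfest method (a stand-in chosen for brevity; it is not the Fourier-series inversion used in LT-AEM) to invert a known transform:

```python
import math

def stehfest_invert(F, t, N=14):
    """Gaver-Stehfest numerical inverse Laplace transform: recovers f(t)
    from samples of F(s) at the real points s = k ln2 / t (N must be even)."""
    h = math.log(2.0) / t
    fact = math.factorial
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k (alternating and large: this limits N in doubles)
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            Vk += (j ** (N // 2) * fact(2 * j)
                   / (fact(N // 2 - j) * fact(j) * fact(j - 1)
                      * fact(k - j) * fact(2 * j - k)))
        Vk *= (-1) ** (k + N // 2)
        total += Vk * F(k * h)
    return h * total

# sanity check: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
f1 = stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0)
```

In LT-AEM the role of the toy transform is played by the Laplace-space head solution, evaluated element by element at the sample points the inversion requires.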

  4. Thermal Analysis of Antenna Structures. Part 2: Panel Temperature Distribution

    NASA Technical Reports Server (NTRS)

    Schonfeld, D.; Lansing, F. L.

    1983-01-01

    This article is the second in a series that analyzes the temperature distribution in microwave antennas. An analytical solution in a series form is obtained for the temperature distribution in a flat plate analogous to an antenna surface panel under arbitrary temperature and boundary conditions. The solution includes the effects of radiation and air convection from the plate. Good agreement is obtained between the numerical and analytical solutions.
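The flavor of such a series solution can be seen in the simplest analogue: steady conduction in a unit square plate with one edge held at unit temperature (a textbook sketch that omits the radiation and convection terms treated in the article):

```python
import math

def plate_temperature(x, y, n_terms=99):
    """Series solution of Laplace's equation on the unit square:
    T = 1 on the edge y = 0 and T = 0 on the other three edges."""
    total = 0.0
    for n in range(1, n_terms + 1, 2):   # only odd harmonics contribute
        total += (4.0 / (n * math.pi) * math.sin(n * math.pi * x)
                  * math.sinh(n * math.pi * (1.0 - y)) / math.sinh(n * math.pi))
    return total
```

By symmetry, four rotated copies of this problem sum to a uniformly heated boundary, so the centre value is exactly 0.25, a convenient check on the truncated series.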

  5. Rapid determination of thermodynamic parameters from one-dimensional programmed-temperature gas chromatography for use in retention time prediction in comprehensive multidimensional chromatography.

    PubMed

    McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J

    2014-01-17

A new method for estimating the thermodynamic parameters of ΔH(T0), ΔS(T0), and ΔCP for use in thermodynamic modeling of GC×GC separations has been developed. The method is an alternative to the traditional isothermal separations required to fit a three-parameter thermodynamic model to retention data. Herein, a non-linear optimization technique is used to estimate the parameters from a series of temperature-programmed separations using the Nelder-Mead simplex algorithm. With this method, the time required to obtain estimates of thermodynamic parameters for a series of analytes is significantly reduced. This new method allows for precise predictions of retention time, with the average error being only 0.2 s for 1D separations. Predictions for GC×GC separations were also in agreement with experimental measurements, having an average relative error of 0.37% for (1)tr and 2.1% for (2)tr. Copyright © 2013 Elsevier B.V. All rights reserved.
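The Nelder-Mead simplex search at the core of the method is derivative-free and easy to sketch. Below is a generic, self-contained implementation with the standard reflection, expansion, contraction and shrink moves (not the authors' code; the quadratic objective is a stand-in for the retention-time misfit):

```python
def nelder_mead(f, x0, step=0.5, tol=1e-12, max_iter=5000):
    """Minimal Nelder-Mead simplex minimizer for a function f of a list of floats."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                       # initial simplex around x0
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, second, worst = simplex[0], simplex[-2], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        xr = [2.0 * centroid[i] - worst[i] for i in range(n)]            # reflect
        if f(best) <= f(xr) < f(second):
            simplex[-1] = xr
        elif f(xr) < f(best):
            xe = [3.0 * centroid[i] - 2.0 * worst[i] for i in range(n)]  # expand
            simplex[-1] = xe if f(xe) < f(xr) else xr
        else:
            xc = [0.5 * (centroid[i] + worst[i]) for i in range(n)]      # contract
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                            # shrink everything toward the best vertex
                simplex = [best] + [[0.5 * (best[i] + p[i]) for i in range(n)]
                                    for p in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]

# toy stand-in for the retention-time misfit: a two-parameter least-squares bowl
best_fit = nelder_mead(lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2, [0.0, 0.0])
```

In the actual application the objective would be the summed squared difference between predicted and measured temperature-programmed retention times, with the three thermodynamic parameters as the search variables.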

  6. 40 CFR Appendix A to Subpart E of... - Interim Transmission Electron Microscopy Analytical Methods-Mandatory and Nonmandatory-and...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... pore size less than or equal to 0.45 µm. 6. Place these filters in series with a 5.0 µm backup filter... for not more than 30 seconds and replacing it at the time of sampling before sampling is initiated at.... Ensure that the sampler is turned upright before interrupting the pump flow. 21. Check that all samples...

  7. 40 CFR Appendix A to Subpart E of... - Interim Transmission Electron Microscopy Analytical Methods-Mandatory and Nonmandatory-and...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... pore size less than or equal to 0.45 µm. 6. Place these filters in series with a 5.0 µm backup filter... for not more than 30 seconds and replacing it at the time of sampling before sampling is initiated at.... Ensure that the sampler is turned upright before interrupting the pump flow. 21. Check that all samples...

  8. 40 CFR Appendix A to Subpart E of... - Interim Transmission Electron Microscopy Analytical Methods-Mandatory and Nonmandatory-and...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... pore size less than or equal to 0.45 µm. 6. Place these filters in series with a 5.0 µm backup filter... for not more than 30 seconds and replacing it at the time of sampling before sampling is initiated at.... Ensure that the sampler is turned upright before interrupting the pump flow. 21. Check that all samples...

  9. Numerically stable formulas for a particle-based explicit exponential integrator

    NASA Astrophysics Data System (ADS)

    Nadukandi, Prashanth

    2015-05-01

    Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
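The idea of switching to a series approximation near a removable singularity can be seen in miniature with the first divided difference of the exponential, phi1(x) = (e^x - 1)/x (an illustrative toy, not the X-IVAS formulas themselves):

```python
import math

def phi1_naive(x):
    """(e^x - 1)/x evaluated directly: loses digits through cancellation
    as x -> 0, and is indeterminate (0/0) at x = 0 exactly."""
    return (math.exp(x) - 1.0) / x

def phi1_stable(x, threshold=1e-2):
    """(e^x - 1)/x with a truncated Taylor series,
    phi1(x) = 1 + x/2! + x^2/3! + ..., used near the removable singularity."""
    if abs(x) > threshold:
        return (math.exp(x) - 1.0) / x
    term, total = 1.0, 1.0
    for n in range(2, 12):
        term *= x / n
        total += term
    return total
```

For this particular scalar case `math.expm1(x)/x` is the production fix; it is the piecewise-series idea that carries over to the higher-order divided differences of the matrix exponential.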

  10. High-throughput biomonitoring of dioxins and polychlorinated biphenyls at the sub-picogram level in human serum.

    PubMed

    Focant, Jean-François; Eppe, Gauthier; Massart, Anne-Cécile; Scholl, Georges; Pirard, Catherine; De Pauw, Edwin

    2006-10-13

We report on the use of a state-of-the-art method for the measurement of selected polychlorinated dibenzo-p-dioxins, polychlorinated dibenzofurans and polychlorinated biphenyls in human serum specimens. The sample preparation procedure is based on manual small size solid-phase extraction (SPE) followed by automated clean-up and fractionation using multi-sorbent liquid chromatography columns. SPE cartridges and all clean-up columns are disposable. Samples are processed in batches of 20 units, including one blank control (BC) sample and one quality control (QC) sample. The analytical measurement is performed using gas chromatography coupled to isotope dilution high-resolution mass spectrometry. The sample throughput corresponds to one series of 20 samples per day, from sample reception to data quality cross-check and reporting, once the procedure has been started and successive series of samples are being produced. Four analysts are required to ensure proper performance of the procedure. The entire procedure has been validated under International Organization for Standardization (ISO) 17025 criteria and further tested over more than 1500 unknown samples during various epidemiological studies. The method is further discussed in terms of reproducibility, efficiency and long-term stability regarding the 35 target analytes. Data related to quality control and limit of quantification (LOQ) calculations are also presented and discussed.

  11. A series solution for horizontal infiltration in an initially dry aquifer

    NASA Astrophysics Data System (ADS)

    Furtak-Cole, Eden; Telyakovskiy, Aleksey S.; Cooper, Clay A.

    2018-06-01

The porous medium equation (PME) is a generalization of the traditional Boussinesq equation for hydraulic conductivity as a power law function of height. We analyze the horizontal recharge of an initially dry unconfined aquifer of semi-infinite extent, as would be found in an aquifer adjacent to a rising river. If the water level can be modeled as a power law function of time, similarity variables can be introduced and the original problem can be reduced to a boundary value problem for a nonlinear ordinary differential equation. The position of the advancing front is not known ahead of time and must be found in the process of solution. We present an analytical solution in the form of a power series, with the coefficients of the series given by a recurrence relation. The analytical solution compares favorably with a highly accurate numerical solution, and only a small number of terms of the series are needed to achieve high accuracy in the scenarios considered here. We also conduct a series of physical experiments in an initially dry wedged Hele-Shaw cell, where flow is modeled by a special form of the PME. Our analytical solution closely matches the hydraulic head profiles in the Hele-Shaw cell experiment.

  12. Annual banned-substance review: analytical approaches in human sports drug testing.

    PubMed

    Thevis, Mario; Kuuranne, Tiia; Geyer, Hans; Schänzer, Wilhelm

    2017-01-01

There has been an immense amount of visibility of doping issues on the international stage over the past 12 months with the complexity of doping controls reiterated on various occasions. Hence, analytical test methods continuously being updated, expanded, and improved to provide specific, sensitive, and comprehensive test results in line with the World Anti-Doping Agency's (WADA) 2016 Prohibited List represent one of several critical cornerstones of doping controls. This enterprise necessitates expediting the (combined) exploitation of newly generated information on novel and/or superior target analytes for sports drug testing assays, drug elimination profiles, alternative test matrices, and recent advances in instrumental developments. This paper is a continuation of the series of annual banned-substance reviews appraising the literature published between October 2015 and September 2016 concerning human sports drug testing in the context of WADA's 2016 Prohibited List. Copyright © 2016 John Wiley & Sons, Ltd.

  13. African Primary Care Research: Qualitative data analysis and writing results

    PubMed Central

    Govender, Indiran; Ogunbanjo, Gboyega A.; Mash, Bob

    2014-01-01

This article is part of a series on African primary care research and gives practical guidance on qualitative data analysis and the presentation of qualitative findings. After an overview of qualitative methods and analytical approaches, the article focuses particularly on content analysis, using the framework method as an example. The steps of familiarisation, creating a thematic index, indexing, charting, interpretation and confirmation are described. Key concepts with regard to establishing the quality and trustworthiness of data analysis are described. Finally, an approach to the presentation of qualitative findings is given. PMID:26245437

  14. African Primary Care Research: qualitative data analysis and writing results.

    PubMed

    Mabuza, Langalibalele H; Govender, Indiran; Ogunbanjo, Gboyega A; Mash, Bob

    2014-06-05

    This article is part of a series on African primary care research and gives practical guidance on qualitative data analysis and the presentation of qualitative findings. After an overview of qualitative methods and analytical approaches, the article focuses particularly on content analysis, using the framework method as an example. The steps of familiarisation, creating a thematic index, indexing, charting, interpretation and confirmation are described. Key concepts with regard to establishing the quality and trustworthiness of data analysis are described. Finally, an approach to the presentation of qualitative findings is given.

  15. Field Performance of ISFET based Deep Ocean pH Sensors

    NASA Astrophysics Data System (ADS)

    Branham, C. W.; Murphy, D. J.

    2017-12-01

Historically, ocean pH time series data were acquired from infrequent shipboard grab samples and measured using labor-intensive spectrophotometry methods. However, with the introduction of robust and stable ISFET pH sensors for use in ocean applications, a paradigm shift in the methods used to acquire long-term pH time series data has occurred. Sea-Bird Scientific played a critical role in the adoption of this new technology by commercializing the SeaFET pH sensor and the float pH sensor developed by the MBARI chemical sensor group. Sea-Bird Scientific continues to advance this technology through a concerted effort to improve pH sensor accuracy and reliability by characterizing sensor performance in the laboratory and field. This presentation will focus on calibration of the ISFET pH sensor, evaluation of its analytical performance, and validation of that performance using recent field data.

  16. Rapid and sensitive analytical method for monitoring of 12 organotin compounds in natural waters.

    PubMed

    Vahčič, Mitja; Milačič, Radmila; Sčančar, Janez

    2011-03-01

A rapid analytical method for the simultaneous determination of 12 different organotin compounds (OTC): methyl-, butyl-, phenyl- and octyl-tins in natural water samples was developed. It comprises in situ derivatisation (using NaBEt4) of OTC in a salty or fresh water sample matrix adjusted to pH 6 with Tris-citrate buffer, extraction of the ethylated OTC into hexane, separation of OTC in the organic phase on a 15 m GC column, and subsequent quantitative determination of the separated OTC by ICP-MS. To optimise the pH of ethylation, phosphate, carbonate and Tris-citrate buffers were investigated as alternatives to the commonly applied sodium acetate - acetic acid buffer. The ethylation yields in Tris-citrate buffer were found to be better for TBT, MOcT and DOcT in comparison to the commonly used acetate buffer. Iso-octane and hexane were examined as the organic phase for extraction of the ethylated OTC. The advantage of hexane was that it allowed quantitative determination of TMeT. A GC column 15 m in length was used for separation of the studied OTC under the optimised separation conditions, and its performance was compared to a 30 m column. The analytical method developed enables sensitive simultaneous determination of 12 different OTC and appreciably shortens analysis time in larger series of water samples. LODs obtained for the newly developed method ranged from 0.05-0.06 ng Sn L-1 for methyl-, 0.11-0.45 ng Sn L-1 for butyl-, 0.11-0.16 ng Sn L-1 for phenyl-, and 0.07-0.10 ng Sn L-1 for octyl-tins. By applying the developed analytical method, marine water samples from the Northern Adriatic Sea containing mainly butyl- and methyl-tin species were analysed to confirm the proposed method's applicability.

  17. Two Approaches in the Lunar Libration Theory: Analytical vs. Numerical Methods

    NASA Astrophysics Data System (ADS)

    Petrova, Natalia; Zagidullin, Arthur; Nefediev, Yurii; Kosulin, Valerii

    2016-10-01

Observation of the physical libration of the Moon and other celestial bodies is one of the astronomical methods for remotely evaluating the internal structure of a celestial body without expensive space experiments. A review of the results obtained from physical libration studies is presented in the report. The main emphasis is placed on the description of successful lunar laser ranging for libration determination and on the methods of simulating the physical libration. As a result, the viscoelastic and dissipative properties of the lunar body and the parameters of the lunar core were estimated. The core's existence was confirmed by the recent reprocessing of seismic data from the Apollo missions. Attention is paid to the physical interpretation of the phenomenon of free libration and the methods of its determination. A significant part of the report is devoted to describing the practical application of the most accurate analytical tables of lunar libration to date, built by comprehensive analytical processing of the residual differences obtained when comparing long-term series of laser observations with the numerical ephemeris DE421 [1]. In general, the outline of the report reflects the effectiveness of two approaches in libration theory: numerical and analytical. The two approaches complement each other in the study of the Moon: the numerical approach provides the high accuracy of the theory necessary for adequate treatment of modern high-accuracy observations, while the analytical approach reveals the essence of the various manifestations in the lunar rotation and makes it possible to predict and interpret new effects in observations of the physical libration [2].
[1] Rambaux, N., J. G. Williams, 2011, The Moon's physical librations and determination of their free modes, Celest. Mech. Dyn. Astron., 109, 85-100.
[2] Petrova, N., A. Zagidullin, Yu. Nefediev, 2014, Analysis of long-periodic variations of lunar libration parameters on the basis of analytical theory, The Russian-Japanese Workshop, 20-25 October 2014, Tokyo (Mitaka) - Mizusawa, Japan.

  18. Power series solution of the inhomogeneous exclusion process

    NASA Astrophysics Data System (ADS)

    Szavits-Nossan, Juraj; Romano, M. Carmen; Ciandrini, Luca

    2018-05-01

We develop a power series method for the nonequilibrium steady state of the inhomogeneous one-dimensional totally asymmetric simple exclusion process (TASEP) in contact with two particle reservoirs and with site-dependent hopping rates in the bulk. The power series is performed in the entrance or exit rates governing particle exchange with the reservoirs, and the corresponding particle current is computed analytically up to the cubic term in the entry or exit rate, respectively. We also show how to compute higher-order terms using combinatorial objects known as Young tableaux. Our results address the long-standing problem of finding the exact nonequilibrium steady state of the inhomogeneous TASEP. The findings are particularly relevant to the modeling of mRNA translation, in which the rate of translation initiation, corresponding to the entrance rate in the TASEP, is typically small.
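For small systems, the steady state targeted by the series can be checked by brute force from the master equation. For N = 3 sites with entry and exit rates alpha = beta = 1, the known matrix-product result gives a current J = 5/14, which direct relaxation of the master equation reproduces (a brute-force check, not the power-series method of the paper):

```python
from itertools import product

def tasep_current(N=3, alpha=1.0, beta=1.0, dt=0.05, steps=20000):
    """Stationary current of the open TASEP (unit bulk hopping rate) found
    by relaxing the master equation dp/dt = Q p to its fixed point."""
    states = list(product((0, 1), repeat=N))
    index = {s: i for i, s in enumerate(states)}
    p = [1.0 / len(states)] * len(states)
    for _ in range(steps):
        q = [0.0] * len(states)
        for s, ps in zip(states, p):
            out = 0.0
            if s[0] == 0:                        # entry at site 1, rate alpha
                q[index[(1,) + s[1:]]] += alpha * ps
                out += alpha
            for i in range(N - 1):               # bulk hop i -> i+1, rate 1
                if s[i] == 1 and s[i + 1] == 0:
                    q[index[s[:i] + (0, 1) + s[i + 2:]]] += ps
                    out += 1.0
            if s[-1] == 1:                       # exit from site N, rate beta
                q[index[s[:-1] + (0,)]] += beta * ps
                out += beta
            q[index[s]] -= out * ps
        p = [pi + dt * qi for pi, qi in zip(p, q)]
    # steady-state current = entry rate times P(first site empty)
    return alpha * sum(ps for s, ps in zip(states, p) if s[0] == 0)

current = tasep_current()   # N = 3, alpha = beta = 1
```

With alpha = beta = 1 the matrix-ansatz normalization constants are Catalan numbers, giving J = Z_2/Z_3 = 5/14 for N = 3 and J = 2/5 for N = 2; the exponential growth of the state space is what makes series methods like the one above necessary for large inhomogeneous systems.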

  19. Evaluation of methods for simultaneous collection and determination of nicotine and polynuclear aromatic hydrocarbons in indoor air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chuang, J.C.; Kuhlman, M.R.; Wilson, N.K.

    1990-01-01

A study was performed to determine whether one sampling system and one analytical method can be used to measure both polynuclear aromatic hydrocarbons (PAH) and nicotine. The PAH collection efficiencies for both XAD-2 and XAD-4 adsorbents were very similar, but the nicotine collection efficiency was greater for XAD-4. The spiked perdeuterated PAH were retained well in both adsorbents after exposure to more than 300 cu m of air. A two-step Soxhlet extraction, dichloromethane followed by ethyl acetate, was used to remove nicotine and PAH from XAD-4. The extract was analyzed by positive chemical ionization or electron impact gas chromatography/mass spectrometry (GC/MS) to determine nicotine and PAH. It is shown that one sampling system (quartz fiber filter and XAD-4 in series) and one analytical method (Soxhlet extraction and GC/MS) can be used to measure both nicotine and PAH in indoor air.

  20. Evaluation of methods for simultaneous collection and determination of nicotine and polynuclear aromatic hydrocarbons in indoor air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chuang, J.C.; Kuhlman, M.R.; Wilson, N.K.

    1990-05-01

A study was performed to determine whether one sampling system and one analytical method can be used to collect and measure both polynuclear aromatic hydrocarbons (PAHs) and nicotine. PAH collection efficiencies for both XAD-2 and XAD-4 adsorbents were very similar, but nicotine collection efficiency was greater for XAD-4. Spiked perdeuterated PAHs were retained well in both adsorbents after exposure to more than 300 m³ of air. A two-step Soxhlet extraction, dichloromethane followed by ethyl acetate, was used to remove nicotine and PAHs from XAD-4. The extract was analyzed by positive chemical ionization or electron impact gas chromatography/mass spectrometry (GC/MS) to determine nicotine and PAHs. It is shown that one sampling system (quartz fiber filter and XAD-4 in series) and one analytical method (Soxhlet extraction and GC/MS) can be used for both nicotine and PAHs in indoor air.

  1. Application of Dynamic Analysis in Semi-Analytical Finite Element Method.

    PubMed

    Liu, Pengfei; Xing, Qinyan; Wang, Dawei; Oeser, Markus

    2017-08-30

Analyses of dynamic responses are of significant importance for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional but only requires a two-dimensional FE discretization, incorporating a Fourier series in the third dimension. In this paper, the algorithm used to apply dynamic analysis in SAFEM is introduced in detail. Asphalt pavement models under moving loads were built in SAFEM and in the commercial finite element software ABAQUS to verify the accuracy and efficiency of SAFEM. The verification shows that the computational accuracy of SAFEM is sufficiently high and its computational time is much shorter than that of ABAQUS. Moreover, experimental verification was carried out, and the prediction derived from SAFEM is consistent with the measurement. Therefore, SAFEM can reliably predict the dynamic response of asphalt pavement under moving loads, proving beneficial to road administrations in assessing the pavement's state.

  2. Analytical assessment of woven fabrics under vertical stabbing - The role of protective clothing.

    PubMed

    Hejazi, Sayyed Mahdi; Kadivar, Nastaran; Sajjadi, Ali

    2016-02-01

Knives are being used more commonly in street fights and muggings. Therefore, this work presents an analytical model for woven fabrics under vertical stabbing loads. The model is based on the energy method, and the fabric is assumed to be unidirectional, comprising N layers. Thus, the ultimate stab resistance of the fabric was determined from the structural parameters of the fabric and the geometrical characteristics of the blade. Moreover, protective clothing is nowadays considered a strategic branch of the technical textile industry. The main idea of the present work is to improve the stab resistance of woven textiles by using a metal coating method. Finally, a series of vertical stabbing tests were conducted on cotton, polyester and polyamide fabrics. It was found that the model predicts the ultimate stab resistance of the sample fabrics with good accuracy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. A Framework and Algorithms for Multivariate Time Series Analytics (MTSA): Learning, Monitoring, and Recommendation

    ERIC Educational Resources Information Center

    Ngan, Chun-Kit

    2013-01-01

    Making decisions over multivariate time series is an important topic which has gained significant interest in the past decade. A time series is a sequence of data points which are measured and ordered over uniform time intervals. A multivariate time series is a set of multiple, related time series in a particular domain in which domain experts…

  4. A method for direct, semi-quantitative analysis of gas phase samples using gas chromatography-inductively coupled plasma-mass spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, Kimberly E; Gerdes, Kirk

    2013-07-01

A new and complete GC–ICP-MS method is described for direct analysis of trace metals in a gas phase process stream. The proposed method is derived from standard analytical procedures developed for ICP-MS, which are regularly exercised in standard ICP-MS laboratories. In order to implement the method, a series of empirical factors were generated to calibrate detector response with respect to a known concentration of an internal standard analyte. Calibrated responses are ultimately used to determine the concentration of metal analytes in a gas stream using a semi-quantitative algorithm. The method was verified using a traditional gas injection from a GC sampling valve and a standard gas mixture containing either a 1 ppm Xe + Kr mix with helium balance or 100 ppm Xe with helium balance. Data collected for Xe and Kr gas analytes revealed that agreement of 6–20% with the actual concentration can be expected for various experimental conditions. To demonstrate the method using a relevant “unknown” gas mixture, experiments were performed for continuous 4 and 7 hour periods using a Hg-containing sample gas that was co-introduced into the GC sample loop with the xenon gas standard. System performance and detector response to the dilute concentration of the internal standard were pre-determined, which allowed semi-quantitative evaluation of the analyte. The calculated analyte concentrations varied during the course of the 4 hour experiment, particularly during the first hour of the analysis, where the actual Hg concentration was underpredicted by up to 72%. Calculated concentrations improved to within 30–60% for data collected after the first hour of the experiment. Similar results were seen during the 7 hour test, with the deviation from the actual concentration being 11–81% during the first hour and then decreasing for the remaining period. The method detection limit (MDL) was determined for mercury by injecting the sample gas into the system following a period of equilibration. The MDL for Hg was calculated as 6.8 μg·m⁻³. This work describes the first complete GC–ICP-MS method to directly analyze gas phase samples, and detailed sample calculations and comparisons to conventional ICP-MS methods are provided.
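Once the empirical response factors are in hand, the semi-quantitative step reduces to ratio arithmetic against the internal standard. A schematic sketch (the function and all numbers below are hypothetical illustrations, not the paper's calibration data):

```python
def semi_quant(analyte_counts, istd_counts, istd_conc, rrf):
    """Estimate an analyte concentration from raw detector counts using an
    internal standard of known concentration and an empirical relative
    response factor (RRF = analyte response per unit concentration divided
    by internal-standard response per unit concentration)."""
    return (analyte_counts / istd_counts) * istd_conc / rrf

# hypothetical numbers: 1 ppm Xe internal standard, Hg as the analyte
hg_conc = semi_quant(analyte_counts=5.0e4, istd_counts=2.0e5,
                     istd_conc=1.0, rrf=0.5)
```

Drift in the internal-standard signal cancels out of the ratio, which is why the approach tolerates the long continuous runs described above.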

  5. Access to Elementary Education in India. Country Analytical Review

    ERIC Educational Resources Information Center

    Govinda, R.; Bandyopadhyay, Madhumita

    2008-01-01

This analytical review aims at exploring trends in educational access and delineating different groups which are vulnerable to exclusion from educational opportunities at the elementary stage. This review has drawn references from a series of analytical papers developed on different themes, i.e. regional disparity in education, social equity and…

  6. Equivalent-circuit models for electret-based vibration energy harvesters

    NASA Astrophysics Data System (ADS)

    Phu Le, Cuong; Halvorsen, Einar

    2017-08-01

This paper presents a complete analysis to build a tool for modelling electret-based vibration energy harvesters. The calculation approach includes all possible effects of fringing fields that may have a significant impact on output power. The transducer configuration consists of two sets of metal strip electrodes on a top substrate that faces electret strips deposited on a bottom movable substrate functioning as a proof mass. The charge distribution on each metal strip is expressed by a series expansion using Chebyshev polynomials multiplied by a reciprocal square-root form. The Galerkin method is then applied to extract all charge induction coefficients. The approach is validated by finite element calculations. From the analytic tool, a variety of connection schemes for power extraction in slot-effect and cross-wafer configurations can be lumped into a standard equivalent circuit with inclusion of parasitic capacitance. Fast calculation of the coefficients is also obtained by a proposed closed-form solution based on the leading terms of the series expansions. The analytical result achieved is an important step towards further optimisation of the transducer geometry and maximisation of harvester performance.
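The series-expansion step can be illustrated generically: fitting a Chebyshev series to a function sampled at Chebyshev nodes (a toy example of Chebyshev approximation, not the harvester's coupled charge equations):

```python
import math

def cheb_coeffs(f, n):
    """Coefficients c_j of the degree-(n-1) Chebyshev series interpolating
    f at the n Chebyshev nodes on [-1, 1]."""
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fv = [f(x) for x in nodes]
    return [2.0 / n * sum(fv[k] * math.cos(math.pi * j * (k + 0.5) / n)
                          for k in range(n)) for j in range(n)]

def cheb_eval(c, x):
    """Evaluate sum_j c_j T_j(x) - c_0/2 using T_j(x) = cos(j arccos x)."""
    t = math.acos(x)
    return sum(cj * math.cos(j * t) for j, cj in enumerate(c)) - 0.5 * c[0]
```

For a polynomial of degree below the node count the interpolant is exact; for example x² has the exact expansion (T₀ + T₂)/2, which the coefficient routine recovers.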

  7. [Analytic evaluation of potential nootropic agents].

    PubMed

    Opatrilová, R; Sokolová, P

    2004-01-01

The paper deals with the analytical evaluation of newly prepared substances, derivatives of N-(4-alkoxy-phenyl)-2-(2-oxo-azepan-1-yl)-acetamide. The substances form a homologous series (methyl- to hexyl-). The purity of the substances was verified by thin-layer adsorption chromatography, and the principal physical characteristics--melting point and solubility--were determined. Experimental determination of the partition coefficient, extraction of the substances between two liquids miscible to a limited degree (n-octanol--water), determination of RM values by means of TLC partition chromatography (glass plates DC-Fertigplatten RP-8 F254S), determination of the capacity factor by means of HPLC (column C18 Plaris), and calculation by means of computer programmes were employed to determine the lipophilicity of this series of substances. The antiradical activity of the substances was evaluated by the method of quenching the stable radical 2,2-diphenyl-1-picryl-hydrazyl. Ascorbic acid, for which an antiradical effect had been demonstrated, was used for comparison. The substances show a certain activity, but they do not reach the antioxidative effect of ascorbic acid.

  8. Molecular imaging of cannabis leaf tissue with MeV-SIMS method

    NASA Astrophysics Data System (ADS)

    Jenčič, Boštjan; Jeromel, Luka; Ogrinc Potočnik, Nina; Vogel-Mikuš, Katarina; Kovačec, Eva; Regvar, Marjana; Siketić, Zdravko; Vavpetič, Primož; Rupnik, Zdravko; Bučar, Klemen; Kelemen, Mitja; Kovač, Janez; Pelicon, Primož

    2016-03-01

To broaden our analytical capabilities with molecular imaging in addition to the existing elemental imaging with micro-PIXE, a linear time-of-flight mass spectrometer for MeV Secondary Ion Mass Spectrometry (MeV-SIMS) was constructed and added to the existing nuclear microprobe at the Jožef Stefan Institute. We measured absolute molecular yields and damage cross-sections of reference materials, without significant alteration of the fragile biological samples during the measurements in mapping mode. We explored the analytical capability of the MeV-SIMS technique for chemical mapping of the plant tissue of medicinal cannabis leaves. A series of hand-cut plant tissue slices were prepared by a standard shock-freezing and freeze-drying protocol and deposited on a Si wafer. We present the measured MeV-SIMS spectra, which show a series of peaks in the mass region of cannabinoids, as well as the corresponding maps. The indicated molecular distributions at masses of 345.5 u and 359.4 u may be attributed to the protonated THCA and THCA-C4 acids, and show enhancement in areas with open trichome morphology.

  9. Yarkovsky-O'Keefe-Radzievskii-Paddack effect on tumbling objects

    NASA Astrophysics Data System (ADS)

    Breiter, S.; Rożek, A.; Vokrouhlický, D.

    2011-11-01

    A semi-analytical model of the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect on an asteroid spin in a non-principal axis rotation state is developed. The model describes the spin-state evolution in Deprit-Elipe variables, first-order averaged with respect to rotation and Keplerian orbital motion. Assuming zero conductivity, the YORP torque is represented by spherical harmonic series with vectorial coefficients, allowing us to use any degree and order of approximation. Within the quadrupole approximation of the illumination function we find the same first integrals involving rotational momentum, obliquity and dynamical inertia that were obtained by Cicaló & Scheeres. The integrals do not exist when higher degree terms of the illumination function are included, and then the asymptotic states known from Vokrouhlický et al. appear. This resolves an apparent contradiction between earlier results. Averaged equations of motion admit stable and unstable limit cycle solutions that were not previously detected. Non-averaged numerical integration by the Taylor series method for an exemplary shape of 3103 Eger is in good agreement with the semi-analytical theory.

  10. Matched Backprojection Operator for Combined Scanning Transmission Electron Microscopy Tilt- and Focal Series.

    PubMed

    Dahmen, Tim; Kohr, Holger; de Jonge, Niels; Slusallek, Philipp

    2015-06-01

    Combined tilt- and focal series scanning transmission electron microscopy is a recently developed method to obtain nanoscale three-dimensional (3D) information of thin specimens. In this study, we formulate the forward projection in this acquisition scheme as a linear operator and prove that it is a generalization of the Ray transform for parallel illumination. We analytically derive the corresponding backprojection operator as the adjoint of the forward projection. We further demonstrate that the matched backprojection operator drastically improves the convergence rate of iterative 3D reconstruction compared to the case where a backprojection based on heuristic weighting is used. In addition, we show that the 3D reconstruction is of better quality.
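The adjoint relationship between forward projection and matched backprojection can be checked on a toy linear operator. The following sketch uses a random matrix as a stand-in for the tilt/focal-series projector (not the paper's operator, which is likewise linear) and shows the defining adjoint identity plus a simple Landweber iteration built on the matched backprojection:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward projector (a stand-in for the combined tilt/focal
# series operator).
A = rng.standard_normal((40, 25))

def forward(x):
    return A @ x

def matched_backprojection(y):
    # The adjoint of x -> A @ x is y -> A.T @ y.
    return A.T @ y

# Defining property of the adjoint: <A x, y> == <x, A^T y>.
x = rng.standard_normal(25)
y = rng.standard_normal(40)
lhs = forward(x) @ y
rhs = x @ matched_backprojection(y)
print(lhs, rhs)

# A Landweber iteration using the matched backprojection converges to
# the least-squares solution of forward(est) = b.
x_true = rng.standard_normal(25)
b = forward(x_true)
step = 1.0 / np.linalg.norm(A, 2) ** 2
est = np.zeros(25)
for _ in range(2000):
    est = est + step * matched_backprojection(b - forward(est))
print(np.linalg.norm(est - x_true))   # should be near zero
```

An unmatched (heuristically weighted) backprojection breaks the adjoint identity, which is exactly the situation in which the iteration's convergence degrades.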

  11. The propagation of the shock wave from a strong explosion in a plane-parallel stratified medium: the Kompaneets approximation

    NASA Astrophysics Data System (ADS)

    Olano, C. A.

    2009-11-01

Context: Using certain simplifications, Kompaneets derived a partial differential equation that states the local geometrical and kinematical conditions that each surface element of a shock wave, created by a point blast in a stratified gaseous medium, must satisfy. Kompaneets could solve his equation analytically for the case of a wave propagating in an exponentially stratified medium, obtaining the form of the shock front at progressive evolutionary stages. Complete analytical solutions of the Kompaneets equation for shock wave motion in further plane-parallel stratified media were not found, except for radially stratified media. Aims: We aim to analytically solve the Kompaneets equation for the motion of a shock wave in different plane-parallel stratified media that can reflect a wide variety of astrophysical contexts. We were particularly interested in solving the Kompaneets equation for a strong explosion in the interstellar medium of the Galactic disk, in which, due to intense winds and explosions of stars, gigantic gaseous structures known as superbubbles and supershells are formed. Methods: Using the Kompaneets approximation, we derived a pair of equations, which we call the adapted Kompaneets equations, that govern the propagation of a shock wave in a stratified medium and permit us to obtain solutions in parametric form. The solutions provided by the system of adapted Kompaneets equations are equivalent to those of the Kompaneets equation. We solved the adapted Kompaneets equations for shock wave propagation in a generic stratified medium by means of a power-series method. Results: Using the series solution for a shock wave in a generic medium, we obtained the series solutions for four specific media whose respective density distributions in the direction perpendicular to the stratification plane are exponential, of power-law type (one with exponent k = -1 and the other with k = -2), and of quadratic hyperbolic-secant form.
From these series solutions, we deduced exact solutions for the four media in terms of elementary functions. The exact solution for shock wave propagation in a medium of quadratic hyperbolic-secant density distribution is very appropriate for describing the growth of superbubbles in the Galactic disk.

  12. An analytical formulation of two‐dimensional groundwater dispersion induced by surficial recharge variability

    USGS Publications Warehouse

    Swain, Eric D.; Chin, David A.

    2003-01-01

    A predominant cause of dispersion in groundwater is advective mixing due to variability in seepage rates. Hydraulic conductivity variations have been extensively researched as a cause of this seepage variability. In this paper the effect of variations in surface recharge to a shallow surficial aquifer is investigated as an important additional effect. An analytical formulation has been developed that relates aquifer parameters and the statistics of recharge variability to increases in the dispersivity. This is accomplished by solving Fourier transforms of the small perturbation forms of the groundwater flow equations. Two field studies are presented in this paper to determine the statistics of recharge variability for input to the analytical formulation. A time series of water levels at a continuous groundwater recorder is used to investigate the temporal statistics of hydraulic head caused by recharge, and a series of infiltrometer measurements are used to define the spatial variability in the recharge parameters. With these field statistics representing head fluctuations due to recharge, the analytical formulation can be used to compute the dispersivity without an explicit representation of the recharge boundary. Results from a series of numerical experiments are used to define the limits of this analytical formulation and to provide some comparison. A sophisticated model has been developed using a particle‐tracking algorithm (modified to account for temporal variations) to estimate groundwater dispersion. Dispersivity increases of 9 percent are indicated by the analytical formulation for the aquifer at the field site. A comparison with numerical model results indicates that the analytical results are reasonable for shallow surficial aquifers in which two‐dimensional flow can be assumed.

  13. Bicubic uniform B-spline wavefront fitting technology applied in computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Sun, Jun-qiang; Chen, Guo-jie

    2006-02-01

This paper presents a bicubic uniform B-spline wavefront fitting technique to derive the analytical expression for the object wavefront used in computer-generated holograms (CGHs). In many cases, to reduce the difficulty of optical processing, off-axis CGHs rather than complex aspherical surface elements are used in modern advanced military optical systems. In order to design and fabricate an off-axis CGH, the analytical expression for the object wavefront must be fitted. Zernike polynomials are suitable for fitting the wavefronts of centrosymmetric optical systems, but not of axisymmetrical optical systems. Although a high-degree polynomial fit achieves high precision at the fitting nodes, its greatest shortcoming is that any departure from the nodes can result in large fitting error, the so-called pulsation phenomenon. Furthermore, high-degree polynomial fitting increases the calculation time when coding the computer-generated hologram and solving the basic equation. Based on the basis functions of the cubic uniform B-spline and the character mesh of the bicubic uniform B-spline wavefront, the wavefront is described as the product of a series of matrices. Employing standard MATLAB routines, four different analytical expressions for object wavefronts are fitted with bicubic uniform B-splines as well as with high-degree polynomials. The calculation results indicate that, compared with high-degree polynomials, the bicubic uniform B-spline is a more competitive method for fitting the analytical expression of the object wavefront used in an off-axis CGH, owing to its higher fitting precision and C2 continuity.
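To make the B-spline machinery concrete: a uniform cubic B-spline segment can be written as the matrix product the abstract mentions, and the C2 continuity claimed for the fit can be verified at a segment join. This is a generic one-dimensional sketch with invented control points, not the authors' wavefront code:

```python
import numpy as np

# Uniform cubic B-spline basis matrix: one spline segment is the product
# [t^3 t^2 t 1] @ M @ [P0 P1 P2 P3]^T of a parameter row, this matrix,
# and four consecutive control points.
M = (1.0 / 6.0) * np.array([
    [-1.0,  3.0, -3.0, 1.0],
    [ 3.0, -6.0,  3.0, 0.0],
    [-3.0,  0.0,  3.0, 0.0],
    [ 1.0,  4.0,  1.0, 0.0],
])

def bspline_segment(t, P):
    """Evaluate one segment, t in [0, 1], P = four control points."""
    return np.array([t**3, t**2, t, 1.0]) @ M @ P

def derivs(t, P):
    """Value, first and second derivative of one segment at t."""
    c = M @ P
    return (np.array([t**3, t**2, t, 1.0]) @ c,
            np.array([3*t**2, 2*t, 1.0, 0.0]) @ c,
            np.array([6*t, 2.0, 0.0, 0.0]) @ c)

# C2 continuity at a segment join: value, slope and curvature from
# segment (P0..P3) at t=1 match segment (P1..P4) at t=0.
P = np.array([0.0, 1.0, 3.0, 2.0, 4.0])
end_of_seg1 = derivs(1.0, P[0:4])
start_of_seg2 = derivs(0.0, P[1:5])
print(end_of_seg1, start_of_seg2)
```

The bicubic case of the paper is the tensor product of two such segments, one matrix factor per parameter direction.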

  14. Stochastic modeling of experimental chaotic time series.

    PubMed

    Stemler, Thomas; Werner, Johannes P; Benner, Hartmut; Just, Wolfram

    2007-01-26

    Methods developed recently to obtain stochastic models of low-dimensional chaotic systems are tested in electronic circuit experiments. We demonstrate that reliable drift and diffusion coefficients can be obtained even when no excessive time scale separation occurs. Crisis induced intermittent motion can be described in terms of a stochastic model showing tunneling which is dominated by state space dependent diffusion. Analytical solutions of the corresponding Fokker-Planck equation are in excellent agreement with experimental data.
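The drift and diffusion coefficients mentioned above are typically estimated from conditional moments of the measured series (a Kramers–Moyal-type estimate). The sketch below applies that idea to a simulated Ornstein–Uhlenbeck process with known coefficients; the parameter values are arbitrary and the circuit data of the study are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated Ornstein-Uhlenbeck process as a stand-in "measured" series:
# dx = -theta * x dt + sigma dW, with theta = 2.0 and sigma = 0.5.
theta, sigma, dt, n = 2.0, 0.5, 1e-3, 500_000
x = np.empty(n)
x[0] = 0.0
noise = rng.standard_normal(n - 1) * sigma * np.sqrt(dt)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + noise[i]

# Conditional-moment (Kramers-Moyal) estimates of drift and diffusion:
dx = np.diff(x)
bins = np.linspace(-0.4, 0.4, 9)
centers = 0.5 * (bins[:-1] + bins[1:])
idx = np.digitize(x[:-1], bins) - 1
drift = np.array([dx[idx == k].mean() / dt for k in range(len(centers))])
diff2 = np.array([(dx[idx == k] ** 2).mean() / (2 * dt)
                  for k in range(len(centers))])

# The drift should recover -theta * x and the diffusion sigma**2 / 2:
slope = np.polyfit(centers, drift, 1)[0]
print(slope, diff2.mean())   # close to -2.0 and 0.125
```

State-space dependent diffusion, as in the crisis-induced intermittency described above, would show up here as a diffusion estimate that varies systematically across the bins.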

  15. Novel bone metabolism-associated hormones: the importance of the pre-analytical phase for understanding their physiological roles.

    PubMed

    Lombardi, Giovanni; Barbaro, Mosè; Locatelli, Massimo; Banfi, Giuseppe

    2017-06-01

The endocrine function of bone is now a recognized feature of this tissue. Bone-derived hormones that modulate whole-body homeostasis are being discovered as the effects on bone of novel and classic hormones produced by other tissues become known. Often, however, the data regarding these last-generation bone-derived or bone-targeting hormones do not give a clear picture of their physiological roles or concentration ranges. A certain degree of uncertainty could stem from differences in the pre-analytical management of biological samples. The pre-analytical phase comprises a series of decisions and actions (i.e., choice of sample matrix, methods of collection, transportation, treatment and storage) preceding analysis. Errors arising in this phase will inevitably be carried over to the analytical phase, where they can reduce measurement accuracy, ultimately leading to discrepant results. Although the pre-analytical phase is all-important in routine laboratory medicine, it is often not given due consideration in research and clinical trials. This is particularly true for novel molecules, such as the hormones regulating the endocrine function of bone. In this review we discuss the importance of the pre-analytical variables affecting the measurement of last-generation bone-associated hormones and describe their often debated and rarely clear physiological roles.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Boparai, A.S.; Bowers, D.L.

This report summarizes the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 2000 (October 1999 through September 2000). This annual progress report, which is the seventeenth in this series for the ACL, describes effort on continuing projects, work on new projects, and contributions of the ACL staff to various programs at ANL. The ACL operates within the ANL system as a full-cost-recovery service center, but it has a mission that includes a complementary research and development component: The Analytical Chemistry Laboratory will provide high-quality, cost-effective chemical analysis and related technical support to solve research problems of our clients--Argonne National Laboratory, the Department of Energy, and others--and will conduct world-class research and development in analytical chemistry and its applications. The ACL handles a wide range of analytical problems that reflects the diversity of research and development (R&D) work at ANL. Some routine or standard analyses are done, but the ACL operates more typically in a problem-solving mode in which development of methods is required or adaptation of techniques is needed to obtain useful analytical data. The ACL works with clients and commercial laboratories if a large number of routine analyses are required. Much of the support work done by the ACL is very similar to applied analytical chemistry research work.

  17. Techniques of orbital decay and long-term ephemeris prediction for satellites in earth orbit

    NASA Technical Reports Server (NTRS)

    Barry, B. F.; Pimm, R. S.; Rowe, C. K.

    1971-01-01

    In the special perturbation method, Cowell and variation-of-parameters formulations of the motion equations are implemented and numerically integrated. Variations in the orbital elements due to drag are computed using the 1970 Jacchia atmospheric density model, which includes the effects of semiannual variations, diurnal bulge, solar activity, and geomagnetic activity. In the general perturbation method, two-variable asymptotic series and automated manipulation capabilities are used to obtain analytical solutions to the variation-of-parameters equations. Solutions are obtained considering the effect of oblateness only and the combined effects of oblateness and drag. These solutions are then numerically evaluated by means of a FORTRAN program in which an updating scheme is used to maintain accurate epoch values of the elements. The atmospheric density function is approximated by a Fourier series in true anomaly, and the 1970 Jacchia model is used to periodically update the Fourier coefficients. The accuracy of both methods is demonstrated by comparing computed orbital elements to actual elements over time spans of up to 8 days for the special perturbation method and up to 356 days for the general perturbation method.

  18. Assessing the impact of natural policy experiments on socioeconomic inequalities in health: how to apply commonly used quantitative analytical methods?

    PubMed

    Hu, Yannan; van Lenthe, Frank J; Hoffmann, Rasmus; van Hedel, Karen; Mackenbach, Johan P

    2017-04-20

    The scientific evidence-base for policies to tackle health inequalities is limited. Natural policy experiments (NPE) have drawn increasing attention as a means to evaluating the effects of policies on health. Several analytical methods can be used to evaluate the outcomes of NPEs in terms of average population health, but it is unclear whether they can also be used to assess the outcomes of NPEs in terms of health inequalities. The aim of this study therefore was to assess whether, and to demonstrate how, a number of commonly used analytical methods for the evaluation of NPEs can be applied to quantify the effect of policies on health inequalities. We identified seven quantitative analytical methods for the evaluation of NPEs: regression adjustment, propensity score matching, difference-in-differences analysis, fixed effects analysis, instrumental variable analysis, regression discontinuity and interrupted time-series. We assessed whether these methods can be used to quantify the effect of policies on the magnitude of health inequalities either by conducting a stratified analysis or by including an interaction term, and illustrated both approaches in a fictitious numerical example. All seven methods can be used to quantify the equity impact of policies on absolute and relative inequalities in health by conducting an analysis stratified by socioeconomic position, and all but one (propensity score matching) can be used to quantify equity impacts by inclusion of an interaction term between socioeconomic position and policy exposure. Methods commonly used in economics and econometrics for the evaluation of NPEs can also be applied to assess the equity impact of policies, and our illustrations provide guidance on how to do this appropriately. The low external validity of results from instrumental variable analysis and regression discontinuity makes these methods less desirable for assessing policy effects on population-level health inequalities. 
Increased use of the methods in social epidemiology will help to build an evidence base to support policy making in the area of health inequalities.
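As a minimal illustration of one of the seven methods: a difference-in-differences regression can quantify the equity impact by including an interaction between policy exposure and socioeconomic position. The data, effect sizes, and variable names below are entirely fictitious, in the spirit of the paper's own numerical example:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Fictitious natural policy experiment: exposed vs control population,
# pre/post periods, and a binary low socioeconomic-position marker.
policy = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
low_sep = rng.integers(0, 2, n)

# True model: the policy improves health by 1.0 on average, with an
# EXTRA 0.5 in the low-SEP group (the equity impact of interest).
health = (
    5.0 - 0.8 * low_sep + 0.3 * policy + 0.2 * post
    + 1.0 * policy * post
    + 0.5 * policy * post * low_sep
    + rng.standard_normal(n)
)

# Difference-in-differences with an interaction between policy exposure
# and socioeconomic position:
X = np.column_stack([
    np.ones(n), policy, post, low_sep,
    policy * post,                # average policy effect
    policy * low_sep, post * low_sep,
    policy * post * low_sep,      # differential (equity) effect
])
beta, *_ = np.linalg.lstsq(X, health, rcond=None)
print(beta[4], beta[7])   # near 1.0 and 0.5
```

The stratified alternative described in the abstract amounts to fitting the same difference-in-differences model separately within each SEP group and comparing the policy coefficients.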

  19. Temperature field determination in slabs, circular plates and spheres with saw tooth heat generating sources

    NASA Astrophysics Data System (ADS)

    Diestra Cruz, Heberth Alexander

The Green's function integral technique is used to determine the conduction heat transfer temperature field in flat plates, circular plates, and solid spheres with saw tooth heat generating sources. In all cases the boundary temperature is specified (Dirichlet condition) and the thermal conductivity is constant. The method of images is used to find the Green's function in infinite solids, semi-infinite solids, infinite quadrants, circular plates, and solid spheres. The saw tooth heat generation source is modeled using the Dirac delta function and the Heaviside step function. The use of Green's functions makes it possible to obtain the temperature distribution in the form of an integral, which avoids the convergence problems of infinite series. For the infinite solid and the sphere, the temperature distribution is three-dimensional; in the cases of the semi-infinite solid, infinite quadrant, and circular plate the distribution is two-dimensional. The method used in this work is superior to other methods because it obtains elegant analytical or quasi-analytical solutions to complex heat conduction problems with less computational effort and more accuracy than fully numerical methods.

  20. Estimation of confidence limits for descriptive indexes derived from autoregressive analysis of time series: Methods and application to heart rate variability.

    PubMed

    Beda, Alessandro; Simpson, David M; Faes, Luca

    2017-01-01

The growing interest in personalized medicine requires making inferences from descriptive indexes estimated from individual recordings of physiological signals, with statistical analyses focused on individual differences between/within subjects, rather than comparing supposedly homogeneous cohorts. To this end, methods to compute confidence limits of individual estimates of descriptive indexes are needed. This study introduces numerical methods to compute such confidence limits and perform statistical comparisons between indexes derived from autoregressive (AR) modeling of individual time series. Analytical approaches are generally not viable, because the indexes are usually nonlinear functions of the AR parameters. We exploit Monte Carlo (MC) and Bootstrap (BS) methods to reproduce the sampling distribution of the AR parameters and indexes computed from them. Here, these methods are implemented for spectral and information-theoretic indexes of heart-rate variability (HRV) estimated from AR models of heart-period time series. First, the MC and BS methods are tested in a wide range of synthetic HRV time series, showing good agreement with a gold-standard approach (i.e. multiple realizations of the "true" process driving the simulation). Then, real HRV time series measured from volunteers performing cognitive tasks are considered, documenting (i) the strong variability of confidence limits' width across recordings, (ii) the diversity of individual responses to the same task, and (iii) frequent disagreement between the cohort-average response and that of many individuals. We conclude that MC and BS methods are robust in estimating confidence limits of these AR-based indexes and thus recommended for short-term HRV analysis. Moreover, the strong inter-individual differences in the response to tasks shown by AR-based indexes evidence the need of individual-by-individual assessments of HRV features.
Given their generality, MC and BS methods are promising for applications in biomedical signal processing and beyond, providing a powerful new tool for assessing the confidence limits of indexes estimated from individual recordings.
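A minimal sketch of the Monte Carlo variant: fit an AR model to one recording, compute a nonlinear index of the fitted parameters, then re-simulate from the fitted model and take percentiles of the re-estimated index. The AR(1) model and the zero-frequency power index below are simplifications chosen for brevity, not the spectral or information-theoretic indexes used in the study:

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_ar1(x):
    """Least-squares AR(1) fit: returns the coefficient and noise variance."""
    a = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    return a, (x[1:] - a * x[:-1]).var()

def low_freq_power(a, s2):
    """AR(1) spectrum at zero frequency: a nonlinear function of the AR
    parameters, so analytical confidence limits are awkward."""
    return s2 / (1.0 - a) ** 2

def simulate_ar1(a, s2, n):
    x = np.zeros(n)
    e = rng.standard_normal(n) * np.sqrt(s2)
    for i in range(1, n):
        x[i] = a * x[i - 1] + e[i]
    return x

# One "individual recording" and its point estimate of the index:
x = simulate_ar1(0.8, 1.0, 500)
a_hat, s2_hat = fit_ar1(x)
index_hat = low_freq_power(a_hat, s2_hat)

# Monte Carlo confidence limits: re-simulate from the fitted model,
# re-estimate the index, and take percentiles of its distribution.
mc = np.array([
    low_freq_power(*fit_ar1(simulate_ar1(a_hat, s2_hat, 500)))
    for _ in range(500)
])
lo, hi = np.percentile(mc, [2.5, 97.5])
print(index_hat, (lo, hi))
```

The bootstrap variant would instead resample the fitted residuals rather than drawing fresh Gaussian noise.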

  2. Time series analysis for psychological research: examining and forecasting change

    PubMed Central

    Jebb, Andrew T.; Tay, Louis; Wang, Wei; Huang, Qiming

    2015-01-01

    Psychological research has increasingly recognized the importance of integrating temporal dynamics into its theories, and innovations in longitudinal designs and analyses have allowed such theories to be formalized and tested. However, psychological researchers may be relatively unequipped to analyze such data, given its many characteristics and the general complexities involved in longitudinal modeling. The current paper introduces time series analysis to psychological research, an analytic domain that has been essential for understanding and predicting the behavior of variables across many diverse fields. First, the characteristics of time series data are discussed. Second, different time series modeling techniques are surveyed that can address various topics of interest to psychological researchers, including describing the pattern of change in a variable, modeling seasonal effects, assessing the immediate and long-term impact of a salient event, and forecasting future values. To illustrate these methods, an illustrative example based on online job search behavior is used throughout the paper, and a software tutorial in R for these analyses is provided in the Supplementary Materials. PMID:26106341
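The modeling tasks listed above (describing trend, modeling seasonal effects, forecasting) can be sketched in a few lines. The series below is simulated as a stand-in for the paper's online job-search example, and the trend-plus-harmonic regression is just one simple option among the techniques the paper surveys:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated weekly series with trend + annual seasonality, standing in
# for the article's online job-search example.
t = np.arange(200)
y = 10.0 + 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 52) \
    + rng.standard_normal(200) * 0.5

# Describe the pattern of change: fit trend and seasonal harmonics by
# ordinary least squares.
def design(tt):
    return np.column_stack([
        np.ones_like(tt, dtype=float), tt,
        np.sin(2 * np.pi * tt / 52), np.cos(2 * np.pi * tt / 52),
    ])

beta, *_ = np.linalg.lstsq(design(t), y, rcond=None)

# Forecast future values by extrapolating the fitted structure.
t_new = np.arange(200, 212)            # 12 steps ahead
forecast = design(t_new) @ beta
print(beta.round(3), forecast.round(2))
```

Assessing the impact of a salient event, the paper's remaining use case, would add an indicator (or interrupted time-series) term to the same design matrix.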

  4. Peak clustering in two-dimensional gas chromatography with mass spectrometric detection based on theoretical calculation of two-dimensional peak shapes: the 2DAid approach.

    PubMed

    van Stee, Leo L P; Brinkman, Udo A Th

    2011-10-28

A method is presented to facilitate the non-target analysis of data obtained in temperature-programmed comprehensive two-dimensional (2D) gas chromatography coupled to time-of-flight mass spectrometry (GC×GC-ToF-MS). One main difficulty of GC×GC data analysis is that each peak is usually modulated several times and therefore appears as a series of peaks (or peaklets) in the one-dimensionally recorded data. The proposed method, 2DAid, uses basic chromatographic laws to calculate the theoretical shape of a 2D peak (a cluster of peaklets originating from the same analyte) in order to define the area in which the peaklets of each individual compound can be expected to show up. Based on analyte-identity information obtained by means of mass spectral library searching, the individual peaklets are then combined into a single 2D peak. The method is applied, amongst others, to a complex mixture containing 362 analytes. It is demonstrated that the 2D peak shapes can be accurately predicted and that clustering and further processing can reduce the final peak list to a manageable size.

  5. Statistical process control in nursing research.

    PubMed

    Polit, Denise F; Chaboyer, Wendy

    2012-02-01

In intervention studies in which randomization to groups is not possible, researchers typically use quasi-experimental designs. Time series designs are strong quasi-experimental designs but are seldom used, perhaps because of technical and analytic hurdles. Statistical process control (SPC) is an alternative analytic approach to testing hypotheses about intervention effects using data collected over time. SPC, like traditional statistical methods, is a tool for understanding variation and involves the construction of control charts that distinguish between normal, random fluctuations (common cause variation), and statistically significant special cause variation that can result from an innovation. The purpose of this article is to provide an overview of SPC and to illustrate its use in a study of a nursing practice improvement intervention.
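A minimal sketch of the control-chart idea: limits computed from baseline (common cause) variation flag post-intervention points as special cause variation. The data and effect size are invented; the moving-range estimate and 3-sigma limits follow standard individuals-chart practice:

```python
import numpy as np

rng = np.random.default_rng(5)

# Fictitious weekly audit scores: a stable baseline process, then a
# shift after a practice-improvement intervention.
baseline = rng.normal(70.0, 3.0, 24)   # 24 pre-intervention points
shifted = rng.normal(82.0, 3.0, 12)    # 12 post-intervention points

# Shewhart individuals chart: center line and 3-sigma limits estimated
# from the baseline only (moving-range estimate of sigma, d2 = 1.128).
center = baseline.mean()
sigma = np.abs(np.diff(baseline)).mean() / 1.128
ucl, lcl = center + 3 * sigma, center - 3 * sigma

# Special cause variation: post-intervention points above the UCL.
signals = shifted > ucl
print(round(center, 1), round(ucl, 1), int(signals.sum()))
```

In practice additional run rules (e.g. eight consecutive points on one side of the center line) are used alongside the 3-sigma test to detect smaller sustained shifts.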

  6. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System, and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
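The idea of approximating a power spectral density by a series of first-order Markov processes can be sketched directly: each first-order Markov (Gauss–Markov) process contributes a Lorentzian spectrum, and a log-spaced sum of them tracks a flicker-like 1/f shape over a finite band. The correlation times and band below are arbitrary illustrative choices, not the paper's oscillator model:

```python
import numpy as np

# Each first-order Markov (Gauss-Markov) process has a Lorentzian power
# spectral density S_i(f) = tau_i / (1 + (2 pi f tau_i)**2). A sum over
# log-spaced correlation times approximates a flicker-like 1/f spectrum
# over a finite band (arbitrary units; values invented for illustration).
taus = np.logspace(-2, 2, 9)           # correlation times
f = np.logspace(-1, 0, 40)             # analysis band

S = sum(tau / (1 + (2 * np.pi * f * tau) ** 2) for tau in taus)

# Inside the band, S(f) tracks 1/f, so f * S(f) is nearly flat:
flatness = (f * S).max() / (f * S).min()
print(flatness)   # close to 1
```

The approximation degrades near the band edges, where correlation times shorter or longer than those in the sum would be needed.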

  7. A generalization of random matrix theory and its application to statistical physics.

    PubMed

    Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H

    2017-02-01

To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, known as autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first analytically and numerically determine how auto-correlations affect the eigenvalue distribution of the correlation matrix. Then we introduce ARRMT with a detailed procedure of how to implement the method. Finally, we illustrate the method using two examples taken from inflation rates and air pressure data for 95 US cities.
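The effect of auto-correlations on the eigenvalue distribution can be demonstrated with a small simulation: independent white series respect the i.i.d. Marchenko–Pastur edge, while independent AR(1) series push eigenvalues past it, which is the bias ARRMT corrects for. This sketch is illustrative only and does not implement ARRMT itself:

```python
import numpy as np

rng = np.random.default_rng(6)

T, N = 2000, 100              # series length and number of series
q = N / T

# Marchenko-Pastur upper edge for i.i.d. (white, uncorrelated) series:
lam_max_iid = (1 + np.sqrt(q)) ** 2

def top_eigenvalue(ar):
    """Largest correlation-matrix eigenvalue for N independent AR(1) series."""
    e = rng.standard_normal((N, T))
    x = np.copy(e)
    for t in range(1, T):
        x[:, t] = ar * x[:, t - 1] + e[:, t]
    x = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
    return np.linalg.eigvalsh(x @ x.T / T)[-1]

lam_white = top_eigenvalue(0.0)   # stays near the i.i.d. edge
lam_ar = top_eigenvalue(0.9)      # leaks well past it
print(lam_white, lam_max_iid, lam_ar)
```

Without such an adjusted null distribution, the eigenvalues inflated by auto-correlation would be misread as genuine cross-correlation structure.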

  8. High-performance liquid chromatographic method for guanylhydrazone compounds.

    PubMed

    Cerami, C; Zhang, X; Ulrich, P; Bianchi, M; Tracey, K J; Berger, B J

    1996-01-12

    A high-performance liquid chromatographic method has been developed for a series of aromatic guanylhydrazones that have demonstrated therapeutic potential as anti-inflammatory agents. The compounds were separated using octadecyl or diisopropyloctyl reversed-phase columns, with an acetonitrile gradient in water containing heptane sulfonate, tetramethylammonium chloride, and phosphoric acid. The method was used to reliably quantify levels of analyte as low as 785 ng/ml, and the detector response was linear to at least 50 micrograms/ml using a 100 microliters injection volume. The assay system was used to determine the basic pharmacokinetics of a lead compound, CNI-1493, from serum concentrations following a single intravenous injection in rats.

  9. PENDISC: a simple method for constructing a mathematical model from time-series data of metabolite concentrations.

    PubMed

    Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide

    2014-06-01

    The availability of large-scale datasets has led to more effort being made to understand the characteristics of metabolic reaction networks. However, because large-scale data are semi-quantitative and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and the time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. The method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have the potential to handle metabolome data from a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the plant Arabidopsis thaliana. The result confirms that the constructed mathematical model agrees satisfactorily with the time-series datasets of seven metabolite concentrations.
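
    The abstract does not give the test models' parameters, but the S-system form that PENDISC fits can be sketched generically. The two-pool linear pathway and rate constants below are hypothetical, chosen only so the steady state is easy to verify by hand:

```python
def simulate_s_system(alpha, beta, g, h, x0, t_end, dt=1e-3):
    """Euler integration of a hypothetical two-step linear S-system pathway:
         dX1/dt = alpha1 * S^g1 - beta1 * X1^h1
         dX2/dt = beta1 * X1^h1 - beta2 * X2^h2
       (the efflux of X1 feeds X2; S is a constant substrate)."""
    S = 1.0
    x1, x2 = x0
    for _ in range(int(t_end / dt)):
        v_in = alpha[0] * S ** g[0]
        v1 = beta[0] * x1 ** h[0]
        v2 = beta[1] * x2 ** h[1]
        x1 += dt * (v_in - v1)
        x2 += dt * (v1 - v2)
    return x1, x2

# Illustrative parameters: at steady state each flux equals alpha1 * S^g1 = 2,
# so X1 -> 2 and X2 -> 1.
x1, x2 = simulate_s_system(alpha=[2.0], beta=[1.0, 2.0], g=[1.0],
                           h=[1.0, 1.0], x0=(0.0, 0.0), t_end=20.0)
print(round(x1, 3), round(x2, 3))
```

    Parameter estimation then amounts to adjusting α, β, g and h so that simulated trajectories like these match measured metabolite time series.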

  10. Altered amygdalar resting-state connectivity in depression is explained by both genes and environment.

    PubMed

    Córdova-Palomera, Aldo; Tornador, Cristian; Falcón, Carles; Bargalló, Nuria; Nenadic, Igor; Deco, Gustavo; Fañanás, Lourdes

    2015-10-01

    Recent findings indicate that alterations of amygdalar resting-state fMRI connectivity play an important role in the etiology of depression. While both depression and resting-state brain activity are shaped by genes and environment, the relative contribution of genetic and environmental factors mediating the relationship between amygdalar resting-state connectivity and depression remains largely unexplored. Likewise, novel neuroimaging research indicates that different mathematical representations of resting-state fMRI activity patterns are able to embed distinct information relevant to brain health and disease. The present study analyzed the influence of genes and environment on amygdalar resting-state fMRI connectivity in relation to depression risk. High-resolution resting-state fMRI scans were analyzed to estimate functional connectivity patterns in a sample of 48 twins (24 monozygotic pairs) informative for depressive psychopathology (6 concordant, 8 discordant and 10 healthy control pairs). A graph-theoretical framework was employed to construct brain networks using two methods: (i) the conventional approach of filtered BOLD fMRI time-series and (ii) analytic components of this fMRI activity. Results using both methods indicate that depression risk is increased by environmental factors altering amygdalar connectivity. When analyzing the analytic components of the BOLD fMRI time-series, genetic factors altering the amygdala's neural activity at rest show an important contribution to depression risk. Overall, these findings show that both genes and environment modify different patterns of amygdalar resting-state connectivity to increase depression risk. The genetic relationship between amygdalar connectivity and depression may be better elicited by examining analytic components of the brain's resting-state BOLD fMRI signals. © 2015 Wiley Periodicals, Inc.

  11. New, small, fast acting blood glucose meters--an analytical laboratory evaluation.

    PubMed

    Weitgasser, Raimund; Hofmann, Manuela; Gappmayer, Brigitta; Garstenauer, Christa

    2007-09-22

    Patients and medical personnel are eager to use blood glucose meters that are easy to handle and fast acting. We questioned whether the accuracy and precision of these new, small and lightweight devices would meet analytical laboratory standards, and tested four meters under the above-mentioned conditions. Approximately 300 capillary blood samples were collected and tested using two devices of each brand and two different types of glucose test strips. Blood from the same samples was used for the comparative method. Results were evaluated using the maximum deviation of 5% and 10% from the comparative method, error grid analysis, the overall deviation of the devices, linear regression analysis, and the CVs for measurement in series. Of all 1196 measurements, a deviation of less than 5% and 10%, respectively, from the reference method was found for the FreeStyle (FS) meter in 69.5% and 96% of cases, the Glucocard X Meter (GX) in 44% and 75%, the One Touch Ultra (OT) in 29% and 60%, and the Wellion True Track (WT) in 28.5% and 58%. The error grid analysis gave 99.7% for FS, 99% for GX, 98% for OT and 97% for WT in zone A. The remainder of the values lay within zone B. Linear regression analysis mirrored these results. CVs for measurement in series showed higher deviations for OT and WT compared with FS and GX. The four new, small and fast acting glucose meters fulfil clinically relevant analytical laboratory requirements, making them appropriate for use by medical personnel. However, with regard to the tight and restrictive limits of the ADA recommendations, the devices are still in need of improvement. This should be taken into account when the devices are used by primarily inexperienced persons, and is relevant for further industrial development of such devices.

  12. Rapid, simultaneous and interference-free determination of three rhodamine dyes illegally added into chilli samples using excitation-emission matrix fluorescence coupled with second-order calibration method.

    PubMed

    Chang, Yue-Yue; Wu, Hai-Long; Fang, Huan; Wang, Tong; Liu, Zhi; Ouyang, Yang-Zi; Ding, Yu-Jie; Yu, Ru-Qin

    2018-06-15

    In this study, a smart and green analytical method based on a second-order calibration algorithm coupled with excitation-emission matrix (EEM) fluorescence was developed for the determination of rhodamine dyes illegally added to chilli samples. The proposed method not only has the advantage of high sensitivity over the traditional fluorescence method but also fully displays the "second-order advantage". Pure signals of the analytes were successfully extracted from severely interferential EEM profiles using the alternating trilinear decomposition (ATLD) algorithm, even in the presence of common fluorescence problems such as scattering, peak overlap and unknown interferences. It is worth noting that the unknown interferents can represent different kinds of backgrounds, not only a constant background. In addition, the use of an interpolation method avoids the loss of information about the analytes of interest. The use of a "mathematical separation" strategy instead of a complicated "chemical or physical separation" can be more effective and environmentally friendly. A series of statistical parameters, including figures of merit and the RSDs of intra-day (≤1.9%) and inter-day (≤6.6%) measurements, were calculated to validate the accuracy of the proposed method. Furthermore, the authoritative HPLC-FLD method was adopted to verify the qualitative and quantitative results of the proposed method. The comparison of the two methods also showed that the ATLD-EEM method has the advantages of accuracy, rapidity, simplicity and greenness, and is expected to develop into an attractive alternative for the simultaneous and interference-free determination of rhodamine dyes illegally added to complex matrices. Copyright © 2018. Published by Elsevier B.V.

  13. Comparison between four dissimilar solar panel configurations

    NASA Astrophysics Data System (ADS)

    Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.

    2017-12-01

    Several studies of photovoltaic systems have focused on how they operate and the energy required to operate them. Little attention has been paid to their configurations, to modelling the mean time to system failure, availability and cost benefit, or to comparisons of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each, configuration II consists of four sub-components arranged logically in parallel with 12 V each, configuration III consists of four sub-components arranged in series-parallel with 8 V each, and configuration IV has six sub-components with 6 V each arranged in series-parallel. A comparative analysis was made using the Chapman-Kolmogorov method. Explicit expressions for the mean time to system failure and the steady-state availability were derived, and a cost-benefit analysis was performed on this basis. A ranking method was used to determine the optimal configuration of the systems. The analytical and numerical solutions for system availability and mean time to system failure were determined, and it was found that configuration I is the optimal configuration.
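
    For intuition about why the ranking depends on the configuration: the mean time to failure (MTTF) of n identical exponential components in parallel is (1/λ)·Σ(1/k), while series-parallel variants can be obtained by integrating the system reliability numerically. The common failure rate and the assumed branch layouts below are illustrative only; the paper's ranking rests on its own rates, voltages and costs:

```python
import math

def mttf_parallel(n, lam):
    """MTTF of n identical exponential(lam) components in parallel:
       (1/lam) * (1 + 1/2 + ... + 1/n)."""
    return sum(1.0 / k for k in range(1, n + 1)) / lam

def mttf_series_parallel(branches, per_branch, lam, dt=1e-3, t_max=200.0):
    """MTTF of `branches` parallel branches, each of `per_branch` exponential
       components in series, via numerical integration of system reliability."""
    mttf, t = 0.0, 0.0
    while t < t_max:
        r_branch = math.exp(-per_branch * lam * t)    # a series branch survives
        r_sys = 1.0 - (1.0 - r_branch) ** branches    # at least one branch survives
        mttf += r_sys * dt
        t += dt
    return mttf

lam = 0.1  # illustrative common failure rate per sub-component
print("I  :", mttf_parallel(2, lam))            # two sub-components in parallel
print("II :", mttf_parallel(4, lam))            # four in parallel
print("III:", mttf_series_parallel(2, 2, lam))  # assuming 2 branches of 2 in series
print("IV :", mttf_series_parallel(3, 2, lam))  # assuming 3 branches of 2 in series
```

    With equal rates the purely parallel layouts dominate, which shows why configuration-specific failure rates are needed to reproduce the paper's conclusion.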

  14. Modelling spatiotemporal change using multidimensional arrays

    NASA Astrophysics Data System (ADS)

    Lu, Meng; Appel, Marius; Pebesma, Edzer

    2017-04-01

    The large variety of remote sensors, model simulations, and in-situ records provides great opportunities to model environmental change. The massive amount of high-dimensional data calls for methods that integrate data from various sources and analyse spatiotemporal and thematic information jointly. An array is a collection of elements ordered and indexed in arbitrary dimensions; it naturally represents spatiotemporal phenomena that are identified by their geographic locations and recording times. In addition, array regridding (e.g., resampling, down-/up-scaling), dimension reduction, and spatiotemporal statistical algorithms are readily applicable to arrays. However, the role of arrays in big geoscientific data analysis has not been systematically studied: How can arrays discretise continuous spatiotemporal phenomena? How can arrays facilitate the extraction of multidimensional information? How can arrays provide a clean, scalable and reproducible change-modelling process that is communicable between mathematicians, computer scientists, Earth system scientists and stakeholders? This study emphasises detecting spatiotemporal change using satellite image time series. Current change detection methods using satellite image time series commonly analyse data in separate steps: 1) forming a vegetation index, 2) conducting time series analysis on each pixel, and 3) post-processing and mapping the time series analysis results; this does not consider spatiotemporal correlations and ignores much of the spectral information. Multidimensional information can be better extracted by jointly considering spatial, spectral, and temporal information. To approach this goal, we use principal component analysis to extract multispectral information and spatial autoregressive models to account for spatial correlation in residual-based time series structural change modelling. We also discuss the potential of multivariate non-parametric time series structural change methods, hierarchical modelling, and extreme event detection methods to model spatiotemporal change. We show how array operations can facilitate expressing these methods, and how the open-source array data management and analytics software SciDB and R can be used to scale the process and make it easily reproducible.

  15. A Semi-Analytical Model for Dispersion Modelling Studies in the Atmospheric Boundary Layer

    NASA Astrophysics Data System (ADS)

    Gupta, A.; Sharan, M.

    2017-12-01

    The severe impact of harmful air pollutants has always been a cause of concern for a wide variety of air quality analyses. Analytical models based on the solution of the advection-diffusion equation were the first and remain a convenient way to model air pollutant dispersion, as it is easy to handle the dispersion parameters and the related physics in them. A mathematical model describing the crosswind-integrated concentration is presented. The analytical solution of the resulting advection-diffusion equation is limited to constant or simple profiles of eddy diffusivity and wind speed. In practice, the wind speed depends on the vertical height above the ground, and the eddy diffusivity profiles depend on the downwind distance from the source as well as the vertical height. In the present model, a method of eigenfunction expansion is used to solve the resulting partial differential equation with the appropriate boundary conditions. This leads to a system of first-order ordinary differential equations with a coefficient matrix depending on the downwind distance. The solution of this system can, in general, be expressed in terms of the Peano-Baker series, which is not easy to compute, particularly when the coefficient matrix becomes non-commutative (Martin et al., 1967). An approach based on Taylor series expansion is introduced to find the numerical solution of the first-order system. The method is applied to various profiles of wind speed and eddy diffusivity. The solution computed from the proposed methodology is found to be efficient and accurate in comparison to those available in the literature. The performance of the model is evaluated with the diffusion datasets from Copenhagen (Gryning et al., 1987) and Hanford (Doran et al., 1985). In addition, the proposed method is used to deduce three-dimensional concentrations by assuming a Gaussian distribution in the crosswind direction, which is also evaluated with diffusion data corresponding to a continuous point source.
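
    The stepping idea behind a Taylor-series solution of a first-order system y' = A(x)·y can be sketched for a constant coefficient matrix, where y(x+h) = Σ (hᵏ/k!)·Aᵏ·y. The paper handles an x-dependent matrix; this reduced sketch only checks the truncated-series stepping against a known rotation solution:

```python
import math

def mat_vec(A, v):
    """Matrix-vector product for small dense matrices."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def taylor_step(A, y, h, terms=8):
    """One step of y' = A y via the truncated Taylor series
       y(x + h) = sum_k (h^k / k!) A^k y, with A frozen over the step."""
    out = list(y)
    term = list(y)
    fact = 1.0
    for k in range(1, terms):
        term = mat_vec(A, term)
        fact *= k
        out = [o + (h ** k) / fact * t for o, t in zip(out, term)]
    return out

# Test case: y' = [[0, 1], [-1, 0]] y with y(0) = (1, 0)
# has the exact solution (cos x, -sin x).
A = [[0.0, 1.0], [-1.0, 0.0]]
y, x, h = [1.0, 0.0], 0.0, 0.01
while x < 1.0 - 1e-12:
    y = taylor_step(A, y, h)
    x += h
print(y, [math.cos(1.0), -math.sin(1.0)])
```

    With an x-dependent A(x), each step would re-evaluate (or series-expand) the coefficient matrix, which is where the Peano-Baker complications the abstract mentions arise.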

  16. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    NASA Astrophysics Data System (ADS)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of the mean, variance, and ACFs of the continuous and discrete components, respectively. To achieve full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study the rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of the main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to real-world rainfall time series is shown as a proof of concept.
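
    The simplest adjustment that forces generated fine-scale depths to match a coarse-scale total is proportional rescaling. The paper's procedure is more careful (it preserves the stochastic structure of the model), so the sketch below conveys only the basic consistency requirement:

```python
def adjust(fine, coarse_total):
    """Proportionally rescale a generated fine-scale sequence so its sum
       matches the coarse-scale total; zeros (dry intervals) stay dry."""
    s = sum(fine)
    if s == 0:
        return list(fine)   # an all-dry period cannot carry positive rain
    return [x * coarse_total / s for x in fine]

fine = [0.0, 1.2, 0.0, 2.8, 1.0]      # simulated hourly depths (illustrative)
adjusted = adjust(fine, coarse_total=6.0)
print(adjusted, sum(adjusted))        # sum now equals the coarse total, 6.0
```

    Note how intermittency survives the adjustment: dry intervals remain exactly zero, which is the mixed discrete/continuous character the abstract emphasizes.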

  17. Recursive-operator method in vibration problems for rod systems

    NASA Astrophysics Data System (ADS)

    Rozhkova, E. V.

    2009-12-01

    Using linear differential equations with constant coefficients describing one-dimensional dynamical processes as an example, we show that the solutions of these equations and systems are related to the solution of the corresponding numerical recursion relations and one does not have to compute the roots of the corresponding characteristic equations. The arbitrary functions occurring in the general solution of the homogeneous equations are determined by the initial and boundary conditions or are chosen from various classes of analytic functions. The solutions of the inhomogeneous equations are constructed in the form of integro-differential series acting on the right-hand side of the equation, and the coefficients of the series are determined from the same recursion relations. The convergence of formal solutions as series of a more general recursive-operator construction was proved in [1]. In the special case where the solutions of the equation can be represented in separated variables, the power series can be effectively summed, i.e., expressed in terms of elementary functions, and coincide with the known solutions. In this case, to determine the natural vibration frequencies, one obtains algebraic rather than transcendental equations, which permits exactly determining the imaginary and complex roots of these equations without using the graphic method [2, pp. 448-449]. The correctness of the obtained formulas (differentiation formulas, explicit expressions for the series coefficients, etc.) can be verified directly by appropriate substitutions; therefore, we do not prove them here.

  18. Complex Landscape Terms in Seri

    ERIC Educational Resources Information Center

    O'Meara, Carolyn; Bohnemeyer, Jurgen

    2008-01-01

    The nominal lexicon of Seri is characterized by a prevalence of analytical descriptive terms. We explore the consequences of this typological trait in the landscape domain. The complex landscape terms of Seri classify geographic entities in terms of their material make-up and spatial properties such as shape, orientation, and mereological…

  19. Modeling Time Series Data for Supervised Learning

    ERIC Educational Resources Information Center

    Baydogan, Mustafa Gokce

    2012-01-01

    Temporal data are increasingly prevalent and important in analytics. Time series (TS) data are chronological sequences of observations and an important class of temporal data. Fields such as medicine, finance, learning science and multimedia naturally generate TS data. Each series provides a high-dimensional data vector that challenges the learning…

  20. Quantifying non-linear dynamics of mass-springs in series oscillators via asymptotic approach

    NASA Astrophysics Data System (ADS)

    Starosta, Roman; Sypniewska-Kamińska, Grażyna; Awrejcewicz, Jan

    2017-05-01

    The dynamical regular response of an oscillator with two serially connected springs with nonlinear characteristics of cubic type, governed by a set of differential-algebraic equations (DAEs), is studied. The classical approach of the multiple scales method (MSM) in the time domain has been employed and appropriately modified to solve the governing DAEs of two systems, i.e. with one and two degrees of freedom. The approximate analytical solutions have been verified by numerical simulations.

  1. Rapid Quantification of Melamine in Different Brands/Types of Milk Powders Using Standard Addition Net Analyte Signal and Near-Infrared Spectroscopy

    PubMed Central

    2016-01-01

    Multivariate calibration (MVC) and near-infrared (NIR) spectroscopy have demonstrated potential for rapid analysis of melamine in various dairy products. However, the practical application of ordinary MVC can be largely restricted because the prediction of a new sample from an uncalibrated batch would be subject to a significant bias due to the matrix effect. In this study, the feasibility of using NIR spectroscopy and the standard addition (SA) net analyte signal (NAS) method (SANAS) for rapid quantification of melamine in different brands/types of milk powders was investigated. In SANAS, the NAS vector of melamine in an unknown sample, as well as in a series of samples spiked with melamine standards, is calculated, and the Euclidean norms of the standard series are then used to build a straightforward univariate regression model. The analysis results for 10 different brands/types of milk powders with melamine levels of 0~0.12% (w/w) indicate that SANAS obtained accurate results, with root mean squared error of prediction (RMSEP) values ranging from 0.0012 to 0.0029. An additional advantage of NAS is the ability to visualize and control possible unwanted variations during standard addition. The proposed method provides a practically useful tool for rapid and nondestructive quantification of melamine in different brands/types of milk powders. PMID:27525154
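
    The final univariate step of SANAS amounts to a classic standard-addition regression on scalars. In the sketch below the NAS norms are replaced by made-up noiseless signals with a hypothetical sensitivity, so the known true concentration is recovered exactly:

```python
def standard_addition(added, signals):
    """Least-squares line through (added amount, signal); the unknown's
       concentration is the magnitude of the x-intercept: c0 = intercept/slope."""
    n = len(added)
    mx = sum(added) / n
    my = sum(signals) / n
    sxx = sum((x - mx) ** 2 for x in added)
    sxy = sum((x - mx) * (y - my) for x, y in zip(added, signals))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept / slope

# Illustrative data: true concentration 0.04% (w/w), signal = 50 * (c0 + added).
added = [0.00, 0.02, 0.04, 0.06, 0.08]
signals = [50 * (0.04 + a) for a in added]
print(round(standard_addition(added, signals), 4))  # → 0.04
```

    Because the calibration line is built inside each sample's own matrix, the matrix effect cancels, which is the motivation for standard addition in the first place.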

  2. Using learning analytics to evaluate a video-based lecture series.

    PubMed

    Lau, K H Vincent; Farooque, Pue; Leydon, Gary; Schwartz, Michael L; Sadler, R Mark; Moeller, Jeremy J

    2018-01-01

    The video-based lecture (VBL), an important component of the flipped classroom (FC) and massive open online course (MOOC) approaches to medical education, has primarily been evaluated through direct learner feedback. Evaluation may be enhanced through learning analytics (LA) - the analysis of quantitative audience usage data generated by video-sharing platforms. We applied LA to an experimental series of ten VBLs on electroencephalography (EEG) interpretation, uploaded to YouTube in the model of a publicly accessible MOOC. Trends in view count, total percentage of the video viewed, and audience retention (AR; the percentage of viewers watching at a given time point relative to the initial total) were examined. The pattern of average AR decline was characterized using regression analysis, revealing a uniform linear decline in viewership for each video, with no evidence of an optimal VBL length. Segments with transient increases in AR corresponded to those focused on core concepts, indicating content requiring more detailed evaluation. We propose a model for applying LA at four levels: global, series, video, and feedback. LA may be a useful tool in evaluating a VBL series. Our proposed model combines analytics data and learner self-report for comprehensive evaluation.
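
    The linear-decline characterization of audience retention can be reproduced with an ordinary least-squares fit. The retention curve below is invented for illustration, not taken from the study's videos:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Illustrative audience-retention curve: percentage of the initial audience
# still watching at each minute of a 10-minute video.
minutes = list(range(11))
retention = [100, 93, 87, 80, 74, 68, 61, 55, 48, 42, 36]
slope, intercept = linear_fit(minutes, retention)
print(round(slope, 2), round(intercept, 2))  # a steady decline of ~6.4 %/min
```

    A uniform negative slope across videos, with no break point, is what the abstract reports as evidence against an optimal VBL length.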

  3. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression for the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve-fitting approach based on Taylor series approximation. Both model-based methods show significant advantages compared to the curve-fitting method: they need fewer reference points for calibration and, moreover, supply a function which is valid beyond the range of calibration. In addition, the depth-map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.

  4. Hydrogen-oxygen catalytic ignition and thruster investigation. Volume 2: High pressure thruster evaluations

    NASA Technical Reports Server (NTRS)

    Johnson, R. J.; Heckert, B.; Burge, H. L.

    1972-01-01

    A high pressure thruster effort was conducted with the major objective of demonstrating a duct cooling concept with gaseous propellant in a thruster operating at nominally 300 psia and 1500 lbf. The analytical design methods for the duct cooling were proven in a series of tests with both ambient- and reduced-temperature propellants. Long duration tests as well as pulse mode tests demonstrated the feasibility of the concept. All tests were conducted with a scaling of the raised-post triplet injector design previously demonstrated at 900 lbf in demonstration firings. A series of environmentally conditioned firings was also conducted to determine the effects of thermal soaks, atmospheric air and high humidity. This volume presents the results of the high pressure thruster evaluations.

  5. Lie Symmetry Analysis, Analytical Solutions, and Conservation Laws of the Generalised Whitham-Broer-Kaup-Like Equations

    NASA Astrophysics Data System (ADS)

    Wang, Xiu-Bin; Tian, Shou-Fu; Qin, Chun-Yan; Zhang, Tian-Tian

    2017-03-01

    In this article, the generalised Whitham-Broer-Kaup-like (WBKL) equations, which can describe the bidirectional propagation of long waves in shallow water, are investigated. The equations can be reduced to the dispersive long wave equations, the variant Boussinesq equations, the Whitham-Broer-Kaup-like equations, etc. The Lie symmetry analysis method is used to obtain the vector fields and the optimal system of the equations. The similarity reductions are given on the basis of the optimal system. Furthermore, power series solutions are derived by using power series theory. Finally, based on a new theorem of conservation laws, the conservation laws associated with the symmetries of these equations are constructed with a detailed derivation.

  6. Coupled channel effects on resonance states of positronic alkali atom

    NASA Astrophysics Data System (ADS)

    Yamashita, Takuma; Kino, Yasushi

    2018-01-01

    S-wave Feshbach resonance states belonging to the dipole series in positronic alkali atoms (e+Li, e+Na, e+K, e+Rb and e+Cs) are studied by coupled-channel calculations within a three-body model. Resonance energies and widths below the dissociation threshold of alkali ion and positronium are calculated with a complex scaling method. Extended model potentials that produce positronic pseudo-alkali-atoms are introduced to investigate the relationship between the resonance states and the dissociation thresholds on the basis of three-body dynamics. Resonances of the dipole series below the dissociation threshold of alkali atom and positron appear to be associated with atomic energy levels, which results in longer resonance lifetimes than predicted by the analytical law derived from the ion-dipole interaction.

  7. Understanding wax screen-printing: a novel patterning process for microfluidic cloth-based analytical devices.

    PubMed

    Liu, Min; Zhang, Chunsun; Liu, Feifei

    2015-09-03

    In this work, we first introduce the fabrication of microfluidic cloth-based analytical devices (μCADs) using a wax screen-printing approach that is suitable for simple, inexpensive, rapid, low-energy-consumption and high-throughput preparation of cloth-based analytical devices. We have carried out a detailed study of the wax screen-printing of μCADs and obtained several interesting results. Firstly, an analytical model is established for the spreading of molten wax in cloth. Secondly, a new wax screen-printing process is proposed for fabricating μCADs, in which the melting of wax into the cloth is much faster (∼5 s) and the heating temperature is much lower (75 °C). Thirdly, the experimental results show that the patterning performance of the proposed wax screen-printing method depends to a certain extent on the type of screen, the wax melting temperature and the melting time. Under optimized conditions, the minimum printed widths of the hydrophobic wax barrier and the hydrophilic channel are 100 μm and 1.9 mm, respectively. Importantly, the developed analytical model is also well validated by these experiments. Fourthly, μCADs fabricated by the presented wax screen-printing method are used to perform a proof-of-concept assay of glucose or protein in artificial urine, with rapid, high-throughput detection performed on a 48-chamber cloth-based device using a visual readout. Overall, the developed cloth-based wax screen-printing and arrayed μCADs should provide a new research direction in the development of advanced sensor arrays for the detection of a series of analytes relevant to many diverse applications. Copyright © 2015 Elsevier B.V. All rights reserved.
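
    The paper's analytical spreading model is not reproduced in the abstract. The classic Lucas-Washburn wicking relation, L(t) = sqrt(γ·r·cosθ·t / 2μ), is a plausible stand-in for how molten-wax penetration depth grows with time in a porous fabric; every parameter value below is an assumption for illustration only:

```python
import math

def washburn_depth(t, gamma=0.03, r=5e-6, theta_deg=30.0, mu=0.01):
    """Lucas-Washburn penetration depth L(t) = sqrt(gamma*r*cos(theta)*t/(2*mu)).
       Illustrative parameters: surface tension gamma [N/m], effective pore
       radius r [m], contact angle theta [deg], melt viscosity mu [Pa*s]."""
    return math.sqrt(gamma * r * math.cos(math.radians(theta_deg)) * t / (2.0 * mu))

for t in (1.0, 2.0, 5.0):   # seconds; ~5 s matches the reported melting time
    print(t, washburn_depth(t))
```

    The square-root time dependence means spreading slows quickly, which is consistent with short melting times giving controllable barrier widths.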

  8. A spectral dynamic stiffness method for free vibration analysis of plane elastodynamic problems

    NASA Astrophysics Data System (ADS)

    Liu, X.; Banerjee, J. R.

    2017-03-01

    A highly efficient and accurate analytical spectral dynamic stiffness (SDS) method for modal analysis of plane elastodynamic problems based on both plane stress and plane strain assumptions is presented in this paper. First, the general solution satisfying the governing differential equation exactly is derived by applying two types of one-dimensional modified Fourier series. Then the SDS matrix for an element is formulated symbolically using the general solution. The SDS matrices are assembled directly in a similar way to that of the finite element method, demonstrating the method's capability to model complex structures. Any arbitrary boundary conditions are represented accurately in the form of the modified Fourier series. The Wittrick-Williams algorithm is then used as the solution technique where the mode count problem (J0) of a fully-clamped element is resolved. The proposed method gives highly accurate solutions with remarkable computational efficiency, covering low, medium and high frequency ranges. The method is applied to both plane stress and plane strain problems with simple as well as complex geometries. All results from the theory in this paper are accurate up to the last figures quoted to serve as benchmarks.

  9. A semi-analytical analysis of electro-thermo-hydrodynamic stability in dielectric nanofluids using Buongiorno's mathematical model together with more realistic boundary conditions

    NASA Astrophysics Data System (ADS)

    Wakif, Abderrahim; Boulahia, Zoubair; Sehaqui, Rachid

    2018-06-01

    The main aim of the present analysis is to examine the electroconvection phenomenon that takes place in a dielectric nanofluid under the influence of a perpendicularly applied alternating electric field. In this investigation, we assume that the nanofluid has a Newtonian rheological behavior and obeys Buongiorno's mathematical model, in which the effects of thermophoretic and Brownian diffusion are incorporated explicitly in the governing equations. Moreover, the nanofluid layer is taken to be confined horizontally between two parallel plate electrodes, heated from below and cooled from above. In a fast-pulse electric field, the onset of electroconvection is due principally to the buoyancy forces and the dielectrophoretic forces. Within the framework of the Oberbeck-Boussinesq approximation and linear stability theory, the governing stability equations are solved semi-analytically by means of the power series method for isothermal, no-slip and non-penetrability conditions. In addition, the impermeability condition implies that there is no nanoparticle mass flux at the electrodes. The obtained analytical solutions are validated by comparing them with those available in the literature for the limiting case of dielectric fluids. In order to check the accuracy of our semi-analytical results for dielectric nanofluids, we perform further numerical and semi-analytical computations by means of the Runge-Kutta-Fehlberg method, the Chebyshev-Gauss-Lobatto spectral method, the Galerkin weighted residuals technique, the polynomial collocation method and the Wakif-Galerkin weighted residuals technique. In this analysis, the electro-thermo-hydrodynamic stability of the studied nanofluid is controlled through the critical AC electric Rayleigh number Rec, whose value depends on several physical parameters. Furthermore, the effects of various pertinent parameters on the electro-thermo-hydrodynamic stability of the nanofluidic system are discussed in detail through graphical and tabular illustrations.
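    The power series method used above for the stability equations can be illustrated, in a heavily simplified form, on a toy eigenvalue problem. The sketch below is not the paper's equations; all names and the model problem are illustrative. It expands the solution of y'' + λy = 0 with y(0) = y(1) = 0 in a power series, applies the coefficient recursion implied by the ODE, and bisects on λ until the remaining boundary condition holds; the smallest eigenvalue should come out near π².

```python
# Illustrative sketch (not the paper's equations): the power series method
# applied to the simplest stability-type eigenproblem y'' + lam*y = 0,
# y(0) = y(1) = 0, whose smallest eigenvalue is pi**2.
import math

def y_at_1(lam, n_terms=60):
    """Evaluate the series solution y(x) = sum a_k x^k at x = 1.
    y(0) = 0 gives a0 = 0; normalize the slope with a1 = 1.
    The ODE gives the recursion a_{k+2} = -lam*a_k / ((k+2)*(k+1))."""
    a = [0.0, 1.0]
    for k in range(n_terms - 2):
        a.append(-lam * a[k] / ((k + 2) * (k + 1)))
    return sum(a)  # at x = 1, y(1) is just the sum of the coefficients

def smallest_eigenvalue(lo=1.0, hi=20.0, tol=1e-10):
    """Bisect on lam until the remaining boundary condition y(1) = 0 holds."""
    f_lo = y_at_1(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f_lo * y_at_1(mid) <= 0:
            hi = mid
        else:
            lo, f_lo = mid, y_at_1(mid)
    return 0.5 * (lo + hi)

lam = smallest_eigenvalue()
print(lam)  # close to pi**2 ~ 9.8696
```

    The same ingredients (series ansatz, coefficient recursion, root-finding on the boundary condition) carry over to the coupled stability equations of the paper, with the critical electric Rayleigh number playing the role of λ.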

  10. A combined analytical formulation and genetic algorithm to analyze the nonlinear damage responses of continuous fiber toughened composites

    NASA Astrophysics Data System (ADS)

    Jeon, Haemin; Yu, Jaesang; Lee, Hunsu; Kim, G. M.; Kim, Jae Woo; Jung, Yong Chae; Yang, Cheol-Min; Yang, B. J.

    2017-09-01

    Continuous fiber-reinforced composites are important materials, with the highest commercialization potential among existing advanced materials in the near future. Despite their wide use and value, their theoretical mechanisms have not been fully established, owing to the complexity of their compositions and their as-yet-unrevealed failure mechanisms. This study proposes an effective three-dimensional damage model of a fibrous composite that combines analytical micromechanics and evolutionary computation. The interface characteristics, debonding damage, and micro-cracks are considered to be the most influential factors on the toughness and failure behavior of composites, and a constitutive equation considering these factors was explicitly derived in accordance with the micromechanics-based ensemble volume averaged method. The optimal set of model parameters in the analytical model was found using a modified evolutionary computation that accounts for human-induced error. The effectiveness of the proposed formulation was validated by comparing a series of numerical simulations with experimental data from available studies.

  11. Why does the sign problem occur in evaluating the overlap of HFB wave functions?

    NASA Astrophysics Data System (ADS)

    Mizusaki, Takahiro; Oi, Makito; Shimizu, Noritaka

    2018-04-01

    For the overlap matrix element between Hartree-Fock-Bogoliubov states, there are two analytically different formulae: one with the square root of the determinant (the Onishi formula) and the other with the Pfaffian (Robledo's Pfaffian formula). The former is two-valued as a complex function and hence leaves the sign of the norm overlap undetermined (the so-called sign problem of the Onishi formula). The latter formula does not suffer from the sign problem. The derivations of these two formulae are so different that it is unclear why the resultant formulae possess different analytical properties. In this paper, we discuss the reason for the difference by means of a consistent framework based on the linked-cluster theorem and the product-sum identity for the Pfaffian. Through this discussion, we elucidate the source of the sign problem in the Onishi formula. We also point out that different summation methods of series expansions may result in analytically different formulae.

  12. Transfer function verification and block diagram simplification of a very high-order distributed pole closed-loop servo by means of non-linear time-response simulation

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, A. K.

    1975-01-01

    Linear frequency domain methods are inadequate in analyzing the 1975 Viking Orbiter (VO75) digital tape recorder servo due to dominant nonlinear effects such as servo signal limiting, unidirectional servo control, and static/dynamic Coulomb friction. The frequency loop (speed control) servo of the VO75 tape recorder is used to illustrate the analytical tools and methodology of system redundancy elimination and high order transfer function verification. The paper compares time-domain performance parameters derived from a series of nonlinear time responses with the available experimental data in order to select the best possible analytical transfer function representation of the tape transport (mechanical segment of the tape recorder) from several possible candidates. The study also shows how an analytical time-response simulation taking into account most system nonlinearities can pinpoint system redundancy and overdesign stemming from a strictly empirical design approach. System order reduction is achieved through truncation of individual transfer functions and elimination of redundant blocks.

  13. Pressure data for four analytically defined arrow wings in supersonic flow. [Langley Unitary Plan Wind Tunnel tests

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.

    1980-01-01

    In order to provide experimental data for comparison with newly developed finite difference methods for computing supersonic flows over aircraft configurations, wind tunnel tests were conducted on four arrow wing models. The models were machined under numeric control to precisely duplicate analytically defined shapes. They were heavily instrumented with pressure orifices at several cross sections ahead of and in the region where there is a gap between the body and the wing trailing edge. The test Mach numbers were 2.36, 2.96, and 4.63. Tabulated pressure data for the complete test series are presented along with selected oil flow photographs. Comparisons of some preliminary numerical results at zero angle of attack show good to excellent agreement with the experimental pressure distributions.

  14. Two-condition within-participant statistical mediation analysis: A path-analytic framework.

    PubMed

    Montoya, Amanda K; Hayes, Andrew F

    2017-03-01

    Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in each of 2 different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of, or formal inference about, the indirect effect. In this article we recast Judd et al.'s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis. By so doing, it becomes apparent how to estimate the indirect effect of a within-participant manipulation on some outcome through a mediator as the product of paths of influence. This path-analytic approach eliminates the need for discrete hypothesis tests about components of the model to support a claim of mediation, as Judd et al.'s method requires, because it relies only on an inference about the product of paths-the indirect effect. We generalize methods of inference for the indirect effect widely used in between-participant designs to this within-participant version of mediation analysis, including bootstrap confidence intervals and Monte Carlo confidence intervals. Using this path-analytic approach, we extend the method to models with multiple mediators operating in parallel and serially and discuss the comparison of indirect effects in these more complex models. We offer macros and code for SPSS, SAS, and Mplus that conduct these analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
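    A minimal sketch of the path-analytic indirect effect for a two-condition within-participant design, in the spirit of the approach described above (simplified; the simulated data, variable names, and effect sizes are illustrative, not from the article): a is the mean difference in the mediator across conditions, b is the slope of the Y-difference on the M-difference controlling for the centered M-average, the indirect effect is a*b, and a percentile bootstrap over participants gives a confidence interval.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # simulated participants
m1 = rng.normal(0.0, 1.0, n)              # mediator, condition 1
m2 = m1 + 0.8 + rng.normal(0.0, 0.5, n)   # condition shifts M by ~0.8 (path a)
y1 = 0.5 * m1 + rng.normal(0.0, 0.5, n)   # outcome depends on M (path b ~ 0.5)
y2 = 0.5 * m2 + rng.normal(0.0, 0.5, n)

def indirect(idx):
    """a*b for one (re)sample: a = mean M-difference, b = slope of the
    Y-difference on the M-difference, controlling for the centered M-average."""
    md, yd = (m2 - m1)[idx], (y2 - y1)[idx]
    mavg = ((m1 + m2) / 2.0)[idx]
    X = np.column_stack([np.ones(len(idx)), md, mavg - mavg.mean()])
    b = np.linalg.lstsq(X, yd, rcond=None)[0][1]
    return md.mean() * b

point = indirect(np.arange(n))            # point estimate of the indirect effect
boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(point, (lo, hi))                    # indirect effect near 0.8 * 0.5 = 0.4
```

    A CI excluding zero supports mediation without any of the discrete component tests of the Judd et al. procedure.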

  15. Application of Dynamic Analysis in Semi-Analytical Finite Element Method

    PubMed Central

    Oeser, Markus

    2017-01-01

    Analyses of dynamic responses are significantly important for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional but requires only a two-dimensional FE discretization, the third dimension being represented by a Fourier series. In this paper, the algorithm applying dynamic analysis to SAFEM is introduced in detail. Asphalt pavement models under moving loads were built in SAFEM and in the commercial finite element software ABAQUS to verify the accuracy and efficiency of SAFEM. The verification shows that the computational accuracy of SAFEM is sufficiently high and its computational time is much shorter than that of ABAQUS. Moreover, experimental verification was carried out, and the prediction derived from SAFEM is consistent with the measurement. Therefore, SAFEM can reliably predict the dynamic response of asphalt pavement under moving loads, proving beneficial to road administrations in assessing the pavement’s state. PMID:28867813

  16. Ultrasound-assisted dispersive liquid-liquid microextraction based on the solidification of a floating organic droplet followed by gas chromatography for the determination of eight pyrethroid pesticides in tea samples.

    PubMed

    Hou, Xiaohong; Zheng, Xin; Zhang, Conglu; Ma, Xiaowei; Ling, Qiyuan; Zhao, Longshan

    2014-10-15

    A novel ultrasound-assisted dispersive liquid-liquid microextraction method based on solidification of a floating organic droplet (UA-DLLME-SFO), combined with gas chromatography (GC), was developed for the determination of eight pyrethroid pesticides in tea for the first time. 1-Dodecanol and ethanol were used as the extraction and dispersive solvents, respectively, followed by ultrasound treatment and centrifugation. A series of parameters influencing the microextraction efficiency, including the extraction solvent and its volume, the dispersive solvent and its volume, extraction time, pH, and ultrasonic time, were systematically investigated. Under the optimal conditions, the enrichment factors (EFs) ranged from 292 to 883 for the eight analytes. The linear ranges for the analytes were from 5 to 100 μg/kg. The method recoveries ranged from 92.1% to 99.6%, with the corresponding RSDs less than 6.0%. The developed method was considered to be simple, fast, and precise enough to satisfy the requirements of residue analysis of pyrethroid pesticides. Copyright © 2014 Elsevier B.V. All rights reserved.
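    The two figures of merit quoted above can be written down explicitly. A minimal sketch, with illustrative numbers rather than the paper's data: the enrichment factor is the analyte concentration in the collected organic phase over its initial concentration in the sample, and relative recovery compares the amount found in a spiked sample against the spike level.

```python
# Illustrative numbers only (not the paper's data); units must match on both sides.
def enrichment_factor(c_organic_phase, c_initial):
    """EF = analyte concentration in the floating organic droplet / initial
    concentration in the aqueous sample."""
    return c_organic_phase / c_initial

def relative_recovery_pct(c_found, c_blank, c_spiked):
    """Recovery (%) = (found - blank) / spiked amount * 100."""
    return 100.0 * (c_found - c_blank) / c_spiked

ef = enrichment_factor(c_organic_phase=14.6, c_initial=0.05)   # same units
rec = relative_recovery_pct(c_found=0.096, c_blank=0.0, c_spiked=0.10)
print(round(ef), round(rec, 1))  # -> 292 96.0
```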

  17. Exact solitary wave solution for higher order nonlinear Schrodinger equation using He's variational iteration method

    NASA Astrophysics Data System (ADS)

    Rani, Monika; Bhatti, Harbax S.; Singh, Vikramjeet

    2017-11-01

    In optical communication, the behavior of ultrashort pulses of optical solitons can be described through the nonlinear Schrodinger equation. This partial differential equation is widely used to model a number of physically important phenomena, including optical shock waves, laser and plasma physics, quantum mechanics, and elastic media. The exact analytical solution of the (1+n)-dimensional higher-order nonlinear Schrodinger equation obtained by He's variational iteration method is presented. The proposed solutions are very helpful in studying solitary wave phenomena; they yield rapidly convergent series and avoid round-off errors. Different examples with graphical representations are given to demonstrate the capability of the method.
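    He's variational iteration method can be sketched on a toy problem far simpler than the Schrodinger equation treated above (this is an illustration of the iteration scheme, not the paper's derivation). For u' + u = 0 with u(0) = 1, the Lagrange multiplier of the correction functional is -1, so each iteration is u_{n+1}(t) = u_n(t) - ∫₀ᵗ (u_n'(s) + u_n(s)) ds, and successive iterates build the partial sums of the exact solution exp(-t).

```python
# VIM on the toy problem u' + u = 0, u(0) = 1 (exact solution exp(-t)).
# Each correction appends the next term of the exponential series.
import sympy as sp

t, s = sp.symbols("t s")
u = sp.Integer(1)                       # initial guess u0 = u(0) = 1
for _ in range(6):
    # correction functional with Lagrange multiplier -1
    integrand = (sp.diff(u, t) + u).subs(t, s)
    u = sp.expand(u - sp.integrate(integrand, (s, 0, t)))

print(u)  # partial sum of exp(-t): 1 - t + t**2/2 - ... up to t**6
```

    The rapid convergence claimed in the abstract corresponds to each iteration adding one more term of the series.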

  18. Travelling wave solutions of the homogeneous one-dimensional FREFLO model

    NASA Astrophysics Data System (ADS)

    Huang, B.; Hong, J. Y.; Jing, G. Q.; Niu, W.; Fang, L.

    2018-01-01

    At present there are few analytical studies of traffic flow, owing to the non-linearity of the governing equations. In the present paper we introduce travelling wave solutions for the homogeneous one-dimensional FREFLO model, expressed in the form of series, which describe the process in which vehicles/pedestrians move with a negative velocity and decelerate until rest, then accelerate inversely to positive velocities. This method is expected to be extended to more complex situations in the future.

  19. FREE-SURFACE SEPARATION OF STEAM AND WATER FOR APPLICATION IN A MARINE REACTOR AT 1000 PSIG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steamer, A.G.; Ongman, H.D.

    1960-07-13

    A series of free-surface steam separation tests was carried out at 1000 psig to obtain data to aid in checking analytical methods for the effect of ship's motion on steam separation. Data are presented on the shape and height of the steam-water interface with respect to the indicated water level for two vessel sizes. Further data are presented on the effects of water level and downcomer water velocity on steam carryunder. (auth)

  20. Linear perturbations of a Schwarzschild blackhole by thin disc - convergence

    NASA Astrophysics Data System (ADS)

    Čížek, P.; Semerák, O.

    2012-07-01

    In order to find the perturbation of a Schwarzschild space-time due to a rotating thin disc, we try to adapt the method used by [4] in the case of perturbation by a one-dimensional ring. This involves the solution of the stationary axisymmetric Einstein equations in terms of spherical-harmonic expansions, whose convergence, however, turned out to be questionable in numerical examples. Here we show analytically that the series are almost everywhere convergent, but in some regions the convergence is not absolute.

  1. Forced response of mistuned bladed disk assemblies

    NASA Technical Reports Server (NTRS)

    Watson, Brian C.; Kamat, Manohar P.; Murthy, Durbha V.

    1993-01-01

    A complete analytic model of mistuned bladed disk assemblies, designed to simulate the dynamical behavior of these systems, is analyzed. The model incorporates a generalized method for describing the mistuning of the assembly through the introduction of specific mistuning modes. The model is used to develop a computational bladed disk assembly model for a series of parametric studies. Results are presented demonstrating that the response amplitudes of bladed disk assemblies depend both on the excitation mode and on the mistune mode.

  2. Computation of the radiation amplitude of oscillons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fodor, Gyula; Forgacs, Peter; LMPT, CNRS-UMR 6083, Universite de Tours, Parc de Grandmont, 37200 Tours

    2009-03-15

    The radiation loss of small-amplitude oscillons (very long-living, spatially localized, time-dependent solutions) in one-dimensional scalar field theories is computed in the small-amplitude expansion analytically using matched asymptotic series expansions and Borel summation. The amplitude of the radiation is beyond all orders in perturbation theory and the method used has been developed by Segur and Kruskal in Phys. Rev. Lett. 58, 747 (1987). Our results are in good agreement with those of long-time numerical simulations of oscillons.

  3. Analysis of Oblique Wave Interaction with a Comb-Type Caisson Breakwater

    NASA Astrophysics Data System (ADS)

    Wang, Xinyu; Liu, Yong; Liang, Bingchen

    2018-04-01

    This study develops an analytical solution for oblique wave interaction with a comb-type caisson breakwater based on linear potential theory. The fluid domain is divided into inner and outer regions according to the geometrical shape of the breakwater. By using a periodic boundary condition and separation of variables, series solutions for the velocity potentials in the inner and outer regions are developed. The unknown expansion coefficients in the series solutions are determined by matching the velocity and pressure continuity conditions on the interface between the two regions. Then, hydrodynamic quantities, including the reflection coefficients and the wave forces acting on the breakwater, are estimated. The analytical solution is validated against a multi-domain boundary element method solution of the present problem. The diffuse reflection due to periodic variations in the breakwater shape and the corresponding surface elevations around the breakwater are analyzed. Numerical examples are also presented to examine the effects of caisson parameters on the total wave forces acting on the caissons and on the side plates. Compared with a traditional vertical-wall breakwater, the wave force acting on a suitably designed comb-type caisson breakwater can be significantly reduced. This study gives a better understanding of the hydrodynamic performance of comb-type caisson breakwaters.

  4. Preparation and characterization of a suite of ephedra-containing standard reference materials.

    PubMed

    Sharpless, Katherine E; Anderson, David L; Betz, Joseph M; Butler, Therese A; Capar, Stephen G; Cheng, John; Fraser, Catharine A; Gardner, Graeme; Gay, Martha L; Howell, Daniel W; Ihara, Toshihide; Khan, Mansoor A; Lam, Joseph W; Long, Stephen E; McCooeye, Margaret; Mackey, Elizabeth A; Mindak, William R; Mitvalsky, Staci; Murphy, Karen E; NguyenPho, Agnes; Phinney, Karen W; Porter, Barbara J; Roman, Mark; Sander, Lane C; Satterfield, Mary B; Scriver, Christine; Sturgeon, Ralph; Thomas, Jeanice Brown; Vocke, Robert D; Wise, Stephen A; Wood, Laura J; Yang, Lu; Yen, James H; Ziobro, George C

    2006-01-01

    The National Institute of Standards and Technology, the U.S. Food and Drug Administration, Center for Drug Evaluation and Research and Center for Food Safety and Applied Nutrition, and the National Institutes of Health, Office of Dietary Supplements, are collaborating to produce a series of Standard Reference Materials (SRMs) for dietary supplements. A suite of ephedra materials is the first in the series, and this paper describes the acquisition, preparation, and value assignment of these materials: SRMs 3240 Ephedra sinica Stapf Aerial Parts, 3241 E. sinica Stapf Native Extract, 3242 E. sinica Stapf Commercial Extract, 3243 Ephedra-Containing Solid Oral Dosage Form, and 3244 Ephedra-Containing Protein Powder. Values are assigned for ephedrine alkaloids and toxic elements in all 5 materials. Values are assigned for other analytes (e.g., caffeine, nutrient elements, proximates, etc.) in some of the materials, as appropriate. Materials in this suite of SRMs are intended for use as primary control materials when values are assigned to in-house (secondary) control materials and for validation of analytical methods for the measurement of alkaloids, toxic elements, and, in the case of SRM 3244, nutrients in similar materials.

  5. Hydroelastic vibration analysis of partially liquid-filled shells using a series representation of the liquid

    NASA Technical Reports Server (NTRS)

    Housner, J. M.; Herr, R. W.; Sewall, J. L.

    1980-01-01

    A series representation of the oscillatory behavior of incompressible nonviscous liquids contained in partially filled elastic tanks is presented. Each term is selected on the basis of hydroelastic vibrations in circular cylindrical tanks. Using a complementary energy principle, the superposition of terms is made to approximately satisfy the liquid-tank interface compatibility. This analysis is applied to the gravity sloshing and hydroelastic vibrations of liquids in hemispherical tanks and in a typical elastic aerospace propellant tank. With only a few series terms retained, the results correlate very well with existing analytical results, NASTRAN-generated analytical results, and experimental test results. Hence, although each term is based on a cylindrical tank geometry, the superposition can be successfully applied to noncylindrical tanks.

  6. Analytical and experimental comparisons of electromechanical vibration response of a piezoelectric bimorph beam for power harvesting

    NASA Astrophysics Data System (ADS)

    Lumentut, M. F.; Howard, I. M.

    2013-03-01

    Power harvesters that extract energy from vibrating systems via piezoelectric transduction show strong potential for powering smart wireless sensor devices in applications of health condition monitoring of rotating machinery and structures. This paper presents an analytical method for modelling an electromechanical piezoelectric bimorph beam with tip mass under two input base excitations, transverse and longitudinal. The Euler-Bernoulli beam equations were used to model the piezoelectric bimorph beam. The polarity-electric field of the piezoelectric element is excited by the strain field caused by the base input excitation, resulting in electrical charge. The governing electromechanical dynamic equations were derived analytically using the weak form of the Hamiltonian principle to obtain the constitutive equations. Three constitutive electromechanical dynamic equations based on independent coefficients of the virtual displacement vectors were formulated and then further modelled using the normalised Ritz eigenfunction series. The electromechanical formulations include both the series and parallel connections of the piezoelectric bimorph. The multi-mode frequency response functions (FRFs) under varying electrical load resistance were formulated using the Laplace transformation of the multi-input mechanical vibrations to provide the multi-output dynamic displacement, velocity, voltage, current and power. The experimental and theoretical validations, reduced to the single-mode system, were shown to provide reasonable predictions. The model results for polar base excitation with off-axis input motions were validated against experimental results, showing the change in the electrical power frequency response amplitude as a function of excitation angle, with relevance for practical implementation.

  7. Microfluidic devices to enrich and isolate circulating tumor cells

    PubMed Central

    Myung, J. H.; Hong, S.

    2015-01-01

    Given the potential clinical impact of circulating tumor cells (CTCs) in blood as a clinical biomarker for diagnosis and prognosis of various cancers, a myriad of detection methods for CTCs have recently been introduced. Among these, a series of microfluidic devices are particularly promising, as they uniquely offer micro-scale analytical systems highlighted by low consumption of samples and reagents, high flexibility to accommodate other cutting-edge technologies, precise and well-defined flow behaviors, and automation capability, presenting significant advantages over conventional larger-scale systems. In this review, we highlight the advantages of microfluidic devices and their translational potential into CTC detection methods, categorized by miniaturization of bench-top analytical instruments, integration capability with nanotechnologies, and in situ or sequential analysis of captured CTCs. This review provides a comprehensive overview of recent advances in CTC detection achieved through the application of microfluidic devices, and the challenges that these promising technologies must overcome to be clinically impactful. PMID:26549749

  8. A Squeeze-film Damping Model for the Circular Torsion Micro-resonators

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Li, Pu

    2017-07-01

    In recent years, MEMS devices have been widely used in many industries. The prediction of squeeze-film damping is very important for the design of high-quality-factor resonators. In the past, many analytical models have been presented for predicting the squeeze-film damping of torsion micro-resonators. However, for the circular torsion micro-plate such work is very rare; the only model is that presented by Xia et al. [7] using the method of eigenfunction expansions. In this paper, a Bessel series solution is used to solve the Reynolds equation under the assumption of incompressible gas in the gap, and the pressure distribution of the gas between the two micro-plates is obtained. Then the analytical expression for the damping constant of the device is derived. The result of the present model matches very well with finite element method (FEM) solutions and with the result of Xia's model, validating the accuracy of the present model.

  9. Analytical and experimental investigation on transmission loss of clamped double panels: implication of boundary effects.

    PubMed

    Xin, F X; Lu, T J

    2009-03-01

    The air-borne sound insulation performance of a rectangular double-panel partition clamp mounted on an infinite acoustic rigid baffle is investigated both analytically and experimentally and compared with that of a simply supported one. With the clamped (or simply supported) boundary accounted for by using the method of modal function, a double series solution for the sound transmission loss (STL) of the structure is obtained by employing the weighted residual (Galerkin) method. Experimental measurements with Al double-panel partitions having air cavity are subsequently carried out to validate the theoretical model for both types of the boundary condition, and good overall agreement is achieved. A consistency check of the two different models (based separately on clamped modal function and simply supported modal function) is performed by extending the panel dimensions to infinite where no boundaries exist. The significant discrepancies between the two different boundary conditions are demonstrated in terms of the STL versus frequency plots as well as the panel deflection mode shapes.

  10. Microstates in resting-state EEG: current status and future directions.

    PubMed

    Khanna, Arjun; Pascual-Leone, Alvaro; Michel, Christoph M; Farzan, Faranak

    2015-02-01

    Electroencephalography (EEG) is a powerful method of studying the electrophysiology of the brain with high temporal resolution. Several analytical approaches to extract information from the EEG signal have been proposed. One method, termed microstate analysis, considers the multichannel EEG recording as a series of quasi-stable "microstates" that are each characterized by a unique topography of electric potentials over the entire channel array. Because this technique simultaneously considers signals recorded from all areas of the cortex, it is capable of assessing the function of large-scale brain networks whose disruption is associated with several neuropsychiatric disorders. In this review, we first introduce the method of EEG microstate analysis. We then review studies that have discovered significant changes in the resting-state microstate series in a variety of neuropsychiatric disorders and behavioral states. We discuss the potential utility of this method in detecting neurophysiological impairments in disease and monitoring neurophysiological changes in response to an intervention. Finally, we discuss how the resting-state microstate series may reflect rapid switching among neural networks while the brain is at rest, which could represent activity of resting-state networks described by other neuroimaging modalities. We conclude by commenting on the current and future status of microstate analysis, and suggest that EEG microstates represent a promising neurophysiological tool for understanding and assessing brain network dynamics on a millisecond timescale in health and disease. Copyright © 2014 Elsevier Ltd. All rights reserved.
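    The core of the microstate analysis described above can be sketched on synthetic data (a rough illustration with made-up parameters, not a full pipeline; real analyses typically add filtering, peak selection criteria, and the canonical four maps A-D): cluster the multichannel topographies at global field power (GFP) peaks with a polarity-invariant k-means, then back-fit the resulting maps to every sample so the recording becomes a series of quasi-stable microstates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_t, k = 16, 1000, 4
templates_true = rng.normal(size=(k, n_ch))            # hypothetical "true" maps
states = np.repeat(rng.integers(0, k, n_t // 50), 50)  # slow state switching
eeg = templates_true[states].T + 0.3 * rng.normal(size=(n_ch, n_t))
eeg -= eeg.mean(axis=0)                                # average reference

gfp = eeg.std(axis=0)                                  # global field power
peaks = np.flatnonzero((gfp[1:-1] > gfp[:-2]) & (gfp[1:-1] > gfp[2:])) + 1
X = eeg[:, peaks] / np.linalg.norm(eeg[:, peaks], axis=0)  # unit topographies

maps = X[:, rng.choice(X.shape[1], k, replace=False)].copy()
for _ in range(30):                                    # polarity-invariant k-means
    labels = np.abs(maps.T @ X).argmax(axis=0)         # abs() ignores polarity
    for j in range(k):                                 # map update = first PC
        xj = X[:, labels == j]
        _, vecs = np.linalg.eigh(xj @ xj.T)
        maps[:, j] = vecs[:, -1]

fit = np.abs(maps.T @ (eeg / np.linalg.norm(eeg, axis=0)))
labels_all = fit.argmax(axis=0)                        # back-fit every sample
print(labels_all.shape, round(fit.max(axis=0).mean(), 2))
```

    From `labels_all`, the usual microstate statistics (mean duration, occurrence rate, coverage, transition probabilities) follow by counting runs of identical labels.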

  12. Effect of Microscopic Damage Events on Static and Ballistic Impact Strength of Triaxial Braid Composites

    NASA Technical Reports Server (NTRS)

    Littell, Justin D.; Binienda, Wieslaw K.; Arnold, William A.; Roberts, Gary D.; Goldberg, Robert K.

    2010-01-01

    The reliability of impact simulations for aircraft components made with triaxial-braided carbon-fiber composites is currently limited by inadequate material property data and a lack of validated material models for analysis. Methods to characterize the material properties used in the analytical models from a systematically obtained set of test data are also lacking. A macroscopic finite-element-based analytical model to analyze the impact response of these materials has been developed. The stiffness and strength properties utilized in the material model are obtained from a set of quasi-static in-plane tension, compression and shear coupon-level tests. Full-field optical strain measurement techniques are applied in the testing, and the results are used to help characterize the model. The unit cell of the braided composite is modeled as a series of shell elements, where each element is modeled as a laminated composite; the braided architecture can thus be approximated within the analytical model. The transient dynamic finite element code LS-DYNA is utilized to conduct the finite element simulations, and an internal LS-DYNA constitutive model is utilized in the analysis. Methods to obtain the stiffness and strength properties required by the constitutive model from the available test data are developed. Simulations of quasi-static coupon tests and impact tests of a representative braided composite are conducted. Overall, the developed method shows promise, and the improvements needed in test and analysis methods for better predictive capability are examined.

  13. Quantitative analysis of drugs in hair by UHPLC high resolution mass spectrometry.

    PubMed

    Kronstrand, Robert; Forsman, Malin; Roman, Markus

    2018-02-01

    Liquid chromatographic methods coupled to high resolution mass spectrometry are increasingly used to identify compounds in various matrices including hair but there are few recommendations regarding the parameters and their criteria to identify a compound. In this study we present a method for the identification and quantification of a range of drugs and discuss the parameters used to identify a compound with high resolution mass spectrometry. Drugs were extracted from hair by incubation in a buffer:solvent mixture at 37°C during 18h. Analysis was performed on a chromatographic system comprised of an Agilent 6550 QTOF coupled to a 1290 Infinity UHPLC system. High resolution accurate mass data were acquired in the All Ions mode and exported into Mass Hunter Quantitative software for quantitation and identification using qualifier fragment ions. Validation included selectivity, matrix effects, calibration range, within day and between day precision and accuracy. The analytes were 7-amino-flunitrazepam, 7-amino-clonazepam, 7-amino-nitrazepam, acetylmorphine, alimemazine, alprazolam, amphetamine, benzoylecgonine, buprenorphine, diazepam, ethylmorphine, fentanyl, hydroxyzine, ketobemidone, codeine, cocaine, MDMA, methadone, methamphetamine, morphine, oxycodone, promethazine, propiomazine, propoxyphene, tramadol, zaleplone, zolpidem, and zopiclone. As proof of concept, hair from 29 authentic post mortem cases were analysed. The calibration range was established between 0.05ng/mg to 5.0ng/mg for all analytes except fentanyl (0.02-2.0), buprenorphine (0.04-2.0), and ketobemidone (0.05-4.0) as well as for alimemazine, amphetamine, cocaine, methadone, and promethazine (0.10-5.0). For all analytes, the accuracy of the fortified pooled hair matrix was 84-108% at the low level and 89-106% at the high level. The within series precisions were between 1.4 and 6.7% and the between series precisions were between 1.4 and 10.1%. 
From the 29 autopsy cases, 121 positive findings were encountered from 23 of the analytes in concentrations similar to those previously published. We conclude that the developed method proved precise and accurate and that it had sufficient performance for the purpose of detecting regular use of drugs or treatment with prescription drugs. To identify a compound we recommend the use of ion ratios as a complement to instrument software "matching scores". Copyright © 2018 Elsevier B.V. All rights reserved.
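    The within-series and between-series precision (RSD%) and accuracy figures quoted above can be sketched from replicate QC measurements as follows; the QC values and the nominal level below are hypothetical, not data from the study.

```python
# Sketch of validation statistics from replicate QC runs: accuracy as
# percent of nominal, within-series RSD pooled across runs, and
# between-series RSD from the run means. All values are hypothetical.
import statistics

nominal = 0.50  # ng/mg, hypothetical QC level
runs = [  # three measurement series of replicate QC injections
    [0.48, 0.50, 0.49],
    [0.52, 0.51, 0.50],
    [0.47, 0.49, 0.48],
]

all_values = [v for run in runs for v in run]
accuracy = 100.0 * statistics.mean(all_values) / nominal

# Within-series RSD: average relative standard deviation over the runs
within = statistics.mean(
    100.0 * statistics.stdev(run) / statistics.mean(run) for run in runs
)
# Between-series RSD: spread of the run means
run_means = [statistics.mean(run) for run in runs]
between = 100.0 * statistics.stdev(run_means) / statistics.mean(run_means)

print(f"accuracy {accuracy:.1f}%, within {within:.1f}%, between {between:.1f}%")
```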

  14. Detrending moving average algorithm for multifractals

    NASA Astrophysics Data System (ADS)

    Gu, Gao-Feng; Zhou, Wei-Xing

    2010-07-01

The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces; it contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, which generalize the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. The backward MFDMA algorithm is also found to outperform multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to the time series of the Shanghai Stock Exchange Composite Index, and its multifractal nature is confirmed.
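    The backward (θ=0) detrending step at the core of the method can be sketched as below: the local trend at each point is the average of the preceding n values, and the fluctuation function F(n) is the RMS of the residuals. This is only the monofractal (q=2) DMA step, not the full MFDMA machinery, and the variable names are illustrative.

```python
# Backward DMA sketch: detrend a series with its trailing moving average,
# then estimate the scaling exponent of F(n) vs n. For an uncorrelated
# random walk the exponent (Hurst) should come out near 0.5.
import numpy as np

def dma_fluctuation(x, n):
    """RMS fluctuation of x around its backward moving average of window n."""
    x = np.asarray(x, dtype=float)
    kernel = np.ones(n) / n
    trend = np.convolve(x, kernel, mode="valid")  # mean of x[i-n+1 .. i]
    resid = x[n - 1:] - trend
    return np.sqrt(np.mean(resid ** 2))

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(20000))  # Brownian walk, H = 0.5

windows = np.array([10, 20, 40, 80, 160])
F = np.array([dma_fluctuation(walk, n) for n in windows])
hurst = np.polyfit(np.log(windows), np.log(F), 1)[0]  # slope of log F vs log n
print(round(hurst, 2))
```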

  15. A Short Biography of Joseph Fourier and Historical Development of Fourier Series and Fourier Transforms

    ERIC Educational Resources Information Center

    Debnath, Lokenath

    2012-01-01

    This article deals with a brief biographical sketch of Joseph Fourier, his first celebrated work on analytical theory of heat, his first great discovery of Fourier series and Fourier transforms. Included is a historical development of Fourier series and Fourier transforms with their properties, importance and applications. Special emphasis is made…

  16. Development and application of a validated HPLC method for the analysis of dissolution samples of levothyroxine sodium drug products.

    PubMed

    Collier, J W; Shah, R B; Bryant, A R; Habib, M J; Khan, M A; Faustino, P J

    2011-02-20

    A rapid, selective, and sensitive gradient HPLC method was developed for the analysis of dissolution samples of levothyroxine sodium tablets. Current USP methodology for levothyroxine (L-T(4)) was not adequate to resolve co-elutants from a variety of levothyroxine drug product formulations. The USP method for analyzing dissolution samples of the drug product has shown significant intra- and inter-day variability. The sources of method variability include chromatographic interferences introduced by the dissolution media and the formulation excipients. In the present work, chromatographic separation of levothyroxine was achieved on an Agilent 1100 Series HPLC with a Waters Nova-pak column (250 mm × 3.9 mm) using a 0.01 M phosphate buffer (pH 3.0)-methanol (55:45, v/v) in a gradient elution mobile phase at a flow rate of 1.0 mL/min and detection UV wavelength of 225 nm. The injection volume was 800 μL and the column temperature was maintained at 28°C. The method was validated according to USP Category I requirements. The validation characteristics included accuracy, precision, specificity, linearity, and analytical range. The standard curve was found to have a linear relationship (r(2)>0.99) over the analytical range of 0.08-0.8 μg/mL. Accuracy ranged from 90 to 110% for low quality control (QC) standards and 95 to 105% for medium and high QC standards. Precision was <2% at all QC levels. The method was found to be accurate, precise, selective, and linear for L-T(4) over the analytical range. The HPLC method was successfully applied to the analysis of dissolution samples of marketed levothyroxine sodium tablets. Published by Elsevier B.V.
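    The linearity and QC-accuracy checks described above (r² > 0.99 over 0.08-0.8 μg/mL; QC accuracy within stated limits) can be sketched as follows; the peak areas and QC values are hypothetical, not data from the validation.

```python
# Sketch of a USP Category I style check: fit the calibration line,
# compute r^2, and score a mid-level QC sample against nominal.
# Detector responses below are invented for illustration.
import numpy as np

conc = np.array([0.08, 0.16, 0.32, 0.48, 0.64, 0.80])      # ug/mL standards
area = np.array([41.0, 80.5, 162.0, 241.0, 322.5, 401.0])  # peak areas

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

qc_area, qc_nominal = 201.0, 0.40  # hypothetical mid-level QC
qc_found = (qc_area - intercept) / slope
qc_accuracy = 100.0 * qc_found / qc_nominal
print(f"r2={r2:.4f}, QC accuracy={qc_accuracy:.1f}%")
```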

  17. Development and application of a validated HPLC method for the analysis of dissolution samples of levothyroxine sodium drug products

    PubMed Central

    Collier, J.W.; Shah, R.B.; Bryant, A.R.; Habib, M.J.; Khan, M.A.; Faustino, P.J.

    2011-01-01

    A rapid, selective, and sensitive gradient HPLC method was developed for the analysis of dissolution samples of levothyroxine sodium tablets. Current USP methodology for levothyroxine (l-T4) was not adequate to resolve co-elutants from a variety of levothyroxine drug product formulations. The USP method for analyzing dissolution samples of the drug product has shown significant intra- and inter-day variability. The sources of method variability include chromatographic interferences introduced by the dissolution media and the formulation excipients. In the present work, chromatographic separation of levothyroxine was achieved on an Agilent 1100 Series HPLC with a Waters Nova-pak column (250mm × 3.9mm) using a 0.01 M phosphate buffer (pH 3.0)–methanol (55:45, v/v) in a gradient elution mobile phase at a flow rate of 1.0 mL/min and detection UV wavelength of 225 nm. The injection volume was 800 µL and the column temperature was maintained at 28 °C. The method was validated according to USP Category I requirements. The validation characteristics included accuracy, precision, specificity, linearity, and analytical range. The standard curve was found to have a linear relationship (r2 > 0.99) over the analytical range of 0.08–0.8 µg/mL. Accuracy ranged from 90 to 110% for low quality control (QC) standards and 95 to 105% for medium and high QC standards. Precision was <2% at all QC levels. The method was found to be accurate, precise, selective, and linear for l-T4 over the analytical range. The HPLC method was successfully applied to the analysis of dissolution samples of marketed levothyroxine sodium tablets. PMID:20947276

  18. Generic and Automated Data Evaluation in Analytical Measurement.

    PubMed

    Adam, Martin; Fleischer, Heidi; Thurow, Kerstin

    2017-04-01

    In the past year, automation has become more and more important in the field of elemental and structural chemical analysis to reduce the high degree of manual operation and processing time as well as human errors. Thus, a high number of data points are generated, which requires fast and automated data evaluation. To handle the preprocessed export data from different analytical devices with software from various vendors offering a standardized solution without any programming knowledge should be preferred. In modern laboratories, multiple users will use this software on multiple personal computers with different operating systems (e.g., Windows, Macintosh, Linux). Also, mobile devices such as smartphones and tablets have gained growing importance. The developed software, Project Analytical Data Evaluation (ADE), is implemented as a web application. To transmit the preevaluated data from the device software to the Project ADE, the exported XML report files are detected and the included data are imported into the entities database using the Data Upload software. Different calculation types of a sample within one measurement series (e.g., method validation) are identified using information tags inside the sample name. The results are presented in tables and diagrams on different information levels (general, detailed for one analyte or sample).
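    The import step, where exported XML reports are parsed and samples are grouped by an information tag in the sample name, could look roughly like this. The report schema and the tag convention below are invented for illustration; they are not the actual Mass Hunter export or Project ADE format.

```python
# Illustrative only: parse a (hypothetical) XML report and group results
# by the calculation type encoded in the sample name (e.g. "CAL_...").
import xml.etree.ElementTree as ET

report = """<report>
  <sample name="CAL_0.1"><result analyte="Pb" value="0.101"/></sample>
  <sample name="CAL_0.5"><result analyte="Pb" value="0.497"/></sample>
  <sample name="QC_mid"><result analyte="Pb" value="0.304"/></sample>
</report>"""

root = ET.fromstring(report)
groups = {}
for sample in root.iter("sample"):
    kind = sample.get("name").split("_")[0]  # info tag inside the sample name
    for res in sample.iter("result"):
        groups.setdefault(kind, []).append(float(res.get("value")))

print(groups)  # {'CAL': [0.101, 0.497], 'QC': [0.304]}
```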

  19. Research Methods in Healthcare Epidemiology and Antimicrobial Stewardship-Quasi-Experimental Designs.

    PubMed

    Schweizer, Marin L; Braun, Barbara I; Milstone, Aaron M

    2016-10-01

    Quasi-experimental studies evaluate the association between an intervention and an outcome using experiments in which the intervention is not randomly assigned. Quasi-experimental studies are often used to evaluate rapid responses to outbreaks or other patient safety problems requiring prompt, nonrandomized interventions. Quasi-experimental studies can be categorized into 3 major types: interrupted time-series designs, designs with control groups, and designs without control groups. This methods paper highlights key considerations for quasi-experimental studies in healthcare epidemiology and antimicrobial stewardship, including study design and analytic approaches to avoid selection bias and other common pitfalls of quasi-experimental studies. Infect Control Hosp Epidemiol 2016;1-6.
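    The interrupted time-series design named above is commonly analyzed with segmented regression, estimating a level change and a trend change at the intervention point. A minimal sketch on simulated monthly rates (not data from the paper):

```python
# Segmented regression for an interrupted time series: outcome ~ intercept
# + baseline trend + level change + post-intervention trend change.
# The rates are simulated with a known -3.0 level and -0.2 trend change.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(24)                        # 24 months
post = (t >= 12).astype(float)           # intervention indicator at month 12
t_post = np.where(t >= 12, t - 12, 0)    # time since intervention
rate = 10 + 0.1 * t - 3.0 * post - 0.2 * t_post + rng.normal(0, 0.3, 24)

# Design matrix: intercept, baseline trend, level change, trend change
X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
print("level change: %.2f, trend change: %.2f" % (beta[2], beta[3]))
```

The fitted level and trend changes should recover the simulated values up to noise; in a real study one would also check for autocorrelation in the residuals.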

  20. Research Methods in Healthcare Epidemiology and Antimicrobial Stewardship – Quasi-Experimental Designs

    PubMed Central

    Schweizer, Marin L.; Braun, Barbara I.; Milstone, Aaron M.

    2016-01-01

    Quasi-experimental studies evaluate the association between an intervention and an outcome using experiments in which the intervention is not randomly assigned. Quasi-experimental studies are often used to evaluate rapid responses to outbreaks or other patient safety problems requiring prompt non-randomized interventions. Quasi-experimental studies can be categorized into three major types: interrupted time series designs, designs with control groups, and designs without control groups. This methods paper highlights key considerations for quasi-experimental studies in healthcare epidemiology and antimicrobial stewardship including study design and analytic approaches to avoid selection bias and other common pitfalls of quasi-experimental studies. PMID:27267457

  1. Development flight tests of the Viking decelerator system.

    NASA Technical Reports Server (NTRS)

    Murrow, H. N.; Eckstrom, C. V.; Henke, D. W.

    1973-01-01

Significant aspects of a low altitude flight test phase of the overall Viking decelerator system development are given. This test series included nine aircraft drop tests that were conducted at the Joint Parachute Test Facility, El Centro, California, between September 1971 and May 1972. The test technique and analytical planning method utilized to best simulate loading conditions in a low density environment are presented and some test results are shown to assess their adequacy. Performance effects relating to suspension line lengths of 1.7 D_o with different canopy loadings are noted. System hardware developments are described, in particular the utilization of a fabric deployment mortar cover which remained attached to the parachute canopy. Finally, the contribution of this test series to the overall program is assessed.

  2. Surface pressure data on a series of analytic forebodies at Mach numbers from 1.70 to 4.50 and combined angles of attack and sideslip. [Langley Unitary Plan wind tunnel

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.; Howell, D. T.; Collins, I. K.; Hayes, C.

    1979-01-01

    Tabulated surface pressure data for a series of four forebodies which have analytically defined cross sections and which are based on a parabolic arc profile having a 20 deg half angle at the nose are presented without analysis. The first forebody has a circular cross section, and the second has a cross section which is an ellipse with an axis ratio of 2/1. The third has a cross section defined by a lobed analytic curve. The fourth forebody has cross sections which develop smoothly from circular at the pointed nose through the lobed analytic curve and back to circular at the aft end. The data generally cover angles of attack from -5 deg to 20 deg at angles of sideslip from 0 deg to 5 deg for Mach numbers of 1.70, 2.50, 3.95, and 4.50 at a constant Reynolds number.

  3. Transportation Life Cycle Assessment (LCA) Synthesis, Phase II

    DOT National Transportation Integrated Search

    2018-04-24

    The Transportation Life Cycle Assessment (LCA) Synthesis includes an LCA Learning Module Series, case studies, and analytics on the use of the modules. The module series is a set of narrated slideshows on topics related to environmental LCA. Phase I ...

  4. Testing and analysis of flat and curved panels with multiple cracks

    DOT National Transportation Integrated Search

    1994-08-01

    An experimental and analytical investigation of multiple cracking in various types of test specimens is described in this paper. The testing phase is comprised of a flat unstiffened panel series and curved stiffened and unstiffened panel series. The ...

  5. A discrete method for modal analysis of overhead line conductor bundles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Migdalovici, M.A.; Sireteanu, T.D.; Albrecht, A.A.

The paper presents a mathematical model and a semi-analytical procedure to calculate the vibration modes and eigenfrequencies of single or bundled conductors with spacers, which are needed for evaluating the wind-induced vibration of conductors and for optimizing spacer-damper placement. The method consists of decomposing the conductors into modules and expanding the unknown displacements on each module in polynomial series. A complete system of polynomials is constructed for this purpose from Legendre polynomials. For each module, either boundary conditions at the extremity of the module or continuity conditions between modules are imposed, together with a number of projections of the module equilibrium equation onto the polynomials from the expansion series of the unknown displacement. The global system for the eigenmodes and eigenfrequencies has the matrix form A X + ω² M X = 0. The theoretical considerations are exemplified on one conductor and on a bundle of two conductors with spacers. From this, a method for the forced-vibration calculation of single or bundled conductors is also presented.
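    A generalized eigenvalue problem of the quoted form A X + ω² M X = 0 can be sketched on a toy two-degree-of-freedom system; the matrices below are illustrative, not the conductor-bundle model itself.

```python
# Toy modal analysis: with a stiffness-like matrix K (A = -K in the
# paper's sign convention) and mass matrix M, the squared natural
# frequencies are the eigenvalues of M^{-1} K.
import numpy as np

K = np.array([[2.0, -1.0],
              [-1.0, 1.0]])   # stiffness of a two-mass spring chain
M = np.diag([1.0, 2.0])       # mass matrix

# -K X + omega^2 M X = 0  =>  omega^2 are eigenvalues of M^{-1} K
omega_sq = np.linalg.eigvals(np.linalg.inv(M) @ K)
omega = np.sort(np.sqrt(omega_sq.real))
print(omega)  # natural angular frequencies of the toy system
```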

  6. Chiral separation of G-type chemical warfare nerve agents via analytical supercritical fluid chromatography.

    PubMed

    Kasten, Shane A; Zulli, Steven; Jones, Jonathan L; Dephillipo, Thomas; Cerasoli, Douglas M

    2014-12-01

    Chemical warfare nerve agents (CWNAs) are extremely toxic organophosphorus compounds that contain a chiral phosphorus center. Undirected synthesis of G-type CWNAs produces stereoisomers of tabun, sarin, soman, and cyclosarin (GA, GB, GD, and GF, respectively). Analytical-scale methods were developed using a supercritical fluid chromatography (SFC) system in tandem with a mass spectrometer for the separation, quantitation, and isolation of individual stereoisomers of GA, GB, GD, and GF. Screening various chiral stationary phases (CSPs) for the capacity to provide full baseline separation of the CWNAs revealed that a Regis WhelkO1 (SS) column was capable of separating the enantiomers of GA, GB, and GF, with elution of the P(+) enantiomer preceding elution of the corresponding P(-) enantiomer; two WhelkO1 (SS) columns had to be connected in series to achieve complete baseline resolution. The four diastereomers of GD were also resolved using two tandem WhelkO1 (SS) columns, with complete baseline separation of the two P(+) epimers. A single WhelkO1 (RR) column with inverse stereochemistry resulted in baseline separation of the GD P(-) epimers. The analytical methods described can be scaled to allow isolation of individual stereoisomers to assist in screening and development of countermeasures to organophosphorus nerve agents. © 2014 The Authors. Chirality published by John Wiley Periodicals, Inc.

  7. [Study on the method for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples by direct current arc full spectrum direct reading atomic emission spectroscopy (DC-Arc-AES)].

    PubMed

    Hao, Zhi-hong; Yao, Jian-zhen; Tang, Rui-ling; Zhang, Xue-mei; Li, Wen-ge; Zhang, Qin

    2015-02-01

The method for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples by direct current arc full spectrum direct reading atomic emission spectroscopy (DC-Arc-AES) was established. The direct current arc full spectrum direct reading atomic emission spectrometer, with a large-area solid-state detector, provides full spectrum direct reading and real-time background correction. New electrodes and a new buffer recipe were proposed in this paper, and a national patent has been applied for. Suitable analytical line pairs and background correction points were selected for the elements, and the internal standard method was used with Ge as the internal standard. In the study of the current program, multistage currents were selected, with a different holding time set for each current so that each element has a good signal-to-noise ratio. The continuously rising current mode selected can effectively eliminate splashing of the sample. Argon as a shielding gas eliminates CN band generation and reduces the spectral background; it also plays a role in stabilizing the arc, and an argon flow of 3.5 L x min(-1) was selected. The evaporation curve of each element was measured, showing that the evaporation behavior of the elements is consistent; combined with the effects of different spectrographic times on intensity and background, a spectrographic time of 35 s was selected. National standard substances were selected as the standard series, which includes standard substances of different natures and contents that meet the requirements for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples. Under the optimum experimental conditions, the detection limits for B, Mo, Ag, Sn and Pb are 1.1, 0.09, 0.01, 0.41, and 0.56 microg x g(-1) respectively, and the precisions (RSD, n=12) for B, Mo, Ag, Sn and Pb are 4.57%-7.63%, 5.14%-7.75%, 5.48%-12.30%, 3.97%-10.46%, and 4.26%-9.21% respectively.
The analytical accuracy was validated with national standards, and the results are in agreement with the certified values. The method is simple and rapid, is an advanced analytical method for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples, and has practical utility.
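    Detection limits of the kind reported above are conventionally estimated as LOD = 3 s_blank / b, where s_blank is the standard deviation of replicate blank readings and b the calibration slope. A sketch with hypothetical numbers:

```python
# 3-sigma detection limit sketch. Blank signals and slope are invented,
# not values from the study.
import statistics

blank_signals = [0.51, 0.48, 0.50, 0.53, 0.49, 0.50, 0.52, 0.47, 0.50, 0.51]
slope = 1.35  # signal per (ug/g), hypothetical calibration slope

s_blank = statistics.stdev(blank_signals)
lod = 3 * s_blank / slope
print(f"LOD = {lod:.3f} ug/g")
```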

  8. Geomagnetism of earth's core

    NASA Technical Reports Server (NTRS)

    Benton, E. R.

    1983-01-01

    Instrumentation, analytical methods, and research goals for understanding the behavior and source of geophysical magnetism are reviewed. Magsat, launched in 1979, collected global magnetometer data and identified the main terrestrial magnetic fields. The data has been treated by representing the curl-free field in terms of a scalar potential which is decomposed into a truncated series of spherical harmonics. Solutions to the Laplace equation then extend the field upward or downward from the measurement level through intervening spaces with no source. Further research is necessary on the interaction between harmonics of various spatial scales. Attempts are also being made to analytically model the main field and its secular variation at the core-mantle boundary. Work is also being done on characterizing the core structure, composition, thermodynamics, energetics, and formation, as well as designing a new Magsat or a tethered satellite to be flown on the Shuttle.
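    The scalar-potential representation described above is conventionally written as a truncated spherical-harmonic (Gauss coefficient) expansion; this is the standard form, with a the reference radius, g_n^m and h_n^m the coefficients, and P_n^m the associated Legendre functions:

```latex
V(r,\theta,\varphi) = a \sum_{n=1}^{N} \left(\frac{a}{r}\right)^{n+1}
  \sum_{m=0}^{n} \left( g_n^m \cos m\varphi + h_n^m \sin m\varphi \right)
  P_n^m(\cos\theta), \qquad \mathbf{B} = -\nabla V .
```

Because V satisfies Laplace's equation in the source-free region, evaluating the sum at a smaller or larger r performs the downward or upward continuation mentioned in the abstract.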

  9. A new frequency domain analytical solution of a cascade of diffusive channels for flood routing

    NASA Astrophysics Data System (ADS)

    Cimorelli, Luigi; Cozzolino, Luca; Della Morte, Renata; Pianese, Domenico; Singh, Vijay P.

    2015-04-01

    Simplified flood propagation models are often employed in practical applications for hydraulic and hydrologic analyses. In this paper, we present a new numerical method for the solution of the Linear Parabolic Approximation (LPA) of the De Saint Venant equations (DSVEs), accounting for the space variation of model parameters and the imposition of appropriate downstream boundary conditions. The new model is based on the analytical solution of a cascade of linear diffusive channels in the Laplace Transform domain. The time domain solutions are obtained using a Fourier series approximation of the Laplace Inversion formula. The new Inverse Laplace Transform Diffusive Flood Routing model (ILTDFR) can be used as a building block for the construction of real-time flood forecasting models or in optimization models, because it is unconditionally stable and allows fast and fairly precise computation.
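    The "Fourier series approximation of the Laplace inversion formula" mentioned above can be sketched with the classical damped-Fourier-series (Dubner-Abate/Crump type) inversion; this is a generic textbook form, not the ILTDFR implementation itself, and is verified here on F(s) = 1/(s+1), whose inverse transform is exp(-t).

```python
# Numerical inverse Laplace transform by a damped Fourier series:
# f(t) ~ (e^{at}/T) [ F(a)/2 + sum_k Re( F(a + ik*pi/T) e^{ik*pi*t/T} ) ],
# valid for 0 < t < 2T. Parameter choices are illustrative.
import numpy as np

def ilt_fourier_series(F, t, T=10.0, N=4000, tol=1e-8):
    a = -np.log(tol) / (2.0 * T)     # damping (Bromwich abscissa)
    k = np.arange(1, N + 1)
    s = a + 1j * k * np.pi / T       # sample points on the contour
    series = 0.5 * F(a) + np.sum((F(s) * np.exp(1j * k * np.pi * t / T)).real)
    return np.exp(a * t) / T * series

f1 = ilt_fourier_series(lambda s: 1.0 / (s + 1.0), t=1.0)
print(abs(f1 - np.exp(-1.0)))  # inversion error, should be small
```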

  10. Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations

    NASA Astrophysics Data System (ADS)

    Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.

    2016-07-01

    Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.
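    The three-dimensional domain decomposition idea can be sketched as below: each atom is assigned to a subdomain (and hence a process) by its cell coordinates. This is a serial toy illustration of the partitioning step only, not the BOPfox implementation.

```python
# Assign atoms in a periodic box to a 2x2x2 grid of subdomains and count
# atoms per "rank"; a balanced decomposition gives similar counts.
import numpy as np

box = np.array([10.0, 10.0, 10.0])   # periodic box lengths
grid = np.array([2, 2, 2])           # decomposition -> 8 subdomains

rng = np.random.default_rng(42)
atoms = rng.uniform(0.0, 1.0, size=(1000, 3)) * box

cells = np.floor(atoms / (box / grid)).astype(int) % grid
ranks = np.ravel_multi_index(cells.T, grid)   # owning subdomain per atom

counts = np.bincount(ranks, minlength=grid.prod())
print(counts)  # atoms per subdomain
```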

  11. Analytic representation of FK/Fπ in two loop chiral perturbation theory

    NASA Astrophysics Data System (ADS)

    Ananthanarayan, B.; Bijnens, Johan; Friot, Samuel; Ghosh, Shayan

    2018-05-01

    We present an analytic representation of FK/Fπ as calculated in three-flavor two-loop chiral perturbation theory, which involves expressing three mass scale sunsets in terms of Kampé de Fériet series. We demonstrate how approximations may be made to obtain relatively compact analytic representations. An illustrative set of fits using lattice data is also presented, which shows good agreement with existing fits.

  12. Resonance-state properties from a phase shift analysis with the S -matrix pole method and the effective-range method

    NASA Astrophysics Data System (ADS)

    Irgaziev, B. F.; Orlov, Yu. V.

    2015-02-01

Asymptotic normalization coefficients (ANCs) are fundamental nuclear constants playing an important role in nuclear physics and astrophysics. We derive a new useful relationship between ANCs of the Gamow radial wave function and the renormalized (due to the Coulomb interaction) Coulomb-nuclear partial scattering amplitude. We use an analytical approximation in the form of a series for the nonresonant part of the phase shift which can be analytically continued to the point of an isolated resonance pole in the complex plane of the momentum. Earlier, this method, which we call the S-matrix pole method, was used by us to find the resonance pole energy. We find the corresponding fitting parameters for concrete resonance states of 5He, 5Li, and 16O. Additionally, based on the theory of the effective range, we calculate the parameters of the p3/2 and p1/2 resonance states of the nuclei 5He and 5Li and compare them with the results obtained by the S-matrix pole method. ANC values are found which can be used to calculate the reaction rate through the 16O resonances which lie slightly above the threshold for the α+12C channel.

  13. Cultivating Institutional Capacities for Learning Analytics

    ERIC Educational Resources Information Center

    Lonn, Steven; McKay, Timothy A.; Teasley, Stephanie D.

    2017-01-01

    This chapter details the process the University of Michigan developed to build institutional capacity for learning analytics. A symposium series, faculty task force, fellows program, research grants, and other initiatives are discussed, with lessons learned for future efforts and how other institutions might adapt such efforts to spur cultural…

  14. Single Particle-Inductively Coupled Plasma Mass Spectroscopy Analysis of Metallic Nanoparticles in Environmental Samples with Large Dissolved Analyte Fractions.

    PubMed

    Schwertfeger, D M; Velicogna, Jessica R; Jesmer, Alexander H; Scroggins, Richard P; Princz, Juliska I

    2016-10-18

There is an increasing interest in using single particle-inductively coupled plasma mass spectroscopy (SP-ICPMS) to help quantify exposure to engineered nanoparticles, and their transformation products, released into the environment. Hindering the use of this analytical technique for environmental samples is the presence of high levels of dissolved analyte, which impedes resolution of the particle signal from the dissolved signal. While sample dilution is often necessary to achieve the low analyte concentrations required for SP-ICPMS analysis, and to reduce matrix effects on the analyte signal, it is used here also to reduce the dissolved signal relative to the particulate while maintaining a matrix chemistry that promotes particle stability. We propose a simple, systematic dilution series approach whereby the first dilution is used to quantify the dissolved analyte, the second is used to optimize the particle signal, and the third is used as an analytical quality control. Using simple suspensions of well characterized Au and Ag nanoparticles spiked with the dissolved analyte form, as well as suspensions of complex environmental media (i.e., extracts from soils previously contaminated with engineered silver nanoparticles), we show how this dilution series technique improves resolution of the particle signal, which in turn improves the accuracy of particle counts, quantification of particulate mass, and determination of particle size. The technique proposed here is meant to offer a systematic and reproducible approach to the SP-ICPMS analysis of environmental samples and to improve the quality and consistency of data generated from this relatively new analytical tool.
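    The underlying signal-splitting problem, separating particle events from the dissolved background in a stream of detector readings, is commonly handled with an iterative mean-plus-kσ threshold (k = 3 or 5 are both used in the SP-ICPMS literature). A sketch on simulated data, not the authors' procedure:

```python
# Iteratively strip high readings until the mean + 5*sigma threshold
# stabilizes; readings above it are counted as particle events, the rest
# as dissolved background. The signal trace is simulated.
import numpy as np

rng = np.random.default_rng(7)
signal = rng.poisson(5.0, size=5000).astype(float)   # dissolved background
spikes = rng.choice(5000, size=50, replace=False)
signal[spikes] += rng.uniform(80, 150, size=50)      # 50 particle events

data = signal.copy()
for _ in range(10):  # iterate until outliers no longer move the threshold
    thr = data.mean() + 5 * data.std()
    data = data[data <= thr]

particles = signal[signal > thr]
print(f"{particles.size} particle events, background ~{data.mean():.1f} counts")
```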

  15. Fast batch injection analysis of H(2)O(2) using an array of Pt-modified gold microelectrodes obtained from split electronic chips.

    PubMed

    Pacheco, Bruno D; Valério, Jaqueline; Angnes, Lúcio; Pedrotti, Jairo J

    2011-06-24

    A fast and robust analytical method for amperometric determination of hydrogen peroxide (H(2)O(2)) based on batch injection analysis (BIA) on an array of gold microelectrodes modified with platinum is proposed. The gold microelectrode array (n=14) was obtained from electronic chips developed for surface mounted device technology (SMD), whose size offers advantages to adapt them in batch cells. The effect of the dispensing rate, volume injected, distance between the platinum microelectrodes and the pipette tip, as well as the volume of solution in the cell on the analytical response were evaluated. The method allows the H(2)O(2) amperometric determination in the concentration range from 0.8 μmolL(-1) to 100 μmolL(-1). The analytical frequency can attain 300 determinations per hour and the detection limit was estimated in 0.34 μmolL(-1) (3σ). The anodic current peaks obtained after a series of 23 successive injections of 50 μL of 25 μmolL(-1) H(2)O(2) showed an RSD<0.9%. To ensure the good selectivity to detect H(2)O(2), its determination was performed in a differential mode, with selective destruction of the H(2)O(2) with catalase in 10 mmolL(-1) phosphate buffer solution. Practical application of the analytical procedure involved H(2)O(2) determination in rainwater of São Paulo City. A comparison of the results obtained by the proposed amperometric method with another one which combines flow injection analysis (FIA) with spectrophotometric detection showed good agreement. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. A method for generating high resolution satellite image time series

    NASA Astrophysics Data System (ADS)

    Guo, Tao

    2014-10-01

There is an increasing demand for satellite remote sensing data with both high spatial and high temporal resolution in many applications, but it remains a challenge to improve spatial resolution and temporal frequency simultaneously because of the technical limits of current satellite observation systems. Much R&D effort has gone into this problem over the years, leading to successes in roughly two directions. One includes super-resolution, pan-sharpening, and similar methods, which can effectively enhance spatial resolution and generate good visual effects but hardly preserve spectral signatures and therefore have limited analytical value; on the other hand, time interpolation is a straightforward way to increase temporal frequency, but it adds little informative content. In this paper we present a novel method to simulate high resolution time series data by combining low resolution time series data with only a very small number of high resolution images. Our method starts with a pair of high and low resolution data sets, and a spatial registration is performed by introducing an LDA model to map high and low resolution pixels to each other. Temporal change information is then captured through a comparison of the low resolution time series data, projected onto the high resolution data plane, and assigned to each high resolution pixel according to the predefined temporal change patterns of each type of ground object. Finally the simulated high resolution data are generated. A preliminary experiment shows that our method can simulate high resolution data with reasonable accuracy.
The contribution of our method is to enable timely monitoring of temporal changes through analysis of a time sequence of low resolution images only, so that the use of costly high resolution data can be reduced as much as possible; it presents a highly effective way to build an economically operational monitoring solution for applications such as agriculture, forestry, land use investigation, and environmental monitoring.
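    The projection step can be illustrated in miniature: temporal change observed in a low-resolution pixel is applied to the high-resolution pixels it covers. This ratio-based toy update is far simpler than the paper's LDA mapping and per-class change patterns.

```python
# Toy projection of low-resolution temporal change onto high-resolution
# pixels: each covered pixel inherits the relative change of its parent.
import numpy as np

hi_t0 = np.array([[10.0, 12.0], [20.0, 22.0]])  # 2x2 high-res block, time 0
lo_t0 = hi_t0.mean()                            # the covering low-res pixel
lo_t1 = lo_t0 * 1.25                            # low-res value at time 1

change = lo_t1 / lo_t0            # relative change seen at low resolution
hi_t1 = hi_t0 * change            # projected onto each high-res pixel
print(hi_t1)  # [[12.5 15. ] [25.  27.5]]
```

By construction the simulated high-resolution block aggregates back to the observed low-resolution value at time 1.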

  17. Analysis of Androgenic Steroids in Environmental Waters by Large-volume Injection Liquid Chromatography Tandem Mass Spectrometry

    PubMed Central

    Backe, Will J.; Ort, Christoph; Brewer, Alex J.; Field, Jennifer A.

    2014-01-01

    A new method was developed for the analysis of natural and synthetic androgenic steroids and their selected metabolites in aquatic environmental matrices using direct large-volume injection (LVI) high performance liquid chromatography (HPLC) tandem mass spectrometry (MS/MS). Method accuracy ranged from 88 to 108% for analytes with well-matched internal standards. Precision, quantified by relative standard deviation (RSD), was less than 12%. Detection limits for the method ranged from 1.2 to 360 ng/L. The method was demonstrated on a series of 1-hr composite wastewater influent samples collected over a day with the purpose of assessing temporal profiles of androgen loads in wastewater. Testosterone, androstenedione, boldenone, and nandrolone were detected in the sample series at concentrations up to 290 ng/L and loads up to 535 mg. Boldenone, a synthetic androgen, had a temporal profile that was strongly correlated to testosterone, a natural human androgen, suggesting its source may be endogenous. An analysis of the sample particulate fraction revealed detectable amounts of sorbed testosterone and androstenedione. Androstenedione sorbed to the particulate fraction accounted for an estimated five to seven percent of the total androstenedione mass. PMID:21391574

  18. Analysis of androgenic steroids in environmental waters by large-volume injection liquid chromatography tandem mass spectrometry.

    PubMed

    Backe, Will J; Ort, Christoph; Brewer, Alex J; Field, Jennifer A

    2011-04-01

    A new method was developed for the analysis of natural and synthetic androgenic steroids and their selected metabolites in aquatic environmental matrixes using direct large-volume injection (LVI) high-performance liquid chromatography (HPLC) tandem mass spectrometry (MS/MS). Method accuracy ranged from 87.6 to 108% for analytes with well-matched internal standards. Precision, quantified by relative standard deviation (RSD), was less than 12%. Detection limits for the method ranged from 1.2 to 360 ng/L. The method was demonstrated on a series of 1 h composite wastewater influent samples collected over a day with the purpose of assessing temporal profiles of androgen loads in wastewater. Testosterone, androstenedione, boldenone, and nandrolone were detected in the sample series at concentrations up to 290 ng/L and loads up to 535 mg/h. Boldenone, a synthetic androgen, had a temporal profile that was strongly correlated to testosterone, a natural human androgen, suggesting its source may be endogenous. An analysis of the sample particulate fraction revealed detectable amounts of sorbed testosterone and androstenedione. Androstenedione sorbed to the particulate fraction accounted for an estimated 5 to 7% of the total androstenedione mass.
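
    The accuracy and precision figures of merit quoted in these records (mean recovery and relative standard deviation) follow from replicate measurements; a minimal sketch with hypothetical replicate data (not the paper's):

```python
# Hypothetical replicate recoveries (%) for one analyte; not the paper's data.
replicates = [88.0, 92.5, 90.1, 89.4, 91.0]

def accuracy_and_rsd(values):
    """Return mean recovery (%) and relative standard deviation (%)."""
    n = len(values)
    mean = sum(values) / n
    # Sample standard deviation (n - 1 in the denominator).
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    rsd = 100.0 * var ** 0.5 / mean
    return mean, rsd

mean, rsd = accuracy_and_rsd(replicates)
```

    The same two numbers are what "accuracy ranged from 88 to 108%" and "RSD was less than 12%" summarize across analytes.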

  19. The optimal modified variational iteration method for the Lane-Emden equations with Neumann and Robin boundary conditions

    NASA Astrophysics Data System (ADS)

    Singh, Randhir; Das, Nilima; Kumar, Jitendra

    2017-06-01

    An effective analytical technique is proposed for the solution of the Lane-Emden equations. The proposed technique is based on the variational iteration method (VIM) and the convergence-control parameter h. In order to avoid solving a sequence of nonlinear algebraic equations or complicated integrals to determine the unknown constant, the boundary conditions are used before designing the recursive scheme for the solution. Series solutions are found that converge rapidly to the exact solution. Convergence analysis and error bounds are discussed. The accuracy and applicability of the method are examined by solving three singular problems: (i) the nonlinear Poisson-Boltzmann equation, (ii) the distribution of heat sources in the human head, and (iii) a second-kind Lane-Emden equation.
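
    The classical Lane-Emden equation, y'' + (2/x)y' + y^n = 0 with y(0) = 1, y'(0) = 0, has the closed form sin(x)/x for n = 1, which makes a convenient correctness check for any solver. A minimal numerical sketch (a plain RK4 integrator started from the series expansion near the singular point; this is a generic check, not the authors' VIM scheme):

```python
import math

def lane_emden_rk4(n, x_end=1.0, h=1e-3):
    """Integrate y'' + (2/x) y' + y^n = 0, y(0)=1, y'(0)=0, with RK4.
    Starts slightly off the singular point x = 0 using the series y ~ 1 - x^2/6."""
    x = 1e-6
    y = 1.0 - x * x / 6.0   # series expansion near the centre
    z = -x / 3.0            # y'(x) from the same expansion
    def f(x, y, z):
        return z, -(2.0 / x) * z - max(y, 0.0) ** n
    while x < x_end:
        k1y, k1z = f(x, y, z)
        k2y, k2z = f(x + h / 2, y + h / 2 * k1y, z + h / 2 * k1z)
        k3y, k3z = f(x + h / 2, y + h / 2 * k2y, z + h / 2 * k2z)
        k4y, k4z = f(x + h, y + h * k3y, z + h * k3z)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        z += h / 6 * (k1z + 2 * k2z + 2 * k3z + k4z)
        x += h
    return y

# For n = 1 the exact solution is sin(x)/x, which checks the integrator.
approx = lane_emden_rk4(1, x_end=1.0)
exact = math.sin(1.0) / 1.0
```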

  20. Combined micro-droplet and thin-film-assisted pre-concentration of lead traces for on-line monitoring using anodic stripping voltammetry.

    PubMed

    Belostotsky, Inessa; Gridin, Vladimir V; Schechter, Israel; Yarnitzky, Chaim N

    2003-02-01

    An improved analytical method for airborne lead traces is reported. It is based on using a Venturi scrubber sampling device for simultaneous thin-film stripping and droplet entrapment of aerosol influxes. At least threefold enhancement of the lead-trace pre-concentration is achieved. The sampled traces are analyzed by square-wave anodic stripping voltammetry. The method was tested by a series of pilot experiments. These were performed using contaminant-controlled air intakes. Reproducible calibration plots were obtained. The data were validated by traditional analysis using filter sampling. LODs are comparable with the conventional techniques. The method was successfully applied to on-line and in situ environmental monitoring of lead.
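
    The calibration plots and detection limits mentioned above rest on standard chemometrics: a least-squares calibration line and the common 3-sigma LOD convention. A sketch with toy numbers (hypothetical, not from the study):

```python
# Toy calibration data (hypothetical): added lead concentration (ug/L)
# versus stripping peak current (arbitrary units).
conc = [0.0, 5.0, 10.0, 20.0, 40.0]
signal = [0.02, 1.05, 2.01, 3.98, 8.03]

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = linear_fit(conc, signal)
# Detection limit from the common 3-sigma convention: LOD = 3 * s_blank / slope,
# with a hypothetical blank standard deviation.
s_blank = 0.03
lod = 3.0 * s_blank / slope
```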

  1. An accurate boundary element method for the exterior elastic scattering problem in two dimensions

    NASA Astrophysics Data System (ADS)

    Bao, Gang; Xu, Liwei; Yin, Tao

    2017-11-01

    This paper is concerned with a Galerkin boundary element method for solving the two-dimensional exterior elastic wave scattering problem. The original problem is first reduced to the so-called Burton-Miller [1] boundary integral formulation, and essential mathematical features of its variational form are discussed. In the numerical implementation, a newly derived and analytically accurate regularization formula [2] is employed for the numerical evaluation of the hyper-singular boundary integral operator. A new computational approach based on series expansions of Hankel functions is employed for the computation of the weakly singular boundary integral operators during the reduction of the corresponding Galerkin equations to a discrete linear system. The effectiveness of the proposed numerical methods is demonstrated using several numerical examples.

  2. Exact exchange-correlation potentials of singlet two-electron systems

    NASA Astrophysics Data System (ADS)

    Ryabinkin, Ilya G.; Ospadov, Egor; Staroverov, Viktor N.

    2017-10-01

    We suggest a non-iterative analytic method for constructing the exchange-correlation potential, v_XC(r), of any singlet ground-state two-electron system. The method is based on a convenient formula for v_XC(r) in terms of quantities determined only by the system's electronic wave function, exact or approximate, and is essentially different from the Kohn-Sham inversion technique. When applied to Gaussian-basis-set wave functions, the method yields finite-basis-set approximations to the corresponding basis-set-limit v_XC(r), whereas the Kohn-Sham inversion produces physically inappropriate (oscillatory and divergent) potentials. The effectiveness of the procedure is demonstrated by computing accurate exchange-correlation potentials of several two-electron systems (helium isoelectronic series, H2, H3+) using common ab initio methods and Gaussian basis sets.

  3. Quantitative structure-retention relationships applied to development of liquid chromatography gradient-elution method for the separation of sartans.

    PubMed

    Golubović, Jelena; Protić, Ana; Otašević, Biljana; Zečević, Mira

    2016-04-01

    QSRRs are mathematically derived relationships between the chromatographic parameters determined for a representative series of analytes in given separation systems and the molecular descriptors accounting for the structural differences among the investigated analytes. An artificial neural network (ANN) is a data-analysis technique that sets out to emulate the human brain's way of working. The aim of the present work was to optimize the separation of six angiotensin receptor antagonists, so-called sartans: losartan, valsartan, irbesartan, telmisartan, candesartan cilexetil and eprosartan, in a gradient-elution HPLC method. For this purpose, an ANN was used as a mathematical tool for establishing a QSRR model based on the molecular descriptors of the sartans and varied instrumental conditions. The optimized model can be further used for prediction of an external congener of the sartans and for analysis of the influence of the analyte structure, represented through molecular descriptors, on retention behaviour. The molecular descriptors included in the modelling were electrostatic, geometrical and quantum-chemical descriptors: Connolly solvent-excluded volume, non-1,4 van der Waals energy, octanol/water distribution coefficient, polarizability, number of proton-donor sites and number of proton-acceptor sites. The varied instrumental conditions were gradient time, buffer pH and buffer molarity. The high prediction ability of the optimized network enabled complete separation of the analytes within a run time of 15.5 min under the following conditions: gradient time of 12.5 min, buffer pH of 3.95 and buffer molarity of 25 mM. The applied methodology showed the potential to predict the retention behaviour of an external analyte with properties within the training space. Connolly solvent-excluded volume, polarizability and number of proton-acceptor sites appeared to be the most influential parameters for the retention behaviour of the sartans. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. The lunar libration: comparisons between various models - a model fitted to LLR observations

    NASA Astrophysics Data System (ADS)

    Chapront, J.; Francou, G.

    2005-09-01

    We consider 4 libration models: 3 numerical models built by JPL (ephemerides for the libration in DE245, DE403 and DE405) and an analytical model improved with numerical complements fitted to recent LLR observations. The analytical solution uses 3 angular variables (ρ1, ρ2, τ) which represent the deviations with respect to Cassini's laws. After referring the models to a unique reference frame, we study the differences between the models, which depend on the gravitational and tidal parameters of the Moon, as well as on the amplitudes and frequencies of the free librations. It appears that the differences vary widely depending on the above quantities. They correspond to displacements of a few meters on the lunar surface, whereas LLR distances are precise to the centimeter level. Taking advantage of the lunar libration theory built by Moons (1984) and improved by Chapront et al. (1999), we are able to establish 4 solutions and to represent their differences by Fourier series after a numerical substitution of the gravitational constants and free libration parameters. The results are confirmed by frequency analyses performed separately. Using DE245 as a basic reference ephemeris, we approximate the differences between the analytical and numerical models with Poisson series. The analytical solution, improved with numerical complements in the form of Poisson series, is valid over several centuries with an internal precision better than 5 centimeters.
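
    Representing model differences by Fourier series, as done here, amounts to extracting periodic terms from a sampled residual signal. A minimal sketch using a naive DFT on a toy series (generic signal processing, not the authors' Poisson-series machinery):

```python
import math, cmath

def dft_amplitudes(samples):
    """Naive discrete Fourier transform returning one-sided amplitudes."""
    n = len(samples)
    amps = []
    for k in range(n // 2 + 1):
        s = sum(samples[j] * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n))
        scale = 1.0 if k in (0, n // 2) else 2.0  # one-sided scaling
        amps.append(scale * abs(s) / n)
    return amps

# A toy "difference between two models": a pure 3-cycle term of amplitude 0.05.
n = 64
series = [0.05 * math.cos(2 * math.pi * 3 * j / n) for j in range(n)]
amps = dft_amplitudes(series)
```

    The amplitude spectrum then shows a single peak at the 3-cycle frequency, mirroring how a dominant libration term would appear in a residual series.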

  5. A singularity free analytical solution of artificial satellite motion with drag

    NASA Technical Reports Server (NTRS)

    Mueller, A.

    1978-01-01

    An analytical satellite theory based on the regular, canonical Poincare-Similar (PS phi) elements is described, along with an accurate density model which can be implemented in the drag theory. A computationally efficient manner in which to expand the equations of motion into a Fourier series is discussed.

  6. 78 FR 63522 - Syntax Analytics, LLC and Syntax ETF Trust; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-24

    ... Analytics, LLC and Syntax ETF Trust; Notice of Application October 18, 2013. AGENCY: Securities and Exchange... Trust (``Trust''). Summary of Application: Applicants request an order that permits: (a) Actively... unit investment trusts outside of the same group of investment companies as the series to acquire...

  7. Analytical advantages of copolymeric microspheres for fluorimetric sensing - tuneable sensitivity sensors and titration agents.

    PubMed

    Stelmach, Emilia; Maksymiuk, Krzysztof; Michalska, Agata

    2017-01-15

    The analytical benefits of copolymeric microspheres containing different numbers of carboxylic acid mers have been studied using acrylate copolymers as an example. These structures can be used as a reagent in heterogeneous pH titration, benefiting from the different numbers of reactive groups, i.e. different concentrations of titrant, within the series of copolymers. Thus, by introducing the same amount of different microspheres from the series into the sample, different amounts of titrant are introduced. Copolymeric microspheres can also be used as optical sensors; in this respect the increasing number of reactive groups in the series is useful for improving the analytical performance of microprobes: the sensitivity of determination and/or the response range. The increase in ion permeability of the spheres with an increasing number of reactive mers is advantageous. It is shown that for pH-sensitive microspheres containing a higher number of carboxyl groups, higher sensitivity for alkaline pH samples is observed for an indicator present in the beads. The significant increase in optical responses is related to enhanced ion transport within the microspheres. For the zinc and potassium ion model sensors tested, it was shown that by the choice of pH conditions and of the type of microspheres from the series, the optical responses can be tuned to enhance sensitivity to analyte concentration changes as well as to change the response pattern from sigmoidal (higher sensitivity, narrow range) to linear (broader response range). For classical optode systems (e.g. microspheres containing an optical transducer, a pH-sensitive dye, and an optically silent ionophore as receptor), copolymeric microspheres containing carboxylic acid mers in their structure allow application of the sensor in the alkaline pH range, which is usually inaccessible for the applied optical transducer. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. High-latitude analytical formulas for scintillation levels

    NASA Astrophysics Data System (ADS)

    Aarons, J.; MacKenzie, E.; Bhavnani, K.

    The paper deals with the seasonal, solar flux, and magnetic dependence at auroral and subauroral latitudes as well as at a mid-latitude station. Analytical formulas are developed from a large data base. The data base used is a series of measurements of the scintillations of one synchronous satellite beacon, ATS 3, transmitting at 137 MHz. The analytical terms provide mean scintillation excursions as a function of time of day, month, solar flux, and magnetic index.

  9. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    PubMed

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit parallel framework named the "compute unified device architecture." A series of simulation experiments is carried out to test the accuracy and accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced, by a factor of 38.9 with a GTX 580 graphics card.

  10. A robust interrupted time series model for analyzing complex health care intervention data.

    PubMed

    Cruz, Maricela; Bender, Miriam; Ombao, Hernando

    2017-12-20

    Current health policy calls for greater use of evidence-based care delivery services to improve patient quality and safety outcomes. Care delivery is complex, with interacting and interdependent components that challenge traditional statistical analytic techniques, in particular, when modeling a time series of outcomes data that might be "interrupted" by a change in a particular method of health care delivery. Interrupted time series (ITS) is a robust quasi-experimental design with the ability to infer the effectiveness of an intervention that accounts for data dependency. Current standardized methods for analyzing ITS data do not model changes in variation and correlation following the intervention. This is a key limitation since it is plausible for data variability and dependency to change because of the intervention. Moreover, present methodology either assumes a prespecified interruption time point with an instantaneous effect or removes data for which the effect of intervention is not fully realized. In this paper, we describe and develop a novel robust interrupted time series (robust-ITS) model that overcomes these omissions and limitations. The robust-ITS model formally performs inference on (1) identifying the change point; (2) differences in preintervention and postintervention correlation; (3) differences in the outcome variance preintervention and postintervention; and (4) differences in the mean preintervention and postintervention. We illustrate the proposed method by analyzing patient satisfaction data from a hospital that implemented and evaluated a new nursing care delivery model as the intervention of interest. The robust-ITS model is implemented in an R Shiny toolbox, which is freely available to the community. Copyright © 2017 John Wiley & Sons, Ltd.
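
    The change-point identification at the core of robust-ITS can be illustrated in a much-simplified form: scan candidate interruption times and pick the one minimizing the pooled squared error of a piecewise-constant mean. A toy sketch (the actual robust-ITS model also performs inference on changes in variance and correlation, which this omits):

```python
def estimate_change_point(y):
    """Scan candidate change points, minimizing the pooled sum of squared
    errors of a piecewise-constant mean (a simplification of robust-ITS,
    which also models changes in variance and autocorrelation)."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    best_t, best_cost = None, float("inf")
    for t in range(2, len(y) - 1):      # need >= 2 points in each segment
        cost = sse(y[:t]) + sse(y[t:])
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Toy outcome series: the mean shifts from ~10 to ~14 at index 12,
# mimicking an intervention taking effect.
series = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 9.7, 10.0, 10.1, 9.9,
          10.2, 9.8, 14.1, 13.9, 14.2, 13.8, 14.0, 14.3, 13.7, 14.0]
cp = estimate_change_point(series)
```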

  11. A Systematic Review of Methodology: Time Series Regression Analysis for Environmental Factors and Infectious Diseases

    PubMed Central

    Imai, Chisato; Hashizume, Masahiro

    2015-01-01

    Background: Time series analysis is suitable for investigations of relatively direct and short-term effects of exposures on outcomes. In environmental epidemiology studies, this method has been one of the standard approaches for assessing the impacts of environmental factors on acute non-infectious diseases (e.g. cardiovascular deaths), conventionally with generalized linear or additive models (GLMs and GAMs). However, the same analysis practices are often observed with infectious diseases despite their substantial differences from non-infectious diseases, which may result in analytical challenges. Methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, a systematic review was conducted to elucidate important issues in assessing the associations between environmental factors and infectious diseases using time series analysis with GLMs and GAMs. Published studies on the associations between weather factors and malaria, cholera, dengue, and influenza were targeted. Findings: Our review raised issues regarding the estimation of susceptible populations and exposure lag times, the adequacy of seasonal adjustments, the presence of strong autocorrelations, and the lack of a smaller observation time unit for outcomes (i.e. daily data). These concerns may be attributable to features specific to infectious diseases, such as transmission among individuals and complicated causal mechanisms. Conclusion: The consequence of not taking adequate measures to address these issues is distortion of the appropriate risk quantification of exposure factors. Future studies should pay careful attention to details and examine alternative models or methods that improve studies using time series regression analysis for environmental determinants of infectious diseases. PMID:25859149

  12. Asymptotic co- and post-seismic displacements in a homogeneous Maxwell sphere

    NASA Astrophysics Data System (ADS)

    Tang, He; Sun, Wenke

    2018-07-01

    The deformations of the Earth caused by internal and external forces are usually expressed through Green's functions or the superposition of normal modes, that is, via numerical methods, which are applicable for computing both co- and post-seismic deformations. It is difficult to express these deformations in an analytical form, even for a uniform viscoelastic sphere. In this study, we present a set of asymptotic solutions for computing co- and post-seismic displacements; these solutions can be further applied to solving co- and post-seismic geoid, gravity and strain changes. Expressions are derived for a uniform Maxwell Earth by combining the reciprocity theorem, which links earthquake, tidal, shear and loading deformations, with the asymptotic solutions of these three external forces (tidal, shear and loading) and analytical inverse Laplace transformation formulae. Since the asymptotic solutions are given in a purely analytical form without series summations or extra convergence skills, they can be practically applied in an efficient way, especially when computing post-seismic deformations and glacial isostatic adjustment of the Earth over long timescales.

  13. Asymptotic Co- and Post-seismic displacements in a homogeneous Maxwell sphere

    NASA Astrophysics Data System (ADS)

    Tang, He; Sun, Wenke

    2018-05-01

    The deformations of the Earth caused by internal and external forces are usually expressed through Green's functions or the superposition of normal modes, i.e. via numerical methods, which are applicable for computing both co- and post-seismic deformations. It is difficult to express these deformations in an analytical form, even for a uniform viscoelastic sphere. In this study, we present a set of asymptotic solutions for computing co- and post-seismic displacements; these solutions can be further applied to solving co- and post-seismic geoid, gravity, and strain changes. Expressions are derived for a uniform Maxwell Earth by combining the reciprocity theorem, which links earthquake, tidal, shear and loading deformations, with the asymptotic solutions of these three external forces (tidal, shear and loading) and analytical inverse Laplace transformation formulae. Since the asymptotic solutions are given in a purely analytical form without series summations or extra convergence skills, they can be practically applied in an efficient way, especially when computing post-seismic deformations and glacial isostatic adjustment of the Earth over long timescales.

  14. High-order moments of spin-orbit energy in a multielectron configuration

    NASA Astrophysics Data System (ADS)

    Na, Xieyu; Poirier, M.

    2016-07-01

    In order to analyze the energy-level distribution in complex ions such as those found in warm dense plasmas, this paper provides values for high-order moments of the spin-orbit energy in a multielectron configuration. Using second-quantization results and standard angular algebra or fully analytical expressions, explicit values are given for moments up to 10th order for the spin-orbit energy. Two analytical methods are proposed, using the uncoupled or coupled orbital and spin angular momenta. The case of multiple open subshells is considered with the help of cumulants. The proposed expressions for spin-orbit energy moments are compared to numerical computations from Cowan's code and agree with them. The convergence of the Gram-Charlier expansion involving these spin-orbit moments is analyzed. While a spectrum with infinitely thin components cannot be adequately represented by such an expansion, a suitable convolution procedure ensures the convergence of the Gram-Charlier series provided high-order terms are accounted for. A corrected analytical formula for the third-order moment involving both spin-orbit and electron-electron interactions turns out to be in fair agreement with Cowan's numerical computations.

  15. A model of freezing foods with liquid nitrogen using special functions

    NASA Astrophysics Data System (ADS)

    Rodríguez Vega, Martín.

    2014-05-01

    A food freezing model is analyzed analytically. The model is based on the heat diffusion equation in the case of cylindrical shaped food frozen by liquid nitrogen; and assuming that the thermal conductivity of the cylindrical food is radially modulated. The model is solved using the Laplace transform method, the Bromwich theorem, and the residue theorem. The temperature profile in the cylindrical food is presented as an infinite series of special functions. All the required computations are performed with computer algebra software, specifically Maple. Using the numeric values of the thermal and geometric parameters for the cylindrical food, as well as the thermal parameters of the liquid nitrogen freezing system, the temporal evolution of the temperature in different regions in the interior of the cylindrical food is presented both analytically and graphically. The duration of the liquid nitrogen freezing process to achieve the specified effect on the cylindrical food is computed. The analytical results are expected to be of importance in food engineering and cooking engineering. As a future research line, the formulation and solution of freezing models with thermal memory is proposed.

  16. Percolation and epidemics in a two-dimensional small world

    NASA Astrophysics Data System (ADS)

    Newman, M. E.; Jensen, I.; Ziff, R. M.

    2002-02-01

    Percolation on two-dimensional small-world networks has been proposed as a model for the spread of plant diseases. In this paper we give an analytic solution of this model using a combination of generating function methods and high-order series expansion. Our solution gives accurate predictions for quantities such as the position of the percolation threshold and the typical size of disease outbreaks as a function of the density of ``shortcuts'' in the small-world network. Our results agree with scaling hypotheses and numerical simulations for the same model.
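
    The quantities solved for analytically here (percolation threshold, outbreak size) can also be estimated by direct Monte Carlo simulation. A rough union-find sketch of site percolation on a periodic lattice with random shortcut edges (a crude stand-in for the paper's small-world model, not its generating-function solution):

```python
import random

def largest_cluster_fraction(L, p_occupy, n_shortcuts, seed=0):
    """Site percolation on an L x L periodic lattice with random shortcut
    edges; returns the largest-cluster fraction among occupied sites."""
    rng = random.Random(seed)
    n = L * L
    occupied = [rng.random() < p_occupy for _ in range(n)]
    parent = list(range(n))
    def find(i):                         # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    def union(i, j):                     # join only occupied endpoints
        if occupied[i] and occupied[j]:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
    for x in range(L):
        for y in range(L):
            i = x * L + y
            union(i, ((x + 1) % L) * L + y)   # bond to right neighbour
            union(i, x * L + (y + 1) % L)     # bond to bottom neighbour
    for _ in range(n_shortcuts):              # random long-range shortcuts
        union(rng.randrange(n), rng.randrange(n))
    sizes = {}
    for i in range(n):
        if occupied[i]:
            r = find(i)
            sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values(), default=0) / max(1, sum(occupied))

frac_dense = largest_cluster_fraction(30, 0.9, 50)   # well above threshold
frac_sparse = largest_cluster_fraction(30, 0.3, 0)   # well below threshold
```

    Above the threshold nearly all occupied sites join one giant cluster (the "epidemic"); below it, clusters stay small.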

  17. Hot-spot investigations of utility scale panel configurations

    NASA Technical Reports Server (NTRS)

    Arnett, J. C.; Dally, R. B.; Rumburg, J. P.

    1984-01-01

    The causes of array faults and efforts to mitigate their effects are examined. Research is concentrated on the panel for the 900 kW second phase of the Sacramento Municipal Utility District (SMUD) project. The panel is designed for hot-spot tolerance without compromising efficiency under normal operating conditions. Series/paralleling internal to each module improves tolerance in the power quadrant to cell short or open circuits. Analytical methods are developed for predicting worst-case shade patterns and calculating the resultant cell temperature. Experiments conducted on a prototype panel support the analytical calculations.

  18. Development of an achiral supercritical fluid chromatography method with ultraviolet absorbance and mass spectrometric detection for impurity profiling of drug candidates. Part II. Selection of an orthogonal set of stationary phases.

    PubMed

    Lemasson, Elise; Bertin, Sophie; Hennig, Philippe; Boiteux, Hélène; Lesellier, Eric; West, Caroline

    2015-08-21

    Impurity profiling of organic products that are synthesized as possible drug candidates requires complementary analytical methods to ensure that all impurities are identified. Supercritical fluid chromatography (SFC) is a very useful tool for achieving this objective, as an adequate selection of stationary phases can provide orthogonal separations so as to maximize the chances of seeing all impurities. In this series of papers, we have developed a method for achiral SFC-MS profiling of drug candidates, based on a selection of 160 analytes issued from Servier Research Laboratories. In the first part of this study, focusing on mobile phase selection, gradient elution with carbon dioxide and methanol comprising 2% water and 20 mM ammonium acetate proved to be the best in terms of chromatographic performance, while also providing good MS response [1]. The objective of this second part was the selection of an orthogonal set of ultra-high-performance stationary phases, which was carried out in two steps. Firstly, a reduced set of 20 analytes was used to screen 23 columns. The columns selected were all 1.7-2.5 μm fully porous or 2.6-2.7 μm superficially porous particles, with a variety of stationary phase chemistries. Derringer desirability functions were used to rank the columns according to retention window, column efficiency evaluated from the peak widths of selected analytes, and the proportion of analytes successfully eluted with good peak shapes. The columns providing the worst performance were thus eliminated and a shorter selection of 11 columns was obtained. Secondly, based on the 160 tested analytes, the 11 columns were ranked again. The retention data obtained on these columns were then compared to define a reduced set of the best columns providing the greatest orthogonality, to maximize the chances of seeing all impurities within a limited number of runs. Two high-performance columns were thus selected: ACQUITY UPC(2) HSS C18 SB and Nucleoshell HILIC. Copyright © 2015 Elsevier B.V. All rights reserved.
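
    The Derringer desirability ranking used above combines several responses into one score: each response is mapped onto an individual desirability in [0, 1], and the individual values are merged by their geometric mean. A sketch with hypothetical column scores (the criteria names and ranges below are illustrative, not the paper's exact settings):

```python
def desirability(value, low, high, maximize=True):
    """Linear Derringer-type desirability, mapping a response onto [0, 1]."""
    if maximize:
        d = (value - low) / (high - low)
    else:
        d = (high - value) / (high - low)
    return min(1.0, max(0.0, d))

def overall_desirability(ds):
    """Combine individual desirabilities by their geometric mean."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical scores for one column: retention window, peak width, elution.
scores = [desirability(0.8, 0.0, 1.0),                     # retention window
          desirability(0.025, 0.01, 0.05, maximize=False), # peak width (min)
          desirability(0.9, 0.0, 1.0)]                     # fraction well eluted
D = overall_desirability(scores)
```

    Because the geometric mean is zero whenever any single desirability is zero, a column that fails badly on one criterion is eliminated regardless of its other scores.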

  19. Dietary standards for school catering in France: serving moderate quantities to improve dietary quality without increasing the food-related cost of meals.

    PubMed

    Vieux, Florent; Dubois, Christophe; Allegre, Laëtitia; Mandon, Lionel; Ciantar, Laurent; Darmon, Nicole

    2013-01-01

    To assess the impact on the food-related cost of meals of fulfilling the new compulsory dietary standards for primary schools in France. A descriptive study assessed the relationship between the level of compliance of observed school meals with the standards and their food-related cost. An analytical study assessed the cost of series of meals published in professional journals that did or did not comply with the new dietary standards. The costs were based on prices actually paid for food used to prepare school meals. The outcome was the food-related cost of meals. Parametric and nonparametric tests were applied to a total of 42 and 120 series of 20 meals in the analytical and descriptive studies, respectively. The descriptive study indicated that meeting the standards was not related to cost. The analytical study showed that fulfilling the frequency guidelines increased the cost, whereas fulfilling the portion-size criteria decreased it. Series of meals fully respecting the standards (i.e., frequency and portion sizes) cost significantly less (-0.10 €/meal) than series not fulfilling them, because the standards recommend smaller portion sizes. Introducing portion-size rules in dietary standards for school catering may help increase dietary quality without increasing the food cost of meals. Copyright © 2013 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  20. Case series: toxicity from 25B-NBOMe--a cluster of N-bomb cases.

    PubMed

    Gee, Paul; Schep, Leo J; Jensen, Berit P; Moore, Grant; Barrington, Stuart

    2016-01-01

    Background: A new class of hallucinogens called NBOMes has emerged. This class includes the analogues 25I-NBOMe, 25C-NBOMe and 25B-NBOMe. Case reports and judicial seizures indicate that 25I-NBOMe and 25C-NBOMe are more prevalently abused. There have been few confirmed reports of 25B-NBOMe use or toxicity. Report: Observational case series. This report describes a series of 10 patients who suffered adverse effects from 25B-NBOMe. Hallucinations and violent agitation predominated, along with serotonergic/stimulant signs such as mydriasis, tachycardia, hypertension and hyperthermia. The majority (7/10) required sedation with benzodiazepines. Analytical method: 25B-NBOMe concentrations in plasma and urine were quantified in all patients using a validated liquid chromatography-tandem mass spectrometry (LC-MS/MS) method. Peak plasma levels were measured between 0.7 and 10.1 ng/ml. Discussion: The NBOMes are desired by users because of their hallucinogenic and stimulant effects. They are often sold as LSD or synthetic LSD. Reported cases of 25B-NBOMe toxicity are reviewed and compared to our series. Seizures and one pharmacological death have been described, but neither was observed in our series. Based on our experience with cases of mild to moderate toxicity, we suggest that management should be supportive and focused on preventing further (self-)harm. High doses of benzodiazepines may be required to control agitation. Patients who develop significant hyperthermia need to be actively managed. Conclusions: Effects from 25B-NBOMe in our series were similar to previous individual case reports. The clinical features were also similar to effects from other analogues in the class (25I-NBOMe, 25C-NBOMe). Violent agitation was frequently present along with signs of serotonergic stimulation. Hyperthermia, rhabdomyolysis and kidney injury were also observed.

  1. A two-component Matched Interface and Boundary (MIB) regularization for charge singularity in implicit solvation

    NASA Astrophysics Data System (ADS)

    Geng, Weihua; Zhao, Shan

    2017-12-01

    We present a new Matched Interface and Boundary (MIB) regularization method for treating the charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while the other components possess higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both the linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact Newton's method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement because it circumvents the need to solve a boundary value Poisson equation inside the molecular interface and to compute the related interface jump conditions numerically. Moreover, the new MIB algorithm is computationally less expensive, while maintaining the same second-order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere, for which analytical solutions are available, and on a series of proteins of various sizes.
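
    The Kirkwood sphere benchmark mentioned here generalizes the Born model; for a single central charge the solvation energy is available in closed form and is routinely used to validate PB solvers. A sketch of that simplest special case (the 332.06 conversion constant and the unit choices below are the usual biomolecular-electrostatics conventions, not values from this paper):

```python
def born_solvation_energy(q, radius, eps_solvent, eps_interior=1.0):
    """Born solvation energy (kcal/mol) for a single charge q (in units of e)
    at the centre of a sphere of the given radius (angstroms) -- the simplest
    special case of the Kirkwood sphere used to validate PB solvers.
    332.06 kcal*angstrom/(mol*e^2) is the usual electrostatic constant."""
    return -0.5 * 332.06 * q * q / radius * (1.0 / eps_interior
                                             - 1.0 / eps_solvent)

# A unit charge in a 2-angstrom sphere, water-like solvent (eps = 80):
dg = born_solvation_energy(1.0, 2.0, 80.0)
```

    The energy is negative because moving the charge from vacuum into a high-dielectric solvent is favourable; a numerical PB solver run on the same sphere should reproduce this value.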

  2. Micro transflection on a metallic stick: an innovative approach of reflection infrared spectroscopy for minimally invasive investigation of painting varnishes.

    PubMed

    Rosi, Francesca; Legan, Lea; Miliani, Costanza; Ropret, Polonca

    2017-05-01

    A new analytical approach, based on micro-transflection measurements from a diamond-coated metal sampling stick, is presented for the analysis of painting varnishes. Minimally invasive sampling is performed from the varnished surface using the stick, which is directly used as a transflection substrate for micro Fourier transform infrared (FTIR) measurements. With use of a series of varnished model paints, the micro-transflection method has been proved to be a valuable tool for the identification of surface components thanks to the selectivity of the sampling, the enhancement of the absorbance signal, and the easier spectral interpretation, since the profiles are similar to transmission-mode ones. Driven by these positive outcomes, the method was then tested as a tool supporting noninvasive reflection FTIR spectroscopy during the assessment of varnish removal by solvent cleaning on paint models. Finally, the integrated analytical approach based on the two reflection methods was successfully applied for the monitoring of the cleaning of the sixteenth-century painting Presentation in the Temple by Vittore Carpaccio. Graphical Abstract Micro-transflection FTIR on a metallic stick for the identification of varnishes during painting cleaning.

  3. Capillary electrophoretic study of dibasic acids of different structures: Relation to separation of oxidative intermediates in remediation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Z.; Cocke, D.L.

    Dicarboxylic acids are important in environmental chemistry because they are intermediates in oxidative processes involved in natural remediation and in waste management processes such as oxidative detoxification and advanced oxidation. Capillary electrophoresis (CE), a promising technique for separating and analyzing these intermediates, has been used to examine a series of dibasic acids of different structures and conformations: malonic acid, succinic acid, glutaric acid, adipic acid, pimelic acid, maleic acid, fumaric acid, phthalic acid, and trans,trans-muconic acid. The CE parameters (buffer composition, pH, applied voltage, injection mode, current, temperature, and detection wavelength) as well as structural variations (molecular structure and isomerism) that affect the separations and analytical results have been examined in this study, and the factors that affect the separation have been delineated. Among these parameters, pH has been found to be the most important, since it affects the double layer at the capillary wall, the electroosmotic flow, and the analyte mobility. The optimum pH for separating these dibasic acids, as well as the other parameters, is discussed in detail and related to the development of methods for analyzing oxidation intermediates in oxidative waste management procedures.
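
    The dominance of pH follows from speciation: a dibasic acid's effective mobility is the weighted average of its neutral, mono-, and dianionic forms. A hedged sketch of that relationship, using illustrative pKa values and limiting mobilities (not measured values from this study):

```python
# Effective electrophoretic mobility of a diprotic acid versus pH, using
# hypothetical pKa values and limiting ionic mobilities for illustration.
pKa1, pKa2 = 2.8, 5.7            # roughly malonic-acid-like (assumed)
mu1, mu2 = -25e-9, -50e-9        # m^2/(V*s) for HA- and A2- (assumed)

def mu_eff(pH):
    """Speciation-weighted mobility from the two dissociation steps.

    Weights are proportional to [H2A] : [HA-] : [A2-] = h^2 : K1*h : K1*K2,
    and the neutral form contributes zero mobility.
    """
    h = 10.0 ** (-pH)
    k1, k2 = 10.0 ** (-pKa1), 10.0 ** (-pKa2)
    denom = h * h + k1 * h + k1 * k2
    return (mu1 * k1 * h + mu2 * k1 * k2) / denom
```

    At low pH the acid is mostly neutral and barely migrates; well above pKa2 it moves with the dianion mobility, which is why small pH changes near the pKa values reorder the separation.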

  4. Failure analysis of thick composite cylinders under external pressure

    NASA Technical Reports Server (NTRS)

    Caiazzo, A.; Rosen, B. W.

    1992-01-01

    Failure of thick section composites due to local compression strength and overall structural instability is treated. Effects of material nonlinearity, imperfect fiber architecture, and structural imperfections upon anticipated failure stresses are determined. Comparisons with experimental data for a series of test cylinders are described. Predicting the failure strength of composite structures requires consideration of stability and material strength modes of failure using linear and nonlinear analysis techniques. Material strength prediction requires the accurate definition of the local multiaxial stress state in the material. An elasticity solution for the linear static analysis of thick anisotropic cylinders and rings is used herein to predict the axisymmetric stress state in the cylinders. Asymmetric nonlinear behavior due to initial cylinder out-of-roundness and the effects of end closure structure are treated using finite element methods. It is assumed that local fiber or ply waviness is an important factor in the initiation of material failure. An analytical model for the prediction of compression failure of fiber composites, which includes the effects of fiber misalignments, matrix inelasticity, and multiaxial applied stresses, is used for material strength calculations. Analytical results are compared to experimental data for a series of glass and carbon fiber reinforced epoxy cylinders subjected to external pressure. Recommendations for pretest characterization and other experimental issues are presented. Implications for material and structural design are discussed.

  5. Asymmetric collapse by dissolution or melting in a uniform flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rycroft, Chris H.; Bazant, Martin Z.

    An advection-diffusion-limited dissolution model of an object being eroded by a two-dimensional potential flow is presented. By taking advantage of the conformal invariance of the model, a numerical method is introduced that tracks the evolution of the object boundary in terms of a time-dependent Laurent series. Simulations of a variety of dissolving objects are shown, which shrink and collapse to a single point in finite time. The simulations reveal a surprising exact relationship, whereby the collapse point is the root of a non-analytic function given in terms of the flow velocity and the Laurent series coefficients describing the initial shape. This result is subsequently derived using residue calculus. The structure of the non-analytic function is examined for three different test cases, and a practical approach to determine the collapse point using a generalized Newton-Raphson root-finding algorithm is outlined. These examples also illustrate the possibility that the model breaks down in finite time prior to complete collapse, due to a topological singularity, as the dissolving boundary overlaps itself rather than breaking up into multiple domains (analogous to droplet pinch-off in fluid mechanics). In conclusion, the model raises fundamental mathematical questions about broken symmetries in finite-time singularities of both continuous and stochastic dynamical systems.

  6. Asymmetric collapse by dissolution or melting in a uniform flow

    PubMed Central

    Bazant, Martin Z.

    2016-01-01

    An advection–diffusion-limited dissolution model of an object being eroded by a two-dimensional potential flow is presented. By taking advantage of the conformal invariance of the model, a numerical method is introduced that tracks the evolution of the object boundary in terms of a time-dependent Laurent series. Simulations of a variety of dissolving objects are shown, which shrink and collapse to a single point in finite time. The simulations reveal a surprising exact relationship, whereby the collapse point is the root of a non-analytic function given in terms of the flow velocity and the Laurent series coefficients describing the initial shape. This result is subsequently derived using residue calculus. The structure of the non-analytic function is examined for three different test cases, and a practical approach to determine the collapse point using a generalized Newton–Raphson root-finding algorithm is outlined. These examples also illustrate the possibility that the model breaks down in finite time prior to complete collapse, due to a topological singularity, as the dissolving boundary overlaps itself rather than breaking up into multiple domains (analogous to droplet pinch-off in fluid mechanics). The model raises fundamental mathematical questions about broken symmetries in finite-time singularities of both continuous and stochastic dynamical systems. PMID:26997890

  7. Asymmetric collapse by dissolution or melting in a uniform flow

    DOE PAGES

    Rycroft, Chris H.; Bazant, Martin Z.

    2016-01-06

    An advection-diffusion-limited dissolution model of an object being eroded by a two-dimensional potential flow is presented. By taking advantage of the conformal invariance of the model, a numerical method is introduced that tracks the evolution of the object boundary in terms of a time-dependent Laurent series. Simulations of a variety of dissolving objects are shown, which shrink and collapse to a single point in finite time. The simulations reveal a surprising exact relationship, whereby the collapse point is the root of a non-analytic function given in terms of the flow velocity and the Laurent series coefficients describing the initial shape. This result is subsequently derived using residue calculus. The structure of the non-analytic function is examined for three different test cases, and a practical approach to determine the collapse point using a generalized Newton-Raphson root-finding algorithm is outlined. These examples also illustrate the possibility that the model breaks down in finite time prior to complete collapse, due to a topological singularity, as the dissolving boundary overlaps itself rather than breaking up into multiple domains (analogous to droplet pinch-off in fluid mechanics). In conclusion, the model raises fundamental mathematical questions about broken symmetries in finite-time singularities of both continuous and stochastic dynamical systems.
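
    The final step of locating the collapse point is a complex root-finding problem. As a generic sketch of that step only, here is Newton-Raphson iteration in the complex plane on an illustrative function (not the paper's flow-dependent function):

```python
def newton_complex(f, df, z0, tol=1e-12, max_iter=100):
    """Newton-Raphson iteration z <- z - f(z)/df(z) in the complex plane."""
    z = z0
    for _ in range(max_iter):
        step = f(z) / df(z)
        z -= step
        if abs(step) < tol:
            break
    return z

# Illustrative function with known roots at +/- i: f(z) = z**2 + 1.
root = newton_complex(lambda z: z * z + 1, lambda z: 2 * z, 1 + 1j)
```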

  8. Exhaled human breath measurement method for assessing exposure to halogenated volatile organic compounds.

    PubMed

    Pleil, J D; Lindstrom, A B

    1997-05-01

    The organic constituents of exhaled human breath are representative of blood-borne concentrations through gas exchange at the blood/breath interface in the lungs. The presence of specific compounds can be an indicator of recent exposure or represent a biological response of the subject. For volatile organic compounds (VOCs), sampling and analysis of breath is preferred to direct measurement from blood samples because breath collection is noninvasive, potentially infectious waste is avoided, and the measurement of gas-phase analytes is much simpler in a gas matrix than in a complex biological tissue such as blood. To exploit these advantages, we have developed the "single breath canister" (SBC) technique, a simple direct collection method for individual alveolar breath samples, and adapted conventional gas chromatography-mass spectrometry analytical methods for trace-concentration VOC analysis. The focus of this paper is to describe briefly the techniques for making VOC measurements in breath, to present some specific applications for which these methods are relevant, and to demonstrate how to estimate exposure to example VOCs on the basis of breath elimination. We present data from three different exposure scenarios: (a) vinyl chloride and cis-1,2-dichloroethene from showering with contaminated water from a private well, (b) chloroform and bromodichloromethane from high-intensity swimming in chlorinated pool water, and (c) trichloroethene from a controlled exposure chamber experiment. In all cases, for all subjects, the experiment is the same: preexposure breath measurement, exposure to halogenated VOC, and a postexposure time-dependent series of breath measurements. Data are presented only to demonstrate the use of the method and how to interpret the analytical results.
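
    Post-exposure breath concentrations typically decay multi-exponentially; a single-compartment first pass fits log-concentration against time to estimate the elimination rate. A hedged sketch on synthetic data (the rate constant and sampling times are made up, not values from this study):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.array([5.0, 15.0, 30.0, 60.0, 120.0])     # minutes after exposure ends
# Synthetic breath series C(t) = C0*exp(-k*t) with 2% measurement noise.
C = 40.0 * np.exp(-0.03 * t) * (1 + rng.normal(0, 0.02, t.size))

# Log-linear least squares on ln C(t) = ln C0 - k*t gives the elimination
# rate k; the breath half-life follows directly.
slope, intercept = np.polyfit(t, np.log(C), 1)
k = -slope
half_life = np.log(2) / k
```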

  9. Using the MCNP Taylor series perturbation feature (efficiently) for shielding problems

    NASA Astrophysics Data System (ADS)

    Favorite, Jeffrey

    2017-09-01

    The Taylor series or differential operator perturbation method, implemented in MCNP and invoked using the PERT card, can be used for efficient parameter studies in shielding problems. This paper shows how only two PERT cards are needed to generate an entire parameter study, including statistical uncertainty estimates (an additional three PERT cards can be used to give exact statistical uncertainties). One realistic example problem involves a detailed helium-3 neutron detector model and its efficiency as a function of the density of its high-density polyethylene moderator. The MCNP differential operator perturbation capability is extremely accurate for this problem. A second problem involves the density of the polyethylene reflector of the BeRP ball and is an example of first-order sensitivity analysis using the PERT capability. A third problem is an analytic verification of the PERT capability.
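
    The PERT approach estimates the response at perturbed parameter values from first- and second-order Taylor coefficients obtained in a single transport run, so a whole density study can be swept without rerunning the simulation. A hedged sketch of how such coefficients would be used downstream; the coefficient values and reference density here are invented for illustration:

```python
# Hypothetical Taylor coefficients for a tally's dependence on moderator
# density rho, as a PERT-style differential operator method would estimate.
r0, c1, c2 = 0.80, 0.45, -0.12   # R(rho0), dR/drho, (1/2)*d2R/drho2
rho0 = 0.95                      # g/cm^3, reference density (assumed)

def response(rho):
    """Second-order Taylor estimate R(rho) expanded about rho0."""
    d = rho - rho0
    return r0 + c1 * d + c2 * d * d

# An entire parameter study from one set of coefficients.
study = {rho: response(rho) for rho in (0.85, 0.90, 0.95, 1.00, 1.05)}
```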

  10. Statistical and temporal irradiance fluctuations modeling for a ground-to-geostationary satellite optical link.

    PubMed

    Camboulives, A-R; Velluet, M-T; Poulenard, S; Saint-Antonin, L; Michau, V

    2018-02-01

    An optical communication link between the ground and a geostationary satellite can be impaired by scintillation, beam wandering, and beam spreading caused by propagation through atmospheric turbulence. These effects on link performance can be mitigated by tracking and by error correction codes coupled with interleaving. Characterizing and optimizing these techniques requires precise numerical tools that describe the irradiance fluctuations statistically and generate irradiance time series. Wave-optics propagation methods have proven capable of modeling the effects of atmospheric turbulence on a beam, but they are computationally intensive. We present an analytical-numerical model that reproduces the probability density functions of irradiance fluctuations well and generates time series with substantial savings in time and computational resources.

  11. [Metabonomics-a useful tool for individualized cancer therapy].

    PubMed

    Chai, Yanlan; Wang, Juan; Liu, Zi

    2013-11-01

    Metabonomics has developed rapidly in the post-genome era and has become a hot topic among the omics. The core idea of metabonomics is to determine the relatively low-molecular-weight metabolites in organisms or cells by a series of analytical methods such as nuclear magnetic resonance, chromatography and mass spectrometry; to transform the metabolic-pattern data into useful information with chemometric tools and pattern-recognition software; and thereby to reveal the essence of the life activities of the body. With its advantages of high throughput, high sensitivity and high accuracy, metabolomics shows great potential and value in individualized cancer treatment. This paper introduces the concept, contents and methods of metabonomics and reviews its application in individualized cancer therapy.

  12. Data Analysis for the Behavioral Sciences Using SPSS

    NASA Astrophysics Data System (ADS)

    Lawner Weinberg, Sharon; Knapp Abramowitz, Sarah

    2002-04-01

    This book is written from the perspective that statistics is an integrated set of tools used together to uncover the story contained in numerical data. Accordingly, the book comes with a disk containing a series of real data sets to motivate discussions of appropriate methods of analysis. The presentation is based on a conceptual approach supported by an understanding of underlying mathematical foundations. Students learn that more than one method of analysis is typically needed and that an ample characterization of results is a critical component of any data analytic plan. The use of real data and SPSS to perform computations and create graphical summaries enables a greater emphasis on conceptual understanding and interpretation.

  13. Experimental and analytical research on the aerodynamics of wind driven turbines. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohrbach, C.; Wainauski, H.; Worobel, R.

    1977-12-01

    This aerodynamic research program was aimed at providing a reliable, comprehensive data base on a series of wind turbine models covering a broad range of the prime aerodynamic and geometric variables. Such data obtained under controlled laboratory conditions on turbines designed by the same method, of the same size, and tested in the same wind tunnel had not been available in the literature. Moreover, this research program was further aimed at providing a basis for evaluating the adequacy of existing wind turbine aerodynamic design and performance methodology, for assessing the potential of recent advanced theories, and for providing a basis for further method development and refinement.

  14. Treating electron transport in MCNP™

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H.G.

    1996-12-31

    The transport of electrons and other charged particles is fundamentally different from that of neutrons and photons. A neutron in aluminum slowing down from 0.5 MeV to 0.0625 MeV will have about 30 collisions; a photon will have fewer than ten. An electron with the same energy loss will undergo 10^5 individual interactions. This great increase in computational complexity makes a single-collision Monte Carlo approach to electron transport unfeasible for many situations of practical interest. Considerable theoretical work has been done to develop a variety of analytic and semi-analytic multiple-scattering theories for the transport of charged particles. The theories used in the algorithms in MCNP are the Goudsmit-Saunderson theory for angular deflections, the Landau theory of energy-loss fluctuations, and the Blunck-Leisegang enhancements of the Landau theory. In order to follow an electron through a significant energy loss, it is necessary to break the electron's path into many steps. These steps are chosen to be long enough to encompass many collisions (so that multiple-scattering theories are valid) but short enough that the mean energy loss in any one step is small (for the approximations in the multiple-scattering theories). The energy loss and angular deflection of the electron during each step can then be sampled from probability distributions based on the appropriate multiple-scattering theories. This subsumption of the effects of many individual collisions into single steps that are sampled probabilistically constitutes the "condensed history" Monte Carlo method. This method is exemplified in the ETRAN series of electron/photon transport codes. The ETRAN codes are also the basis for the Integrated TIGER Series, a system of general-purpose, application-oriented electron/photon transport codes. The electron physics in MCNP is similar to that of the Integrated TIGER Series.

  15. Comparison of ITRF2014 station coordinate input time series of DORIS, VLBI and GNSS

    NASA Astrophysics Data System (ADS)

    Tornatore, Vincenza; Tanır Kayıkçı, Emine; Roggero, Marco

    2016-12-01

    In this paper, station coordinate time series from three space geodetic techniques that have contributed to the realization of the International Terrestrial Reference Frame 2014 (ITRF2014) are compared. In particular, the height component time series extracted from the official combined intra-technique solutions submitted for ITRF2014 by the DORIS, VLBI and GNSS Combination Centers have been investigated. The main goal of this study is to assess the level of agreement among these three space geodetic techniques. A novel analytic method, modeling time series as discrete-time Markov processes, is presented and applied to the compared time series. The analysis method has proven to be particularly suited to obtaining quasi-cyclostationary residuals, an important property for carrying out a reliable harmonic analysis. We looked for common signatures among the three techniques. Frequencies and amplitudes of the detected signals are reported along with their percentage of incidence. Our comparison shows that two of the estimated signals, with periods of one year and 14 days, are common to all the techniques. Different hypotheses on the nature of the 14-day signal are presented. As a final check we compared the estimated velocities and their standard deviations (STD) for the sites with co-located VLBI, GNSS and DORIS stations, obtaining good agreement among the three techniques both in the horizontal (1.0 mm/yr mean STD) and in the vertical (0.7 mm/yr mean STD) component, although some sites show larger STDs, mainly due to lack of data, different data spans or noisy observations.
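
    Once quasi-cyclostationary residuals are available, signals at fixed periods such as one year and 14 days can be recovered by ordinary least-squares fitting of sine/cosine pairs. A generic sketch on a synthetic height series (the amplitudes and noise level are invented, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 3 * 365.25)                  # daily epochs over 3 years
periods = (365.25, 14.0)                        # annual and fortnightly terms
# Synthetic height residuals (mm): 3 mm annual + 1 mm 14-day + noise.
y = (3.0 * np.sin(2 * np.pi * t / 365.25)
     + 1.0 * np.cos(2 * np.pi * t / 14.0)
     + rng.normal(0, 0.2, t.size))

# Design matrix: constant offset plus a sine/cosine pair per fixed period.
cols = [np.ones_like(t)]
for p in periods:
    cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Amplitude of each periodic term from its sine/cosine coefficient pair.
amps = {p: np.hypot(coef[1 + 2 * i], coef[2 + 2 * i])
        for i, p in enumerate(periods)}
```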

  16. Analytic approximations to the modon dispersion relation. [in oceanography

    NASA Technical Reports Server (NTRS)

    Boyd, J. P.

    1981-01-01

    Three explicit analytic approximations are given to the modon dispersion relation developed by Flierl et al. (1980) to describe Gulf Stream rings and related phenomena in the oceans and atmosphere. The solutions are in the form of k(q), and are developed in the form of a power series in q for small q, an inverse power series in 1/q for large q, and a two-point Pade approximant. The low order Pade approximant is shown to yield a solution for the dispersion relation with a maximum relative error for the lowest branch of the function equal to one in 700 in the q interval zero to infinity.
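
    A two-point Padé approximant matches the leading terms of a small-q expansion and the large-q limit with one rational function. An illustrative sketch with a toy target function, not the modon dispersion relation itself (the target and its expansion coefficients are assumptions for demonstration):

```python
import math

# Toy target with known behavior: f(0) = 1, f'(0) = 1.5, f -> 2 as q -> inf.
def f(q):
    return math.sqrt((1 + 4 * q) / (1 + q))

# Two-point [1/1] Pade: R(q) = (a0 + a1*q)/(1 + b1*q), matching the
# small-q data f(0) = c0, f'(0) = c1 and the large-q limit f(inf) = L.
c0, c1, L = 1.0, 1.5, 2.0
b1 = c1 / (L - c0)        # from the derivative condition a1 - a0*b1 = c1
a0, a1 = c0, L * b1

def R(q):
    return (a0 + a1 * q) / (1 + b1 * q)
```

    As in the modon case, the low-order approximant stays accurate over the whole interval: here the error at q = 1, midway between the two expansion points, is under 2%.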

  17. An Analytical Method to Measure Free-Water Tritium in Foods using Azeotropic Distillation.

    PubMed

    Soga, Keisuke; Kamei, Toshiyuki; Hachisuka, Akiko; Nishimaki-Mogami, Tomoko

    2016-01-01

    A series of accidents at the Fukushima Dai-ichi Nuclear Power Plant has raised concerns about the discharge of contaminated water containing tritium (³H) from the nuclear power plant into the environment and into foods. In this study, we explored convenient analytical methods to measure free-water ³H in foods using liquid scintillation counting and an azeotropic distillation method. The detection limit was 10 Bq/L, corresponding to about 0.01% of 1 mSv/year. The ³H recoveries were 85-90% in fruits, vegetables, meats and fish, 75-85% in rice and cereal crops, and less than 50% in sweets containing little water. We found that, in the case of sweets, adding water to the sample before the azeotropic distillation increased the recovery and precision; the recoveries then reached more than 75% and the RSD was less than 10% in all 13 food categories. Considering its sensitivity, precision and simplicity, this method is practical and useful for ³H analysis in various foods, and should be suitable for the safety assessment of foods. In addition, we examined the level of ³H in foods on the Japanese market. No ³H radioactivity was detected in any of the 42 analyzed foods.
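
    The stated correspondence between the 10 Bq/L detection limit and roughly 0.01% of 1 mSv/year can be reproduced from an ingestion dose coefficient for tritiated water and a nominal drinking-water intake; both numbers below are assumptions for illustration, not values given in the abstract:

```python
# Assumed values: ICRP-style ingestion dose coefficient for tritiated
# water (adult) and a nominal 2 L/day drinking-water intake.
DOSE_COEFF = 1.8e-11           # Sv per Bq ingested (assumed)
INTAKE_L_PER_YEAR = 2.0 * 365  # litres of water per year (assumed)

def annual_dose_fraction(conc_bq_per_l, limit_sv=1e-3):
    """Fraction of a 1 mSv/year limit from water at the given activity."""
    dose_sv = conc_bq_per_l * INTAKE_L_PER_YEAR * DOSE_COEFF
    return dose_sv / limit_sv

frac = annual_dose_fraction(10.0)   # at the 10 Bq/L detection limit
```

    Under these assumptions the fraction comes out near 0.013%, consistent with the abstract's "about 0.01%".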

  18. A three-dimensional semi-analytical solution for predicting drug release through the orifice of a spherical device.

    PubMed

    Simon, Laurent; Ospina, Juan

    2016-07-25

    Three-dimensional solute transport was investigated for a spherical device with a release hole. The governing equation was derived using Fick's second law. A mixed Neumann-Dirichlet condition was imposed at the boundary to represent diffusion through a small region on the surface of the device. The cumulative percentage of drug released was calculated in the Laplace domain and represented by the first term of an infinite series of Legendre and modified Bessel functions of the first kind. Application of the Zakian algorithm yielded the time-domain closed-form expression. The first-order solution closely matched a numerical solution generated by Mathematica®. The proposed method allowed computation of the characteristic time: a larger surface pore resulted in a smaller effective time constant. The agreement between the numerical solution and the semi-analytical method improved noticeably as the size of the orifice increased. It took four time constants for the device to release approximately ninety-eight percent of its drug content.
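
    The Zakian algorithm recovers f(t) from a handful of weighted evaluations of its Laplace transform F(s) at complex points. Zakian's fixed constants are not reproduced here; as a sketch of the same idea, numerical Laplace-transform inversion, the code below uses the Gaver-Stehfest method instead, whose real-valued weights are easy to compute from scratch:

```python
import math

def stehfest_coeffs(N=12):
    """Gaver-Stehfest weights V_k for even N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from N real-axis samples of its transform F(s)."""
    ln2 = math.log(2.0)
    V = stehfest_coeffs(N)
    return (ln2 / t) * sum(V[k - 1] * F(k * ln2 / t)
                           for k in range(1, N + 1))

# Sanity check on a known transform pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0)
```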

  19. Electromagnetic field analysis and modeling of a relative position detection sensor for high speed maglev trains.

    PubMed

    Xue, Song; He, Ning; Long, Zhiqiang

    2012-01-01

    The long stator track for high speed maglev trains has a tooth-slot structure. The sensor obtains precise relative position information for the traction system by detecting the long stator tooth-slot structure based on nondestructive detection technology. The magnetic field modeling of the sensor is a typical three-dimensional (3-D) electromagnetic problem with complex boundary conditions, and is studied semi-analytically in this paper. A second-order vector potential (SOVP) is introduced to simplify the vector field problem to a scalar field one, the solution of which can be expressed in terms of series expansions according to Multipole Theory (MT) and the New Equivalent Source (NES) method. The coefficients of the expansions are determined by the least squares method based on the boundary conditions. Then, the solution is compared to the simulation result through Finite Element Analysis (FEA). The comparison results show that the semi-analytical solution agrees approximately with the numerical solution. Finally, based on electromagnetic modeling, a difference coil structure is designed to improve the sensitivity and accuracy of the sensor.

  20. Electromagnetic Field Analysis and Modeling of a Relative Position Detection Sensor for High Speed Maglev Trains

    PubMed Central

    Xue, Song; He, Ning; Long, Zhiqiang

    2012-01-01

    The long stator track for high speed maglev trains has a tooth-slot structure. The sensor obtains precise relative position information for the traction system by detecting the long stator tooth-slot structure based on nondestructive detection technology. The magnetic field modeling of the sensor is a typical three-dimensional (3-D) electromagnetic problem with complex boundary conditions, and is studied semi-analytically in this paper. A second-order vector potential (SOVP) is introduced to simplify the vector field problem to a scalar field one, the solution of which can be expressed in terms of series expansions according to Multipole Theory (MT) and the New Equivalent Source (NES) method. The coefficients of the expansions are determined by the least squares method based on the boundary conditions. Then, the solution is compared to the simulation result through Finite Element Analysis (FEA). The comparison results show that the semi-analytical solution agrees approximately with the numerical solution. Finally, based on electromagnetic modeling, a difference coil structure is designed to improve the sensitivity and accuracy of the sensor. PMID:22778652

  1. System parameter identification from projection of inverse analysis

    NASA Astrophysics Data System (ADS)

    Liu, K.; Law, S. S.; Zhu, X. Q.

    2017-05-01

    The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is revisited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and the location and extent of stiffness perturbation can be identified with better accuracy compared with the conventional response sensitivity-based method.
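
    The core update is a least-squares solve of the linearized identification equation δy ≈ S δp, projected onto the leading principal components of the analytical output before inversion. A minimal single-iteration sketch with a synthetic sensitivity matrix (not the authors' truss or frame models):

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.normal(size=(200, 5))      # sensitivity matrix d(output)/d(params)
dp_true = np.array([0.02, -0.01, 0.0, 0.03, -0.02])
dy = S @ dp_true + rng.normal(0, 1e-3, 200)   # measured minus analytical

# Project onto the leading principal components of the output space
# (left singular vectors of S) before solving the inverse problem.
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
n_pc = 5
dy_proj = U[:, :n_pc].T @ dy
S_proj = np.diag(sv[:n_pc]) @ Vt[:n_pc]
dp_est, *_ = np.linalg.lstsq(S_proj, dy_proj, rcond=None)
```

    In the full method this solve sits inside an iterative model-updating loop: the estimated δp updates the model, S and δy are recomputed, and the process repeats until convergence.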

  2. MRI of human hair.

    PubMed

    Mattle, Eveline; Weiger, Markus; Schmidig, Daniel; Boesiger, Peter; Fey, Michael

    2009-06-01

    Hair care for humans is a major world industry with specialised tools, chemicals and techniques. Studying the effect of hair care products has become a considerable field of research, and besides mechanical and optical testing, numerous advanced analytical techniques have been employed in this area. In the present work, another means of studying the properties of hair is added by demonstrating the feasibility of magnetic resonance imaging (MRI) of the human hair. Established dedicated nuclear magnetic resonance microscopy hardware (solenoidal radiofrequency microcoils and planar field gradients) and methods (constant time imaging) were adapted to the specific needs of hair MRI. Images were produced at a spatial resolution high enough to resolve the inner structure of the hair, showing contrast between cortex and medulla. Quantitative evaluation of a scan series with different echo times provided a T2* value of 2.6 ms for the cortex and a water content of about 90% for hairs saturated with water. The demonstration of the feasibility of hair MRI potentially adds a new tool to the large variety of analytical methods used nowadays in the development of hair care products.

  3. Influence of transverse-shear and large-deformation effects on the low-speed impact response of laminated composite plates

    NASA Technical Reports Server (NTRS)

    Ambur, Damodar R.; Starnes, James H., Jr.; Prasad, Chunchu B.

    1993-01-01

    An analytical procedure is presented for determining the transient response of simply supported, rectangular laminated composite plates subjected to impact loads from airgun-propelled or dropped-weight impactors. A first-order shear-deformation theory is included in the analysis to represent properly any local short-wave-length transient bending response. The impact force is modeled as a locally distributed load with a cosine-cosine distribution. A double Fourier series expansion and the Timoshenko small-increment method are used to determine the contact force, out-of-plane deflections, and in-plane strains and stresses at any plate location due to an impact force at any plate location. The results of experimental and analytical studies are compared for quasi-isotropic laminates. The results indicate that using the appropriate local force distribution for the locally loaded area and including transverse-shear-deformation effects in the laminated plate response analysis are important. The applicability of the present analytical procedure based on small deformation theory is investigated by comparing analytical and experimental results for combinations of quasi-isotropic laminate thicknesses and impact energy levels. The results of this study indicate that large-deformation effects influence the response of both 24- and 32-ply laminated plates, and that a geometrically nonlinear analysis is required for predicting the response accurately.

  4. Experimental and analytical characterization of triaxially braided textile composites

    NASA Technical Reports Server (NTRS)

    Masters, John E.; Fedro, Mark J.; Ifju, Peter G.

    1993-01-01

    There were two components, experimental and analytical, to this investigation of triaxially braided textile composite materials. The experimental portion of the study centered on measuring the materials' longitudinal and transverse tensile moduli, Poisson's ratio, and strengths. The identification of the damage mechanisms exhibited by these materials was also a prime objective of the experimental investigation. The analytical portion of the investigation utilized the Textile Composites Analysis (TECA) model to predict modulus and strength. The analytical and experimental results were compared to assess the effectiveness of the analysis. The figures contained in this paper reflect the presentation made at the conference. They may be divided into four sections: a definition of the material system tested; a series of figures summarizing the experimental results, including a Moire interferometry study of the strain distribution in the material, examples and descriptions of the types of damage encountered, and a summary of the measured properties; a description of the TECA model, with predicted results and a comparison against measured values; and finally, a brief summary that completes the paper.

  5. Predicting adverse hemodynamic events in critically ill patients.

    PubMed

    Yoon, Joo H; Pinsky, Michael R

    2018-06-01

The art of predicting future hemodynamic instability in the critically ill has rapidly become a science with the advent of advanced analytical processes based on computer-driven machine learning techniques. How these methods have progressed beyond severity scoring systems to interface with decision support is summarized. Data mining of large multidimensional clinical time-series databases using a variety of machine learning tools has led to our ability to identify alarm artifacts and filter them from bedside alarms, display real-time risk stratification at the bedside to aid in clinical decision-making, and predict the subsequent development of cardiorespiratory insufficiency hours before these events occur. This fast-evolving field is primarily limited by the difficulty of linking high-quality granular data to physiologic rationale across heterogeneous clinical care domains. Using advanced analytic tools to glean knowledge from clinical data streams is rapidly becoming a reality whose potential clinical impact is great.

  6. Thermodynamic properties and static structure factor for a Yukawa fluid in the mean spherical approximation.

    PubMed

    Montes-Perez, J; Cruz-Vera, A; Herrera, J N

    2011-12-01

This work presents full analytic expressions for the thermodynamic properties and the static structure factor of a hard-sphere plus 1-Yukawa fluid within the mean spherical approximation. To obtain these properties of the Yukawa-type fluid analytically, it was necessary to solve a fourth-order equation for the scaling parameter on a large scale. The physical root of this equation was determined by imposing physical conditions. The results of this work build on the seminal papers of Blum and Høye. We show that it is not necessary to use a series expansion to solve the equation for the scaling parameter. We applied our theoretical result to find the thermodynamic properties and the static structure factor of krypton. Our results are in good agreement with those obtained experimentally or by simulation using the Monte Carlo method.
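Solving a quartic for a scaling parameter and then selecting the physical root by imposing physical conditions is a common numerical pattern. A minimal generic sketch (the polynomial below is illustrative, not the paper's actual equation; here "physical" simply means real and positive):

```python
import numpy as np

def physical_root(coeffs, lo=0.0, hi=np.inf):
    """Return the unique real root of a polynomial lying in (lo, hi).

    coeffs are highest-degree-first, as numpy.roots expects.
    """
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-8].real
    candidates = real[(real > lo) & (real < hi)]
    if len(candidates) != 1:
        raise ValueError(f"expected one physical root, found {len(candidates)}")
    return candidates[0]

# Illustrative quartic: (x - 2)(x + 1)(x^2 + 1) = x^4 - x^3 - x^2 - x - 2.
# Its only real, positive root is x = 2, so that root is selected.
root = physical_root([1.0, -1.0, -1.0, -1.0, -2.0])
```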

  7. Computing the multifractal spectrum from time series: an algorithmic approach.

    PubMed

    Harikrishnan, K P; Misra, R; Ambika, G; Amritkar, R E

    2009-12-01

    We show that the existing methods for computing the f(alpha) spectrum from a time series can be improved by using a new algorithmic scheme. The scheme relies on the basic idea that the smooth convex profile of a typical f(alpha) spectrum can be fitted with an analytic function involving a set of four independent parameters. While the standard existing schemes [P. Grassberger et al., J. Stat. Phys. 51, 135 (1988); A. Chhabra and R. V. Jensen, Phys. Rev. Lett. 62, 1327 (1989)] generally compute only an incomplete f(alpha) spectrum (usually the top portion), we show that this can be overcome by an algorithmic approach, which is automated to compute the D(q) and f(alpha) spectra from a time series for any embedding dimension. The scheme is first tested with the logistic attractor with known f(alpha) curve and subsequently applied to higher-dimensional cases. We also show that the scheme can be effectively adapted for analyzing practical time series involving noise, with examples from two widely different real world systems. Moreover, some preliminary results indicating that the set of four independent parameters may be used as diagnostic measures are also included.
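The direct f(alpha) estimation of Chhabra and Jensen cited above can be sketched at a single scale as follows. The binomial cascade and parameter values are illustrative; a real analysis would fit the scaling of these sums over a range of box sizes rather than evaluate them at one epsilon:

```python
import numpy as np

def f_alpha(p_boxes, eps, qs):
    """Single-scale Chhabra-Jensen-style estimate of (alpha(q), f(q))."""
    p = p_boxes[p_boxes > 0]
    alphas, fs = [], []
    for q in qs:
        mu = p**q / np.sum(p**q)                       # normalized q-measure
        alphas.append(np.sum(mu * np.log(p)) / np.log(eps))
        fs.append(np.sum(mu * np.log(mu)) / np.log(eps))
    return np.array(alphas), np.array(fs)

# Binomial multiplicative cascade with weights 0.3 / 0.7, a standard
# multifractal test measure with a known smooth convex f(alpha) profile.
n = 12
p = np.array([1.0])
for _ in range(n):
    p = np.concatenate([0.3 * p, 0.7 * p])

alphas, fs = f_alpha(p, 2.0**-n, np.linspace(-5.0, 5.0, 21))
```

At q = 0 the estimate returns f = 1, the dimension of the support, and alpha decreases monotonically with q, tracing the convex profile the authors fit with a four-parameter analytic function.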

  8. Preparation, Characterization, and Selectivity Study of Mixed-Valence Sulfites

    ERIC Educational Resources Information Center

    Silva, Luciana A.; de Andrade, Jailson B.

    2010-01-01

    A project involving the synthesis of an isomorphic double sulfite series and characterization by classical inorganic chemical analyses is described. The project is performed by upper-level undergraduate students in the laboratory. This compound series is suitable for examining several chemical concepts and analytical techniques in inorganic…

  9. Parallel Aircraft Trajectory Optimization with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Gray, Justin S.; Naylor, Bret

    2016-01-01

Trajectory optimization is an integral component of the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single- and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the nonlinear analysis evaluations and the derivative computations themselves. The constraint aggregation results revealed a significant numerical challenge due to the difficulty of achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation-based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.

  10. Analytical approximations for effective relative permeability in the capillary limit

    NASA Astrophysics Data System (ADS)

    Rabinovich, Avinoam; Li, Boxiao; Durlofsky, Louis J.

    2016-10-01

We present an analytical method for calculating two-phase effective relative permeability, k_rj^eff, where j designates phase (here CO2 and water), under steady-state and capillary-limit assumptions. These effective relative permeabilities may be applied in experimental settings and for upscaling in the context of numerical flow simulations, e.g., for CO2 storage. An exact solution for effective absolute permeability, k^eff, in two-dimensional log-normally distributed isotropic permeability (k) fields is the geometric mean. We show that this does not hold for k_rj^eff, since log normality is not maintained in the capillary-limit phase permeability field (K_j = k·k_rj) when capillary pressure, and thus the saturation field, is varied. Nevertheless, the geometric mean is still shown to be suitable for approximating k_rj^eff when the variance of ln k is low. For high-variance cases, we apply a correction to the geometric-average gas effective relative permeability using a Winsorized mean, which neglects large and small K_j values symmetrically. The analytical method is extended to anisotropically correlated log-normal permeability fields using power-law averaging. In these cases, the Winsorized-mean treatment is applied to the gas curves for cases described by negative power-law exponents (flow across incomplete layers). The accuracy of our analytical expressions for k_rj^eff is demonstrated through extensive numerical tests, using low-variance and high-variance permeability realizations with a range of correlation structures. We also present integral expressions for geometric-mean and power-law average k_rj^eff for the systems considered, which enable derivation of closed-form series solutions for k_rj^eff without generating permeability realizations.
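The geometric-mean and power-law averages discussed above can be sketched directly; the field statistics below are illustrative, not the paper's data. The power-law exponent p = 0 recovers the geometric mean, while p = 1 and p = -1 give the arithmetic and harmonic bounds corresponding to flow along and across perfect layers:

```python
import numpy as np

def power_mean(k, p):
    """Power-law average of permeabilities; p -> 0 recovers the geometric mean."""
    if abs(p) < 1e-12:
        return np.exp(np.mean(np.log(k)))
    return np.mean(k**p) ** (1.0 / p)

# Log-normal permeability samples, ln k ~ N(0, 1) (illustrative variance)
rng = np.random.default_rng(0)
k = np.exp(rng.normal(0.0, 1.0, size=100_000))

k_geo = power_mean(k, 0.0)     # exact for 2-D isotropic log-normal fields
k_arith = power_mean(k, 1.0)   # layered medium, flow along layers
k_harm = power_mean(k, -1.0)   # layered medium, flow across layers
```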

  11. A critical evaluation of perturbation theories by Monte Carlo simulation of the first four perturbation terms in a Helmholtz energy expansion for the Lennard-Jones fluid

    NASA Astrophysics Data System (ADS)

    van Westen, Thijs; Gross, Joachim

    2017-07-01

The Helmholtz energy of a fluid interacting by a Lennard-Jones pair potential is expanded in a perturbation series. Both the methods of Barker-Henderson (BH) and of Weeks-Chandler-Andersen (WCA) are evaluated for the division of the intermolecular potential into reference and perturbation parts. The first four perturbation terms are evaluated for various densities and temperatures (in the ranges ρ* = 0-1.5 and T* = 0.5-12) using Monte Carlo simulations in the canonical ensemble. The simulation results are used to test several approximate theoretical methods for describing perturbation terms or for developing an approximate infinite order perturbation series. Additionally, the simulations serve as a basis for developing fully analytical third order BH and WCA perturbation theories. The development of analytical theories allows (1) a careful comparison between the BH and WCA formalisms, and (2) a systematic examination of the effect of higher-order perturbation terms on calculated thermodynamic properties of fluids. Properties included in the comparison are supercritical thermodynamic properties (pressure, internal energy, and chemical potential), vapor-liquid phase equilibria, second virial coefficients, and heat capacities. For all properties studied, we find a systematically improved description upon using a higher-order perturbation theory. A result of particular relevance is that a third order perturbation theory is capable of providing a quantitative description of second virial coefficients to temperatures as low as the triple-point of the Lennard-Jones fluid. We find no reason to prefer the WCA formalism over the BH formalism.
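The WCA division of the Lennard-Jones potential into a purely repulsive reference and an attractive perturbation, mentioned above, can be written down directly. This sketch uses reduced units (ε = σ = 1 by default) and splits at the potential minimum r_min = 2^(1/6)σ:

```python
def lj(r, eps=1.0, sigma=1.0):
    """Full Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

R_MIN = 2.0 ** (1.0 / 6.0)  # location of the LJ minimum (in units of sigma)

def wca_reference(r, eps=1.0, sigma=1.0):
    """Purely repulsive WCA reference: LJ shifted up by eps inside r_min, zero outside."""
    return lj(r, eps, sigma) + eps if r < R_MIN * sigma else 0.0

def wca_perturbation(r, eps=1.0, sigma=1.0):
    """Attractive perturbation: constant -eps inside r_min, the full LJ tail outside."""
    return -eps if r < R_MIN * sigma else lj(r, eps, sigma)
```

By construction the two parts sum back to the full potential at every separation, and the reference part vanishes continuously at r_min, which is the defining feature of the WCA split (the BH split instead divides the potential at the zero crossing r = σ).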

  12. Measuring the impact of medicines regulatory interventions – Systematic review and methodological considerations

    PubMed Central

    Morales, Daniel R.; Pacurariu, Alexandra; Kurz, Xavier

    2017-01-01

    Aims Evaluating the public health impact of regulatory interventions is important but there is currently no common methodological approach to guide this evaluation. This systematic review provides a descriptive overview of the analytical methods for impact research. Methods We searched MEDLINE and EMBASE for articles with an empirical analysis evaluating the impact of European Union or non‐European Union regulatory actions to safeguard public health published until March 2017. References from systematic reviews and articles from other known sources were added. Regulatory interventions, data sources, outcomes of interest, methodology and key findings were extracted. Results From 1246 screened articles, 229 were eligible for full‐text review and 153 articles in English language were included in the descriptive analysis. Over a third of articles studied analgesics and antidepressants. Interventions most frequently evaluated are regulatory safety communications (28.8%), black box warnings (23.5%) and direct healthcare professional communications (10.5%); 55% of studies measured changes in drug utilization patterns, 27% evaluated health outcomes, and 18% targeted knowledge, behaviour or changes in clinical practice. Unintended consequences like switching therapies or spill‐over effects were rarely evaluated. Two‐thirds used before–after time series and 15.7% before–after cross‐sectional study designs. Various analytical approaches were applied including interrupted time series regression (31.4%), simple descriptive analysis (28.8%) and descriptive analysis with significance tests (23.5%). Conclusion Whilst impact evaluation of pharmacovigilance and product‐specific regulatory interventions is increasing, the marked heterogeneity in study conduct and reporting highlights the need for scientific guidance to ensure robust methodologies are applied and systematic dissemination of results occurs. PMID:29105853
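The interrupted time-series regression counted above (31.4% of studies) can be sketched with ordinary least squares. The model form, a level shift plus a slope change at the intervention date, is the standard segmented-regression parameterization; the data here are synthetic:

```python
import numpy as np

def interrupted_ts(y, t, t0):
    """OLS fit of level + trend with a level shift and slope change at t0.

    Model: y = b0 + b1*t + b2*post + b3*(t - t0)*post, with post = 1{t >= t0}.
    Returns the coefficient vector [b0, b1, b2, b3].
    """
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic example: a regulatory intervention at t0 = 50 drops the level by 5
t = np.arange(100, dtype=float)
y = 10.0 + 0.2 * t - 5.0 * (t >= 50)
beta = interrupted_ts(y, t, 50.0)
```

The fitted b2 recovers the immediate level change attributable to the intervention, and b3 the post-intervention change in trend (zero in this synthetic example).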

  13. Study of phase clustering method for analyzing large volumes of meteorological observation data

    NASA Astrophysics Data System (ADS)

    Volkov, Yu. V.; Krutikov, V. A.; Botygin, I. A.; Sherstnev, V. S.; Sherstneva, A. I.

    2017-11-01

The article describes an iterative parallel phase-grouping algorithm for temperature field classification. The algorithm is based on a modified method of structure formation using the analytic signal. The developed method can address climate classification tasks as well as climatic zoning at any temporal or spatial scale. When applied to surface temperature measurement series, the algorithm identifies climatic structures with correlated changes of the temperature field, supports conclusions about climate uniformity in a given area, and tracks climate changes over time by analyzing offsets in the type groups. The information on climate type groups specific to selected geographical areas is supplemented by a genetic scheme of class distribution depending on changes in the mutual correlation level between monthly average ground temperatures.
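The analytic-signal construction underlying the phase grouping can be sketched with a minimal FFT-based Hilbert transform (a stand-in for scipy.signal.hilbert; the 5 Hz test tone is illustrative). The instantaneous phase extracted this way is what phase-clustering methods compare across station series:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal z(t) = x(t) + i*H[x](t) via the FFT."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:                 # zero negative frequencies, double positive ones
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
z = analytic_signal(np.cos(2 * np.pi * 5 * t))
phase = np.unwrap(np.angle(z))     # instantaneous phase
envelope = np.abs(z)               # instantaneous amplitude
```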

  14. New approach to detect seismic surface waves in 1Hz-sampled GPS time series

    PubMed Central

    Houlié, N.; Occhipinti, G.; Blanchard, T.; Shapiro, N.; Lognonné, P.; Murakami, M.

    2011-01-01

Recently, co-seismic source characterization based on GPS measurements has been carried out in the near and far field with remarkable results. However, the accuracy of ground displacement measurements inferred from GPS phase residuals still depends on the distribution of satellites in the sky. We test here a method, based on double-difference (DD) computations of the line of sight (LOS), that allows detection of 3D co-seismic ground shaking. The DD method is quasi-analytical and free of most of the intrinsic errors affecting GPS measurements. The seismic waves presented in this study produced DD amplitudes 4 to 7 times stronger than the background noise. The method is benchmarked using the GEONET GPS stations that recorded the Hokkaido earthquake (2003 September 25, Mw = 8.3). PMID:22355563
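The double-difference idea can be illustrated with a toy pseudorange model. The numbers below are made up; the point is only that receiver and satellite clock biases cancel exactly in the DD, leaving the purely geometric signal:

```python
def single_difference(obs_a, obs_b):
    """Between-receiver difference: cancels the common satellite clock error."""
    return obs_a - obs_b

def double_difference(sd_sat1, sd_sat2):
    """Between-satellite difference of SDs: cancels receiver clock errors too."""
    return sd_sat1 - sd_sat2

# Toy observables: geometric range + receiver clock bias + satellite clock bias
# geom[i][j] is the range from receiver i to satellite j (illustrative values)
geom = [[20000.0, 21000.0], [20500.0, 21250.0]]
rx_clk = [3.0, -2.0]     # receiver clock biases
sv_clk = [0.5, -0.7]     # satellite clock biases
obs = [[geom[i][j] + rx_clk[i] + sv_clk[j] for j in range(2)] for i in range(2)]

sd1 = single_difference(obs[0][0], obs[1][0])   # satellite 1
sd2 = single_difference(obs[0][1], obs[1][1])   # satellite 2
dd = double_difference(sd1, sd2)                # clock-free combination
```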

  15. Field portable low temperature porous layer open tubular cryoadsorption headspace sampling and analysis part II: Applications.

    PubMed

    Harries, Megan; Bukovsky-Reyes, Santiago; Bruno, Thomas J

    2016-01-15

    This paper details the sampling methods used with the field portable porous layer open tubular cryoadsorption (PLOT-cryo) approach, described in Part I of this two-part series, applied to several analytes of interest. We conducted tests with coumarin and 2,4,6-trinitrotoluene (two solutes that were used in initial development of PLOT-cryo technology), naphthalene, aviation turbine kerosene, and diesel fuel, on a variety of matrices and test beds. We demonstrated that these analytes can be easily detected and reliably identified using the portable unit for analyte collection. By leveraging efficiency-boosting temperature control and the high flow rate multiple capillary wafer, very short collection times (as low as 3s) yielded accurate detection. For diesel fuel spiked on glass beads, we determined a method detection limit below 1 ppm. We observed greater variability among separate samples analyzed with the portable unit than previously documented in work using the laboratory-based PLOT-cryo technology. We identify three likely sources that may help explain the additional variation: the use of a compressed air source to generate suction, matrix geometry, and variability in the local vapor concentration around the sampling probe as solute depletion occurs both locally around the probe and in the test bed as a whole. This field-portable adaptation of the PLOT-cryo approach has numerous and diverse potential applications. Published by Elsevier B.V.

  16. Chiral Separation of G-type Chemical Warfare Nerve Agents via Analytical Supercritical Fluid Chromatography

    PubMed Central

    Kasten, Shane A; Zulli, Steven; Jones, Jonathan L; Dephillipo, Thomas; Cerasoli, Douglas M

    2014-01-01

    Chemical warfare nerve agents (CWNAs) are extremely toxic organophosphorus compounds that contain a chiral phosphorus center. Undirected synthesis of G-type CWNAs produces stereoisomers of tabun, sarin, soman, and cyclosarin (GA, GB, GD, and GF, respectively). Analytical-scale methods were developed using a supercritical fluid chromatography (SFC) system in tandem with a mass spectrometer for the separation, quantitation, and isolation of individual stereoisomers of GA, GB, GD, and GF. Screening various chiral stationary phases (CSPs) for the capacity to provide full baseline separation of the CWNAs revealed that a Regis WhelkO1 (SS) column was capable of separating the enantiomers of GA, GB, and GF, with elution of the P(+) enantiomer preceding elution of the corresponding P(–) enantiomer; two WhelkO1 (SS) columns had to be connected in series to achieve complete baseline resolution. The four diastereomers of GD were also resolved using two tandem WhelkO1 (SS) columns, with complete baseline separation of the two P(+) epimers. A single WhelkO1 (RR) column with inverse stereochemistry resulted in baseline separation of the GD P(–) epimers. The analytical methods described can be scaled to allow isolation of individual stereoisomers to assist in screening and development of countermeasures to organophosphorus nerve agents. Chirality 26:817–824, 2014. © 2014 The Authors. Chirality published by John Wiley Periodicals, Inc. PMID:25298066

  17. First-order analytic propagation of satellites in the exponential atmosphere of an oblate planet

    NASA Astrophysics Data System (ADS)

    Martinusi, Vladimir; Dell'Elce, Lamberto; Kerschen, Gaëtan

    2017-04-01

The paper offers the fully analytic solution to the motion of a satellite orbiting under the influence of the two major perturbations, due to the oblateness and the atmospheric drag. The solution is presented in a time-explicit form, and takes into account an exponential distribution of the atmospheric density, an assumption that is reasonably close to reality. The approach involves two essential steps. The first one concerns a new approximate mathematical model that admits a closed-form solution with respect to a set of new variables. The second step is the determination of an infinitesimal contact transformation that allows navigation between the new and the original variables. This contact transformation is obtained in exact form, and afterwards a Taylor series approximation is proposed in order to make all the computations explicit. The aforementioned transformation accommodates both perturbations, improving the accuracy of the orbit predictions by one order of magnitude with respect to the case when the atmospheric drag is absent from the transformation. Numerical simulations are performed for a low Earth orbit starting at an altitude of 350 km, and they show that the incorporation of drag terms into the contact transformation generates an error reduction by a factor of 7 in the position vector. The proposed method aims at improving the accuracy of analytic orbit propagation and transforming it into a viable alternative to the computationally intensive numerical methods.
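The exponential density assumption can be made concrete as below. The sea-level density and scale height are illustrative textbook values, not the paper's model; in practice the reference altitude and scale height would be fitted to the altitude band of interest (e.g., around 350 km):

```python
import math

def density(h_km, rho0=1.225, h0_km=0.0, scale_height_km=7.2):
    """Exponential atmosphere: rho(h) = rho0 * exp(-(h - h0)/H), in kg/m^3."""
    return rho0 * math.exp(-(h_km - h0_km) / scale_height_km)

def drag_acceleration(h_km, v_mps, cd_a_over_m=0.01):
    """Drag deceleration magnitude a = (1/2) * rho * v^2 * (Cd * A / m)."""
    return 0.5 * density(h_km) * v_mps**2 * cd_a_over_m
```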

  19. A systematic review of methodology: time series regression analysis for environmental factors and infectious diseases.

    PubMed

    Imai, Chisato; Hashizume, Masahiro

    2015-03-01

Time series analysis is suitable for investigations of relatively direct and short-term effects of exposures on outcomes. In environmental epidemiology studies, this method has been one of the standard approaches to assess impacts of environmental factors on acute non-infectious diseases (e.g. cardiovascular deaths), conventionally with generalized linear or additive models (GLM and GAM). However, the same analysis practices are often applied to infectious diseases despite substantial differences from non-infectious diseases that may result in analytical challenges. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, a systematic review was conducted to elucidate important issues in assessing the associations between environmental factors and infectious diseases using time series analysis with GLM and GAM. Published studies on the associations between weather factors and malaria, cholera, dengue, and influenza were targeted. Our review raised issues regarding the estimation of susceptible populations and exposure lag times, the adequacy of seasonal adjustments, the presence of strong autocorrelations, and the lack of a smaller observation time unit for outcomes (i.e. daily data). These concerns may be attributable to features specific to infectious diseases, such as transmission among individuals and complicated causal mechanisms. The consequence of not taking adequate measures to address these issues is distortion of the appropriate risk quantification of exposure factors. Future studies should pay careful attention to these details and examine alternative models or methods that improve studies using time series regression analysis for environmental determinants of infectious diseases.

  20. An asymptotically consistent approximant method with application to soft- and hard-sphere fluids.

    PubMed

    Barlow, N S; Schultz, A J; Weinstein, S J; Kofke, D A

    2012-11-28

A modified Padé approximant is used to construct an equation of state, which has the same large-density asymptotic behavior as the model fluid being described, while still retaining the low-density behavior of the virial equation of state (virial series). Within this framework, all sequences of rational functions that are analytic in the physical domain converge to the correct behavior at the same rate, eliminating the ambiguity of choosing the correct form of Padé approximant. The method is applied to fluids composed of "soft" spherical particles with separation distance r interacting through an inverse-power pair potential, φ = ε(σ/r)^n, where ε and σ are model parameters and n is the "hardness" of the spheres. For n < 9, the approximants provide a significant improvement over the 8-term virial series, when compared against molecular simulation data. For n ≥ 9, both the approximants and the 8-term virial series give an accurate description of the fluid behavior, when compared with simulation data. When taking the limit as n → ∞, an equation of state for hard spheres is obtained, which is closer to simulation data than the 10-term virial series for hard spheres, and is comparable in accuracy to other recently proposed equations of state. By applying a least square fit to the approximants, we obtain a general and accurate soft-sphere equation of state as a function of n, valid over the full range of density in the fluid phase.
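The ordinary [m/n] Padé construction from series coefficients, which the paper modifies to enforce the large-density asymptote, can be sketched as follows (this is the standard construction, not the paper's asymptotically constrained variant; exp(x) is used only as a test series):

```python
import numpy as np

def pade(c, m, n):
    """[m/n] Padé approximant from series coefficients c[0..m+n].

    Returns numerator coeffs a[0..m] and denominator coeffs b[0..n], b[0] = 1,
    both lowest-degree-first.
    """
    # Denominator: sum_{j=0..n} b[j] * c[m+k-j] = 0 for k = 1..n, with b[0] = 1
    A = np.array([[c[m + k - j] if m + k - j >= 0 else 0.0
                   for j in range(1, n + 1)] for k in range(1, n + 1)])
    rhs = -np.array([c[m + k] for k in range(1, n + 1)])
    b = np.concatenate([[1.0], np.linalg.solve(A, rhs)])
    # Numerator: a[i] = sum_{j=0..min(i,n)} b[j] * c[i-j]
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b

# [2/2] approximant of exp(x) from its first five series coefficients
c = [1.0, 1.0, 0.5, 1.0 / 6.0, 1.0 / 24.0]
a, b = pade(c, 2, 2)
approx = np.polyval(a[::-1], 1.0) / np.polyval(b[::-1], 1.0)
```

For exp(x) this reproduces the known [2/2] approximant (1 + x/2 + x²/12)/(1 - x/2 + x²/12), which evaluates to 19/7 at x = 1, already close to e.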

1. Analytic representations of m_K, F_K, m_η, and F_η in two-loop SU(3) chiral perturbation theory

    NASA Astrophysics Data System (ADS)

    Ananthanarayan, B.; Bijnens, Johan; Friot, Samuel; Ghosh, Shayan

    2018-06-01

In this work, we consider expressions for the masses and decay constants of the pseudoscalar mesons in SU(3) chiral perturbation theory. These involve sunset diagrams and their derivatives evaluated at p^2 = m_P^2 (P = π, K, η). Recalling that there are three mass scales in this theory, m_π, m_K and m_η, there are instances when the finite parts of the sunset diagrams do not admit an expression in terms of elementary functions, and they have therefore been evaluated numerically in the past. In a recent publication, an expansion in the external momentum was performed to obtain approximate analytic expressions for m_π and F_π, the pion mass and decay constant. We provide fully analytic exact expressions for m_K and m_η, the kaon and eta masses, and F_K and F_η, the kaon and eta decay constants. These expressions, calculated using Mellin-Barnes methods, take the form of double series in two mass ratios. A numerical analysis of the results to evaluate the relative size of contributions coming from loops, chiral logarithms, as well as phenomenological low-energy constants is presented. We also present a set of approximate analytic expressions for m_K, F_K, m_η and F_η that facilitate comparisons with lattice results. Finally, we show how exact analytic expressions for m_π and F_π may be obtained, the latter having been used in conjunction with the results for F_K to produce a recently published analytic representation of F_K/F_π.

  2. Urban Rain Gauge Siting Selection Based on Gis-Multicriteria Analysis

    NASA Astrophysics Data System (ADS)

    Fu, Yanli; Jing, Changfeng; Du, Mingyi

    2016-06-01

With the increasingly rapid growth of urbanization and climate change, urban rainfall monitoring, as well as urban waterlogging, has received wide attention. Because conventional siting selection methods do not take geographic surroundings and spatial-temporal scale into consideration for urban rain gauge site selection, this paper primarily aims at finding appropriate siting selection rules and methods for rain gauges in urban areas. Additionally, for optimizing gauge locations, a spatial decision support system (DSS) aided by a geographical information system (GIS) has been developed. Given a series of criteria, the rain gauge optimal site-search problem can be addressed by multicriteria decision analysis (MCDA). A series of spatial analytical techniques are required for MCDA to identify the prospective sites. On the GIS platform, spatial kernel density analysis can reflect the population density, and GIS buffer analysis is used to optimize locations with respect to the rain gauge signal transmission characteristics. Experiment results show that the rules and the proposed method are appropriate for rain gauge site selection in urban areas, which is significant for the siting of urban hydrological facilities and infrastructure, such as water gauges.

  3. On the Gibbs phenomenon 1: Recovering exponential accuracy from the Fourier partial sum of a non-periodic analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang; Solomonoff, Alex; Vandeven, Herve

    1992-01-01

It is well known that the Fourier series of an analytic and periodic function, truncated after 2N+1 terms, converges exponentially with N, even in the maximum norm. It is also known that if the function is not periodic, the rate of convergence deteriorates; in particular, there is no convergence in the maximum norm, although the function is still analytic. This is known as the Gibbs phenomenon. Here, we show that the first 2N+1 Fourier coefficients contain enough information about the function that an exponentially convergent approximation (in the maximum norm) can be constructed.
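The phenomenon can be reproduced numerically with f(x) = x on [-π, π), an analytic but non-periodic function whose periodic extension has a jump. The max-norm error of the truncated series does not decay with N, while the mean-square error does:

```python
import numpy as np

def partial_sum(x, N):
    """Truncated Fourier series of f(x) = x on [-pi, pi): 2 * sum (-1)^(n+1) sin(n x)/n."""
    n = np.arange(1, N + 1)
    return 2.0 * np.sum((-1.0) ** (n + 1) * np.sin(np.outer(x, n)) / n, axis=1)

x = np.linspace(-np.pi, np.pi, 4001)
err_64 = np.abs(partial_sum(x, 64) - x)
err_256 = np.abs(partial_sum(x, 256) - x)

max_64, max_256 = err_64.max(), err_256.max()                       # stays O(1)
rms_64, rms_256 = np.sqrt((err_64**2).mean()), np.sqrt((err_256**2).mean())
```

The maximum error sits near the endpoints, where the Gibbs oscillations persist for every N, which is exactly the loss of max-norm convergence the abstract describes.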

  4. Indoor air - assessment: Methods of analysis for environmental carcinogens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, M.R.; Naugle, D.F.; Berry, M.A.

    1990-06-01

The monograph describes, in a general way, published sampling procedures and analytical approaches for known and suspected carcinogens. The primary focus is upon carcinogens found in indoor air, although the methods described are applicable to other media or environments. In cases where there are no published methods for a particular pollutant in indoor air, methods developed for the workplace and for ambient air are included, since they should be adaptable to indoor air. Known and suspected carcinogens have been grouped into six categories for the purposes of this and related work. The categories are radon, asbestos, organic compounds, inorganic species, particles, and non-ionizing radiation. Some methods of assessing exposure that are not specific to any particular pollutant category are covered in a separate section. The report is the fifth in a series of EPA/Environmental Criteria and Assessment Office Monographs.

  5. Modeling Sound Propagation Through Non-Axisymmetric Jets

    NASA Technical Reports Server (NTRS)

    Leib, Stewart J.

    2014-01-01

    A method for computing the far-field adjoint Green's function of the generalized acoustic analogy equations under a locally parallel mean flow approximation is presented. The method is based on expanding the mean-flow-dependent coefficients in the governing equation and the scalar Green's function in truncated Fourier series in the azimuthal direction and a finite difference approximation in the radial direction in circular cylindrical coordinates. The combined spectral/finite difference method yields a highly banded system of algebraic equations that can be efficiently solved using a standard sparse system solver. The method is applied to test cases, with mean flow specified by analytical functions, corresponding to two noise reduction concepts of current interest: the offset jet and the fluid shield. Sample results for the Green's function are given for these two test cases and recommendations made as to the use of the method as part of a RANS-based jet noise prediction code.

  6. Adjustment of Pesticide Concentrations for Temporal Changes in Analytical Recovery, 1992-2006

    USGS Publications Warehouse

    Martin, Jeffrey D.; Stone, Wesley W.; Wydoski, Duane S.; Sandstrom, Mark W.

    2009-01-01

    Recovery is the proportion of a target analyte that is quantified by an analytical method and is a primary indicator of the analytical bias of a measurement. Recovery is measured by analysis of quality-control (QC) water samples that have known amounts of target analytes added ('spiked' QC samples). For pesticides, recovery is the measured amount of pesticide in the spiked QC sample expressed as percentage of the amount spiked, ideally 100 percent. Temporal changes in recovery have the potential to adversely affect time-trend analysis of pesticide concentrations by introducing trends in environmental concentrations that are caused by trends in performance of the analytical method rather than by trends in pesticide use or other environmental conditions. This report examines temporal changes in the recovery of 44 pesticides and 8 pesticide degradates (hereafter referred to as 'pesticides') that were selected for a national analysis of time trends in pesticide concentrations in streams. Water samples were analyzed for these pesticides from 1992 to 2006 by gas chromatography/mass spectrometry. Recovery was measured by analysis of pesticide-spiked QC water samples. Temporal changes in pesticide recovery were investigated by calculating robust, locally weighted scatterplot smooths (lowess smooths) for the time series of pesticide recoveries in 5,132 laboratory reagent spikes; 1,234 stream-water matrix spikes; and 863 groundwater matrix spikes. A 10-percent smoothing window was selected to show broad, 6- to 12-month time scale changes in recovery for most of the 52 pesticides. Temporal patterns in recovery were similar (in phase) for laboratory reagent spikes and for matrix spikes for most pesticides. In-phase temporal changes among spike types support the hypothesis that temporal change in method performance is the primary cause of temporal change in recovery. 
Although temporal patterns of recovery were in phase for most pesticides, recovery in matrix spikes was greater than recovery in reagent spikes for nearly every pesticide. Models of recovery based on matrix spikes are deemed more appropriate for adjusting concentrations of pesticides measured in groundwater and stream-water samples than models based on laboratory reagent spikes because (1) matrix spikes are expected to more closely match the matrix of environmental water samples than are reagent spikes and (2) method performance is often matrix dependent, as was shown by higher recovery in matrix spikes for most of the pesticides. Models of recovery, based on lowess smooths of matrix spikes, were developed separately for groundwater and stream-water samples. The models of recovery can be used to adjust concentrations of pesticides measured in groundwater or stream-water samples to 100 percent recovery to compensate for temporal changes in the performance (bias) of the analytical method.
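The lowess smoothing step described above can be illustrated with a minimal sketch: a locally weighted linear fit with tricube weights and a 10-percent window, applied to simulated recovery data. This is not the USGS implementation; the data, seed, and function name are hypothetical, and the robustness iterations of full lowess are omitted.

```python
import numpy as np

def lowess_smooth(x, y, frac=0.10):
    """Minimal locally weighted (tricube) linear smoother: the core idea of
    lowess, without the robustness iterations of the full procedure."""
    n = len(x)
    k = max(3, int(np.ceil(frac * n)))      # points per local window (10 percent)
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]             # k nearest neighbours of x[i]
        h = d[idx].max() or 1.0             # local window half-width
        w = (1.0 - (d[idx] / h) ** 3) ** 3  # tricube weights
        A = np.column_stack([np.ones(k), x[idx]])
        coef = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y[idx]))
        fitted[i] = coef[0] + coef[1] * x[i]
    return fitted

# Hypothetical recovery series: smooth drift around 100 percent plus noise
rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 100)              # normalised sampling date
truth = 95.0 + 5.0 * np.sin(2.0 * np.pi * x)
recovery = truth + rng.normal(0.0, 2.0, x.size)
smooth = lowess_smooth(x, recovery, frac=0.10)
```

The smoothed curve tracks the slow drift in recovery while averaging out sample-to-sample noise, which is how a model of recovery for concentration adjustment would be read off.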

  7. Data and Analytics to Inform Energy Retrofit of High Performance Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Yang, Le; Hill, David

    Buildings consume more than one-third of the world's primary energy. Reducing energy use in buildings with energy efficient technologies is feasible and is also driven by energy policies such as energy benchmarking, disclosure, rating, and labeling in both developed and developing countries. Current energy retrofits focus on the existing building stock, especially older buildings, but the growing number of new high performance buildings built around the world raises the question of how these buildings perform and whether there are retrofit opportunities to further reduce their energy use. This is a new and unique problem for the building industry. Traditional energy audit or analysis methods are inadequate to look deep into the energy use of high performance buildings. This study aims to tackle this problem with a new holistic approach powered by building performance data and analytics. First, three types of measured data are introduced: time series energy use, building systems operating conditions, and indoor and outdoor environmental parameters. An energy data model based on the ISO Standard 12655 is used to represent the energy use in buildings in a three-level hierarchy. Second, a suite of analytics was proposed to analyze energy use and to identify retrofit measures for high performance buildings. The data-driven analytics are based on monitored data at short time intervals and cover three levels of analysis: energy profiling, benchmarking, and diagnostics. 
    Third, the analytics were applied to a high performance building in California to analyze its energy use and identify retrofit opportunities, including: (1) analyzing patterns of major energy end-use categories at various time scales, (2) benchmarking the whole-building total energy use as well as major end-uses against its peers, (3) benchmarking the power usage effectiveness for the data center, which is the largest electricity consumer in this building, and (4) diagnosing HVAC equipment using detailed time-series operating data. Finally, a few energy efficiency measures were identified for retrofit, and their energy savings were estimated to be 20 percent of the whole-building electricity consumption. Based on the analyses, the building manager took steps to improve the operation of fans, chillers, and data centers, which will lead to actual energy savings. This study demonstrated that there are energy retrofit opportunities for high performance buildings, and that detailed measured building performance data and analytics can help identify retrofit measures, estimate energy savings, and inform decision making during the retrofit process. Challenges of data collection and analytics were also discussed to shape best practices for retrofitting high performance buildings.
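As a rough illustration of the profiling and benchmarking steps described above, the sketch below aggregates hypothetical hourly submeter readings into a whole-building total, computes a mean daily load profile, and derives the data center's power usage effectiveness (PUE). All meter names and values are invented for illustration; the study's actual data model follows ISO 12655.

```python
import numpy as np

rng = np.random.default_rng(3)
hours = 24 * 7                               # one week of hourly readings

# Hypothetical submetered end-uses in kWh per hour (level 2 of a
# three-level end-use hierarchy in the spirit of ISO 12655)
meters = {
    "hvac":       2.0 + 1.5 * rng.random(hours),
    "lighting":   1.0 + 0.5 * rng.random(hours),
    "it_load":    5.0 + 0.5 * rng.random(hours),   # data-center IT equipment
    "dc_cooling": 1.5 + 0.3 * rng.random(hours),   # data-center infrastructure
}

whole_building = sum(meters.values())        # level 1: whole-building total
daily_profile = whole_building.reshape(7, 24).mean(axis=0)  # mean 24-hour shape

# Power usage effectiveness of the data center: total data-center energy
# divided by IT energy (ideal PUE = 1.0)
pue = (meters["it_load"].sum() + meters["dc_cooling"].sum()) / meters["it_load"].sum()
print(f"PUE: {pue:.2f}")
```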

  8. Bibliografica Analitica: Tematica Universitaria. Serie Bibliografica No. 1 (Analytical Bibliography: University Topics. Bibliographic Series No. 1).

    ERIC Educational Resources Information Center

    Rossi Etchelouz, Nelly Yvis, Ed.

    This annotated bibliography lists approximately 30 documents written between 1959 and 1967 relevant to issues and problems at the university level in Latin America. The documents, mainly from Latin America with some from the United States and Europe, concern development, problems, costs, curriculum, planning, and resources. (VM)

  9. Bibliografia Analitica: Tematica Universitaria. Serie Bibliografica No. 2 (Analytical Bibliography: University Topics. Bibliographic Series No. 2).

    ERIC Educational Resources Information Center

    Rossi Etchelouz, Nelly Yvis, Ed.

    This annotated bibliography lists approximately 50 documents written between 1963 and 1971 relevant to issues and problems at the university level in Latin America. The documents, mainly from Latin America with some from the United States and Europe, concern development problems, costs, curriculum, planning, and resources. (VM)

  10. Fall Enrollment, 1978. Research and Planning Series Report No. 79-1.

    ERIC Educational Resources Information Center

    Elliott, Loretta Glaze; And Others

    The first in a series of annual analytical reports prepared by the Missouri Department of Higher Education from the annual state data collection is presented. Tables, charts, and graphs provide numerical data, supplemented by brief analyses, in these areas: enrollment by sector; enrollment trends for fall 1974 through fall 1978; fall enrollment…

  11. Self-force via m-mode regularization and 2+1D evolution. II. Scalar-field implementation on Kerr spacetime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolan, Sam R.; Barack, Leor; Wardell, Barry

    2011-10-15

    This is the second in a series of papers aimed at developing a practical time-domain method for self-force calculations in Kerr spacetime. The key elements of the method are (i) removal of a singular part of the perturbation field with a suitable analytic 'puncture' based on the Detweiler-Whiting decomposition, (ii) decomposition of the perturbation equations in azimuthal (m-)modes, taking advantage of the axial symmetry of the Kerr background, (iii) numerical evolution of the individual m-modes in 2+1 dimensions with a finite-difference scheme, and (iv) reconstruction of the physical self-force from the mode sum. Here we report an implementation of the method to compute the scalar-field self-force along circular equatorial geodesic orbits around a Kerr black hole. This constitutes a first time-domain computation of the self-force in Kerr geometry. Our time-domain code reproduces the results of a recent frequency-domain calculation by Warburton and Barack, but has the added advantage of being readily adaptable to include the backreaction from the self-force in a self-consistent manner. In a forthcoming paper, the third in the series, we apply our method to the gravitational self-force (in the Lorenz gauge).

  12. On the analytic lunar and solar perturbations of a near earth satellite

    NASA Technical Reports Server (NTRS)

    Estes, R. H.

    1972-01-01

    The disturbing function of the moon (sun) is expanded as a sum of products of two harmonic functions, one depending on the position of the satellite and the other on the position of the moon (sun). The harmonic functions depending on the position of the perturbing body are developed into trigonometric series with the ecliptic elements l, l', F, D, and Gamma of the lunar theory, which are nearly linear with respect to time. Perturbations of the elements are in the form of trigonometric series with the ecliptic lunar elements and the equatorial elements omega and Omega of the satellite, so that analytic integration is simple and the results are accurate over a long period of time.

  13. Removal of emerging micropollutants from water using cyclodextrin.

    PubMed

    Nagy, Zsuzsanna Magdolna; Molnár, Mónika; Fekete-Kertész, Ildikó; Molnár-Perl, Ibolya; Fenyvesi, Éva; Gruiz, Katalin

    2014-07-01

    Small scale laboratory experiment series were performed to study the suitability of a cyclodextrin-based sorbent (β-cyclodextrin bead polymer, BCDP) for modelling the removal of micropollutants from drinking water and purified waste water, using simulated inflow test solutions containing target analytes (ibuprofen, naproxen, ketoprofen, bisphenol-A, diclofenac, β-estradiol, ethinylestradiol, estriol, and cholesterol at the 2-6 μg/L level). This work focused on the preliminary evaluation of BCDP as a sorbent in two different model systems (filtration and fluidization) applied for risk reduction of emerging micropollutants. For comparison, different filter systems combined with various sorbents (commercial filter and activated carbon) were applied and evaluated in the filtration experiment series. The spiked test solution (inflow) and the treated outflows were characterized by an integrated methodology, including chemical analytical methods (gas chromatography-tandem mass spectrometry, GC-MS/MS) and various environmental toxicity tests, to determine the efficiency and selectivity of the applied sorbents. Under experimental conditions the cyclodextrin-based filters used for purification of drinking water were in most cases able to absorb more than 90% of the bisphenol-A and of the estrogenic compounds. Both the analytical chemistry and toxicity results showed efficient elimination of these pollutants; in particular, the toxicity of the filtrate decreased considerably. A laboratory experiment modelling post-purification of waste water was also performed, applying fluidization technology with the β-cyclodextrin bead polymer. The BCDP efficiently removed most of the micropollutants from the spiked test solution, especially the bisphenol-A (94%) and the hormones (87-99%). The results confirmed that the BCDP-containing sorbents provide a good solution to water quality problems and are able to decrease the load and risk posed by micropollutants to water systems. 
Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Bound and resonance states of positronic copper atoms

    NASA Astrophysics Data System (ADS)

    Yamashita, Takuma; Umair, Muhammad; Kino, Yasushi

    2017-10-01

    We report a theoretical calculation for the bound and S-wave resonance states of the positronic copper atom (e+Cu). A positron is a positively charged particle; therefore, a positronic atom has an attractive correlation between the positron and electron. A Gaussian expansion method is adopted to directly describe this correlation as well as the strong repulsive interaction with the nucleus. The correlation between the positron and electron is much more important than that between electrons in the analogous system Cu-, although the formation of a positronium (Ps) in e+Cu is not explicitly expressed in the ground state structure. Resonance states are calculated with a complex scaling method and identified above the first excited state of the copper atom. Resonance states below Ps (n = 2) + Cu+, classified as a dipole series, show agreement with a simple analytical law. Comparison of the resonance energies and widths of e+Cu with those of e+K, whose host-atom potential energy resembles that of e+Cu, reveals that the positions of the resonances for the e+Cu dipole series deviate equally from those of e+K.

  15. Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations

    NASA Technical Reports Server (NTRS)

    Zhao, Yiyuan; Chen, Robert T. N.

    1996-01-01

    This paper presents a summary of a series of recent analytical studies conducted to investigate One-Engine-Inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, Continued TakeOff (CTO), Rejected TakeOff (RTO), Balked Landing (BL), and Continued Landing (CL) are investigated for both Vertical-TakeOff-and-Landing (VTOL) and Short-TakeOff-and-Landing (STOL) terminal-area operations. The formulation of the nonlinear optimal control problems with considerations for realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field length concept used for STOL operations.

  16. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Godtliebsen, F.; Rue, H.

    2012-01-01

    The paper proposes an approach to the assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximation of the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, yielding a Beta-distributed probability density for age estimates along the length of a proxy archive. In the general situation of uncertainties in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice cores and two lake/marine sediment cores, representing typical examples of paleoproxy archives with age models based on tie points of mixed origin.
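The analytical result quoted above, Beta-distributed ages for a linear accumulation model between absolutely dated tie points, can be checked with a small Monte Carlo sketch: conditioning independent Gamma increments on their total makes the normalised cumulative ages Beta-distributed at each depth. The layer count, shape parameter, and simulation size below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Accumulation between two absolutely dated tie points as a random Gamma
# process: independent Gamma increments per layer, then normalised by the
# total so the tie points are pinned to ages 0 and 1. The normalised
# cumulative age at a fixed depth is then Beta-distributed.
n_layers, n_sim = 100, 5000
shape = 4.0                                   # assumed prior shape per layer
inc = rng.gamma(shape, 1.0, (n_sim, n_layers))
cum = np.cumsum(inc, axis=1)
ages = cum / cum[:, -1:]                      # tie points pinned at ages 0 and 1

lo, hi = np.percentile(ages[:, 49], [2.5, 97.5])  # mid-core depth
print(f"95% interval on normalised age at mid-depth: [{lo:.3f}, {hi:.3f}]")
```

The empirical interval at mid-depth matches the Beta(200, 200) quantiles implied by the analytical solution for these assumed parameters.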

  17. Solutions for transients in arbitrarily branching cables: III. Voltage clamp problems.

    PubMed

    Major, G

    1993-07-01

    Branched cable voltage recording and voltage clamp analytical solutions derived in two previous papers are used to explore practical issues concerning voltage clamp. Single exponentials can be fitted reasonably well to the decay phase of clamped synaptic currents, although they contain many underlying components. The effective time constant depends on the fit interval. The smoothing effects on synaptic clamp currents of dendritic cables and series resistance are explored with a single cylinder + soma model, for inputs with different time courses. "Soma" and "cable" charging currents cannot be separated easily when the soma is much smaller than the dendrites. Subtractive soma capacitance compensation and series resistance compensation are discussed. In a hippocampal CA1 pyramidal neurone model, voltage control at most dendritic sites is extremely poor. Parameter dependencies are illustrated. The effects of series resistance compound those of dendritic cables and depend on the "effective capacitance" of the cell. Plausible combinations of parameters can cause order-of-magnitude distortions to clamp current waveform measures of simulated Schaeffer collateral inputs. These voltage clamp problems are unlikely to be solved by the use of switch clamp methods.

  18. Analysis of statistical and standard algorithms for detecting muscle onset with surface electromyography.

    PubMed

    Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A

    2017-01-01

    Analyzing the timing of muscle activity is a commonly applied method for understanding how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope, and sample entropy were the established methods evaluated, while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60-90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms performs equally well when the time series has multiple bursts of muscle activity.
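The Bayesian changepoint idea can be sketched in a few lines for the simplest case: a single mean shift in Gaussian noise with a flat prior over the onset sample. This is only a toy stand-in for the study's algorithm (whose p0 prior parameter and posterior-probability thresholds are not reproduced here); the signal and onset location are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated rectified-EMG-like series: baseline noise, then a mean shift
# of 2 standard deviations at the (known) onset sample 120
n, onset = 200, 120
x = rng.normal(0.0, 1.0, n)
x[onset:] += 2.0

# Posterior over the onset location with a flat prior and known unit
# variance; segment means are profiled out (a crude plug-in approximation
# to the full marginal likelihood used by proper Bayesian changepoint methods)
log_post = np.full(n, -np.inf)
for tau in range(5, n - 5):                  # margins avoid degenerate segments
    pre, post = x[:tau], x[tau:]
    log_post[tau] = -0.5 * (np.sum((pre - pre.mean()) ** 2)
                            + np.sum((post - post.mean()) ** 2))
post_prob = np.exp(log_post - log_post.max())
post_prob /= post_prob.sum()                 # normalised posterior over onsets
tau_hat = int(np.argmax(post_prob))
print("estimated onset sample:", tau_hat)
```

The posterior concentrates near the true onset; thresholding its cumulative mass is one way to mimic the 60-90% posterior-probability rules evaluated in the study.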

  19. Full-Field Stress Determination Around Circular Discontinuity in a Tensile-Loaded Plate using x-displacements Only

    NASA Astrophysics Data System (ADS)

    Baek, Tae Hyun; Chung, Tae Jin; Panganiban, Henry

    The significant effects of stress raisers demand well-defined evaluation techniques to accurately determine the stress along a geometric boundary. A simple and accurate method for the determination of stress concentration around a circular geometric discontinuity in a tensile-loaded plate is illustrated. The method is based on the least-squares technique, mapping functions, and a complex power series representation (Laurent series) of the stress functions for the calculation of tangential stress around the hole. Traction-free conditions were satisfied at the geometric discontinuity using conformal mapping and analytic continuation. In this study, we use only a relatively small amount of x-component displacement data from points away from the discontinuity of concern, with their respective coordinates. With this information we can easily obtain full-field stresses at the edge of the geometric discontinuity. Excellent results were obtained with the number of terms of the power series expansions set to m = 1. The maximum stress concentrations calculated with the present method and with FEM using ANSYS agree to within one percent. An experimental advantage of the method is the use of a relatively small amount of data that can be conveniently measured away from the edge. Moreover, the small amount of measured input data needed makes the approach suitable for applications such as the multi-parameter concept used to obtain stress intensity factors from measured data. Laser speckle interferometry and moiré interferometry are also potential future related fields, since the optical system for one-directional measurement is much simpler.

  20. Phase coupling and synchrony in the spatiotemporal dynamics of muskrat and mink populations across Canada

    PubMed Central

    Haydon, D. T.; Stenseth, N. C.; Boyce, M. S.; Greenwood, P. E.

    2001-01-01

    Population ecologists have traditionally focused on the patterns and causes of population variation in the temporal domain for which a substantial body of practical analytic techniques have been developed. More recently, numerous studies have documented how populations may fluctuate synchronously over large spatial areas; analyses of such spatially extended time-series have started to provide additional clues regarding the causes of these population fluctuations and explanations for their synchronous occurrence. Here, we report on the development of a phase-based method for identifying coupling between temporally coincident but spatially distributed cyclic time-series, which we apply to the numbers of muskrat and mink recorded at 81 locations across Canada. The analysis reveals remarkable parallel clines in the strength of coupling between proximate populations of both species—declining from west to east—together with a corresponding increase in observed synchrony between these populations the further east they are located. PMID:11606729

  1. Promoting Awareness of Key Resources for Evidence-Informed Decision-making in Public Health: An Evaluation of a Webinar Series about Knowledge Translation Methods and Tools

    PubMed Central

    Yost, Jennifer; Mackintosh, Jeannie; Read, Kristin; Dobbins, Maureen

    2016-01-01

    The National Collaborating Centre for Methods and Tools (NCCMT) has developed several resources to support evidence-informed decision-making – the process of distilling and disseminating best available evidence from research, context, and experience – and knowledge translation, applying best evidence in practice. One such resource, the Registry of Methods and Tools, is a free online database of 195 methods and tools to support knowledge translation. Building on the identification of webinars as a strategy to improve the dissemination of information, NCCMT launched the Spotlight on Knowledge Translation Methods and Tools webinar series in 2012 to promote awareness and use of the Registry. To inform continued implementation of this webinar series, NCCMT conducted an evaluation of the series’ potential to improve awareness and use of the methods/tools within the Registry, as well as identify areas for improvement and “what worked.” For this evaluation, the following data were analyzed: electronic follow-up surveys administered immediately following each webinar; an additional electronic survey administered 6 months after two webinars; and Google Analytics for each webinar. As of November 2015, there have been 22 webinars conducted, reaching 2048 people in multiple sectors across Canada and around the world. Evaluation results indicate that the webinars increase awareness about the Registry and stimulate use of the methods/tools. Although webinar attendees were significantly less likely to have used the methods/tools 6 months after webinars, this may be attributed to the lack of an identified opportunity in their work to use the method/tool. 
Despite technological challenges and requests for further examples of how the methods/tools have been used, there is overwhelming positive feedback that the format, presenters, content, and interaction across webinars “worked.” This evaluation supports that webinars are a valuable strategy for increasing awareness and stimulating use of resources for evidence-informed decision-making and knowledge translation in public health practice. PMID:27148518

  2. A reanalysis of cluster randomized trials showed interrupted time-series studies were valuable in health system evaluation.

    PubMed

    Fretheim, Atle; Zhang, Fang; Ross-Degnan, Dennis; Oxman, Andrew D; Cheyne, Helen; Foy, Robbie; Goodacre, Steve; Herrin, Jeph; Kerse, Ngaire; McKinlay, R James; Wright, Adam; Soumerai, Stephen B

    2015-03-01

    There is often substantial uncertainty about the impacts of health system and policy interventions. Despite that, randomized controlled trials (RCTs) are uncommon in this field, partly because experiments can be difficult to carry out. An alternative method for impact evaluation is the interrupted time-series (ITS) design. Little is known, however, about how results from the two methods compare. Our aim was to explore whether ITS studies yield results that differ from those of randomized trials. We conducted single-arm ITS analyses (segmented regression) based on data from the intervention arm of cluster randomized trials (C-RCTs), that is, discarding control arm data. Secondarily, we included the control group data in the analyses, by subtracting control group data points from intervention group data points, thereby constructing a time series representing the difference between the intervention and control groups. We compared the results from the single-arm and controlled ITS analyses with results based on conventional aggregated analyses of trial data. The findings were largely concordant, yielding effect estimates with overlapping 95% confidence intervals (CI) across different analytical methods. However, our analyses revealed the importance of a concurrent control group and of taking baseline and follow-up trends into account in the analysis of C-RCTs. The ITS design is valuable for evaluation of health systems interventions, both when RCTs are not feasible and in the analysis and interpretation of data from C-RCTs. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
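Segmented regression, the single-arm ITS analysis named above, fits an intercept, a baseline trend, a level change, and a trend change at the intervention point. The sketch below recovers simulated effects with ordinary least squares; the data, effect sizes, and intervention month are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monthly outcome: baseline level 50, slope +0.2/month, then a
# level drop of 8 units and a slope change of -0.3 at the intervention
n, t0 = 48, 24                          # 48 months, intervention at month 24
t = np.arange(n)
level = (t >= t0).astype(float)         # 0 before, 1 after the intervention
trend = np.where(t >= t0, t - t0, 0)    # months elapsed since intervention
y = 50 + 0.2 * t - 8.0 * level - 0.3 * trend + rng.normal(0.0, 1.0, n)

# Segmented regression: y = b0 + b1*t + b2*level + b3*trend + error
X = np.column_stack([np.ones(n), t, level, trend])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b3 = beta
print(f"estimated level change: {b2:.2f}, slope change: {b3:.2f}")
```

The controlled variant described in the abstract applies the same model to the difference between intervention and control series, which is one way the baseline and follow-up trends of the control arm are taken into account.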

  3. Rapid computation of directional wellbore drawdown in a confined aquifer via Poisson resummation

    NASA Astrophysics Data System (ADS)

    Blumenthal, Benjamin J.; Zhan, Hongbin

    2016-08-01

    We have derived a rapidly computed analytical solution for drawdown caused by a partially or fully penetrating directional wellbore (vertical, horizontal, or slant) via Green's function method. The mathematical model assumes an anisotropic, homogeneous, confined, box-shaped aquifer. Any dimension of the box can have one of six possible boundary conditions: 1) both sides no-flux; 2) one side no-flux, one side constant-head; 3) both sides constant-head; 4) one side no-flux; 5) one side constant-head; 6) free boundary conditions. The solution has been optimized for rapid computation via Poisson resummation, derivation of convergence rates, and numerical optimization of integration techniques. Upon application of the Poisson resummation method, we were able to derive two sets of solutions with inverse convergence rates, namely an early-time rapidly convergent series (solution-A) and a late-time rapidly convergent series (solution-B). From this work we were able to link the Green's function method (solution-B) back to image well theory (solution-A). We then derived an equation defining the time at which the convergence rates of solution-A and solution-B are the same, which we termed the switch time. Utilizing the more rapidly convergent solution at the appropriate time, we obtained rapid convergence at all times. We have also shown that one may truncate each of the three infinite series of the three-dimensional solution to 11 terms and still maintain a maximum relative error of less than 10^-14.
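The mechanism of two series with inverse convergence rates and a switch time can be illustrated on the Jacobi theta function, where Poisson resummation gives theta(t) = theta(1/t)/sqrt(t): the direct series converges rapidly at late time and the resummed series at early time, with equal rates at t = 1. This is only an analogue of the paper's solution-A/solution-B structure, not the aquifer solution itself.

```python
import math

def theta_direct(t, terms=50):
    """Jacobi theta series 1 + 2*sum_{n>=1} exp(-pi n^2 t): converges
    rapidly for large t (the analogue of the late-time solution-B)."""
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t)
                           for n in range(1, terms + 1))

def theta_resummed(t, terms=50):
    """Poisson-resummed form theta(t) = theta(1/t) / sqrt(t): converges
    rapidly for small t (the analogue of the early-time, image-well-like
    solution-A)."""
    return theta_direct(1.0 / t, terms) / math.sqrt(t)

def theta(t, terms=8):
    """Use whichever series converges faster; the rates cross at t = 1,
    the analogue of the paper's switch time."""
    return theta_direct(t, terms) if t >= 1.0 else theta_resummed(t, terms)
```

At t = 0.01 the direct series needs dozens of terms while the resummed form is accurate with one; switching at t = 1 gives fast convergence everywhere, mirroring the paper's strategy.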

  4. Regular and singular pulse and front solutions and possible isochronous behavior in the short-pulse equation: Phase-plane, multi-infinite series and variational approaches

    NASA Astrophysics Data System (ADS)

    Gambino, G.; Tanriver, U.; Guha, P.; Choudhury, A. Ghose; Choudhury, S. Roy

    2015-02-01

    In this paper we employ three recent analytical approaches to investigate the possible classes of traveling wave solutions of some members of a family of so-called short-pulse equations (SPE). A recent, novel application of phase-plane analysis is first employed to show the existence of breaking kink wave solutions in certain parameter regimes. Secondly, smooth traveling waves are derived using a recent technique to derive convergent multi-infinite series solutions for the homoclinic (heteroclinic) orbits of the traveling-wave equations for the SPE equation, as well as for its generalized version with arbitrary coefficients. These correspond to pulse (kink or shock) solutions respectively of the original PDEs. We perform many numerical tests in different parameter regimes to pinpoint real saddle equilibrium points of the corresponding traveling-wave equations, as well as ensure simultaneous convergence and continuity of the multi-infinite series solutions for the homoclinic/heteroclinic orbits anchored by these saddle points. Unlike the majority of unaccelerated convergent series, high accuracy is attained with relatively few terms. Finally, variational methods are employed to generate families of both regular and embedded solitary wave solutions for the SPE PDE. The technique for obtaining the embedded solitons incorporates several recent generalizations of the usual variational technique and it is thus topical in itself. One unusual feature of the solitary waves derived here is that we are able to obtain them in analytical form (within the assumed ansatz for the trial functions). Thus, a direct error analysis is performed, showing the accuracy of the resulting solitary waves. Given the importance of solitary wave solutions in wave dynamics and information propagation in nonlinear PDEs, as well as the fact that not much is known about solutions of the family of generalized SPE equations considered here, the results obtained are both new and timely.

  5. Finite Element Analysis of Geodesically Stiffened Cylindrical Composite Shells Using a Layerwise Theory

    NASA Technical Reports Server (NTRS)

    Gerhard, Craig Steven; Gurdal, Zafer; Kapania, Rakesh K.

    1996-01-01

    Layerwise finite element analyses of geodesically stiffened cylindrical shells are presented. The layerwise laminate theory of Reddy (LWTR) is developed and adapted to circular cylindrical shells. The Ritz variational method is used to develop an analytical approach for studying the buckling of simply supported geodesically stiffened shells with discrete stiffeners. This method utilizes a Lagrange multiplier technique to attach the stiffeners to the shell. The development of the layerwise shells couples a one-dimensional finite element through the thickness with a Navier solution that satisfies the boundary conditions. The buckling results from the Ritz discrete analytical method are compared with smeared buckling results and with NASA Testbed finite element results. The development of layerwise shell and beam finite elements is presented and these elements are used to perform the displacement field, stress, and first-ply failure analyses. The layerwise shell elements are used to model the shell skin and the layerwise beam elements are used to model the stiffeners. This arrangement allows the beam stiffeners to be assembled directly into the global stiffness matrix. A series of analytical studies are made to compare the response of geodesically stiffened shells as a function of loading, shell geometry, shell radii, shell laminate thickness, stiffener height, and geometric nonlinearity. Comparisons of the structural response of geodesically stiffened shells, axial and ring stiffened shells, and unstiffened shells are provided. In addition, interlaminar stress results near the stiffener intersection are presented. First-ply failure analyses for geodesically stiffened shells utilizing the Tsai-Wu failure criterion are presented for a few selected cases.

  6. Simultaneous GC–EI-MS Determination of Δ9-Tetrahydrocannabinol, 11-Hydroxy-Δ9-Tetrahydrocannabinol, and 11-nor-9-Carboxy-Δ9-Tetrahydrocannabinol in Human Urine Following Tandem Enzyme-Alkaline Hydrolysis

    PubMed Central

    Abraham, Tsadik T.; Lowe, Ross H.; Pirnay, Stephane O.; Darwin, William D.; Huestis, Marilyn A.

    2009-01-01

    A sensitive and specific method for extraction and quantification of Δ9-tetrahydrocannabinol (THC), 11-hydroxy-Δ9-tetrahydrocannabinol (11-OH-THC), and 11-nor-9-carboxy-Δ9-tetrahydrocannabinol (THCCOOH) in human urine was developed and fully validated. To ensure complete hydrolysis of conjugates and capture of total analyte content, urine samples were hydrolyzed by two methods in series. Initial hydrolysis was with Escherichia coli β-glucuronidase (Type IX–A) followed by a second hydrolysis utilizing 10N NaOH. Specimens were adjusted to pH 5−6.5, treated with acetonitrile to precipitate protein, and centrifuged, and the supernatants were subjected to solid-phase extraction. Extracted analytes were derivatized with BSTFA and quantified by gas chromatography–mass spectrometry with electron impact ionization. Standard curves were linear from 2.5 to 300 ng/mL. Extraction efficiencies were 57.0−59.3% for THC, 68.3−75.5% for 11-OH-THC, and 71.5−79.7% for THCCOOH. Intra- and interassay precision across the linear range of the assay ranged from 0.1 to 4.3% and 2.6 to 7.4%, respectively. Accuracy was within 15% of target concentrations. This method was applied to the analysis of urine specimens collected from individuals participating in controlled administration cannabis studies, and it may be a useful analytical procedure for determining recency of cannabis use in forensic toxicology applications. PMID:17988462

  7. A Guided Tour of Mathematical Methods

    NASA Astrophysics Data System (ADS)

    Snieder, Roel

    2009-04-01

    1. Introduction; 2. Dimensional analysis; 3. Power series; 4. Spherical and cylindrical co-ordinates; 5. The gradient; 6. The divergence of a vector field; 7. The curl of a vector field; 8. The theorem of Gauss; 9. The theorem of Stokes; 10. The Laplacian; 11. Conservation laws; 12. Scale analysis; 13. Linear algebra; 14. The Dirac delta function; 15. Fourier analysis; 16. Analytic functions; 17. Complex integration; 18. Green's functions: principles; 19. Green's functions: examples; 20. Normal modes; 21. Potential theory; 22. Cartesian tensors; 23. Perturbation theory; 24. Asymptotic evaluation of integrals; 25. Variational calculus; 26. Epilogue, on power and knowledge; References.

  8. Contact problem on indentation of an elastic half-plane with an inhomogeneous coating by a flat punch in the presence of tangential stresses on a surface

    NASA Astrophysics Data System (ADS)

    Volkov, Sergei S.; Vasiliev, Andrey S.; Aizikovich, Sergei M.; Sadyrin, Evgeniy V.

    2018-05-01

Indentation of an elastic half-plane with a functionally graded coating by a rigid flat punch is studied. The half-plane is additionally subjected to distributed tangential stresses, which are represented in the form of a Fourier series. The problem is reduced to the solution of two dual integral equations over even and odd functions describing the distribution of the unknown normal contact stresses. The solutions of these dual integral equations are constructed by the bilateral asymptotic method. Approximate analytical expressions for the normal contact stresses are provided.

  9. The statistical theory of the fracture of fragile bodies. Part 2: The integral equation method

    NASA Technical Reports Server (NTRS)

    Kittl, P.

    1984-01-01

It is demonstrated how, with the aid of a bending test, the Weibull fracture risk function can be determined (without postulating its analytical form) by solving an integral equation. The respective solutions for beams of rectangular and circular cross section are given. In the first case the function is expressed as an algorithm and in the second in the form of a series. Since the cumulative fracture probability appearing in the solution of the integral equation must be continuous and monotonically increasing, any case of fabrication or selection of samples can be treated.

  10. Visibility graph approach to exchange rate series

    NASA Astrophysics Data System (ADS)

    Yang, Yue; Wang, Jianbo; Yang, Huijie; Mang, Jingshi

    2009-10-01

By means of a visibility graph, we investigate six important exchange rate series. It is found that the series convert into scale-free and hierarchically structured networks. The relationship between the scaling exponents of the degree distributions and the Hurst exponents obeys the analytical prediction for fractional Brownian motions. The visibility graph can thus be used to obtain reliable values of the Hurst exponents of the series. These characteristics are explained using the multifractal structures of the series. The exchange rates of the Euro and the Japanese Yen are widely used to evaluate risk and to estimate trends in speculative investments. Interestingly, the hierarchies of the visibility graphs for the exchange rate series of these two currencies are significantly weaker than those of the other series.
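The visibility-graph mapping this record relies on can be sketched in a few lines: two samples are linked when the straight line between them clears every intermediate sample. The toy series below is illustrative, not an actual exchange rate.

```python
def visibility_graph(series):
    """Natural visibility graph: nodes i, j are linked when every
    intermediate sample lies strictly below the line joining them."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[i] + (series[j] - series[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

# Toy series; the degree sequence of the resulting graph is what scaling
# analyses (degree distribution vs. Hurst exponent) are run on.
s = [3.0, 1.0, 2.0, 0.5, 4.0]
e = visibility_graph(s)
```

This O(n²) form is the simplest correct implementation; published studies use faster divide-and-conquer variants for long series.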

  11. Long-term, high-frequency water quality monitoring in an agricultural catchment: insights from spectral analysis

    NASA Astrophysics Data System (ADS)

    Aubert, Alice; Kirchner, James; Faucheux, Mikael; Merot, Philippe; Gascuel-Odoux, Chantal

    2013-04-01

The choice of sampling frequency is a key issue in the design and operation of environmental observatories. This choice creates a spectral window (or temporal filter) that highlights some timescales and processes and de-emphasizes others (1). New online measurement technologies can monitor surface water quality almost continuously, allowing the creation of very rich time series. The question of how best to analyze such detailed temporal datasets is an important issue in environmental monitoring. In the present work, we studied water quality data from the AgrHys long-term hydrological observatory (located at Kervidy-Naizin, Western France) sampled at daily and 20-minute time scales. Manual sampling has provided 12 years of daily measurements of nitrate, dissolved organic carbon (DOC), chloride and sulfate (2), and 3 years of daily measurements of about 30 other solutes. In addition, a UV-spectrometry probe (Spectrolyser) provides one year of 20-minute measurements for nitrate and DOC. Spectral analysis of the daily water quality time series reveals that our intensively farmed catchment exhibits universal 1/f scaling (power spectrum slope of -1) for a large number of solutes, confirming and extending the earlier discovery of universal 1/f scaling in the relatively pristine Plynlimon catchment (3). 1/f time series confound conventional methods for assessing the statistical significance of trends. Indeed, conventional methods assume that there is a clear separation of scales between the signal (the trend line) and the noise (the scatter around the line). This is not true for 1/f noise, and as a result these methods overestimate the occurrence of significant trends. Our results raise the possibility that 1/f scaling is widespread in water quality time series, thus posing fundamental challenges to water quality trend analysis.
Power spectra of the 20-minute nitrate and DOC time series show 1/f scaling at frequencies below 1/day, consistent with the longer-term daily measurements. At higher frequencies, however, the spectra steepen to a slope of -2, indicating that at sub-daily time scales the concentration time series become relatively smooth. However, at time scales shorter than 2-3 hours, the spectra flatten to a slope near zero (white noise), reflecting analytical noise in the measurement probe. This result demonstrates that measuring water quality dynamics at high frequencies also requires high measurement precision, because as measurements are taken closer and closer together in time, the real-world differences that must be measured between adjacent measurements become smaller and smaller. Our results highlight the importance of quantifying the spectral properties of analytical noise in environmental measurements, to identify frequency ranges where measurements could be dominated by analytical noise instead of real-world signals. 1. Kirchner, J.W., Feng, X., Neal, C., Robson, A.J., 2004. The fine structure of water-quality dynamics: the (high-frequency) wave of the future. Hydrological Processes, 18(7): 1353-1359 2. Aubert, A.H. et al., 2012. The chemical signature of a livestock farming catchment: synthesis from a high-frequency multi-element long term monitoring. HESSD, 9(8): 9715 - 9741 3. Kirchner, J.W. and Neal, C., 2013. Universal fractal scaling in water quality dynamics across the periodic table. Manuscript in review.
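The spectral-slope estimation underlying the 1/f result can be sketched as a periodogram plus a log-log fit. The synthetic pink noise below stands in for the water quality series; it is generated by spectral shaping, not taken from the observatory data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096

# Synthesize 1/f ("pink") noise by shaping the spectrum of white noise:
# amplitude ~ f^(-1/2) gives power ~ 1/f.
freqs = np.fft.rfftfreq(n, d=1.0)
spectrum = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
spectrum[1:] /= np.sqrt(freqs[1:])
spectrum[0] = 0.0
x = np.fft.irfft(spectrum, n)

# Periodogram and log-log slope estimate
power = np.abs(np.fft.rfft(x)) ** 2
mask = freqs > 0
slope, _ = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)
print(round(slope, 2))   # close to -1 for 1/f scaling
```

With real data one would average the periodogram over frequency bands (or use Welch's method) before fitting, and fit separate slopes above and below the break frequencies described in the abstract.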

  12. In Search of a Pony: Sources, Methods, Outcomes, and Motivated Reasoning.

    PubMed

    Stone, Marc B

    2018-05-01

    It is highly desirable to be able to evaluate the effect of policy interventions. Such evaluations should have expected outcomes based upon sound theory and be carefully planned, objectively evaluated and prospectively executed. In many cases, however, assessments originate with investigators' poorly substantiated beliefs about the effects of a policy. Instead of designing studies that test falsifiable hypotheses, these investigators adopt methods and data sources that serve as little more than descriptions of these beliefs in the guise of analysis. Interrupted time series analysis is one of the most popular forms of analysis used to present these beliefs. It is intuitively appealing but, in most cases, it is based upon false analogies, fallacious assumptions and analytical errors.

  13. Detection of interference phase by digital computation of quadrature signals in homodyne laser interferometry.

    PubMed

    Rerucha, Simon; Buchta, Zdenek; Sarbort, Martin; Lazar, Josef; Cip, Ondrej

    2012-10-19

We have proposed an approach to interference phase extraction in homodyne laser interferometry. The method employs a series of computational steps to reconstruct the quadrature-detection signals from the interference signal of a non-polarising interferometer sampled by a simple photodetector. The trade-off for this reduced optical complexity is the need for a laser beam with frequency modulation capability. The method is derived analytically, and its validity and performance are verified experimentally. It has proven to be a feasible alternative to traditional homodyne detection, since it performs with comparable accuracy; it is attractive especially where the complexity of the optical setup is the principal issue and modulation of the laser beam is not a heavy burden (e.g., in multi-axis sensors or laser-diode-based systems).
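The quadrature-detection step the method feeds into reduces to a two-argument arctangent. This sketch assumes the in-phase and quadrature components have already been reconstructed, which is the part the paper's computational procedure provides.

```python
import math

def interference_phase(i_signal, q_signal):
    """Recover the interference phase (radians, wrapped to (-pi, pi])
    from quadrature components I = A*cos(phi), Q = A*sin(phi)."""
    return math.atan2(q_signal, i_signal)

# One interferometric sample with a known phase of 1.0 rad
phi = interference_phase(math.cos(1.0), math.sin(1.0))
```

A displacement measurement would then unwrap successive phase samples and scale by λ/(4π) for a double-pass interferometer.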

  14. Optimization and single-laboratory validation of a method for the determination of flavonolignans in milk thistle seeds by high-performance liquid chromatography with ultraviolet detection.

    PubMed

    Mudge, Elizabeth; Paley, Lori; Schieber, Andreas; Brown, Paula N

    2015-10-01

Seeds of milk thistle, Silybum marianum (L.) Gaertn., are used for treatment and prevention of liver disorders and were identified as a high-priority ingredient requiring a validated analytical method. An AOAC International expert panel reviewed existing methods and made recommendations concerning method optimization prior to validation. A series of extraction and separation studies was undertaken on the selected method for determining flavonolignans in milk thistle seeds and finished products to address the review panel's recommendations. Once optimized, a single-laboratory validation study was conducted. The method was assessed for repeatability, accuracy, selectivity, LOD, LOQ, analyte stability, and linearity. Flavonolignan content ranged from 1.40 to 52.86% in raw materials and dry finished products and from 36.16 to 1570.7 μg/mL in liquid tinctures. Repeatability for the individual flavonolignans in raw materials and finished products ranged from 1.03 to 9.88% RSDr, with HorRat values between 0.21 and 1.55. Calibration curves for all flavonolignan concentrations had correlation coefficients of >99.8%. The LODs for the flavonolignans ranged from 0.20 to 0.48 μg/mL at 288 nm. Based on the results of this single-laboratory validation, the method is suitable for the quantitation of the six major flavonolignans in milk thistle raw materials and finished products, as well as in multicomponent products containing dandelion, schizandra berry, and artichoke extracts. It is recommended that the method be granted First Action Official Method status by AOAC International.
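As a hedged sketch of how LOD/LOQ figures like those reported can be derived from a calibration line, the block below uses the common ICH-style formulas LOD = 3.3σ/S and LOQ = 10σ/S (σ = residual standard deviation, S = slope). The data are invented, and these formulas are not necessarily the ones used in the study.

```python
import numpy as np

# Invented calibration data for the sketch
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])      # ug/mL
resp = np.array([0.11, 0.20, 0.52, 1.01, 2.02])   # detector response

slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                     # 2 fitted parameters

lod = 3.3 * sigma / slope                         # detection limit, ug/mL
loq = 10.0 * sigma / slope                        # quantitation limit, ug/mL
```

An alternative, also common, is to estimate σ from the standard deviation of blank responses rather than calibration residuals.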

  15. Validation and long-term evaluation of a modified on-line chiral analytical method for therapeutic drug monitoring of (R,S)-methadone in clinical samples.

    PubMed

    Ansermot, Nicolas; Rudaz, Serge; Brawand-Amey, Marlyse; Fleury-Souverain, Sandrine; Veuthey, Jean-Luc; Eap, Chin B

    2009-08-01

Matrix effects, which represent an important issue in liquid chromatography coupled to mass spectrometry or tandem mass spectrometry detection, should be closely assessed during method development. In quantitative analysis, the use of a stable isotope-labelled internal standard with physico-chemical properties and ionization behaviour similar to those of the analyte is recommended. In this paper, we report an example of choosing a co-eluting deuterated internal standard to compensate for short-term and long-term matrix effects in chiral (R,S)-methadone plasma quantification. The method was fully validated over a concentration range of 5-800 ng/mL for each methadone enantiomer, with satisfactory relative bias (-1.0 to 1.0%), repeatability (0.9-4.9%) and intermediate precision (1.4-12.0%). From the validation results, a control chart process was established over 52 series of routine analyses, using both the intermediate precision standard deviation and the FDA acceptance criteria. The results of routine quality control samples generally fell within +/-15% of the target value, and mainly within the two-standard-deviation interval, illustrating the long-term stability of the method. The intermediate precision variability estimated during method validation was found to be consistent with the routine use of the method. During this period, 257 trough-concentration and 54 peak-concentration plasma samples from patients undergoing (R,S)-methadone treatment were successfully analysed for routine therapeutic drug monitoring.
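The routine-QC acceptance logic described (an FDA-style ±15% criterion around the target plus a two-standard-deviation control limit) can be sketched as follows; the numeric values are illustrative, not from the paper.

```python
def qc_acceptable(measured, nominal, sd, pct_limit=0.15, n_sd=2.0):
    """Check a QC result against two criteria:
    - within +/-15% of the nominal concentration (FDA-style acceptance)
    - within n_sd validation standard deviations (control chart limit)
    Returns a (within_pct, within_sd) pair of booleans."""
    deviation = abs(measured - nominal)
    within_pct = deviation <= pct_limit * nominal
    within_sd = deviation <= n_sd * sd
    return within_pct, within_sd

# A QC sample at nominal 200 ng/mL with a validation SD of 9 ng/mL
ok_pct, ok_sd = qc_acceptable(measured=214.0, nominal=200.0, sd=9.0)
```

A result can pass the ±15% criterion while breaching the tighter 2-SD control limit, which is exactly the distinction a control chart is meant to surface.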

  16. Determination of the distribution constants of aromatic compounds and steroids in biphasic micellar phosphonium ionic liquid/aqueous buffer systems by capillary electrokinetic chromatography.

    PubMed

    Lokajová, Jana; Railila, Annika; King, Alistair W T; Wiedmer, Susanne K

    2013-09-20

The distribution constants of some analytes closely connected to the petrochemical industry, between an aqueous phase and a phosphonium ionic liquid phase, were determined by ionic liquid micellar electrokinetic chromatography (MEKC). The phosphonium ionic liquids studied were the water-soluble tributyl(tetradecyl)phosphonium with chloride or acetate as the counter ion. The retention factors were calculated and used to determine the distribution constants. Calculating the retention factors required the electrophoretic mobilities of the ionic liquids; these were obtained by an iterative process based on a homologous series of alkyl benzoates. Calculation of the distribution constants required the phase ratio of the systems, and hence the critical micelle concentrations (CMCs) of the ionic liquids. The CMCs were calculated using a method based on PeakMaster simulations of the electrophoretic mobilities of system peaks. The resulting distribution constants for the neutral analytes between the ionic liquid and aqueous (buffer) phases were compared with octanol-water partition coefficients. The results indicate that factors other than simple hydrophobic interactions affect the distribution of analytes between the phases.
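The final computation the abstract describes, going from a retention factor and a phase ratio to a distribution constant, can be sketched with the standard MEKC relations K_D = k/Φ and Φ = v̄(C − CMC)/(1 − v̄(C − CMC)); whether the paper uses exactly this phase-ratio form is an assumption, and all numbers below are illustrative.

```python
def distribution_constant(k, c_surfactant, cmc, partial_molar_volume):
    """MEKC-style estimate: K_D = k / phase_ratio, with the phase ratio
    computed from the micellized (pseudo-stationary) surfactant fraction.
    Concentrations in mol/L, partial molar volume in L/mol; the example
    values are illustrative, not taken from the paper."""
    c_micelle = c_surfactant - cmc
    v = partial_molar_volume * c_micelle
    phase_ratio = v / (1.0 - v)
    return k / phase_ratio

K = distribution_constant(k=2.0, c_surfactant=0.05, cmc=0.01,
                          partial_molar_volume=0.3)
```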

  17. Modeling and Analysis of Structural Dynamics for a One-Tenth Scale Model NGST Sunshield

    NASA Technical Reports Server (NTRS)

    Johnston, John; Lienard, Sebastien; Brodeur, Steve (Technical Monitor)

    2001-01-01

New modeling and analysis techniques have been developed for predicting the dynamic behavior of the Next Generation Space Telescope (NGST) sunshield. The sunshield consists of multiple layers of pretensioned, thin-film membranes supported by deployable booms. Modeling the structural dynamic behavior of the sunshield is a challenging aspect of the problem due to the effects of membrane wrinkling. A finite element model of the sunshield was developed using an approximate engineering approach, the cable network method, to account for membrane wrinkling effects. Ground testing of a one-tenth scale model of the NGST sunshield was carried out to provide data for validating the analytical model. A series of analyses was performed to predict the behavior of the sunshield under the ground test conditions. Modal analyses were performed to predict the frequencies and mode shapes of the test article, and transient response analyses were completed to simulate impulse excitation tests. Comparison was made between analytical predictions and test measurements for the dynamic behavior of the sunshield. In general, the results show good agreement, with the analytical model correctly predicting the approximate frequencies and mode shapes of the significant structural modes.

  18. A discussion on validity of the diffusion theory by Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Peng, Dong-qing; Li, Hui; Xie, Shusen

    2008-12-01

Diffusion theory is widely used as the basis of experiments and methods for determining the optical properties of biological tissues. A simple analytical solution can be obtained easily from the diffusion equation after a series of approximations. This invites a misinterpretation of the analytical solution: that if several semi-infinite biological tissues have the same effective attenuation coefficient, the distributions of light fluence in those tissues will be the same. To assess the validity of this assumption, the depth-resolved internal fluence of several semi-infinite biological tissues with the same effective attenuation coefficient was simulated for a wide collimated beam using the Monte Carlo method under different conditions. The influence of the tissue refractive index on the distribution of light fluence is also discussed in detail. Our results show that when tissues with the same effective attenuation coefficient also share the same refractive index, the depth-resolved internal fluence is the same; otherwise it is not. A change in the refractive index of a tissue affects the depth distribution of light within it. The refractive index is therefore an important optical property of tissue and should be taken into account when using the diffusion approximation theory.
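A minimal Monte Carlo photon-transport sketch in the spirit of the study (not the authors' code, and without the refractive-index boundary physics they examine): photons take exponentially distributed steps, deposit weight into depth bins, and rescatter isotropically.

```python
import numpy as np

rng = np.random.default_rng(42)

# Semi-infinite medium with absorption mu_a and scattering mu_s (per mm);
# all coefficients are illustrative.
mu_a, mu_s = 0.1, 10.0
mu_t = mu_a + mu_s
n_photons, dz = 1000, 0.1
bins = np.zeros(50)                         # 5 mm of depth in 0.1 mm bins

for _ in range(n_photons):
    z, uz, w = 0.0, 1.0, 1.0                # depth, direction cosine, weight
    for _ in range(500):
        z += uz * (-np.log(rng.random()) / mu_t)   # exponential free path
        if z < 0.0:
            break                           # escaped through the surface
        i = int(z / dz)
        if i >= bins.size:
            break                           # left the tallied region
        bins[i] += w                        # crude fluence tally
        w *= mu_s / mu_t                    # implicit absorption
        uz = 2.0 * rng.random() - 1.0       # isotropic rescattering
        if w < 1e-4:
            break                           # terminate low-weight photons

fluence = bins / n_photons                  # arbitrary units, vs. depth
```

A research-grade code would add anisotropic (Henyey-Greenstein) scattering, Fresnel reflection at the refractive-index mismatch, and Russian-roulette termination; this sketch only shows the tallying structure.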

  19. Limited Rationality and Its Quantification Through the Interval Number Judgments With Permutations.

    PubMed

    Liu, Fang; Pedrycz, Witold; Zhang, Wei-Guo

    2017-12-01

The relative importance of alternatives expressed in terms of interval numbers in the fuzzy analytic hierarchy process aims to capture the uncertainty experienced by decision makers (DMs) when making a series of comparisons. Under the assumption of full rationality, the judgments of DMs in the typical analytic hierarchy process could be consistent. However, since uncertainty in articulating the opinions of DMs is unavoidable, interval number judgments are associated with limited rationality. In this paper, we investigate the concept of limited rationality by introducing interval multiplicative reciprocal comparison matrices. By analyzing the consistency of these matrices, it is observed that interval number judgments are inconsistent. By considering the permutations of alternatives, the concepts of approximation-consistency and acceptable approximation-consistency of interval multiplicative reciprocal comparison matrices are proposed, and an exchange method is designed to generate all the permutations. A novel method of determining the interval weight vector is proposed that accounts for randomness in comparing alternatives, and a vector of interval weights is determined. A new algorithm for solving decision-making problems with interval multiplicative reciprocal preference relations is provided. Two numerical examples illustrate the proposed approach and offer a comparison with the methods available in the literature.

  20. Visibility graphs and symbolic dynamics

    NASA Astrophysics Data System (ADS)

    Lacasa, Lucas; Just, Wolfram

    2018-07-01

Visibility algorithms are a family of geometric and ordering criteria by which a real-valued time series of N data is mapped into a graph of N nodes. This graph has been shown to often inherit in its topology nontrivial properties of the series structure, and can thus be seen as a combinatorial representation of a dynamical system. Here we explore in some detail the relation between visibility graphs and symbolic dynamics. To do that, we consider the degree sequence of horizontal visibility graphs generated by the one-parameter logistic map, for a range of values of the parameter for which the map shows chaotic behaviour. Numerically, we observe that in the chaotic region the block entropies of these sequences systematically converge to the Lyapunov exponent of the time series. Hence, Pesin's identity suggests that these block entropies are converging to the Kolmogorov-Sinai entropy of the physical measure, which ultimately suggests that the algorithm is implicitly and adaptively constructing phase space partitions which might have the generating property. To give analytical insight, we explore the relation k(x), x ∈ [0, 1], that, for a given datum with value x, assigns in graph space a node with degree k. In the case of the out-degree sequence, such relation is indeed a piecewise constant function. By making use of explicit methods and tools from symbolic dynamics we are able to analytically show that the algorithm indeed performs an effective partition of the phase space and that such partition is naturally expressed as a countable union of subintervals, where the endpoints of each subinterval are related to the fixed point structure of the iterates of the map and the subinterval enumeration is associated with particular ordering structures that we called motifs.
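The horizontal-visibility out-degree sequence the paper analyzes can be computed directly. This sketch builds it for a chaotic logistic-map orbit and evaluates only the order-1 block entropy; the paper studies the convergence of higher-order block entropies toward the Lyapunov exponent.

```python
import math

def logistic_orbit(x0, r, n):
    """Iterate x -> r*x*(1-x) and collect the orbit."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def hvg_out_degrees(xs):
    """Horizontal visibility: i sees a later j when every value strictly
    between them is smaller than both endpoints; count forward (out) links."""
    out = [0] * len(xs)
    for i in range(len(xs)):
        m = -math.inf                     # running max between i and j
        for j in range(i + 1, len(xs)):
            if m < xs[i] and m < xs[j]:
                out[i] += 1
            m = max(m, xs[j])
            if m >= xs[i]:
                break                     # all later points are blocked
    return out

# Out-degree sequence for a chaotic logistic-map orbit (r = 4) and its
# order-1 block entropy in nats
seq = hvg_out_degrees(logistic_orbit(0.3, 4.0, 2000))
counts = {}
for k in seq:
    counts[k] = counts.get(k, 0) + 1
h1 = -sum(c / len(seq) * math.log(c / len(seq)) for c in counts.values())
```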

  1. Assessing Outcome in Cognitive Behavior Therapy for Child Depression: An Illustrative Case Series

    ERIC Educational Resources Information Center

    Eckshtain, Dikla; Gaynor, Scott T.

    2009-01-01

    Recent meta-analytic data suggest a need for ongoing evaluation of treatments for youth depression. The present article calls attention to a number of issues relevant to the empirical evaluation of if and how cognitive behavior therapy for child depression works. A case series of 6 children and a primary caregiver received treatment--individual…

  2. Narrative research methods in palliative care contexts: two case studies.

    PubMed

    Thomas, Carol; Reeve, Joanne; Bingley, Amanda; Brown, Janice; Payne, Sheila; Lynch, Tom

    2009-05-01

    Narrative methods have played a minor role in research with dying patients to date, and deserve to be more widely understood. This article illustrates the utility and value of these methods through the narrative analysis of semi-structured interview data gathered in a series of interviews with two terminally ill cancer patients and their spouses. The methods and findings associated with these two case studies are outlined and discussed. The authors' contention is that an analytical focus on the naturalistic storytelling of patients and informal carers can throw new light on individuals' perceived illness states and symptoms, care-related needs, behaviors, and desires. In addition, the juxtaposition of two cases that share a number of markers of risk and need at the end of life illustrates how the narrative analysis of patients' experiential accounts can assist in uncovering important distinctions between cases that are of relevance to care management.

  3. Supporting Building Portfolio Investment and Policy Decision Making through an Integrated Building Utility Data Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, Azizan; Lasternas, Bertrand; Alschuler, Elena

The American Recovery and Reinvestment Act stimulus funding of 2009 for smart grid projects resulted in the tripling of smart meter deployment. In 2012, the Green Button initiative provided utility customers with access to their real-time energy usage. The availability of finely granular data provides an enormous potential for energy data analytics and energy benchmarking. The sheer volume of time-series utility data from a large number of buildings also poses challenges in data collection, quality control, and database management for rigorous and meaningful analyses. In this paper, we describe a building portfolio-level data analytics tool for operational optimization, business investment, and policy assessment using 15-minute to monthly interval utility data. The analytics tool is developed on top of the U.S. Department of Energy's Standard Energy Efficiency Data (SEED) platform, an open source software application that manages energy performance data of large groups of buildings. To support the significantly large volume of granular interval data, we integrated a parallel time-series database with the existing relational database. The time-series database improves on the current utility data input, focusing on real-time data collection, storage, analytics, and data quality control. The fully integrated data platform supports APIs for utility app development by third-party software developers. These apps will provide actionable intelligence for building owners and facilities managers. Unlike a commercial system, this platform is an open source platform funded by the U.S. Government, accessible to the public, researchers, and other developers, to support initiatives in reducing building energy consumption.

  4. The Savannah River Site's groundwater monitoring program. Third quarter 1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-05-06

The Environmental Protection Department/Environmental Monitoring Section (EPD/EMS) administers the Savannah River Site's (SRS) Groundwater Monitoring Program. During third quarter 1990 (July through September), EPD/EMS conducted routine sampling of monitoring wells and drinking water locations. EPD/EMS established two sets of flagging criteria in 1986 to assist in the management of sample results. The flagging criteria do not define contamination levels; instead, they aid personnel in sample scheduling, interpretation of data, and trend identification. The flagging criteria are based on detection limits, background levels in SRS groundwater, and drinking water standards. All analytical results from third quarter 1990 are listed in this report, which is distributed to all site custodians. One or more analytes exceeded Flag 2 in 87 monitoring well series. Analytes exceeded Flag 2 for the first time since 1984 in 14 monitoring well series. In addition to groundwater monitoring, EPD/EMS collected drinking water samples from SRS drinking water systems supplied by wells. The drinking water samples were analyzed for radioactive constituents.

  6. Vertical and pitching resonance of train cars moving over a series of simple beams

    NASA Astrophysics Data System (ADS)

    Yang, Y. B.; Yau, J. D.

    2015-02-01

    The resonant response, including both vertical and pitching motions, of an undamped sprung mass unit moving over a series of simple beams is studied by a semi-analytical approach. For a sprung mass that is very small compared with the beam, we first simplify the sprung mass as a constant moving force and obtain the response of the beam in closed form. With this, we then solve for the response of the sprung mass passing over a series of simple beams, and validate the solution by an independent finite element analysis. To evaluate the pitching resonance, we consider the cases of a two-axle model and a coach model traveling over rough rails supported by a series of simple beams. The resonance of a train car is characterized by the fact that its response continues to build up, as it travels over more and more beams. For train cars with long axle intervals, the vertical acceleration induced by pitching resonance dominates the peak response of the train traveling over a series of simple beams. The present semi-analytical study allows us to grasp the key parameters involved in the primary/sub-resonant responses. Other phenomena of resonance are also discussed in the exemplar study.
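The resonance condition behind the build-up described, where the crossing period of successive identical spans matches a multiple of the natural period of the vehicle-beam system, gives the textbook resonant-speed formula sketched below. The specific span length and frequency are illustrative assumptions, not values from the paper.

```python
def resonant_speeds(frequency_hz, spacing_m, n_modes=3):
    """Textbook resonance condition for periodic excitation: the crossing
    period spacing/v equals an integer multiple of the natural period,
    giving v_i = f * spacing / i for i = 1, 2, ..."""
    return [frequency_hz * spacing_m / i for i in range(1, n_modes + 1)]

# e.g. 25 m spans and a 4 Hz fundamental frequency (illustrative values)
speeds = resonant_speeds(4.0, 25.0)   # m/s, highest (primary) resonance first
```

The same condition applied with the axle interval in place of the span length is what makes long-axle-interval cars prone to the pitching resonance the abstract highlights.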

  7. A Fourier Method for Sidelobe Reduction in Equally Spaced Linear Arrays

    NASA Astrophysics Data System (ADS)

    Safaai-Jazi, Ahmad; Stutzman, Warren L.

    2018-04-01

Uniformly excited, equally spaced linear arrays have a sidelobe level larger than -13.3 dB, which is too high for many applications. This limitation can be remedied by nonuniform excitation of the array elements. We present an efficient method for sidelobe reduction in equally spaced linear arrays with a low penalty on directivity. The method involves the following steps: construction of a periodic function containing only the sidelobes of the uniformly excited array; calculation of the Fourier series of this periodic function; subtraction of the truncated series from the array factor of the original uniformly excited array; and finally mitigation of the truncation effects, which yields a significant further reduction in sidelobe level. A sidelobe reduction factor incorporated into the element currents makes much larger sidelobe reductions possible and also allows the sidelobe level to be varied incrementally. It is shown that the newly formed arrays can provide sidelobe levels at least 22.7 dB below those of uniformly excited arrays with the same size and number of elements. Analytical expressions for the element currents are presented. Radiation characteristics of the sidelobe-reduced arrays are examined, and numerical results for directivity, sidelobe level, and half-power beamwidth are presented for example cases. Performance improvements over popular conventional array synthesis methods, such as Chebyshev and linearly tapered current arrays, are obtained with the new method.
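The roughly -13.3 dB baseline the method improves on can be verified numerically from the uniform array factor |sin(Nψ/2)/(N sin(ψ/2))|. This sketch evaluates the peak sidelobe for a 16-element array; the element count is an arbitrary choice for illustration.

```python
import numpy as np

# Normalized array factor of an N-element, uniformly excited linear array,
# as a function of psi = k*d*cos(theta) + beta.
N = 16
psi = np.linspace(1e-6, np.pi, 20000)           # avoid the 0/0 at psi = 0
af = np.abs(np.sin(N * psi / 2) / (N * np.sin(psi / 2)))
af_db = 20 * np.log10(af + 1e-12)

# Peak sidelobe level: highest value beyond the first null at psi = 2*pi/N
first_null = 2 * np.pi / N
sll = af_db[psi > first_null].max()             # dB, near -13.3 for large N
```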

  8. Gravity Field Recovery from the Cartwheel Formation by the Semi-analytical Approach

    NASA Astrophysics Data System (ADS)

    Li, Huishu; Reubelt, Tilo; Antoni, Markus; Sneeuw, Nico; Zhong, Min; Zhou, Zebing

    2016-04-01

Past and current gravimetric satellite missions have contributed substantially to our knowledge of the Earth's gravity field. Nevertheless, several geoscience disciplines push for even higher requirements on the accuracy, homogeneity, and temporal and spatial resolution of the Earth's gravity field. Apart from better instruments or new observables, alternative satellite formations could improve the signal and error structure. With respect to other methods, one significant advantage of the semi-analytical approach is its effective pre-mission error assessment for gravity field missions. The semi-analytical approach builds a linear analytical relationship between the Fourier spectrum of the observables and the spherical harmonic spectrum of the gravity field. The spectral link between observables and gravity field parameters is given by the transfer coefficients, which constitute the observation model. In connection with a stochastic model, it can be used for pre-mission error assessment of gravity field missions. The cartwheel formation is formed by two satellites on elliptic orbits in the same plane. The time-dependent ranging is considered in the transfer coefficients via convolution, including the series expansion of the eccentricity functions. The transfer coefficients are applied to assess the error patterns caused by different orientations of the cartwheel for range-rate and range-acceleration observables. This work presents the isotropy and magnitude of the formal errors of the gravity field coefficients for different orientations of the cartwheel.

  9. Cavitation in liquid cryogens. 4: Combined correlations for venturi, hydrofoil, ogives, and pumps

    NASA Technical Reports Server (NTRS)

    Hord, J.

    1974-01-01

    The results of a series of experimental and analytical cavitation studies are presented. Developed-cavity data for a venturi, a hydrofoil, and three scaled ogives are cross-correlated. The new correlating parameter, MTWO, improves data correlation for these stationary bodies and for pumping equipment. Existing techniques for predicting the cavitating performance of pumping machinery were extended to include variations in flow coefficient, cavitation parameter, and equipment geometry. The new predictive formulations hold promise as a design tool and a universal method for correlating pumping machinery performance. Application of these predictive formulas requires prescribed cavitation test data or an independent method of estimating the cavitation parameter for each pump. The latter would permit prediction of performance without testing; potential methods for evaluating the cavitation parameter prior to testing are suggested.

  10. Photogrammetry of the Viking Lander imagery

    NASA Technical Reports Server (NTRS)

    Wu, S. S. C.; Schafer, F. J.

    1982-01-01

    The problem of photogrammetric mapping from Viking Lander imagery is solved in two ways: (1) by converting the azimuth-elevation scanning imagery to the equivalent of a frame picture, using computerized rectification; and (2) by interfacing a high-speed, general-purpose computer to the analytical plotter employed, so that all correction computations can be performed in real time during model orientation and map compilation. Both the efficiency of the Viking Lander cameras and the validity of the rectification method have been established by a series of pre-mission tests that compared the accuracy of terrestrial maps compiled by this method with maps made from aerial photographs. In addition, 1:10-scale topographic maps of Viking Lander sites 1 and 2, with a contour interval of 1.0 cm, have been made to test the rectification method.

  11. An automated integration-free path-integral method based on Kleinert's variational perturbation theory

    NASA Astrophysics Data System (ADS)

    Wong, Kin-Yiu; Gao, Jiali

    2007-12-01

    Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to any realistic system beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good agreement with experiment. We hope that our method can be used by non-path-integral experts or experimentalists as a "black box" for any given system.

  12. Analytical validation of a novel multiplex test for detection of advanced adenoma and colorectal cancer in symptomatic patients.

    PubMed

    Dillon, Roslyn; Croner, Lisa J; Bucci, John; Kairs, Stefanie N; You, Jia; Beasley, Sharon; Blimline, Mark; Carino, Rochele B; Chan, Vicky C; Cuevas, Danissa; Diggs, Jeff; Jennings, Megan; Levy, Jacob; Mina, Ginger; Yee, Alvin; Wilcox, Bruce

    2018-05-30

    Early detection of colorectal cancer (CRC) is key to reducing associated mortality. Despite the importance of early detection, approximately 40% of individuals in the United States between the ages of 50 and 75 have never been screened for CRC. The low compliance with colonoscopy and fecal-based screening may be addressed with a non-invasive alternative such as a blood-based test. We describe here the analytical validation of a multiplexed blood-based assay that measures the plasma concentrations of 15 proteins to assess advanced adenoma (AA) and CRC risk in symptomatic patients. The test was developed on an electrochemiluminescent immunoassay platform employing four multi-marker panels, to be implemented in the clinic as a laboratory developed test (LDT). Under the Clinical Laboratory Improvement Amendments (CLIA) and College of American Pathologists (CAP) regulations, a United States-based clinical laboratory utilizing an LDT must establish performance characteristics relating to analytical validity prior to releasing patient test results. This report describes a series of studies demonstrating the precision, accuracy, analytical sensitivity, and analytical specificity for each of the 15 assays, as required by CLIA/CAP. In addition, the report describes studies characterizing each assay's dynamic range, parallelism, tolerance to common interfering substances, spike recovery, and stability to sample freeze-thaw cycles. Upon completion of the analytical characterization, a clinical accuracy study was performed to evaluate concordance of AA and CRC classifier model calls using the analytical method intended for use in the clinic. Of 434 symptomatic patient samples tested, the percent agreement with original CRC and AA calls was 87% and 92%, respectively. All studies followed CLSI guidelines and met the regulatory requirements for implementation of a new LDT. 
The results provide the analytical evidence to support the implementation of the novel multi-marker test as a clinical test for evaluating CRC and AA risk in symptomatic individuals. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Empirical testing of an analytical model predicting electrical isolation of photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Garcia, A., III; Minning, C. P.; Cuddihy, E. F.

    A major design requirement for photovoltaic modules is that the encapsulation system be capable of withstanding large DC potentials without electrical breakdown. Presented is a simple analytical model which can be used to estimate material thickness to meet this requirement for a candidate encapsulation system or to predict the breakdown voltage of an existing module design. A series of electrical tests to verify the model are described in detail. The results of these verification tests confirmed the utility of the analytical model for preliminary design of photovoltaic modules.

  14. Direct replication of Gervais & Norenzayan (2012): No evidence that analytic thinking decreases religious belief

    PubMed Central

    Sanchez, Clinton; Sundermeier, Brian; Gray, Kenneth

    2017-01-01

    Gervais & Norenzayan (2012) reported in Science a series of 4 experiments in which manipulations intended to foster analytic thinking decreased religious belief. We conducted a precise, large, multi-site pre-registered replication of one of these experiments. We observed little to no effect of the experimental manipulation on religious belief (d = 0.07 in the wrong direction, 95% CI[-0.12, 0.25], N = 941). The original finding does not seem to provide reliable or valid evidence that analytic thinking causes a decrease in religious belief. PMID:28234942

  15. Direct replication of Gervais & Norenzayan (2012): No evidence that analytic thinking decreases religious belief.

    PubMed

    Sanchez, Clinton; Sundermeier, Brian; Gray, Kenneth; Calin-Jageman, Robert J

    2017-01-01

    Gervais & Norenzayan (2012) reported in Science a series of 4 experiments in which manipulations intended to foster analytic thinking decreased religious belief. We conducted a precise, large, multi-site pre-registered replication of one of these experiments. We observed little to no effect of the experimental manipulation on religious belief (d = 0.07 in the wrong direction, 95% CI[-0.12, 0.25], N = 941). The original finding does not seem to provide reliable or valid evidence that analytic thinking causes a decrease in religious belief.

  16. Data Series Subtraction with Unknown and Unmodeled Background Noise

    NASA Technical Reports Server (NTRS)

    Vitale, Stefano; Congedo, Giuseppe; Dolesi, Rita; Ferroni, Valerio; Hueller, Mauro; Vetrugno, Daniele; Weber, William Joseph; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo

    2014-01-01

    LISA Pathfinder (LPF), the precursor mission to a gravitational wave observatory of the European Space Agency, will measure the degree to which two test masses can be put into free fall, aiming to demonstrate a suppression of disturbance forces corresponding to a residual relative acceleration with a power spectral density (PSD) below (30 fm s^-2/Hz^(1/2))^2 around 1 mHz. In LPF data analysis, the disturbance forces are obtained as the difference between the acceleration data and a linear combination of other measured data series. In many circumstances, the coefficients for this linear combination are obtained by fitting these data series to the acceleration, and the disturbance forces appear then as the data series of the residuals of the fit. Thus the background noise or, more precisely, its PSD, whose knowledge is needed to build up the likelihood function in ordinary maximum likelihood fitting, is here unknown, and its estimate constitutes instead one of the goals of the fit. In this paper we present a fitting method that does not require knowledge of the PSD of the background noise. The method is based on the analytical marginalization of the posterior parameter probability density with respect to the background noise PSD, and returns an estimate both for the fitting parameters and for the PSD. We show that both these estimates are unbiased, and that, when using averaged Welch periodograms for the residuals, the estimate of the PSD is consistent, as its error tends to zero with the inverse square root of the number of averaged periodograms. Additionally, we find that the method is equivalent to some implementations of iteratively reweighted least-squares fitting. We have tested the method both on simulated data of known PSD and on data from several experiments performed with the LISA Pathfinder end-to-end mission simulator.
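The fit described above is, as the authors note, equivalent to some implementations of iteratively reweighted least squares: estimate a residual PSD, reweight the frequency-domain fit by its inverse, and repeat. A toy sketch of that idea (synthetic data and a single made-up coefficient, with rectangular-window averaged periodograms standing in for Welch's method) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n, nseg = 4096, 8
x = rng.standard_normal(n)                        # measured reference series
noise = np.cumsum(rng.standard_normal(n)) * 0.01  # colored (red) background noise
a = 2.5 * x + noise                               # "acceleration" data; true c = 2.5

def avg_periodogram(r, nseg):
    """Average the periodograms of nseg non-overlapping segments."""
    segs = r.reshape(nseg, -1)
    return np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0)

c = 0.0
for _ in range(10):
    # Re-estimate the noise PSD from the current fit residuals,
    # then redo the fit with inverse-PSD weights.
    psd = avg_periodogram(a - c * x, nseg) + 1e-12
    w = 1.0 / psd
    A = np.fft.rfft(a.reshape(nseg, -1), axis=1)
    X = np.fft.rfft(x.reshape(nseg, -1), axis=1)
    c = np.real(np.sum(w * np.conj(X) * A)) / np.sum(w * np.abs(X) ** 2)

print(c)  # converges near the true coefficient 2.5
```

The weights only affect the variance of the estimate, not its consistency, which is why the loop converges even though it starts with a PSD estimate contaminated by the unfitted signal.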

  17. Intumescent Reaction Mechanisms: An Analytic Model.

    DTIC Science & Technology

    1983-05-01

    …or amide (such as urea, melamine, dicyandiamide, urea formaldehyde, etc.) is the release of nonflammable gases (CO2, etc.) that physically… Figures include: d(m/mo)/dT versus 1/T (polysulfide); Fourier series representations of TGA data for polysulfide, DIP-30, EPON resin, and borax; for Part A, Part B, and Part A+B; and of d(m/mo)/dT for polysulfide, DIP-30, and EPON resin.

  18. Falcon: A Temporal Visual Analysis System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A.

    2016-09-05

    Flexible visual exploration of long, high-resolution time series from multiple sensor streams is a challenge in several domains. Falcon is a visual analytics approach that helps researchers acquire a deep understanding of patterns in log and imagery data. Falcon allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations with multiple levels of detail. These capabilities are applicable to the analysis of any quantitative time series.

  19. Serial Founder Effects During Range Expansion: A Spatial Analog of Genetic Drift

    PubMed Central

    Slatkin, Montgomery; Excoffier, Laurent

    2012-01-01

    Range expansions cause a series of founder events. We show that, in a one-dimensional habitat, these founder events are the spatial analog of genetic drift in a randomly mating population. The spatial series of allele frequencies created by successive founder events is equivalent to the time series of allele frequencies in a population of effective size ke, the effective number of founders. We derive an expression for ke in a discrete-population model that allows for local population growth and migration among established populations. If there is selection, the net effect is determined approximately by the product of the selection coefficients and the number of generations between successive founding events. We use the model of a single population to compute analytically several quantities for an allele present in the source population: (i) the probability that it survives the series of colonization events, (ii) the probability that it reaches a specified threshold frequency in the last population, and (iii) the mean and variance of the frequencies in each population. We show that the analytic theory provides a good approximation to simulation results. A consequence of our approximation is that the average heterozygosity of neutral alleles decreases by a factor of 1 – 1/(2ke) in each new population. Therefore, the population genetic consequences of surfing can be predicted approximately by the effective number of founders and the effective selection coefficients, even in the presence of migration among populations. We also show that our analytic results are applicable to a model of range expansion in a continuously distributed population. PMID:22367031
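The per-population heterozygosity decay quoted above, a factor of 1 - 1/(2ke) per founder event, is easy to check against a direct simulation. The sketch below is generic; the values of ke, the number of populations, and the initial allele frequency are made up for illustration.

```python
import random

def simulate_series(p0, ke, n_pops, reps=10000, seed=1):
    """Average heterozygosity after each of n_pops serial founder events,
    where each event samples 2*ke gene copies binomially."""
    rng = random.Random(seed)
    het = [0.0] * n_pops
    for _ in range(reps):
        p = p0
        for i in range(n_pops):
            p = sum(rng.random() < p for _ in range(2 * ke)) / (2 * ke)
            het[i] += 2 * p * (1 - p)
    return [h / reps for h in het]

ke, n_pops, p0 = 10, 5, 0.5
# Analytic approximation: H_k = H_0 * (1 - 1/(2*ke))**k
analytic = [2 * p0 * (1 - p0) * (1 - 1 / (2 * ke)) ** (i + 1) for i in range(n_pops)]
simulated = simulate_series(p0, ke, n_pops)
for a, s in zip(analytic, simulated):
    print(round(a, 3), round(s, 3))
```

The simulated averages track the geometric decay closely, mirroring the paper's finding that the effective number of founders captures the drift-like loss of diversity.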

  20. Serial founder effects during range expansion: a spatial analog of genetic drift.

    PubMed

    Slatkin, Montgomery; Excoffier, Laurent

    2012-05-01

    Range expansions cause a series of founder events. We show that, in a one-dimensional habitat, these founder events are the spatial analog of genetic drift in a randomly mating population. The spatial series of allele frequencies created by successive founder events is equivalent to the time series of allele frequencies in a population of effective size ke, the effective number of founders. We derive an expression for ke in a discrete-population model that allows for local population growth and migration among established populations. If there is selection, the net effect is determined approximately by the product of the selection coefficients and the number of generations between successive founding events. We use the model of a single population to compute analytically several quantities for an allele present in the source population: (i) the probability that it survives the series of colonization events, (ii) the probability that it reaches a specified threshold frequency in the last population, and (iii) the mean and variance of the frequencies in each population. We show that the analytic theory provides a good approximation to simulation results. A consequence of our approximation is that the average heterozygosity of neutral alleles decreases by a factor of 1-1/(2ke) in each new population. Therefore, the population genetic consequences of surfing can be predicted approximately by the effective number of founders and the effective selection coefficients, even in the presence of migration among populations. We also show that our analytic results are applicable to a model of range expansion in a continuously distributed population.

  1. Hyphenated analytical techniques for materials characterisation

    NASA Astrophysics Data System (ADS)

    Armstrong, Gordon; Kailas, Lekshmi

    2017-09-01

    This topical review will provide a survey of the current state of the art in ‘hyphenated’ techniques for characterisation of bulk materials, surfaces, and interfaces, whereby two or more analytical methods investigating different properties are applied simultaneously to the same sample, characterising it more completely than can be achieved by conducting separate analyses in series on different instruments. It is intended for final year undergraduates and recent graduates, who may have some background knowledge of standard analytical techniques but are not familiar with ‘hyphenated’ techniques or hybrid instrumentation. The review will begin by defining ‘complementary’, ‘hybrid’ and ‘hyphenated’ techniques, as there is not a broad consensus among analytical scientists as to what each term means. The motivating factors driving increased development of hyphenated analytical methods will also be discussed. This introduction will conclude with a brief discussion of gas chromatography-mass spectrometry and energy dispersive x-ray analysis in electron microscopy as two examples, since combining complementary techniques for chemical analysis was among the earliest forms of hyphenated characterisation. The emphasis of the main review will be on techniques that are sufficiently well established for the instrumentation to be commercially available, and that examine physical properties, including mechanical, electrical and thermal behaviour, as well as variations in composition, rather than methods solely to identify and quantify chemical species. The review will therefore address three broad categories of techniques that the reader may expect to encounter in a well-equipped materials characterisation laboratory: microscopy-based techniques, scanning probe-based techniques, and thermal analysis based techniques. 
Examples drawn from recent literature, and a concluding case study, will be used to explain the practical issues that arise in combining different techniques. We will consider how the complementary and varied information obtained by combining these techniques may be interpreted together to understand the sample in greater detail than was possible before, and also how combining different techniques can simplify sample preparation and ensure reliable comparisons between multiple analyses on the same samples, a topic of particular importance as nanoscale technologies become more prevalent in applied and industrial research and development (R&D). The review will conclude with a brief outline of the emerging state of the art in the research laboratory, and a suggested approach to using hyphenated techniques, whether in the teaching, quality control or R&D laboratory.

  2. A novel control algorithm for interaction between surface waves and a permeable floating structure

    NASA Astrophysics Data System (ADS)

    Tsai, Pei-Wei; Alsaedi, A.; Hayat, T.; Chen, Cheng-Wu

    2016-04-01

    An analytical solution is derived to describe the wave-induced flow field and the surge motion of a permeable platform structure with fuzzy controllers in an oceanic environment. In the design procedure of the controller, a parallel distributed compensation (PDC) scheme is utilized to construct a global fuzzy logic controller by blending all local state feedback controllers. A stability analysis is carried out for a real structure system using the Lyapunov method. The corresponding boundary value problems are then incorporated into scattering and radiation problems. They are solved analytically, based on separation of variables, to obtain series solutions in terms of the harmonic incident wave motion and surge motion. The dependence of the wave-induced flow field and its resonant frequency on wave characteristics and structure properties, including platform width, thickness, and mass, is thus established through a parametric approach, from which mathematical models are applied to the wave-induced displacement of the surge motion. A nonlinear inverted pendulum system is employed to demonstrate that the controller tuned by the swarm intelligence method can not only stabilize the nonlinear system but is also robust against external disturbance.

  3. Influences of sampling volume and sample concentration on the analysis of atmospheric carbonyls by 2,4-dinitrophenylhydrazine cartridge.

    PubMed

    Pal, Raktim; Kim, Ki-Hyun

    2008-03-10

    In this study, the analytical bias involved in the application of the 2,4-dinitrophenylhydrazine (2,4-DNPH)-coated cartridge sampling method was investigated for the analysis of five atmospheric carbonyl species (i.e., acetaldehyde, propionaldehyde, butyraldehyde, isovaleraldehyde, and valeraldehyde). In order to evaluate the potential bias of the sampling technique, a series of laboratory experiments was conducted to cover a wide range of volumes (1-20 L) and concentration levels (approximately 100-2000 ppb in the case of acetaldehyde). The results of these experiments were then evaluated in terms of the recovery rate (RR) for each carbonyl species. The detection properties of these carbonyls were clearly distinguished between light and heavy species in terms of RR and its relative standard error (R.S.E.). The results also indicate that the studied analytical approach yields the most reliable pattern for light carbonyls, especially acetaldehyde. When these experimental results were tested further by a two-factor analysis of variance (ANOVA), the analysis based on the cartridge sampling method was found to be affected more sensitively by the sample concentration levels than by the sampling volume.
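For illustration, a recovery rate and its relative standard error of the kind reported above can be computed as follows. The species, expected amounts, and measured replicates below are invented, not the study's data.

```python
# Hypothetical recovery-rate (RR) calculation: RR = measured / expected * 100 (%).
expected_ng = {"acetaldehyde": 500.0, "valeraldehyde": 500.0}
measured_ng = {"acetaldehyde": [480.0, 495.0, 470.0],
               "valeraldehyde": [350.0, 420.0, 390.0]}

for species, runs in measured_ng.items():
    rr = [100.0 * m / expected_ng[species] for m in runs]
    mean = sum(rr) / len(rr)
    # Relative standard error of the mean RR, in percent.
    var = sum((r - mean) ** 2 for r in rr) / (len(rr) - 1)
    rse = 100.0 * (var / len(rr)) ** 0.5 / mean
    print(f"{species}: mean RR = {mean:.1f}%, R.S.E. = {rse:.1f}%")
```

A light species with RR near 100% and a small R.S.E. corresponds to the "reliable pattern" the abstract attributes to acetaldehyde, while a heavier species with lower, noisier recovery mirrors the distinction the authors draw.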

  4. Piezoelectrically actuated flextensional micromachined ultrasound transducers--I: theory.

    PubMed

    Perçin, Gökhan; Khuri-Yakub, Butrus T

    2002-05-01

    This series of two papers considers piezoelectrically actuated flextensional micromachined ultrasound transducers (PAFMUTs) and consists of theory, fabrication, and experimental parts. The theory presented in this paper is developed for the ultrasound transducer application presented in the second part. In the absence of analytical expressions for the equivalent circuit parameters of a flextensional transducer, it is difficult to calculate its optimal parameters and dimensions and to choose suitable materials. The influence of coupling between flexural and extensional deformation, and of coupling between the structure and the acoustic volume, on the dynamic response of the piezoelectrically actuated flextensional transducer is analyzed using two analytical methods: classical thin (Kirchhoff) plate theory and Mindlin plate theory. Both theories are applied to derive two-dimensional plate equations for the transducer and to calculate the coupled electromechanical field variables, such as mechanical displacement and electrical input impedance. In these methods, variations across the thickness direction are eliminated by working with bending moments per unit length, or stress resultants. Thus, two-dimensional plate equations for a step-wise laminated circular plate are obtained, along with two different solutions to the corresponding systems. An equivalent circuit of the transducer is also obtained from these solutions.

  5. Historical Data Analysis of Hospital Discharges Related to the Amerithrax Attack in Florida

    PubMed Central

    Burke, Lauralyn K.; Brown, C. Perry; Johnson, Tammie M.

    2016-01-01

    Interrupted time-series analysis (ITSA) can be used to identify, quantify, and evaluate the magnitude and direction of an event on the basis of time-series data. This study evaluates the impact of the bioterrorist anthrax attacks (“Amerithrax”) on hospital inpatient discharges in the metropolitan statistical area of Palm Beach, Broward, and Miami-Dade counties in the fourth quarter of 2001. Three statistical methods—standardized incidence ratio (SIR), segmented regression, and an autoregressive integrated moving average (ARIMA)—were used to determine whether Amerithrax influenced inpatient utilization. The SIR found a non–statistically significant 2 percent decrease in hospital discharges. Although the segmented regression test found a slight increase in the discharge rate during the fourth quarter, it was also not statistically significant and therefore could not be attributed to Amerithrax. Segmented regression diagnostics performed in preparation for ARIMA indicated that the quarterly series was not serially correlated, violating one of the assumptions of the ARIMA method, so the impact on the time-series data could not be properly evaluated. The coarse granularity of the time frames hindered successful evaluation of the impact by all three analytic methods. This study demonstrates that the granularity of the data points is as important as the number of data points in a time series. ITSA is important for the ability to evaluate the impact that any hazard may have on inpatient utilization. Knowledge of hospital utilization patterns during disasters offers healthcare and civic professionals valuable information to plan, respond, mitigate, and evaluate any outcomes stemming from biothreats. PMID:27843420
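The segmented-regression component of an ITSA like the one above fits a pre-event level and trend plus a post-event change in both. A generic sketch on synthetic quarterly data (not the study's discharge counts; the event quarter and effect sizes are invented) is:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(20)                 # quarterly time index
event = 12                        # quarter in which the interruption occurs
post = (t >= event).astype(float)
t_post = post * (t - event)       # time elapsed since the event
# Synthetic series: baseline level 100, trend 0.5, then a level jump of 8
# and a trend change of 1.5 after the event, plus noise.
y = 100 + 0.5 * t + 8.0 * post + 1.5 * t_post + rng.normal(0, 1, t.size)

# Ordinary least squares on [intercept, trend, level change, trend change].
X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["baseline level", "baseline trend",
                    "level change", "trend change"], beta):
    print(f"{name}: {b:.2f}")
```

With only eight post-event quarters the standard errors on the change terms are large, which echoes the study's point that coarse (quarterly) granularity limits what ITSA can detect.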

  6. Analytic models of ducted turbomachinery tone noise sources. Volume 1: Analysis

    NASA Technical Reports Server (NTRS)

    Clark, T. L.; Ganz, U. W.; Graf, G. A.; Westall, J. S.

    1974-01-01

    The analytic models developed for computing the periodic sound pressure of subsonic fans and compressors in an infinite, hardwall annular duct with uniform flow are described. The basic sound-generating mechanism is the scattering into sound waves of velocity disturbances appearing to the rotor or stator blades as a series of harmonic gusts. The models include component interactions and rotor alone.

  7. Smoking: The Health Consequences of Tobacco Use. An Annotated Bibliography with Analytical Introduction. Science and Social Responsibility Series, No. 2.

    ERIC Educational Resources Information Center

    Schmitz, Cecilia M.; Gray, Richard A.

    This volume contains an extensive introduction to the health consequences of tobacco use and extended annotations of the most important English-language monographs and articles to appear on the subject in the 1980s and 1990s arranged in classified order under select headings. The introductory analytical essay by Richard A. Gray covers: early and…

  8. Comparison of thermal analytic model with experimental test results for 30-centimeter-diameter engineering model mercury ion thruster

    NASA Technical Reports Server (NTRS)

    Oglebay, J. C.

    1977-01-01

    A thermal analytic model for a 30-cm engineering model mercury-ion thruster was developed and calibrated using experimental results from tests of a pre-engineering model 30-cm thruster. A series of tests, performed later, simulated a wide range of thermal environments on an operating 30-cm engineering model thruster, which was instrumented to measure the temperature distribution within it. The modified analytic model is described, and analytic and experimental results are compared for various operating conditions. Based on the comparisons, it is concluded that the analytic model can be used as a preliminary design tool to predict thruster steady-state temperature distributions for stage and mission studies and to define the thermal interface between the thruster and other elements of a spacecraft.

  9. A study on industrial accident rate forecasting and program development of estimated zero accident time in Korea.

    PubMed

    Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won

    2011-01-01

    To begin a zero accident campaign in industry, the first step is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical changes in the business environment after the start of the zero accident campaign, using quantitative time series analysis methods. These methods include the sum of squared errors (SSE), the regression analysis method (RAM), the exponential smoothing method (ESM), the double exponential smoothing method (DESM), the auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). A program is developed to estimate the accident rate, the zero accident time, and the achievement probability for an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop the zero accident program. The results of this paper provide major information for industrial accident prevention and are an important part of stimulating the zero accident campaign within all industrial environments.
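Two of the simpler methods listed above, ESM and SSE-based model comparison, can be sketched as follows. The accident-rate series here is hypothetical; for a steadily declining series, a larger smoothing constant tracks the trend more closely and yields a smaller SSE.

```python
def exp_smooth_forecasts(series, alpha):
    """One-step-ahead forecasts from simple exponential smoothing."""
    s = series[0]
    forecasts = []
    for x in series[1:]:
        forecasts.append(s)            # forecast for x before observing it
        s = alpha * x + (1 - alpha) * s
    return forecasts

def sse(actual, forecasts):
    """Sum of squared one-step-ahead forecast errors."""
    return sum((x - f) ** 2 for x, f in zip(actual, forecasts))

# Hypothetical yearly accident rates (per some exposure unit), declining.
rates = [1.8, 1.7, 1.55, 1.5, 1.4, 1.3, 1.25, 1.15]
for alpha in (0.2, 0.5, 0.8):
    f = exp_smooth_forecasts(rates, alpha)
    print(alpha, round(sse(rates[1:], f), 4))
```

In a fuller treatment, the smoothing constant minimizing the SSE would be chosen, and the fitted trend extrapolated to estimate when the rate reaches zero.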

  10. Analytical quality by design: a tool for regulatory flexibility and robust analytics.

    PubMed

    Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

    Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for the quality by design (QbD) based analytical approach. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists on the implementation of AQbD in the pharmaceutical quality system and also relates it to product quality by design and process analytical technology (PAT).

  11. Analytical Quality by Design: A Tool for Regulatory Flexibility and Robust Analytics

    PubMed Central

    Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

    Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for the quality by design (QbD) based analytical approach. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists on the implementation of AQbD in the pharmaceutical quality system and also relates it to product quality by design and process analytical technology (PAT). PMID:25722723

  12. Analytical torque calculation and experimental verification of synchronous permanent magnet couplings with Halbach arrays

    NASA Astrophysics Data System (ADS)

    Seo, Sung-Won; Kim, Young-Hyun; Lee, Jung-Ho; Choi, Jang-Young

    2018-05-01

    This paper presents analytical torque calculations and experimental verification of synchronous permanent magnet couplings (SPMCs) with Halbach arrays. A Halbach array is composed of various numbers of segments per pole; we calculate and compare the magnetic torques for 2, 3, and 4 segments per pole. First, based on the magnetic vector potential and using a 2D polar coordinate system, we obtain analytical solutions for the magnetic field. Next, we calculate the magnetic torque from the derived field solutions and a Maxwell stress tensor. Finally, the analytical results are verified by comparison with the results of 2D and 3D finite element analysis and with experimental results.

  13. Falcon: Visual analysis of large, irregularly sampled, and multivariate time series data in additive manufacturing

    DOE PAGES

    Steed, Chad A.; Halsey, William; Dehoff, Ryan; ...

    2017-02-16

    Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.

  15. Analysis of temperature time series to estimate direction and magnitude of water fluxes in near-surface sediments

    NASA Astrophysics Data System (ADS)

    Munz, Matthias; Oswald, Sascha E.; Schmidt, Christian

    2017-04-01

    The application of heat as a hydrological tracer has become a standard method for quantifying water fluxes between groundwater and surface water. Typically, time series of temperatures in the surface water and in the sediment are observed and subsequently evaluated with a vertical 1D representation of heat transport by advection and dispersion. Several analytical solutions, as well as their implementations in user-friendly software, exist for estimating water fluxes from the observed temperatures. However, the underlying assumption of a stationary, one-dimensional vertical flow field is frequently violated in natural systems, where subsurface water flow often has a significant horizontal component. We developed a methodology for identifying the geometry of the subsurface flow field based on the variation of diurnal temperature amplitudes with depth. Purely vertical heat transport, for instance, is characterized by an exponential decline of temperature amplitudes with increasing depth, whereas purely horizontal flow would be indicated by a constant, depth-independent amplitude profile. The decline of temperature amplitudes with depth was fitted by polynomials of different order, with the best fit selected using the Akaike Information Criterion. This stepwise model optimization and selection, evaluating the shape of the vertical amplitude-ratio profiles, was used to determine the predominant subsurface flow field, which could be systematically categorized into purely vertical and horizontal (hyporheic, parafluvial) components. Analytical solutions for estimating water fluxes from the observed temperatures are restricted to specific boundary conditions, such as a sinusoidal upper temperature boundary. In contrast, numerical solutions offer greater flexibility and can handle temperature data characterized by irregular variations, such as storm-event-induced temperature changes, which cannot readily be incorporated into analytical solutions.
There are several numerical models that simulate heat transport in porous media (e.g. VS2DH, HydroGeoSphere, FEFLOW), but their modelling frameworks can have a steep learning curve and may therefore not be readily accessible for routinely inferring water fluxes between groundwater and surface water. We developed a user-friendly, straightforward-to-use software to estimate water FLUXes Based On Temperatures, FLUX-BOT. FLUX-BOT is a numerical code written in MATLAB that calculates time-variable vertical water fluxes in saturated sediments based on the inversion of temperature time series measured at multiple depths. It applies a cell-centered Crank-Nicolson implicit finite difference scheme to solve the one-dimensional heat advection-conduction equation (FLUX-BOT can be downloaded from the following web site: https://bitbucket.org/flux-bot/flux-bot). We provide applications of FLUX-BOT to generic as well as to measured temperature data to demonstrate its performance. Both the empirical analysis of temperature amplitudes and the numerical inversion of measured temperature time series to estimate vertical water fluxes extend the suite of current heat-tracing methods and may provide an additional perspective on temperature data.
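    FLUX-BOT itself is a MATLAB code, but the core scheme named here, a Crank-Nicolson finite-difference step for the one-dimensional heat advection-conduction equation, can be sketched in a few lines. The Python sketch below is illustrative only: it assumes constant, nondimensional coefficients and fixed Dirichlet boundaries, and the tridiagonal (Thomas) solver and parameter values are our own, not taken from FLUX-BOT.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def step_cn(T, v, D, dz, dt):
    """One Crank-Nicolson step of dT/dt = D*T'' - v*T', Dirichlet boundaries.

    r and c are the half diffusion and advection numbers; fluxes are averaged
    between the old and new time levels (second-order in time, unconditionally
    stable)."""
    n = len(T)
    r = D * dt / (2 * dz * dz)
    c = v * dt / (4 * dz)
    a  = [0.0] + [-(r + c)] * (n - 2) + [0.0]   # coefficient of T[i-1] (new level)
    b  = [1.0] + [1 + 2 * r] * (n - 2) + [1.0]  # coefficient of T[i]
    cs = [0.0] + [-(r - c)] * (n - 2) + [0.0]   # coefficient of T[i+1]
    d = [T[0]] + [
        (r + c) * T[i - 1] + (1 - 2 * r) * T[i] + (r - c) * T[i + 1]
        for i in range(1, n - 1)
    ] + [T[-1]]
    return thomas(a, b, cs, d)
```

    With v = 0 the scheme relaxes to the linear conduction profile; a positive v (downward water flux) skews the steady profile toward the warm boundary, which is exactly the signature the inversion exploits.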

  16. Studying the interpretation of dreams in the company of analytic candidates.

    PubMed

    Levy, Joshua

    2009-08-01

    Seminars serve as an important, though undervalued, component of psychoanalytic education. The focus of this paper is on the teaching of Freud's The Interpretation of Dreams through a series of seminars presented to analytic candidates at the Toronto Psychoanalytic Institutes. This has been an essential book for introducing generations of candidates to the psychoanalytic concept of the mind and for shaping candidates' understanding and attitudes toward working with their patients' dreams. Four of Freud's basic dream concepts-(1) the method and its application to the exploration of the relationship between manifest and latent dream content, (2) the sources of dreams (day residues), (3) the dream-work, and (4) wish fulfillment-are critically studied in the seminars. Detailed discussion of these basic dream concepts among the candidates and with the teacher, as well as the candidates' feedback at the conclusion of the seminars, are summarized and discussed. Through the teaching and study within the seminar framework of the fundamentals of Freud's dream theory, a shared growth experience results for both teacher and candidates.

  17. Application of wavelet packet transform to compressing Raman spectra data

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Peng, Fei; Cheng, Qinghua; Xu, Dahai

    2008-12-01

    The wavelet transform has become established, alongside the Fourier transform, as a data-processing method in analytical fields. The main fields of application are related to de-noising, compression, variable reduction, and signal suppression. Raman spectroscopy (RS) is characterized by frequency shifts that carry molecular information. Every substance has its own characteristic Raman spectrum, from which the structure, components, concentrations, and other properties of a sample can readily be analyzed. RS is a powerful analytical tool for detection and identification, and many RS databases exist; however, Raman spectral data require considerable storage space and long search times. In this paper, the wavelet packet transform is chosen to compress Raman spectra of several benzene-series compounds. The results show that the energy retained after compression is as high as 99.9%, while the percentage of zeroed coefficients is 87.50%. It is concluded that the wavelet packet transform is of practical significance for compressing RS data.
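    As a rough illustration of the compression scheme described above (not the authors' exact implementation), the sketch below runs a full Haar wavelet packet decomposition in pure Python, zeroes all but the largest coefficients, and reports the retained energy and the fraction of zeroed coefficients. The test signal, decomposition depth, and keep-fraction threshold rule are assumptions for the example.

```python
import math

def haar_split(x):
    s = math.sqrt(2.0)
    a = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]  # averages
    d = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]  # details
    return a, d

def haar_merge(a, d):
    s = math.sqrt(2.0)
    x = []
    for ai, di in zip(a, d):
        x += [(ai + di) / s, (ai - di) / s]
    return x

def packet_forward(x, levels):
    """Full wavelet packet tree: split every band (approx AND detail) each level."""
    bands = [list(x)]
    for _ in range(levels):
        nxt = []
        for band in bands:
            a, d = haar_split(band)
            nxt += [a, d]
        bands = nxt
    return bands

def packet_inverse(bands):
    while len(bands) > 1:
        bands = [haar_merge(bands[i], bands[i + 1]) for i in range(0, len(bands), 2)]
    return bands[0]

def compress(x, levels=4, keep=0.1):
    """Zero all but the largest `keep` fraction of packet coefficients.
    Returns (reconstruction, energy_retained, fraction_of_zeros)."""
    bands = packet_forward(x, levels)
    coeffs = [c for band in bands for c in band]
    thr = sorted((abs(c) for c in coeffs), reverse=True)[max(1, int(keep * len(coeffs))) - 1]
    kept = [[c if abs(c) >= thr else 0.0 for c in band] for band in bands]
    e_total = sum(c * c for c in coeffs)
    e_kept = sum(c * c for band in kept for c in band)
    zeros = sum(1 for band in kept for c in band if c == 0.0)
    return packet_inverse(kept), e_kept / e_total, zeros / len(coeffs)
```

    Because the Haar packet basis is orthogonal, the retained-energy figure is exactly the squared-coefficient fraction kept, which is why smooth, peak-like spectra compress with very little energy loss.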

  18. Instrumental neutron activation analysis for studying size-fractionated aerosols

    NASA Astrophysics Data System (ADS)

    Salma, Imre; Zemplén-Papp, Éva

    1999-10-01

    Instrumental neutron activation analysis (INAA) was utilized to study aerosol samples collected into a coarse and a fine size fraction on Nuclepore polycarbonate membrane filters. As a result of the panoramic INAA, 49 elements were determined in about 200-400 μg of particulate matter by two irradiations and four γ-spectrometric measurements. The analytical calculations were performed by the absolute (k0) standardization method. The calibration procedures, application protocol and data evaluation process are described and discussed; they now make it possible to analyse a considerable number of samples while assuring the quality of the results. To demonstrate the system's analytical capabilities, concentration ranges, median or mean atmospheric concentrations and detection limits are presented for an extensive series of aerosol samples collected within the framework of an urban air pollution study in Budapest. For most elements, the precision of the analysis was found to be better than the uncertainty introduced by the sampling techniques and sample variability.

  19. Padé Approximant and Minimax Rational Approximation in Standard Cosmology

    NASA Astrophysics Data System (ADS)

    Zaninetti, Lorenzo

    2016-02-01

    The luminosity distance in standard ΛCDM cosmology, and consequently the distance modulus for supernovae, can be approximated by a Padé approximant. A comparison with a known analytical solution shows that the Padé approximant for the luminosity distance has an error of 4% at redshift 10. The analogous Taylor expansion of the luminosity distance reaches a 4% error already at redshift 0.7; for the luminosity distance, the Padé approximation is therefore superior to the Taylor series. The availability of an analytical expression for the distance modulus allows applying the Levenberg-Marquardt method to derive the fundamental parameters from the available supernova compilations. A new luminosity function for galaxies, derived from the truncated gamma probability density function, models the observed galaxy luminosity function when the observed range in absolute magnitude is modeled by the Padé approximant. A comparison of ΛCDM with other cosmologies is made from a statistical point of view.
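    The central point, that a Padé approximant can stay accurate far from the expansion point where a Taylor polynomial built from the same coefficients degrades, is easy to reproduce on a toy function. The sketch below uses exp(x) as a stand-in (not the luminosity distance), with its standard [2/2] Padé approximant:

```python
import math

def pade22_exp(x):
    """[2/2] Pade approximant of exp(x), built from its Taylor coefficients:
    exp(x) ~ (1 + x/2 + x**2/12) / (1 - x/2 + x**2/12)."""
    num = 1.0 + x / 2.0 + x * x / 12.0
    den = 1.0 - x / 2.0 + x * x / 12.0
    return num / den

def taylor4_exp(x):
    """Taylor polynomial of exp(x) through order 4 (same input information)."""
    return sum(x ** k / math.factorial(k) for k in range(5))
```

    At x = -3 the Taylor polynomial has already lost all accuracy while the rational approximant is still within a few percent; the same mechanism is what lets the Padé form of the luminosity distance track the exact solution out to high redshift.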

  20. New solutions to the constant-head test performed at a partially penetrating well

    NASA Astrophysics Data System (ADS)

    Chang, Y. C.; Yeh, H. D.

    2009-05-01

    The mathematical model describing the aquifer response to a constant-head test performed at a fully penetrating well can be easily solved by the conventional integral transform technique, with a Dirichlet-type condition chosen as the boundary condition along the rim of the wellbore. For a test well with partial penetration, however, the boundary condition must be treated as a mixed-type condition: the Dirichlet condition is prescribed along the well screen and a Neumann-type no-flow condition is specified over the unscreened part of the test well. The model for this mixed boundary-value problem, in a confined aquifer system of infinite radial extent and finite vertical extent, is solved using dual series equations and a perturbation method. This approach provides analytical results for the drawdown in the partially penetrating well and the well discharge along the screen. The semi-analytical solutions are particularly useful for practical applications from a computational point of view.

  1. Prediction of retention times in comprehensive two-dimensional gas chromatography using thermodynamic models.

    PubMed

    McGinitie, Teague M; Harynuk, James J

    2012-09-14

    A method was developed to accurately predict both the primary and secondary retention times for a series of alkanes, ketones and alcohols in a flow-modulated GC×GC system. This was accomplished through the use of a three-parameter thermodynamic model in which ΔH, ΔS, and ΔCp for an analyte's interaction with the stationary phases in both dimensions are known. Coupling this thermodynamic model with a time-summation calculation, it was possible to accurately predict both (1)tr and (2)tr for all analytes. The model was able to predict retention times regardless of the temperature ramp used, with an average error of only 0.64% for (1)tr and 2.22% for (2)tr. The model shows promise for the accurate prediction of retention times in GC×GC for a wide range of compounds and is able to utilize data collected from 1D experiments. Copyright © 2012 Elsevier B.V. All rights reserved.
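    A minimal numerical sketch of this kind of thermodynamic retention model can be written in a few lines. The version below is one dimension only, assumes a constant holdup time, and uses a common textbook form of the three-parameter free-energy expression; the ramp and the values of ΔH, ΔS, ΔCp and the phase ratio β are entirely hypothetical, not the paper's fitted parameters.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def k_factor(T, dH, dS, dCp, beta, T0=298.15):
    """Retention factor from a three-parameter thermodynamic model:
    dG(T) = dH + dCp*(T - T0) - T*(dS + dCp*ln(T/T0));  k = exp(-dG/(R*T)) / beta."""
    dG = dH + dCp * (T - T0) - T * (dS + dCp * math.log(T / T0))
    return math.exp(-dG / (R * T)) / beta

def retention_time(ramp, t_M, dH, dS, dCp, beta, dt=0.01):
    """Time-summation calculation: integrate fractional migration
    1 = sum dt / (t_M * (1 + k(T(t)))) over the temperature program.
    `ramp(t)` returns column temperature (K) at time t (s); t_M is the
    (assumed constant) holdup time."""
    t, migrated = 0.0, 0.0
    while migrated < 1.0:
        k = k_factor(ramp(t), dH, dS, dCp, beta)
        migrated += dt / (t_M * (1.0 + k))
        t += dt
    return t
```

    The summation mirrors the physical picture: at low temperature k is large and the analyte barely moves; as the ramp raises T, k collapses and the remaining column length is traversed quickly. A more strongly retained analyte (more negative ΔH) elutes later for the same ramp.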

  2. Radiated flow of chemically reacting nanoliquid with an induced magnetic field across a permeable vertical plate

    NASA Astrophysics Data System (ADS)

    Mahanthesh, B.; Gireesha, B. J.; Athira, P. R.

    The impact of an induced magnetic field on flow over a flat porous plate is examined analytically for an incompressible water-copper nanoliquid. The flow is assumed to be laminar, steady and two-dimensional, and the plate is subjected to a uniform free-stream velocity as well as a suction velocity. The flow formulation is developed using the Maxwell-Garnetts (MG) and Brinkman models of the nanoliquid. The effects of thermal radiation, viscous dissipation, a temperature-dependent heat source/sink and a first-order chemical reaction are also retained. The governing non-linear problems are non-dimensionalized, and analytic solutions are presented via a series expansion method. Graphs are plotted to analyze the influence of the pertinent parameters on the flow, magnetic, heat and mass transfer fields, as well as on the friction factor, current density, and Nusselt and Sherwood numbers. It is found that the friction factor at the plate increases with the magnetic Prandtl number, while the rate of heat transfer decreases with increasing nanoparticle volume fraction and magnetic field strength.

  3. [Study on HPLC fingerprint of 11 Taraxacum species in northeast of China].

    PubMed

    Zhu, Dan; Zhao, Xin; Xu, Qiao; Ning, Wei

    2011-04-01

    To study the RP-HPLC fingerprints of 11 plants in the genus Taraxacum for their quality control, fingerprints were determined using an Agilent 1100 series instrument system. Chromatographic analyses were performed on a Kromasil 100-5 C18 (4.6 mm x 250 mm, 5 microm) analytical column, eluted with methanol and water containing 0.5% acetic acid as the mobile phases in gradient elution at a flow rate of 1.0 mL x min(-1). The detection wavelength was 323 nm and the column temperature was 35 degrees C. Eleven species of Taraxacum from northeast China were analyzed, and twenty-five common peaks were found in the 11 RP-HPLC fingerprints. By comparing retention times and on-line UV spectra, peaks No. 10, No. 12, No. 16 and No. 25 were identified as chlorogenic acid, caffeic acid, p-coumaric acid and luteolin, respectively. The analytical method, with good precision and reproducibility, can be useful in the quality control of Taraxacum plants.

  4. Analysis of statistical and standard algorithms for detecting muscle onset with surface electromyography

    PubMed Central

    Tweedell, Andrew J.; Haynes, Courtney A.

    2017-01-01

    The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60–90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms perform equally well when the time series has multiple bursts of muscle activity. PMID:28489897
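    A single-changepoint Bayesian posterior of the general kind evaluated here can be sketched compactly. The model below is a simplified stand-in, not the authors' algorithm: it assumes Gaussian noise with known sigma, a flat prior on the changepoint location, and flat priors on the two segment means (which integrate out analytically to a 1/sqrt(segment length) factor).

```python
import math

def changepoint_posterior(x, sigma):
    """Posterior over a single changepoint tau splitting x into [0, tau) and [tau, n).

    Gaussian noise with known sigma; flat priors on tau and on both segment
    means. Returns {tau: probability}."""
    n = len(x)

    def seg_logml(seg):
        m = sum(seg) / len(seg)
        sse = sum((v - m) ** 2 for v in seg)
        # integrating a flat prior over the mean leaves a 1/sqrt(len) factor
        return -sse / (2 * sigma * sigma) - 0.5 * math.log(len(seg))

    taus = range(2, n - 1)  # keep at least two points per segment
    logp = [seg_logml(x[:t]) + seg_logml(x[t:]) for t in taus]
    mx = max(logp)                       # log-sum-exp normalization
    w = [math.exp(v - mx) for v in logp]
    z = sum(w)
    return {t: w[i] / z for i, t in enumerate(taus)}
```

    For EMG onset detection one would then report the posterior mode (or the first tau whose cumulative posterior crosses a chosen probability, analogous to the 60-90% criterion discussed above).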

  5. Multiple internal standard normalization for improving HS-SPME-GC-MS quantitation in virgin olive oil volatile organic compounds (VOO-VOCs) profile.

    PubMed

    Fortini, Martina; Migliorini, Marzia; Cherubini, Chiara; Cecchi, Lorenzo; Calamai, Luca

    2017-04-01

    The commercial value of virgin olive oils (VOOs) strongly depends on their classification, also based on the aroma of the oils, usually evaluated by a panel test. Nowadays, a reliable analytical method is still needed to evaluate the volatile organic compounds (VOCs) and support the standard panel test method. To date, the use of HS-SPME sampling coupled to GC-MS is generally accepted for the analysis of VOCs in VOOs. However, VOO is a challenging matrix due to the simultaneous presence of: i) compounds at ppm and ppb concentrations; ii) molecules belonging to different chemical classes and iii) analytes with a wide range of molecular mass. Therefore, HS-SPME-GC-MS quantitation based upon the use of the external standard method, or of only a single internal standard (ISTD) for data normalization, may be troublesome. In this work a multiple internal standard normalization is proposed to overcome these problems and improve quantitation of VOO-VOCs. As many as 11 ISTDs were used for the quantitation of 71 VOCs; for each VOC the most suitable ISTD was selected, and good linearity over a wide calibration range was obtained. For all compounds except E-2-hexenal, the linear calibration range obtained without an ISTD, or with an unsuitable one, was narrower than that obtained with a suitable ISTD, confirming the usefulness of multiple internal standard normalization for the correct quantitation of the VOC profile in VOOs. The method was validated for 71 VOCs, and then applied to a series of lampante virgin olive oils and extra virgin olive oils. In light of our results, we propose the application of this analytical approach for routine quantitative analyses and to support sensorial analysis in the evaluation of positive and negative VOO attributes. Copyright © 2017 Elsevier B.V. All rights reserved.
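    The idea of matching each analyte to its most suitable ISTD can be sketched as a small calibration exercise: normalize the analyte's peak areas by each candidate ISTD, fit a line against concentration, and keep the ISTD giving the most linear response. All names and numbers below are hypothetical, not the paper's data.

```python
def fit_line(xs, ys):
    """Ordinary least squares y = a*x + b; returns (a, b, R**2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

def best_istd(conc, analyte_areas, istd_areas_by_name):
    """Pick the ISTD whose normalized response (A_analyte / A_istd) is most
    linear in concentration; returns (name, slope, intercept)."""
    best = None
    for name, areas in istd_areas_by_name.items():
        ratios = [a / i for a, i in zip(analyte_areas, areas)]
        a, b, r2 = fit_line(conc, ratios)
        if best is None or r2 > best[0]:
            best = (r2, name, a, b)
    _, name, a, b = best
    return name, a, b

def quantify(area_ratio, slope, intercept):
    """Invert the calibration line for an unknown sample's area ratio."""
    return (area_ratio - intercept) / slope
```

    An ISTD whose own recovery drifts from run to run produces a poorly linear normalized response and is rejected by the R² criterion, which is the practical effect of matching each analyte to its own internal standard.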

  6. A study on the application of Fourier series in IMRT treatment planning.

    PubMed

    Almeida-Trinidad, R; Garnica-Garza, H M

    2007-12-01

    In intensity-modulated radiotherapy, a set of x-ray fluence profiles is iteratively adjusted until a desired absorbed dose distribution is obtained. The purpose of this article is to present a method that allows the optimization of fluence profiles based on the Fourier series decomposition of an initial approximation to the profile. The method has the advantage that a new fluence profile can be obtained in a precise and controlled way with the tuning of only two parameters, namely the phase of the sine and cosine terms of one of the Fourier components, in contrast to the point-by-point tuning of the profile. Also, because the method uses analytical functions, the resultant profiles do not exhibit numerical artifacts. A test case consisting of a mathematical phantom with a target wrapped around a critical structure is discussed to illustrate the algorithm. It is shown that the degree of conformality of the absorbed dose distribution can be tailored by varying the number of Fourier terms made available to the optimization algorithm. For the test case discussed here, it is shown that the number of Fourier terms to be modified depends on the number of radiation beams incident on the target but is generally on the order of 10 terms.
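    The core idea, representing a fluence profile by a truncated Fourier series so that conformality can be tuned through the number of terms made available, can be sketched generically (this is not the authors' optimizer, and the step-like test profile is an assumption):

```python
import math

def fourier_coeffs(profile, n_terms):
    """Mean and the first n_terms (cosine, sine) coefficient pairs of a
    profile sampled uniformly on [0, 1)."""
    n = len(profile)
    a0 = sum(profile) / n
    coeffs = []
    for k in range(1, n_terms + 1):
        ak = 2.0 / n * sum(p * math.cos(2 * math.pi * k * i / n) for i, p in enumerate(profile))
        bk = 2.0 / n * sum(p * math.sin(2 * math.pi * k * i / n) for i, p in enumerate(profile))
        coeffs.append((ak, bk))
    return a0, coeffs

def evaluate(a0, coeffs, n_points):
    """Rebuild the (smooth, artifact-free) profile from the truncated series."""
    out = []
    for i in range(n_points):
        x = i / n_points
        v = a0 + sum(ak * math.cos(2 * math.pi * (k + 1) * x) +
                     bk * math.sin(2 * math.pi * (k + 1) * x)
                     for k, (ak, bk) in enumerate(coeffs))
        out.append(v)
    return out
```

    Tuning one component then means scaling its (ak, bk) pair, i.e. adjusting an amplitude and phase, instead of adjusting the fluence point by point; truncating the series controls how sharply the profile (and hence the dose) can conform.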

  7. Adaptation of High-Throughput Screening in Drug Discovery—Toxicological Screening Tests

    PubMed Central

    Szymański, Paweł; Markowicz, Magdalena; Mikiciuk-Olasik, Elżbieta

    2012-01-01

    High-throughput screening (HTS) is one of the newest techniques used in drug design and may be applied in the biological and chemical sciences. Through the use of robots, detectors and software that regulate the whole process, this method enables a series of analyses of chemical compounds to be conducted in a short time and the affinity for biological structures, which is often related to toxicity, to be defined. Since 2008 we have automated this technique and, as a consequence, gained the ability to examine 100,000 compounds per day. The HTS method is increasingly utilized in conjunction with analytical techniques such as NMR or coupled methods, e.g., LC-MS/MS. Series of studies enable the establishment of the rate of affinity for targets or the level of toxicity. Moreover, research is being conducted on the conjugation of nanoparticles with drugs and the determination of the toxicity of such structures, frequently using cell lines. Due to the miniaturization of all systems, it is possible to examine a compound's toxicity with only 1–3 mg of the compound. Determining cytotoxicity in this way significantly decreases expenditure and shortens the length of the study. PMID:22312262

  8. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  9. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  10. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  11. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  12. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  13. SAM Radiochemical Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target radiochemical analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select radiochemical analytes.

  14. Engineering applications and analysis of vibratory motion fourth order fluid film over the time dependent heated flat plate

    NASA Astrophysics Data System (ADS)

    Mohmand, Muhammad Ismail; Mamat, Mustafa Bin; Shah, Qayyum

    2017-07-01

    This article deals with the time-dependent analysis of a thermally conducting, magnetohydrodynamic (MHD) liquid film flow of a fourth-order fluid past a vertical, vibrating plate; the analysis is developed for fluids of higher-order, complex rheology. The governing equations have been modeled as non-linear partial differential equations together with the physical boundary conditions. Two different analytical approaches, the Adomian decomposition method (ADM) and the optimal homotopy asymptotic method (OHAM), have been used to find series solutions of the problems. The solutions obtained via the two methods have been compared using graphs and tables, and excellent agreement was found. Variations of the embedded flow parameters in the solutions are analysed through graphical diagrams.

  15. Detection of Interference Phase by Digital Computation of Quadrature Signals in Homodyne Laser Interferometry

    PubMed Central

    Rerucha, Simon; Buchta, Zdenek; Sarbort, Martin; Lazar, Josef; Cip, Ondrej

    2012-01-01

    We have proposed an approach to interference phase extraction in homodyne laser interferometry. The method employs a series of computational steps to reconstruct the signals for quadrature detection from an interference signal of a non-polarising interferometer sampled by a simple photodetector. The complexity trade-off is the use of a laser beam with frequency-modulation capability. The method is analytically derived, and its validity and performance are experimentally verified. It has proven to be a feasible alternative to traditional homodyne detection, since it performs with comparable accuracy, especially where optical setup complexity is a principal issue and modulation of the laser beam is not a heavy burden (e.g., in multi-axis sensors or laser-diode-based systems). PMID:23202038
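    Once the quadrature pair has been reconstructed, the interference phase follows from a four-quadrant arctangent plus unwrapping. A minimal sketch of that final step only (the signal-reconstruction steps from the paper are not reproduced here):

```python
import math

def quadrature_phase(i_signal, q_signal):
    """Interference phase from a quadrature pair, unwrapped into a
    continuous series (valid while per-sample phase steps stay below pi)."""
    phase = [math.atan2(q, i) for i, q in zip(i_signal, q_signal)]
    out = [phase[0]]
    for p in phase[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))  # remove 2*pi jumps
        out.append(out[-1] + d)
    return out
```

    In displacement interferometry the unwrapped phase maps to displacement via d = phase * lambda / (4*pi) for a double-pass geometry; the unwrapping step is what lets the measurement span many fringes.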

  16. Stochastic modelling of intermittency.

    PubMed

    Stemler, Thomas; Werner, Johannes P; Benner, Hartmut; Just, Wolfram

    2010-01-13

    Recently, methods have been developed to model low-dimensional chaotic systems in terms of stochastic differential equations. We tested such methods in an electronic circuit experiment. We aimed to obtain reliable drift and diffusion coefficients even without a pronounced time-scale separation of the chaotic dynamics. By comparing the analytical solutions of the corresponding Fokker-Planck equation with experimental data, we show here that crisis-induced intermittency can be described in terms of a stochastic model which is dominated by state-space-dependent diffusion. Further on, we demonstrate and discuss some limits of these modelling approaches using numerical simulations. This enables us to state a criterion that can be used to decide whether a stochastic model will capture the essential features of a given time series. This journal is © 2010 The Royal Society
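    Drift and diffusion coefficients of the kind used in such stochastic models are commonly estimated from a time series via the first two conditional moments of the increments (the Kramers-Moyal coefficients). The generic sketch below runs on simulated Ornstein-Uhlenbeck data, which is our own test process standing in for the circuit measurements:

```python
import math
import random

def simulate_ou(theta, sigma, dt, n, seed=2):
    """Euler-Maruyama simulation of dx = -theta*x*dt + sigma*dW (test data only)."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        xs.append(x)
        x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return xs

def kramers_moyal(xs, dt, edges):
    """Bin-wise drift D1(x) = <dx>/dt and diffusion D2(x) = <dx**2>/(2*dt).
    Returns one (D1, D2) pair per bin, or (None, None) for empty bins."""
    sums = [[0.0, 0.0, 0] for _ in range(len(edges) - 1)]
    for x0, x1 in zip(xs, xs[1:]):
        for b in range(len(edges) - 1):
            if edges[b] <= x0 < edges[b + 1]:
                dx = x1 - x0
                sums[b][0] += dx
                sums[b][1] += dx * dx
                sums[b][2] += 1
                break
    return [(s0 / (c * dt), s1 / (2 * c * dt)) if c else (None, None)
            for s0, s1, c in sums]
```

    For the OU process the recovered drift should be restoring (negative at positive x, positive at negative x) and the diffusion roughly sigma**2/2; state-dependent structure in D2, as in the crisis-induced intermittency discussed above, would show up as bin-to-bin variation.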

  17. Cyanoacrylate Skin Surface Stripping and the 3S-Biokit Advent in Tropical Dermatology: A Look from Liège

    PubMed Central

    Piérard, Gérald E.; Piérard-Franchimont, Claudine; Paquet, Philippe; Hermanns-Lê, Trinh; Delvenne, Philippe

    2014-01-01

    In the dermatopathology field, some simple available laboratory tests require minimum equipment for establishing a diagnosis. Among them, the cyanoacrylate skin surface stripping (CSSS), formerly named skin surface biopsy or follicular biopsy, represents a convenient low cost procedure. It is a minimally invasive method collecting a continuous sheet of stratum corneum and horny follicular casts. In the vast majority of cases, it is painless and is unassociated with adverse events. CSSS can be performed in subjects of any age. The method has a number of applications in diagnostic dermatopathology and cosmetology, as well as in experimental dermatology settings. A series of derived analytic procedures include xerosis grading, comedometry, corneofungimetry, corneodynamics of stratum corneum renewal, corneomelametry, corneosurfametry, and corneoxenometry. PMID:25177726

  18. Surface charge method for molecular surfaces with curved areal elements I. Spherical triangles

    NASA Astrophysics Data System (ADS)

    Yu, Yi-Kuo

    2018-03-01

    Parametrizing a curved surface with flat triangles in electrostatics problems creates a diverging electric field. One way to avoid this is to use curved areal elements. However, charge density integration over curved patches appears difficult. This paper, dealing with spherical triangles, is the first in a series aiming to solve this problem. Here, we lay the groundwork for employing curved patches in applying the surface charge method to electrostatics. We show analytically how one may control the accuracy by expanding in powers of the arc length (multiplied by the curvature). To accommodate curved areal elements that are not extremely small, we provide enough detail to include the higher-order corrections needed for better accuracy when slightly larger surface elements are used.
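    As a zeroth-order illustration of working with spherical-triangle patches (not the paper's charge-density integrals), the exact area of such a patch, i.e. the monopole moment of a uniform surface charge on it, follows from the spherical excess via L'Huilier's theorem:

```python
import math

def arc(u, v):
    """Great-circle angle between two unit vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(max(-1.0, min(1.0, dot)))

def spherical_triangle_area(p1, p2, p3, radius=1.0):
    """Area of a spherical triangle with vertices given as unit vectors.

    L'Huilier: tan(E/4) = sqrt(tan(s/2) tan((s-a)/2) tan((s-b)/2) tan((s-c)/2)),
    where a, b, c are the side arcs, s their half-sum, E the spherical excess;
    the area is E * radius**2."""
    a, b, c = arc(p2, p3), arc(p1, p3), arc(p1, p2)
    s = 0.5 * (a + b + c)
    t = (math.tan(s / 2) * math.tan((s - a) / 2) *
         math.tan((s - b) / 2) * math.tan((s - c) / 2))
    e = 4.0 * math.atan(math.sqrt(max(0.0, t)))
    return e * radius * radius
```

    Unlike a flat-triangle approximation, this area is exact for any patch size, which is the kind of curvature-faithful bookkeeping the surface charge method needs to avoid the divergence described above.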

  19. Validation conform ISO-15189 of assays in the field of autoimmunity: Joint efforts in The Netherlands.

    PubMed

    Mulder, Leontine; van der Molen, Renate; Koelman, Carin; van Leeuwen, Ester; Roos, Anja; Damoiseaux, Jan

    2018-05-01

    ISO 15189:2012 requires validation of methods used in the medical laboratory, and lists a series of performance parameters for consideration to include. Although these performance parameters are feasible for clinical chemistry analytes, application in the validation of autoimmunity tests is a challenge. Lack of gold standards or reference methods in combination with the scarcity of well-defined diagnostic samples of patients with rare diseases make validation of new assays difficult. The present manuscript describes the initiative of Dutch medical immunology laboratory specialists to combine efforts and perform multi-center validation studies of new assays in the field of autoimmunity. Validation data and reports are made available to interested Dutch laboratory specialists. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Comparison of univariate and multivariate calibration for the determination of micronutrients in pellets of plant materials by laser induced breakdown spectrometry

    NASA Astrophysics Data System (ADS)

    Braga, Jez Willian Batista; Trevizan, Lilian Cristina; Nunes, Lidiane Cristina; Rufini, Iolanda Aparecida; Santos, Dário, Jr.; Krug, Francisco José

    2010-01-01

The application of laser induced breakdown spectrometry (LIBS) aimed at the direct analysis of plant materials is a great challenge that still requires effort for its development and validation. To this end, a series of experimental approaches has been carried out in order to show that LIBS can be used as an alternative to wet-acid-digestion-based methods for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data, compared to univariate regression on line emission intensities. In the present work, the performance of univariate and multivariate calibration, the latter based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. The development of a specific PLSR model for each analyte and the selection of spectral regions containing only lines of the analyte of interest were the best conditions for the analysis. In this particular application, the two approaches showed similar performance, but PLSR appeared more robust owing to a lower occurrence of outliers in comparison to the univariate method. The data suggest that efforts dealing with sample presentation and the fitness of standards for LIBS analysis are needed in order to fulfill the boundary conditions for matrix-independent development and validation.
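The univariate alternative mentioned above can be sketched in a few lines: fit a straight calibration line of emission intensity against concentration for one analyte line, then invert it for an unknown sample. This is a minimal illustration; the intensities and concentrations are hypothetical, not data from the study.

```python
# Illustrative univariate calibration for one emission line (hypothetical data).
# Fit intensity = a*concentration + b by ordinary least squares, then invert
# the fit to predict the concentration of an unknown sample.

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical standards: analyte concentration (mg/kg) vs line intensity (a.u.)
conc = [0.0, 10.0, 20.0, 40.0, 80.0]
intensity = [5.0, 105.0, 203.0, 410.0, 812.0]

a, b = fit_line(conc, intensity)
unknown_intensity = 300.0
predicted_conc = (unknown_intensity - b) / a  # invert the calibration line
print(a, b, predicted_conc)
```

A PLSR model replaces the single intensity with a whole spectral region per analyte, which is where the robustness against outliers reported above comes from.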

  1. Frequency adaptation in controlled stochastic resonance utilizing delayed feedback method: two-pole approximation for response function.

    PubMed

    Tutu, Hiroki

    2011-06-01

Stochastic resonance (SR) enhanced by time-delayed feedback control is studied. The system in the absence of control is described by a Langevin equation for a bistable system and exhibits the usual SR response. The feedback loop, whose delay time equals one-half of the period (2π/Ω) of the input signal, gives rise to a noise-induced oscillatory switching cycle between the two states in the output time series, with an average frequency slightly smaller than Ω in the small-noise regime. As the noise intensity D approaches an appropriate level, the noise works constructively to adapt the frequency of the switching cycle to Ω, and this changes the dynamics into a state wherein the phase of the output signal is entrained to that of the input signal from its phase-slipped state. The behavior is characterized by the power loss of the external signal, or response function. This paper treats the response function on the basis of a dichotomic model. A method of delay-coordinate series expansion, which reduces a non-Markovian transition probability flux to a series of memory fluxes on a discrete delay-coordinate system, is proposed. Its primitive implementation suggests that the method can be a potential tool for a systematic analysis of the SR phenomenon with a delayed feedback loop. We show that the D-dependent behavior of the poles of a finite Laplace transform of the response function qualitatively characterizes the structure of the power loss, and we also show analytical results for the correlation function and the power spectral density.
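A minimal numerical sketch of the kind of system described above can be written as an overdamped bistable Langevin equation with periodic forcing and a half-period delayed feedback term, integrated by the Euler-Maruyama scheme. The model form and all parameter values here are illustrative assumptions, not the paper's actual equations.

```python
import math, random

# Sketch (assumed model, not the paper's exact one): overdamped bistable dynamics
#   dx/dt = x - x**3 + A*sin(W*t) + K*x(t - tau) + sqrt(2*D)*xi(t)
# with the feedback delay tau set to half the forcing period, tau = pi/W.

def simulate(n_steps=20000, dt=0.01, A=0.3, W=0.1, K=0.2, D=0.3, seed=0):
    random.seed(seed)
    tau_steps = int(round(math.pi / W / dt))  # half-period delay in steps
    xs = [1.0]                                # start in the right-hand well
    for i in range(n_steps):
        t = i * dt
        x = xs[-1]
        x_delayed = xs[i - tau_steps] if i >= tau_steps else xs[0]
        drift = x - x**3 + A * math.sin(W * t) + K * x_delayed
        x_new = x + drift * dt + math.sqrt(2 * D * dt) * random.gauss(0.0, 1.0)
        xs.append(x_new)
    return xs

xs = simulate()
switches = sum(1 for a, b in zip(xs, xs[1:]) if a * b < 0)  # zero crossings
print(len(xs), switches)
```

Counting zero crossings of the output, as above, is one crude way to estimate the average switching frequency that the paper compares with Ω.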

  2. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  3. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  4. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  5. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  6. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  7. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  8. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  9. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture... Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MREs are listed as follows: (1) Official Methods of Analysis of AOAC...

  10. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  11. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiedenman, B. J.; White, T. L.; Mahannah, R. N.

Ion Chromatography (IC) is the principal analytical method used to support studies of Sludge Receipt and Adjustment Tank (SRAT) chemistry at DWPF. A series of prior analytical "Round Robin" (RR) studies, which included both supernate and sludge samples from SRAT simulant and were previously reported as memos, are tabulated in this report.2,3 From these studies it was decided to standardize the IC column size to 4 mm diameter, eliminating the capillary column from use. As a follow-on test, the DWPF, PSAL, and AD laboratories participated in the current analytical RR to determine a suite of anions in SRAT simulant by IC; those results are also tabulated in this report. The particular goal was to confirm the laboratories' ability to measure and quantitate glycolate ion. The target was ±20% inter-laboratory agreement of the analyte averages for the RR. Each of the three laboratories analyzed a batch of 12 samples. For each laboratory, the percent relative standard deviation (%RSD) of the averages for nitrate, glycolate, and oxalate was 10% or less. All three laboratories met the goal of 20% relative agreement for nitrate and glycolate. For oxalate, the PSAL laboratory reported an average value 20% higher than the averages reported by the DWPF and AD laboratories. Because of this wider window of agreement, it was concluded to continue the practice of an additional acid digestion for total oxalate measurement. It should also be noted that large amounts of glycolate in the SRAT samples will affect the detection limits of nearby-eluting peaks, namely fluoride and formate. A suite of scoping experiments is presented in the report to identify and isolate other potential inter-laboratory discrepancies. Specific inter-laboratory ion chromatography method conditions and differences are tabulated. Most differences were minor, but some temperature-control equipment differences are significant, leading to a recommendation of a heated jacket for analytical columns that are remoted for use in radiohoods. A suggested method improvement would be to implement column temperature control at a temperature slightly above ambient to avoid peak shifting due to temperature fluctuations. Temperature control in this manner would improve short- and long-term peak retention time stability. An unknown peak was observed during the analysis of glycolic acid and SRAT simulant; it was determined to best match diglycolic acid. The development of a method for acetate is summarized, and no significant amount of acetate was observed in the SRAT products tested. In addition, an alternative gas chromatography (GC) method for glycolate is summarized.
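The two acceptance statistics used in the round robin, per-laboratory %RSD and inter-laboratory agreement of the averages within a ±20% window, can be sketched as follows. The glycolate values below are hypothetical, not the reported results.

```python
# Sketch of the two acceptance statistics described above (hypothetical data):
# per-laboratory %RSD of replicate results, and inter-laboratory agreement of
# the lab averages within a +/-20% window around the grand mean.

def pct_rsd(values):
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5  # sample SD
    return 100.0 * sd / mean

def labs_agree(lab_means, window=0.20):
    grand = sum(lab_means) / len(lab_means)
    return all(abs(m - grand) / grand <= window for m in lab_means)

# Hypothetical glycolate results (g/L), 4 replicates per laboratory
dwpf = [10.1, 10.4, 9.8, 10.2]
psal = [10.9, 11.2, 10.8, 11.0]
ad   = [9.6, 9.9, 9.7, 9.8]

means = [sum(v) / len(v) for v in (dwpf, psal, ad)]
print([round(pct_rsd(v), 1) for v in (dwpf, psal, ad)], labs_agree(means))
```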

  13. Comparison of PSF maxima and minima of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems

    NASA Astrophysics Data System (ADS)

    Ratnam, Challa; Lakshmana Rao, Vadlamudi; Lachaa Goud, Sivagouni

    2006-10-01

In the present paper, and in a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on the Fresnel-Kirchhoff diffraction theory, the formulae for the point spread function (PSF) are derived. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper.

  14. A Guided Tour of Mathematical Methods for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Snieder, Roel; van Wijk, Kasper

    2015-05-01

    1. Introduction; 2. Dimensional analysis; 3. Power series; 4. Spherical and cylindrical coordinates; 5. Gradient; 6. Divergence of a vector field; 7. Curl of a vector field; 8. Theorem of Gauss; 9. Theorem of Stokes; 10. The Laplacian; 11. Scale analysis; 12. Linear algebra; 13. Dirac delta function; 14. Fourier analysis; 15. Analytic functions; 16. Complex integration; 17. Green's functions: principles; 18. Green's functions: examples; 19. Normal modes; 20. Potential-field theory; 21. Probability and statistics; 22. Inverse problems; 23. Perturbation theory; 24. Asymptotic evaluation of integrals; 25. Conservation laws; 26. Cartesian tensors; 27. Variational calculus; 28. Epilogue on power and knowledge.

  15. Analytical Investigation of Elastic Thin-Walled Cylinder and Truncated Cone Shell Intersection Under Internal Pressure.

    PubMed

    Zamani, J; Soltani, B; Aghaei, M

    2014-10-01

An elastic solution for a cylinder-truncated cone shell intersection under internal pressure is presented. The edge solution theory used in this study takes bending moments and shearing forces into account in the thin-walled shell-of-revolution element. The general solution of the cone equations is based on the power series method. The effect of the cone apex angle on the stress distribution in the conical and cylindrical parts of the structure is investigated. In addition, the effect of the intersection and boundary locations on the circumferential and longitudinal stresses is evaluated, and their quantitative importance is demonstrated.
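As a generic illustration of the power series method invoked above (applied here to the simple test equation y'' + y = 0 rather than to the cone shell equations), one substitutes y = Σ aₖxᵏ and matches coefficients to obtain a recurrence for the aₖ:

```python
from math import cos

# Generic power series method illustration for y'' + y = 0 (NOT the cone
# equations): matching coefficients gives a_{k+2} = -a_k / ((k+1)(k+2)).

def series_coeffs(a0, a1, n):
    a = [0.0] * n
    a[0], a[1] = a0, a1
    for k in range(n - 2):
        a[k + 2] = -a[k] / ((k + 1) * (k + 2))
    return a

def eval_series(a, x):
    return sum(ak * x**k for k, ak in enumerate(a))

a = series_coeffs(1.0, 0.0, 20)             # y(0)=1, y'(0)=0  ->  y = cos(x)
print(abs(eval_series(a, 0.5) - cos(0.5)))  # truncation error is tiny
```

The shell equations yield a more elaborate recurrence, but the mechanics of truncating the series and evaluating it are the same.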

  16. Simulation of Foam Impact Effects on Components of the Space Shuttle Thermal Protection System. Chapter 7

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.; Park, Young-Keun

    2004-01-01

A series of three-dimensional simulations has been performed to investigate analytically the effect of insulating foam impacts on ceramic tile and reinforced carbon-carbon components of the Space Shuttle thermal protection system. The simulations employed a hybrid particle-finite element method and a parallel code developed for use in spacecraft design applications. The conclusions suggested by the numerical study are generally consistent with experiment. The results emphasize the need for additional material testing work on the dynamic mechanical response of thermal protection system materials, and for additional impact experiments for use in validating computational models of impact effects.

  17. Towards Adaptive Educational Assessments: Predicting Student Performance using Temporal Stability and Data Analytics in Learning Management Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thakur, Gautam; Olama, Mohammed M; McNair, Wade

Data-driven assessments and adaptive feedback are becoming a cornerstone of research in educational data analytics, which involves developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both students and educational institutions. It can support timely intervention to prevent students from failing a course, increase the efficacy of advising functions, and improve course completion rates. In this paper, we present our efforts in using data analytics to enable educationists to design novel data-driven assessment and feedback mechanisms. To achieve this objective, we investigate the temporal stability of students' grades and perform predictive analytics on academic data collected from 2009 through 2013 in one of the most commonly used learning management systems, Moodle. First, we identified the data features useful for assessment and for predicting student outcomes, such as students' scores on homework assignments, quizzes, and exams, in addition to their activity in discussion forums and their total Grade Point Average (GPA) in the term they enrolled in the course. Second, time series models in both the frequency and time domains are applied to characterize the progression as well as overall projections of the grades. In particular, the models analyze the stability and fluctuation of grades among students across the collegiate years (from freshman to senior) and across disciplines. Third, logistic regression and neural network predictive models are used to identify, as early as possible, students who are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. The time series analysis indicates that assessments and continuous feedback are more critical for freshmen and sophomores (even in easy courses) than for seniors, and that such assessments may be provided using the predictive models. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy. Our results show that strong ties are associated with the first few weeks of coursework, and they have an impact on the design and distribution of individual modules.
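The pass/fail predictor described in the third step can be sketched as a plain logistic regression trained by gradient descent. The feature names and values below are hypothetical, not drawn from the Moodle dataset.

```python
import math

# Minimal logistic-regression sketch of a pass/fail predictor (toy data;
# features and labels are hypothetical, not from the study's Moodle dataset).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                       # gradient of log loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Features: [homework avg, quiz avg, GPA], scaled to [0, 1]; label 1 = passed
X = [[0.9, 0.8, 0.9], [0.2, 0.3, 0.4], [0.8, 0.9, 0.7],
     [0.3, 0.2, 0.5], [0.7, 0.7, 0.8], [0.1, 0.4, 0.3]]
y = [1, 0, 1, 0, 1, 0]

w, b = train(X, y)
p_at_risk = sigmoid(sum(wj * xj for wj, xj in zip(w, [0.25, 0.3, 0.4])) + b)
print(p_at_risk)  # low predicted pass probability flags the student early
```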

  18. Separation of very hydrophobic analytes by micellar electrokinetic chromatography IV. Modeling of the effective electrophoretic mobility from carbon number equivalents and octanol-water partition coefficients.

    PubMed

    Huhn, Carolin; Pyell, Ute

    2008-07-11

It is investigated whether the relationships derived within a previously developed scheme for optimizing separations in micellar electrokinetic chromatography can be used to model the effective electrophoretic mobilities of analytes that differ strongly in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors, and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated by comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict the effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients, provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.
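The carbon number equivalent descriptor can be sketched as follows: for a homologous series such as the alkyl phenyl ketones, log k is approximately linear in carbon number (Martin's rule), so an unknown analyte's retention factor maps to an equivalent carbon number on that line. The retention factors below are hypothetical, not values from the paper.

```python
import math

# Sketch of the carbon-number-equivalent idea: fit log10(k) vs carbon number
# for the homologous reference series, then map an unknown's retention factor
# back to an equivalent carbon number. Retention factors are hypothetical.

def fit_line(n, logk):
    m = len(n)
    mx, my = sum(n) / m, sum(logk) / m
    b = (sum((x - mx) * (y - my) for x, y in zip(n, logk))
         / sum((x - mx) ** 2 for x in n))
    return my - b * mx, b  # intercept, slope

# Alkyl phenyl ketone homologues: carbon number vs hypothetical retention factor
carbons = [8, 9, 10, 11, 12]
k_values = [0.8, 1.6, 3.2, 6.4, 12.8]  # k doubles per CH2 unit here

a, b = fit_line(carbons, [math.log10(k) for k in k_values])
k_unknown = 4.5
cne = (math.log10(k_unknown) - a) / b  # carbon number equivalent
print(cne)
```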

  19. Reply to ``Comment on `Free surface Hele-Shaw flows around an obstacle: A random walk simulation' ''

    NASA Astrophysics Data System (ADS)

    Bogoyavlenskiy, Vladislav A.; Cotts, Eric J.

    2007-09-01

    As pointed out by Vasconcelos in his Comment, our computer simulations of Hele-Shaw flows around series of wedges differ from analytical solutions existing for this problem. We attribute the discrepancy to the notion that these analytical solutions correspond to ideal, steady-state flow regimes which are hardly applicable when a rigid obstacle interacts with a moving liquid-gas interface.

  20. 40 CFR 161.180 - Enforcement analytical method.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 161.180... DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements § 161.180 Enforcement analytical method. An analytical method suitable for enforcement purposes must be...

  1. A method for the geometrically nonlinear analysis of compressively loaded prismatic composite structures

    NASA Technical Reports Server (NTRS)

    Stoll, Frederick; Gurdal, Zafer; Starnes, James H., Jr.

    1991-01-01

A method was developed for the geometrically nonlinear analysis of the static response of thin-walled stiffened composite structures loaded in uniaxial or biaxial compression. The method is applicable to arbitrary prismatic configurations composed of linked plate strips, such as stiffened panels and thin-walled columns. The longitudinal ends of the structure are assumed to be simply supported, and geometric shape imperfections can be modeled. The method can predict the nonlinear phenomena of postbuckling strength and imperfection sensitivity which are exhibited by some buckling-dominated structures. The method is computer-based and semi-analytic in nature, making it computationally economical in comparison to finite element methods. It uses a perturbation approach based on a series of buckling mode shapes to represent the displacement contributions associated with the nonlinear response. Displacement contributions that are of second order in the modal amplitudes are incorporated in addition to the buckling mode shapes. The principle of virtual work is applied using a finite basis of buckling modes, and terms through third order in the modal amplitudes are retained. A set of cubic nonlinear algebraic equations is obtained, from which approximate equilibrium solutions are determined. Buckling mode shapes for this general class of structure are obtained using the VIPASA analysis code within the PASCO stiffened-panel design code. Thus, subject to some additional restrictions on loading and plate anisotropy, structures whose buckling behavior can be modeled by VIPASA can be analyzed with respect to nonlinear response using the new method. Results obtained using the method are compared with both experimental and analytical results in the literature. The configurations investigated include several different unstiffened and blade-stiffened panel configurations, featuring both homogeneous isotropic materials and laminated composite materials.

  2. Use of X-ray diffraction to quantify amorphous supplementary cementitious materials in anhydrous and hydrated blended cements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snellings, R., E-mail: ruben.snellings@epfl.ch; Salze, A.; Scrivener, K.L., E-mail: karen.scrivener@epfl.ch

    2014-10-15

The content of individual amorphous supplementary cementitious materials (SCMs) in anhydrous and hydrated blended cements was quantified by the PONKCS [1] X-ray diffraction (XRD) method. The analytical precision and accuracy of the method were assessed through comparison to a series of mixes of known phase composition and of increasing complexity. A 2σ precision smaller than 2–3 wt.% and an accuracy better than 2 wt.% were achieved for SCMs in mixes with quartz, anhydrous Portland cement, and hydrated Portland cement. The extent of reaction of SCMs in hydrating binders measured by XRD was (1) internally consistent, as confirmed through the standard addition method, and (2) linearly correlated with the cumulative heat release as measured independently by isothermal conduction calorimetry. The advantages, limitations and applicability of the method are discussed with reference to existing methods that measure the degree of reaction of SCMs in blended cements.

  3. Using geovisual analytics in Google Earth to understand disease distribution: a case study of campylobacteriosis in the Czech Republic (2008-2012).

    PubMed

    Marek, Lukáš; Tuček, Pavel; Pászto, Vít

    2015-01-28

Visual analytics aims to connect the processing power of information technologies with the user's capacity for logical thinking and reasoning through complex visual interaction. Moreover, most data contain a spatial component, so the need for geovisual tools and methods arises. One can either develop one's own system, although disseminating findings and ensuring usability might then be problematic, or utilize a widespread, well-known platform. The aim of this paper is to demonstrate the applicability of Google Earth™ software as a tool for geovisual analytics that helps in understanding the spatio-temporal patterns of disease distribution. We combined complex joint spatio-temporal analysis with comprehensive visualisation. We analysed the spatio-temporal distribution of campylobacteriosis in the Czech Republic between 2008 and 2012. We applied three main approaches in the study: (1) geovisual analytics of the surveillance data, visualised in the form of a bubble chart; (2) geovisual analytics of the disease's weekly incidence surfaces computed by spatio-temporal kriging; and (3) spatio-temporal scan statistics, employed to identify clusters of affected municipalities with high or low rates. The final data are stored in Keyhole Markup Language files and visualised in Google Earth™ in order to apply geovisual analytics. Using geovisual analytics we were able to display and retrieve information from a complex dataset efficiently. Instead of searching for patterns in a series of static maps or relying on numerical statistics alone, we created a set of interactive visualisations to explore and communicate the results of the analyses to a wider audience. The geovisual analytics identified periodic patterns in the behaviour of the disease as well as fourteen spatio-temporal clusters of increased relative risk. We show that Google Earth™ software is a usable tool for geovisual analysis of disease distribution. Google Earth™ has many indisputable advantages (it is widespread, freely available, and intuitive, with space-time visualisation capabilities, animations, and easy communication of results); nevertheless, it still needs to be combined with pre-processing tools that prepare the data into a form suitable for geovisual analytics itself.
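Exporting an analysis result to a Keyhole Markup Language (KML) file of the kind described above can be sketched with the Python standard library alone. The placemark name, coordinates and description below are hypothetical.

```python
import xml.etree.ElementTree as ET

# Sketch of exporting one analysis result as a KML placemark for Google Earth
# (KML 2.2 namespace; the placemark contents here are hypothetical).

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)

def make_placemark(name, lon, lat, description):
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    ET.SubElement(pm, f"{{{KML_NS}}}description").text = description
    point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    # KML coordinates are lon,lat,altitude
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

kml_text = make_placemark("Olomouc", 17.25, 49.59,
                          "Weekly incidence cluster, relative risk 1.8")
print(kml_text)
```

Writing `kml_text` to a `.kml` file yields a document that Google Earth can open directly; the study's bubble charts and kriged surfaces are richer variants of the same export mechanism.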

  4. A novel finite element analysis of three-dimensional circular crack

    NASA Astrophysics Data System (ADS)

    Ping, X. C.; Wang, C. G.; Cheng, L. P.

    2018-06-01

A novel singular element containing a part of the circular crack front is established to solve the singular stress fields of circular cracks by using the numerical series eigensolutions of the singular stress fields. The element is derived from the Hellinger-Reissner variational principle and can be directly incorporated into existing 3D brick elements. The singular stress fields are determined from the system unknowns appearing as displacement nodal values. Numerical studies are conducted to demonstrate the simplicity of the proposed technique in handling fracture problems of circular cracks. The use of the novel singular element avoids mesh refinement near the crack-front domain without loss of calculation accuracy or convergence speed. Compared with conventional finite element methods and existing analytical methods, the present method is more suitable for dealing with complicated structures with a large number of elements.

  5. Design and Analysis Tool for External-Compression Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Slater, John W.

    2012-01-01

A computational tool named SUPIN has been developed to design and analyze external-compression supersonic inlets for aircraft at cruise speeds from Mach 1.6 to 2.0. The inlet types available include the axisymmetric outward-turning, two-dimensional single-duct, two-dimensional bifurcated-duct, and streamline-traced Busemann inlets. The aerodynamic performance is characterized by the flow rates, total pressure recovery, and drag. The inlet flowfield is divided into parts to provide a framework for the geometry and aerodynamic modeling, and the parts are defined in terms of geometric factors. The low-fidelity aerodynamic analysis and design are based on analytic, empirical, and numerical methods that provide for quick analysis. SUPIN provides inlet geometry in the form of coordinates and surface grids usable by grid generation methods for higher-fidelity computational fluid dynamics (CFD) analysis. SUPIN is demonstrated through a series of design studies, and CFD analyses were performed to verify some of the analysis results.
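As an illustration of the kind of quick analytic estimate such a tool relies on (this is the standard normal-shock relation from gas dynamics, not SUPIN's actual implementation), the stagnation-pressure recovery across a normal shock at the cruise Mach numbers above can be computed directly:

```python
# Standard normal-shock stagnation-pressure ratio for a perfect gas
# (gamma = 1.4). Illustrative only; not SUPIN's actual analysis code.

def normal_shock_recovery(M, g=1.4):
    if M <= 1.0:
        return 1.0  # no shock loss at or below Mach 1
    t1 = ((g + 1) * M * M / ((g - 1) * M * M + 2)) ** (g / (g - 1))
    t2 = ((g + 1) / (2 * g * M * M - (g - 1))) ** (1 / (g - 1))
    return t1 * t2

for M in (1.6, 1.8, 2.0):
    print(M, normal_shock_recovery(M))
```

External-compression inlets stage the deceleration through oblique shocks precisely to beat this single-shock recovery, which is the trade the design studies explore.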

  6. Eigenmodes of Multilayer Slit Structures

    NASA Astrophysics Data System (ADS)

    Kovalenko, A. N.

    2017-12-01

    We generalize the high-efficiency numerical-analytical method of calculating the eigenmodes of a microstrip line, which was proposed in [1], to multilayer slit structures. The obtained relationships make it possible to allow for the multilayer nature of the medium on the basis of solving the electrodynamic problem for a two-layer structure. The algebraic models of a single line and coupled slit lines in a multilayer dielectric medium are constructed. The matrix elements of the system of linear algebraic equations, which is used to determine the expansion coefficients of the electric field inside the slits in a Chebyshev basis, are converted to rapidly convergent series. The constructed models allow one to use computer simulation to obtain numerical results with high speed and accuracy, regardless of the number of dielectric layers. The presented results of a numerical study of the method convergence confirm high efficiency of the method.

  7. Stereoselective Luche reduction of deoxynivalenol and three of its acetylated derivatives at C8.

    PubMed

    Fruhmann, Philipp; Hametner, Christian; Mikula, Hannes; Adam, Gerhard; Krska, Rudolf; Fröhlich, Johannes

    2014-01-10

The trichothecene mycotoxin deoxynivalenol (DON) is a well-known and common contaminant in food and feed. Acetylated derivatives and other biosynthetic precursors can occur together with the main toxin. A key biosynthetic step towards DON involves oxidation of the 8-OH group of 7,8-dihydroxycalonectrin. Since analytical standards for the intermediates are not available, and these intermediates are therefore rarely studied, we aimed for a synthetic method to invert this reaction, making a series of calonectrin-derived precursors accessible. We did this by developing an efficient protocol for stereoselective Luche reduction at C8. This method was used to access 3,7,8,15-tetrahydroxyscirpene, 3-deacetyl-7,8-dihydroxycalonectrin, 15-deacetyl-7,8-dihydroxycalonectrin and 7,8-dihydroxycalonectrin, which were characterized using several NMR techniques. Besides developing a method that could in principle be used for all type B trichothecenes, we opened a synthetic route towards different acetylated calonectrins.

  8. The new philosophy of psychiatry: its (recent) past, present and future: a review of the Oxford University Press series International Perspectives in Philosophy and Psychiatry

    PubMed Central

    Banner, Natalie F; Thornton, Tim

    2007-01-01

    There has been a recent growth in philosophy of psychiatry that draws heavily (although not exclusively) on analytic philosophy with the aim of a better understanding of psychiatry through an analysis of some of its fundamental concepts. This 'new philosophy of psychiatry' is an addition to both analytic philosophy and to the broader interpretation of mental health care. Nevertheless, it is already a flourishing philosophical field. One indication of this is the new Oxford University Press series International Perspectives in Philosophy and Psychiatry seven volumes of which (by Bolton and Hill; Bracken and Thomas; Fulford, Morris, Sadler, and Stanghellini; Hughes, Louw, and Sabat; Pickering; Sadler; and Stanghellini) are examined in this critical review.

  9. Educacion de Adultos en America Latina. Estudio Bibliografico. Serie Bibliografica # 3 (Adult Education in Latin America: Bibliographical Study. Bibliographical Series # 3).

    ERIC Educational Resources Information Center

    Schlaen, Norah

    This analytical bibliography describes a group of selected documents that examine the current status and extent of development of adult education in Latin America. Documents were selected for inclusion based upon their focus on and concern for international and national goals and needs of adult education programs and for distribution to countries…

  10. Determination of trigonelline, nicotinic acid, and caffeine in Yunnan Arabica coffee by microwave-assisted extraction and HPLC with two columns in series.

    PubMed

    Liu, Hongcheng; Shao, Jinliang; Li, Qiwan; Li, Yangang; Yan, Hong Mei; He, Lizhong

    2012-01-01

    A simple, rapid method was developed for simultaneous extraction of trigonelline, nicotinic acid, and caffeine from coffee, and separation by two chromatographic columns in series. The trigonelline, nicotinic acid, and caffeine were extracted with microwave-assisted extraction (MAE). The optimal conditions selected were 3 min, 200 psi, and 120 degrees C. The chromatographic separation was performed with two columns in series, polyaromatic hydrocarbon C18 (250 x 4.6 mm id, 5 microm particle size) and Bondapak NH2 (300 x 3.9 mm id, 5 microm particle size). Isocratic elution was performed with a 0.02 M phosphoric acid-methanol (70 + 30, v/v) mobile phase at a flow rate of 0.8 mL/min. Good recoveries and RSD values were found for all analytes in the matrix. The LOD of the three compounds was 0.02 mg/L, and the LOQ was 0.005% in the matrix. The concentrations of trigonelline, nicotinic acid, and caffeine in instant coffee, roasted coffee, and raw coffee (Yunnan Arabica coffee) were assessed by MAE and hot water extraction; the correlation coefficients between concentrations of the three compounds obtained were close to 1.

  11. Open-source Software for Demand Forecasting of Clinical Laboratory Test Volumes Using Time-series Analysis

    PubMed Central

    Mohammed, Emad A.; Naugler, Christopher

    2017-01-01

    Background: Demand forecasting is the area of predictive analytics devoted to predicting future volumes of services or consumables. A sound understanding and estimation of how demand will vary facilitates the optimal utilization of resources. In a medical laboratory, accurate forecasting of future demand, that is, test volumes, can increase efficiency and facilitate long-term laboratory planning. Importantly, in an era of utilization management initiatives, comparing predicted with realized test volumes offers a precise way to evaluate those initiatives. Laboratory test volumes are often highly amenable to forecasting by time-series models; however, the statistical software needed to do this is generally either expensive or highly technical. Method: In this paper, we describe an open-source web-based software tool for time-series forecasting and explain how to use it as a demand forecasting tool in clinical laboratories to estimate test volumes. Results: This tool has three different models, that is, Holt-Winters multiplicative, Holt-Winters additive, and simple linear regression. Moreover, these models are ranked and the best one is highlighted. Conclusion: This tool will allow anyone with historic test volume data to model future demand. PMID:28400996
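The additive Holt-Winters model named in this record can be sketched in a few lines. This is a minimal illustration of the technique, not the published tool's code; the season length, smoothing constants, and synthetic test-volume series below are assumptions chosen for the example.

```python
# Minimal additive Holt-Winters sketch: level + trend + seasonal components,
# each updated by exponential smoothing. Parameters are illustrative.

def holt_winters_additive(y, m, alpha=0.3, beta=0.1, gamma=0.2, horizon=8):
    """Forecast `horizon` steps ahead for a series `y` with season length `m`."""
    # Initialize level and trend from the first two full seasons.
    season1 = sum(y[:m]) / m
    season2 = sum(y[m:2 * m]) / m
    level, trend = season1, (season2 - season1) / m
    seasonal = [y[i] - season1 for i in range(m)]  # initial seasonal offsets

    for t in range(m, len(y)):
        last_level = level
        level = alpha * (y[t] - seasonal[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonal[t % m] = gamma * (y[t] - level) + (1 - gamma) * seasonal[t % m]

    n = len(y)
    return [level + h * trend + seasonal[(n + h - 1) % m]
            for h in range(1, horizon + 1)]

# Synthetic test-volume history: linear growth plus a 4-period cycle.
history = [100 + 2 * t + [10, -5, -8, 3][t % 4] for t in range(40)]
forecast = holt_winters_additive(history, m=4)
```

For a series with a positive trend, forecasts one full season apart differ by roughly `m * trend`, so the forecast grows season over season.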

  12. Simulating the effects of ground-water withdrawals on streamflow in a precipitation-runoff model

    USGS Publications Warehouse

    Zarriello, Philip J.; Barlow, P.M.; Duda, P.B.

    2004-01-01

    Precipitation-runoff models are used to assess the effects of water use and management alternatives on streamflow. Often, ground-water withdrawals are a major water-use component that affect streamflow, but the ability of surface-water models to simulate ground-water withdrawals is limited. As part of a Hydrologic Simulation Program-FORTRAN (HSPF) precipitation-runoff model developed to analyze the effect of ground-water and surface-water withdrawals on streamflow in the Ipswich River in northeastern Massachusetts, an analytical technique (STRMDEPL) was developed for calculating the effects of pumped wells on streamflow. STRMDEPL is a FORTRAN program based on two analytical solutions that solve equations for ground-water flow to a well completed in a semi-infinite, homogeneous, and isotropic aquifer in direct hydraulic connection to a fully penetrating stream. One analytical method calculates unimpeded flow at the stream-aquifer boundary and the other method calculates the resistance to flow caused by semipervious streambed and streambank material. The principle of superposition is used with these analytical equations to calculate time-varying streamflow depletions due to daily pumping. The HSPF model can readily incorporate streamflow depletions caused by a well or surface-water withdrawal, or by multiple wells or surface-water withdrawals, or both, as a combined time-varying outflow demand from affected channel reaches. These demands are stored as a time series in the Watershed Data Management (WDM) file. This time-series data is read into the model as an external source used to specify flow from the first outflow gate in the reach where these withdrawals are located. Although the STRMDEPL program can be run independently of the HSPF model, an extension was developed to run this program within GenScn, a scenario generator and graphical user interface developed for use with the HSPF model. 
This extension requires that actual pumping rates for each well be stored in a unique WDM dataset identified by an attribute that associates each well with the model reach from which water is withdrawn. Other attributes identify the type and characteristics of the data. The interface allows users to easily add new pumping wells, delete existing pumping wells, or change properties of the simulated aquifer or well. Development of this application enhanced the ability of the HSPF model to simulate complex water-use conditions in the Ipswich River Basin. The STRMDEPL program and the GenScn extension provide a valuable tool for water managers to evaluate the effects of pumped wells on streamflow and to test alternative water-use scenarios. Copyright ASCE 2004.
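The kind of analytical depletion calculation described in this record can be illustrated with the classical Glover-Balmer solution, which corresponds to the no-streambed-resistance case (a fully penetrating stream in direct hydraulic connection with the aquifer), combined with superposition over a daily pumping schedule. This is a sketch under those assumptions; the parameter values and function names are illustrative and are not taken from STRMDEPL.

```python
from math import erfc, sqrt

def glover_fraction(t_days, d=100.0, T=500.0, S=0.2):
    """Fraction of a constant pumping rate captured from the stream after
    t days (Glover-Balmer solution, no streambed resistance).
    d: well-to-stream distance [m]; T: transmissivity [m^2/day]; S: storativity."""
    if t_days <= 0.0:
        return 0.0
    return erfc(sqrt(S * d * d / (4.0 * T * t_days)))

def depletion(rates, t):
    """Streamflow depletion at day t for a daily pumping schedule `rates`,
    by superposing the step changes in pumping rate (Q_i - Q_{i-1})."""
    total, prev = 0.0, 0.0
    for i, q in enumerate(rates):
        total += (q - prev) * glover_fraction(t - i)
        prev = q
    return total

schedule = [1000.0] * 30  # constant 1000 m^3/day for 30 days
```

For a constant schedule, superposition reduces to a single step, so the depletion equals the pumping rate times the Glover fraction; time-varying daily rates simply add further step responses.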

  13. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Enforcement analytical method. 158.355... DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An analytical method suitable for enforcement purposes must be provided for each active ingredient in the...

  14. Simultaneous determination of 24 polycyclic aromatic hydrocarbons in edible oil by tandem solid-phase extraction and gas chromatography coupled/tandem mass spectrometry.

    PubMed

    Xu, Ting; Tang, Hua; Chen, Dazhou; Dong, Haifeng; Li, Lei

    2015-01-01

    An efficient and fast tandem SPE method followed by GC/MS/MS has been developed for the determination and the quantification of 24 polycyclic aromatic hydrocarbons (PAHs) in edible oil. This method includes the monitoring of 15 + 1 PAHs designated as a priority by the European Union in their 2005/108/EC recommendation and 16 PAHs listed by the U.S. Environmental Protection Agency. The sample preparation procedures were based on SPE in which PAH-dedicated cartridges with molecularly imprinted polymers and graphitized carbon black were used in series. The novel tandem SPE combination of selective extraction and purification of light and heavy PAHs provided highly purified analytes. Identification and quantification of 24 target PAHs were performed using GC/MS/MS with isotope dilution using D-labeled and (13)C-labeled PAHs. The advantages of GC/MS/MS as compared to other detection methods include high sensitivity, selectivity, and interpretation ability. The method showed satisfactory linearity (R(2) > 0.998) over the range assayed (0.5-200 μg/kg); the LODs ranged from 0.03 to 0.6 μg/kg, and LOQs from 0.1 to 2.0 μg/kg. The recoveries using this method at three spiked concentration levels (2, 10, and 50 μg/kg) ranged from 56.8 to 117.7%. The RSD was lower than 12.7% in all cases. The proposed analytical method has been successfully applied for the analysis of the 24 PAHs in edible oil.

  15. Qualitative research in nutrition and dietetics: data analysis issues.

    PubMed

    Fade, S A; Swift, J A

    2011-04-01

    Although much of the analysis conducted in qualitative research falls within the broad church of thematic analysis, the wide scope of qualitative enquiry presents the researcher with a number of choices regarding data analysis techniques. This review, the third in the series, provides an overview of a number of techniques and practical steps that can be taken to provide some structure and focus to the intellectual work of thematic analysis in nutrition and dietetics. Because appropriate research methods are crucial to ensure high-quality research, it also describes a process for choosing appropriate analytical methods that considers the extent to which they help answer the research question(s) and are compatible with the philosophical assumptions about ontology, epistemology and methodology that underpin the overall design of a study. Other reviews in this series provide a model for embarking on a qualitative research project in nutrition and dietetics, an overview of the principal techniques of data collection, sampling and quality assessment of this kind of research and some practical advice relevant to nutrition and dietetics, along with glossaries of key terms. © 2010 The Authors. Journal compilation © 2010 The British Dietetic Association Ltd.

  16. Developing a Performance Assessment Framework and Indicators for Communicable Disease Management in Natural Disasters.

    PubMed

    Babaie, Javad; Ardalan, Ali; Vatandoost, Hasan; Goya, Mohammad Mehdi; Akbarisari, Ali

    2016-02-01

    Communicable disease management (CDM) is an important component of disaster public health response operations. However, no performance assessment (PA) framework and related indicators exist for this purpose. This study aimed to develop a PA framework and indicators for CDM in disasters. A series of methods was used. First, a systematic literature review (SLR) was performed to extract existing PA frameworks and indicators. Then, using a qualitative approach, interviews with purposively selected experts were conducted and used in developing the PA framework and indicators. Finally, the analytical hierarchy process (AHP) was used to weight the developed indicators. The input, process, products, and outcomes (IPPO) framework was found to be an appropriate framework for CDM PA. Seven main functions were identified for CDM during disasters, and forty PA indicators were developed across the four categories. Given the absence of any existing PA framework for CDM in disasters, the IPPO framework developed here, together with its indicators, can be used to measure the performance of CDM in disasters.
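The AHP weighting step mentioned in this record amounts to extracting the principal eigenvector of a pairwise comparison matrix. The sketch below uses power iteration; the 3x3 matrix and the underlying weights are made up for illustration and do not correspond to the study's indicators.

```python
# Hedged AHP sketch: indicator weights are the principal eigenvector of a
# pairwise comparison matrix, normalized to sum to 1.

def ahp_weights(A, iters=100):
    """Principal eigenvector of comparison matrix A by power iteration."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]  # renormalize each iteration
    return w

# A perfectly consistent comparison matrix built from assumed true weights:
# a_ij = w_i / w_j, so the principal eigenvector recovers the weights exactly.
true_w = [0.5, 0.3, 0.2]
A = [[wi / wj for wj in true_w] for wi in true_w]
weights = ahp_weights(A)
```

Real expert judgments are rarely perfectly consistent, which is why AHP practice also reports a consistency ratio; for a consistent matrix, as here, power iteration recovers the weights essentially exactly.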

  17. Photo-degradation of CT-DNA with a series of carbothioamide ruthenium (II) complexes - Synthesis and structural analysis

    NASA Astrophysics Data System (ADS)

    Muthuraj, V.; Umadevi, M.

    2018-04-01

    The present research article describes the preparation, structure, and spectroscopic properties of a series of carbothioamide ruthenium(II) complexes with N and S donor ligands, namely 2-((6-chloro-4-oxo-4H-chromen-3-yl)methylene)hydrazine carbothioamide (ClChrTs) and 2-((6-methoxy-4-oxo-4H-chromen-3-yl)methylene)hydrazine carbothioamide (MeOChrTS). The synthesized complexes were characterized by analytical methods as well as by spectral techniques such as FT-IR, 1H NMR, 13C NMR, ESI mass spectrometry, and thermogravimetry/differential thermal analysis (TG-DTA). The IR spectra show that the ligands act as neutral bidentate donors through their N and S atoms. The biological activity of the prepared ligands and metal complexes was tested against the MCF-7 cell line. In addition, the interaction of the Ru(II) complexes and their free ligands with CT-DNA was investigated by titration using UV-Vis spectra, fluorescence spectra, and circular dichroism studies. The results suggest that both Ru(II) complexes can bind to calf-thymus DNA via an intercalation mechanism.

  18. Final technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edward DeLong

    2011-10-07

    Our overarching goals in this project were to: Develop and improve high-throughput sequencing methods and analytical approaches for quantitative analyses of microbial gene expression at the Hawaii Ocean Time Series Station and the Bermuda Atlantic Time Series Station; Conduct field analyses following gene expression patterns in picoplankton microbial communities in general, and Prochlorococcus flow sorted from that community, as they respond to different environmental variables (light, macronutrients, dissolved organic carbon) that are predicted to influence activity, productivity, and carbon cycling; Use the expression analyses of flow sorted Prochlorococcus to identify horizontally transferred genes and gene products, in particular those that are located in genomic islands and likely to confer habitat-specific fitness advantages; Use the microbial community gene expression data that we generate to gain insights, and test hypotheses, about the variability, genomic context, activity and function of as yet uncharacterized gene products that appear highly expressed in the environment. We achieved the above goals, and even more over the course of the project. This includes a number of novel methodological developments, as well as the standardization of microbial community gene expression analyses in both field surveys and experimental modalities. The availability of these methods, tools and approaches is changing current practice in microbial community analyses.

  19. The analytic structure of non-global logarithms: Convergence of the dressed gluon expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larkoski, Andrew J.; Moult, Ian; Neill, Duff Austin

    Non-global logarithms (NGLs) are the leading manifestation of correlations between distinct phase space regions in QCD and gauge theories and have proven a challenge to understand using traditional resummation techniques. Recently, the dressed gluon expansion was introduced, which enables an expansion of the NGL series in terms of a “dressed gluon” building block, defined by an all-orders factorization theorem. Here, we clarify the nature of the dressed gluon expansion, and prove that it has an infinite radius of convergence as a solution to the leading logarithmic and large-Nc master equation for NGLs, the Banfi-Marchesini-Smye (BMS) equation. The dressed gluon expansion therefore provides an expansion of the NGL series that can be truncated at any order, with reliable uncertainty estimates. In contrast, manifest in the results of the fixed-order expansion of the BMS equation up to 12 loops is a breakdown of convergence at a finite value of α_s log. We explain this finite radius of convergence using the dressed gluon expansion, showing how the dynamics of the buffer region, a region of phase space near the boundary of the jet that was identified in early studies of NGLs, leads to large contributions to the fixed-order expansion. We also use the dressed gluon expansion to discuss the convergence of the next-to-leading NGL series, and the role of collinear logarithms that appear at this order. Finally, we show how an understanding of the analytic behavior obtained from the dressed gluon expansion allows us to improve the fixed-order NGL series using conformal transformations to extend the domain of analyticity. This also allows us to calculate the NGL distribution for all values of α_s log from the coefficients of the fixed-order expansion.
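The breakdown of a fixed-order expansion at a finite value of the argument, despite the resummed function being perfectly smooth on the real axis, can be illustrated with a toy analogue (this assumes nothing about the actual BMS dynamics): f(x) = 1/(1 + x^2) is analytic for all real x, but its Taylor series about 0 has radius of convergence 1 because of poles at x = ±i.

```python
# Toy radius-of-convergence demo, standing in for the fixed-order NGL series.
# Truncating the Taylor series of 1/(1+x^2) works inside the radius of
# convergence (|x| < 1) and fails outside it, even though the function itself
# is finite and smooth everywhere on the real axis.

def partial_sum(x, n_terms):
    """Partial sum of sum_k (-1)^k x^(2k), the Taylor series of 1/(1+x^2)."""
    return sum((-1) ** k * x ** (2 * k) for k in range(n_terms))

inside = partial_sum(0.5, 50)   # converges to 1/(1 + 0.25) = 0.8
outside = partial_sum(1.5, 50)  # diverges: terms grow like 2.25^k
```

A conformal change of variable that maps the singularities farther from the origin, as the record describes for the NGL series, enlarges the region where a truncated expansion is usable.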

  20. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    NASA Astrophysics Data System (ADS)

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.

    2017-11-01

    There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, td. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, td0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
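The complementary convergence of the two series types can be sketched with the standard textbook fractional-uptake solutions for a 1-D slab of half-thickness l with td = Dt/l² (these are the classical forms, not the paper's combined approximations): the exponential series converges rapidly at late td, while the error-function series converges rapidly at early td, and the two agree wherever both are summed far enough.

```python
from math import pi, exp, sqrt, erfc

def ierfc(x):
    """First integral of the complementary error function."""
    return exp(-x * x) / sqrt(pi) - x * erfc(x)

def uptake_late(td, n_terms=50):
    """Exponential-series solution for fractional uptake of a 1-D slab;
    converges rapidly at late dimensionless time td."""
    s = 0.0
    for n in range(n_terms):
        k = 2 * n + 1
        s += 8.0 / (k * k * pi * pi) * exp(-k * k * pi * pi * td / 4.0)
    return 1.0 - s

def uptake_early(td, n_terms=50):
    """Error-function-series solution for the same quantity;
    converges rapidly at early dimensionless time td."""
    s = 1.0 / sqrt(pi)
    for n in range(1, n_terms):
        s += 2.0 * (-1) ** n * ierfc(n / sqrt(td))
    return 2.0 * sqrt(td) * s
```

Around an intermediate td (e.g. td ≈ 0.2) both truncated series reproduce the same value to machine precision, which is what makes a switchover-time strategy like the paper's possible: each side of td0 keeps only the series that converges fastest there.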

Top